There were tabs in between macro names and their values in their
definition, forcing everyone to do the same, and causing some
mangling in patches. Let's fix all this.
Commit 645635d was not sufficient to implement the heartbeat feature.
When no heartbeat was received before its timeout expired, the session was not
closed, because process_peer_sync(), the task responsible for handling the
heartbeat and session expirations, only checked the heartbeat timeout
and sent a heartbeat message if it had expired. This had the side
effect of leaving the session open. On the remote side, a peer which receives a
heartbeat message, even if not supported, does not close the session.
Furthermore, it is not sufficient to update the ->reconnect peer member field to
schedule a peer session release.
With this patch, a peer is flagged as alive as soon as it receives peer protocol
messages (and not only heartbeat messages). When no updates must be sent,
we first check the reconnection timeout (->reconnect peer member field). If it
has expired, we really shut down the session if the peer is not alive; but if
the peer is seen as alive, we reset this flag and update ->reconnect for the
next period.
If the reconnection timeout has not expired, we then check the heartbeat
timeout, which is only there to emit heartbeat messages upon expiration. If it
has expired, as before this patch, we increment the heartbeat timeout by 3s to
schedule the next heartbeat message, then we emit a heartbeat message waking up
the peer I/O handler.
In every case we update the task expiration to the earlier of the reconnection
time and the heartbeat time, so as to be sure to check these two ->reconnect
and ->heartbeat timers again.
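As an illustration, here is a minimal sketch of this expiration logic; the
struct fields and helpers below follow the description above and are
assumptions, not the actual peers.c code :

  struct peer_sketch {
      unsigned int reconnect; /* session release date, in ms */
      unsigned int heartbeat; /* next heartbeat emission date, in ms */
      int alive;              /* set on any received peer protocol message */
  };

  extern void peer_session_shutdown(struct peer_sketch *ps); /* assumed */
  extern void peer_send_heartbeat(struct peer_sketch *ps);   /* assumed */

  /* returns the next date the task must be woken up at */
  unsigned int check_peer_timers(struct peer_sketch *ps, unsigned int now_ms)
  {
      if (now_ms >= ps->reconnect) {         /* reconnection timeout first */
          if (!ps->alive)
              peer_session_shutdown(ps);     /* nothing heard: really close */
          else {
              ps->alive = 0;                 /* re-arm for the next period */
              ps->reconnect = now_ms + 5000; /* 5s, as described above */
          }
      }
      else if (now_ms >= ps->heartbeat) {    /* only emits heartbeats */
          ps->heartbeat = now_ms + 3000;     /* next message in 3s */
          peer_send_heartbeat(ps);           /* wakes the peer I/O handler */
      }
      /* always re-arm on the earlier of the two dates */
      return ps->reconnect < ps->heartbeat ? ps->reconnect : ps->heartbeat;
  }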
This flag is set by the stream layer to request an abort, and results in
CF_SHUTR being set once the abort is performed. However by analogy with
the send side, the flag was removed once the CF_SHUTR flag was set, thus
we lose the information about the cause of the shutr. This is what creates
the confusion that sometimes arises between client and server aborts.
This patch makes sure we don't remove this flag anymore in this case.
All call places only use it to perform the shutr and already check it
against CF_SHUTR. So no condition needs to be updated to take this into
account.
Some later, more careful changes may consist in refining the conditions
where we report a client reset or a server reset to ignore SHUTR when
SHUTR_NOW is set so that we don't report such misleading information
anymore.
Recent commit 63768a63d ("MEDIUM: mux-h2: Don't mix the end of the message
with the end of stream") introduced a race which may manifest itself with
small connection counts on large objects and large server timeouts in
legacy mode. Sometimes h2s_close() is called while the data layer is
subscribed to read events but nothing in the chain can cause this wake-up
to happen and some streams stall for a while at the end of a transfer
until the server timeout strikes and ends the stream.
We need to wake the stream up if it's subscribed to rx events there,
which is what this patch does. When the patch above is backported to
1.9, this patch will also have to be backported.
Previous commit 3ea351368 ("BUG/MEDIUM: h2: Remove the tasklet from the
task list if unsubscribing.") uncovered an issue which needs to be
addressed in the scheduler's API. The function task_remove_from_task_list()
was initially designed to remove a task from the running tasklet list from
within the scheduler, and had to be used in h2 to abort pending I/O events.
However this function was not designed to be idempotent, occasionally
causing a double removal from the tasklet list, with the second doing
nothing but affecting the apparent tasks count and making haproxy use
100% CPU on some tests consisting in stopping the client during some
transfers. The h2_unsubscribe() function can sometimes be called upon
stream exit after an error where the tasklet was possibly already
removed, so it could trigger this double removal.
This patch does 2 things :
- it renames task_remove_from_task_list() to
__task_remove_from_tasklet_list() to discourage users from calling
it. Also note the fix in the naming since it's a tasklet list and
not a task list. This function is still used from the scheduler.
- it adds a new, idempotent, task_remove_from_tasklet_list() function
which does nothing if the task is already not in the tasklet list.
This patch will need to be backported where the commit above is backported.
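A minimal sketch of what the idempotent wrapper may look like (the
task_in_tasklet_list() predicate is an assumption for this sketch; the real
functions live in the scheduler) :

  struct task;

  /* the unsafe version, only meant to be used from within the scheduler */
  extern void __task_remove_from_tasklet_list(struct task *t);

  /* assumed predicate: true if the tasklet is currently queued */
  extern int task_in_tasklet_list(const struct task *t);

  /* idempotent wrapper: safe to call even if the tasklet was already
   * removed, e.g. from h2_unsubscribe() on stream exit after an error */
  static inline void task_remove_from_tasklet_list(struct task *t)
  {
      if (task_in_tasklet_list(t))
          __task_remove_from_tasklet_list(t);
  }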
In h2_unsubscribe(), if we unsubscribe on SUB_CALL_UNSUBSCRIBE, then remove
ourselves from the sending_list, and remove the tasklet from the task list.
We're probably about to destroy the stream anyway, so we don't want the
tasklet to run, or to stay in the sending_list, or it could lead to a crash.
This should be backported to 1.9.
In h2_deferred_shut(), don't just set h2s->send_wait to NULL, instead, use
the same logic as in h2_snd_buf() and only do so if we successfully sent data
(or if we don't want to send them anymore). Setting it to NULL can lead to
crashes.
This should be backported to 1.9.
In h2s_notify_send(), use the new sending_list instead of using the old
way of setting h2s->send_wait to NULL; failing to do so may lead to crashes.
This should be backported to 1.9.
In h2_deferred_shut(), only attempt to destroy the h2s if h2s->cs is NULL.
h2s->cs being non-NULL means it's still referenced by the stream interface,
so it may try to use it later, and that could lead to a crash.
This should be backported to 1.9.
This commit was reverted because of bugs. Now it should be ok. Difference with
the commit f52170d2f ("MEDIUM: proto_htx: Switch to infinite forwarding if there
is no data filter") is that when the infinite forwarding is enabled, the message
is switched to the state HTTP_MSG_DONE if the flag CF_EOI is set.
Because the flag CF_SHUTR is no more set to mark the end of the message by the
H2 multiplexer, we can rely on it again to detect aborts. There is no more need
to make a check on the flag SI_FL_CLEAN_ABRT when the option abortonclose is
enabled. So, this option should work as before for h2 clients.
This patch must be backported to 1.9 with the previous EOI patches.
As with the H2 multiplexer, when the end of a message is detected, the flag
CS_FL_EOI is set on the conn_stream.
This patch should be backported to 1.9.
The H2 multiplexer now sets CS_FL_EOI when it receives a frame with the ES
flag. And when the H2 stream is closed, it sets the flag CS_FL_REOS.
This patch should be backported to 1.9.
The flag CF_EOI is now set on the input channel when the flag CS_FL_EOI is set
on the corresponding conn_stream. In addition, if a read activity is reported
when this flag is set, the stream is woken up.
This patch should be backported to 1.9.
In the commit 93e02d8b7 ("MINOR: proto-http/proto-htx: Make error handling
clearer during data forwarding"), a return clause was removed by error in the
function http_request_forward_body(). This bug seems not having any visible
impact.
This patch must be backported to 1.9.
On the send path, try to be fair, and make sure the first to attempt to
send data will actually be the first to send data when it's possible (ie
when the mux' buffer is not full anymore).
To do so, use a separate list element for the sending_list, and only remove
the h2s from the send_list/fctl_list if we successfully sent data. If we did
not, we'll keep our place in the list, and will be able to try again next time.
This should be backported to 1.9.
In lf_ip(), when the LOG_OPT_HEXA modifier is used, there is code to format the
IP address as a hexadecimal string. This code does not properly handle the case
when the IP address is IPv6. In such a case, the code only prints `00000000`.
This patch adds support for IPv6. For legacy IPv4, the format remains
unchanged. If IPv6 socket is used to accept IPv6 connection, the full IPv6
address is returned. For example, IPv6 localhost, ::1, is printed as
00000000000000000000000000000001.
If IPv6 socket accepts IPv4 connection, the IPv4 address is mapped by the
kernel into the IPv4-mapped-IPv6 address space (RFC4291, section 2.5.5.2)
and is formatted as such. For example, 127.0.0.1 becomes ::ffff:127.0.0.1,
which is printed as 00000000000000000000FFFF7F000001.
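A hedged sketch of such a formatter (this is not the actual lf_ip() code, just
an illustration of the behaviour described above) :

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <sys/socket.h>

  static int ip_to_hex(char *dst, size_t size, const struct sockaddr *addr)
  {
      if (addr->sa_family == AF_INET) {
          const struct sockaddr_in *in = (const struct sockaddr_in *)addr;
          /* 32-bit IPv4 address printed as 8 hex digits */
          return snprintf(dst, size, "%08X", ntohl(in->sin_addr.s_addr));
      }
      else if (addr->sa_family == AF_INET6) {
          const struct sockaddr_in6 *in6 = (const struct sockaddr_in6 *)addr;
          int i, pos = 0;
          /* 128-bit IPv6 address printed as 32 hex digits; an IPv4
           * connection accepted on an IPv6 socket shows up here as the
           * IPv4-mapped address and is formatted the same way */
          for (i = 0; i < 16 && pos + 3 <= (int)size; i++)
              pos += snprintf(dst + pos, size - pos, "%02X",
                              in6->sin6_addr.s6_addr[i]);
          return pos;
      }
      return 0;
  }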
This should be backported to 1.9.
Any attempt to put TLS 1.3 ciphers on servers failed with output 'unable
to set TLS 1.3 cipher suites'.
This was due to usage of SSL_CTX_set_cipher_list instead of
SSL_CTX_set_ciphersuites in the TLS 1.3 block (protected by
OPENSSL_VERSION_NUMBER >= 0x10101000L and so on).
This should be backported to 1.9 and 1.8.
Signed-off-by: Pierre Cheynier <p.cheynier@criteo.com>
Reported-by: Damien Claisse <d.claisse@criteo.com>
Cc: Emeric Brun <ebrun@haproxy.com>
Some functions' roles and usage are far from being obvious, and diving
into this part each time requires deep concentration before starting to
understand who does what. Let's add a few comments which help figure
some of the useful pieces.
We tend to refrain a bit too much from sending data in the H2 mux :
whenever there are pending data in the buffer and we try to copy
something larger than 1/4 of the buffer we prefer to pause. This
is suboptimal for medium-sized objects which have to send their
headers and later their data.
This patch slightly changes this by allowing a copy of a large block
if it fits at once and if the realign cost is small, i.e. the pending
data are small or the block fits in the contiguous area. Depending on
the object size this measurably improves the download performance by
between 1 and 10%, and possibly lowers the transfer latency for medium
objects.
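A rough sketch of the new decision (the names and the exact thresholds below
are illustrative assumptions, not the actual mux-h2 code) :

  #include <stddef.h>

  /* decide whether to copy a block of <len> bytes now, given <pending>
   * bytes already queued, <contig_free> contiguous free bytes, and the
   * total buffer size <buf_size> */
  static int should_copy_now(size_t len, size_t pending,
                             size_t contig_free, size_t buf_size)
  {
      if (len <= contig_free)
          return 1;    /* the block fits at once in the contiguous area */
      if (pending <= buf_size / 8)
          return 1;    /* realign cost is small: few pending bytes to move */
      return len <= buf_size / 4; /* otherwise keep the previous 1/4 rule */
  }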
In h2_stop_senders(), when we're about to move back to the send_list the h2s
that were about to send, because we know the mux is full, instead of putting them
all in the send_list, put them back either in the fctl_list or the send_list
depending on whether they are waiting for the flow control or not. This also makes
sure they're inserted in their arrival order and not reversed.
This should be backported to 1.9.
In h2_detach(), don't bother keeping the h2s even if it was waiting for
flow control if we no longer are subscribed for receiving or sending, as
nobody will do anything once we can write in the mux, anyway. Failing to do
so may lead to h2s being kept opened forever.
This should be backported to 1.9.
If we're waiting until we can send a shutr and/or a shutw, once we're done
and not considering sending anything, destroy the h2s, and eventually the
h2c if we're done with the whole connection, or it will never be done.
This should be backported to 1.9.
This reverts commit f52170d2f4.
This commit was merged too early, some areas are not ready and
transfers from H1 to H2 often stall. Christopher suggested to wait
for the other parts to be ready before reintroducing it.
In addition to stats and cache applets, there are also HTTP applet services
declared in an http-request rule. All these applets are now handled the same
way. Among other things, the header Expect is handled at the same place for all
these applets.
First of all, it is a way to handle 100-Continue for the cache without
duplicating code. Then, for the stats, it is no longer necessary to wait for the
request body.
In Lua, when an HTTP applet ends (in HTX and legacy HTTP), we must flush
remaining outgoing data on the request. But only outgoing data at time the
applet is called are consumed. If a request with a huge body is sent, an error
is triggered because a SHUTW is caught for an unfinished request.
Now, we consume request data until the end. In fact, we don't try to shutdown
the request's channel for write anymore.
This patch must be backported to 1.9 after some observation period. It should
probably be backported in prior versions too. But honestly, with refactoring
on the connection layer and the stream interface in 1.9, it is probably safer
to not do so.
In the stats applet (in HTX and legacy HTTP), after a response is fully sent to
a client, the request is consumed. It is done at the end, after all the response
was copied into the channel's buffer. But only outgoing data at time the applet
is called are consumed. Then the applet is closed. If a request with a huge body
is sent, an error is triggered because a SHUTW is caught for an unfinished
request.
Now, we consume request data until the end. In fact, we don't try to shutdown
the request's channel for write anymore.
This patch must be backported to 1.9 after some observation period. It should
probably be backported in prior versions too. But honestly, with refactoring
on the connection layer and the stream interface in 1.9, it is probably safer
to not do so.
In the cache applet (in HTX and legacy HTTP), when a cached object is sent to a
client, the request must be consumed. It is done at the end, after all the
response was copied into the channel's buffer. But only outgoing data at time
the applet is called are consumed. Then the applet is closed. If a request with
a huge body is sent, an error is triggered because a SHUTW is caught on an
unfinished request.
Now, we consume request data as soon as possible and we do it until the end. In
fact, we don't try to shutdown the request's channel for write anymore.
This patch must be backported to 1.9 after some observation period.
Because in HTX the parsing is done by the multiplexers, there is no reason to
limit the amount of data fast-forwarded. Of course, it is only true when there
is no data filter registered on the corresponding channel. So now, we enable the
infinite forwarding when possible. However, the HTTP message state remains
HTTP_MSG_DATA. Then, when infinite forwarding is enabled, if the flag CF_SHUTR
is set, the state is switched to HTTP_MSG_DONE.
It's never easy to guess what services are built in. We currently have
the prometheus exporter in contrib/ which is the only extension for now.
Let's enumerate all available ones just like we do for filters and pollers.
Some recent versions of gcc apparently can detect that x >> 32 will not
work on a 32-bit architecture, but are failing to see that the code will
not be built since it's enclosed in "if (sizeof(LONG) > 4)" or equivalent.
Just shift right twice by 16 bits in this case, the compiler correctly
replaces it by a single 32-bit shift.
No backport is needed.
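For illustration, a workaround of this kind may look like the following sketch
(not the actual patch) :

  /* The branch is dead when sizeof(long) == 4, but some gcc versions
   * still warn on a direct "x >> 32". Two 16-bit shifts compile
   * silently and are folded into a single shift on 64-bit targets. */
  static unsigned long upper_half(unsigned long x)
  {
      if (sizeof(long) > 4)
          return (x >> 16) >> 16;
      return 0;
  }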
It is just a cleanup. Error handling is grouped at the end of HTTP data analysers.
This patch must be backported to 1.9 because it is used by another patch to fix
a bug.
For convenience, in HTTP muxes (h1 and h2), the end of the stream and the end of
the message are reported the same way to the stream, by setting the flag
CS_FL_EOS. In the stream-interface, when CS_FL_EOS is detected, a shutdown for
read is reported on the channel side. This is historical. With the legacy HTTP
layer, because the parsing is done by the stream in HTTP analyzers, the EOS
really means a shutdown for read.
Most of time, for muxes h1 and h2, it works pretty well, especially because the
keep-alive is handled by the muxes. The stream is only used for one
transaction. So mixing EOS and EOM is good enough. But not every time. For now,
client aborts are only reported if they happen before the end of the request. It
is an error and it is properly handled. But because the EOS was already
reported, client aborts after the end of the request are silently
ignored. Eventually an error can be reported when the response is sent to the
client, if the sending fails. Otherwise, if the server does not reply fast
enough, an error is reported when the server timeout is reached. It is the
expected behaviour, except when the option abortonclose is set. In this case,
we must report an error when the client aborts. But as said before, this event
can be ignored. So to be short, for now, the abortonclose is broken.
In fact, it is a design problem and we have to rethink all channel's flags and
probably the conn-stream ones too. It is important to split EOS and EOM to not
lose information anymore. But it is not a small job and the refactoring will be
far from straightforward.
So for now, temporary flags are introduced. When the last read is received, the
flag CS_FL_READ_NULL is set on the conn-stream. This way, we can set the flag
SI_FL_READ_NULL on the stream interface. Both flags are persistent. And to be
sure to wake the stream, the event CF_READ_NULL is reported. So the stream will
always have the chance to handle the last read.
This patch must be backported to 1.9 because it will be used by another patch to
fix the option abortonclose.
According to the H2 spec (see #8.1.4), setting the REFUSED_STREAM error code
is a way to indicate that the stream is being closed prior to any processing
having occurred, such as when a server-side H1 keepalive connection is closed
without sending anything (which differs from the regular error case since
haproxy doesn't even generate an error message). Any request that was sent on
the reset stream can be safely retried. So, when a stream is closed, if no
data was ever sent back (ie. the flag H2_SF_HEADERS_SENT is not set), we can
set the REFUSED_STREAM error code on the RST_STREAM frame.
This patch may be backported to 1.9.
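A minimal sketch of that condition (flag names follow the text; the flag value
and the default error code below are assumptions) :

  struct h2s_sketch { unsigned int flags; unsigned int errcode; };

  #define H2_SF_HEADERS_SENT    0x0004 /* value assumed for this sketch */
  #define H2_ERR_REFUSED_STREAM 0x7    /* RFC7540 error code 0x7 */

  /* pick REFUSED_STREAM when nothing was ever sent back, so the client
   * knows the request can safely be retried */
  static void set_rst_error(struct h2s_sketch *h2s, unsigned int dflt_err)
  {
      if (!(h2s->flags & H2_SF_HEADERS_SENT))
          h2s->errcode = H2_ERR_REFUSED_STREAM;
      else
          h2s->errcode = dflt_err;
  }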
This only happens for server streams because their id is assigned when the first
message is sent. If these streams are not woken up, some events can be lost
leading to frozen streams. For instance, it happens when a server closes its
connection before sending its preface.
This patch must be backported to 1.9.
When a server aborts a transfer, we used to increment the backend's
counter but not the frontend's during the forwarding phase. This fixes
it. It might be backported to all supported versions (possibly removing
the htx part) though it is of very low importance.
When the body length is greater than a chunk size (so if the length of POST data
exceeds the buffer size), the request is rejected with the status code
STAT_STATUS_EXCD. Otherwise the stats applet will wait to have all the data to
copy and parse them. But there is a problem when the total request size
(including the headers) is just lower than the buffer size but greater than the
buffer size less the reserve. In such a case, the body length is considered as
small enough to be processed but is not entirely received. So the stats applet
waits for more data. But because outgoing data are still there, the channel's
buffer is considered as full and nothing more can be read, leading to a freeze
of the session.
Note this bug is pretty easy to reproduce with the legacy HTTP. It is harder
with the HTX but still possible. To fix the bug, in the stats applet, when the
request is not fully received, we check if at least the reserve remains
available in the channel's buffer.
This patch must be backported as far as 1.5. But because the HTX does not exist
in 1.8 and lower, it will have to be adapted for these versions.
A bug was introduced in the commit b0769b ("BUG/MEDIUM: spoe: initialization
depending on nbthread must be done last"). The code depending on global.nbthread
was moved from cfg_parse_spoe_agent() to spoe_check() but the pointer on the
agent configuration was not updated to use the filter's one. The variable
curagent is a global variable only valid during the configuration parsing. In
spoe_check(), conf->agent must be used instead.
This patch must be backported to 1.9 and 1.8.
We get this with __decl_hathreads due to the lone semi-colon, let's move
it to the end of the innermost declaration :
src/listener.c: In function 'listener_accept':
src/listener.c:601:2: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
First of all, only GET, HEAD and POST methods are now allowed. Others will be
rejected with the status code STAT_STATUS_IVAL (invalid request). Then, for the
legacy HTTP, only POST requests with a content-length are allowed. Now, chunked
encoded requests are also considered as invalid because the chunk formatting
will interfere with the parsing of POST parameters. In HTX, it is not a problem
because data are unchunked.
This patch must be backported to 1.9. For prior versions too, but HTX part must
be removed. The patch introducing the status code STAT_STATUS_IVAL must also be
backported.
The status codes definition (STAT_STATUS_*) and their string representation
(stat_status_codes) have been moved into the stats files. There is no reason to
keep them in the proto_http files.
When htx_from_buf() is used to get an HTX message from a buffer, htx_to_buf()
must always be called when finished. Some calls to htx_to_buf() were missing.
This patch must be backported to 1.9.
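The pairing rule looks like this in practice (a sketch using declarations only;
htx_from_buf() and htx_to_buf() are the API names quoted above) :

  struct buffer;
  struct htx;
  extern struct htx *htx_from_buf(struct buffer *buf);
  extern void htx_to_buf(struct htx *htx, struct buffer *buf);

  void process_message(struct buffer *buf)
  {
      struct htx *htx = htx_from_buf(buf); /* get the HTX view */
      /* ... read or update the HTX message here ... */
      htx_to_buf(htx, buf);                /* always write the view back */
  }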
This function will only increment the total amount of bytes read by a channel
because at this stage there is no fast forwarding. So the bug is pretty limited.
This patch must be backported to 1.9.
An error is reported if the EOS is detected before the end of the message. But
we must be careful not to report an error if there is no message at all.
This patch must be backported to 1.9.
When waking a task on a remote thread, we currently check 1) if this
thread was sleeping, and 2) if it was already marked as active before
writing to its pipe. Unfortunately this doesn't always work as desired
because only one thread from the mask is woken up, while the
active_tasks_mask indicates all eligible threads for this task. As a
result, if one multi-thread task (e.g. a health check) wakes up to run
on any thread, then an accept() dispatches an incoming connection on
thread 2, this thread will already have its bit set in active_tasks_mask
because of the previous wakeup and will not be woken up.
This is easily noticeable on 2.0-dev by injecting on a multi-threaded
listener with a single connection at a time while health checks are
running quickly in the background : the injection runs slowly with
random response times (the poll timeouts). In 1.9 it affects the
dequeuing of server connections, which occasionally experience pauses
if multiple threads share the same queue.
The correct solution consists in adjusting the sleeping_thread_mask
when waking another thread up. This mask reflects threads that are
sleeping, hence that need to be signaled to wake up. Threads with a
bit in active_tasks_mask already don't have their sleeping_thread_mask
bit set before polling so the principle remains consistent. And by
doing so we can remove the old_active_mask field.
This should be backported to 1.9.
Each thread uses one epoll_fd or kqueue_fd, and a pipe (thus two FDs).
These ones have to be accounted for in the maxsock calculation, otherwise
we can reach maxsock before maxconn. This is difficult to observe but it
in fact happens when a server connects back to the frontend and has checks
enabled : the check uses its FD and serves to fill the loop. In this case
all FDs planned for the datapath are used for this.
This needs to be backported to 1.9 and 1.8.
Dragan Dosen reported that after the multi-queue changes, appending
"process 1/even" on a bind line can make the process immediately crash
when delivering a first connection. This is due to the fact that I
believed that thread_mask(mask) applied the all_threads_mask value,
but it doesn't. And in case of even/odd the bits cover more than the
available threads, resulting in too high a thread number being selected
and a non-existing task to be woken up.
No backport is needed.
Some packages used to rely on DEFAULT_MAXCONN to set the default global
maxconn value to use regardless of the initial ulimit. The recent changes
made the lowest bound set to 100 so that it is compatible with almost any
environment. Now that DEFAULT_MAXCONN is not needed for anything else, we
can use it for the lowest bound set when maxconn is not configured. This
way it retains its original purpose of setting the default maxconn value
even though most of the time the effective value will be higher thanks to
the automatic computation based on "ulimit -n".
This entry was still set to 2000 but never used anymore. The only places
where it appeared was as an alias to SYSTEM_MAXCONN which forces it, so
let's turn these ones to SYSTEM_MAXCONN and remove the default value for
DEFAULT_MAXCONN. SYSTEM_MAXCONN still defines the upper bound however.
When haproxy is built with 51Degrees support, but not configured to use
51Degrees database, a segfault can occur when deinit_51degrees()
function is called, eg. during soft-stop on SIGUSR1 signal.
Only builds that use Pattern algorithm are affected.
This fix must be backported to all stable branches where 51Degrees
support is available. Additional adjustments are required for some
branches due to API and naming changes.
Before c8d5b95 the "maxconn" of the backend of dynamic "use_backend"
rules was not modified (this does not make sense and this is correct).
When implementing proxy_adjust_all_maxconn(), c8d5b95 commit missed this case.
With this patch we adjust the "maxconn" of the backend of such rules only if
they are not dynamic.
Without this patch reg-tests/http-rules/h00003.vtc could make haproxy crash.
deinit_log_buffers() can be called once per thread, however startup_logs
is common to all threads. So only attempt to free it once.
This should be backported to 1.9 and 1.8.
Tests show that it's slightly faster to have this field in the listener.
The cache walk patterns are under heavy stress and having only this field
written to in the bind_conf was wasting a cache line that was heavily
read. Let's move this close to the other entries already written to in
the listener. Warning, the position does have an impact on peak performance.
Now that the P2C algorithm for the accept queue is removed, we don't
need to map a number to a thread bit anymore, so let's remove all
these fields which are taking quite some space for no reason.
At this point, the random used in the hybrid queue distribution algorithm
provides little benefit over a periodic scan, can even have a slightly
worse worst case, and it requires to establish a mapping between a
discrete number and a thread ID among a mask.
This patch introduces a different approach using two indexes. One scans
the thread mask from the left, the other one from the right. The related
threads' loads are compared, and the least loaded one receives the new
connection. Then one index is adjusted depending on the load resulting
from this election, so that we start the next election from two known
lightly loaded threads.
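A self-contained sketch of this election (the index update policy below is an
assumption; the real code lives in the listener) :

  #define MAX_THREADS 64

  static unsigned int thread_load[MAX_THREADS]; /* e.g. active connections */
  static unsigned int idx_l, idx_r;             /* persistent scan positions */

  /* next allowed thread after <start>, scanning upward (mask must not be 0) */
  static unsigned int next_up(unsigned long long mask, unsigned int start)
  {
      unsigned int i = start;
      do { i = (i + 1) % MAX_THREADS; } while (!(mask & (1ULL << i)));
      return i;
  }

  /* next allowed thread before <start>, scanning downward */
  static unsigned int next_down(unsigned long long mask, unsigned int start)
  {
      unsigned int i = start;
      do { i = (i + MAX_THREADS - 1) % MAX_THREADS; } while (!(mask & (1ULL << i)));
      return i;
  }

  /* compare the two candidates' loads; keep the index on the winning
   * (lighter) side so the next election starts from lightly loaded threads */
  unsigned int pick_thread(unsigned long long mask)
  {
      unsigned int t1 = next_up(mask, idx_l);
      unsigned int t2 = next_down(mask, idx_r);

      if (thread_load[t1] <= thread_load[t2]) {
          idx_l = t1;
          return t1;
      }
      idx_r = t2;
      return t2;
  }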
This approach provides an extra 1% peak performance boost over the previous
one, which likely corresponds to the removal of the extra work on the
random and the previously required two mappings of index to thread.
A test was attempted with two indexes going in the same direction but it
was much less interesting because the same thread pairs were compared most
of the time with the load climbing in a ladder-like model. With the reverse
directions this cannot happen.
By picking two randoms following the P2C algorithm, we seldom observe
asymmetric loads on bursts of small session counts. This is typically
what makes h2load take a bit of time to complete the last 100% because
if a thread gets two connections while the other ones only have one,
it takes twice the time to complete its work.
This patch proposes a modification of the p2c algorithm which seems
more suitable to this case : it mixes a rotating index with a random.
This way, we're certain that all threads are consulted in turn and at
the same time we're not forced to pick the ones we're giving a chance to.
This significantly increases the traffic rate. Now h2load shows faster
completion and the average request rates on H2 and the TLS resume rate
increases by a bit more than 5% compared to pure p2c.
The index was placed into the struct bind_conf because 1) it's faster
there and 2) it's the best place to optimally distribute traffic among a
group of listeners. It's the only runtime-modified element there and
it will be quite cache-hot.
This patch adds "protobuf" protocol buffers specific converter wich
may used in combination with "ungrpc" as first converter to extract
a protocol buffers field value. It is simply implemented reusing
protobuf_field_lookup() which is the protocol buffers specific parser already
used by "ungrpc" converter which only parse a gRPC header in addition of
parsing protocol buffers message.
Update the documentation for this new "protobuf" converter.
We move the code responsible for parsing protocol buffers messages
inside gRPC messages from sample.c to include/proto/protocol_buffers.h
so as to reuse it when cascading the "ungrpc" converter.
In 84e417d8 ("MINOR: ssl: support Openssl 1.1.1 early callback for
switchctx") the code was extended to also support OpenSSL 1.1.1
(code already supported BoringSSL). A configuration check warning
was updated but with the wrong logic, the #ifdef needs a && instead
of an ||.
Reported in #54.
Should be backported to 1.8.
Emeric reports that when MAX_THREADS and/or MAX_PROCS are set to lower
values, referencing thread or process numbers higher than these limits
in cpu-map returns errors. This is annoying because these typically are
silent settings that are expected to be used only when set. Let's switch
back to LONGBITS for this limit.
Since the "wurfl" device detection engine was merged slightly more than
two years ago (2016-11-04), it never received a single fix nor update.
For almost two years it didn't receive even the minimal review or changes
needed to be compatible with threads, and it's remained build-broken for
about the last 9 months, consecutive to the last buffer API changes,
without anyone ever noticing! When asked on the list, nobody confirmed
using it :
https://www.mail-archive.com/haproxy@formilux.org/msg32516.html
And obviously nobody even cared to verify that it did still build. So we
are left with this broken code with no user and no maintainer. It might
even suffer from remotely exploitable vulnerabilities without anyone
being able to check if it presents any risk. It's a pain to update each
time there is an API change because it doesn't build as it depends on
external libraries that are not publicly accessible, leading to careful
blind changes. It slows down the whole project. This situation is not
acceptable at all.
It's time to cure the problem where it is. This patch removes all this
dead, non-buildable, non-working code. If anyone ever decides to use it,
which I seriously doubt based on history, it could be reintegrated, but
this time the following guarantees will be required :
- someone has to step up as a maintainer and have his name listed in
the MAINTAINERS file (I should have been more careful last time).
This person will take the sole blame for all issues and will be
responsible for fixing the bugs and incompatibilities affecting
this code, and for making it evolve to follow regular internal API
updates.
- support building on a standard distro with automated tools (i.e. no
more "click on this site, register your e-mail and download an
archive then figure how to place this into your build system").
Dummy libs are OK though as long as they allow the mainline code to
build and start.
- multi-threaded support must be fixed. I mean seriously, not worked
around with a check saying "please disable threads, we've been busy
fishing for the last two years".
This may be backported to 1.9 given that the code has never worked there
either, thus at least we're certain nobody will miss it.
For now on, "ungrpc" may take a second optional argument to provide
the protocol buffers types used to encode the field value to be extracted.
When absent the field value is extracted as a binary sample which may then
followed by others converters like "hex" which takes binary as input sample.
When this second argument is a type which does not match the one found by "ungrpc",
this field is considered as not found even if present.
With this patch we also remove the useless "varint" and "svarint" converters.
Update the documentation about "ungrpc" converters.
Parsing protocol buffer fields always consists in skipping the field
if it is not the one looked for, or storing the field value if it is.
So, with this patch we factorize the code for the "ungrpc" converter a little bit.
While the legacy code converts h2 to h1 and provides some control over
what is passed, in htx mode there is no such control and it is possible
to pass control chars and linear white spaces in the path, which are
possibly reencoded differently once passed to the H1 side.
HTX supports parse error reporting using a special flag. Let's check
the correctness of the :path pseudo header and report any anomaly in
the HTX flag.
Thanks to Jérôme Magnin for reporting this bug with a working reproducer.
This fix must be backported to 1.9 along with the two previous patches
("MINOR: htx: unconditionally handle parsing errors in requests or
responses" and "MINOR: mux-h2: always pass HTX_FL_PARSING_ERROR
between h2s and buf on RX").
In order to allow the H2 parser to report parsing errors, we must make
sure to always pass the HTX_FL_PARSING_ERROR flag from the h2s htx to
the conn_stream's htx.
The htx request and response processing functions currently only check
for HTX_FL_PARSING_ERROR on incomplete messages because that's how mux_h1
delivers these. However with H2 we have to detect some parsing errors in
the format of certain pseudo-headers (e.g. :path), so we do have a complete
message but we want to report an error.
Let's move the parse error check earlier so that it always triggers when
the flag is present. It was also moved for htx_wait_for_request_body()
since we definitely want to be able to abort processing such an invalid
request even if it appears complete, but it was not changed in the forward
functions so as not to truncate contents before the position of the first
error.
This patch simply extracts the code of smp_fetch_req_ungrpc() for "req.ungrpc"
from http_fetch.c to move it to sample.c with very few modifications.
Furthermore smp_fetch_body_buf(), used to fetch the body contents, is no longer needed.
Update the documentation for gRPC.
A crash in H2 was reported in issue #52. It turns out that there is a
small but existing race by which a conn_stream could detach itself
using h2_detach(), not being able to destroy the h2s due to pending
output data blocked by flow control, then upon next h2s activity
(transfer_data or trailers parsing), an ES flag may need to be turned
into a CS_FL_REOS bit, causing a dereference of a NULL stream. This is
a side effect of the fact that we still have a few places which
incorrectly depend on the CS flags, while these flags should only be
set by h2_rcv_buf() and h2_snd_buf().
All candidate locations along this path have been secured against this
risk, but the code should really evolve to stop depending on CS anymore.
This fix must be backported to 1.9 and possibly partially to 1.8.
The global maxconn value is often a pain to configure :
- in development the user never has the permissions to increase the
rlim_cur value too high and gets warnings all the time ;
- in some production environments, users may have limited actions on
it or may only be able to act on rlim_fd_cur using ulimit -n. This
is sometimes particularly true in containers or whatever environment
where the user has no privilege to upgrade the limits.
- keeping config homogeneous between machines is even less easy.
We already had the ability to automatically compute maxconn from the
memory limits when they were set. This patch goes a bit further by also
computing the limit permitted by the configured limit on the number of
FDs. For this it simply reverses the rlim_fd_cur calculation to determine
maxconn based on the number of reserved sockets for listeners & checks,
the number of SSL engines and the number of pipes (absolute or relative).
This way it becomes possible to make maxconn always be the highest possible
value resulting in maxsock matching what was set using "ulimit -n", without
ever setting it. Note that we adjust to the soft limit, not the hard one,
since it's what is configured with ulimit -n. This allows users to also
limit to low values if needed.
Just like before, the calculated value is reported in verbose mode.
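A rough sketch of the reversed calculation (the per-connection FD costs below
are simplified assumptions; the real computation accounts for more details) :

  /* maxsock = maxconn * fds_per_conn + reserved  =>  solve for maxconn */
  static int maxconn_from_fd_limit(int rlim_fd_cur, int reserved_socks,
                                   int ssl_fds_per_conn, int pipe_fds_per_conn)
  {
      /* each connection uses 2 FDs (one per side), plus whatever SSL
       * engines and splicing pipes add per connection */
      int fds_per_conn = 2 + ssl_fds_per_conn + pipe_fds_per_conn;

      return (rlim_fd_cur - reserved_socks) / fds_per_conn;
  }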
We'll need to know the global maxsock before the maxconn calculation.
Actually only two components were calculated too late, the peers FD
and the stats FD. Let's move them a few lines upward.
The default number of pipes is adjusted based on the sum of frontends
and backends maxconn/fullconn settings. Now that it is possible to have
a null maxconn on a frontend to indicate "unlimited" with commit
c8d5b95e6 ("MEDIUM: config: don't enforce a low frontend maxconn value
anymore"), the sum of maxconn may remain low and limited to the only
frontends/backends where this limit is set.
This patch considers this new unlimited case when doing the check, and
automatically switches to the default value which is maxconn/4 in this
case. All the calculation was moved to a distinct function for ease of
use. This function also supports returning unlimited (-1) when the
value depends on global.maxconn and this latter is not yet set.
When the master re-execs itself on reload, it doesn't restore the initial
rlim_fd_cur/rlim_fd_max values, which have been modified by the ulimit-n
or global maxconn directives. This is a problem, because if these values
were set really low it could prevent the process from restarting, and if
they were set very high, this could have some implications on the restart
time, or later on the computed maxconn.
Let's simply reset these values to the ones we had at boot to maintain
the system in a consistent state.
A backport could be performed to 1.9 and maybe 1.8. This patch depends on
the two previous ones.
It's not normal that external processes are run with high FD limits,
as quite often such processes (especially shell scripts) will iterate
over all FDs to close them. Ideally we should even provide a tunable
with the external-check directive to adjust this value, but at least
we need to restore it to the value that was active when starting
haproxy (before it was adjusted for maxconn). Additionally with very
low maxconn values causing rlim_fd_cur to be low, some heavy checks
could possibly fail. This was also mentioned in issue #45.
Currently the following config and scripts report this :
$ cat rlim.cfg
global
    maxconn 500000
    external-check

listen www
    bind :8001
    timeout client 5s
    timeout server 5s
    timeout connect 5s
    option external-check
    external-check command "$PWD/sleep1.sh"
    server local 127.0.0.1:80 check inter 1s

$ cat sleep1.sh
#!/bin/sh
/bin/sleep 0.1
echo -n "soft: ";ulimit -S -n
echo -n "hard: ";ulimit -H -n
# ./haproxy -db -f rlim.cfg
soft: 1000012
hard: 1000012
soft: 1000012
hard: 1000012
Now with the fix :
# ./haproxy -db -f rlim.cfg
soft: 1024
hard: 4096
soft: 1024
hard: 4096
This fix should be backported to stable versions but it depends on
"MINOR: global: keep a copy of the initial rlim_fd_cur and rlim_fd_max
values" and "BUG/MINOR: init: never lower rlim_fd_max".
If a ulimit-n value is set, we must not lower the rlim_max value if the
new value is lower, we must only adjust the rlim_cur one. The effect is
that on very low values, this could prevent a master-worker reload, or
make an external check fail by lack of FDs.
This may be backported to 1.9 and earlier, but it depends on this patch
"MINOR: global: keep a copy of the initial rlim_fd_cur and rlim_fd_max
values".
Let's keep a copy of these initial values. They will be useful to
compute automatic maxconn, as well as to restore proper limits when
doing an execve() on external checks.
This patch implements peer heartbeat feature to prevent any haproxy peer
from reconnecting too often, consuming sockets for nothing.
To do so, we add PEER_MSG_CTRL_HEARTBEAT new message to PEER_MSG_CLASS_CONTROL peers
control class of messages. A ->heartbeat field is added to peer structs
to store the heartbeat timeout value which is handled by the same function as for ->reconnect
to control the session timeouts. A 2-bytes heartbeat message is sent every 3s when
no updates have to be sent. This way, the peer which receives such a message is sure
the remote peer is still alive. So, it resets the ->reconnect peer session
timeout to its initial value (5s). This prevents any reconnection to an
already connected alive peer.
Historically the default frontend's maxconn used to be quite low (2000),
which was sufficient two decades ago but often proved to be a problem
when users had purposely set the global maxconn value but forgot to set
the frontend's.
There is no point in keeping this arbitrary limit for frontends : when
the global maxconn is lower, it's already too high and when the global
maxconn is much higher, it becomes a limiting factor which causes trouble
in production.
This commit allows the value to be set to zero, which becomes the new
default value, to mean it's not directly limited, or in fact it's set
to the global maxconn. Since this operation used to be performed before
computing a possibly automatic global maxconn based on memory limits,
the calculation of the maxconn value and its propagation to the backends'
fullconn has now moved to a dedicated function, proxy_adjust_all_maxconn(),
which is called once the global maxconn is stabilized.
This comes with two benefits :
1) a configuration missing "maxconn" in the defaults section will not
limit itself to a magically hardcoded value but will scale up to the
global maxconn ;
2) when the global maxconn is not set and memory limits are used instead,
the frontends' maxconn automatically adapts, and the backends' fullconn
as well.
It is possible to update a frontend's maxconn from the CLI. Unfortunately
when doing this it scratches all listeners' maxconn values and sets them
all to the new frontend's value. This can be problematic when mixing
different traffic classes (bind to interface or private networks, etc).
Now that the listener's maxconn is allowed to remain unset, let's not
change these values when setting the frontend's maxconn. This way the
overall frontend's limit can be raised but if certain specific listeners
had their own value forced in the config, they will be preserved. This
makes more sense and is more in line with the principle of defaults
propagation.
It's pointless to always set and maintain l->maxconn because the accept
loop already enforces the frontend's limit anyway. Thus let's stop setting
this value by default and keep it to zero meaning "no limit". This way the
frontend's maxconn will be used by default. Of course if a value is set,
it will be enforced.
In an attempt to try to provide automatic maxconn settings, we need to
decorrelate a listener's backlog and maxconn so that these values can be
independent. This introduces a listener_backlog() function which retrieves
the backlog value from the listener's backlog, the frontend's, the
listener's maxconn, the frontend's or falls back to 1024. This
corresponds to what was done in cfgparse.c to force a value there except
the last fallback which was not set since the frontend's maxconn is always
known.
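A sketch of this fallback chain (simplified struct layout with assumed field
names; the real function lives in listener.c) :

  struct px_sketch { int backlog, maxconn; };
  struct li_sketch { int backlog, maxconn; struct px_sketch *frontend; };

  /* effective backlog for a listener, as described above */
  static int listener_backlog_sketch(const struct li_sketch *l)
  {
      if (l->backlog)
          return l->backlog;
      if (l->frontend->backlog)
          return l->frontend->backlog;
      if (l->maxconn)
          return l->maxconn;
      if (l->frontend->maxconn)
          return l->frontend->maxconn;
      return 1024; /* last-resort default */
  }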
We were not checking p->feconn nor the global actconn soon enough. In
older versions this could result in a frontend accepting more connections
than allowed by its maxconn or the global maxconn, exactly N-1 extra
connections where N is the number of threads, provided each of these
threads were running a different listener. But with the lock removal,
it became worse, the excess could be the listener's maxconn multiplied
by the number of threads. Among the nasty side effect was that LI_FULL
could be removed while the limit was still over and in some cases the
polling on the socket was not re-enabled.
This commit takes care of updating and checking p->feconn and the global
actconn *before* processing the connection, so that the listener can be
turned off before accepting the socket if needed. This requires to move
some of the bookkeeping operations from the session to the listener, which totally
makes sense in this context.
Now the limits are properly respected, even if a listener's maxconn is
over a frontend's. This only applies on top of the listener lock removal
series and doesn't have to be backported.
There is a very difficult to reproduce race in the listener's accept
code, which is much easier to reproduce once connection limits are
properly enforced. It's an ABBA lock issue :
- the following functions take l->lock then lq_lock :
disable_listener, pause_listener, listener_full, limit_listener,
do_unbind_listener
- the following ones take lq_lock then l->lock :
resume_listener, dequeue_all_listeners
This is because __resume_listener() only takes the listener's lock
and expects to be called with lq_lock held. The problem can easily
happen when listener_full() and limit_listener() are called a lot
while in parallel another thread releases sessions for the same
listener using listener_release() which in turn calls resume_listener().
This scenario is more prevalent in 2.0-dev since the removal of the
accept lock in listener_accept(). However in 1.9 and before, a different
but extremely unlikely scenario can happen :
thread1 thread2
............................ enter listener_accept()
limit_listener()
............................ long pause before taking the lock
session_free()
dequeue_all_listeners()
lock(lq_lock) [1]
............................ try_lock(l->lock) [2]
__resume_listener()
spin_lock(l->lock) =>WAIT[2]
............................ accept()
l->accept()
nbconn==maxconn =>
listener_full()
state==LI_LIMITED =>
lock(lq_lock) =>DEADLOCK[1]!
In practice it is almost impossible to trigger it because it requires
to limit both on the listener's maxconn and the frontend's rate limit,
at the same time, and to release the listener when the connection rate
goes below the limit between poll() returns the FD and the lock is
taken (a few nanoseconds). But maybe with threads competing on the
same core it has more chances to appear.
This patch removes the lq_lock and replaces it with a lockless queue
for the listener's wait queue (well, technically speaking a self-locked
queue) brought by commit a8434ec14 ("MINOR: lists: Implement locked
variations.") and its few subsequent fixes. This relieves us from the
need of the lq_lock and removes the deadlock. It also gets rid of the
distinction between __resume_listener() and resume_listener() since the
only difference was the lq_lock. All listener removals from the list
are now unconditional to avoid races on the state. It's worth noting
that the list used to never be initialized and that it used to work
only thanks to the state tests, so the initialization has now been
added.
This patch must carefully be backported to 1.9 and very likely 1.8.
It is mandatory to be careful about replacing all manipulations of
l->wait_queue, global.listener_queue and p->listener_queue.
Since LIST_DEL_LOCKED() and LIST_POP_LOCKED() now automatically reinitialize
the removed element, there's no need for keeping this LIST_INIT() call in the
idle connection code.
global.maxsock used to be augmented by the frontend's maxconn value
for each frontend listener, which is absurd when there are many
listeners in a frontend because the frontend's maxconn fixes an
upper limit to how many connections will be accepted on all of its
listeners anyway. What is needed instead is to add one to count the
listening socket.
In addition, the CLI's and peers' value was incremented twice, the
first time when creating the listener and the second time in the
main init code.
Let's now make sure we only increment global.maxsock by the required
amount of sockets. This means not adding maxconn for each listener,
and relying on the global values when they are correct.
Threads have long matured by now, still for most users their usage is
not trivial. It's about time to enable them by default on platforms
where we know the number of CPUs bound. This patch does this, it counts
the number of CPUs the process is bound to upon startup, and enables as
many threads by default. Of course, "nbthread" still overrides this, but
if it's not set the default behaviour is to start one thread per CPU.
The default number of threads is reported in "haproxy -vv". Simply using
"taskset -c" is now enough to adjust this number of threads so that there
is no more need for playing with cpu-map. And thanks to the previous
patches on the listener, the vast majority of configurations will not
need to duplicate "bind" lines with the "process x/y" statement anymore
either, so a simple config will automatically adapt to the number of
processors available.
tune.listener.multi-queue { on | off }
Enables ('on') or disables ('off') the listener's multi-queue accept which
spreads the incoming traffic to all threads a "bind" line is allowed to run
on instead of taking them for itself. This provides a smoother traffic
distribution and scales much better, especially in environments where threads
may be unevenly loaded due to external activity (network interrupts colliding
with one thread for example). This option is enabled by default, but it may
be forcefully disabled for troubleshooting or for situations where it is
estimated that the operating system already provides a good enough
distribution and connections are extremely short-lived.
It's important to monitor the accept queues to know if some incoming
connections had to be handled by their originating thread due to an
overflow. It's also important to be able to confirm thread fairness.
This patch adds "accq_pushed" to activity reporting, which reports
the number of connections that were successfully pushed into each
thread's queue, and "accq_full", which indicates the number of
connections that couldn't be pushed because the thread's queue was
full.
The idea is to redistribute an incoming connection to one of the
threads a bind_conf is bound to when there is more than one. We do this
using a random improved by the p2c algorithm : a random() call returns
two different thread numbers. We then compare their respective connection
count and the length of their accept queues, and pick the least loaded
one. We even use this deferred accept mechanism if the target thread
ends up being the local thread, because this maintains fairness between
all connections and tests show that it's about 1% faster this way,
likely due to cache locality. If the target thread's accept queue is
full, the connection is accepted synchronously by the current thread.
There is one point where we can migrate a connection to another thread
without taking risk, it's when we accept it : the new FD is not yet in
the fd cache and no task was created yet. It's still possible to assign
it a different thread than the one which accepted the connection. The
only requirement for this is to have one accept queue per thread and
their respective processing tasks that have to be woken up each time
an entry is added to the queue.
This is a multiple-producer, single-consumer model. Entries are added
at the queue's tail and the processing task is woken up. The consumer
picks entries at the head and processes them in order. The accept queue
contains the fd, the source address, and the listener. Each entry of
the accept queue was rounded up to 64 bytes (one cache line) to avoid
cache aliasing because tests have shown that otherwise performance
suffers a lot (5%). A test has shown that it's important to have at
least 256 entries for the rings, as at 128 it's still possible to fill
them often at high loads on small thread counts.
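For illustration, one per-thread ring may be sketched as follows (names and
layout are assumptions; the real structure differs in detail) :

  #include <sys/socket.h>

  #define ACCEPT_QUEUE_SIZE 256 /* at least 256 entries, see above */

  struct listener; /* opaque here */

  /* one entry, aligned on a cache line boundary to avoid cache aliasing */
  struct accept_entry {
      int fd;                       /* the freshly accepted socket */
      struct sockaddr_storage addr; /* source address */
      struct listener *li;          /* originating listener */
  } __attribute__((aligned(64)));

  /* multiple producers push at the tail, the single consumer (the
   * owning thread's processing task) pops at the head */
  struct accept_ring {
      unsigned int head, tail;
      struct accept_entry entry[ACCEPT_QUEUE_SIZE];
  };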
The processing task does almost nothing except calling the listener's
accept() function and updating the global session and SSL rate counters
just like listener_accept() does on synchronous calls.
At this point the accept queue is implemented but not used.
In order to quickly pick a thread ID when accepting a connection, we'll
need to know certain pre-computed values derived from the thread mask,
which are counts of bits per position multiples of 1, 2, 4, 8, 16 and
32. In practice it is sufficient to compute only the first 4 and
store them in the bind_conf. We update the count every time the
bind_thread value is adjusted.
The fields in the bind_conf struct have been moved around a little bit
to make it easier to group all thread bit values into the same cache
line.
The function used to return a thread number is bind_map_thread_id(),
and it maps a number between 0 and 31/63 to a thread ID between 0 and
31/63, starting from the left.
Function mask_find_rank_bit() returns the bit position in mask <m> of
the nth bit set of rank <r>, between 0 and LONGBITS-1 included, starting
from the left. For example ranks 0,1,2,3 for mask 0x55 will be 6, 4, 2
and 0 respectively. This algorithm is based on a popcount variant and
is described here : https://graphics.stanford.edu/~seander/bithacks.html.
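A naive, loop-based equivalent for illustration (the real function uses the
branchless popcount variant from the page above) :

  /* returns the position of the set bit of rank <r> in mask <m>, counting
   * from the left; e.g. rank 1 of 0x55 gives 4. Returns ~0U if fewer
   * than r+1 bits are set. */
  static unsigned int find_rank_bit_slow(unsigned long m, unsigned int r)
  {
      int pos;

      for (pos = (int)(sizeof(m) * 8) - 1; pos >= 0; pos--) {
          if (m & (1UL << pos)) {
              if (!r)
                  return (unsigned int)pos;
              r--;
          }
      }
      return ~0U;
  }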
This function used to hold the listener's lock as a way to stay safe
against concurrent manipulations, but it turns out this is wrong. First,
the lock is held during l->accept(), which itself might indirectly call
listener_release(), which, if the listener is marked full, could result
in __resume_listener() being called and the lock being taken twice. In
practice it doesn't happen right now because the listener's FULL state
cannot change while we're doing this.
Second, all this code is now protected against concurrent accesses.
It used not to be the case in the early days of threads : the frequency
counters are thread-safe. The rate limiting doesn't require extreme
precision. Only the nbconn check is not thread safe.
Third, the parts called here will have to be called from different
threads without holding this lock, and this becomes a bigger issue
if we need to keep this one.
This patch does 3 things which need to be addressed at once :
1) it moves the lock to the only 2 functions that were not protected
since they are called from listener_accept() :
- limit_listener()
- listener_full()
2) it makes sure delete_listener() properly checks its state within
the lock.
3) it updates the l->nbconn tracking to make sure that it is always
properly reported and accounted for. There is a point of particular
care around the situation where the listener's maxconn is reached
because the listener has to be marked full before accepting the
connection, then resumed if the connection finally gets dropped.
It is not possible to perform this change without removing the
lock due to the deadlock issue explained above.
This patch almost doubles the accept rate in multi-thread on a shared
port between 8 threads, and multiplies by 4 the connection rate on a
tcp-request connection reject rule.
Now that nbproc and nbthread are exclusive, we can still provide more
detailed explanations about what we've found in the config when a bind
line appears on multiple threads and processes at the same time, then
ignore the setting.
This patch reduces the listener's thread mask to a single mask instead
of an array of masks per process. Now we have only one thread mask and
one process mask per bind-conf. This removes ~504 bytes of RAM per
bind-conf and will simplify handling of thread masks.
If a "bind" line only refers to process numbers not found by its parent
frontend or not covered by the global nbproc directive, or to a thread
not covered by the global nbthread directive, a warning is emitted saying
what will be used instead.
When 1.8 was released, we wanted to support both nbthread and nbproc to
observe how things would go. Since then it appeared obvious that the two
are never used together because of the pain to configure affinity in this
case, and instead of bringing benefits, it brings the limitations of both
models, and causes multiple threads to compete for the same CPU. In
addition, it costs a lot to support both in parallel, so let's get rid
of this once and for all.
The test on l->nbconn forces to exit the loop before updating the freq
counters, so the last session which reaches a listener's limit will not
be accounted for in the session rate measurement.
Let's move the test at the beginning of the loop and mark the listener
as saturated on exit.
This may be backported to 1.9 and 1.8.
The number of bytes to use with "my_realloc2()" in parse_dotted_nums()
was wrong: missing multiplication by the size of an element of an array
when reallocating it.
When calling calloc(), cast global.nbthread to unsigned int, so that gcc
doesn't freak out, as it has no way of knowing global.nbthread can't be
negative.
Instead of having one task per thread and per server that cleans the
idle connections, have only one global task for all servers.
That task parses all the servers that currently have idle connections,
and removes half of them, to put them in a per-thread list of connections
to kill. For each thread that has connections to kill, wake a task
to do so, so that the cleaning will be done in the context of said thread.
Use the locked macros when manipulating idle_orphan_conns, so that other
threads can remove elements from it.
It will be useful later to avoid having a task per server and per thread to
clean up the orphan list.
The if-statement was converted into a while-loop in
7fe45698f5 to handle EINTR.
This special handling was later replaced in
0a03c0f022 by conn_sock_send.
The while-loop was not changed back and is now unconditionally
exited after one iteration, with no `continue` inside the body.
Replace it with an if-statement.
Released version 2.0-dev1 with the following main changes :
- MINOR: mux-h2: only increase the connection window with the first update
- REGTESTS: remove the expected window updates from H2 handshakes
- BUG/MINOR: mux-h2: make empty HEADERS frame return a connection error
- BUG/MEDIUM: mux-h2: mark that we have too many CS once we have more than the max
- MEDIUM: mux-h2: remove padlen during headers phase
- MINOR: h2: add a bit-based frame type representation
- MINOR: mux-h2: remove useless check for empty frame length in h2s_decode_headers()
- MEDIUM: mux-h2: decode HEADERS frames before allocating the stream
- MINOR: mux-h2: make h2c_send_rst_stream() use the dummy stream's error code
- MINOR: mux-h2: add a new dummy stream for the REFUSED_STREAM error code
- MINOR: mux-h2: fail stream creation more cleanly using RST_STREAM
- MINOR: buffers: add a new b_move() function
- MINOR: mux-h2: make h2_peek_frame_hdr() support an offset
- MEDIUM: mux-h2: handle decoding of CONTINUATION frames
- CLEANUP: mux-h2: remove misleading comments about CONTINUATION
- BUG/MEDIUM: servers: Don't try to reuse connection if we switched server.
- BUG/MEDIUM: tasks: Decrement tasks_run_queue in tasklet_free().
- BUG/MINOR: htx: send the proper authenticate header when using http-request auth
- BUG/MEDIUM: mux_h2: Don't add to the idle list if we're full.
- BUG/MEDIUM: servers: Fail if we fail to allocate a conn_stream.
- BUG/MAJOR: servers: Use the list api correctly to avoid crashes.
- BUG/MAJOR: servers: Correctly use LIST_ELEM().
- BUG/MAJOR: sessions: Use an unlimited number of servers for the conn list.
- BUG/MEDIUM: servers: Flag the stream_interface on handshake error.
- MEDIUM: servers: Be smarter when switching connections.
- MEDIUM: sessions: Keep track of which connections are idle.
- MINOR: payload: add sample fetch for TLS ALPN
- BUG/MEDIUM: log: don't mark log FDs as non-blocking on terminals
- MINOR: channel: Add the function channel_add_input
- MINOR: stats/htx: Call channel_add_input instead of updating channel state by hand
- BUG/MEDIUM: cache: Be sure to end the forwarding when XFER length is unknown
- BUG/MAJOR: htx: Return the good block address after a defrag
- MINOR: lb: allow redispatch when using consistent hash
- CLEANUP: mux-h2: fix end-of-stream flag name when processing headers
- BUG/MEDIUM: mux-h2: always restart reading if data are available
- BUG/MINOR: mux-h2: set the stream-full flag when leaving h2c_decode_headers()
- BUG/MINOR: mux-h2: don't check the CS count in h2c_bck_handle_headers()
- BUG/MINOR: mux-h2: mark end-of-stream after processing response HEADERS, not before
- BUG/MINOR: mux-h2: only update rxbuf's length for H1 headers
- BUG/MEDIUM: mux-h1: use per-direction flags to indicate transitions
- BUG/MEDIUM: mux-h1: make HTX chunking consistent with H2
- BUG/MAJOR: stream-int: Update the stream expiration date in stream_int_notify()
- BUG/MEDIUM: proto-htx: Set SI_FL_NOHALF on server side when request is done
- BUG/MEDIUM: mux-h1: Add a task to handle connection timeouts
- MINOR: mux-h2: make h2c_decode_headers() return a status, not a count
- MINOR: mux-h2: add a new dummy stream : h2_error_stream
- MEDIUM: mux-h2: make h2c_decode_headers() support recoverable errors
- BUG/MINOR: mux-h2: detect when the HTX EOM block cannot be added after headers
- MINOR: mux-h2: remove a misleading and impossible test
- CLEANUP: mux-h2: clean the stream error path on HEADERS frame processing
- MINOR: mux-h2: check for too many streams only for idle streams
- MINOR: mux-h2: set H2_SF_HEADERS_RCVD when a HEADERS frame was decoded
- BUG/MEDIUM: mux-h2: decode trailers in HEADERS frames
- MINOR: h2: add h2_make_h1_trailers to turn H2 headers to H1 trailers
- MEDIUM: mux-h2: pass trailers to H1 (legacy mode)
- MINOR: htx: add a new function to add a block without filling it
- MINOR: h2: add h2_make_htx_trailers to turn H2 headers to HTX trailers
- MEDIUM: mux-h2: pass trailers to HTX
- MINOR: mux-h1: parse the content-length header on output and set H1_MF_CLEN
- BUG/MEDIUM: mux-h1: don't enforce chunked encoding on requests
- MINOR: mux-h2: make HTX_BLK_EOM processing idempotent
- MINOR: h1: make the H1 headers block parser able to parse headers only
- MEDIUM: mux-h2: emit HEADERS frames when facing HTX trailers blocks
- MINOR: stream/htx: Add info about the HTX structs in "show sess all" command
- MINOR: stream: Add the subscription events of SIs in "show sess all" command
- MINOR: mux-h1: Add the subscription events in "show fd" command
- BUG/MEDIUM: h1: Get the h1m state when restarting the headers parsing
- BUG/MINOR: cache/htx: Be sure to count partial trailers
- BUG/MEDIUM: h1: In h1_init(), wake the tasklet instead of calling h1_recv().
- BUG/MEDIUM: server: Defer the mux init until after xprt has been initialized.
- MINOR: connections: Remove a stall comment.
- BUG/MEDIUM: cli: make "show sess" really thread-safe
- BUILD: add a new file "version.c" to carry version updates
- MINOR: stream/htx: add the HTX flags output in "show sess all"
- MINOR: stream/cli: fix the location of the waiting flag in "show sess all"
- MINOR: stream/cli: report more info about the HTTP messages on "show sess all"
- BUG/MINOR: lua: bad args are returned for Lua actions
- BUG/MEDIUM: lua: dead lock when Lua tasks are triggered
- MINOR: htx: Add a helper function to get the max space usable for a block
- MINOR: channel/htx: Add HTX version for some helper functions
- BUG/MEDIUM: cache/htx: Respect the reserve when cached objects are served
- BUG/MINOR: stats/htx: Respect the reserve when the stats page is dumped
- DOC: regtest: make it clearer what the purpose of the "broken" series is
- REGTEST: mailers: add new test for 'mailers' section
- REGTEST: Add a reg test for health-checks over SSL/TLS.
- BUG/MINOR: mux-h1: Close connection on shutr only when shutw was really done
- MEDIUM: mux-h1: Clarify how shutr/shutw are handled
- BUG/MINOR: compression: Disable it if another one is already in progress
- BUG/MINOR: filters: Detect cache+compression config on legacy HTTP streams
- BUG/MINOR: cache: Disable the cache if any compression filter precedes it
- REGTEST: Add some information to test results.
- MINOR: htx: Add a function to truncate all blocks after a specific offset
- MINOR: channel/htx: Add the HTX version of channel_truncate/erase
- BUG/MINOR: proto_htx: Use HTX versions to truncate or erase a buffer
- BUG/CRITICAL: mux-h2: re-check the frame length when PRIORITY is used
- DOC: Fix typo in req.ssl_alpn example (commit 4afdd138424ab...)
- DOC: http-request cache-use / http-response cache-store expects cache name
- REGTEST: "capture (request|response)" regtest.
- BUG/MINOR: lua/htx: Respect the reserve when data are send from an HTX applet
- REGTEST: filters: add compression test
- BUG/MEDIUM: init: Initialize idle_orphan_conns for first server in server-template
- BUG/MEDIUM: ssl: Disable anti-replay protection and set max data with 0RTT.
- DOC: Be a bit more explicit about allow-0rtt security implications.
- MINOR: mux-h1: make the mux_h1_ops struct static
- BUILD: makefile: add an EXTRA_OBJS variable to help build optional code
- BUG/MEDIUM: connection: properly unregister the mux on failed initialization
- BUG/MAJOR: cache: fix confusion between zero and uninitialized cache key
- REGTESTS: test case for map_regm commit 271022150d
- REGTESTS: Basic tests for concat,strcmp,word,field,ipmask converters
- REGTESTS: Basic tests for using maps to redirect requests / select backend
- DOC: REGTESTS README varnishtest -Dno-htx= define.
- MINOR: spoe: Make the SPOE filter compatible with HTX proxies
- MINOR: checks: Store the proxy in checks.
- BUG/MEDIUM: checks: Avoid having an associated server for email checks.
- REGTEST: Switch to vtest.
- REGTEST: Adapt reg test doc files to vtest.
- BUG/MEDIUM: h1: Make sure we destroy an inactive connection that did shutw.
- BUG/MINOR: base64: dec func ignores padding for output size checking
- BUG/MEDIUM: ssl: missing allocation failure checks loading tls key file
- MINOR: ssl: add support of aes256 bits ticket keys on file and cli.
- BUG/MINOR: backend: don't use url_param_name as a hint for BE_LB_ALGO_PH
- BUG/MINOR: backend: balance uri specific options were lost across defaults
- BUG/MINOR: backend: BE_LB_LKUP_CHTREE is a value, not a bit
- MINOR: backend: move url_param_name/len to lbprm.arg_str/len
- MINOR: backend: make headers and RDP cookie also use arg_str/len
- MINOR: backend: add new fields in lbprm to store more LB options
- MINOR: backend: make the header hash use arg_opt1 for use_domain_only
- MINOR: backend: remap the balance uri settings to lbprm.arg_opt{1,2,3}
- MINOR: backend: move hash_balance_factor out of chash
- MEDIUM: backend: move all LB algo parameters into an union
- MINOR: backend: make the random algorithm support a number of draws
- BUILD/MEDIUM: da: Necessary code changes for new buffer API.
- BUG/MINOR: stick_table: Prevent conn_cur from underflowing
- BUG: 51d: Changes to the buffer API in 1.9 were not applied to the 51Degrees code.
- BUG/MEDIUM: stats: Get the right scope pointer depending on HTX is used or not
- DOC: add a missing space in the documentation for bc_http_major
- REGTEST: checks basic stats webpage functionality
- BUG/MEDIUM: servers: Make assign_tproxy_address work when ALPN is set.
- BUG/MEDIUM: connections: Add the CO_FL_CONNECTED flag if a send succeeded.
- DOC: add github issue templates
- MINOR: cfgparse: Extract some code to be re-used.
- CLEANUP: cfgparse: Return asap from cfg_parse_peers().
- CLEANUP: cfgparse: Code reindentation.
- MINOR: cfgparse: Useless frontend initialization in "peers" sections.
- MINOR: cfgparse: Rework peers frontend init.
- MINOR: cfgparse: Simplification.
- MINOR: cfgparse: Make "peer" lines be parsed as "server" lines.
- MINOR: peers: Make outgoing connection to SSL/TLS peers work.
- MINOR: cfgparse: SSL/TLS binding in "peers" sections.
- DOC: peers: SSL/TLS documentation for "peers"
- BUG/MINOR: startup: certain goto paths in init_pollers fail to free
- BUG/MEDIUM: checks: fix recent regression on agent-check making it crash
- BUG/MINOR: server: don't always trust srv_check_health when loading a server state
- BUG/MINOR: check: Wake the check task if the check is finished in wake_srv_chk()
- BUG/MEDIUM: ssl: Fix handling of TLS 1.3 KeyUpdate messages
- DOC: mention the effect of nf_conntrack_tcp_loose on src/dst
- BUG/MINOR: proto-htx: Return an error if all headers cannot be received at once
- BUG/MEDIUM: mux-h2/htx: Respect the channel's reserve
- BUG/MINOR: mux-h1: Apply the reserve on the channel's buffer only
- BUG/MINOR: mux-h1: avoid copying output over itself in zero-copy
- BUG/MAJOR: mux-h2: don't destroy the stream on failed allocation in h2_snd_buf()
- BUG/MEDIUM: backend: also remove from idle list muxes that have no more room
- BUG/MEDIUM: mux-h2: properly abort on trailers decoding errors
- MINOR: h2: declare new sets of frame types
- BUG/MINOR: mux-h2: CONTINUATION in closed state must always return GOAWAY
- BUG/MINOR: mux-h2: headers-type frames in HREM are always a connection error
- BUG/MINOR: mux-h2: make it possible to set the error code on an already closed stream
- BUG/MINOR: hpack: return a compression error on invalid table size updates
- MINOR: server: make sure pool-max-conn is >= -1
- BUG/MINOR: stream: take care of synchronous errors when trying to send
- CLEANUP: server: fix indentation mess on idle connections
- BUG/MINOR: mux-h2: always check the stream ID limit in h2_avail_streams()
- BUG/MINOR: mux-h2: refuse to allocate a stream with too high an ID
- BUG/MEDIUM: backend: never try to attach to a mux having no more stream available
- MINOR: server: add a max-reuse parameter
- MINOR: mux-h2: always consider a server's max-reuse parameter
- MEDIUM: stream-int: always mark pending outgoing SI_ST_CON
- MINOR: stream: don't wait before retrying after a failed connection reuse
- MEDIUM: h2: always parse and deduplicate the content-length header
- BUG/MINOR: mux-h2: always compare content-length to the sum of DATA frames
- CLEANUP: h2: Remove debug printf in mux_h2.c
- MINOR: cfgparse: make the process/thread parser support a maximum value
- MINOR: threads: make MAX_THREADS configurable at build time
- DOC: nbthread is no longer experimental.
- BUG/MINOR: listener: always fill the source address for accepted socketpairs
- BUG/MINOR: mux-h2: do not report available outgoing streams after GOAWAY
- BUG/MINOR: spoe: corrected fragmentation string size
- BUG/MINOR: task: fix possibly missed event in inter-thread wakeups
- BUG/MEDIUM: servers: Attempt to reuse an unfinished connection on retry.
- BUG/MEDIUM: backend: always call si_detach_endpoint() on async connection failure
- SCRIPTS: add the issue tracker URL to the announce script
- MINOR: peers: Extract some code to be reused.
- CLEANUP: peers: Indentation fixes.
- MINOR: peers: send code factorization.
- MINOR: peers: Add new functions to send code and reduce the I/O handler.
- MEDIUM: peers: synchronization code factorization to reduce the size of the I/O handler.
- MINOR: peers: Move update receive code to reduce the size of the I/O handler.
- MINOR: peers: Move ack, switch and definition receive code to reduce the size of the I/O handler.
- MINOR: peers: Move high level receive code to reduce the size of I/O handler.
- CLEANUP: peers: Be more generic.
- MINOR: peers: move error handling to reduce the size of the I/O handler.
- MINOR: peers: move messages treatment code to reduce the size of the I/O handler.
- MINOR: peers: move send code to reduce the size of the I/O handler.
- CLEANUP: peers: Remove useless statements.
- MINOR: peers: move "hello" message treatment code to reduce the size of the I/O handler.
- MINOR: peers: move peer initializations code to reduce the size of the I/O handler.
- CLEANUP: peers: factor the error handling code in peer_treat_updatemsg()
- CLEANUP: peers: factor error handling in peer_treat_definedmsg()
- BUILD/MINOR: peers: shut up a build warning introduced during last cleanup
- BUG/MEDIUM: mux-h2: only close connection on request frames on closed streams
- CLEANUP: mux-h2: remove two useless but misleading assignments
- BUG/MEDIUM: checks: Check that conn_install_mux succeeded.
- BUG/MEDIUM: servers: Only destroy a conn_stream we just allocated.
- BUG/MEDIUM: servers: Don't add an incomplete conn to the server idle list.
- BUG/MEDIUM: checks: Don't try to set ALPN if connection failed.
- BUG/MEDIUM: h2: In h2_send(), stop the loop if we failed to alloc a buf.
- BUG/MEDIUM: peers: Handle mux creation failure.
- BUG/MEDIUM: servers: Close the connection if we failed to install the mux.
- BUG/MEDIUM: compression: Rewrite strong ETags
- BUG/MINOR: deinit: tcp_rep.inspect_rules not deinit, add to deinit
- CLEANUP: mux-h2: remove misleading leftover test on h2s' nullity
- BUG/MEDIUM: mux-h2: wake up flow-controlled streams on initial window update
- BUG/MEDIUM: mux-h2: fix two half-closed to closed transitions
- BUG/MEDIUM: mux-h2: make sure never to send GOAWAY on too old streams
- BUG/MEDIUM: mux-h2: do not abort HEADERS frame before decoding them
- BUG/MINOR: mux-h2: make sure response HEADERS are not received in other states than OPEN and HLOC
- MINOR: h2: add a generic frame checker
- MEDIUM: mux-h2: check the frame validity before considering the stream state
- CLEANUP: mux-h2: remove stream ID and frame length checks from the frame parsers
- BUG/MINOR: mux-h2: make sure request trailers on aborted streams don't break the connection
- DOC: compression: Update the reasons for disabled compression
- BUG/MEDIUM: buffer: Make sure b_is_null handles buffers waiting for allocation.
- DOC: htx: make it clear that htxbuf() and htx_from_buf() always return valid pointers
- MINOR: htx: never check for null htx pointer in htx_is_{,not_}empty()
- MINOR: mux-h2: consistently rely on the htx variable to detect the mode
- BUG/MEDIUM: peers: Peer addresses parsing broken.
- BUG/MEDIUM: mux-h1: Don't add "transfer-encoding" if message-body is forbidden
- BUG/MEDIUM: connections: Don't forget to remove CO_FL_SESS_IDLE.
- BUG/MINOR: stream: don't close the front connection when facing a backend error
- BUG/MEDIUM: mux-h2: wait for the mux buffer to be empty before closing the connection
- MINOR: stream-int: add a new flag to mention that we want the connection to be killed
- MINOR: connstream: have a new flag CS_FL_KILL_CONN to kill a connection
- BUG/MEDIUM: mux-h2: do not close the connection on aborted streams
- BUG/MINOR: server: fix logic flaw in idle connection list management
- MINOR: mux-h2: max-concurrent-streams should be unsigned
- MINOR: mux-h2: make sure to only check concurrency limit on the frontend
- MINOR: mux-h2: learn and store the peer's advertised MAX_CONCURRENT_STREAMS setting
- BUG/MEDIUM: mux-h2: properly consider the peer's advertised max-concurrent-streams
- MINOR: xref: Add missing barriers.
- MINOR: muxes: Don't bother to LIST_DEL(&conn->list) before calling conn_free().
- MINOR: debug: Add an option that causes random allocation failures.
- BUG/MEDIUM: backend: always release the previous connection into its own target srv_list
- BUG/MEDIUM: htx: check the HTX compatibility in dynamic use-backend rules
- BUG/MINOR: tune.fail-alloc: Don't forget to initialize ret.
- BUG/MINOR: backend: check srv_conn before dereferencing it
- BUG/MEDIUM: mux-h2: always omit :scheme and :path for the CONNECT method
- BUG/MEDIUM: mux-h2: always set :authority on request output
- BUG/MEDIUM: stream: Don't forget to free s->unique_id in stream_free().
- BUG/MINOR: threads: fix the process range of thread masks
- BUG/MINOR: config: fix bind line thread mask validation
- CLEANUP: threads: fix misleading comment about all_threads_mask
- CLEANUP: threads: use nbits to calculate the thread mask
- OPTIM: listener: optimize cache-line packing for struct listener
- MINOR: tools: improve the popcount() operation
- MINOR: config: keep an all_proc_mask like we have all_threads_mask
- MINOR: global: add proc_mask() and thread_mask()
- MINOR: config: simplify bind_proc processing using proc_mask()
- MINOR: threads: make use of thread_mask() to simplify some thread calculations
- BUG/MINOR: compression: properly report compression stats in HTX mode
- BUG/MINOR: task: close a tiny race in the inter-thread wakeup
- BUG/MAJOR: config: verify that targets of track-sc and stick rules are present
- BUG/MAJOR: spoe: verify that backends used by SPOE cover all their callers' processes
- BUG/MAJOR: htx/backend: Make all tests on HTTP messages compatible with HTX
- BUG/MINOR: config: make sure to count the error on incorrect track-sc/stick rules
- DOC: ssl: Clarify when pre TLSv1.3 cipher can be used
- DOC: ssl: Stop documenting ciphers example to use
- BUG/MINOR: spoe: do not assume agent->rt is valid on exit
- BUG/MINOR: lua: initialize the correct idle conn lists for the SSL sockets
- BUG/MEDIUM: spoe: initialization depending on nbthread must be done last
- BUG/MEDIUM: server: initialize the idle conns list after parsing the config
- BUG/MEDIUM: server: initialize the orphaned conns lists and tasks at the end
- MINOR: config: make MAX_PROCS configurable at build time
- BUG/MAJOR: spoe: Don't try to get agent config during SPOP healthcheck
- BUG/MINOR: config: Reinforce validity check when a process number is parsed
- BUG/MEDIUM: peers: check that p->srv actually exists before using p->srv->use_ssl
- CONTRIB: contrib/prometheus-exporter: Add a Prometheus exporter for HAProxy
- BUG/MINOR: mux-h1: verify the request's version before dropping connection: keep-alive
- BUG: 51d: In Hash Trie, multi header matching was affected by the header names stored globally.
- MEDIUM: 51d: Enabled multi threaded operation in the 51Degrees module.
- BUG/MAJOR: stream: avoid double free on unique_id
- BUILD/MINOR: stream: avoid a build warning with threads disabled
- BUILD/MINOR: tools: fix build warning in the date conversion functions
- BUILD/MINOR: peers: remove an impossible null test in intencode()
- BUILD/MINOR: htx: fix some potential null-deref warnings with http_find_stline
- BUG/MEDIUM: peers: Missing peer initializations.
- BUG/MEDIUM: http_fetch: fix the "base" and "base32" fetch methods in HTX mode
- BUG/MEDIUM: proto_htx: Fix data size update if end of the cookie is removed
- BUG/MEDIUM: http_fetch: fix "req.body_len" and "req.body_size" fetch methods in HTX mode
- BUILD/MEDIUM: initcall: Fix build on MacOS.
- BUG/MEDIUM: mux-h2/htx: Always set CS flags before exiting h2_rcv_buf()
- MINOR: h2/htx: Set the flag HTX_SL_F_BODYLESS for messages without body
- BUG/MINOR: mux-h1: Add "transfer-encoding" header on outgoing requests if needed
- BUG/MINOR: mux-h2: Don't add ":status" pseudo-header on trailers
- BUG/MINOR: proto-htx: Consider a XFER_LEN message as chunked by default
- BUG/MEDIUM: h2/htx: Correctly handle interim responses when HTX is enabled
- MINOR: mux-h2: Set HTX extra value when possible
- BUG/MEDIUM: htx: count the amount of copied data towards the final count
- MINOR: mux-h2: make the H2 MAX_FRAME_SIZE setting configurable
- BUG/MEDIUM: mux-h2/htx: send an empty DATA frame on empty HTX trailers
- BUG/MEDIUM: servers: Use atomic operations when handling curr_idle_conns.
- BUG/MEDIUM: servers: Add a per-thread counter of idle connections.
- MINOR: fd: add a new my_closefrom() function to close all FDs
- MINOR: checks: use my_closefrom() to close all FDs
- MINOR: fd: implement an optimised my_closefrom() function
- BUG/MINOR: fd: make sure my_closefrom() doesn't miss some FDs
- BUG/MAJOR: fd/threads, task/threads: ensure all spin locks are unlocked
- BUG/MAJOR: listener: Make sure the listener exist before using it.
- MINOR: fd: Use closefrom() as my_closefrom() if supported.
- BUG/MEDIUM: mux-h1: Report the right amount of data xferred in h1_rcv_buf()
- BUG/MINOR: channel: Set CF_WROTE_DATA when outgoing data are skipped
- MINOR: htx: Add function to drain data from an HTX message
- MINOR: channel/htx: Add function to skips output bytes from an HTX channel
- BUG/MAJOR: cache/htx: Set the start-line offset when a cached object is served
- BUG/MEDIUM: cache: Get objects from the cache only for GET and HEAD requests
- BUG/MINOR: cache/htx: Return only the headers of cached objects to HEAD requests
- BUG/MINOR: mux-h1: Always initialize h1m variable in h1_process_input()
- BUG/MEDIUM: proto_htx: Fix functions applying regex filters on HTX messages
- BUG/MEDIUM: h2: advertise to servers that we don't support push
- MINOR: standard: Add a function to parse uints (dotted notation).
- MINOR: arg: Add support for ARGT_PBUF_FNUM arg type.
- MINOR: http_fetch: add "req.ungrpc" sample fetch for gRPC.
- MINOR: sample: Add two sample converters for protocol buffers.
- DOC: sample: Add gRPC related documentation.
Add "varint" to convert all the protocol buffers binary varints excepted the signed
ones ("sint32" and "sint64") to an integer. The binary signed varints may be
converted to an integer with "svarint" converter implemented by this patch.
These two new converters do not take any argument.
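For the record, these converters deal with the standard protocol buffers
varint encoding: 7 data bits per byte, least significant group first, with
the high bit of each byte flagging a continuation, and the signed flavors
add a zigzag step. A minimal standalone C sketch of the decoding, which is
illustrative and not HAProxy's actual converter code:

    #include <stdint.h>
    #include <stddef.h>

    /* decode a protocol buffers varint: 7 data bits per byte, least
     * significant group first, MSB set while more bytes follow;
     * returns the number of bytes consumed, or -1 on error */
    static int pb_varint_decode(const uint8_t *buf, size_t len, uint64_t *out)
    {
        uint64_t val = 0;
        unsigned int shift = 0;
        size_t i;

        for (i = 0; i < len && shift < 64; i++) {
            val |= (uint64_t)(buf[i] & 0x7f) << shift;
            if (!(buf[i] & 0x80)) {
                *out = val;
                return (int)(i + 1);
            }
            shift += 7;
        }
        return -1; /* truncated or oversized varint */
    }

    /* signed variant ("sint32"/"sint64"): zigzag-decode the raw value */
    static int64_t pb_zigzag_decode(uint64_t u)
    {
        return (int64_t)(u >> 1) ^ -(int64_t)(u & 1);
    }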
This patch implements the "req.ungrpc" sample fetch method to decode and
parse a gRPC request. It takes only one argument: a protocol buffers field
number identifying the field to be looked up. This argument is a sort of
path in dotted notation to the terminal field number to be retrieved.
ex:
req.ungrpc(1.2.3.4)
This sample fetch catches the data in raw mode, without interpreting them.
Some protocol buffers specific converters may be used to convert the data
to the correct type.
This function is useful to parse strings made of unsigned integers
and to allocate a C array of unsigned integers from there.
For instance this function allocates this array { 1, 2, 3, 4, } from
this string: "1.2.3.4".
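A minimal standalone sketch of the idea, with a hypothetical name and
signature (the real parse_dotted_nums() may differ):

    #include <stdlib.h>

    /* turn "1.2.3.4" into a heap-allocated array { 1, 2, 3, 4 };
     * illustrative only */
    static unsigned int *parse_dotted_sketch(const char *s, size_t *count)
    {
        unsigned int *arr = NULL, *tmp;
        size_t n = 0;
        char *end;

        while (*s) {
            unsigned long v = strtoul(s, &end, 10);
            if (end == s)
                goto fail;                  /* not a number */
            /* note the element-size multiplication when growing the
             * array: omitting it was precisely the my_realloc2()
             * sizing bug fixed earlier in this changelog */
            tmp = realloc(arr, (n + 1) * sizeof(*arr));
            if (!tmp)
                goto fail;
            arr = tmp;
            arr[n++] = (unsigned int)v;
            if (*end == '.')
                end++;
            else if (*end)
                goto fail;                  /* invalid character */
            s = end;
        }
        *count = n;
        return arr;
    fail:
        free(arr);
        return NULL;
    }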
The h2c_send_settings() function was initially made to serve on the
frontend. Here we don't need to advertise that we don't support PUSH
since we don't do that ourselves. But on the backend side it's
different because PUSH is enabled by default so we must announce that
we don't want the server to use it.
This must be backported to 1.9.
The HTX functions htx_apply_filter_to_req_headers() and
htx_apply_filter_to_resp_headers() contain 2 bugs. The first one is about the
matching on each header. The chunk 'hdr' used to format a full header line was
never reset. The second bug appears when we try to replace or remove a
header. The variable ctx was not fully initialized, leading to segfaults.
This patch must be backported to 1.9.
It is used at the end of the function to know if the end of the message was
reached. So we must be sure to always initialize it.
This patch must be backported to 1.9.
The body of a cached object must not be sent in response to a HEAD request. This
works for the legacy HTTP because the parsing is performed by HTTP analyzers
_AND_ because the connection is closed at the end of the transaction. So the
body is ignored. But the applet sends it. For the HTX, the applet must skip the
body explicitly.
This patch must be backported to 1.9.
Only responses for GET requests are stored in the cache. But there is no check
on the method during the lookup. So it is possible to retrieve an object from
the cache independently of the method, as long as the key of the object
matches. Now, lookups are performed only for GET and HEAD requests.
This patch must be backported to 1.9.
When the function htx_add_stline() is used, this offset is automatically set
when necessary. But the HTX cache applet adds all header blocks of the responses
manually, including the start-line. So its offset must be explicitly set by the
applet.
When everything goes well, the HTTP analyzer http_wait_for_response() looks for
the start-line in the HTX messages, calling http_find_stline(). If necessary,
the start-line offset will also be automatically set during this stage. So the
bug of the HTX cache applet does not hurt most of the time. But, when an error
occurred, HTTP response analyzers can be bypassed. In such cases, the
start-line offset of cached responses remains unset.
Some part of the code relies on the start-line offset to process the HTX
messages. Among others, when H2 responses are sent to clients, the H2
multiplexer reads the start-line without any check, because it _MUST_ always be
there. If its offset is not set, a NULL pointer is dereferenced, leading to a
segfault.
The patch must be backported to 1.9.
The function htx_drain() can now be used to drain data from an HTX message.
It will be used by other commits to fix bugs, so it must be backported to 1.9.
h1_rcv_buf() must return the amount of data copied in the channel's buffer and
not the number of bytes parsed. Because this value is used during the fast
forwarding to decrement to_forward value, returning the wrong value leads to
undefined behaviours.
This patch must be backported to 1.9.
Add a new option, USE_CLOSEFROM. If set, it is assumed the system provides
a closefrom() function, so use it.
It is only implicitly used on FreeBSD for now; it should work on
OpenBSD/NetBSD/DragonflyBSD/Solaris too, but as I have no such system to
test it, I'd rather leave it disabled by default. Users can add USE_CLOSEFROM
explicitly on their make command line to activate it.
In listener_accept(), make sure we have a listener before attempting to
use it.
Another thread may have closed the FD meanwhile, and set fdtab[fd].owner
to NULL.
As the listener is not free'd, it is ok to attempt to accept() a new
connection even if the listener was closed. At worst the fd has been
reassigned to another connection, and accept() will fail anyway.
Many thanks to Richard Russo for reporting the problem, and suggesting the
fix.
This should be backported to 1.9 and 1.8.
Calculate once, before locking, whether the fd or task should be locked, and
reuse the calculation when determining when to unlock.
Fixes a race condition added in 87d54a9a for fds, and b20aa9ee for tasks,
released in 1.9-dev4. When one thread modifies thread_mask to be a single
thread for a task or fd while a second thread has locked or is waiting on a
lock for that task or fd, the second thread will not unlock it. For FDs,
this is observable when a listener is polled by multiple threads, and is
closed while those threads have events pending. For tasks, this seems
possible where task_set_affinity() is called, but I did not observe it.
This must be backported to 1.9.
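The shape of the fix can be sketched as follows; the structure and field
names are illustrative, not HAProxy's actual ones:

    #include <pthread.h>
    #include <stdatomic.h>

    /* illustrative object: its thread_mask may be changed by another
     * thread at any time (e.g. by a task_set_affinity-like call) */
    struct shared_obj {
        _Atomic unsigned long thread_mask;
        pthread_spinlock_t lock;
    };

    static void process_obj(struct shared_obj *o, unsigned long tid_bit)
    {
        /* decide ONCE, before locking, whether the object is shared */
        int must_lock = atomic_load(&o->thread_mask) != tid_bit;

        if (must_lock)
            pthread_spin_lock(&o->lock);

        /* ... work on the object; thread_mask may change under us,
         * so re-reading it here could wrongly skip the unlock ... */

        if (must_lock)            /* reuse the decision, never recompute */
            pthread_spin_unlock(&o->lock);
    }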
The optimized my_closefrom() implementation introduced with previous commit
9188ac60e ("MINOR: fd: implement an optimised my_closefrom() function")
has a small bug causing it to miss some FDs at the end of each batch.
The reason is that poll() returns the number of non-zero events, so
it contains the size of the batch minus the FDs to close. Thus if the
FDs to close are at the beginning they'll be seen but if they're at the
end after all other closed ones, the returned count will not cover them.
No backport is needed.
The idea is that poll() can set the POLLNVAL flag for each invalid
FD in a pollfd list. Thus this function makes use of poll() when
compiled in, and builds lists of up to 1024 FDs at once, checks the
output and only closes those which do not have this flag set. Tests
show that this is about twice as fast as blindly calling close() for
each closed fd.
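A standalone sketch of the technique described above (HAProxy's actual
implementation differs in details such as the fallback path):

    #include <poll.h>
    #include <unistd.h>
    #include <sys/resource.h>

    /* probe FDs in batches with poll(), which flags invalid (closed)
     * entries with POLLNVAL, and only close() the ones actually open */
    static void closefrom_poll_sketch(int start)
    {
        struct pollfd pfd[1024];
        struct rlimit lim;
        int fd = start, max;

        if (getrlimit(RLIMIT_NOFILE, &lim) != 0)
            return;
        max = (int)lim.rlim_cur;

        while (fd < max) {
            int n, i;

            for (n = 0; n < 1024 && fd + n < max; n++) {
                pfd[n].fd = fd + n;
                pfd[n].events = 0;
                pfd[n].revents = 0;
            }
            if (poll(pfd, n, 0) < 0)
                break;  /* a real implementation would fall back to close() */

            /* walk the whole batch rather than trusting poll()'s return
             * value: it only counts entries with non-zero revents, so open
             * FDs located after closed ones would otherwise be missed
             * (the exact bug fixed above) */
            for (i = 0; i < n; i++)
                if (!(pfd[i].revents & POLLNVAL))
                    close(pfd[i].fd);
            fd += n;
        }
    }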
This is a naive implementation of closefrom() which closes all FDs
starting from the one passed in argument. closefrom() is not provided
on all operating systems, and other versions will follow.
Add a per-thread counter of idling connections, and use it to determine
how many connections we should kill after the timeout, instead of using
the global counter, or we're likely to just kill most of the connections.
This should be backported to 1.9.
Use atomic operations when dealing with srv->curr_idle_conns, as it's shared
between threads, otherwise we could get inconsistencies.
This should be backported to 1.9.
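The combination of the two fixes above can be sketched like this, with
illustrative names rather than HAProxy's actual fields:

    #include <stdatomic.h>

    #define MAX_THREADS_SKETCH 64

    struct server_sketch {
        _Atomic unsigned int curr_idle_conns;              /* all threads */
        _Atomic unsigned int idle_thr[MAX_THREADS_SKETCH]; /* per thread  */
    };

    static void conn_enters_idle(struct server_sketch *s, int thr)
    {
        atomic_fetch_add(&s->curr_idle_conns, 1); /* shared: must be atomic */
        atomic_fetch_add(&s->idle_thr[thr], 1);   /* sizes this thread's purge */
    }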
When chunked-encoding is used in HTX mode, a trailers HTX block is always
made due to the way trailers are currently implemented (verbatim copy of
the H1 representation). Because of this it's not possible to know when
processing data that we've reached the end of the stream, and it's up
to the function encoding the trailers (h2s_htx_make_trailers) to put the
end of stream. But when there are no trailers and only an empty HTX block,
this one cannot produce a HEADERS frame, thus it cannot send the END_STREAM
flag either, leaving the other end with an incomplete message, waiting for
either more data or some trailers. This is particularly visible with POST
requests where the server continues to wait.
What this patch does is transform the HEADERS frame into an empty DATA
frame when meeting an empty trailers block. It is possible to do this
because we've not sent any trailers so the other end is still waiting
for DATA frames. The check is made after attempting to encode the list
of headers, so as to minimize the specific code paths.
Thanks to Dragan Dosen for reporting the issue with a reproducer.
This fix must be backported to 1.9.
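For reference, the wire-level result is just the 9-byte HTTP/2 frame header
of an empty DATA frame carrying END_STREAM; a sketch of what gets emitted,
per RFC 7540, not HAProxy's actual encoder:

    #include <stdint.h>

    /* 9-byte H2 frame header: 24-bit length, type, flags, 31-bit stream id */
    static void write_empty_data_es(uint8_t out[9], uint32_t sid)
    {
        out[0] = 0; out[1] = 0; out[2] = 0;  /* length = 0: empty payload */
        out[3] = 0x0;                        /* type  = DATA              */
        out[4] = 0x1;                        /* flags = END_STREAM        */
        out[5] = (sid >> 24) & 0x7f;         /* reserved bit cleared      */
        out[6] = (sid >> 16) & 0xff;
        out[7] = (sid >> 8) & 0xff;
        out[8] = sid & 0xff;
    }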
This creates a new tunable "tune.h2.max-frame-size" to adjust the
advertised max frame size. When not set it still defaults to the buffer
size. It is convenient to advertise sizes lower than the buffer size,
for example when using very large buffers.
Currently htx_xfer_blks() respects the <count> limit for each block
instead of for the sum of the transferred blocks. This causes it to
return slightly more than requested when both headers and data are
present in the source buffer, which happens early in the transfer
when the reserve is still active. Thus with large enough headers,
the reserve will not be respected.
Note that this function is only called from h2_rcv_buf() thus this only
affects data entering over H2 (H2 requests or H2 responses).
This fix must be backported to 1.9.
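The intended accounting can be sketched with a toy block list, where <count>
budgets the sum of the copied blocks rather than each one individually:

    #include <stddef.h>

    struct blk { size_t size; struct blk *next; };

    /* copy blocks while the REMAINING budget allows it; the bug was to
     * compare <count> against each block instead of decrementing it */
    static size_t xfer_blks_sketch(struct blk *src, size_t count)
    {
        size_t copied = 0;

        while (src && src->size <= count) {
            /* ... copy the block to the destination here ... */
            count  -= src->size;
            copied += src->size;
            src = src->next;
        }
        return copied; /* never exceeds the initial <count> */
    }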
For now, this can only be done when a content-length is specified. In fact, it
is the same value as h2s->body_len, the remaining body length according to
content-length. Setting this field allows the fast forwarding at the channel
layer, significantly improving data transfers for big objects.
This patch may be backported to 1.9.
1xx responses do not work in HTTP/2 when the HTX is enabled. First of all, when
a response is parsed, only one HEADERS frame is expected. So when an interim
response is received, the flag H2_SF_HEADERS_RCVD is set and the next HEADERS
frame (for another interim response or the final one) is parsed as a trailers
one. Then when the response is sent, because an EOM block is found at the end of
the interim HTX response, the ES flag is added on the frame, closing the stream
too early. Here, it is a design problem of the HTX. Interim responses are
considered as full messages, leading to some ambiguities when HTX messages are
processed. This will not be fixed now, but we need to keep it in mind for future
improvements.
To fix the parsing bug, the flag H2_MSGF_RSP_1XX is added when the response
headers are decoded. When this flag is set, an EOM block is added into the HTX
message, despite the fact that there is no ES flag on the frame. And we don't
set the flag H2_SF_HEADERS_RCVD on the corresponding H2S. So the next HEADERS
frame will not be parsed as a trailers one.
To fix the sending bug, the ES flag is not set on the frame when an interim
response is processed and the flag H2_SF_HEADERS_SENT is not set on the
corresponding H2S.
This patch must be backported to 1.9.
An HTX message with a known body length is now considered by default as
chunked. It means the header "content-length" must be found to consider it as a
non-chunked message. Before, it was the reverse, the message was considered with
a content length by default. But it is a bug for HTTP/2 messages. There is no
chunked transfer encoding in HTTP/2 but internally messages without content
length are considered as chunked. It eases HTTP/1 <-> HTTP/2 conversions.
This patch must be backported to 1.9.
As for outgoing responses, if an HTTP/1.1 or above request is sent to a server
with neither the headers "content-length" nor "transfer-encoding", it is
considered as a chunked request and the header "transfer-encoding: chunked" is
automatically added. Of course, it is only true for requests with a
body. Concretely, it only happens for incoming HTTP/2 requests sent to an
HTTP/1.1 server.
This patch must be backported to 1.9.
This information is useful to know if a body is expected or not, regardless of
the presence of the header "Content-Length" and its value. Once the ES flag
is set on the HEADERS frame or when the content length is 0, we can safely add
the flag HTX_SL_F_BODYLESS on the HTX start-line.
Among other things, it will help the mux-h1 to know if it should add the TE
header or not. It will also help the HTTP compression filter.
This patch must be backported to 1.9 because a bug fix depends on it.
It is especially important when some data are blocked in the RX buf and the
channel buffer is already full. In such a case, instead of exiting the function
directly, we need to set the right flags on the conn_stream. CS_FL_RCV_MORE and
CS_FL_WANT_ROOM must be set, otherwise the stream-interface will subscribe to
receive events, thinking it is not blocked.
This bug leads to a connection freeze when everything was received but some
data remain blocked in the RX buf while the channel buffer is full.
This patch must be backported to 1.9.
When in HTX mode, in functions smp_fetch_body_len() and
smp_fetch_body_size() we were subtracting the size of each header block
from the total size htx->data to calculate the size of the body, and that
could result in a wrongly calculated value.
To avoid this, we now loop on blocks to sum up the size of only those
that are of type HTX_BLK_DATA.
This patch must be backported to 1.9.
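A toy model of the corrected computation (the real HTX API differs):

    #include <stddef.h>

    enum blk_type { BLK_HDR, BLK_DATA, BLK_EOM };

    struct blk_sketch {
        enum blk_type type;
        size_t size;
        struct blk_sketch *next;
    };

    /* sum only the DATA blocks instead of subtracting header sizes
     * from the message's total */
    static size_t body_size_sketch(const struct blk_sketch *blk)
    {
        size_t len = 0;

        for (; blk; blk = blk->next)
            if (blk->type == BLK_DATA)
                len += blk->size;
        return len;
    }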
When client-side or server-side cookies are parsed, if the end of the cookie
line is removed, the HTX message must be updated. The length of the HTX block is
decreased and the data size of the HTX message is modified accordingly. The
update of the HTX block was ok but the update of the HTX message was wrong,
leading to undefined behaviours during the data forwarding. One possible
effect was a freeze of the connection with no data forwarded.
This patch must be backported to 1.9.
The resulting value produced in functions smp_fetch_base() and
smp_fetch_base32() was wrong when in HTX mode.
This patch also adds the semicolon at the end of the for-loop line used
in function smp_fetch_path(), since it actually has no body.
This must be backported to 1.9.
Initialize ->srv peer field for all the peers, the local peer included.
Indeed, a haproxy process needs to connect to the local peer of a remote
process. Furthermore, when a "peer" or "server" line is parsed by parse_server(),
the address must be copied to the ->addr field of the peer object only if this
address has also been parsed by parse_server(). This is not the case if this
address belongs to the local peer and is provided on a "server" line.
After having parsed the "peer" or "server" lines of a peers section, the
->srv part of all the peers must be initialized for SSL, if enabled. The same
goes for the binding part.
Revert commit 1417f0b, which is no longer required.
No backport is needed, this is purely 2.0.
http_find_stline() carefully verifies that it finds a start line and returns
NULL when not found. But a few calling functions ignore this NULL return and
dereference the pointer without checking. Let's add the
test where needed in the callers. If it turns out that over the long term
a start line is mandatory, then the test will be removed and the faulty
function will have to be simplified.
This must be backported to 1.9.
intencode() tests for the nullity of the target pointer passed in
argument, but the code calling intencode() never does so and happily
dereferences it. gcc at -O3 detects this as a potential null deref.
Let's remove this incorrect and misleading test. If this pointer was
null, the code would already crash in the calling functions.
This must be backported to stable versions.
Some gcc versions emit potential null deref warnings at -O3 in
date2str_log(), gmt2str_log() and localdate2str_log() after utoa_pad()
because this function may return NULL if its size argument is too small
for the integer value. And it's true that we can't guarantee that the
input number is always valid.
This must be backported to all stable versions.
gcc 6+ complains about a possible null-deref here due to the test in
objt_server() :
if (objt_server(s->target))
HA_ATOMIC_ADD(&objt_server(s->target)->counters.retries, 1);
Let's simply change it to __objt_server(). This can be backported to
1.9 and 1.8.
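The guarded form thus presumably becomes:

    if (objt_server(s->target))
        HA_ATOMIC_ADD(&__objt_server(s->target)->counters.retries, 1);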
Commit 32211a1 ("BUG/MEDIUM: stream: Don't forget to free
s->unique_id in stream_free().") addressed a memory leak but in
exchange may cause double-free due to the fact that after freeing
s->unique_id it doesn't null it and then calls http_end_txn()
which frees it again. Thus the process quickly crashes at runtime.
This fix must be backported to all stable branches where the
aforementioned patch was backported.
The existing threading flag in the 51Degrees API
(FIFTYONEDEGREES_NO_THREADING) has now been mapped to the HAProxy
threading flag (USE_THREAD), and the 51Degrees module code has been made
thread safe.
In Pattern, the cache is now locked with a spin lock from hathreads.h
using a new label 'OTHER_LOCK'. The workset pool is now created with the
same size as the number of threads to avoid any time waiting on a
workset.
In Hash Trie, the global device offsets structure is only used in single
threaded operation. Multi threaded operation creates a new offsets
structure in each thread.
Some logic around multi-header matching in Hash Trie has been improved,
where only the name of the most important header was stored in the
global header_names structure. Now all headers are stored, so they are
used correctly in the multi-header matching.
The mux h1 properly avoids setting "connection: keep-alive" when the response
is in HTTP/1.1, but it forgot to check the request's version. Thus when the
client requests using HTTP/1.0 and "connection: keep-alive" (like ab does),
the response is in 1.1 with no keep-alive, and ab waits for the close without
checking for the content-length. Response headers actually depend on the
recipient, thus on both the request's and the response's versions.
This patch must be backported to 1.9.
Now, in the function parse_process_number(), when a process number or a set of
processes is parsed, an error is triggered if an invalid character is found. It
means the following syntaxes are now forbidden and will emit an alert during
HAProxy startup:

    1a
    1/2
    1-2-3
This bug was reported on Github. See issue #36.
This patch may be backported to 1.9 and 1.8.
During SPOP healthchecks, a dummy appctx is used to create the HAPROXY-HELLO
frame and then to parse the AGENT-HELLO frame. No agent is attached to it. So
it is important to not rely on an agent during these stages. When HAPROXY-HELLO
frame is created, there is no problem, all accesses to an agent are
guarded. This is not true during the parsing of the AGENT-HELLO frame. Thus, it
is possible to crash HAProxy with a SPOA declaring the async or the pipelining
capability during a healthcheck.
This patch must be backported to 1.9 and 1.8.
For some embedded systems, it's pointless to have 32- or even 64-entry
arrays of processes when it's known that much fewer processes will be
used in the worst case. Let's introduce this MAX_PROCS define which
contains the highest number of processes allowed to run at once. It
still defaults to LONGBITS but may be lowered.
This also depends on the nbthread count, so it must only be performed after
parsing the whole config file. As a side effect, this removes some code
duplication between servers and server-templates.
This must be backported to 1.9.
The idle conns lists are sized according to the number of threads. As such
they cannot be initialized during the parsing since nbthread can be set
later, as revealed by this simple config which randomly crashes when used.
Let's do this at the end instead.
listen proxy
    bind :4445
    mode http
    timeout client 10s
    timeout server 10s
    timeout connect 10s
    http-reuse always
    server s1 127.0.0.1:8000

global
    nbthread 8
This fix must be backported to 1.9 and 1.8.
The agent used to be configured depending on global.nbthread while nbthread
may be set after the agent is parsed. Let's move this part to the spoe_check()
function to make sure nbthread is always correct and arrays are appropriately
sized.
This fix must be backported to 1.9 and 1.8.
Commit 40a007cf2 ("MEDIUM: threads/server: Make connection list
(priv/idle/safe) thread-safe") made a copy-paste error when initializing
the Lua sockets, as the TCP one was initialized twice. Fortunately it has
no impact because the pointers are set to NULL after a memset(0) and are
not changed in between.
This must be backported to 1.9 and 1.8.
As reported by Christopher, we may call spoe_release_agent() when leaving
after an allocation failure or a config parse error. We must not assume
agent->rt is valid there as the allocation could have failed.
This should be backported to 1.9 and 1.8.
When commit 151e1ca98 ("BUG/MAJOR: config: verify that targets of track-sc
and stick rules are present") added a check for some process inconsistencies
between rules and their stick tables, some errors resulted in a "return 0"
statement, which is taken as "no error" in some cases. Let's fix this.
This must be backported to all versions using the above commit.
A piece of code about the HTX was lost this summer, after the "big merge"
(htx/http2/connection layer refactoring). I forgot to keep HTX changes in the
functions connect_server() and assign_server(). So, this patch fixes "uri",
"url_param" and "hdr" LB algorithms when the HTX is enabled. It also fixes
evaluation of the "sni" expression on server lines.
This issue was reported on github. See issue #32.
This patch must be backported to 1.9.
When a filter is installed on a proxy and references spoe, we must be
absolutely certain that the whole chain is valid on a given process
when running in multi-process mode. The problem here is that if a proxy
1 runs on process 1, referencing an SPOE agent itself based on a backend
running on process 2, this last one will be completely deinited on
process 1, and will thus cause random crashes when it gets messages
from this process.
This patch makes sure that the whole chain is valid on all of the caller's
processes.
This fix must be backported to all spoe-enabled maintained versions. It
may potentially disrupt configurations which already randomly crash.
There hardly is any intermediary solution though, such configurations
need to be fixed.
Stick and track-sc rules may optionally designate a table in a different
proxy. In this case, a number of verifications are made such as validating
that this proxy actually exists. However, in multi-process mode, the target
table might indeed exist but not be bound to the set of processes the rules
will execute on. This will definitely result in a random behaviour especially
if these tables do require peer synchronization, because some tasks will be
started to try to synchronize from uninitialized areas.
The typical issue looks like this :
peers my-peers
    peer foo ...

listen proxy
    bind-process 1
    stick on src table ip
    ...

backend ip
    bind-process 2
    stick-table type ip size 1k peers my-peers
While it appears obvious that the example above will not work, there are
less obvious situations, such as having bind-process in a defaults section
and having a larger set of processes for the referencing proxy than the
referenced one.
The present patch adds checks for such situations by verifying that all
processes from the referencing proxy are present on the other one in all
track-sc* and stick-* rules, and in sample fetch / converters referencing
another table so that sc_inc_gpc0() and similar are safe as well.
This fix must be backported to all maintained versions. It may potentially
disrupt configurations which already randomly crash. There hardly is any
intermediary solution though, such configurations need to be fixed.
__task_wakeup() takes care of a small race that exists between threads,
but it uses a store barrier that is not sufficient since apparently the
state read after clearing the leaf_p pointer sometimes is incorrect. This
results in missed wakeups between threads competing at a high rate. Let's
use a full barrier instead to serialize the operations.
This may be backported to 1.9 though it's extremely unlikely that this
bug will ever manifest itself there.
When HTX support was added to HTTP compression, a set of counters was missed,
namely comp_in and comp_byp, resulting in no stats being available for compression.
This must be backported to 1.9.
At a number of places we used to have null tests on bind_proc for
listeners and proxies. Let's simplify all these tests by always
having the proper bits reported via proc_mask().
When no nbproc is specified, a computation leads to reading bind_thread[-1]
before checking if the thread mask is valid for a bind conf. It may either
report a false warning and compute a wrong mask, or miss some incorrect
configs.
This must be backported to 1.9 and possibly 1.8.
Commit 421f02e ("MINOR: threads: add a MAX_THREADS define instead of
LONGBITS") used a MAX_THREADS macro to fix the thread limits. However,
one change was wrong as it affected the upper bound of the process
loop when setting threads masks. No backport is needed.
In stream_free(), free s->unique_id. We may still have one, because it's
allocated in log.c::strm_log() no matter what, even if it's a TCP connection
and thus it won't get free'd by http_end_txn().
Failure to do so leads to a memory leak.
This should probably be backported to all maintained branches.
PiBa-NL reported that some servers don't fall back to the Host header when
:authority is absent. After studying all the combinations of Host and
:authority, it appears that we always have to send the latter, hence we
never need the former. In case of CONNECT method, the authority is retrieved
from the URI part, otherwise it's extracted from the Host field.
The tricky part is that we have to scan all headers for the Host header
before dumping other headers. This is due to the fact that we must emit
pseudo headers before other ones. One improvement could possibly be made
later in the request parser to search and emit the Host header immediately
if authority was not found. This would cost nothing on the vast majority
of requests and make the lookup faster on output since Host would appear
first.
This fix must be backported to 1.9.
Commit 3c4e19f42 ("BUG/MEDIUM: backend: always release the previous
connection into its own target srv_list") introduced a valid warning
about a null-deref risk since we didn't check conn_new()'s return value.
This patch must be backported to 1.9 with the patch above.
In mem_should_fail(), if we don't want to fail the allocation, either
because mem_fail_rate is 0, or because we're still initializing, don't
forget to initialize ret, or we may return a non-zero value, and fail
an allocation we didn't want to fail.
This should only affect users that compile with DEBUG_FAIL_ALLOC.
I would have sworn it was done, probably we lost it during the refactoring.
If a frontend is in HTX and the backend is not (and conversely), this is
normally detected at config parsing time unless the rule is dynamic. In
this case we must abort with an error 500. The logs will report "RR"
(resource issue while processing request) with the frontend and the
backend assigned, so that it's possible to figure what was attempted.
This must be backported to 1.9.
There was a bug reported in issue #19 regarding the fact that haproxy
could mis-route requests to the wrong server. It turns out that when
switching to another server, the old connection was put back into the
srv_list corresponding to the stream's target instead of this connection's
target. Thus if this connection was later picked, it was pointing to the
wrong server.
The patch fixes this and also clarifies the assignment to srv_conn->target
so that it's clear we don't change it when picking it from the srv_list.
This must be backported to 1.9 only.
When compiling with DEBUG_FAIL_ALLOC, add a new option, tune.fail-alloc,
that gives the percentage of chances an allocation fails.
This is useful to check that allocation failures are always handled
gracefully.
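The mechanism can be sketched as follows, with illustrative names (the real
mem_should_fail() also accounts for the initialization phase, as the fix
above shows):

    #include <stdlib.h>

    static int mem_fail_rate; /* 0..100, set from tune.fail-alloc */

    static void *fail_alloc_sketch(size_t size)
    {
        if (mem_fail_rate && (rand() % 100) < mem_fail_rate)
            return NULL;      /* simulated allocation failure */
        return malloc(size);
    }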
Till now we used to only rely on tune.h2.max-concurrent-streams, but if
a peer advertises a lower limit, this can cause streams to be reset or
even the connection to be killed. Let's respect the peer's value for
outgoing streams.
This patch should be backported to 1.9, though it depends on the following
ones :
BUG/MINOR: server: fix logic flaw in idle connection list management
MINOR: mux-h2: max-concurrent-streams should be unsigned
MINOR: mux-h2: make sure to only check concurrency limit on the frontend
MINOR: mux-h2: learn and store the peer's advertised MAX_CONCURRENT_STREAMS setting
We used not to take it into account because we only used the configured
parameter everywhere. This patch makes sure we can actually learn the
value advertised by the peer. We still enforce our own limit on top of
it however, to make sure we can actually limit resources or stream
concurrency in case of suboptimal server settings.
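In effect the outgoing limit becomes the smaller of the two values; a
trivial sketch:

    /* cap outgoing concurrency by both our configured limit and the
     * peer's advertised SETTINGS_MAX_CONCURRENT_STREAMS */
    static unsigned int max_outgoing_streams(unsigned int configured,
                                             unsigned int advertised)
    {
        return advertised < configured ? advertised : configured;
    }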
h2_has_too_many_cs() was renamed to h2_frt_has_too_many_cs() to make it
clear it's only used to throttle the frontend connection, and the call
places were adjusted to only call this code on a front connection. In
practice it was already the case since the H2_CF_DEM_TOOMANY flag is
only set there. But now the ambiguity is removed.
With variable connection limits, it's not possible to accurately determine
whether the mux is still in use by comparing usage and max for equality,
due to the fact that one determines the capacity and the other one takes care
of the context. This can cause some connections to be dropped before they
reach their stream ID limit.
It seems it could also cause some connections to be terminated with
streams still alive if the limit was reduced to match the newly computed
avail_streams() value, though this cannot yet happen with existing muxes.
Instead let's switch to usage reports and simply check whether connections
are both unused and available before adding them to the idle list.
This should be backported to 1.9.
We used to rely on a hint that a shutw() or shutr() without data is an
indication that the upper layer had performed a tcp-request content reject
and really wanted to kill the connection, but sadly there is another
situation where this happens, which is failed keep-alive request to a
server. In this case the upper layer stream silently closes to let the
client retry. In our case this had the side effect of killing the whole
connection.
Instead of relying on such hints, let's address the problem differently
and rely on information passed by the upper layers about the intent to
kill the connection. During shutr/shutw, this is detected because the
flag CS_FL_KILL_CONN is set on the connstream. Then only in this case
we send a GOAWAY(ENHANCE_YOUR_CALM), otherwise we only send the reset.
This makes sure that failed backend requests only fail frontend requests
and not the whole connections anymore.
This fix relies on the two previous patches adding SI_FL_KILL_CONN and
CS_FL_KILL_CONN as well as the fix for the connection close, and it must
be backported to 1.9 and 1.8, though the code in 1.8 could slightly differ
(cs is always valid) :
BUG/MEDIUM: mux-h2: wait for the mux buffer to be empty before closing the connection
MINOR: stream-int: add a new flag to mention that we want the connection to be killed
MINOR: connstream: have a new flag CS_FL_KILL_CONN to kill a connection
The new flag SI_FL_KILL_CONN is now set by the rare actions which
deliberately want the whole connection (and not just the stream) to be
killed. This is only used for "tcp-request content reject",
"tcp-response content reject", "tcp-response content close" and
"http-request reject". The purpose is to desambiguate the close from
a regular shutdown. This will be used by the next patches.
When finishing to respond on a stream, a shutw() is called (resulting
in either an end of stream or RST), then h2_detach() is called, and may
decide to kill the connection if a number of conditions are satisfied.
Actually one of these conditions is that a GOAWAY frame was already sent
or attempted to be sent. This one is wrong, because it can happen in at
least these two situations :
- a shutw() sends a GOAWAY to obey tcp-request content reject
- a graceful shutdown is pending
In both cases, the connection will be aborted with the mux buffer holding
some data. In case of a strong abort the client will not see the GOAWAY or
RST and might want to try again, which is counter-productive. In case of
the graceful shutdown, it could result in truncated data. It looks like a
valid candidate for the issue reported here :
https://www.mail-archive.com/haproxy@formilux.org/msg32433.html
A backport to 1.9 and 1.8 is necessary.
In 1.5-dev13, a bug was introduced by commit e3224e870 ("BUG/MINOR:
session: ensure that we don't retry connection if some data were sent").
If a connection error is reported after some data were sent (and lost),
we used to accidentally mark the front connection as being in error instead
of only the back one because the two direction flags were applied to the
same channel. This case is extremely rare with raw connections but can
happen a bit more often with multiplexed streams. This will result in
the error not being correctly reported to the client.
This patch can be backported to all supported versions.
If we're adding a connection to the server orphan idle list, don't forget
to remove the CO_FL_SESS_IDLE flag, or we will assume later it's still
attached to a session.
This should be backported to 1.9.
When a HTTP/1.1 or above response is emitted to a client, if the flag
H1_MF_XFER_LEN is set whereas H1_MF_CLEN and H1_MF_CHNK are not, the header
"transfer-encoding" is added. It is a way to make HTX chunking consistent with
H2. But we must exclude all cases where the message-body is explicitly forbidden
by the RFC:
* for all 1XX, 204 and 304 responses
* for any responses to HEAD requests
* for 200 responses to CONNECT requests
For these 3 cases, the flag H1_MF_XFER_LEN is set but H1_MF_CLEN and H1_MF_CHNK
not. And the header "transfer-encoding" must not be added.
See issue #27 on github for details about the bug.
This patch must be backported in 1.9.
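For illustration, the intended condition looks like the sketch below (h1m
and the H1_MF_* flags follow the H1 parser's conventions, while sl_status,
req_is_head and req_is_connect are hypothetical values describing the
response and its originating request):
    if ((h1m->flags & H1_MF_XFER_LEN) &&
        !(h1m->flags & (H1_MF_CLEN | H1_MF_CHNK)) &&
        sl_status >= 200 && sl_status != 204 && sl_status != 304 &&
        !req_is_head && !(req_is_connect && sl_status == 200)) {
        /* the message may carry a body: advertise chunking */
        h1m->flags |= H1_MF_CHNK; /* emits "transfer-encoding: chunked" */
    }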
This bug was introduced by 355b203 commit which prevented the peer
addresses from being parsed for the local peer of a "peers" section.
When adding "parse_addr" boolean parameter to parse_server(), this commit
missed the case where the syntax with "peer" keyword should still be
supported in addition to the new syntax with "server"+"bind" keyword.
May be backported as far as 1.5.
In h2_frt_transfer_data(), we support both HTX and legacy modes. The
HTX mode is detected from the proxy option and sets a valid pointer
into the htx variable. Better rely on this variable throughout the function
rather than testing the option again. This way the code is clearer and
even the compiler knows this pointer is valid when it's dereferenced.
This should be backported to 1.9 if the b_is_null() patch is backported.
We used to respond a connection error in case we received a trailers
frame on a closed stream, but it's a problem to do this if the error
was caused by a reset because the sender has not yet received it and
is just a victim of the timing. Thus we must not close the connection
in this case.
This patch may be backported to 1.9 but then it requires the following
previous ones :
MINOR: h2: add a generic frame checker
MEDIUM: mux-h2: check the frame validity before considering the stream state
CLEANUP: mux-h2: remove stream ID and frame length checks from the frame parsers
It's not convenient to have such structural checks mixed with the ones
related to the stream state. Let's remove all these basic tests that are
already covered once for all when reading the frame header.
There are some uneasy situations where it's difficult to validate a frame's
format without being in an appropriate state. This patch makes sure that
each frame passes through h2_frame_check() before being checked in the
context of the stream's state. This makes sure we can always return a GOAWAY
for protocol violations even if we can't process the frame.
The new function h2_frame_check() checks the protocol limits for the
received frame (length, ID, direction) and returns a verdict made of
a connection error code. The purpose is to be able to validate any
frame regardless of the state and the ability to call the frame handler,
and to emit a GOAWAY early in this case.
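Its logic can be summarized by the following sketch (the real function
implements per-frame-type rules; h2_ft_id_ok() is a hypothetical helper):
    static enum h2_err h2_frame_check(enum h2_ft ft, int dir,
                                      int32_t id, int32_t len, int32_t mfs)
    {
        if (len > mfs)
            return H2_ERR_FRAME_SIZE_ERROR; /* larger than advertised max */
        if (!h2_ft_id_ok(ft, dir, id))      /* e.g. SETTINGS requires id 0 */
            return H2_ERR_PROTOCOL_ERROR;
        return H2_ERR_NO_ERROR;             /* frame may be processed */
    }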
RFC7540#5.1 states that these are the only states allowing any frame
type. For response HEADERS frames, we cannot accept that they are
delivered on idle streams of course, so we're left with these two
states only. It is important to test this so that we can remove the
generic CLOSE_STREAM test for such frames in the main loop.
This must be backported to 1.9 (1.8 doesn't have response HEADERS).
If a response HEADERS frame arrives on a closed stream (due to a
client abort sending an RST_STREAM), it's currently immediately rejected
with an RST_STREAM, like any other frame. This is incorrect, as HEADERS
frames must first be decoded to keep the HPACK decoder synchronized,
otherwise subsequent responses may be broken.
This patch excludes HEADERS/CONTINUATION/PUSH_PROMISE frames from the
central closed state test and leaves to the respective frame parsers
the responsibility to decode the frame then send RST_STREAM.
This fix must be backported to 1.9. 1.8 is not directly impacted since
it doesn't have response HEADERS nor trailers thus cannot recover from
such situations anyway.
The H2 spec requires to send GOAWAY when the client sends a frame after
it has already closed using END_STREAM. Here the corresponding case was
the fallback of a series of tests on the stream state, but it unfortunately
also catches old closed streams which we don't know anymore. Thus any late
packet after we've sent an RST_STREAM will trigger this GOAWAY and break
other streams on the connection.
This can happen when launching two tabs in a browser targeting the same
slow page through an H2-to-H2 proxy, and pressing Escape to stop one of
them. The other one gets an error when the page finally responds (and it
generally retries), and the logs in the middle indicate SD-- flags since
the late response was cancelled.
This patch takes care to only send GOAWAY on streams we still know.
It must be backported to 1.9 and 1.8.
When receiving a HEADERS or DATA frame with END_STREAM set, we would
unconditionally switch to half-closed(remote). This is wrong because we
could already have been in half-closed(local) and need to switch to closed.
This happens in the following situations :
- receipt of the end of a client upload after we've already responded
(e.g. redirects to POST requests)
- receipt of a response on the backend side after we've already finished
sending the request (most common case).
This may possibly have caused some streams to stay longer than needed
at the end of a transfer, though this is not apparent in tests.
This must be backported to 1.9 and 1.8.
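The correct transition may be sketched as follows (stream states and flags
follow the mux-h2 conventions; the real code also deals with intermediate
cases):
    if (flags & H2_SF_ES_RCVD) {      /* the frame carried END_STREAM */
        if (h2s->st == H2_SS_HLOC)    /* we had already closed our side */
            h2s_close(h2s);           /* -> closed */
        else
            h2s->st = H2_SS_HREM;     /* -> half-closed(remote) */
    }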
When a settings frame updates the initial window, all affected streams'
windows are updated as well. However the streams are not put back into the
send list if they were already blocked on flow control. The effect is that
such a stream will only be woken up by a WINDOW_UPDATE message but not by
a SETTINGS changing the initial window size. This can be verified with
h2spec's test http2/6.9.2/1 which occasionally fails without this patch.
It is unclear whether this situation is really met in the field, but the
fix is trivial: it consists in adding each unblocked stream to the
wait list as is done for the window updates.
This fix must be backported to 1.9. For 1.8 the patch needs quite
a few adaptations. It's better to copy-paste the code block from
h2c_handle_window_update() adding the stream to the send_list when
its mws is > 0.
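The per-stream unblocking may be sketched as below, mirroring the
WINDOW_UPDATE handler (field names follow mux-h2; the real code walks all
the streams of the connection):
    h2s->mws += delta;                          /* apply the new initial window */
    if (h2s->mws > 0 && (h2s->flags & H2_SF_BLK_SFCTL)) {
        h2s->flags &= ~H2_SF_BLK_SFCTL;         /* no longer flow-controlled */
        LIST_ADDQ(&h2c->send_list, &h2s->list); /* make it sendable again */
    }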
The WINDOW_UPDATE and DATA frame handlers used to still have a check on
h2s to return either h2s_error() or h2c_error(). This is a leftover from
the early code. The h2s cannot be null there anymore as it has already
been dereferenced before reaching these locations.
RFC 7232 section 2.3.3 states:
> Note: Content codings are a property of the representation data,
> so a strong entity-tag for a content-encoded representation has to
> be distinct from the entity tag of an unencoded representation to
> prevent potential conflicts during cache updates and range
> requests. In contrast, transfer codings (Section 4 of [RFC7230])
> apply only during message transfer and do not result in distinct
> entity-tags.
Thus a strong ETag must be changed when compressing. Usually this is done
by converting it into a weak ETag, which represents a semantically, but not
byte-by-byte identical response. A conversion to a weak ETag still allows
If-None-Match to work.
This should be backported to 1.9 and might be backported to every supported
branch with compression.
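The conversion itself is trivial, as this hypothetical in-place helper
shows (an illustrative sketch, not the actual compression filter code):
    #include <string.h>
    /* turn a strong ETag ("xyz") into a weak one (W/"xyz") in place;
     * returns <0 if the buffer lacks room, 0 if already weak, >0 on success */
    static int etag_to_weak(char *val, size_t *len, size_t size)
    {
        if (*len >= 2 && val[0] == 'W' && val[1] == '/')
            return 0;                /* already weak, keep as-is */
        if (*len + 2 > size)
            return -1;               /* not enough room to prepend "W/" */
        memmove(val + 2, val, *len); /* shift the strong ETag right */
        val[0] = 'W';
        val[1] = '/';
        *len += 2;
        return 1;
    }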
If we failed to install the mux, just close the connection, or
conn_fd_handler() will be called for the FD, and crash as soon as it attempts
to access the mux' wake method.
This should be backported to 1.9.
If we failed to add the connection to the session, only try to add it back
to the server idle list if it has a mux, otherwise the connection is
incomplete and unusable.
This should be backported to 1.9.
In connect_server(), if we failed to add the connection to the session,
only destroy the conn_stream if we just allocated it, otherwise it may
have been allocated outside connect_server(), along with a connection which
has its destination address set.
Also use si_release_endpoint() instead of cs_destroy(), to make sure the
stream_interface doesn't reference it anymore.
This should be backported to 1.9.
If conn_install_mux failed, then the connection has no mux and won't be
usable, so just give up on failure instead of ignoring it.
This should be backported to 1.9.
A subtle bug was introduced with H2 on the backend. RFC7540 states that
an attempt to create a stream on an ID not higher than the max known is
a connection error. This was translated into rejecting HEADERS frames
for closed streams. But with H2 on the backend, if the client aborts
and causes an RST_STREAM to be emitted, the stream is effectively closed,
and if/once the server responds, it starts by emitting a HEADERS frame
with this ID thus it is interpreted as a connection error.
This test must of course consider the side the mux is installed on and
not take this for a connection error on responses.
The effect is that an aborted stream on an outgoing H2 connection, for
example due to a client stopping a transfer with option abortonclose
set, would lead to an abort of all other streams. In the logs, this
appears as one or several CD-- line(s) followed by one or several SD--
lines which are victims.
Thanks to Luke Seelenbinder for reporting this problem and providing
enough elements to help understanding how to reproduce it.
This fix must be backported to 1.9.
A new warning appears when building at -O0 since commit 3f0fb9df6 ("MINOR:
peers: move "hello" message treatment code to reduce the size of the I/O
handler."), it is related to the fact that proto_len is initialized from
strlen() which is not a constant. Let's replace it with sizeof-1 instead
and also mark the variable as static since it's useless outside of the file.
The error handling code was extremely repetitive and error-prone due
to the numerous copy-pastes, some involving unlocks or free. Let's
factor this out. The code could be further simplified, but 12 locations
were already cleaned without taking risks.
Implement two new functions to initialize the peer flags and other fields
after a peer has been accepted or connected by the peer I/O handler, so as
to reduce its size.
May be backported as far as 1.5.
This patch implements three functions to read and parse the three
lines of a "hello" peer protocol message so that they can be called from
the peer I/O handler, reducing its size.
May be backported as far as 1.5.
When implementing peer_recv_msg() we added the statements reached with
a "goto incomplete" at the end of this function. These statements
are executed only when co_getblk() returns something <0, so they
are useless from now on and may be safely removed. The following
section, which was responsible for sending any peer protocol messages,
was reached only when co_getblk() returned 0 (no more message to
read). In this case we replace the "goto incomplete" statement with
a "goto send_msgs" to reach this point only when peer_recv_msg() returns 0.
May be backported as far as 1.5.
This patch extracts the code responsible of sending peer protocol
messages from the peer I/O handler to create a new function and to
reduce the size of this handler.
May be backported as far as 1.5.
Extract the code of the peer I/O handler responsible for treating
any peer protocol message to create the peer_treat_awaited_msg() function.
Also rename peer_recv_updatemsg() to peer_treat_updatemsg() as this
function only parses a stick-table update message already received
by peer_recv_msg().
May be backported as far as 1.5.
Implement three new functions to treat peer acks, switch and
definition messages, extracting the code from the big switch-case
of the peer I/O handler to make the latter more readable.
May be backported as far as 1.5.
This patch implements a new function to treat the stick-table
update messages so as to reduce the size of the peer I/O handler
by ~200 lines.
May be backported as far as 1.5.
This patch reduces the size of the peer I/O handler by implementing
a new function named peer_send_updatemsg() which uses the already
implemented peer_prepare_updatemsg(), then ci_putblk().
It reuses the code used to implement the peer_send_(ack|switch)msg()
functions, especially the more generic function peer_send_msg().
May be backported as far as 1.5.
Implement peer_send_*msg() functions for switch and ack messages which call the
already defined peer_prepare_*msg() before calling ci_putblk().
These two new functions are used at three places in the peer_io_handler().
May be backported as far as 1.5.
In case an asynchronous connection (ALPN) succeeds but the mux fails to
attach, we must release the stream interface's endpoint, otherwise we
leave the stream interface with an endpoint pointing to a freed connection
with si_ops == si_conn_ops, and sess_update_st_cer() calls si_shutw() on
it, causing it to crash.
This must be backported to 1.9 only.
In connect_server(), if the previous connection failed, but had an alpn, no
mux was created, and thus the stream_interface's endpoint would be the
connection. In this case, instead of forgetting about it, and overriding
the stream_interface's endpoint later, try to reuse the connection, or the
connection will still be in the session's connection list, and will reference
a stream that was probably destroyed.
This should be backported to 1.9.
The calculation of available outgoing H2 streams was improved by commit
d64a3ebe6 ("BUG/MINOR: mux-h2: always check the stream ID limit in
h2_avail_streams()"), but it still is incorrect because RFC7540#6.8
specifically forbids the creation of new streams after a GOAWAY frame
was received. Thus we must not mark the connection as available anymore
in order to be able to handle a graceful shutdown.
This needs to be backported to 1.9.
The source address was not set but passed down the chain to the upper
layer's accept() calls. Let's initialize it like other UNIX sockets in
this case. At the moment it should not have any impact since socketpairs
are only usable for the master CLI.
This should be backported to 1.9.
There's some value in being able to limit MAX_THREADS, either to save
precious resources in embedded environments, or to protect certain
deployments against accidentally incorrect settings.
With this patch, if MAX_THREADS is defined at build time, it will be
used. However, given that LONGBITS is not a macro but is defined
according to sizeof(long), we can't check the value range at build
time and instead we need to perform the check at early boot time.
However, the compiler is able to optimize away the constant comparisons
and doesn't even emit the check code when values are correct.
The output message regarding threading support was improved to report
the number of threads.
This is mandated by RFC7541#8.1.2.6. Till now we didn't have a copy of
the content-length header field. But now that it's already parsed, it's
easy to add the check.
The reg-test was updated to match the new behaviour as the previous one
expected unadvertised data to be silently discarded.
This should be backported to 1.9 along with previous patch (MEDIUM: h2:
always parse and deduplicate the content-length header) after it has got
a bit more exposure.
The header used to be parsed only in HTX but not in legacy. And even in
HTX mode, the value was dropped. Let's always parse it and report the
parsed value back so that we'll be able to store it in the streams.
When a connection reuse fails, we must not wait before retrying, as most
likely the issue is related to the reused connection and not to the server
itself.
This should be backported to 1.9, though it depends on previous patches
dealing with SI_ST_CON for connection reuse.
Before the first send() attempt, we should be in SI_ST_CON, not
SI_ST_EST, since we have not yet attempted to send and we are
allowed to retry. This is particularly important with complex
outgoing muxes which can fail during the first send attempt (e.g.
failed stream ID allocation).
It only requires that sess_update_st_con_tcp() knows about this
possibility, as we must not forcefully close a reused connection
when facing an error in this case, this will be handled later.
This may be backported to 1.9 with care after some observation period.
This parameter allows to limit the number of successive requests sent
on a connection. Let's compare it to the number of streams already sent
on the connection to decide if the connection may still appear in the
idle list or not. This may be used to help certain servers work around
resource leaks, and also helps dealing with the issue of the GOAWAY in
flight which requires to set a usage limit on the client to be reliable.
This must be backported to 1.9.
Some servers may wish to limit the total number of requests they execute
over a connection because some of their components might leak resources.
In HTTP/1 it was easy, they just had to emit a "connection: close" header
field with the last response. In HTTP/2, it's less easy because the info
is not always shared with the component dealing with the H2 protocol and
it could be harder to advertise a GOAWAY with a stream limit.
This patch provides a solution to this by adding a new "max-reuse" parameter
to the server keyword. This parameter indicates how many times an idle
connection may be reused for new requests. The information is made available
and the underlying muxes will be able to use it at will.
This patch should be backported to 1.9.
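For example, the following illustrative configuration would let each idle
connection to this server carry at most 1000 requests before being closed
(the backend name, server name and address are placeholders):
    backend app
        server srv1 192.168.1.10:8080 max-reuse 1000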
The code dealing with idle connections used to check the number of streams
available on the connection only to unlink the connection from the idle
list. But this still resulted in too many streams reusing the same connection
when they were already attached to it.
We must detect that there is no more room and refrain from using this
connection at all, and instead fall back to the no-reuse case. Ideally
we should try to search among other idle connections, but for a backport
let's stay safe.
This must be backported to 1.9.
One of the reasons for the excessive number of aborted requests when a
server sets a limit on the highest stream ID is that we don't check
this limit while allocating a new stream.
This patch does this at two locations :
- when a backend stream is allocated, we verify that there are still
IDs left ;
- when the ID is assigned, we verify that it's not higher than the
advertised limit.
This should be backported to 1.9.
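Both checks may be sketched as follows (h2c->max_id is the highest ID
assigned so far and h2c->last_sid the limit advertised by a received
GOAWAY, or -1; the exact bounds in the real code differ):
    if (h2c->max_id >= 0x7fffffff)          /* stream ID space exhausted */
        goto fail;
    next_id = h2c->max_id + 2;              /* next odd ID (max_id starts odd or -1) */
    if (h2c->last_sid >= 0 && next_id > h2c->last_sid)
        goto fail;                          /* beyond the GOAWAY's last ID */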
This function is used to decide whether to put an idle connection back
into the idle pool. While it considers the limit in number of concurrent
requests, it does not consider the limit in number of streams, so if a
server announces a low limit in a GOAWAY frame, it will be ignored.
However there is a caveat : since we assign the stream IDs when sending
them, we have a number of allocated streams which max_id doesn't take
care of. This can be addressed by adding a new nb_reserved count on each
connection to keep track of the ID-less streams.
This patch makes sure we take care of the remaining number of streams
if such a limit was announced, or of the number of streams before the
highest ID. Now it is possible to accurately know how many streams
can be allocated, and the number of failed outgoing streams has dropped
in half.
This must be backported to 1.9.
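An illustrative version of the resulting computation (the real function
also honors the peer's MAX_CONCURRENT_STREAMS setting):
    static int h2_avail_streams(const struct h2c *h2c)
    {
        int ret;
        if (h2c->last_sid >= 0)  /* a GOAWAY was received: hard limit */
            ret = (h2c->last_sid - h2c->max_id) / 2;
        else                     /* otherwise the remaining ID space */
            ret = (0x7fffffff - h2c->max_id) / 2;
        ret -= h2c->nb_reserved; /* streams allocated but not yet ID'ed */
        return ret < 0 ? 0 : ret;
    }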
We currently detect a number of situations where we have to immediately
deal with a state change, but we failed to consider the case of the
synchronous error reported on the stream-interface. We definitely do not
want to have to wait for a timeout to handle this one, especially at the
beginning of the connection when it can lead to an immediate retry.
This should be backported to 1.9.
The keyword parser doesn't check the value range, but supported values are
-1 and positive values, thus we should check it.
This can be backported to 1.9.
RFC7541#6.3 mandates that an error is reported when a dynamic table size
update announces a size larger than the one configured with settings. This
is tested by h2spec using test "hpack/6.3/1".
This must be backported to 1.9 and possibly 1.8 as well.
When sending RST_STREAM in response to a frame delivered on an already
closed stream, we used to be unable to update the error code, and could
deliver an RST_STREAM with a wrong code (e.g. H2_ERR_CANCEL). Let's
always allow to update the code so that RST_STREAM is always sent
with the appropriate error code (most often H2_ERR_STREAM_CLOSED).
This should be backported to 1.9 and possibly to 1.8.
There are incompatible MUST statements in the HTTP/2 specification. Some
require a stream error and others a connection error for the same situation.
As discussed in the thread below, let's always apply the connection error
when relevant (headers-like frame in half-closed(remote)) :
https://mailarchive.ietf.org/arch/msg/httpbisa/pOIWRBRBdQrw5TDHODZXp8iblcE
This must be backported to 1.9, possibly to 1.8 as well.
Since we now support CONTINUATION frames, we must take care of properly
aborting the connection when they are sent on a closed stream. By default
we'd get a stream error which is not sufficient since the compression
context is modified and unrecoverable.
More info in this discussion :
https://mailarchive.ietf.org/arch/msg/httpbisa/azZ1jiOkvM3xrpH4jX-Q72KoH00
This needs to be backported to 1.9 and possibly to 1.8 (less important there).
There was an incomplete test in h2c_frt_handle_headers() resulting
in negative return values from h2c_decode_headers() not being taken
as errors. The effect is that the stream is then aborted on timeout
only.
This fix must be backported to 1.9.
The current test consists in removing muxes which report that they're going
to assign their last available stream, but a mux may already be saturated
without ever having been in this situation at all. This is what happens with
mux_h2 when receiving a GOAWAY frame informing the mux about the ID of the
last stream the other end is willing to process. The limit suddenly changes
from near infinite to 0. Currently what happens is that such a mux remains
in the idle list for a long time and refuses all new streams. Now at least
it will only fail a single stream in a retryable way. A future improvement
should consist in trying to pick another connection from the idle list.
This fix must be backported to 1.9.
In case we cannot allocate a stream ID for an outgoing stream, the stream
will be aborted. The problem is that we also release it and it will be
destroyed again by the application detecting the error, leading to a NULL
dereference in h2_shutr() and h2_shutw(). Let's only mark the error on the
CS and let the rest of the code handle the close.
This should be backported to 1.9.
It's almost funny but one side effect of the latest zero-copy changes made
to mux-h1 resulted in the temporary buffer being copied over itself at the
exact same location. This has no impact except slowing down operations and
irritating valgrind. The cause is an incorrect pointer check after the
alignment optimizations were made.
This needs to be backported to 1.9.
Reported-by: Tim Duesterhus <tim@bastelstu.be>
There is no reason to use the reserve to limit the amount of data of the input
buffer that we can parse at a time. The only important thing is to keep the
reserve free in the channel's buffer.
This patch should be backported to 1.9.
When data are pushed in the channel's buffer, in h2_rcv_buf(), the mux-h2 must
respect the reserve if the flag CO_RFL_KEEP_RSV is set. In HTX, because the
stream-interface always sees the buffer as full, there is no other way to know
the reserve must be respected.
This patch must be backported to 1.9.
When an HTX stream is waiting for a request or a response, it reports an error
(400 for the request or 502 for the response) if a parsing error is reported by
the mux (HTX_FL_PARSING_ERROR). The mux-h1 uses this error, among other things,
when the headers are too big to be analyzed at once. But the mux-h2 doesn't. So
the stream must also report an error if the multiplexer is unable to emit all
headers at once. The multiplexers must always emit all the headers at once
otherwise it is an error.
There are 2 ways to detect this error:
* The end-of-headers marker was not received yet _AND_ the HTX message is not
empty.
* The end-of-headers marker was not received yet _AND_ the multiplexer has
some data to emit but is waiting for more space in the channel's buffer.
Note the mux-h2 is buggy for now when HTX is enabled. It does not respect the
reserve. So there is no way to hit this bug.
This patch must be backported to 1.9.
In OpenSSL 1.1.1 TLS 1.3 KeyUpdate messages will trigger the callback
that is used to verify renegotiation is disabled. This means that these
KeyUpdate messages fail. In OpenSSL 1.1.1 a better mechanism is
available with the SSL_OP_NO_RENEGOTIATION flag that disables any TLS
1.2 and earlier renegotiation.
So if this SSL_OP_NO_RENEGOTIATION flag is available, instead of having
a manual check, trust OpenSSL and disable the check. This means that TLS
1.3 KeyUpdate messages will work properly.
Reported-By: Adam Langley <agl@imperialviolet.org>
With tcp-check, the result of the check is set by the function tcpcheck_main()
from the I/O layer. So it is important to wake up the check task to handle the
result and finish the check. Otherwise, we will wait for the task timeout to
handle the result of a tcp-check, delaying the next check by as much.
This patch also fixes a problem about email alerts reported by PiBa-NL (Pieter)
on the ML [1] on all versions since the 1.6. So this patch must be backported
from 1.9 to 1.6.
[1] https://www.mail-archive.com/haproxy@formilux.org/msg32190.html
When we load health values from a server state file, make sure what we assign
to srv->check.health actually matches the state we restore.
This should be backported as far as 1.6.
In order to address the mailers issues, we needed to store the proxy
into the checks struct, which was done by commit c98aa1f18 ("MINOR:
checks: Store the proxy in checks."). However this one did it only for
the health checks and not for the agent checks, resulting in an immediate
crash when the agent is enabled on a random config like this one :
    listen agent
        bind :8000
        server s1 255.255.255.255:1 agent-check agent-port 1
Thanks to Seri Kim for reporting it and providing a reproducer in
issue #20. This fix must be backported to 1.9.
If we fail to initialize pollers due to fdtab/fdinfo/polled_mask
not getting allocated, we free any of those that were allocated
and exit. However the ordering was incorrect, and there was an old
unused and unreachable "fail_cache" path as well, which needs to
be taken when no poller works.
This was introduced with this commit during 1.9-dev :
cb92f5c ("MINOR: pollers: move polled_mask outside of struct fdtab.")
It needs to be backported to 1.9 only.
Make "bind" keywork be supported in "peers" sections.
All "bind" settings are supported on this line.
Add "default-bind" option to parse the binding options excepted the bind address.
Do not parse anymore the bind address for local peers on "server" lines.
Do not use anymore list_for_each_entry() to set the "peers" section
listener parameters because there is only one listener by "peers" section.
May be backported to 1.5 and newer.
This patch adds a pointer to a struct server to the peer structure, which
is initialized after having parsed a remote "peer" line.
After having parsed all peers sections we run ->prepare_srv to initialize
all the SSL/TLS stuff of the remote peer (or server).
Remaining thing to do to completely support peer protocol over SSL/TLS:
make "bind" keyword be supported in "peers" sections to make SSL/TLS
incoming connections to local peers work.
May be backported to 1.5 and newer.
With this patch "default-server" lines are supported in "peers" sections
to setup the default settings of peers which are from now setup
when parsing both "peer" and "server" lines.
May be backported to 1.5 and newer.
Even if it is not already the case, we suppose that the frontend of a "peers"
section may have already been initialized outside of a "peer" line, so we
separate its initialization from its binding initialization.
May be backported to 1.5 and newer.
Use ->local "peers" struct member to flag a "peers" section frontend
has being initialized. This is to be able to initialize the frontend
of "peers" sections on lines different from "peer" lines.
May be backported to 1.5 and newer.
Create an init_peers_frontend() function to allocate and initialize
the frontend of "peers" sections (->peers_fe) so that it can be reused later.
May be backported to 1.5 and newer.
If a send succeeded, add the CO_FL_CONNECTED flag, the send may have been
called by the upper layers before we even realized we were connected, and we
may even read the response before we get the information, and si_cs_recv()
has to know if we were connected or not.
This should be backported to 1.9.
If an ALPN is set on the server line, then when we reach assign_tproxy_address,
the stream_interface's endpoint will be a connection, not a conn_stream,
so make sure assign_tproxy_address() handles both cases.
This should be backported to 1.9.
For HTX streams, the scope pointer is relative to the URI in the start-line. But
for streams using the legacy HTTP representation, the scope pointer is relative
to the beginning of output data in the channel's buffer. So we must be careful
to use the right one depending on whether HTX is used or not.
Because the start-line is used to get the scope pointer, it is important to keep
it after the parsing of POST parameters. So now, instead of removing blocks when
they are read in the function stats_process_http_post(), we just move to the
next one, leaving them in the HTX message.
Thanks to Pieter (PiBa-NL) for reporting this bug.
This patch must be backported to 1.9.
The code using the deprecated 'buf->p' has been updated to use
'ci_head(buf)' as described in section 5 of
'doc/internals/buffer-api.txt'.
A compile time switch on 'BUF_NULL' has also been added to enable the
same source code to be used with pre and post API change HAProxy.
This should be backported to 1.9, and is compatible with previous
versions.
The most significant change from 1.8 to >=1.9 is the buffer
data structure, using the new field and fixing alongside
a little hidden compilation warning.
This must be backported to 1.9.
When an argument <draws> is present, it must be an integer value one
or greater, indicating the number of draws before selecting the least
loaded of these servers. It was indeed demonstrated that picking the
least loaded of two servers is enough to significantly improve the
fairness of the algorithm, by always avoiding to pick the most loaded
server within a farm and getting rid of any bias that could be induced
by the unfair distribution of the consistent list. Higher values N will
take away N-1 of the highest loaded servers at the expense of performance.
With very high values, the algorithm will converge towards the leastconn's
result but much slower. The default value is 2, which generally shows very
good distribution and performance. This algorithm is also known as the
Power of Two Random Choices and is described here :
http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf
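The draw itself is as simple as the sketch below (illustrative only; the
real implementation draws from the consistent hash ring and uses the
server's current load):
    #include <stdlib.h> /* rand() */
    static struct server *pick_least_loaded(struct server **tab, int nsrv,
                                            int draws)
    {
        struct server *best = NULL;
        int i;
        for (i = 0; i < draws; i++) {
            struct server *s = tab[rand() % nsrv]; /* one random draw */
            if (!best || s->served < best->served) /* keep the least loaded */
                best = s;
        }
        return best;
    }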
Since all of them are exclusive, let's move them to an union instead
of eating memory with the sum of all of them. We're using a transparent
union to limit the code changes.
Doing so reduces the struct lbprm from 392 bytes to 372, and thanks
to these changes, the struct proxy is now down to 6480 bytes vs 6624
before the changes (144 bytes saved per proxy).
This one is a proxy option which can be inherited from defaults even
if the LB algo changes. Move it out of the lb_chash struct so that we
don't need to keep anything separate between these structs. This will
allow us to merge them into an union later. It even takes less room
now as it fills a hole and removes another one.
The algo-specific settings move from the proxy to the LB algo this way :
- uri_whole => arg_opt1
- uri_len_limit => arg_opt2
- uri_dirs_depth1 => arg_opt3
Some algorithms require a few extra options (up to 3). Let's provide
some room in lbprm to store them, and make sure they're passed from
defaults to backends.
These ones used to rely on separate variables called hh_name/hh_len
but they are exclusive with the former. Let's use the same variable
which becomes a generic argument name and length for the LB algorithm.
There are a few instances where the lookup algo is tested against
BE_LB_LKUP_CHTREE using a binary "AND" operation while this macro
is a value among a set, and not a bit. The test happens to work
because the value is exactly 4 and no bit overlaps with the other
possible values but this is a latent bug waiting for a new LB algo
to appear to strike. At the moment the only other algo sharing a bit
with it is the "first" algo which is never supported in the same code
places.
This fix should be backported to maintained versions for safety if it
passes easily, otherwise it's not important as it will not fix any
visible issue.
The "balance uri" options "whole", "len" and "depth" were not properly
inherited from the defaults sections. In addition, "whole" and "len"
were not even reset when parsing "uri", meaning that 2 subsequent
"balance uri" statements would not have the expected effect as the
options from the first one would remain for the second one.
This may be backported to all maintained versions.
At a few places in the code we used to rely on this variable to guess
what LB algo was in place. This is wrong because if the defaults section
presets "balance url_param foo" and a backend uses "balance roundrobin",
these locations will still see this url_param_name set and consider it.
The harm is limited, as this only causes the beginning of the request
body to be buffered. And in general this is a bad practice which prevents
us from cleaning the lbprm stuff. Let's explicitly check the LB algo
instead.
This may be backported to all currently maintained versions.
OpenSSL switched from aes128 to aes256 in May 2016 to compute
the TLS ticket secrets used by default. But HAProxy still handled only
128-bit keys for both the TLS key file and the CLI.
This patch permits the user to set aes256 keys through the CLI or
the key file (80 bytes encoded in base64) in the same way that
aes128 keys were handled (48 bytes encoded in base64):
- first 16 bytes for the key name
- next 16/32 bytes for the aes 128/256 key
- last 16/32 bytes for the hmac 128/256 key
Both sizes are now supported (but keys from the same file must be
of the same size and can only be updated via the CLI using a key of
the same size).
Note: this feature needs the fix "dec func ignores padding for output
size checking."
This patch fixes missing allocation checks loading tls key file
and avoid memory leak in some error cases.
This patch should be backported to branches 1.9 and 1.8.
The decode function returns an error even if the output buffer is
large enough, because the padding was not considered. This
case was never met with the current code base.
In h1_process(), if we have no associated stream, and the connection got a
shutw, then destroy it, it is unusable and it may be our last chance to do
so.
This should be backported to 1.9.
When using a check to send email, avoid having an associated server, so that
we don't modify the server state if we fail to send an email.
Also revert back to initialize the check status to HCHK_STATUS_INI, now that
set_server_check_status() stops early if there's no server, we shouldn't
get in a mail loop anymore.
This should be backported to 1.9.
Instead of assuming we have a server, store the proxy directly in struct
check, and use it instead of s->server.
This should be a no-op for now, but will be useful later when we change
mail checks to avoid having a server.
This should be backported to 1.9.
The cache uses the first 32 bits of the uri's hash as the key to reference
the object in the cache. It makes a special case of the value zero to mean
that the object is not in the cache anymore. The problem is that when an
object hashes as zero, it's still inserted but the eb32_delete() call is
skipped, resulting in the object still being chained in the memory area
while the block has been reclaimed and used for something else. Then when
objects which were chained below it (technically any object since zero is
at the root) are deleted, the walk through the upper object may encounter
corrupted values where valid pointers were expected.
But while this should only happen statistically once in 4 billion, the problem
gets worse when the cache-use conditions don't match the cache-store ones,
because cache-store runs with an uninitialized key, which can create objects
that will never be found by the lookup code, or worse, entries with a zero
key preventing eviction of the tree node and resulting in a crash. It's easy
to accidentally end up on such a config because the request rules generally
can't be used to decide on the response :
    http-request cache-use cache if { path_beg /images }
    http-response cache-store cache
In this test, mixing traffic with /images/$RANDOM and /foo/$RANDOM will
result in random keys being inserted, some of them possibly being zero,
and crashes will quickly happen.
The fix consists in 1) always initializing the transaction's cache_hash
to zero, and 2) never storing a response for which the hash has not been
calculated, as indicated by the value zero.
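The two parts of the fix may be pictured like this (field and label names
are illustrative, following the cache filter's conventions):
    txn->cache_hash = 0;      /* 1) the hash always starts not-computed */
    /* ... later, in the cache-store path ... */
    if (!txn->cache_hash)     /* 2) cache-use never computed a valid key */
        goto skip_store;      /* never insert an object without a key */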
It is worth noting that objects hashing as value zero will never be cached,
but given that there's only one chance among 4 billion that this happens,
this is totally harmless.
This fix must be backported to 1.9 and 1.8.
When using early data, disable the OpenSSL anti-replay protection, and set
the max amount of early data we're ready to accept, based on the size of
buffers, or early data won't work with the released OpenSSL 1.1.1.
This should be backported to 1.8.
When initializing server-template all of the servers after the first
have srv->idle_orphan_conns initialized within server_template_init()
The first server does not have this initialized and when http-reuse
is active this causes a segmentation fault when accessed from
srv_add_to_idle_list(). This patch removes the check for
srv->tmpl_info.prefix within server_finalize_init() and allows
the first server within a server-template to have srv->idle_orphan_conns
properly initialized.
This should be backported to 1.9.
In the function hlua_applet_htx_send_yield(), there already was a test to
respect the reserve but the wrong function was used to get the available space
for data in the HTX buffer. Instead of calling htx_free_space(), the function
htx_free_data_space() must be used. But in fact, there is no reason to bother
with that anymore because the function channel_htx_recv_max() has been added for
this purpose.
The result of this bug is that the call to htx_add_data() failed unexpectedly
while the amount of written data was incremented, leading the applet to think
all data was sent. To prevent any further bugs, a test has been added to yield if
we are not able to write data into the channel buffer.
This patch must be backported to 1.9.
Tim Düsterhus reported a possible crash in the H2 HEADERS frame decoder
when the PRIORITY flag is present. A check is missing to ensure the 5
extra bytes needed with this flag are actually part of the frame. As per
RFC7540#4.2, let's return a connection error with code FRAME_SIZE_ERROR.
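The missing check amounts to something like the sketch below (h2c->dff and
h2c->dfl being the current frame's flags and length, as in mux-h2; the real
check also accounts for padding):
    if ((h2c->dff & H2_F_HEADERS_PRIORITY) && h2c->dfl < 5) {
        /* the 5 bytes of stream dependency + weight don't fit */
        h2c_error(h2c, H2_ERR_FRAME_SIZE_ERROR); /* per RFC7540#4.2 */
        goto fail;
    }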
Many thanks to Tim for responsibly reporting this issue with a working
config and reproducer. This issue was assigned CVE-2018-20615.
This fix must be backported to 1.9 and 1.8.
channel_truncate() is not aware of the underlying format of the messages. So if
there are some outgoing data in the channel when called, it does some unexpected
operations on the channel's buffer. So the HTX version, channel_htx_truncate(),
must be used. The same is true for channel_erase(). It resets the buffer but not
the HTX message. So channel_htx_erase() must be used instead. This patch is
flagged as a bug, but as far as we know, it was never hit.
This patch should be backported to 1.9. If so, the following patch must be
backported too:
* MINOR: channel/htx: Add the HTX version of channel_truncate/erase
We need to check if any compression filter precedes the cache filter. This is
only possible when the compression is configured in the frontend while the cache
filter is configured on the backend (via a cache-store action or
explicitly). This case cannot be detected during HAProxy startup. So in such
cases, the cache is disabled.
The patch must be backported to 1.9.
On legacy HTTP streams, it is forbidden to use the compression with the
cache. When the compression filter is explicitly specified, the detection works
as expected and such configurations are rejected at startup. But it does not work
when the compression filter is implicitly defined. To fix the bug, the implicit
declaration of the compression filter is checked first, before calling the
.check() callback of each filter.
This patch should be backported to 1.9.
Since the commit 9666720c8 ("BUG/MEDIUM: compression: Use the right buffer
pointers to compress input data"), the compression can be done twice. The first
time on the frontend and the second time on the backend. This may happen by
configuring the compression in a default section.
To fix the bug, when the response is checked to know if it should be compressed
or not, if the flag HTTP_MSGF_COMPRESSING is set, the compression is not
performed. It means it is already handled by a previous compression filter.
Thanks to Pieter (PiBa-NL) for reporting this bug.
This patch must be backported to 1.9.
Now, h1_shutr() only does a shutdown read and tries to set the flag
H1C_F_CS_SHUTDOWN if the shutdown write was already performed. On its side,
h1_shutw(), if all conditions are met, does the same for the shutdown write. The
real connection close is done when the mux h1 is released, in h1_release().
The flag H1C_F_CS_SHUTW was renamed to H1C_F_CS_SHUTDOWN to be less ambiguous.
This patch may be backported to 1.9.
In h1_shutr(), to fully close the connection, we must be sure the shutdown write
was already performed on the connection. So we now rely on connection flags
instead of conn_stream flags. If CO_FL_SOCK_WR_SH is already set when h1_shutr()
is called, we can do a full connection close. Otherwise, we just do the shutdown
read.
Without this patch, it is possible to close the connection too early with some
outgoing data in the output buf.
This patch must be backported to 1.9.
As for the cache applet, this one must respect the reserve on HTX streams. This
patch is tagged as MINOR because it is unlikely to fully fill the channel's
buffer. Some tests are already done to not process almost full buffer.
This patch must be backported to 1.9.
It is only true for HTX streams. The legacy code relies on ci_putblk() which is
already aware of the reserve. It is mandatory to not fill the reserve to let
other filters analysing data. It is especially true for the compression
filter. It needs at least 20 bytes of free space, plus at most 5 bytes per 32kB
block. So if the cache fully fills the channel's buffer, the compression will
not have enough space to do its job and it will block the data forwarding,
waiting for more free space. But if the buffer is fully filled with input data
(i.e. no outgoing data), the stream will be frozen indefinitely.
This patch must be backported to 1.9. It depends on the following patches:
* BUG/MEDIUM: cache/htx: Respect the reserve when cached objects are served
from the cache
* MINOR: channel/htx: Add HTX version for some helper functions
When a task is created from a Lua context outside of initialisation,
the hlua_ctx_init() function can be called from a safe environment,
so we must not initialise the safe environment again. Since the support
of threads appeared, the safe environment sets a lock to ensure only one
Lua execution at a time. If we initialize a safe environment within
another safe environment, we have a deadlock.
This patch adds support for the indicator "already_safe" which
indicates if the context is initialized from a safe Lua function.
Thanks to Flakebi for the report.
This patch must be backported to haproxy-1.9 and haproxy-1.8
In the TCP actions case, argument n - 1 is returned. For example:
    http-request lua.script stuff
        displays "stuff" as the first arg
    tcp-request content lua.script stuff
        displays "lua.script" as the first arg
The action parser doesn't use the *cur_arg value.
Thanks to Andy Franks for the bug report.
This patch must be backported to haproxy-1.8 and haproxy-1.9
The "show sess all" command didn't allow to detect whether compression
is in use for a given stream, which is sometimes annoying. Let's add a
few more info about the HTTP messages, namely the flags, body len, chunk
len and the "next" pointer.
The "waiting" flag indicates if the stream is waiting for some memory,
and was placed on the same output line as the txn for ease of reading.
But since 1.6 the txn is not part of the stream anymore so this output
was placed under a condition, resulting in "waiting" to appear only
when a txn is present. Let's move it up, closer to the stream's
flags, to fix this.
This may safely be backported though it has little value for older
versions.
Commit b9af88151 ("MINOR: stream/htx: Add info about the HTX structs in
"show sess all" command") accidently forgot the flags on the request
path, it was only on the response path.
It makes sense to backport this to 1.9 so that both outputs are the same.
While testing fixes, it's sometimes confusing to rebuild only one C file
(e.g. a mux) and not to have the correct commit ID reported in "haproxy -v"
nor on the stats page.
This patch adds a new "version.c" file which is always rebuilt. It's
very small and contains only 3 variables derived from the various
version strings. These variables are used instead of the macros at the
few places showing the version. This way the output version of the
running code is always correct for the parts that were rebuilt.
This one used to rely on a few spin locks around lists manipulations
only but 1) there were still a few races (e.g. when aborting, or
between STAT_ST_INIT and STAT_ST_LIST), and 2) after last commit
which dumps htx info it became obvious that dereferencing the buffer
contents is not safe at all.
This patch uses the thread isolation from the rendez-vous point
instead, to guarantee that nothing moves during the dump. It may
make the dump a bit slower but it will be 100% safe.
This fix must be backported to 1.9, and possibly to 1.8 which likely
suffers from the short races above, even though they're extremely
hard to trigger.
In connect_server(), if we're using a new connection, and we have to
initialize the mux right away, only do it so after si_connect() has been
called. si_connect() is responsible for initializing the xprt, and the
mux initialization may depend on the xprt being usable, as it may try to
receive data. Otherwise, the connection will be flagged as having an error,
and we will have to try to connect a second time.
This should be backported to 1.9.
In h1_init(), instead of calling h1_recv() directly, just wake the tasklet,
so that the receive will be done later.
h1_init() might be called from connect_server(), which is itself called
indirectly from process_stream(), and if the receive fails, we may call
si_cs_process(), which may destroy the channel buffers while process_stream()
still expects them to exist.
This should be backported to 1.9.
When a chunked object is served from the cache, if the trailers are not pushed
into the channel's buffer all at once, we still have to count them in the total
written bytes in the buffer.
This patch must be backported to 1.9.
Since the commit 0f8fb6b7f ("MINOR: h1: make the H1 headers block parser able to
parse headers only"), when headers are not received in one time, a parsing error
is returned because the local state in the function h1_headers_to_hdr_list() was
not initialized with the previous one (in fact, it was not initialized at all).
So now, we start the parsing of headers with the state H1_MSG_HDR_FIRST when the
flag H1_MF_HDRS_ONLY is set. Otherwise, we always get it from the h1m.
This patch must be backported to 1.9.
For HTX streams, info about the HTX structure is now dumped for the request and
the response channels in "show sess all" command.
The patch may be backported to 1.9.
Now the H2 mux will parse and encode the HTX trailers blocks and send
the corresponding HEADERS frame. Since these blocks contain pure H1
trailers which may be fragmented on line boundaries, it first needs
to collect all of them, parse them using the H1 parser, build a list
and finally encode all of them at once when the EOM is met. Note that
this HEADERS frame always carries the end-of-headers and end-of-stream
flags.
This was tested using the helloworld examples from the grpc project,
as well as with the h2c tools. It doesn't seem possible at the moment
to test trailers using varnishtest though.
Currently the H1 headers parser works for either a request or a response
because it starts from the start line. It is also able to resume its
processing when it was interrupted, but in this case it doesn't update
the list.
Make it support a new flag, H1_MF_HDRS_ONLY so that the caller can
indicate it's only interested in the headers list and not the start
line. This will be convenient to parse H1 trailers.
We want to make sure we won't emit another empty DATA frame if we meet
HTX_BLK_EOM after an end of stream was already sent. For now it cannot
happen as far as HTX is respected, but with trailers it may become
ambiguous.
Recent commit 4710d20 ("BUG/MEDIUM: mux-h1: make HTX chunking
consistent with H2") tried to address chunking inconsistencies between
H1/HTX/H2 and has enforced it on every outgoing message carrying
H1_MF_XFER_LEN without H1_MF_CLEN nor H1_MF_CHNK. But it also does it
on requests, which is not appropriate since a request by default
doesn't have a message body unless explicitly mentioned. Also make
sure we only do this on HTTP/1.1 messages.
The problem is to guarantee the highest level of compatibility between
H1/H1, H1/H2, H2/H1 in each direction regarding the lack of content-
length. We have this truth table (a star '*' indicates which one can
pass trailers) :
H1 client -> H1 server :
    request:
      CL=0 TE=0 XL=1 -> CL=0 TE=0
      CL=0 TE=1 XL=1 -> CL=0 TE=1 *
      CL=1 TE=0 XL=1 -> CL=1 TE=0
      CL=1 TE=1 XL=1 -> CL=1 TE=1 *
    response:
      CL=0 TE=0 XL=0 -> CL=0 TE=0
      CL=0 TE=1 XL=1 -> CL=0 TE=1 *
      CL=1 TE=0 XL=1 -> CL=1 TE=0
      CL=1 TE=1 XL=1 -> CL=1 TE=1 *
H2 client -> H1 server : (H2 messages always carry XFER_LEN)
    request:
      CL=0 XL=1 -> CL=0 TE=0
      CL=1 XL=1 -> CL=1 TE=0
    response:
      CL=0 TE=0 XL=0 -> CL=0
      CL=0 TE=1 XL=1 -> CL=0 *
      CL=1 TE=0 XL=1 -> CL=1
      CL=1 TE=1 XL=1 -> CL=1 *
H1 client -> H2 server : (H2 messages always carry XFER_LEN)
    request:
      CL=0 TE=0 XL=1 -> CL=0
      CL=0 TE=1 XL=1 -> CL=0 *
      CL=1 TE=0 XL=1 -> CL=1
      CL=1 TE=1 XL=1 -> CL=1 *
    response:
      CL=0 XL=1 -> CL=0 TE=1 *
      CL=1 XL=1 -> CL=1 TE=0
For H1 client to H2 server, it will be possible to rely on the presence
of "TE: trailers" in the H1 request to automatically switch to chunks
in the response, and be able to pass trailers at the end. For now this
check is not implemented so an H2 response missing a content-length to
an H1 request will always have a transfer-encoding header added and
trailers will be forwarded if any.
This patch depends on previous commit "MINOR: mux-h1: parse the
content-length header on output and set H1_MF_CLEN" to work properly.
Since the aforementioned commit is scheduled for backport to 1.9 this
commit must also be backported to 1.9.
The H1_MF_CLEN flag is needed to figure whether a content-length header is
present or not when producing a request, so let's check it on output just
like we already check the transfer-encoding header.
This function is usable to transform a list of H2 header fields to a
HTX trailers block. It takes care of rejecting forbidden headers and
pseudo-headers when performing the conversion. It also emits the
trailing CRLF that is currently needed in the HTX trailers block.
When forwarding an H2 request to an H1 server, if the request doesn't
have a content-length header field, it is chunked. In this case it is
possible to send trailers to the server, which is what this patch does.
If the transfer is performed without chunking, then the trailers are
silently discarded.
This function is usable to transform a list of H2 header fields to a
H1 trailers block. It takes care of rejecting forbidden headers and
pseudo-headers when performing the conversion.
This is not exactly a bug but a long-time design limitation. We used not
to decode trailers in H2, resulting in broken connections each time a
trailer was sent, since it was impossible to keep the HPACK decompressor
synchronized. Now that the sequencing of operations permits it, we must
make sure to at least properly decode them.
What we try to do is to identify if a HEADERS frame was already seen and
use this indication to know if it's a headers or a trailers. For this,
h2c_decode_headers() checks if the stream indicates that a HEADERS frame
was already received. If so, it decodes it and emits the trailing
0 CRLF CRLF in case of H1, or the HTX_EOD + HTX_EOM blocks in case of HTX,
to terminate the data stream.
The trailers contents are still deleted for now but the request works, and
the connection remains synchronized and usable for subsequent streams.
The correctness may be tested using a simple config and h2spec :
h2spec -o 1000 -v -t -S -k -h 127.0.0.1 -p 4443 generic/4/4
This should definitely be backported to 1.9 given the low impact for the
benefit. However it cannot be backported to 1.8 since the operations cannot
be resumed. The following patches are also needed with this one :
MINOR: mux-h2: make h2c_decode_headers() return a status, not a count
MINOR: mux-h2: add a new dummy stream : h2_error_stream
MEDIUM: mux-h2: make h2c_decode_headers() support recoverable errors
BUG/MINOR: mux-h2: detect when the HTX EOM block cannot be added after headers
MINOR: mux-h2: check for too many streams only for idle streams
MINOR: mux-h2: set H2_SF_HEADERS_RCVD when a HEADERS frame was decoded
The HEADERS frame parser checks if we still have too many streams, but
this should only be done for idle streams, otherwise it would prevent
us from processing trailer frames.
In h2c_frt_handle_headers() and h2c_bck_handle_headers() we have an unused
error path made of the strm_err label, while send_rst is used to emit an
RST upon stream error after forcing the stream to h2_refused_stream. Let's
remove this unused strm_err block now.
In h2c_frt_handle_headers(), we test the stream for SS_ERROR just after
setting it to SS_OPEN, this makes no sense and creates confusion in the
error path. Remove this misleading test.
In case we receive a very large HEADERS frame which doesn't leave enough
room to place the EOM block after the decoded headers, we must fail the
stream. This test was missing, resulting in the loss of the EOM, possibly
leaving the stream waiting for a time-out.
Note that we also clear h2c->dfl here so that we don't attempt to clear
it twice when going back to the demux.
If this is backported to 1.9, it also requires that the following patches
are backported as well :
MINOR: mux-h2: make h2c_decode_headers() return a status, not a count
MINOR: mux-h2: add a new dummy stream : h2_error_stream
MEDIUM: mux-h2: make h2c_decode_headers() support recoverable errors
When a decoding error is recoverable, we should emit a stream error and
not a connection error. This patch does this by carefully checking the
connection state before deciding to send a connection error. If only the
stream is in error, an RST_STREAM is sent.
This function used to return a byte count for the output produced, or
zero on failure. Not only this value is not used differently than a
boolean, but it prevents us from returning stream errors when a frame
cannot be extracted because it's too large, or from parsing a frame
and producing nothing on output.
This patch modifies its API to return <0 on errors, 0 on inability to
proceed, or >0 on success, regardless of the amount of output data.
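A call site then looks like this sketch (illustrative only):
    ret = h2c_decode_headers(h2c, &h2s->rxbuf, &h2s->flags);
    if (ret < 0)  /* error: a stream or connection error was set */
        goto fail;
    if (ret == 0) /* cannot proceed yet (missing data or room) */
        goto done;
    /* ret > 0 : a HEADERS frame was successfully decoded */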
The mux h1 mainly depends on the stream to handle errors and timeouts. But
there is one unhandled case: if a timeout occurs while some outgoing data are
blocked in the output buffer, the stream is detached and the mux waits
indefinitely for the data to be gone before closing the connection. To fix the
bug, a task has been added on the mux to handle connection timeouts. For now, an
expiration date is set only when some outgoing data are blocked. And if a stream
is still attached when the mux's task times out, an error flag is set on the mux
but the connection is not closed immediately. We assume the stream will hit the
same timeout just after.
This patch must be backported to 1.9.
In the function htx_end_request, the flag SI_FL_NOHALF must be set on the server
side once the request is in the state HTTP_MSG_DONE. But the response state was
checked first, and the flag was only set when the response was also in the state
HTTP_MSG_DONE, which is of course not desirable.
This patch must be backported to 1.9.
For a long time now, the expiration date of a stream has only been updated in
process_stream(). It is calculated using, among others, the channels'
expiration dates for reads and writes (.rex and .wex values). But these values
are updated by the stream-interface. So when this happens at the connection
layer, the update is only done if the stream's task is woken up. Otherwise,
the stream expiration date is not immediately updated. This leads to
unexpected behaviours. From time to time, users reported that the wrong
timeout was triggered or the wrong termination state was reported. This is
partly because of this bug.
Recently, we observed some sessions blocked for a while when big objects are
served from the cache applet. It seems to only concern clients not reading the
response. Because delivered objects are big, not all data can be sent. And
because delivered objects are big, data are fast-forwarded (from the input to
the output with no stream wakeup). So in such a situation, the stream
expiration date is never updated and no timeout is triggered. The session
remains blocked while the client remains connected.
This bug exists at least since HAProxy 1.5. But recent changes on the connection
layer make it more visible. It must be backported from 1.9 to 1.6. And with more
pain it should be backported to 1.5.
When transferring from H1 to H1, chunking is always indicated by the
presence of the Transfer-Encoding header field. But when a message
comes from H2 there is no such header and only HTX_SL_F_XFER_LEN
ought to be relied on. This one will also result in H1_MF_XFER_LEN
being set, just like transfer-encoding, so let's always rely on
this latter flag to detect the need for chunking (when CLEN is not
here) and automatically add the transfer-encoding header if it was
not present, as reported by H1_MF_CHNK.
This must be backported to 1.9.
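The resulting decision can be sketched as follows (simplified; the real
output code also has to emit the header itself):

    /* sketch: XFER_LEN set without CLEN means the body must be chunked */
    if ((h1m->flags & (H1_MF_XFER_LEN | H1_MF_CLEN)) == H1_MF_XFER_LEN) {
        h1m->flags |= H1_MF_CHNK;
        /* and add "transfer-encoding: chunked" if it was not present */
    }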
The H1 mux needs to store some information regarding the states that
were met (EOD, trailers, etc) for each direction but currently uses
only one set of flags. This results in failures when both the request
and the response use chunked-encoding because some elements are believed
to have been met already and a trailing 0 CRLF or just a CRLF may be
missing at the end.
The solution here consists in splitting these flags per direction, one
set for input processing and another set for output processing. Only
two flags were affected so this is not a big deal.
This needs to be backported to 1.9.
In h2c_decode_headers() we update the buffer's length according to the
amount of data produced (outlen). But in case of HTX this outlen value
is not a quantity, just an indicator of success, resulting in one extra
byte being added to the buffer and temporarily showing .data > .size, which
is wrong. Fortunately this is overridden by htx_to_buf() when leaving the
function, so the impact only exists in step-by-step debugging, but
it definitely needs to be fixed.
This must be backported to 1.9.
When dealing with a server's H2 response, we used to set the
end-of-stream flag on the conn_stream and the stream before parsing
the response, which is incorrect since we can fail to process this
response by lack of room, buffer or anything. The extent of this problem
is still limited to a few rare cases, but with trailers it will cause a
systematic failure.
This fix must be backported to 1.9.
This function handles response HEADERS frames, it is not responsible
for creating new streams thus it must not check if we've reached the
stream count limit, otherwise it could lead to some undesired pauses
which bring no benefit.
This must be backported to 1.9.
If we exit this function because some data are pending in the rxbuf, we
currently don't indicate any blocking flag, which will prevent the operation
from being attempted again. Let's set H2_CF_DEM_SFULL in this case to indicate
there's not enough room in the stream buffer so that the operation may be
attempted again once we make room. It seems that this issue cannot be
triggered right now but it definitely will with trailers.
This fix should be backported to 1.9 for completeness.
h2c_restart_reading() is used at various places to resume processing of
demux data, but this one refrains from doing so if the mux is already
subscribed for receiving. It just happens that even if some incoming
frame processing is interrupted, the mux is always subscribed for
receiving, so this condition alone is not enough, it must be combined
with the fact that the demux buffer is empty, otherwise some resume
events are lost. This typically happens when we refrain from processing
some incoming data due to missing room in the stream's rxbuf, and want
to resume in h2c_rcv_buf(). It will become even more visible with trailers
since these ones want to have an empty rxbuf before proceeding.
This must be backported to 1.9.
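The resulting condition looks roughly like this (a sketch, close to but
not necessarily identical to the final code):

    static void h2c_restart_reading(const struct h2c *h2c)
    {
        /* skip the wakeup only if we're subscribed for rx AND there is
         * nothing left to demux, otherwise resume events may be lost */
        if ((h2c->wait_event.events & SUB_RETRY_RECV) && !b_data(&h2c->dbuf))
            return;
        tasklet_wakeup(h2c->wait_event.task);
    }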
In h2c_decode_headers() we mistakenly check for H2_F_DATA_END_STREAM
while we should check for H2_F_HEADERS_END_STREAM. Both have the same
value (1) but better stick to the correct flag.
When an HTX structure is defragmented, it is possible to retrieve the new block
corresponding to an old one. This is useful to do a defrag during a loop on
blocks, to be sure to continue looping on the right block. But, instead of
returning the address of the new block in the HTX structure, the one in the
temporary structure used to do the defrag was returned, leading to unexpected
behaviours.
This patch must be backported to 1.9.
This bug exists in the HTX code and in the legacy one. When the body length is
unknown, the applet hangs. For the legacy code, it hangs because the end of the
cached object is not correctly handled and the applet is never recalled. For the
HTX code, only the beginning of the response (the 1st buffer) is sent, then the
applet hangs. To work in HTX, the fast forwarding must be correctly handled.
This patch must be backported to 1.9.
[cf: the patch adding the function channel_add_input must be backported with
this one. It does not exist in 1.8 because only responses with a C-L are cached.]
This way we are sure the channel state is always correctly updated, especially
the amount of data directly forwarded. For the stats applet, it is not a bug
because the fast forwarding is never used (the response is chunked and the HTX
extra field is always set to 0).
This patch must be backported to 1.9.
This function must be called when new incoming data are pushed into the
channel's buffer. It updates the channel state and takes care of the fast
forwarding by consuming the right amount of data and decrementing
"->to_forward" accordingly when necessary. In fact, this patch just moves a
part of ci_putblk into a dedicated function.
This patch must be backported to 1.9.
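A simplified sketch of what the function does (close to the code moved
out of ci_putblk, details may differ):

    static inline void channel_add_input(struct channel *chn, unsigned int len)
    {
        if (chn->to_forward) {
            unsigned long fwd = len;

            if (chn->to_forward != CHN_INFINITE_FORWARD) {
                if (fwd > chn->to_forward)
                    fwd = chn->to_forward;
                chn->to_forward -= fwd;
            }
            c_adv(chn, fwd);       /* schedule this data for forwarding */
        }
        chn->total += len;         /* notify that some data was read */
        chn->flags |= CF_READ_PARTIAL;
    }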
With the new ability to log to a terminal, it's convenient to be able
to use "log stdout" in a config file, except that it now results in
setting the terminal to non-blocking mode, breaking every utility
relying on stdin afterwards. Since the only reason for logging to a
terminal is to debug, do not set the FD to non-blocking mode when it's
a terminal.
This fix must be backported to 1.9.
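The idea boils down to something like this (sketch):

    /* keep the log FD blocking when it's a terminal, to avoid breaking
     * utilities relying on stdin/stdout afterwards */
    if (!isatty(fd))
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);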
Application-Layer Protocol Negotiation (ALPN, RFC7301) is a TLS
extension which allows a client to present a preference for which
protocols it wishes to connect to, when a single port supports multiple
application protocols.
It allows a transparent proxy to take a decision based on the beginning
of an SSL/TLS stream without deciphering it.
The new fetch "req.ssl_alpn" extracts the ALPN protocol names that may
be present in the ClientHello message.
Instead of keeping track of the number of connections we're responsible for,
keep track of the number of connections we're responsible for that we are
currently considering idling (ie that we are not using, they may be in use
by other sessions), that way we can actually reuse connections when we have
more connections than the max configured.
When connecting to a server, and reusing a connection, always attempt to give
the owner of the previous session one of its own connections, so that one
session won't be responsible for too many connections.
This should be backported to 1.9.
When creating a new outgoing connection, if we're using ALPN and waiting
for the handshake completion to choose the mux, and for some reason the
handshake failed, add the SI_FL_ERR flag to the stream_interface, so that
process_streams() knows the connection failed, and can attempt to retry,
instead of just hanging.
This should be backported to 1.9.
When a session adds a connection to its connection list, we used to remove
connections for another server if there was not enough room for our
server. This can't work, because those lists are now the list of connections
we're responsible for, not just the idle connections.
To fix this, allow for an unlimited number of servers, instead of using
an array, we're now using a linked list.
To access the first element of the list, correctly use LIST_ELEM(), or we
end up getting the head of the list, instead of getting the first connection.
This should be backported to 1.9.
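To illustrate the difference (a sketch; <sess_conns> is a hypothetical
list head and <list> the member linking connections):

    /* wrong: yields the address of the list head itself */
    srv_conn = (struct connection *)sess_conns->n;

    /* right: converts the first element back to its enclosing struct */
    srv_conn = LIST_ELEM(sess_conns->n, struct connection *, list);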
In connect_server(), if we looked for a usable connection and failed to
find one, srv_conn won't be NULL at the end of list_for_each_entry(), but
will point to the head of a list, which is not a pointer to a struct
connection, so explicitly set it to NULL.
This should be backported to 1.9.
If, for some reason, we failed to allocate a conn_stream when reusing an
existing connection, set srv_conn to NULL, so that we fail later instead
of pretending all is right. Otherwise we end up with a stream_interface
with no endpoint, and the stream will never end.
This should be backported to 1.9.
In h2_detach(), don't add the connection to the idle list if nb_streams
is at the max. This can happen if we already closed that stream before, so
its slot became available and was used by another stream.
This should be backported to 1.9.
When we use htx and http-request auth rules, we need to send WWW-Authenticate
with a 401 and Proxy-Authenticate with a 407. We only sent Proxy-Authenticate
regardless of status, with htx enabled.
To be backported to 1.9.
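The fix amounts to picking the header field from the status code
(sketch):

    /* 401 wants WWW-Authenticate, 407 wants Proxy-Authenticate */
    const char *hdr = (status == 407) ? "Proxy-Authenticate"
                                      : "WWW-Authenticate";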
In connect_server(), don't attempt to reuse the old connection if it's
targeting a different server than the one we're supposed to access, or
we will never be able to connect to a server if the first one we tried failed.
This should be backported to 1.9.
Now that the HEADERS frame decoding is retryable, we can safely try to
fold CONTINUATION frames into a HEADERS frame when the END_OF_HEADERS
flag is missing. In order to do this, h2c_decode_headers() moves the
frames payloads in-situ and leaves a hole that is plugged when leaving
the function. There is no limit to the number of CONTINUATION frames
handled this way provided that all of them fit into the buffer. The
error reported when meeting isolated CONTINUATION frames has now changed
from INTERNAL_ERROR to PROTOCOL_ERROR.
Now there is only one (unrelated) remaining failure in h2spec.
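The folding principle can be sketched like this (not the actual code;
h2_get_frame_len() is a hypothetical helper):

    /* move the CONTINUATION payload over its own 9-byte frame header so
     * it becomes contiguous with the HEADERS payload; the hole left at
     * the end is plugged when leaving the function */
    char *cont = hdrs + flen;             /* frame after the HEADERS payload */
    size_t clen = h2_get_frame_len(cont); /* hypothetical header parser */

    memmove(cont, cont + 9, clen);        /* erase the frame header */
    flen += clen;                         /* the HEADERS payload grew */
    hole += 9;                            /* bytes to reclaim at the end */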
The H2 demux only checks for too many streams in h2c_frt_stream_new(),
then refuses to create a new stream and causes the connection to be
aborted by sending a GOAWAY frame. This will also happen if any error
happens during the stream creation (e.g. memory allocation).
RFC7540#5.1.2 says that attempts to create streams in excess should
instead be dealt with using an RST_STREAM frame conveying either the
PROTOCOL_ERROR or REFUSED_STREAM reason (the latter being usable only
if it is guaranteed that the stream was not processed). In theory it
should not happen for well behaving clients, though it may if we
configure a low enough h2.max_concurrent_streams limit. This error
however may definitely happen on memory shortage.
Previously it was not possible to use RST_STREAM due to the fact that
the HPACK decompressor would be desynchronized. But now we first decode
and only then try to allocate the stream, so the decompressor remains
synchronized regardless of policy or resources issues.
With this patch we enforce stream termination with RST_STREAM and
REFUSED_STREAM if this protocol violation happens, as well as if there
is a temporary condition like a memory allocation issue. It will allow
a client to recover cleanly.
This could possibly be backported to 1.9. Note that this requires that
these five previous patches are merged as well :
MINOR: h2: add a bit-based frame type representation
MEDIUM: mux-h2: remove padlen during headers phase
MEDIUM: mux-h2: decode HEADERS frames before allocating the stream
MINOR: mux-h2: make h2c_send_rst_stream() use the dummy stream's error code
MINOR: mux-h2: add a new dummy stream for the REFUSED_STREAM error code
This patch introduces a new dummy stream, h2_refused_stream, in CLOSED
status with the aforementioned error code. It will be usable to reject
unexpected extraneous streams.
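Such a dummy stream is essentially a static, closed h2s carrying the
desired error code (sketch; the real initializer sets a few more fields):

    static const struct h2s *h2_refused_stream = &(const struct h2s){
        .cs      = NULL,
        .h2c     = NULL,
        .st      = H2_SS_CLOSED,
        .errcode = H2_ERR_REFUSED_STREAM,
        .id      = 0,
    };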
We currently have 2 dummy streams allowing us to send an RST_STREAM
message with an error code matching this one. However h2c_send_rst_stream()
still enforces the STREAM_CLOSED error code for these dummy streams,
ignoring their respective errcode fields which however are properly
set.
Let's make the function always use the stream's error code. This will
allow to create other dummy streams for different codes.
It's hard to recover from a HEADERS frame decoding error after having
already created the stream, and it's not possible to recover from a
stream allocation error without dropping the connection since we can't
maintain the HPACK context, so let's decode it before allocating the
stream, into a temporary buffer that will then be offered to the newly
created stream.
Three types of frames may be padded : DATA, HEADERS and PUSH_PROMISE.
Currently, each of these independently deals with padding and needs to
wait for and skip the initial padlen byte. Not only this complicates
frame processing, but it makes it very hard to process CONTINUATION
frames after a padded HEADERS frame, and makes it complicated to perform
atomic calls to h2s_decode_headers(), which are needed if we want to be
able to maintain the HPACK decompressor's context even when dropping
streams.
This patch takes a different approach : the padding is checked when
parsing the frame header, the padlen byte is waited for and parsed,
and the dpl value is updated with this padlen value. This will allow
the frame parsers to decide to overwrite the padding if needed when
merging adjacent frames.
Since commit f210191 ("BUG/MEDIUM: h2: don't accept new streams if
conn_streams are still in excess") we're refraining from reading input
frames if we've reached the limit of number of CS. The problem is that
it prevents such situations from working fine. The initial purpose was
in fact only to prevent reading new HEADERS frames when this happens,
but the check as it stands causes some occasional transfer hiccups and
pauses with large concurrencies.
Given that we now properly reject extraneous streams before checking
this value, we can be sure never to have too many streams, and that
any higher value is only caused by a scheduling reason and will go
down after the scheduler calls the code.
This fix must be backported to 1.9 and possibly to 1.8. It may be
tested using h2spec this way with an h2spec config :
while :; do
    h2spec -o 5 -v -t -S -k -h 127.0.0.1 -p 4443 http2/5.1.2
done
We were returning a stream error of type PROTOCOL_ERROR on empty HEADERS
frames, but RFC7540#4.2 stipulates that we should instead return a
connection error of type FRAME_SIZE_ERROR.
This may be backported to 1.9 and 1.8 though it's unlikely to have any
real life effect.
Commit dc57236 ("BUG/MINOR: mux-h2: advertise a larger connection window
size") caused a WINDOW_UPDATE message to be sent early with the connection
to increase the connection's window size. It turns out that it causes some
minor trouble that needs to be worked around :
- varnishtest cannot transparently cope with the WU frames during the
handshake, forcing all tests to explicitly declare the handshake
sequence ;
- some vtc scripts randomly fail if the WU frame is sent after another
expected response frame, adding uncertainty to some tests ;
- h2spec doesn't correctly identify these WU frames at the connection
  level, believing they are the responses to some purposely erroneous
  frames it sends, resulting in some errors being reported
None of these are a problem with real clients but they add some confusion
during troubleshooting.
Since the fix above was intended to increase the upload bandwidth, we
have another option which is to increase the window size with the first
WU frame sent for the connection. This way, no WU frame is sent until
one is really needed, and this first frame will adjust the window to
the maximum value. It will make the window increase slightly later, so
the client will experience the first round trip when uploading data,
but this should not be perceptible, and is not worth the extra hassle
needed to maintain our debugging abilities. As an extra bonus, a few
extra bytes are saved for each connection until the first attempt to
upload data.
This should possibly be backported to 1.9 and 1.8.
Add a way to configure the ALPN used by check, with a new "check-alpn"
keyword. By default, the checks will use the server ALPN, but it may not
be convenient, for instance because the server may use HTTP/2, while checks
are unable to do HTTP/2 yet.
In some situations, if too short a frame header is received, we may leave
h2_process_demux() waking up the task again without checking that we were
already subscribed.
In order to avoid this once for all, let's introduce an h2_restart_reading()
function which performs the control and calls the task up. This way we won't
needlessly wake the task up if it's already waiting for I/O.
Must be backported to 1.9.
In HTX, when the compression filter analyzes the EOM, it flushes the
compression context and adds the last block of compressed data. But this block
can be empty. In this case, we must ignore it.
In dns_read_name(), when a DNS name uses compression and the start position of
the name is greater than 255, the name is read incorrectly and causes an
invalid DNS error. E.g. with 0xc11b: 'c' specifies that name compression is
being used, and 0x11b is the start position of the name, but currently we only
use 0x1b as the start position.
This should be backported as far as 1.7.
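In other words, the 14-bit offset must be rebuilt from both bytes
(sketch):

    /* a compression pointer is 2 bytes: 0b11 followed by a 14-bit offset */
    if ((ptr[0] & 0xc0) == 0xc0) {
        uint16_t offset = ((ptr[0] & 0x3f) << 8) | ptr[1];
        /* 0xc1 0x1b -> offset 0x11b, not just 0x1b */
    }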
A regression was introduced with efbbdf72 ("BUG: dns: Prevent out-of-bounds
read in dns_validate_dns_response()") as it prevented taking into account
the last byte of the payload. This patch aims at fixing it.
This must be backported to 1.8.
In mux_h2_unsubscribe, don't forget to leave the sending_list if
SUB_CALL_UNSUBSCRIBE was set. SUB_CALL_UNSUBSCRIBE means we were about
to be woken up for writing, unless the mux was too full to get more data.
If there's an unsubscribe call in the meanwhile, we should leave the list,
or we may be put back in the send_list.
This should be backported to 1.9.
First, it's a pain to always have to think about updating this date,
second for a long time I've not been the only developer there, and third,
some users contact me hoping to get help that I can't deliver. It's about
time to redirect them to the main site where all the useful links should
be.
In h2_snd_buf(), if we couldn't send the data because of flow control, and
the connection got a shutr, then add CS_FL_ERROR (or CS_FL_ERR_PENDING). We
will never get any window update, so we will never be unlocked, anyway.
No backport is needed.
When dealing with early data we scan the list of stream to notify them.
We're not supposed to have h2s->cs == NULL here but it doesn't cost much
to make the scan more robust and verify it before notifying.
No backport is needed.
If we had no pending read, it could be complicated to report an
RST_STREAM to a sender since we used to only report it via the
rx side if subscribed. Similarly in h2_wake_some_streams() we
now try all methods, hoping to catch all possible events.
No backport is needed.
In order to report an error to the data layer, we have different ways
depending on the situation. At a lot of places it's open-coded and not
always correct. Let's create a new function h2s_alert() to handle this
task. It tries to wake on recv() first, then on send(), then using
wake().
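A sketch of the function (close to the real one; details may differ):

    static void h2s_alert(struct h2s *h2s)
    {
        if (h2s->recv_wait) {                       /* 1: wake on recv */
            h2s->recv_wait->events &= ~SUB_RETRY_RECV;
            tasklet_wakeup(h2s->recv_wait->task);
            h2s->recv_wait = NULL;
        }
        else if (h2s->send_wait) {                  /* 2: wake on send */
            h2s->send_wait->events &= ~SUB_RETRY_SEND;
            tasklet_wakeup(h2s->send_wait->task);
            h2s->send_wait = NULL;
        }
        else if (h2s->cs && h2s->cs->data_cb->wake) /* 3: last resort */
            h2s->cs->data_cb->wake(h2s->cs);
    }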
In the mux h2, make sure we set CS_FL_ERR_PENDING and wake the recv task,
instead of setting CS_FL_ERROR, if CS_FL_EOS is not set, so if there's
potentially still some data to be sent.
We still have an issue with asynchronous errors, which is that while
they don't truncate reads anymore, they might be missed during a
send() attempt. This can happen for example when processing a request
followed by undesired data for which the stream doesn't try to receive,
while the send side experiences an error (transfer aborted by the client).
In this case we definitely want all send() attempts to fail as soon as
the error was reported, even if it's only pending. This way we leave an
opportunity to the stream interface to try to receive the last data
pending in the buffer but it cannot send anymore and knows that there
is an error when trying to do so.
Commit 8519357c ("BUG/MEDIUM: mux-h2: report asynchronous errors in
h2_wake_some_streams()") addressed an issue with synchronous errors
but forgot to fix the call places to also pass CS_FL_ERR_PENDING
instead of CS_FL_ERROR.
No backport is needed.
In h1_shutw() and h1_shutr(), don't attempt to shutdown() the connection
if we're using keepalive and the connection has no error, or we will close
the connection too soon.
As long-time changes have accumulated over time, the exported functions
of the stream-interface were almost all prefixed "si_<something>" while
most private ones (mostly callbacks) were called "stream_int_<something>".
There were still a few confusing exceptions, which were addressed to
follow this scheme :
- stream_sock_read0(), only used internally, was renamed stream_int_read0()
and made static
- stream_int_notify() is only private and was made static
- stream_int_{check_timeouts,report_error,retnclose,register_handler,update}
were renamed si_<something>.
Now it is clearer, when checking one of these, whether it risks being
used outside or not.
There was a reference to struct stream in conn_free() for the case
where we're freeing a connection that doesn't have a mux attached.
For now we know it's always a stream, and we only need to do it to
put a NULL in s->si[1].end.
Let's do it better by storing the pointer to si[1].end in the context
and specifying that this pointer is always nulled if the mux is null.
This way it allows a connection to detach itself from wherever it's
being used. Maybe we could even get rid of the condition on the mux.
We most often store the mux context there but it can also be something
else while setting up the connection. Better call it "ctx" and know
that it's the owner's context than misleadingly call it mux_ctx and
get caught doing suspicious tricks.
The SUB_CAN_SEND/SUB_CAN_RECV enum values have been confusing a few
times, especially when checking them on reading. After some discussion,
it appears that calling them SUB_RETRY_SEND/SUB_RETRY_RECV more
accurately reflects their purpose since these events may only appear
after a first attempt to perform the I/O operation has failed or was
not completed.
In addition the wait_reason field in struct wait_event which carries
them makes one think that a single reason may happen at once while
it is in fact a set of events. Since the struct is called wait_event
it makes sense that this field is called "events" to indicate it's the
list of events we're subscribed to.
Last, the values for SUB_RETRY_RECV/SEND were swapped so that value
1 corresponds to recv and 2 to send, as is done almost everywhere else
in the code and in the shutdown() call.
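After the renaming, the result looks like this (sketch; the real struct
carries a couple more fields):

    enum {
        SUB_RETRY_RECV = 0x01,  /* retry a failed/incomplete rx operation */
        SUB_RETRY_SEND = 0x02,  /* retry a failed/incomplete tx operation */
    };

    struct wait_event {
        struct tasklet *task;
        int events;             /* set of SUB_* events subscribed to */
    };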
In HTTP applets, the request's EOM was removed like other blocks when receive
or get_line was called from Lua scripts. So it was impossible to stop receiving
data on successive calls when all the request body was already consumed,
blocking the applet indefinitely.
Now, we never consume the EOM. So it is easy to interrupt receive/get_line
calls. In all cases, this block is consumed when the applet ends.
Before setting the infinite forward, we first forward all remaining input data
from the channel. Of course for HTX streams, this must be done using the amount
of data in the HTX message not in the channel (which appears as full because of
the HTX).
When producing an HTX message, we can't rely on the next-level H1 parser
to check and deduplicate the content-length header, so we have to do it
while parsing a message. The algorithm is the exact same as used for H1
messages.
There is an issue with some medium sized transfers occasionally not
shutting down at the end. Olivier tracked this to being caused by a
missing wakeup of process_stream(). What happens is that one of the
analysers sets CF_WAKE_WRITE to be woken up at the end of the transfer
to take note of the end of transaction, but a failed si_cs_send() at
the end of process_stream causes the call to be attempted again, with
CF_WAKE_WRITE lost. Then stream_int_notify() doesn't find any valid
condition to wake up process_stream(), and the stream stays there,
idling till the timeout.
In fact, CF_WAKE_WRITE has been designed for calling the analysers
to complete an operation without closing (keep-alive HTTP transfer
for instance). It only applies once the buffer is empty and there
is nothing left to be forwarded. In case the channel is closed, the
wakeup is already granted. So what we need here is to make sure to
wake process_stream() up in case the channel will not be closed and
it doesn't have anything left to be transferred. This is detected by
the lack of CF_AUTO_CLOSE and the emptiness of the buffer + to_forward
after a write activity. So now we take care of always waking the stream
up on end of transfers even if the analysers didn't subscribe to this
or if their subscription was lost.
CF_WAKE_WRITE should probably be killed now, though this first requires
careful inspection.
No backport is needed.
Cc: Olivier Houchard <ohouchard@haproxy.com>
Cc: Christopher Faulet <cfaulet@haproxy.com>
The error captures provided in HTX by the H1 mux would always report the
backend as the "other end". We need to assign the backend only on requests.
No backport is needed.
Today the demux only wakes a stream up after receiving some contents, but
not necessarily on close or error. Let's do it based on both error flags
and both EOS flags. With a bit of refinement we should be able to only do
it when the pending bits are there but not the static ones.
No backport is needed.
This function is called when dealing with a connection error or a GOAWAY
frame. It used to report a synchronous error instead of an asynchronous
error, which can lead to data truncation since whatever is still available
in the rxbuf will be ignored. Let's correctly use CS_FL_ERR_PENDING instead
and only fall back to CS_FL_ERROR if CS_FL_EOS was already delivered.
No backport is needed.
If EOS has already been reported on the conn_stream, there won't be
any read anymore to turn ERR_PENDING into ERROR, so we have to
report it directly.
No backport is needed.
It takes ages to proceed with "show fd" when there is sustained activity
because it uses the rendez-vous point for each and every file descriptor
in the loop. It's very common to see socat timeout there.
Instead of doing this, let's just isolate the function when entering the
loop. Its duration is limited by the number of FDs that may be emitted in
a single buffer anyway, so it's much lighter and responds much faster.
The h2s pointer was used to scan fctl lists prior to being used to scan
the send list by ID, so it could appear non-null even though the list is
empty, resulting in misleading information on empty connections.
No backport is needed.
Most of the time when we issue "show fd" to dump a mux's state, it's
to figure why a transfer is frozen. Connection, stream and conn_stream
states are critical there. And most of the time when this happens there
is a single stream left in the H2 mux, so let's always dump the last
known stream on show fd, as most of the time it will be the one of
interest.
Cyril Bonté reported a bug in the way the cookie length is computed
when aggregating multiple cookies : the first cookie name was counted
as part of the value length, causing random contents to be placed there,
possibly leading to bad requests.
No backport is needed.
Commit 7505f94f9 ("MEDIUM: h2: Don't use a wake() method anymore.")
changed the conditions to restart demuxing so that this happens as soon
as something is read. But similar to previous fix, at an end of stream
we may be woken up with nothing to read but data still available in the
demux buffer, so we must also use this as a valid condition for demuxing.
No backport is needed, this is purely 1.9.
Commit 082f559d3 ("BUG/MEDIUM: h2: restart demuxing after releasing
buffer space") tried to address a situation where transfers could stall
after a read, but the condition was not completely covered : some stalls
may still happen at end of stream because there's nothing anymore to
receive and the last data lie in the demux buffer. Thus we must also
consider this state as a valid condition to restart demuxing.
No backport is needed.
Commit d94f877cd ("BUG/MINOR: mux_pt: Set CS_FL_WANT_ROOM when count is
zero in rcv_buf() callback") triggered a pending issue with this flag,
which is that it's cleared too late and sometimes causes some Rx
transfers to stall. We need to clear it before attempting to receive
otherwise we risk seeing an earlier copy of the flag.
Note that it should probably be defined that this flag could be purged
on each invocation of mux->rcv_buf(), which would make sense.
No backport is needed.
Add a new flag to conn_streams, CS_FL_ERR_PENDING. This is to be set instead
of CS_FL_ERROR in case there's still more data to be read, so that we read all
the data before closing.
When count is zero in the function mux_pt_rcv_buf(), it means the channel's
buffer is full. So we need to set the CS_FL_WANT_ROOM on the
conn_stream. Otherwise, while the channel is full, we will try in a loop to
receive more data.
A bug was introduced when the buffers API was refactored. It occurred when
wrapping input data were compressed: the pointer b_peek(in, 0) was used instead
of "b_orig(in)". b_peek(in, 0) is in fact the same as b_head(in).
Commit 19ed92b ("MINOR: hpack: optimize header encoding for short names")
introduced an error in the space computation for short names, as it removed
the length encoding from the count without replacing it with 1 (the minimum
byte). This results in the last byte of the area being occasionally
overwritten, which is immediately detected with -DDEBUG_MEMORY_POOLS as
the canary at the end gets overwritten.
No backport is needed.
h1_process_input() may occasionally be called with an empty input
buffer, and the code behind cannot deal with that, let's check the
condition at the beginning.
No backport is needed.
In h2_deferred_shut, if we're done sending the shutr/shutw, don't destroy
the h2s if it still has a conn_stream attached, or the conn_stream may try
to access it again.
The cache runs in an applet, so it delivers data into the input side
of the channel's buffer. Thus it must also abort feeding the buffer
as soon as CF_SHUTR is present, not just CF_SHUTW*, since these last
ones may only appear later. There doesn't seem to be an observable
side effect of this bug, the fix probably doesn't even need to be
backported.
The HTX-specific cache code uses HTX_CACHE_* states which overlap with
the legacy HTTP states. A typo in the error handling made the state
become HTTP_CACHE_END, which equals 3 and is the value for HTX_CACHE_EOD,
which explains why we were seeing a transition to trailers and memory
corruption.
No backport is needed.
When creating new conn_streams, always set the CS_FL_NOT_FIRST flag. We
don't really care about being the first request for HTTP/2, this only
really makes sense for HTTP/1, and that way we can reuse connections.
Add the newly created connection to the idle list as long as http-reuse !=
never and, when completing an H2 request, add the connection to the safe list
instead of the idle list if we have to add it at that point: that means we
created many streams, so we know it's safe.
In session, don't keep an infinite number of connections that can idle.
Add a new frontend parameter, "max-session-srv-conns", to set a max number,
with a default value of 5.
Instead of trying to get the session from the connection, which is not
always there, and of course there could be multiple sessions per connection,
provide it with the init() and attach() methods, so that we know the
session for each outgoing stream.
In the mux_h1 and mux_h2, move the test to see if we should add the
connection to the idle list until after we have destroyed the h1s/h2s; that
way, later, we'll be able to check whether the connection has no stream at
all, and whether it should be added to the server idling list.
Instead of the old "idle-timeout" mechanism, add a new option,
"pool-purge-delay", that sets the delay before purging idle connections.
Each time the delay happens, we destroy half of the idle connections.
Add a new command, "pool-max-conn" that sets the maximum number of connections
waiting in the orphan idling connections list (as activated with idle-timeout).
Using "-1" means unlimited. Using pools is now dependant on this.
Change the default for http-reuse from "never" to "safe", as it has been
the recommended setting for a few versions now and backend H2 makes little
sense without it.
Some warnings were removed from the config parser since it can dynamically
be disabled depending on the server's configuration, so there's no need to
disable it on a whole backend just for one server.
Caching the response with the compression enabled was totally broken. To fix
the problem, the compression must be done after caching the response. Otherwise
it would require changing the cache to store compressed and uncompressed
objects for the same resource. So, because that is not possible for now, it is
forbidden to declare the compression filter before the cache one. To ease the
configuration, both can be implicitly declared (without the "filter" keyword).
The compression will automatically be inserted after the cache.
Then, to make it work this way, the compression filter has been slightly
modified. Now, the response headers are updated after the http-response rules
evaluation, instead of before. So, if the response contains a "Content-Length"
header, it will be kept with the response stored in the cache. This cached
response will thus be able to be served to clients not supporting the
compression at all.
This bugfix concerns the thread deinit but affects the master process.
When the master process falls in wait mode (it fails to reload the
configuration), it launches deinit_pollers_per_thread and closes the
thread waker pipe. It closes rd (-1) and wr (0).
Closing an FD in the master can have several side effects and the
process will probably quit at some point.
In this case it assigns 0 to the socketpair of a worker during the next
correct reload, and then closes the socketpair once it falls in wait
mode again. The worker assumes that the master died and leaves.
Commit f8188c6 ("MEDIUM: threads/logs: Make logs thread-safe") made logs
thread-local but it also made the copy of the startup-logs thread-local,
meaning that when threads are configured, upon startup the list of startup
logs appears to be empty. Let's just remove the THREAD_LOCAL directive
there, as the check for the startup period is already present.
This fix should be backported to 1.8.
When refactoring the build option strings in 1.9, the thread support
was placed outside of the ifdef block resulting in threads always being
mentioned even if that was not true. Let's fix this and also mention
when threads are disabled to help troubleshooting.
Removing deprecated APIs is an optional part of OpenWrt's build system to
save some space on embedded devices.
Also added compatibility for LibreSSL.
Signed-off-by: Rosen Penev <rosenp@gmail.com>
PiBa-NL reported an issue affecting logs when stdout is enabled at the
same time as an IP address. It does not affect FD and UNIX, but does
still affect multiple FDs. What happens is that the condition to detect
that the initialization was not made relies on the FD being -1, and in
this case the FD points to the *unique* FD used for AF_INET sockets, so
the configured socket used for outgoing logs over UDP gets overwritten
by the last configured FD. This is not appropriate, so instead we rely
on the sin_port part of the IPv4-mapped address to store the
initialization state for each FD.
This part deserves being significantly revamped, as IPv6 is still not
possible due to the way the FDs are managed, and inherited FDs are a
bit hackish.
Note that this patch relies on "MINOR: tools: preset the port of
fd-based "sockets" to zero" in order to operate properly.
No backport is needed.
Addresses made of a file descriptor store the file descriptor into the
address part of a sin_addr. Contrary to other address classes, there's
no way to figure later based on the FD if an initialization was done
(which is how logs initialize their FDs). The port part is currently
left with random data, so let's instead specifically set the port part
to zero when creating an FD, and let the code using it set whatever
info it needs there, typically an initialization state.
PiBa-NL reported that since this commit f157384 ("MINOR: backend: count
the number of connect and reuse per server and per backend"), reg-test
connection/h00001 fails. Indeed it does: the server is not checked for
existence prior to updating its counter. It should also fail with
transparent mode.
Commit 84cca66 ("BUG/MEDIUM: htx: When performing zero-copy, start from
the right offset.") uncovered another issue which is that the send function
may occasionally be called without any block. It's important to check for
this case when computing the zero-copy offsets.
No backport is needed.
In sess_build_logline(), don't attempt to call sample_fetch_as_type()
if we don't have a stream.
It never used to happen in the past, before commit 09bb27c ("MEDIUM: log:
make sess_build_logline() support being called with no stream"). But now
it can happen when sess_log() is called from the lower layers (i.e. the
H2 mux got garbage when it was expecting a preface frame), and it reveals
that some sample fetch functions and some converter functions which rely
on the stream don't test for its existence. For the sample fetch functions,
a durable solution is easy and would consist in adapting sample_process()
to verify the SMP_USE_* bits when the stream is not set. But for the
converters we don't have this option as they don't declare whether or not
they use a stream (which possibly is the real issue).
Thus for now let's disable sample_fetch_as_type() if a stream does not
exist, until it can be more accurately refined later.
If a reload was issued to the master process and failed, it is critical
that the admin sees it because it means that the saved configuration
does not work anymore and might not be usable after a full restart. For
this reason in this case we modify the "master" prompt to explicitly
indicate that a reload failed.
If the reload fails after the parsing of the configuration, the
mworker_proc structures are created for the processes it tried to
create.
The mworker_proc_list_to_env() function was exporting these uninitialized
structures in the "HAPROXY_PROCESSES" environment variable, which was
leading to this kind of output in "show proc":
4294967295 worker [was: 1] 1 17879d 16h26m28s
When using zerocopy, start from the beginning of the data, not from the
beginning of the buffer, it may have contained headers, and so the data
won't start at the beginning of the buffer.
This patch is a bit huge but nothing special here. Some functions have been
duplicated to support the HTX, some others have a switch inside to do so. So,
now, it is possible to create HTTP applets from HTX proxies. However, TCP
applets remain unsupported.
Functions from the Channel class are now forbidden for Lua scripts called from
HTTP proxies. These functions totally hijacked the HTTP parser, leaving it in an
undefined state. This patch is tagged as MAJOR because it could be seen as a
compatibility breakage. But a Lua script using one of these functions has a very
low probability of working correctly except by chance.
So, concretely, the following functions are concerned: Channel.get, Channel.dup,
Channel.getline, Channel.set, Channel.append, Channel.send,
Channel.forward. Others remain available.
When deciding whether to scan the global run queue or not, we currently
check the configured threads number, and if it's 1 we skip the queue
since it's not supposed to be used. However when running with a master
process and multiple threads in the workers, the master will turn this
number back to 1 while some task wakeups might possibly have set bits
in the global tasks mask, thus causing active_tasks_mask to have one
bit permanently set, preventing the process from sleeping.
Instead of checking global.nbthread, let's check for the current
thread's bit in global_tasks_mask. First it will make this part of the
code more consistent, working like a test and set operation, it will
solve the issue with master+nbthread and as a bonus it will save a
lock/unlock for each scheduler call when the thread doesn't have a
task in the global run queue.
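The check thus becomes a per-thread bit test (sketch):

    /* only consult the global run queue if this thread's bit is set,
     * regardless of the configured number of threads */
    if (global_tasks_mask & tid_bit) {
        /* take the lock, pick tasks, clear our bit once the queue
         * holds nothing for us anymore */
    }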
The tooltip in the HTML stats page was damaged by commit 1b0f85e47 ("MINOR:
stats: also report the failed header rewrites warnings on the stats page"),
due to the header rewrites counter being inserted at the wrong place and
taking the place of the other statuses.
This is only for 1.9, no backport is needed.
Sadly we didn't have the cumulated number of connections established to
servers till now, so let's now update it per backend and per-server and
report it in the stats. On the stats page it appears in the tooltip
when hovering over the total sessions count field.
The transport layer now respects buffer alignment, so we don't need to
cheat anymore pretending we have some data at the head, adjusting the
buffer's head is enough.
For a long time we've been realigning empty buffers in the transport
layers, where the I/Os were performed based on callbacks. Doing so is
optimal for higher data throughput but makes it trickier to optimize
unaligned data, where mux_h1/h2 have to claim some data are present
in the buffer to force unaligned accesses to skip the frame's header
or the chunk header.
We don't need to do this anymore since the I/O calls are now always
performed from top to bottom, so it's only the mux's responsibility
to realign an empty buffer if it wants to.
In practice it doesn't change anything, it's just a convention, and
it will allow the code to be simplified in a next patch.
The cconf variable was not initialized before the two first possible
error exits before being freed, resulting in random crashes instead
of displaying an error message if the cache ID was missing from the
filter declaration.
No backport is needed, this is exclusively 1.9.
The leastconn algorithm queues available servers based on their weighted
current load. But this results in an inaccurate load balancing when weights
differ and the load is very low, because what matters is not the load before
picking the server but the load resulting from picking the server. At the
very least, it must be granted that servers with the highest weight are
always picked first when no server has any connection.
This patch addresses this by simply adding one to the current connections
count when queuing the server, since this is the load the server will have
once picked. This finally allows to bridge the gap that existed between
the "leastconn" and the "first" algorithms.
In the mux detach function, when using HTX, take the connection over if
it no longer has an owner (ie because the session that was the owner left).
It is done for legacy code in proto_http.c, but not for HTX.
Also when using HTX, in H2, try to add the connection back to idle_conns if
it was not already (ie we used to use all the available streams, and we're
freeing one). That too was done in proto_http.c.
Before trying to add a connection to the idle list, make sure it doesn't
have the error, the shutr or the shutw flag. If any of them is present,
don't bother trying to recycle the connection, it's going to be destroyed
anyway.
A first patch was pushed to fix this bug if it happens during the headers
parsing. But it is also possible to hit the bug during the parsing of
chunks. For instance, if the server sends only part of the trailers, some data
remains unparsed. So if the server closes its connection without sending the end
of the response, we fall back again into an infinite loop.
The fix consists of 2 parts. First, we block the receive if a read0 or an error
is detected on the connection, regardless of whether the input buffer is empty
or not. Then, the flags CS_FL_RCV_MORE and CS_FL_WANT_ROOM are always reset when
input data are processed. We set them again only when necessary.
Add a new method to mux, "reset", that is used to let the mux know the
connection attempt failed and we're about to retry, so it just has to
reinit itself. Currently only the H1 mux needs it.
When the connection failed, we don't really want to close the conn_stream,
as we're probably about to retry, so just make sure the file descriptor is
closed.
In si_cs_recv(), report that we arrived at the end of stream only if we were
indeed connected; we don't want that if the connection failed and we're about
to retry.
CS_FL_EOS | CS_FL_REOS can be set by the mux if the connection failed, so make
sure we remove them before retrying to connect, or it may lead to a premature
close of the connection.
In the master CLI, the commands and the prefix were still parsed and
trimmed after the payload pattern. Don't parse anything but the end of a
line while we are in payload mode.
Put the search for the pattern after the trim so we can correctly use a
payload with a command which is prefixed by @.
Handle the CLI level in the master CLI. In order to do this, the master
CLI stores the level in the stream. Each command is prefixed by a
"user" or "operator" command before they are forwarded to the target
CLI.
The level can be configured in the haproxy program arguments with the
level keyword: -S /tmp/sock,level,admin -S /tmp/sock2,level,user.
Implement "show cli level" which show the level of the current CLI
session.
Implement "operator" and "user" which lower the permissions of the
current CLI session.
We need to make sure that the record length is not making us read
past the end of the data we received.
Before this patch we could for example read the 16 bytes
corresponding to an AAAA record from the non-initialized part of
the buffer, possibly accessing anything that was left on the stack,
or even past the end of the 8193-byte buffer, depending on the
value of accepted_payload_size.
To be backported to 1.8, probably also 1.7.
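The added check is conceptually this (sketch; names approximate):

    /* refuse a record whose advertised data length runs past the end
     * of the payload we actually received */
    uint16_t data_len = (reader[0] << 8) | reader[1];

    reader += 2;
    if (reader + data_len > bufend)
        return DNS_RESP_INVALID;   /* truncated or forged record */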
Some callers of dns_read_name() do not make sure that we can read
the first byte, holding the length of the next label, without going
past our buffer, so we need to make sure of that.
In addition, if the label is a compressed one we need to make sure
that we can read the following byte to compute the target offset.
To be backported to 1.8, probably also 1.7.
When a compressed pointer is encountered, dns_read_name() will call
itself with the pointed-to offset in the packet.
With a specially crafted packet, it was possible to trigger an
infinite-loop recursion by making the pointer point to itself.
While it would be possible to handle that particular case differently
by making sure that the target is different from the current offset,
it would still be possible to craft a packet with a very long chain
of valid pointers, always pointing backwards. To prevent a stack
exhaustion in that case, this patch restricts the number of recursive
calls to 100, which should be more than enough.
To be backported to 1.8, probably also 1.7.
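A minimal sketch of the guard (names and signature approximate):

    static int read_name(const unsigned char *buf, const unsigned char *end,
                         const unsigned char *p, int depth)
    {
        if (depth >= 100)             /* cap chained compression pointers */
            return 0;                 /* reject the packet */
        if (p + 1 >= end)             /* need 2 bytes to read a pointer */
            return 0;
        if ((p[0] & 0xc0) == 0xc0) {  /* compression pointer */
            unsigned int off = ((p[0] & 0x3f) << 8) | p[1];
            return read_name(buf, end, buf + off, depth + 1);
        }
        /* plain labels are handled here */
        return 1;
    }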
The commit 3815b227f ("MEDIUM: mux-h1: implement true zero-copy of DATA blocks")
broke the output of chunked messages. When the zero-copy was performed on such
messages, neither the chunk size nor the ending CRLF was emitted.
Now, the chunked envelope is added when necessary. We have at least the size of
the struct htx to emit it. So 40 bytes for now. It should be enough.
Change the output of the relative pid for the old processes, displays
"[was: X]" instead of just "X" which was confusing if you want to
connect to the CLI of an old PID.
H2 has a 9-byte frame header, and HTX has a 40-byte frame header.
By artificially advancing the Rx header and limiting the amount of
bytes read to protect the end of the buffer, we can make the data
payload perfectly aligned with HTX blocks and optimize the copy.
This is similar to what was done for the H1 mux : when the mux's buffer
is empty and the htx area contains exactly one data block of the same
size as the requested count, and all window and frame size conditions are
satisfied, then it's possible to simply swap the caller's buffer with the
mux's output buffer and adjust offsets and length to match the entire
DATA HTX block in the middle. An H2 frame header has to be prepended
before the block but this always fits in an HTX frame header.
In this case we perform a true zero-copy operation from end-to-end. This
is the situation that happens all the time with large files. When using
HTX over H2 over TLS, this brings a 3% extra performance gain. TLS remains
a limiting factor here but the copy definitely has a cost. Also since
haproxy can now use H2 in clear, the savings can be higher.
Due to blocking factor being different on H1 and H2, we regularly end
up with tails of data blocks that leave room in the mux buffer, making
it tempting to copy the pending frame into the remaining room left, and
possibly realigning the output buffer.
Here we check if the output buffer contains data, and prefer to wait
if either the current frame doesn't fit or if it's larger than 1/4 of
the buffer. This way upon next call, either a zero copy, or a larger
and aligned copy will be performed, taking the whole chunk at once.
Doing so increases the H2 bandwidth by slightly more than 1% on large
objects.
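The heuristic can be sketched like this (simplified):

    /* when the mux buffer already holds data, defer the copy if the
     * frame doesn't fit or exceeds 1/4 of the buffer, so that the next
     * call can perform a zero copy or one large aligned copy */
    if (b_data(mbuf) &&
        (b_room(mbuf) < 9 + frame_len || frame_len > b_size(mbuf) / 4))
        goto defer;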
By default H2 uses a 65535 bytes window for the connection, and changing
it requires sending a WINDOW_UPDATE message. We only used to update the
window when receiving data, thus never increasing it further.
As reported by user klzgrad on the mailing list, this seriously limits
the upload bitrate, and will have an even higher impact on the backend
H2 connections to origin servers.
There is no technical reason for keeping this window so low, so let's
increase it to the maximum possible value (2G-1). We do this by
pretending we've already received that much data minus the maximum
data the client might already send (65535), so that an early
WINDOW_UPDATE message is sent right after the SETTINGS frame.
This should be backported to 1.8. This patch depends on previous
patch "BUG/MINOR: mux-h2: refrain from muxing during the preface".
The condition to refrain from processing the mux was insufficient as it
would only handle the outgoing connections. In essence it is not that much
of a problem since we don't have streams yet on an incoming connection. But
it prevents waiting for the end of the preface before sending an early
WINDOW_UPDATE message, thus causing the connections to fail in this case.
This must be backported to 1.8 with a few minor adaptations.
Since HTX casts the buffer to a struct and stores relative pointers at the
end, it is mandatory that its end is properly aligned. This patch enforces
a buffer size rounding up to the next multiple of two void*, thus 8 on
32-bit and 16 on 64-bit, to match what malloc() already does on the beginning
of the buffer. In practice it will never be really noticeable since default
sizes already are such multiples.
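The rounding is the classical alignment trick (sketch):

    /* round the size up to a multiple of 2*sizeof(void *),
     * i.e. 8 on 32-bit and 16 on 64-bit */
    size_t mask = 2 * sizeof(void *) - 1;

    size = (size + mask) & ~mask;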
When the mux's buffer is empty and the htx area contains exactly one
data block of the same size as the requested count, then it's possible
to simply swap the caller's buffer with the mux's output buffer and
adjust offsets and length to match the entire DATA HTX block in the
middle. In this case we perform a true zero-copy operation from
end-to-end. This is the situation that happens all the time with large
files. With this change, the HTX bit rate performance catches up again
with the legacy mode (measured at 97%).
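The fast path can be sketched as follows (simplified; the offset
adjustments in the real code are more careful):

    /* steal the caller's buffer when our output buffer is empty and the
     * HTX area holds exactly one DATA block of the requested size */
    if (!b_data(&h1c->obuf) && count == blksz) {
        struct buffer tmp = h1c->obuf;

        h1c->obuf = *buf;   /* take the caller's storage wholesale */
        *buf = tmp;         /* hand ours back, now seen as empty */
        /* then point obuf's head/data at the DATA payload that lies
         * inside the HTX area */
    }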
These flags haven't been used for a while. SF_TUNNEL was reintroduced
by commit d62b98c6e ("MINOR: stream: don't set backend's nor response
analysers on SF_TUNNEL") to handle the two-level streams needed to
deal with the first model for H2, and was not removed after this model
was abandoned. SF_INITIALIZED was only set. SF_CONN_TAR was never
referenced at all.
Now that h1 and legacy HTTP are two distinct things, there's no need
to keep the legacy HTTP parsers in h1.c since they're only used by
the legacy code in proto_http.c, and h1.h doesn't need to include
hdr_idx anymore. This concerns the following functions :
- http_parse_reqline();
- http_parse_stsline();
- http_msg_analyzer();
- http_forward_trailers();
All of these were moved to http_msg.c.
Lots of HTTP code still uses struct http_msg. Not only this code is
still huge, but it's part of the legacy interface. Let's move most
of these functions to a separate file http_msg.c to make it more
visible which file relies on what. It's mostly symmetrical with
what is present in http_htx.c.
The function http_transform_header_str() which used to rely on two
function pointers to look up a header was simplified to rely on
two variants http_legacy_replace_{,full_}header(), making both
sides of the function much simpler.
No code was changed beyond these moves.
All the HTX definition is self-contained and doesn't really depend on
anything external, since it's mostly a protocol definition. In addition,
some similar external files (like h2), also placed in common, used to rely
on it, making it a bit awkward.
This patch moves the two htx.h files into a single self-contained one.
The historical dependency on sample.h could be also removed since it
used to be there only for http_meth_t which is now in http.h.
As for the compression filter, the cache filter must be explicitly declared
(using the "filter" keyword) if filters other than the cache are used. It is
mandatory to explicitly define the filters' ordering.
Documentation has been updated accordingly.
This is only true for HTX proxies. On legacy HTTP proxy, if the compression and
the cache are both enabled, an error during HAProxy startup is triggered.
With the HTX, now you can use both in any order. If the compression is defined
before the cache, then the responses will be stored compressed. If the
compression is defined after the cache, then the responses will be stored
uncompressed. So in the last case, when a response is served from the cache, it
will be compressed too, like any other response.