When an H2 request is converted into an HTX message, all cookie headers are
grouped into one, each value separated by a semicolon (;). To do so, we add the
header "cookie" with the first value and then we update the value by appending
the other cookies. But during this operation, only the size of the HTX block
was updated, not the data length of the whole HTX message.
This is an old bug which seems to have worked by chance until now, but it may
lead to undefined behaviour from time to time.
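A minimal sketch of the invariant this fix restores, using hypothetical types
rather than the real HTX API:

    #include <stddef.h>

    /* Hypothetical model, not the real HTX structures: growing a block's
     * payload must also grow the message's total data length, otherwise
     * the two counters drift apart. */
    struct blk { size_t size; };            /* one HTX-like block        */
    struct msg { size_t data; };            /* whole-message data length */

    static void append_value(struct msg *m, struct blk *b, size_t extra)
    {
        b->size += extra;   /* what the buggy code updated...          */
        m->data += extra;   /* ...and what it forgot to update as well */
    }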
This patch must be backported to 2.0 and 1.9.
gcc-3.4 fails to compile pattern.c :
src/pattern.c: In function `pat_match_ip':
src/pattern.c:1092: error: unrecognizable insn:
(insn 186 185 187 9 src/pattern.c:970 (set (reg/f:SI 179)
(high:SI (const:SI (plus:SI (symbol_ref:SI ("static_pattern") [flags 0x22] <var_decl fe5bae80 static_pattern>)
(const_int 8 [0x8]))))) -1 (nil)
(nil))
src/pattern.c:1092: internal compiler error: in extract_insn, at recog.c:2083
This happens when performing the memcpy() on the union, and in this
case the workaround is trivial (and even cleaner) using a cast instead.
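For illustration, here is a standalone example of the two forms; the union and
helpers are hypothetical, not the actual pattern.c code:

    #include <string.h>
    #include <netinet/in.h>

    /* A union similar in spirit to the pattern value union. */
    union addr {
        struct in_addr  v4;
        struct in6_addr v6;
    };

    static void set_v4_memcpy(union addr *a, const struct in_addr *src)
    {
        /* the kind of memcpy() on the union that tripped gcc-3.4 */
        memcpy(&a->v4, src, sizeof(a->v4));
    }

    static void set_v4_cast(union addr *a, const struct in_addr *src)
    {
        /* the trivial (and cleaner) equivalent using a cast */
        *(struct in_addr *)a = *src;
    }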
gcc-3.4 fails to compile standard.c :
src/standard.c: In function `str2sa_range':
src/standard.c:1034: error: unrecognizable insn:
(insn 582 581 583 37 src/standard.c:949 (set (reg/f:SI 262)
(high:SI (const:SI (plus:SI (symbol_ref:SI ("*ss.4") [flags 0x22] <var_decl fe782e80 ss>)
(const_int 2 [0x2]))))) -1 (nil)
(nil))
src/standard.c:1034: internal compiler error: in extract_insn, at recog.c:2083
The workaround is explained here :
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=21613
It only requires creating a local variable containing the result of the
cast, which is totally harmless, so let's do it.
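A standalone illustration of that workaround; the function is hypothetical and
not the actual str2sa_range() code:

    #include <sys/socket.h>
    #include <arpa/inet.h>

    /* Writing through a cast of a struct sockaddr_storage is what triggered
     * the ICE; storing the cast result in a local variable first is the
     * harmless workaround from the GCC bug report. */
    static void set_port(struct sockaddr_storage *ss, unsigned short port)
    {
        struct sockaddr_in *sin = (struct sockaddr_in *)ss; /* local holding the cast */

        sin->sin_port = htons(port);
    }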
When we're done sending/receiving early data, and we add the handshake
flags on the connection, make sure we wake the associated tasklet up, so that
the handshake will be initiated.
When commit 0350b90e3 ("MEDIUM: htx: make htx_add_data() never defragment
the buffer") was introduced, it made htx_add_data() actually be able to
add less data than it was asked for, and the callers must use the returned
value to know how much was added. The H2 code used to rely on the frame
length instead of the return value. A version of the code doing this was
written but is obviously not the one that got merged, resulting in breaking
large uploads or downloads in the cases where HTX would previously have
defragmented the buffer, because the HTX side sees less content than what the
H2 side sees.
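A hypothetical sketch of the calling pattern the H2 code must follow;
try_add() stands in for htx_add_data() and all names are illustrative:

    #include <stddef.h>
    #include <string.h>

    /* May accept less than asked for; returns how much was really added. */
    static size_t try_add(char *dst, size_t room, const char *src, size_t len)
    {
        size_t copied = len < room ? len : room;

        memcpy(dst, src, copied);
        return copied;
    }

    static size_t send_frame(char *dst, size_t room, const char *frame, size_t flen)
    {
        size_t sent = try_add(dst, room, frame, flen);

        /* consume <sent> bytes, not <flen>, or data is lost or duplicated */
        return flen - sent;      /* bytes left to retry later */
    }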
This patch fixes this again. No backport is needed.
In connect_server(), if we don't yet have a mux, because we're choosing
one depending on the ALPN, don't attempt to send early data. We can't do
it because those data would depend on the mux, which will only be determined
by the handshake.
This should be backported to 1.9.
In connect_server(), don't wait until we negotiate the ALPN to choose the
mux, the only mux we want to use is the mux_pt anyway.
This should be backported to 1.9.
Olivier found that commit 99ad1b3e8 ("MINOR: mux-h2: stop relying on
CS_FL_REOS") managed to break abortonclose again with H2. What happens
is that while the CS_FL_REOS flag was set on some transitions to the
HREM state, it's not set on all and is in fact only set when the low
level connection is closed. So making the replacement condition match
the HREM and ERROR states is not correct and causes completely correct
requests to advertise an early close of the connection layer while
only the stream's input is closed.
In order to avoid this, we now properly split the checks for the CLOSED
state and for the closed connection. This way there is no risk of setting
the EOS flag too early on the connection.
No backport is needed.
In commit 86eded6c6 ("CLEANUP: tasks: rename task_remove_from_tasklet_list()
to tasklet_remove_*") which consisted in removing the casts between tasks
and tasklets, I was a bit too fast to believe that we only saw tasklets in
this function since process_runnable_tasks() also uses it with tasks under
a cast. So removing the bookkeeping on task_list_size was not appropriate.
Bah, the joy of casts which hide the real thing...
This patch does two things at once to address this mess once for all:
- it restores the decrement of task_list_size when it's a real task,
but moves it to process_runnable_tasks() since it's the only place
where it's allowed to call it with a task
- it moves the increment there as well and renames
task_insert_into_tasklet_list() to tasklet_insert_into_tasklet_list()
for obvious consistency reasons.
This way the increment/decrement of task_list_size is made at the only
places where the cast is enforced, so there is less risk of missing it.
The comments on top of these functions were updated to reflect that they
are only supposed to be used with tasklets and that the caller is responsible
for keeping task_list_size up to date if it decides to enforce a task there.
Now we don't have to worry anymore about how these functions work outside
of the scheduler, which is better in the long term. Thanks to Christopher for
spotting this mistake.
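A rough model of the split, with hypothetical structures rather than the real
scheduler code:

    #include <stddef.h>

    struct tasklet { struct tasklet *next; };

    static struct tasklet *task_list;       /* shared tasklet run list          */
    static size_t task_list_size;           /* number of real tasks in the list */

    /* generic helper: pure list manipulation, no task accounting */
    static void tasklet_insert_into_tasklet_list(struct tasklet *tl)
    {
        tl->next = task_list;
        task_list = tl;
    }

    /* scheduler side: the only place casting a task to a tasklet, so the
     * only place allowed to keep task_list_size up to date */
    static void queue_task(struct tasklet *task_cast_as_tasklet)
    {
        task_list_size++;
        tasklet_insert_into_tasklet_list(task_cast_as_tasklet);
    }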
No backport is needed.
If a mux is in busy mode when the outgoing EOM is consumed, it is important to
wake it up for I/O, because in busy mode the mux is not subscribed for
receiving. Otherwise, it would depend on the applicative layer to shut down the
H1 stream. Waking it up allows the mux to catch the read0 as soon as possible.
This patch must be backported to 1.9.
The function really only operates on tasklets, its arguments are always
tasklets cast as tasks to match the function's type, to be cast back to
a struct tasklet. Let's rename it to tasklet_remove_from_tasklet_list(),
take a struct tasklet, and get rid of the undesired task casts.
It's really confusing to call it a task because it's a tasklet and used
in places where tasks and tasklets are used together. Let's rename it
to tasklet to remove this confusion.
Since commit 7da71293e431b5ebb3d6289a55b0102331788ee6 has been added, the
server name (srv->id in the code) is now unique per backend, which
means it can reliably be used to identify a server recovered from the
server-state file.
This patch cleans up the parsing of the server-state file and ensures we use
only the server name as a reliable key.
By default, the scheme "https" is always used. But when an explicit scheme was
defined and when this scheme is "http", we use it in the request sent to the
server. This is done by checking flags of the start-line. If the flag
HTX_SL_F_HAS_SCHM is set, it means an explicit scheme was defined on the client
side. And if the flag HTX_SL_F_SCHM_HTTP is set, it means the scheme "http" was
used.
We first try to figure out if the URI of the start-line is absolute or not. So,
if it does not start with a slash ("/"), it means the URI is an absolute one and
the flag HTX_SL_F_HAS_SCHM is set. Then checks are performed to know if the
scheme is "http" or "https" and the corresponding flag is set,
HTX_SL_F_SCHM_HTTP or HTX_SL_F_SCHM_HTTPS. Other schemes, for instance ftp, are
ignored.
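A standalone sketch of this decision in plain C; the flag values are stand-ins
for the HTX_SL_F_* flags, not the actual parser code:

    #include <strings.h>

    enum {
        HAS_SCHM   = 0x1,   /* stands in for HTX_SL_F_HAS_SCHM   */
        SCHM_HTTP  = 0x2,   /* stands in for HTX_SL_F_SCHM_HTTP  */
        SCHM_HTTPS = 0x4,   /* stands in for HTX_SL_F_SCHM_HTTPS */
    };

    static unsigned scheme_flags(const char *uri)
    {
        unsigned flags = 0;

        if (uri[0] == '/')
            return 0;                 /* origin-form URI: no explicit scheme */

        flags |= HAS_SCHM;
        if (strncasecmp(uri, "https://", 8) == 0)
            flags |= SCHM_HTTPS;
        else if (strncasecmp(uri, "http://", 7) == 0)
            flags |= SCHM_HTTP;
        /* other schemes (ftp, ...) are ignored */
        return flags;
    }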
The flag HTX_SL_F_HAS_SCHM is always set because H2 requests always have an
explicit scheme. Then, the pseudo-header ":scheme" is tested. If it is set to
"http", the flag HTX_SL_F_SCHM_HTTP is set. Otherwise, for all other cases, the
flag HTX_SL_F_SCHM_HTTPS is set. For now, it seems reasonable to have a fallback
on the scheme "https".
This state is used in legacy HTTP when everything has been received from an
endpoint but a filter has not forwarded all the data yet. It is used to avoid
reporting a client or a server abort, depending on the channel flags.
The same must be done on HTX streams. Otherwise, the message may be
truncated. For instance, it may happen with the trace filter when random
forwarding is enabled on the response channel.
This patch must be backported to 1.9.
In the HTX structure, the field <first> is used to know where to (re)start the
analysis. It may differ from the message's head. It is especially important to
update it to handle 1xx messages, to be sure to restart the analysis on the next
message (another 1xx message or the final one). It is also updated when some
data are forwarded (the headers or part of the body). But this update is an
error and must never be done at the analysis level. It is a bug, because some
sample fetches may be used after the data forwarding (but before the first send
of course). At this stage, if the first block position does not point to the
start-line, most HTTP sample fetches fail.
So now, when something is forwarded by the HTX analyzers, the first block
position is no longer updated.
This issue was reported on Github. See #119. No backport needed.
When a block's payload is moved during an expansion or when the whole block is
removed, the addresses of free spaces are updated accordingly. We must be
careful to reset them when <tail_addr> becomes equal to <end_addr>. In this
situation, we can maximize the free space between the blocks and their payload
and set the other one to 0. It is also important to be sure to never have
<end_addr> greater than <tail_addr>.
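A hypothetical illustration of the reset described above; the variable names
follow the HTX description but the helper is not the actual code:

    #include <stdint.h>

    /* free-space edges, as described above */
    static uint32_t tail_addr, head_addr, end_addr;

    /* When <tail_addr> catches up with <end_addr>, the two free spaces are
     * contiguous: keep only the one in front of the blocks and clear the one
     * at the beginning. */
    static void maybe_merge_free_rooms(void)
    {
        if (tail_addr == end_addr) {
            tail_addr = head_addr;
            head_addr = 0;
            end_addr  = 0;
        }
    }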
Instead of using the macro MAX_HTTP_HDR to limit the number of headers parsed
before throwing an error, we now use the custom global variable
global.tune.max_http_hdr.
This patch must be backported to 1.9.
The previous fix about the random forwarding on the message body was not enough to
fix the bug in all cases. Among others, when there is no data but only the EOM,
we must forward everything.
This patch must be backported to 1.9 if the patch 0bdeeaacb ("BUG/MINOR:
flt_trace/htx: Only apply the random forwarding on the message body.") is also
backported.
In h1_init(), also add the H1C_F_CS_WAIT_CONN flag if the handshake didn't
complete, otherwise we may end up letting the upper layer send data too
soon.
When built with the dummy 51Degrees library for testing, the output will
include "(dummy library)" to ensure it is clear that this is this is not
the API.
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
complains:
> src/debug.c: In function "ha_panic":
> src/debug.c:162:2: warning: ignoring return value of "write", declared with attribute warn_unused_result [-Wunused-result]
> (void) write(2, trash.area, trash.data);
> ^
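One common way to address such a warning is to actually check the return value
in a small helper; this is illustrative and not necessarily the fix applied in
debug.c:

    #include <unistd.h>

    /* best-effort write that consumes the return value instead of
     * discarding it, which silences -Wunused-result */
    static void write_or_give_up(int fd, const void *buf, size_t len)
    {
        const char *p = buf;

        while (len) {
            ssize_t ret = write(fd, p, len);

            if (ret <= 0)
                break;      /* nothing sensible to do in a panic handler */
            p   += ret;
            len -= ret;
        }
    }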
Change the formatting of the uptime field in "show proc" so it's easier
to parse. Remove the space between the day and the hour and align the
field on 15 characters.
The _51d_fetch method, and the two methods it calls to fetch HTTP
headers (_51d_set_device_offsets, and _51d_set_headers), now support
both legacy and HTX operation.
This should be backported to 1.9.
This action is particularly convenient to replace some deprecated uses
of "reqrep". It takes a match and a format string including back-
references. The reqrep warning was updated to suggest it as well.
In h1_process(), don't consider we're connected if we still have handshakes
pending. It used not to happen, because we would not be called if there
were any ongoing handshakes, but that changed now that the handshakes are
handled by an xprt, and not by conn_fd_handler() directly.
For quite a long time we've been saying that the default error files
should produce HTTP/1.1 responses and since it's of low importance, it
always gets forgotten.
So here it finally comes. Each status code now properly contains a
content-length header so that the output is clean and doesn't force
upstream proxies to switch to chunked encoding or to close the connection
immediately after the response, which is particularly annoying for 401
or 407 for example. It's worth noting that the 3xx codes had already
been turned to HTTP/1.1.
This patch will obviously not change anything for user-provided error files.
The error message indicating an unknown keyword on an http-request rule
doesn't mention the "deny_status" option which comes with the "deny" rule,
this is particularly confusing.
This can be backported to all versions supporting this option.
This reverts commit 6c7fe5c3700fac7cc945b2b756df30874cbf77a6.
This patch was harmless, but not needed, conn_upgrade_mux_fe() already takes
care of setting the buffer to BUF_NULL.
The function htx_add_data_before() was removed because it was buggy. The
function htx_move_blk_before() may be used if necessary to do something
equivalent, except it just moves blocks. It doesn't handle the adding.
In an HTX message, there may be 2 available rooms to store a new block. The first
one is between the blocks and their payload. Blocks are added starting from the
end of the buffer and their payloads are added starting from the beginning. So
the first free room is between these 2 edges. The second one is at the beginning
of the buffer, when we start to wrap to add new payloads. Once we start to use
this one, the other one is ignored until the next defragmentation of the HTX
message.
In theory, there is no problem. But in practice, some shortcomings in the HTX
structure force us to defragment HTX messages too often to remain in a known
state. The second free room is not tracked as it should be, and the first one
may easily be corrupted when rewrites happen.
So to fix the problem and avoid unnecessary defragmentation, the HTX structure
has been refactored. The front (the position of the block with the first payload
before the blocks table) is no longer stored. Instead we keep the relative
addresses of 3 edges:
* tail_addr : The start address of the free space in front of the blocks
table
* head_addr : The start address of the free space at the beginning
* end_addr : The end address of the free space at the beginning
Here is the general view of the HTX message now:
head_addr end_addr tail_addr
| | |
V V V
+------------+------------+------------+------------+------------------+
| | | | | |
| PAYLOAD | Free space | PAYLOAD | Free space | Blocks area |
| ==> | 1 | ==> | 2 | <== |
+------------+------------+------------+------------+------------------+
<head_addr> is always lower than or equal to <end_addr> and <tail_addr>.
<end_addr> is always lower than or equal to <tail_addr>.
In addition, to simplify everything, the blocks area is now contiguous. It
doesn't wrap anymore. So the head is always the block with the lowest position,
and the tail is always the one with the highest position.
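A rough model of the three edges; the field names follow the description above
but this is not the exact struct htx definition:

    #include <stdint.h>

    struct htx_edges {
        uint32_t tail_addr;  /* start of the free space in front of the blocks table */
        uint32_t head_addr;  /* start of the free space at the beginning             */
        uint32_t end_addr;   /* end of the free space at the beginning               */
    };

    /* size of the free space in front of the blocks area (between
     * <tail_addr> and the start of the blocks area) */
    static uint32_t room_before_blocks(const struct htx_edges *e, uint32_t blocks_start)
    {
        return blocks_start - e->tail_addr;
    }

    /* size of the free space at the beginning (between <head_addr> and
     * <end_addr>) */
    static uint32_t room_at_beginning(const struct htx_edges *e)
    {
        return e->end_addr - e->head_addr;
    }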
There is no bug here, but this patch improves the debug message reported during
the random forwarding. The original offset is kept untouched so its value may be
used to format the message. Before, 0 was always reported.
The function htx_add_data_before() is buggy and cannot work. It first adds a data
block and then moves it before another one, passed as argument. The problem
happens when a defragmentation is done to add the new block. In this case, the
reference is no longer valid, because the blocks are rearranged. So, instead of
moving the new block before the reference, it is moved to the head of the HTX
message.
So this function has been removed. It was only used by the compression filter to
add a last data block before a TLR, EOT or EOM block. Now, the new function
htx_add_last_data() is used. It adds a last data block, after all others and
before any TLR, EOT or EOM block. Then, the next block is retrieved. It is the first
non-data block after data in the HTX message. The compression loop continues
with it.
This patch must be backported to 1.9.
Since the commit 8f3c256f7 ("MEDIUM: cache/htx: Always store info about HTX
blocks in the cache"), it is possible to read info about a data block without
sending anything. This is possible because we rely on the function htx_add_data(),
which will try to add data without any defragmentation. In such a case, info about
the data block is skipped but doesn't count in the data sent.
No need to backport this patch, except if the commit 8f3c256f7 is backported
too.
PiBa-NL found some pathological cases where starting threads can hinder
each other and cause a measurable slowdown. This problem is reproducible
with the following config (haproxy must be built with -DDEBUG_DEV) :
    global
        stats socket /tmp/sock1 mode 666 level admin
        nbthread 64

    backend stopme
        timeout server 1s
        option tcp-check
        tcp-check send "debug dev exit\n"
        server cli unix@/tmp/sock1 check
This will cause the process to be stopped once the checks are ready to
start. Binding all these to just a few cores magnifies the problem.
Starting them in loops shows a significant time difference among the
commits :
# before startup serialization
$ time for i in {1..20}; do taskset -c 0,1,2,3 ./haproxy-e186161 -db -f slow-init.cfg >/dev/null 2>&1; done
real 0m1.581s
user 0m0.621s
sys 0m5.339s
# after startup serialization
$ time for i in {1..20}; do taskset -c 0,1,2,3 ./haproxy-e4d7c9dd -db -f slow-init.cfg >/dev/null 2>&1; done
real 0m2.366s
user 0m0.894s
sys 0m8.238s
In order to address this, let's use plain mutexes and cond_wait during
the init phase. With this done, waiting threads now sleep and the problem
completely disappeared :
$ time for i in {1..20}; do taskset -c 0,1,2,3 ./haproxy -db -f slow-init.cfg >/dev/null 2>&1; done
real 0m0.161s
user 0m0.079s
sys 0m0.149s
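For illustration, here is a standalone sketch of sleep-based serialization with
a mutex and a condition variable; the turn-based scheme and names are
assumptions, not the actual haproxy code:

    #include <pthread.h>

    static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  init_cond = PTHREAD_COND_INITIALIZER;
    static int             init_turn;     /* id of the thread allowed to init */

    static void wait_my_turn(int id)
    {
        pthread_mutex_lock(&init_lock);
        while (init_turn != id)
            pthread_cond_wait(&init_cond, &init_lock);   /* sleep, don't spin */
        pthread_mutex_unlock(&init_lock);
    }

    static void done_with_my_turn(void)
    {
        pthread_mutex_lock(&init_lock);
        init_turn++;                                     /* hand over to the next thread */
        pthread_cond_broadcast(&init_cond);
        pthread_mutex_unlock(&init_lock);
    }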
When checking the result of an ebis_insert() call in an ebtree with unique keys,
if the key is already present, instead of freeing the old entry and returning
the new one, the correct way is to free the new one and return the old one. For
this, the __dict_insert() function was folded into dict_insert() as this
significantly simplifies the test for duplicates.
Thanks to Olivier for having reported this bug which came with this one:
"MINOR: dict: Add dictionary new data structure".
There's no point in calling this on each and every thread since the first
thread passing there will enable the listeners, and the next ones will
simply scan all of them in turn to discover that they are already
initialized. Let's only initialize them on the first thread. This could
slightly speed up startup on very large configurations, even though most
of the time is still spent in the main thread binding the sockets.
A few measurements have constantly shown that this decreases the startup
time by ~0.1s for 150k listeners. Starting all of them in parallel doesn't
provide better results and can still expose some undesired races.
Since commit 6ec902a ("MINOR: threads: serialize threads initialization")
we now serialize threads initialization. But doing so has emphasized another
race which is that some threads may actually start the loop before others
are done initializing.
As soon as all threads enter the first thread_release() call, their rdv
bit is cleared and they're all waiting for all others' rdv to be cleared
as well, with their harmless bit set. The first one to notice the cleared
mask will progress through thread_isolate(), take rdv again preventing
most others from noticing its short pass to zero, and this first one will
be able to run all the way through the initialization till the last call
to thread_release() which it happily crosses, being the only one with the
rdv bit, leaving the room for one or a few others to do the same. This
results in some threads entering the loop before others are done with
their initialization, which is particularly bad. PiBa-NL reported that
some regtests fail for him due to this (which was impossible to reproduce
here, but races are racy by definition). However placing some printf()
in the initialization code definitely shows this unsynchronized startup.
This patch takes a different approach in three steps :
- first, we don't start with thread_release() anymore and we don't
set the rdv mask anymore in the main call. This was initially done
to let all threads start together, which we don't want. Instead
we just start with thread_isolate(). Since all threads are harmful
by default, they all wait for each other's readiness before starting.
- second, we don't release with thread_release() but with
thread_sync_release(), meaning that we don't leave the function until
other ones have reached the point in the function where they decide
to leave it as well.
- third, it makes sure we don't start the listeners using
protocol_enable_all() before all threads have allocated their local
FD tables or have initialized their pollers, otherwise startup could
be racy as well. It's worth noting that it is even possible to limit
this call to thread #0 as it only needs to be performed once.
This now guarantees that all thread init calls start only after all threads
are ready, and that no thread enters the polling loop before all others have
completed their initialization.
Please check GH issues #111 and #117 for more context.
No backport is needed, though if some new init races are reported in
1.9 (or even 1.8) which do not affect 2.0, then it may make sense to
carefully backport this small series.
This function provides an alternate way to leave a critical section run
under thread_isolate(). Currently, a thread may remain in thread_release()
without having the time to notice that the rdv mask was released and taken
again by another thread entering thread_isolate() (often the same that just
released it). This is because threads wait in harmless mode in the loop,
which is compatible with the conditions to enter thread_isolate(). It's
not possible to make them wait with the harmless bit off or we cannot know
when the job is finished for the next thread to start in thread_isolate(),
and if we don't clear the rdv bit when going there, we create another
race on the start point of thread_isolate().
This new synchronous variant of thread_release() makes use of an extra
mask to indicate the threads that want to be synchronously released. In
this case, they will be marked harmless before releasing their sync bit,
and will wait for others to release their bit as well, guaranteeing that
thread_isolate() cannot be started by any of them before they all left
thread_sync_release(). This allows constructing synchronized blocks like
this :
thread_isolate()
/* optionally do something alone here */
thread_sync_release()
/* do something together here */
thread_isolate()
/* optionally do something alone here */
thread_sync_release()
And so on. This is particularly useful during initialization where several
steps have to be respected and no thread must start a step before the
previous one is completed by other threads.
This one must not be placed after any call to thread_release() or it would
risk blocking an earlier call to thread_isolate() which the current thread
managed to leave without waiting for others to complete, and end up here
with the thread's harmless bit cleared, blocking others. This might be
improved in the future.
thread_release() is to be called after thread_isolate(), i.e. when the
thread already has its harmless bit cleared. No need to clear it twice,
thus avoid calling thread_harmless_end() and directly check the rdv
bits then loop on them.