Today the demux only wakes a stream up after receiving some contents, but
not necessarily on close or error. Let's do it based on both error flags
and both EOS flags. With a bit of refinement we should be able to only do
it when the pending bits are there but not the static ones.
No backport is needed.
This function is called when dealing with a connection error or a GOAWAY
frame. It used to report a synchronous error instead of an asynchronous
error, which can lead to data truncation since whatever is still available
in the rxbuf will be ignored. Let's correctly use CS_FL_ERR_PENDING instead
and only fall back to CS_FL_ERROR if CS_FL_EOS was already delivered.
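A minimal sketch of the intended flag handling (the surrounding error
path is assumed):

    cs->flags |= CS_FL_ERR_PENDING;   /* async error: rxbuf data stay readable */
    if (cs->flags & CS_FL_EOS)
        cs->flags |= CS_FL_ERROR;     /* EOS already delivered: no reads left */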
No backport is needed.
If EOS has already been reported on the conn_stream, there won't be
any read anymore to turn ERR_PENDING into ERROR, so we have to
report it directly.
No backport is needed.
It takes ages to proceed with "show fd" when there is sustained activity
because it uses the rendez-vous point for each and every file descriptor
in the loop. It's very common to see socat time out there.
Instead of doing this, let's just isolate the function when entering the
loop. Its duration is limited by the number of FDs that may be emitted in
a single buffer anyway, so it's much lighter and responds much faster.
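A hedged sketch of the approach, using HAProxy's thread_isolate() and
thread_release() (the loop body is assumed):

    thread_isolate();                  /* enter the rendez-vous point once */
    for (fd = 0; fd < global.maxsock; fd++) {
        /* ... dump this FD's state into the output buffer ... */
    }
    thread_release();                  /* let the other threads run again */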
The h2s pointer was used to scan fctl lists prior to being used to scan
the send list by ID, so it could appear non-null even though the list is
empty, resulting in misleading information on empty connections.
No backport is needed.
Most of the time when we issue "show fd" to dump a mux's state, it's
to figure why a transfer is frozen. Connection, stream and conn_stream
states are critical there. And most of the time when this happens there
is a single stream left in the H2 mux, so let's always dump the last
known stream on show fd, as most of the time it will be the one of
interest.
Cyril Bonté reported a bug in the way the cookie length is computed
when aggregating multiple cookies: the first cookie name was counted
as part of the value length, causing random contents to be placed there,
possibly leading to bad requests.
No backport is needed.
Commit 7505f94f9 ("MEDIUM: h2: Don't use a wake() method anymore.")
changed the conditions to restart demuxing so that this happens as soon
as something is read. But similarly to the previous fix, at an end of stream
we may be woken up with nothing to read but data still available in the
demux buffer, so we must also use this as a valid condition for demuxing.
No backport is needed, this is purely 1.9.
Commit 082f559d3 ("BUG/MEDIUM: h2: restart demuxing after releasing
buffer space") tried to address a situation where transfers could stall
after a read, but the condition was not completely covered : some stalls
may still happen at end of stream because there's nothing anymore to
receive and the last data lie in the demux buffer. Thus we must also
consider this state as a valid condition to restart demuxing.
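A hedged sketch of the resulting condition (names assumed from the H2
mux, with `ret` being the amount of data just received):

    if (ret > 0 || b_data(&h2c->dbuf))   /* read something OR data pending */
        h2_process_demux(h2c);           /* demux what is already buffered too */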
No backport is needed.
Commit d94f877cd ("BUG/MINOR: mux_pt: Set CS_FL_WANT_ROOM when count is
zero in rcv_buf() callback") triggered a pending issue with this flag,
which is that it's cleared too late and sometimes causes some Rx
transfers to stall. We need to clear it before attempting to receive,
otherwise we risk acting on an earlier copy of the flag.
Note that it should probably be defined that this flag could be purged
on each invocation of mux->rcv_buf(), which would make sense.
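A minimal sketch of the fix (the exact call site is assumed):

    cs->flags &= ~CS_FL_WANT_ROOM;      /* purge any stale copy first */
    ret = cs->conn->mux->rcv_buf(cs, buf, count, flags);  /* then receive */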
No backport is needed.
Add a new flag to conn_streams, CS_FL_ERR_PENDING. This is to be set instead
of CS_FL_ERROR in case there's still more data to be read, so that we read
all the data before closing.
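A hedged sketch of how the read side may then promote it (the rxbuf
name is an assumption):

    /* once everything was consumed, the pending error becomes final */
    if ((cs->flags & CS_FL_ERR_PENDING) && !b_data(&rxbuf))
        cs->flags |= CS_FL_ERROR;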
When count is zero in the function mux_pt_rcv_buf(), it means the channel's
buffer is full, so we need to set CS_FL_WANT_ROOM on the conn_stream.
Otherwise, while the channel is full, we would keep trying to receive more
data in a loop.
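A minimal sketch of the check at the top of the function (signature as
in the mux_ops rcv_buf callback):

    static size_t mux_pt_rcv_buf(struct conn_stream *cs, struct buffer *buf,
                                 size_t count, int flags)
    {
        if (!count) {
            cs->flags |= CS_FL_WANT_ROOM;   /* channel full: wait for room */
            return 0;
        }
        /* ... regular receive path ... */
    }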
A bug was introduced when the buffers API was refactored: when wrapping
input data were compressed, the pointer b_peek(in, 0) was used instead of
b_orig(in). b_peek(in, 0) is in fact the same as b_head(in).
Commit 19ed92b ("MINOR: hpack: optimize header encoding for short names")
introduced an error in the space computation for short names, as it removed
the length encoding from the count without replacing it with 1 (the minimum
length byte). This results in the last byte of the area occasionally being
overwritten, which is immediately detected with -DDEBUG_MEMORY_POOLS as
the canary at the end gets overwritten.
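As a worked illustration of the arithmetic (purely hypothetical names):

    /* a 4-char short name such as "host" costs 1 length byte plus
     * 4 name bytes = 5 bytes; counting only the 4 name bytes lets
     * the encoder write 1 byte past the reserved room */
    needed = 1 + name_len;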
No backport is needed.
h1_process_input() may occasionally be called with an empty input
buffer, and the code behind it cannot deal with that; let's check for
this condition at the beginning.
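A minimal sketch of such a guard (b_data() returns the amount of data
in the buffer):

    if (!b_data(buf))   /* nothing to parse yet */
        return 0;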
No backport is needed.
In h2_deferred_shut, if we're done sending the shutr/shutw, don't destroy
the h2s if it still has a conn_stream attached, or the conn_stream may try
to access it again.
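A hedged sketch of the release condition (h2s_destroy() assumed from
the H2 mux):

    if (!h2s->cs)          /* no conn_stream references it anymore */
        h2s_destroy(h2s);  /* only now is it safe to release the h2s */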
The cache runs in an applet, so it delivers data into the input side
of the channel's buffer. Thus it must also abort feeding the buffer
as soon as CF_SHUTR is present, not just CF_SHUTW*, since the latter
may only appear later. There doesn't seem to be an observable
side effect of this bug, the fix probably doesn't even need to be
backported.
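A hedged sketch of the test (res being the channel the applet feeds,
flags as in channel.h):

    if (res->flags & (CF_SHUTR | CF_SHUTW | CF_SHUTW_NOW))
        goto abort;   /* stop feeding the input side of the channel */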
The HTX-specific cache code uses HTX_CACHE_* states which overlap with
the legacy HTTP states. A typo in the error handling made the state
become HTTP_CACHE_END, which equals 3 and is the value for HTX_CACHE_EOD,
which explains why we were seeing a transition to trailers and memory
corruption.
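A hedged sketch of the one-line fix (state names as described above):

    appctx->st0 = HTX_CACHE_END;   /* the typo read HTTP_CACHE_END,
                                    * which equals HTX_CACHE_EOD (3) */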
No backport is needed.
When creating new conn_streams, always set the CS_FL_NOT_FIRST flag. We
don't really care about being the first request for HTTP/2, this only
really makes sense for HTTP/1, and that way we can reuse connections.
Add the newly created connection to the idle list as long as
http-reuse != never, and when completing an H2 request, add the connection
to the safe list instead of the idle list if we have to add it at that
point: it means we created many streams, so we know it's safe.
In the session, don't keep an unlimited number of connections that can idle.
Add a new frontend parameter, "max-session-srv-conns" to set a max number,
with a default value of 5.
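A hypothetical configuration excerpt using it:

    frontend fe
        bind :8080
        max-session-srv-conns 10    # default is 5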
Instead of trying to get the session from the connection, which is not
always there, and of course there could be multiple sessions per connection,
provide it with the init() and attach() methods, so that we know the
session for each outgoing stream.
In the mux_h1 and mux_h2, move the test to see if we should add the
connection to the idle list to after the h1s/h2s is destroyed; that way,
later, we'll be able to check if the connection has no stream at all, and
if it should be added to the server's idle list.
Instead of the old "idle-timeout" mechanism, add a new option,
"pool-purge-delay", that sets the delay before purging idle connections.
Each time the delay expires, we destroy half of the idle connections.
Add a new server keyword, "pool-max-conn", that sets the maximum number of
connections waiting in the orphan idle connections list (as activated with
idle-timeout). Using "-1" means unlimited. Using pools is now dependent on this.
Change the default for http-reuse from "never" to "safe", as it has been
the recommended setting for a few versions now and backend H2 makes little
sense without it.
Some warnings were removed from the config parser since reuse can now be
dynamically disabled depending on the server's configuration, so there's no
need to disable it on a whole backend just for one server.
Caching the response with the compression enabled was totally broken. To fix
the problem, the compression must be done after caching the response;
otherwise the cache would have to store both compressed and uncompressed
objects for the same resource. Since that is not possible for now, it is
forbidden to declare the compression filter before the cache one. To ease the
configuration, both can be implicitly declared (without the "filter" keyword),
in which case the compression is automatically inserted after the cache.
Then, to make it work this way, the compression filter has been slightly
modified. Now, the response headers are updated after the http-response rules
are evaluated, instead of before. So, if the response contains a
"Content-Length" header, it will be kept with the response stored in the
cache, and this cached response will be able to be served to clients not
supporting compression at all.
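A hypothetical configuration showing the required order (the cache
filter declared before the compression one):

    backend app
        filter cache my-cache
        filter compression
        http-request cache-use my-cache
        http-response cache-store my-cache

    cache my-cache
        total-max-size 4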
This bugfix concerns the per-thread deinit but affects the master process.
When the master process falls into wait mode (after failing to reload the
configuration), it runs deinit_pollers_per_thread() and closes the thread
waker pipe: it closes rd (-1) and wr (0).
Closing an FD in the master can have several side effects and the
process will probably quit at some point.
In this case it assigns 0 to the socketpair of a worker during the next
successful reload, and then closes the socketpair once it falls into wait
mode again. The worker then assumes that the master died and leaves.
Commit f8188c6 ("MEDIUM: threads/logs: Make logs thread-safe") made logs
thread-local but it also made the copy of the startup-logs thread-local,
meaning that when threads are configured, upon startup the list of startup
logs appears to be empty. Let's just remove the THREAD_LOCAL directive
there, as the check for the startup period is already present.
This fix should be backported to 1.8.
When refactoring the build option strings in 1.9, the thread support
was placed outside of the ifdef block resulting in threads always being
mentioned even if that was not true. Let's fix this and also mention
when threads are disabled to help troubleshooting.
Removing deprecated APIs is an optional part of OpenWrt's build system to
save some space on embedded devices.
Also added compatibility for LibreSSL.
Signed-off-by: Rosen Penev <rosenp@gmail.com>
PiBa-NL reported an issue affecting logs when stdout is enabled at the
same time as an IP address. It does not affect FD and UNIX, but does
still affect multiple FDs. What happens is that the condition to detect
that the initialization was not made relies on the FD being -1, and in
this case the FD points to the *unique* FD used for AF_INET sockets, so
the configured socket used for outgoing logs over UDP gets overwritten
by the last configured FD. This is not appropriate, so instead we rely
on the sin_port part of the IPv4-mapped address to store the
initialization state for each FD.
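A hedged sketch of the idea (address layout assumed):

    struct sockaddr_in *sin = (struct sockaddr_in *)&logsrv->addr;
    if (!sin->sin_port) {        /* this FD was not initialized yet */
        /* ... perform the per-FD initialization ... */
        sin->sin_port = 1;       /* remember it for next time */
    }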
This part deserves being significantly revamped, as IPv6 is still not
possible due to the way the FDs are managed, and inherited FDs are a
bit hackish.
Note that this patch relies on "MINOR: tools: preset the port of
fd-based "sockets" to zero" in order to operate properly.
No backport is needed.
Addresses made of a file descriptor store the file descriptor into the
sin_addr part of the address. Contrary to other address classes, there's
no way to figure later based on the FD if an initialization was done
(which is how logs initialize their FDs). The port part is currently
left with random data, so let's instead specifically set the port part
to zero when creating an FD, and let the code using it set whatever
info it needs there, typically an initialization state.
PiBa-NL reported that since this commit f157384 ("MINOR: backend: count
the number of connect and reuse per server and per backend"), reg-test
connection/h00001 fails. Indeed it does: the server is not checked for
existence prior to updating its counters. It should also fail with
transparent mode.
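A hedged sketch of the guard (the counter field name is an assumption):

    if (objt_server(s->target))   /* may be NULL, e.g. transparent mode */
        HA_ATOMIC_ADD(&objt_server(s->target)->counters.reuse, 1);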
Commit 84cca66 ("BUG/MEDIUM: htx: When performing zero-copy, start from
the right offset.") uncovered another issue which is that the send function
may occasionally be called without any block. It's important to check for
this case when computing the zero-copy offsets.
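A minimal sketch of the added check (htx_get_head_blk() returns NULL
when the HTX buffer holds no block):

    blk = htx_get_head_blk(htx);
    if (!blk)
        return 0;   /* nothing to send, no offset to compute */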
No backport is needed.
In sess_build_logline(), don't attempt to call sample_fetch_as_type()
if we don't have a stream.
It never used to happen in the past before commit 09bb27c ("MEDIUM: log:
make sess_build_logline() support being called with no stream"). But now
it can happen when sess_log() is called from the lower layers (i.e. the
H2 mux got garbage when it was expecting a preface frame), and it reveals
that some sample fetch functions and some converter functions which rely
on the stream don't test for its existence. For the sample fetch functions,
a durable solution is easy and would consist in adapting sample_process()
to verify the SMP_USE_* bits when the stream is not set. But for the
converters we don't have this option as they don't declare whether or not
they use a stream (which possibly is the real issue).
Thus for now let's disable sample_fetch_as_type() if a stream does not
exist, until it can be more accurately refined later.
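A hedged sketch of the guard in sess_build_logline() (the node name and
the exact sample_fetch_as_type() signature are approximations):

    smp = NULL;
    if (strm)   /* stream-dependent sample: only fetch with a stream */
        smp = sample_fetch_as_type(be, sess, strm, SMP_OPT_FINAL,
                                   node->expr, SMP_T_STR);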
If a reload was issued to the master process and failed, it is critical
that the admin sees it because it means that the saved configuration
does not work anymore and might not be usable after a full restart. For
this reason in this case we modify the "master" prompt to explicitly
indicate that a reload failed.
If the reload fails after the parsing of the configuration, the
mworker_proc structures are created for the processes it tried to
create.
The mworker_proc_list_to_env() function was exporting these uninitialized
structures in the "HAPROXY_PROCESSES" environment variable, which led
to this kind of output in "show proc":
4294967295 worker [was: 1] 1 17879d 16h26m28s
When using zero-copy, start from the beginning of the data, not from the
beginning of the buffer: the buffer may have contained headers, so the data
won't necessarily start at the beginning of the buffer.
This patch is a bit huge but nothing special here. Some functions have been
duplicated to support HTX, some others have a switch inside to do so. So,
now, it is possible to create HTTP applets from HTX proxies. However, TCP
applets remain unsupported.
Functions from the Channel class are now forbidden for Lua scripts called
from HTTP proxies. These functions totally hijacked the HTTP parser, leaving
it in an undefined state. This patch is tagged as MAJOR because it could be
seen as a compatibility breakage. But a Lua script using one of these
functions has a very low probability of working correctly, except by chance.
So, concretely, the following functions are concerned: Channel.get,
Channel.dup, Channel.getline, Channel.set, Channel.append, Channel.send,
and Channel.forward. Others remain available.