Dragan Dosen reported that after the multi-queue changes, appending
"process 1/even" on a bind line can make the process immediately crash
when delivering a first connection. This is because I believed that
thread_mask(mask) applied the all_threads_mask value, but it does not.
So with "even" or "odd", the resulting bits can cover more than the
available threads, causing too high a thread number to be selected and
a non-existing task to be woken up.
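In outline, the fix amounts to masking the configured bits with the
really available threads; a minimal sketch, assuming the surrounding
haproxy names (conf->bind_thread is illustrative):

    /* Sketch only: restrict the configured bind mask to the threads
     * that actually exist, so "even"/"odd" bits beyond global.nbthread
     * can never elect a non-existing thread. */
    unsigned long mask = thread_mask(conf->bind_thread) & all_threads_mask;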
No backport is needed.
Some packages used to rely on DEFAULT_MAXCONN to set the default global
maxconn value to use regardless of the initial ulimit. The recent changes
set this lowest bound to 100 so that it is compatible with almost any
environment. Now that DEFAULT_MAXCONN is not needed for anything else, we
can use it as the lowest bound applied when maxconn is not configured. This
way it retains its original purpose of setting the default maxconn value,
even though most of the time the effective value will be higher thanks to
the automatic computation based on "ulimit -n".
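A hedged sketch of the resulting behavior (the helper is illustrative;
only DEFAULT_MAXCONN and the value 100 come from the text above):

    #ifndef DEFAULT_MAXCONN
    #define DEFAULT_MAXCONN 100  /* lowest bound when maxconn is unset */
    #endif

    /* Illustrative only: clamp the automatically computed maxconn to
     * the DEFAULT_MAXCONN floor when the user did not configure one. */
    static int apply_maxconn_floor(int computed)
    {
        return (computed < DEFAULT_MAXCONN) ? DEFAULT_MAXCONN : computed;
    }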
This entry was still set to 2000 but not used anymore. The only places
where it appeared were as an alias to SYSTEM_MAXCONN, which forces it, so
let's turn those into SYSTEM_MAXCONN and remove the default value for
DEFAULT_MAXCONN. SYSTEM_MAXCONN still defines the upper bound, however.
When haproxy is built with 51Degrees support but not configured to use
a 51Degrees database, a segfault can occur when the deinit_51degrees()
function is called, e.g. during a soft stop on the SIGUSR1 signal.
Only builds that use the Pattern algorithm are affected.
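The shape of such a fix, as a hedged sketch (the flag is hypothetical
and the guard macro only mirrors the Pattern build variant):

    static int fiftyone_db_configured; /* hypothetical "db loaded" marker */

    /* Sketch only: make the teardown a no-op when no 51Degrees database
     * was configured, so soft-stop cannot free uninitialized resources. */
    static void deinit_51degrees(void)
    {
    #ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
        if (!fiftyone_db_configured)
            return;
        /* Pattern-specific cache/pool/dataset teardown goes here. */
    #endif
    }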
This fix must be backported to all stable branches where 51Degrees
support is available. Additional adjustments are required for some
branches due to API and naming changes.
Before commit c8d5b95, the "maxconn" of the backend of a dynamic
"use_backend" rule was not modified (adjusting it makes no sense for a
dynamic rule, so this behavior was correct). When implementing
proxy_adjust_all_maxconn(), commit c8d5b95 missed this case. With this
patch we adjust the "maxconn" of the backend of such rules only when
they are not dynamic.
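A minimal sketch of the resulting logic, using hypothetical types and
helper names:

    /* Sketch only: a dynamic use_backend rule has no fixed backend at
     * configuration time, so its backend's "maxconn" must be left
     * untouched. */
    static void maybe_adjust_rule_backend(struct switching_rule *rule)
    {
        if (rule->dynamic)                /* resolved at runtime: skip */
            return;
        adjust_backend_maxconn(rule->be); /* hypothetical helper */
    }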
Without this patch reg-tests/http-rules/h00003.vtc could make haproxy crash.
deinit_log_buffers() can be called once per thread, but startup_logs
is shared between all threads, so only attempt to free it once.
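A hedged sketch of the fix (the per-thread buffer names are assumptions;
only startup_logs comes from the text):

    /* Sketch only: per-thread buffers may be freed once per thread, but
     * startup_logs is shared, so free it a single time and NULL it. */
    static void deinit_log_buffers(void)
    {
        free(logheader);        /* per-thread */
        free(logline);          /* per-thread */
        if (startup_logs) {     /* shared by all threads */
            free(startup_logs);
            startup_logs = NULL;
        }
    }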
This should be backported to 1.9 and 1.8.
Tests show that it's slightly faster to have this field in the listener:
the cache access patterns are under heavy stress, and having this field
as the only one written to in the bind_conf was dirtying a cache line
that was otherwise heavily read. Let's move it close to the other entries
already written to in the listener. Warning: the position does have an
impact on peak performance.
Now that the P2C algorithm for the accept queue is removed, we don't
need to map a number to a thread bit anymore, so let's remove all
these fields which are taking quite some space for no reason.
At this point, the random used in the hybrid queue distribution algorithm
provides little benefit over a periodic scan, can even have a slightly
worse worst case, and requires establishing a mapping between a discrete
number and a thread ID within a mask.
This patch introduces a different approach using two indexes. One scans
the thread mask from the left, the other one from the right. The related
threads' loads are compared, and the least loaded one receives the new
connection. Then one index is adjusted depending on the load resulting
from this election, so that we start the next election from two known
lightly loaded threads.
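A self-contained sketch of the election under simplifying assumptions
(a flat per-thread load array instead of a thread mask, no locking, and
one plausible reading of the index-adjustment rule):

    /* Sketch only: idx1 scans thread IDs left-to-right, idx2
     * right-to-left. The less loaded candidate wins the connection and
     * the losing side's index advances, so the next election starts
     * from two threads known to be lightly loaded. */
    static int idx1, idx2;

    static int pick_thread(const unsigned int *load, int nbthread)
    {
        int t1 = idx1 % nbthread;
        int t2 = idx2 % nbthread;

        if (load[t1] <= load[t2]) {
            idx2 = (idx2 + nbthread - 1) % nbthread; /* step leftward */
            return t1;
        }
        idx1 = (idx1 + 1) % nbthread;                /* step rightward */
        return t2;
    }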
This approach provides an extra 1% peak performance boost over the previous
one, which likely corresponds to the removal of the extra work on the
random and the previously required two mappings of index to thread.
A test was attempted with two indexes going in the same direction, but it
was much less interesting because the same thread pairs were compared most
of the time, with the load climbing in a ladder-like pattern. With reversed
directions this cannot happen.
By picking two random threads following the P2C algorithm, we sometimes
observe asymmetric loads on bursts of small session counts. This is
typically what makes h2load take a bit of time to complete the last 100%,
because if a thread gets two connections while the other ones only get
one, it takes twice the time to complete its work.
This patch proposes a modification of the p2c algorithm which seems more
suitable to this case: it mixes a rotating index with a random. This way,
we're certain that all threads are consulted in turn, while at the same
time we're not forced to pick the one being consulted.
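A minimal sketch of the mix, again assuming a flat load array (the real
code works on the thread mask attached to the listener):

    #include <stdlib.h>

    /* Sketch only: one candidate comes from a rotating index so all
     * threads are consulted in turn, the other from a random draw so
     * we're never forced to pick the consulted one; the less loaded of
     * the two receives the connection. */
    static int pick_thread_rr_p2c(const unsigned int *load, int nbthread,
                                  unsigned int *rr_idx)
    {
        int t1 = (*rr_idx)++ % nbthread; /* rotating candidate */
        int t2 = rand() % nbthread;      /* random candidate */

        return (load[t1] <= load[t2]) ? t1 : t2;
    }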
This significantly increases the traffic rate. h2load now completes
faster, and the average H2 request rate and the TLS resume rate increase
by a bit more than 5% compared to pure p2c.
The index was placed into the struct bind_conf because it's faster there
and it's the best place to optimally distribute traffic among a group of
listeners. It's the only runtime-modified element there and it will be
quite cache-hot.
This patch adds "protobuf" protocol buffers specific converter wich
may used in combination with "ungrpc" as first converter to extract
a protocol buffers field value. It is simply implemented reusing
protobuf_field_lookup() which is the protocol buffers specific parser already
used by "ungrpc" converter which only parse a gRPC header in addition of
parsing protocol buffers message.
Update the documentation for this new "protobuf" converter.
We move the code responsible for parsing protocol buffers messages
embedded in gRPC messages from sample.c to include/proto/protocol_buffers.h
so that it can be reused to cascade converters after "ungrpc".
In 84e417d8 ("MINOR: ssl: support Openssl 1.1.1 early callback for
switchctx") the code was extended to also support OpenSSL 1.1.1
(the code already supported BoringSSL). A configuration check warning
was updated, but with the wrong logic: the #ifdef needs a && instead
of an ||.
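In outline, the corrected guard looks like this (a sketch, not the
exact macros surrounding it in haproxy):

    /* Sketch only: the warning must fire when NEITHER BoringSSL NOR
     * OpenSSL >= 1.1.1 provides the callback, hence && rather than ||. */
    #if !defined(OPENSSL_IS_BORINGSSL) && (OPENSSL_VERSION_NUMBER < 0x10101000L)
    #warning "early callback for switchctx not supported by this SSL library"
    #endif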
Reported in #54.
Should be backported to 1.8.
Emeric reports that when MAX_THREADS and/or MAX_PROCS are set to lower
values, referencing thread or process numbers higher than these limits
in cpu-map returns errors. This is annoying because these are typically
silent settings that are only expected to apply when set. Let's switch
back to LONGBITS for this limit.
Since the "wurfl" device detection engine was merged slightly more than
two years ago (2016-11-04), it never received a single fix nor update.
For almost two years it didn't receive even the minimal review or changes
needed to be compatible with threads, and it's remained build-broken for
about the last 9 months, consecutive to the last buffer API changes,
without anyone ever noticing! When asked on the list, nobody confirmed
using it :
https://www.mail-archive.com/haproxy@formilux.org/msg32516.html
And obviously nobody even cared to verify that it did still build. So we
are left with this broken code with no user and no maintainer. It might
even suffer from remotely exploitable vulnerabilities without anyone
being able to check if it presents any risk. It's a pain to update each
time there is an API change because it doesn't build as it depends on
external libraries that are not publicly accessible, leading to careful
blind changes. It slows down the whole project. This situation is not
acceptable at all.
It's time to cure the problem where it is. This patch removes all this
dead, non-buildable, non-working code. If anyone ever decides to use it,
which I seriously doubt based on history, it could be reintegrated, but
this time the following guarantees will be required:
- someone has to step up as a maintainer and have his name listed in
the MAINTAINERS file (I should have been more careful last time).
This person will take the sole blame for all issues and will be
responsible for fixing the bugs and incompatibilities affecting
this code, and for making it evolve to follow regular internal API
updates.
- support building on a standard distro with automated tools (i.e. no
more "click on this site, register your e-mail and download an
archive then figure out how to place this into your build system").
Dummy libs are OK though as long as they allow the mainline code to
build and start.
- multi-threaded support must be fixed. I mean seriously, not worked
around with a check saying "please disable threads, we've been busy
fishing for the last two years".
This may be backported to 1.9 given that the code has never worked there
either, thus at least we're certain nobody will miss it.
For now on, "ungrpc" may take a second optional argument to provide
the protocol buffers types used to encode the field value to be extracted.
When absent the field value is extracted as a binary sample which may then
followed by others converters like "hex" which takes binary as input sample.
When this second argument is a type which does not match the one found by "ungrpc",
this field is considered as not found even if present.
With this patch we also remove the useless "varint" and "svarint" converters.
Update the documentation for the "ungrpc" converter.
Parsing protocol buffers fields always consists in either skipping the
field when it is not the one looked up, or storing its value when it is.
So, with this patch we factor out a bit of the code of the "ungrpc"
converter.
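For illustration, the primitive both paths rely on is the varint
decoder, since skipping and storing both need to find where a field
ends; a minimal standalone version (not the haproxy one):

    #include <stdint.h>

    /* Decode a protocol buffers varint; advance *pos past it and
     * return 1 on success, or 0 on truncated/oversized input. */
    static int pb_varint_decode(const uint8_t **pos, const uint8_t *end,
                                uint64_t *val)
    {
        uint64_t v = 0;
        int shift = 0;

        while (*pos < end && shift < 64) {
            uint8_t b = *(*pos)++;

            v |= (uint64_t)(b & 0x7f) << shift;
            if (!(b & 0x80)) {
                *val = v;
                return 1;
            }
            shift += 7;
        }
        return 0;
    }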
While the legacy code converts h2 to h1 and provides some control over
what is passed, in htx mode there is no such control and it is possible
to pass control chars and linear white spaces in the path, which are
possibly reencoded differently once passed to the H1 side.
HTX supports parse error reporting using a special flag. Let's check the
correctness of the :path pseudo-header and report any anomaly via the
HTX flag.
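A hedged sketch of such a check (the function and its arguments are
illustrative; HTX_FL_PARSING_ERROR is the flag mentioned below):

    /* Sketch only: reject control characters and spaces in the :path
     * pseudo-header and flag the message as a parse error. */
    static int h2_path_is_valid(const unsigned char *ptr, size_t len,
                                unsigned int *htx_flags)
    {
        size_t i;

        for (i = 0; i < len; i++) {
            if (ptr[i] <= 0x20 || ptr[i] == 0x7f) { /* CTL, SP or DEL */
                *htx_flags |= HTX_FL_PARSING_ERROR;
                return 0;
            }
        }
        return 1;
    }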
Thanks to Jérôme Magnin for reporting this bug with a working reproducer.
This fix must be backported to 1.9 along with the two previous patches
("MINOR: htx: unconditionally handle parsing errors in requests or
responses" and "MINOR: mux-h2: always pass HTX_FL_PARSING_ERROR
between h2s and buf on RX").
In order to allow the H2 parser to report parsing errors, we must make
sure to always pass the HTX_FL_PARSING_ERROR flag from the h2s htx to
the conn_stream's htx.
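The propagation itself is tiny; its shape, sketched with illustrative
variable names:

    /* Sketch only: when transferring from the h2s htx to the
     * conn_stream's htx, carry the parsing-error flag along. */
    if (h2s_htx->flags & HTX_FL_PARSING_ERROR)
        buf_htx->flags |= HTX_FL_PARSING_ERROR;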
The htx request and response processing functions currently only check
for HTX_FL_PARSING_ERROR on incomplete messages because that's how mux_h1
delivers these. However with H2 we have to detect some parsing errors in
the format of certain pseudo-headers (e.g. :path), so we do have a complete
message but we want to report an error.
Let's move the parse error check earlier so that it always triggers when
the flag is present. It was also moved for htx_wait_for_request_body()
since we definitely want to be able to abort processing such an invalid
request even if it appears complete, but it was not changed in the forward
functions so as not to truncate contents before the position of the first
error.
This patch simply extracts the code of smp_fetch_req_ungrpc() for
"req.ungrpc" from http_fetch.c and moves it to sample.c with very few
modifications. Furthermore, smp_fetch_body_buf(), used to fetch the body
contents, is no longer needed.
Update the documentation for gRPC.
A crash in H2 was reported in issue #52. It turns out that there is a
small but real race by which a conn_stream could detach itself
using h2_detach(), not being able to destroy the h2s due to pending
output data blocked by flow control, then upon next h2s activity
(transfer_data or trailers parsing), an ES flag may need to be turned
into a CS_FL_REOS bit, causing a dereference of a NULL stream. This is
a side effect of the fact that we still have a few places which
incorrectly depend on the CS flags, while these flags should only be
set by h2_rcv_buf() and h2_snd_buf().
All candidate locations along this path have been secured against this
risk, but the code should really evolve to stop depending on these CS
flags altogether.
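The securing pattern, sketched (the field names follow the text; the
exact sites differ):

    /* Sketch only: the conn_stream may already have detached, so check
     * h2s->cs before turning a received ES into CS_FL_REOS. */
    if (h2s->cs)
        h2s->cs->flags |= CS_FL_REOS;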
This fix must be backported to 1.9 and possibly partially to 1.8.
The global maxconn value is often a pain to configure:
- in development the user never has the permission to raise the
rlim_cur value high enough and gets warnings all the time;
- in some production environments, users may have limited control over
it or may only be able to act on rlim_fd_cur using ulimit -n. This
is sometimes particularly true in containers or whatever environment
where the user has no privilege to upgrade the limits.
- keeping the config homogeneous between machines is not easy either.
We already had the ability to automatically compute maxconn from the
memory limits when they were set. This patch goes a bit further by also
computing the limit permitted by the configured limit on the number of
FDs. For this it simply reverses the rlim_fd_cur calculation to determine
maxconn based on the number of reserved sockets for listeners & checks,
the number of SSL engines and the number of pipes (absolute or relative).
This way it becomes possible to make maxconn always be the highest possible
value resulting in maxsock matching what was set using "ulimit -n", without
ever setting it. Note that we adjust to the soft limit, not the hard one,
since it's what is configured with ulimit -n. This allows users to also
limit to low values if needed.
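A simplified, self-contained sketch of the reversal (assuming two fds
per connection and folding listeners, checks, SSL engines and pipes into
a single reserved_fds count; the real formula is more detailed):

    #include <sys/resource.h>

    #define DEFAULT_MAXCONN 100  /* lowest bound, see above */

    /* Sketch only: reverse "maxsock = 2*maxconn + reserved" against the
     * soft fd limit (what "ulimit -n" sets), not the hard one. */
    static int compute_maxconn(int reserved_fds)
    {
        struct rlimit lim;
        int maxconn;

        if (getrlimit(RLIMIT_NOFILE, &lim) != 0)
            return DEFAULT_MAXCONN;

        maxconn = ((int)lim.rlim_cur - reserved_fds) / 2;
        return (maxconn < DEFAULT_MAXCONN) ? DEFAULT_MAXCONN : maxconn;
    }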
Just like before, the calculated value is reported in verbose mode.
We'll need to know the global maxsock before the maxconn calculation.
Actually only two components were calculated too late: the peers FD
and the stats FD. Let's move them a few lines upward.