We used to mix high latency tasks and low latency tasklets in the same
list, and to even refill bulk tasklets there, causing some unfairness
in certain situations (e.g. poll-less transfers between many connections
saturating the machine with similarly-sized in and out network interfaces).
This patch changes the mechanism to split the load into 3 lists depending
on the task/tasklet's desired classes :
- URGENT: this is mainly for tasklets used as deferred callbacks
- NORMAL: this is for regular tasks
- BULK: this is for bulk tasks/tasklets
Arbitrary ratios of max_processed are picked from each of these lists in
turn, with the ability to complete in one list from what was not picked
in the previous one. After some quick tests, the following setup gave
apparently good results both for raw TCP with splicing and for H2-to-H1
request rate:
- 0 to 75% for urgent
- 12 to 50% for normal
- 12 to what remains for bulk
Bulk is not used yet.
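Purely as an illustration of the budget idea (this is not the scheduler code,
and every name below is made up), the per-class share of max_processed could be
derived like this, with whatever one class leaves unused becoming available to
the next:

    enum { Q_URGENT, Q_NORMAL, Q_BULK, Q_CNT };

    /* computes how many entries each class may run out of <max> in total */
    void split_budget(int max, const int queued[Q_CNT], int budget[Q_CNT])
    {
        int left = max;
        int cap;

        /* urgent tasklets: up to 75% of the total budget */
        cap = max * 3 / 4;
        budget[Q_URGENT] = queued[Q_URGENT] < cap ? queued[Q_URGENT] : cap;
        left -= budget[Q_URGENT];

        /* normal tasks: up to 50% of the total, within what is left */
        cap = max / 2;
        if (cap > left)
            cap = left;
        budget[Q_NORMAL] = queued[Q_NORMAL] < cap ? queued[Q_NORMAL] : cap;
        left -= budget[Q_NORMAL];

        /* bulk: whatever remains */
        budget[Q_BULK] = queued[Q_BULK] < left ? queued[Q_BULK] : left;
    }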
New function run_tasks_from_list() iterates over a tasklet list and runs all
the tasks and tasklets it finds there, within a limit of <max> passed as
argument. This is preliminary work for scheduler QoS improvements.
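As a rough, self-contained sketch of what such a helper does (names and types
are illustrative, not HAProxy's), it boils down to consuming at most <max>
entries from a list and reporting what was left of the budget:

    struct work {
        struct work *next;
        void (*process)(struct work *w);
    };

    /* runs at most <max> entries from <list>; returns the unused budget */
    int run_from_list(struct work **list, int max)
    {
        while (max > 0 && *list) {
            struct work *w = *list;

            *list = w->next;     /* detach the entry before running it */
            w->process(w);
            max--;
        }
        return max;
    }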
Previous patch 160287b676 ("MEDIUM: pipe/thread: maintain a per-thread
local cache of recently used pipes") didn't replace all pipe counter
updates with atomic ops since some were already under a lock, which is
obviously not a valid reason since these ones can be updated in parallel
to other atomic ops. The result was that pipes_used could occasionally be
seen as negative in the stats (harmless), but this could also result in
slightly more pipes being allocated than permitted, thus stealing a few
file descriptors that were no longer usable for connections. Let's use
pure atomic ops everywhere these counters are updated.
No backport is needed.
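For illustration only, using C11 atomics rather than HAProxy's own atomic
macros, the intent is simply that every update of these shared counters is
atomic, even on paths that happen to hold some other lock:

    #include <stdatomic.h>

    atomic_int pipes_used;   /* pipes currently handed out */
    atomic_int pipes_free;   /* pipes sitting in the pool */

    void account_pipe_get(void)
    {
        atomic_fetch_add(&pipes_used, 1);
        atomic_fetch_sub(&pipes_free, 1);
    }

    void account_pipe_put(void)
    {
        atomic_fetch_sub(&pipes_used, 1);
        atomic_fetch_add(&pipes_free, 1);
    }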
In order to completely remove the pipe locking cost and try to reuse
hot pipes, each thread now maintains a local cache of recently used pipes
that is no larger than its share (maxpipes/nbthreads). All extra pipes
are instead refilled into the global pool. Allocations are made from the
local pool first, then fall back to the global one, and only allocate a new
pipe as a last resort.
This completely removes the observed pipe locking cost at high bit rates,
which was still around 5-6%.
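The principle can be sketched as follows (simplified, with made-up names; the
real code obviously deals with actual pipe FDs and the maxpipes/nbthreads
limit):

    #include <pthread.h>
    #include <stdlib.h>

    struct pipe_entry { struct pipe_entry *next; };

    static __thread struct pipe_entry *local_cache;  /* per-thread, lock-free */
    static struct pipe_entry *global_pool;
    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

    struct pipe_entry *take_pipe_entry(void)
    {
        struct pipe_entry *p = local_cache;

        if (p) {                                /* hot path: no lock at all */
            local_cache = p->next;
            return p;
        }

        pthread_mutex_lock(&pool_lock);         /* fall back to the shared pool */
        p = global_pool;
        if (p)
            global_pool = p->next;
        pthread_mutex_unlock(&pool_lock);

        if (!p)
            p = calloc(1, sizeof(*p));          /* last resort: a new allocation */
        return p;
    }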
In a quick test involving splicing, we can see that get_pipe() and
put_pipe() together consume up to 12% of the CPU. That's not surprising
considering how much work is performed under the lock, including the
pipe struct allocation, the pipe creation and its initialization. The same
goes for releasing: we don't need a lock there to call close() nor to free
the structure back to the pool.
Changing this alone was enough to cut the overhead in half. A better
approach should consist in having a per-thread pipe cache, which will
also help keep pages hot in the CPU caches.
src/ssl_sock.c: In function ‘cli_io_handler_show_cert’:
src/ssl_sock.c:10214:6: warning: unused variable ‘n’ [-Wunused-variable]
int n;
^
Fix this problem in the I/O handler of the "show ssl cert" function.
Given that raw_sock's functions solely act on connections and that all its
callers properly use subscribe() when they want to receive/send more, there
is no more reason for calling fd_{cant,cond,done}_{send,recv} anymore as
this call is immediately overridden by the subscribe call. It's also worth
noting that fd_cond_recv(), whose purpose was to speculatively enable reading
in the FD cache when the FD was active but not yet polled in order to save on
expensive epoll_ctl() calls, is now implicitly covered more cleanly by recent
commit 5d7dcc2a8e ("OPTIM: epoll: always poll for recv if
neither active nor ready").
No change in the number of calls to epoll_ctl() was noticed following this
change.
This log could sometimes be a bit confusing when read (depending on the
number; e.g., is it the number of active connections?): only trained eyes
know that haproxy outputs a different log when closing active connections
while stopping.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Hostnames were limited to 62 characters, which is not RFC 1035 compliant;
- the parsing loop should stop when exceeding the maximum label length
- fix the label length test where d[i] was wrongly used
- simplify the whole function to avoid using two extra char* variables
this should fix github issue #387
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Reviewed-by: Tim Duesterhus <tim@bastelstu.be>
Acked-by: Baptiste <bedis9@gmail.com>
triggered by coverity; src_port is set earlier.
this should fix github issue #467
Fixes: 7fec021537 ("MEDIUM: proxy_protocol: Convert IPs to v6 when
protocols are mixed")
This should be backported to 1.8.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Reviewed-by: Tim Duesterhus <tim@bastelstu.be>
Many HTTP actions rely on <.arg.http> in the act_rule structure. Not all actions
use the log-format expression, but it must be initialized anyway. Otherwise,
HAProxy may crash during the deinit when the release function is called.
No backport needed. This patch should fix issue #468.
In issue #465, we see that Coverity detected dead code in checks.c
which is in fact a missing parenthesis to build the connect() flags
consecutive to the API change in commit fdcb007ad8 ("MEDIUM: proto:
Change the prototype of the connect() method.").
The impact should be imperceptible as in the best case it may have
resulted in a missed optimization trying to save a syscall or to merge
outgoing packets.
It may be backported as far as 2.0 though it's not critical.
In ssl_sock_init(), if we fail to allocate the BIO, don't forget to free
the SSL *, or we'd end up with a memory leak.
This should be backported to 2.1 and 2.0.
As the server early data buffer is allocated in the middle of the loop
used to allocate the SSL session without being freed before retrying,
this leads to a memory leak.
To fix this, we move the section of code responsible for allocating the early
data buffer after the one responsible for allocating the SSL session.
Must be backported to 2.1 and 2.0.
When commit 477902bd2e made the conn_stream
allocation unconditional, it unfortunately moved the code doing the allocation
inside #if USE_OPENSSL, which means anybody compiling haproxy without
openssl wouldn't allocate any conn_stream, and would get a segfault later.
Fix that by moving the code that does the allocation outside #if USE_OPENSSL.
In process_stream(), when a client or a server abort is handled, the
corresponding listener's counter is incremented. But, we must be sure to have a
listener attached to the session. This bug was introduced by the commit
cff0f739e5.
Thanks to Fred for reporting the bug.
No need to backport this patch, except if commit cff0f739e5 is backported.
A stupid cut-paste bug was introduced in commit cff0f739e5: backend counters
must of course be incremented on the stream's backend, not the frontend.
No need to backport this patch, except if commit cff0f739e5 is backported.
A first patch was made during 2.0-dev to silence a bogus warning emitted
by gcc : dd1c8f1f72 ("MINOR: cfgparse: Add a cast to make gcc happier."),
but it happens it was not sufficient as the warning re-appeared on 32-bit
machines under gcc-8 and gcc-9 :
src/cfgparse.c: In function 'check_config_validity':
src/cfgparse.c:3642:33: warning: argument 1 range [2147483648, 4294967295] exceeds maximum object size 2147483647 [-Walloc-size-larger-than=]
newsrv->idle_orphan_conns = calloc((unsigned int)global.nbthread, sizeof(*newsrv->idle_orphan_conns));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This warning doesn't trigger in other locations, and it immediately
vanishes if the previous or subsequent loops do not depend on
global.nbthread anymore, or if the field ordering of the struct server
changes! As discussed in the thread at:
https://www.mail-archive.com/haproxy@formilux.org/msg36107.html
playing with -Walloc-size-larger-than has no effect. And no minimal
reproducer could be isolated, indicating it's pointless to keep circling
around this one. Let's just cast nbthread to ushort so that gcc cannot make
this wrong detection. It's unlikely we'll use more than 65535 threads in
the near future anyway.
This may be backported to older releases if they are also affected, at
least to ease the job of distro maintainers.
Thanks to Ilya for testing.
lua-prepend-path allows the administrator to specify a custom Lua library
path to load custom Lua modules that are useful within the context of HAProxy
without polluting the global Lua library folder.
While the H2 parser properly checks for the absence of anything but
"trailers" in the TE header field, we forget to check this when sending
the request to an H2 server. The problem is that an H2->H2 conversion
may keep "gzip" and fail on the next stage.
This patch makes sure that we only send "TE: trailers" if the TE header
contains the "trailers" token, otherwise it's dropped.
This fixes issue #464 and should be backported till 1.9.
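The decision amounts to a token search in a comma-separated list; a standalone
sketch of the idea (not the actual mux code) could look like this:

    #include <string.h>
    #include <strings.h>

    /* returns non-zero if the comma-separated header value contains the
     * "trailers" token (case-insensitive), the only one we may forward */
    int te_has_trailers(const char *value)
    {
        const char *p = value;

        while (*p) {
            const char *end = strchr(p, ',');
            size_t len = end ? (size_t)(end - p) : strlen(p);

            while (len && *p == ' ') { p++; len--; }       /* trim left */
            while (len && p[len - 1] == ' ') len--;        /* trim right */

            if (len == 8 && strncasecmp(p, "trailers", 8) == 0)
                return 1;
            if (!end)
                break;
            p = end + 1;
        }
        return 0;
    }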
Since commit 1b8e68e89a ("MEDIUM: stick-table: Stop handling stick-tables
as proxies."), a rule referencing the current proxy with no table leads
to the following error :
[ALERT] 023/071924 (16479) : Proxy 'px': unable to find stick-table '(null)'.
[ALERT] 023/071914 (16479) : Fatal errors found in configuration.
for a config like this one:
backend px
stick on src
This patch fixes it and should be backported as far as 2.0.
A few places in health checks and stream-int on the send path were still
checking for this flag. Now we do not and instead we rely on snd_buf()
to report the error if any.
It's worth noting that all 3 real muxes still use CO_FL_SOCK_WR_SH and
CO_FL_ERROR interchangeably at various places to decide to abort and/or
free their data. This should be clarified and fixed so that only
CO_FL_ERROR is used, and this will render the error paths simpler and
more accurate.
The test was added before splice() and send() to make sure we never
accidentally send after a shutdown, because upper layers do not all
check and it's not their job to do it. In such a case we also set
errno to EPIPE so that the error can be accurately reported, e.g.,
in health checks.
Just like with CO_FL_SOCK_RD_SH, we don't need to check for this flag too
early because conn_sock_send() already does it. No error was lost so it
was harmless, it was only useless code.
The handshake functions dedicated to proxy proto, netscaler and
socks4 all check for this flag before proceeding. This is wrong:
they must not, and should instead perform the call to recv() and then
report the close. The reason for this is that the current
construct managed to lose the CO_ER_CIP_EMPTY error code in case
the connection was already shut, thus causing a race condition
with some errors being reported correctly or as unknown depending
on the timing.
There are still leftovers from the pre-xprt_handshake era with lots
of places where I/O callbacks refrain from receiving/sending if they
see that a handshake is present. This needlessly duplicates the
subscribe calls as it will automatically be done by the underlying
xprt_handshake code when attempting the operation.
The only reason for still checking CO_FL_HANDSHAKE is when we decide
to instantiate xprt_handshake. This patch removes all other ones.
As mentioned in commit c192b0ab95 ("MEDIUM: connection: remove
CO_FL_CONNECTED and only rely on CO_FL_WAIT_*"), there is a lack of
consistency on which flags are checked among L4/L6/HANDSHAKE depending
on the code areas. A number of sample fetch functions only check for
L4L6 to report MAY_CHANGE, some places only check for HANDSHAKE and
many check both L4L6 and HANDSHAKE.
This patch starts to make all of this more consistent by introducing a
new mask CO_FL_WAIT_XPRT which is the union of L4/L6/HANDSHAKE and
reports whether the transport layer is ready or not.
All inconsistent call places were updated to rely on this one each time
the goal was to check for the readiness of the transport layer.
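The idea is just a composite mask; the bit values below are illustrative, not
the real definitions of the connection flags:

    #define CO_FL_WAIT_L4_CONN  0x00000001u  /* waiting for L4 to be connected */
    #define CO_FL_WAIT_L6_CONN  0x00000002u  /* waiting for L6 (e.g. SSL) */
    #define CO_FL_HANDSHAKE     0x00000004u  /* a handshake is still in progress */

    /* union of everything meaning "the transport layer is not ready yet" */
    #define CO_FL_WAIT_XPRT \
        (CO_FL_WAIT_L4_CONN | CO_FL_WAIT_L6_CONN | CO_FL_HANDSHAKE)

    int xprt_ready(unsigned int conn_flags)
    {
        return !(conn_flags & CO_FL_WAIT_XPRT);
    }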
Most places continue to check CO_FL_HANDSHAKE while in fact they should
check CO_FL_HANDSHAKE_NOSSL, which contains all handshakes but the one
dedicated to SSL renegotiation. In fact the SSL layer should be the
only one checking CO_FL_SSL_WAIT_HS, so as to avoid processing data
when a renegotiation is in progress, but other ones randomly include it
without knowing. And ideally it should even be an internal flag that's
not exposed in the connection.
This patch takes CO_FL_SSL_WAIT_HS out of CO_FL_HANDSHAKE, uses this flag
consistently all over the code, and gets rid of CO_FL_HANDSHAKE_NOSSL.
In order to limit the confusion that has accumulated over time, the
CO_FL_SSL_WAIT_HS flag which indicates an ongoing SSL handshake,
possibly used by a renegotiation was moved after the other ones.
As mentioned in c192b0ab95 ("MEDIUM: connection: remove CO_FL_CONNECTED
and only rely on CO_FL_WAIT_*"), si_cs_recv() currently does not propagate
CS_FL_EOS to CF_READ_NULL if CO_FL_WAIT_L4L6 is set, while this situation
doesn't exist anymore. Let's get rid of this confusing test.
We only add the Early-data header, or get ssl_fc_has_early to return 1, if
we didn't already do the SSL handshake, as otherwise we know the early
data were fine and there's no risk of replay attack. But to do so, we
wrongly checked CO_FL_HANDSHAKE; we have to check CO_FL_SSL_WAIT_HS instead,
as we don't care about the status of any other handshake.
This should be backported to 2.1, 2.0, and 1.9.
When deciding if we should add the Early-Data header, or if the sample fetch
should return
Commit 477902bd2e ("MEDIUM: connections: Get ride of the xprt_done
callback.") broke the master CLI for a very obscure reason. It happens
that short requests immediately terminated by a shutdown are properly
received, CS_FL_EOS is correctly set, but in si_cs_recv(), we refrain
from setting CF_SHUTR on the channel because CO_FL_CONNECTED was not
yet set on the connection since we've not passed again through
conn_fd_handler() and it was not done in conn_complete_session(). While
commit a8a415d31a ("BUG/MEDIUM: connections: Set CO_FL_CONNECTED in
conn_complete_session()") fixed the issue, such accident may happen
again as the root cause is deeper and actually comes down to the fact
that CO_FL_CONNECTED is lazily set at various check points in the code
but not every time we drop one wait bit. It is not the first time we
face this situation.
Originally this flag was used to detect the transition between WAIT_*
and CONNECTED in order to call ->wake() from the FD handler. But since
at least 1.8-dev1 with commit 7bf3fa3c23 ("BUG/MAJOR: connection: update
CO_FL_CONNECTED before calling the data layer"), CO_FL_CONNECTED is
always synchronized against the two others before being checked. Moreover,
with the I/Os moved to tasklets, the decision to call the ->wake() function
is performed after the I/Os in si_cs_process() and equivalent, which don't
care about this transition either.
So in essence, checking for CO_FL_CONNECTED has become a lazy way to
check for (CO_FL_WAIT_L4_CONN | CO_FL_WAIT_L6_CONN), but one that always
relies on someone else having synchronized it.
This patch addresses it once for all by killing this flag and only checking
the two others (for which a composite mask CO_FL_WAIT_L4L6 was added). This
revealed a number of inconsistencies that were purposely not addressed here
for the sake of bisectability:
- while most places do check both L4+L6 and HANDSHAKE at the same time,
some places like assign_server() or back_handle_st_con() and a few
sample fetches looking for proxy protocol do check for L4+L6 but
don't care about HANDSHAKE ; these ones will probably fail on TCP
request session rules if the handshake is not complete.
- some handshake handlers do validate that a connection is established
at L4 but didn't clear CO_FL_WAIT_L4_CONN
- the ->ctl method of mux_fcgi, mux_pt and mux_h1 only checks for L4+L6
before declaring the mux ready while the snd_buf function also checks
for the handshake's completion. Likely the former should validate the
handshake as well and we should get rid of these extra tests in snd_buf.
- raw_sock_from_buf() would directly set CO_FL_CONNECTED and would only
later clear CO_FL_WAIT_L4_CONN.
- xprt_handshake would set CO_FL_CONNECTED itself without actually
clearing CO_FL_WAIT_L4_CONN, which could apparently happen only if
waiting for a pure Rx handshake.
- most places in ssl_sock that were checking CO_FL_CONNECTED don't need
to include the L4 check as an L6 check is enough to decide whether to
wait for more info or not.
It also becomes obvious when reading the test in si_cs_recv() that caused
the failure mentioned above that once converted it doesn't make any sense
anymore: having CS_FL_EOS set while still waiting for L4 and L6 to complete
cannot happen since for CS_FL_EOS to be set, the other ones must have been
validated.
Some of these parts will still deserve further cleanup, and some of the
observations above may induce some backports of potential bug fixes once
totally analyzed in their context. The risk of breaking existing stuff
is too high to blindly backport everything.
ocsp_issuer is primarily set from ckch->chain when the PEM is loaded from a
file, but not set when the PEM is loaded via a CLI payload. Set
ckch->ocsp_issuer in ssl_sock_load_pem_into_ckch to fix that.
Should be backported in 2.1.
We can't just assume conn_create_mux() will be called and set CO_FL_CONNECTED;
conn_complete_session() might be called synchronously if we're not using SSL,
so we have no choice but to set CO_FL_CONNECTED in there. This should fix
the recent breakage of the mcli reg tests.
When using the OCSP response, if the issuer of the response is in
the certificate chain, its address will be stored in ckch->ocsp_issuer.
However, since the ocsp_issuer could be filled by a separate file, this
pointer is free'd. The refcount of the X509 needs to be incremented to
avoid a double free if we free both the ocsp_issuer AND the chain.
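The classic way to do this with OpenSSL (1.1.0 and above) is X509_up_ref(); a
minimal sketch of the idea, with a made-up helper name:

    #include <openssl/x509.h>

    /* keep a usable reference to an issuer taken from the chain, so that
     * freeing both the chain and the issuer later cannot double-free it */
    int retain_ocsp_issuer(X509 *issuer_from_chain, X509 **ocsp_issuer)
    {
        if (!X509_up_ref(issuer_from_chain))
            return -1;                  /* refcount bump failed */
        *ocsp_issuer = issuer_from_chain;
        return 0;
    }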
As reported in bug #447, gcc 9.2 invents impossible code paths and then
complains that we don't check for our pointers to be NULL... This code
path is not critical, better add the test to shut it up than try to
help it being less creative.
This code hasn't changed for a while, so it could help distros to
backport this to older releases.
objt_conn() may return a NULL though here we don't have this situation
anymore since the connection is always there, so let's simply switch
to the unchecked __objt_conn(). This addresses issue #454.
Coverity rightfully reported that it's pointless to test for "conn"
to be null while all code paths leading to it have already
dereferenced it. This addresses issue #461.
When using "set ssl cert" on the CLI, if we load a new PEM, the previous
sctl, issuer and OCSP response are still loaded. This doesn't make any
sense since they won't be usable with a new private key.
This patch frees the previous data.
Should be backported in 2.1.
When using multiple filters with "show table", it can be useful to
report which filter entry failed
> show table MY_TABLE data.gpc0 gt 0 data.gpc0a lt 1000
Filter entry #2: Unknown data type
> show table MY_TABLE data.gpc0 gt 0 data.gpc0 lt 1000a
Filter entry #2: Require a valid integer value to compare against
We now also catch garbage data after the filter
> show table MY_TABLE data.gpc0 gt 0 data.gpc0 lt 1000 data.gpc0 gt 1\
data.gpc0 gt 10 a
Detected extra data in filter, 16th word of input, after '10'
Even before the multi-filter feature we also silently accepted garbage
after the input, hiding potential bugs
> show table MY_TABLE data.gpc0 gt 0 data.gpc0
or
> show table MY_TABLE data.gpc0 gt 0 a
In both cases, only first filter entry would be used, silently ignoring
extra filter entry or garbage data.
Last but not least, it is now possible to detect the multi-filter
feature from the cli with something like the following:
> show table MY_TABLE data.blah
Filter entry #1: Unknown data type
The xprt_done_cb callback was used to defer some connection initialization
until we're connected and the handshake is done. As it mostly consists in
creating the mux, instead of using the callback, introduce a conn_create_mux()
function that will just call conn_complete_session() for frontends, and
create the mux for backends.
In h2_wake(), make sure we call the wake method of the stream_interface,
as we no longer wakeup the stream task.
In connect_server(), when creating a new connection for which we don't yet
know the mux (because it'll be decided by the ALPN), instead of associating
the connection to the stream_interface, always create a conn_stream. This way,
we have less special-casing needed. Store the conn_stream in conn->ctx,
so that we can reach the upper layers if needed.
We don't properly check for missing data values for additional filter
entries, passing an out-of-bounds index to args[] and then passing it to
strlen.
Introduced in commit 1a693fc2 ("MEDIUM: cli: Allow multiple filter
entries for "show table").
Last commit 1a693fc2fd ("MEDIUM: cli: Allow multiple filter entries for "show table"")
broke the build at two places:
src/stick_table.c: In function 'table_prepare_data_request':
src/stick_table.c:3620:33: warning: ordered comparison of pointer with integer zero [-Wextra]
src/stick_table.c: In function 'cli_io_handler_table':
src/stick_table.c:3763:5: error: 'for' loop initial declarations are only allowed in C99 mode
src/stick_table.c:3763:5: note: use option -std=c99 or -std=gnu99 to compile your code
make: *** [src/stick_table.o] Error 1
This patch fixes both. No backport needed.
"set ssl cert <filename> <payload>" CLI command should have the same
result as reload HAproxy with the updated pem file (<filename>).
Is not the case, DHparams/cert-chain is kept from the previous
context if no DHparams/cert-chain is set in the context (<payload>).
This patch should be backport to 2.1
In conn_recv_netscaler_cip(), don't forget to allocate conn->src and
conn->dst, as those are now dynamically allocated. Not doing so results in
getting a crash when using netscaler.
This should fix github issue #460.
This should be backported to 2.1.
For complex stick tables with many entries/columns, it can be beneficial
to filter using multiple criteria. The maximum number of filter entries
can be controlled by defining STKTABLE_FILTER_LEN during build time.
This patch can be backported to older releases.
Released version 2.2-dev1 with the following main changes :
- DOC: this is development again
- MINOR: version: this is development again, update the status
- SCRIPTS: update create-release to fix the changelog on new branches
- CLEANUP: ssl: Clean up error handling
- BUG/MINOR: contrib/prometheus-exporter: decode parameter and value only
- BUG/MINOR: h1: Don't test the host header during response parsing
- BUILD/MINOR: trace: fix use of long type in a few printf format strings
- DOC: Clarify behavior of server maxconn in HTTP mode
- MINOR: ssl: deduplicate ca-file
- MINOR: ssl: compute ca-list from deduplicate ca-file
- MINOR: ssl: deduplicate crl-file
- CLEANUP: dns: resolution can never be null
- BUG/MINOR: http-htx: Don't make http_find_header() fail if the value is empty
- DOC: ssl/cli: set/commit/abort ssl cert
- BUG/MINOR: ssl: fix SSL_CTX_set1_chain compatibility for openssl < 1.0.2
- BUG/MINOR: fcgi-app: Make the directive pass-header case insensitive
- BUG/MINOR: stats: Fix HTML output for the frontends heading
- BUG/MINOR: ssl: fix X509 compatibility for openssl < 1.1.0
- DOC: clarify matching strings on binary fetches
- DOC: Fix ordered list in summary
- DOC: move the "group" keyword at the right place
- MEDIUM: init: prevent process and thread creation at runtime
- BUG/MINOR: ssl/cli: 'ssl cert' cmd only usable w/ admin rights
- BUG/MEDIUM: stream-int: don't subscribed for recv when we're trying to flush data
- BUG/MINOR: stream-int: avoid calling rcv_buf() when splicing is still possible
- BUG/MINOR: ssl/cli: don't overwrite the filters variable
- BUG/MEDIUM: listener/thread: fix a race when pausing a listener
- BUG/MINOR: ssl: certificate choice can be unexpected with openssl >= 1.1.1
- BUG/MEDIUM: mux-h1: Never reuse H1 connection if a shutw is pending
- BUG/MINOR: mux-h1: Don't rely on CO_FL_SOCK_RD_SH to set H1C_F_CS_SHUTDOWN
- BUG/MINOR: mux-h1: Fix conditions to know whether or not we may receive data
- BUG/MEDIUM: tasks: Make sure we switch wait queues in task_set_affinity().
- BUG/MEDIUM: checks: Make sure we set the task affinity just before connecting.
- MINOR: debug: replace popen() with pipe+fork() in "debug dev exec"
- MEDIUM: init: set NO_NEW_PRIVS by default when supported
- BUG/MINOR: mux-h1: Be sure to set CS_FL_WANT_ROOM when EOM can't be added
- BUG/MEDIUM: mux-fcgi: Handle cases where the HTX EOM block cannot be inserted
- BUG/MINOR: proxy: make soft_stop() also close FDs in LI_PAUSED state
- BUG/MINOR: listener/threads: always use atomic ops to clear the FD events
- BUG/MINOR: listener: also clear the error flag on a paused listener
- BUG/MEDIUM: listener/threads: fix a remaining race in the listener's accept()
- MINOR: listener: make the wait paths cleaner and more reliable
- MINOR: listener: split dequeue_all_listener() in two
- REORG: listener: move the global listener queue code to listener.c
- DOC: document the listener state transitions
- BUG/MEDIUM: kqueue: Make sure we report read events even when no data.
- BUG/MAJOR: dns: add minimalist error processing on the Rx path
- BUG/MEDIUM: proto_udp/threads: recv() and send() must not be exclusive.
- DOC: listeners: add a few missing transitions
- BUG/MINOR: tasks: only requeue a task if it was already in the queue
- MINOR: tasks: split wake_expired_tasks() in two parts to avoid useless wakeups
- DOC: proxies: HAProxy only supports 3 connection modes
- DOC: remove references to the outdated architecture.txt
- BUG/MINOR: log: fix minor resource leaks on logformat error path
- BUG/MINOR: mworker: properly pass SIGTTOU/SIGTTIN to workers
- BUG/MINOR: listener: do not immediately resume on transient error
- BUG/MINOR: server: make "agent-addr" work on default-server line
- BUG/MINOR: listener: fix off-by-one in state name check
- BUILD/MINOR: unix sockets: silence an absurd gcc warning about strncpy()
- MEDIUM: h1-htx: Add HTX EOM block when the message is in H1_MSG_DONE state
- MINOR: http-htx: Add some htx sample fetches for debugging purpose
- REGTEST: Add an HTX reg-test to check an edge case
- DOC: clarify the fact that replace-uri works on a full URI
- BUG/MINOR: sample: fix the closing bracket and LF in the debug converter
- BUG/MINOR: sample: always check converters' arguments
- MINOR: sample: Validate the number of bits for the sha2 converter
- BUG/MEDIUM: ssl: Don't set the max early data we can receive too early.
- MINOR: ssl/cli: 'show ssl cert' give information on the certificates
- BUG/MINOR: ssl/cli: fix build for openssl < 1.0.2
- MINOR: debug: support logging to various sinks
- MINOR: http: add a new "replace-path" action
- REGTEST: ssl: test the "set ssl cert" CLI command
- REGTEST: run-regtests: implement #REQUIRE_BINARIES
- MINOR: task: only check TASK_WOKEN_ANY to decide to requeue a task
- BUG/MAJOR: task: add a new TASK_SHARED_WQ flag to fix foreing requeuing
- BUG/MEDIUM: ssl: Revamp the way early data are handled.
- MINOR: fd/threads: make _GET_NEXT()/_GET_PREV() use the volatile attribute
- BUG/MEDIUM: fd/threads: fix a concurrency issue between add and rm on the same fd
- REGTEST: make the "set ssl cert" require version 2.1
- BUG/MINOR: ssl: openssl-compat: Fix getm_ defines
- BUG/MEDIUM: state-file: do not allocate a full buffer for each server entry
- BUG/MINOR: state-file: do not store duplicates in the global tree
- BUG/MINOR: state-file: do not leak memory on parse errors
- BUG/MAJOR: mux-h1: Don't pretend the input channel's buffer is full if empty
- BUG/MEDIUM: stream: Be sure to never assign a TCP backend to an HTX stream
- BUILD: ssl: improve SSL_CTX_set_ecdh_auto compatibility
- BUILD: travis-ci: link with ssl libraries using rpath instead of LD_LIBRARY_PATH/DYLD_LIBRARY_PATH
- BUILD: travis-ci: reenable address sanitizer for clang builds
- BUG/MINOR: checks: refine which errno values are really errors.
- BUG/MINOR: connection: only wake send/recv callbacks if the FD is active
- CLEANUP: connection: conn->xprt is never NULL
- MINOR: pollers: add a new flag to indicate pollers reporting ERR & HUP
- MEDIUM: tcp: make tcp_connect_probe() consider ERR/HUP
- REORG: connection: move tcp_connect_probe() to conn_fd_check()
- MINOR: connection: check for connection validation earlier
- MINOR: connection: remove the double test on xprt_done_cb()
- CLEANUP: connection: merge CO_FL_NOTIFY_DATA and CO_FL_NOTIFY_DONE
- MINOR: poller: do not call the IO handler if the FD is not active
- OPTIM: epoll: always poll for recv if neither active nor ready
- OPTIM: polling: do not create update entries for FD removal
- BUG/MEDIUM: checks: Only attempt to do handshakes if the connection is ready.
- BUG/MEDIUM: connections: Hold the lock when wanting to kill a connection.
- BUILD: CI: modernize cirrus-ci
- MINOR: config: disable busy polling on old processes
- MINOR: ssl: Remove unused variable "need_out".
- BUG/MINOR: h1: Report the right error position when a header value is invalid
- BUG/MINOR: proxy: Fix input data copy when an error is captured
- BUG/MEDIUM: http-ana: Truncate the response when a redirect rule is applied
- BUG/MINOR: channel: inject output data at the end of output
- BUG/MEDIUM: session: do not report a failure when rejecting a session
- MEDIUM: dns: implement synchronous send
- MINOR: raw_sock: make sure to disable polling once everything is sent
- MINOR: http: Add 410 to http-request deny
- MINOR: http: Add 404 to http-request deny
- CLEANUP: mux-h2: remove unused goto "out_free_h2s"
- BUILD: cirrus-ci: choose proper openssl package name
- BUG/MAJOR: listener: do not schedule a task-less proxy
- CLEANUP: server: remove unused err section in server_finalize_init
- REGTEST: set_ssl_cert.vtc: replace "echo" with "printf"
- BUG/MINOR: stream-int: Don't trigger L7 retry if max retries is already reached
- BUG/MEDIUM: tasks: Use the MT macros in tasklet_free().
- BUG/MINOR: mux-h2: use a safe list_for_each_entry in h2_send()
- BUG/MEDIUM: mux-h2: fix missing test on sending_list in previous patch
- CLEANUP: ssl: remove opendir call in ssl_sock_load_cert
- MEDIUM: lua: don't call the GC as often when dealing with outgoing connections
- BUG/MEDIUM: mux-h2: don't stop sending when crossing a buffer boundary
- BUG/MINOR: cli/mworker: can't start haproxy with 2 programs
- REGTEST: mcli/mcli_start_progs: start 2 programs
- BUG/MEDIUM: mworker: remain in mworker mode during reload
- DOC: clarify crt-base usage
- CLEANUP: compression: remove unused deinit_comp_ctx section
- BUG/MEDIUM: mux_h1: Don't call h1_send if we subscribed().
- BUG/MEDIUM: raw_sock: Make sur the fd and conn are sync.
- CLEANUP: proxy: simplify proxy_parse_rate_limit proxy checks
- BUG/MAJOR: hashes: fix the signedness of the hash inputs
- REGTEST: add sample_fetches/hashes.vtc to validate hashes
- BUG/MEDIUM: cli: _getsocks must send the peers sockets
- CLEANUP: cli: deduplicate the code in _getsocks
- BUG/MINOR: stream: don't mistake match rules for store-request rules
- BUG/MEDIUM: connection: add a mux flag to indicate splice usability
- BUG/MINOR: pattern: handle errors from fgets when trying to load patterns
- MINOR: connection: move the CO_FL_WAIT_ROOM cleanup to the reader only
- MINOR: stream-int: remove dependency on CO_FL_WAIT_ROOM for rcv_buf()
- MEDIUM: connection: get rid of CO_FL_CURR_* flags
- BUILD: pattern: include errno.h
- MEDIUM: mux-h2: do not try to stop sending streams on blocked mux
- MEDIUM: mux-fcgi: do not try to stop sending streams on blocked mux
- MEDIUM: mux-h2: do not make an h2s subscribe to itself on deferred shut
- MEDIUM: mux-fcgi: do not make an fstrm subscribe to itself on deferred shut
- REORG: stream/backend: move backend-specific stuff to backend.c
- MEDIUM: backend: move the connection finalization step to back_handle_st_con()
- MEDIUM: connection: merge the send_wait and recv_wait entries
- MEDIUM: xprt: merge recv_wait and send_wait in xprt_handshake
- MEDIUM: ssl: merge recv_wait and send_wait in ssl_sock
- MEDIUM: mux-h1: merge recv_wait and send_wait
- MEDIUM: mux-h2: merge recv_wait and send_wait event notifications
- MEDIUM: mux-fcgi: merge recv_wait and send_wait event notifications
- MINOR: connection: make the last arg of subscribe() a struct wait_event*
- MINOR: ssl: Add support for returning the dn samples from ssl_(c|f)_(i|s)_dn in LDAP v3 (RFC2253) format.
- DOC: Fix copy and paste mistake in http-response replace-value doc
- BUG/MINOR: cache: Fix leak of cache name in error path
- BUG/MINOR: dns: Make dns_query_id_seed unsigned
- BUG/MINOR: 51d: Fix bug when HTX is enabled
- MINOR: http-htx: Move htx sample fetches in the scope "internal"
- MINOR: http-htx: Rename 'internal.htx_blk.val' to 'internal.htx_blk.data'
- MINOR: http-htx: Make 'internal.htx_blk_data' return a binary string
- DOC: Add a section to document the internal sample fetches
- MINOR: mux-h1: Inherit send flags from the upper layer
- MINOR: contrib/prometheus-exporter: Add heathcheck status/code in server metrics
- BUG/MINOR: http-ana/filters: Wait end of the http_end callback for all filters
- BUG/MINOR: http-rules: Remove buggy deinit functions for HTTP rules
- BUG/MINOR: stick-table: Use MAX_SESS_STKCTR as the max track ID during parsing
- MEDIUM: http-rules: Register an action keyword for all http rules
- MINOR: tcp-rules: Always set from which ruleset a rule comes from
- MINOR: actions: Use ACT_RET_CONT code to ignore an error from a custom action
- MINOR: tcp-rules: Kill connections when custom actions return ACT_RET_ERR
- MINOR: http-rules: Return an error when custom actions return ACT_RET_ERR
- MINOR: counters: Add a counter to report internal processing errors
- MEDIUM: http-ana: Properly handle internal processing errors
- MINOR: http-rules: Add a rule result to report internal error
- MINOR: http-rules: Handle internal errors during HTTP rules evaluation
- MINOR: http-rules: Add more return codes to let custom actions act as normal ones
- MINOR: tcp-rules: Handle denied/aborted/invalid connections from TCP rules
- MINOR: http-rules: Handle denied/aborted/invalid connections from HTTP rules
- MINOR: stats: Report internal errors in the proxies/listeners/servers stats
- MINOR: contrib/prometheus-exporter: Export internal errors per proxy/server
- MINOR: counters: Remove failed_secu counter and use denied_resp instead
- MINOR: counters: Review conditions to increment counters from analysers
- MINOR: http-ana: Add a txn flag to support soft/strict message rewrites
- MINOR: http-rules: Handle all message rewrites the same way
- MINOR: http-rules: Add a rule to enable or disable the strict rewriting mode
- MEDIUM: http-rules: Enable the strict rewriting mode by default
- REGTEST: Fix format of set-uri HTTP request rule in h1or2_to_h1c.vtc
- MINOR: actions: Add a function pointer to release args used by actions
- MINOR: actions: Regroup some info about HTTP rules in the same struct
- MINOR: http-rules/tcp-rules: Call the defined action function first if defined
- MINOR: actions: Rename the act_flag enum into act_opt
- MINOR: actions: Add flags to configure the action behaviour
- MINOR: actions: Use an integer to set the action type
- MINOR: http-rules: Use a specific action type for some custom HTTP actions
- MINOR: http-rules: Make replace-header and replace-value custom actions
- MINOR: http-rules: Make set-header and add-header custom actions
- MINOR: http-rules: Make set/del-map and add/del-acl custom actions
- MINOR: http-rules: Group all processing of early-hint rule in its case clause
- MEDIUM: http-rules: Make early-hint custom actions
- MINOR: http-rule/tcp-rules: Make track-sc* custom actions
- MINOR: tcp-rules: Make tcp-request capture a custom action
- MINOR: http-rules: Add release functions for existing HTTP actions
- BUG/MINOR: http-rules: Fix memory releases on error path during action parsing
- MINOR: tcp-rules: Add release functions for existing TCP actions
- BUG/MINOR: tcp-rules: Fix memory releases on error path during action parsing
- MINOR: http-htx: Add functions to read a raw error file and convert it in HTX
- MINOR: http-htx: Add functions to create HTX redirect message
- MINOR: config: Use dedicated function to parse proxy's errorfiles
- MINOR: config: Use dedicated function to parse proxy's errorloc
- MEDIUM: http-htx/proxy: Use a global and centralized storage for HTTP error messages
- MINOR: proxy: Register keywords to parse errorfile and errorloc directives
- MINOR: http-htx: Add a new section to create groups of custom HTTP errors
- MEDIUM: proxy: Add a directive to reference an http-errors section in a proxy
- MINOR: http-rules: Update txn flags and status when a deny rule is executed
- MINOR: http-rules: Support an optional status on deny rules for http reponses
- MINOR: http-rules: Use same function to parse request and response deny actions
- MINOR: http-ana: Add an error message in the txn and send it when defined
- MEDIUM: http-rules: Support an optional error message in http deny rules
- REGTEST: Add a strict rewriting mode reg test
- REGEST: Add reg tests about error files
- MINOR: ssl: accept 'verify' bind option with 'set ssl cert'
- BUG/MINOR: ssl: ssl_sock_load_ocsp_response_from_file memory leak
- BUG/MINOR: ssl: ssl_sock_load_issuer_file_into_ckch memory leak
- BUG/MINOR: ssl: ssl_sock_load_sctl_from_file memory leak
- BUG/MINOR: http_htx: Fix some leaks on error path when error files are loaded
- CLEANUP: http-ana: Remove useless test on txn when the error message is retrieved
- BUILD: CI: introduce ARM64 builds
- BUILD: ssl: more elegant anti-replay feature presence check
- MINOR: proxy/http-ana: Add support of extra attributes for the cookie directive
- MEDIUM: dns: use Additional records from SRV responses
- CLEANUP: Consistently `unsigned int` for bitfields
- CLEANUP: pattern: remove the pat_time definition
- BUG/MINOR: http_act: don't check capture id in backend
- BUG/MINOR: ssl: fix build on development versions of openssl-1.1.x
A wrong behavior was introduced by
e9544935e8, preventing any configuration
where a capture slot id is used in a backend from loading.
I.e., the configuration below does not parse:
frontend f
bind *:80
declare capture request len 32
default_backend webserver
backend webserver
http-request capture req.hdr(Host) id 1
The point is that this type of configuration is valid and should run.
This patch enforces the check of the capture slot id only if the action rule
is configured in a frontend.
The reason is that at configuration parsing time, it is impossible to
check which frontend could point to this backend (all the more if we use
dynamic backend name resolution at runtime).
The documentation has been updated to warn the user to ensure that
relevant frontends have the required declaration when such a rule has to be
used in a backend.
If no capture slot can be found, then the action will just not be
executed and HAProxy will process the next one in the list, as expected.
This should be backported to all supported branches (bug created as part
of a bug fix introduced into 1.7 and backported to 1.6).
Most DNS servers provide A/AAAA records in the Additional section of a
response, which correspond to the SRV records from the Answer section:
;; QUESTION SECTION:
;_http._tcp.be1.domain.tld. IN SRV
;; ANSWER SECTION:
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A1.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A8.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A5.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A6.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A4.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A3.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A2.domain.tld.
_http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A7.domain.tld.
;; ADDITIONAL SECTION:
A1.domain.tld. 3600 IN A 192.168.0.1
A8.domain.tld. 3600 IN A 192.168.0.8
A5.domain.tld. 3600 IN A 192.168.0.5
A6.domain.tld. 3600 IN A 192.168.0.6
A4.domain.tld. 3600 IN A 192.168.0.4
A3.domain.tld. 3600 IN A 192.168.0.3
A2.domain.tld. 3600 IN A 192.168.0.2
A7.domain.tld. 3600 IN A 192.168.0.7
SRV record support was introduced in HAProxy 1.8 and the first design
did not take into account the records from the Additional section.
Instead, a new resolution is associated to each server with its relevant
FQDN.
This behavior generates a lot of DNS requests (1 SRV + 1 per server
associated).
This patch aims at fixing this by:
- associating A/AAAA records to the relevant SRV ones when a DNS response
  is validated
- setting a flag on the associated servers to prevent them from running a
  DNS resolution for said FQDN
- updating the server IP address with the information found in the
  Additional section
If no relevant record can be found in the Additional section, then
HAProxy will fall back to running a dedicated resolution for this server,
as it used to do.
This behavior is the one described in RFC 2782.
It is now possible to insert any attribute when a cookie is inserted by
HAProxy. Any value may be set, no check is performed except the syntax validity
(CTRL chars and ';' are forbidden). For instance, it may be used to add the
SameSite attribute:
cookie SRV insert attr "SameSite=Strict"
The attr option may be repeated to add several attributes.
This patch should fix issue #361.
Since the patches initiated with d4f9a60e "MINOR: ssl: deduplicate ca-file",
no file access is done anymore for the 'verify' bind options (crl/ca file).
Remove the conditional restriction for the "set ssl cert" CLI command.
It is now possible to set the error message to use when a deny rule is
executed. It may be a specific error file, adding "errorfile <file>" :
http-request deny deny_status 400 errorfile /etc/haproxy/errorfiles/400badreq.http
It may also be an error file from an http-errors section, adding "errorfiles
<name>" :
http-request deny errorfiles my-errors # use 403 error from "my-errors" section
When defined, this error message is set in the HTTP transaction. The tarpit rule
is also concerned by this change.
It is now possible to set the error message to return to the client in the
HTTP transaction. If it is defined, this error message is used instead of the
proxy's errors or the default errors.
When a deny rule is executed, the flag TX_CLDENY and the status code are set
on the HTTP transaction. Now, these steps are handled by the code executing
the deny rule, i.e. in http_req_get_intercept_rule() for the request and
http_res_get_intercept_rule() for the response.
It is now possible to import in a proxy, fully or partially, error files
declared in an http-errors section. It may be done using the "errorfiles"
directive, followed by a name and optionally a list of status codes. If there
is no status code specified, all error files of the http-errors section are
imported. Otherwise, only error files associated with the listed status codes
are imported. For instance :
http-errors my-errors
errorfile 400 ...
errorfile 403 ...
errorfile 404 ...
frontend frt
errorfiles my-errors 403 404 # ==> error 400 not imported
A new section may now be declared in the configuration to create global groups
of HTTP errors. These groups are not linked to a proxy and are referenced by
name. The section must be declared using the keyword "http-errors" followed by
the group name. This name must be unique. A list of "errorfile" directives may
be declared in such a section. For instance:
http-errors website-1
errorfile 400 /path/to/site1/400.http
errorfile 404 /path/to/site1/404.http
http-errors website-2
errorfile 400 /path/to/site2/400.http
errorfile 404 /path/to/site2/404.http
For now, it is just possible to create "http-errors" sections. There is no
documentation because these groups are not used yet.
All custom HTTP errors are now stored in a global tree. Proxies use
references to these messages. The key used for errorfile directives is the
file name as specified in the configuration. For errorloc directives, a key
is created using the redirect code and the url. This means that the same
custom error message is now stored only once. Even if it is used in several
proxies or for several status codes, it is only parsed and stored once.
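A much simplified illustration of the sharing principle (the real code uses an
ebtree and HTX messages; every name below is invented):

    #include <stdlib.h>
    #include <string.h>

    struct err_msg {
        struct err_msg *next;
        char *key;              /* file name, or "<code> <url>" for errorloc */
        void *msg;              /* the message parsed once, shared by proxies */
    };

    static struct err_msg *err_msgs;

    struct err_msg *get_err_msg(const char *key, void *(*load)(const char *))
    {
        struct err_msg *e;

        for (e = err_msgs; e; e = e->next)
            if (strcmp(e->key, key) == 0)
                return e;               /* already loaded: share it */

        e = calloc(1, sizeof(*e));
        if (!e)
            return NULL;
        e->key = strdup(key);
        e->msg = load(key);             /* parse the file only once */
        e->next = err_msgs;
        err_msgs = e;
        return e;
    }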
http_parse_errorloc() may now be used to create an HTTP 302 or 303 redirect
message with a specific url passed as parameter. A parameter is used to know
if it is a 302 or a 303 redirect. A status code is passed as parameter. It
must be one of the supported HTTP error codes to be valid. Otherwise an error
is returned. It aims to be used to parse "errorloc" directives. It relies on
http_load_errormsg() to do most of the job, i.e. converting it to HTX.
http_parse_errorfile() may now be used to parse a raw HTTP message from a
file. A status code is passed as parameter. It must be one of the supported HTTP
error codes to be valid. Otherwise an error is returned. It aims to be used to
parse "errorfile" directives. It relies on http_load_errorfile() to do most of
the job, ie reading the file content and converting it in HTX.
When an error occurs during the parsing of a TCP action, if some memory was
allocated, it should be released before exiting. Here, the fix consists in
replacing a call to free() on a sample expression with a call to
release_sample_expr().
This patch may be backported to all supported versions.
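The pattern of the fix, as a hedged sketch (the declarations normally come
from HAProxy's headers; only release_sample_expr() is the function named
above, everything else is illustrative):

    struct sample_expr;                               /* opaque here */
    void release_sample_expr(struct sample_expr *e);  /* frees expr + args/convs */

    void abort_rule_parsing(struct sample_expr *expr)
    {
        /* calling free(expr) here would leak the expression's arguments and
         * converter chain; the dedicated release function frees everything */
        release_sample_expr(expr);
    }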
When an error occurs during the parsing of an HTTP action, if some memory was
allocated, it should be released before exiting. Sometimes a call to free() is
used on a sample expression instead of a call to release_sample_expr(). Other
times, it is just a string or a regex that should be released.
There is no real reason to backport this patch, especially because this part
was highly modified recently in 2.2-DEV.
Now, this action uses its own dedicated function and is no longer handled "in
place" during the TCP rules evaluation. Thus the action name ACT_TCP_CAPTURE
is removed. The action type is set to ACT_CUSTOM and a check function is used
to know if the rule depends on request contents while there is no
inspect-delay.
Now, these actions use their own dedicated function and are no longer handled
"in place" during the TCP/HTTP rules evaluation. Thus the action names
ACT_ACTION_TRK_SC0 and ACT_ACTION_TRK_SCMAX are removed. The action type is now
the tracking index. Thus the function trk_idx() is no longer needed.
Now, the early-hint action uses its own dedicated function and is no longer
handled "in place" during the HTTP rules evaluation. Thus the action name
ACT_HTTP_EARLY_HINT is removed. In addition, http_add_early_hint_header() and
http_reply_103_early_hints() are also removed. This part is now handled in the
new action_ptr callback function.
To know if the 103 response start-line must be added, we test if it is the first
rule of the ruleset or if the previous rule is not an early-hint rule. And at
the end, to know if the 103 response must be terminated, we test if it is the
last rule of the ruleset or if the next rule is not an early-hint rule. This
way, all the code dealing with early-hint rules is grouped in its case clause.
Now, these actions use their own dedicated function and are no longer handled
"in place" during the HTTP rules evaluation. Thus the action names
ACT_HTTP_*_ACL and ACT_HTTP_*_MAP are removed. The action type is now mapped as
following: 0 = add-acl, 1 = set-map, 2 = del-acl and 3 = del-map.
Now, these actions use their own dedicated function and are no longer handled
"in place" during the HTTP rules evaluation. Thus the action names
ACT_HTTP_SET_HDR and ACT_HTTP_ADD_VAL are removed. The action type is now set to
0 to set a header (so remove existing ones if any and add a new one) or to 1 to
add a header (add without remove).
Now, these actions use their own dedicated function and are no longer handled
"in place" during the HTTP rules evaluation. Thus the action names
ACT_HTTP_REPLACE_HDR and ACT_HTTP_REPLACE_VAL are removed. The action type is
now set to 0 to evaluate the whole header or to 1 to evaluate every
comma-delimited values.
The function http_transform_header_str() is renamed to http_replace_hdrs() to be
more explicit and the function http_transform_header() is removed. In fact, this
last one is now more or less the new action function.
The lua code has been updated accordingly to use http_replace_hdrs().
For set-method, set-path, set-query and set-uri, a specific action type is
used, the same as before but no longer stored in <arg.http.i>. The same is
done for replace-path and replace-uri, which use the same types as the "set-"
versions.
Some flags can now be set on an action when it is registered. The flags are
defined in the act_flag enum. For now, only ACT_FLAG_FINAL may be set on an
action to specify if it stops the rules evaluation. It is set on
ACT_ACTION_ALLOW, ACT_ACTION_DENY, ACT_HTTP_REQ_TARPIT, ACT_HTTP_REQ_AUTH,
ACT_HTTP_REDIR and ACT_TCP_CLOSE actions. But, when required, it may also be set
on custom actions.
Consequently, this flag is checked instead of the action type during the
configuration parsing to trigger a warning when a rule inhibits all the
following ones.
The flags in the act_flag enum have been renamed act_opt. It means ACT_OPT
prefix is used instead of ACT_FLAG. The purpose of this patch is to reserve the
action flags for the actions configuration.
When TCP and HTTP rules are evaluated, if an action function (action_ptr field
in the act_rule structure) is defined for a given action, it is now always
called in priority over the test on the action type. Concretely, for now, only
custom actions define it, so there is no change. It just gives us the choice
to extend the action types beyond the existing ones in the enum.
Information used by HTTP rules manipulating the message itself was split
across several structures in the arg union. But it is possible to group all of
it in a unique struct. Now, <arg.http> is used by most of these rules, and
contains the following fields (a simplified sketch follows this list):
* <arg.http.i> : an integer used as status code, nice/tos/mark/loglevel or
action id.
* <arg.http.str> : an IST used as header name, reason string or auth realm.
* <arg.http.fmt> : a log-format compatible expression
* <arg.http.re> : a regular expression used by replace rules
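A simplified sketch of that grouped payload (HAProxy's real types are struct
ist for the string, a log-format list and struct my_regex; the struct and
field layout below are illustrative only):

    #include <stddef.h>

    struct list;        /* HAProxy's generic list head, kept opaque here */
    struct my_regex;    /* HAProxy's regex wrapper, kept opaque here */

    struct http_rule_arg {
        int i;                    /* status code, nice/tos/mark/loglevel or id */
        struct {
            const char *ptr;      /* stand-in for HAProxy's struct ist */
            size_t len;
        } str;                    /* header name, reason string or auth realm */
        struct list *fmt;         /* log-format compatible expression */
        struct my_regex *re;      /* regular expression for replace rules */
    };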
Arguments used by actions are never released during HAProxy deinit. Now, it is
possible to specify a function to do so. ".release_ptr" field in the act_rule
structure may be set during the configuration parsing to a specific deinit
function depending on the action type.
Now, by default, when a rule performing a rewrite on an HTTP message fails, an
internal error is triggered. Before, the failure was ignored. But most users
are not aware of this behavior, and it does not happen very often because the
buffer reserve space is large enough, so it may be surprising. Returning an
internal error makes the rewrite failure explicit. If it is acceptable to
silently ignore it, the strict rewriting mode can be disabled.
It is now possible to explicitly instruct rewriting rules to be strict or not
towards errors. In strict mode, an internal error is triggered if a rewrite
rule fails. The HTTP action "strict-mode" can be used to enable or disable the
strict rewriting mode. It can be used in both http-request and http-response
rulesets.
For now, by default the strict rewriting mode is disabled. Because it is the
current behavior. But it will be changed in another patch.
In HTTP rules, error handling during a rewrite is now handled the same way for
all rules. First, allocation errors are reported as internal errors. Then, if
soft rewrites are allowed, rewrite errors are ignored and only the
failed_rewrites counter is incremented. Otherwise, when strict rewrites are
mandatory, internal errors are returned.
For now, only soft rewrites are supported. Note also that the warning sent to
notify a rewrite failure was removed. It will be useless once the strict
rewrites will be possible.
The HTTP_MSGF_SOFT_RW flag must now be set on the HTTP transaction to ignore
rewrite errors on a message, from HTTP rules. This mode is called soft
rewrites. If this flag is not set, strict rewrites are performed. In this
mode, if a rewrite error occurs, an internal error is reported.
For now, HTTP_MSGF_SOFT_RW is always set and there is no way to switch a
transaction to strict mode.
Now, for these counters, the following rules are followed to know whether
they must be incremented or not:
* if it exists for a frontend, the counter is incremented
* if stats must be collected for the session's listener, if the counter exists
for this listener, it is incremented
* if the backend is already assigned, if the counter exists for this backend,
it is incremented
* if a server is attached to the stream, if the counter exists for this
server, it is incremented
These are not hardcoded rules; some counters are still handled in a different
way. But many counters are now incremented this way.
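In pseudo-C form, with made-up names and types, the cascade described above
looks like this:

    struct counters { long long denied_resp; };

    /* increments the counter at every level where it applies */
    void count_denied_resp(struct counters *fe, struct counters *li,
                           struct counters *be, struct counters *srv,
                           int listener_stats)
    {
        if (fe)
            fe->denied_resp++;          /* frontend: always counted */
        if (listener_stats && li)
            li->denied_resp++;          /* listener: only if stats are collected */
        if (be)
            be->denied_resp++;          /* backend: once it is assigned */
        if (srv)
            srv->denied_resp++;         /* server: once attached to the stream */
    }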
The failed_secu counter is only used for the servers stats. It is used to report
the number of denied responses. On proxies, the same info is stored in the
denied_resp counter. So, it is more consistent to use the same field for
servers.
The stats field ST_F_EINT has been added to report internal errors encountered
per proxy, per listener and per server. It appears in the CLI export and on the
HTML stats page.
The new possible results for a custom action (deny/abort/invalid) are now handled
during HTTP rules evaluation. These codes are mapped on HTTP rules ones :
* ACT_RET_DENY => HTTP_RULE_RES_DENY
* ACT_RET_ABRT => HTTP_RULE_RES_ABRT
* ACT_RET_INV => HTTP_RULE_RES_BADREQ
For now, no custom action uses these new codes.
The new possible results for a custom action (deny/abort/invalid) are now
handled during TCP rules evaluation. For L4/L5 rules, the session is
rejected. For L7 rules, the right counter is incremented, then the connection
is killed.
For now, no custom action uses these new codes.
The HTTP_RULE_RES_ERROR code is now used by HTTP analyzers to handle internal
errors during HTTP rules evaluation. It is used instead of HTTP_RULE_RES_BADREQ,
used for invalid requests/responses. In addition, the SF_ERR_RESOURCE flag is
set on the stream when an allocation failure happens.
Note that the return value of http-response rules evaluation is now tested in
the same way as the result of http-request rules evaluation.
Now, processing errors are properly handled. Instead of returning an error 400
or 502, depending on where the error happens, an error 500 is now returned, and
the processing_errors counter is incremented. By default, when such an error is
detected, the SF_ERR_INTERNAL stream error is used. When the error is caused by
an allocation failure, and when it is reasonably possible, the SF_ERR_RESOURCE
stream error is used. Thanks to this patch, bad requests and bad responses
should be easier to detect.
Thanks to the commit "MINOR: actions: Use ACT_RET_CONT code to ignore an error
from a custom action", it is now possible to trigger an error from a custom
action in http rules. Now, when a custom action returns the ACT_RET_ERR code
from an http-request rule, an error 400 is returned. And from an http-response
rule, an error 502 is returned.
Be careful if this patch is backported. The other mentioned patch must be
backported first.
Thanks to the commit "MINOR: actions: Use ACT_RET_CONT code to ignore an error
from a custom action", it is now possible to trigger an error from a custom
action in tcp-content rules. Now, when a custom action returns the ACT_RET_ERR
code, it has the same behavior than a reject rules, the connection is killed.
Be careful if this patch is backported. The other mentioned patch must be
backported first.
Some custom actions are just ignored and skipped when an error is encountered.
In that case, we jump to the next rule. To do so, most of them use the return
code ACT_RET_ERR. Currently, for http rules and tcp content rules, it is not a
problem because this code is handled the same way as ACT_RET_CONT. But it
means there is no way to handle the error as other actions do. The custom
actions must handle the error and return ACT_RET_DONE. For instance, when
http-request rules are processed, an error when we try to replace a header
value leads to a bad request and an error 400 is returned to the client. But
when we fail to replace the URI, the error is silently ignored. This
difference between the custom actions and the others is an obstacle to
writing new custom actions.
So, in this first patch, ACT_RET_CONT is now returned from custom actions
instead of ACT_RET_ERR when an error is encountered that should be ignored.
The behavior remains the same, but it is now possible to handle true errors
using the return code ACT_RET_ERR. Some actions will probably be reviewed to
determine if an error is fatal or not. Other patches will be pushed to trigger
an error when a custom action returns the ACT_RET_ERR code.
This patch is not tagged as a bug because it is just a design issue. But others
will depend on it. So be careful during backports, if any.
The ruleset from which a TCP rule comes (the <from> field in the act_rule
structure) is only set when a rule is created from a registered keyword and not
for all TCP rules. But this information may be useful to check the configuration
validity or during the rule evaluation. So now, we systematically set it.
There are many specific http actions that don't use the action registration
mechanism (allow, deny, set-header...). Instead, the parsing of these actions is
inlined in the functions responsible for parsing the http-request/http-response
rules. There is no reason not to register an action keyword for all these
actions. That is the purpose of this patch. The new functions responsible for
parsing these http actions are defined in http_act.c.
During the parsing of the sc-inc-gpc0, sc-inc-gpc1 and sc-inc-gpt1 actions, the
maximum stick table track ID allowed is tested against ACT_ACTION_TRK_SCMAX. It
is the action number and not the maximum number of stick counters. Instead,
MAX_SESS_STKCTR must be used.
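A hedged sketch of the corrected bound check (the variable name and the exact
error message are illustrative):

    /* while parsing sc-inc-gpc0(<sc-id>) and friends: the id must designate
     * an existing stick counter, not an action number
     */
    if (num >= MAX_SESS_STKCTR) {
            memprintf(err, "invalid stick counter id %u (max: %u)",
                      num, MAX_SESS_STKCTR - 1);
            return ACT_RET_PRS_ERR;
    }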
This patch must be backported to all stable versions.
Functions to deinitialize the HTTP rules are buggy. These functions do not
check the action name to release the right part in the arg union. Only a few
pieces of information are released. For auth rules, the realm is released and
there is no problem here. But the regex <arg.hdr_add.re> is always
unconditionally released. So it
is easy to make these functions crash. For instance, with the following rule
HAProxy crashes during the deinit :
http-request set-map(/path/to/map) %[src] %[req.hdr(X-Value)]
For now, these functions are simply removed and we rely on the deinit function
used for TCP rules (renamed as deinit_act_rules()). This patch fixes the
bug. But arguments used by actions are not released at all, this part will be
addressed later.
This patch must be backported to all stable versions.
Filters may define the "http_end" callback, called at the end of the analysis of
any HTTP message. It is called at the end of the payload forwarding and it can
interrupt the stream processing. So we must be sure not to remove the XFER_BODY
analyzers while there is still at least one filter in progress on this callback.
Unfortunately, once the request and the response are both in the DONE or the
TUNNEL mode, we consider the XFER_BODY analyzer has finished its processing on
both sides. So it is possible to prematurely interrupt the execution of the
filters "http_end" callback.
To fix this bug, we first switch a message to the ENDING state. It is then switched
to DONE/TUNNEL mode only after the execution of the filters "http_end" callback.
This patch must be backported (and adapted) to 2.1, 2.0 and 1.9. The legacy HTTP
mode should probably be fixed too.
Send flags (CO_SFL_*) used when xprt->snd_buf() is called, in h1_send(), are now
inherited from the upper layer, when h1_snd_buf() is called. First, the flag
CO_SFL_MSG_MORE is no longer set if the output buffer is full, but only if the
stream-interface decides to set it. It has more info to do it than the
mux. Then, the flag CO_SFL_STREAMER is now also handled this way. It was just
ignored till now.
When HTX is enabled, the sample flags were set too early. When matching for
multiple HTTP headers, the sample is fetched more than once, meaning that the
flags would need to be set again. Instead, the flags are now set last (just
before the outermost function returns). This could be further improved by
passing around the message without calling prefetch again.
This patch must be backported as far as 1.9. It should fix bug #450.
Left shifting of large signed values and negative values is undefined.
In a test script clang's ubsan rightfully complains:
> runtime error: left shift of 1934242336581872173 by 13 places cannot be represented in type 'int64_t' (aka 'long')
This bug was introduced in the initial version of the DNS resolver
in 325137d603. The fix must be backported
to HAProxy 1.6+.
Modifies the existing sample extraction methods (smp_fetch_ssl_x_i_dn,
smp_fetch_ssl_x_s_dn) to accommodate a third argument that indicates the
DN should be returned in LDAP v3 format. When the third argument is
present, the new function (ssl_sock_get_dn_formatted) is called with
three parameters including the X509_NAME, a buffer containing the format
argument, and a buffer for the output. If the supplied format matches
the supported format string (currently only "rfc2253" is supported), the
formatted value is extracted into the supplied output buffer using
OpenSSL's X509_NAME_print_ex and BIO_s_mem. 1 is returned when a dn
value is retrieved. 0 is returned when a value is not retrieved.
Argument validation is added to each of the related sample
configurations to ensure the third argument passed is either blank or
"rfc2253" using strcmp. An error is returned if the third argument is
present with any other value.
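A simplified sketch of the extraction logic, assuming HAProxy's struct buffer
(area/size/data) and with error handling trimmed; the function name is
illustrative:

    static int dn_to_rfc2253(X509_NAME *name, struct buffer *out)
    {
            BIO *bio = BIO_new(BIO_s_mem());
            char *data;
            long len;
            int ret = 0;

            if (!bio)
                    return 0;
            /* print the DN in RFC2253 (LDAPv3) form into the memory BIO */
            if (X509_NAME_print_ex(bio, name, 0, XN_FLAG_RFC2253) >= 0) {
                    len = BIO_get_mem_data(bio, &data);
                    if (len > 0 && (size_t)len <= out->size) {
                            memcpy(out->area, data, len);
                            out->data = len;
                            ret = 1;   /* a DN value was retrieved */
                    }
            }
            BIO_free(bio);
            return ret;
    }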
Documentation was updated in configuration.txt and it was noted during
preliminary reviews that a CLEANUP patch should follow that adjusts the
documentation. Currently, this patch and the existing documentation are
copied with some minor revisions for each sample configuration. It
might be better to have one entry for all of the samples or entries for
each that reference back to a primary entry that explains the sample in
detail.
Special thanks to Chris, Willy, Tim and Aleks for the feedback.
Author: Elliot Otchet <degroens@yahoo.com>
Reviewed-by: Tim Duesterhus <tim@bastelstu.be>
The subscriber used to be passed as a "void *param" that was systematically
cast to a struct wait_event*. By now it appears clear that the subscribe()
call at every layer is well defined and always takes a pointer to an event
subscriber of type wait_event, so let's enforce this in the functions'
prototypes, remove the intermediary variables used to cast it and clean up
the comments to clarify what all these functions do in their context.
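For illustration, this is roughly what the change amounts to at the xprt level
(a sketch; parameter names may differ from the actual prototypes):

    /* before: the subscriber was an opaque pointer */
    int (*subscribe)(struct connection *conn, void *xprt_ctx,
                     int event_type, void *param);

    /* after: the subscriber is explicitly a wait_event */
    int (*subscribe)(struct connection *conn, void *xprt_ctx,
                     int event_type, struct wait_event *es);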
This is the last of the "recv_wait+send_wait merge" patches and is
functionally equivalent to previous commit "MEDIUM: mux-h2: merge
recv_wait and send_wait event notifications" but for FCGI this time.
The principle is pretty much the same, since the code is very similar.
We use a single wait_event for both recv and send and rely on the
subscribe flags to know the desired notifications.
This is the continuation of the recv+send event notifications merge
that was started. This patch is less trivial than the previous ones
because the existence of a send event subscription is also used to
decide to put a stream back into the send list.
This is the same principle as previous commit, but for the H1 mux this
time. The checks in the subscribe()/unsubscribe() calls were factored
and some BUG_ON() were added to detect unexpected cases.
h1_wake_for_recv() and h1_wake_for_send() needed to be refined to
consider the current subscription before deciding to wake up.
In practice all callers use the same wait_event notification for any I/O
so instead of keeping specific code to handle them separately, let's merge
them and it will allow us to create new events later.
Currently there's still lots of code in conn_complete_server() that performs
one half of the connection setup, which is then checked and finalized in
back_handle_st_con(). There isn't a valid reason for this anymore, we can
simplify this and make sure that conn_complete_server() only wakes the stream
up to inform it about the fact the whole connection stack is set up so that
back_handle_st_con() finishes its job at the stream-int level.
It looks like this could even be further simplified, but for now the code
was moved straight out of conn_complete_server() with no modification.
For more than a decade we've kept all the sess_update_st_*() functions
in stream.c while they're only there to work in relation with what is
currently being done in backend.c (srv_redispatch_connect, connect_server,
etc). Let's move all this pollution over there and take this opportunity
to try to find slightly less confusing names for these old functions
whose role is only to handle transitions from one specific stream-int
state:
sess_update_st_rdy_tcp() -> back_handle_st_rdy()
sess_update_st_con_tcp() -> back_handle_st_con()
sess_update_st_cer() -> back_handle_st_cer()
sess_update_stream_int() -> back_try_conn_req()
sess_prepare_conn_req() -> back_handle_st_req()
sess_establish() -> back_establish()
The last one remained in stream.c because it's more or less a completion
function which does all the initialization expected on a connection
success or failure, can set analysers and emit logs.
The other ones could possibly slightly benefit from being modified to
take a stream-int instead since it's really what they're working with,
but it's unimportant here.
This is the port to FCGI of previous commit "MEDIUM: mux-h2: do not make
an h2s subscribe to itself on deferred shut".
The purpose is to avoid subscribing to the send_wait list when trying to
close, because we'll soon have to merge both recv and send lists. Basic
testing showed no difference (performance nor issues).
The logic handling the deferred shutdown is a bit complex because it
involves a wait_event struct in each h2s dedicated to subscribing to
itself when shutdowns are not immediately possible. This implies that
we will not be able to support a shutdown and a receive subscription
in the future when we merge all wait events.
Let's solely rely on the H2_SF_WANT_SHUT_{R,W} flags instead and have
an autonomous tasklet for this. This requires to add a few controls
in the code because now when waking up a stream we need to check if it
is for I/O or just a shut, but since sending and shutting are exclusive
it's not difficult.
One point worth noting is that further resources could be shaved off
by only allocating the tasklet when failing to shut, given that in the
vast majority of streams it will never be used. In fact the sole purpose
of the tasklet is to support calling this code from outside the H2 mux
context. Looking at the code, it seems that not too many adaptations
would be required to have the send_list walking code deal with sending
the shut bits itself and further simplify all this.
This is essentially the same change as applied to mux-h2 in previous commit
"MEDIUM: mux-h2: do not try to stop sending streams on blocked mux". The
goal is to make sure we don't need to keep the item in the send_wait list
until it's executed so that we can later merge it with the recv_wait list.
No performance changes were observed.
This partially reverts commit d846c267 ("MINOR: h2: Don't run tasks that
are waiting to send if mux in full"). This commit was introduced to
limit the start/stop overhead incurred by waking many streams to let
only a few work. But since commit 9c218e7521 ("MAJOR: mux-h2: switch
to next mux buffer on buffer full condition."), this situation occurs
way less (typically 2000 to 4000 times less often) and the benefits of
the patch above do not outweigh its shortcomings anymore. And commit
c7ce4e3e7f ("BUG/MEDIUM: mux-h2: don't stop sending when crossing a
buffer boundary") addressed a root cause of many unexpected sleeps and
wakeups.
The main problem it's causing is that it requires to keep the element
in the send_wait list until it's executed, leaving the entry in an
uncertain state, and significantly complicating the coexistence of this
list and the wait list dedicated to shutdown. Also it happens that this
call to tasklet_remove_from_task_list() will not be usable anymore once
we start to support streams on different threads. And finally, some of
the other streams that we remove might very well have managed to find
their way to the h2_snd_buf() with an unblocked condition as well so it
is possible that some of these removals were not welcome.
So this patch now makes sure that send_wait is immediately nulled when
the task is woken up, and that we don't have to play with it afterwards.
Since we don't need to stop the tasklets anymore, we don't need the
sending_list that we can remove.
However one very useful benefit of the sending_list was that it used to
provide the information about the fact that the stream already tried to
send and failed. This was an important factor to improve fairness because
late arrived streams should not be allowed to send if others are already
scheduled. So this patch introduces a new per-stream flag H2_SF_NOTIFIED
to distinguish such streams.
With this patch the fairness is preserved, and the ratio of aborted
h2_snd_buf() due to other streams already sending remains quite low
(~0.3-2.1% measured depending on object size, this is within
expectations for 100 independent streams).
If the contention issue the patch above used to address comes up again
in the future, a much better (though more complicated) solution would
be to switch to per-connection buffer pools to distribute between the
connection and the streams so that by default there are more buffers
available for the mux and the streams only have some when the mux's are
unused, i.e. it would push the memory pressure back to the data layer.
One observation made while developing this patch is that when dealing
with large objects we still spend a huge amount of time scanning the
send_list with tasks that are already woken up every time a send()
manages to purge a bit more data. Indeed, by removing the elements
from the list when H2_SF_NOTIFIED is set, the network bandwidth on
1 MB objects fetched over 100 streams per connection increases by 38%.
This was not done here to preserve fairness but is worth studying (e.g.
by keeping a restart pointer on the list or just having a flag indicating
if an entry was added since last scan).
Commit 3c79d4bdc introduced the use of errno in pattern.c without
including errno.h.
If we build haproxy without any options, errno is not defined and the
build fails.
These ones used to serve as a set of switches between CO_FL_SOCK_* and
CO_FL_XPRT_*, and now that the SOCK layer is gone, they're always a
copy of the last known CO_FL_XPRT_* ones that is resynchronized before
I/O events by calling conn_refresh_polling_flags(), and that are pushed
back to FDs when detecting changes with conn_xprt_polling_changes().
While these functions are not particularly heavy, what they do is
totally redundant by now because the fd_want_*/fd_stop_*() actions
already perform test-and-set operations to decide to create an entry
or not, so they do the exact same thing that is done by
conn_xprt_polling_changes(). As such it is pointless to call that
one, and given that the only reason to keep CO_FL_CURR_* is to detect
changes there, we can now remove them.
Even if this does only save very few cycles, this removes a significant
complexity that has been responsible for many bugs in the past, including
the last one affecting FreeBSD.
All tests look good, and no performance regressions were observed.
The only case where this made sense was with mux_h1 but since we
introduced CS_FL_MAY_SPLICE, we don't need to rely on this anymore,
thus we don't need to clear it either when we do not splice.
There is a last check on this flag used to determine if the rx channel
is full and that cannot go away unless it's changed to use the CS
instead but for now this wouldn't add any benefit so better not do
it yet.
CO_FL_WAIT_ROOM is set by the splicing function in raw_sock, and cleared
by the stream-int when splicing is disabled, as well as in
conn_refresh_polling_flags() so that a new call to ->rcv_pipe() could
be attempted by the I/O callbacks called from conn_fd_handler(). This
clearing in conn_refresh_polling_flags() makes no sense anymore and is
in no way related to the polling at all.
Since we don't call them from there anymore it's better to clear it
before attempting to receive, and to set it again later. So let's move
this operation where it should be, in raw_sock_to_pipe() so that it's
now symmetric. It was also placed in raw_sock_to_buf() so that we're
certain that it gets cleared if an attempt to splice is replaced with
a subsequent attempt to recv(). This was previously already achieved
by the call to conn_refresh_polling_flags(). Now it could theoretically be
removed from the stream-int.
We need to do some error handling after we call fgets() to make sure everything
went fine. If we don't, users can be fooled into thinking they can load patterns
from a directory because cfgparse doesn't flinch. This applies to ACL pattern
and map files.
This should be backported to all supported versions.
Commit c640ef1a7d ("BUG/MINOR: stream-int: avoid calling rcv_buf() when
splicing is still possible") fixed splicing in TCP and legacy mode but
broke it badly in HTX mode.
What happens in HTX mode is that the channel's to_forward value remains
set to CHN_INFINITE_FORWARD during the whole transfer, and as such it is
not a reliable signal anymore to indicate whether more data are expected
or not. Thus, when data are spliced out of the mux using rcv_pipe(), even
when the end is reached (that only the mux knows about), the call to
rcv_buf() to get the final HTX blocks completing the message was skipped
and there was often no new event to wake this up, resulting in transfer
timeouts at the end of large objects.
All this goes down to the fact that the channel has no more information
about whether it can splice or not despite being the one having to take
the decision to call rcv_pipe() or not. And we cannot afford to call
rcv_buf() unconditionally because, as the commit above showed, this
reduces the forwarding performance by 2 to 3 in TCP and legacy modes
due to data lying in the buffer preventing splicing from being used
later.
The approach taken by this patch consists in offering the muxes the ability
to report a bit more information to the upper layers via the conn_stream.
This information could simply be to indicate that more data are awaited
but the real need being to distinguish splicing and receiving, here
instead we clearly report the mux's willingness to be called for splicing
or not. Hence the flag's name, CS_FL_MAY_SPLICE.
The mux sets this flag when it knows that its buffer is empty and that
data waiting past what is currently known may be spliced, and clears it
when it knows there's no more data or that the caller must fall back to
rcv_buf() instead.
The stream-int code now uses this to determine if splicing may be used
or not instead of looking at the rcv_pipe() callbacks through the whole
chain. And after the rcv_pipe() call, it checks the flag again to decide
whether it may safely skip rcv_buf() or not.
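A condensed sketch of how the flag is meant to flow between the layers
(pseudo-C; the exact fields and call signatures are simplified):

    /* mux side: advertise whether splicing may be attempted */
    if (mux_buf_is_empty && more_payload_expected)
            cs->flags |= CS_FL_MAY_SPLICE;
    else
            cs->flags &= ~CS_FL_MAY_SPLICE;

    /* stream-int side: try splicing first, then re-check the flag before
     * deciding whether the buffered read may be skipped
     */
    if (cs->flags & CS_FL_MAY_SPLICE)
            ret = conn->mux->rcv_pipe(cs, pipe, count);
    if (!(cs->flags & CS_FL_MAY_SPLICE))
            ret = conn->mux->rcv_buf(cs, buf, count, flags);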
All this bitfield dance remains a bit complex and it starts to appear
obvious that splicing vs reading should be a decision of the mux based
on permission granted by the data layer. This would however increase
the API's complexity but definitely needs to be thought about, and should
even significantly simplify the data processing layer.
The way it was integrated in mux-h1 will also result in no more calls
to rcv_pipe() on chunked encoded data, since these ones are currently
disabled at the mux level. However once the issue with chunks+splice
is fixed, it will be important to explicitly check for curr_len|CHNK
to set MAY_SPLICE, so that we don't call rcv_buf() after each chunk.
This fix must be backported to 2.1 and 2.0.
In process_sticking_rules() we only want to apply the first store-request
rule for a given table, but when doing so we need to make sure we only
count actual store-request rules when we list the sticking rules.
Failure to do so leads to not being able to write store-request and match
sticking rules in any order as a match rule after a store-request rule
will be ignored.
The following configuration reproduces the issue:
  global
      stats socket /tmp/foobar
  defaults
      mode http
  frontend in
      bind *:8080
      default_backend bar
  backend bar
      server s1 127.0.0.1:21212
      server s2 127.0.0.1:21211
      stick store-request req.hdr(foo)
      stick match req.hdr(foo)
      stick-table type string size 10
  listen foo
      bind *:21212
      bind *:21211
      http-request deny deny_status 200 if { dst_port 21212 }
      http-request deny
This patch fixes issue #448 and should be backported as far as 1.6.
Since the fix 5fd3b28 ("BUG/MEDIUM: cli: _getsocks must send the peers
sockets") for bug #443, the code which sends the sockets for the peers
and the proxies is duplicated. This patch moves this code into a separate
function.
This bug prevents HAProxy from being reloaded when you have both the seamless
reload (-x / expose-fd listeners) and the peers.
Indeed the _getsocks command does not send the FDs of the peers
listeners, so if no reuseport is possible during the bind, the new
process will fail to bind and exit.
With this feature, it is not possible to fallback on the SIGTTOU method
if we didn't receive all the sockets, because you can't close() the
sockets of the new process without closing those of the previous
process, they are the same.
Should fix bug #443.
Must be backported as far as 1.8.
Wietse Venema reported in the thread below that we have a signedness
issue with our hashes implementations: due to the use of const char*
for the input key that's often text, the crc32, sdbm, djb2, and wt6
algorithms return a platform-dependent value for binary input keys
containing bytes with bit 7 set. This means that an ARM or PPC
platform will hash binary inputs differently from an x86 typically.
Worse, some algorithms are well defined in the industry (like CRC32)
and do not provide the expected result on x86, possibly causing
interoperability issues (e.g. a user-agent would fail to compare the
CRC32 of a message body against the one computed by haproxy).
Fortunately, and contrary to the first impression, the CRC32c variant
used in the PROXY protocol processing is not affected. Thus the impact
remains very limited (the vast majority of input keys are text-based,
such as user-agent headers for example).
This patch addresses the issue by fixing all hash functions' prototypes
(even those not affected, for API consistency). A reg test will follow
in another patch.
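A minimal illustration of the signedness issue (not HAProxy's exact code): with
a plain char pointer, input bytes with bit 7 set are sign-extended to negative
values on some platforms and not on others, so casting to unsigned char makes
the result identical everywhere.

    /* djb2-style hash; the only relevant change is the unsigned cast */
    unsigned int hash_djb2(const void *input, int len)
    {
            const unsigned char *key = input;  /* was: const char * */
            unsigned int hash = 5381;

            while (len--)
                    hash = hash * 33 + *key++; /* *key is now always 0..255 */
            return hash;
    }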
The vast majority of users do not use these hashes. And among those
using them, very few will pass them on binary inputs. However, for the
rare ones doing it, this fix MAY have an impact during the upgrade. For
example if the package is upgraded on one LB then on another one, and
the CRC32 of a binary input is used as a stick table key (why?) then
these CRCs will not match between both nodes. Similarly, if
"hash-type ... crc32" is used, LB inconsistency may appear during the
transition. For this reason it is preferable to apply the patch on all
nodes using such hashes at the same time. Systems upgraded via their
distros will likely observe the least impact since they're expected to
be upgraded within a short time frame.
And it is important for distros NOT to skip this fix, in order to avoid
distributing an incompatible implementation of a hash. This is the
reason why this patch is tagged as MAJOR, even though it's extremely
unlikely that anyone will ever notice a change at all.
This patch must be backported to all supported branches since the
hashes were introduced in 1.5-dev20 (commit 98634f0c). Some parts
may be dropped since implemented later.
Link to Wietse's report:
https://marc.info/?l=postfix-users&m=157879464518535&w=2
rate-limits are valid for both frontend and listen, but not backend; so
we can simplify this check in a similar manner as is done in e.g.
max-keep-alive-queue.
This should fix github issue #449.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Commit 08fa16e397 made sure
we let the fd layer know we didn't want to poll anymore if
we failed to send and sendto() returned EAGAIN.
However, just disabling the polling with fd_stop_send()
while not notifying the connection layer means the
connection layer may believe the polling is activated
and nothing needs to be done when it is wrong.
A better fix would be to revamp that whole code; for the
time being, just make sure the fd and connection
layers are properly synchronised.
This should fix the problem recently reported on FreeBSD.
In h1_snd_buf(), only attempt to call h1_send() if we haven't
already subscribed.
It makes no sense to do it if we have subscribed, as we know we failed
to send before; it would create a useless call to sendto(), and
in 2.2 the call to raw_sock_from_buf() would disable polling if
it is enabled.
This should be backported to 2.2, 2.1, 2.0 and 1.9.
Since commit 27d93c3f94 ("BUG/MAJOR: compression/cache: Make it
really works with these both filters"), we no longer use the
deinit_comp_ctx section.
This should fix github issue #441
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
If you reload a haproxy started in master-worker mode with
"master-worker" in the configuration, and no "-W" argument,
the new process loses the fact that it was in master-worker mode,
resulting in weird behaviors.
The biggest problem is that if it is reloaded with a bad configuration,
the master will exit instead of remaining in waitpid mode.
This problem was discovered in bug #443.
Should be backported in every version using the master-worker mode.
(as far as 1.8)
When trying to start HAProxy with the master CLI and more than one
program in the configuration, it refuses to start with:
[ALERT] 013/132926 (1378) : parsing [cur--1:0] : proxy 'MASTER', another server named 'cur--1' was already defined at line 0, please use distinct names.
[ALERT] 013/132926 (1378) : Fatal errors found in configuration.
The problem is that haproxy tries to create a server for the MASTER
proxy but only the workers are supposed to be in the server list.
Fix issue #446.
Must be backported as far as 2.0.
In version 2.0, after commit 9c218e7521 ("MAJOR: mux-h2: switch to next
mux buffer on buffer full condition."), the H2 mux started to use a ring
buffer for the output data in order to reduce competition between streams.
However, one corner case was suboptimally covered: when crossing a buffer
boundary, we have to shrink the outgoing frame size to the one left in
the output buffer, but this shorter size is later used as a signal of
incomplete send due to a buffer full condition (which used to be true when
using a single buffer). As a result, function h2s_frt_make_resp_data()
used to return less than requested, which in turn would cause h2_snd_buf()
to stop sending and leave some unsent data in the buffer, and si_cs_send()
to subscribe for sending more later.
But it goes a bit further than this, because subscribing to send again
causes the mux's send_list not to be empty anymore, hence extra streams
can be denied the access to the mux till the first stream is woken again.
This causes a nasty wakeup-sleep dance between streams that makes it
totally impractical to try to remove the sending list. A test showed
that it was possible to observe 3 million h2_snd_buf() giveups for only
100k requests when using 100 concurrent streams on 20kB objects.
It doesn't seem likely that a stream could get blocked and time out due
to this bug, though it's not possible either to demonstrate the opposite.
One risk is that incompletely sent streams do not have any blocking flags
so they may not be identified as blocked. However on first scan of the
send_list they meet all conditions for a wakeup.
This patch simply allows processing to continue on a new frame after a partial
frame. With only this change, the number of failed h2_snd_buf() was
divided by 800 (4% of calls). And by slightly increasing the H2C_MBUF_CNT
size, it can go down to zero.
This fix must be backported to 2.1 and 2.0.
In order to properly close connections established from Lua in case
a Lua context dies, the context currently automatically gets a flag
HLUA_MUST_GC set whenever an outgoing connection is used. This causes
the GC to be enforced on the context's death as well as on yield. First,
it does not appear necessary to do it when yielding, since if the
connections die they are already cleaned up. Second, the problem with
the flag is that even if a connection gets properly closed, the flag is
not removed and the GC continues to be called on the Lua context.
The impact on performance looks quite significant, as noticed and
diagnosed by Sadasiva Gujjarlapudi in the following thread:
https://www.mail-archive.com/haproxy@formilux.org/msg35810.html
This patch changes the flag for a counter so that each created
connection increments it and each cleanly closed connection decrements
it. That way we know we have to call the GC on the context's death only
if the count is non-null. As reported in the thread above, the Lua
performance gain is now over 20% by doing this.
Thanks to Sada and Thierry for the design discussion and tests that
led to this solution.
Since commit 3180f7b554 ("MINOR: ssl: load certificates in
alphabetical order"), `readdir` was replaced by `scandir`. We can indeed
replace it with a check on the previous `stat` call.
This micro cleanup can be a good benefit when you have hundreds of bind
lines which open TLS certificates directories in terms of syscall,
especially in a case of frequent reloads.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Previous commit 989539b048 ("BUG/MINOR: mux-h2: use a safe
list_for_each_entry in h2_send()") accidentally lost its sending_list test,
resulting in some elements being woken up again while already in the
sending_list and h2_unsubscribe() crashing on integrity tests (only
when built with DEBUG_DEV).
If the fix above is backported this one must be as well.
h2_send() uses list_for_each_entry() to scan paused streams and resume
them, but happily deletes any leftover from a previous failed unsubscribe,
which is obviously not safe and would corrupt the list. In practice the fact
that it works is a proof that this doesn't happen, but it's not the best way
to prove it.
In order to fix this and reduce the maintenance burden caused by code
duplication (this list walk exists at 3 places), let's introduce a new
function h2_resume_each_sending_h2s() doing exactly this and use it at
all 3 places.
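A hedged sketch of what the factored walk looks like, using HAProxy's safe list
iterator; the stopping condition and the stream fields are illustrative, not the
exact code:

    struct h2s *h2s, *h2s_back;

    /* the _safe variant keeps a back pointer so the current element may be
     * detached while iterating; stop as soon as the mux cannot send anymore
     */
    list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
            if (h2c->flags & H2_CF_MUX_BLOCK_ANY)
                    break;
            LIST_DEL(&h2s->list);
            LIST_INIT(&h2s->list);
            tasklet_wakeup(h2s->send_wait->tasklet); /* field names illustrative */
    }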
This bug was introduced as a side effect of fix 998410a41b ("BUG/MEDIUM:
h2: Revamp the way send subscriptions works.") so it should be backported
as far as 1.9.
When an HTTP response is received, at the stream-interface level, if a L7 retry
must be triggered because of the status code, the response is trashed and a read
error is reported on the response channel. Then the stream handles this error
and performs the retry, except if the maximum number of connection retries is reached. In
this case, an error is reported. Because the server response was already trashed
by the stream-interface, a generic 502 error is returned to the client instead
of the server's one.
Now, the stream-interface triggers a L7 retry only if the maximum number of connection
retries is not already reached. Thus, at the end, the last server's response is
returned.
This patch must be backported to 2.1 and 2.0. It should fix the issue #439.
Since commit 980855bd95 ("BUG/MEDIUM: server: initialize the orphaned
conns lists and tasks at the end"), we no longer use the err section.
This should fix github issue #438
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Apparently harmless commit 0591bf7deb ("MINOR: listener: make the
wait paths cleaner and more reliable") caused a nasty regression and
revealed a rare race that hits regtest stickiness/lb-services.vtc
about 4% of the times for 8 threads.
The problem is that when a multi-threaded listener wakes up on an
incoming connection, several threads can receive the event, especially
when idle. And all of them will race to accept the connections in
parallel, adjusting the listener's nbconn and proxy's feconn until
one reaches the proxy's limit and declines. At this step the changes
are cancelled, the listener is marked "limited", and when the threads
exit the function, one of them will unlimit the listener/proxy again
so that it can accept incoming connections again.
The problem happens when many threads connect to a small peers section
because its maxconn is very limited (typically 6 for 2 peers), and it's
sometimes possible for enough competing threads to hit the limit and
one of them will limit the listener and queue the proxy's task... except
that peers do not initialize their proxy task since they do not use rate
limiting. Thus the process crashes when doing task_schedule(p->task).
Prior to the cleanup patch above, this didn't happen because the error
path that was dedicated to only limiting the listener did not call
task_schedule(p->task).
Given that the proxy's task is optional, and that the expire value
passed there is always TICK_ETERNITY, it's sufficient and reasonable to
avoid calling this task_schedule() when expire is not set. And for long
term safety we can also avoid doing it when the task is not set. A first
fix consisted in allocating a task for the peers proxies but it's never
used and would eat resources for no reason.
No backport is needed as this commit was only merged into 2.2.
Since commit fa8aa867b9 ("MEDIUM: connections: Change struct
wait_list to wait_event."), we no longer use this section.
this should fix github issue #437
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Analysing traces revealed a rare but surprising pattern :
connect() = -1 EAGAIN
send() = success
epoll_ctl(ADD, EPOLLOUT)
epoll_wait()
recvfrom() = success
close()
What happens is that the failed connect() creates an FD update for pollout,
but the successful synchronous send() doesn't disable it because polling was
only disabled in the FD handler. But a successful synchronous connect()
cancellation is a good opportunity to disable polling before it's effectively
enabled in the next loop, so better disable it when reaching the end. The
cost is very low if it was already disabled anyway (one atomic op).
This only affects local connections but with this the typical number of
epoll_ctl() calls per connection dropped from ~4.2 to ~3.8 for plain TCP
and 10k transfers.
In dns_send_query(), there's no point in first waking up the FD, to get
called back by the poller to send the request and sleep. Instead let's
simply send the request as soon as it's known and only subscribe to the
poller when the socket buffers are full and it's required to poll (i.e.
almost never).
This significantly reduces the number of calls to the poller. A large
config sees the number of epoll_ctl() calls reduced from 577 to 7 over
10 seconds, the number of recvfrom() from 1533 to 582 and the number of
sendto() from 369 to 162.
It also has the extra benefit of building each requests only once per
resolution and sending it to multiple resolvers instead of rebuilding
it for each and every resolver.
This will reduce the risk of seeing situations similar to bug #416 in
the future.
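A hedged sketch of the send-first pattern described above (simplified, not the
exact dns_send_query() code):

    /* try to send the query right away; only fall back to polling when the
     * socket buffers are full
     */
    ret = send(fd, query, query_len, 0);
    if (ret < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK) {
                    fd_want_send(fd);   /* let the poller call us back */
                    return 0;
            }
            return -1;                  /* real error */
    }
    fd_stop_send(fd);                   /* sent synchronously, no need to poll */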
In session_accept_fd() we can perform a synchronous call to
conn_complete_session() and if it succeeds the connection is accepted
and turned into a session. If it fails we take it as an error while it
is not; in this case, it's just that a tcp-request rule has decided to
reject the incoming connection. The problem with reporting such an event
as an error is that the failed status is passed down to the listener code
which decides to disable accept() for 100ms in order to leave some time
for transient issues to vanish, and that's not what we want to do here.
This fix must be backported as far as 1.7. In 1.7 the code is a bit
different as tcp_exec_l5_rules() is called directly from within
session_new_fd() and ret=0 must be assigned there.
In co_inject(), data must be inserted at the end of output, not the end of
input. For the record, this function does not take care of input data which are
supposed to not exist. But the caller may reset input data after or before the
call. It is its own choice.
This bug, among other effects, is visible when a redirect is performed on
the response path, on legacy HTTP mode (so for HAProxy < 2.1). The redirect
response is appended after the server response when it should overwrite it.
Thanks to Kevin Zhu <ip0tcp@gmail.com> to report the bug. It must be backported
as far as 1.9.
When a redirect rule is executed on the response path, we must truncate the
received response. Otherwise, the redirect is appended after the response, which
is sent to the client. So it is obviously a bug because the redirect is not
performed. With bodyless responses, it is the "only" bug. But if the response
has a body, the result may be invalid. If the payload is not fully received yet
when the redirect is performed, an internal error is reported.
It must be backported as far as 1.9.
In proxy_capture_error(), input data are copied in the error snapshot. The copy
must take care of the data wrapping. But the length of the first block is
wrong. It should be the amount of contiguous input data that can be copied
starting from the input's beginning. But the minimum between the input length
and the buffer size minus the input length is used instead. So it is a problem
if input data are wrapping or if more than half of the buffer is used by
input data.
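A hedged sketch of the intended computation (generic ring-buffer arithmetic,
not the exact proxy_capture_error() code):

    /* given: a ring buffer <buf> of <size> bytes whose input starts at
     * offset <head> and spans <len> bytes, copy it flat into <dst>
     */
    size_t block1 = len;

    if (block1 > size - head)
            block1 = size - head;            /* contiguous part up to the wrap */
    memcpy(dst, buf + head, block1);
    if (len > block1)                        /* wrapped part, if any */
            memcpy(dst + block1, buf, len - block1);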
This patch must be backported as far as 1.9.
During H1 messages parsing, when the parser has finished parsing a full header
line, some tests are performed on its value, depending on its name, to be sure
it is valid. The content-length is checked and converted to an integer and the host
header is also checked. If an error occurred during this step, the error
position must point on the header value. But from the parser point of view, we
are already on the start of the next header. Thus the effective reported
position in the error capture is the beginning of the unparsed header line. It
is a bit confusing when we try to figure out why a message is rejected.
Now, the parser state is updated to point on the invalid value. This way, the
error position really points on the right position.
This patch must be backported as far as 1.9.
The "need_out" variable was used to let the ssl code know we're done
reading early data, and we should start the handshake.
Now that the handshake function is responsible for taking care of reading
early data, all that logic has been removed from ssl_sock_to_buf(), but
need_out was forgotten and left behind. Remove it now.
This patch was submitted by William Dauchy <w.dauchy@criteo.com>, and should
fix github issue #434.
This should be backported to 2.0 and 2.1.
In the context of seamless reload and busy polling, older processes will
create unnecessary cpu conflicts; we can assume there is no need for busy
polling for old processes which are waiting to be terminated.
This patch is not a bug fix itself but might be a good stability
improvement when you are in the context of frequent seamless reloads with
a high "hard-stop-after" value; for that reason I think this patch
should be backported in all 2.x versions.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
In connect_server(), when we decide we want to kill the connection of
another thread because there are too many idle connections, hold the
toremove_lock of the corresponding thread, otherwise there's a small race
condition where we could try to add the connection to the toremove_connections
list while it has already been free'd.
This should be backported to 2.0 and 2.1.
When creating a new check connection, only attempt to add a handshake
connection if the connection has been fully initialized. It can not be the
case if a DNS resolution is still pending, and thus we don't yet have the
address for the server, as the handshake code assumes the connection is fully
initialized and would otherwise crash.
This is not ideal, the check probably shouldn't run until we have an address,
as it leads to check failures with "Socket error".
While I'm there, also add an xprt handshake if we're using socks4, otherwise
checks wouldn't be able to use socks4 properly.
This should fix github issue #430
This should be backported to 2.0 and 2.1.
The cost of enabling polling in one direction with epoll is very high
because it requires one syscall per FD and per direction change. In
addition we don't know about input readiness until we either try to
receive() or enable polling and watch the result. With HTTP keep-alive,
both are equally expensive as it's very uncommon to see the server
instantly respond (unless it's a second stage of the same process on
localhost, which has become much less common with threads).
But when a connection is established it's also quite usual to have to
poll for sending (except on localhost or UNIX sockets where it almost
always instantly works). So this cost of polling could be factored out
with the second step if both were enabled together.
This is the idea behind this patch. What it does is to always enable
polling for Rx if it's not ready and at least one direction is active.
This means that if it's not explicitly disabled, or if it was but in a
state that causes the loss of the information (rx ready cannot be
guessed), then let's take any opportunity for a polling change to
enable it at the same time, and learn about rx readiness for free.
In addition the FD never gets unregistered for Rx unless it's ready
and was blocked (buffer full). This avoids a lot of the flip-flop
behaviour at beginning and end of requests.
On a test with 10k requests in keep-alive, the difference is quite
noticeable:
Before:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
83.67 0.010847 0 20078 epoll_ctl
16.33 0.002117 0 2231 epoll_wait
0.00 0.000000 0 20 20 connect
------ ----------- ----------- --------- --------- ----------------
100.00 0.012964 22329 20 total
After:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
96.35 0.003351 1 2644 epoll_wait
2.36 0.000082 4 20 20 connect
1.29 0.000045 0 66 epoll_ctl
------ ----------- ----------- --------- --------- ----------------
100.00 0.003478 2730 20 total
It may also save a recvfrom() after connect() by changing the following
sequence, effectively saving one epoll_ctl() and one recvfrom() :
before | after
-----------------------------+----------------------------
- connect() | - connect()
- epoll_ctl(add,out) | - epoll_ctl(add, in|out)
- sendto() | - epoll_wait() = out
- epoll_ctl(mod,in|out) | - send()
- epoll_wait() = out | - epoll_wait() = in|out
- recvfrom() = EAGAIN | - recvfrom() = OK
- epoll_ctl(mod,in) | - recvfrom() = EAGAIN
- epoll_wait() = in | - epoll_ctl(mod, in)
- recvfrom() = OK | - epoll_wait()
- recvfrom() = EAGAIN |
- epoll_wait() |
(...)
Now on a 10M req test on 16 threads with 2k concurrent conns and 415kreq/s,
we see 190k updates total and 14k epoll_ctl() only.
Both flags became equal in commit 82967bf9 ("MINOR: connection: adjust
CO_FL_NOTIFY_DATA after removal of flags"), which already predicted the
overlap between xprt_done_cb() and wake() after the removal of the DATA
specific flags in 1.8. Let's simply remove CO_FL_NOTIFY_DATA since the
"_DONE" version already covers everything and explains the intent well
enough.
The conn_fd_handler used to have one possible call to this function to
notify about end of handshakes, and another one to notify about connection
setup or error. But given that we're now only performing wakeup calls
after connection validation, we don't need to keep two places to run
this test since the conditions do not change in between.
This patch merges the two tests into a single one and moves the
CO_FL_CONNECTED test appropriately as well so that it's called even
on the error path if needed.
In conn_fd_handler() we used to first give a chance to the send()
callback to try to send data and validate the connection at the same
time. But since 1.9 we do not call this callback anymore inline, it's
scheduled. So let's validate the connection earlier so that all other
decisions can be taken based on this confirmation. This may notably
be useful to the xprt_done_cb() to know that the connection was
properly validated.
The function is not TCP-specific at all, it covers all FD-based sockets
so let's move this where other similar functions are, in connection.c,
and rename it conn_fd_check().
Now that we know what pollers can return ERR/HUP, we can take this
into account to save one syscall: with such a poller, if neither are
reported, then we know the connection succeeded and we don't need to
go with getsockopt() nor connect() to validate this. In addition, for
the remaining cases (select() or suspected errors), we'll always go
through the extra connect() attempt and enumerate possible "in progress",
"connected" or "failed" status codes and take action solely based on
this.
This results in one saved syscall on modern pollers, only a second
connect() still being used on select() and the server's address never
being needed anymore.
Note that we cannot safely replace connect() with getsockopt() as the
latter clears the error on the socket without saving it, and health
checks rely on it for their reporting. This would be OK if the error
was saved in the connection itself.
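A hedged sketch of the connect()-based confirmation logic (standard POSIX
semantics, error handling simplified):

    /* a second connect() on a pending non-blocking socket tells us where we
     * stand without clearing the socket error as getsockopt() would
     */
    if (connect(fd, addr, addrlen) == 0 || errno == EISCONN) {
            /* connection is established */
    }
    else if (errno == EALREADY || errno == EINPROGRESS) {
            /* still in progress: keep waiting for write readiness */
    }
    else {
            /* definitely failed (ECONNREFUSED, ETIMEDOUT, ...) */
    }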
In practice it's all pollers except select(). It turns out that we're
keeping some legacy code only for select and enforcing it on all
pollers, let's offer the pollers the ability to declare that they
do not need that.
Since commit c3df4507fa ("MEDIUM: connections: Wake the upper layer even
if sending/receiving is disabled.") the send/recv callbacks are called
on I/O if the FD is ready and not just if it's active. This means that
in some situations (e.g. send ready but nothing to send) we may
needlessly enter the if() block, notice we're not subscribed, set
io_available=1 and call the wake() callback even if we're just called
for read activity. Better make sure we only do this when the FD is
active in that direction.
This may be backported as far as 2.0 though it should remain under
observation for a few weeks first as the risk of harm by a mistake
is higher than the trouble it should cause.
Two regtests regularly fail in a random fashion depending on the machine's
load (one could really wonder if it's really worth keeping such
unreproducible tests) :
- tcp-check_multiple_ports.vtc
- 4be_1srv_smtpchk_httpchk_layer47errors.vtc
It happens that one of the reason is the time it takes to connect to
the local socket (hence the load-dependent aspect): if connect() on the
loopback returns EINPROGRESS then this status is reported instead of a
real error. Normally such a test is expected to see the error cleaned
by tcp_connect_probe() but it really depends on the timing and instead
we may very well send() first and see this error. The problem is that
everything is collected based on errno, hoping it won't get molested
on the way from the last unsuccessful syscall to wake_srv_chk(), which
obviously is hard to guarantee.
This patch at least makes sure that a few non-errors are reported as
zero just like EAGAIN. It doesn't fix the root cause but makes it less
likely to report incorrect failures.
This fix could be backported as far as 1.9.
SSL_CTX_set_ecdh_auto() is not defined when OpenSSL 1.1.1 is compiled
with the no-deprecated option. Remove existing, incomplete guards and
add a compatibility macro in openssl-compat.h, just as OpenSSL does:
bf4006a6f9/include/openssl/ssl.h (L1486)
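For reference, a sketch of the kind of compatibility macro meant here,
mirroring OpenSSL's own definition (automatic ECDH curve selection is always on
in 1.1.0+, so the call only needs to evaluate to "success"):

    #if (OPENSSL_VERSION_NUMBER >= 0x10100000L)
    #ifndef SSL_CTX_set_ecdh_auto
    #define SSL_CTX_set_ecdh_auto(ctx, onoff) ((onoff) != 0)
    #endif
    #endif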
This should be backported as far as 2.0 and probably even 1.9.
With a TCP frontend, it is possible to upgrade a connection to HTTP when the
backend is in HTTP mode. Concretely, the upgrade installs a new mux. So, once it is
done, the downgrade to TCP is no longer possible, and we must take care to never
assign a TCP backend to a stream on this connection. Otherwise, HAProxy crashes
because raw data from the server are handled as structured data on the client
side.
This patch fixes issue #420. It must be backported to all versions
supporting the HTX.
A regression was introduced by the commit 76014fd1 ("MEDIUM: h1-htx: Add HTX EOM
block when the message is in H1_MSG_DONE state"). When nothing is copied in the
channel's buffer when the input message is parsed, we erroneously pretend it is
because there is not enough room by setting the CS_FL_WANT_ROOM flag on the
conn-stream. This happens when a partial request is parsed. Because of this
flag, we never try anymore to get input data from the mux because we first wait
for more room in the channel's buffer, which is empty. Because of this bug, it
is pretty easy to freeze a h1 connection.
To fix the bug, we must obviously set the CS_FL_WANT_ROOM flag only when there
are still data to transfer while the channel's buffer is not empty.
This patch must be backported if the patch 76014fd1 is backported too. So for
now, no backport needed.
Issue #417 reports a possible memory leak in the state-file loading code.
There's one such place in the loop which corresponds to parsing errors
where the currently allocated line is not freed when dropped. In any
case this is very minor in that no more than the file's length may be
lost in the worst case, considering that the whole file is kept anyway
in case of success. This fix addresses this.
It should be backported to 2.1.
The global state file tree isn't configured for unique keys, so if an
entry appears multiple times, e.g. due to a bogus script that concatenates
entries multiple times, this will needlessly eat memory. Let's just drop
duplicates.
This should be backported to 2.1.
Starting haproxy with a state file of 700k servers eats 11.2 GB of RAM
due to a mistake in the function that loads the strings into a tree: it
allocates a full buffer for each backend+server name instead of allocating
just the required string. By just fixing this we're down to 80 MB.
This should be backported to 2.1.
There's a very hard-to-trigger bug in the FD list code where the
fd_add_to_fd_list() function assumes that if the FD it's trying to add
is already locked, it's in the process of being added. Unfortunately, it
can also be in the process of being removed. It is very hard to trigger
because it requires that one thread is removing the FD while another one
is adding it. First, very few FDs run on multiple threads (listeners and
DNS), and second, it does not make sense to add and remove the FD at the
same time.
In practice the DNS code built on the older callback-only model does
perform bursts of fd_want_send() for all resolvers at once when it wants
to send a new query (dns_send_query()). And this is more likely to happen
when there are lots of resolutions in parallel and many resolvers, because
the dns_response_recv() callback can also trigger a series of queries on
all resolvers for each invalid response it receives. This means that it
really is perfectly possible to both stop and start in parallel during
short periods of time there.
This issue was not reported before 2.1, but 2.1 had the FD cache, built
on the exact same code base. It's very possible that the issue caused
exactly the opposite situation, where an event was occasionally lost,
causing a DNS retry that worked, and nobody noticing the problem in the
end. In 2.1 the lost entries are the updates asking for not polling for
writes anymore, and the effect is that the poller continuously reports
writability on the socket when the issue happens.
This patch fixes bug #416 and must be backported as far as 1.8, and
absolutely requires that previous commit "MINOR: fd/threads: make
_GET_NEXT()/_GET_PREV() use the volatile attribute" is backported as
well otherwise it will make the issue worse.
Special thanks to Julien Pivotto for setting up a reliable reproducer
for this difficult issue.
These macros are either used between atomic ops which cause the volatile
to be implicit, or with an explicit volatile cast. However not having it
in the macro causes some traps in the code because certain loop paths
cannot safely be used without risking infinite loops if one isn't careful
enough.
Let's place the volatile attribute inside the macros and remove them from
the explicit places to avoid this. It was verified that the output executable
remains exactly the same byte-wise.
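A hedged sketch of the idea (the real macros compute the list element from an
offset into fdtab; the structure and field names are simplified):

    /* before: every caller had to remember to go through a volatile cast */
    #define _GET_NEXT(fd, off) \
            (((struct fdlist_entry *)((char *)&fdtab[fd] + (off)))->next)

    /* after: the volatile access is part of the macro itself, so no loop
     * can accidentally keep re-reading a stale cached value
     */
    #define _GET_NEXT(fd, off) \
            (((volatile struct fdlist_entry *)((char *)&fdtab[fd] + (off)))->next)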
Instead of attempting to read the early data only when the upper layer asks
for data, allocate a temporary buffer, stored in the ssl_sock_ctx, and put
all the early data in there. Requiring that the upper layer takes care of it
means that if for some reason the upper layer wants to emit data before it
has totally read the early data, we will be stuck forever.
This should be backported to 2.1 and 2.0.
This may fix github issue #411.
Since 1.9 with commit b20aa9eef3 ("MAJOR: tasks: create per-thread wait
queues") a task bound to a single thread will not use locks when being
queued or dequeued because the wait queue is assumed to be the owner
thread's.
But there exists a rare situation where this is not true: the health
check tasks may be running on one thread waiting for a response, and
may in parallel be requeued by another thread calling health_adjust()
after detecting a response error in traffic when "observe l7" is set,
and "fastinter" is lower than "inter", requiring to shorten the running
check's timeout. In this case, the task being requeued was present in
another thread's wait queue, thus opening a race during task_unlink_wq(),
and gets requeued into the calling thread's wait queue instead of the
running one's, opening a second race here.
This patch aims at protecting against the risk of calling task_unlink_wq()
from one thread while the task is queued on another thread, hence unlocked,
by introducing a new TASK_SHARED_WQ flag.
This new flag indicates that a task's position in the wait queue may be
adjusted by other threads than the one currently executing it. This means
that such WQ manipulations must be performed under a lock. There are two
types of such tasks:
- the global ones, using the global wait queue (technically speaking,
those whose thread_mask has at least 2 bits set).
- some local ones, which for now will be placed into the global wait
queue as well in order to benefit from its lock.
The flag is automatically set on initialization if the task's thread mask
indicates more than one thread. The caller must also set it if it intends
to let other threads update the task's expiration delay (e.g. delegated
I/Os), or if it intends to change the task's affinity over time as this
could lead to the same situation.
Right now only the situation described above seems to be affected by this
issue, and it is very difficult to trigger, and even then, will often have
no visible effect beyond stopping the checks for example once the race is
met. On my laptop it is feasible with the following config, chained to
httpterm:
  global
      maxconn 400 # provoke FD errors, calling health_adjust()
  defaults
      mode http
      timeout client 10s
      timeout server 10s
      timeout connect 10s
  listen px
      bind :8001
      option httpchk /?t=50
      server sback 127.0.0.1:8000 backup
      server-template s 0-999 127.0.0.1:8000 check port 8001 inter 100 fastinter 10 observe layer7
This patch will automatically address the case for the checks because
check tasks are created with multiple threads bound and will get the
TASK_SHARED_WQ flag set.
If in the future more tasks need to rely on this (multi-threaded muxes
for example) and the use of the global wait queue becomes a bottleneck
again, then it should not be too difficult to place locks on the local
wait queues and queue the task on its bound thread.
This patch needs to be backported to 2.1, 2.0 and 1.9. It depends on
previous patch "MINOR: task: only check TASK_WOKEN_ANY to decide to
requeue a task".
Many thanks to William Dauchy for providing detailed traces allowing to
spot the problem.
After processing a task, its RUNNING bit is cleared and at the same time
we check for other bits to decide whether to requeue the task or not. It
happens that we only want to check the TASK_WOKEN_* bits, because :
- TASK_RUNNING was just cleared
- TASK_GLOBAL and TASK_QUEUE cannot be set yet as the task was running,
preventing it from being requeued
It's important not to catch yet undefined flags there because it would
prevent addition of new task flags. This also shows more clearly that
waking a task up with flags 0 is not something safe to do as the task
will not be woken up if it's already running.
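A loose sketch of the check described above (using a plain GCC atomic builtin
instead of HAProxy's wrappers, and an illustrative requeue call):

    /* atomically clear RUNNING and fetch the flags set meanwhile */
    unsigned short state = __atomic_and_fetch(&t->state, ~TASK_RUNNING,
                                              __ATOMIC_SEQ_CST);

    /* TASK_GLOBAL/TASK_QUEUE cannot be set at this point, so only the
     * TASK_WOKEN_* bits matter to decide whether to requeue the task
     */
    if (state & TASK_WOKEN_ANY)
            task_wakeup(t, state & TASK_WOKEN_ANY);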
This action is very similar to "replace-uri" except that it only acts on the
path component. This is assumed to better match users' expectations when they
used to rely on "replace-uri" in HTTP/1 because mostly origin forms were used
in H1 while mostly absolute URI form is used in H2, and their rules very often
start with a '/', and as such do not match.
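For example, a hypothetical rule like the following keeps matching regardless
of whether the client sent an origin form or an absolute URI, because only the
path component is considered:

  http-request replace-path /foo/(.*) /bar/\1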
It could help users to get this backported to 2.0 and 2.1.
As discussed in the thread below [1], the debug converter is currently
not of much use given that it's only built when DEBUG_EXPR is set, and
it is limited to stderr only.
This patch changes this to make it take an optional prefix and an optional
target sink so that it can log to stdout, stderr or a ring buffer. The
default output is the "buf0" ring buffer, that can be consulted from the
CLI.
[1] https://www.mail-archive.com/haproxy@formilux.org/msg35671.html
Note: if this patch is backported, it also requires the following commit to
work: 46dfd78cbf ("BUG/MINOR: sample: always check converters' arguments").
Commit d4f946c ("MINOR: ssl/cli: 'show ssl cert' give information on the
certificates") introduced a build issue with openssl version < 1.0.2
because it uses the certificate bundles.
When accepting the max early data, don't set it on the SSL_CTX while parsing
the configuration, as at this point global.tune.maxrewrite may still be -1,
either because it is not set at all, or because it simply hasn't been parsed yet. Instead,
set it for each connection, just after we created the new SSL.
Not doing so meant that we could pretend to accept early data bigger than one
of our buffers.
This should be backported to 2.1, 2.0, 1.9 and 1.8.
Instead of failing the conversion when an invalid number of bits is
given the sha2 converter now fails with an appropriate error message
during startup.
The sha2 converter was introduced in d437630237,
which is in 2.1 and higher.
In 1.5-dev20, sample-fetch argument parsing was addressed by commit
689a1df0a1 ("BUG/MEDIUM: sample: simplify and fix the argument parsing").
The issue was that argument checks were not run for sample-fetches if
parentheses were not present. Surprisingly, the fix was made only for
sample-fetches and not for converters which suffer from the exact same
problem. There are even a few comments in the code mentioning that some
argument validation functions are not called when arguments are missing.
This fix applies the exact same method as the one above. The impact of
this bug is limited because over the years the code has learned to work
around this issue instead of fixing it.
This may be backported to all maintained versions.
The closing bracket was emitted for the "debug" converter even when the
opening one was not sent, and the new line was not always emitted. Let's
fix this. This is harmless since this converter is not built by default.
These sample fetches are internal and must be used for debugging purposes. The
idea is to have a way to add some checks on the HTX content from http rules. The
main purpose is to ease writing reg-tests.
During H1 parsing, the HTX EOM block is added before switching the message state
to H1_MSG_DONE. It is an exception in the way to convert an H1 message to
HTX. Except for this block, the message is first switched to the right state
before starting to add the corresponding HTX blocks. For instance, the message
is switched in H1_MSG_DATA state and then the HTX DATA blocks are added.
With this patch, the message is switched to the H1_MSG_DONE state when all data
blocks or trailers were processed. It is the caller's responsibility to call
h1_parse_msg_eom() when the H1_MSG_DONE state is reached. This way, it is far
easier to catch failures when the HTX buffer is full.
The H1 and FCGI muxes have been updated accordingly.
This patch may eventually be backported to 2.1 if it helps other backports.
Apparently gcc developers decided that strncpy() semantics are no longer
valid and now deserve a warning, especially if used exactly as designed.
This results in issue #304. Let's just remove one from the target size to
please her majesty gcc, the God of C Compilers, who tries hard to make
users completely eliminate any use of string.h and reimplement it by
themselves at much higher risks. Pfff....
This can be backported to stable versions, the fix is harmless since it
ignores the last zero that is already set on the next line.
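Purely as an illustration, the shape of such a fix (the structure and field names
here are made up) is:

  /* copy at most sizeof(dst.name)-1 bytes; the last byte is zeroed by the
   * following line, which the original code already did, and this is enough
   * to silence the truncation warning.
   */
  strncpy(dst.name, src, sizeof(dst.name) - 1);
  dst.name[sizeof(dst.name) - 1] = '\0';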
As reported in issue #408, "agent-addr" doesn't work on default-server
lines. This is due to the transcription of the old "addr" option in commit
6e5e0d8f9e ("MINOR: server: Make 'default-server' support 'addr' keyword.")
which correctly assigns it to the check.addr and agent.addr fields, but
which also copies the default check.addr into both the check's and the
agent's addr fields. Thus the default agent's address is never used.
This fix makes sure to copy the check from the check and the agent from
the agent. However it's worth noting that if "addr" is specified on the
server line, it will still overwrite both the check and the agent's
addresses.
This must be backported as far as 1.8.
The listener supports a "transient error" situation, which corresponds
to those situations where accept fails badly but poll() reports an event.
This happens for example when a listener is paused, or when running out of FDs. The
same mechanism is used when facing a maxconn or maxsessrate limitation.
When this happens, the listener is disabled for up to 100ms and put back
into the global listener queue so that it automatically wakes up again
as soon as the conditions change from an existing connection releasing
one resource, or the system recovers from a transient issue.
The listener_accept() function has a bug in its exit path causing a
freshly limited listener to be immediately enabled again because all
the conditions are met (connection count < max). It doesn't take into
account the fact that the listener might have been queued and must
first wait for the timeout to expire before doing so. The impact is
that upon certain errors, the faulty process will busy loop on the
accept code without sleeping. This is the scenario reported and
diagnosed by @hedong0411 in issue #382.
This commit fixes it by verifying that the global queue's delay is
at least expired before deciding to resume the listener. Another
approach could consist in having an extra state like LI_DELAY for
situations where only a delay is acceptable, but this would probably
not bring anything except more complex code.
This issue was introduced with the lock-free listener accept code
(commits 3f0d02b and 82c9789a) that were backported to 1.8.20+ and
1.9.7+, so this fix must be backported to the relevant branches.
If a new process is started with -sf and it fails to bind, it may send
a SIGTTOU to the master process in hope that it will temporarily unbind.
Unfortunately this one doesn't catch it and stops into the background instead
of forwarding the signal to the workers. The same is true for SIGTTIN.
This commit simply implements an extra signal handler for the master to
deal with such signals that must be passed down to the workers. It must
be backported as far as 1.8, though there the code differs in that it's
entirely in haproxy.c and doesn't require an extra sig handler.
As reported by Ilya in issue #392, Coverity found that we're leaking
allocated strings on error paths in parse_logformat(). Let's use a
proper exit label for failures instead of scattering return 0 everywhere.
This should be backported to all supported versions.
We used to have wake_expired_tasks() wake up tasks and return the next
expiration delay. The problem this causes is that we have to call it just
before poll() in order to consider latest timers, but this also means that
we don't wake up all newly expired tasks upon return from poll(), which
thus systematically requires a second poll() round.
This is visible when running any scheduled task like a health check, as there
are systematically two poll() calls, one with the interval, nothing is done
after it, and another one with a zero delay, and the task is called:
listen test
bind *:8001
server s1 127.0.0.1:1111 check
09:37:38.200959 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=8696843}) = 0
09:37:38.200967 epoll_wait(3, [], 200, 1000) = 0
09:37:39.202459 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=8712467}) = 0
>> nothing run here, as the expired task was not woken up yet.
09:37:39.202497 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=8715766}) = 0
09:37:39.202505 epoll_wait(3, [], 200, 0) = 0
09:37:39.202513 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=8719064}) = 0
>> now the expired task was woken up
09:37:39.202522 socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 7
09:37:39.202537 fcntl(7, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
09:37:39.202565 setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
09:37:39.202577 setsockopt(7, SOL_TCP, TCP_QUICKACK, [0], 4) = 0
09:37:39.202585 connect(7, {sa_family=AF_INET, sin_port=htons(1111), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
09:37:39.202659 epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLOUT, {u32=7, u64=7}}) = 0
09:37:39.202673 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=8814713}) = 0
09:37:39.202683 epoll_wait(3, [{EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=7, u64=7}}], 200, 1000) = 1
09:37:39.202693 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=8818617}) = 0
09:37:39.202701 getsockopt(7, SOL_SOCKET, SO_ERROR, [111], [4]) = 0
09:37:39.202715 close(7) = 0
Let's instead split the function in two parts:
- the first part, wake_expired_tasks(), called just before
process_runnable_tasks(), wakes up all expired tasks; it doesn't
compute any timeout.
- the second part, next_timer_expiry(), called just before poll(),
only computes the next timeout for the current thread.
Thanks to this, all expired tasks are properly woken up when leaving
poll, and each poll call's timeout remains up to date:
09:41:16.270449 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=10223556}) = 0
09:41:16.270457 epoll_wait(3, [], 200, 999) = 0
09:41:17.270130 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=10238572}) = 0
09:41:17.270157 socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 7
09:41:17.270194 fcntl(7, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
09:41:17.270204 setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
09:41:17.270216 setsockopt(7, SOL_TCP, TCP_QUICKACK, [0], 4) = 0
09:41:17.270224 connect(7, {sa_family=AF_INET, sin_port=htons(1111), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
09:41:17.270299 epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLOUT, {u32=7, u64=7}}) = 0
09:41:17.270314 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=10337841}) = 0
09:41:17.270323 epoll_wait(3, [{EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=7, u64=7}}], 200, 1000) = 1
09:41:17.270332 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=10341860}) = 0
09:41:17.270340 getsockopt(7, SOL_SOCKET, SO_ERROR, [111], [4]) = 0
09:41:17.270367 close(7) = 0
This may be backported to 2.1 and 2.0 though it's unlikely to bring any
user-visible improvement except to clarify debugging.
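As a rough sketch, the per-iteration ordering described above becomes
(simplified, not the literal polling loop code):

  wake_expired_tasks();            /* wake everything that already expired */
  process_runnable_tasks();        /* run what was just woken up */
  wait_ms = next_timer_expiry();   /* only compute this thread's next timeout */
  /* the poller (e.g. epoll_wait()) is then called with wait_ms as timeout */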
This is a complement to previous fix for bug #399. The exclusion between
the recv() and send() calls prevents send handlers from being called if
rx readiness is reported. The DNS code can trigger this situation with
threads where the fd_recv_ready() flag disappears between the test in
dgram_fd_handler() and the second test in dns_resolve_recv() while a
thread calls fd_cant_recv(), and this situation can sustain itself for
a while. With 8 threads and an error in the socket queue, placing a
printf on the return statement in dns_resolve_recv() scrolls very fast.
Simply removing the "else" in dgram_fd_handler() addresses the issue.
This fix must be backported as far as 1.6.
It was reported in bug #399 that the DNS sometimes enters endless loops
after hours working fine. The issue is caused by a lack of error
processing in the DNS's recv() path combined with an exclusive recv OR
send in the UDP layer, resulting in some errors causing CPU loops that
will never stop until the process is restarted.
The basic cause is that the FD_POLL_ERR and FD_POLL_HUP flags are sticky
on the FD, and contrary to a stream socket, receiving an error on a
datagram socket doesn't indicate that this socket cannot be used anymore.
Thus the Rx code must at least handle this situation and flush the error
otherwise it will constantly be reported. In theory this should not be a
big issue but in practice it is due to another bug in the UDP datagram
handler which prevents the send() callback from being called when Rx
readiness was reported, so the situation cannot go away. It happens way
more easily with threads enabled, so that there is no dead time between
the moment the FD is disabled and another recv() is called, such as in
the example below where the request was sent to a closed port on the
loopback provoking an ICMP unreachable to be sent back:
[pid 20888] 18:26:57.826408 sendto(29, ";\340\1\0\0\1\0\0\0\0\0\1\0031wt\2eu\0\0\34\0\1\0\0)\2\0\0\0\0\0\0\0", 35, 0, NULL, >
[pid 20893] 18:26:57.826566 recvfrom(29, 0x7f97c54ef2f0, 513, 0, NULL, NULL) = -1 ECONNREFUSED (Connection refused)
[pid 20889] 18:26:57.826601 recvfrom(29, 0x7f97c76182f0, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 20892] 18:26:57.826630 recvfrom(29, 0x7f97c5cf02f0, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 20891] 18:26:57.826684 recvfrom(29, 0x7f97c66162f0, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 20895] 18:26:57.826716 recvfrom(29, 0x7f97bffda2f0, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 20894] 18:26:57.826747 recvfrom(29, 0x7f97c4cee2f0, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 20888] 18:26:58.419838 recvfrom(29, 0x7ffcc8712c20, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 20893] 18:26:58.419900 recvfrom(29, 0x7f97c54ef2f0, 513, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
(... hundreds before next sendto() ...)
This situation was handled by clearing HUP and ERR when recv()
returns <0.
A second case was handled: there was a check for a missing dgram
handler, but it did nothing, causing the FD to keep ringing if this
situation ever happened. After looking at the rest of the code, it
doesn't seem possible to face such a situation because these handlers
are registered during startup, but at least we need to handle it
properly.
A third case was handled, that's mainly a small optimization. With
threads and massive responses, due to the large lock around the loop,
it's likely that some threads will have seen fd_recv_ready() and will
wait at the lock(). But if they wait here, chances are that other
threads will have eliminated pending data and issued fd_cant_recv().
In this case, better re-check fd_recv_ready() before performing the
recv() call to avoid the huge amounts of syscalls that happen on
massively threaded setups.
This patch must be backported as far as 1.6 (the atomic AND just
needs to be turned to a regular AND).
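For illustration, the first case roughly looks like this (the flag array and
its name are illustrative; only FD_POLL_ERR/FD_POLL_HUP come from the text):

  ret = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
  if (ret < 0) {
      if (errno == EAGAIN)
          goto out;
      /* a datagram socket remains usable after an error: flush the sticky
       * error/hup flags so the event is not reported forever. Done with an
       * atomic AND since other threads may manipulate the same FD.
       */
      __atomic_and_fetch(&fd_ev[fd], ~(FD_POLL_ERR | FD_POLL_HUP), __ATOMIC_SEQ_CST);
      goto out;
  }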
When we have a EVFILT_READ event, an optimization was made, and the FD was
not reported as ready to receive if there were no data available. That way,
if the socket was closed by our peer (the EV_EOF flag was set), and there were
no remaining data to read, we would just close(), and avoid doing a recv().
However, while that may be fine for a TCP socket, it is not for UDP.
If we send data via UDP, and we receive an error, the only way to detect it
is to attempt a recv(). However, in this case, kevent() will report a read
event, but with no data, so we'd just ignore that read event, nothing would be
done about it, and the poller would be woken up by it over and over.
To fix this, report read events if either we have data, or the EV_EOF flag
is not set.
This should be backported to 2.1, 2.0, 1.9 and 1.8.
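A minimal sketch of the new condition in the kqueue event loop (the readiness
helper is hypothetical):

  if (kev[i].filter == EVFILT_READ) {
      /* report readability when data is pending OR when EOF was not seen:
       * a UDP error is only detected by attempting a recv(), and kevent()
       * reports it as a read event carrying no data.
       */
      if (kev[i].data > 0 || !(kev[i].flags & EV_EOF))
          mark_fd_readable(fd);
  }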
The global listener queue code and declarations were still lying in
haproxy.c while not needed there anymore at all. This complicates
the code for no reason. As a result, the global_listener_queue_task
and the global_listener_queue were made static.
We use it half of the time for the global_listener_queue and half of the
time for a proxy's queue, and this requires the callers to take care of
the distinction. Let's split it in two versions, the current one working only
on the global queue and another one dedicated to proxies for the
per-proxy queues. This cleans up quite a bit of code.
In listener_accept() there are several situations where we have to wait
for an event or a delay. These ones all implement their own call to
limit_listener() and the associated task_schedule(). In addition to
being ugly and confusing, one expire date computation is even wrong as
it doesn't take into account the fact that we're using threads and that
the value might change in the middle. Fortunately task_schedule() gets
it right for us.
This patch creates two jump locations, one for the global queue and
one for the proxy queue, allowing the rest of the code to only compute
the expire delay and jump to the right location.
Recent fix 4c044e274c ("BUG/MEDIUM: listener/thread: fix a race when
pausing a listener") is insufficient and moves the race slightly farther.
What now happens is that if we're limiting a listener due to a transient
error such as an accept() error for example, or because the proxy's
maxconn was reached, another thread might in the mean time have switched
again to LI_READY and at the end of the function we'll disable polling on
this FD, resulting in a listener that never accepts anything anymore. It
can more easily happen when sending SIGTTOU/SIGTTIN to temporarily pause
the listeners to let another process bind next to them.
What this patch does instead is to move all enable/disable operations at
the end of the function and condition them to the state. The listener's
state is checked under the lock and the FD's polling state adjusted
accordingly so that the listener's state and the FD always remain 100%
synchronized. It was verified with 16 threads that the cost of taking
that lock is not measurable so that's fine.
This should be backported to the same branches the patch above is
backported to.
When accept() fails because a listener is temporarily paused, the
FD might have both FD_POLL_HUP and FD_POLL_ERR bits set. While we do
not exploit FD_POLL_ERR here it's better to clear it because it is
reported on "show fd" and is confusing.
This may be backported to all versions.
There was a leftover of the single-threaded era when removing the
FD_POLL_HUP flag from the listeners. By not using an atomic operation
to clear the flag, another thread acting on the same listener might
have lost some events, though this would have resulted in that thread
reprocessing them immediately on the next loop pass.
This should be backported as far as 1.8.
The proxies' soft_stop() function closes the FDs in all opened states
except LI_PAUSED. This means that a transient error on a listener might
cause it to turn back to the READY state if it happens exactly when a
reload signal is received.
This must be backported to all supported versions.
During the HTTP response parsing, if there is not enough space in the channel's
buffer, it is possible to fail to add the HTX EOM block while all data in the
rxbuf were consumed. As for the h1 mux, we must notify the conn-stream the
buffer is full to have a chance to add the HTX EOM block later. In this case, we
must also be careful not to report a server abort by setting too early the
CS_FL_EOS flag on the conn-stream.
To do so, the FCGI_SF_APPEND_EOM flag must be set on the FCGI stream to know the
HTX EOM block is missing.
This patch must be backported to 2.1.
During the message parsing, when the HTX buffer is full and only the HTX EOM
block cannot be added, it is important to notify the conn-stream that some
processing must still be done but it is blocked because there is not enough room
in the buffer. The way to do so is to set the CS_FL_WANT_ROOM flag on the
conn-stream. Otherwise, because all data are received and consumed, the mux is
not called anymore to add this last block, leaving the message unfinished from
the HAProxy point of view. The only way to unblock it is to receive a shutdown
for reads or to hit a timeout.
This patch must be backported to 2.1 and 2.0. The 1.9 does not seem to be
affected.
HAProxy doesn't need to call executables at run time (except when using
external checks which are strongly recommended against), and is even expected
to isolate itself into an empty chroot. As such, there basically is no valid
reason to allow a setuid executable to be called without the user being fully
aware of the risks. In a situation where haproxy would need to call external
checks and/or disable chroot, exploiting a vulnerability in a library or in
haproxy itself could lead to the execution of an external program. On Linux
it is possible to lock the process so that any setuid bit present on such an
executable is ignored. This significantly reduces the risk of privilege
escalation in such a situation. This is what haproxy does by default. In case
this causes a problem to an external check (for example one which would need
the "ping" command), then it is possible to disable this protection by
explicitly adding this directive in the global section. If enabled, it is
possible to turn it back off by prefixing it with the "no" keyword.
Before the option:
$ socat - /tmp/sock1 <<< "expert-mode on; debug dev exec sudo /bin/id"
uid=0(root) gid=0(root) groups=0(root)
After the option:
$ socat - /tmp/sock1 <<< "expert-mode on; debug dev exec sudo /bin/id"
sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the
'nosuid' option set or an NFS file system without root privileges?
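On Linux the mechanism behind this kind of protection is typically the
no_new_privs attribute; a minimal standalone illustration (the warning text is
made up):

  #include <stdio.h>
  #include <sys/prctl.h>

  static void lock_setuid(void)
  {
      /* once set, execve() ignores setuid/setgid bits and file capabilities
       * for this process and all its descendants, and it cannot be unset.
       */
      if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1)
          fprintf(stderr, "Failed to lock setuid executables\n");
  }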
popen() is annoying because it doesn't catch stderr. The command was
implemented using it just by pure laziness, let's just redo it a bit
cleaner using normal syscalls. Note that this command is only enabled
when built with -DDEBUG_DEV.
In process_chk_conn(), make sure we set the task affinity to the current
thread as soon as we're attempting a connection (and reset the affinity to
"any thread" if we detect a failure).
We used to only set the task affinity if connect_conn_chk() returned
SF_ERR_NONE, however for TCP checks, SF_ERR_UP is returned, so for those
checks, the task could still run on any thread, and this could lead to a
race condition where the connection runs on one thread, while the task runs
on another one, which could create random memory corruption and/or crashes.
This may fix github issue #369.
This should be backported to 2.1, 2.0 and 1.9.
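The shape of the fix might be sketched as follows; apart from connect_conn_chk()
and the SF_ERR_* values mentioned above, the names (task_set_affinity(), tid_bit,
MAX_THREADS_MASK) are assumptions:

  /* pin the check task to the calling thread before attempting the
   * connection, and release the pinning if the attempt fails.
   */
  task_set_affinity(t, tid_bit);               /* this thread only */
  ret = connect_conn_chk(t);
  if (ret != SF_ERR_NONE && ret != SF_ERR_UP)
      task_set_affinity(t, MAX_THREADS_MASK);  /* back to "any thread" */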
The h1_recv_allowed() function is inherited from the h2 multiplexer. But for the
h1, conditions to know if we may receive data are less complex because there is
no multiplexing and because data are not parsed when received. So now, the
following rules are respected:
* if an error or a shutdown for reads was detected on the connection we must
not attempt to receive
* if the input buffer failed to be allocated or is full, we must not try to
receive
* if the input processing is busy waiting for the output side, we may attempt
to receive
* otherwise we may attempt to receive
This patch must be backported as far as 1.9.
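Expressed as a sketch (the flag names are illustrative, not necessarily the real
ones):

  /* returns non-zero when the H1 mux may try to receive more data */
  static int h1_recv_allowed_sketch(const struct h1c *h1c)
  {
      if (h1c->flags & (H1C_F_ERROR | H1C_F_READ0))       /* error or shutr */
          return 0;
      if (h1c->flags & (H1C_F_IN_ALLOC | H1C_F_IN_FULL))  /* no buffer or full */
          return 0;
      return 1;  /* including when input is busy waiting for the output side */
  }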
The CO_FL_SOCK_RD_SH flag is only set when a read0 is received. So we must not
rely on it to set the H1 connection in shutdown state (H1C_F_CS_SHUTDOWN). In
fact, it is sufficient to set the connection in shutdown state when the shutdown
for writes is forwarded to the sock layer.
This patch must be backported as far as 1.9.
On the server side, when a H1 stream is detached from the connection, if the
connection is not reusable but some outgoing data remain, the connection is not
immediately released. In this case, the connection is not inserted in any idle
connection list. But it is still attached to the session. Because of that, it
can be erroneously reused. h1_avail_streams() always reports a free slot if no
stream is attached to the connection, regardless of the connection's
state. It is obviously a bug. If a second request is handled by the same session
(it happens with H2 connections on the client side), this connection is reused
before we close it.
There is a small window to hit the bug, but it may lead to very strange
behaviors. For instance, if a first h2 request is quickly aborted by the client
while it is blocked in the mux on the server side (so before any response is
received), a second request can be processed and sent to the server. Because the
connection was not closed, the possible reply to the first request will be
interpreted as a reply to the second one. It is probably the bug described by
Peter Fröhlich in the issue #290.
To fix the bug, a new flag has been added to know if an H1 connection is idle or
not. So now, H1C_F_CS_IDLE is set when a connection is idle and usable to
handle a new request. If it is set, we try to add the connection in an idle
connection list. And h1_avail_streams() only relies on this flag
now. Concretely, this flag is set when a K/A stream is detached and both the
request and the response are in DONE state. It is exclusive to other H1C_F_CS
flags.
This patch must be backported as far as 1.9.
It's a regression from 9f9b0c6 "BUG/MEDIUM: ECC cert should work with
TLS < v1.2 and openssl >= 1.1.1". A wildcard EC certificate could be selected
at the expense of a specific RSA certificate.
In any case, a specific certificate should always be selected first, then a wildcard.
Reflect this rule in a loop to avoid any bug in certificate selection changes.
Fix issue #394.
It should be backported as far as 1.8.
There exists a race in the listener code where a thread might disable
receipt on a listener's FD then turn it to LI_PAUSED while at the same
time another one faces EAGAIN on accept() and enables it again via
fd_cant_recv(). The result is that the FD is in LI_PAUSED state with
its polling still enabled. listener_accept() does not do anything then
and doesn't disable the FD either, resulting in a thread eating all the
CPU as reported in issue #358. A solution would be to take the listener's
lock to perform the fd_cant_recv() call and do it only if the FD is still
in LI_READY state, but this would be totally overkill while in practice
the issue only happens during shutdown.
Instead what is done here is that when leaving we recheck the state and
disable polling if the listener is not in LI_READY state, which never
happens except when being limited. In the worst case there could be one
extra check per thread for the time required to converge, which is
absolutely nothing.
This fix was successfully tested, and should be backported to all
versions using the lock-free listeners, which means all those containing
commit 3f0d02bb ("MAJOR: listener: do not hold the listener lock in
listener_accept()"), hence 2.1, 2.0, 1.9.7+, 1.8.20+.
When a crt-list line using an already used ckch_store does not contain
filters, it will overwrite the ckchs->filters variable with 0.
This problem will generate all sni_ctx of this ckch_store without
filters. Filters generation mustn't be allowed in any case.
Must be backported in 2.1.
In si_cs_recv(), we can end up with a partial splice() call that will be
followed by an attempt to use rcv_buf(). Sometimes this works and places
data into the buffer, which then prevents splicing from being used, and
this causes splice() and recvfrom() calls to alternate. Better simply
refrain from calling rcv_buf() when there are data in the pipe and still
data to be forwarded. Usually this indicates that we've eaten everything
available and that we still want to use splice() on subsequent calls.
This should be backported to 2.1 and 2.0.
If we cannot splice incoming data using rcv_pipe() due to remaining data
in the buffer, we must not subscribe to the mux but instead tag the
stream-int as blocked on missing Rx room. Otherwise when data are
flushed, calling si_chk_rcv() will have no effect because the WAIT_EP
flag remains present, and we'll end in an rx timeout. This case is very
hard to reproduce, and requires an inversion of the polling side in the
middle of a transfer. This can only happen when the client and the server
are using similar links and when splicing is enabled. It typically takes
hundreds of MB to GB for the problem to happen, and tends to be magnified
by the use of option contstats which causes process_stream() to be called
every 5s and to try again to recv.
This fix must be backported to 2.1, 2.0, and possibly 1.9.
Some concerns are regularly raised about the risk to inherit some Lua
files which make use of a fork (e.g. via os.execute()) as well as
whether or not some of bugs we fix might or not be exploitable to run
some code. Given that haproxy is event-driven, any foreground activity
completely stops processing and is easy to detect, but background
activity is a different story. A Lua script could very well discretely
fork a sub-process connecting to a remote location and taking commands,
and some injected code could also try to hide its activity by creating
a process or a thread without blocking the rest of the processing. While
such activities should be extremely limited when run in an empty chroot
without any permission, it would be better to get a higher assurance
they cannot happen.
This patch introduces something very simple: it limits the number of
processes and threads to zero in the workers after the last thread was
created. By doing so, it effectively instructs the system to fail on
any fork() or clone() syscall. Thus any undesired activity has to happen
in the foreground and is way easier to detect.
This will obviously break external checks (whose concept is already
totally insecure), and for this reason a new option
"insecure-fork-wanted" was added to disable this protection, and it
is suggested in the fork() error report from the checks. It is
obviously recommended not to use it and to reconsider the reasons
leading to it being enabled in the first place.
If for any reason we fail to disable forks, we still start because it
could be imaginable that some operating systems refuse to set this
limit to zero, but in this case we emit a warning, that may or may not
be reported since we're after the fork point. Ideally over the long
term it should be conditioned by strict-limits and cause a hard fail.
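The standard way to achieve this on Linux is a process-count limit of zero; a
minimal standalone illustration (the warning text is made up):

  #include <stdio.h>
  #include <sys/resource.h>

  static void forbid_forks(void)
  {
      struct rlimit lim = { .rlim_cur = 0, .rlim_max = 0 };

      /* with a zero RLIMIT_NPROC, later fork()/clone() calls by this
       * (non-root) user fail with EAGAIN; on failure we only warn and
       * keep starting up, as described above.
       */
      if (setrlimit(RLIMIT_NPROC, &lim) != 0)
          fprintf(stderr, "Failed to disable forks\n");
  }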
Since the flag STAT_SHOWADMIN was removed, the frontends heading in the HTML
output appears unaligned because the space reserved for the checkbox (not
displayed for frontends) is not inserted.
This patch fixes the issue #390. It must be backported to 2.1.
The header name configured by the directive "pass-header", in the "fcgi-app"
section, must be case insensitive. For now, it must be in lowercase to match a
header. Internally, header names are in lowercase but there is no reason to
impose this syntax in the configuration.
This patch must be backported to 2.1.
Commit 1c65fdd5 "MINOR: ssl: add extra chain compatibility" really implements
SSL_CTX_set0_chain. Since ckch can be used to init more than one ctx with
openssl < 1.0.2 (commit 89f58073 for X509_chain_up_ref compatibility),
SSL_CTX_set1_chain compatibility is required.
This patch must be backported to 2.1.
http_find_header() is used to find the next occurrence of a header matching on
its name. When found, the matching header is returned with the corresponding
value. This value may be empty. Unfortunately, because of a bug, an empty value
makes the function fail.
This patch must be backported to 2.1, 2.0 and 1.9.
`eb` being tested above, `res` cannot be null, so the condition is
not needed and introduces potential dead code.
also fix a typo in associated comment
This should fix issue #349
Reported-by: Илья Шипицин <chipitsine@gmail.com>
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Loading a file for crl or ca-cert is really done with the same function in
OpenSSL, via X509_STORE_load_locations. Accordingly, crl-file and ca-file
deduplication can share the same function.
ca-list can be extracted from ca-file already loaded in memory.
This patch set ca-list from deduplicated ca-file when needed
and share it in ca-file tree.
As a corollary, this will prevent file access for ca-list when
updating a certificate via CLI.
Typically a server line like:
'server-template srv 1-1000 *:443 ssl ca-file ca-certificates.crt'
loads ca-certificates.crt 1000 times and it stays duplicated in memory.
Same case for a bind line: the ca-file is loaded for each certificate.
The same 'ca-file' can be loaded only once and stay deduplicated in
memory.
As a corollary, this will prevent file access for ca-file when
updating a certificate via CLI.
Building on a 32-bit platform produces these warnings in trace code:
src/stream.c: In function 'strm_trace':
src/stream.c:226:29: warning: format '%lu' expects argument of type 'long unsigned int', but argument 9 has type 'size_t {aka const unsigned int}' [-Wformat=]
chunk_appendf(&trace_buf, " req=(%p .fl=0x%08x .ana=0x%08x .exp(r,w,a)=(%u,%u,%u) .o=%lu .tot=%llu .to_fwd=%u)",
^
src/stream.c:229:29: warning: format '%lu' expects argument of type 'long unsigned int', but argument 9 has type 'size_t {aka const unsigned int}' [-Wformat=]
chunk_appendf(&trace_buf, " res=(%p .fl=0x%08x .ana=0x%08x .exp(r,w,a)=(%u,%u,%u) .o=%lu .tot=%llu .to_fwd=%u)",
^
src/mux_fcgi.c: In function 'fcgi_trace':
src/mux_fcgi.c:443:29: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t {aka const unsigned int}' [-Wformat=]
chunk_appendf(&trace_buf, " - VAL=%lu", *val);
^
src/mux_h1.c: In function 'h1_trace':
src/mux_h1.c:290:29: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t {aka const unsigned int}' [-Wformat=]
chunk_appendf(&trace_buf, " - VAL=%lu", *val);
^
Let's just cast the type to long. This should be backported to 2.1.
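The fixed calls presumably just become:

  /* size_t is 32-bit here, so cast explicitly to match the %lu specifier */
  chunk_appendf(&trace_buf, " - VAL=%lu", (unsigned long)*val);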
During the H1 message parsing, the host header is tested to be sure it matches
the request's authority, if defined. When there are multiple host headers, we
also take care they are all the same. Of course, these tests must only be
performed on the requests. A host header in a response has no special meaning.
This patch must be backported to 2.1.
This commit removes the explicit checks for `if (err)` before
passing `err` to `memprintf`. `memprintf` already checks itself
whether the `out` parameter is `NULL` before doing anything.
This reduces the indentation depth and makes the code more readable,
because there is less boilerplate code.
Instead move the check into the ternary conditional when the error
message should be appended to a previous message. This is consistent
with the rest of ssl_sock.c and with the rest of HAProxy.
Thus this patch is the arguably cleaner fix for issue #374 and builds
upon
5f1fa7db86 and
8b453912ce
Additionally it fixes a few places where the check *still* was missing.
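In other words the pattern changes roughly like this (the message text is made
up):

  /* before: every call was guarded */
  if (err)
      memprintf(err, "%ssomething went wrong", *err ? *err : "");

  /* after: memprintf() already tolerates a NULL destination, only the
   * dereference used for appending needs the extra condition.
   */
  memprintf(err, "%ssomething went wrong", err && *err ? *err : "");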
Since commit 88ebd40 ("MINOR: trace: add allocation of buffer-sized
trace buffers") we have a trace buffer allocated at boot time. But
there was a copy-paste error there making the test verify that the
trash was allocated instead of the trace buffer. The result is that
depending on the link order either the test will succeed or fail,
preventing haproxy from starting at all.
No backport is needed.
Christopher found another issue in the H2 backend implementation that
results from a miss in the H2 spec: the processing of a HEADERS frame
is always permitted in IDLE state, but this doesn't make sense on the
response path! And here when facing such a frame, we try to decode it
while we didn't allocate any stream, so we end up trying to fill the
idle stream's buffer (read-only) and crash.
What we're doing here is that if we get a HEADERS frame in IDLE state
from a server, we terminate the connection with a PROTOCOL_ERROR. No
such transition seems to be permitted by the spec but it seems to be
the only sane solution.
This fix must be backported as far as 1.9. Note that in 2.0 and earlier
there's no h2_frame_check_vs_state() function, instead the check is
inlined in h2_process_demux().
Tim Düsterhus found that the amount of sanitization we perform on HTTP
header field names received in H2 is insufficient. Currently we reject
upper case letters as mandated by RFC7540#8.1.2, but section 10.3 also
requires that intermediaries translating streams to HTTP/1 further
refine the filtering to also reject invalid names (which means any name
that doesn't match a token). There is a small trick here which is that
the colon character used to start pseudo-header names doesn't match a
token, so pseudo-header names fall into that category, thus we have to
swap the pseudo-header name lookup with this check so that we only check
from the second character (past the ':') in case of pseudo-header names.
Another possibility could have been to perform this check only in the
HTX-to-H1 transcoder, but doing so would still expose the configured rules
and logs to such header names.
This fix must be backported as far as 1.8 since this bug could be
exploited and serve as the base for an attack. In 2.0 and earlier,
functions h2_make_h1_request() and h2_make_h1_trailers() must also
be adapted to sanitize requests coming in legacy mode.
Tim Düsterhus reported an annoying problem in the H2 decoder related to
an ambiguity in the H2 spec. The spec says in section 10.3 that HTTP/2
allows header field values that are not valid (since they're binary) and
at the same time that an H2 to H1 gateway must be careful to reject headers
whose values contain \0, \r or \n.
Till now, and for the sake of the ability to maintain end-to-end binary
transparency in H2-to-H2, the H2 mux wouldn't reject this since it does
not know what version will be used on the other side.
In theory we should in fact perform such a check when converting an HTX
header to H1. But this causes a problem as it means that all our rule sets,
sample fetches, captures, logs or redirects may still find an LF in a header
coming from H2. Also in 2.0 and older in legacy mode, the frames are instantly
converted to H1 and HTX couldn't help there. So this means that in practice
we must refrain from delivering such a header upwards, regardless of any
outgoing protocol consideration.
Applying such a lookup on all headers leaving the mux comes with a
significant performance hit, especially for large ones. A first attempt
was made at placing this into the HPACK decoder to refrain from learning
invalid literals but error reporting becomes more complicated. Additional
tests show that doing this within the HTX transcoding loop benefits from
the hot L1 cache, and that by skipping up to 8 bytes per iteration the
CPU cost remains within noise margin, around ~0.5%.
This patch must be backported as far as 1.8 since this bug could be
exploited and serve as the base for an attack. In 2.0 and earlier the
fix must also be added to functions h2_make_h1_request() and
h2_make_h1_trailers() to handle legacy mode. It relies on previous patch
"MINOR: ist: add ist_find_ctl()" to speed up the control bytes lookup.
All credits go to Tim for his detailed bug report and his initial patch.
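Conceptually the added check scans each decoded header value for the three
offending bytes before the HTX block is built; a simplified sketch (ist_find_ctl()
itself is only referenced above, the surrounding names are illustrative):

  /* reject any header value containing NUL, CR or LF; the real lookup can
   * skip up to 8 bytes per iteration to stay cheap on large values.
   */
  for (size_t i = 0; i < value.len; i++) {
      unsigned char c = (unsigned char)value.ptr[i];
      if (c == 0x00 || c == 0x0a || c == 0x0d)
          goto protocol_error;   /* illustrative error path */
  }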
gcc complains rightfully:
src/ssl_sock.c: In function ‘ssl_sock_prepare_all_ctx’:
src/ssl_sock.c:5507:3: warning: format not a string literal and no format arguments [-Wformat-security]
ha_warning(errmsg);
^
src/ssl_sock.c:5509:3: warning: format not a string literal and no format arguments [-Wformat-security]
ha_alert(errmsg);
^
src/ssl_sock.c: In function ‘cli_io_handler_commit_cert’:
src/ssl_sock.c:10208:3: warning: format not a string literal and no format arguments [-Wformat-security]
chunk_appendf(trash, err);
Introduced in 8b453912ce.
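The fix presumably consists in passing the messages through an explicit "%s"
format:

  /* never pass a runtime string as the format argument */
  ha_warning("%s", errmsg);
  ha_alert("%s", errmsg);
  chunk_appendf(trash, "%s", err);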
Since commit 9a1ab08 ("CLEANUP: ssl-sock: use HA_OPENSSL_VERSION_NUMBER
instead of OPENSSL_VERSION_NUMBER") we restrict LibreSSL to the OpenSSL
1.0.1 API, to avoid breaking LibreSSL every minute. We set
HA_OPENSSL_VERSION_NUMBER to 0x1000107fL if LibreSSL is detected and
only allow curves to be configured if HA_OPENSSL_VERSION_NUMBER is at
least 0x1000200fL.
However all relevant LibreSSL releases actually support setting curves,
which is now broken. Fix this by always allowing curve configuration when
using LibreSSL.
Reported on GitHub in issue #366.
Fixes: 9a1ab08 ("CLEANUP: ssl-sock: use HA_OPENSSL_VERSION_NUMBER instead
of OPENSSL_VERSION_NUMBER").
recent commit 8b453912ce ("MINOR: ssl: ssl_sock_prepare_ctx() return an error code")
converted all error handling; in this patch we always test `err`, but
three of them are missing. I did not find a plausible explanation for
it.
this should fix issue #374
Fixes: 8b453912ce ("MINOR: ssl: ssl_sock_prepare_ctx() return an error code")
Reported-by: Илья Шипицин <chipitsine@gmail.com>
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
The link to the known bugs page for the current version is built and
reported there. When it is a development version (less than 2 dots),
instead a link to github open issues is reported as there's no way to
be sure about the current situation in this case and it's better that
users report their trouble there.
As discussed on Discourse here:
https://discourse.haproxy.org/t/haproxy-branch-support-lifetime/4466
it's not always easy for end users to know the lifecycle of the version
they are using. This patch introduces a "Status" line in the output of
"haproxy -vv" indicating whether it's a development, stable, long-term
supported version, possibly with an estimated end of life for the branch
when it can be anticipated (e.g. for stable versions). This field should
be adjusted when creating a major release to reflect the new status.
It may make sense to backport this to other branches to clarify the
situation.
Apply the configuration of the ssl_bind_conf on the generated SSL_CTX.
It's a little bit hacky at the moment because the ssl_sock_prepare_ctx()
function was made for the configuration parsing, not for being used at
runtime. Only the 'verify' bind keyword seems to cause a file access so
we prevent it before calling the function.
Rework ssl_sock_prepare_ctx() so it fills a buffer with the error
messages instead of using ha_alert()/ha_warning(). Also returns an error
code (ERR_*) instead of the number of errors.
It was noted in #48 that there are times when a configuration
may use the server-template directive with SRV records and
simultaneously want to control weights using an agent-check or
through the runtime api. This patch adds a new option
"ignore-weight" to the "resolve-opts" directive.
When specified, any weight indicated within an SRV record will
be ignored. This is for both initial resolution and ongoing
resolution.
The previous patch on this function (36b536d6c "BUG/MEDIUM: stream-int: Don't
loose events on the CS when an EOS is reported") contains a bug. The return
value is based on the conn-stream's flags. But it may be reset if the CS is
closed. Ironically it was exactly the purpose of this patch...
This patch must be backported to 2.0 and 1.9.
When no data filter is registered on a channel, if the message length is known,
the HTTP payload is infinitely forwarded to save calls to process_stream(). When
we finally fall back again in XFER_BODY analyzers, we detect the end of the
message by checking channel flags. If CF_EOI or CF_SHUTR is set, we switch the
message to the DONE state. For CF_EOI, it is relevant, but not for CF_SHUTR: a
shutdown for reads without the end of input must be interpreted as an abort for
messages with a known length.
Because of this bug, some aborts are not properly handled and reported. Instead,
we interpret it as a legitimate shutdown.
This patch must be backported to 2.0.
There are two issues with the way tunnel mode is detected on the response
path. First, when a response with an unknown content length is handled, the
request is also switched in tunnel mode. It is obviously wrong. Because it was
done on the server side only (so not during the request parsing), it has no
noticeable effect.
The second issue is about the way protocol upgrades are handled. The request is
switched in tunnel mode from the time the 101 response is processed. So an
unfinished request may be switched in tunnel mode too early. It is not a common
use, but a protocol upgrade on a POST is allowed. Thus, parsing of the payload
may be hijacked. It is especially bad for chunked payloads.
Now, conditions to switch the request in tunnel mode reflect what should be
done. Especially for the second issue. We wait the end of the request to switch
it in tunnel mode.
This patch must be backported to 2.0 and 1.9. Note that these versions are only
affected by the second issue but the patch cannot be easily split.
Some BUG_ON() tests emit a warning because of a potential null pointer
dereference on an HTX block. In fact, it should never happen, but now, GCC is
happy.
This patch must be backported to 2.0.
In si_cs_recv(), when a shutdown for reads is handled, the conn-stream may be
closed. It happens when the output channel is closed for writes or if
SI_FL_NOHALF is set on the stream-interface. In this case, conn-stream's flags
are reset. Thus, if an error (CS_FL_ERROR) or an end of input (CS_FL_EOI) is
reported by the mux, the event is lost. si_cs_recv() does not report these
events by itself. It relies on si_cs_process() to report them to the
stream-interface and/or the channel.
For instance, if CS_FL_EOS and CS_FL_EOI are set by the H1 multiplexer during a
call to si_cs_recv() on the server side, if the conn-stream is closed (read0 +
SI_FL_NOHALF), the CS_FL_EOI flag is lost. Thus, this may lead the stream to
interpret it as a server abort.
Now, conn-stream's flags are processed at the end of si_cs_recv(). The function
is responsible to set the right flags on the stream-interface and/or the
channel. Due to this patch, the function is now almost linear. Except some early
checks at the beginning, there is only one return statement. It also fixes a
potential bug because of an inconsistency between the splicing and the buffered
receipt. In the first case, CS_FL_EOS is handled before errors on the connection
or the conn-stream. In the second one, it is the opposite.
This patch must be backported to 2.0 and 1.9.
There is a compiler warning after commit a9363eb6 ("BUG/MEDIUM: ssl:
'tune.ssl.default-dh-param' value ignored with openssl > 1.1.1"):
src/ssl_sock.c: In function 'ssl_sock_prepare_ctx':
src/ssl_sock.c:4481:4: error: statement with no effect [-Werror=unused-value]
Fix it by adding a (void)
The peer flags (->flags member of peer struct) are reset by __peer_session_deinit()
function. PEER_F_ALIVE flag which is used by the heartbeat part of the peer protocol
to mark a peer as being alive was not reset by this function. This simple patch
adds the statement to do so.
Note that, at this time, there was no identified issue due to this missing reset.
Must be backported to 2.0.
Upon a reexec_on_failure, if the process tried to exit after the
initialization of the process structure but before it was filled with a
PID, the PID in the mworker_proc structure is set to -1.
In this particular case the -sf argument is filled with -1 and haproxy
will exit with the usage message because of that argument.
Should be backported in 2.0.
This patch introduces the new CLI command 'abort ssl cert' which aborts
an on-going transaction and frees its content.
This command takes the filename of the transaction as an argument.
As the peers protocol expects to parse at least one encoded integer value for
each stick-table data field even when not configured on the local side,
about the "server_name" data field we must emit something even if it has
not been set (no server was configured for instance).
As this data field starts with an encoded integer which is the length
of the remaining data (the dictionary cache entry), we encode the length 0
when emitting such an absent dictionary cache entry.
On the remote side, when we decode such an integer with 0 as value, we stop
parsing the data field and that's it.
Must be backported to 2.0.
This patch adds three counters to the "peer" struct to help in debugging
peers protocol issues:
->no_hbt counts the number of reconnection periods without receiving a heartbeat
->new_conn counts the number of reconnections after ->reconnect timeout expirations.
->proto_err counts the number of protocol errors.
Add RX/TX heartbeat counters to the "peer" struct to have an idea of which
peer is alive or not.
Dump these counters values on the CLI via "show peers" command.
This patch enables us to dump the stick-table information of remote or local peers
without an already opened peer session. This may also be the case for the local peer
during synchronizations with an old process (reload).
Certificate selection in client_hello_cb (openssl >= 1.1.1) correctly
handles crt-list neg filter. Certificate selection for openssl < 1.1.1
has not been touched for a while: the crt-list neg filter is not the same
as its counterpart and is wrong. Fix it to mimic the same behavior
as its counterpart.
It should be backported as far as 1.6.
With CLI cert update, sni_ctx can be removed at runtime. ssl_pkey_info_index
ex_data is filled with one of sni_ctx.kinfo pointer but SSL_CTX can be shared
between sni_ctx. Remove and free a sni_ctx can lead to a segfault when
ssl_pkey_info_index ex_data is used (in ssl_sock_get_pkey_algo). Removing the
dependency on ssl_pkey_info_index ex_data is the easiest way to fix the issue.
since the introduction of mworker, the setuid/setgid was duplicated in
two places; try to improve that by creating a dedicated function.
this patch does not introduce any functional change.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
in mworker mode used with uid/gid settings, it was not possible to get
a coredump despite the set-dumpable option.
indeed the prctl(2) manual page specifies that the dumpable attribute is reset
to the value of `/proc/sys/fs/suid_dumpable` under a few conditions, such as when
the process's effective user or group is changed.
this patch moves the whole set-dumpable logic before the polling code in
order to catch all possible cases where we could have changed the
uid/gid. It however does not cover the possible segfault at startup.
this patch should be backported in 2.0.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Allow the sc-set-gpt0 action to set GPT0 to a value dynamically evaluated from
its <expr> argument (in addition to the existing static <int> alternative).
The copy of the startup logs used to rely on a re-allocated memory area
on the fly, that would attempt to be delivered at once over the CLI. But
if it's too large (too many warnings) it will take time to start up, and
may not even show up on the CLI as it doesn't fit in a buffer.
The ring buffer infrastructure solves all this with no more code, let's
switch to this instead. It simply requires a parsing function to attach
the ring via ring_attach_cli() and all the rest is automatically handled.
Initially this was imagined as a code cleanup, until a test with a config
involving 100k backends and just one occurrence of
"load-server-state-from-file global" in the defaults section took approx
20 minutes to parse due to the O(N^2) cost of concatenating the warnings
resulting in ~1 TB of data to be copied, while it took only 0.57s with
the ring.
Ideally this patch should be backported to 2.0 and 1.9, though it relies
on the ring infrastructure which will then also need to be backported.
Configs able to trigger the bug are uncommon, so another workaround for
older versions without backporting the rings would consist in simply
limiting the size of the error message in print_message() to something
always printable, which will only return the first errors.
ring_attach_cli() is called by the keyword parsing function to dump a
ring to the CLI. It can only work with a specific handler and release
function. Let's make it set them appropriately instead of having the
caller know these functions. This way adding a command to dump a ring
is as simple as declaring a parsing function calling ring_attach_cli().
It was set to MAX_SYSLOG_LEN (1K). It is a bit short to print debug
traces, especially when part of a buffer is dumped. Now, the maximum length is
set to BUFSIZE (16K).
This is mandatory to process input one more time to add the EOM in the HTX
message and to set CS_FL_EOI on the conn-stream. Otherwise, in the stream, a
SHUTR will be reported on the corresponding channel without the EOI. It may be
erroneously interpreted as an abort.
This patch must be backported to 2.0 and 1.9.
Errors during the payload or the trailers parsing are reported with the
HTX_FL_PARSING_ERROR flag on the HTX message and not a negative return
value. This change was introduced when the functions to convert an H1 message to
HTX one were moved to a dedicated file. But the h1 mux was not fully updated
accordingly.
No backport needed except if the commits about file h1_htx.c are backported.
Now, for the sessions, the maximum times (queue, connect, response, total) are
reported in addition to the averages over the last 1024 connections. These
values are called qtime_max, ctime_max, rtime_max and ttime_max.
This patch is related to #272.
For backends and servers, some average times for last 1024 connections are
already calculated. For the moment, the averages for the time passed in the
queue, the connect time, the response time (for HTTP session only) and the total
time are calculated. Now, in addition, the maximum time observed for these
values are also stored.
In addition, these new counters are cleared, like all other max values, with the
CLI command "clear counters".
This patch is related to #272.
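As a sketch of what storing a maximum next to the existing sliding average may
look like (the counter and helper names here are assumptions, only qtime_max is
taken from the text above):

  /* update the sliding average as before, and keep the maximum alongside */
  swrate_add(&counters->q_time, TIME_STATS_SAMPLES, q_time);
  if (q_time > counters->qtime_max)
      counters->qtime_max = q_time;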
This change makes the payload filtering uniform between TCP and HTTP
filters. Now, in TCP, like in HTTP, there is only one callback responsible to
forward data. Thus, old callbacks, tcp_data() and tcp_forward_data(), are
replaced by a single callback function, tcp_payload(). This new callback gets
the offset in the payload to (re)start the filtering and the maximum amount of
data it can forward. It is the filter's responsibility to be compatible with HTX
streams. If not, it must not set the flag FLT_CFG_FL_HTX.
Because of this change, nxt and fwd offsets are no longer needed. Thus they are
removed from the filter structure with their update functions,
flt_change_next_size() and flt_change_forward_size(). Moreover, the trace filter
has been updated accordingly.
This patch breaks the compatibility with the old API. Thus it should probably
not be backported. But, AFAIK, there is no TCP filter, thus the breakage is very
limited.
For now, TCP callbacks are incompatible with the HTX streams because they are
designed to manipulate raw buffers. A new callback will probably be added to be
used in both modes, raw and HTX. So, for HTX streams, these callbacks are
ignored. This should not be a real problem because there are no known filters,
except the trace filter, implementing these callbacks.
This patch must be backported to 2.0 and 1.9.
A corner case was opened in the listener_accept() code by commit 3f0d02bbc2
("MAJOR: listener: do not hold the listener lock in listener_accept()"). The
issue is when one listener (or a group of) managed to eat all the proxy's or
all the process's maxconn, and another listener tries to accept a new socket.
This results in the atomic increment detecting the excess connection count
and immediately aborting, without pausing the listener, so the call is
immediately performed again. This doesn't happen when the test is run on a
single listener because this listener got limited when crossing the limit.
But with 2 or more listeners, we don't have this luxury.
The solution consists in limiting the listener as soon as we have to
decline accepting an incoming connection. This means that the listener
will not be marked full yet if it gets the exact connection count but
this is not a problem in practice since all other listeners will only be
marked full after their first attempt. Thus from now on, a listener is
only full once it has already failed taking an incoming connection.
This bug was definitely responsible for the unreproducible occasional
reports of high CPU usage showing epoll_wait() returning immediately
without accepting an incoming connection, like in bug #129.
This fix must be backported to 1.9 and 1.8.
The "shutdown sessions" admin-mode command used to open-code the list
traversal while there's already a function for this: srv_shutdown_streams().
Better use it.
The "shutdown session server" command used to open-code the list traversal
while there's already a function for this: srv_shutdown_streams(). Better
use it.
Since previous commit a132e5efa9 ("BUG/MEDIUM: Make sure we leave the
session list in session_free().") it's pointless to delete the conn
element inside "if" blocks given that the second test is always true
as well. Let's simplify this with a single LIST_DEL_INIT() before the
test.
In session_free(), if we're about to destroy a connection that had no mux,
make sure we leave the session_list before calling conn_free(). Otherwise,
conn_free() would call session_unown_conn(), which would potentially free
the associated srv_list, but session_free() also frees it, so that would
lead to a double free, and random memory corruption.
This should be backported to 1.9 and 2.0.
There is a very short race in the queues which happens in the following
situation:
- stream A on thread 1 is being processed by a server
- stream B on thread 2 waits in the backend queue for a server
- stream B on thread 2 is fed up with waiting and expires, calls
stream_free() which calls pendconn_free(), which sees the
stream attached
- at the exact same instant, stream A finishes on thread 1, sees
one stream is waiting (B), detaches it and wakes it up
- stream B continues pendconn_free() and calls pendconn_unlink()
- pendconn_unlink() now detaches the node again and performs a
second deletion (harmless since idempotent), and decrements
srv/px->nbpend again
=> the number of connections on the proxy or server may reach -1 if/when
this race occurs.
It is extremely tight as it can only occur during the test on p->leaf_p,
though it has been witnessed at least once. The solution consists in
testing leaf_p again once the lock is held, to make sure the element was
not removed in the meantime.
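A minimal, self-contained sketch of that double-check idea, using generic
pthread locking and a plain linked node rather than haproxy's actual locks and
ebtree (all names here are illustrative):

    #include <pthread.h>
    #include <stddef.h>

    /* generic stand-in for the queued element; in haproxy this is the
     * pendconn's ebtree node whose leaf_p tells whether it is still queued */
    struct qnode { struct qnode *prev, *next; };

    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
    static int nbpend; /* pending counter that must never go below zero */

    static void unlink_once(struct qnode *n)
    {
        pthread_mutex_lock(&queue_lock);
        if (n->prev) {                    /* re-test "still queued" under the lock */
            n->prev->next = n->next;
            if (n->next)
                n->next->prev = n->prev;
            n->prev = n->next = NULL;     /* mark as detached */
            nbpend--;                     /* decremented at most once */
        }
        pthread_mutex_unlock(&queue_lock);
    }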
This should be backported to 2.0 and 1.9, probably even 1.8.
We need to call vars_init() when the list is empty otherwise we
can't use variables in the response scope. This regression was
introduced by cda7f3f5 (MINOR: stream: don't prune variables if
the list is empty).
The following config reproduces the issue:
defaults
    mode http

frontend in
    bind *:11223
    http-request set-var(req.foo) str("foo") if { path /bar }
    http-request set-header bar %[var(req.foo)] if { var(req.foo) -m found }
    http-response set-var(res.bar) str("bar")
    http-response set-header foo %[var(res.bar)] if { var(res.bar) -m found }
    use_backend out

backend out
    server s1 127.0.0.1:11224

listen back
    bind *:11224
    http-request deny deny_status 200
> GET /ba HTTP/1.1
> Host: localhost:11223
> User-Agent: curl/7.66.0
> Accept: */*
>
< HTTP/1.0 200 OK
< Cache-Control: no-cache
< Content-Type: text/html
> GET /bar HTTP/1.1
> Host: localhost:11223
> User-Agent: curl/7.66.0
> Accept: */*
>
< HTTP/1.0 200 OK
< Cache-Control: no-cache
< Content-Type: text/html
< foo: bar
This must be backported as far as 1.9.
Documentation states that the interval between 2 DNS resolutions is
driven by the "timeout resolve <time>" directive.
From a code point of view, this was applied unless the latest status of
the resolution was VALID. In such a case, "hold valid" was enforced.
This is a bug, because "hold" timers are not here to drive how often we
want to trigger a DNS resolution, but rather how long we want to keep the
information when the status of the resolution itself has changed.
This avoids flapping and prevents shutting down an entire backend when a
DNS server is not answering.
This issue was reported by hamshiva in github issue #345.
Backport status: 1.8
As reported by David Birdsong on the ML, the HTTP action do-resolve does
not use the DNS cache.
Actually, the action is "registered" to the resolution for the said name to
be resolved and waits until another requester triggers it. Once the
resolution is finished, the action is updated with the result.
To trigger this, you must have a server with runtime DNS resolution
enabled and run a do-resolve action with the same FQDN, AND they must use the
same resolvers section.
This patch fixes this behavior by ensuring the resolution associated with
the action has a valid answer which is not considered as expired. If
those conditions are met, then we can use it (it's the "cache").
Backport status: 2.0
Since the legacy HTTP mode was removed, the stream is always released at the end
of each HTTP transaction and a new one is created to handle the next request for
keep-alive connections. So the HTTP transaction is no longer reset and the
function http_reset_txn() can be removed.
All TCP and HTTP captures are stored in 2 arrays, one for the request and
another for the response. In HAProxy 1.5, these arrays are part of the HTTP
transaction and thus are released during its cleanup. Because in this version,
the transaction is part of the stream (in 1.5, streams are still called
sessions), the cleanup is always performed, for HTTP and TCP streams.
In HAProxy 1.6, the HTTP transaction was moved out from the stream and is now
dynamically allocated only when required (because of an HTTP proxy or an HTTP
sample fetch). In addition, still in 1.6, the captures arrays were moved from
the HTTP transaction to the stream. This way, it is still possible to capture
elements from TCP rules for a full TCP stream. Unfortunately, the release is
still exclusively performed during the HTTP transaction cleanup. Thus, for a TCP
stream where the HTTP transaction is not required, the TCP captures, if any, are
never released.
Now, all captures are released when the stream is freed. This fixes the memory
leak for TCP streams. For streams with an HTTP transaction, the captures are now
released when the transaction is reset and not systematically during its
cleanup.
This patch must be backported as far as 1.6.
Runtime traces are now supported for the streams, only if compiled with
debug. process_stream() is covered as well as TCP/HTTP analyzers and filters.
In traces, the first argument is always a stream. So it is easy to get the info
about the channels and the stream-interfaces. The second argument, when defined,
is always an HTTP transaction. And the third one is an HTTP message. The trace
message is adapted to report HTTP info when possible.
Names of these macros may conflict with the macros of the runtime
tracing mechanism. So the prefix "FLT_" has been added to avoid any ambiguity.
Despite the addition of the mux layer, no change has been made to how the TCP
splicing is enabled in process_stream(). We still check if the transport layer
on both sides supports splicing, but we don't check whether the muxes support
it. So it is possible to start splicing data with an unencrypted H2 connection
on one side and an H1 connection on the other. This leads to a freeze of the
stream until a client or server timeout is reached.
This patch fixes a part of issue #356. It must be backported as far as 1.8.
The mux H1 announces the support of the TCP splicing. It only works for payload
data. It works for messages with an explicit content-length or for tunnelled
data. For chunked messages, the mux H1 should normally not try to xfer more than
the current chunk through the pipe. Unfortunately, this works on the read side
but the send side is completely bogus. During the output formatting, the
announced chunk size does not account for the data that will be spliced. Because
no formatting happens when spliced data are sent, the produced message is
malformed and rejected by the peer.
For now, because it is quick and simple, the TCP splicing is disabled for
chunked messages. I will try to enable it again in a proper way. I don't know
for now if it will be backportable to previous versions. This will depend on the
amount of changes required to handle it.
This patch fixes a part of the issue #356. It must be backported to 2.0 and 1.9.
This patch is easy to review: let's call the parse_logsrv() function to parse
the "log" directive, as is already done for other sections such as proxies.
This enables us to log incoming TCP connections for the listeners of "peers"
sections.
Update the documentation for "peers" section.
If the SSL_CTX of a previous instance (ckch_inst) was used as a
default_ctx, replace the default_ctx of the bind_conf by the first
SSL_CTX inserted in the SNI tree.
Use the RWLOCK of the sni tree to handle the change of the default_ctx.
When trying to update a certificate <file>.{rsa,ecdsa,dsa} which does not
exist, while <file> was used as a regular file in the configuration, the
error was ambiguous. Correct it so we can return a "certificate not found"
error.
Commit bc6ca7c ("MINOR: ssl/cli: rework 'set ssl cert' as 'set/commit'")
broke the ability to commit a unique certificate which does not use a
bundle extension .{rsa,ecdsa,dsa}.
When doing a 'set ssl cert' with a certificate which does not exist in the
configuration, appctx->ctx.ssl.old_ckchs->path was duplicated while
appctx->ctx.ssl.old_ckchs was NULL, resulting in a NULL dereference.
Move the code so the 'not referenced' error is raised before this.
The sample fetch can get srv_name without iterating over
`core.backends["bk"].servers`.
Then we can get the Server class quickly via
`core.backends[txn.f:be_name()].servers[txn.f:srv_name()]`.
This addresses issue #342.
Fix 541a534 ("BUG/MINOR: ssl/cli: fix build of SCTL and OCSP") was not
enough.
[wla: It will probably be better later to put the #ifdef in the
functions so they can return an error if they are not implemented]
Since we now have full URIs with h2, stats may fail to work over H2
so we must carefully only check the path there if the stats URI was
passed with a path only. This way it remains possible to intercept
proxy requests to report stats on explicit domains but it continues
to work as expected on origin requests.
No backport needed.
In case a stream tries to send on a connection error, we must report the
error so that the stream interface keeps the data available and may safely
retry on another connection. Till now this would happen only before the
connection was established, not in case of a failed handshake or an early
GOAWAY for example.
This should be backported to 2.0 and 1.9.
If a connection faces an error or a timeout, it must be removed from its
idle list ASAP. We certainly don't want to risk sending new streams on
it.
This should be backported to 2.0 (replacing MT_LIST_DEL with LIST_DEL_LOCKED)
and 1.9 (there's no lock there, the idle lists are per-thread and per-server
however a LIST_DEL_INIT will be needed).
If an H2 mux has met an error, we must not report available streams
anymore, or it risks accumulating new streams while not being able
to process them.
This should be backported to 2.0 and 1.9.
It can be sometimes interesting to have a timestamp with a
resolution of less than a second.
It is currently painful to obtain this, because the concatenation
of date and date_us leads to a shorter timestamp during the first
100ms of each second, which is not parseable and needs ugly ACLs
in the configuration to prepend 0s when needed.
To improve this, add an optional <unit> parameter to date sample
to report an integer with desired unit.
Also support this unit in http_date converter to report
a date string with sub-second precision.
The domain option of the cookie keyword allows defining which domain or
domains should use the cookie value of a cookie-based server
affinity. If the domain does not start with a dot, the user agent should
only use the cookie on hosts that match the provided domains. If the
configured domain starts with a dot, the user agent can use the cookie
with any host ending with the configured domain.
The haproxy config parser helps the admin by warning about a potentially
buggy config: defining a domain without an embedded dot which does not
start with a dot, which is forbidden by the RFC.
The current condition to issue the warning implements RFC2109. This
change updates the implementation to RFC6265, which allows domains without
a leading dot.
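A rough sketch of the relaxed check, as a generic illustration rather than the
exact haproxy condition: what triggers the warning is the absence of any
embedded dot, while a leading dot is no longer required.

    #include <string.h>

    /* Return 1 when the configured cookie domain looks acceptable under the
     * RFC6265-style rule sketched above: a leading dot is tolerated, what
     * matters is that an embedded dot exists somewhere in the domain. */
    static int cookie_domain_acceptable(const char *domain)
    {
        if (*domain == '.')
            domain++;                       /* skip the optional leading dot */
        return strchr(domain, '.') != NULL; /* warn when this returns 0 */
    }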
Should be backported to all supported versions. The feature exists at least
since 1.5.
Remove the leftovers of the certificate + bundle updating in 'ssl set
cert' and 'commit ssl cert'.
* Remove the it variable in appctx.ctx.ssl.
* Stop doing everything twice.
* Indent
This patch splits the 'set ssl cert' CLI command into 2 commands.
The previous way of updating the certificate on the CLI was limited with
the bundles. It was only able to apply one of the three parts of the
certificate during an update, which means that we needed 3 updates to
update a full 3-certificate bundle.
It was also not possible to atomically apply several parts of a
certificate with the ability to roll back on error (for example applying
a .pem, then a .ocsp, then a .sctl).
The command 'set ssl cert' will now duplicate the certificate (or
bundle) and update it in a temporary transaction.
The second command 'commit ssl cert' will commit all the changes made
during the transaction for the certificate.
This commit breaks the ability to update a certificate which was used as
a unique file and as a bundle in the HAProxy configuration. This way of
using the certificates wasn't making any sense.
Example:
// For a bundle:
$ echo -e "set ssl cert localhost.pem.rsa <<\n$(cat kikyo.pem.rsa)\n" | socat /tmp/sock1 -
Transaction created for certificate localhost.pem!
$ echo -e "set ssl cert localhost.pem.dsa <<\n$(cat kikyo.pem.dsa)\n" | socat /tmp/sock1 -
Transaction updated for certificate localhost.pem!
$ echo -e "set ssl cert localhost.pem.ecdsa <<\n$(cat kikyo.pem.ecdsa)\n" | socat /tmp/sock1 -
Transaction updated for certificate localhost.pem!
$ echo "commit ssl cert localhost.pem" | socat /tmp/sock1 -
Committing localhost.pem.
Success!
This patch introduces a strict-limits parameter which makes a failed
setrlimit() call a hard error instead of a warning. This option can be
explicitly disabled with the "no" keyword.
The general aim of this patch is to avoid bad surprises in a production
environment where you change the maxconn for example, a new fd limit is
calculated, but cannot be set because of a sysfs setting. In that case you
might want to have an explicit failure to be aware of it before seeing
your traffic going down. During a global rollout it is also useful to
explicitly fail, as most progressive rollouts would simply check the
general health of the process.
As discussed, plan to use the strict by default mode starting from v2.3.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
In global config parsing, we currently expect to have a possible "no"
keyword (KWN_NO), but we never allow it in config parsing.
Another approach could have been to simply remove the code handling a
possible KWN_NO.
Take this opportunity to update the documentation of set-dumpable.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
In connect_server(), if we're reusing a connection, only use SF_SRV_REUSED
if the connection is fully ready. We may be using a multiplexed connection
created by another stream that is not yet ready, and may fail.
If we set SF_SRV_REUSED, process_stream() will then not wait for the timeout
to expire, and retry to connect immediately.
This should be backported to 1.9 and 2.0.
This commit depends on 55234e33708c5a584fb9efea81d71ac47235d518.
Add a new method, ctl(), to muxes. It uses an "enum mux_ctl_type" to
let it know which information we're asking for, and can output it either
directly by returning the expected value, or by using an optional
"output" argument.
Right now, the only known mux_ctl_type is MUX_STATUS, that will return 0 if
the mux is not ready, or MUX_STATUS_READY if the mux is ready.
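A sketch of what such a control hook could look like, with names taken from the
description above but simplified (the real haproxy definitions may differ):

    enum mux_ctl_type { MUX_STATUS };

    #define MUX_STATUS_READY 0x01   /* illustrative value */

    struct connection;              /* opaque here */

    /* returns the requested information directly, or through "output" when
     * the answer does not fit in a return code */
    static int mux_example_ctl(struct connection *conn, enum mux_ctl_type ctl, void *output)
    {
        (void)conn; (void)output;
        switch (ctl) {
        case MUX_STATUS:
            /* return MUX_STATUS_READY once the mux is fully usable, 0 otherwise */
            return 0;
        }
        return -1;                  /* unknown request */
    }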
We probably want to backport this to 1.9 and 2.0.
We previously relied on chunk_cat(dst, b_fromist(src)) for this but it
is not reliable as the allocated buffer is inside the expression and
may be on a temporary stack. While it's possible to allocate stack space
for a struct and return a pointer to it, it's not possible to initialize
it from a temporary variable to prevent arguments from being evaluated
multiple times. Since this is only used to append an ist after a chunk,
let's instead have a chunk_istcat() function to perform exactly this
from a native ist.
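A minimal sketch of what such an append helper can look like, with the struct
fields reduced to the essentials (the real haproxy buffer and ist types carry
more members and the actual function may differ in details):

    #include <string.h>
    #include <stddef.h>

    struct ist    { const char *ptr; size_t len; };          /* pointer + length string */
    struct buffer { char *area; size_t size; size_t data; }; /* the chunk being filled */

    /* append <src> after the current chunk content; fail if it does not fit */
    static int chunk_istcat_sketch(struct buffer *chk, const struct ist src)
    {
        if (src.len > chk->size - chk->data)
            return 0;
        memcpy(chk->area + chk->data, src.ptr, src.len);
        chk->data += src.len;
        return 1;
    }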
The only call place (URI computation in the cache) was updated.
Actually gcc believes it has detected a possible truncation but it
cannot since the output string is necessarily at least one char
shorter than what it expects. However addressing it is easy and
removes the need for an intermediate copy so let's do it.
The per-thread UUID string produced by generate_pseudo_uuid() could be
off by one character due to a too small size limit in snprintf(). In
practice the UUID remains large enough to avoid any collision though.
This should be backported to 2.0 and 1.9.
The gcc warning about format truncation in get_gmt_offset() is annoying
since we always call it with a valid time, thus it cannot fail. However
it's true that nothing guarantees that future code will not reuse this
function incorrectly, so better enforce the modulus on one day and
silence the warning.
Rework the 'set ssl cert' IO handler so it is clearer.
Use its own SETCERT_ST_* states instead of the STAT_ST ones.
Use an inner loop in SETCERT_ST_GEN and SETCERT_ST_INSERT to do the work
for both the certificate and the bundle.
The io_release() is now called only when the CKCH spinlock is taken so
we can unlock during a release without any condition.
Since commit 90b098c ("BUG/MINOR: cli: don't call the kw->io_release if
kw->parse failed"), the io_release() callback is not called anymore when
the parse() failed. Call it directly on the error path of the
cli_parse_set_cert() function.
This bug is pretty pernicious and has serious consequences: in 2.1, an
infinite loop in process_stream() because the backend stream-interface remains
in the ready state (SI_ST_RDY). In 2.0, process_stream() is called in a loop
because the stream-interface remains blocked in the connect state
(SI_ST_CON). In both cases, it happens after a connection retry attempt. In 1.9,
it seems not to happen. But it may be just by chance, or just because it is
harder to get the right conditions to trigger the bug. However, reading the code,
the bug seems to exist too.
Here is how the bug happens in 2.1. When we try to establish a new connection to
a server, the corresponding stream-interface is first set to the connect state
(SI_ST_CON). When the underlying connection is known to be connected (the flag
CO_FL_CONNECTED set), the stream-interface is switched to the ready state
(SI_ST_RDY). It is a transient state between the connect state (SI_ST_CON) and
the established state (SI_ST_EST). It must be handled on the next call to
process_stream(), which is responsible to operate the transition. During all
this time, errors can occur. A connection error or a client abort. The transient
state SI_ST_RDY was introduced to let a chance to process_stream() to catch
these errors before considering the connection as fully established.
Unfortunately, if a read0 is caught in states SI_ST_CON or SI_ST_RDY, it is
possible to have a shutdown without a transition to SI_ST_DIS (in fact, here,
SI_ST_CON is switched to SI_ST_RDY). This happens if the request was fully
received and analyzed. In this case, the flag SI_FL_NOHALF is set on the backend
stream-interface. If an error is also reported during the connect, the behavior
is undefined because an error is returned to the client and a connection retry
is performed. So on the next connection attempt to the server, if another error
is reported, a client abort is detected. But the shutdown for writes was already
done. So the transition to the state SI_ST_DIS is impossible. We stay in the
state SI_ST_RDY. Because it is a transient state, we loop in process_stream() to
perform the transition.
It is hard to understand how the bug happens reading the code and even harder to
explain. But there is a trivial way to hit the bug by sending h2 requests to a
server only speaking h1. For instance, with the following config :
listen tst
    bind *:80
    server www 127.0.0.1:8000 proto h2 # in reality, it is an HTTP/1.1 server
It is a configuration error, but it is an easy way to observe the bug. Note it
may happen with a valid configuration.
So, after a careful analysis, it appears that si_cs_recv() should never be
called for a not fully established stream-interface. This way the connection
retries will be performed before reporting an error to the client. Thus, if a
shutdown is performed because a read0 is handled, the stream-interface is
unconditionally set to the transient state SI_ST_DIS.
This patch must be backported to 2.0 and 1.9. However on these versions, this
patch reveals a design flaw about connections and a bad way to perform the
connection retries. We are working on it.
In h2_send(), when something is sent, we remove the flags
(H2_CF_MUX_MFULL|H2_CF_DEM_MROOM) on the h2 connection. This way, we are able to
wake up all streams waiting to send data. Unfortunately, these flags are
unconditionally removed, even when nothing was sent. So if the h2c is blocked
because the mux buffers are full and we are unable to send anything, all streams
in the send_list are woken up for nothing. Now, we only remove these flags if at
least one send succeeded.
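A tiny self-contained sketch of that change (the flag values are illustrative,
not the real ones):

    #include <stddef.h>

    #define H2_CF_MUX_MFULL  0x00000010u  /* illustrative bit values */
    #define H2_CF_DEM_MROOM  0x00000020u

    struct h2c_sketch { unsigned int flags; };

    /* only lift the "mux full" blocking flags when a send actually made progress */
    static void h2_after_send(struct h2c_sketch *h2c, size_t sent)
    {
        if (sent > 0)
            h2c->flags &= ~(H2_CF_MUX_MFULL | H2_CF_DEM_MROOM);
    }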
This patch must be backported to 2.0.
The io_release() callback of the cli_kw is supposed to be used to clean up
what an io_handler() has done. It is called once the work in the IO
handler is finished, or when the connection was aborted by the client.
This patch fixes a bug where the io_release callback was called even
when the parse() callback failed, which means that io_release() could
be called even if the io_handler() was not called.
Should be backported to every version that has a cli_kw->release().
(as far as 1.7)
As reported in issue #343, there is one case where a NULL stream can
still be dereferenced, when getting &s->txn->flags. Let's protect all
assignments to stay on the safe side for future additions.
No backport is needed.
Debug commands will usually mark the fate of the process. We'd rather
have them counted and visible in a core or in stats output than trying
to guess how a flag combination could happen. The counter is only
incremented when the command is about to be issued however, so that
failed attempts are ignored.
Instead of relying on DEBUG_DEV for most debugging commands, which is
limiting, let's condition them to expert mode. Only one ("debug dev exec")
remains conditioned to DEBUG_DEV because it can have a security implication
on the system. The commands are not listed unless "expert-mode on" was first
entered on the CLI :
> expert-mode on
> help
debug dev close <fd> : close this file descriptor
debug dev delay [ms] : sleep this long
debug dev exec [cmd] ... : show this command's output
debug dev exit [code] : immediately exit the process
debug dev hex <addr> [len]: dump a memory area
debug dev log [msg] ... : send this msg to global logs
debug dev loop [ms] : loop this long
debug dev panic : immediately trigger a panic
debug dev stream ... : show/manipulate stream flags
debug dev tkill [thr] [sig] : send signal to thread
> debug dev stream
Usage: debug dev stream { <obj> <op> <value> | wake }*
<obj> = {strm | strm.f | sif.f | sif.s | sif.x | sib.f | sib.s | sib.x |
txn.f | req.f | req.r | req.w | res.f | res.r | res.w}
<op> = {'' (show) | '=' (assign) | '^' (xor) | '+' (or) | '-' (andnot)}
<value> = 'now' | 64-bit dec/hex integer (0x prefix supported)
'wake' wakes the stream asssigned to 'strm' (default: current)
Some commands like the debug ones are not enabled by default but can be
useful on some production environments. In order to avoid the temptation
of using them incorrectly, let's introduce an "expert" mode for a CLI
connection, which allows some commands to appear and be used. It is
enabled by command "expert-mode on" which is not listed by default.
This function adds some control by verifying that the target address is
really readable. It will not protect against writing to wrong places,
but will at least protect against a large number of mistakes such as
incorrectly copy-pasted addresses.
This new "debug dev stream" command allows to manipulate flags, timeouts,
states for streams, channels and stream interfaces, as well as waking a
stream up. These may be used to help reproduce certain bugs during
development. The operations are performed to the stream assigned by
"strm" which defaults to the CLI's stream. This stream pointer can be
chosen from one of those reported in "show sess". Example:
socat - /tmp/sock1 <<< "debug dev stream strm=0x1555b80 req.f=-1 req.r=now wake"
We never enter val_fc_time_value when an associated fetcher such as `fc_rtt` is
called without an argument, meaning `type == ARGT_STOP` will never be true and so
the default `data.sint = TIME_UNIT_MS` will never be set. Remove this part to
avoid thinking the default data.sint is set to ms while reading the code.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
[Cf: This patch may safely be backported as far as 1.7, but it doesn't matter if it's not.]
Commit 541a534 ("BUG/MINOR: ssl/cli: fix build of SCTL and OCSP")
introduced a bug in which we iterate outside the array during a 'set ssl
cert' if we didn't build with OCSP or SCTL support.
To avoid affecting the traffic too much during a certificate update,
create the SNIs in an IO handler which yields every 10 ckch instances.
This way haproxy continues to respond even if we try to update a
certificate which has 50,000 instances.
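A rough sketch of that yielding pattern (function and field names are
assumptions, not the actual haproxy applet API; only the batching principle is
shown):

    #define SNI_BATCH 10

    struct commit_ctx {
        int done;    /* instances already regenerated */
        int total;   /* instances to regenerate */
    };

    /* called repeatedly by the CLI applet; returns 1 when finished, 0 to yield
     * and be called again later so other traffic keeps being processed */
    static int commit_cert_io_handler(struct commit_ctx *ctx)
    {
        int budget = SNI_BATCH;

        while (ctx->done < ctx->total && budget--) {
            /* ... regenerate the SSL_CTX / sni_ctx for instance ctx->done ... */
            ctx->done++;
        }
        return ctx->done >= ctx->total;
    }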
When updating a certificate from the CLI, it is not possible to revert
some of the changes if part of the certificate update failed. We now
create a copy of the ckch_store for the changes so we can revert
if something goes wrong.
Even if the ckch_store was affected before this change, it wasn't
affecting the SSL_CTXs used for the traffic. It was only a problem if we
tried to update a certificate after failing to do it the first time.
The new ckch_store is also linked to the new sni_ctxs so it's easy to
insert the sni_ctxs before removing the old ones.
ssl_sock_copy_cert_key_and_chain() copies the content of a
<src> cert_key_and_chain to a <dst>.
It increases the refcount on every SSL structure (X509, DH,
private key...) and allocates new buffers for the other fields.
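A hedged sketch of the refcount-based copy, limited to two fields (the real
function also handles the DH parameters, the chain and the extra buffers; the
struct here is a stand-in, but X509_up_ref() and EVP_PKEY_up_ref() are the
actual OpenSSL 1.1.0+ helpers):

    #include <openssl/x509.h>
    #include <openssl/evp.h>

    struct ckch_sketch { X509 *cert; EVP_PKEY *key; };

    static int ckch_copy_sketch(const struct ckch_sketch *src, struct ckch_sketch *dst)
    {
        /* bump the reference counts instead of duplicating the objects */
        if (src->cert && !X509_up_ref(src->cert))
            return 0;
        if (src->key && !EVP_PKEY_up_ref(src->key))
            return 0;
        dst->cert = src->cert;
        dst->key  = src->key;
        return 1;
    }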
It is now possible to update new parts of a CKCH from the CLI.
Currently you will be able to update a PEM (by default), an OCSP response
in base64, an issuer file, and an SCTL file.
Each update will create a new CKCH and new sni_ctx structures, so we will
need a "commit" command later to apply several changes and create the
sni_ctx only once.
Split the ssl_sock_load_crt_file_into_ckch() in two functions:
- ssl_sock_load_files_into_ckch() which is dedicated to opening every
file related to a filename during the configuration parsing (PEM, sctl,
ocsp, issuer etc.)
- ssl_sock_load_pem_into_ckch() which is dedicated to opening a PEM,
either in a file or a buffer
ssl_sock_load_issuer_file_into_ckch() is a new function which is able to
load an issuer from a buffer or from a file to a CKCH.
Use this function directly in ssl_sock_load_crt_file_into_ckch()
The ssl_sock_load_sctl_from_file() function was modified to
fill directly a struct cert_key_and_chain.
The function prototype was normalized in order to be used with the CLI
payload parser.
This function either reads text from a buffer or reads a file on the
filesystem.
It fills the sctl buffer of the struct cert_key_and_chain.
The ssl_sock_load_ocsp_response_from_file() function was modified to
fill directly a struct cert_key_and_chain.
The function prototype was normalized in order to be used with the CLI
payload parser.
This function either reads a base64 payload from a buffer or reads a binary file
on the filesystem.
It fills the ocsp_response buffer of the struct cert_key_and_chain.
The logs were added to the H2 mux so that we can report logs in case
of errors that prevent a stream from being created, but as a side effect
these logs are emitted twice for backend connections: once by the H2 mux
itself and another time by the upper layer stream. It can even happen
more often with connection retries.
This patch makes sure we do not emit logs for backend connections.
It should be backported to 2.0 and 1.9.
As reported in issue #335, a lot of contention happens on the PATLRU lock
when performing expensive regex lookups. This is absurd since the purpose
of the LRU cache was to have a fast cache for expressions, thus the cache
must not be shared between threads and must remain lockless.
This commit makes the LRU cache thread-local and gets rid of the PATLRU
lock. A test with 7 threads on 4 cores climbed from 67kH/s to 369kH/s,
or a scalability factor of 5.5.
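A sketch of the direction taken, with a generic cache type instead of haproxy's
lru64 structures: the cache pointer itself is thread-local, so each thread
lazily creates its own instance and no lock is ever needed to use it.

    #include <stdlib.h>

    struct lru_cache_sketch { unsigned int entries; /* ... */ };

    /* one cache per thread: no other thread ever touches it, hence no lock */
    static __thread struct lru_cache_sketch *pat_lru;

    static struct lru_cache_sketch *get_pat_lru(void)
    {
        if (!pat_lru)
            pat_lru = calloc(1, sizeof(*pat_lru)); /* lazily allocated per thread */
        return pat_lru;
    }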
Given the huge performance difference and the regression caused to
users migrating from processes to threads, this should be backported at
least to 2.0.
Thanks to Brian Diekelman for his detailed report about this regression.
As reported in issue #331, the code used to cast a 32-bit to a 64-bit
stick-table key is wrong. It only copies the 32 lower bits in place on
little endian machines or overwrites the 32 higher ones on big endian
machines. It ought to simply remove the wrong cast dereference.
This bug was introduced when changing stick table keys to samples in
1.6-dev4 by commit bc8c404449 ("MAJOR: stick-tables: use sample types
in place of dedicated types"), so the fix must be backported as far
as 1.6.
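A self-contained illustration of the kind of aliasing problem described above
(generic code, not the haproxy source): writing a 32-bit value through a
pointer cast into a 64-bit slot only touches half of the slot, and which half
depends on the byte order, while a plain integer conversion is always correct.

    #include <stdint.h>

    static uint64_t widen_wrong(uint32_t v)
    {
        uint64_t key = ~0ULL;        /* pretend the slot holds stale bytes */
        *(uint32_t *)&key = v;       /* fills the low half on LE, the high half on BE */
        return key;                  /* the other half keeps its stale content */
    }

    static uint64_t widen_right(uint32_t v)
    {
        return (uint64_t)v;          /* endian-independent integer conversion */
    }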
A trick is used to set SESSION_ID, and SESSION_ID_CONTEXT lengths
to 0 and avoid ASN1 encoding of these values.
There is no specific function to set the length of those parameters
to 0, so we fake this by calling these functions with the same buffer
but a length of zero.
But those functions don't seem to check for a zero length before
performing a memcpy of length zero with src and dst pointing to the
same buffer, causing valgrind to bark.
So the code was re-worked to pass them different pointers even
if the buffer content is unused.
In a second step, when resetting the value, a memcpy overlap
happened on the SESSION_ID_CONTEXT. It was re-worked and this is
now reset using the constant global value SHCTX_APPNAME, which is a
different pointer with the same content.
This patch should be backported to every version since ssl
support was added to haproxy if we want valgrind to shut up.
This is tracked in github issue #56.
Processing of SRV record weight was inaccurate and when a SRV record's
weight was set to 0, HAProxy enforced it to '1'.
This patch aims at fixing this without breaking compatibility with
the previous behavior.
Backport status: 1.8 to 2.0
fopen() can return NULL when state file is missing. This patch adds a
check of fopen() return value so we can skip processing in such case.
No backport needed.
Previously an expression like:
path,field(2,/) -m found
always returned `true`.
The bug has existed since the `field` converter was introduced, that is, since
commit f399b0debf.
The fix should be backported to 1.6+.
When running haproxy -c, the cache parser is trying to allocate the size
of the cache. This can be a problem in an environment where the RAM is
limited.
This patch moves the cache allocation to the post_check callback, which
is not executed during a -c.
This patch may be backported at least to 2.0 and 1.9. In 1.9, the callbacks
registration mechanism is not the same. So the patch will have to be adapted. No
need to backport it to 1.8, the code is probably too different.
When a stick counter is fetched, it is important that the requested counter does
not exceed (MAX_SESS_STKCTR -1). Actually, there is no bug with a default build
because, by construction, MAX_SESS_STKCTR is defined to 3 and we know that we
never exceed the max value. scN_* sample fetches are numbered from 0 to 2. For
other sample fetches, the value is tested.
But there is a bug if MAX_SESS_STKCTR is set to a lower value. For instance
1. In this case the counters sc1_* and sc2_* may be undefined.
This patch fixes the issue #330. It must be backported as far as 1.7.
When an error occurred in the function bind_parse_tls_ticket_keys(), during the
configuration parsing, the opened file is not always closed. To fix the bug, all
errors are caught at the same place, where all resources are released.
This patch fixes the bug #325. It must be backported as far as 1.7.
When using the master CLI with 'fd@', during a reload, the master CLI
proxy is stopped. Unfortunately if this is an inherited FD it is closed
too, and the master CLI won't be able to bind again during the
re-execution. It led the master to fall back to waitpid mode.
This patch forbids the inherited FDs in the master's listeners to be
closed during a proxy_stop().
This patch is mandatory to use the -W option in VTest versions that contain the
-mcli feature.
(86e65f1024)
Should be backported as far as 1.9.
If openssl 1.1.1 is used, commit c2aae74f0 mistakenly enables the automatic DH
feature from openssl instead of the automatic ECDH feature. There is no impact for
the ECDH one because the feature is always enabled for that version. But because
of this, 'tune.ssl.default-dh-param' was completely ignored for DH parameters.
This patch fixes the bug by calling 'SSL_CTX_set_ecdh_auto' instead of
'SSL_CTX_set_dh_auto'.
Currently some users may use a 2048-bit DH parameter, thinking they're using a
1024-bit one. Doing this, they may experience performance issues on light hardware.
This patch warns the user if haproxy fails to configure the given DH parameter.
In this case, and if the openssl version is > 1.1.0, haproxy will let openssl
automatically choose a default DH parameter. For other openssl versions, the DH
ciphers won't be usable.
A common case of failure is due to the security level of openssl.cnf,
which could refuse a 1024-bit DH parameter for a 2048-bit key:
$ cat /etc/ssl/openssl.cnf
...
[system_default_sect]
MinProtocol = TLSv1
CipherString = DEFAULT@SECLEVEL=2
This should be backported into any branch containing the commit c2aae74f0.
It requires all or part of the previous CLEANUP series.
This addresses github issue #324.
All bind keyword parsing messages were shown as alerts.
With this patch, if the message is flagged only with ERR_WARN and
not ERR_ALERT, it will show a [WARNING] label and not [ALERT].
ssl_sock_load_dh_params used to return >0 or -1 to indicate success
or failure. Make it return a set of ERR_* instead so that its callers
can transparently report its status. Given that its callers only used
to know about ERR_ALERT | ERR_FATAL, this is the only code returned
for now. An error message was added in the case of failure and the
comment was updated.
ssl_sock_put_ckch_into_ctx used to return 0 or >0 to indicate success
or failure. Make it return a set of ERR_* instead so that its callers
can transparently report its status. Given that its callers only used
to know about ERR_ALERT | ERR_FATAL, this is the only code returned
for now. And a comment was updated.
ckch_inst_new_load_store() and ckch_inst_new_load_multi_store used to
return 0 or >0 to indicate success or failure. Make it return a set of
ERR_* instead so that its callers can transparently report its status.
Given that its callers only used to know about ERR_ALERT | ERR_FATAL,
this is the only code returned for now. And the comment was updated.
ssl_sock_load_ckchs() used to return 0 or >0 to indicate success or
failure even though this was not documented. Make it return a set of
ERR_* instead so that its callers can transparently report its status.
Given that its callers only used to know about ERR_ALERT | ERR_FATAL,
this is the only code returned for now. And a comment was added.
These functions were returning only 0 or 1 to mention success or error,
and made it impossible to return a warning. Let's make them return error
codes from ERR_* and map all errors to ERR_ALERT|ERR_FATAL for now since
this is the only code that was set on non-zero return value.
In addition some missing comments were added or adjusted around the
functions' return values.
In mux_pt_io_cb(), instead of always calling the wake method, only do so
if nobody subscribed for receive. If we have a subscription, just wake the
associated tasklet up.
This should be backported to 1.9 and 2.0.
There's a small window where the mux_pt tasklet may be woken up, and thus
mux_pt_io_cb() get scheduled, and then the connection is attached to a new
stream. If this happens, don't do anything, and just let the stream know
by calling its wake method. If the connection had an error, the stream should
take care of destroying it by calling the detach method.
This should be backported to 2.0 and 1.9.
This reverts commit "BUG/MEDIUM: mux_pt: Make sure we don't have a
conn_stream before freeing.".
mux_pt_io_cb() is only used if we have no associated stream, so we will
never have a cs, so there's no need to check that, and we of course have to
destroy the mux in mux_pt_detach() if we have no associated session, or if
there's an error on the connection.
This should be backported to 2.0 and 1.9.
The idle cleanup tasks' masks are wrong for threads 32 to 64, which
causes the wrong thread to wake up and clean the connections that it
does not own, with a risk of crash or infinite loop depending on
concurrent accesses. For thread 32, any thread between 32 and 64 will
be woken up, but for threads 33 to 64, in fact threads 1 to 32 will
run the task instead.
This issue only affects deployments enabling more than 32 threads. While
it is not common in 1.9, where this has to be explicit and can easily be
dealt with by lowering the number of threads, it can be more common in
2.0 since by default the thread count is determined based on the number
of available processors, hence the MAJOR tag which is mostly relevant
to 2.x.
The problem was first introduced into 1.9-dev9 by commit 0c18a6fe3
("MEDIUM: servers: Add a way to keep idle connections alive.") and was
later moved to cfgparse.c by commit 980855bd9 ("BUG/MEDIUM: server:
initialize the orphaned conns lists and tasks at the end").
This patch needs to be backported as far as 1.9, with care as 1.9 is
slightly different there (uses idle_task[] instead of idle_conn_cleanup[]
like in 2.x).
On error, make sure we don't have a conn_stream before freeing the connection
and the associated mux context. Otherwise a stream will still reference
the connection, and attempt to use it.
If we still have a conn_stream, it will properly be free'd when the detach
method is called, anyway.
This should be backported to 2.0 and 1.9.
There are 2 kinds of tcp info fetchers. Those returning a time value (fc_rtt and
fc_rttval) and those returning a counter (fc_unacked, fc_sacked, fc_retrans,
fc_fackets, fc_lost, fc_reordering). Because of a bug, the counters were handled
as time values, and by default, were divided by 1000 (because of an invalid
conversion from us to ms). To work around this bug and have the right value, the
argument "us" had to be specified.
So now, tcp info fetchers returning a counter don't support any argument
anymore. To not break old configurations, if an argument is provided, it is
ignored and a warning is emitted during the configuration parsing.
In addition, parameter validation is now performed during the configuration
parsing.
This patch must be backported as far as 1.7.
Patch 56996da ("BUG/MINOR: mworker/ssl: close OpenSSL FDs on reload")
fixes an issue where the /dev/random FD was leaked by OpenSSL upon a
reload in master worker mode. Indeed the FD was not flagged with
CLOEXEC.
The fix was checking if ssl_used_frontend or ssl_used_backend were set
to close the FD. This is wrong, indeed the lua init code creates an SSL
server without increasing the backend value, so the deinit is never
done when you don't use SSL in your configuration.
To reproduce the problem you just need to build haproxy with openssl and
lua, with an openssl which does not use the getrandom() syscall. No
openssl nor lua configuration is required for haproxy.
This patch must be backported as far as 1.8.
Fix issue #314.
The recent changes to address URI issues mixed with the recent fix to
stop caching absolute URIs have caused the cache not to cache H2 requests
anymore since these ones come with a scheme and authority. Let's unbreak
this by using absolute URIs all the time, now that we keep host and
authority in sync. So what is done now is that if we have an authority,
we take the whole URI as it is as the cache key. This covers H2 and H1
absolute requests. If no authority is present (most H1 origin requests),
then we prepend "https://" and the Host header. The reason for https://
is that most of the time we don't care about the scheme, but since about
all H2 clients use this scheme, at least we can share the cache between
H1 and H2.
No backport is needed since the breakage only affects 2.1-dev.
When a response generated by HAProxy is handled by the mux H1, if the
corresponding request has not fully been received, the close mode is
forced. Thus, the client is notified that the connection will certainly be closed
abruptly, without waiting for the end of the request.
The flag HTX_FL_PROXY_RESP is now set on responses generated by HAProxy,
excluding responses returned by applets and services. It is an informative flag
set by the applicative layer.
When an error file was loaded, the flag HTX_SL_F_XFER_LEN was never set on the
HTX start line because of a bug. During the headers parsing, the flag
H1_MF_XFER_LEN is never set on the h1m. But it was the condition to set
HTX_SL_F_XFER_LEN on the HTX start-line. Instead, we must only rely on the flags
H1_MF_CLEN or H1_MF_CHNK.
Because of this bug, it was impossible to keep a connection alive for a response
generated by HAProxy. Now the flag HTX_SL_F_XFER_LEN is set when an error file
has a content length (chunked responses are unsupported at this stage) and the
connection may be kept alive if there is no connection header specified to
explicitly close it.
This patch must be backported to 2.0 and 1.9.
It currently is not possible to figure out the exact haproxy version from a
core file for the sole reason that the version is stored into a const
string and as such ends up in the .text section that is not part of a
core file. By turning them into variables we move them to the data
section and they appear in core files. In order to help finding them,
we just prepend an extra variable in front of them and we're able to
immediately spot the version strings from a core file:
$ strings core | fgrep -A2 'HAProxy version'
HAProxy version follows
2.1-dev2-e0f48a-88
2019/10/15
(These are haproxy_version and haproxy_date respectively). This may be
backported to 2.0 since this part is not supposed to impact anything but
the developer's time spent debugging.
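A sketch of the trick, close to what the commit describes (the exact variable
and macro names are assumptions): writable arrays end up in the .data segment,
which is dumped in core files, and the marker string placed just before them
makes the version trivial to locate with strings/grep.

    /* the marker makes "strings core | fgrep -A2 'HAProxy version'" work */
    char haproxy_version_here[] = "HAProxy version follows";
    char haproxy_version[]      = "2.1-dev2";    /* normally built from a version macro */
    char haproxy_date[]         = "2019/10/15";  /* normally built from a date macro */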
246c024 ("MINOR: ssl: load the ocsp in/from the ckch") broke the loading
of OCSP files. The function ssl_sock_load_ocsp_response_from_file() was
not returning 0 upon success, which led to an error after the .ocsp was
read.
The error messages for OCSP in ssl_sock_load_crt_file_into_ckch() add a
double extension to the filename, which can be confusing. The messages
reference a .issuer.issuer file.
If the user agent data contains text with special characters that are
interpreted as format specifiers by the vfprintf() function, haproxy
crashes. The string "%s %s %s" may be used as an example.
% curl -A "%s %s %s" localhost:10080/index.html
curl: (52) Empty reply from server
haproxy log:
00000000:WURFL-test.clireq[00c7:ffffffff]: GET /index.html HTTP/1.1
00000000:WURFL-test.clihdr[00c7:ffffffff]: host: localhost:10080
00000000:WURFL-test.clihdr[00c7:ffffffff]: user-agent: %s %s %s
00000000:WURFL-test.clihdr[00c7:ffffffff]: accept: */*
segmentation fault (core dumped)
gdb 'where' output:
#0 strlen () at ../sysdeps/x86_64/strlen.S:106
#1 0x00007f7c014a8da8 in _IO_vfprintf_internal (s=s@entry=0x7ffc808fe750, format=<optimized out>,
format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n",
ap=ap@entry=0x7ffc808fe8b8) at vfprintf.c:1637
#2 0x00007f7c014cfe89 in _IO_vsnprintf (
string=0x55cb772c34e0 "WURFL: retrieve header request returns [(null) %s %s %s B,w\313U",
maxlen=<optimized out>,
format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n",
args=args@entry=0x7ffc808fe8b8) at vsnprintf.c:114
#3 0x000055cb758f898f in send_log (p=p@entry=0x0, level=level@entry=5,
format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n")
at src/log.c:1477
#4 0x000055cb75845e0b in ha_wurfl_log (
message=message@entry=0x55cb75989460 "WURFL: retrieve header request returns [%s]\n") at src/wurfl.c:47
#5 0x000055cb7584614a in ha_wurfl_retrieve_header (header_name=<optimized out>, wh=0x7ffc808fec70)
at src/wurfl.c:763
In case WURFL (actually HAProxy) is not compiled with debug option
enabled (-DWURFL_DEBUG), this bug does not come to light.
This patch could be backported to every version supporting
ScientiaMobile's WURFL (as far as 1.7).
As stated in RFC7230#5.4, a client must send a field-value for the header
host that is identical to the authority if the target URI includes one. So, now,
by default, if the authority, when provided, does not match the value of the
header host, an error is triggered. To mitigate this behavior, it is possible to
set the option "accept-invalid-http-request". In that case, an http error is
captured without interrupting the request parsing.
There is no reason for a client to send several host headers. It may even be
considered a bug. However, it is totally invalid to have different values for
those. So now, in such a case, an error is triggered during the request
parsing. In addition, when several host headers are found with the same value,
only the first instance is kept and others are skipped.
When the option "accept-invalid-http-request" is enabled, some parsing errors
are ignored. But the position of the error is reported. In legacy HTTP mode,
such errors were captured. So, we now do the same in the H1 multiplexer.
If required, this patch may be backported to 2.0 and 1.9.
When an outgoing HTX message is formatted to a raw message, DATA blocks may be
split to not transfer more data than expected. But if the buffer is almost
full, the formatting is interrupted, leaving some unused free space in the
buffer, because the data are too large to be copied in one go.
Now, we transfer as much data as possible. When the message is chunked, we also
count the size used to encode the data.
When an outgoing HTX message is formatted to a raw message, if we fail to copy
data of an HTX block into the output buffer, we mark it as full. Before, it was
only done by calling the function buf_room_for_htx_data(). But this function is
designed to optimize input processing.
This patch must be backported to 2.0 and 1.9.
In the htx_*_to_h1() functions, most of the time several calls to chunk_memcat()
are chained. The expected size is always compared to the available room in the
buffer to be sure the full copy will succeed. But it is a bit risky because it
relies on the fact that chunk_memcat() evaluates the available room in the buffer
in the same way as the htx functions do. And, unfortunately, it does not. A bug in
chunk_memcat() will always leave a byte unused in the buffer. So, for instance,
when a chunk is copied in an almost full buffer, the last CRLF may be skipped.
To fix the issue, we now rely on the result of chunk_memcat() only.
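An illustration of the off-by-one room check mentioned above, in generic code
rather than the actual haproxy implementation (the exact expression in
chunk_memcat() may differ; only the effect is shown): the strict variant refuses
a copy that would exactly fill the buffer, leaving one byte unusable.

    #include <string.h>
    #include <stddef.h>

    struct chunk_sketch { char *area; size_t size; size_t data; };

    static int memcat_one_byte_short(struct chunk_sketch *c, const char *src, size_t len)
    {
        if (c->data + len >= c->size)   /* wrongly rejects an exact fit */
            return 0;
        memcpy(c->area + c->data, src, len);
        c->data += len;
        return 1;
    }

    static int memcat_exact_fit(struct chunk_sketch *c, const char *src, size_t len)
    {
        if (c->data + len > c->size)    /* accepts filling the buffer completely */
            return 0;
        memcpy(c->area + c->data, src, len);
        c->data += len;
        return 1;
    }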
This patch must be backported to 2.0 and 1.9.
The SSL engines code was written below the OCSP #ifdef, which means you
can't build the engines code if OCSP is deactivated in the SSL lib.
Could be backported to every version since 1.8.
A NULL dereference can occur when inserting SNIs while checking for
duplicates, if there are already several sni_ctx with the
same key.
Fix issue #321.
Don't try to load the files containing the issuer and the OCSP response
each time we generate a SSL_CTX.
The .ocsp and the .issuer are now loaded in the struct
cert_key_and_chain only once and then loaded from this structure when
creating a SSL_CTX.
Don't try to load the file containing the sctl each time we generate a
SSL_CTX.
The .sctl is now loaded in the struct cert_key_and_chain only once and
then loaded from this structure when creating a SSL_CTX.
Note that this now makes it possible to use sctl with multi-cert
bundles.
$ echo -e "set ssl cert certificate.pem <<\n$(cat certificate2.pem)\n" | \
socat stdio /var/run/haproxy.stat
Certificate updated!
The operation is locked at the ckch level with a HA_SPINLOCK_T which
prevents the ckch architecture (ckch_store, ckch_inst..) from being modified
at the same time. So you can't do a certificate update at the same time
from multiple CLI connections.
SNI trees are also locked with a HA_RWLOCK_T so reading operations are
locked only during a certificate update.
Bundles are supported but you need to update each file (.rsa|ecdsa|.dsa)
independently. If a file is used in the configuration as a bundle AND
as a unique certificate, both will be updated.
Bundles, directories and crt-list are supported, however filters in
crt-list are currently unsupported.
The code tries to allocate every SNIs and certificate instances first,
so it can rollback the operation if that was unsuccessful.
If you have too many instances of the certificate (at least 20000 in my
tests on my laptop), the function can take too much time and be killed
by the watchdog. This will be fixed later. Also, with too many
certificates it's possible that socat exits before the end of the
generation without displaying a message; consider changing the socat
timeout in this case (-t2 for example).
The size of the certificate is currently limited by the maximum size of
a payload, that must fit in a buffer.
The ssl_sock_load_{multi}_ckchs() functions were renamed and modified:
- allocate a ckch_inst and loads the sni in it
- return a ckch_inst or NULL
- the sni_ctx are not added anymore in the sni trees from there
- renamed in ckch_inst_new_load_{multi}_store()
- new ssl_sock_load_ckchs() function calls
ckch_inst_new_load_{multi}_store() and add the sni_ctx to the sni trees.
ssl_sock_load_multi_ckchs() is now able to fail without polluting the
bind_conf trees and leaking memory.
It is a prerequisite to load certificate on-the-fly with the CLI.
The insertion of the sni_ctxs in the trees is done once everything has
been allocated correctly.
ssl_sock_load_ckchn() is now able to fail without polluting the
bind_conf trees and leaking memory.
It is a prerequisite to load certificate on-the-fly with the CLI.
The insertion of the sni_ctxs in the trees is done once everything has
been allocated correctly.
In order to allow the creation of sni_ctx in runtime, we need to split
the function to allow rollback.
We need to be able to allocate all sni_ctxs required before inserting
them in case we need to rollback if we didn't succeed the allocation.
The function was split into 2 parts.
The first one, ckch_inst_add_cert_sni(), allocates a struct sni_ctx, fills
it with the right data and inserts it in the ckch_inst's list of sni_ctx.
The second will take every sni_ctx in the ckch_inst and insert them in
the bind_conf's sni tree.
struct ckch_inst represents an instance of a certificate (ckch_node)
used in a bind_conf. Every sni_ctx created for one ckch_node in a
bind_conf is linked in this structure.
This patch allocates the ckch_inst for each bind_conf and inserts the
sni_ctx in its linked list.
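A hedged sketch of the relationship described above (field names are
assumptions, not the exact haproxy layout):

    #include <stddef.h>

    struct list { struct list *n, *p; };   /* haproxy-style list head, simplified */

    /* one instance of a certificate (ckch_node / ckch_store) used by one
     * bind_conf; every sni_ctx generated for that pair hangs off this structure */
    struct ckch_inst_sketch {
        void *ckch_store;       /* the certificate storage it was built from */
        void *bind_conf;        /* the bind line using it */
        struct list sni_ctx;    /* all sni_ctx created for this cert on this bind_conf */
        struct list by_ckchs;   /* link in the store's list of instances */
    };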
The ssl_sock_populate_sni_keytypes_hplr() function does not return an
error upon an allocation failure.
The process would probably crash during the configuration parsing if the
allocation fails, since it tries to copy some data into the allocated
memory.
This patch could be backported as far as 1.5.
This patch frees the sni_keytype nodes once the sni_ctxs have been
allocated in ssl_sock_load_multi_ckchn().
Could be backported to every version using the multi-cert SSL bundles.
The ssl_sock_add_cert_sni() function never returns an error when a
sni_ctx allocation fails. It silently ignores the problem and continues
to try to allocate other snis.
It is unlikely that an sni allocation will succeed after one failure, and
starting a configuration without all the snis would be wrong anyway. So to
avoid any problem we return -1 upon an sni allocation error and stop the configuration
parsing.
This patch must be backported in every version supporting the crt-list
sni filters. (as far as 1.5)
A ckch_store is a storage which contains a pointer to one or several
cert_key_and_chain structures.
This patch renames ckch_node to ckch_store, and ckch_n, ckchn to ckchs.
As using an mt_list for the tasklet list is costly, instead use a regular list,
but add an mt_list for tasklets woken up by other threads, to be run on the
current thread. At the beginning of process_runnable_tasks(), we just take
the new list, and merge it into the task_list.
This should give us performances comparable to before we started using a
mt_list, but allow us to use tasklet_wakeup() from other threads.
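A generic sketch of the "grab the remote list once, then work locally" idea,
using a mutex-protected singly-linked list instead of haproxy's lock-free
mt_list (names are illustrative):

    #include <pthread.h>
    #include <stddef.h>

    struct tl { struct tl *next; };

    /* list fed by other threads via tasklet_wakeup(); protected here by a
     * mutex for simplicity, while haproxy uses a lock-free mt_list */
    static pthread_mutex_t remote_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct tl *remote_head;

    /* plain single-threaded list the local scheduler iterates over */
    static struct tl *task_list;

    static void merge_remote_into_local(void)
    {
        struct tl *grabbed;

        pthread_mutex_lock(&remote_lock);
        grabbed = remote_head;          /* take the whole remote list in one go */
        remote_head = NULL;
        pthread_mutex_unlock(&remote_lock);

        while (grabbed) {               /* splice it into the local task_list */
            struct tl *next = grabbed->next;
            grabbed->next = task_list;
            task_list = grabbed;
            grabbed = next;
        }
    }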
I introduced this mistake when adding the description for the stats
metrics; it's even amazing it built and worked at all! This was
reported by Travis CI on non-GNU platforms:
src/stats.c:92:39: warning: use of GNU 'missing =' extension in designator [-Wgnu-designator]
[INF_NAME] { .name = "Name", .desc = "Product name" },
^
=
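The fix is simply the missing '=' in the designated initializer; a minimal
self-contained version of the corrected form, with the struct and enum reduced
to what the warning shows:

    struct name_desc { const char *name; const char *desc; };
    enum { INF_NAME };

    static const struct name_desc info_fields[] = {
        [INF_NAME] = { .name = "Name", .desc = "Product name" },  /* '=' added */
    };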
No backport is needed.
In issue #277 is reported a strange problem related to a fast-spinning
applet which seems to show valid progress being made. It's uncertain how
this can happen, maybe some very specific timing patterns manage to place
just a few bytes in each buffer and result in the peers applet being called
a lot. But it appears possible to artificially cross the spinning threshold
by asking for monster stats page (500 MB) and limiting the send() size to
1 MSS (1460 bytes), causing the stats page to be called for very small
blocks which most often do not leave enough room to place a new chunk.
The idea developed in this patch consists in not crashing for an applet
which reaches a very high call rate if it shows some indication of
progress. Detecting progress on applets is not trivial but in our case
we know that they must at least not claim to wait for a buffer allocation
if this buffer is present, wait for room if the buffer is empty, ask for
more data without polling if such data are still present, nor leave with
an empty input buffer without having written anything nor read anything
from the other side while a shutw is pending.
Doing so doesn't affect normal behaviors nor abuses of our existing
applets and does at least protect against an applet performing an
early return without processing events, or one causing an endless
loop by asking for impossible conditions.
This must be backported to 2.0.
Now "show info desc", "show info typed desc" and "show stat typed desc"
will report (hopefully) accurate descriptions of each field. These ones
were verified in the code. When some metrics are specific to the process
or the thread, they are indicated. Sometimes a config option is known
for a setting and it is reported as well. The purpose mainly is to help
sysadmins in field more easily sort out issues vs non-issues. In part
inspired by this very informative talk :
https://kernel-recipes.org/en/2019/metrics-are-money/
Example:
$ socat - /var/run/haproxy.sock <<< "show info desc"
Name: HAProxy:"Product name"
Version: 2.1-dev2-991035-31:"Product version"
Release_date: 2019/10/09:"Date of latest source code update"
Nbthread: 1:"Number of started threads (global.nbthread)"
Nbproc: 1:"Number of started worker processes (global.nbproc)"
Process_num: 1:"Relative process number (1..Nbproc)"
Pid: 11975:"This worker process identifier for the system"
Uptime: 0d 0h00m10s:"How long ago this worker process was started (days+hours+minutes+seconds)"
Uptime_sec: 10:"How long ago this worker process was started (seconds)"
Memmax_MB: 0:"Worker process's hard limit on memory usage in MB (-m on command line)"
PoolAlloc_MB: 0:"Amount of memory allocated in pools (in MB)"
PoolUsed_MB: 0:"Amount of pool memory currently used (in MB)"
PoolFailed: 0:"Number of failed pool allocations since this worker was started"
Ulimit-n: 300000:"Hard limit on the number of per-process file descriptors"
Maxsock: 300000:"Hard limit on the number of per-process sockets"
Maxconn: 149982:"Hard limit on the number of per-process connections (configured or imposed by Ulimit-n)"
Hard_maxconn: 149982:"Hard limit on the number of per-process connections (imposed by Memmax_MB or Ulimit-n)"
CurrConns: 0:"Current number of connections on this worker process"
CumConns: 1:"Total number of connections on this worker process since started"
CumReq: 1:"Total number of requests on this worker process since started"
MaxSslConns: 0:"Hard limit on the number of per-process SSL endpoints (front+back), 0=unlimited"
CurrSslConns: 0:"Current number of SSL endpoints on this worker process (front+back)"
CumSslConns: 0:"Total number of SSL endpoints on this worker process since started (front+back)"
Maxpipes: 0:"Hard limit on the number of pipes for splicing, 0=unlimited"
PipesUsed: 0:"Current number of pipes in use in this worker process"
PipesFree: 0:"Current number of allocated and available pipes in this worker process"
ConnRate: 0:"Number of front connections created on this worker process over the last second"
ConnRateLimit: 0:"Hard limit for ConnRate (global.maxconnrate)"
MaxConnRate: 0:"Highest ConnRate reached on this worker process since started (in connections per second)"
SessRate: 0:"Number of sessions created on this worker process over the last second"
SessRateLimit: 0:"Hard limit for SessRate (global.maxsessrate)"
MaxSessRate: 0:"Highest SessRate reached on this worker process since started (in sessions per second)"
SslRate: 0:"Number of SSL connections created on this worker process over the last second"
SslRateLimit: 0:"Hard limit for SslRate (global.maxsslrate)"
MaxSslRate: 0:"Highest SslRate reached on this worker process since started (in connections per second)"
SslFrontendKeyRate: 0:"Number of SSL keys created on frontends in this worker process over the last second"
SslFrontendMaxKeyRate: 0:"Highest SslFrontendKeyRate reached on this worker process since started (in SSL keys per second)"
SslFrontendSessionReuse_pct: 0:"Percent of frontend SSL connections which did not require a new key"
SslBackendKeyRate: 0:"Number of SSL keys created on backends in this worker process over the last second"
SslBackendMaxKeyRate: 0:"Highest SslBackendKeyRate reached on this worker process since started (in SSL keys per second)"
SslCacheLookups: 0:"Total number of SSL session ID lookups in the SSL session cache on this worker since started"
SslCacheMisses: 0:"Total number of SSL session ID lookups that didn't find a session in the SSL session cache on this worker since started"
CompressBpsIn: 0:"Number of bytes submitted to HTTP compression in this worker process over the last second"
CompressBpsOut: 0:"Number of bytes out of HTTP compression in this worker process over the last second"
CompressBpsRateLim: 0:"Limit of CompressBpsOut beyond which HTTP compression is automatically disabled"
Tasks: 10:"Total number of tasks in the current worker process (active + sleeping)"
Run_queue: 1:"Total number of active tasks+tasklets in the current worker process"
Idle_pct: 100:"Percentage of last second spent waiting in the current worker thread"
node: wtap.local:"Node name (global.node)"
Stopping: 0:"1 if the worker process is currently stopping, otherwise zero"
Jobs: 14:"Current number of active jobs on the current worker process (frontend connections, master connections, listeners)"
Unstoppable Jobs: 0:"Current number of unstoppable jobs on the current worker process (master connections)"
Listeners: 13:"Current number of active listeners on the current worker process"
ActivePeers: 0:"Current number of verified active peers connections on the current worker process"
ConnectedPeers: 0:"Current number of peers having passed the connection step on the current worker process"
DroppedLogs: 0:"Total number of dropped logs for current worker process since started"
BusyPolling: 0:"1 if busy-polling is currently in use on the worker process, otherwise zero (config.busy-polling)"
FailedResolutions: 0:"Total number of failed DNS resolutions in current worker process since started"
TotalBytesOut: 0:"Total number of bytes emitted by current worker process since started"
BytesOutRate: 0:"Number of bytes emitted by current worker process over the last second"
Now "show info" supports "desc" after the default and "typed" formats,
and "show stat" supports this after the typed format. In both cases
this appends the description for the represented metric between double
quotes. The same could be done for JSON output but would possibly require
updating the schema first.
Several times some users have expressed the non-intuitive aspect of some
of our stat/info metrics and suggested adding some help. This patch
replaces the char* arrays with an array of name_desc so that we now have
some reserved room to store a description with each stat or info field.
These descriptions are currently empty and not reported yet.
Now "show info" and "show stat" can parse "desc" as an output format
modifier that will be passed down the chain to add some descriptions
to the fields depending on the format in use. For now it is not
exploited.
Some functions used to take flags + appctx with flags==appctx.flags,
others neither, others just one of them. Some functions used to have
the flags before the object being dumped (server) while others had
it after (listener). This patch aims at cleaning this up a little bit
by following this principle:
- low-level functions which do not need the appctx take flags only
- medium-level functions which already use the appctx for other
reasons do not keep the flags
- top-level functions which already have the stream-int don't need
the flags nor the appctx.
This flag is used to decide to show the check box in front of a proxy
on the HTML stat page. It is always equal to STAT_ADMIN except when the
proxy has no backend capability (i.e. a pure frontend) or has no server,
in which case it's only used to avoid leaving an empty column at the
beginning of the table. Not only is this pretty useless, but it also
causes the columns not to align well when mixing multiple proxies with
or without servers.
Let's simply always use STAT_ADMIN and get rid of this flag.
Now we only use the appctx flags everywhere in the code, and the uri_auth
flags are read only by the HTTP analyser which presets the appctx ones.
This will allow to simplify access to the flags everywhere.
We used to rely on some config flags defined in uri_auth.h set during
parsing, and another set of STAT_* flags defined in stats.h set at run
time, with a somewhat gray area between the two sets. This is confusing
in the stats code as both are called "flags" in various functions and
it's quite hard to know which one describes what.
This patch cleans this up by replacing all ST_* by a newly assigned
value from the STAT_* set so that we can now use unified flags to
describe both the configuration and the current state. There is no
functional change at all.
This flag was added in 1.4-rc1 by commit 329f74d463 ("[BUG] uri_auth: do
not attemp to convert uri_auth -> http-request more than once") to
address the case where two proxies inherit the stats settings from
the defaults instance, and the first one compiles the expression while
the second one uses it. In this case since they use the exact same
uri_auth pointer, only the first one should compile and the second one
must not fail the check. This was addressed by adding an ST_CONVDONE
flag indicating that the expression conversion was completed and didn't
need to be done again. But this is a hack and it becomes cumbersome in
the middle of the other flags which are all relevant to the stats
applet. Let's instead fix it by checking if we're dealing with an
alias of the defaults instance and refrain from compiling this twice.
This allows us to remove the ST_CONVDONE flag.
A typical config requiring this check is :
defaults
    mode http
    stats auth foo:bar

listen l1
    bind :8080

listen l2
    bind :8181
Without this (or the previous) check it would complain when checking l2's
validity since the rule was already built.
Both "show info" and "show stat" support the "typed" output format and
the "json" output format. I just never can remind them, which is an
indication that some help is missing.
H2 strongly recommends that clients exclusively use the absolute form
for requests, which contains a scheme, an authority and a path, instead
of the old format involving the Host header and a path. Thus there is
no way to distinguish between a request intended for a proxy and an
origin request, and as such proxied requests are lost.
This patch makes sure to keep the encoding of all absolute form requests
so that the URI is kept end-to-end. If the scheme is http or https, there
is an uncertainty so the request is tagged as a normalized URI so that
the other end (H1) can decide to emit it in origin form as this is by far
the most commonly expected one, and it's certain that quite a number of
H1 setups are not ready to cope with absolute URIs.
There is a direct visible impact of this change, which is that the uri
sample fetch will now return absolute URIs (as they really come on the
wire) whenever these are used. It also means that default http logs will
report absolute URIs.
If a situation is ever met where a client uses H2 to join an H1 proxy
with haproxy in the middle, then it will be trivial to add an option to
ask the H1 output to use absolute encoding for such requests.
Later we may be able to consider that the normalized URI is the default
output format and stop sending them in origin form unless an option is
set.
Now chaining multiple instances keeps the semantics as far as possible
along the whole chain :
1) H1 to H1
H1:"GET /" --> H1:"GET /" # log: /
H1:"GET http://" --> H1:"GET http://" # log: http://
H1:"GET ftp://" --> H1:"GET ftp://" # log: ftp://
2) H2 to H1
H2:"GET /" --> H1:"GET /" # log: /
H2:"GET http://" --> H1:"GET /" # log: http://
H2:"GET ftp://" --> H1:"GET ftp://" # log: ftp://
3) H1 to H2 to H2 to H1
H1:"GET /" --> H2:"GET /" --> H2:"GET /" --> H1:"GET /"
H1:"GET http://" --> H2:"GET http://" --> H2:"GET http://" --> H1:"GET /"
H1:"GET ftp://" --> H2:"GET ftp://" --> H2:"GET ftp://" --> H1:"GET ftp://"
Thus there is zero loss on H1->H1, H1->H2 or H2->H2, and H2->H1 is
normalized in origin format if ambiguous.
Instead of mapping the Host header field to :authority, we now act
differently if the request is in origin form or in absolute form.
If it's absolute, we extract the scheme and the authority from the
request, fix the path if it's empty, and drop the Host header.
Otherwise we take the scheme from the http/https flags in the HTX
layer, make the URI be the path only, and emit the Host header,
as indicated in RFC7540#8.1.2.3. This allows to distinguish between
absolute and origin requests for H1 to H2 conversions.
Till now we've been producing path components of the URI and using the
:authority header only to be placed into the host part. But this practice
is not correct, because if we're used to conveying H1 proxy requests over H2
then over H1, the absolute URI is presented as a path on output, which is not
valid. In addition the scheme on output is not updated from the absolute
URI either.
Now the request parser will continue to deliver origin-form for request
received using the http/https schemes, but will use the absolute-form
when dealing with other schemes, by concatenating the scheme, the authority
and the path if it's not '*'.
When a request start-line is converted to its raw representation, if its URI is
normalized, only the path part is used. Most H2 clients send requests using
the absolute form (:scheme + :authority + :path), regardless of whether the request is sent
to a proxy or not. But, when the request is relayed to an H1 origin server, it
is unusual to send it using the absolute form. And, even if the servers must
support this form, some old servers may reject it. So, for such requests, we
only get the path of the absolute URI. Most of the time, it will be the right
choice. However, an option will probably be added to customize this behavior.
In HTTP, the request authority, if any, and the Host header must be identical
(excluding any userinfo subcomponent and its "@" delimiter). So now, during the
request analysis, when the Host header is updated, the start-line is also
updated. The authority of an absolute URI is changed accordingly. Symmetrically,
if the URI is changed, if it contains an authority, then the Host header is
also changed. In this latter case, the flags of the start-line are also updated
to reflect the changes on the URI.
The function http_get_authority() may be used to parse a URI and look for the
authority, located between the scheme and the path. An option may be used to
skip the user info (the part before the '@'). Most of the time, the user info
will be ignored.
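As a rough illustration of the parsing idea, here is a minimal sketch (this is
not HAProxy's actual implementation; the name get_authority(), its signature
and the details are made up for the example):

#include <string.h>

/* Returns a pointer to the authority inside <uri> and stores its length in
 * <len>, or returns NULL when the URI carries no scheme (origin form). When
 * <no_userinfo> is set, a leading "user:pass@" part is skipped.
 */
static const char *get_authority(const char *uri, int no_userinfo, size_t *len)
{
        const char *p = strstr(uri, "://");
        const char *end;

        if (!p)
                return NULL;
        p += 3;
        end = p + strcspn(p, "/?#");   /* authority stops at path, query or fragment */

        if (no_userinfo) {
                const char *at = memchr(p, '@', end - p);

                if (at)
                        p = at + 1;    /* skip the user info */
        }
        *len = end - p;
        return p;
}

For example, with "http://foo@www.example.org:8080/index.html" and
no_userinfo set, it would return "www.example.org:8080".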
The H2 request parsing is not trivial given that we have multiple
possible syntaxes. Mainly we can have :authority or not, and when
a CONNECT method is seen, :scheme and :path are missing. This mostly
updates the functions' comments and header index assignments to make
them less confusing. Functionally there is no change.
In these muxes, when an integer value is provided in a trace, it must be the 4th
argument. The 3rd one, if defined, is always an HTX message. Unfortunately, some
traces are buggy and the 4th argument is erroneously passed in 3rd position.
No backport needed.
There are some reports of users not being able to pass "enterprise"
traffic through haproxy when using H2 because it doesn't emit CONTINUATION
frames and as such is limited to headers no longer than the negotiated
max-frame-size which usually is 16 kB.
This patch implements support for emitting CONTINUATION when a HEADERS
frame cannot fit within a limit of mfs. It does this by first filling a
buffer-wise frame, then truncating it starting from the tail to append
CONTINUATION frames. This makes sure that we can truncate on any byte
without being forced to stop on a header boundary, and ensures that the
common case (no fragmentation) doesn't add any extra cost. By moving
the tail first we make sure that each byte is moved only once, thus the
performance impact remains negligible.
This addresses github issue #249.
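For illustration, the tail-first split described above can be sketched as
follows (a simplified model only: it assumes the buffer has enough room for
the extra frame headers, that the payload is larger than the mfs, and it
ignores the original frame flags; none of these names are HAProxy's):

#include <stdint.h>
#include <string.h>

#define FRAME_HDR_LEN 9

/* write a 9-byte HTTP/2 frame header: 24-bit length, type, flags, 31-bit sid */
static void write_frame_hdr(uint8_t *p, uint32_t len, uint8_t type,
                            uint8_t flags, uint32_t sid)
{
        p[0] = len >> 16; p[1] = len >> 8; p[2] = len;
        p[3] = type;
        p[4] = flags;
        p[5] = sid >> 24; p[6] = sid >> 16; p[7] = sid >> 8; p[8] = sid;
}

/* split the single HEADERS frame at <buf> (9-byte header + <len> payload
 * bytes, with len > mfs) into HEADERS + CONTINUATION frames of at most <mfs>
 * payload bytes, moving fragments from the tail so each byte moves only once.
 * Returns the new total size.
 */
static size_t split_headers(uint8_t *buf, size_t len, size_t mfs, uint32_t sid)
{
        size_t extra = (len - 1) / mfs;          /* number of CONTINUATION frames */
        size_t last  = len - extra * mfs;        /* payload size of the last one  */
        uint8_t *src = buf + FRAME_HDR_LEN + len;       /* end of current payload */
        uint8_t *dst = src + extra * FRAME_HDR_LEN;     /* end after the split    */

        for (size_t i = extra; i > 0; i--) {
                size_t flen = (i == extra) ? last : mfs;

                src -= flen; dst -= flen;
                memmove(dst, src, flen);         /* move the fragment only once */
                dst -= FRAME_HDR_LEN;
                /* END_HEADERS (0x4) only on the very last fragment */
                write_frame_hdr(dst, flen, 0x9 /* CONTINUATION */,
                                (i == extra) ? 0x4 : 0, sid);
        }
        /* the first <mfs> payload bytes stay in place; rewrite the HEADERS
         * header with the truncated length and without END_HEADERS
         */
        write_frame_hdr(buf, mfs, 0x1 /* HEADERS */, 0, sid);
        return FRAME_HDR_LEN + len + extra * FRAME_HDR_LEN;
}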
If a request contains an absolute URI and gets its Host header field
rewritten, or just the request's URI without touching the Host header
field, it can lead to different Host and authority parts. The cache
will always concatenate the Host and the path while a server behind
would instead ignore the Host and use the authority found in the URI,
leading to incorrect content possibly being cached.
Let's simply refrain from caching absolute requests for now, which
also matches what the comment at the top of the function says. Later
we can improve this by having a special handling of the authority.
This should be backported as far as 1.8.
As for the mux h1 and h2, traces are now supported in the mux fcgi. All parts of
the multiplexer are covered by these traces. Events are split into categories
(fconn, fstrm, stream, rx, tx and rsp) for a total of ~40 different events with
5 verbosity levels.
In traces, the first argument is always a connection. So it is easy to get the
fconn (conn->ctx). The second argument is always a fstrm. The third one is an
HTX message. Depending on the context it is the request or the response. In all
cases it is owned by a channel. Finally, the fourth argument is an integer
value. Its meaning depends on the calling context.
When the output buffer allocation failed, we block stream processing. When
finally a buffer is available and we succeed in allocating the output buffer, it
seems fair to wake up the stream.
When an outgoing h1 message is formatted, if it is considered as chunked but the
corresponding header is missing, we add it. And as for all other h1 headers,
the case of this header must be adjusted if configured so.
No backport needed.
It is not explicitly stated in the documentation, but some users rely on this
behavior. When the server name is inserted in a request, headers with the same
name are first removed.
This patch is not tagged as a bug, because it is not explicitly documented. We
choose to keep the same implicit behavior to not break existing
configuration. Because this option is used very little, it is not a big deal.
As for the mux h2, traces are now supported in the mux h1. All parts of the
multiplexer are covered by these traces. Events are split into categories (h1c,
h1s, stream, rx and tx) for a total of ~30 different events with 5 verbosity
levels.
In traces, the first argument is always a connection. So it is easy to get the
h1c (conn->ctx). The second argument is always a h1s. The third one is an HTX
message. Depending on the context it is the request or the response. In all
cases it is owned by a channel. Finally, the fourth argument is an integer
value. Its meaning depends on the calling context.
This function now uses the address of the pointer to the htx message where the
copy must be performed. This way, when a zero-copy is performed, there is no
need to refresh the caller's htx message. It is a bit easier to do that way,
especially to add traces in the mux-h1.
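The pattern itself can be sketched with generic types (purely illustrative;
these are not the real htx structures or function names):

#include <stddef.h>

struct msg { char *area; size_t data; };   /* stand-in for an HTX message */

/* Transfers <in> into the message whose pointer is given by <out>. When the
 * destination is still empty, a zero-copy swap is performed and the new
 * message is published through the double pointer, so the caller never keeps
 * working on a stale handle.
 */
static size_t xfer_msg(struct msg *in, struct msg **out)
{
        if ((*out)->data == 0) {
                *out = in;                 /* zero-copy: swap the messages */
                return in->data;
        }
        /* otherwise a bounded copy into <*out> would be performed here */
        return 0;
}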
When a new H2 connection is initialized, the connection context is not changed
before the end. So, traces emitted during this initialization are buggy, except
the last one when no error occurred, because the connection context is not an
h2c.
To fix the bug, the connection context is saved and set as soon as possible. So,
the connection can always safely be used in all traces, except for the very
first one. And on error, the connection context is restored.
No need to backport.
When we configure a "peers" section without local peer, this makes the old
haproxy process crash on reload.
Such a configuration file allows to reproduce this issue:
global
    stats socket /tmp/sock1 mode 666 level admin
    stats timeout 10s

peers peers
    peer localhost 127.0.0.1:1024
This bug was introduced by this commit:
"MINOR: cfgparse: Make "peer" lines be parsed as "server" lines"
This commit introduced a new condition to detect a "peers" section without
local peer. This is a "peers" section with a frontend struct whose ->id member
is not initialized. Such a "peers" section must be removed.
This patch adds this new condition to remove such peers sections without local
peer as this was always done before.
Must be backported to 2.0.
When executing tasks, don't forget to decrement tasks_run_queue once we
popped one task from the task_list. tasks_run_queue used to be decremented
by __tasklet_remove_from_tasklet_list(), but we now call MT_LIST_POP().
Alexandre Derumier reported issue #308 in which the client timeout will
strike on an H2 mux when it's shorter than the server's response time.
What happens in practice is that there is no activity on the connection
and there's no data pending on output so we can expire it. But this does
not take into account the possibility that some streams are in fact
waiting for the data layer above. So what we do now is that we enforce
the timeout when:
- there are no more streams
- some data are pending in the output buffer
- some streams are blocked on the connection's flow control
- some streams are blocked on their own flow control
- some streams are in the send/sending list
In all other cases the connection will not timeout as it means that some
streams are actively used by the data layer.
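A rough model of that rule (field names are illustrative, not the actual h2c
members):

#include <stdbool.h>
#include <stddef.h>

struct h2c_model {
        int    nb_streams;      /* streams still attached to the connection   */
        size_t obuf_data;       /* bytes pending in the output buffer         */
        int    fctl_blocked;    /* streams blocked on connection flow control */
        int    sfctl_blocked;   /* streams blocked on their own flow control  */
        int    send_waiting;    /* streams in the send/sending lists          */
};

/* the configured timeout is only enforced when one of these conditions
 * holds; otherwise some stream is presumed to be waiting on the data layer
 * above and the connection must not be expired.
 */
static bool h2_conn_may_expire(const struct h2c_model *c)
{
        return c->nb_streams == 0 ||
               c->obuf_data > 0   ||
               c->fctl_blocked    ||
               c->sfctl_blocked   ||
               c->send_waiting;
}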
This fix must be backported to 2.0, 1.9 and probably 1.8 as well. It
depends on the new "blocked_list" field introduced by "MINOR: mux-h2:
add a per-connection list of blocked streams". It would be nice to
also backport "ebtree: make eb_is_empty() and eb_is_dup() take a const"
to avoid a build warning.
Currently the H2 mux doesn't have a list of all the streams blocking on
the H2 side. It only knows about those trying to send or waiting for a
connection window update. It is problematic to enforce timeouts because
we never know if a stream has to live as long as the data layer wants or
has to be timed out because it's waiting for a stream window update. This
patch adds a new list, "blocked_list", to store streams blocking on
stream flow control, or later, dependencies. Streams blocked on sfctl
are now added there. It doesn't modify the rest of the logic.
This reverts commit 1263540fe8.
As discussed in issues #214 and #251, this is not the correct way to
cache CORS responses, since it relies on hacking the cache to cache
the OPTIONS method which is explicitly non-cacheable and for which
we cannot rely on any standard caching semantics (cache headers etc
are not expected there). Let's roll this back for now and keep that
for a more reliable and flexible CORS-specific solution later.
@davidmogar reported a github issue (#227) about problems with
do-resolve action when the request contains a body.
The variable was never populated in such case, despite tcpdump shows a
valid DNS response coming back.
The do-resolve action is a task in HAProxy and so it's woken up by the
scheduler each time the scheduler thinks such a task may have some work to
do.
When a simple HTTP request is sent, then the task is called, it sends
the DNS request, then the scheduler will wake up the task again later
once the DNS response is there.
Now, when the client sends a PUT or a POST request (or any other type)
with a BODY, then the do-resolve action is first woken up once the
headers are processed. It sends the DNS request. Then, when the bytes
for the body are processed by HAProxy AND the DNS response has not yet
been received, then the action simply terminates and cleans up all the
data associated to this resolution...
This patch detects such behavior: if the action is woken up while
a DNS resolution is still in RUNNING state, the action will tell the
scheduler to wake it up again later.
Backport status: 2.0 and above
src/ssl_sock.c:2928:12: warning: ‘ssl_sock_is_ckch_valid’ defined but not used [-Wunused-function]
static int ssl_sock_is_ckch_valid(struct cert_key_and_chain *ckch)
This function is only used with openssl >= 1.0.2, this patch adds a
condition to build the function.
`size` is used in conditional jumps and valgrind complains:
==24145== Conditional jump or move depends on uninitialised value(s)
==24145== at 0x4B3028: smp_is_safe (sample.h:98)
==24145== by 0x4B3028: smp_make_safe (sample.h:125)
==24145== by 0x4B3028: smp_to_stkey (stick_table.c:936)
==24145== by 0x4B3F2A: sample_conv_in_table (stick_table.c:1113)
==24145== by 0x420AD4: hlua_run_sample_conv (hlua.c:3418)
==24145== by 0x54A308F: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==24145== by 0x54AFEFC: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==24145== by 0x54A29F1: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==24145== by 0x54A3523: lua_resume (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==24145== by 0x426433: hlua_ctx_resume (hlua.c:1097)
==24145== by 0x42D7F6: hlua_action (hlua.c:6218)
==24145== by 0x43A414: http_req_get_intercept_rule (http_ana.c:3044)
==24145== by 0x43D946: http_process_req_common (http_ana.c:500)
==24145== by 0x457892: process_stream (stream.c:2084)
Found while investigating issue #306.
A variant of this issue exists since 55da165301,
which was using the old `chunk` API instead of the `buffer` API thus this patch
must be backported to HAProxy 1.6 and higher.
A break is missing in the switch statement in the function
stats_emit_json_data_field(). This bug was introduced in the commit 88a0db28a
("MINOR: stats: Add the support of float fields in stats").
This patch fixes issues #302 and #303. It must be backported to 2.0.
Ilya reported in bug #300 that ASAN found a read overflow during startup
in the fcgi code due to a missing empty element at the end of the list
of sample fetches. The effect is that it will randomly either work or crash
on startup.
No backport is needed, this is solely for 2.1-dev.
It is now possible to format stats counters as floats. But the stats applet does
not use it.
This patch is required by the Prometheus exporter to send the time averages in
seconds. If the promex change is backported, this patch must be backported
first.
the option "http-send-name-header" is an eyesore. It was responsible of several
bugs because it is handled after the message analysis. With the HTX
representation, the situation is cleaner because no rewind on forwarded data is
required. But it remains ugly.
With recent changes in HAProxy, we have the opportunity to make it fairly
better. The message formatting is now done in the HTTP multiplexers. So it seems
to be the right place to handle this option. Now, the server name is added by
the HTTP multiplexers (h1, h2 and fcgi).
A different engine-id is now generated for each thread. So, it is possible to
enable the async mode with several threads.
This patch may be backported to older versions.
Use the same algo as the sample fetch uuid(). This one was added recently. So
it is better to use the same way to generate UUIDs.
This patch may be backported to older versions.
SPOE engine-id is the same for all processes when nbproc is more than 1. So, in
async mode, an agent receiving a NOTIFY frame from a process may send the ACK to
another process. It is obviously wrong. A different engine-id must be generated
for each process.
This patch must be backported to 2.0, 1.9 and 1.8.
When a request is received, if the h2 preface is matched, an implicit upgrade
from h1 to h2 is performed. This must only be done for the first request on a
connection. But a test was missing to ensure it is really the first request.
This patch must be backported to 2.0.
When a frame is received for an unknown or already closed stream, it must be
skipped. This also happens when a stream error is reported. But we must be sure
to only skip received data. In the loop in h2_process_demux(), when such frames
are handled, the whole frame length is systematically skipped. If the frame
payload is partially received, it leaves the demux buffer in an undefined
state. Because of this bug, all sort of errors may be observed, like crash or
intermittent freeze.
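The fix can be summarized by this kind of logic (a minimal sketch with made-up
names, not the actual h2_process_demux() code):

#include <stddef.h>

/* only skip what is really present in the demux buffer; the remainder is
 * remembered so that it can be dropped on the next wakeup, once received.
 */
static size_t skip_frame_payload(size_t frame_len, size_t buf_data,
                                 size_t *still_to_skip)
{
        size_t skip = (frame_len < buf_data) ? frame_len : buf_data;

        *still_to_skip = frame_len - skip;
        return skip;              /* bytes to drop from the buffer right now */
}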
This patch must be backported to 2.0, 1.9 and 1.8.
Since the commit 6884aa3e ("BUG/MAJOR: mux-h2: Handle HEADERS frames received
after a RST_STREAM frame"), HEADERS frames received for an unknown or already
closed stream are decoded. Once decoded, an error is reported for the
stream. But because it is a dummy stream (h2_closed_stream), its state cannot be
changed. So instead, we must return the dummy error stream (h2_error_stream).
This patch must be backported to 2.0 and 1.9.
Consecutive to commit 6884aa3eb0 ("BUG/MAJOR: mux-h2: Handle HEADERS frames
received after a RST_STREAM frame") some valid frames on closed streams
(RST_STREAM, PRIORITY, WINDOW_UPDATE) were now rejected. It turns out that
the previous condition was in fact intentional to catch only sensitive
frames, which was indeed a mistake since these ones needed to be decoded
to keep HPACK synchronized. But we must absolutely accept WINDOW_UPDATES
or we risk stalling some transfers. And RST/PRIO definitely are valid.
Let's adjust the condition to reflect that and update the comment to
explain the reason for this unobvious condition.
This must be backported to 2.0 and 1.9 after the commit above is brought
there.
This way we now always have the events' date, which was really missing,
especially when used with traces :
<0>2019-09-26T07:57:25.183845 [00|h2|1|mux_h2.c:3024] receiving H2 HEADERS frame : h2c=0x1ddcad0(B,FRP) h2s=0x1dde9e0(3,HCL)
<0>2019-09-26T07:57:25.183845 [00|h2|4|mux_h2.c:2505] h2c_bck_handle_headers(): entering : h2c=0x1ddcad0(B,FRP) h2s=0x1dde9>
<0>2019-09-26T07:57:25.183846 [00|h2|4|mux_h2.c:4096] h2c_decode_headers(): entering : h2c=0x1ddcad0(B,FRP)
<0>2019-09-26T07:57:25.183847 [00|h2|4|mux_h2.c:4298] h2c_decode_headers(): leaving : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183848 [00|h2|0|mux_h2.c:2559] rcvd H2 response : h2c=0x1ddcad0(B,FRH) : [3] H2 RES: HTTP/2.0 200
<0>2019-09-26T07:57:25.183849 [00|h2|4|mux_h2.c:2560] h2c_bck_handle_headers(): leaving : h2c=0x1ddcad0(B,FRH) h2s=0x1dde9e>
<0>2019-09-26T07:57:25.183849 [00|h2|4|mux_h2.c:2866] h2_process_demux(): no more Rx data : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183849 [00|h2|4|mux_h2.c:3123] h2_process_demux(): notifying stream before switching SID : h2c=0x1dd>
<0>2019-09-26T07:57:25.183850 [00|h2|4|mux_h2.c:1014] h2s_notify_recv(): in : h2c=0x1ddcad0(B,FRH) h2s=0x1dde9e0(3,HCL)
<0>2019-09-26T07:57:25.183850 [00|h2|4|mux_h2.c:3135] h2_process_demux(): leaving : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183851 [00|h2|4|mux_h2.c:3319] h2_send(): entering : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183851 [00|h2|4|mux_h2.c:3145] h2_process_mux(): entering : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183851 [00|h2|4|mux_h2.c:3234] h2_process_mux(): leaving : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183852 [00|h2|4|mux_h2.c:3428] h2_send(): leaving with everything sent : h2c=0x1ddcad0(B,FRH)
<0>2019-09-26T07:57:25.183852 [00|h2|4|mux_h2.c:3319] h2_send(): entering : h2c=0x1ddcad0(B,FRH)
It looks like some format options could finally be separate from the sink,
or maybe enforced. For example we could imagine making the date optional
or its resolution configurable within a same buffer.
Similarly, maybe trace events would like to always emit the date even
on stdout, while traffic logs would prefer not to emit the date in the
ring buffer given that there's already one in the message.
We often need ISO time + microseconds in traces and ring buffers, thus this
function does it by calling gettimeofday(), keeping a cached value of the part
representing the tv_sec value, and only rewriting the microsecond part. The
cache is per-thread so it's lockless and safe to use as-is.
Some tests already show that it's easy to see 3-4 events in a single
microsecond, thus it's likely that the nanosecond version will have to
be implemented as well. But certain comments on the net suggest that
some parsers are having trouble beyond microsecond, thus for now let's
stick to the microsecond only.
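As an illustration of the caching trick (a standalone sketch, not the actual
helper added by this patch; the name and signature are made up):

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

/* per-thread cache of the formatted second; only the microsecond suffix is
 * rewritten while the second has not changed.
 */
static __thread char   iso_cache[32];
static __thread time_t iso_cache_sec = (time_t)-1;

/* writes "YYYY-mm-ddTHH:MM:SS.uuuuuu" into <buf> (at least 27 bytes) */
const char *iso_time_us(char *buf, size_t len)
{
        struct timeval tv;

        gettimeofday(&tv, NULL);
        if (tv.tv_sec != iso_cache_sec) {
                struct tm tm;

                localtime_r(&tv.tv_sec, &tm);
                strftime(iso_cache, sizeof(iso_cache), "%Y-%m-%dT%H:%M:%S.", &tm);
                iso_cache_sec = tv.tv_sec;
        }
        snprintf(buf, len, "%s%06u", iso_cache, (unsigned int)tv.tv_usec);
        return buf;
}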
When doing a soft shutdown, we won't be making new connections anymore so
there's no point in keeping the namespace file descriptors open anymore.
Keeping these open effectively makes it impossible to properly clean up
namespaces which are no longer used in the new configuration until all
previously opened connections are closed in the old worker process.
This change introduces a cleanup function that is called during soft shutdown
that closes all namespace file descriptors by iterating over the namespace
ebtree.
In state match error cases, we don't know what frame type was received
because we don't reach the frame parsers. Let's add the demuxed frame
type and flags in the trace when it's known. For this we make sure to
always reset h2c->dsi when switching back to FRAME_H. Only one location
was missing. The state transitions were not always clear (sometimes
reported before, sometimes after), these were clarified by being
reported only before switching.
It was difficult in traces showing h2-to-h2 communications to figure the
connection side solely based on the pointer. With this patch we prepend
'F' or 'B' before the state to make this more explicit:
[06|h2|4|mux_h2.c:5487] h2_rcv_buf(): entering : h2c=0x7f6acc026440(F,FRH) h2s=0x7f6acc021720(1,CLO)
[06|h2|4|mux_h2.c:5547] h2_rcv_buf(): leaving : h2c=0x7f6acc026440(F,FRH) h2s=0x7f6acc021720(1,CLO)
[06|h2|4|mux_h2.c:4040] h2_shutw(): entering : h2c=0x7f6acc026440(F,FRH) h2s=0x7f6acc021720(1,CLO)
Now that we can wake tasklet for other threads, make sure that if the thread
is sleeping, we wake it up, or the tasklet won't be executed until it's
done sleeping.
That also means that, before going to sleep, and after we put our bit
in sleeping_thread_mask, we have to check that nobody added a tasklet for
us; just checking for global_tasks_mask isn't enough anymore.
The aim is to gather all scheduler information related to the current
thread. It simply points to task_per_thread[tid] without having to perform
the operation each time. We save around 1.2 kB of code on performance
sensitive paths and increase the request rate by almost 1%.
There are a number of tests there which are enforced on tasklets while
they will never apply (various handlers, destroyed task or not, arguments,
results, ...). Instead let's have a single TASK_IS_TASKLET() test and call
the tasklet processing function directly, skipping all the rest.
It now appears visible that the only unneeded code is the update to
curr_task that is never used for tasklets, except for opportunistic
reporting in the debug handler, which can only catch si_cs_io_cb,
which in practice doesn't appear in any report so the extra cost
incurred there is pointless.
This change alone removes 700 bytes of code, mostly in
process_runnable_tasks() and increases the performance by about
1%.
In process_runnable_tasks() we perform a lot of dereferences to
task_per_thread[tid] but tid is thread_local and the compiler cannot
know that it doesn't change so this results in making lots of thread
local accesses and array dereferences. By just keeping a copy pointer
of this, we let the compiler optimize the code. Just doing this has
reduced process_runnable_tasks() by 124 bytes in the fast path. Doing
the same in wake_expired_tasks() results in 16 extra bytes saved.
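The optimization boils down to this kind of pattern (illustrative only, not
the real scheduler code; the struct contents are made up):

struct task_per_thread { int rqueue_size; int task_list_size; };

static struct task_per_thread task_per_thread[64];
static __thread unsigned int tid;

static void drain_local_tasks(int budget)
{
        /* read the thread-local <tid> and the array slot only once */
        struct task_per_thread * const tt = &task_per_thread[tid];

        while (budget-- > 0 && tt->task_list_size > 0)
                tt->task_list_size--;  /* all further accesses go through <tt> */
}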
In process_runnable_task(), after the task's process() function returns,
we used to check if the return is not NULL and is not a tasklet, to update
profiling measurements. This is useless since only tasks can return non-null
here. Let's remove this useless test.
As identified in issue #278, the backport of commit c594039225 ("BUG/MINOR:
checks: do not uselessly poll for reads before the connection is up")
introduced a regression in 2.0 when default checks are enabled (not
"option tcp-check"), but it did not affect 2.1.
What happens is that in 2.0 and earlier we have the fd cache which makes
a speculative call to the I/O functions after an attempt to connect, and
the __event_srv_chk_r() function was absolutely not designed to be called
while a connection attempt is still pending. Thus what happens is that the
test for success/failure expects the verdict to be final before waking up
the check task, and since the connection is not yet validated, it fails.
It will usually work over the loopback depending on scheduling, which is
why it doesn't fail in reg tests.
In 2.1 after the failed connect(), we subscribe to polling and usually come
back with a validated connection, so the function is not expected to be
called before it completes, except if it happens as a side effect of some
spurious wake calls, which should not have any effect on such a check.
The other check types are not impacted by this issue because they all
check for a minimum data length in the buffer, and wait for more data
until they are satisfied.
This patch fixes the issue by explicitly checking that the connection
is established before trying to read or to give a verdict. This way the
function becomes safe to call regardless of the connection status (even
if it's still totally ugly).
This fix must be backported to 2.0.
If an error occurred on the connection or the conn-stream, no synchronous send
is performed. If the error was not already processed and there is no more I/O,
it will never be processed and the stream will never be notified of this
error. This may block the stream until a timeout is reached, or indefinitely if
there is no timeout.
Concretely, this bug can be triggered from time to time with h2spec, running the test
"http2/5.1.1/2".
This patch depends on the commit 328ed220a "BUG/MINOR: stream-int: Process
connection/CS errors first in si_cs_send()". Both must be backported to 2.0 and
probably to 1.9. In 1.9, the code is totally different, so this patch would have
to be adapted.
Errors on the connections or the conn-stream must always be processed in
si_cs_send(), even if the stream-interface is already subscribed on
sending. This patch does not fix any concrete bug per-se. But it is required by
the following one to handle those errors during synchronous sends.
This patch must be backported with the following one to 2.0 and probably to 1.9
too, but with caution because the code is really different.
Now that we can wake up a remote thread's tasklet, it's way more
interesting to use a tasklet than a task in the accept queue, as it
will avoid passing through all the scheduler. Just doing this increases
the accept rate by about 4%, overall recovering the slight loss
introduced by the tasklet change. In addition it makes sure that
even a heavily loaded scheduler (e.g. many very fast checks) will
not delay a connection accept.
When namespaces are used in the configuration, the respective namespace handles
are opened during config parsing and stored in an ebtree for lookup later.
Unfortunately, when the master process re-execs itself these file descriptors
were not closed, effectively leaking the fds and preventing destruction of
namespaces no longer present in the configuration.
This change fixes this issue by opening the namespace file handles as
close-on-exec, making sure that they will be closed during re-exec.
Change the tasklet code so that the tasklet list is now a mt_list.
That means that tasklets now have an associated tid, for the thread they
are expected to run on, and any thread can now call tasklet_wakeup() for
that tasklet.
One can change the associated tid with tasklet_set_tid().
Instead of using the same type for regular linked lists and "autolocked"
linked lists, use a separate type, "struct mt_list", for the autolocked one,
and introduce a set of macros, similar to the LIST_* macros, with the
MT_ prefix.
When we use the same entry for both regular list and autolocked list, as
is done for the "list" field in struct connection, we now have to explicitly
cast it to struct mt_list when using the MT_ macros.
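Schematically (locking omitted, this only shows the type split and the cast
mentioned above; the field names are illustrative):

/* regular list element, handled by the LIST_* macros */
struct list { struct list *n, *p; };

/* element of an "autolocked" list, handled by the MT_LIST_* macros; the
 * distinct type lets the compiler catch accidental mixes of both APIs.
 */
struct mt_list { struct mt_list *next, *prev; };

/* when one field is shared between both usages, as for the "list" member
 * of struct connection, the MT_ macros receive an explicit cast, e.g.:
 *   MT_LIST_ADDQ(&queue, (struct mt_list *)&conn->list);
 */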
See GH issue #141 for all the context. In short, registered signal
handlers are not inherited by other threads during startup, which is
normally not a problem, except that we need the same thread as the one
doing the fork() to clean up the old process using waitpid() once its
death is reported via SIGCHLD, as happens in external checks.
The only simple solution to this at the moment is to make sure that
external checks are exclusively run on the first thread, the one
which registered the signal handlers on startup. It will be far more
than enough anyway given that external checks must not require being
load balanced on multiple threads! A more complex solution could be
designed over the long term to let each thread deal with all signals
but it sounds overkill.
This must be backported as far as 1.8.
As stated in the RFC7540#5.1, an endpoint that receives any frame other than
PRIORITY after receiving a RST_STREAM MUST treat that as a stream error of type
STREAM_CLOSED. However, frames carrying compression state must still be
processed before being dropped to keep the HPACK decoder synchronized. This was
the purpose of the commit 8d9ac3ed8b ("BUG/MEDIUM: mux-h2: do not abort
HEADERS frame before decoding them"). But, the test on the frame type was
inverted.
This bug is major because desynchronizing the HPACK decoder leads to mixing up
indexed headers in messages. From the time a HEADERS frame is received and
ignored for a closed stream, wrong headers may be sent to the following streams.
This patch may fix several bugs reported on github (#116, #290, #292). It must
be backported to 2.0 and 1.9.
The function parse_fcgi_flt() is called when the keyword "fcgi-app" is found on
a filter line. We don't need to compare it again in the function.
This patch fixes the issue #284. No backport needed.
This multiplexer is only available on the backend side. It may handle
multiplexed connections if the FCGI application supports it. A FCGI application
must be configured on the backend to be used. If not redefined during the
request processing by the FCGI filter, this mux handles all mandatory
parameters.
There is a limitation on the way the requests are processed. The parameters must
be encoded into a unique PARAMS record. It means that, once encoded, all HTTP headers
and FCGI parameters must be small enough to be stored in a buffer. Otherwise, an
internal processing error is returned.
The FCGI application handles all the configuration parameters used to format
requests sent to an application. The configuration of an application is grouped
in a dedicated section (fcgi-app <name>) and referenced in a backend to be used
(use-fcgi-app <name>). To be valid, a FCGI application must at least define a
document root. But it is also possible to set the default index, a regex to
split the script name and the path-info from the request URI, parameters to set
or unset... In addition, this patch also adds a FCGI filter, responsible for
all processing on a stream.
When an HTX message is formatted to an H1 or H2 message, pseudo-headers (with
header names starting with a colon (':')) are now ignored. In fact, for now, only
H2 messages have such headers, and the H2 mux already skips them when it creates
the HTX message. But in the future, it may be useful to keep these headers in the
HTX message to help the message analysis or to do some processing during the
HTTP formatting. It would also be a good idea to have scopes for pseudo-headers
(:h1-, :h2-, :fcgi-...) to limit their usage to a specific mux.
This function will try to do a zero-copy transfer. Otherwise, it adds a data
block. The same function is used for messages with a content-length, chunked messages and
messages with unknown body length.
To avoid code duplication in the future FCGI mux, functions parsing H1 messages
and converting them into HTX have been moved to the file h1_htx.c. Some
specific parts remain in the mux H1. But most of the parsing is now generic.
Application is a generic term here. It is a module which handles its own log
server list, with no dependency on a proxy. Such applications can now call the
function app_log() to log messages, passing a log server list and a tag as
parameters. Internally, the function __send_log() has been adapted accordingly.
Now, following sample fetches may be used to get information about
authentication:
* http_auth_type : returns the auth method as supplied in Authorization header
* http_auth_user : returns the auth user as supplied in Authorization header
* http_auth_pass : returns the auth pass as supplied in Authorization header
Only Basic authentication is supported.
Most of the time, when a keyword is added in a proxy section or on a server line,
we need to have a post-parser callback to check the config validity for the
proxy or the server which uses this keyword.
It is possible to register a global post-parser callback. But all these
callbacks need to loop on the proxies and servers to do their job. It is neither
handy nor efficient. Instead, it is now possible to register per-proxy and
per-server post-check callbacks.
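The mechanism can be modeled like this (a simplified sketch of the idea, not
HAProxy's registration API):

typedef int (*post_check_fn)(void *obj);

static post_check_fn post_server_checks[16];
static int nb_post_server_checks;

/* called by a keyword parser to register its per-server check */
static void register_post_server_check(post_check_fn fn)
{
        post_server_checks[nb_post_server_checks++] = fn;
}

/* called once per server after the whole configuration has been parsed */
static int run_post_server_checks(void *srv)
{
        int err = 0;

        for (int i = 0; i < nb_post_server_checks; i++)
                err |= post_server_checks[i](srv);
        return err;
}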
Most of the time, when any allocation is done during configuration parsing because
of a new keyword in a proxy section or on a server line, we must add a call in
the deinit() function to release allocated resources. It is not possible to
register a post-deinit callback for this because, at this stage, the proxies and
the servers are already released.
Now, it is possible to register deinit callbacks per-proxy or per-server. These
callbacks will be called for each proxy and server before releasing them.
When an error occurred in a mux, most of the time, an error is also reported on the
conn-stream, leading to an error (read and/or write) on the channel. When a
parsing or a processing error is reported for the HTX message, it is better to
handle it first.
Since the commit 1b8e68e8 ("MEDIUM: stick-table: Stop handling stick-tables as
proxies."), the target field into the table context of the CLI applet was not
anymore a pointer to a proxy. It was replaced by a pointer to a stktable. But,
some parts of the code was not updated accordingly. the function
table_prepare_data_request() still tries to cast it to a pointer to a proxy. The
result is totally undefined. With a bit of luck, when the "show table" command
is used with a data type, we failed to find a table and the error "Data type not
stored in this table" is returned. But crashes may also be experienced.
This patch fixes the issue #262. It must be backported to 2.0.
Recently Lua code which uses Proxy class (get_stats method) stopped
working ("table index is nil from [C] method 'get_stats'")
It probably affects other codepaths too.
This should be backported to 2.0 and 1.9.
In the function connect_server(), when we are not able to reuse a connection and
too many FDs are opened, the variable srv must be defined to kill an idle
connection.
This patch fixes the issue #257. It must be backported to 2.0
This only happens during the configuration parsing. First leak is the string
representing the last converter parsed, if any. The second one is on the error
path, when the allocation of the ACL expression failed. In this case, the sample
was not released.
This patch fixes the issue #256. It must be backported to all stable versions.
Since the legacy HTTP mode has been removed, this flag is not necessary
anymore. Removing this flag, a test on the HTX message at the end of the
function h2c_decode_headers() has also been removed fixing the github
issue #244.
No backport needed.
When a filter returns an error during the HTTP analysis, an error must be
returned if the status code is not already set. On the request path, an error
400 is returned. On the response path, an error 502 is returned. The status is
considered as unset if its value is not strictly positive.
If needed, this patch may be backported to all versions having filters (as far
as 1.7). Because nobody has ever reported any bug, the backport to 2.0 is
probably enough.
It is now possible to export stats using the JSON format from the HTTP stats
page. Like for the CSV export, to export stats in JSON, you must add the option
";json" on the stats URL. It is also possible to dump the JSON schema with the
option ";json-schema". Corresponding Links have been added on the HTML page.
This patch fixes the issue #263.
In several SSL functions, the XPRT context is retrieved before any check on the
connection. In the function ssl_sock_is_ssl(), a test suggests the connection
may be null. So, it is safer to test the ssl connection before retrieving its
XPRT context. It removes any ambiguities and prevents possible null pointer
dereferences.
This patch fixes the issue #265. It must be backported to 2.0.
It seems to be possible to have no frontend for a listener. A test was missing
before dereferencing it at the end of the function listener_accept().
This patch fixes the issue #264. It must be backported to 2.0 and 1.9.
This adds two extra fields to the stats, one for the current number of idle
connections and one for the configured limit. A tooltip link now appears on
the HTML page to show these values in front of the active connection values.
This should be backported to 2.0 and 1.9 as it's the only way to monitor
the idle connections behaviour.
As mentioned in previous commit, these flags do not map well to
modern poller capabilities. Let's use the FD_EV_*_{R,W} flags instead.
This first patch only performs a 1-to-1 mapping making sure that the
previously reported flags are still reported identically while using
the closest possible semantics in the pollers.
It's worth noting that kqueue will now support improvements such as
returning distinctions between shut and errors on each direction,
though this is not exploited for now.
In order to address the absurd polling sequence described in issue #253,
let's make sure we disable receiving on a connection until it's established.
Previously with bottom-top I/Os, we were almost certain that a connection
was ready when the first I/O was confirmed. Now we can enter various
functions, including process_stream(), which will attempt to read
something, will fail, and will then subscribe. But we don't want them
to try to receive if we know the connection didn't complete. The first
prerequisite for this is to mark the connection as not ready for receiving
until it's validated. But we don't want to mark it as not ready for sending
because we know that attempting I/Os later is extremely likely to work
without polling.
Once the connection is confirmed we re-enable recv readiness. In order
for this event to be taken into account, the call to tcp_connect_probe()
was moved earlier, between the attempt to send() and the attempt to recv().
This way if tcp_connect_probe() enables reading, we have a chance to
immediately fall back to this and read the possibly pending data.
Now the trace looks like the following. It's far from being perfect
but we've already saved one recvfrom() and one epoll_ctl():
epoll_wait(3, [], 200, 0) = 0
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 7
fcntl(7, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
connect(7, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=7, u64=7}}) = 0
epoll_wait(3, [{EPOLLOUT, {u32=7, u64=7}}], 200, 1000) = 1
connect(7, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
getsockopt(7, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
sendto(7, "OPTIONS / HTTP/1.0\r\n\r\n", 22, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 22
epoll_ctl(3, EPOLL_CTL_MOD, 7, {EPOLLIN|EPOLLRDHUP, {u32=7, u64=7}}) = 0
epoll_wait(3, [{EPOLLIN|EPOLLRDHUP, {u32=7, u64=7}}], 200, 1000) = 1
getsockopt(7, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
getsockopt(7, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
recvfrom(7, "HTTP/1.0 200\r\nContent-length: 0\r\nX-req: size=22, time=0 ms\r\nX-rsp: id=dummy, code=200, cache=1, size=0, time=0 ms (0 real)\r\n\r\n", 16384, 0, NULL, NULL) = 126
close(7) = 0
'ssl_sock' wasn't fully initialized so a new session can inherit some
flags from an old one.
This causes some fetches, related to client's certificate presence or
its verify status and errors, returning erroneous values.
This issue could generate other unexpected behaviors because a new
session could also inherit other flags such as SSL_SOCK_ST_FL_16K_WBFSIZE,
SSL_SOCK_SEND_UNLIMITED, or SSL_SOCK_RECV_HEARTBEAT from an old session.
This must be backported to 2.0 but it's useless for previous versions.
As discussed in issue #178, the change brought around 1.9-dev11 by commit
1eb6c55808 ("MINOR: lb: make the leastconn algorithm more accurate")
causes some harm in the situation it tried to improve. By always applying
the server's weight even for no connection, we end up always picking the
same servers for the first connections, so under a low load, if servers
only have either 0 or 1 connections, in practice the same servers will
always be picked.
This patch partially restores the original behaviour but still keeping
the spirit of the aforementioned patch. Now what is done is that servers
with no connections will always be picked first, regardless of their
weight, so they will effectively follow round-robin. Only servers with
one connection or more will see an accurate weight applied.
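The resulting key computation can be sketched as follows (illustrative only;
SRV_EWGHT_MAX is redefined here just to keep the example self-contained and
its value is arbitrary):

#define SRV_EWGHT_MAX 4096

/* key used to sort a server in the leastconn tree: idle servers always sort
 * first (effectively round-robin among them), servers with at least one
 * connection get the accurate, weight-scaled key.
 */
static unsigned int lc_srv_key(unsigned int inflight, unsigned int eweight)
{
        if (!inflight)
                return 0;
        return 1 + inflight * SRV_EWGHT_MAX / eweight;
}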
This patch was developed and tested by @malsumis and @jaroslawr who
reported the initial issue. It should be backported to 2.0 and 1.9.
When an error occurs in the post-parser callback which checks configuration
validity of the option outgoing-headers-case-adjust-file, the error message is
freed too early, before being used.
No backport needed. It fixes the github issue #258.
It's pointless to start to perform a recv() call on a connection that is
not yet established. The only purpose used to be to subscribe but that
causes many extra syscalls when we know we can do it later.
This patch only attempts a read if the connection is established or if
there is no write planned, since we want to be certain to be called. And
in wake_srv_chk() we continue to attempt to read if the reader was not
subscribed, so as to perform the first read attempt. In case a first
result is provided, __event_srv_chk_r() will not do anything anyway so
this is totally harmless in this case.
This fix requires that commit "BUG/MINOR: checks: make __event_chk_srv_r()
report success before closing" is applied before, otherwise it will break
some checks (notably SSL) by doing them again after the connection is shut
down. This completes the fixes on the checks described in issue #253 by
roughly cutting the number of syscalls in half. It must be backported to
2.0.
On a plain TCP check, this function will do nothing except shutting the
connection down and will not even update the status. This prevents it
from being called again, which is the reason why we attempt to do it
once too early. Let's first fix this function to make it report success
on plain TCP checks before closing, as it does for all other ones.
This must be backported to 2.0. It should be safe to backport to older
versions but it doesn't seem it would fix anything there.
Since the change of I/O direction, we must not wait for an empty connect
callback before sending the request, we must attempt to send it as soon
as possible so that we don't uselessly poll. This is what this patch
does. This reduces the total check duration by a complete poll loop
compared to what is described in issue #253.
This must be backported to 2.0.
Since the change of I/O direction, we perform the connect() call and
the send() call together from the top. But the send call must at least
disable polling for writes once it does not have anything left to send.
This bug is partially responsible for the waste of resources described
in issue #253.
This must be backported to 2.0.
Since commit 7ac0e35f2 in 1.9-dev1 ("MAJOR: fd: compute the new fd polling
state out of the fd lock") we've started to update the FD POLLED bit a
bit more aggressively. Lately with the removal of the FD cache, this bit
is always equal to the ACTIVE bit. There's no point continuing to watch
it and update it anymore, all it does is create confusion and complicate
the code. One interesting side effect is that it now becomes visible that
all fd_*_{send,recv}() operations systematically call updt_fd_polling(),
except fd_cant_recv()/fd_cant_send() which never saw it change.
HTTP responses with headers that impinge upon the reserve must not be
cached. Otherwise, there is no guarantee that there will be enough space to add
the "Age" header when such cached responses are delivered.
This patch must be backported to 2.0 and 1.9. For these versions, the same must
be done for the legacy HTTP mode.
In the cache, huge HTTP headers will use several shctx blocks. When a response
is returned from the cache, these headers must be properly copied in the
corresponding HTX message by updating the pointer indicating where to copy a
header part.
This patch must be backported to 2.0 and 1.9.
The loop is now stopped only when nothing else is consumed from the input buffer
or if a parsing error is encountered. This leaves a chance to detect cases
where we fail to add the EOM.
For instance, when the max is reached after the headers parsing and all the
message is received. In this case, we may have the flag H1S_F_REOS set without
the flag H1S_F_APPEND_EOM and no pending input data, leading to an error because
we think it is an abort.
This patch must be backported to 2.0. This bug does not affect 1.9.
Otherwise some processing may be performed twice. For instance, if the header
"Content-Length" is parsed on the first pass, when the parsing is restarted, we
skip it because we think another header with the same value was already seen. In
fact, it is currently the only existing bug that can be encountered. But it is
safer to reset all the h1m on restart to avoid any future bugs.
This patch must be backported to 2.0 and 1.9
Otherwise, the following final response could inherit some of these
flags. For instance, because informational responses have no body, the flag
HTTP_MSGF_BODYLESS is set for 1xx messages. If it is not reset, this flag will
be kept for the final response.
One of visible effect of this bug concerns the HTTP compression. When the final
response is preceded by a 1xx message, the compression is not performed. This
was reported in github issue #229.
This patch must be backported to 2.0 and 1.9. Note that the file http_ana.c does
not exist for these branches, the patch must be applied on proto_htx.c instead.
Commit 8a4ffa0a ("MINOR: send-proxy-v2: sends authority TLV according
to TLV received") is missing parentheses around a variable assignment
used as condition in an if statement, and gcc isn't happy about it.
This bug came with commit 015e4d7: "MINOR: stick-tables: Add peers process
binding computing" where the "stick" rules cases were missing when computing
the peer local listener process binding. At parsing time we store in the
stick-table struct ->proxies_list the proxies which refer to this stick-table.
The process binding is computed after having parsed the entire configuration file
with this simple loop in cfgparse.c:
/* compute the required process bindings for the peers from <stktables_list>
 * for all the stick-tables, the ones coming with "peers" sections included.
 */
for (t = stktables_list; t; t = t->next) {
        struct proxy *p;

        for (p = t->proxies_list; p; p = p->next_stkt_ref) {
                if (t->peers.p && t->peers.p->peers_fe) {
                        t->peers.p->peers_fe->bind_proc |= p->bind_proc;
                }
        }
}
Note that if this process binding is not correctly initialized, the child forked
by the master-worker stops the peer local listener. This should also be the
case when daemonizing haproxy.
Must be backported to 2.0.
Since patch "7185b789", the authority TLV in a PROXYv2 header from a
client connection is stored. The authority TLV sent in PROXYv2 should be
taken into account to allow chaining PROXYv2 without dropping it.
Now by prefixing a log server with "ring@<name>" it's possible to send
the logs to a ring buffer. One nice thing is that it allows multiple
sessions to consult the logs in real time in parallel over the CLI, and
without requiring file system access. At the moment, ring0 is created as
a default sink for tracing purposes and is available. No option is
provided to create new rings though this is trivial to add to the global
section.
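For example, using the default ring0 sink mentioned above (a hedged example;
syntax details may differ):

global
    log ring@ring0 local0

The buffered logs can then be read back live from the CLI:

$ socat - /var/run/haproxy.sock <<< "show events ring0"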
Instead of detecting an AF_UNSPEC address family for a log server and
to deduce a file descriptor, let's create a target type field and
explicitly mention that the socket is of type FD.
When logging to a file descriptor, we'd rather use the unified
fd_write_frag_line(), which relies on the FD's lock, than perform the
writev() ourselves under a per-server lock, because if several loggers
point to the same output (e.g. stdout), a per-server lock does not
protect them from each other and their logs may interleave. The function
above instead relies on the fd's lock, so this is safer and will even
protect against concurrent accesses from other areas (e.g. traces).
The function also deals with the FD's non-blocking mode so we do
not have to keep specific code for this anymore in the logs.
Logs and sinks were resorting to dirty hacks to initialize an FD to
non-blocking mode. Now we have a bit for this in the fd tab so we can
do it on the fly on first use of the file descriptor. Previously it was
set per log server by writing value 1 to the port, or during a sink
initialization regardless of the usage of the fd.
The "cache" entry was still present in the fdtab struct and it was
reported in "show sess". Removing it broke the cache-line alignment
on 64-bit machines which is important for threads, so it was fixed
by adding an attribute(aligned()) when threads are in use. Doing it
only in this case allows 32-bit thread-less platforms to see the
struct fit into 32 bytes.
Ilya reported in issue #242 that h2c_handle_priority() was having
unreachable code... Obviously, I missed the braces around the "if",
leaving an unconditional return.
No backport is needed.
Now it is possible for a reader to subscribe and wait for new events
sent to a ring buffer. When new events are written to a ring buffer,
the applets that are subscribed are woken up to display new events.
For now we only support this with the CLI applet called by "show events"
since the I/O handler is indeed a CLI I/O handler. But it's not
complicated to add other mechanisms to consume events and forward them
to external log servers for example. The wait mode is enabled by adding
"-w" after "show events <sink>". An extra "-n" was added to directly
seek to new events only.
Some CLI parsers are currently abusing the CLI context types, for example
stuffing longs into pointers, for lack of room. But the context is 80 bytes
while the cli one is only 48, so there's some room left. This patch adds a
list element and two size_t fields usable as various offsets. The list
element is initialized.
There are two problems with the way we count watchers on a ring:
- the test for >=255 was accidentally left at 1, a value used during QA
- if the producer deletes all data up to the reader's position
  and the reader is called, cannot write, and is called again,
  it will have a zero offset, causing it to initialize itself
  multiple times, each time adding a new refcount.
Let's change this and instead use ~0 as the initial offset as it's
not possible to have it as a valid one. We now store it into the
CLI context's size_t o0 instead of casting it to a void*.
No backport needed.
User-level traces are more readable when visually aligned. This is easily
done by writing "rcvd" instead of "received" to align with "sent" :
$ socat - /tmp/sock1 <<< "show events buf0"
[00|h2|0|mux_h2.c:2465] rcvd H2 request : [1] H2 REQ: GET /?s=10k HTTP/2.0
[00|h2|0|mux_h2.c:4563] sent H2 response : [1] H2 RES: HTTP/1.1 200
h2c->dsi is only for demuxing, and needed while decoding a new request.
But if we already have a valid stream ID (e.g. response or outgoing
request), we should use it instead. This avoids seeing [0] in front of
the responses at user level.
This is as documented in "trace h2 verbosity": level "minimal" only
features flags and doesn't perform any decoding at all, "simple" does,
just like "clean" which is the default for end users.
The "clean" output will be suitable for user and proto-level output
where the internal stuff (state, pointers, etc) is not desired but
just the basic protocol elements.
We need this all the time in traces, let's have it now. For the sake
of compact outputs, the strings are all 3-chars long. The "show fd"
output was improved to make use of this.
All functions of the h2 data path were updated to receive one or multiple
TRACE() calls, at least one pair of TRACE_ENTER()/TRACE_LEAVE(), and those
manipulating protocol elements have been improved to report frame types,
special state transitions or detected errors. Even with careful tests, no
performance impact was measured when traces are disabled.
They are not completely exploited yet: the callback function tries to
dump a lot about them, but still doesn't provide buffer dumps, nor does
it indicate the stream or connection error codes.
The first argument is always set to the connection when known. The second
argument is set to the h2s when known, sometimes a 3rd argument is set to
a buffer, generally the rxbuf or htx, and occasionally the 4th argument
points to an integer (number of bytes read/sent, error code).
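As an illustration, a call following this convention could look like the hedged
sketch below (the event mask, message and variables are made up for the example
and are not copied from the actual code):

    /* arg1: the connection, arg2: the h2s, arg3: the htx buffer,
     * arg4: a pointer to an integer (here, a byte count).
     */
    TRACE_PROTO("sending H2 DATA frame", H2_EV_TX_FRAME|H2_EV_TX_DATA,
                h2c->conn, h2s, htx, &sent);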
Retrieving a 10kB object produces roughly 240 lines at developer
level, 35 lines at data level, 27 at state level, 10 at proto level,
and 2 at user level.
For now the headers are not dumped, but the start lines are emitted in
each direction at user level.
The patch is marked medium because it touches lots of places, though
it takes care not to change the execution path.
The new function h2_trace() is called when relevant by the trace subsystem
in order to provide extra information about the trace being produced. It
can for example display the connection pointer, the stream pointer, etc.
It is declared in the trace source as the default callback as we expect
it to be versatile enough to enrich most traces.
In addition, for requests and responses, if we have a buffer and we can
decode it as an HTX buffer, we can extract user-friendly information from
the start line.
For now the traces are not used. Supported events are categorized by
where the activity comes from (h2c, h2s, stream, etc), a direction
(send/recv/wake), and a list of possibilities for each of them (frame
types, errors, shut, ...). This results in ~50 different events that
try to cover a lot of possibilities when it's needed to filter on
something specific. Special events like protocol error are handled.
A few aggregate events like "rx_frame" or "tx_frame" are planned to
cover all frame types at once by being set together with any of the
more specific ones.
We also state that the first argument is always the connection. This way
the trace subsystem will be able to safely retrieve some useful info, and
we'll still be able to get the h2c from there (conn->ctx) in a pretty
print function. The second argument will always be an h2s, and in order
to propose it for tracking, we add its description. We also define 4
verbosity levels, which seems more than enough.
The detail level, initially based on syslog levels, is not used, while
something related is missing: the trace verbosity, which indicates whether
or not we want to call the decoding callback and what level of decoding
we want (raw captures, etc.). Let's change the field to "verbosity" for
this. A verbosity of zero means that the decoding callback is not
called, and all other levels are handled by this callback and are
source-specific. The source is now prompted to list the levels that
are proposed to the user. When the source doesn't define anything,
"quiet" and "default" are available.
It's very convenient to know the trace level in the output, at least to
grep/grep -v on it. The main usage is to filter/filter out the developer
traces at level DEVEL. For this we now add the numeric level (0 to 4) just
after the source name in the brackets. The output now looks like this :
[00|h2|4|mux_h2.c:3174] h2_send(): entering : h2c=0x27d75a0 st=2
 |  |  |    |             |          '-> message
 |  |  |    |             '-> function name
 |  |  |    '-> source file location
 |  |  '-> trace level (4=dev)
 |  '-> trace source
 '-> thread number
Working on adding traces to mux-h2 revealed that the function names are
manually copied a lot in developer traces. The reason is that they are
not preprocessor macros and as such cannot be concatenated. Let's
slightly adjust the trace() function call to take a function name just
after the file:line argument. This argument is only added for the
TRACE_DEVEL and 3 new TRACE_ENTER, TRACE_LEAVE, and TRACE_POINT macros
and left NULL for others. This way the function name is only reported
for traces aimed at the developers. The pretty-print callback was also
extended to benefit from this. This will also significantly shrink the
data segment as the "entering" and "leaving" strings will now be merged.
One technical point worth mentioning is that the function name is *not*
passed as an ist to the inline function because it's not considered as
a builtin constant by the compiler, and would lead to strlen() being
run on it from all call places before calling the inline function. Thus
instead we pass the const char * (that the compiler knows where to find)
and it's the __trace() function that converts it to an ist for internal
consumption and for the pretty-print callback. Doing this avoids losing
5-10% peak performance.
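A minimal sketch of this, with simplified arguments and assuming the usual
struct ist { ptr, len } pair (the real prototypes carry more parameters):

    #include <stdint.h>
    #include <string.h>

    struct ist { char *ptr; size_t len; };   /* simplified for the sketch */

    /* out-of-line part: converts the name to an ist once, for internal use
     * and for the pretty-print callback.
     */
    void __trace(uint64_t mask, const char *where, const char *func)
    {
        struct ist ifunc = { .ptr = (char *)func, .len = func ? strlen(func) : 0 };
        /* ... append <where> and <ifunc> to the output, run the callback ... */
        (void)mask; (void)where; (void)ifunc;
    }

    /* inline wrapper: the function name stays a plain const char * here so
     * the compiler does not emit strlen() at every call place.
     */
    static inline void trace(uint64_t mask, const char *where, const char *func)
    {
        __trace(mask, where, func);
    }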
The "payload" trace level was ambigous because its initial purpose was
to be able to dump received data. But it doesn't make sense to force to
report data transfers just to be able to report state changes. For
example, all snd_buf()/rcv_buf() operations coming from the application
layer should be tagged at this level. So here we move this payload level
above the state transitions and rename it to avoid the ambiguity making
one think it's only about request/response payload. Now it clearly is
about any data transfer and is thus just below the developer level. The
help messages on the CLI and the doc were slightly reworded to help
remove this ambiguity.
Save the authority TLV in a PROXYv2 header from the client connection,
if present, and make it available as fc_pp_authority.
The fetch can be used, for example, to set the SNI for a backend TLS
connection.
Traces were missing the thread number and the source name, which was
really annoying. Now the thread number is emitted on two digits inside
the square brackets, followed by the source name then the line location,
each delimited with a vertical bar, such as below :
[00|h2|mux_h2.c:2651] Notifying stream about SID change : h2c=0x7f3284581ae0 st=3 h2s=0x7f3284297f00 id=523 st=4
[00|h2|mux_h2.c:2708] receiving H2 HEADERS frame : h2c=0x7f3284581ae0 st=3 dsi=525 (st=0)
[02|h2|mux_h2.c:2194] Received H2 request : h2c=0x7f328d3d1ae0 st=2 : [525] H2 REQ: GET / HTTP/2.0
[02|h2|mux_h2.c:2561] Expecting H2 frame header : h2c=0x7f328d3d1ae0 st=2
It becomes apparent that most traces will use a single trace pretty
print callback, so let's allow the trace source to declare a default
one so that it can be omitted from trace calls, and will be used if
no other one is specified.
The principle is that when emitting a message, if some dropped events
were logged, we first attempt to report this counter before going
further. This is done under an exclusive lock while all logs are
produced under a shared lock. This ensures that the dropped line is
accurately reported and doesn't accidentally arrive after a later
event.
The new "show events" CLI keyword lists supported event sinks. When
passed a buffer-type sink it completely dumps it.
No drops at all were observed during attachment, even at 8 million evts/s.
The attachment limit is still missing though.
This now provides sink_new_buf() which allocates a ring buffer. One such
ring ("buf0") of 1 MB is created already, and may be used by sink_write().
The sink's creation should probably be moved somewhere else later.
The three functions (attach, IO handler, and release) are meant to be
called by any CLI command which requires to dump the contents of a ring
buffer. We do not implement anything generic to dump any ring buffer on
the CLI since it's meant to be used by other functionalities above.
However these functions deal with locking and everything so it's trivial
to embed them in other code.
This function tries to write to the ring buffer, possibly removing enough
old messages to make room for the new one. It takes two arrays of fragments
on input to ease the insertion of prefixes by the caller. It atomically
writes the message, possibly truncating it if desired, and returns the
operation's status.
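A hypothetical call, with the prefix passed separately from the message
fragments, could then look like this (the exact prototype and the ist()
helper are assumed here, not quoted from the source):

    struct ist pfx[1] = { ist("<134>") };                 /* e.g. a prio prefix */
    struct ist msg[2] = { ist("connect error on "), ist(srv_name) };

    sent = ring_write(ring, ~0 /* no truncation */, pfx, 1, msg, 2);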
Our circular buffers are well suited for being used as ring buffers for
not-so-structured data. The mechanism here consists in making room in a
buffer before inserting a new record which is prefixed by its size, and
looking up the next record based on the previous one's offset and size. We
can have up to 255 consumers watching for data (dump in progress, tail),
which guarantees that entries are not recycled while they're being dumped.
The complete representation is described in the header file. For now only
ring_new(), ring_resize() and ring_free() are created.
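A hedged sketch of how a consumer walks such records follows; the helpers are
hypothetical and the length encoding is simplified to a plain size_t:

    /* each record is stored as <len><payload>; the next record starts right
     * after the current payload, so a reader only needs its current offset.
     */
    size_t ofs = reader_ofs;

    while (ofs + sizeof(size_t) <= data_len) {
        size_t len = read_len_at(ring, ofs);             /* hypothetical helper */
        handle_record(ring, ofs + sizeof(size_t), len);  /* hypothetical helper */
        ofs += sizeof(size_t) + len;                     /* jump to next record */
    }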
Let's not mess with fd-specific code, locking nor message formatting
here, and use the new generic function instead. This substantially
simplifies the sink_write() code and makes it more agnostic to the
output representation and storage.
Currently both logs and event sinks may use a file descriptor to
atomically emit some output contents. The two may use the same FD though
nothing is done to make sure they use the same lock. Also there is quite
some redundancy between the two. Better make a specific function to send
a fragmented message to a file descriptor which will take care of the
locking via the fd's lock. The function is also able to truncate a
message and to enforce addition of a trailing LF when building the
output message.
It will sometimes be useful to encode varints to know the output size in
advance. Two versions are provided, one inline using a switch/case construct
which will be trivial for use with constants (and will be very fast albeit
huge) and one function iterating on the number which is 5 times smaller,
for use with variables.
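For illustration, the iterative variant can be as small as the sketch below; it
assumes a plain 7-bits-per-byte continuation scheme, which is not necessarily
the exact encoding used here:

    /* number of bytes needed to encode <v> with 7 useful bits per output
     * byte plus a continuation bit (illustrative scheme only).
     */
    static int varint_bytes(unsigned long long v)
    {
        int len = 1;

        while (v >= 0x80) {
            v >>= 7;
            len++;
        }
        return len;
    }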
The converter can be useful to look up a server queue from a dynamic value.
It takes an input value of type string, either a server name or
<backend>/<server> format and returns the number of queued sessions
on that server. Can be used in places where we want to look up
queued sessions from a dynamic name, like a cookie value (e.g.
req.cook(SRVID),srv_queue) and then make a decision to break
persistence or direct a request elsewhere.
Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
The url32 sample fetch does not take the path part of the URL into
account. This is because in smp_fetch_url32() we erroneously modify
path.len and path.ptr before testing their value and building the
path based part of the hash.
This fixes issue #235
This must be backported as far as 1.9, when HTX was introduced.
The delete_listener() function takes the listener's lock before taking
the proto_lock, which is contrary to what other functions do, possibly
causing an AB/BA deadlock. In practice the only two places where both
are taken are protocol_enable_all() and delete_listener(), the
former being used during startup and the latter during stop. In practice
during reload floods, it is technically possible for a thread to be
initializing the listeners while another one is stopping. While this
is too hard to trigger on 2.0 and above due to the synchronization of
all threads during startup, it's reasonably easy to do in 1.9 by having
hundreds of listeners, starting 64 threads and flooding them with reloads
like this :
$ while usleep 50000; do killall -USR2 haproxy; done
Usually in less than a minute, all threads will be deadlocked. The fix
consists in always taking the proto_lock before the listener lock. It
seems to be the only place where these two locks were reversed. This
fix needs to be backported to 2.0, 1.9, and 1.8.
If haproxy is built with profiling enabled with -pg, it is possible to
see the master quit during a reload while it's re-executing itself, with
exit code 155 (signal 27) meaning "Profiling timer expired". This happens
if the SIGPROF signal is delivered during the execve() call while the
handler was already unregistered. The issue itself is not directly inside
haproxy but it's easy to address. This patch disables this signal before
calling execvp() during a master reload. A simple test for this consists
in running this little script with haproxy started in master-worker mode :
$ while usleep 50000; do killall -USR2 haproxy; done
This fix should be backported to all versions using the master-worker
model.
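A minimal sketch of the idea, assuming that ignoring the signal right before
the re-exec is sufficient (the argv variable name is illustrative):

    #include <signal.h>
    #include <unistd.h>

    /* make sure a late SIGPROF cannot kill the process between the handler
     * being unregistered and the execve() completing.
     */
    signal(SIGPROF, SIG_IGN);
    execvp(next_argv[0], next_argv);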
If a receipt ends with the HTX buffer full and everything is completed except
appending the HTX EOM block, we end up detecting an error because the H1
parser did not switch to H1_MSG_DONE yet while all conditions for an end of
stream and end of buffer are met. This can be detected by retrieving 31532
or 31533 chunk-encoded bytes over H1 and seeing haproxy log "SD--" at the
end of a successful transfer.
Ideally the EOM part should be totally independent of the H1 message state
since the block was really parsed and finished. So we should switch to a
last state requiring to send only EOM. However this needs a few risky
changes. This patch aims for simplicity and backport safety, thus it only
adds a flag to the H1 stream indicating that an EOM is still needed, and
excludes this condition from the ones used to detect end of processing. A
cleaner approach needs to be studied, either by adding a state before DONE
or by setting DONE once the various blocks are parsed and before trying to
send EOM.
This fix must be backported to 2.0. The issue does not seem to affect 1.9
though it is not yet known why; it is probably related to the different
encoding of trailers, which always leaves a bit of room to let EOM be stored.
The H1 message parser calls the various message block parsers with an
offset indicating where in the buffer to start from, and only consumes
the data at the end of the parsing. The headers and trailers parsers
have a condition detecting if a headers or trailers block is too large
to fit into the buffer. This is detected by an incomplete block while
the buffer is full. Unfortunately it doesn't take into account the fact
that the block may be parsed after other blocks that are still present
in the buffer, resulting in aborting some transfers early as reported
in issue #231. This typically happens if a trailers block is incomplete
at the end of a buffer full of data, which typically happens with data
sizes multiple of the buffer size minus less than the trailers block
size. It also happens when the CRLF that follows the 0-sized chunk of
any transfer-encoded contents is itself on such a boundary, since this
CRLF is technically part of the trailers block. This can be reproduced
by asking a server to retrieve exactly 31532 or 31533 bytes of static
data using chunked encoding with curl, which reports:
transfer closed with outstanding read data remaining
This issue was revealed in 2.0 and does not affect 1.9 because in 1.9
the trailers block was processed at once as part of the data block
processing, and would simply give up and wait for the rest of the data
to arrive.
It's interesting to note that the headers block parsing is also affected
by this issue but in practice it has a much more limited impact since a
headers block is normally only parsed at the beginning of a buffer. The
only case where it seems to matter is when dealing with a response buffer
full of 100-continue header blocks followed by a regular header block,
which will then be rejected for the same reason.
This fix must be backported to 2.0 and partially to 1.9 (the headers
block part).
Now we try to find frontend, listener, backend, server, connection,
session, stream, from the presented argument of type connection,
stream or session. Various combinations and bounces allow retrieving
most of them almost all the time. The extraction is
performed early so that we'll be able to apply filters later.
The lock-on is set if it was not there while the trace is running and
a valid pointer is available. If it was already set and doesn't match,
no trace is produced.
When no criterion is provided, it carefully enumerates all available
ones, including those provided by the source itself. Otherwise it sets
the new criterion and resets the lockon pointer.
When we stop or pause a trace (either on a matching event or by hand),
we must also stop the lock-on feature so that we don't follow any
further activity on this pointer even if it is recycled. For now this
is not exploited.
The trace() call will support an optional decoding callback and 4
arguments that this function is supposed to know how to use to provide
extra information. The output remains unchanged when the function is
NULL. Otherwise, the message is pre-filled into the thread-local
trace_buf, and the function is called with all arguments so that it
completes the buffer in a readable form depending on the expected
level of detail.
This new "level" argument will allow the trace sources to label the
traces for different purposes, and filter out some of them if they
are not relevant to the current target. Right now we have 5 different
levels:
- USER : the least verbose one, with only some functional information
- PAYLOAD: like user but also displays some payload-related information
- PROTO: focuses on the protocol's framing
- STATE: also indicates internal state transitions or non-transitions
- DEVELOPER: adds extra info about branches taken in the code (break
points, return points)
We now pass an extra argument "where" to the trace() call, which
is supposed to be an ist made of the concatenation of the filename
and the line number. We only keep the last 10 chars from this string
since the end of file names is most often easy to recognize. This
gives developers useful information at very low cost.
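Such a constant can be built entirely at compile time with a pair of
stringification macros, roughly as in this sketch (macro names are
illustrative):

    #define TRC_STR(x)   #x
    #define TRC_XSTR(x)  TRC_STR(x)
    /* expands to e.g. "mux_h2.c:2651", passed as the <where> ist */
    #define TRC_LOC      (__FILE__ ":" TRC_XSTR(__LINE__))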
For now it remains quite basic. It performs a few state checks, calls
the source's sink if defined, and performs the transitions between
RUNNING, STOPPED and WAITING when the configured events match.
The new "show trace" CLI command lists available trace sources and
indicates their status, their sink, and number of dropped packets.
When "show trace <source>" is used, the list of known events is
also listed with their status per action (report/start/stop/pause).
The "level" keyword allows to indicate the expected level of verbosity
in the traces, among "user" (least verbose, just synthetic info) to
"developer" (very detailed, including function entry/leaving). It's only
displayed and set but not used yet.
For now it lists the sources if one is not provided, and checks
for the source's existence. It lists the events if not provided,
checks for their existence if provided, adjusts the report/start/stop/pause
event settings, and performs state transitions.
It lists sinks and adjusts them as well. Filters, lock, and
level are not implemented yet.
The principle of this subsystem will be to support taking live traces
at various places in the code with conditional triggers, filters, and
ability to lock on some elements. The traces will support typed events
and will be sent into sinks made of ring buffers, file descriptors or
remote servers.
This is the most basic type of sink. It pre-registers "stdout" and
"stderr", and is able to use writev() on them. The writev() operation
is locked to avoid mixing outputs. It's likely that the registration
should move somewhere else to take into account whether stdout and
stderr are still open or have been closed.
The principle will be to be able to dispatch events to various destinations
called "sinks". This is already done in part in logs where log servers can
be either a UDP socket or a file descriptor. This will be needed with the
new trace subsystem where we may also want to add ring buffers. And it turns
out that all such destinations make sense at all places. Logs may need to be
sent to a TCP server via a ring buffer, or consulted from the CLI. Trace
events may need to be sent to stdout/stderr as well as to remote log servers.
This patch creates a new structure "sink" aiming at addressing these similar
needs. The goal is to merge together what is common to all of them, such as
the output format, the dropped events count, etc, and also keep separately
the target identification (network address, file descriptor). Provisions
were made to have a "waiter" on the sink. For a TCP log server it will be
the task to wake up after writing to the log buffer. For a ring buffer, it
could be the list of watchers on the CLI running a "tail" operation and
waiting for new events. A lock was also placed in the struct since many
operations will require some locking, including the FD ones. The output
formats cover those in use by logs, plus two extra ones prepending the ISO
time in front of the message (convenient for stdio/buffer).
For now only the generic infrastructure is present, no type-specific
output is implemented. There's the sink_write() function which prepares
and formats a message to be sent, trying hard to avoid copies and only
using pointer manipulation, where the type-specific code just has to be
added. Dropped messages are already counted (for now 100% drop). The
message is put into an iovec array as it will be trivial to use with
file descriptors and sockets.
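To give an idea of the shape of this descriptor, a rough and purely
illustrative sketch could be (field names and types are not the real ones):

    struct sink {
        const char *name;            /* "stdout", "stderr", "buf0", ... */
        const char *desc;            /* human-readable description */
        int type;                    /* FD-based, buffer-based, ... */
        int fmt;                     /* output format (raw, ISO time, ...) */
        unsigned long long dropped;  /* count of dropped events */
        struct list waiters;         /* watchers waiting for new events */
        /* a lock plus a type-specific context (fd, ring, address) follow */
    };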
The function call tracing code is quite old and was never ported to
support threads. It's not even sure whether it still works well, but
at least its presence creates confusion for future work so let's rename
it to calltrace.c and add a comment about its lack of thread-safety.
In h1_rcv_buf(), wake the h1c tasklet as long as we're not done reading the
request/response, and the h1c is not already subscribed for receiving. Now
that we no longer subscribe in h1_recv() if we managed to read data, we
rely on h1_rcv_buf() calling us again, but h1_process_input() may have
returned 0 if we only received part of the request, so we have to wake
the tasklet to be sure to get more data again.
When we dump a thread's state (show thread, panic) we don't know if
anything is happening in Lua, which can be problematic especially when
calling external functions. With this patch, the thread dump code can
now detect if we're running in a global Lua task (hlua_process_task),
or in a TCP or HTTP Lua service (task_run_applet and applet.fct ==
hlua_applet_tcp_fct or http_applet_http_fct), or a fetch/converter
from an analyser (s->hlua != NULL). In such situations, it's able to
append a formatted Lua backtrace of the Lua execution path with
function names, file names and line numbers.
Note that a shorter alternative could be to call "luaL_where(hlua->T,0)"
which only prints the current location, but it's not necessarily sufficient
for complex code.
The current functions are seen from outside the debugging code and are
convenient to export so that we can improve the thread dump output :
void hlua_applet_tcp_fct(struct appctx *ctx);
void hlua_applet_http_fct(struct appctx *ctx);
struct task *hlua_process_task(struct task *task, void *context, unsigned short state);
Of course they are only available when USE_LUA is defined.
This is somewhat related to indent_msg() except that this one places a
known prefix at the beginning of each line, allows replacing the EOL
character, and can avoid inserting the prefix on the first line if not
desired. It works with a normal output buffer/chunk so it doesn't need to
allocate anything nor to modify the input string. It is suitable for use
in multi-line backtraces.
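A self-contained sketch of the idea, independent of the real helper's name and
prototype, could be:

    #include <stdio.h>

    /* print <in> to <out>, inserting <pfx> after every <eol> character; the
     * first line only gets the prefix when <first> is non-zero.
     */
    static void dump_prefixed(FILE *out, const char *in,
                              const char *pfx, char eol, int first)
    {
        if (first)
            fputs(pfx, out);

        for (; *in; in++) {
            fputc(*in, out);
            if (*in == eol && in[1])
                fputs(pfx, out);
        }
    }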
In mux_pt_attach(), don't unconditionally call unsubscribe, and only do so
if we were subscribed. The idea was that at this point we would always be
subscribed, as mux_pt_attach() would only be called after at least one
request, after which the mux_pt would have subscribed, but this is wrong.
We can also be called if for some reason the connection failed before the
xprt was created. And with no xprt, attempting to call unsubscribe will
probably lead to a crash.
This should be backported to 2.0.
The stats applet waits to have a full body before processing POST requests.
Because it produces nothing while waiting for the end of the request, the
applet may become blocked: the client side is blocked because the stats applet
does not consume anything, and the applet is waiting because the whole body has
not been received. Registering the analyzer AN_REQ_HTTP_BODY when a POST
request is sent to the stats applet solves the issue.
This patch must be backported to 2.0.
This bug was introduced by the commit bfab2ddd ("MINOR: hlua: Add a flag on the
lua txn to know in which context it can be used"). The wrong test was done, so
the timeout was always set on the response channel. This may lead to an
infinite loop.
This patch must be backported everywhere the commit bfab2ddd is. For now, at
least to 2.0, 1.9 and 1.8.
In the REORG of commit 1a18b5414 ("REORG: connection: centralize the
conn_set_{tos,mark,quickack} functions") a bug was introduced by
calling conn_set_tos instead of conn_set_mark.
This was reported in issue #212
This should be backported to 1.9 and 2.0.
When we upgrade the mux from TCP to H2/HTX, don't use cs_destroy() to free
the conn_stream, use cs_free() instead. Using cs_destroy() would call the
mux detach method, and at that point in time the mux would be the H2 mux,
which knows nothing about that conn_stream, so bad things would happen.
This should eventually make upgrade from TCP to H2/HTX work, and fix
the github issue #196.
This should be backported to 2.0.
In stream_end_backend(), if we're upgrading from TCP to H1/HTX, as we don't
destroy the stream, we have to add the SF_HTX flag on the stream, or bad
things will happen.
This was broken when attempting to fix github issue #196.
This should be backported to 2.0.
There were 221 places where a status message or an error message were built
to be returned on the CLI. All of them were replaced to use cli_err(),
cli_msg(), cli_dynerr() or cli_dynmsg() depending on what was expected.
This removed a lot of duplicated code because most of the time, 4 lines
are replaced by a single, safer one.
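As an illustration of the kind of conversion involved (a sketch, not an actual
hunk from the patch), a typical CLI parser error path changes like this:

    /* before: four lines repeated in every parser */
    appctx->ctx.cli.severity = LOG_ERR;
    appctx->ctx.cli.msg = "Missing argument.\n";
    appctx->st0 = CLI_ST_PRINT;
    return 1;

    /* after: one call doing the same thing */
    return cli_err(appctx, "Missing argument.\n");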
Right now we have extremely inconsistent states to report output: one is
CLI_ST_PRINT, which prints the constant message cli->msg with the assigned
severity, and the other is CLI_ST_PRINT_FREE, which prints the dynamically
allocated cli->err with severity LOG_ERR, with nothing in between, even
though it's useful to be able to report dynamically allocated messages as
well as constant error messages.
This patch adds two extra states, which are not particularly well named
given the constraints imposed by existing ones. One is CLI_ST_PRINT_ERR
which prints a constant error message. The other one is CLI_ST_PRINT_DYN
which prints a dynamically allocated message. By doing so we maintain
the compatibility with current code.
It is important to keep in mind that we cannot pre-initialize pointers
and automatically detect what message type it is based on the assigned
fields, because the CLI's context is in a union shared with all other
users, thus unused fields contain anything upon return. This is why we
have no choice but using 4 states. Keeping the two fields <msg> and
<err> remains useful because one is const and not the other one, and
this catches may copy-paste mistakes. It's just that <err> is pretty
confusing here, it should be renamed.
Since last commit there's no point anymore in having two variants of the
same function, let's switch to b_free() only. __b_drop() was renamed to
__b_free() for obvious consistency reasons.
CO_FL_EARLY_SSL_HS/CO_FL_EARLY_DATA are removed for BoringSSL. Early
data can be checked via the BoringSSL API, and ssl_fc_has_early can make
use of it.
This should be backported to all versions till 1.8.
Since BoringSSL commit 777a2391 "Hold off flushing NewSessionTicket until write.",
0-RTT doesn't work. It appears that half-RTT data (response from 0-RTT) never
worked before the BoringSSL fix. For HAProxy the regression comes from 010941f8
"BUG/MEDIUM: ssl: Use the early_data API the right way.": the problem is linked to
the logic of CO_FL_EARLY_SSL_HS used for OpenSSL. With BoringSSL, the handshake is
done before reading early data, and 0-RTT data and half-RTT data are processed as
normal data: CO_FL_EARLY_SSL_HS/CO_FL_EARLY_DATA are not needed, simply remove
them.
This should be backported to all versions till 1.8.
Allow HAProxy to cache responses to OPTIONS HTTP requests.
This is useful in the use case of "Cross-Origin Resource Sharing" (cors)
to cache CORS responses from API servers.
Since HAProxy does not support the Vary header for now, this is only
useful for the "access-control-allow-origin: *" use case.
The current HTTP cache hash contains only the Host header and the URL path.
The request method should also be added to the mix to support caching other
request methods on the same URL, e.g. GET and OPTIONS.
The frame check code in the demuxer was moved to its own function to
keep the demux function clean enough. This also simplifies the test
case as we can now simply call this function once in H2_CS_FRAME_P
state.
When parsing references to stick-tables declared as backends, they are added to
a list of proxies (they are proxies!) which refer to this stick-table.
Before this patch we added them to this list without checking whether they were
already present, on the silly assumption that the actions/samples were
checked/resolved in the same order the proxies are parsed.
This patch implements a simple inline function, in_proxies_list(), to test
the presence of a proxy in a list of proxies. We use this function when
resolving/checking samples/actions.
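The helper boils down to a plain walk along the stick-table reference chain,
roughly as sketched below (the real one may differ in details):

    /* return <px> if it is already in the list starting at <list> and linked
     * through next_stkt_ref, otherwise NULL.
     */
    static inline struct proxy *in_proxies_list(struct proxy *list,
                                                struct proxy *px)
    {
        struct proxy *p;

        for (p = list; p; p = p->next_stkt_ref)
            if (p == px)
                return p;
        return NULL;
    }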
This bug was introduced by commit 015e4d7.
Must be backported to 2.0.
In SMTP, MySQL and PgSQL checks, we're supposed to finish with a message
to politely quit the server, otherwise some of them will log some errors.
This is the case with Postfix as reported in GH issue #187. Since commit
fe4abe6 ("BUG/MEDIUM: connections: Don't call shutdown() if we want to
disable linger.") we are a bit more aggressive on outgoing connection
closure and checks were not prepared for this.
This patch makes the 3 checks above disable linger_risk so that we close
cleanly, with the side effect that it will leave
some TIME_WAIT connections behind (hence why it should not be generalized
to all checks). It's worth noting that in issue #187 it's mentioned that
this patch doesn't seem to be sufficient for Postfix, however based only
on local network activity this looks OK, so maybe this will need to be
improved later.
Given that the patch above was backported to 2.0 and 1.9, this one should
as well.
In Patrick's trace it was visible that after a stream had been missed,
the next stream would receive a WINDOW_UPDATE with the first one's
credit added to its own. This makes sense because in case of error
h2c->rcvd_s is not reset. Given that this counter is per frame, better
reset it when starting to parse a new frame, it's easier and safer.
This must be backported as far as 1.8.
In h2_process_mux() if we have some room and an attempt to send a window
update for the connection was pending, it's done first. But it's not done
for the stream, which has the effect of postponing this attempt until the
next pass through h2_process_demux(), at the risk of seeing the send
buffer full again. Let's always try to send both pending frames as soon
as possible.
This should be backported as far as 1.8.
Patrick Hemmer reported a rare case where the H2 mux emits spurious
RST_STREAM(STREAM_CLOSED) that appear to be triggered by the send path and
do not even seem to be associated with a previous incoming frame, even
though the send path never emits such a thing.
The problem is particularly complex (hence its rarity). What happens is
that when data are uploaded (POST) we must refill the sending stream's
window by sending a WINDOW_UPDATE message (and we must refill the
connection's too). But in a highly bidirectional traffic, it is possible
that the mux's buffer will be full and that there is no more room to build
this WINDOW_UPDATE frame. In this case the demux parser switches to the
H2_CS_FRAME_A state, noting that an "acknowledgement" is needed for the
current frame, and it doesn't change the current stream nor frame type.
But the stream's state was possibly updated (typically OPEN->HREM when
a DATA frame carried the ES flag).
Later the data can leave the buffer, wake up h2_io_cb(), which calls
h2_send() to send pending data, itself calling h2_process_mux() which
detects that there are unacked data in the connection's window, so it
emits a WINDOW_UPDATE for the connection and resets the counter. Then
h2_process() calls h2_process_demux() which continues the processing
based on the current frame type and the current state H2_CS_FRAME_A.
Unfortunately the protocol compliance checks matching the frame type
against the current state are still present. These tests are designed
for new frames only, not for those in progress, but they are not
limited by frame types. Thus the current DATA frame is checked again
against the current stream state that is now HREM, and fails the test
with a STREAM_CLOSED error.
The quick and backportable solution consists in adding the test for
this ACK and bypass all these checks that were already validated prior
to the state transition. A better long-term solution would consist in
having a new state between H and P indicating the frame is new and
needs to be checked ("N" for new?) and apply the protocol tests only
in this state. In addition everywhere we decide to send a window
update, we should send a stream WU first if there are unacked data
for the current stream. Last, rcvd_s should always be reset when
transitioning to FRAME_H (and a BUG_ON() for this in dev builds would help).
The bug will be way harder to trigger on 2.0 than on 1.8/1.9 because
we have a ring buffer for the connection so the buffer full situations
are extremely rare.
This fix must be backported to all versions having H2 (as far as 1.8).
Special thanks to Patrick for providing exploitable traces.