It is invalid to manipulate responses from http-request rules or to manipulate
requests from http-response rules. When http-request rules are evaluated, the
connection to the server is not yet established, so there is no response at all. And
when http-response rules are evaluated, the request has already been sent to the
server.
The calling direction is now checked: the functions "txn.http:req_*" can only
be called from http-request rules and the functions "txn.http:res_*" can only
be called from http-response rules.
This issue was reported on Github (#190).
This patch must be backported to all versions since 1.6.
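A minimal sketch of the shape of such a check, with invented names (the
actual hlua code differs):

    /* Sketch only, names invented: the Lua txn object records which
     * rule set is being evaluated, and every txn.http function first
     * verifies that it runs in the matching direction. */
    enum rule_dir { DIR_HTTP_REQ, DIR_HTTP_RES };

    struct lua_txn_sketch {
        enum rule_dir dir;  /* set when the rules start executing */
    };

    /* req_* helpers are only legal while http-request rules run */
    static int may_touch_request(const struct lua_txn_sketch *t)
    {
        return t->dir == DIR_HTTP_REQ;
    }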
For HTX streams, when txn:done() is called, the work is delegated to the
function http_reply_and_close(). But it is not enough. The channel's analyzers
must also be reset. Otherwise, some analyzers may still be called while
processing should be aborted.
For instance, if the function is called from an http-request rule on the
frontend, the request analyzers on the backend side are still called. So we may
try to add a header to the request, while it has already been reset.
This patch must be backported to 2.0 and 1.9.
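The shape of the fix, sketched with mock types (the real stream and channel
structs are more involved):

    /* Mock types for illustration only: after delegating the reply
     * and close, the fix must also clear both channels' analyzer
     * bitmaps so that no analyzer runs on the aborted stream. */
    struct channel_sketch { unsigned int analysers; };
    struct stream_sketch  { struct channel_sketch req, res; };

    static void txn_done_sketch(struct stream_sketch *s)
    {
        /* the http_reply_and_close() equivalent happens first, then: */
        s->req.analysers = 0;
        s->res.analysers = 0;
    }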
This patch excludes freebsd, osx and generic targets for this vtc.
Basic tcp checks performed by haproxy on a linux system leverage the
TCP_QUICKACK option, which implies that the connection is never
established from the perspective of the backend server. On other systems
a regular tcp 3-way handshake is performed, immediately followed by a
reset, which from the perspective of the server is an aborted connection.
When we run this regtest on FreeBSD (or anything other than linux) there
is a race condition in the server_thread() function of the vtc_server.c
file. If we receive the reset when we are in accept() then fd is -1 and
vtest calls vtc_fatal, failing the test.
Other check-specific reg-tests were excluded on the freebsd, osx and generic
targets for the same reason, but were at the time documented as being disabled
because they used TCP_DEFER_ACCEPT. These commits are 15685c791 ("REGTEST:
Exclude freebsd target for some reg tests") and 03c6ab0cb ("REGTEST: exclude
osx and generic targets for 40be_2srv_odd_health_checks").
In h1_rcv_buf(), don't wake the H1 tasklet to attempt to receive more data
if we got the whole request. Doing so would lead to a recv and maybe to a
subscribe while neither may be needed.
If the connection is keep alive, the tasklet will be woken up later by
h1_detach(), so that we'll be able to get the next request, or an end of
connection.
In h1_recv(), don't subscribe if we managed to receive data. We may not have
to, if we received a complete request, and a new receive will be attempted
later, as the tasklet is woken up either by h1_rcv_buf() or by h1_detach().
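As a rough sketch with invented names (the real code goes through the mux's
wait_event and the xprt's subscribe callback):

    #include <sys/types.h>

    struct h1c_sketch { int subscribed; };

    /* only ask to be woken for more data when the last receive got
     * nothing; if data did arrive, the tasklet will be woken later by
     * h1_rcv_buf() or h1_detach() when more is actually needed */
    static void h1_recv_done_sketch(struct h1c_sketch *h1c, ssize_t ret)
    {
        if (ret > 0)
            return;              /* got data: no subscribe */
        if (!h1c->subscribed)
            h1c->subscribed = 1; /* stands for a receive subscription */
    }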
Insufficiently described bug fixes are still too frequent. It's a real
pain to create each new maintenance release, as 3/4 of the time is spent
trying to guess what problem a patch fixes, which is already important
in order to decide whether to pick the fix or not, but is even more critical
in order to write understandable release notes.
Christopher rightfully demands that a patch tagged "BUG" MUST ABSOLUTELY
describe the problem and why this problem is a bug. Describing the fix
is one thing but if the bug is unknown, why would there be a fix? How
can a stable maintainer be convinced to take a fix if its author didn't
care about checking whether it was a real bug? This patch tries to
explain a bit better what really needs to appear in the commit message
and how to describe a bug.
To be backported to all relevant stable versions.
Since 1.9 we support sending logs to various non-blocking outputs like
stdout/stderr or files, by using writev() which guarantees that it only
returns after having written everything or nothing. However the syscall
may be interrupted while doing so, and this is visible when writing to
a tty during debug sessions, as some logs occasionally appear interleaved
if an xterm or SSH connection is not very fast. Performance here is not a
critical concern, log correctness is. Let's simply take the logger's lock
around the writev() call to prevent multiple senders from stepping onto
each other's toes.
This may be backported to 2.0 and 1.9.
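A minimal illustration of the pattern, using generic POSIX code rather than
the logger's internals:

    #include <errno.h>
    #include <pthread.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    /* Illustration only: serialize the writev() call so that when it
     * is interrupted and restarted, two senders cannot interleave
     * their lines on the same output. */
    static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

    static ssize_t log_writev(int fd, const struct iovec *iov, int cnt)
    {
        ssize_t ret;

        pthread_mutex_lock(&log_lock);
        do {
            ret = writev(fd, iov, cnt);
        } while (ret < 0 && errno == EINTR); /* restart after a signal */
        pthread_mutex_unlock(&log_lock);
        return ret;
    }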
In sess_established(), don't immediately switch the backend stream_interface
to SI_ST_DIS if we only got a SHUTR. We may still have something to send,
e.g. if the request is a POST, and we should be switched to SI_ST_DIS later
when the shutw happens.
This should be backported to 2.0 and 1.9.
When the number of nodes is increased because the server weight is changed,
the nodes array must be reallocated. But its new size is not correctly set:
only the total number of nodes is used to compute it, while it must also
depend on the size of a node. The new size must be the total number of nodes
times the size of a node.
This issue was reported on Github (#189).
This patch must be backported to all versions since 1.6.
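In other words, the bug boils down to passing an element count where
realloc() expects a byte count:

    #include <stdlib.h>

    /* Illustration of the bug pattern, not the actual tree code:
     * realloc() takes a size in bytes, so passing the element count
     * alone under-allocates as soon as an element is larger than one
     * byte. */
    static void *grow_nodes(void *nodes, size_t new_nb, size_t node_sz)
    {
        /* wrong: realloc(nodes, new_nb) would allocate new_nb bytes */
        return realloc(nodes, new_nb * node_sz); /* count times size */
    }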
This one was added by commit daacf3664 ("BUG/MEDIUM: protocols: add a
global lock for the init/deinit stuff") but I forgot to add it to the
include file, breaking DEBUG_THREAD.
There is no standard case for HTTP header names because, as stated in
RFC 7230, they are case-insensitive. So applications must handle them in a
case-insensitive manner. But some bogus applications erroneously rely on the
case used by most browsers. This problem becomes critical with HTTP/2
because all header names must be exchanged in lowercase. And HAProxy uses the
same convention. All header names are sent in lowercase to clients and servers,
regardless of the HTTP version.
This design choice is linked to the HTX implementation. So, for previous
versions (2.0 and 1.9), a workaround is to disable the HTX mode to fall
back to the legacy HTTP mode.
Since the legacy HTTP mode was removed, some users reported interoperability
issues because their application was no longer able to handle HTTP/1 messages
received from HAProxy. So, we've decided to add a way to change the case of some
headers before sending them. It is now possible to define a "mapping" between a
lowercase header name and the spelling expected by the bogus application. To do
so, you must use the global directives "h1-case-adjust" and
"h1-case-adjust-file". Then options "h1-case-adjust-bogus-client" and
"h1-case-adjust-bogus-server" may be used in proxy sections to enable the
conversion. See the configuration manual for more info.
Of course, our advice is to urgently upgrade these applications for
interoperability concerns and because they may be vulnerable to various types of
content smuggling attacks. But if you are really forced to use an unmaintained
bogus application, you may use these directives, at your own risk.
If it is relevant, this feature may be backported to 2.0.
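For illustration, such a configuration could look like this (the header name
and file path are only examples):

    global
        # map the lowercase name used internally to the case expected
        # by the bogus application
        h1-case-adjust content-length Content-Length
        # or load many such mappings from a file:
        # h1-case-adjust-file /etc/haproxy/h1-case-adjust.map

    backend bogus_app
        # rewrite header case in requests sent to this backend's servers
        option h1-case-adjust-bogus-server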
There is one unprotected call to stop_proxy() from the manage_proxy()
task, so there is a single caller by definition, but there is also
another such call from the CLI's "shutdown frontend" parser. This
one does it under the proxy's lock but the first one doesn't use it.
Thus it is theoretically possible to corrupt the list of listeners in a
proxy by issuing "shutdown frontend" and SIGUSR1 exactly at the same
time. While it sounds particularly contrived or stupid, it could
possibly happen with automated tools that would send actions via
various channels. This could cause the process to loop forever or
to crash and thus stop faster than expected.
This might be backported as far as 1.8.
Dragan Dosen found that the listeners lock is not sufficient to protect
the listeners list when proxies are stopping because the listeners are
also unlinked from the protocol list, and under certain situations like
bombing with soft-stop signals or shutting down many frontends in parallel
from multiple CLI connections, it could be possible to provoke multiple
instances of delete_listener() to be called in parallel for different
listeners, thus corrupting the protocol lists.
Such operations are pretty rare, they are performed once per proxy upon
startup and once per proxy on shutdown. Thus there is no point trying
to optimize anything and we can use a global lock to protect the protocol
lists during these manipulations.
This fix (or a variant) will have to be backported as far as 1.8.
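Sketched with generic primitives (the actual patch uses HAProxy's own locking
macros):

    #include <pthread.h>

    struct listener_sketch;

    /* one process-wide lock is enough: listeners are added and removed
     * once per proxy at startup/shutdown, so contention is irrelevant
     * and correctness is all that matters */
    static pthread_mutex_t proto_lock = PTHREAD_MUTEX_INITIALIZER;

    static void delete_listener_sketch(struct listener_sketch *l)
    {
        pthread_mutex_lock(&proto_lock);
        (void)l; /* unlink l from its protocol's listener list here */
        pthread_mutex_unlock(&proto_lock);
    }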
When client-side or server-side cookies are parsed, HAProxy enters an
infinite loop if a Cookie/Set-Cookie header value starts with a delimiter (a colon
or a semicolon). Depending on the operating system, the service may become
degraded, unresponsive, or may trigger haproxy's watchdog causing a service stop
or automatic restart.
To fix this bug, in the loop parsing the attributes, we must be sure to always
skip delimiters once the first attribute-value pair has been parsed, whether it
is empty or not. The credit for the fix goes to Olivier.
CVE-2019-14241 was assigned to this bug. This patch fixes the Github issue #181.
This patch must be backported to 2.0 and 1.9. However, the patch will have to be
adapted.
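A self-contained sketch of the safe loop shape (not HAProxy's actual parser):

    #include <stdio.h>

    /* Sketch, not the real parser: after consuming one (possibly
     * empty) attribute, unconditionally skip delimiters so the cursor
     * always advances. Without that, a value starting with a
     * delimiter leaves the cursor in place and the loop spins
     * forever. */
    static void parse_attrs_sketch(const char *ptr, const char *end)
    {
        while (ptr < end) {
            const char *beg = ptr;

            while (ptr < end && *ptr != ';')
                ptr++;                   /* one attribute[=value] */
            printf("attr: '%.*s'\n", (int)(ptr - beg), beg);

            while (ptr < end && *ptr == ';')
                ptr++;                   /* guarantees progress */
        }
    }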
Empty error files may be used to disable the sending of any message for specific
error codes. A common use-case is to use the file "/dev/null". This way the
default error message is overridden and no message is returned to the client. It
was supported in the legacy HTTP mode, but not in HTX. Because of a bug, such
messages triggered an error.
This patch must be backported to 2.0 and 1.9. However, the patch will have to be
adapted.
In http_reply_and_close() and http_server_error(), we must be sure to have an
allocated buffer (buf.size > 0) to consider it as a valid HTX message. For now,
there is no way to hit this bug. But a fix to support "empty" error messages in
HTX is pending. Such empty messages, after parsing, will be converted into an
unallocated buffer (buf.size == 0).
This patch must be backported to 2.0 and 1.9. However, the patch will have to be
adapted.
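The guard, sketched with a mock buffer type:

    #include <stddef.h>

    /* Mock type for illustration: an HTX message can only be read out
     * of an allocated buffer; empty error messages end up stored as
     * unallocated buffers (size == 0) and must be skipped. */
    struct buf_sketch { void *area; size_t size; };

    static int htx_usable_sketch(const struct buf_sketch *b)
    {
        return b->size > 0;
    }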
Github user @jpulz reported a crash with tcp-checks in issue #184
where cs==NULL. If we enter the function with cs==NULL and check->result
!= CHK_RES_UNKNOWN, we'll go directly to out_end_tcpcheck and dereference
cs. We must validate there that cs is valid (and conn at the same time
since it would be NULL as well).
This fix must be backported as far as 1.8.
With the previous commit 03627245c ("BUG/MEDIUM: mux-h1: Trim excess server data
at the end of a transaction"), we try to avoid to handle junk data coming from a
server as a response. But it only works for data already received. Starting from
the moment a server sends an invalid response, it is safer to close the
connection too, because more data may come after and there is no good reason to
handle them.
So now, when a conn_stream is detached from a server connection, if there are
some unexpected input data, we simply trim them and close the connection
ASAP. We delay the close only when there are still some outgoing data to
deliver to the server.
This patch must be backported to 2.0 and 1.9.
Now that we start by releasing any possibly existing connection,
the conditions simplify a little bit and some of the complex cases
can be removed. A few comments were also added for non-obvious cases.
When entering connect_server() we're not supposed to have a connection
already, except when retrying a failed connection, which is pretty rare.
Let's simplify the code by starting to unconditionally release any existing
connection. For now we don't go further, as this change alone will lead to
quite some simplification that'd rather be done as a separate cleanup.
Lua cosockets do not need to allocate the remote connection anymore.
However this was trickier than expected because some tests were made
on this remote connection's existence to detect establishment instead
of relying on the stream interface's state (which is how it's now done).
The flag SF_ADDR_SET was set a bit too early (before assigning the
address) so this was moved to the right place. It should not have had
any impact beyond confusing debugging.
The only remaining occurrence of the remote connection knowledge now
is for getsockname() which requires to access the connection to send
the syscall, and it's unlikely that we'll need to change this before
QUIC or so.
When forcing the outgoing address of a connection, till now we used to
allocate this outgoing connection and set the address into it, then set
SF_ADDR_SET. With connection reuse this causes a whole lot of issues and
difficulties in the code.
Thanks to the previous changes, it is now possible to store the target
address into the stream instead, and copy the address from the stream to
the connection when initializing the connection. assign_server_address()
does this and as a result SF_ADDR_SET now reflects the presence of the
target address in the stream, not in the connection. The http_proxy mode,
the peers and the master's CLI now use the same mechanism. For now the
existing connection code was not removed to limit the amount of tricky
changes, but the allocated connection is not used anymore.
This change also revealed a latent issue that we've been having around
option http_proxy: the address was set in the connection but neither the
SF_ADDR_SET nor the SF_ASSIGNED flags were set. It looks like the connection
could establish only due to the fact that it existed with a non-null
destination address.
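As a sketch with invented names (the real structs differ):

    #include <string.h>
    #include <sys/socket.h>

    struct stream_sketch { struct sockaddr_storage *target_addr; };
    struct conn_sketch   { struct sockaddr_storage *dst; };

    /* the stream owns the target address; connection setup merely
     * copies it into the connection's destination */
    static int assign_dst_sketch(struct conn_sketch *c,
                                 const struct stream_sketch *s)
    {
        if (!s->target_addr || !c->dst)
            return 0;
        memcpy(c->dst, s->target_addr, sizeof(*c->dst));
        return 1;
    }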
The purpose will be to store the target address there and not to
allocate a connection just for this anymore. For now it's only placed
in the struct, a few fields were moved to plug some holes, and the
entry is freed on release (never allocated yet for now). This must
have no impact. Note that in order to fit, the store_count which
previously was an int was turned into a short, which is way more
than enough given that the hard-coded limit is 8.
Now that we have dynamically allocated addresses, there's no need to
clear an address before reusing it, just release it. Note that this
is not equivalent to saying that an address is never zero, as shown in
assign_server_address() where an address 0.0.0.0 can still be assigned
to a connection for the time it takes to modify it.
Now addresses are dynamically allocated when needed. Each connection is
created with src=dst=NULL, these entries are allocated on the fly, and
released when the connection is released.
This commit places calls to sockaddr_alloc() at the places where an address
is needed, and makes sure that the allocation is properly tested. This does
not add too many error paths since connection allocations are already in the
vicinity and share the same error paths. In the two places where clear_addr()
used to be called, the address is simply not allocated instead.
This pool will be used to allocate storage for source and destination
addresses used in connections. Two functions sockaddr_{alloc,free}()
were added and will have to be used everywhere an address is needed.
These ones are safe for progressive replacement as they check that the
existing pointer is set before replacing it. The pool is not yet used
during allocation nor freeing. Also they operate on pointers to pointers
so they will perform checks and replace values. The free one nulls the
pointer.
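A sketch matching that description (illustration only: calloc/free stand in
for the new pool):

    #include <stdlib.h>
    #include <sys/socket.h>

    /* Both functions take a pointer to the address pointer so they
     * can check and update it in place, which makes them safe for a
     * progressive replacement of the static fields. */
    struct sockaddr_storage *sockaddr_alloc_sketch(struct sockaddr_storage **sap)
    {
        if (!*sap)
            *sap = calloc(1, sizeof(**sap)); /* real code: the pool */
        return *sap;
    }

    void sockaddr_free_sketch(struct sockaddr_storage **sap)
    {
        free(*sap);   /* real code: back to the pool */
        *sap = NULL;  /* null the pointer, as described above */
    }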
All reads were carefully reviewed for only reading already checked
values. Assignments were commented indicating that an allocation will
be needed once they become dynamic. The memset() used to clear the addresses
should then be turned into a free() and a NULL assignment.
Two places will require a dynamic address allocation since the connection
is created from scratch. For the source address it looks like the
clear_addr() call will simply have to be removed as the pointer will
already be NULL.
Most of the locations were already safe, only two places needed to have
one extra check to avoid assuming that cli_conn->src is necessarily set
(it is in practice but let's stay safe).
In session_accept_fd() we'll soon have to dynamically allocate the
address, or better, steal it from the caller and define a strict calling
convention regarding who's responsible for the freeing. In the simpler
session_prepare_log_prefix(), just add an attempt to retrieve the address
if not yet set and do not dereference it on failure.
No allocation is needed there. Some extra checks were added in the
stream dump code to make sure the source address is effectively valid
(it always is but it doesn't cost much to be certain).
One place (transparent proxy) will require an allocation when the
address becomes dynamic. A few dereferences of the family were adjusted
to preliminary check for the address pointer to exist at all. The
remaining operations were already performed under control of a
successful retrieval.
The target address is duplicated from the peer's configured one. For
now we keep the target address as-is but we'll have to dynamically
allocate it and place it into the stream instead. Maybe a sockaddr_dup()
will help by the way.
The "show peers" part is safe as it's already called after checking
the addresses' validity.
This one will soon need a dynamic allocation, though this will be
temporary as ideally the address will be placed on the stream and no
connection will be allocated anymore.
This is in preparation for the switch to dynamic address allocation,
let's migrate the code using the old fields to the pointers instead.
Note that no extra check was added for now, the purpose is only to
get the code to use the pointers and still work.
In the proxy protocol message handling we make sure the addresses are
properly allocated before declaring them unset.
At the moment we're facing difficulties with connection reuse based on
the fact that connections may be allocated very early only to set a
target address in transparent mode. With the imminent removal of the
legacy mode, the connection reuse by a same stream will not exist
anymore and all this awful complexity is not justified anymore. However
we still need to be able to assign addresses somewhere.
Thus instead of allocating a connection, we'll only place addresses where
needed in the stream during operations. But this takes quite some room
(typically 128 bytes). This is a nice opportunity for cleaning all this
up and dynamically allocating the address fields, which will result
in actually saving memory from connection structs since most of the time
the client's "to" address is not used and the server's "from" is not used
either, thus saving ~256 bytes per end-to-end connection.
For now these new "src" and "dst" pointers point to addr.from and addr.to.
This will allow us to smoothly update the whole code to use these pointers
prior to going further and switching them to pools.
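The transitional layout, sketched (the src/dst/addr names follow the text,
the rest is invented):

    #include <sys/socket.h>

    /* Sketch of the transitional step: the new pointers initially
     * alias the embedded storage so all code can migrate to the
     * pointers before they are switched to pool allocations. */
    struct conn_addrs_sketch {
        struct sockaddr_storage *src; /* points to addr.from for now */
        struct sockaddr_storage *dst; /* points to addr.to for now */
        struct {
            struct sockaddr_storage from;
            struct sockaddr_storage to;
        } addr;
    };

    static void init_addrs_sketch(struct conn_addrs_sketch *c)
    {
        c->src = &c->addr.from;
        c->dst = &c->addr.to;
    }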
These functions are not used anymore. They didn't report failures and
as such were often misused. conn_get_src() and conn_get_dst() now
replaced them everywhere.
In smp_fetch_url32_src() and smp_fetch_base32_src() it's better to
validate that the source address was properly initialized since it
will soon be dynamic, thus let's call conn_get_src().