This adds the "on-marked-down shutdown-sessions" statement on "server" lines,
which causes all sessions established on a server to be killed at once when
the server goes down. The task's priority is reniced to the highest value
(1024) so that servers holding many tasks don't cause a massive slowdown due
to the wakeup storm.
The motivation for this is to allow iteration of all the connections
of a server without the expense of iterating over the global list
of connections.
The first use of this will be to implement an option to close connections
associated with a server when it is marked as being down or in maintenance
mode.
* The declaration of peer_session_create() does not match its definition. As it
  is only used inside of peers.c, make it static.
* Make the declaration of peers_register_table() match its definition.
* Also, make all functions in peers.c that are not also in peers.h static.
gcc (Debian 4.6.0-2) 4.6.1 20110329 (prerelease)
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
...
src/proto_http.c:3029:14: warning: variable ‘del_cl’ set but not used [-Wunused-but-set-variable]
In file included from ebtree/eb64tree.c:23:0:
ebtree/eb64tree.h: In function ‘__eb64_lookup’:
ebtree/eb64tree.h:128:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
ebtree/eb64tree.h: In function ‘__eb64i_lookup’:
ebtree/eb64tree.h:180:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
In file included from ebtree/ebpttree.h:26:0,
from ebtree/ebimtree.c:23:
ebtree/eb64tree.h: In function ‘__eb64_lookup’:
ebtree/eb64tree.h:128:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
ebtree/eb64tree.h: In function ‘__eb64i_lookup’:
ebtree/eb64tree.h:180:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
In file included from ebtree/ebpttree.h:26:0,
from ebtree/ebistree.h:25,
from ebtree/ebistree.c:23:
ebtree/eb64tree.h: In function ‘__eb64_lookup’:
ebtree/eb64tree.h:128:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
ebtree/eb64tree.h: In function ‘__eb64i_lookup’:
ebtree/eb64tree.h:180:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
mysqld >= 5.5 wants the client to announce 4.1+ authentication support, even if we have no password, so we do this.
I also checked on a Debian potato mysqld 3.22 and it works too, so I assume we are good from 3.22 to 5.5.
[WT: this must be backported to 1.4]
The fullconn value is not easy to get right when doing dynamic regulation,
as it should depend on the maxconns of the frontends that can reach a
backend. Since the parameter is mandatory, many configs are found with
an inappropriate default value.
Instead of rejecting configs without a fullconn value, we now set it to
10% of the sum of the configured maxconns of all the frontends which are
likely to branch to the backend. That way, if new frontends are added,
the backend's fullconn automatically adjusts itself.
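As an illustrative sketch of the assumed computation, with two frontends whose
maxconn values sum to 1000, the backend's fullconn would default to 100
(10% of 1000); all names and numbers below are made up:

    frontend www
        bind :80
        maxconn 800
        default_backend app

    frontend api
        bind :8080
        maxconn 200
        default_backend app

    backend app
        # no fullconn set: it defaults to 10% of (800 + 200) = 100
        server srv1 192.168.1.10:80 check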
Bashkim Kasa reported that the stats admin page did not work when colons
were used in server or backend names. This was caused by url-encoding
resulting in ':' being sent as '%3A'. Now we systematically decode the
field names and values to fix this issue.
Since version 1.0.0, it's forbidden to have a cookie specified without at
least one server. This test is useless and makes it complex to write APIs
to iteratively generate working configurations. Remove the test.
It's more expensive to call splice() on short payloads than to use
recv()+send(). One of the reasons is that doing a splice() involves
allocating a pipe. One other reason is that the kernel will have to
copy the data itself if we try to splice less than a page. So let's set a
4kB threshold below which we don't splice.
A quick test shows that on chunked encoded data, with splice we had
6826 syscalls (1715 splice, 3461 recv, 1650 send) while with this
patch, the same transfer resulted in 5793 syscalls (3896 recv, 1897
send).
Fast-forwarding between file descriptors is nice but can be counter-productive
when only one part of the buffer is forwarded, because it can result in doubling
the number of send() syscalls. This is what happens on HTTP chunking, because
the chunk data are sent, then the CRLF + next chunk size are parsed and immediately
scheduled for forwarding. This results in two send() for the same block while a
single one would have done it.
Now that we support the http-no-delay mode, we can optimize HTTP
chunking again by always waiting for more data to come until the
last chunk is met.
This patch may or may not be backported to 1.4, it's not a big deal,
it will mainly help for chunks which are aligned with the buffer size.
There are some very rare server-to-server applications that abuse the HTTP
protocol and expect the payload phase to be highly interactive, with many
interleaved data chunks in both directions within a single request. This is
absolutely not supported by the HTTP specification and will not work across
most proxies or servers. When such applications attempt to do this through
haproxy, it works but they will experience high delays due to the network
optimizations which favor performance by instructing the system to wait for
enough data to be available in order to only send full packets. Typical
delays are around 200 ms per round trip. Note that this only happens with
abnormal uses. Normal uses such as CONNECT requests or WebSockets are not
affected.
When "option http-no-delay" is present in either the frontend or the backend
used by a connection, all such optimizations will be disabled in order to
make the exchanges as fast as possible. Of course this offers no guarantee on
the functionality, as it may break at any other place. But if it works via
HAProxy, it will work as fast as possible. This option should never be used
by default, and should never be used at all unless such a buggy application
is discovered. The impact of using this option is an increase of bandwidth
usage and CPU usage, which may significantly lower performance in high
latency environments.
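For illustration, the option could be enabled only on the backend carrying the
interactive exchanges (names and address are made up):

    backend interactive-app
        option http-no-delay
        server srv1 192.168.1.10:8080 check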
This change should be backported to 1.4 since the first report of such a
misuse was in 1.4. Next patch will also be needed.
The FL_TCP flag was a leftover from the old days we were using TCP_CORK.
With MSG_MORE it's not needed anymore so we can remove the condition and
sensibly simplify the test.
When sending is complete, it's preferred to systematically clear the flags
that were set for that transfer. What could happen is that the to_forward
counter had caused the MSG_MORE flag to be set and BF_EXPECT_MORE not to
be cleared, resulting in this flag being unexpectedly maintained for the next
round.
The code has taken extreme care of not doing this till now, but it's not
acceptable that the caller has to know these precise semantics. So let's
unconditionally clear the flag instead.
For the sake of safety, this fix should be backported to 1.4.
Commit 57f5c1 used to provide a nice improvement on chunked encoding since
it ensured that we did not set a PUSH flag for every chunk or buffer data
part of a chunked transfer.
Some applications appear to erroneously abuse HTTP chunking in order to
get interactive exchanges between a user agent and an origin server with
very small chunks. While it happens to work through haproxy, it's terribly
slow due to the latency added after passing each chunk to the system, which
could wait up to 200ms before pushing them onto the wire.
So we need an interactive mode for such usages. In the meantime, step back
on the optimization, but not completely, so that we still keep the flag as long as
we know we're not finished with the current chunk.
This change should be backported to 1.4 too as the issue was discovered
with it.
This status code is used in response to requests matching "monitor-uri".
Some users need to adjust it to fit their needs (eg: make some strings
appear there). As it's already defined as a chunked string and used
exactly like other status codes, it makes sense to make it configurable
with the usual "errorfile", "errorloc", ...
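A sketch of what this could look like once configurable, assuming status 200 is
accepted by "errorfile" (the file path is hypothetical):

    frontend www
        bind :80
        monitor-uri /health
        errorfile 200 /etc/haproxy/errors/200-monitor.http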
Some people like to make the monitoring URL testable from unsafe locations.
Reporting haproxy's existence there can sometimes be problematic. This patch
should not be backported to 1.4 because it is possible, even though unlikely,
that some scripts rely on this word to appear there.
As reported by Lauri-Alo Adamson, version 1.5-dev6 doesn't support
stick-tables with a binary type.
This issue was introduced in the commit 4f92d32 where a line was erroneously
deleted, and is 1.5-specific.
Mark Brooks reported that commit 1b4b7c broke tproxy in 1.5-dev6. Nick
Chalk tracked the issue down to a missing address family setting in
tcp_bind_socket() which resulted in a failure to use get_addr_len().
This issue is 1.5-specific.
Christopher Blencowe reported that the httpchk_expect() function was
lacking a test for incomplete responses : if the server sends only the
headers in the first packet and the body in a subsequent one, there is
a risk that the check fails without waiting for more data. A failure
rate of about 1% was reported.
This fix must be backported to 1.4.
When doing fix 24581bae02 to correctly handle
response cookies, an unfortunate typo was inserted in the less likely code
path, resulting in a risk of crash when cookie-based persistence is enabled
and the server emits a cookie with several spaces around the equal sign.
This bug was noticed during a code backport. Its effects were never reported
because this situation is very unlikely to appear, but it can be provoked on
purpose by the server.
This patch must be backported to 1.4 versions which contain the fix above
(anything > 1.4.8), and to similar 1.3 versions > 1.3.25. 1.5-dev versions
after 1.5-dev2 are affected too.
John Helliwell reported a runtime issue on Solaris since 1.5-dev5. Traces
show that connect() returns EINVAL, which means the socket length is not
appropriate for the family. Solaris does not like connect() being passed the
size of sockaddr_storage and requires the exact address length for the family in use.
The fix consists in adding a get_addr_len() function which returns the
socket's address length based on its family. Tests show that this works
for both IPv4 and IPv6 addresses.
Since IPv6 is a different type than IPv4, the pattern fetch functions
src6 and dst6 were added. IPv6 stick-tables can also fetch IPv4 addresses
with src and dst. In this case, the IPv4 addresses are mapped to their
IPv6 counterpart, according to RFC 4291.
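A minimal sketch of an IPv6 stick-table using these fetches (backend name,
sizes and address are made up):

    backend app
        stick-table type ipv6 size 200k expire 30m
        # "stick on src" also works: IPv4 entries are mapped per RFC 4291
        stick on src6
        server srv1 192.168.1.10:80 check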
Patch 5ab04ec47c was incomplete,
because if the first send() fails on an empty buffer, we fail
to rearm the polling and we can't establish the connection
anymore.
The issue was reported by Ben Timby who provided large amounts
of traces of various tests helping to reliably reproduce the issue.
Since the latest additions to buffer_forward(), it became too large for
inlining, so let's uninline it. The code size drops by 3kB. Should be
backported to 1.4 too.
Despite much care around handling the content-length as a 64-bit integer,
forwarding was broken on 32-bit platforms due to the 32-bit nature of
the ->to_forward member of the "buffer" struct. The issue is that this
member is declared as a long, so while it works OK on 64-bit platforms,
32-bit truncate the content-length to the lower 32-bits.
One solution could consist in turning to_forward to a long long, but it
is used a lot in the critical path, so it's not acceptable to perform
all buffer size computations on 64-bit there.
The fix consists in changing the to_forward member to a strict 32-bit
integer and ensure in buffer_forward() that only the amount of bytes
that can fit into it is considered. Callers of buffer_forward() are
responsible for checking that their data were taken into account. We
arbitrarily ensure we never consider more than 2G at once.
That's the way it was intended to work on 32-bit platforms except that
it did not.
This issue was tracked down hard at Exosec with Bertrand Jacquin,
Thierry Fournier and Julien Thomas. It remained undetected for a long
time because files larger than 4G are almost always transferred in
chunked-encoded format, and most platforms dealing with huge contents
these days run on 64-bit.
The bug affects all 1.5 and 1.4 versions, and must be backported.
Since we now have the copy of the target in the session, use it instead
of relying on the SI for it. The SI drops the target upon unregister()
so applets such as stats were logged as "NOSRV".
Johannes Smith reported some wrong retries count in logs associated with bad
requests. The cause was that the conn_retries field in the stream interface
was only initialized when attempting to connect, but is used when logging,
possibly with an uninitialized value holding last connection's conn_retries.
This could have been avoided by making use of a stream interface initializer.
This bug is 1.5-specific.
Function gethostbyname is deprecated since IEEE Std 1003.1-2008 and
was replaced by getaddrinfo (available since IEEE Std 1003.1-2004).
Contrary to gethostbyname, getaddrinfo is specified to support both
IPv4 and IPv6 addresses.
Since some libc implementations don't handle getaddrinfo properly, the constant
USE_GETADDRINFO must be defined at compile time to enable the use of
getaddrinfo.
It's always been a mess to debug wrong listening addresses because
the parsing function does not indicate the file and line number. Now
it does. Since the code was almost a duplicate of str2sa_range, it
now makes use of it and has been sensibly reduced.
The parser now distinguishes between pure addresses and address:port. This
is useful for some config items where only an address is required.
Raw IPv6 addresses are now parsed, but IPv6 host name resolution is still not
handled (gethostbyname does not resolve IPv6 names to addresses).
This option enables use of the PROXY protocol with the server, which
allows haproxy to transport original client's address across multiple
architecture layers.
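A minimal sketch, assuming the server-side keyword enabling this is
"send-proxy" (the keyword is not named above, so treat it as an assumption):

    backend behind-haproxy
        server next-hop 192.168.1.20:80 check send-proxy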
Upon connection establishment, stream_sock is now able to send a PROXY
line before sending any data. Since it's possible that the buffer is
already full, and we don't want to allocate a block for that line, we
compute it on-the-fly when we need it. We just store the offset from
which to (re-)send from the end of the line, since it's assumed that
multiple outputs of the same proxy line will be strictly equivalent. In
practice, one call is enough. We just make sure to handle the case where
the first send() would indicate an incomplete output, even though it's
very unlikely to ever happen.
If the check fails for a low-level socket error (eg: address family not
supported), we currently ignore the status. We must report the error and
declare a failed health check in this case. The only real reason for this
would be when an IPv6 check is required on an IPv4-only system.
And also rename "req_acl_rule" to "http_req_rule". At the beginning that
was a bit confusing to me, especially the "req_acl" list which in fact
holds what we call rules. After some digging, it appeared that some
part of the code is 100% HTTP and not just related to authentication
anymore, so let's move that part to HTTP and keep the auth-only code
in auth.c.
Right now, http-request rules are not evaluated if the URL matches the
stats request. This is quite unexpected. For instance, in the config
below, an abuser present in the abusers list will not be prevented access
to the stats.
    listen pub
        bind :8181
        acl abuser src -f abusers.lst
        http-request deny if abuser
        stats uri /stats
It is not a big deal but it's not documented as such either. For 1.5, let's
have both lists be evaluated in turn, until one blocks. For 1.4 we'll simply
update the doc to indicate that.
Also instead of duplicating the code, the patch factors out the list walking
code. The HTTP auth has been moved slightly earlier, because it was set after
the header addition code, but we don't need to add headers to a request we're
dropping.
It's very annoying that frontend and backend stats are merged because we
don't know what we're observing. For instance, if a "listen" instance
makes use of a distinct backend, it's impossible to know what bytes_out
means.
Some points take care of not updating counters twice if the backend points
to the frontend, indicating a "listen" instance. The thing becomes more
complex when we try to add support for server side keep-alive, because we
have to maintain a pointer to the backend used for last request, and to
update its stats. But we can't perform such comparisons anymore because
the counters will not match anymore.
So in order to get rid of this situation, let's have both frontend AND
backend stats in the "struct proxy". We simply update the relevant ones
during activity. Some of them are only accounted for in the backend,
while others are just for frontend. Maybe we can improve a bit on that
later, but the essential part is that those counters now reflect what
they really mean.
This patch turns internal server addresses to sockaddr_storage to
store IPv6 addresses, and makes the connect() function use it. This
code already works but some caveats with getaddrinfo/gethostbyname
still need to be sorted out while the changes had to be merged at
this stage of internal architecture changes. So for now the config
parser will not emit an IPv6 address yet so that user experience
remains unchanged.
This change should have absolutely zero user-visible effect, otherwise
it's a bug introduced during the merge, that should be reported ASAP.
This one has been removed and is now totally superseded by ->target.
To get the server, one must use target_srv(&s->target) instead of
s->srv now.
The function ensures that non-server targets still return NULL.
s->prev_srv is used by assign_server() only, but all code paths leading
to it now take s->prev_srv from the existing s->srv. So assign_server()
can do that copy into its own stack.
If at one point a different srv is needed, we still have a copy of the
last server on which we failed a connection attempt in s->target.
When dealing with HTTP keep-alive, we'll have to know if we can reuse
an existing connection. For that, we'll have to check if the current
connection was made on the exact same target (referenced in the stream
interface).
Thus, we need to first assign the next target to the session, then
copy it to the stream interface upon connect(). Later we'll check for
equivalence between those two operations.
Since all of them are defined as proxy options, it's better to ensure
that at most one of them is enabled at once. The priority has been set
according to what is already performed in the backend :
1) dispatch
2) http_proxy
3) transparent
Till now we used the fact that the dispatch address was not null to use
the dispatch mode. This is very inconvenient, so let's have a dedicated
option.
This is in fact where those parts belong. The old data_state was replaced
by applet.state and is now initialized when the applet is registered. It's
worth noting that the applet does not need to know the session nor the
buffer anymore since everything is brought by the stream interface.
It is possible that having a separate applet struct would simplify the
code but that's not a big deal.
With HTTP keep-alive, logging the right server name will be quite
complex because the assigned server will possibly change before we log.
Also, when we want to log accesses to an applet, it's not easy because
the applet becomes NULL again before logging.
The logged server's name is now taken from the target stored in the
stream interface. That way we can log an applet, a server name, or we
could even log a proxy or anything else if we wanted to. Ideally the
session should contain a desired target which is the one which should
be logged.
Now that we have the target pointer and type in the stream interface,
we don't need the applet.handler pointer anymore. That makes the code
somewhat cleaner because we know we're dealing with an applet by checking
its type instead of checking the pointer is not null.
When doing a connect() on a stream interface, some information is needed
from the server and from the backend. In some situations, we don't have
a server and only a backend (eg: peers). In other cases, we know we have
an applet and we don't want to connect to anything, but we'd still like
to have the info about the applet being used.
For this, we now store a pointer to the "target" into the stream interface.
The target describes what's on the other side before trying to connect. It
can be a server, a proxy or an applet for now. Later we'll probably have
descriptors for multiple-stage chains so that the final information may
still be found.
This will help removing many specific cases in the code. It already made
it possible to remove the "srv" and "be" parameters to tcpv4_connect_server().
I/O handlers are still delicate to manipulate. They have no type, they're
just raw functions which have no knowledge of themselves. Let's have them
declared as applets once for all. That way we can have multiple applets
share the same handler functions and we can store their names there. When
we later need to add more parameters (eg: usage stats), we'll be able to
do so in the applets themselves.
The CLI functions have been prefixed with "cli" instead of "stats" as it's
clearly what is going on there.
The applet descriptor in the stream interface should get all the applet
specific data (st0, ...) but this will be done in the next patch so that
we don't pollute this one too much.
Both Hank A. Paulson and Rob at pixsense reported a crash when
loading ACLs from a pattern file which contains empty lines.
From the tests, it appears that only files that contain nothing
but empty lines are causing that (in the past they would have had
their line feeds loaded as patterns).
The crash happens in the free_pattern() call which doesn't like to
be called with a NULL pattern. Let's make it accept it so that it's
more in line with the standard uses of free() which ignores NULLs.
Similar to the stats socket bug, we must check that the proxy is not disabled
before trying to enable/disable a server.
Even if a disabled proxy is not displayed, someone can inject a faulty proxy
name in the POST parameters. So, we must ensure that no disabled proxy can be
used.
As reported by Bryan Talbot, enabling and disabling a server in a disabled
proxy causes a segfault.
Changing the weight can also cause a similar segfault.
Bryan Talbot reported that POST requests with a query string were not
correctly processed if the hash parameter was the first one, because
the delimiter that was looked for to trigger the parsing was '&' instead
of '?'.
Also, while checking the code, it became apparent that it was enough for
a query string to be present in the request for POST parameters to be
ignored, even if the url_param was in the body and not in the URL.
The code has then been fixed like this :
1) look for URL param. If found, return it.
2) if no URL param was found and the method is POST, then look it up in
   the body
The code now seems to pass all request combinations.
This patch must be backported to 1.4 since 1.4 is equally broken right now.
Till now, the forwarding code was making use of the hdr_content_len member
to hold the size of the last chunk parsed. As such, it was reset after being
scheduled for forwarding. The issue is that this entry was reset before the
data could be viewed by backend.c in order to parse a POST body, so the
"balance url_param check_post" did not work anymore.
In order to fix this, we need two things :
- the chunk size (reset upon every forward)
- the total body size (not reset)
hdr_content_len was thus replaced by the former (hence the size of the patch)
as it makes more sense to have it stored that way than the other way around.
This patch should be backported to 1.4 with care, considering that it affects
the forwarding code.
It seems like if a response message is chunked and the chunk size wraps
at the end of the buffer and the crlf sequence is incomplete, then we
can forward a wrong chunk size due to incorrect handling of the wrapped
size. It seems extremely unlikely to occur on real traffic (no reason to
have half of the CRLF after a chunk) but nothing prevents it from being
possible.
This fix must be backported to 1.4.
As reported by the Loadbalancer.org team, it was not possible to bind
more than 1024 ports. This is because the process' limits were set after
trying to bind the sockets, which defeats their purpose.
This fix must be backported to 1.4 and 1.3.
We used to only count one socket instead of one per listener. This makes
the socket count wrong, preventing us from automatically computing the proper
number of sockets to bind.
This fix must be backported to 1.4 and 1.3.
req_acl was used instead of req_acl_final. As a matter of luck, both
happen to be the same at this point, but this is not granted in the
future.
This fix should be backported to 1.4.
Some browsers send POST requests in several packets, which was not supported
by the "stats admin" function.
This patch allows waiting for more data when it is not fully received
(we are still limited to a certain size defined by the buffer size minus its
reserved space).
It also adds support for the "Expect: 100-Continue" header.
Stefan Behte reported a strange case where depending on the position of
the Connection header in the header list, some headers added after it
were or were not usable in "balance hdr()". The reason is that when the
last header is removed, the list's tail was not updated, so any header
added after that one was not visible from the list.
This fix must be backported to 1.4 and possibly 1.3.
While working further on the changes to allow for dynamic
adding/removing of backend servers we noticed a potential problem: the
path given for the 'stats socket' global option may get truncated when
copying it into the sockaddr_un.sun_path field.
The attached patch checks the length and reports an error if truncation
would happen.
This issue was noticed by Joerg Sonnenberger <joerg@NetBSD.org>.
src/frontend.c: In function 'frontend_accept':
src/frontend.c:110: warning: pointer targets in passing argument 5 of 'getsockopt' differ in signedness
The argument should be socklen_t and not int.
I have written a small patch to enable a correct PostgreSQL health check.
It works similarly to mysql-check with the very same parameters.
E.g.:
    listen pgsql 127.0.0.1:5432
        mode tcp
        option pgsql-check user pgsql
        server masterdb pgsql.server.com:5432 check inter 10000
It's better to avoid sticking on empty parameter values, as this almost
always indicates a missing parameter. Otherwise it's easy to enter a
situation where all new visitors stick to the same server.
Revert commits 035da6d1b0 and
f18b5f21ba.
These fixes were wrong. They worked but they were fixing the symptom
instead of the root cause of the problem. The real issue was in the
ebtree lookup code and it has been fixed now so these patches are not
needed anymore. It's better not to copy memory blocks when we don't
need to, so let's revert them.
Commit 035da6d1b0 was incorrect as it
could modify a live buffer. We must first ensure that we're on the
private buffer or perform a copy before modifying the data.
Gabriel Sosa reported that haproxy unexpectedly reports an error
when a pattern file loaded by an ACL contains an empty line. The
test was present but ineffective as it did not consider the '\n'
as the end of the line. This fix relies on the line length instead.
It should be backported to 1.4.
If a key to be looked up is extracted from data without being padded
and if it matches the beginning of another stored key, it is not
found in subsequent lookups because it does not end with a zero.
This bug was discovered and diagnosed by David Cournapeau.
One of the requirements we have is to run multiple instances of haproxy on a
single host; this is so that we can split the responsibilities (and change
permissions) between product teams. An issue we ran up against is how we
would distinguish between the logs generated by each instance. The solution
we came up with (please let me know if there is a better way) is to override
the application tag written to syslog. We can then configure syslog to write
these to different files.
I have attached a patch adding a global option 'log-tag' to override the
default syslog tag 'haproxy' (actually defaults to argv[0]).
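A sketch of how two instances could then be told apart (addresses and tag
names are made up):

    # instance for team A
    global
        log 127.0.0.1:514 local0
        log-tag haproxy-team-a

    # instance for team B
    global
        log 127.0.0.1:514 local0
        log-tag haproxy-team-b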
By passing a negative value to the "mss" argument of "bind" lines, it
becomes possible to subtract this value from the MSS advertised by the
client, which results in segments smaller than advertised. The effect
is useful with some TCP stacks which ACK less often when segments are
not full, because they only ACK every other full segment as suggested
by RFC1122.
NOTE: currently this has no effect on Linux kernel 2.6, a kernel patch
is still required to change the MSS of established connections.
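For illustration, assuming the value is simply passed as a negative number to
the existing "mss" bind argument (names and address are made up):

    listen www
        bind :80 mss -40
        server srv1 192.168.1.10:80 check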
Haproxy includes the IP of the machine rather than its hostname in
the syslog headers it sends. Unfortunately this means that for each log
line rsyslog does a reverse dns on the client IP and in the case of
non-routable IPs one gets the public hostname not the internal one.
While this is valid according to RFC3164, as one might imagine this is
troublesome if you have some machines with public IPs, internal IPs, no
reverse DNS entries, etc and you want a standardized hostname based log
directory structure. The rfc says the preferred value is the hostname.
This patch adds a global "log-send-hostname" statement which accepts an
optional string to force the host name. If unset, the local host name
is used.
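A small sketch of both forms (the forced name is made up):

    global
        log 127.0.0.1:514 local0
        # use the local host name:
        log-send-hostname
        # or force a specific name instead:
        # log-send-hostname lb1.internal.example.com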
Since haproxy 1.4.9, combining option httpclose and option
http-pretend-keepalive can leave the connections open until the backend
keep-alive timeout is reached, resulting in poor performance.
The same can occur when the proxy is in tunnel mode.
This patch ensures that the server side connection is closed after the
response and that http-pretend-keepalive is ignored in tunnel mode.
When a connection error is encountered on a server and the server's
connection pool is full, pending connections are not woken up because
the current connection is still accounted for on the server, so it
still appears full. This becomes visible on a server which has
"maxconn 1" because the pending connections will only be able to
expire in the queue.
Now we take care of releasing our current connection before trying to
offer it to another pending request, so that the server can accept a
next connection.
This patch should be backported to 1.4.
When a client connection aborts while the server-side connection is in
turn-around after a failed connection attempt, the turn-around timeout
is reset in shutw() but the state is not changed. The session then
remains stuck in this state forever. Change the QUE and TAR states to
DIS just as we do for CER to fix this.
This patch should be backported to 1.4.
We've had several issues related to data transfers. First, if a
client aborted an upload before the server started to respond, it
would get a 502 followed by a 400. The same was true (in the other
way around) if the server suddenly aborted while the client was
uploading the data.
The flags reported in the logs were misleading. Request errors could
be reported while the transfer was stopped during the data phase. The
status codes could also be overwritten by a 400 even though the start
of the response was transferred to the client.
The stats were also wrong in case of data aborts. The server or the
client could sometimes be miscredited for being the author of the
abort depending on where the abort was detected. Some client aborts
could also be accounted as request errors and some server aborts as
response errors.
Now it seems like all such issues are fixed. Since we don't have a
specific state for data flowing from the client to the server
before the server responds, we're still counting the client aborted
transfers as "CH", and they become "CD" when the server starts to
respond. Ideally a "P" state would be desired.
This patch should be backported to 1.4.
HTTP pipelining currently needs to monitor the response buffer to wait
for some free space to be able to send a response. It was not possible
for the HTTP analyser to be called based on response buffer activity.
Now we introduce a new buffer flag BF_WAKE_ONCE which is set when the
HTTP request analyser is set on the response buffer and some activity
is detected. This is not clean at all but one of the only ways to fix
the issue before we make it possible to register events for analysers.
Also it appeared that one realign condition did not cover all cases.
When using haproxy in multi-process mode (nbproc > 1), some features may not be
fully compatible or may not work at all. haproxy will now display a warning on
startup for :
- appsession
- sticking rules
- stats / stats admin
- stats socket
- peers (fatal error in that case)
This counter will help quickly spot whether there are new errors or not.
It is also assigned to each capture so that a script can keep track of
which capture was taken when.
It is possible to block on incorrectly chunked requests or responses,
but this becomes very hard to debug when it happens once in a while.
This patch adds the ability to also capture incorrectly chunked requests
and responses. The chunk will appear in the error buffer and will be
verifiable with the usual "show errors". The incorrect byte will match
the error location.
Error captures did only support contiguous messages. This is annoying
for capturing chunking errors, so let's ensure the function is able to
copy wrapped messages.
When an error message is returned to a client, all buffer contents
were left intact. Since the analysers were removed, the potentially
invalid data that were read had a chance to be sent too.
Now we ensure we only keep the already scheduled data in the buffer
and we truncate it after that. That means that responses with data
that must be blocked will really be blocked, and that incorrectly
chunked data will be stopped at the point where the chunking fails.
When haproxy parses chunk-encoded data that are scheduled to be sent, it is
possible that the other end is closed (mainly due to a client abort returning
as an error). The message state thus changes to HTTP_MSG_ERROR and the error
is reported as a chunk parsing error ("PD--") while it is not. Detect this
case before setting the flags and set the appropriate flag in this case.
Debugging parsing errors can be greatly improved if we know what the parser
state was and what the buffer flags were (especially for closed inputs/outputs
and full buffers). Let's add that to the error snapshots.
When forwarding chunk-encoded data, each chunk gets a TCP PUSH flag when
going onto the wire simply because the send() function does not know that
some data remain after it (next chunk). Now we set the BF_EXPECT_MORE flag
on the buffer if the chunk size is not null. That way we can reduce the
number of packets sent, which is particularly noticeable when forwarding
compressed data, especially as it requires less ACKs from the client.
When the number of servers is a multiple of the size of the input set,
map-based hash can be inefficient. This typically happens with 64
servers when doing URI hashing. The "avalanche" hash-type applies an
avalanche hash before performing a map lookup in order to smooth the
distribution. The result is slightly less smooth than the map for small
numbers of servers, but still better than the consistent hashing.
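A sketch assuming the keyword is accepted directly as a "hash-type" value (the
exact syntax is not shown above, so treat it as an assumption; names and
addresses are made up):

    backend static
        balance uri
        hash-type avalanche
        server srv1 192.168.1.10:80 check
        server srv2 192.168.1.11:80 check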
We'll use this hash at other places, let's make it globally available.
The function has also been renamed because its "chash_hash" name was
not appropriate.
When a header is removed, the previous header's next pointer is updated
to reflect the next of the current header. However, when cycling through
the loop, we update the prev pointer to point to the deleted header, which
means that if we delete another header, it's the deleted header's next
pointer that will be updated, leaving the deleted header in the list with
a null length, which is forbidden.
We must just not update the prev pointer after a removal.
This bug was present when either "reqdel" or "rspdel" removed two consecutive
headers. It could also occur when removing cookies in either requests or
responses, but since headers were the last header processing, the issue
remained unnoticed.
Issue reported by Hank A. Paulson.
This fix must be ported to 1.4 and possibly 1.3.
Cookies in indirect mode are removed from the cookie header. Three pointers
ought to be updated when appsession cookies are processed next, but were not.
The result is that a memcpy() can be called with a negative value causing the
process to crash. It is not known whether this can be remotely exploited or not.
(cherry picked from commit c5f3749aa3ccfdebc4992854ea79823d26f66213)
In out of memory conditions, the ->destroy function would free all
possibly allocated pools from the current appsession, including those
that were not yet allocated nor assigned, which used to point to a
previous allocation, obviously resulting in a segfault.
(cherry picked from commit 75eae485921d3a6ce197915c769673834ecbfa5c)
In case of out of memory, it was possible to write to a null pointer
when capturing response cookies due to a missing "else" block. The
request handling was fine though.
(cherry picked from commit 62e3604d7dd27741c0b4c9e27d9e7c73495dfc32)
When running with -vv or -V -d, the list of usable polling systems
is reported. The final selection did not take into account the
possible failures during the tests, which is misleading and could
make one think that a non-working poller will be used, while it is
not the case. Fix that to really report the correct ones.
(cherry picked from commit 6d0e354e0171f08b7b3868ad2882c3663bd068a7)
Since unix sockets are supported for bind, the default backlog size was not
enough to accept the traffic. The size is now inherited from the listener
to behave like the tcp listeners.
This also affects the "stats socket" backlog, which is now determined by
"stats maxconn".
Some distros' libc are built for CPUs earlier than i686 and as such do
not offer support for Linux kernel's faster vsyscalls. This code adds
a new build option USE_VSYSCALLS to bypass libc for most commonly used
system calls. A net gain of about 10% can be observed with this change
alone.
It only works when /proc/sys/abi/vsyscall32 equals exactly 2. When it's
set to 1, the VDSO is randomized and cannot be used.
Analysers were re-evaluated when some flags were still present in the
buffers, even if they had not changed since previous pass, resulting
in a waste of CPU cycles.
Ensuring that the flags have changed has saved some useless calls :
  function                      min calls per session (before -> after)
  http_request_forward_body       5 -> 4
  http_response_forward_body      3 -> 2
  http_sync_req_state            10 -> 8
  http_sync_res_state             8 -> 6
  http_resync_states              8 -> 6
The stream_sock's accept() used to close the FD upon error, but this
was also sometimes performed by the frontend's accept() called via the
session's accept(). Those interlaced calls were also responsible for the
spaghetti-looking error unrolling code in session.c and stream_sock.c.
Now the frontend must not close the FD anymore, the session is responsible
for that. It also takes care of just closing the FD or also removing from
the FD lists, depending on its state. The socket-level accept() does not
have to care about that anymore.
Some Alert() messages were remaining in the accept() path, where they
would have no chance to be detected. Remove some of them (the impossible
ones) and replace the relevant ones with send_log() so that the admin
has a chance to catch them.
Enhance pattern convs and fetch argument parsing; fetch and conv callbacks now use typed args.
Add more details to the error messages of the pattern expression parsing function.
Update existing pattern convs and fetches to the new prototype.
Create stick table key type "binary".
Manage truncation and padding if the pattern's fetch-converted result doesn't match the table key size.
If a read shutdown is encountered on the first packet of a connection
right after the data and the last analyser is unplugged at the same
time, then that last data chunk may never be forwarded. In practice,
right now it cannot happen on requests due to the way they're scheduled,
nor can it happen on responses due to the way their analysers work.
But this behaviour has been observed with new response analysers being
developed.
The reason is that when the read shutdown is encountered and an analyser
is present, data cannot be forwarded but the BF_SHUTW_NOW flag is set.
After that, the analyser gets called and unplugs itself, hoping that
process_session() will automatically forward the data. This does not
happen due to BF_SHUTW_NOW.
Simply removing the test on this flag is not enough because then aborted
requests still get forwarded, due to the forwarding code undoing the
abort.
The solution here consists in checking BF_SHUTR_NOW instead of BF_SHUTW_NOW.
BF_SHUTR_NOW is only set on aborts and remains set until ->shutr() is called.
This is enough to catch recent aborts but not prevent forwarding in other
cases. Maybe a new special buffer flag "BF_ABORT" might be desirable in the
future.
This patch does not need to be backported because older versions don't
have the analyser which makes the problem appear.
Some options depend on the target architecture or compilation options.
When such an option is used on a compiled version that doesn't support it,
it's probably better to identify it as an unsupported option due to
compilation options instead of an unknown option.
Edit: better check on the empty capability than on the option bits. -Willy
There were a lot of snprintf() everywhere in the UNIX bind code. Now we
proceed as for tcp and indicate the socket path at the end between square
brackets. The code is smaller and more readable.
Add the address and port to the error message of the proxy socket that caused
the error. This can be helpful when several listening addresses are used in a
proxy.
Edit: since we now also support unix sockets (which already report their
path), better move the address reporting to proto_tcp.c by analogy.
-Willy
MAXPATHLEN may be used at other places, so it's inconvenient to have it
redefined in a few files. Also, since checking it requires including
sys/param.h, some versions of it cause a macro declaration conflict
with MIN/MAX which are defined in tools.h. The solution consists in
including sys/param.h in both files so that we ensure it's loaded
before the macros are defined and MAXPATHLEN is checked.
The introduction of a new PROXY protocol for proxied connections requires
an early analyser to decode the incoming connection and set the session
flags accordingly.
Some more work is needed, among which setting a flag on the session to
indicate it's proxied, and copying the original parameters for later
comparisons with new ACLs (eg: real_src, ...).
inetaddr_host_lim_ret() used to make use of const char** for some
args, but that makes it impossible to use char** due to the way
gcc performs its type checks. So let's change that.
This option makes haproxy preserve any persistence cookie emitted by
the server, which allows the server to change it or to unset it, for
instance, after a logout request.
(cherry picked from commit 52e6d75374c7900c1fe691c5633b4ae029cae8d5)
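A minimal sketch, assuming the option is a "preserve" keyword on the "cookie"
line (the keyword is not named above, so treat it as an assumption):

    backend app
        cookie SESSIONID insert indirect preserve
        server srv1 192.168.1.10:80 check cookie s1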
When a backend defines a new cookie, it forgot to unset any params
that could have been set in a defaults section, resulting in configs
that would sometimes refuse to load or not work as expected.
(cherry picked from commit f80bf174ed905a29a3ed8ee91fcd528da6df174f)
This match returns true when the request calling it is the first one of
a connection.
(cherry picked from commit 922ca979c50653c415852531f36fe409190ad76b)
Pattern fetches relying on destination address must first fetch
the address if it has not been done yet.
(cherry picked from commit 21abf441feb318b2ccd7df590fd89e9e824627f6)
The MySQL check has been revamped to be able to send real MySQL data,
and to avoid Aborted connects on MySQL side.
It is however backward compatible with older versions, but it is highly
recommended to use the new mode, by adding "user <username>" on the
"mysql-check" line.
The new check consists in sending two MySQL packets, one Client
Authentication packet, with "haproxy" username (by default), and one
QUIT packet, to correctly close MySQL session. We then parse the Mysql
Handshake Initialisation packet and/or Error packet. It is a basic but
useful test which does not produce error nor aborted connect on the
server.
(cherry picked from commit a1e4dcfe5718311b7653d7dabfad65c005d0439b)
Health checks were all pure ASCII, but we're going to have to support some
binary checks (eg: SQL). When they're inherited from the default section,
they will be truncated to the first \0 due to strdup(). Let's fix that with
a simple malloc.
(cherry picked from commit 98fc04a766bcff80f57db2b1cd865c91761b131b)
Keywords were changed just before the commit but not in the help message.
Spotted by Hank A. Paulson.
(cherry picked from commit fdd46a0766dccec704aa1bd5acb0ac99a801c549)
When we're enabling a server again (unix CLI or stats interface), we must not mark
it completely up because it can take a while before a failure is detected. So we
mark it one step above failure, which means it's up but will be marked down upon
first failure.
(cherry picked from commit 83c3e06452457ed5660fc814cbda5bf878bf19a2)
The stats web interface must be read-only by default to prevent security
holes. As it is now allowed to enable/disable servers, a new keyword
"stats admin" is introduced to activate this admin level, conditioned by ACLs.
(cherry picked from commit 5334bab92ca7debe36df69983c19c21b6dc63f78)
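A small sketch of conditioning the admin level with an ACL (names and
addresses are made up):

    listen stats-page :8404
        mode http
        stats enable
        stats uri /stats
        acl local_net src 192.168.0.0/16
        stats admin if local_net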
Based on a patch provided by Judd Montgomery, it is now possible to
enable/disable servers from the stats web interface. This allows selecting
several servers in a backend and applying the action to them at the same time.
Currently, there are 2 known limitations :
- The POST data are limited to one packet
(don't alter too many servers at a time).
- Expect: 100-continue is not supported.
(cherry picked from commit 7693948766cb5647ac03b48e782cfee2b1f14491)
In a down backend, when a zero-weight server is lost, a new
"backend down" message was emitted and the down transition of that
backend was wrongly increased. This change ensures that we don't
count that transition again.
This patch should be backported to 1.3.
(cherry picked from commit 60efc5f745b5fa70d811f977727592e47e32a281)
If a maxidle or maxlife parameter is set on the persistence cookie in
insert mode and the client did not provide a recent enough cookie,
then we emit a new cookie with a new last_seen date and the same
first_seen (if maxlife is set). Recent enough here designates a
cookie that would be rounded to the same date. That way, we can
refresh a cookie when required without doing it in all responses.
If the request did not contain such parameters, they are set anyway.
This means that a monitoring request that is forced to a server will
get an expiration date anyway, but this should not be a problem given
that the client is able to set its cookie in this case. This also
permits to force an expiration date on visitors who previously did
not have one.
If a request comes with a dated cookie while no date check is performed,
then a new cookie is emitted with no date, so that we don't risk dropping
the user too fast due to a very old date when we re-enable the date check.
All requests that were targeting the correct server and which had their
expiration date added/updated/removed in the response cookie are logged
with the 'U' ("updated") flag instead of the 'I' ("inserted"). So very
often we'll see "VU" instead of "VN".
(cherry picked from commit 8b3c6ecab6d37be5f3655bc3a2d2c0f9f37325eb)
If a cookie comes in with a first or last date, and they are configured on
the backend, they're checked. If a date is expired or too far in the future,
then the cookie is ignored and the specific reason appears in the cookie
field of the logs.
(cherry picked from commit faa3019107eabe6b3ab76ffec9754f2f31aa24c6)
These functions only require 5 chars to encode 30 bits, and don't expect
any padding. They will be used to encode dates in cookies.
(cherry picked from commit a7e2b5fc4612994c7b13bcb103a4a2c3ecd6438a)
The set-cookie status flags were not very handy and limited. Reorder
them to save some room for additional values and add the "U" flags
(for Updated expiration date) that will be used with expirable cookies
in insert mode.
(cherry picked from commit 5bab52f821bb0fa99fc48ad1b400769e66196ece)
In all cookie persistence modes but prefix, we now support cookies whose
value is suffixed with some contents after a vertical bar ('|'). This will
be used to pass an optional expiration date. So as of now we only consider
the part of the cookie value which is used before the vertical bar.
(cherry picked from commit a4486bf4e5b03b5a980d03fef799f6407b2c992d)
Add two new arguments to the "cookie" keyword, to be able to
fix a max idle and max life on them. Right now only the parameter
parsing is implemented.
(cherry picked from commit 9ad5dec4c3bb8f29129f292cb22d3fc495fcc98a)
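For illustration, the two arguments could later be combined on an insert-mode
cookie like this (names, address and durations are made up):

    backend app
        cookie SRV insert indirect nocache maxidle 30m maxlife 8h
        server srv1 192.168.1.10:80 check cookie s1
        server srv2 192.168.1.11:80 check cookie s2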
HTTP content-based health checks will be involved in searching text in pages.
Some pages may not fit in the default buffer (16kB) and sometimes it might be
desired to have larger buffers in order to find patterns. Running checks on
smaller URIs is always preferred of course.
(cherry picked from commit 043f44aeb835f3d0b57626c4276581a73600b6b1)
This patch adds the "http-check expect [r]{string,status}" statements
which enable health checks based on whether the response status or body
to an HTTP request contains a string or matches a regex.
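Two hedged examples of the statement described above (names, address and URI
are made up):

    backend app
        option httpchk GET /health
        http-check expect status 200
        # or, matching a regex against the response body:
        # http-check expect rstring (OK|UP)
        server srv1 192.168.1.10:80 check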
This probably is one of the oldest patches that remained unmerged. Over
the time, several people have contributed to it, among which FinalBSD
(first and second implementations), Nick Chalk (port to 1.4), Anze
Skerlavaj (tests and fixes), Cyril Bonté (general fixes), and of course
myself for the final fixes and doc during integration.
Some people already use an old version of this patch which has several
issues, among which the inability to search for a plain string that is
not at the beginning of the data, and the inability to look for response
contents that are provided in a second and subsequent recv() calls. But
since some configs are already deployed, it was quite important to ensure
a 100% compatible behaviour on the working cases.
Thus, that patch fixes the issues while maintaining config compatibility
with already deployed versions.
(cherry picked from commit b507c43a3ce9a8e8e4b770e52e4edc20cba4c37f)
This patch provides a new "option ldap-check" statement to enable
server health checks based on LDAPv3 bind requests.
(cherry picked from commit b76b44c6fed8a7ba6f0f565dd72a9cb77aaeca7c)
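A minimal sketch (names and address are made up):

    listen ldap 0.0.0.0:389
        mode tcp
        option ldap-check
        server ldap1 192.168.1.30:389 check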
Some configs may involve httpclose in a frontend and http-pretend-keepalive
in a backend. httpclose used to take priority over keepalive, thus voiding
its effect. This change ensures that when both are combined, keepalive is
still announced to the server while close is announced to the client.
(cherry picked from commit 2be7ec90fa9caf66294f446423bbab2d00db9004)
Some broken browsers still happen to send a CRLF after a POST. Those which
send a CRLF in a second packet have it queued into the system's buffers,
which causes an RST to be emitted by some systems upon close of the response
(eg: Linux). The client may then receive the RST without the last response
segments, resulting in a truncated response.
This change leaves request polling enabled on a POST so that we can flush
any late data from the request buffers.
A more complete workaround would consist in reading from the request for a
long time, until we get confirmation that the close has been ACKed. This
is much more complex and should only be studied for newer versions.
(cherry picked from commit 12e316af4f0245fde12dbc224ebe33c8fea806b2)
Jozsef R.Nagy reported a reliability issue on FreeBSD. Sometimes an error
would be emitted, reporting the inability to switch a socket to non-blocking
mode and the listener would definitely not accept anything. Cyril Bont
narrowed this bug down to the call to EV_FD_CLR(l->fd, DIR_RD).
He was right because this call is wrong. It only disables input events on
the listening socket, without setting the listener to the LI_LISTEN state,
so any subsequent call to enable_listener() from maintain_proxies() is
ignored ! The correct fix consists in calling disable_listener() instead.
It is debatable whether we should keep such an error path or just ignore the
event. The goal in earlier versions was to temporarily disable new activity
in order to let the system recover while releasing resources.
There was no consistency between all the functions used to exchange data
between a buffer and a stream interface. Also, the functions used to send
data to a buffer did not consider the possibility that the buffer was
shutdown for read.
Now the functions are called buffer_{put,get}_{char,block,chunk,string}.
The old buffer_feed* functions have been left available for existing code
but marked deprecated.
si->release() was called each time we closed one direction of a stream
interface, while it should only have been called when both sides are
closed. This bug is specific to 1.5 and only affects embedded tasks.
In deinit(), it is possible that we first free the listeners, then
unbind them all. Right now this situation can't happen because the
only way to call deinit() is to pass via a soft-stop which will
already unbind all protocols. But later this might become a problem.
This patch addresses exactly the same issues as the previous one, but
for responses this time. It also introduces implicit support for the
Set-Cookie2 header, for which there's almost nothing specific to do
since it is a clean header. This one allows multiple cookies in a
same header, by respecting the HTTP messaging semantics.
The new parser has been tested with insertion, rewrite, passive,
removal, prefixing and captures, and it looks OK. It's still able
to rewrite (or delete) multiple cookies at once. Just as with the
request parser, it tries hard to fix the formatting of the cookies it
displaces.
This patch too should be backported to 1.4 and possibly to 1.3.
The request cookie parser did not allow spaces to appear in cookie
values nor around the equal sign. The various RFCs on the subject
say different things, some suggesting that a space is allowed after
the equal sign and being worded in a way that lets one believe it
is allowed before too. Some spaces may appear inside values and be
part of the values. The quotes allow delimiters to be embedded in
values. The spaces before and after attributes should be trimmed.
The new parser addresses all those points and has been carefully tested.
It fixes misplaced spaces around equal signs before processing the cookies
or forwarding them. It also tries its best to perform clean removals by
always keeping the delimiter after the value being removed and leaving one
space after it.
The variables inside the parser have been renamed to make the code a lot
more understandable, and one multi-function pointer has been eliminated.
Since this patch fixes real possible issues, it should be backported to 1.4
and possibly 1.3, since one (single) case of wrong spaces has been reported
in 1.3.
The code handling the Set-Cookie has not been touched yet.
This counter is incremented for each incoming connection and each active
listener, and is used to prevent haproxy from stopping upon SIGUSR1. It
will thus be possible for some tasks to increment this counter in order
to prevent haproxy from dying until they have completed their job.
The header parser has a bug which causes commas to be matched within
quotes while it was not expected. The way the code was written could
make one think it was OK. The resulting effect is that the following
config would use the second IP address instead of the third when facing
this request :
    source 0.0.0.0 usesrc hdr_ip(X-Forwarded-For,2)

    GET / HTTP/1.0
    X-Forwarded-for: "127.0.0.1, 127.0.0.2", 127.0.0.3
This fix must be backported to 1.4 and 1.3.
Fix 4fe4190278 was a bit too strong. It
has caused some chunked-encoded responses to be truncated when a recv()
call could return multiple chunks followed by a close. The reason is
that when a chunk is parsed, only its contents are scheduled to be
forwarded. Thus, the reader sees auto_close+shutr and sets shutw_now.
The sender in turn sends the last scheduled data and does shutw().
Another nasty effect is that it has reduced the keep-alive rate. If
a response did not completely fit into the buffer, then the auto_close
bit was left on and the sender would close upon completion.
The fix consists in not making use of auto_close when chunked encoding
is used nor when keep-alive is used, which makes sense. However it is
maintained on error processing.
Thanks to Cyril Bonté for reporting the issue early.
Signal zero is never delivered by the system. However having a signal to
which functions and tasks can subscribe to be notified of a stopping event
is useful. So this patch does two things :
1) allow signal zero to be delivered from any function or signal handler
2) make soft_stop() deliver this signal so that tasks can be notified of
a stopping condition.
The two new functions below make it possible to register any number
of functions or tasks to a system signal. They will be called in the
registration order when the signal is received.
struct sig_handler *signal_register_fct(int sig, void (*fct)(struct sig_handler *), int arg);
struct sig_handler *signal_register_task(int sig, struct task *task, int reason);
In case of binding failure during startup, we wait for some time sending
signals to old pids so that they release the ports we need. But if there
aren't any old pids anymore, it's useless to wait, we prefer to fail fast.
Along with this change, we now have the number of old pids really found
in the nb_oldpids variable.
If the global stats timeout statement was found before the stats socket
(or without), the parser would crash because the stats frontend was not
initialized. Now we have an allocation function which solves the issue.
This bug was introduced with 1.4 so it does not need backporting.
(was commit 1c5819d2498ae3643c3880507847f948a53d2773 in 1.4)
If a server is disabled or tracking a disabled server, it must not
dequeue requests pending in the proxy queue, it must only dequeue
its own ones.
The problem that was caused is that if a backend always had requests
in its queue, a disabled server would continue to take traffic forever.
(was commit 09d02aaf02d1f21c0c02672888f3a36a14bdd299 in 1.4)
The statistics page (the HTML one) displays a garbage value on frontends using
"rate-limit sessions" in HTTP mode.
This is due to the usage of the same buffer for the macros converting the max
session rate and the limit.
Steps to reproduce :
Configuration file example :
listen bug :80
mode http
rate-limit sessions
stats enable
Then start refreshing the statistics page.
This bug was introduced just before the release of haproxy 1.4.0.
(was commit 6cfaf9e91969c87a9eab1d58a15d2d0a3f346c9b in 1.4)
While it's usually desired to wait for a server response even
when the client closes its request channel, it can be problematic
with long polling requests. In order to let the server decide what
to do in such a case, if option abortonclose is set, we simply
forward the shutdown to the server. That way, it can decide to
take the appropriate action. Most servers will still process the
request, while some will probably want to abort.
Obviously, this only works as long as the client has not sent
another pipelined request over the same connection.
(was commit 0e25d86da49827ff6aa3c94132c01292b5ba4854 in 1.4)
In case of HTTP keepalive processing, we want to release the counters tracked
by the backend. Till now only the second set of counters was released, while
it could have been assigned by the frontend, or the backend could also have
assigned the first set. Now we reuse two unused bits of the session flags to
mark which stick counters were assigned by the backend and to release them as
appropriate.
The assumption that there was a 1:1 relation between tracked counters and
the frontend/backend role was wrong. It is perfectly possible to track the
track-fe-counters from the backend and the track-be-counters from the
frontend. Thus, in order to reduce confusion, let's remove this useless
{fe,be} reference and simply use {1,2} instead. The keywords have also been
renamed in order to limit confusion. The ACL rule action now becomes
"track-sc{1,2}". The ACLs are now "sc{1,2}_*" instead of "trk{fe,be}_*".
That means that we can reasonably document "sc1" and "sc2" (sticky counters
1 and 2) as sort of patterns that are available during the whole session's
life and use them just like any other pattern.
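As an illustration only (proxy names, addresses and thresholds below are
made up, and the exact syntax should be checked against the documentation
of the version in use), tracking both sets could look like this :
  frontend www
      bind :8080
      stick-table type ip size 200k expire 1m store conn_rate(5s)
      tcp-request connection track-sc1 src
      tcp-request connection reject if { sc1_conn_rate gt 100 }
      default_backend app
  backend app
      stick-table type ip size 200k expire 1m store conn_cur
      tcp-request content track-sc2 src
      server srv1 192.168.0.1:8080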
It began to be problematic to have "tcp-request" followed by an
immediate action, as sometimes it was a keyword indicating a hook
or setting ("content" or "inspect-delay") and sometimes it was an
action.
Now the prefix for connection-level tcp-requests is "tcp-request connection"
and the ones processing contents remain "tcp-request content".
This has allowed a nice simplification of the config parser and to
clean up the doc a bit. Also it's now a bit more clear why tcp-request
connection rules are not allowed in backends.
Doing so allows us to track counters from backends or depending on contents.
For instance, it now becomes possible to decide to track a connection based
on a Host header if enough time is granted to parse the HTTP request. It is
also possible to just track frontend counters in the frontend and unconditionally
track backend counters in the backend without having to write complex rules.
The first track-fe-counters rule executed is used to track counters for
the frontend, and the first track-be-counters rule executed is used to track
counters for the backend. Nothing prevents a frontend from setting a track-be
rule nor a backend from setting a track-fe rule. In fact these rules are
arbitrarily split between FE and BE with no dependencies.
Having a single tracking pointer for both frontend and backend counters
does not work. Instead let's have one for each. The keyword has changed
to "track-be-counters" and "track-fe-counters", and the ACL "trk_*"
changed to "trkfe_*" and "trkbe_*".
It is now possible to dump only selected table entries based on criteria
which apply to the stored data. This is enabled by appending the following
options to the end of the "show table" statement :
data.<data_type> {eq|ne|lt|gt|le|ge} <value>
For instance :
show table http_proxy data.conn_rate gt 5
show table http_proxy data.gpc0 ne 0
The comparison applies to the integer value as it would be displayed, and
operates on signed long long integers.
It's a bit cumbersome to have to know all possible storable types
from the stats interface. Instead, let's have generic types for
all data, which will facilitate their manipulation.
This feature will be required at some point, when the stick tables are
used to enforce security measures. For instance, some visitors may be
incorrectly flagged as abusers and would ask the site admins to remove
their entry from the table.
It is now possible to dump a table's contents with keys, expire,
use count, and various data using the command above on the stats
socket.
"show table" only shows main table stats, while "show table <name>"
dumps table contents, only if the socket level is admin.
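For example, assuming the stats socket was bound with "level admin" at the
hypothetical path below, the contents of table "http_proxy" can be dumped
from a shell with :
  echo "show table http_proxy" | socat stdio unix-connect:/var/run/haproxy.sock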
This patch adds support for the following session counters :
- http_req_cnt : HTTP request count
- http_req_rate: HTTP request rate
- http_err_cnt : HTTP request error count
- http_err_rate: HTTP request error rate
The equivalent ACLs have been added to check the tracked counters
for the current session or the counters of the current source.
This counter may be used to track anything. Two sets of ACLs are available
to manage it, one gets its value, and the other one increments its value
and returns it. In the second case, the entry is created if it did not
exist.
Thus it is possible for example to mark a source as being an abuser and
to keep it marked until the source stays silent long enough for the entry
to expire :
# The rules below use gpc0 to track abusers, and reject them if
# a source has been marked as such. The track-counters statement
# automatically refreshes the entry which will not expire until a
# 1-minute silence is respected from the source. The second rule
# evaluates the second part if the first one is true, so GPC0 will
# be increased once the conn_rate is above 100/5s.
stick-table type ip size 200k expire 1m store conn_rate(5s),gpc0
tcp-request track-counters src
tcp-request reject if { trk_get_gpc0 gt 0 }
tcp-request reject if { trk_conn_rate gt 100 } { trk_inc_gpc0 gt 0}
Alternatively, it is possible to let the entry expire even in presence of
traffic by swapping the check for gpc0 and the track-counters statement :
stick-table type ip size 200k expire 1m store conn_rate(5s),gpc0
tcp-request reject if { src_get_gpc0 gt 0 }
tcp-request track-counters src
tcp-request reject if { trk_conn_rate gt 100 } { trk_inc_gpc0 gt 0}
It is also possible not to track counters at all, but entry lookups will
then be performed more often :
stick-table type ip size 200k expire 1m store conn_rate(5s),gpc0
tcp-request reject if { src_get_gpc0 gt 0 }
tcp-request reject if { src_conn_rate gt 100 } { src_inc_gpc0 gt 0}
The '0' at the end of the counter name is there because if we find that more
counters may be useful, other ones will be added.
This function looks up a key, updates its expiration date, or creates
it if it was not found. acl_fetch_src_updt_conn_cnt() was updated to
make use of it.
These counters maintain incoming and outgoing byte rates in a stick-table,
over a period which is defined in the configuration (2 ms to 24 days).
They can be used to detect service abuse and enforce a certain bandwidth
limit per source address for instance, and block a source if this rate is
exceeded. Since 32-bit counters are used to compute the rates, it is important
not to use too long periods so that we don't have to deal with rates above
4 GB per period.
Example :
# block if more than 5 Megs retrieved in 30 seconds from a source.
stick-table type ip size 200k expire 1m store bytes_out_rate(30s)
tcp-request track-counters src
tcp-request reject if { trk_bytes_out_rate gt 5000000 }
# cause a 15 seconds pause to requests from sources in excess of 2 megs/30s
tcp-request inspect-delay 15s
tcp-request content accept if { trk_bytes_out_rate gt 2000000 } WAIT_END
These counters maintain incoming connection rates and session rates
in a stick-table, over a period which is defined in the configuration
(2 ms to 24 days). They can be used to detect service abuse and
enforce a certain accept rate per source address for instance, and
block a source if this rate is exceeded.
Example :
# block if more than 50 requests per 5 seconds from a source.
stick-table type ip size 200k expire 1m store conn_rate(5s),sess_rate(5s)
tcp-request track-counters src
tcp-request reject if { trk_conn_rate gt 50 }
# cause a 3 seconds pause to requests from sources in excess of 20 requests/5s
tcp-request inspect-delay 3s
tcp-request content accept if { trk_sess_rate gt 20 } WAIT_END
We're now able to return errors based on the validity of an argument
passed to a stick-table store data type. We also support ARG_T_DELAY
to pass delays to stored data types (eg: for rate counters).
Some data types will require arguments (eg: period for a rate counter).
This patch adds support for such arguments between parentheses in the
"store" directive of the stick-table statement. Right now only integers
are supported.
Most of the time we'll want to check the connection count of the
criterion we're currently tracking. So instead of duplicating the
src* tests, let's add trk_conn_cnt to report the total number of
connections from the stick table entry currently being tracked.
A nice part of the code was factored, and we should do the same
for the other criteria.
The new "bytes_in_cnt" and "bytes_out_cnt" session counters have been
added. They're automatically updated when session counters are updated.
They can be matched with the "src_kbytes_in" and "src_kbytes_out" ACLs
which apply to the volume per source address. This can be used to deny
access to service abusers.
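Example (minimal sketch in the style of the rules above, the 100 MB
threshold is arbitrary) :
  # reject sources which already downloaded more than 100 MB in total
  stick-table type ip size 200k expire 1h store bytes_out_cnt
  tcp-request track-counters src
  tcp-request reject if { src_kbytes_out gt 102400 }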
The new "conn_cur" session counter has been added. It is automatically
updated upon "track XXX" directives, and the entry is touched at the
moment we increment the value so that we don't consider further counter
updates as real updates, otherwise we would end up updating upon completion,
which may not be desired. Probably that some other event counters (eg: HTTP
requests) will have to be updated upon each event though.
This counter can be matched against current session's source address using
the "src_conn_cur" ACL.
It was not normal to have counter fetches in proto_tcp.c. The only
reason was that the key based on the source address was fetched there,
but now that we have split the key extraction and data processing, we must
move that to a more appropriate place. Session seems OK since the
counters are all manipulated from here.
Also, since we're precisely counting number of connections with these
ACLs, we rename them src_conn_cnt and src_updt_conn_cnt. This is not
a problem right now since no version was emitted with these keywords.
The "_cnt" suffix is already used by ACLs to count various data,
so it makes sense to use the same one in "conn_cnt" instead of
"conn_cum" to count cumulated connections.
This is not a problem because no version was emitted with those
keywords.
Thus we'll try to stick to the following rules :
xxxx_cnt : cumulated event count for criterion xxxx
xxxx_cur : current number of concurrent entries for criterion xxxx
xxxx_rate: event rate for criterion xxxx
This patch adds the ability to set a pointer in the session to an
entry in a stick table which holds various counters related to a
specific pattern.
Right now the syntax matches the target syntax and only the "src"
pattern can be specified, to track counters related to the session's
IPv4 source address. There is a special function to extract it and
convert it to a key. But the goal is to be able to later support as
many patterns as for the stick rules, and get rid of the specific
function.
The "track-counters" directive may only be set in a "tcp-request"
statement right now. Only the first one applies. Probably that later
we'll support multi-criteria tracking for a single session and that
we'll have to name tracking pointers.
No counter is updated right now, only the refcount is. Some subsequent
patches will have to bring that feature.
This ACL's count can change along the session's life because it depends
on other sessions' activity. Switch it to volatile since any session
could appear while evaluating the ACLs.
The buffer_feed* functions that are used to send data to buffers only
supported sending contiguous chunks since they rely on memcpy(). This
patch improves on this by making them able to write in two chunks if needed.
Thus, the buffer_almost_full() function has been improved to really consider
the remaining space and not just what can be written at once.
Now we stop relying on BF_READ_DONTWAIT, which is unrelated to the
wakeups, and only consider activity to decide whether to wake the task
up instead of considering the other side's activity. It is worth noting
that the local stream interface's flags were not updated consecutively
to a call to chk_snd(), which could possibly result in hung tasks from
time to time. This fix will avoid possible loops and uncaught events.
Sometimes it's necessary to be able to perform some "layer 6" analysis
in the backend. TCP request rules were not available till now, although
documented in the diagram. Enable them in backend now.
When resetting a session's request analysers, we must take them from the
listener, not from the frontend. At the moment there is no difference
but this might change.
Since the BF_READ_ATTACHED bug was fixed, a new issue surfaced. When
a connection closes on the return path in tunnel mode while the request
input is already closed, the request analyser which is waiting for a
state change never gets woken up so it never closes the request output.
This causes stuck sessions to remain indefinitely.
One way to reliably reproduce the issue is the following (note that the
client expects a keep-alive but not the server) :
server: printf "HTTP/1.0 303\r\n\r\n" | nc -lp8080
client: printf "GET / HTTP/1.1\r\n\r\n" | nc 127.1 2500
The reason for the issue is that we don't wake the analysers up on
stream interface state changes. So the least intrusive and most reliable
thing to do is to consider stream interface state changes to call the
analysers.
We just need to remember what state each series of analysers have seen
and check for the differences. In practice, that works.
A later improvement could consist in being able to let analysers
state what they're interested to monitor :
- left SI's state
- right SI's state
- request buffer flags
- response buffer flags
That could help having only one set of analysers and call them once
status changes.
After a read, there was a condition to mandatorily wake the task
up if the BF_READ_DONTWAIT flag was set. This was wrong because
the wakeup condition in this case can be deduced from the other
ones. Another condition was put on the other side not being in
SI_ST_EST state. It is not appropriate to do this because it
causes a useless wakeup at the beginning of every first request
in case of speculative polling, due to the fact that we don't
read anything and that the other side is still in SI_ST_INI.
Also, the wakeup was performed whenever to_forward was null,
which causes an unexpected wakeup upon the first read for the
same reason. However, those two conditions are valid if and
only if at least one read was performed.
Also, the BF_SHUTR flag was tested as part of the wakeup condition,
while this one can only be set if BF_READ_NULL is set too. So let's
simplify this ambiguous test by removing the BF_SHUTR part from the
condition to only process events.
Last, the BF_READ_DONTWAIT flag was unconditionally cleared,
while sometimes there would have been no I/O. Now we only clear
it once the I/O operation has been performed, which maintains
its validity until the I/O occurs.
Finally, those fixes saved approximately 16% of the per-session
wakeups and 20% of the epoll_ctl() calls, which translates into
slightly less under high load due to the request often being ready
when the read() occurs. A performance increase between 2 and 5% is
expected depending on the workload.
It does not seem necessary to backport this change to 1.4, even though
it fixes some performance issues. It may later be backported if
required to fix something else because the risk of regression seems
very low due to the fact that we're more in line with the documented
semantics.
Some freq counters will have to work on periods different from 1 second.
The original freq counters rely on the period to be exactly one second.
The new ones (freq_ctr_period) let the user define the period in ticks,
and all computations are operated over that period. When reading a value,
it indicates the amount of events over that period too.
This will be used when an I/O handler running in a stream interface
needs to establish a connection somewhere. We want the session
processor to evaluate both I/O handlers, depending on which side has
one. Doing so also requires that stream_int_update_embedded() wakes
the session up only when the other side is established or has closed,
for instance in order to handle connection errors without looping
indefinitely during the connection setup time.
The session processor still relies on BF_READ_ATTACHED being set,
though we must do whatever is required to remove this dependency.
When a connection is closed on a stream interface, some iohandlers
will need to be informed in order to release some resources. This
normally happens upon a shutr+shutw. It is the equivalent of the
fd_delete() call which is done for real sockets, except that this
time we release internal resources.
It can also be used with real sockets because it does not cost
anything else and might one day be useful.
Till now when a server was configured with address 0.0.0.0, the
connection was forwarded to this address which generally is intercepted
by the system as a local address, so this was completely useless.
One sometimes useful feature for outgoing transparent proxies is to
be able to forward the connection to the same address the client
requested. This patch fixes the meaning of 0.0.0.0 precisely to
ensure that the connection will be forwarded to the initial client's
destination address.
(cherry picked from commit 61ba936e6858dfcf9964d25870726621d8188fb9)
[ note: the bug was finally not present in 1.5-dev but at least we
have to reset store_count to be compatible with 1.4 ]
Commit d6e9e3b5e320b957e6c491bd92d91afad30ba638 caused recently created
entries to be removed as soon as they were created, breaking stickiness.
It is not clear whether a use-after-free was possible or not in this case.
This bug was reported by Ben Congleton and narrowed down by Hervé Commowick,
both of whom also tested the fix. Thanks to them !
The quote_arg() function can be used to quote an argument or indicate
"end of line" if it's null or empty. It should be useful to more precisely
report location of problems in the configuration.
When an entry already exists, we just need to update its expiration
timer. Let's have a dedicated function for that instead of spreading
open code everywhere.
This change also ensures that an update of an existing sticky session
really leads to an update of its expiration timer, which was apparently
not the case till now. This point needs to be checked in 1.4.
This change makes use of the stick-tables to keep track of any source
address activity. Two ACLs make it possible to check the count of an
entry or update it and act accordingly. The typical usage will be to
reject a TCP request upon match of an excess value.
Till now sticky sessions only held server IDs. Now there are other
data types so it is not acceptable anymore to overwrite the server ID
when writing something. The server ID must then only be written from
the caller when appropriate. Doing this has also led to separate
lookup and storage.
This one can be specified on the "stick-table" line with the "store"
keyword. It will hold the number of connections matching the entry,
for use with ACLs or anything else.
The stick_tables will now be able to store extra data for the same key.
A limited set of extra data types will be defined and for each of them
an offset in the sticky session will be assigned at startup time. All
of this information will be stored in the stick table.
The extra data types will have to be specified after the new "store"
keyword of the "stick-table" directive, which will reserve some space
for them.
pattern.c depended on stick_table while in fact it should be the opposite.
So we move from pattern.c everything related to stick_tables and invert the
dependency. That way the code becomes more logical and intuitive.
The name 'exps' and 'keys' in struct stksess was confusing because it was
the same name as in the table which holds all of them, while they only hold
one node each. Remove the trailing 's' to more clearly identify who's who.
Right now we're only able to store a server ID in a sticky session.
The goal is to be able to store anything whose size is known at startup
time. For this, we store the extra data before the stksess pointer,
using a negative offset. It will then be easy to cumulate multiple
data provided they each have their own offset.
It's very disturbing to see the "denied req" counter increase without
any other session counter moving. In fact, we can't count a rejected
TCP connection as "denied req" as we have not yet instantiated any
session at all. Let's use a new counter for that.
The frontend's connection was accounted for once the session was
instantiated. This was problematic because the early ACLs weren't
able to correctly account for the number of concurrent connections.
Now we count the connection once it is assigned to the frontend.
It also brings the nice advantage of being more symmetrical, because
the stream_sock's accept() does not have to account for that anymore,
only the session's accept() does.
Now we're able to reject connections very early, so we need to use a
different counter for the connections that are received and the ones
that are accepted and converted into sessions, so that the rate limits
can still apply to the accepted ones. The session rate must still be
used to compute the rate limit, so that we can reject undesired traffic
without affecting the rate.
A new function session_accept() is now called from the lower layer to
instantiate a new session. Once the session is instantiated, the upper
layer's frontend_accept() is called. This one can be service-dependent.
That way, we have a 3-phase accept() sequence :
1) protocol-specific, session-less accept(), which is pointed to by
the listener. It defaults to the generic stream_sock_accept().
2) session_accept() which relies on a frontend but not necessarily
for use in a proxy (eg: stats or any future service).
3) frontend_accept() which performs the accept for the service
offered by the frontend. It defaults to frontend_accept() which
is really what is used by a proxy.
The TCP/HTTP proxies have been moved to this mode so that we can now rely on
frontend_accept() for any type of session initialization relying on a frontend.
The next step will be to convert the stats to use the same system.
This will be needed for the last factoring step which adds support
for application-level accept(). The tcp/http accept() code has now
been isolated and will have to move to a separate function.
Till now, the frontend relied on the backend's options for INDEPSTR,
while at the time of accept, the frontend and backend are the same.
So we now use the frontend's pointer instead of the backend and we
don't have any dependency on the backend anymore in the frontend's
accept code.
The conn_retries attribute is now assigned when switching from SI_ST_INI
to SI_ST_REQ. This eliminates one of the last dependencies on the backend
in the frontend's accept() function.
The conn_retries still lies in the session and its initialization depends
on the backend when it may not yet be known. Let's first move it to the
stream interface.
The frontend has no reason to initialize the server-side stream_interface.
It's a leftover from old times which now makes no sense due to the fact
that we don't know in the frontend whether the other side will be a socket,
a task or anything else. Removing this part is possible due to previous
patches which perform the initialization at the proper place. We'll still
have to be able to register an I/O handler for situations where everything
is known only to the frontend (eg: unix stats socket), before merging the
various instantiations of this accept() function.
It's not normal to initialize the server-side stream interface from the
accept() function, because it may change later. Thus, we introduce a new
stream_sock_prepare_interface() function which is called just before the
connect() and which sets all of the stream_interface's callbacks to the
default ones used for real sockets. The ->connect function is also set
at the same instant so that we can easily add new server-side protocols
soon.
It was particularly embarrassing that the server timeout was assigned
to buffers during an accept() just to be potentially changed later in
case of a use_backend rule. The frontend side has nothing to do with
server timeouts.
Now we initialize them right after the connect() succeeds. Later this
should change for a unique stream-interface timeout setting only.
Calling sess_establish() upon a successful connect() was essential, but
it was not clearly stated whether it was necessary for an access to an
I/O handler or not. While it would be desired, having it automatically
add the response analyzers is quite a problem, and it breaks HTTP stats.
The solution is thus not to call it for now and to perform the few response
initializations as needed.
For the long term, we need to find a way to specify the analyzers to install
during a stream_int_register_handler() if any.
The connection timeout stored in the buffer has not been used since the
stream interface were introduced. Let's get rid of it as it's one of the
things that complicate factoring of the accept() functions.
We can disable the monitor-net rules on a listener if this flag is not
set in the listener's options. This will be useful when we don't want
to check that fe->addr is set or not for non-TCP frontends.
The new LI_O_TCP_RULES listener option indicates that some TCP rules
must be checked upon accept on this listener. It is now checked by
the frontend and the L4 rules are evaluated only in this case. The
flag is only set when at least one tcp-req rule is present in the
frontend.
The L4 rules check function has now been moved to proto_tcp.c where
it ought to be.
The tcp inspection rules were fast but were only processed after a
schedule had occurred and all resources were allocated. When defending
against DDoS, it's important to be able to apply some protection at the
earliest possible instant.
Thus we introduce a new set of rules : tcp-request rules which act
on pure layer4 information (no content). They are evaluated even
before the buffers are allocated for the session, saving as much
time as possible. That way it becomes possible to check an incoming
connection's source IP address against a list of authorized/blocked
networks, and immediately drop the connection.
The rules are checked even before we perform any socket-specific
operation, so that we can optimize the reject case, which will be the
problematic one during a DDoS. The second stream interface and s->txn
are also now initialized after the rules are parsed for the same
reason. All these optimisations have made it possible to reach up to 212000
connections/s with a real rule rejecting based on the source IP
address.
For a long time we had two large accept() functions, one for TCP
sockets instantiating proxies, and another one for UNIX sockets
instantiating the stats interface.
A lot of code was duplicated and both did not work exactly the same way.
Now we have a stream_sock layer accept() called for either TCP or UNIX
sockets, and this function calls the frontend-specific accept() function
which does the rest of the frontend-specific initialisation.
Some code is still duplicated (session & task allocation, stream interface
initialization), and might benefit from having an intermediate session-level
accept() callback to perform such initializations. Still there are some
minor differences that need to be addressed first. For instance, the monitor
nets should only be checked for proxies and not for other connection templates.
Last, we renamed l->private as l->frontend. The "private" pointer in
the listener is only used to store a frontend, so let's rename it to
eliminate this ambiguity. When we later support detached listeners
(eg: FTP), we'll add another field to avoid the confusion.
The 'client.c' file now only contained frontend-specific functions,
so it has naturally been renamed 'frontend.c'. Same for client.h. This
has also been an opportunity to remove some cross references from
files that should not have depended on it.
In the end, this file should contain a protocol-agnostic accept()
code, which would initialize a session, task, etc... based on an
accept() from a lower layer. Right now there are still references
to TCP.
Some ACLs in the client ought to belong to proto_tcp, or protocols.
This file should only contain frontend-specific information and will
be renamed that way in next commit.
Some functions which act on generic buffer contents without being
tcp-specific were historically in proto_tcp.c. This concerns ACLs
and RDP cookies. Those have been moved away to more appropriate
locations. Ideally we should create some new files for each layer6
protocol parser. Let's do that later.
Right now we count the incoming connection only once everything has
been allocated. Since we're planning on considering early ACL rules,
we need to count the connection earlier.
Just like we do on health checks, we should consider that ACLs that make
use of buffer data are layer 6 and not layer 4, because we'll soon have
to distinguish between pure layer 4 ACLs (without any buffer) and these
ones.
If a "stick store-request" rule is present, an entry is preallocated during
the request. However, if there is no response due to an error or to a redir
mode server, we never release it.
By using msg->sol as the beginning of a message, wrong messages were
displayed in debug mode when they were truncated on the last line,
because msg->sol points to the beginning of the last line. Use
data+msg->som instead.
This would only be wrong when the server has not completely responded yet.
Fix two other occurrences of wrong rsp<->sl associations which were harmless
but wrong anyway.
The rate-limit feature relied on a timer to define how long a frontend
must remain idle. It was not considering the pending connections, so it
was almost always ready to be used again and only the accept's limit was
preventing new connections from coming in. By accounting for the pending
connections, we can compute a correct delay and effectively make the
frontend go idle for that (short) time.
Latest BF_READ_ATTACHED fix has unveiled a nice issue with the way
HTTP requests and responses are forwarded. The case where the request
aborts after the server has responded (POST with early response)
forgot to re-enable auto-close on the response. In fact it still
worked thanks to a side effect as long as BF_READ_ATTACHED was there
to force the states to be resynced (and the flags). Since last fix,
the missing auto-close causes CLOSE_WAIT connections when the client
aborts too late during a data transfer.
The right fix consists in considering the situation where the client
experiences an error and to explicitly abort the transfer. There is
no need to wake the response analysers up for that since they'd have
no added value and the analysers flags are cleared. However for a
future usage, that might help (eg: stickiness, ...).
This fix should be backported to 1.4 if the previous one is backported
too. After all the non-reg tests, the risks to see a problem arise
without both patches seems low, and both patches touch sensible areas
of the code. So there's no hurry.
The BF_READ_ATTACHED flag was created to wake analysers once after
a connection was established. It turns out that this flag is never
cleared once set, so even if there is no event, some analysers are
still evaluated for no reason.
The bug was introduced with commit ea38854d34.
It may cause slightly increased CPU usages during data transfers, maybe
even quite noticeable when transferring transfer-encoded data,
due to the fact that the request analysers are being checked for every
chunk.
This fix must be backported in 1.4 after all non-reg tests have been
completed.
The response analyser was not emptied upon creation of a new session. In
fact it was always zero just because the last session left it in a zero state,
but in case of shared pools this cannot be guaranteed. The net effect is
that it was possible to have some HTTP (or any other) analysers on the
response path of a stats unix socket, which would reject the response.
This fix must be backported to 1.4.
Commit 4605e3b641cebbdbb2ee5726e5bbc3c03a2d7b5e was not enough, because
connections passing from a TCP frontend to an HTTP backend without any
ACL and via a "default_backend" statement were still working on non-initialized
data. An initialization was missing in the session_set_backend() function, next
to the initialization of hdr_idx.
It was once reported at least by Dirk Taggesell that the consistent
hash had a very poor distribution, making use of only two servers.
Jeff Persch analysed the code and found the root cause. Consistent
hash makes use of the server IDs, which are completed after the chash
array initialization. This implies that each server which does not
have an explicit "id" parameter will be merged at the same place in
the chash tree and that in the end, only the first or last servers
may be used.
The now obvious fix (thanks to Jeff) is to assign the missing IDs
earlier. However, it should be clearly understood that changing a
hash algorithm on live systems will rebalance the whole system.
Anyway, the only affected users will be the ones for which the
system is quite unbalanced already. The ones who fix their IDs are
not affected at all.
Kudos to Jeff for spotting that bug which got merged 3 days after
the consistent hashing !
When running in pure TCP mode with a traffic inspection rule to detect
HTTP protocol, we have to initialize the HTTP transaction too. The
effect of not doing this was that some incoming connections could have
been matched as carrying HTTP protocol even though this was not the case.
Both dispatch and http_proxy modes were broken since 1.4-dev5 when
the adjustment of server health based on response codes was introduced.
In fact, in these modes, s->srv == NULL. The result is a plain segfault.
It should have been noted critical, but the fact that it remained 6
months without being noticed indicates that almost nobody uses these
modes anymore. Also, the crash is immediate upon first request.
Further versions should not be affected anymore since it's planned to
have a dummy server instead of these annoying NULL pointers.
This ACL was missing in complex setups where the status of a remote site
has to be considered in switching decisions. Until now, using a server's
status in an ACL required a dedicated backend, which is a bit heavy
when multiple servers have to be monitored.
It is now possible to stick on an IP address found in an HTTP header. Right
now only the last occurrence of the header can be used, which is generally
enough for most uses. Also, the header extraction rule only knows how to
convert the header to IP. Later it will be usable as a plain string with
an implicit conversion, and the syntax will not change.
Most often, pattern files used by ACLs will be produced by tools
which emit some comments (eg: geolocation lists). It's very annoying
to have to clean the files before using them, and it does not make
much sense to be able to support patterns we already can't input in
the config file. So this patch makes the pattern file loader skip
lines beginning with a sharp and the empty ones, and strips leading
spaces and tabs.
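For instance, a pattern file produced by such a tool can now be used as-is
(the file name, contents and ACL name below are only an illustration) :
  networks.lst :
  # private networks -- this comment line and the blank line are skipped
     10.0.0.0/8
  192.168.0.0/16

  and in the configuration :
  acl from_lan src -f networks.lst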
Networks patterns loaded from files for longest match ACL testing
will now be arranged into a prefix tree. This is possible thanks to
the new prefix features in ebtree v6.0. Longest match testing is
slightly slower than exact data matching. However, the measured impact
of running at 42000 requests per second and testing whether the IP
address found in a header belongs to a list of 52000 networks or
not is 3% CPU (increase from 66% to 69%). This is low enough to
permit true geolocation based on huge tables.
Now if some ACL patterns are loaded from a file and the operation is
an exact string match, the data will be arranged in a tree, yielding
a significant performance boost on large data sets. Note that this
only works when the match is case-sensitive.
A new dedicated function, acl_lookup_str(), has been created for this
matching. It is called for every possible input data to test and it
looks the tree up for the data. Since the keywords are loosely typed,
we would have had to add a new column to all keywords to adjust the
function depending on the type. Instead, we just compare on the match
function. We call acl_lookup_str() when we could use acl_match_str().
The tree lookup is performed first, then the remaining patterns are
attempted if the tree returned nothing.
A quick test shows that when matching a header against a list of 52000
network names, haproxy uses 68% of one core on a core2-duo 3.2 GHz at
42000 requests per second, versus 66% without any rule, which means
only a 2% CPU increase for 52000 rules. Doing the same test without
the tree leads to 100% CPU at 6900 requests/s. Also it was possible
to run the same test at full speed with about 50 sets of 52000 rules
without any measurable performance drop.
The code is now ready to support loading patterns from files into trees. For
that, it will be required that the ACL keyword has a flag ACL_MAY_LOOKUP
and that the expr is case sensitive. When that is true, the pattern will
have a flag ACL_PAT_F_TREE_OK to indicate that it is possible to feed the
tree instead of a usual pattern if the parsing function is able to do this.
The tree's root is pre-initialized in the pattern's value so that the
function can easily find it. At that point, if the parsing function decides
to use the tree, it just sets ACL_PAT_F_TREE in the return flags so that
the caller knows the tree has been used and the pattern can be recycled.
That way it will be possible to load some patterns into the tree when it
is compatible, and other ones as linear linked lists. A good example of
this might be IPv4 network entries : right now we support holes in masks,
but this very rare feature is not compatible with binary lookup in trees.
So the parser will be able to decide itself whether the pattern can go to
the tree or not.
The "acl XXX -f <file>" syntax was supported but nothing was read from
the file. This is now possible. All lines are merged verbatim, even if
they contain spaces (useful for user-agents). There are shortcomings
though. The worst one is that error reporting is too approximative.
In cttproxy.c, check_cttproxy_version() does not close its socket before
returning. Although it is called only once, I think it is better to close
the socket.
When trying to display an invalid request or response we received,
we must at least check that we have identified something looking
like a start of message, otherwise we can dereference a NULL pointer.
This is used to disable persistence depending on some conditions (for
example using an ACL matching static files or a specific User-Agent).
You can see it as a complement to "force-persist".
In the configuration file, the force-persist/ignore-persist declaration
order defines the rules priority.
Used with the "appsession" keyword, it can also help reduce memory usage,
as the session won't be hashed when persistence is ignored.
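Example (minimal sketch, the ACL name, paths and server are illustrative) :
  backend app
      cookie SRV insert indirect
      acl static_req path_beg /static /images /css
      ignore-persist if static_req
      server srv1 192.168.0.1:80 cookie s1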
I met a strange behaviour with appsession.
I firstly thought this was a regression due to one of my previous patch
but after testing with a 1.3.15.12 version, I also could reproduce it.
To illustrate, the configuration contains :
appsession PHPSESSID len 32 timeout 1h
Then I call a short PHP script containing :
setcookie("P", "should not match")
When calling this script thru haproxy, the cookie "P" matches the appsession rule :
Dumping hashtable 0x11f05c8
table[1572]: should+not+match
Shouldn't it be ignored ?
If you confirm, I'll send a patch for 1.3 and 1.4 branches to check that the
cookie length is equal to the appsession name length.
This is due to the comparison length, where the cookie length is taken into
account instead of the appsession name length. Using the appsession name
length would allow ASPSESSIONIDXXX (+ check that memcmp won't go after the
buffer size).
Also, while testing, I noticed that HEAD requests were not available for
URIs containing the appsession parameter. 1.4.3 patch fixes an horrible
segfault I missed in a previous patch when appsession is not in the
configuration and HAProxy is compiled with DEBUG_HASH.
Some servers do not completely conform with RFC2616 requirements for
keep-alive when they receive a request with "Connection: close". More
specifically, they don't bother using chunked encoding, so the client
never knows whether the response is complete or not. One immediately
visible effect is that haproxy cannot maintain client connections alive.
The second issue is that truncated responses may be cached on clients
in case of network error or timeout.
Óscar Frías Barranco reported this issue on Tomcat 6.0.20, and
Patrik Nilsson with Jetty 6.1.21.
Cyril Bonté proposed this smart idea of pretending we run keep-alive
with the server and closing it at the last moment as is already done
with option forceclose. The advantage is that we only change one
emitted header but not the overall behaviour.
Since some servers such as nginx are able to close the connection
very quickly and save network packets when they're aware of the
close negotiation in advance, we don't enable this behaviour by
default.
"option http-pretend-keepalive" will have to be used for that, in
conjunction with "option http-server-close".
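Example (minimal sketch, the backend and server names are made up) :
  backend tomcat
      option http-server-close
      option http-pretend-keepalive
      server t1 192.168.0.10:8080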
Using get_ip_from_hdr2() we can look for occurrence #X or #-X and
extract the IP it contains. This is typically designed for use with
the X-Forwarded-For header.
Using "usesrc hdr_ip(name,occ)", it becomes possible to use the IP address
found in <name>, and possibly specify occurrence number <occ>, as the
source to connect to a server. This is possible both in a server and in
a backend's source statement. This is typically used to use the source
IP previously set by an upstream proxy.
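Example (minimal sketch, header name and addresses are illustrative, -1
designates the last occurrence of the header) :
  backend app
      source 0.0.0.0 usesrc hdr_ip(X-Forwarded-For,-1)
      server srv1 192.168.0.1:80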
The transparent proxy address selection was set in the TCP connect function
which is not the most appropriate place since this function has limited
access to the amount of parameters which could produce a source address.
Instead, now we determine the source address in backend.c:connect_server(),
right after calling assign_server_address() and we assign this address in
the session and pass it to the TCP connect function. This cannot be performed
in assign_server_address() itself because in some cases (transparent mode,
dispatch mode or http_proxy mode), we assign the address somewhere else.
This change will open the ability to bind to addresses extracted from many
other criteria (eg: from a header).
We'll need another flag in the 'options' member close to PR_O_TPXY_*,
and all are used, so let's move this easy one to options2 (which are
already used for SQL checks).
It's very common to see people getting trapped by HTTP-only options
which don't work in TCP proxies. To help them definitely get rid of
those configs, let's emit warnings for all options and statements
which are not supported in their mode. That includes all HTTP-only
options, the cookies and the stats.
In order to ensure internal config correctness, the options are also
disabled.
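For example, a configuration such as the following (names are made up) now
triggers warnings and the offending statements are disabled :
  listen plain-tcp :11000
      mode tcp
      # the two statements below are HTTP-only : they are now reported
      # and disabled in TCP mode
      cookie SRV insert
      stats enable
      server s1 192.168.0.1:11000 cookie s1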
It was disturbing to see a backend name associated with a bad request
when this "backend" was in fact the frontend. Instead, we now display
"backend <NONE>" if the "backend" has no backend capability :
> show errors
[25/Mar/2010:06:44:25.394] frontend fe (#1): invalid request
src 127.0.0.1, session #0, backend <NONE> (#-1), server <NONE> (#-1)
request length 45 bytes, error at position 0:
Isidore Li reported an occasional segfault when using URL hashing, and
kindly provided backtraces and core files to help debugging.
The problem was triggered by reset connections before the URL was sent,
and was due to the same bug which was fixed by commit e45997661b
(connections were attempted in case of connection abort). While that
bug was already fixed, it appeared that the same segfault could be
triggered when URL hashing is configured in an HTTP backend when the
frontend runs in TCP mode and no URL was seen. It is totally abnormal
to try to hash a null URL, as well as to process any kind of L7 hashing
when a full request was not seen.
This additional fix now ensures that layer7 hashing is not performed on
incomplete requests.
The following patch fixed an issue but brought another one :
296897 [MEDIUM] connect to servers even when the input has already been closed
The new issue is that when a connection is inspected and aborted using
TCP inspect rules, now it is sent to the server before being closed. So
that test is not satisfying. A probably better way is not to prevent a
connection from establishing if only BF_SHUTW_NOW is set but BF_SHUTW
is not. That way, the BF_SHUTW flag is not set if the request has any
data pending, which still fixes the stats issue, but does not let any
empty connection pass through.
Also, as a safety measure, we extend buffer_abort() to automatically
disable the BF_AUTO_CONNECT flag. While it appears to always be OK,
it is by pure luck, so better safe than sorry.
This warning was first reported by Ross West on FreeBSD, then by
Holger Just on OpenSolaris. It also happens on 64bit Linux. However,
fixing the format to use long int complains on 32bit Linux where
ptrdiff_t is apparently different. Better cast the pointer difference
to an int then.
To save a little memory, the check_data buffer is only allocated
for the servers that are checked.
[WT: this patch saves 80 MB of RAM on the test config with 5000 servers]
Cyril Bonté reported a regression introduced with the very last changes
on the checks code, which causes failed checks if the server does
not close the connection in time. This happens on HTTP/1.1 checks or
on SMTP checks for instance.
This fix consists in restoring the old behaviour of parsing as soon
as something is available in the response buffer, and waiting for
more data if some are missing. This also helps releasing connections
earlier (eg: a GET check will not have to download the whole object).
Apparently some systems define MSG_NOSIGNAL but do not necessarily
check it (or maybe binaries are built somewhere and used on older
versions). There were reports of very recent FreeBSD setups causing
SIGPIPEs, while older ones catch the signal. Recent FreeBSD manpages
indeed define MSG_NOSIGNAL.
So let's now unconditionally catch the signal. It's useless not to do
it for the rare cases where it's not needed (linux 2.4 and below).
Bernhard Krieger reported truncated HTTP responses in presence of some
specific chunk-encoded data, and kindly offered complete traces of the
issue which made it easy to reproduce it.
Those traces showed that the chunks were of exactly 8192 bytes, chunk
size and CRLF included, which was exactly half the size of the buffer.
In this situation, the function http_chunk_skip_crlf() could erroneously
try to parse a CRLF after the chunk believing there were more data
pending, because the number of bytes present in the buffer was considered
instead of the number of remaining bytes to be parsed.
This happens when a server immediately closes the connection after
the response without lingering or when we close before the end of
the data. We get an RST which translates into a late error. We must
not declare an error without checking that the contents are OK.
Since the recv() call returns every time it succeeds, we always need
two calls with one intermediate poll before detecting the end of response :
20:20:03.958207 recv(7, "HTTP/1.1 200\r\nConnection: close\r\n"..., 8030, 0) = 145
20:20:03.958365 epoll_wait(3, {{EPOLLIN, {u32=7, u64=7}}}, 8, 1000) = 1
20:20:03.958543 gettimeofday({1268767203, 958626}, NULL) = 0
20:20:03.958694 recv(7, ""..., 7885, 0) = 0
20:20:03.958833 shutdown(7, 2 /* send and receive */) = 0
Let's read as long as we can, that way we can detect end of connections
in the same call, which is much more efficient especially for LBs with
hundreds of servers :
20:29:58.797019 recv(7, "HTTP/1.1 200\r\nConnection: close\r\n"..., 8030, 0) = 145
20:29:58.797182 recv(7, ""..., 7885, 0) = 0
20:29:58.797356 shutdown(7, 2 /* send and receive */) = 0
We are seeing both real servers repeatedly going on- and off-line with
a period of tens of seconds. Packet tracing, stracing, and adding
debug code to HAProxy itself has revealed that the real servers are
always responding correctly, but HAProxy is sometimes receiving only
part of the response.
It appears that the real servers are sending the test page as three
separate packets. HAProxy receives the contents of one, two, or three
packets, apparently randomly. Naturally, the health check only
succeeds when all three packets' data are seen by HAProxy. If HAProxy
and the real servers are modified to use a plain HTML page for the
health check, the response is in the form of a single packet and the
checks do not fail.
(...)
I've added buffer and length variables to struct server, and allocated
space with the rest of the server initialisation.
(...)
It seems to be working fine in my tests, and handles check responses
that are bigger than the buffer.
Those two codes can be triggered on demand by client requests.
We must not fail a server on them.
Ideally we should ignore a certain amount of status codes which do
not indicate life nor death.
The new anonymous ACL feature was buggy. If several ones are
declared, the first rule is always matched because all of them
share the same internal name (".noname"). Now we simply declare
them with an empty name and ensure that we disable any merging
when the name is empty.
Hi Willy,
Please find a small patch to prevent haproxy segfaulting when logging captured headers in CLF format.
Example config to reproduce the bug :
listen test :10080
log 127.0.0.1 local7 debug err
mode http
option httplog clf
capture request header NonExistantHeader len 16
--
Cyril Bonté
The BF_AUTO_CLOSE flag prevented a connection from establishing on
a server if the other side's input channel was already closed. This
is wrong because there may be pending data to be sent.
This was causing an issue with stats, as noticed and reported by
Cyril Bonté. Since the stats are now handled as a server, sometimes
concurrent accesses were causing one of the connections to send the
shutdown(write) before the connection to the stats function was
established, which aborted it early.
This fix causes the BF_AUTO_CLOSE flag to be checked only when the
connection on the outgoing stream interface has reached an established
state. That way we're still able to connect, send the request then
close.
Marcello Gorlani reported that at least on FreeBSD, a long hostname
was reported with garbage on the stats page. POSIX does not make it
mandatory for gethostname() to NULL-terminate the string in case of
truncation, and at least FreeBSD appears not to do it. So let's
force null-termination to keep safe.
Since the last documentation cleanups, I've found more typos that I kept
in a corner instead of sending you a mail just for one character :)
--
Cyril Bonté
Jozef Hovan reported a bug sometimes causing a down server to be
used in url_param hashing mode.
This happens if the following conditions are met :
- the backend contains more than one server with at least two
different weights
- all servers but one are down
- the server which is not down has a weight which does not divide
all the other ones
Example: 3 servers with 20,20,10, the first one remains up.
The problem is caused by an optimisation in recalc_server_map()
which only fills the first map slot when only one server is up,
because all LB algorithms are optimized to use entry zero when
only one server is up... All but url_param. When doing the modulus,
we can return a position which is greater than zero and use an
entry which still refers to a server which has since been stopped.
One solution could be to optimize the url_param algo to proceed
as the other ones, but the fact that the map was wrong implies that we
could hit the same bug again later. So let's first correctly initialize
the map in order to avoid that trap.
When trying to spot some complex bugs, it's often needed to access
information on stuck sessions, which is quite difficult. This new
command helps one get detailed information about a session, with
flags, timers, states, etc... The buffer data are not dumped yet.
In case of pipelined requests, if the client aborts before reading response
N-1, haproxy waits forever for the data to leave the buffer before parsing
the next response.
This duplicate test should have been removed with the loop rework but was forgotten.
It was harmless, but disassembly shows that it prevents gcc from correctly optimizing
the loop.
We have been using haproxy to balance a not very well written application
(http://www.blackboard.com/). Using the "insert postonly indirect" cookie
method, I was attempting to remove the cookie when users would logout,
allowing the machine to re-balance for the next user (this application is
used in school computer labs, so a computer might stay on the whole day
but be used on and off).
I was having a lot of trouble because when the cookie was set, it was with
"Path=/", but when being cleared there was no "Path" in the set cookie
header, and because the logout page was in a different place of the
website (which I couldn't change), the cookie would not be cleared. I
don't know if this would be a problem for anyone other than me (as our
HTTP application is so un-adjustable), but just in case, I have included
the patch I used. Maybe it will help someone else.
[ WT: this was a correct fix, and I also added the same missing path to
the set-cookie option ]
It's sometimes convenient to know if a proxy has processed any connection
at all when stopping it. Since a soft restart causes the "Proxy stopped"
message to be logged for each proxy, let's add the number of connections
so that it's possible afterwards to check whether a proxy had received
any connection.
Often we need to understand why some transfers were aborted or what
constitutes server response errors. With those two counters, it is
now possible to detect an unexpected transfer abort during a data
phase (eg: too short HTTP response), and to know what part of the
server response errors may in fact be assigned to aborted transfers.
Holger Just and Ross West reported build issues on FreeBSD and
Solaris that were initially caused by the definition of
_XOPEN_SOURCE at the top of auth.c, which was required on Linux
to avoid a build warning.
Krzysztof Oledzki found that using _GNU_SOURCE instead also worked
on Linux and did not cause any issue on several versions of FreeBSD.
Solaris still reported a warning this time, which was fixed by
including <crypt.h>, which itself is not present on FreeBSD nor on
all Linux toolchains.
So by adding a new build option (NEED_CRYPT_H), we can get Solaris
to get crypt() working and stop complaining at the same time, without
impacting other platforms.
This fix was tested at least on several linux toolchains (at least
uclibc, glibc 2.2.5, 2.3.6 and 2.7), on FreeBSD 4 to 8, Solaris 8
(which needs crypt.h), and AIX 5.3 (without crypt.h).
Every time it builds without a warning.
A copy-paste typo and a missing check were causing the logs to
report "PR" instead of "SD" when a server closes before sending
full data. Also, the log would erroneously report a 502 while in
fact the correct response had already been transmitted.
Some people have reported seeing "SL" flags in their logs quite often while
this should never happen. The reason was that when a server error is detected,
we close the connection to that server, and when we decide what state we were
in, we see the connection is closed and deduce it was during the last data transfer,
which is wrong. We should report DATA if the previous state was an established
state, which this patch does.
Now logs correctly report "SD" and not "SL" when a server resets a connection
before the end of the transfer.
isalnum, isdigit and friends are really annoying because they take
an int into which we should pass an unsigned char, while strings
everywhere use chars. Solaris uses macros relying on an array for
those functions, which easily triggers some warnings showing where
we have mistakenly passed a char instead of an unsigned char or an
int. Those warnings may indicate real bugs on some platforms
depending on the implementation.
There is a lot of information available in the stats page that can only
be seen when the mouse hovers over it, but it's hard to know where
that information is. Now, with a discreet dotted underline, it's easier
to spot those areas.
The current and max request rates are now reported when the mouse flies
over the session rate cur/max. The total request count is displayed with the
status codes over the total sessions cell.
The bounce realign function was algorithmically good but as expected
it was not cache-friendly. Using it with large requests caused so much
cache thrashing that the function itself could drain 70% of the total
CPU time for only 0.5% of the calls !
Revert back to a standard memcpy() using a specially allocated swap
buffer. We're now back to 2M req/s on pipelined requests.
It is wrong to merge FE and BE stats for a proxy because when we consult a
BE's stats, it reflects the FE's stats even though the BE has received no
traffic. The most common example happens with listen instances, where the
backend gets credited for all the traffic even when a use_backend rule makes
use of another backend.
The trash buffer may now be smaller than a buffer because we can tune
it at run time. This causes a risk when we're trying to use it as a
temporary buffer to realign unaligned requests, because we may have to
put up to a full buffer into it.
Instead of doing a double copy, we're now relying on an open-coded
bouncing copy algorithm. The principle is that we move one byte at
a time to its final place, and if that place also holds a byte, then
we move it too, and so on. We finish when we've moved all the buffer.
It limits the number of memory accesses, but since it proceeds one
byte at a time and with random walk, it's not cache friendly and
should be slower than a double copy. However, it's only used in
extreme situations and the difference will not be noticeable.
It has been extensively tested and works reliably.
When a host name could not be resolved, an alert was emitted but the
service used to start with 0.0.0.0 for the IP address, because the
address parsing functions could not report an error. This is now
changed. This fix must be backported to 1.3 as it was first discovered
there.
[WT: it was not a bug, I did it on purpose to leave no hole between IDs,
though it's not very practical when admins want to force some entries
after they have been used, because they'd rather leave a hole than
renumber everything ]
Forcing some IDs should not shift others.
Regression introduced in 53fb4ae261
---cut here---
global
stats socket /home/ole/haproxy.stat user ole group ole mode 660
frontend F1
bind 127.0.0.1:9999
mode http
backend B1
mode http
backend B2
mode http
id 9999
backend B3
mode http
backend B4
mode http
---cut here---
Before 53fb4ae261:
$ echo "show stat" | socat unix-connect:/home/ole/haproxy.stat stdio|cut -d , -f 28
iid
1
2
9999
4
5
After 53fb4ae261:
$ echo "show stat" | socat unix-connect:/home/ole/haproxy.stat stdio|cut -d , -f 28
iid
1
2
9999
3
4
With this patch:
$ echo "show stat" | socat unix-connect:/home/ole/haproxy.stat stdio|cut -d , -f 28
iid
1
2
9999
4
5
This patch fixes the config parser so it no longer leaks memory on each
default-server statement, and adds several missing free
calls in deinit():
- free(l->name)
- free(l->counters)
- free(p->desc);
- free(p->fwdfor_hdr_name);
None of them are critical, hopefully.
Released version 1.4-rc1 with the following main changes :
- [MEDIUM] add a maintenance mode to servers
- [MINOR] http-auth: last fix was wrong
- [CONTRIB] add base64rev-gen.c that was used to generate the base64rev table.
- [MINOR] Base64 decode
- [MINOR] generic auth support with groups and encrypted passwords
- [MINOR] add ACL_TEST_F_NULL_MATCH
- [MINOR] http-request: allow/deny/auth support for frontend/backend/listen
- [MINOR] acl: add http_auth and http_auth_group
- [MAJOR] use the new auth framework for http stats
- [DOC] add info about userlists, http-request and http_auth/http_auth_group acls
- [STATS] make it possible to change a CLI connection timeout
- [BUG] patterns: copy-paste typo in type conversion arguments
- [MINOR] pattern: make the converter more flexible by supporting void* and int args
- [MINOR] standard: str2mask: string to netmask converter
- [MINOR] pattern: add support for argument parsers for converters
- [MINOR] pattern: add the "ipmask()" converting function
- [MINOR] config: off-by-one in "stick-table" after list of converters
- [CLEANUP] acl, patterns: make use of my_strndup() instead of malloc+memcpy
- [BUG] restore accidentely removed line in last patch !
- [MINOR] checks: make the HTTP check code add the CRLF itself
- [MINOR] checks: add the server's status in the checks
- [BUILD] halog: make without arch-specific optimizations
- [BUG] halog: fix segfault in case of empty log in PCT mode (cherry picked from commit fe362fe476)
- [MINOR] http: disable keep-alive when process is going down
- [MINOR] acl: add build_acl_cond() to make it easier to add ACLs in config
- [CLEANUP] config: use build_acl_cond() instead of parse_acl_cond()
- [CLEANUP] config: use warnif_cond_requires_resp() to check for bad ACLs
- [MINOR] prepare req_*/rsp_* to receive a condition
- [CLEANUP] config: specify correct const char types to warnif_* functions
- [MEDIUM] config: factor out the parsing of 20 req*/rsp* keywords
- [MEDIUM] http: make the request filter loop check for optional conditions
- [MEDIUM] http: add support for conditional request filter execution
- [DOC] add some build info about the AIX platform (cherry picked from commit e41914c77e)
- [MEDIUM] http: add support for conditional request header addition
- [MEDIUM] http: add support for conditional response header rewriting
- [DOC] add some missing ACLs about response header matching
- [MEDIUM] http: add support for proxy authentication
- [MINOR] http-auth: make the 'unless' keyword work as expected
- [CLEANUP] config: use build_acl_cond() to simplify http-request ACL parsing
- [MEDIUM] add support for anonymous ACLs
- [MEDIUM] http: switch to tunnel mode after status 101 responses
- [MEDIUM] http: stricter processing of the CONNECT method
- [BUG] config: reset check request to avoid double free when switching to ssl/sql
- [MINOR] config: fix too large ssl-hello-check message.
- [BUG] fix error response in case of server error
The fix below was incomplete :
commit d5fd51c75b
[BUG] http_server_error() must not purge a previous pending response
This can cause parts of responses to be truncated in case of
pipelined requests if the second request generates an error
before the first request is completely flushed.
Pending response data being rejected was still sent, causing inappropriate
error responses in case of error while parsing a response header. We must
purge pending data from the response buffer that were not scheduled to be
sent (l - send_max).
SSL and SQL checks did only perform a free() of the request without replacing
it, so having multiple SSL/SQL check declarations after another check type
causes a double free condition during config parsing. This should be backported
although it's harmless.
Now we establish the tunnel only once the status 200 response is
received. That way we can still support an authentication request
in response to a CONNECT, then a client's authentication response.
A 101 response is accompanied by an Upgrade header indicating
a new protocol that is spoken on the connection after the exchange
completes. At least we should switch to tunnel mode after such a
response.
Anonymous ACLs allow the declaration of rules which rely directly on
ACL expressions without passing via the declaration of an ACL. Example :
With named ACLs :
acl site_dead nbsrv(dynamic) lt 2
acl site_dead nbsrv(static) lt 2
monitor fail if site_dead
With anonymous ACLs :
monitor fail if { nbsrv(dynamic) lt 2 } || { nbsrv(static) lt 2 }
I'm not sure if the fix is correct:
- if (req_acl->cond)
- ret = acl_exec_cond(req_acl->cond, px, s, txn, ACL_DIR_REQ);
+ if (!req_acl->cond)
+ continue;
Doesn't it ignore rules with no condition attached? I think that the
proper solution would be the following.
One check was missing for the 'polarity' of the test. Now 'unless'
works. BTW, 'unless' provides a nice way to perform one-line auth :
acl valid-user http_auth(user-list)
http-request auth unless valid-user
This is a first attempt to add a maintenance mode on servers, using
the stat socket (in admin level).
It can be done with the following command :
- disable server <backend>/<server>
- enable server <backend>/<server>
In this mode, no more checks will be performed on the server and it
will be marked as a special DOWN state (MAINT).
If some servers were tracking it, they'll go DOWN until the server
leaves the maintenance mode. The stats page and the CSV export also
display this special state.
This can be used to disable the server in haproxy before doing some
operations on this server itself. This is a good complement to the
"http-check disable-on-404" keyword and works in TCP mode.
We're already able to know if a request is a proxy request or a
normal one, and we have an option "http-use-proxy-header" which states
that proxy headers must be checked. So let's switch to use the proxy
authentication headers and responses when this option is set and we're
facing a proxy request. That allows haproxy to enforce auth in front
of a proxy.
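A minimal sketch of what this could look like, with purely illustrative
userlist, ACL and realm names and an arbitrary listening port :
userlist proxyusers
user alice insecure-password secret
frontend fwd-proxy
bind :3128
mode http
option http-use-proxy-header
acl auth_ok http_auth(proxyusers)
http-request auth realm proxy unless auth_ok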
Support the new syntax (http-request allow/deny/auth) in
http stats.
Now it is possible to use the same syntax as in
the frontend/backend http-request access control:
acl src_nagios src 192.168.66.66
acl stats_auth_ok http_auth(L1)
stats http-request allow if src_nagios
stats http-request allow if stats_auth_ok
stats http-request auth realm LB
The old syntax is still supported, but now it is emulated
via private acls and an additional userlist.
Add generic authentication & authorization support.
Groups are implemented as bitmaps so the count is limited to
sizeof(int)*8 == 32.
Encrypted passwords are supported with libcrypt and crypt(3), so it is
possible to use any method supported by your system. For example modern
Linux/glibc installations support MD5/SHA-256/SHA-512 and of course classic,
DES-based encryption.
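As an illustration, a userlist might look like the following ; the names and
the hash are placeholders, "password" expects a crypt(3)-encrypted string
while "insecure-password" takes clear text :
userlist admins
group ops users alice,bob
user alice password $6$salt$hashedpasswordhere
user bob insecure-password letmein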
Implement Base64 decoding with a reverse table.
The function accepts and decodes classic base64 strings, which
can be composed from many streams as long as each one is properly
padded, for example: SGVsbG8=IEhBUHJveHk=IQ==
Just as for the req* rules, we can now condition rsp* rules with ACLs.
ACLs match on response, so volatile request information cannot be used.
A warning is emitted if a configuration contains such an anomaly.
All the req* rules except the reqadd rules can now be specified with
an if/unless condition. If a condition is specified and does not match,
the filter is ignored. This is particularly useful with reqidel, reqirep
and reqtarpit.
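For example, one could now write something like this (the ACL name and
network are only illustrative) :
acl from_lb src 10.0.0.0/8
reqidel ^X-Forwarded-For:.* unless from_lb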
From now on, if request filters have ACLs defined, these ACLs will be
evaluated to condition the filter. This will be used to conditionally
remove/rewrite headers based on ACLs.
A new function was added to take care of the common code between
all those keywords. This has saved 8 kB of object code and about
500 lines of source code. This has also made it possible to spot and fix
minor bugs (allocated args that were never used).
The code could be factored even more but that would make it a bit
more complex which is not interesting at this stage.
Various tests have been performed, and the warnings and errors are
still correctly reported and everything seems to work as expected.
This function automatically builds a rule, considering the if/unless
statements, and automatically updates the proxy's acl_requires, the
condition's file and line.
Krzysztof Oledzki suggested to disable keep-alive when a process
is going down due to a reload, in order to avoid everlasting
sessions. This is a simple and very efficient solution as it
ensures that at most one more request will be handled on a
keep-alive connection after the process has received a SIGUSR1
signal.
Now a server can check the contents of the header X-Haproxy-Server-State
to know how haproxy sees it. The same values as those reported in the stats
are provided :
- up/down status + check counts
- throttle
- weight vs backend weight
- active sessions vs backend sessions
- queue length
- haproxy node name
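A minimal sketch of a check carrying this header ; addresses and names are
placeholders, and I'm assuming the header is enabled with
"http-check send-state" :
backend app
option httpchk HEAD /health HTTP/1.0
http-check send-state
server web1 192.168.0.10:80 check inter 2s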
Currently we cannot easily add headers nor anything to HTTP checks
because the requests are pre-formatted with the last CRLF. Make the
check code add the CRLF itself so that we can later add useful info.
This converter can be applied on top of an IPv4-type pattern. It
applies a netmask which is suited for IP address storage and matching.
This can be used to make all hosts within a certain mask share the
same table entries and as such use the same server.
The mask can be passed in dotted form (eg: 255.255.255.0) or in CIDR
form (eg: 24).
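A hedged sketch of how this might be combined with a stick table, assuming
the converter is chained to the fetch as in later pattern expressions ; the
names and sizes are purely illustrative :
backend per_subnet
stick-table type ip size 200k expire 30m
stick on src,ipmask(24)
server web1 192.168.0.10:80
server web2 192.168.0.11:80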
Some converters will need one or several arguments. It's not possible
to write a simple generic parser for that, so let's add the ability
for each converter to support its own argument parser, and call it
to get the arguments when it's specified. If unspecified, the arguments
are passed unmodified as string+len.
The pattern type converters currently support a string arg and a length.
Sometimes we'll prefer to pass them a list or a structure. So let's convert
the string and length into a generic void* and int that each converter may
use as it likes.
Hi Willy,
I've made a quick pass on the "defaults" column in the Proxy keywords matrix (chapter 4.1. in the documentation).
This patch resyncs the code and the documentation. I let you decide if some keywords that still work in the "defaults" section should be forbidden.
- default_backend : in the matrix, "defaults" was not supported but the keyword details say it is.
Tests also show it works, so I've updated the matrix.
- capture cookie : in the keyword details, we can read `It is not possible to specify a capture in a "defaults" section.'.
Ok, even if the tests worked, I've added an alert in the configuration parser (as it is for capture request/response header).
- description : not supported in "defaults", I added an alert in the parser.
I've also noticed that this keyword doesn't appear in the documentation.
There's one "description" entry, but for the "global" section, which is for a different use (the patch doesn't update the documentation).
- grace : even if this is maybe useless, it works in "defaults". Documentation is updated.
- redirect : alert is added in the parser.
- rsprep : alert added in the parser.
--
Cyril Bonté
We must trim any excess data from the response buffer when recycling
a keep-alive connection, because we may have blocked an invalid response
from a server that we don't want to accidentally forward once we disable
the analysers, nor do we want that data to come along with the next response.
A typical example of such data would be from a buggy server responding to
a HEAD with some data, or sending more than the advertised content-length.
For deciding to set the BF_EXPECT_MORE, we reused the same code as in
http_wait_for_request(), but here we must ignore buf->lr which is not
yet set and useless. This might only have caused random sub-optimal
behaviours.
Krzysztof Oledzki reported that 1.4-dev7 would regularly crash
on an apparently very common workload. The cores he provided
showed some inter-buffer data corruption, exactly similar to
what was fixed by the following recent commit :
bbfa7938bd [BUG] buffer_replace2 must never change the ->w entry
In fact, it was buffer_insert_line2() which was still modifying the
->w pointer, causing issues with pipelined responses in keep-alive
mode if some headers were to be added.
The bug requires a remote client, a near server, large server buffers
and small client buffers to be reproduced, with response header
insertion. Still, it's surprising that it did not trigger earlier.
Now after 100k pipelined requests it did not trigger anymore.
Despite what is explicitly stated in HTTP specifications,
browsers still use the undocumented Proxy-Connection header
instead of the Connection header when they connect through
a proxy. As such, proxies generally implement support for
this stupid header name, breaking the standards and making
it harder to support keep-alive between clients and proxies.
Thus, we add a new "option http-use-proxy-header" to tell
haproxy that if it sees requests which look like proxy
requests, it should use the Proxy-Connection header instead
of the Connection header.
This function is used to move data which is located between ->w and ->r,
so it must not touch ->w, otherwise it will displace pending data which
is before the one we're actually overwriting. The issue arises with
some pipelined responses which cause some part of the previous one to
be chopped off when removing the connection: close header, thus
corrupting last response and shifting next one. Those are detected
in the logs because the next response will be a 502 with flags PH.
When using "option persist" or "force-persist", we want to know from the
logs if the cookie referenced a valid server or a down server. Until now
the flag reported a valid server even if the server was down, which is
misleading. Now we correctly report that the requested server was down.
We can typically see "--DI" when using "option persist" with redispatch,
and "SCDN" when using force-persist on a down server.
This is used to force access to down servers for some requests. This
is useful when validating that a change on a server correctly works
before enabling the server again.
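For example, with a purely illustrative ACL name and network :
acl from_qa src 192.168.1.0/24
force-persist if from_qa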
We use to delay the response if there is a new request in the buffer.
However, if the pending request is incomplete, we should not delay the
pending responses.
This can cause parts of responses to be truncated in case of
pipelined requests if the second request generates an error
before the first request is completely flushed.
Sometimes we need to be able to change the default kernel socket
buffer size (recv and send). Four new global settings have been
added for this :
- tune.rcvbuf.client
- tune.rcvbuf.server
- tune.sndbuf.client
- tune.sndbuf.server
Those can be used to reduce kernel memory footprint with large numbers
of concurrent connections, and to reduce risks of write timeouts with
very slow clients due to excessive kernel buffering.
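For instance, the global section might contain something like this ; the
values are arbitrary examples in bytes, not recommendations :
global
tune.rcvbuf.client 8192
tune.sndbuf.client 8192
tune.rcvbuf.server 16384
tune.sndbuf.server 16384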
This one is the next step of previous patch. It correctly computes
the response mode and the Connection flag transformations depending
on the request mode and version, and the response version and headers.
We're now also able to add "Connection: keep-alive", and to convert
server's close during a keep-alive connection to a server-close
connection.
We need to improve Connection header handling in the request for it
to support the upcoming keep-alive mode. Now we have two flags which
keep in the session the information about the presence of a
Connection: close and a Connection: keep-alive headers in the initial
request, as well as two others which keep the current state of those
headers so that we don't have to parse them again. Knowing the initial
value is essential to know when the client asked for keep-alive while
we're forcing a close (eg in server-close mode). Also the Connection
request parser is now able to automatically remove single header values
at the same time they are parsed. This provides greater flexibility and
reliability.
All combinations of listen/front/back in all modes and with both
1.0 and 1.1 have been tested.
Some header values might be delimited with spaces, so it's not enough to
compare "close" or "keep-alive" with strncasecmp(). Use word_match() for
that.
Calling this function after http_find_header2() automatically deletes
the current value of the header, and removes the header itself if the
value is the only one. The context is automatically adjusted for a
next call to http_find_header2() to return the next header. No other
change nor test should be made on the transient context though.
The close mode of a transaction would be switched to tunnel mode
at the end of the processing, letting a lot of pending data pass
in the other direction if any. Let's fix that by checking for the
close mode during state resync too.
We must set the error flags when detecting that a client has reset
a connection or timed out while waiting for a new request on a keep-alive
connection, otherwise process_session() sets it itself and counts one
request error.
That explains why some sites were showing an increase in request errors
with the keep-alive.
We get a lot of those, especially with web crawlers :
recv(2, 0x810b610, 7000, 0) = -1 ECONNRESET (Connection reset by peer)
shutdown(2, 1 /* send */) = -1 ENOTCONN (Transport endpoint is not connected)
close(2) = 0
There's no need to perform the shutdown() here, the socket is already
in error so it is down.
A check was performed in buffer_replace2() to compare buffer
length with its read pointer. This has been wrong for a long
time, though it only has an impact when dealing with keep-alive
requests/responses. In theory this should be backported but
the check has no impact without keep-alive.
We can receive data with a notification of socket error. But we
must not check for the error before reading the data, because it
may be an asynchronous error notification that we check too early
while the response we're waiting for is available. If there is an
error, recv() will get it.
This should help with servers that close very fast after the response
and should also slightly lower the CPU usage during very fast checks
on massive amounts of servers since we eliminate one system call.
This should probably be backported to 1.3.
While waiting in a keep-alive state for a request, we want to silently
close if we don't get anything. However if we get a partial request it's
different because that means the client has started to send something.
This requires a new transaction flag. It will be used to implement a
distinct timeout for keep-alive and requests.
This change, suggested by Cyril Bonté, makes a lot of sense and
would have made it obvious that sessid was not properly initialized
while switching to keep-alive. The code is now cleaner.
The stream_int_cond_close() function was added to preserve the
contents of the response buffer because stream_int_retnclose()
was buggy. It flushed the response instead of flushing the
request. This caused issues with pipelined redirects followed
by error messages which ate the previous response.
This might even have caused object truncation on pipelined
requests followed by an error or by a server redirection.
Now that this is fixed, simply get rid of the now useless
function.
I've tried to follow all the pool_alloc2/pool_free2 calls in the code
to track memory leaks. I've found one which only happens when there's
already no more memory when allocating a new appsession cookie.
Sometimes it can be desired to return a location which is the same
as the request with a slash appended when there was not one in the
request. A typical use of this is for sending a 301 so that people
don't reference links without the trailing slash. The name of the
new option is "append-slash" and it can be used on "redirect"
statements in prefix mode.
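A small sketch of what this could look like, with an illustrative path
pattern :
acl missing_slash path_reg ^/article/[^/]*$
redirect code 301 prefix / append-slash if missing_slash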
When using server redirection, it is possible to specify a path
consisting of only one slash. While this is discouraged (risk of
loop) it may sometimes be useful combined with content switching.
The prefixing of a '/' then causes two slashes to be returned in
the response. So we now do as with the other redirects, don't
prepend a slash if it's alone.
Some message pointers were not usable once the message reached the
HTTP_MSG_DONE state. This is the case for ->som which points to the
body because it is needed to parse chunks. There is one case where
we need the beginning of the message : server redirect. We have to
call http_get_path() after the request has been parsed. So we rely
on ->sol without counting on ->som. In order to achieve this, we're
making ->rq.{u,v} relative to the beginning of the message instead
of the buffer. That simplifies the code and makes it cleaner.
Preliminary tests show this is OK.
This might have been introduced with chunk extensions. Note that
the server redirect still does not work because http_get_path()
cannot get the correct path once the request message is in the
HTTP_MSG_DONE state (->som does not point to the start of message
anymore).
The initial code's intention was to loop on the analysers as long
as an analyser is added by another one. [This code was wrong due to
the while(0) which breaks even on a continue statement, but the
initial intention must be changed too]. In fact we should limit the
number of times we loop on analysers in order to limit latency.
Using maxpollevents as a limit makes sense since this tunable is
used for the exact same purposes. We may add another tunable later
if that ever makes sense, though it's very unlikely.
If we accept a new request and that request produces an immediate
response (error, redirect, ...), then we may fail to send it in
case of pipelined requests if the response buffer is full. To avoid
this, we check the availability of at least maxrewrite bytes in the
response buffer before accepting a new pipelined request.
During a redirect, we used to send the last chunk of response with
stream_int_cond_close(). But this is wrong in case of pipeline,
because if the response already contains something, this function
will refrain from touching the buffer. Use a concatenation function
instead.
Also, this call might still fail when the buffer is full, we need
a second fix to refrain from parsing an HTTP request as long as the
response buffer is full, otherwise we may not even be able to return
a pending redirect or an error code.
That patch was incorrect because under some circumstances, the
capture memory could be freed by session_free() and then again
by http_end_txn(), causing a double free and an eventual segfault.
The pool use count was also reported wrong due to this bug.
The cleanup code was removed from session_free() to remain only
in http_end_txn().
Hank A. Paulson reported a massive memory leak when using keep-alive
mode. The information he provided made it easy to find that captured
request and response headers were erased but not released when renewing
a request.
Several HTTP analysers used to set those flags to values that
were useful but without considering the possibility that they
were not called again to clean what they did. First, replace
direct flag manipulation with more explicit macros. Second,
enforce a rule stating that any analyser which changes one of
these flags from the default must restore it after completion,
so that other analysers see correct flags.
With both this fix and the previous one about analyser bits,
we should not see any more stuck sessions.
A request analyser may very well be added while processing a response
(eg: end of an HTTP keep-alive response). It's very dangerous to only
rely on flags that ought to change in order to loop back, so let's
correctly detect a possible new analyser addition instead of guessing.
With the introduction of keep-alive, we have created situations
where an analyser can add other analysers to the current list,
which are behind it, which have already been processed once, and
which are needed immediately because without them there will be
no more I/O activity. This is typically the case for enabling
reading of a new request after preparing for a new request.
Instead of creating specific cases for some analysers (there was
already one such before), we now use a little bit of algorithmics
to create an ordered bit chain supporting priorities and fast
operations.
Another advantage of this new construction is that it's not a
real loop anymore, so if an analyser is unknown, it will not
loop but just ignore it.
Note that it is easy to skip multiple analysers at once now in
order to speed up the checking a bit. Some test code has shown
a minor gain though.
This change has been carefully re-read and has no direct reason
of causing a regression. However it has been tagged "major"
because the fact that it runs the analysers correctly might
trigger an old sleeping bug somewhere in one of the analysers.
This patch implements default-server support, allowing one to change
the default server options. It can be used in [defaults] or [backend]/[listen]
sections (see the example after the list below). Currently the following options are supported:
- error-limit
- fall
- inter
- fastinter
- downinter
- maxconn
- maxqueue
- minconn
- on-error
- port
- rise
- slowstart
- weight
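A short sketch, with purely illustrative addresses and values :
backend dynamic
default-server inter 2s fastinter 500 downinter 5s fall 3 rise 2 weight 10
server web1 192.168.0.10:80 check
server web2 192.168.0.11:80 check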
Supported information, available via "tr/td title":
- cap: capabilities (proxy)
- mode: one of tcp, http or health (proxy)
- id: SNMP ID (proxy, socket, server)
- IP (socket, server)
- cookie (backend, server)
This patch adds "a link" & "a href" html tags for sockets.
As sockets may have the same names as servers, I decided to
add the "+" char (forbidden in names assigned to servers) as a prefix.
It is useless to report statistics if the feature was not enabled.
It also makes it possible to tell whether health analysis is
enabled just by looking at the stats page.
Commit 0dfdf19b64 introduced a
regression because the connection header is now parsed and checked
depending on the configured options, but the options are set after
calling it instead of being set before.
Historically, "option httpclose" has always worked the same way. It
only mangles the "Connection" header in the request and the response
if needed, but does not affect the connection by itself, and ignores
any further data. It is dangerous to change this behaviour without
leaving any other alternative. If an active close is desired, it's
better to make use of "option forceclose" which does exactly what
it intends to do.
So as of now, "option httpclose" will only mangle the headers as
before, and will only affect the connection by itself when combined
with another connection-related option (eg: keepalive or server-close).
We basically have to mimic the code of process_session() here, so
when the remote output is closed, we must abort otherwise we'll end
up with data which cannot leave the buffer.