The HTML page reports the current process connection rate, and the
"show info" command on the stats socket also reports the conn rate
limit and the highest conn rate ever reached.
Note that the max value can be cleared using "clear counters".
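A plausible session on the stats socket (the socket path and values are
illustrative; the field names are assumptions based on this description):

    $ echo "show info" | socat stdio /var/run/haproxy.stat | grep ConnRate
    ConnRate: 25
    ConnRateLimit: 100
    MaxConnRate: 89
    $ echo "clear counters" | socat stdio /var/run/haproxy.stat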
This setting enforces a per-process connection rate limit, regardless of what
may be set per frontend. It can be a way to limit the CPU usage of a process
being severely attacked.
The side effect is that the global process connection rate is now measured
for each incoming connection, so it can now be reported.
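A minimal configuration sketch, assuming the global keyword introduced here
is "maxconnrate" (a limit in connections per second):

    global
        maxconn     10000
        # never accept more than 500 connections per second process-wide
        maxconnrate 500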
This option permits changing the global maxconn setting within the
limit that was set by the initial value, which is now reported as the
hard maxconn value. This makes it possible to immediately accept more
concurrent connections, or to stop accepting new ones until the count
passes below the indicated setting.
The main use of this option is on systems where many haproxy instances
are loaded and admins need to re-adjust resource sharing at run time
to regain a bit of fairness between processes.
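For example, to halve the limit at run time and later restore it (assuming
the command is "set maxconn global"; the socket path is illustrative):

    $ echo "set maxconn global 2000" | socat stdio /var/run/haproxy.stat
    $ echo "set maxconn global 4000" | socat stdio /var/run/haproxy.stat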
The way the unix socket is initialized is awkward. Some of the settings are
put in the socket itself, others in the backend. And more importantly the
global.maxsock value is adjusted so that the stats socket evades the global
maxconn value. This needlessly complicates maxsock computations, since the
stats socket is not supposed to receive hundreds of concurrent connections when
the global maxconn is very low. What is needed however is to ensure that there
are always connections left for the stats socket even when traffic sockets are
saturated, but this guarantee is not offered anymore by current code.
So as of now, the stats socket is subject to the global maxconn limitation just
as any other socket until a reservation mechanism is implemented.
Sometimes a bad content-length header is encountered and this causes
an abort. It's hard to debug without a trace, so let's take a capture
of the contents when this happens.
If a server starts to respond but stops before the body, then we
capture the truncated response. We don't do this on the request
because it would happen too often upon stupid attacks.
Trailing spaces after headers were not trimmed, only the leading ones
were. An issue was detected today with a content-length value which
was padded with spaces and which was rejected. Recent updates to the
http-bis draft made it much clearer that such spaces must be ignored,
so this is what this patch does.
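For illustration, a value padded like the following used to be rejected and
is now accepted, with the surrounding spaces ignored:

    Content-Length:   1234      <-- trailing spaces must be ignored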
It should be backported to 1.4.
Many inet_ntop calls were only partially correct, which was hard to detect given
the complex combinations. Some of them were relying on the listener's proto
instead of the address itself, which could have been different when dealing
with an accept-proxy connection.
The new addr_to_str() function does the dirty job and returns the family, which
makes it particularly suited to calls from switch/case statements. A large number
of if/else statements were removed and the stats output could even be cleaned up
in the case of session dump.
As a side effect of doing this, the resulting code is smaller by almost 1kB.
All changed parts have been tested and produced the expected output.
A similar issue as the previous one causes port mapping to fail in some
combinations of client and server address families. Using the macros fixes
the issue.
In the number of switch/case statements added for the IPv6 changes,
one was wrong and caused the check port to be ignored for outgoing
connections because the socket's family was not read from the right
place. Use the set_host_port() macro instead to fix the issue.
The same cleanup could be performed at a number of other places
and should follow shortly.
Special thanks to Stephane Bakhos of Techboom for reporting a
detailed analysis of this bug.
Patch d5b9fd95 was missing an initialisation of "ctx.table.target", which caused
"show table" to segfault if it was issued after a "show errors" (target pointer == -1).
Some older libc don't define splice() and don't define _syscall*()
either, which causes build errors if splicing is enabled.
To solve this, we now split the syscall redefinition into two layers :
- one file per syscall (epoll, splice)
- one common file to declare the _syscall*() macros
The code is cleaner because files using the syscalls just have to include
their respective file. It's not advised to merge multiple syscall families
into a same file if all are not intended to be used simultaneously, because
defining unused static functions causes warnings to be emitted during build.
As a result, the new USE_MY_SPLICE parameter was added in order to be able
to define the splice() syscall separately.
If "option forwardfor" has the "if-none" argument, then the header is
only added when the request did not already have one. This option has
security implications, and should not be set blindly.
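Typical use (the frontend name is illustrative); the header is added only
when the client or an upstream proxy did not already set one:

    frontend www
        mode http
        option forwardfor if-none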
Manoj Kumar reported a case where haproxy would crash upon start-up. The
cause was an "http-check expect" statement declared in the defaults section,
which caused a NULL regex to be used during the check. This statement is not
allowed in defaults sections precisely because this requires saving a copy
of the regex in the default proxy. But the check was not made to prevent it
from being declared there, hence the issue.
Instead of adding code to detect its abnormal use, we decided to implement
it. It was not that complex because the expect_str part was not used
with regexes, so it could hold the string form of the regex in order to
compile it again for every backend (there's no way to clone regexes).
This patch has been tested and works. So it's both a bugfix and a minor
feature enhancement.
It should be backported to 1.4 though it's not critical since the config
was not supposed to be supported.
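A configuration sketch that is now valid (names and check URL are
illustrative); the expect rule is declared once and recompiled for each
backend:

    defaults
        mode http
        option httpchk GET /health
        http-check expect rstring ^OK$

    backend app1
        server srv1 192.168.0.1:80 check

    backend app2
        server srv2 192.168.0.2:80 check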
"[MINOR] session: add a pointer to the new target into the session" (664beb8)
introduced a regression by changing the type of a peer's target from
TARG_TYPE_PROXY to TARG_TYPE_NONE. The effect of this is that during
a soft-restart the new process no longer tries to connect to the
old process to replicate its stick tables.
This patch sets the type of a peer's target as TARG_TYPE_PROXY and
replication on soft-restart works once again.
Adding health checks has become a real pain, with cross-references to all
checks everywhere because each check is a single option bit. Since they're all
exclusive, let's change this to have a check number only. We reserve 4
bits allowing up to 16 checks (15+tcp), only 7 of which are currently
used. The code has shrunk by almost 1kB and we saved a few option bits.
The "dispatch" option has been moved to px->options, making a few tests
a bit cleaner.
This patch provides a new "option redis-check" statement to enable
server health checks based on the Redis PING request
(http://www.redis.io/commands/ping).
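Usage sketch (addresses are illustrative):

    backend redis
        mode tcp
        option redis-check
        server redis1 127.0.0.1:6379 check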
The new "set maxconn frontend XXX" statement on the stats socket allows
the admin to change a frontend's maxconn value. If some connections are
queued, they will immediately be accepted up to the new limit. If the
limit is lowered, acceptance of new connections may be delayed. This can
be used to temporarily reduce or increase the impact of a specific frontend's
traffic on the whole process.
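Example (the frontend name and socket path are illustrative):

    $ echo "set maxconn frontend http-in 100" | socat stdio /var/run/haproxy.stat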
This global task is used to periodically check for end of resource shortage
and to try to enable queued listeners again. This is important in case some
temporary system-wide shortage is encountered, so that we don't have to wait
for an existing connection to be released before checking the queue again.
For situations where listeners are queued due to the global maxconn being
reached, the task is woken up at least every second. For situations where
a system resource shortage is detected (memory, sockets, ...) the task is
woken up at least every 100 ms. That way, recovery from severe events can
still be achieved under acceptable conditions.
This was revealed with one of the very latest patches which caused
the listener_queue not to be initialized on the stats socket frontend.
And in fact a number of other ones were missing too. This is getting so
boring that now we'll always make use of the same function to initialize
any proxy. Doing so has even saved about 500 bytes on the binary due to
the avoided code redundancy.
No backport is needed.
This function is finally not needed anymore, as it has been replaced with
a per-proxy task that is scheduled when some limits are encountered on
incoming connections or when the process is stopping. The savings should
be noticeable on configs with a large number of proxies. The most important
point is that the rate limiting is now enforced in a clean and solid way.
Peers were stopped on every call to maintain_proxies when stopping=1,
while they should only be stopped once upon call to soft_stop(). This
bug has little impact, mostly increased CPU usage. There is no need to
backport it.
Instead of waking a listener up then making it sleep, we only wake it up
if we know its rate limit is fine. In the future we could improve on top
of that by deciding to wake a proxy-specific task in XX milliseconds to
take care of enabling the listeners again.
Patch d9bbe17b used to keep the rate limit off by one to avoid
a busy loop when the limit is reached. Now that the listeners are
automatically disabled and queued when a limit is reached, we don't
need this workaround anymore and can bring back the most accurate
computation.
Those states have been replaced with PR_STFULL and PR_STREADY respectively,
as these names match them best now. Also, two occurrences of PR_STIDLE
in peers.c have been removed as this did not provide any form of error recovery
anyway.
Now maintain_proxies() only changes proxies states and does not affect their
listeners anymore since they are autonomous. A proxy will switch between the
PR_STIDLE and PR_STRUN states depending on whether it's saturated or not. Next
step will consist in renaming PR_STIDLE to PR_STFULL. This state is now only
used to report the proxy state in the stats.
All listeners that are limited by a proxy-specific resource are now
queued at the proxy level and not globally. This allows finer-grained
wakeups when releasing resources.
When an accept() fails because of a connection limit or a memory shortage,
we now disable it and queue it so that it's dequeued only when a connection
is released. This has improved the behaviour of the process near the fd limit
as now a listener with no connection (eg: stats) will not loop forever
trying to get its connection accepted.
The solution is still not 100% perfect, as we'd like to have this used when
proxy limits are reached (use a per-proxy list) and for safety, we'd need
to have dedicated tasks to periodically re-enable them (eg: to overcome
temporary system-wide resource limitations when no connection is released).
When a listener encounters a resource shortage, it currently stops until
something re-enables it. This is far from perfect as it does not yet handle
the case where the single connection from the listener is rejected (eg: the
stats page).
Now we'll have a special status for resource limited listeners and we'll
queue them into one or multiple lists. That way, each time we have to stop
a listener because of a resource shortage, we can enqueue it and change its
state, so that it is dequeued once more resources are available.
This patch currently does not change any existing behaviour, it only adds
the basic building blocks for doing that.
For listeners that are not bound to a frontend, the limit on the
number of accepted connections is tested at the end of the accept()
loop, but we don't break out of the loop, meaning that if more
connections than what the listener allows are available and if this
is less than the proxy's limits and within the size of a batch, then
they could be accepted. In practice, this problem currently cannot
appear since all listeners are bound to a frontend, and it's a very
minor issue anyway.
1.4 has the same issue (which cannot happen there either), but there
is some code after it, so it's the code cleanup which revealed it.
Managing listeners' state is difficult because they have their own state
and can at the same time have it dictated by their proxy. The pause
is not done properly, as the proxy code is fiddling with sockets. By
introducing new functions such as pause_listener()/resume_listener(), we
make it a bit more obvious how/when they're supposed to be used. The
listen_proxies() function was also renamed to resume_proxies() since
it's only used for pause/resume.
This patch is the first in a series aiming at getting rid of the maintain_proxies
mess. In the end, proxies should not call enable_listener()/disable_listener()
anymore.
When an accept() returns -1 ENFILE due to system limits, it leaves the
connection pending in the backlog and epoll() comes back immediately
afterwards, trying to make it accept the connection again. This causes haproxy to
remain at 100% CPU until something makes an accept() possible again.
Now upon such resource shortage, we mark the listener FULL so that we
only enable it again once at least one connection has been released.
In fact we only do that if there are some active connections on this
proxy, so that it has a chance to be marked not full again. This makes
haproxy remain idle when all resources are used, which helps a lot
releasing those resource as fast as possible.
Backport to 1.4 might be desirable but difficult and tricky.
By default on a single process, we accept 100 connections at once. This is too
much on recent CPUs where the cache is constantly thrashing, because we visit
all those connections several times. We should batch the processing slightly
less so that all the accepted sessions may remain in cache during their initial
processing.
Lowering the batch size from 100 to 32 has changed the connection rate for
concurrencies between 5-10k from 67 kcps to 94 kcps on a Core i5 660 (4M L3),
and forward rates from 30k to 39.5k.
Tests on this hardware show that values between 10 and 30 seem to do the job fine.
When we fail to create a session because of memory shortage, let's at
least try to send a 500 message directly on the socket. Even if we don't
have any buffers left, the kernel's orphans management will take care of
delivering the message as long as there are socket buffers left.
Patch af5149 introduced an issue which can be detected only on out of
memory conditions : a LIST_DEL() may be performed on an uninitialized
struct member instead of a LIST_INIT() during the accept() phase,
causing crashes and memory corruption to occur.
This issue was detected and diagnosed by the Exceliance R&D team.
This is 1.5-specific and very recent, so no existing deployment should
be impacted.
The motivation for this is that when soft-restart is merged
it will become more important to free all relevant memory in deinit().
Discovered using valgrind.
This is used to perform cookie-based stickiness with table replication
between multiple masters and across restarts. This partially overrides
some of the appsession capabilities.
Never add connections allocated to this server to a stick-table.
This may be used in conjunction with backup to ensure that
stick-table persistence is disabled for backup servers.
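A sketch, assuming the new server keyword is "non-stick" (addresses are
illustrative):

    backend app
        stick-table type ip size 200k
        stick on src
        server srv1 192.168.0.1:80 check
        server bkp1 192.168.0.2:80 check backup non-stick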
This pattern fetch function extracts the value of the rdp cookie <name> as
a string and uses this value to match. This enables implementation of
persistence based on the mstshash cookie. This is typically done if there
is no msts cookie present.
This differs from "balance rdp-cookie" in that any balancing algorithm may
be used and thus the distribution of clients to backend servers is not
linked to a hash of the RDP cookie. It is envisaged that using a balancing
algorithm such as "balance roundrobin" or "balance leastconn" will lead
to a more even distribution of clients to backend servers than the hash
used by "balance rdp-cookie".
Example :
    listen tse-farm
        bind 0.0.0.0:3389
        # wait up to 5s for an RDP cookie in the request
        tcp-request inspect-delay 5s
        tcp-request content accept if RDP_COOKIE
        # apply RDP cookie persistence
        persist rdp-cookie
        # Persist based on the mstshash cookie.
        # This only makes sense if balance rdp-cookie is not used.
        stick-table type string size 204800
        stick on rdp_cookie(mstshash)
        server srv1 1.1.1.1:3389
        server srv2 1.1.1.2:3389
apsession_refresh() and apsess_refressh are only used inside apsession.c
and thus can be made static.
The only use of apsession_refresh() is appsession_task_init().
These functions have been re-ordered to avoid the need for
a forward-declaration of apsession_refresh().
If a connection is closed because the backend became unavailable
then log 'D' as the termination condition.
Signed-off-by: Simon Horman <horms@verge.net.au>
This adds the "on-marked-down shutdown-sessions" statement on "server" lines,
which causes all sessions established on a server to be killed at once when
the server goes down. The task's priority is reniced to the highest value
(1024) so that servers holding many tasks don't cause a massive slowdown due
to the wakeup storm.
The motivation for this is to allow iteration of all the connections
of a server without the expense of iterating over the global list
of connections.
The first use of this will be to implement an option to close connections
associated with a server when it is marked as being down or in maintenance
mode.
* The declaration of peer_session_create() does not match its
  definition. As it is only used inside peers.c, make it static.
* Make the declaration of peers_register_table() match its definition.
* Also, make all functions in peers.c that are not also in peers.h
  static.
gcc (Debian 4.6.0-2) 4.6.1 20110329 (prerelease)
...
src/proto_http.c:3029:14: warning: variable ‘del_cl’ set but not used [-Wunused-but-set-variable]
In file included from ebtree/eb64tree.c:23:0:
ebtree/eb64tree.h: In function ‘__eb64_lookup’:
ebtree/eb64tree.h:128:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
ebtree/eb64tree.h: In function ‘__eb64i_lookup’:
ebtree/eb64tree.h:180:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
In file included from ebtree/ebpttree.h:26:0,
from ebtree/ebimtree.c:23:
ebtree/eb64tree.h: In function ‘__eb64_lookup’:
ebtree/eb64tree.h:128:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
ebtree/eb64tree.h: In function ‘__eb64i_lookup’:
ebtree/eb64tree.h:180:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
In file included from ebtree/ebpttree.h:26:0,
from ebtree/ebistree.h:25,
from ebtree/ebistree.c:23:
ebtree/eb64tree.h: In function ‘__eb64_lookup’:
ebtree/eb64tree.h:128:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
ebtree/eb64tree.h: In function ‘__eb64i_lookup’:
ebtree/eb64tree.h:180:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
mysqld >= 5.5 wants the client to announce 4.1+ authentication support,
even if we have no password, so we do this. I also checked on a Debian
potato mysqld 3.22 and it works too, so I assume we are good from 3.22
to 5.5.
[WT: this must be backported to 1.4]
The fullconn value is not easy to get right when doing dynamic regulation,
as it should depend on the maxconns of the frontends that can reach a
backend. Since the parameter is mandatory, many configs are found with
an inappropriate default value.
Instead of rejecting configs without a fullconn value, we now set it to
10% of the sum of the configured maxconns of all the frontends which are
susceptible to branch to the backend. That way if new frontends are added,
the backend's fullconn automatically adjusts itself.
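For example, a backend reachable from two frontends with maxconn 100 and
300 now gets fullconn 40 (10% of 400) by default, as in this illustrative
sketch:

    frontend f1
        maxconn 100
        default_backend app

    frontend f2
        maxconn 300
        default_backend app

    backend app
        # no fullconn set: it now defaults to 10% of (100 + 300) = 40
        server srv1 192.168.0.1:80 check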
Bashkim Kasa reported that the stats admin page did not work when colons
were used in server or backend names. This was caused by url-encoding
resulting in ':' being sent as '%3A'. Now we systematically decode the
field names and values to fix this issue.
Since version 1.0.0, it's forbidden to have a cookie specified without at
least one server. This test is useless and makes it complex to write APIs
to iteratively generate working configurations. Remove the test.
It's more expensive to call splice() on short payloads than to use
recv()+send(). One of the reasons is that doing a splice() involves
allocating a pipe. One other reason is that the kernel will have to
copy itself if we try to splice less than a page. So let's set a
4kB threshold below which we don't splice.
A quick test shows that on chunked encoded data, with splice we had
6826 syscalls (1715 splice, 3461 recv, 1650 send) while with this
patch, the same transfer resulted in 5793 syscalls (3896 recv, 1897
send).
Fast-forwarding between file descriptors is nice but can be counter-productive
when only one part of the buffer is forwarded, because it can result in doubling
the number of send() syscalls. This is what happens on HTTP chunking, because
the chunk data are sent, then the CRLF + next chunk size are parsed and immediately
scheduled for forwarding. This results in two send() for the same block while a
single one would have done it.
Now that we support the http-no-delay mode, we can optimize HTTP
chunking again by always waiting for more data to come until the
last chunk is met.
This patch may or may not be backported to 1.4, it's not a big deal,
it will mainly help for chunks which are aligned with the buffer size.
There are some very rare server-to-server applications that abuse the HTTP
protocol and expect the payload phase to be highly interactive, with many
interleaved data chunks in both directions within a single request. This is
absolutely not supported by the HTTP specification and will not work across
most proxies or servers. When such applications attempt to do this through
haproxy, it works but they will experience high delays due to the network
optimizations which favor performance by instructing the system to wait for
enough data to be available in order to only send full packets. Typical
delays are around 200 ms per round trip. Note that this only happens with
abnormal uses. Normal uses such as CONNECT requests or WebSockets are not
affected.
When "option http-no-delay" is present in either the frontend or the backend
used by a connection, all such optimizations will be disabled in order to
make the exchanges as fast as possible. Of course this offers no guarantee on
the functionality, as it may break at any other place. But if it works via
HAProxy, it will work as fast as possible. This option should never be used
by default, and should never be used at all unless such a buggy application
is discovered. The impact of using this option is an increase of bandwidth
usage and CPU usage, which may significantly lower performance in high
latency environments.
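Usage sketch (the frontend name is illustrative); per the description, the
option may be set in either the frontend or the backend:

    frontend buggy-app
        mode http
        option http-no-delay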
This change should be backported to 1.4 since the first report of such a
misuse was in 1.4. Next patch will also be needed.
The FL_TCP flag was a leftover from the old days when we were using TCP_CORK.
With MSG_MORE it's not needed anymore so we can remove the condition and
sensibly simplify the test.
When sending is complete, it's preferred to systematically clear the flags
that were set for that transfer. What could happen is that the to_forward
counter had caused the MSG_MORE flag to be set and BF_EXPECT_MORE not to
be cleared, resulting in this flag being unexpectedly maintained for next
round.
The code has taken extreme care of not doing this till now, but it's not
acceptable that the caller has to know these precise semantics. So let's
unconditionally clear the flag instead.
For the sake of safety, this fix should be backported to 1.4.
Commit 57f5c1 used to provide a nice improvement on chunked encoding since
it ensured that we did not set a PUSH flag for every chunk or buffer data
part of a chunked transfer.
Some applications appear to erroneously abuse HTTP chunking in order to
get interactive exchanges between a user agent and an origin server with
very small chunks. While it happens to work through haproxy, it's terribly
slow due to the latency added after passing each chunk to the system, which
could wait up to 200ms before pushing them onto the wire.
So we need an interactive mode for such usages. In the meantime, step back
on the optim, but not completely, so that we still keep the flag as long as
we know we're not finished with the current chunk.
This change should be backported to 1.4 too as the issue was discovered
with it.
This status code is used in response to requests matching "monitor-uri".
Some users need to adjust it to fit their needs (eg: make some strings
appear there). As it's already defined as a chunked string and used
exactly like other status codes, it makes sense to make it configurable
with the usual "errorfile", "errorloc", ...
Some people like to make the monitoring URL testable from unsafe locations.
Reporting haproxy's existence there can sometimes be problematic. This patch
should not be backported to 1.4 because it is possible, even though unlikely,
that some scripts rely on this word to appear there.
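A sketch, assuming the monitor response uses status 200 so that an
"errorfile 200" directive overrides its body (paths are illustrative):

    frontend www
        monitor-uri /site-check
        errorfile 200 /etc/haproxy/monitor-200.http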
As reported by Lauri-Alo Adamson, version 1.5-dev6 doesn't support
stick-tables with a binary type.
This issue was introduced in the commit 4f92d32 where a line was erroneously
deleted, and is 1.5-specific.
Mark Brooks reported that commit 1b4b7c broke tproxy in 1.5-dev6. Nick
Chalk tracked the issue down to a missing address family setting in
tcp_bind_socket() which resulted in a failure to use get_addr_len().
This issue is 1.5-specific.
Christopher Blencowe reported that the httpchk_expect() function was
lacking a test for incomplete responses : if the server sends only the
headers in the first packet and the body in a subsequent one, there is
a risk that the check fails without waiting for more data. A failure
rate of about 1% was reported.
This fix must be backported to 1.4.
When doing fix 24581bae02 to correctly handle
response cookies, an unfortunate typo was inserted in the less likely code
path, resulting in a risk of crash when cookie-based persistence is enabled
and the server emits a cookie with several spaces around the equal sign.
This bug was noticed during a code backport. Its effects were never reported
because this situation is very unlikely to appear, but it can be provoked on
purpose by the server.
This patch must be backported to 1.4 versions which contain the fix above
(anything > 1.4.8), and to similar 1.3 versions > 1.3.25. 1.5-dev versions
after 1.5-dev2 are affected too.
John Helliwell reported a runtime issue on Solaris since 1.5-dev5. Traces
show that connect() returns EINVAL, which means the socket length is not
appropriate for the family. Solaris does not like being called with sizeof
and needs the address family's size on sockaddr_storage.
The fix consists in adding a get_addr_len() function which returns the
socket's address length based on its family. Tests show that this works
for both IPv4 and IPv6 addresses.