In a config where server "s1" is marked disabled and "s2" tracks "s1",
s2 appears disabled on the stats page but is still inserted into the LB
farm, because the tracking is resolved too late in the configuration
process. We now resolve tracked servers before building the LB maps, and
we also mark the tracking server in maintenance mode, which was
previously not done and caused half of the issue.
The last point is that we also protect srv_is_usable() against electing
a server marked for maintenance. This is not strictly needed, but it is
a safe choice and makes a lot of sense.
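For illustration, a minimal config reproducing the situation (addresses
are made up):

    backend app
        server s1 192.168.0.1:80 check disabled
        server s2 192.168.0.2:80 track s1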
This fix must be backported to 1.4.
New option "http-send-name-header" specifies the name of a header which
will hold the server name in outgoing requests. This is the name of the
server the connection is really sent to, which means that upon redispatches,
the header's value is updated so that it always matches the server's name.
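A minimal sketch (the header name is arbitrary):

    backend app
        http-send-name-header X-Target-Server
        server srv1 192.168.0.1:80 check
        server srv2 192.168.0.2:80 check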
This patch removes the former limitation to two loggers.
Loggers are now stored in linked lists.
With "log global", the contents of the global loggers list are appended
to the current proxy's list. Each "log" entry is appended at the end of
the proxy's list.
"no log" flushes the proxy's logger list.
Up to now, if a cookie value was specified on a server when the proxy was
in TCP mode, it would cause a fatal error. Now we only report a warning,
since the cookie will be ignored. This makes it easier to generate configs
from scripts.
When reading the code, the "tracked" member of a server makes one
think the server is tracked, while it's the opposite: it's a pointer
to the server being tracked. This is particularly true in constructs
such as:

    if (srv->tracked) {

Since it's the second time I get caught misunderstanding it, let's
rename it to "track" to avoid the confusion.
Baptiste Assmann reported that a config where a non-existing peers
section is referenced by a stick-table causes a segfault after displaying
the error. This is caused by the freeing of the peers section; setting
the pointer to NULL after displaying the error fixes the issue.
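For the record, a minimal config triggering the crash (names are made up):

    backend app
        stick-table type ip size 200k peers nonexistent
        stick on src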
For a long time, the max number of headers was tied to the buffer
size. Since the header size can be configured at runtime, it does not
make much sense anymore.
Nothing made a static value necessary, so let's turn this into a tunable
with a default value of 101, which matches what was previously used.
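A minimal sketch, assuming the tunable is the global "tune.http.maxhdr":

    global
        tune.http.maxhdr 101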
It makes no sense to have one pointer to the hdr_idx pool in each proxy
struct since these pools do not depend on the proxy. Let's have a common
pool instead as it is already the case for other types.
By default, pipes are created with the default size for the system. But
sometimes when using TCP splicing, it can improve performance to increase
pipe sizes, especially if it is suspected that pipes are not filled and
that many calls to splice() are performed. This has an impact on the
kernel's memory footprint, so it must not be changed if the impact is
not understood.
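A minimal sketch, assuming the tunable is the global "tune.pipesize"
(in bytes):

    global
        tune.pipesize 524288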
If a peers section contains no peer named like the local peer, we must
destroy it, otherwise a NULL peer frontend remains in the lists and a
segfault can happen upon a soft restart.
We also now report the missing peer name to help troubleshooting.
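For reference, a sketch of a valid section; one peer's name must match
the local peer name (the local hostname, or the name passed with -L):

    peers mypeers
        peer host1 10.0.0.1:1024
        peer host2 10.0.0.2:1024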
Peers' frontends must have logging disabled by default, which was not
the case, so logs were randomly emitted upon restart, sometimes causing
a new process to fail to replace the old one.
This made sense a long time ago, but since the maxconn is dynamically
computed from the tracking tables, it does not make any sense anymore
and would harm future changes.
This one enforces a per-process connection rate limit, regardless of what
may be set per frontend. It can be a way to limit the CPU usage of a process
being severely attacked.
The side effect is that the global process connection rate is now measured
for each incoming connection, so it will be possible to report it.
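A minimal sketch, assuming the new setting is the global "maxconnrate"
(connections per second):

    global
        maxconnrate 1000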
If "option forwardfor" has the "if-none" argument, then the header is
only added when the request did not already have one. This option has
security implications, and should not be set blindly.
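Example:

    defaults
        mode http
        option forwardfor if-none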
Manoj Kumar reported a case where haproxy would crash upon start-up. The
cause was an "http-check expect" statement declared in the defaults section,
which caused a NULL regex to be used during the check. This statement is not
allowed in defaults sections precisely because this requires saving a copy
of the regex in the default proxy. But the check was not made to prevent it
from being declared there, hence the issue.
Instead of adding code to detect its abnormal use, we decided to
implement it. It was not that complex, because the expect_str part was
not used with regexes, so it could hold the string form of the regex in
order to recompile it for every backend (there is no way to clone
regexes).
This patch has been tested and works. So it's both a bugfix and a minor
feature enhancement.
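A minimal sketch of the now-supported construct:

    defaults
        mode http
        option httpchk GET /health
        http-check expect rstring ^OK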
It should be backported to 1.4 though it's not critical since the config
was not supposed to be supported.
Adding health checks has become a real pain, with cross-references to
all checks everywhere because each of them is a single bit. Since they
are all exclusive, let's change this to use a check number instead. We
reserve 4 bits, allowing up to 16 checks (15 + tcp), only 7 of which are
currently used. The code has shrunk by almost 1 kB and we saved a few
option bits.
The "dispatch" option has been moved to px->options, making a few tests
a bit cleaner.
This patch provides a new "option redis-check" statement to enable
server health checks based on the Redis PING command
(http://www.redis.io/commands/ping).
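Example:

    listen redis 0.0.0.0:6379
        mode tcp
        option redis-check
        server redis1 10.0.0.1:6379 check inter 2000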
This was revealed with one of the very latest patches which caused
the listener_queue not to be initialized on the stats socket frontend.
And in fact a number of other ones were missing too. This is getting so
boring that now we'll always make use of the same function to initialize
any proxy. Doing so has even saved about 500 bytes on the binary due to
the avoided code redundancy.
No backport is needed.
This function is finally not needed anymore, as it has been replaced with
a per-proxy task that is scheduled when some limits are encountered on
incoming connections or when the process is stopping. The savings should
be noticeable on configs with a large number of proxies. The most important
point is that the rate limiting is now enforced in a clean and solid way.
All listeners that are limited by a proxy-specific resource are now
queued at the proxy level and not globally. This allows finer-grained
wakeups when a resource is released.
Never add connections allocated to this server to a stick-table.
This may be used in conjunction with backup to ensure that
stick-table persistence is disabled for backup servers.
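A minimal sketch, assuming the new server keyword is "non-stick":

    backend app
        stick-table type ip size 200k
        stick on src
        server primary 10.0.0.1:80 check
        server spare   10.0.0.2:80 check backup non-stick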
This adds the "on-marked-down shutdown-sessions" statement on "server" lines,
which causes all sessions established on a server to be killed at once when
the server goes down. The task's priority is reniced to the highest value
(1024) so that servers holding many tasks don't cause a massive slowdown due
to the wakeup storm.
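Example:

    backend app
        server app1 10.0.0.1:80 check on-marked-down shutdown-sessions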
The motivation for this is to allow iteration of all the connections
of a server without the expense of iterating over the global list
of connections.
The first use of this will be to implement an option to close
connections associated with a server when it is marked as being down or
in maintenance mode.
mysqld >= 5.5 wants the client to announce 4.1+ authentication support,
even if we have no password, so we do this. I also checked on a Debian
potato mysqld 3.22 and it works too, so I assume we are good from 3.22
to 5.5.
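For reference, the check this affects is configured as:

    listen mysql 0.0.0.0:3306
        mode tcp
        option mysql-check user haproxy
        server db1 10.0.0.1:3306 check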
[WT: this must be backported to 1.4]
The fullconn value is not easy to get right when doing dynamic regulation,
as it should depend on the maxconns of the frontends that can reach a
backend. Since the parameter is mandatory, many configs are found with
an inappropriate default value.
Instead of rejecting configs without a fullconn value, we now set it to
10% of the sum of the configured maxconns of all the frontends which are
susceptible to branch to the backend. That way if new frontends are added,
the backend's fullconn automatically adjusts itself.
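For example, with two frontends of maxconn 2000 and 3000 both able to
reach backend "app", the implicit fullconn becomes
(2000 + 3000) / 10 = 500:

    frontend f1
        maxconn 2000
        default_backend app

    frontend f2
        maxconn 3000
        default_backend app

    backend app
        # no fullconn needed anymore: defaults to 500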
Since version 1.0.0, it's forbidden to have a cookie specified without at
least one server. This test is useless and makes it complex to write APIs
to iteratively generate working configurations. Remove the test.
There are some very rare server-to-server applications that abuse the HTTP
protocol and expect the payload phase to be highly interactive, with many
interleaved data chunks in both directions within a single request. This is
absolutely not supported by the HTTP specification and will not work across
most proxies or servers. When such applications attempt to do this through
haproxy, it works but they will experience high delays due to the network
optimizations which favor performance by instructing the system to wait for
enough data to be available in order to only send full packets. Typical
delays are around 200 ms per round trip. Note that this only happens with
abnormal uses. Normal uses such as CONNECT requests or WebSockets are not
affected.
When "option http-no-delay" is present in either the frontend or the backend
used by a connection, all such optimizations will be disabled in order to
make the exchanges as fast as possible. Of course this offers no guarantee on
the functionality, as it may break at any other place. But if it works via
HAProxy, it will work as fast as possible. This option should never be used
by default, and should never be used at all unless such a buggy application
is discovered. The impact of using this option is an increase of bandwidth
usage and CPU usage, which may significantly lower performance in high
latency environments.
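Example:

    backend interactive
        option http-no-delay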
This change should be backported to 1.4 since the first report of such a
misuse was in 1.4. Next patch will also be needed.
It's always been a mess to debug wrong listening addresses because
the parsing function did not indicate the file and line number. Now
it does. Since the code was almost a duplicate of str2sa_range, it
now makes use of it and has been significantly reduced.
The parser now distinguishes between pure addresses and address:port. This
is useful for some config items where only an address is required.
Raw IPv6 addresses are now parsed, but IPv6 host name resolution is still not
handled (gethostbyname does not resolve IPv6 names to addresses).
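For illustration, raw IPv6 notation such as the wildcard bind below is
handled (address "::" plus port 8080):

    listen v6app :::8080
        mode tcp
        server s1 10.0.0.1:8080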
This option enables use of the PROXY protocol with the server, which
allows haproxy to transport the original client's address across
multiple architecture layers.
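A minimal sketch, assuming the server keyword is "send-proxy":

    backend app
        server app1 10.0.0.1:80 check send-proxy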
And also rename "req_acl_rule" to "http_req_rule". At the beginning that
was a bit confusing to me, especially the "req_acl" list which in fact
holds what we call rules. After some digging, it appeared that some
part of the code is 100% HTTP and not just related to authentication
anymore, so let's move that part to HTTP and keep the auth-only code
in auth.c.
This patch turns the internal server addresses into sockaddr_storage to
store IPv6 addresses, and makes the connect() function use it. This
code already works, but some caveats with getaddrinfo/gethostbyname
still need to be sorted out, and the changes had to be merged at this
stage of the internal architecture changes. So for now the config
parser will not emit IPv6 addresses, so that the user experience
remains unchanged.
This change should have absolutely zero user-visible effect, otherwise
it's a bug introduced during the merge, that should be reported ASAP.
Since all of them are defined as proxy options, it's better to ensure
that at most one of them is enabled at once. The priority has been set
according to what is already performed in the backend:
1) dispatch
2) http_proxy
3) transparent
Till now we used the fact that the dispatch address was not null to
detect dispatch mode. This was very inconvenient, so let's have a
dedicated option.
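For reference, dispatch mode looks like this (address is made up):

    listen legacy 0.0.0.0:8080
        dispatch 192.168.0.10:8080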
We used to only count one socket instead of one per listener. This makes
the socket count wrong, preventing from automatically computing the proper
number of sockets to bind.
This fix must be backported to 1.4 and 1.3.
I have written a small patch to enable a correct PostgreSQL health check.
It works similarly to mysql-check, with the very same parameters.
E.g.:

    listen pgsql 127.0.0.1:5432
        mode tcp
        option pgsql-check user pgsql
        server masterdb pgsql.server.com:5432 check inter 10000
One of the requirements we have is to run multiple instances of haproxy on a
single host; this is so that we can split the responsibilities (and change
permissions) between product teams. An issue we ran up against is how we
would distinguish between the logs generated by each instance. The solution
we came up with (please let me know if there is a better way) is to override
the application tag written to syslog. We can then configure syslog to write
these to different files.
I have attached a patch adding a global option 'log-tag' to override the
default syslog tag 'haproxy' (actually defaults to argv[0]).
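Example:

    global
        log-tag haproxy-teamA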
By passing a negative value to the "mss" argument of "bind" lines, it
becomes possible to subtract this value from the MSS advertised by the
client, which results in segments smaller than advertised. The effect
is useful with some TCP stacks which ACK less often when segments are
not full, because they only ACK every other full segment as suggested
by RFC1122.
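Example (shrink segments by 40 bytes relative to the advertised MSS):

    frontend app
        bind :8080 mss -40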
NOTE: currently this has no effect on Linux kernel 2.6, a kernel patch
is still required to change the MSS of established connections.
Haproxy includes the IP of the machine rather than the hostname in
the syslog headers it sends. Unfortunately this means that for each log
line rsyslog does a reverse DNS on the client IP, and in the case of
non-routable IPs one gets the public hostname, not the internal one.
While this is valid according to RFC3164, as one might imagine it is
troublesome if you have some machines with public IPs, internal IPs, no
reverse DNS entries, etc. and you want a standardized hostname-based log
directory structure. The RFC says the preferred value is the hostname.
This patch adds a global "log-send-hostname" statement which accepts an
optional string to force the host name. If unset, the local host name
is used.
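Example:

    global
        log-send-hostname myhost1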
When using haproxy in multi-process mode (nbproc > 1), some features
may not be fully compatible or may not work at all. haproxy will now
display a warning on startup for:
- appsession
- sticking rules
- stats / stats admin
- stats socket
- peers (fatal error in that case)
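For instance, the following will trigger those warnings at startup if
any of the features above is configured (and a fatal error for peers):

    global
        nbproc 4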
When the number of servers is a multiple of the size of the input set,
map-based hash can be inefficient. This typically happens with 64
servers when doing URI hashing. The "avalanche" hash-type applies an
avalanche hash before performing a map lookup in order to smooth the
distribution. The result is slightly less smooth than the map for small
numbers of servers, but still better than the consistent hashing.
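A minimal sketch, assuming the keyword is accepted directly by
"hash-type":

    backend static
        balance uri
        hash-type avalanche
        server s1 10.0.0.1:80 check
        server s2 10.0.0.2:80 check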