isalnum, isdigit and friends are really annoying because they take
an int, into which an unsigned char should be passed, while strings
everywhere use plain chars. Solaris implements these functions as
macros relying on an array lookup, which easily triggers warnings
showing where we have mistakenly passed a char instead of an unsigned
char or an int. Those warnings may indicate real bugs on some platforms
depending on the implementation.
When a host name could not be resolved, an alert was emitted but the
service used to start with 0.0.0.0 for the IP address, because the
address parsing functions could not report an error. This is now
changed. This fix must be backported to 1.3 as it was first discovered
there.
[WT: it was not a bug, I did it on purpose to leave no hole between IDs,
though it's not very practical when admins want to force some entries
after they have been used, because they'd rather leave a hole than
renumber everything]
Forcing some IDs should not shift others.
Regression introduced in 53fb4ae261
---cut here---
global
    stats socket /home/ole/haproxy.stat user ole group ole mode 660

frontend F1
    bind 127.0.0.1:9999
    mode http

backend B1
    mode http

backend B2
    mode http
    id 9999

backend B3
    mode http

backend B4
    mode http
Before 53fb4ae261:
$ echo "show stat" | socat unix-connect:/home/ole/haproxy.stat stdio|cut -d , -f 28
iid
1
2
9999
4
5
After 53fb4ae261:
$ echo "show stat" | socat unix-connect:/home/ole/haproxy.stat stdio|cut -d , -f 28
iid
1
2
9999
3
4
With this patch:
$ echo "show stat" | socat unix-connect:/home/ole/haproxy.stat stdio|cut -d , -f 28
iid
1
2
9999
4
5
This patch fixes the config parser so that it no longer leaks memory
on each default-server statement, and adds several missing free
calls in deinit():
- free(l->name)
- free(l->counters)
- free(p->desc);
- free(p->fwdfor_hdr_name);
None of them are critical, hopefully.
The SSL and SQL checks only performed a free() of the request without
replacing it, so having multiple SSL/SQL check declarations after another
check type causes a double free condition during config parsing. This
should be backported although it's harmless.
Anonymous ACLs allow the declaration of rules which rely directly on
ACL expressions without passing via the declaration of an ACL. Example :
With named ACLs :
acl site_dead nbsrv(dynamic) lt 2
acl site_dead nbsrv(static) lt 2
monitor fail if site_dead
With anonymous ACLs :
monitor fail if { nbsrv(dynamic) lt 2 } || { nbsrv(static) lt 2 }
Support the new syntax (http-request allow/deny/auth) in
http stats.
Now it is possible to use the same syntax as in the
frontend/backend http-request access control:
acl src_nagios src 192.168.66.66
acl stats_auth_ok http_auth(L1)
stats http-request allow if src_nagios
stats http-request allow if stats_auth_ok
stats http-request auth realm LB
The old syntax is still supported, but it is now emulated
via private ACLs and an additional userlist.
Add generic authentication & authorization support.
Groups are implemented as bitmaps so the count is limited to
sizeof(int)*8 == 32.
Encrypted passwords are supported with libcrypt and crypt(3), so it is
possible to use any method supported by your system. For example, modern
Linux/glibc installations support MD5/SHA-256/SHA-512 and of course the
classic DES-based encryption.
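For illustration, a minimal userlist might look like this (matching the
http_auth(L1) example above; user names and the hash are made up, and the
exact hash format depends on what your system's crypt(3) supports):

userlist L1
    group G1 users tiger,scott
    # encrypted password, using any crypt(3) method available on the system
    user tiger password $6$k6y3o.eP$JlKBx9za9667qe4
    # clear-text password, for testing only
    user scott insecure-password elgato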
Just as for the req* rules, we can now condition rsp* rules with ACLs.
ACLs match on response, so volatile request information cannot be used.
A warning is emitted if a configuration contains such an anomaly.
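For example, a hedged sketch (assuming the "status" ACL criterion which
matches the response status code; names are illustrative):

acl srv_error status 500:599
rspadd X-Backend-Error:\ 1 if srv_error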
All the req* rules except the reqadd rules can now be specified with
an if/unless condition. If a condition is specified and does not match,
the filter is ignored. This is particularly useful with reqidel, reqirep
and reqtarpit.
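For instance, a small sketch (names are illustrative):

acl internal src 192.168.0.0/24
# strip client-supplied X-Forwarded-For except from trusted hosts
reqidel ^X-Forwarded-For: unless internal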
A new function was added to take care of the code common to
all those keywords. This has saved 8 kB of object code and about
500 lines of source code. It has also made it possible to spot and
fix minor bugs (allocated args that were never used).
The code could be factored even more but that would make it a bit
more complex which is not interesting at this stage.
Various tests have been performed, and the warnings and errors are
still correctly reported and everything seems to work as expected.
Now a server can check the contents of the header X-Haproxy-Server-State
to know how haproxy sees it. The same values as those reported in the stats
are provided :
- up/down status + check counts
- throttle
- weight vs backend weight
- active sessions vs backend sessions
- queue length
- haproxy node name
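A minimal sketch of how this can be enabled on a backend (assuming the
"http-check send-state" directive added for this purpose; names are
illustrative):

backend app
    option httpchk HEAD /health HTTP/1.0
    http-check send-state
    server srv1 10.0.0.1:80 check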
Currently we cannot easily add headers nor anything to HTTP checks
because the requests are pre-formatted with the last CRLF. Make the
check code add the CRLF itself so that we can later add useful info.
Hi Willy,
I've made a quick pass on the "defaults" column in the Proxy keywords matrix (chapter 4.1. in the documentation).
This patch resyncs the code and the documentation. I let you decide if some keywords that still work in the "defaults" section should be forbidden.
- default_backend : in the matrix, "defaults" was not supported but the keyword details say it is.
Tests also show it works, so I've updated the matrix.
- capture cookie : in the keyword details, we can read `It is not possible to specify a capture in a "defaults" section.'.
Ok, even if the tests worked, I've added an alert in the configuration parser (as is done for capture request/response header).
- description : not supported in "defaults", I added an alert in the parser.
I've also noticed that this keyword doesn't appear in the documentation.
There's one "description" entry, but for the "global" section, which is for a different use (the patch doesn't update the documentation).
- grace : even if this is maybe useless, it works in "defaults". Documentation is updated.
- redirect : alert is added in the parser.
- rsprep : alert added in the parser.
--
Cyril Bonté
Despite what is explicitly stated in HTTP specifications,
browsers still use the undocumented Proxy-Connection header
instead of the Connection header when they connect through
a proxy. As such, proxies generally implement support for
this stupid header name, breaking the standards and making
it harder to support keep-alive between clients and proxies.
Thus, we add a new "option http-use-proxy-header" to tell
haproxy that if it sees requests which look like proxy
requests, it should use the Proxy-Connection header instead
of the Connection header.
This is used to force access to down servers for some requests. This
is useful when validating that a change on a server correctly works
before enabling the server again.
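A possible sketch, assuming this is the "force-persist" directive (the
keyword name is an assumption based on the description; names are
illustrative):

acl from_admin src 10.0.0.0/8
# let administrators reach a server even while it is marked down
force-persist if from_admin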
Sometimes we need to be able to change the default kernel socket
buffer size (recv and send). Four new global settings have been
added for this :
- tune.rcvbuf.client
- tune.rcvbuf.server
- tune.sndbuf.client
- tune.sndbuf.server
Those can be used to reduce kernel memory footprint with large numbers
of concurrent connections, and to reduce risks of write timeouts with
very slow clients due to excessive kernel buffering.
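For example (values are illustrative):

global
    tune.rcvbuf.client 65536
    tune.rcvbuf.server 65536
    tune.sndbuf.client 32768
    tune.sndbuf.server 32768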
Sometimes it can be desired to return a location which is the same
as the request with a slash appended when there was not one in the
request. A typical use of this is for sending a 301 so that people
don't reference links without the trailing slash. The name of the
new option is "append-slash" and it can be used on "redirect"
statements in prefix mode.
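A sketch of typical usage (the ACL is illustrative):

acl missing_slash path_reg ^/article/[^/]*$
redirect code 301 prefix / append-slash if missing_slash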
This patch implements default-server support allowing to change
default server options. It can be used in [defaults] or [backend]/[listen]
sections. Currently the following options are supported:
- error-limit
- fall
- inter
- fastinter
- downinter
- maxconn
- maxqueue
- minconn
- on-error
- port
- rise
- slowstart
- weight
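For example (values are illustrative):

backend dynamic
    default-server inter 2s fall 3 rise 2 slowstart 60s maxconn 100
    server srv1 10.0.0.1:80 check
    server srv2 10.0.0.2:80 check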
Supported information, available via "tr/td title":
- cap: capabilities (proxy)
- mode: one of tcp, http or health (proxy)
- id: SNMP ID (proxy, socket, server)
- IP (socket, server)
- cookie (backend, server)
This option enables HTTP keep-alive on the client side and close mode
on the server side. This offers the best latency on the slow client
side, and still saves as many resources as possible on the server side
by actively closing connections. Pipelining is supported on both requests
and responses, though there is currently no reason to get pipelined
responses.
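A minimal sketch, assuming this is the option named "http-server-close":

defaults
    mode http
    option http-server-close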
This option was disabled for frontends in the configuration because
it was useless in its initial implementation, though it was still
checked in the code. Let's officially enable it now.
The previous check was correct: the RFC states that it is required
to have a domain-name which contains a dot AND begins with a dot.
However, currently some (all?) browsers do not obey this specification,
so such a configuration might work.
This patch reverts 3d8fbb6658 but
changes the check from FATAL to WARNING and extends the message.
Fix 500b8f0349 corrected the patch for the 64-bit
case but caused the opposite type issue to appear on 32-bit platforms. Cast
the difference and be done with it, since gcc does not agree on the type
carrying the difference between two pointers on 32-bit and 64-bit platforms.
Implement decreasing health based on observing communication between
HAProxy and servers.
Changes in v2:
- documentation
- close race between a started check and health analysis event
- don't force fastinter if it is not set
- better names for options
- layer4 support
Changes in v3:
- add stats
- port to the current 1.4 tree
Today I was testing headers manipulation but I met a bug with my first test.
To reproduce it, add for example this line :
rspadd Cache-Control:\ max-age=1500
Check the response header, it will provide :
Cache-Control: max-age=15000 <= the last character is duplicated
This only happens when we use backslashes on the last line of the
configuration file, when that line lacks a final newline.
Also if the last line is like :
rspadd Cache-Control:\ max-age=1500\
the last backslash causes a segfault.
This is not due to rspadd but to a more general bug in cfgparse.c :
...
    if (skip) {
        memmove(line + 1, line + 1 + skip, end - (line + skip + 1));
        end -= skip;
    }
...
should be :
...
    if (skip) {
        memmove(line + 1, line + 1 + skip, end - (line + skip));
        end -= skip;
    }
...
I've reproduced it with haproxy 1.3.22 and the last 1.4 snapshot.
In some environments it is not possible to rely on any wildcard for a
domain name (eg: .com, .net, .fr...) so it is required to send multiple
domain extensions. (Un)fortunately the syntax check on the domain name
prevented that from being done the dirty way. So let's just build a
domain list when multiple domains are passed on the same line.
(cherry picked from commit 950245ca2b)
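For example, a sketch (domains are illustrative):

cookie SRV insert indirect nocache domain .example.com domain .example.net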
It was an OR instead of an AND, so it was required to have a cookie
name which contained a dot AND began with a dot.
(cherry picked from commit a1e107fc13)
Holger Just reported that running ACLs with too many args caused
a segfault during config parsing. This is caused by a wrong test
on argument count. In case of too many arguments on a config line,
the last one was not correctly zeroed. This is now done and we
report the error indicating what part had been truncated.
(cherry picked from commit 3b39c1446b)
To sum up :
- len : it's now the maximum number of characters for the value, preventing
garbled results.
- a new option "prefix" is added; this allows the use of dynamic cookie
names (e.g. ASPSESSIONIDXXX).
Previously in the thread, I wanted to use the value found with
"capture cookie" but when I started to update the documentation, I
found this solution quite weird. I've made a small rework to not
depend on "capture cookie".
- There's the possibility to define the URL parser mode (path parameters
or query string).
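With these changes, an appsession line could look like this (a sketch;
values and option ordering are illustrative):

backend dynamic
    appsession ASPSESSIONID len 64 timeout 3h prefix mode query-string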
Right now, an HTTP server cannot track a TCP server and vice-versa.
This patch enables proxy tracking without relying on the proxy's mode
(tcp/http/health). It only requires a matching proxy name to exist. The
original function was renamed to findproxy_mode().
The code part which waits for an HTTP response has been extracted
from the old function. We now have two analysers and the second one
may re-enable the first one when a 1xx response is encountered.
This has been tested and works.
The calls to stream_int_return() that were remaining in the wait
analyser have been converted to stream_int_retnclose().
This patch has 2 goals :
1. I wanted to test the appsession feature with a small PHP code,
using PHPSESSID. The problem is that when PHP gets an unknown session
id, it creates a new one with this ID. So, when sending an unknown
session to PHP, persistence is broken : haproxy won't see any new
cookie in the response and will never attach this session to a
specific server.
This also happens when you restart haproxy : the internal hash becomes
empty and all sessions lose their persistence (load balancing the
requests on all backend servers, creating a new session on each one).
For a user, it's like the service is unusable.
The patch modifies the code to make haproxy also learn the persistence
from the client : if no session is sent from the server, then the
session id found in the client part (using the URI or the client cookie)
is used and associated with the server that gave the response.
As it's probably not a feature usable in all cases, I added an option
to enable it (by default it's disabled). The syntax of appsession becomes :
appsession <cookie> len <length> timeout <holdtime> [request-learn]
This helps haproxy repair the persistence (with the risk of losing the
session at the next request, as the user will probably not be load
balanced to the same server the first time).
2. This patch also tries to reduce the memory usage.
Here is a little example to explain the current behaviour :
- Take a Tomcat server where /session.jsp is valid.
- Send a request using a cookie with an unknown value AND a path
parameter with another unknown value :
curl -b "JSESSIONID=12345678901234567890123456789012" http://<haproxy>/session.jsp;jsessionid=00000000000000000000000000000001
(I know, it's unexpected to have a request like that on a live service)
Here, haproxy finds the URI session ID and stores it in its internal
hash (with no server associated). But it also finds the cookie session
ID and stores it again.
- As a result, session.jsp sends a new session ID also stored in the
internal hash, with a server associated.
=> For 1 request, haproxy has stored 3 entries, only 1 of which will be usable
The patch modifies the behaviour to store only 1 entry (maximum).
This can ensure that data is readily available on a socket when
we accept it, but a bug in the kernel ignores the timeout so the
socket can remain pending as long as the client does not talk.
Use with care.
Consistent hashing provides some interesting advantages over common
hashing. It avoids full redistribution in case of a server failure,
or when expanding the farm. This has a cost however, the hashing is
far from being perfect, as we associate a server to a request by
searching the server with the closest key in a tree. Since servers
appear multiple times based on their weights, it is recommended to
use weights larger than approximately 10-20 in order to smooth out
the distribution a bit.
In some cases, playing with weights will be the only solution to
make a server appear more often and increase chances of being picked,
so stats are very important with consistent hashing.
In order to indicate the type of hashing, use :
hash-type map-based (default, old one)
hash-type consistent (new one)
Consistent hashing can make sense in a cache farm, in order not
to redistribute everyone when a cache changes state. It could also
probably be used for long sessions such as terminal sessions, though
that has not been attempted yet.
More details on this method of hashing here :
http://www.spiteful.com/2008/03/17/programmers-toolbox-part-3-consistent-hashing/
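Example (a sketch; weights and names are illustrative):

backend cache_farm
    balance uri
    hash-type consistent
    # use large enough weights to smooth the distribution
    server cache1 10.0.0.1:80 weight 16
    server cache2 10.0.0.2:80 weight 16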
Until now it was required that every custom ID was above 1000 in order to
avoid conflicts. Now we have the list of all assigned IDs and can automatically
pick the first unused one. This means that it is perfectly possible to interleave
automatic IDs with persistent IDs and the parser will automatically allocate
unused values starting with 1.
When a name or ID conflict is detected, it is sometimes useful to know
where the other one was declared. Now that we have this information,
report it in error messages.
This patch allows to collect & provide separate statistics for each socket.
It can be very useful if you would like to distinguish between traffic
generated by local and remote users, or between different types of remote
clients (peerings, domestic, foreign).
Currently no "Session rate" is supported, but adding it should be possible
if we found it useful.
By default, when data is sent over a socket, both the write timeout and the
read timeout for that socket are refreshed, because we consider that there is
activity on that socket, and we have no other means of guessing if we should
receive data or not.
While this default behaviour is desirable for almost all applications, there
exists a situation where it is desirable to disable it, and only refresh the
read timeout if there is incoming data. This happens on sessions with large
timeouts and low amounts of exchanged data such as telnet sessions. If the
server suddenly disappears, the output data accumulates in the system's
socket buffers, both timeouts are correctly refreshed, and there is no way
to know the server does not receive them, so we don't timeout. However, when
the underlying protocol always echoes sent data, it would be enough by itself
to detect the issue using the read timeout. Note that this problem does not
happen with more verbose protocols because data won't accumulate long in the
socket buffers.
When this option is set on the frontend, it will disable read timeout updates
on data sent to the client. There is probably little use for this case. When
the option is set on the backend, it will disable read timeout updates on
data sent to the server. Doing so will typically break large HTTP posts from
slow lines, so use it with caution.
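A sketch of how this could look on a backend (the keyword name
"independant-streams" is an assumption here; check your version's
documentation):

backend telnet_farm
    mode tcp
    # do not refresh the read timeout when pushing data to the server
    option independant-streams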
The "static-rr" is just the old round-robin algorithm. It is still
in use when a hash algorithm is used and the data to hash is not
present, but it was impossible to configure it explicitly. This one
is cheaper in terms of CPU and supports unlimited numbers of servers,
so it makes sense to be able to use it.
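Example:

backend big_farm
    balance static-rr
    server srv1 10.0.0.1:80 check
    server srv2 10.0.0.2:80 check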
LB algo macros were composed of the LB algo by itself without any indication
of the method to use to look up a server (the lb function itself). This
method was implied by the LB algo, which was not very convenient to add
more algorithms. Now we have several fields in the LB macros, some to
describe what to look for in the requests, some to describe how to transform
that (kind of algo) and some to describe what lookup function to use.
The next patch will make it possible to factor out some code for all algos
which rely on a map.
This patch implements "description" (proxy and global) and "node" (global)
options, removes "node-name" and adds "show-node" & "show-desc" options
for "stats". It also changes the way the header lines (with proxy name) and
the statistics are displayed, so stats no longer look so clumsy with very
long names.
Instead of "node-name" it is possible to use show-node/show-desc with
an optional parameter that overrides a default node/description.
backend cust-0045
# report specific values for this customer
stats show-node Europe
stats show-desc Master node for Europe, Asia, Africa
It was becoming painful to have all the LB algos in backend.c.
Let's move them to their own files. A few hashing functions still
need to be broken in two parts, one for the contents and one for the
map position.
Check if rise/fall has an argument and it is > 0 or bad things may happen
in the health checks. ;)
Now it is verified and the code no longer allows such a condition:
backend bad
    (...)
    server o-f0 192.168.129.27:80 check inter 4000 source 0.0.0.0 rise 0
    server o-r0 192.168.129.27:80 check inter 4000 source 0.0.0.0 fall 0
    server o-f1 192.168.129.27:80 check inter 4000 source 0.0.0.0 rise
    server o-r1 192.168.129.27:80 check inter 4000 source 0.0.0.0 fall
[ALERT] 269/161830 (24136) : parsing [../git/haproxy.cfg:98]: 'rise' has to be > 0.
[ALERT] 269/161830 (24136) : parsing [../git/haproxy.cfg:99]: 'fall' has to be > 0.
[ALERT] 269/161830 (24136) : parsing [../git/haproxy.cfg:100]: 'rise' expects an integer argument.
[ALERT] 269/161830 (24136) : parsing [../git/haproxy.cfg:101]: 'fall' expects an integer argument.
Also add a missing end-of-line in the custom ID checking code.
This patch adds health logging so it is possible to check what
was happening before a crash. Failed health checks are logged if
the server is UP and successful health checks if the server is DOWN,
so the amount of additional information is limited.
I also reworked the code a little:
- check_status_description[] and check_status_info[] are now
joined into check_statuses[]
- set_server_check_status updates not only s->check_status and
s->check_duration but also s->result, making the code simpler
Changes in v3:
- for now calculate and use local versions of health/rise/fall/state,
it is a slow path, no harm should be done. One day we may centralize
processing of the checks and remove the duplicated code.
- also log checks that are restoring current state
- use "conditionally succeeded" for 404 with disable-on-404
Collect information about the last health check result, including
the L7 code if possible (for example the HTTP or SMTP return code)
and the time taken to finish the last check.
Health check info is provided on both stats pages (html & csv)
and logged when a server is marked UP or DOWN. Currently active
checks are marked with an asterisk, but only in html mode.
Currently there are 14 status codes:
UNK -> unknown
INI -> initializing
SOCKERR -> socket error
L4OK -> check passed on layer 4, no upper layers testing enabled
L4TOUT -> layer 1-4 timeout
L4CON -> layer 1-4 connection problem, for example "Connection refused"
(tcp rst) or "No route to host" (icmp)
L6OK -> check passed on layer 6
L6TOUT -> layer 6 (SSL) timeout
L6RSP -> layer 6 invalid response - protocol error
L7OK -> check passed on layer 7
L7OKC -> check conditionally passed on layer 7, for example
404 with disable-on-404
L7TOUT -> layer 7 (HTTP/SMTP) timeout
L7RSP -> layer 7 invalid response - protocol error
L7STS -> layer 7 response error, for example HTTP 5xx
The new tune.bufsize and tune.maxrewrite global directives allow one to
change the buffer size and the maxrewrite size. Right now, setting bufsize
too low will block stats sockets which will not be able to write at all.
Error checking must be added to buffer_write_chunk() so that if it
cannot write its message to an empty buffer, it causes the caller to abort.
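For example (values are illustrative; keep maxrewrite well below bufsize):

global
    tune.bufsize 16384
    tune.maxrewrite 1024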
sess_establish() used to resort to protocol-specific guesses
in order to set rep->analysers. This is no longer needed as it
gets set from the frontend and the backend as a copy of what
was defined in the configuration.
Analyser bitmaps are now stored in the frontend and backend, and
combined at configuration time. That way, set_session_backend()
does not need to perform any protocol-specific combinations.
This Linux-specific option was never really used in production and
has since been superseded by new splicing options brought by recent
Linux kernels.
It caused several special cases in the code because the kernel
would take care of the session without haproxy being able to do
anything about it, which became hard to handle in the new architecture.
Let's simply get rid of it now that there is a replacement available.
The new "node-name" stats setting enables reporting of a node ID on
the stats page. It is possible to return the system's host name as
well as a specific name.
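A sketch (assuming a "stats node-name" setting taking an optional name;
without it, the system's host name would be reported; "lb-front-1" is
illustrative):

listen stats_page 0.0.0.0:8080
    stats enable
    stats node-name lb-front-1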
Romuald du Song reported a strange bug causing "option tcplog" to
unexpectedly use global log parameters if no log server was declared.
Even though it can be useful in some circumstances, it only hides
configuration bugs and can even cause traffic logs to be sent to
the wrong logger, since global settings are just for the process.
This has been fixed and a warning has been added for configurations
where tcplog or httplog are set without any logger. This fix must
be backported to 1.3.20, but not to 1.3.15.X in order not to risk
any regression on old configurations.
Do not exit early at the first error found while checking configuration
validity. This particularly helps spotting multiple wrong tracked server
names at once.
Try not to immediately exit on non-fatal errors while parsing a
listen section, so that the user has a chance to get most of the
errors at once, which is quite convenient especially during config
checks with the -c argument.
Try not to immediately exit on non-fatal errors while parsing the
global section, so that the user has a chance to get most of the
errors at once, which is quite convenient especially during config
checks with the -c argument. Some other errors such as unresolved
server names also don't make the parser exit too early.
The new statement "persist rdp-cookie" enables RDP cookie
persistence. The RDP cookie is then extracted from the RDP
protocol, and compared against available servers. If a server
matches the RDP cookie, then it gets the connection.
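A minimal sketch for a terminal server farm (addresses are illustrative;
RDP_COOKIE is assumed to be the predefined ACL matching the presence of
an RDP cookie):

listen tse_farm 0.0.0.0:3389
    mode tcp
    # wait for the RDP cookie before choosing a server
    tcp-request inspect-delay 5s
    tcp-request content accept if RDP_COOKIE
    persist rdp-cookie
    server tse1 10.0.0.11:3389
    server tse2 10.0.0.12:3389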
Since we can now switch from TCP to HTTP, we need to be able to apply
the HTTP request timeout after switching. That means we need to take
it from the backend and not from the frontend. Since the backend points
to the frontend before switching, that changes nothing for the normal
case.
This patch propagates the ACL conditions' "requires" bitfield
to the proxies. This makes it possible to know exactly what a
proxy might have to support for any request, which helps knowing
whether we have to allocate some space for certain types of
structures or not (eg: the hdr_idx struct).
The concept might be extended to a lot more types of information,
such as detecting whether we need to allocate some space for some
request ACLs which need a result in the response, etc...
The HTTP processing has been split into 7 steps, one of which
is no longer HTTP-specific (content-switching). That way, it
becomes possible to use "use_backend" rules in TCP mode. A new
"use_server" directive should follow soon.
We want to split several steps in HTTP processing so that
we can call individual analysers depending on what processing
we want to perform. The first step consists in splitting the
part that waits for a request from the rest.
This is a first step towards support of multiple configuration files.
Now readcfgfile() only reads a file in memory and performs very minimal
parsing. The checks are performed afterwards.
Sometimes it can be useful to limit the advertised TCP MSS on
incoming connections, for instance when requests come through
a VPN or when the system is running with jumbo frames enabled.
Passing the "mss <value>" arguments to a "bind" line will set
the value. This works under Linux >= 2.6.28, and maybe a few
earlier ones, though due to an old kernel bug most earlier
versions will probably ignore it. It is also possible that some
other OSes will support this.
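For example:

frontend vpn_in
    bind 10.0.0.1:80 mss 1400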
This new option enables combining of request buffer data with
the initial ACK of an outgoing TCP connection. Doing so saves
one packet per connection which is quite noticeable on workloads
mostly consisting of small objects. The option is not enabled by
default.
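A sketch, assuming this is the option named "tcp-smart-connect":

backend small_objects
    option tcp-smart-connect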
This option disables TCP quick ack upon accept. It is also
automatically enabled in HTTP mode, unless the option is
explicitly disabled with "no option tcp-smart-accept".
This saves one packet per connection which can bring reasonable
amounts of bandwidth for servers processing small requests.
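For example, to disable it explicitly on an HTTP frontend:

frontend web
    bind :80
    mode http
    no option tcp-smart-accept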