The new anonymous ACL feature was buggy. If several anonymous ACLs
are declared, the first rule is always matched because all of them
share the same internal name (".noname"). Now we simply declare
them with an empty name and ensure that we disable any merging
when the name is empty.
Hi Willy,
Please find a small patch to prevent haproxy segfaulting when logging captured headers in CLF format.
Example config to reproduce the bug :
listen test :10080
log 127.0.0.1 local7 debug err
mode http
option httplog clf
capture request header NonExistantHeader len 16
--
Cyril Bonté
The BF_AUTO_CLOSE flag prevented a connection from establishing on
a server if the other side's input channel was already closed. This
is wrong because there may be pending data to be sent.
This was causing an issue with stats, as noticed and reported by
Cyril Bonté. Since the stats are now handled as a server, sometimes
concurrent accesses were causing one of the connections to send the
shutdown(write) before the connection to the stats function was
established, which aborted it early.
This fix causes the BF_AUTO_CLOSE flag to be checked only when the
connection on the outgoing stream interface has reached an established
state. That way we're still able to connect, send the request then
close.
Marcello Gorlani reported that at least on FreeBSD, a long hostname
was reported with garbage on the stats page. POSIX does not make it
mandatory for gethostname() to NULL-terminate the string in case of
truncation, and at least FreeBSD appears not to do it. So let's
force null-termination to keep safe.
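The defensive pattern looks like this (a minimal sketch with an
illustrative buffer size, not the actual haproxy code) :
  #include <unistd.h>

  char hostname[64];

  if (gethostname(hostname, sizeof(hostname)) == 0)
      /* POSIX does not guarantee termination on truncation */
      hostname[sizeof(hostname) - 1] = '\0';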
Since the last documentation cleanups, I've found more typos that I kept
in a corner instead of sending you a mail just for one character :)
--
Cyril Bonté
Jozef Hovan reported a bug sometimes causing a down server to be
used in url_param hashing mode.
This happens if the following conditions are met :
- the backend contains more than one server with at least two
  different weights
- all servers but one are down
- the server which is not down has a weight which does not divide
all the other ones
Example: 3 servers with 20,20,10, the first one remains up.
The problem is caused by an optimisation in recalc_server_map()
which only fills the first map slot when only one server is up,
because all LB algorithms are optimized to use entry zero when
only one server is up... All but url_param. When doing the modulus,
we can return a position which is greater than zero and use an
entry which still refers to a server which has since been stopped.
One solution could be to optimize the url_param algo to proceed
as the other ones do, but the fact that this one was wrong implies
that the same bug could creep in again later. So let's first correctly initialize
the map in order to avoid that trap.
When trying to spot some complex bugs, it's often needed to access
information on stuck sessions, which is quite difficult. This new
command helps one get detailed information about a session, with
flags, timers, states, etc... The buffer data are not dumped yet.
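A usage sketch on the stats socket (assuming the command is "show sess"
followed by the session pointer; the pointer and socket path are
illustrative) :
  $ echo "show sess" | socat unix-connect:/var/run/haproxy.stat stdio
  $ echo "show sess 0x61d830" | socat unix-connect:/var/run/haproxy.stat stdio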
In case of pipelined requests, if the client aborts before reading response
N-1, haproxy waits forever for the data to leave the buffer before parsing
the next response.
This duplicate test should have been removed with the loop rework but was forgotten.
It was harmless, but disassembly shows that it prevents gcc from correctly optimizing
the loop.
We have been using haproxy to balance a not very well written application
(http://www.blackboard.com/). Using the "insert postonly indirect" cookie
method, I was attempting to remove the cookie when users would logout,
allowing the machine to re-balance for the next user (this application is
used in school computer labs, so a computer might stay on the whole day
but be used on and off).
I was having a lot of trouble because when the cookie was set, it was with
"Path=/", but when being cleared there was no "Path" in the set cookie
header, and because the logout page was in a different place of the
website (which I couldn't change), the cookie would not be cleared. I
don't know if this would be a problem for anyone other than me (as our
HTTP application is so un-adjustable), but just in case, I have included
the patch I used. Maybe it will help someone else.
[ WT: this was a correct fix, and I also added the same missing path to
the set-cookie option ]
It's sometimes convenient to know if a proxy has processed any connection
at all when stopping it. Since a soft restart causes the "Proxy stopped"
message to be logged for each proxy, let's add the number of connections
so that it's possible afterwards to check whether a proxy had received
any connection.
Often we need to understand why some transfers were aborted or what
constitutes server response errors. With those two counters, it is
now possible to detect an unexpected transfer abort during a data
phase (eg: too short HTTP response), and to know what part of the
server response errors may in fact be assigned to aborted transfers.
Holger Just and Ross West reported build issues on FreeBSD and
Solaris that were initially caused by the definition of
_XOPEN_SOURCE at the top of auth.c, which was required on Linux
to avoid a build warning.
Krzysztof Oledzki found that using _GNU_SOURCE instead also worked
on Linux and did not cause any issue on several versions of FreeBSD.
Solaris still reported a warning this time, which was fixed by
including <crypt.h>, which itself is not present on FreeBSD nor on
all Linux toolchains.
So by adding a new build option (NEED_CRYPT_H), we can get Solaris
to get crypt() working and stop complaining at the same time, without
impacting other platforms.
This fix was tested at least on several linux toolchains (at least
uclibc, glibc 2.2.5, 2.3.6 and 2.7), on FreeBSD 4 to 8, Solaris 8
(which needs crypt.h), and AIX 5.3 (without crypt.h).
It builds without a warning every time.
A copy-paste typo and a missing check were causing the logs to
report "PR" instead of "SD" when a server closes before sending
full data. Also, the log would erroneously report a 502 while in
fact the correct response would already have been transmitted.
Some people have reported seeing "SL" flags in their logs quite often while
this should never happen. The reason was that when a server error is detected,
we close the connection to that server and when we decide what state we were
in, we see the connection is closed, and deduce it was the last data transfer,
which is wrong. We should report DATA if the previous state was an established
state, which this patch does.
Now logs correctly report "SD" and not "SL" when a server resets a connection
before the end of the transfer.
isalnum, isdigit and friends are really annoying because they take
an int into which we should pass an unsigned char, while strings
everywhere use chars. Solaris uses macros relying on an array for
those functions, which easily triggers some warnings showing where
we have mistakenly passed a char instead of an unsigned char or an
int. Those warnings may indicate real bugs on some platforms
depending on the implementation.
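The safe calling convention looks like this (an illustrative sketch,
not actual haproxy code) :
  #include <ctype.h>

  /* count leading alphanumeric characters; the cast to unsigned char
   * avoids passing negative values to the ctype macros */
  static int alnum_prefix_len(const char *s)
  {
      const char *p = s;

      while (*p && isalnum((unsigned char)*p))
          p++;
      return p - s;
  }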
There is a lot of information available in the stats page that can
only be seen when the mouse hovers over it. But it's hard to know where
that information is. Now with a discrete dotted underline it's easier
to spot those areas.
The current and max request rates are now reported when the mouse
hovers over the session rate cur/max cell. The total request count is
displayed with the status codes over the total sessions cell.
The bounce realign function was algorithmically good but as expected
it was not cache-friendly. Using it with large requests caused so much
cache thrashing that the function itself could drain 70% of the total
CPU time for only 0.5% of the calls !
Revert back to a standard memcpy() using a specially allocated swap
buffer. We're now back to 2M req/s on pipelined requests.
It is wrong to merge FE and BE stats for a proxy because when we consult a
BE's stats, it reflects the FE's stats even though the BE has received no
traffic. The most common example happens with listen instances, where the
backend gets credited for all the traffic even when a use_backend rule makes
use of another backend.
The trash buffer may now be smaller than a buffer because we can tune
it at run time. This causes a risk when we're trying to use it as a
temporary buffer to realign unaligned requests, because we may have to
put up to a full buffer into it.
Instead of doing a double copy, we're now relying on an open-coded
bouncing copy algorithm. The principle is that we move one byte at
a time to its final place, and if that place also holds a byte, then
we move it too, and so on. We finish when we've moved the whole buffer.
It limits the number of memory accesses, but since it proceeds one
byte at a time and with random walk, it's not cache friendly and
should be slower than a double copy. However, it's only used in
extreme situations and the difference will not be noticeable.
It has been extensively tested and works reliably.
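A simplified sketch of the idea, expressed as a cycle-following in-place
rotation (the real code also deals with wrapping buffer pointers, so this
is only illustrative) :
  #include <stddef.h>

  static size_t gcd(size_t a, size_t b)
  {
      while (b) { size_t t = a % b; a = b; b = t; }
      return a;
  }

  /* rotate buf[0..n-1] left by k: each byte moves straight to its
   * final place, chasing the byte that previously occupied it */
  static void bounce_rotate(char *buf, size_t n, size_t k)
  {
      size_t cycles = gcd(n, k);

      for (size_t start = 0; start < cycles; start++) {
          char tmp = buf[start];
          size_t cur = start;

          for (;;) {
              size_t next = (cur + k) % n;  /* byte that must land at cur */
              if (next == start)
                  break;
              buf[cur] = buf[next];
              cur = next;
          }
          buf[cur] = tmp;
      }
  }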
When a host name could not be resolved, an alert was emitted but the
service used to start with 0.0.0.0 for the IP address, because the
address parsing functions could not report an error. This is now
changed. This fix must be backported to 1.3 as it was first discovered
there.
[WT: it was not a bug, I did it on purpose to leave no hole between IDs,
though it's not very practical when admins want to force some entries
after they have been used, because they'd rather leave a hole than
renumber everything ]
Forcing some IDs should not shift others.
Regression introduced in 53fb4ae261
---cut here---
global
stats socket /home/ole/haproxy.stat user ole group ole mode 660
frontend F1
bind 127.0.0.1:9999
mode http
backend B1
mode http
backend B2
mode http
id 9999
backend B3
mode http
backend B4
mode http
---cut here---
Before 53fb4ae261:
$ echo "show stat" | socat unix-connect:/home/ole/haproxy.stat stdio|cut -d , -f 28
iid
1
2
9999
4
5
After 53fb4ae261:
$ echo "show stat" | socat unix-connect:/home/ole/haproxy.stat stdio|cut -d , -f 28
iid
1
2
9999
3
4
With this patch:
$ echo "show stat" | socat unix-connect:/home/ole/haproxy.stat stdio|cut -d , -f 28
iid
1
2
9999
4
5
This patch fixes the cfgparser not to leak memory on each
default-server statement and adds several missing free
calls in deinit():
- free(l->name)
- free(l->counters)
- free(p->desc);
- free(p->fwdfor_hdr_name);
None of them are critical, hopefully.
Released version 1.4-rc1 with the following main changes :
- [MEDIUM] add a maintenance mode to servers
- [MINOR] http-auth: last fix was wrong
- [CONTRIB] add base64rev-gen.c that was used to generate the base64rev table.
- [MINOR] Base64 decode
- [MINOR] generic auth support with groups and encrypted passwords
- [MINOR] add ACL_TEST_F_NULL_MATCH
- [MINOR] http-request: allow/deny/auth support for frontend/backend/listen
- [MINOR] acl: add http_auth and http_auth_group
- [MAJOR] use the new auth framework for http stats
- [DOC] add info about userlists, http-request and http_auth/http_auth_group acls
- [STATS] make it possible to change a CLI connection timeout
- [BUG] patterns: copy-paste typo in type conversion arguments
- [MINOR] pattern: make the converter more flexible by supporting void* and int args
- [MINOR] standard: str2mask: string to netmask converter
- [MINOR] pattern: add support for argument parsers for converters
- [MINOR] pattern: add the "ipmask()" converting function
- [MINOR] config: off-by-one in "stick-table" after list of converters
- [CLEANUP] acl, patterns: make use of my_strndup() instead of malloc+memcpy
- [BUG] restore accidentally removed line in last patch !
- [MINOR] checks: make the HTTP check code add the CRLF itself
- [MINOR] checks: add the server's status in the checks
- [BUILD] halog: make without arch-specific optimizations
- [BUG] halog: fix segfault in case of empty log in PCT mode (cherry picked from commit fe362fe476)
- [MINOR] http: disable keep-alive when process is going down
- [MINOR] acl: add build_acl_cond() to make it easier to add ACLs in config
- [CLEANUP] config: use build_acl_cond() instead of parse_acl_cond()
- [CLEANUP] config: use warnif_cond_requires_resp() to check for bad ACLs
- [MINOR] prepare req_*/rsp_* to receive a condition
- [CLEANUP] config: specify correct const char types to warnif_* functions
- [MEDIUM] config: factor out the parsing of 20 req*/rsp* keywords
- [MEDIUM] http: make the request filter loop check for optional conditions
- [MEDIUM] http: add support for conditional request filter execution
- [DOC] add some build info about the AIX platform (cherry picked from commit e41914c77e)
- [MEDIUM] http: add support for conditional request header addition
- [MEDIUM] http: add support for conditional response header rewriting
- [DOC] add some missing ACLs about response header matching
- [MEDIUM] http: add support for proxy authentication
- [MINOR] http-auth: make the 'unless' keyword work as expected
- [CLEANUP] config: use build_acl_cond() to simplify http-request ACL parsing
- [MEDIUM] add support for anonymous ACLs
- [MEDIUM] http: switch to tunnel mode after status 101 responses
- [MEDIUM] http: stricter processing of the CONNECT method
- [BUG] config: reset check request to avoid double free when switching to ssl/sql
- [MINOR] config: fix too large ssl-hello-check message.
- [BUG] fix error response in case of server error
The fix below was incomplete :
commit d5fd51c75b
[BUG] http_server_error() must not purge a previous pending response
This can cause parts of responses to be truncated in case of
pipelined requests if the second request generates an error
before the first request is completely flushed.
Pending response data that was being rejected was still sent, causing
inappropriate error responses in case of an error while parsing a response
header. We must purge from the response buffer the pending data that were
not scheduled to be sent (l - send_max).
SSL and SQL checks only performed a free() of the request without replacing
it, so having multiple SSL/SQL check declarations after another check type
causes a double free condition during config parsing. This should be backported
although it's harmless.
Now we establish the tunnel only once the status 200 response is
received. That way we can still support an authentication request
in response to a CONNECT, then a client's authentication response.
A 101 response is accompanied with an Upgrade header indicating
a new protocol that is spoken on the connection after the exchange
completes. At least we should switch to tunnel mode after such a
response.
Anonymous ACLs allow the declaration of rules which rely directly on
ACL expressions without passing via the declaration of an ACL. Example :
With named ACLs :
acl site_dead nbsrv(dynamic) lt 2
acl site_dead nbsrv(static) lt 2
monitor fail if site_dead
With anonymous ACLs :
monitor fail if { nbsrv(dynamic) lt 2 } || { nbsrv(static) lt 2 }
I'm not sure if the fix is correct:
- if (req_acl->cond)
- ret = acl_exec_cond(req_acl->cond, px, s, txn, ACL_DIR_REQ);
+ if (!req_acl->cond)
+ continue;
Doesn't it ignore rules with no condition attached? I think that the
proper solution would be the following.
One check was missing for the 'polarity' of the test. Now 'unless'
works. BTW, 'unless' provides a nice way to perform one-line auth :
acl valid-user http_auth(user-list)
http-request auth unless valid-user
This is a first attempt to add a maintenance mode on servers, using
the stat socket (in admin level).
It can be done with the following command :
- disable server <backend>/<server>
- enable server <backend>/<server>
In this mode, no more checks will be performed on the server and it
will be marked as a special DOWN state (MAINT).
If some servers were tracking it, they'll go DOWN until the server
leaves the maintenance mode. The stats page and the CSV export also
display this special state.
This can be used to disable the server in haproxy before doing some
operations on this server itself. This is a good complement to the
"http-check disable-on-404" keyword and works in TCP mode.
We're already able to know if a request is a proxy request or a
normal one, and we have an option "http-use-proxy-header" which states
that proxy headers must be checked. So let's switch to use the proxy
authentication headers and responses when this option is set and we're
facing a proxy request. That allows haproxy to enforce auth in front
of a proxy.
Support the new syntax (http-request allow/deny/auth) in
http stats.
Now it is possible to use the same syntax as in the frontend/backend
http-request access control:
acl src_nagios src 192.168.66.66
acl stats_auth_ok http_auth(L1)
stats http-request allow if src_nagios
stats http-request allow if stats_auth_ok
stats http-request auth realm LB
The old syntax is still supported, but now it is emulated
via private acls and an additional userlist.
Add generic authentication & authorization support.
Groups are implemented as bitmaps so the count is limited to
sizeof(int)*8 == 32.
Encrypted passwords are supported with libcrypt and crypt(3), so it is
possible to use any method supported by your system. For example modern
Linux/glibc installations support MD5/SHA-256/SHA-512 and of course classic,
DES-based encryption.
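A minimal configuration sketch (names and passwords are illustrative; an
encrypted password would use "password" followed by a crypt(3) hash
instead of "insecure-password") :
  userlist L1
      group G1 users tiger,scott
      user tiger insecure-password secret1
      user scott insecure-password elgato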
Implement Base64 decoding with a reverse table.
The function accepts and decodes classic base64 strings, which
can be composed from many streams as long as each one is properly
padded, for example: SGVsbG8=IEhBUHJveHk=IQ==
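A minimal sketch of such a reverse-table decoder (illustrative, not the
actual haproxy implementation) :
  #include <string.h>

  static const char b64[] =
      "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

  /* decode <src> into <dst>; returns the decoded length, or -1 on error */
  int b64_decode(const char *src, unsigned char *dst)
  {
      signed char rev[256];
      int i, bits = 0, acc = 0, len = 0;

      memset(rev, -1, sizeof(rev));
      for (i = 0; i < 64; i++)
          rev[(unsigned char)b64[i]] = i;

      for (; *src; src++) {
          if (*src == '=') {      /* padding: drop leftover bits so that */
              bits = acc = 0;     /* another padded stream may follow    */
              continue;
          }
          if (rev[(unsigned char)*src] < 0)
              return -1;
          acc = ((acc << 6) | rev[(unsigned char)*src]) & 0xffff;
          bits += 6;
          if (bits >= 8) {
              bits -= 8;
              dst[len++] = (acc >> bits) & 0xff;
          }
      }
      return len;
  }
Decoding the example string above yields "Hello HAProxy!".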
Just as for the req* rules, we can now condition rsp* rules with ACLs.
ACLs are matched on the response, so volatile request information cannot be used.
A warning is emitted if a configuration contains such an anomaly.
All the req* rules except the reqadd rules can now be specified with
an if/unless condition. If a condition is specified and does not match,
the filter is ignored. This is particularly useful with reqidel, reqirep
and reqtarpit.
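A minimal configuration sketch (the ACL and header names are illustrative) :
  acl from_lan src 10.0.0.0/8
  reqidel ^X-Internal-Secret: unless from_lan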
From now on, if request filters have ACLs defined, these ACLs will be
evaluated to condition the filter. This will be used to conditionally
remove/rewrite headers based on ACLs.
A new function was added to take care of the common code between
all those keywords. This has saved 8 kB of object code and about
500 lines of source code. This has also made it possible to spot and fix
minor bugs (allocated args that were never used).
The code could be factored even more but that would make it a bit
more complex which is not interesting at this stage.
Various tests have been performed, and the warnings and errors are
still correctly reported and everything seems to work as expected.
This function automatically builds a rule, considering the if/unless
statements, and automatically updates the proxy's acl_requires, the
condition's file and line.
Krzysztof Oledzki suggested to disable keep-alive when a process
is going down due to a reload, in order to avoid everlasting
sessions. This is a simple and very efficient solution as it
ensures that at most one more request will be handled on a
keep-alive connection after the process has received a SIGUSR1
signal.
Now a server can check the contents of the header X-Haproxy-Server-State
to know how haproxy sees it. The same values as those reported in the stats
are provided :
- up/down status + check counts
- throttle
- weight vs backend weight
- active sessions vs backend sessions
- queue length
- haproxy node name
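A minimal configuration sketch (assuming the feature is enabled with the
"http-check send-state" keyword; names and addresses are illustrative) :
  backend app
      option httpchk HEAD /health HTTP/1.0
      http-check send-state
      server srv1 192.168.0.1:80 check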
Currently we cannot easily add headers nor anything to HTTP checks
because the requests are pre-formatted with the last CRLF. Make the
check code add the CRLF itself so that we can later add useful info.
This converter can be applied on top of an IPv4-type pattern. It
applies a netmask which is suited for IP address storage and matching.
This can be used to make all hosts within a certain mask to share the
same table entries and as such use the same server.
The mask can be passed in dotted form (eg: 255.255.255.0) or in CIDR
form (eg: 24).
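A minimal sketch combining it with a stick-table (the syntax here is only
illustrative) :
  backend app
      stick-table type ip size 200k expire 30m
      stick on src,ipmask(255.255.255.0)
      server srv1 192.168.0.1:80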
Some converters will need one or several arguments. It's not possible
to write a simple generic parser for that, so let's add the ability
for each converter to support its own argument parser, and call it
to get the arguments when it's specified. If unspecified, the arguments
are passed unmodified as string+len.
The pattern type converters currently support a string arg and a length.
Sometimes we'll prefer to pass them a list or a structure. So let's convert
the string and length into a generic void* and int that each converter may
use as it likes.
Hi Willy,
I've made a quick pass on the "defaults" column in the Proxy keywords matrix (chapter 4.1. in the documentation).
This patch resyncs the code and the documentation. I let you decide if some keywords that still work in the "defaults" section should be forbidden.
- default_backend : in the matrix, "defaults" was not supported but the keyword details say it is.
Tests also show it works, so I've updated the matrix.
- capture cookie : in the keyword details, we can read `It is not possible to specify a capture in a "defaults" section.'.
Ok, even if the tests worked, I've added an alert in the configuration parser (as it is for capture request/response header).
- description : not supported in "defaults", I added an alert in the parser.
I've also noticed that this keyword doesn't appear in the documentation.
There's one "description" entry, but for the "global" section, which is for a different use (the patch doesn't update the documentation).
- grace : even if this is maybe useless, it works in "defaults". Documentation is updated.
- redirect : alert is added in the parser.
- rsprep : alert added in the parser.
--
Cyril Bonté
We must trim any excess data from the response buffer when recycling
a keep-alive connection, because we may have blocked an invalid response
from a server that we don't want to accidentally forward once we disable
the analysers, nor do we want those data to come along with the next response.
A typical example of such data would be from a buggy server responding to
a HEAD with some data, or sending more than the advertised content-length.
When deciding to set the BF_EXPECT_MORE flag, we reused the same code as in
http_wait_for_request(), but here we must ignore buf->lr which is not
yet set and useless. This might only have caused random sub-optimal
behaviours.
Krzysztof Oledzki reported that 1.4-dev7 would regularly crash
on an apparently very common workload. The cores he provided
showed some inter-buffer data corruption, exactly similar to
what was fixed by the following recent commit :
bbfa7938bd [BUG] buffer_replace2 must never change the ->w entry
In fact, it was buffer_insert_line2() which was still modifying the
->w pointer, causing issues with pipelined responses in keep-alive
mode if some headers were to be added.
The bug requires a remote client, a near server, large server buffers
and small client buffers to be reproduced, with response header
insertion. Still, it's surprising that it did not trigger earlier.
Now after 100k pipelined requests it did not trigger anymore.
Despite what is explicitly stated in HTTP specifications,
browsers still use the undocumented Proxy-Connection header
instead of the Connection header when they connect through
a proxy. As such, proxies generally implement support for
this stupid header name, breaking the standards and making
it harder to support keep-alive between clients and proxies.
Thus, we add a new "option http-use-proxy-header" to tell
haproxy that if it sees requests which look like proxy
requests, it should use the Proxy-Connection header instead
of the Connection header.
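A minimal configuration sketch (names are illustrative) :
  frontend explicit_proxy
      mode http
      bind :3128
      option http-use-proxy-header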
This function is used to move data which is located between ->w and ->r,
so it must not touch ->w, otherwise it will displace pending data which
is before the one we're actually overwriting. The issue arises with
some pipelined responses which cause some part of the previous one to
be chopped off when removing the connection: close header, thus
corrupting the last response and shifting the next one. Those are detected
in the logs because the next response will be a 502 with flags PH.
When using "option persist" or "force-persist", we want to know from the
logs if the cookie referenced a valid server or a down server. Till now
the flag reported a valid server even if the server was down, which is
misleading. Now we correctly report that the requested server was down.
We can typically see "--DI" when using "option persist" with redispatch,
ad "SCDN" when using force-persist on a down server.
This is used to force access to down servers for some requests. This
is useful when validating that a change on a server correctly works
before enabling the server again.
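A minimal configuration sketch (the ACL is illustrative) :
  acl from_testers src 192.168.1.0/24
  force-persist if from_testers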
We used to delay the response if there was a new request in the buffer.
However, if the pending request is incomplete, we should not delay the
pending responses.
This can cause parts of responses to be truncated in case of
pipelined requests if the second request generates an error
before the first request is completely flushed.
Sometimes we need to be able to change the default kernel socket
buffer size (recv and send). Four new global settings have been
added for this :
- tune.rcvbuf.client
- tune.rcvbuf.server
- tune.sndbuf.client
- tune.sndbuf.server
Those can be used to reduce kernel memory footprint with large numbers
of concurrent connections, and to reduce risks of write timeouts with
very slow clients due to excessive kernel buffering.
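For example, in the global section (the values are only illustrative) :
  global
      tune.rcvbuf.client 33792
      tune.sndbuf.client 33792
      tune.rcvbuf.server 65536
      tune.sndbuf.server 65536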
This one is the next step of previous patch. It correctly computes
the response mode and the Connection flag transformations depending
on the request mode and version, and the response version and headers.
We're now also able to add "Connection: keep-alive", and to convert
server's close during a keep-alive connection to a server-close
connection.
We need to improve Connection header handling in the request for it
to support the upcoming keep-alive mode. Now we have two flags which
keep in the session the information about the presence of a
Connection: close and a Connection: keep-alive headers in the initial
request, as well as two others which keep the current state of those
headers so that we don't have to parse them again. Knowing the initial
value is essential to know when the client asked for keep-alive while
we're forcing a close (eg in server-close mode). Also the Connection
request parser is now able to automatically remove single header values
at the same time they are parsed. This provides greater flexibility and
reliability.
All combinations of listen/front/back in all modes and with both
1.0 and 1.1 have been tested.
Some header values might be delimited with spaces, so it's not enough to
compare "close" or "keep-alive" with strncasecmp(). Use word_match() for
that.
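A minimal sketch of the required token matching (illustrative, not the
actual word_match() code) :
  #include <string.h>
  #include <strings.h>

  /* returns 1 if <word> appears in <hdr> as a full comma/space
   * delimited token, compared case-insensitively */
  static int has_token(const char *hdr, const char *word)
  {
      size_t wlen = strlen(word);

      while (*hdr) {
          const char *start;

          while (*hdr == ' ' || *hdr == ',')
              hdr++;                  /* skip delimiters */
          start = hdr;
          while (*hdr && *hdr != ' ' && *hdr != ',')
              hdr++;                  /* scan one token */
          if ((size_t)(hdr - start) == wlen &&
              strncasecmp(start, word, wlen) == 0)
              return 1;
      }
      return 0;
  }
For instance, has_token("Keep-Alive, Close", "close") returns 1 while a
plain strncasecmp() against the whole value would not match.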
Calling this function after http_find_header2() automatically deletes
the current value of the header, and removes the header itself if the
value is the only one. The context is automatically adjusted for a
next call to http_find_header2() to return the next header. No other
change nor test should be made on the transient context though.
The close mode of a transaction would be switched to tunnel mode
at the end of the processing, letting a lot of pending data pass
in the other direction if any. Let's fix that by checking for the
close mode during state resync too.
We must set the error flags when detecting that a client has reset
a connection or timed out while waiting for a new request on a keep-alive
connection, otherwise process_session() sets it itself and counts one
request error.
That explains why some sites were showing an increase in request errors
with keep-alive enabled.
We get a lot of those, especially with web crawlers :
recv(2, 0x810b610, 7000, 0) = -1 ECONNRESET (Connection reset by peer)
shutdown(2, 1 /* send */) = -1 ENOTCONN (Transport endpoint is not connected)
close(2) = 0
There's no need to perform the shutdown() here, the socket is already
in error so it is down.
A check was performed in buffer_replace2() to compare buffer
length with its read pointer. This has been wrong for a long
time, though it only has an impact when dealing with keep-alive
requests/responses. In theory this should be backported but
the check has no impact without keep-alive.
We can receive data with a notification of socket error. But we
must not check for the error before reading the data, because it
may be an asynchronous error notification that we check too early
while the response we're waiting for is available. If there is an
error, recv() will get it.
This should help with servers that close very fast after the response
and should also slightly lower the CPU usage during very fast checks
on massive amounts of servers since we eliminate one system call.
This should probably be backported to 1.3.
While waiting in a keep-alive state for a request, we want to silently
close if we don't get anything. However if we get a partial request it's
different because that means the client has started to send something.
This requires a new transaction flag. It will be used to implement a
distinct timeout for keep-alive and requests.
This change, suggested by Cyril Bonté, makes a lot of sense and
would have made it obvious that sessid was not properly initialized
while switching to keep-alive. The code is now cleaner.
The stream_int_cond_close() function was added to preserve the
contents of the response buffer because stream_int_retnclose()
was buggy. It flushed the response instead of flushing the
request. This caused issues with pipelined redirects followed
by error messages which ate the previous response.
This might even have caused object truncation on pipelined
requests followed by an error or by a server redirection.
Now that this is fixed, simply get rid of the now useless
function.
I've tried to follow all the pool_alloc2/pool_free2 calls in the code
to track memory leaks. I've found one which only happens when there's
already no more memory when allocating a new appsession cookie.
Sometimes it can be desired to return a location which is the same
as the request with a slash appended when there was not one in the
request. A typical use of this is for sending a 301 so that people
don't reference links without the trailing slash. The name of the
new option is "append-slash" and it can be used on "redirect"
statements in prefix mode.
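A minimal configuration sketch (the ACL is illustrative) :
  acl missing_slash path_reg [^/]$
  redirect code 301 prefix / append-slash if missing_slash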
When using server redirection, it is possible to specify a path
consisting of only one slash. While this is discouraged (risk of
loop) it may sometimes be useful combined with content switching.
The prefixing of a '/' then causes two slashes to be returned in
the response. So we now do as with the other redirects, don't
prepend a slash if it's alone.
Some message pointers were not usable once the message reached the
HTTP_MSG_DONE state. This is the case for ->som which points to the
body because it is needed to parse chunks. There is one case where
we need the beginning of the message : server redirect. We have to
call http_get_path() after the request has been parsed. So we rely
on ->sol without counting on ->som. In order to achieve this, we're
making ->rq.{u,v} relative to the beginning of the message instead
of the buffer. That simplifies the code and makes it cleaner.
Preliminary tests show this is OK.
This might have been introduced with chunk extensions. Note that
the server redirect still does not work because http_get_path()
cannot get the correct path once the request message is in the
HTTP_MSG_DONE state (->som does not point to the start of message
anymore).
The initial code's intention was to loop on the analysers as long
as an analyser is added by another one. [This code was wrong due to
the while(0) which breaks even on a continue statement, but the
initial intention must be changed too]. In fact we should limit the
number of times we loop on analysers in order to limit latency.
Using maxpollevents as a limit makes sense since this tunable is
used for the exact same purposes. We may add another tunable later
if that ever makes sense, though it's very unlikely.
If we accept a new request and that request produces an immediate
response (error, redirect, ...), then we may fail to send it in
case of pipelined requests if the response buffer is full. To avoid
this, we check the availability of at least maxrewrite bytes in the
response buffer before accepting a new pipelined request.
During a redirect, we used to send the last chunk of response with
stream_int_cond_close(). But this is wrong in case of pipeline,
because if the response already contains something, this function
will refrain from touching the buffer. Use a concatenation function
instead.
Also, this call might still fail when the buffer is full, we need
a second fix to refrain from parsing an HTTP request as long as the
response buffer is full, otherwise we may not even be able to return
a pending redirect or an error code.
That patch was incorrect because under some circumstances, the
capture memory could be freed by session_free() and then again
by http_end_txn(), causing a double free and an eventual segfault.
The pool use count was also reported wrong due to this bug.
The cleanup code was removed from session_free() to remain only
in http_end_txn().
Hank A. Paulson reported a massive memory leak when using keep-alive
mode. The information he provided made it easy to find that captured
request and response headers were erased but not released when renewing
a request.
Several HTTP analysers used to set those flags to values that
were useful but without considering the possibility that they
were not called again to clean what they did. First, replace
direct flag manipulation with more explicit macros. Second,
enforce a rule stating that any analyser which changes one of
these flags from the default must restore it after completion,
so that other analysers see correct flags.
With both this fix and the previous one about analyser bits,
we should not see any more stuck sessions.
A request analyser may very well be added while processing a response
(eg: end of an HTTP keep-alive response). It's very dangerous to only
rely on flags that ought to change in order to loop back, so let's
correctly detect a possible new analyser addition instead of guessing.
With the introduction of keep-alive, we have created situations
where an analyser can add other analysers to the current list,
which are behind it, which have already been processed once, and
which are needed immediately because without them there will be
no more I/O activity. This is typically the case for enabling
reading of a new request after preparing for a new request.
Instead of creating specific cases for some analysers (there was
already one such before), we now use a little bit of algorithmics
to create an ordered bit chain supporting priorities and fast
operations.
Another advantage of this new construction is that it's not a
real loop anymore, so if an analyser is unknown, it will not
loop but just ignore it.
Note that it is easy to skip multiple analysers at once now in
order to speed up the checking a bit. Some test code has shown
only a minor gain though.
This change has been carefully re-read and has no direct reason
of causing a regression. However it has been tagged "major"
because the fact that it runs the analysers correctly might
trigger an old sleeping bug somewhere in one of the analysers.
This patch implements default-server support, making it possible to change
default server options. It can be used in [defaults] or [backend]/[listen]
sections. Currently the following options are supported:
- error-limit
- fall
- inter
- fastinter
- downinter
- maxconn
- maxqueue
- minconn
- on-error
- port
- rise
- slowstart
- weight
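A minimal configuration sketch (addresses and values are illustrative) :
  backend app
      default-server inter 2s fall 3 rise 2 weight 50
      server srv1 192.168.0.1:80 check
      server srv2 192.168.0.2:80 check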
Supported information, available via "tr/td title":
- cap: capabilities (proxy)
- mode: one of tcp, http or health (proxy)
- id: SNMP ID (proxy, socket, server)
- IP (socket, server)
- cookie (backend, server)
This patch adds "a link" & "a href" html tags for sockets.
As sockets may have the same names as servers, I decided to
add a "+" char (forbidden in names assigned to servers) as a prefix.
It is useless to report statistics if the feature was not enabled.
It also makes it possible to distinguish whether health analysis is
enabled or not just by looking at the stats page.
Commit 0dfdf19b64 introduced a
regression because the connection header is now parsed and checked
depending on the configured options, but the options are set after
calling it instead of being set before.
Historically, "option httpclose" has always worked the same way. It
only mangles the "Connection" header in the request and the response
if needed, but does not affect the connection by itself, and ignores
any further data. It is dangerous to change this behaviour without
leaving any other alternative. If an active close is desired, it's
better to make use of "option forceclose" which does exactly what
it intends to do.
So as of now, "option httpclose" will only mangle the headers as
before, and will only affect the connection by itself when combined
with another connection-related option (eg: keepalive or server-close).
We basically have to mimic the code of process_session() here, so
when the remote output is closed, we must abort otherwise we'll end
up with data which cannot leave the buffer.
By default this function returned 0 indicating an end of analysis.
This was not a problem as long as it was the last analyser in the
chain but becomes quite a big one now since it skips the forwarder
with auto_close enabled, causing some data to pass under the nose
of the last one undetected.
There were still several situations leading to CLOSE_WAIT sockets
remaining there forever because some complex transitions were
obviously not caught due to the impossibility of resyncing changes
between the request and response FSMs.
This patch now centralizes the global transaction state and feeds
it from both request and response transitions. That way, whoever
finishes first, there will be no issue for converging to the correct
state.
Some heavy use of the new debugging function has helped a lot. Maybe
those calls could be removed after some time. First tests are very
positive.
This function outputs to fd #-1 the status of request and response
buffers, the transaction states, the stream interface states, etc...
That way, it's easy to find that output in an strace report, correctly
placed WRT the other syscalls.
The data forwarders are analysers. As such, they have to check for
various situations on which they have to abort, one of them being
the lack of data with closed input. Now we don't leave the functions
anymore without performing these checks. This has solved the new
CLOSE_WAIT issue that became more noticeable since last patch.
It may happen that we forward a close just after we sent the last
chunk, because we forgot to clear the AUTO_CLOSE flag.
This issue caused some pages to be truncated depending on some
timing races. Issue initially reported by Cyril Bonté.
The cookie parser could be fooled by spaces or commas in cookie names
and values, causing the persistence cookie not to be matched if located
just after such a cookie. Now spaces found in values are considered as
part of the value, and spaces, commas and semi-colons found in values
or names, are skipped till next cookie name.
This fix must be backported to 1.3.
In case of a non-blocking socket, used for connecting to a remote
server (not localhost), the error reported by the health check
was most of the time one of EINPROGRESS/EAGAIN/EALREADY.
This patch adds a getsockopt(..., SO_ERROR, ...) call so now
the proper error message is reported.
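The pattern looks like this (a minimal sketch; report_check_error() is a
hypothetical helper, not haproxy's actual function) :
  #include <sys/socket.h>

  int err = 0;
  socklen_t errlen = sizeof(err);

  if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &errlen) == 0 && err)
      report_check_error(err);  /* err holds the real connect() error,
                                   eg: ECONNREFUSED */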
It makes sense to permit a client to keep its connection when
performing a redirect to the same host. We only detect the fact
that the redirect location begins with a slash to use the keep-alive
(if the client supports it).
By default we automatically wait for enough data to fill large
packets if buf->to_forward is not null. This causes a problem
with POST/Expect requests which have a data size but no data
immediately available. Instead of causing noticeable delays on
such requests, simply add a flag to disable waiting when sending
requests.
In server-close mode particularly, the response buffer is marked for
no-auto-close after a response passed through. This prevented a POST
request from being aborted on errors, timeouts or anything if the
response was received before the request was complete.
If we enable reading of a request immediately after completing
another one, we end up performing small reads until the request
buffer is complete. This takes time and makes it harder to realign
the buffer when needed. Just enable reading when we need to.
The rq.u field is relative to buf->data, not to msg->sol. We have
to subtract msg->som everywhere this error was made. Maybe it will
be simpler to have a pointer to the buffer in the message and find
appropriate data there.
Many times we see a lot of short responses in HTTP (typically 304 on a
reload). It is a waste of network bandwidth to send that many small packets
when we know we can merge them. When we know that another HTTP request is
following a response, we set BF_EXPECT_MORE on the response buffer, which
will turn MSG_MORE on exactly once. That way, multiple short responses can
leave pipelined if their corresponding requests were also pipelined.
While it could be dangerous to enable MSG_MORE on infinite data (eg:
interactive sessions), it makes sense to enable it when we know the
chunk to be sent is just a part of a larger one.
We used to forward more trailers than required, causing a
desynchronization of the output. Now we schedule them all for forwarding
as soon as we encounter them.
This option enables HTTP keep-alive on the client side and close mode
on the server side. This offers the best latency on the slow client
side, and still saves as many resources as possible on the server side
by actively closing connections. Pipelining is supported on both requests
and responses, though there is currently no reason to get pipelined
responses.
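A minimal configuration sketch (assuming this mode is enabled with the
"option http-server-close" keyword) :
  defaults
      mode http
      option http-server-close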
When too large a message lies in a buffer before parsing a new
request/response, we can now wait for previous outgoing data to
leave the buffer before attempting to parse again. After that
we can consider the opportunity to realign the buffer if needed.
The HTTP parser needed the msg structure to hold pre-initialized pointers.
This causes trouble with keep-alive because if some data is still in the
buffer, the pointers can be anywhere after the data and later become invalid
when the buffer gets realigned.
It was not needed to rely on that since we have two valid pieces of
information in the buffer itself :
- buf->lr : last visited place
- buf->w + buf->send_max : beginning of next message
So by doing the maths only on those values, we can avoid doing tricks
on msg->som.
This option was disabled for frontends in the configuration because
it was useless in its initial implementation, though it was still
checked in the code. Let's officially enable it now.
When we catch an error from the server, speed up the connection
abort since we don't want to remain long with pending data in the
socket, and we want to be able to reuse our source port ASAP.
Doing this helps us flush the system buffers from all unread data. This
avoids having orphans when clients suddenly get off the net without
reading their entire response.
This new flag may be set by any user on a stream interface to tell
the underlying protocol that there is no need for lingering on the
socket since we know the other side either received everything or
does not care about what we sent.
This will typically be used with forced server close in HTTP mode,
where we want to quickly close a server connection after receiving
its response. Otherwise the system would prevent us from reusing
the same port for some time.
The "forceclose" option used to close the output channel to the
server once it started to respond. While this happened to work with
most servers, some of them considered this as a connection abort and
immediately stopped responding.
Now that we're aware of the end of a request and response, we're able
to trivially handle this option and properly close both sides when the
server's response is complete.
During this change it appeared that forwarding could be allowed when
the BF_SHUTW_NOW flag was set on a buffer, which obviously is not
acceptable and was causing some trouble. This has been fixed too and
is the reason for the MEDIUM status on this patch.
Since we'll soon be able to close a connection with remaining data in a
buffer, it becomes obvious that we can prepare to close when we're about
to send the last chunk of data and not the whole buffer.
This error was triggered by requests not starting at the beginning
of the buffer. It cannot happen with earlier versions though it might
be a good idea to fix it anyway.
There were still issues with the buffer alignment. Now we ensure
that we always align it before a request or response is completely
parsed if there is less than maxrewrite bytes free at the end. In
practice, it's not called that often and ensures we can always work
as expected.
Since the introduction of the automatic sizing of buffers during reads,
a bug appeared where the max size could be negative, causing large
chunks of memory to be overwritten during recv() calls if a read pointer
was already past the buffer's limit.
In many places where we perform header insertion, an error control
is performed but due to a mistake, it cannot match any error :
if (unlikely(error) < 0)
instead of
if (unlikely(error < 0))
This prevents error 400 responses from being sent when the buffer is
full due to many header additions. This must be backported to 1.3.
The body parser will be used in close and keep-alive modes. It follows
the stream to keep in sync with both the request and the response message.
Both chunked transfer-coding and content-length are supported according to
RFC2616.
The multipart/byterange encoding has not yet been implemented and, if not
accompanied by either of the two other ones, data will be forwarded till the close,
as requested by the specification.
Both the request and the response analysers converge into an HTTP_MSG_DONE
state where it will be possible to force a close (option forceclose) or to
restart with a fresh new transaction and maintain keep-alive.
This change is important. All tests are OK but any possible behaviour
change with "option httpclose" might find its root here.
When parsing body for URL parameters, we must not consider that
data are available from buf->data but from buf->data + msg->som.
This is not a problem right now but may become with keep-alive.
When parsing a request that does not start at the beginning of the
buffer, we may experience a buffer full issue. In order to avoid
this, we try to realign the buffer if it is not really full. That
will be required when we have to deal with pipelined requests.
Some wrong operations were performed on buffers, assuming the
offsets were relative to the beginning of the request while they
are relative to the beginning of the buffer. In practice this is
not yet an issue since both are the same... until we add support
for keep-alive.
It's not enough to know if the connection will be in CLOSE or TUNNEL mode,
we still need to know whether we want to read a full message to a known
length or read it till the end just as in TUNNEL mode. Some updates to the
RFC clarify the corner cases slightly better, in particular for the case
where a non-chunked encoding is used last.
Now we also take care of adding a proper "connection: close" to messages
whose size could not be determined.
Chunked encoding can be slightly more complex than what was implemented.
Specifically, it supports some optional extensions that were not parsed
till now and which, if present, would have caused an error to be returned.
Also, we now enforce a check for too-large values in chunk sizes in order
to ensure we never overflow.
Last, we're now able to return a request error if we can't read the
chunk size because the buffer is already full.
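A minimal sketch of an overflow-safe chunk-size parser (illustrative,
not haproxy's actual code) :
  #include <limits.h>

  static int hexval(char c)
  {
      if (c >= '0' && c <= '9') return c - '0';
      if (c >= 'a' && c <= 'f') return c - 'a' + 10;
      if (c >= 'A' && c <= 'F') return c - 'A' + 10;
      return -1;
  }

  /* parse a hexadecimal chunk size between <p> and <end>; returns 0 and
   * fills <out> on success, or -1 on error (no digit, or overflow) */
  int parse_chunk_size(const char *p, const char *end, unsigned int *out)
  {
      unsigned int size = 0;
      int digits = 0, v;

      while (p < end && (v = hexval(*p)) >= 0) {
          if (size > (UINT_MAX >> 4))
              return -1;        /* next shift would overflow */
          size = (size << 4) | v;
          digits++;
          p++;
      }
      if (!digits)
          return -1;
      *out = size;              /* optional chunk extensions follow */
      return 0;
  }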
This state indicates that an HTTP message (request or response) is
complete. This will be used to know when we can re-initialize a
new transaction. Right now we only switch to it after the end of
headers if there is no data. When other analysers are implemented,
we can switch to this state too.
The condition to reuse a connection is when the response finishes
after the request. This will have to be checked when setting the
state.
The check for 1xx responses was performed too low and required a lot of
tests along the code in order to avoid some processing. We still left the test
after the response rewrite rules so that we can eliminate unwanted
headers if required.
This code really belongs to the http part since it's transaction-specific.
This will also make it easier to later reinitialize a transaction in order
to support keepalive.
We used to apply a limit to each buffer's size in order to leave
some room to rewrite headers, then we used to remove this limit
once the session switched to a data state.
Proceeding that way becomes a problem with keepalive because we
have to know when to stop reading too much data into the buffer
so that we can leave some room again to process next requests.
The principle we adopt here consists in only relying on to_forward+send_max.
Indeed, both of those data define how many bytes will leave the buffer.
So as long as their sum is larger than maxrewrite, we can safely
fill the buffers. If they are smaller, then we refrain from filling
the buffer. This means that we won't risk filling buffers when
reading last data chunk followed by a POST request and its contents.
The only impact identified so far is that we must ensure that the
BF_FULL flag is correctly dropped when starting to forward. Right
now this is OK because nobody inflates to_forward without using
buffer_forward().
Up to now, we only had a flag in the session indicating if it had to
work in "connection: close" mode. This is not at all compatible with
keep-alive.
Now we ensure that both sides of a connection act independently and
only relative to the transaction. The HTTP version of the request
and response is also correctly considered. The connection already
knows several modes :
- tunnel (CONNECT or no option in the config)
- keep-alive (when permitted by configuration)
- server-close (close the server side, not the client)
- close (close both sides)
This change carefully detects all situations to find whether a request
can be fully processed in its mode according to the configuration. Then
the response is also checked and tested to fix corner cases which can
happen with different HTTP versions on both sides (eg: a 1.0 client
asks for explicit keep-alive, and the server responds with 1.1 without
a header).
The mode is selected by a capability elimination algorithm which
automatically focuses on the least capable agent between the client,
the frontend, the backend and the server. This ensures we won't get
undesired situations where one of the 4 "agents" is not able to
process a transaction.
No "Connection: close" header will be added anymore to HTTP/1.0 requests
or responses since they're already in close mode.
The server-close mode is still not completely implemented. The response
needs to be rewritten as keep-alive before being sent to the client if
the connection was already in server-close (which implies the request
was in keep-alive) and if the response has a content-length or a
transfer-encoding (but only if client supports 1.1).
A later improvement in server-close mode would probably be to detect
some situations where it's interesting to close the response (eg:
redirections with remote locations). But even then, the client might
close by itself.
It's also worth noting that in tunnel mode, no connection header is
affected in either direction. A tunnelled connection should theoretically
be notified at the session level, but this is useless since by definition
there will not be any more requests on it. Thus, we don't need to add a
flag into the session right now.
Now that the HTTP analyser will already have parsed the beginning
of the request body, we don't have to check for transfer-encoding
anymore since we have the current chunk size in hdr_content_len.
The POST body analysis was split between two analysers for historical
reasons. Now we only have one analyser which checks content length
and waits for enough data to come.
Right now this analyser waits for <url_param_post_limit> bytes of
body to reach the buffer, or the first chunk. But this could be
improved to wait for any other amount of data or any specific
contents.