Add generic authentication & authorization support.
Groups are implemented as bitmaps so the count is limited to
sizeof(int)*8 == 32.
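To picture the bitmap representation (a minimal sketch with made-up names,
not haproxy's actual structures), each group maps to one bit of an
unsigned int, so a membership test is a single AND operation :

    /* each group is one bit, so an unsigned int holds at most 32 groups */
    typedef unsigned int group_mask;

    #define GROUP(n) (1u << (n))

    /* non-zero if the user belongs to at least one of the required groups */
    static int user_in_groups(group_mask user, group_mask required)
    {
        return (user & required) != 0;
    }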
Encrypted passwords are supported with libcrypt and crypt(3), so it is
possible to use any method supported by your system. For example modern
Linux/glibc installations support MD5/SHA-256/SHA-512 and of course classic,
DES-based encryption.
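A minimal sketch of how such a check can be written with crypt(3)
(link with -lcrypt; the stored hash doubles as the salt, and its prefix
selects the method, eg: $1$ for MD5, $5$ for SHA-256, $6$ for SHA-512) :

    #include <crypt.h>
    #include <string.h>

    /* return non-zero if <clear> matches the crypt(3) hash <stored> */
    static int check_password(const char *clear, const char *stored)
    {
        const char *hash = crypt(clear, stored);
        return hash && strcmp(hash, stored) == 0;
    }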
Implement Base64 decoding with a reverse table.
The function accepts and decodes classic base64 strings, which
can be composed from many streams as long as each one is properly
padded, for example: SGVsbG8=IEhBUHJveHk=IQ==
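The reverse-table approach can be sketched as follows (illustrative code,
not haproxy's exact implementation; the '=' padding simply drops the
pending bits, which is what allows concatenated padded streams) :

    #include <string.h>

    static signed char b64rev[256];   /* maps input bytes to 6-bit values */

    /* call once before decoding */
    static void b64_init(void)
    {
        const char *tab = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                          "abcdefghijklmnopqrstuvwxyz0123456789+/";
        int i;

        memset(b64rev, -1, sizeof(b64rev));          /* -1 = invalid byte */
        for (i = 0; i < 64; i++)
            b64rev[(unsigned char)tab[i]] = i;
        b64rev['='] = -2;                            /* -2 = padding */
    }

    /* decode NUL-terminated <in> into <out>, return output length or -1 */
    static int b64_decode(const char *in, unsigned char *out)
    {
        int acc = 0, bits = 0, n = 0;

        for (; *in; in++) {
            int v = b64rev[(unsigned char)*in];
            if (v == -2) {        /* '=' ends a quantum, drop padding bits */
                acc = bits = 0;
                continue;
            }
            if (v < 0)
                return -1;
            acc = (acc << 6) | v;
            bits += 6;
            if (bits >= 8) {      /* a full output byte is available */
                bits -= 8;
                out[n++] = (acc >> bits) & 0xff;
            }
        }
        return n;
    }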
Just as for the req* rules, we can now condition rsp* rules with ACLs.
ACLs match on response, so volatile request information cannot be used.
A warning is emitted if a configuration contains such an anomaly.
All the req* rules except the reqadd rules can now be specified with
an if/unless condition. If a condition is specified and does not match,
the filter is ignored. This is particularly useful with reqidel, reqirep
and reqtarpit.
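A hypothetical configuration snippet showing the idea (the ACL and header
names are made up) :

    acl trusted_net src 10.0.0.0/8
    reqidel ^X-Internal-Auth: unless trusted_net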
From now on, if request filters have ACLs defined, these ACLs will be
evaluated to condition the filter. This will be used to conditionally
remove/rewrite headers based on ACLs.
A new function was added to take care of the common code between
all those keywords. This has saved 8 kB of object code and about
500 lines of source code. It also made it possible to spot and fix
minor bugs (allocated args that were never used).
The code could be factored even further, but that would make it a bit
more complex, which is not worth it at this stage.
Various tests have been performed, and the warnings and errors are
still correctly reported and everything seems to work as expected.
This function automatically builds a rule, considering the if/unless
statements, and automatically updates the proxy's acl_requires, the
condition's file and line.
Krzysztof Oledzki suggested disabling keep-alive when a process
is going down due to a reload, in order to avoid everlasting
sessions. This is a simple and very efficient solution as it
ensures that at most one more request will be handled on a
keep-alive connection after the process has received a SIGUSR1
signal.
Now a server can check the contents of the header X-Haproxy-Server-State
to know how haproxy sees it. The same values as those reported in the stats
are provided :
- up/down status + check counts
- throttle
- weight vs backend weight
- active sessions vs backend sessions
- queue length
- haproxy node name
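For example, such a header may look like this (illustrative values) :

    X-Haproxy-Server-State: UP 2/3; name=bck/srv2; node=lb1; weight=1/2; scur=13/22; qcur=0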
Currently we cannot easily add headers nor anything to HTTP checks
because the requests are pre-formatted with the last CRLF. Make the
check code add the CRLF itself so that we can later add useful info.
This converter can be applied on top of an IPv4-type pattern. It
applies a netmask which is suited for IP address storage and matching.
This can be used to make all hosts within a certain mask share the
same table entries and as such use the same server.
The mask can be passed in dotted form (eg: 255.255.255.0) or in CIDR
form (eg: 24).
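The operation itself is a plain bitwise AND on the network-order address;
a small sketch with hypothetical helper names :

    #include <arpa/inet.h>

    /* convert a CIDR prefix length (0-32) to a network-order mask */
    static struct in_addr cidr_to_mask(int cidr)
    {
        struct in_addr mask;
        mask.s_addr = cidr ? htonl(~0u << (32 - cidr)) : 0;
        return mask;
    }

    /* keep only the network part of <addr>, eg: for table lookups */
    static struct in_addr apply_ipmask(struct in_addr addr, struct in_addr mask)
    {
        addr.s_addr &= mask.s_addr;
        return addr;
    }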
Some converters will need one or several arguments. It's not possible
to write a simple generic parser for that, so let's add the ability
for each converter to support its own argument parser, and call it
to get the arguments when it's specified. If unspecified, the arguments
are passed unmodified as string+len.
The pattern type converters currently support a string arg and a length.
Sometimes we'll prefer to pass them a list or a structure. So let's convert
the string and length into a generic void* and int that each converter may
use as it likes.
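A hypothetical shape for such a converter entry (names invented for
illustration only) :

    /* when parse_args is NULL, the raw argument string and its length
     * are passed through arg_p/arg_i unmodified */
    struct pattern_conv_sketch {
        const char *name;
        int (*process)(const void *arg_p, int arg_i, void *sample);
        int (*parse_args)(const char *str, int len,
                          void **arg_p, int *arg_i);
    };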
Hi Willy,
I've made a quick pass on the "defaults" column in the Proxy keywords matrix (chapter 4.1. in the documentation).
This patch resyncs the code and the documentation. I let you decide if some keywords that still work in the "defaults" section should be forbidden.
- default_backend : in the matrix, "defaults" was not supported but the keyword details say it is.
Tests also shows it works, then I've updated the matrix.
- capture cookie : in the keyword details, we can read `It is not possible to specify a capture in a "defaults" section.'.
Ok, even if the tests worked, I've added an alert in the configuration parser (as it is for capture request/response header).
- description : not supported in "defaults", I added an alert in the parser.
I've also noticed that this keyword doesn't appear in the documentation.
There's one "description" entry, but for the "global" section, which is for a different use (the patch doesn't update the documentation).
- grace : even if this is maybe useless, it works in "defaults". Documentation is updated.
- redirect : alert is added in the parser.
- rsprep : alert added in the parser.
--
Cyril Bonté
We must trim any excess data from the response buffer when recycling
a keep-alive connection, because we may have blocked an invalid response
from a server that we don't want to accidentely forward once we disable
the analysers, nor do we want those data to come along with next response.
A typical example of such data would be from a buggy server responding to
a HEAD with some data, or sending more than the advertised content-length.
To decide whether to set BF_EXPECT_MORE, we reused the same code as in
http_wait_for_request(), but here we must ignore buf->lr which is not
yet set and useless. This might only have caused random sub-optimal
behaviours.
Krzysztof Oledzki reported that 1.4-dev7 would regularly crash
on an apparently very common workload. The cores he provided
showed some inter-buffer data corruption, exactly similar to
what was fixed by the following recent commit :
bbfa7938bd [BUG] buffer_replace2 must never change the ->w entry
In fact, it was buffer_insert_line2() which was still modifying the
->w pointer, causing issues with pipelined responses in keep-alive
mode if some headers were to be added.
The bug requires a remote client, a near server, large server buffers
and small client buffers to be reproduced, with response header
insertion. Still, it's surprising that it did not trigger earlier.
Now after 100k pipelined requests it did not trigger anymore.
Despite what is explicitly stated in HTTP specifications,
browsers still use the undocumented Proxy-Connection header
instead of the Connection header when they connect through
a proxy. As such, proxies generally implement support for
this stupid header name, breaking the standards and making
it harder to support keep-alive between clients and proxies.
Thus, we add a new "option http-use-proxy-header" to tell
haproxy that if it sees requests which look like proxy
requests, it should use the Proxy-Connection header instead
of the Connection header.
This function is used to move data which is located between ->w and ->r,
so it must not touch ->w, otherwise it will displace pending data which
is before the one we're actually overwriting. The issue arises with
some pipelined responses where removing the "Connection: close" header
chopped off part of the previous response, thus corrupting the last
response and shifting the next one. Those are detected
in the logs because the next response will be a 502 with flags PH.
When using "option persist" or "force-persist", we want to know from the
logs if the cookie referenced a valid server or a down server. Till here
the flag reported a valid server even if the server was down, which is
misleading. Now we correctly report that the requested server was down.
We can typically see "--DI" when using "option persist" with redispatch,
and "SCDN" when using force-persist on a down server.
This is used to force access to down servers for some requests. This
is useful when validating that a change on a server correctly works
before enabling the server again.
We used to delay the response if there is a new request in the buffer.
However, if the pending request is incomplete, we should not delay the
pending responses.
This can cause parts of responses to be truncated in case of
pipelined requests if the second request generates an error
before the first request is completely flushed.
Sometimes we need to be able to change the default kernel socket
buffer size (recv and send). Four new global settings have been
added for this :
- tune.rcvbuf.client
- tune.rcvbuf.server
- tune.sndbuf.client
- tune.sndbuf.server
Those can be used to reduce kernel memory footprint with large numbers
of concurrent connections, and to reduce risks of write timeouts with
very slow clients due to excessive kernel buffering.
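For example, to shrink the kernel buffers on a box handling huge numbers
of concurrent connections (values purely illustrative) :

    global
        tune.rcvbuf.client 8192
        tune.sndbuf.client 8192
        tune.rcvbuf.server 16384
        tune.sndbuf.server 16384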
This one is the next step of previous patch. It correctly computes
the response mode and the Connection flag transformations depending
on the request mode and version, and the response version and headers.
We're now also able to add "Connection: keep-alive", and to convert
server's close during a keep-alive connection to a server-close
connection.
We need to improve Connection header handling in the request for it
to support the upcoming keep-alive mode. Now we have two flags which
keep in the session the information about the presence of a
Connection: close and a Connection: keep-alive headers in the initial
request, as well as two others which keep the current state of those
headers so that we don't have to parse them again. Knowing the initial
value is essential to know when the client asked for keep-alive while
we're forcing a close (eg in server-close mode). Also the Connection
request parser is now able to automatically remove single header values
at the same time they are parsed. This provides greater flexibility and
reliability.
All combinations of listen/front/back in all modes and with both
1.0 and 1.1 have been tested.
Some header values might be delimited with spaces, so it's not enough to
compare "close" or "keep-alive" with strncasecmp(). Use word_match() for
that.
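In the spirit of word_match() (a simplified sketch, not the exact haproxy
helper), the value is split on delimiters and each token is compared
case-insensitively :

    #include <string.h>
    #include <strings.h>

    /* non-zero if <word> appears as a full token in <hdr> of length <len> */
    static int word_match_sketch(const char *hdr, int len, const char *word)
    {
        int wlen = strlen(word);
        const char *end = hdr + len;

        while (hdr < end) {
            while (hdr < end && (*hdr == ' ' || *hdr == ',' || *hdr == '\t'))
                hdr++;                              /* skip delimiters */
            if (end - hdr >= wlen && strncasecmp(hdr, word, wlen) == 0 &&
                (hdr + wlen == end || hdr[wlen] == ' ' ||
                 hdr[wlen] == ',' || hdr[wlen] == '\t'))
                return 1;                           /* full-token match */
            while (hdr < end && *hdr != ',' && *hdr != ' ' && *hdr != '\t')
                hdr++;                              /* skip this token */
        }
        return 0;
    }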
Calling this function after http_find_header2() automatically deletes
the current value of the header, and removes the header itself if the
value is the only one. The context is automatically adjusted for a
next call to http_find_header2() to return the next header. No other
change nor test should be made on the transient context though.
The close mode of a transaction would be switched to tunnel mode
at the end of the processing, letting a lot of pending data pass
in the other direction if any. Let's fix that by checking for the
close mode during state resync too.
We must set the error flags when detecting that a client has reset
a connection or timed out while waiting for a new request on a keep-alive
connection, otherwise process_session() sets it itself and counts one
request error.
That explains why some sites were showing an increase in request errors
with the keep-alive.
We get a lot of those, especially with web crawlers :
recv(2, 0x810b610, 7000, 0) = -1 ECONNRESET (Connection reset by peer)
shutdown(2, 1 /* send */) = -1 ENOTCONN (Transport endpoint is not connected)
close(2) = 0
There's no need to perform the shutdown() here, the socket is already
in error so it is down.
A check was performed in buffer_replace2() to compare buffer
length with its read pointer. This has been wrong for a long
time, though it only has an impact when dealing with keep-alive
requests/responses. In theory this should be backported but
the check has no impact without keep-alive.
We can receive data with a notification of socket error. But we
must not check for the error before reading the data, because it
may be an asynchronous error notification that we check too early
while the response we're waiting for is available. If there is an
error, recv() will get it.
This should help with servers that close very fast after the response
and should also slightly lower the CPU usage during very fast checks
on massive amounts of servers since we eliminate one system call.
This should probably be backported to 1.3.
While waiting in a keep-alive state for a request, we want to silently
close if we don't get anything. However if we get a partial request it's
different because that means the client has started to send something.
This requires a new transaction flag. It will be used to implement a
distinct timeout for keep-alive and requests.
This change, suggested by Cyril Bonté, makes a lot of sense and
would have made it obvious that sessid was not properly initialized
while switching to keep-alive. The code is now cleaner.
The stream_int_cond_close() function was added to preserve the
contents of the response buffer because stream_int_retnclose()
was buggy. It flushed the response instead of flushing the
request. This caused issues with pipelined redirects followed
by error messages which ate the previous response.
This might even have caused object truncation on pipelined
requests followed by an error or by a server redirection.
Now that this is fixed, simply get rid of the now useless
function.
I've tried to follow all the pool_alloc2/pool_free2 calls in the code
to track memory leaks. I've found one which only happens when there's
already no more memory when allocating a new appsession cookie.
Sometimes it can be desired to return a location which is the same
as the request with a slash appended when there was not one in the
request. A typical use of this is for sending a 301 so that people
don't reference links without the trailing slash. The name of the
new option is "append-slash" and it can be used on "redirect"
statements in prefix mode.
When using server redirection, it is possible to specify a path
consisting of only one slash. While this is discouraged (risk of
loop) it may sometimes be useful combined with content switching.
The prefixing of a '/' then causes two slashes to be returned in
the response. So we now do as with the other redirects, don't
prepend a slash if it's alone.
Some message pointers were not usable once the message reached the
HTTP_MSG_DONE state. This is the case for ->som which points to the
body because it is needed to parse chunks. There is one case where
we need the beginning of the message : server redirect. We have to
call http_get_path() after the request has been parsed. So we rely
on ->sol without counting on ->som. In order to achieve this, we're
making ->rq.{u,v} relative to the beginning of the message instead
of the buffer. That simplifies the code and makes it cleaner.
Preliminary tests show this is OK.
This might have been introduced with chunk extensions. Note that
the server redirect still does not work because http_get_path()
cannot get the correct path once the request message is in the
HTTP_MSG_DONE state (->som does not point to the start of message
anymore).
The initial code's intention was to loop on the analysers as long
as an analyser is added by another one. [This code was wrong due to
the while(0) which breaks even on a continue statement, but the
initial intention must be changed too]. In fact we should limit the
number of times we loop on analysers in order to limit latency.
Using maxpollevents as a limit makes sense since this tunable is
used for the exact same purposes. We may add another tunable later
if that ever makes sense, though it's very unlikely to be needed.
If we accept a new request and that request produces an immediate
response (error, redirect, ...), then we may fail to send it in
case of pipelined requests if the response buffer is full. To avoid
this, we check the availability of at least maxrewrite bytes in the
response buffer before accepting a new pipelined request.
During a redirect, we used to send the last chunk of response with
stream_int_cond_close(). But this is wrong in case of pipeline,
because if the response already contains something, this function
will refrain from touching the buffer. Use a concatenation function
instead.
Also, this call might still fail when the buffer is full, we need
a second fix to refrain from parsing an HTTP request as long as the
response buffer is full, otherwise we may not even be able to return
a pending redirect or an error code.
That patch was incorrect because under some circumstances, the
capture memory could be freed by session_free() and then again
by http_end_txn(), causing a double free and an eventual segfault.
The pool use count was also reported wrong due to this bug.
The cleanup code was removed from session_free() to remain only
in http_end_txn().
Hank A. Paulson reported a massive memory leak when using keep-alive
mode. The information he provided made it easy to find that captured
request and response headers were erased but not released when renewing
a request.
Several HTTP analysers used to set those flags to values that
were useful but without considering the possibility that they
were not called again to clean what they did. First, replace
direct flag manipulation with more explicit macros. Second,
enforce a rule stating that any buffer which changes one of
these flags from the default must restore it after completion,
so that other analysers see correct flags.
With both this fix and the previous one about analyser bits,
we should not see any more stuck sessions.
A request analyser may very well be added while processing a response
(eg: end of an HTTP keep-alive response). It's very dangerous to only
rely on flags that ought to change in order to loop back, so let's
correctly detect a possible new analyser addition instead of guessing.
With the introduction of keep-alive, we have created situations
where an analyser can add other analysers to the current list,
which are behind it, which have already been processed once, and
which are needed immediately because without them there will be
no more I/O activity. This is typically the case for enabling
reading of a new request after preparing for a new request.
Instead of creating specific cases for some analysers (there was
already one such before), we now use a little bit of algorithmics
to create an ordered bit chain supporting priorities and fast
operations.
Another advantage of this new construction is that it's not a
real loop anymore, so if an analyser is unknown, it will not
loop but just ignore it.
Note that it is easy to skip multiple analysers at once now in
order to speed up the checking a bit. Some test code has shown
only a minor gain though.
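One way to picture the walk (illustrative only, not the actual code) :

    /* analysers are priority-ordered bits; always run the lowest set bit
     * next, so a bit added behind the current one is still caught in the
     * same pass, and unknown bits are simply never visited */
    static void run_analysers_sketch(unsigned int ana)
    {
        while (ana) {
            unsigned int bit = ana & -ana;  /* isolate lowest set bit */
            /* call the handler registered for <bit> here; it may set
             * new bits in <ana>, including some lower than <bit> */
            ana &= ~bit;                    /* mark this one done */
        }
    }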
This change has been carefully re-read and has no direct reason
of causing a regression. However it has been tagged "major"
because the fact that it runs the analysers correctly might
trigger an old sleeping bug somewhere in one of the analysers.
This patch implements default-server support allowing to change
default server options. It can be used in [defaults] or [backend]/[listen]
sections. Currently the following options are supported:
- error-limit
- fall
- inter
- fastinter
- downinter
- maxconn
- maxqueue
- minconn
- on-error
- port
- rise
- slowstart
- weight
Supported information, available via "tr/td title":
- cap: capabilities (proxy)
- mode: one of tcp, http or health (proxy)
- id: SNMP ID (proxy, socket, server)
- IP (socket, server)
- cookie (backend, server)
This patch adds "a link" & "a href" html tags for sockets.
As sockets may have the same names as servers, I decided to
add a "+" character (forbidden in names assigned to servers) as a prefix.
It is useless to report statistics if the feature was not enabled.
This also makes it possible to tell whether health analysis is
enabled or not just by looking at the stats page.
Commit 0dfdf19b64 introduced a
regression because the connection header is now parsed and checked
depending on the configured options, but the options are set after
calling it instead of being set before.
Historically, "option httpclose" has always worked the same way. It
only mangles the "Connection" header in the request and the response
if needed, but does not affect the connection by itself, and ignores
any further data. It is dangerous to change this behaviour without
leaving any other alternative. If an active close is desired, it's
better to make use of "option forceclose" which does exactly what
it intends to do.
So as of now, "option httpclose" will only mangle the headers as
before, and will only affect the connection by itself when combined
with another connection-related option (eg: keepalive or server-close).
We basically have to mimic the code of process_session() here, so
when the remote output is closed, we must abort otherwise we'll end
up with data which cannot leave the buffer.
By default this function returned 0 indicating an end of analysis.
This was not a problem as long as it was the last analyser in the
chain but becomes quite a big one now since it skips the forwarder
with auto_close enabled, causing some data to pass under the nose
of the last one undetected.
There were still several situations leading to CLOSE_WAIT sockets
remaining there forever because some complex transitions were
obviously not caught due to the impossibility to resync changes
between the request and response FSMs.
This patch now centralizes the global transaction state and feeds
it from both request and response transitions. That way, whoever
finishes first, there will be no issue for converging to the correct
state.
Some heavy use of the new debugging function has helped a lot. Maybe
those calls could be removed after some time. First tests are very
positive.
This function outputs to fd #-1 the status of request and response
buffers, the transaction states, the stream interface states, etc...
That way, it's easy to find that output in an strace report, correctly
placed WRT the other syscalls.
The data forwarders are analysers. As such, they have to check for
various situations on which they have to abort, one of them being
the lack of data with closed input. Now we don't leave the functions
anymore without performing these checks. This has solved the new
CLOSE_WAIT issue that became more noticeable since last patch.
It may happen that we forward a close just after we sent the last
chunk, because we forgot to clear the AUTO_CLOSE flag.
This issue caused some pages to be truncated depending on some
timing races. Issue initially reported by Cyril Bonté.
The cookie parser could be fooled by spaces or commas in cookie names
and values, causing the persistence cookie not to be matched if located
just after such a cookie. Now spaces found in values are considered as
part of the value, and spaces, commas and semi-colons found in values
or names, are skipped till next cookie name.
This fix must be backported to 1.3.
In case of a non-blocking socket, used for connecting to a remote
server (not localhost), the error reported by the health check
was most of the time one of EINPROGRESS/EAGAIN/EALREADY.
This patch adds a getsockopt(..., SO_ERROR, ...) call so now
the proper error message is reported.
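The standard idiom looks like this (sketch) :

    #include <sys/socket.h>
    #include <errno.h>

    /* once a non-blocking connect() has completed, fetch the real outcome
     * instead of the transient EINPROGRESS/EAGAIN/EALREADY */
    static int connect_result(int fd)
    {
        int err = 0;
        socklen_t len = sizeof(err);

        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) == -1)
            err = errno;
        return err;   /* 0 means the connection actually succeeded */
    }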
It makes sense to permit a client to keep its connection when
performing a redirect to the same host. We only detect the fact
that the redirect location begins with a slash to use the keep-alive
(if the client supports it).
By default we automatically wait for enough data to fill large
packets if buf->to_forward is not null. This causes a problem
with POST/Expect requests which have a data size but no data
immediately available. Instead of causing noticeable delays on
such requests, simply add a flag to disable waiting when sending
requests.
In server-close mode particularly, the response buffer is marked for
no-auto-close after a response passed through. This prevented a POST
request from being aborted on errors, timeouts or anything if the
response was received before the request was complete.
If we enable reading of a request immediately after completing
another one, we end up performing small reads until the request
buffer is complete. This takes time and makes it harder to realign
the buffer when needed. Just enable reading when we need to.
The rq.u field is relative to buf->data, not to msg->sol. We have
to subtract msg->som everywhere this error was made. Maybe it will
be simpler to have a pointer to the buffer in the message and find
appropriate data there.
Many times we see a lot of short responses in HTTP (typically 304 on a
reload). It is a waste of network bandwidth to send that many small packets
when we know we can merge them. When we know that another HTTP request is
following a response, we set BF_EXPECT_MORE on the response buffer, which
will turn MSG_MORE on exactly once. That way, multiple short responses can
leave pipelined if their corresponding requests were also pipelined.
While it could be dangerous to enable MSG_MORE on infinite data (eg:
interactive sessions), it makes sense to enable it when we know the
chunk to be sent is just a part of a larger one.
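On Linux this naturally maps to the MSG_MORE send flag; a sketch :

    #include <sys/socket.h>

    /* pass MSG_MORE exactly when we know another chunk follows, so the
     * kernel may merge short pipelined responses into fewer packets */
    static ssize_t send_chunk(int fd, const void *buf, size_t len,
                              int expect_more)
    {
        return send(fd, buf, len, expect_more ? MSG_MORE : 0);
    }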
We used to forward more trailers than required, causing a
desynchronization of the output. Now we schedule them all for forwarding
as soon as we encounter them.
This option enables HTTP keep-alive on the client side and close mode
on the server side. This offers the best latency on the slow client
side, and still saves as many resources as possible on the server side
by actively closing connections. Pipelining is supported on both requests
and responses, though there is currently no reason to get pipelined
responses.
When too large a message lies in a buffer before parsing a new
request/response, we can now wait for previous outgoing data to
leave the buffer before attempting to parse again. After that
we can consider the opportunity to realign the buffer if needed.
The HTTP parser needed the msg structure to hold pre-initialized pointers.
This causes trouble with keep-alive because if some data is still in the
buffer, the pointers can be anywhere after the data and later become invalid
when the buffer gets realigned.
It was not needed to rely on that since we have two valid pieces of
information in the buffer itself :
- buf->lr : last visited place
- buf->w + buf->send_max : beginning of next message
So by doing the maths only on those values, we can avoid doing tricks
on msg->som.
This option was disabled for frontends in the configuration because
it was useless in its initial implementation, though it was still
checked in the code. Let's officially enable it now.
When we catch an error from the server, speed up the connection
abort since we don't want to remain long with pending data in the
socket, and we want to be able to reuse our source port ASAP.
Doing this helps us flush the system buffers from all unread data. This
avoids having orphans when clients suddenly get off the net without
reading their entire response.
This new flag may be set by any user on a stream interface to tell
the underlying protocol that there is no need for lingering on the
socket since we know the other side either received everything or
does not care about what we sent.
This will typically be used with forced server close in HTTP mode,
where we want to quickly close a server connection after receiving
its response. Otherwise the system would prevent us from reusing
the same port for some time.
The "forceclose" option used to close the output channel to the
server once it started to respond. While this happened to work with
most servers, some of them considered this as a connection abort and
immediately stopped responding.
Now that we're aware of the end of a request and response, we're able
to trivially handle this option and properly close both sides when the
server's response is complete.
During this change it appeared that forwarding could be allowed when
the BF_SHUTW_NOW flag was set on a buffer, which obviously is not
acceptable and was causing some trouble. This has been fixed too and
is the reason for the MEDIUM status on this patch.
Since we'll soon be able to close a connection with remaining data in a
buffer, it becomes obvious that we can prepare to close when we're about
to send the last chunk of data and not the whole buffer.
This error was triggered by requests not starting at the beginning
of the buffer. It cannot happen with earlier versions though it might
be a good idea to fix it anyway.
There were still issues with the buffer alignment. Now we ensure
that we always align it before a request or response is completely
parsed if there is less than maxrewrite bytes free at the end. In
practice, it's not called that often and ensures we can always work
as expected.
Since the introduction of the automatic sizing of buffers during reads,
a bug appeared where the max size could be negative, causing large
chunks of memory to be overwritten during recv() calls if a read pointer
was already past the buffer's limit.
In many places where we perform header insertion, an error control
is performed but due to a mistake, it cannot match any error :
if (unlikely(error) < 0)
instead of
if (unlikely(error < 0))
This prevents error 400 responses from being sent when the buffer is
full due to many header additions. This must be backported to 1.3.
The body parser will be used in close and keep-alive modes. It follows
the stream to keep in sync with both the request and the response message.
Both chunked transfer-coding and content-length are supported according to
RFC2616.
The multipart/byterange encoding has not yet been implemented; if it is
not combined with either of the two other ones, the message will be
forwarded till the close, as requested by the specification.
Both the request and the response analysers converge into an HTTP_MSG_DONE
state where it will be possible to force a close (option forceclose) or to
restart with a fresh new transaction and maintain keep-alive.
This change is important. All tests are OK but any possible behaviour
change with "option httpclose" might find its root here.
When parsing body for URL parameters, we must not consider that
data are available from buf->data but from buf->data + msg->som.
This is not a problem right now but may become with keep-alive.
When parsing a request that does not start at the beginning of the
buffer, we may experience a buffer full issue. In order to avoid
this, we try to realign the buffer if it is not really full. That
will be required when we have to deal with pipelined requests.
Some wrong operations were performed on buffers, assuming the
offsets were relative to the beginning of the request while they
are relative to the beginning of the buffer. In practice this is
not yet an issue since both are the same... until we add support
for keep-alive.
It's not enough to know if the connection will be in CLOSE or TUNNEL mode,
we still need to know whether we want to read a full message to a known
length or read it till the end just as in TUNNEL mode. Some updates to the
RFC clarify slightly better the corner cases, in particular for the case
where a non-chunked encoding is used last.
Now we also take care of adding a proper "connection: close" to messages
whose size could not be determined.
Chunked encoding can be slightly more complex than what was implemented.
Specifically, it supports some optional extensions that were not parsed
till now if present, and would have caused an error to be returned.
Also, now we enforce check for too large values in chunk sizes in order
to ensure we never overflow.
Last, we're now able to return a request error if we can't read the
chunk size because the buffer is already full.
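A sketch of such a bounded parser (illustrative, not the exact code) :

    /* parse a hex chunk size from [p,end); returns the number of bytes
     * consumed, 0 if more data is needed, or -1 on error/overflow */
    static int parse_chunk_size_sketch(const char *p, const char *end,
                                       unsigned int *res)
    {
        const char *start = p;
        unsigned int chunk = 0;

        while (p < end) {
            int c = *p;

            if (c >= '0' && c <= '9')
                c -= '0';
            else if (c >= 'a' && c <= 'f')
                c -= 'a' - 10;
            else if (c >= 'A' && c <= 'F')
                c -= 'A' - 10;
            else
                break;                 /* CR, ';' extension, etc. */
            if (chunk & 0xF0000000)
                return -1;             /* next shift would overflow */
            chunk = (chunk << 4) + c;
            p++;
        }
        if (p == end)
            return 0;                  /* chunk size not complete yet */
        if (p == start)
            return -1;                 /* not even one hex digit */
        *res = chunk;
        return p - start;
    }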
This state indicates that an HTTP message (request or response) is
complete. This will be used to know when we can re-initialize a
new transaction. Right now we only switch to it after the end of
headers if there is no data. When other analysers are implemented,
we can switch to this state too.
The condition to reuse a connection is when the response finishes
after the request. This will have to be checked when setting the
state.
The check for 1xx responses was performed too low in the code and
required a lot of tests to avoid some processing. We still left the test
after the response rewrite rules so that we can eliminate unwanted
headers if required.
This code really belongs to the http part since it's transaction-specific.
This will also make it easier to later reinitialize a transaction in order
to support keepalive.
We used to apply a limit to each buffer's size in order to leave
some room to rewrite headers, then we used to remove this limit
once the session switched to a data state.
Proceeding that way becomes a problem with keepalive because we
have to know when to stop reading too much data into the buffer
so that we can leave some room again to process next requests.
The principle we adopt here consists in only relying on to_forward+send_max.
Indeed, both of those data define how many bytes will leave the buffer.
So as long as their sum is larger than maxrewrite, we can safely
fill the buffers. If they are smaller, then we refrain from filling
the buffer. This means that we won't risk filling buffers when
reading the last data chunk followed by a POST request and its contents.
The only impact identified so far is that we must ensure that the
BF_FULL flag is correctly dropped when starting to forward. Right
now this is OK because nobody inflates to_forward without using
buffer_forward().
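The decision can be summed up as follows (minimal stand-in structure,
not the real buffer type) :

    struct fwd_counters {
        unsigned int send_max;     /* bytes already scheduled to leave */
        unsigned int to_forward;   /* bytes to forward without parsing */
    };

    /* keep filling the buffer only while more than maxrewrite bytes are
     * guaranteed to leave, which preserves the rewrite reserve */
    static int may_fill(const struct fwd_counters *b, unsigned int maxrewrite)
    {
        return b->send_max + b->to_forward > maxrewrite;
    }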
Up to now, we only had a flag in the session indicating if it had to
work in "connection: close" mode. This is not at all compatible with
keep-alive.
Now we ensure that both sides of a connection act independently and
only relative to the transaction. The HTTP version of the request
and response is also correctly considered. The connection already
knows several modes :
- tunnel (CONNECT or no option in the config)
- keep-alive (when permitted by configuration)
- server-close (close the server side, not the client)
- close (close both sides)
This change carefully detects all situations to find whether a request
can be fully processed in its mode according to the configuration. Then
the response is also checked and tested to fix corner cases which can
happen with different HTTP versions on both sides (eg: a 1.0 client
asks for explicit keep-alive, and the server responds with 1.1 without
a header).
The mode is selected by a capability elimination algorithm which
automatically focuses on the least capable agent between the client,
the frontend, the backend and the server. This ensures we won't get
undesired situations where one of the 4 "agents" is not able to
process a transaction.
No "Connection: close" header will be added anymore to HTTP/1.0 requests
or responses since they're already in close mode.
The server-close mode is still not completely implemented. The response
needs to be rewritten as keep-alive before being sent to the client if
the connection was already in server-close (which implies the request
was in keep-alive) and if the response has a content-length or a
transfer-encoding (but only if client supports 1.1).
A later improvement in server-close mode would probably be to detect
some situations where it's interesting to close the response (eg:
redirections with remote locations). But even then, the client might
close by itself.
It's also worth noting that in tunnel mode, no connection header is
affected in either direction. A tunnelled connection should theoretically
be notified at the session level, but this is useless since by definition
there will not be any more requests on it. Thus, we don't need to add a
flag into the session right now.
Now that the HTTP analyser will already have parsed the beginning
of the request body, we don't have to check for transfer-encoding
anymore since we have the current chunk size in hdr_content_len.
The POST body analysis was split between two analysers for historical
reasons. Now we only have one analyser which checks content length
and waits for enough data to come.
Right now this analyser waits for <url_param_post_limit> bytes of
body to reach the buffer, or the first chunk. But this could be
improved to wait for any other amount of data or any specific
contents.
The previous check was correct: the RFC states that it is required
to have a domain-name which contains a dot AND begins with a dot.
However, currently some (all?) browsers do not obey this specification,
so such configuration might work.
This patch reverts 3d8fbb6658 but
changes the check from FATAL to WARNING and extends the message.
The fix in commit 500b8f0349 addressed the 64 bit
case but caused the opposite type issue to appear on 32 bit platforms. Cast
the difference and be done with it, since gcc does not agree on the type
carrying the difference between two pointers on 32 and 64 bit platforms.
Implement decreasing health based on observing communication between
HAProxy and servers.
Changes in this version 2:
- documentation
- close race between a started check and health analysis event
- don't force fastinter if it is not set
- better names for options
- layer4 support
Changes in this version 3:
- add stats
- port to the current 1.4 tree
Cyril Bonté found that when an error is detected in one config file, it
is also reported in all other ones, which is wrong. The fix obviously
consists in checking the return code from readcfgfile() and not the
accumulator.
Today I was testing headers manipulation but I met a bug with my first test.
To reproduce it, add for example this line :
rspadd Cache-Control:\ max-age=1500
Check the response header, it will provide :
Cache-Control: max-age=15000 <= the last character is duplicated
This only happens when we use backslashes on the last line of the
configuration file, when it does not end with a newline.
Also if the last line is like :
rspadd Cache-Control:\ max-age=1500\
the last backslash causes a segfault.
This is not due to rspadd but to a more general bug in cfgparse.c :
...
if (skip) {
memmove(line + 1, line + 1 + skip, end - (line + skip + 1));
end -= skip;
}
...
should be :
...
if (skip) {
memmove(line + 1, line + 1 + skip, end - (line + skip));
end -= skip;
}
...
I've reproduced it with haproxy 1.3.22 and the last 1.4 snapshot.
In some environments it is not possible to rely on any wildcard for a
domain name (eg: .com, .net, .fr...) so it is required to send multiple
domain extensions. (Un)fortunately the syntax check on the domain name
prevented that from being done the dirty way. So let's just build a
domain list when multiple domains are passed on the same line.
(cherry picked from commit 950245ca2b)
It was an OR instead of an AND, so it was required to have a cookie
name which contained a dot AND began with a dot.
(cherry picked from commit a1e107fc13)
Gabriel Sosa reported that logs were appearing with BADREQ when
'option httplog' was used with a TCP proxy (eg: inherited via a
default instance). This patch detects it and falls back to tcplog
after emitting a warning.
(cherry picked from commit 5f0bd6537f)
Holger Just reported that running ACLs with too many args caused
a segfault during config parsing. This is caused by a wrong test
on argument count. In case of too many arguments on a config line,
the last one was not correctly zeroed. This is now done and we
report the error indicating what part had been truncated.
(cherry picked from commit 3b39c1446b)
Cameron Simpson reported an annoying case where haproxy simply reports
"Error(s) found in configuration file" when the file is not found or
not readable.
Fortunately the parsing function still returns -1 in case of open
error, so we're able to detect the issue from the caller and report
the corresponding errno message.
In order to support keepalive, we'll have to differentiate
normal sessions from tunnel sessions, which are the ones we
don't want to analyse further.
Those are typically the CONNECT requests where we don't care
about any form of content-length, as well as the requests
which are forwarded on non-close and non-keepalive proxies.
To sum up :
- len : it's now the maximum number of characters for the value, preventing
garbage results.
- a new option "prefix" is added; it allows the use of dynamic cookie
names (e.g. ASPSESSIONIDXXX).
Previously in the thread, I wanted to use the value found with
"capture cookie" but when I started to update the documentation, I
found this solution quite weird. I've made a small rework to not
depend on "capture cookie".
- it is now possible to define the URL parser mode (path parameters
or query string).
We now set msg->col and msg->sov to the first byte of non-header.
They will be used later when parsing chunks. A new macro was added
to perform size additions on an http_msg in order to limit the risks
of copy-paste in the long term.
During this operation, it appeared that the http_msg struct was not
optimal on 64-bit, so it was re-ordered to fill the holes.
Yohan Tordjman at Dstorage found that upgrading haproxy to 1.4-dev4
caused truncated objects to be returned. An strace quickly exhibited
the issue which was 100% reproducible :
4297 epoll_wait(0, {}, 10, 0) = 0
4297 epoll_wait(0, {{EPOLLIN, {u32=7, u64=7}}}, 10, 1000) = 1
4297 splice(0x7, 0, 0x5, 0, 0xffffffffffffffff, 0x3) = -1 EINVAL (Invalid argument)
4297 shutdown(7, 1 /* send */) = 0
4297 close(7) = 0
4297 shutdown(2, 1 /* send */) = 0
4297 close(2) = 0
This is caused by the fact that the forward length is taken from
BUF_INFINITE_FORWARD, which is -1. The problem does not appear
in 32-bit mode because this value is first cast to an unsigned
long, truncating it to 32-bit (4 GB). Setting an upper bound
fixes the issue.
Also, a second error check has been added for splice. If EINVAL
is returned, we fall back to recv().
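A sketch of the bounding part of the fix (constant names illustrative) :

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/types.h>

    #define INFINITE_FORWARD ((unsigned int)-1)   /* "forward all" marker */
    #define MAX_SPLICE_AT_ONCE (1U << 30)         /* sane upper bound */

    /* never hand the raw "infinite" marker to splice(): on 64-bit the
     * length is no longer truncated to 4 GB, so splice() sees a huge
     * value and rejects it with EINVAL */
    static ssize_t splice_some(int fd_in, int fd_out, unsigned int to_forward)
    {
        size_t len = (to_forward == INFINITE_FORWARD)
                     ? MAX_SPLICE_AT_ONCE : to_forward;

        return splice(fd_in, NULL, fd_out, NULL, len,
                      SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
    }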
An HTTP message can be decomposed into several sub-states depending
on the transfer-encoding. We'll have to keep these state information
while parsing chunks, so we must extend the values. In order not to
change everything, we'll now consider that anything >= MSG_BODY is
the body, and that the value indicates the precise state. The
MSG_ERROR status which was greater than MSG_BODY was moved for this.
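Schematically (state names illustrative, not the exact list) :

    enum http_msg_state_sketch {
        HTTP_MSG_RQBEFORE,     /* before the request line */
        HTTP_MSG_HDRS,         /* parsing headers */
        HTTP_MSG_BODY,         /* anything >= here is within the body */
        HTTP_MSG_CHUNK_SIZE,   /* parsing a chunk size */
        HTTP_MSG_DATA,         /* inside chunk or identity data */
        HTTP_MSG_TRAILERS,     /* parsing chunked trailers */
        HTTP_MSG_DONE,         /* message fully read */
        HTTP_MSG_ERROR         /* relocated past the body sub-states */
    };

    #define HTTP_MSG_IN_BODY(st) ((st) >= HTTP_MSG_BODY)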
Right now, an HTTP server cannot track a TCP server and vice-versa.
This patch enables proxy tracking without relying on the proxy's mode
(tcp/http/health). It only requires a matching proxy name to exist. The
original function was renamed to findproxy_mode().
This patch extends and corrects the functionality introduced by
"Collect & provide http response codes received from servers":
- responses are now also accounted for frontends
- backend's and frontend's counters are incremented based
on responses sent to client, not received from servers
This patch adds <a href> html links for proxies, frontends, servers
and backends. Once located, they can be clicked. Users no longer have
to manually append #anchor to the stats URL.
All files referencing the previous ebtree code were changed to point
to the new one in the ebtree directory. A makefile variable (EBTREE_DIR)
is also available to use files from another directory.
The ability to build the libebtree library temporarily remains disabled
because it can have an impact on some existing toolchains and does not
appear worth it in the medium term if we add support for multi-criteria
stickiness for instance.
We also check the close status and terminate the server persistent
connection if appropriate. Note that since this change, we'll not
get any "Connection: close" headers added to HTTP/1.0 responses
anymore, which is good.
The code part which waits for an HTTP response has been extracted
from the old function. We now have two analysers and the second one
may re-enable the first one when a 1xx response is encountered.
This has been tested and works.
The calls to stream_int_return() that were remaining in the wait
analyser have been converted to stream_int_retnclose().
Store those elements in the transaction. RFC2616 is strictly followed.
Note that requests containing two different content-length fields are
discarded as invalid.
This patch has 2 goals :
1. I wanted to test the appsession feature with a small PHP code,
using PHPSESSID. The problem is that when PHP gets an unknown session
id, it creates a new one with this ID. So, when sending an unknown
session to PHP, persistence is broken : haproxy won't see any new
cookie in the response and will never attach this session to a
specific server.
This also happens when you restart haproxy : the internal hash becomes
empty and all sessions lose their persistence (load balancing the
requests on all backend servers, creating a new session on each one).
For a user, it's like the service is unusable.
The patch modifies the code to make haproxy also learn the persistence
from the client : if no session is sent from the server, then the
session id found in the client part (using the URI or the client cookie)
is used to associate it with the server that gave the response.
As it's probably not a feature usable in all cases, I added an option
to enable it (by default it's disabled). The syntax of appsession becomes :
appsession <cookie> len <length> timeout <holdtime> [request-learn]
This helps haproxy repair the persistence (with the risk of losing its
session at the next request, as the user will probably not be load
balanced to the same server the first time).
2. This patch also tries to reduce the memory usage.
Here is a little example to explain the current behaviour :
- Take a Tomcat server where /session.jsp is valid.
- Send a request using a cookie with an unknown value AND a path
parameter with another unknown value :
curl -b "JSESSIONID=12345678901234567890123456789012" http://<haproxy>/session.jsp;jsessionid=00000000000000000000000000000001
(I know, it's unexpected to have a request like that on a live service)
Here, haproxy finds the URI session ID and stores it in its internal
hash (with no server associated). But it also finds the cookie session
ID and stores it again.
- As a result, session.jsp sends a new session ID also stored in the
internal hash, with a server associated.
=> For 1 request, haproxy has stored 3 entries, only 1 of which is usable
The patch modifies the behaviour to store only 1 entry (maximum).
When processing a GET or HEAD request in close mode, we know we don't
need to read anything anymore on the socket, so we can disable it.
Doing this can save up to 40% of the recv calls, and half of the
epoll_ctl calls.
For this we need a buffer flag indicating that we're not interested in
reading anymore. Right now, this flag also disables both polled reads.
We might benefit from disabling only speculative reads, but we will need
at least this flag when we want to support keepalive anyway.
Currently we don't disable the flag on completion, but it does not
matter as we close ASAP when performing the shutw().
The fd_list[] used by sepoll was indexed on the fd number and was only
used to store the equivalent of an integer. Changing it to be merged
with fdtab reduces the number of pointer computations, the code size
and some initialization steps. It does not harm other pollers much
either, as only one integer was added to the fdtab array.
Some rarely used information is stored in fdtab, making it larger for
no reason (source port ranges, remote address, ...). Such information
lies there because the checks can't find it anywhere else. The goal
will be to move this information to the stream interface once the
checks make use of it.
For now, we move them to an fdinfo array. This simple change might
have improved the cache hit ratio a little bit, because a 0.5%
performance increase was measured.
Till now we would only set SN_CONN_CLOSED after rewriting it. Now we
set it just after checking the Connection header so that we can use
the result later if required.
This can ensure that data is readily available on a socket when
we accept it, but a bug in the kernel ignores the timeout so the
socket can remain pending as long as the client does not talk.
Use with care.
This patch makes stats page about 30% smaller and
"CSS 2.1" + "HTML 4.01 Transitional" compliant.
There should be no visible differences.
Changes:
- add DOCTYPE for HTML 4.01 Transitional
- add missing </ul>
- remove cols=, AFAIK no modern browser supports this property and
it prevents validation from passing.
- remove "align: center": there is no such property in css. There is
however "text-align: center" but it is definitely not what we would
like to see here.
- by default align .titre to center
- by default align .td to right
- remove all align=right, no longer necessary
- add class=ac (align center): shorter than "align=center" and use it when
necessary
- remove nowrap from td, instead use "white-space: nowrap" in css
Now stats page passes W3C validators for HTML & CSS. We may consider adding
"validated" icons from www.w3.org. ;)
This alone makes a typical HTML stats dump consume 10% CPU less,
because we avoid doing complex printf calls to drop them later.
Only a few common cases have been checked, those which are very
likely to run for nothing.
It is a bit expensive and complex to call buffer_feed()
directly from the request parser, and there are risks that some
output messages are lost in case of buffer full. Since most of
these messages are static, let's have a state dedicated to print
these messages and store them in a specific area shared with the
stats in the session. This both reduces code size and risks of
losing output data.
Krzysztof reported that using names only for get weight/set weight
was not enough because it's still possible to have multiple servers
with the same name (and my test config is one of those). He suggested
being able to designate them by their unique numeric IDs, prefixing
the ID with a '#' sign.
That way we can have :
get weight #120/#2
as well as
set weight static/srv1 10
Capture & display more data from health checks, like
strerror(errno) for failed L4 checks or the first line of
a response for successful/failed L7 checks.
Non-ASCII and control characters are masked with
chunk_htmlencode() (html stats) or chunk_asciiencode() (logs).
Add two functions to encode an input chunk, replacing
non-printable, non-ASCII or special characters
with:
"&#%u;" - chunk_htmlencode
"<%02X>" - chunk_asciiencode
The above functions should be used when adding strings received
from possibly unsafe sources to html stats or logs.
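The logs variant can be pictured like this (a simplified sketch of the
idea, not the exact chunk_asciiencode() implementation) :

    #include <stdio.h>

    /* copy <len> bytes of <in> into <out> (of size <size>), masking every
     * non-printable or non-ASCII byte as "<XX>" */
    static int ascii_encode_sketch(const unsigned char *in, int len,
                                   char *out, int size)
    {
        int i, n = 0;

        for (i = 0; i < len; i++) {
            unsigned char c = in[i];

            if (c >= 0x20 && c < 0x7f && c != '<' && c != '>') {
                if (n + 1 >= size)
                    break;
                out[n++] = c;
            } else {
                if (n + 4 >= size)
                    break;               /* no room for "<XX>" */
                n += sprintf(out + n, "<%02X>", c);
            }
        }
        out[n] = 0;
        return n;
    }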
int get_backend_server(const char *bk_name, const char *sv_name,
struct proxy **bk, struct server **sv);
This function scans the list of backends and servers to retrieve the first
backend and the first server with the given names, and sets them in both
parameters. It returns zero if either is not found, non-zero otherwise,
and sets the pointers it did not find to NULL. If a NULL pointer is passed for the
backend, only the pointer to the server will be updated.
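Typical usage then looks like this (sketch) :

    struct proxy *px;
    struct server *sv;

    if (!get_backend_server("static", "srv1", &px, &sv)) {
        /* at least one was not found; the missing pointer is NULL */
    }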
The stats socket can now run at 3 different levels :
- user
- operator (default one)
- admin
These levels are used to restrict access to some information
and commands. Only the admin can clear all stats. A user can
neither clear anything nor access sensitive data such as sessions
or errors.
The most common use of "clear counters" should be to only clear
max values without affecting cumulated values, for instance,
after an incident. So we change "clear counters" to only clear
max values, and add "clear counters all" to clear all counters.
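For example, on the stats socket (socket path illustrative) :

    $ echo "clear counters" | socat stdio /var/run/haproxy.stat      # max values only
    $ echo "clear counters all" | socat stdio /var/run/haproxy.stat  # everything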
I noticed that in __eb32_insert, if the tree is empty
(root->b[EB_LEFT] == NULL), the node.bit is not defined.
However in __task_queue there are checks:
- if (last_timer->node.bit < 0)
- if (task->wq.node.bit < last_timer->node.bit)
which might rely upon an undefined value.
This is how I see it:
1. We insert eb32_node in an empty wait queue tree for a task (called by
process_runnable_tasks() ):
Inserting into empty wait queue &task->wq = 0x72a87c8, last_timer
pointer: (nil)
2. Then, we set the last timer to the same address:
Setting last_timer: (nil) to: 0x72a87c8
3. We get a new task to be inserted in the queue (again called by
process_runnable_tasks()) , before the __task_unlink_wq() is called for
the previous task.
4. At this point, we still have last_timer set to 0x72a87c8 , but since
it was inserted in an empty tree, it doesn't have node.bit and the
values above get dereferenced with undefined value.
The bug has no effect right now because the check for equality is still
made, so the next timer will still be queued at the right place anyway,
without any possible side-effect. But it's a pending bug waiting for a
small change somewhere to strike.
Iliya Polihronov
These ACLs are used to check the number of active connections on the
frontend, backend or in a backend's queue. The avg_queue returns the
average number of queued connections per server, and for this, divides
the total number of queued connections by the number of alive servers.
The dst_conn ACL has been slightly changed to more reflect its name and
original usage, which is to return the number of connections on the
destination address/port (the socket) and not the whole frontend.
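A hypothetical use (backend names invented) :

    frontend www
        # overflow to a spare farm when servers in "dynamic" queue up
        acl toobusy avg_queue(dynamic) gt 3
        use_backend spare if toobusy
        default_backend dynamic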
Consistent hashing provides some interesting advantages over common
hashing. It avoids full redistribution in case of a server failure,
or when expanding the farm. This has a cost however, the hashing is
far from being perfect, as we associate a server to a request by
searching the server with the closest key in a tree. Since servers
appear multiple times based on their weights, it is recommended to
use weights larger than approximately 10-20 in order to smooth
the distribution a bit.
In some cases, playing with weights will be the only solution to
make a server appear more often and increase chances of being picked,
so stats are very important with consistent hashing.
In order to indicate the type of hashing, use :
hash-type map-based (default, old one)
hash-type consistent (new one)
Consistent hashing can make sense in a cache farm, in order not
to redistribute everyone when a cache changes state. It could also
probably be used for long sessions such as terminal sessions, though
that has not been attempted yet.
More details on this method of hashing here :
http://www.spiteful.com/2008/03/17/programmers-toolbox-part-3-consistent-hashing/
Commit 404e8ab461 introduced
smart checking for stupid ACL typos. However, now haproxy shows
the warning even for valid ACLs, like this one:
acl Cookie-X-NoAccel hdr_reg(cookie) (^|\ |;)X-NoAccel=1(;|$)
If a frontend does not set 'option socket-stats', a 'clear counters'
on the stats socket could segfault because li->counters is NULL. The
correct fix is to check for NULL before as this is a valid situation.
Recent "struct chunk rework" introduced a NULL pointer dereference
and now haproxy segfaults if auth is required for stats but not found.
The reason is that size_t cannot store negative values, but current
code assumes that "len < 0" == uninitialized.
This patch fixes it.
There are a few remaining max values that need to move to counters.
Also, the counters are more often used than some config information,
so get them closer to the other useful struct members for better cache
efficiency.
Until now it was required that every custom ID was above 1000 in order to
avoid conflicts. Now we have the list of all assigned IDs and can automatically
pick the first unused one. This means that it is perfectly possible to interleave
automatic IDs with persistent IDs and the parser will automatically allocate
unused values starting with 1.
When a name or ID conflict is detected, it is sometimes useful to know
where the other one was declared. Now that we have this information,
report it in error messages.
This patch makes it possible to collect & provide separate statistics
for each socket. It can be very useful if you would like to distinguish
between traffic generated by local and remote users or between different
types of remote clients (peerings, domestic, foreign).
Currently no "Session rate" is supported, but adding it should be
possible if it proves useful.
Doing this, we can remove the last BF_HIJACK user and remove
produce_content(). s->data_source could also be removed but
it is currently used to detect if the stats or a server was
used.
The stats handler used to store internal states in s->ana_state. Now
we only rely on si->st0 in which we can store as many states as we
have possible outputs. This cleans up the stats code a lot and makes
it more maintainable. It has also reduced code size by a few hundred
bytes.
We can simplify the code in the stats functions using buffer_feed_chunk()
instead of buffer_write_chunk(). Let's start with this function. This
patch also fixed an issue where we could dump past the end of the capture
buffer if it is shorter than the captured request.
In old versions, before 1.3.16, we had to refresh the timeouts after
each call to process_session() because the stream socket handler did
not do it. Now that the sockets can exchange data for a long period
without calling process_session(), we can detect an old activity and
refresh a timeout long after the last activity, causing too late a
detection of some timeouts.
The fix simply consists in not checking for activity anymore in
stream_sock_data_finish() but only set a timeout if it was not
previously set.
Calling buffer_shutw() marks the buffer as closed but if it was already
closed in the other direction, the stream interface is not marked as
closed, causing infinite loops.
We took this opportunity to completely remove buffer_shutw() and buffer_shutr()
which have no reason to be used at all and which will always cause trouble
when directly called. The stats occurrence was the last one.
By default, when data is sent over a socket, both the write timeout and the
read timeout for that socket are refreshed, because we consider that there is
activity on that socket, and we have no other means of guessing if we should
receive data or not.
While this default behaviour is desirable for almost all applications, there
exists a situation where it is desirable to disable it, and only refresh the
read timeout if there are incoming data. This happens on sessions with large
timeouts and low amounts of exchanged data such as telnet sessions. If the
server suddenly disappears, the output data accumulates in the system's
socket buffers, both timeouts are correctly refreshed, and there is no way
to know the server does not receive them, so we don't timeout. However, when
the underlying protocol always echoes sent data, it would be enough by itself
to detect the issue using the read timeout. Note that this problem does not
happen with more verbose protocols because data won't accumulate long in the
socket buffers.
When this option is set on the frontend, it will disable read timeout updates
on data sent to the client. There probably is little use of this case. When
the option is set on the backend, it will disable read timeout updates on
data sent to the server. Doing so will typically break large HTTP posts from
slow lines, so use it with caution.
During troubleshooting, it's often useful to get the list of supported
pollers but until now it was required to have a working configuration
first. Since the pollers are known before main() is called, let's list
them with the build options.
The "static-rr" is just the old round-robin algorithm. It is still
in use when a hash algorithm is used and the data to hash is not
present, but it was impossible to configure it explicitly. This one
is cheaper in terms of CPU and supports unlimited numbers of servers,
so it makes sense to be able to use it.
LB algo macros were composed of the LB algo by itself without any indication
of the method to use to look up a server (the lb function itself). This
method was implied by the LB algo, which was not very convenient to add
more algorithms. Now we have several fields in the LB macros, some to
describe what to look for in the requests, some to describe how to transform
that (kind of algo) and some to describe what lookup function to use.
The next patch will make it possible to factor out some code for all algos
which rely on a map.
The lbprm structure has moved to backend.h, where it should be, and
all algo-specific types and declarations have moved to their specific
files. The proxy struct is now much more readable.
This patch implements "description" (proxy and global) and "node" (global)
options, removes "node-name" and adds "show-node" & "show-desc" options
for "stats". It also changes the way the header lines (with proxy name) and
the statistics are displayed, so stats no longer look so clumsy with very
long names.
Instead of "node-name" it is possible to use show-node/show-desc with
an optional parameter that overrides a default node/description.
backend cust-0045
# report specific values for this customer
stats show-node Europe
stats show-desc Master node for Europe, Asia, Africa
We need to remove hash map accesses out of backend.c if we want to
later support new hash methods. This patch separates the hash computation
method from the server lookup. It leaves the lookup function to lb_map.c
and calls it with the result of the hash.
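Schematically (illustrative code, with a trivial hash standing in for
the real one) :

    struct server;   /* opaque here */

    /* backend.c side : turn the input into a position-independent hash */
    static unsigned int hash_key(const unsigned char *key, int len)
    {
        unsigned int h = 0;

        while (len--)
            h = h * 31 + *key++;
        return h;
    }

    /* lb_map.c side : the map lookup only deals with a ready-made hash */
    static struct server *map_lookup_sketch(struct server **map,
                                            unsigned int map_size,
                                            unsigned int hash)
    {
        return map[hash % map_size];
    }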
It was becoming painful to have all the LB algos in backend.c.
Let's move them to their own files. A few hashing functions still
need to be broken in two parts, one for the contents and one for the
map position.
The code was duplicated several times, let's use
server_status_printf() instead.
text data bss dec hex filename
263504 5800 64224 333528 516d8 haproxy-old
262944 5800 64224 332968 514a8 haproxy-new
Depends on "struct chunk rework" and
"Health check reporting code rework + health logging, v3"
Check if rise/fall has an argument and it is > 0 or bad things may happen
in the health checks. ;)
Now it is verified and the code no longer allows for such a condition:
backend bad
(...)
server o-f0 192.168.129.27:80 check inter 4000 source 0.0.0.0 rise 0
server o-r0 192.168.129.27:80 check inter 4000 source 0.0.0.0 fall 0
server o-f1 192.168.129.27:80 check inter 4000 source 0.0.0.0 rise
server o-r1 192.168.129.27:80 check inter 4000 source 0.0.0.0 fall
[ALERT] 269/161830 (24136) : parsing [../git/haproxy.cfg:98]: 'rise' has to be > 0.
[ALERT] 269/161830 (24136) : parsing [../git/haproxy.cfg:99]: 'fall' has to be > 0.
[ALERT] 269/161830 (24136) : parsing [../git/haproxy.cfg:100]: 'rise' expects an integer argument.
[ALERT] 269/161830 (24136) : parsing [../git/haproxy.cfg:101]: 'fall' expects an integer argument.
Also add a missing newline in the custom id checking code.
This patch adds health logging so it is possible to check what
was happening before a crash. Failed health checks are logged if
the server is UP and successful health checks if the server is DOWN,
so the amount of additional information is limited.
I also reworked the code a little:
- check_status_description[] and check_status_info[] is now
joined into check_statuses[]
- set_server_check_status updates not only s->check_status and
s->check_duration but also s->result, making the code simpler
Changes in v3:
- for now calculate and use local versions of health/rise/fall/state,
it is a slow path, no harm should be done. One day we may centralize
processing of the checks and remove the duplicated code.
- also log checks that are restoring current state
- use "conditionally succeeded" for 404 with disable-on-404