This optimisation causes haproxy to time out requests that result
in two TCP packets, one packet containing the header, and one
packet containing the actual data. This is a very typical type
of response from a lot of servers.
[Willy: I suspect the fix might have an impact on the compression code
which I'm not sure completely handles calls with 0 bytes to forward]
Some old browsers that have a user-agent starting with "Mozilla/4" do
not support compression correctly, so disable compression for those.
Internet explorer 6 after Windows XP service pack 2, IE 7, and IE 8,
do however support compression and still have a user agent starting
with Mozilla/4, so we try to enable compression for those.
MSIE has a user-agent of this form:
Mozilla/4.0 (compatible; MSIE <version>; ...)
98% of MSIE 6 SP2 user agents start with
Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1
The remaining 2% have additional flags before "SV1".
This simplified matching, looking for "MSIE" at exactly position 25
and "SV1" at exactly position 51, gives a few false negatives, so sometimes
a compression opportunity is lost.
A test against 3 hours of traffic to around 3000 news sites worldwide
gives less than 0.007% (70ppm) missed compression opportunities.
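For illustration, a minimal sketch of such a fixed-position check (not the
actual haproxy code; the helper name and exact bound checks are illustrative):

    #include <string.h>

    /* Return non-zero when the user-agent looks like an old Mozilla/4
     * browser for which compression should stay disabled. */
    static int ua_is_old_mozilla4(const char *ua, int len)
    {
        if (len < 9 || memcmp(ua, "Mozilla/4", 9) != 0)
            return 0;                      /* not a Mozilla/4 agent at all */
        if (len >= 54 &&
            memcmp(ua + 25, "MSIE ", 5) == 0 &&
            memcmp(ua + 51, "SV1", 3) == 0)
            return 0;                      /* MSIE 6 SP2 or later: compression is fine */
        return 1;                          /* assume broken: disable compression */
    }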
Sample conversions rely on two alternative buffers which were previously
allocated as static bufs of size BUFSIZE. Now they're initialized to the
global buffer size. It was the same for HTTP authentication. Note that it
seems that none of them was prone to any mistake when dealing with the
buffer size, but better stay on the safe side by maintaining the old
assumption that a trash buffer is always "large enough".
The trash is used everywhere to store the results of temporary strings
built out of s(n)printf, or as a storage for a chunk when chunks are
needed.
Using global.tune.bufsize is not the most convenient thing either.
So let's replace trash with a chunk and directly use it as such. We can
then use trash.size as the natural way to get its size, and get rid of
many intermediary chunks that were previously used.
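As an illustration, a temporary string can now be built directly into the
trash chunk (sketch only; the chunk field names str/size/len are the usual
ones but may differ, and host/port are placeholder variables):

    /* build "host:port" into the global trash chunk */
    chunk_reset(&trash);
    trash.len = snprintf(trash.str, trash.size, "%s:%d", host, port);
    /* trash.size now replaces the old references to global.tune.bufsize */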
The patch is huge because it touches many areas but it makes the code
a lot more clear and even outlines places where trash was used without
being that obvious.
We don't want the lower layer to forward a close while we're compressing,
and we want the system to fuse outgoing TCP segments using MSG_MORE as
much as possible to save round trips that can emerge from sending short
packets with a PUSH flag.
A test on a remote busy DSL line, consisting in compressing a 100MB file
full of zeroes on the fly, only showed a transfer rate of a few kB/s due
to these round trips.
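For reference, a minimal sketch of the MSG_MORE hint mentioned above (plain
socket code, not the haproxy transport layer):

    #include <sys/socket.h>

    /* Send one block and tell the kernel whether more data will follow,
     * so that it can merge it into full segments instead of pushing a
     * short segment immediately. */
    static ssize_t send_block(int fd, const void *data, size_t len, int more)
    {
        return send(fd, data, len, more ? MSG_MORE : 0);
    }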
We will need to be able to switch server connections on a session and
to keep idle connections. In order to achieve this, the preliminary
requirement is that the connections can survive the session and be
detached from it.
Right now they're still allocated at exactly the same place, so when
there is a session, there are always 2 connections. We could soon
improve on this by allocating the outgoing connection only during a
connect().
This current patch touches a lot of code and intentionally does not
change any functionality. Performance tests show no regression (even
a very minor improvement). The doc has not yet been updated.
A number of older browsers have many issues with compressed contents. It
happens that all these older browsers announce themselves as "Mozilla/4"
and that despite not being all broken, the amount of working browsers
announcing themselves this way compared to all other ones is so tiny
that it's not worth wasting cycles trying to adapt to every specific
one.
So let's simply disable compression for these older browsers.
More information in this very detailed article:
http://zoompf.com/2012/02/lose-the-wait-http-compression
This commit introduces HTTP compression using the zlib library.
http_response_forward_body has been modified to call the compression
functions.
This feature includes 3 algorithms: identity, gzip and deflate:
* identity: this is mostly for debugging, and it was useful for
developing the compression feature. With Content-Length in input, it
makes each chunk from the data available in the current buffer.
With chunks in input, it rechunks: the output chunks will be
bigger or smaller depending on the size of the input chunk and the
size of the buffer. Identity does not apply any change to the data.
* gzip: same as identity, but applying a gzip compression. The data
are deflated using the Z_NO_FLUSH flag in zlib. When there is no more
data in the input buffer, it flushes the data in the output buffer
(Z_SYNC_FLUSH). At the end of data, when it receives the last chunk in
input, or when there is no more data to read, it writes the end of
data with Z_FINISH and the ending chunk.
* deflate: same as gzip, but with deflate algorithm and zlib format.
Note that this algorithm has ambiguous support on many browsers and
no support at all from recent ones. It is strongly recommended not
to use it for anything else than experimentation.
You can't choose the compression level at the moment, it will be set to
Z_BEST_SPEED (1), as tests have shown very little benefit in terms of
compression ratio when going above it for HTML contents, at the cost of
a massive CPU impact.
Compression will be activated depending on the Accept-Encoding request
header. With identity, this header is not taken into account.
To build HAProxy with zlib support, use USE_ZLIB=1 in the make
parameters.
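A possible build command and configuration (illustrative only; the
compression directives are named as documented for this feature):

    make TARGET=linux26 USE_ZLIB=1

    backend www
        compression algo gzip
        compression type text/html text/plain text/css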
This work was initially started by David Du Colombier at Exceliance.
This state's name is confusing as it is only used with chunked encoding
and makes newcomers think it's also related to the content-length. Let's
call it CHUNK_CRLF to clear any doubt on this.
These functions are not that long and the compiler inlines them well. Doing
so has sped up the chunked encoding parser by 41% !
Note that http_forward_trailers was also declared static because it's not
exported.
Commit ceb4ac9c states that IPv6 values are accepted by "hdr_ip" acl,
but the code didn't allow it. This patch provides the ability to accept IPv6
values.
Jaroslaw Bojar diagnosed an issue when haproxy switches to tunnel mode
after a transfer. The response data are sent with the MSG_MORE flag,
causing them to be needlessly queued in the kernel. In order to fix this,
we set the CF_NEVER_WAIT flag on the channels when switching to tunnel
mode.
One issue remained with client-side keep-alive : if the response is sent
before the end of the request, it suffers the same issue for the same
reason. This is easily addressed by setting the CF_SEND_DONTWAIT flag
on the channel when the response has been parsed and we're waiting for
the other side.
The same issue is present in 1.4 so the fix must be backported.
With this commit, we now separate the channel from the buffer. This will
allow us to replace buffers on the fly without touching the channel. Since
nobody is supposed to keep a reference to a buffer anymore, doing so is not
a problem and will also permit some copy-less data manipulation.
Interestingly, these changes have shown a 2% performance increase on some
workloads, probably due to a better cache placement of data.
These "buf" were confusing as they were really refering to channels. At
most places, a buffer was really all what was needed, so a struct buffer
was used instead. It is possible that the performance has slightly increased
by the removal of pointer offset in many pointer operations by directly
using the buffer pointer instead of the channel pointer.
Baptiste Assmann reported a bug causing a crash on recent versions when
sticking rules were set on layer 7 in a TCP proxy. The bug is easier to
reproduce with the "defer-accept" option on the "bind" line in order to
have some contents to parse when the connection is accepted. The issue
is that the acl_prefetch_http() function called from HTTP fetches relies
on hdr_idx to be preinitialized, which is not the case if there is no L7
ACL.
The solution consists in adding a new SMP_CAP_L7 flag to fetches to indicate
that they are expected to work on L7 data, so that the proxy knows that the
hdr_idx has to be initialized. This is already how ACL and HTTP mode are
handled.
The bug was present since 1.5-dev9.
While working on the changes required to make the health checks use the
new connections, it started to become obvious that some naming was not
logical at all in the connections. Specifically, it is not logical to
call the "data layer" the layer which is in charge for all the handshake
and which does not yet provide a data layer once established until a
session has allocated all the required buffers.
In fact, it's more a transport layer, which makes much more sense. The
transport layer offers a medium on which data can transit, and it offers
the functions to move these data when the upper layer requests this. And
it is the upper layer which iterates over the transport layer's functions
to move data which should be called the data layer.
The use case where it's obvious is with embryonic sessions : an incoming
SSL connection is accepted. Only the connection is allocated, not the
buffers nor stream interface, etc... The connection handles the SSL
handshake by itself. Once this handshake is complete, we can't use the
data functions because the buffers and stream interface are not there
yet. Hence we have to first call a specific function to complete the
session initialization, after which we'll be able to use the data
functions. This clearly proves that SSL here is only a transport layer
and that the stream interface constitutes the data layer.
A similar change will be performed to rename app_cb => data, but the
two could not be in the same commit for obvious reasons.
Since at least commit a458b679, msg->sov could become negative in
http_parse_chunk_size() if a chunk size wrapped around the buffer.
The effect is that at some point channel_forward() was called with
a negative size, causing all data to be transferred without being
analyzed anymore.
Since haproxy does not support keep-alive with the server yet, this
issue is not really noticeable, as the server closes the connection
in response. Still, when tunnel mode is used or when pretend-keepalive
is used, it is possible to see the problem.
This issue was reported and diagnosed by William Lallemand at
Exceliance.
Cyril Bonté reported a mangled debug output when an invalid request
was sent with a faulty request line. The reason was the use of the
msg->sl.rq.l offset which was not yet initialized in this case. So
we change the way to report such an error so that first we initialize
it to zero before parsing a message, then we use that to know whether
we can trust it or not. If it's still zero, then we display the whole
buffer, truncated by debug_hdr() to the first CR or LF character, which
results in the first line only.
The same operation was performed for the response, which was wrong too.
We need to have the source and destination addresses in the connection.
They were lying in the stream interface so let's move them. The flags
SI_FL_FROM_SET and SI_FL_TO_SET have been moved as well.
It's worth noting that tcp_connect_server() almost does not use the
stream interface anymore except for a few flags.
It has been identified that once we detach the connection from the SI,
it will probably be needed to keep a copy of the server-side addresses
in the SI just for logging purposes. This has not been implemented right
now though.
This is a massive rename of most functions which should make use of the
word "channel" instead of the word "buffer" in their names.
It concerns the following ones (new names) :
unsigned long long channel_forward(struct channel *buf, unsigned long long bytes);
static inline void channel_init(struct channel *buf)
static inline int channel_input_closed(struct channel *buf)
static inline int channel_output_closed(struct channel *buf)
static inline void channel_check_timeouts(struct channel *b)
static inline void channel_erase(struct channel *buf)
static inline void channel_shutr_now(struct channel *buf)
static inline void channel_shutw_now(struct channel *buf)
static inline void channel_abort(struct channel *buf)
static inline void channel_stop_hijacker(struct channel *buf)
static inline void channel_auto_connect(struct channel *buf)
static inline void channel_dont_connect(struct channel *buf)
static inline void channel_auto_close(struct channel *buf)
static inline void channel_dont_close(struct channel *buf)
static inline void channel_auto_read(struct channel *buf)
static inline void channel_dont_read(struct channel *buf)
unsigned long long channel_forward(struct channel *buf, unsigned long long bytes)
Some functions provided by channel.[ch] have kept their "buffer" name because
they are really designed to act on the buffer according to some information
gathered from the channel. They have been moved together to the same place in
the file for better readability but they were not changed at all.
The "buffer" memory pool was also renamed "channel".
Get rid of these confusing BF_* flags. Now channel naming should clearly
be used everywhere appropriate.
No code was changed, only a renaming was performed. The comments about
channel operations were updated.
These functions do not depend on the channel flags anymore thus they're
much better suited to be used on plain buffers. Move them from channel
to buffer.
This flag is quite complex to get right and updating it everywhere is a
major pain, especially since the buffer/channel split. This is the first
step of getting rid of it. Instead now it's dynamically computed whenever
needed.
This flag was very problematic because it was composite in that both changes
to the pipe or to the buffer had to cause this flag to be updated, which is
not always simple (eg: there may not even be a channel attached to a buffer
at all).
There were not that many users of this flag, mostly setters. So the flag got
replaced with a macro which reports whether the channel is empty or not, by
checking both the pipe and the buffer.
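A simplified sketch of the replacement test (the name and field layout are
illustrative, not the exact haproxy code):

    /* a channel holds no output data when both its buffer's output part
     * and its optional splice pipe are empty */
    static inline int channel_is_empty(const struct channel *chn)
    {
        return !chn->buf.o && !chn->pipe;
    }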
One part of the change is sensitive : the flag was also part of BF_MASK_STATIC,
which is used by process_session() to rescan all analysers in case the flag's
status changes. At first glance, none of the analysers seems to change its
mind based on this flag when it is subject to change, so it seems fine not to
add variation checks here. Otherwise it's possible that checking the buffer's
output size is more useful than checking the flag's replacement.
The "raw_sock" prefix will be more convenient for naming functions as
it will be prefixed with the data layer and suffixed with the data
direction. So let's rename the files now to avoid any further confusion.
The #include directive was also removed from a number of files which do
not need it anymore.
At the moment, the struct is still embedded into the struct channel, but
all the functions have been updated to use struct buffer only when possible,
otherwise struct channel. Some functions would likely need to be split
between a buffer-layer primitive and a channel-layer function.
Later the buffer should become a pointer in the struct channel, but doing so
requires a few changes to the buffer allocation calls.
This is a massive rename. We'll then split channel and buffer.
This change needs a lot of cleanups. At many locations, the parameter
or variable is still called "buf" which will become ambiguous. Also,
the "struct channel" is still defined in buffers.h.
fdtab[].state was only used to know whether a connection was in progress
or an error was encountered. Instead we now use connection->flags to store
a flag for both. This way, connection management will be able to update the
connection status on I/O.
Commit fa7e1025 (1.3.16-rc1) introduced a minor bug by comparing req->flags
with BF_READ_ERROR instead of checking for the bit. The result is that the
error message is always returned even in case of client error. This has no
real impact but this must be fixed.
It may be backported to 1.4 and 1.3.
This one returns the concatenation of the first Host header entry with
the path. It can make content-switching rules easier, help with fighting
DDoS on certain URLs and improve shared caches efficiency.
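As an illustration, assuming the fetch is exposed under the name "base"
(Host concatenated with the path), a content-switching rule may look like
this (backend and hostname are placeholders):

    acl is_blog base_beg www.example.com/blog
    use_backend blog if is_blog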
Doing so allows us to support sticking on URL, URL's IP, URL's port and
path.
Both fetch functions should be improved to support an optional depth
allowing to stick to a server depending on just a few directory
components. This would help with portals, some prefetch-capable
caches and with outgoing connections using multiple internet links.
Commit 827aee91 merged in 1.5-dev5 introduced a regression causing
the srv pointer to be tested twice instead of srv then srv->cookie.
The result is that if a server has no cookie in prefix mode, haproxy
will crash when trying to modify it.
Such a config is very unlikely to happen, except maybe with a backup
server, which would cause haproxy to die with the last server in the
farm.
No backport is needed, only 1.5-dev was affected.
It was not possible to kill remaining sessions from the admin interface,
which is annoying especially when switching to maintenance mode. Now it's
possible.
httponly This option tells haproxy to add an "HttpOnly" cookie attribute
when a cookie is inserted. This attribute is used so that a
user agent doesn't share the cookie with non-HTTP components.
Please check RFC6265 for more information on this attribute.
secure This option tells haproxy to add a "Secure" cookie attribute when
a cookie is inserted. This attribute is used so that a user agent
never emits this cookie over non-secure channels, which means
that a cookie learned with this flag will be presented only over
SSL/TLS connections. Please check RFC6265 for more information on
this attribute.
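Example configuration making use of both new attributes (server names and
addresses are placeholders):

    backend secure_app
        cookie SERVERID insert indirect nocache httponly secure
        server app1 192.168.1.10:8080 cookie a1
        server app2 192.168.1.11:8080 cookie a2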
Cookies were mixed with many other options while they're not used as options.
Move them to a dedicated bitmask (ck_opts). This has released 7 flags in the
proxy options and leaves some room for new proxy flags.
Commit d1de8af362 was incomplete, because
perform_http_redirect() also needs to rewind the buffer since it's called
after data are scheduled for forwarding.
No backport needed.
When "option forwardfor" is enabled in a frontend that uses backends,
"if-none" ignores the header name provided in the frontend.
This prevents haproxy from adding the X-Forwarded-For header if the option is not
used in the backend.
This may introduce security issues for servers/applications that rely on the
header provided by haproxy.
A minimal configuration which can reproduce the bug:
defaults
    mode http

listen OK
    bind :9000
    option forwardfor if-none
    server s1 127.0.0.1:80

listen BUG-frontend
    bind :9001
    option forwardfor if-none
    default_backend BUG-backend

backend BUG-backend
    server s1 127.0.0.1:80
The state and the private pointer are not specific to the applets, since SSL
will require exactly both of them. Move them to the connection layer now and
rename them. We also now ensure that both are NULL on first call.
We start to move everything needed to manage a connection to a special
entity "struct connection". We have the data layer operations and the
control operations there. We'll also have more info in the future such
as file descriptors and applet contexts, so that in the end it becomes
detachable from the stream interface, which will allow connections to
be reused between sessions.
For now, we start with minimal changes.
msg->som was zero before the body and was used to carry the beginning
of a chunk size for chunked-encoded messages, at a moment when msg->sol
is always zero.
Remove msg->som and replace it with msg->sol where needed.
Since the recent buffer reorg, msg->som is redundant with buf->p but still
appears at a number of places. This tiny patch allows to confirm that som
follows two states :
- 0 from the moment the message starts to be parsed
- relative offset to ->p for start of chunk when parsing chunks
During this second state, ->sol is never used, so we should probably merge
the two.
This is a left-over from the buffer changes. Msg->sol is always null at the
end of the parsing, so we must not use it anymore to read headers or find
the beginning of a message. As a side effect, the dump of the request in
debug mode is working again because it was relying on msg->sol not being
null.
Maybe it will even be mergeable with another of the message pointers.
The recent split between the buffers and HTTP messages in 1.5-dev9 caused
a major trouble : in the past, we used to keep a pointer to HTTP data in the
buffer struct itself, which was the cause of most of the pain we had to deal
with buffers.
Now the two are split but we lost the information about the beginning of
the HTTP message once it's being forwarded. While it seems normal, it happens
that several parts of the code currently rely on this ability to inspect a
buffer containing old contents :
- balance uri
- balance url_param
- balance url_param check_post
- balance hdr()
- balance rdp-cookie()
- http-send-name-header
All these happen after the data are scheduled for being forwarded, which
also causes a server to be selected. So for a long time we've been relying
on supposedly sent data that we still had a pointer to.
Now that we don't have such a pointer anymore, we only have one possibility :
when we need to inspect such data, we have to rewind the buffer so that ->p
points to where it previously was. We're lucky, no data can leave the buffer
before the outgoing connection is established, and since no inspection can
begin until it is empty, we know that the skipped data are exactly ->o. So we
rewind the
buffer by ->o to get headers and advance it back by the same amount.
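A sketch of the rewind/advance sequence described above (the helper names
follow the buffer API of this period and are shown for illustration only):

    static void inspect_sent_headers(struct buffer *buf)
    {
        unsigned int rewind = buf->o;  /* bytes already scheduled for forwarding */

        b_rew(buf, rewind);            /* bring ->p back to the start of the headers */
        /* ... look at the request headers here ... */
        b_adv(buf, rewind);            /* restore the forwarding position */
    }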
Proceeding this way is particularly important when dealing with chunked-
encoded requests, because the ->som and ->sov fields may be reused by the
chunk parser before the connection attempt is made, so we cannot rely on
them.
Also, we need to be able to come back after retries and redispatches, which
might change the size of the request if http-send-name-header is set. All of
this is accounted for by the output queue so in the end it does not look like
a bad solution.
No backport is needed.
Before it was possible to resize the buffers using global.tune.bufsize,
the trash has always been the size of a buffer by design. Unfortunately,
the recent buffer sizing at runtime forgot to adjust the trash, resulting
in it being too short for content rewriting if buffers were enlarged from
the default value.
The bug was encountered in 1.4 so the fix must be backported there.
This flag indicates that we're not interested in keeping half-open
connections on a stream interface. It has the benefit of allowing
the socket layer to cause an immediate write close when detecting
an incoming read close. This releases resources much faster and
saves one syscall (either a shutdown or setsockopt).
This flag is only set by HTTP on the interface going to the server
since we don't want to continue pushing data there when it has
closed.
Another benefit is that it responds with a FIN to a server's FIN
instead of responding with an RST as it used to, which is much
cleaner.
Performance gains of 7.5% have been measured on HTTP connection
rate on empty objects.
Commit 5e205524 was a bit overzealous by unconditionally enabling
quick ack when a request is not yet in the buffer, because it also
does so when nothing has been received yet, causing a useless ACK
to be emitted.
Improve the situation by not doing this when the input buffer is
empty (indicating that nothing was sent by the client).
In case of keep-alive, an empty buffer means we already have a
response in flight which will serve as an ACK.
Commit e164e7a removed get_src/get_dst setting in the stream interfaces but
forgot to set it in proto_tcp. Get the feature back because we need it for
logging, transparent mode, ACLs etc... We now rely on the stream interface
direction to know what syscall to use.
One benefit of doing it this way is that we don't use getsockopt() anymore
on outgoing stream interfaces nor on UNIX sockets.
We'll soon have an SSL socket layer, and in order to ease the difference
between the two, we use the name "sock_raw" to designate the one which
directly talks to the sockets without any conversion.
http_auth and http_auth_group used to share the same fetch function, while
they're doing very different things. The first one only checks whether the
supplied credentials are valid wrt a userlist while the second not only
checks this but also checks group ownership from a list of patterns.
Recent acl/pattern merge caused a simplification here by which the fetch
function would always return a boolean, so the group match was always fine
if the user:password was valid, regardless of the patterns provided with
the ACL.
The proper solution consists in splitting the function in two, depending
on what is desired.
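An illustrative configuration showing the difference between the two
(userlist, user and group names are placeholders):

    userlist mgmt
        group adm
        user admin insecure-password secret groups adm

    # valid credentials against the userlist are enough
    acl auth_ok http_auth(mgmt)
    # valid credentials AND membership of group "adm" are required
    acl is_adm  http_auth_group(mgmt) adm

    http-request auth realm mgmt if !auth_ok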
It's also worth noting that check_user() would probably be split, one to
check user:password, and the other one to check for group ownership for
an already valid user:password combination. At this point it is not certain
if the group mask is still useful or not considering that the passwd check
is always made.
This bug was reported and diagnosed by Cyril Bonté. It first appeared
in 1.5-dev9 so it does not need any backporting.
I introduced a regression in commit 19979e176e while reworking the admin
actions results.
"Unexpected result" was displayed even if the action was applied due to a
misplaced initialization. This small patch should fix it.
Note: no need to backport.
There is no more reason for the realign function being HTTP specific,
it only operates on a buffer now. Let's move it to buffers.c instead.
It's likely that buffer_bounce_realign is broken (not used), this will
have to be inspected. The function is worth rewriting as it can be
cheaper than buffer_slow_realign() to realign large wrapping buffers.
A number of important pieces of information were missing from the error captures, so
let's improve them. Now we also log source port, session flags, transaction
flags, message flags, pending output bytes, expected buffer wrapping position,
total bytes transferred, message chunk length, and message body length.
As such, the output format has slightly evolved and the source address moved
to the third line :
[08/May/2012:11:14:36.341] frontend echo (#1): invalid request
backend echo (#1), server <NONE> (#-1), event #1
src 127.0.0.1:40616, session #4, session flags 0x00000000
HTTP msg state 26, msg flags 0x00000000, tx flags 0x00000000
HTTP chunk len 0 bytes, HTTP body len 0 bytes
buffer flags 0x00909002, out 0 bytes, total 28 bytes
pending 28 bytes, wrapping at 8030, error at position 7:
00000 GET / /?t=20000 HTTP/1.1\r\n
00026 \r\n
[08/May/2012:11:13:13.426] backend echo (#1) : invalid response
frontend echo (#1), server local (#1), event #0
src 127.0.0.1:40615, session #1, session flags 0x0000044e
HTTP msg state 32, msg flags 0x0000000e, tx flags 0x08200000
HTTP chunk len 0 bytes, HTTP body len 20 bytes
buffer flags 0x00008002, out 81 bytes, total 92 bytes
pending 11 bytes, wrapping at 7949, error at position 9:
00000 Foo: bar\r\r\n
Since the beginning of buffer&msg changes, the error position (err_pos)
had not completely been converted and some offsets still appear wrong.
Now we ensure that everywhere msg->err_pos is relative to buf->p and
we always report buf->i bytes starting at buf->p in all error captures,
which ensures that err_pos is there.
This is not exactly a bug and is specific to latest changes so no backport
is needed.
Commit 81f2fb added support for wrapping buffer captures, but unfortunately
the code used to perform two memcpy() over the same destination, causing a
loss of the start of the buffer rendering some error snapshots unusable.
This bug is present in 1.4 too and must be backported.
These operators are used regardless of the socket protocol family. Move
them to a "sock_ops" struct. ->read and ->write have been moved there too
as they have no reason to remain at the protocol level.
This is mainly a massive renaming in the code to get it in line with the
calling convention. Next patch will rename a few files to complete this
operation.
All parsing errors were known but impossible to return. Now by making use
of memprintf(), we're able to build meaningful error messages that the
caller can display.
It's easy to merge pattern and ACL fetches of cookies. It allows us
to remove two distinct fetch functions. The new function internally
uses an occurrence number to serve both purposes, but it didn't appear
worth exposing it outside so there is no keyword argument to set it.
However one of the benefits is that the "cookie" fetch for stick tables
now automatically adapts to requests and responses, so there is no more
need for set-cookie().
HTTP header fetch is now done using smp_fetch_hdr() for both ACLs and
patterns. This one also supports an occurrence number, making it possible
to specify explicit occurrences for ACLs and patterns.
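For example, explicit occurrences can now be used in both contexts (header
name and networks are illustrative):

    # second occurrence of X-Forwarded-For
    acl via_proxy2 hdr_ip(x-forwarded-for,2) 10.0.0.0/8
    # last occurrence
    acl via_last   hdr_ip(x-forwarded-for,-1) 192.168.0.0/16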
src_port, dst_port and url_param have converged between ACLs and patterns.
This means that src_port is now available in patterns and that urlp_* has
been added to ACLs. Some code has moved to accommodate static function
definitions, but there were few changes.
Patterns were using a bitmask to indicate if request or response was desired
in fetch functions and keywords. ACLs were using a bitmask in fetch keywords
and a single bit in fetch functions. ACLs were also using an ACL_PARTIAL bit
in fetch functions indicating that a non-final fetch was performed, which was
an abuse of the existing direction flag.
The change now consists in using :
- a capabilities field for fetch keywords => SMP_CAP_REQ/RES to indicate
if a keyword supports requests, responses, both, etc...
- an option field for fetch functions to indicate what the caller expects
(request/response, final/non-final)
The ACL_PARTIAL bit was reversed to get SMP_OPT_FINAL as it's more explicit
to know we're working on a final buffer than on a non-final one.
ACL_DIR_* were removed, as well as PATTERN_FETCH_*. L4 fetches were improved
to support being called on responses too since they're still available.
The <dir> field of all fetch functions was changed to <opt> which is now
unsigned.
The patch is large but mostly made of cosmetic changes to accommodate this, as
almost no logic change happened.
Having the args everywhere will make it easier to share fetch functions
between patterns and ACLs. The only place where we could have needed
the expr was in the http_prefetch function which can do well without.
We need the pattern fetchers and converters to correctly set the output type
so that they can be used by ACL fetchers. By using the sample type instead of
the keyword type, we also open the possibility to create some multi-type
pattern fetch methods later (eg: "src" being v4/v6). Right now the type in
the keyword is used to validate the configuration.
Now there is no more reference to union pattern_data. All pattern fetch and
conversion functions now make use of the common sample type. Note: none of
them adjust the type right now so it's important to do it next otherwise
we would risk sharing such functions with ACLs and seeing them fail.
These ones were either unused or improperly used. Some integers were marked
read-only, which does not make much sense. Buffers are not read-only, they're
"constant" in that they must be kept intact after any possible change.
This one is not needed anymore as we can return the data and its type in the
sample provided by the caller. ACLs now always return the proper type. BOOL
is already returned when the result is expected to be processed as a boolean.
temp_pattern has been unexported now.
The new sample types are necessary for the acl-pattern convergence.
These types are boolean and signed int. Some types were renamed for
less ambiguity (ip->ipv4, integer->uint).
This flag was used to force a boolean match even if there was no pattern
to match. It was used only by http_auth() and designed only for this one.
It's easier and cleaner to make the fetch function perform the test and
report the boolean result as a few other functions already do. It simplifies
the acl_exec_cond() logic and will help merging ACLs and patterns.
This is used to validate that arguments are coherent. For instance,
payload_lv expects that the last arg (if any) is not more negative
than the sum of the first two. The error is reported if any.
We don't need the pattern-specific args parsers anymore, make use of the
common parser instead. We still need to improve this by adding a validation
function to report abnormal argument values or combinations. We don't report
precise parsing errors yet but this was not previously done either.
arg_i was almost unused, and since we migrated to use struct arg everywhere,
the rare cases where arg_i was needed could be replaced by switching to
arg->type = ARGT_STOP.
The types and minimal number of ACL keyword arguments are now stored in
their declaration. This will allow many more fantasies if some ACL use
several arguments or types.
Doing so required to rework all ACL keyword declarations to add two
parameters. So this was a good opportunity for a general cleanup and
to sort all entries in alphabetical order.
We still have two pending issues :
- parse_acl_expr() checks for errors but has no way to report them to
the user ;
- the types of some arguments are still not resolved and kept as strings
(eg: ARGT_FE/BE/TAB) for compatibility reasons, which must be resolved
in acl_find_targets()
The ACL parser now uses the argument parser to build a typed argument list.
Right now arguments are all strings and only one argument is supported since
this is what ACLs currently support.
Latest changes have made it possible to remove all differences between
request and response processing, making it worth merging request and
response ACL fetch functions to reduce code size.
Most likely with minor adaptation it will be possible to use the same hdr_*
functions to match in the response path, and cook_* for the response cookie
too.
ACLs are volatile since they require a fetch of request buffer data which is
then copied to a temporary shared place. The issue is minor though since auth
is generally checked very early.
All ACLs which need to process HTTP contents first call this function which
performs all the preliminary tests and also triggers the request parsing if
needed. A macro was written to simplify the code.
As a side effect, it's not required anymore to check for the HTTP ACL before
checking for HTTP contents.
This function will be called by all ACL fetch functions. Right now all ACL
fetch functions have to perform the exact same tests to check whether data
are available. Also, only one of them is able to actually parse an HTTP
request.
Using the prefetch function, it will be possible to try to parse a request
on the fly and to avoid the fetch if some data are missing. This will
significantly reduce the amount of tests in all ACL fetch functions.
buffer_wrap_add was convenient for the migration but is not handy at all.
Let's have new wrappers that report input begin/end and output begin/end
instead.
It looks like we'll also need a b_adv(ofs) to advance a buffer's pointer.
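A simplified sketch of what such wrappers look like (wrapping is ignored
here and the struct is reduced to the fields involved):

    struct buffer { char *p; int i; int o; }; /* p: split point, i: input bytes, o: output bytes */

    static inline char *bo_ptr(const struct buffer *b) { return b->p - b->o; } /* output begin */
    static inline char *bo_end(const struct buffer *b) { return b->p; }        /* output end   */
    static inline char *bi_ptr(const struct buffer *b) { return b->p; }        /* input begin  */
    static inline char *bi_end(const struct buffer *b) { return b->p + b->i; } /* input end    */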
buffer_ignore may only be used when the output of a buffer is empty,
but it is not guaranteed to always be the case when sending HTTP error
responses. Better use buffer_cut_tail() instead, and use buffer_ignore
only on non-wrapping data.
The buffer pointer is now taken from the http_msg in the following
functions :
http_parse_chunk_size
http_forward_trailers
http_skip_chunk_crlf
Most internal pointers were converted to const as the result of the
operation.
The buffer pointer is now taken from the http_msg in the following functions :
- http_remove_header2
- http_header_add_tail
- http_header_add_tail2
- http_parse_connection_header
- http_change_connection_header
msg->sol is now a relative pointer just like all other ones. There is no
more absolute references to the buffer outside the struct buffer itself.
Next two cleanups should include removing buffer references to functions
which already have an msg, and removal of wrapping detection in request
and response parsing which cannot wrap by definition.
ACLs and patterns only rely on a struct http_msg and don't know the pointer
to the actual data. struct http_msg will soon only hold relative references
so that's not possible. We need http_msg to hold a reference to the struct
buffer before having relative pointers everywhere.
It is likely that doing so will also result in opportunities to simplify
a number of functions arguments. The following functions are already
candidate :
http_buffer_heavy_realign
http_capture_bad_message
http_change_connection_header
http_forward_trailers
http_header_add_tail
http_header_add_tail2
http_msg_analyzer
http_parse_chunk_size
http_parse_connection_header
http_remove_header2
http_send_name_header
http_skip_chunk_crlf
http_upgrade_v09_to_v10
These offsets were relative to the buffer itself. Now they're relative to
the buffer's origin (buf->p) which normally corresponds to the start of
current message.
This saves a big dependency between the HTTP message struct and the buffers.
It appeared during this change that ->col is not used anymore (it will have
to be removed). Next step is to turn ->eol and ->sol from absolute to relative.
The buffer's pointer <lr> was only used by HTTP parsers which also use a
struct http_msg to keep track of the parser's state. We've reached a point
where it makes no sense to keep ->lr in the buffer, as the split between
buffer and msg is only arbitrary for historical reasons.
This change ensures that touching buffers will not impact HTTP messages
anymore, making the buffers more content-agnostic. However, it becomes
very important not to forget to update msg->next when some data get
forwarded or moved (and in general each time buf->p is updated).
The new pointer in http_msg becomes relative to buffer->p so that
parsing multiple messages becomes easier. It is possible that at one
point ->som and ->next will be merged.
Note: http_parse_reqline() and http_parse_stsline() have been temporarily
modified to know the message starting point in the buffer (->p).
This change gets rid of buf->r which is always equal to buf->p + buf->i.
It removed some wrapping detection at a number of places, but required addition
of new relative offset computations at other locations. A large number of places
can be simplified now with extreme care, since most of the time, either the
pointer has to be computed once or we need a difference between the old ->w and
old ->r to compute free space. The cleanup will probably happen with the rewrite
of the buffer_input_* and buffer_output_* functions anyway.
buf->lr still has to move to the struct http_msg and be relative to buf->p
for the rework to be complete.
This change introduces the buffer's base pointer, which is the limit between
incoming and outgoing data. It's the point where the parsing should start
from. A number of computations have already been greatly simplified, but
more simplifications are expected to come from the removal of buf->r.
The changes appear good and have revealed occasional improper use of some
pointers. It is possible that this patch has introduced bugs or revealed
some, although preliminary testings tend to indicate that everything still
works as it should.
We don't have buf->l anymore. We have buf->i for pending data and
the total length is retrieved by adding buf->o. Some computations
already become simpler.
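For instance, the total amount of data in a buffer now simply derives from
the two fields (illustrative helper, not necessarily the final name):

    static inline int buffer_total_data(const struct buffer *buf)
    {
        return buf->i + buf->o; /* pending input + data scheduled for forwarding */
    }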
Despite extreme care, bugs are not excluded.
It's worth noting that msg->err_pos as set by HTTP request/response
analysers becomes relative to pending data and not to the beginning
of the buffer. This has not been completed yet so differences might
occur when outgoing data are left in the buffer.
Too many flags are stored in the transaction structure. Some flags are
clearly message-specific and exist in two versions (request and response).
Move them to a new "flags" field in the http_message struct instead.
There were a few unchecked write() calls in the debug code that cause
gcc 4.x to emit warnings on recent libc. We don't want to check them
as we can't make anything from the result, let's simply surround them
with an empty if statement.
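The workaround looks like this (sketch; the fd and message are placeholders):

    #include <unistd.h>

    static void debug_write(int fd, const void *msg, size_t len)
    {
        if (write(fd, msg, len) < 0) {
            /* nothing useful can be done with the error here; the empty
             * if only silences gcc's warn_unused_result warning */
        }
    }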
Note that one of the warnings was for chdir("/") which normally cannot
fail since it follows a successful chroot (which means the perms are
necessarily there). Anyway let's move the call up to protect it too.
The issue only happens when DEBUG_FULL is enabled, which causes
http_msg_analyzer() to complain if it's called twice with an invalid
message, for instance because of two consecutive ACLs using req_proto_http.
The code is commented out when DEBUG_FULL is disabled, so this is not a bug,
just an annoyance for the developer.
The three warnings below are totally wrong since the variables depend on another
one which is only turned on when the variables are initialized. Still this gcc-4.1.2
isn't able to see this and prefers to complain wrongly. So let's initialize the
variables to shut it up since we're not in the fast path.
src/proto_http.c: In function 'acl_fetch_any_cookie_cnt':
src/proto_http.c:8393: warning: 'val_end' may be used uninitialized in this function
src/proto_http.c: In function 'http_process_req_stat_post':
src/proto_http.c:2577: warning: 'st_next_param' may be used uninitialized in this function
src/proto_http.c:2577: warning: 'st_cur_param' may be used uninitialized in this function
It's very annoying that we have to deal with the crappy size_t and with ints
at some places because these ones don't mix well. Patch 6f61b2 changed the
chunk len to int but its size remains size_t and some functions are having
trouble being used by several callers depending on the type of their arguments.
Let's turn extract_cookie_value() to int for now, and plan a massive cleanup
later to remove all size_t.
These callbacks are used to retrieve the source and destination address
of a socket. The address flags are no longer held on the stream interface
nor on the session. The addresses are collected when needed.
This still needs to be improved to store the IP and port separately so
that it is not needed to perform a getsockname() when only the IP address
is desired for outgoing traffic.
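A sketch of the principle (not the haproxy callbacks themselves): the
syscall to use only depends on whether the connection was accepted or is
outgoing:

    #include <sys/socket.h>

    /* source address: the peer for an accepted connection, the local
     * address for an outgoing one */
    static int fetch_src(int fd, int outgoing, struct sockaddr *sa, socklen_t *len)
    {
        return outgoing ? getsockname(fd, sa, len) : getpeername(fd, sa, len);
    }

    /* destination address: the other way around */
    static int fetch_dst(int fd, int outgoing, struct sockaddr *sa, socklen_t *len)
    {
        return outgoing ? getpeername(fd, sa, len) : getsockname(fd, sa, len);
    }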
The unique ID is an ID generated from several pieces of information. You can
use a log-format string to customize it with the "unique-id-format" keyword,
and insert it into a request header with the "unique-id-header" keyword.
%Fi: Frontend IP
%Fp: Frontend Port
%Si: Server IP
%Sp: Server Port
%Ts: Timestamp
%rt: HTTP request counter
%H: hostname
%pid: PID
+X: Hexadecimal representation
The +X mode in logformat displays hexadecimal for the following flags
%Ci %Cp %Fi %Fp %Bi %Bp %Si %Sp %Ts %ct %pid
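An illustrative configuration using only the variables listed above (the
format itself is free, the header name is arbitrary):

    unique-id-format %Ci:%Cp_%Fi:%Fp_%Ts_%rt:%pid
    unique-id-header X-Unique-ID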
rename logformat_write_string() to lf_text()
Optimize size computation
The ACL matches rely on the extract_cookie_value() function as used for
patterns. This permits ACLs to match cookie values based on the cookie
name instead of having to perform substring matching on the cookie header.
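Example, assuming the request-cookie ACL is exposed as "cook(<name>)"
(cookie value and backend name are placeholders):

    acl srv1_cookie cook(SERVERID) srv1
    use_backend bk_srv1 if srv1_cookie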
Sometimes it is desirable to forward a particular request to a specific
server without having to declare a dedicated backend for this server. This
can be achieved using the "use-server" rules. These rules are evaluated after
the "redirect" rules and before evaluating cookies, and they have precedence
on them. There may be as many "use-server" rules as desired. All of these
rules are evaluated in their declaration order, and the first one which
matches will assign the server.
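Example (backend, ACL and server names are placeholders):

    backend www
        acl static_req path_beg /static/
        use-server srv_static if static_req
        server srv_static 192.168.0.10:80
        server srv_dyn1   192.168.0.11:80
        server srv_dyn2   192.168.0.12:80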
memcmp()/strcmp() calls were needed in different parts of code to determine
the status code. Each new status code introduces new calls, which can become
inefficient and source of bugs.
This patch reorganizes the code to rely on a numeric status code internally
and to be hopefully more generic.
Previously, the stats admin page required POST parameters to be provided
exactly in the same order as the HTML form.
This patch allows those parameters to be handled in any order.
Also, note that haproxy won't alter server states anymore if backend or server
names are ambiguous (duplicated names in the configuration) to prevent
unexpected results (the same should probably be applied to the stats socket).
Olufemi Omojola provided a config and a core showing a possible crash
when captures are configured on a TCP-mode frontend which branches to
an HTTP backend. The reason is that being in TCP mode, the frontend
does not allocate capture pools for the request, but the HTTP backend
tries to use them and dies on the NULL.
While such a config has long been unlikely to happen, it looks like
people using websocket tend to do this more often now.
Change the control to use the pointer instead of the number of captures
to know when to log.
This bug was reported in 1.4.20, so it must be backported there.
Merge http_sess_log() and tcp_sess_log() to sess_log() and move it to
log.c
A new field in logformat_type defines whether a logformat
variable can be used in TCP or HTTP mode.
doc: log-format in tcp mode
Note that due to the way log buffer allocation currently works, trying to
log an HTTP request without "option httplog" is still not possible. This
will change in the near future.
Commits 5c6209 and 072930 were aimed at avoiding undesirable PUSH flags
when forwarding chunked data, but had the undesired effect of causing
data advertised by content-length to be affected by the delayed ACK too.
This can happen when the data to be forwarded are small enough to fit into
a single send() call, otherwise the BF_EXPECT_MORE flag would be removed.
Content-length data don't need the BF_EXPECT_MORE flag since the low-level
forwarder already knows it can safely rely on bf->to_forward to set the
appropriate TCP flags.
Note that the issue is only observed in requests at the moment, though the
later introduction of server-side keep-alive could trigger the issue on the
response path too.
Special thanks to Randy Shults for reporting this issue with a lot of
details helping to reproduce it.
The fix must be backported to 1.4.
When a request completes on a server and the server connection is closed
while the client connection stays open, the HTTP engine releases all server
connection slots and scans the queues to offer the connection slot to
another pending request.
An issue happens when the released connection allows other requests to be
dequeued : may_dequeue_tasks() relies on srv->served which is only decremented
by sess_change_server() which itself is only called after may_dequeue_tasks().
This results in no connection being woken up until another connection terminates
so that may_dequeue_tasks() is called again.
This fix is minimalist and only moves sess_change_server() earlier (which is
safe). It should be reworked and the code factored out so that the same occurrence
in session.c shares the same code.
This bug has been there since the introduction of option-http-server-close and
the fix must be backported to 1.4.
Since commit 115acb97, chunk size was limited to 256MB. There is no reason for
such a limit and the comment on the code suggests a missing zero. However,
increasing the limit past 2 GB causes trouble due to some 32-bit subtracts
in various computations becoming negative (eg: buffer_max_len). So let's limit
the chunk size to 2 GB - 1 max.
commit a1cc3811 introduced an undesirable \0\n ending on HTTP log messages. This
is because of an extra character count passed to __send_log() which causes the LF
to be appended past the \0. Some syslog daemons thus log an extra empty line. The
fix is obvious. Fix the function comments to remind what they expect on their input.
This is past 1.5-dev7 regression so there's no backport needed.
http_sess_log() now uses the logformat linked list to build the log
string; snprintf() is no longer used, for speed reasons.
CLF mode also uses logformat.
NOTE: as of now, empty fields in CLF are "" and not "-" anymore.
Marcello Gorlani reported that commit 5e205524ad
(BUG: http: re-enable TCP quick-ack upon incomplete HTTP requests) broke build
on FreeBSD.
Moving the include lower fixes the issue. This must be backported to 1.4 too.
These ones are invalid and blocked unless "option accept-invalid-http-request"
is specified in the frontend. In any case, the faulty request is logged.
Note that some of the remaining invalid chars are still not checked against,
those are the invalid ones between 32 and 127 :
34 ('"'), 60 ('<'), 62 ('>'), 92 ('\'), 94 ('^'),
96 ('`'), 123 ('{'), 124 ('|'), 125 ('}')
Using a lookup table might be better at some point.
The HTTP request parser was considering that any non-LWS char was
part of the URI. Unfortunately, this allows control chars to be sent
in the URI, sometimes resulting in backend servers misbehaving, for
instance when they interpret \0 as an end of string and respond
with plain HTTP/0.9 without headers, which haproxy blocks as invalid
responses.
RFC3986 clearly states the list of allowed characters in a URI. Even
non-ASCII chars are not allowed. Unfortunately, after having run 10
years with these chars allowed, we can't block them right now without
an optional workaround. So the first step consists in only blocking
control chars. A later patch will allow non-ASCII only when an appropriate
option is enabled in the frontend.
Control chars are 0..31 and 127, with the exception of 9, 10 and 13
(\t, \n, \r).
New option "http-send-name-header" specifies the name of a header which
will hold the server name in outgoing requests. This is the name of the
server the connection is really sent to, which means that upon redispatches,
the header's value is updated so that it always matches the server's name.
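Example (the header name is arbitrary):

    backend app
        http-send-name-header X-Target-Server
        server app1 10.0.0.1:8080
        server app2 10.0.0.2:8080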
This pattern previously was limited to type IP. With the new header
extraction function, it becomes possible to extract strings, so that
the header can be returned as a string. This will not change anything
to existing configs, as string will automatically be converted to IP
when needed. However, new configs will be able to use IPv6 addresses
from headers in stick-tables, as well as stick on any non-IP header
(eg: host, user-agent, ...).
The new function does not return IP addresses but header values instead,
so that the caller is free to make what it want of them. The conversion
is not quite clean yet, as the previous test which considered that address
0.0.0.0 meant "no address" is still used. A different IP parsing function
should be used to take this into account.
Now strings and data blocks are stored in the temp_pattern's chunk
and matched against this one.
The rdp_cookie currently makes extensive use of acl_fetch_rdp_cookie()
and will be a good candidate for the initial rework so that ACLs use
the patterns framework and not the other way around.
IPv4 and IPv6 addresses are now stored into temp_pattern instead of
the dirty hack consisting into storing them into the consumer's target
address.
Some refactoring should now be possible since the methods used to fetch
source and destination addresses are similar between patterns and ACLs.
All ACL fetches which return integer value now store the result into
the temporary pattern struct. All ACL matches which rely on integer
also get their value there.
Note: the pattern data types are not set right now.
By default we disable TCP quick-acking on HTTP requests so that we
avoid sending a pure ACK immediately followed by the HTTP response.
However, if the client sends an incomplete request in a short packet,
its TCP stack might wait for this packet to be ACKed before sending
the rest of the request, delaying incoming requests by up to 40-200ms.
We can detect this undesirable situation when parsing the request :
- if an incomplete request is received
- if a full request is received and uses chunked encoding or advertises
a content-length larger than the data available in the buffer
In these situations, we re-enable TCP quick-ack if we had previously
disabled it.
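For reference, the knob involved is the Linux-specific TCP_QUICKACK socket
option (sketch, not the haproxy code):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    static void set_quickack(int fd, int enabled)
    {
        /* 0 while we expect to merge the ACK with the response,
         * 1 to ACK immediately so the client sends the rest */
        setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &enabled, sizeof(enabled));
    }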
This patch removes the limitation to two loggers.
Loggers are now stored in linked lists.
Using "log global", the content of the global loggers list is added at the
end of the current proxy's list. Each "log" entry is added at the end of
the proxy's list.
"no log" flushes the logger list.
Stream interfaces used to distinguish between client and server addresses
because they were previously of different types (sockaddr_storage for the
client, sockaddr_in for the server). This is not the case anymore, and this
distinction is confusing at best and has caused a number of regressions to
be introduced in the process of converting everything to full-ipv6. We can
now remove this and have a much cleaner code.
This patch introduces hdr_len, path_len and url_len for matching these
respective parts lengths against integers. This can be used to detect
abuse or empty headers.
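Illustrative uses (the thresholds are arbitrary):

    acl empty_ua hdr_len(user-agent) eq 0
    acl huge_url url_len gt 8192
    block if empty_ua or huge_url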
Commit 588bd4 fixed header parsing so that trailing spaces were not part
of the returned string. Unfortunately, if a header only had spaces, the
last spaces were trimmed past the beginning of the value, causing a negative
length to be returned.
A quick code review shows that there should be no impact since the only
places where the vlen is used are either compared to a specific value or
with explicit contents (eg: digits).
This must be backported to 1.4.
These requests are mainly monitor requests, as well as stats requests when
the stats are processed by the frontend. Having this counter helps explain
the difference in number of sessions that is sometimes observed between a
frontend and a backend.
Sometimes a bad content-length header is encountered and this causes
an abort. It's hard to debug without a trace, so let's take a capture
of the contents when this happens.
If a server starts to respond but stops before the body, then we
capture the truncated response. We don't do this on the request
because it would happen too often upon stupid attacks.
Trailing spaces after headers were not trimmed, only the leading ones
were. An issue was detected today with a content-length value which
was padded with spaces and which was rejected. Recent updates to the
http-bis draft made it a lot more clear that such spaces must be ignored,
so this is what this patch does.
It should be backported to 1.4.
Many inet_ntop calls were partially right, which was hard to detect given
the complex combinations. Some of them were relying on the listener's proto
instead of the address itself, which could have been different when dealing
with an accept-proxy connection.
The new addr_to_str() function does the dirty job and returns the family, which
makes it particularly suited to calls from switch/case statements. A large number
of if/else statements were removed and the stats output could even be cleaned up
in the case of session dump.
As a side effect of doing this, the resulting code is smaller by almost 1kB.
All changed parts have been tested and provided expected output.
If "option forwardfor" has the "if-none" argument, then the header is
only added when the request did not already have one. This option has
security implications, and should not be set blindly.
This is used to perform cookie-based stickiness with table replication
between multiple masters and across restarts. This partially overrides
some of the appsession capabilities.
The motivation for this is to allow iteration of all the connections
of a server without the expense of iterating over the global list
of connections.
The first use of this will be to implement an option to close connections
associated with a server when it is marked as being down or in maintenance
mode.
gcc (Debian 4.6.0-2) 4.6.1 20110329 (prerelease)
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
...
src/proto_http.c:3029:14: warning: variable ‘del_cl’ set but not used [-Wunused-but-set-variable]
In file included from ebtree/eb64tree.c:23:0:
ebtree/eb64tree.h: In function ‘__eb64_lookup’:
ebtree/eb64tree.h:128:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
ebtree/eb64tree.h: In function ‘__eb64i_lookup’:
ebtree/eb64tree.h:180:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
In file included from ebtree/ebpttree.h:26:0,
from ebtree/ebimtree.c:23:
ebtree/eb64tree.h: In function ‘__eb64_lookup’:
ebtree/eb64tree.h:128:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
ebtree/eb64tree.h: In function ‘__eb64i_lookup’:
ebtree/eb64tree.h:180:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
In file included from ebtree/ebpttree.h:26:0,
from ebtree/ebistree.h:25,
from ebtree/ebistree.c:23:
ebtree/eb64tree.h: In function ‘__eb64_lookup’:
ebtree/eb64tree.h:128:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
ebtree/eb64tree.h: In function ‘__eb64i_lookup’:
ebtree/eb64tree.h:180:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
Bashkim Kasa reported that the stats admin page did not work when colons
were used in server or backend names. This was caused by url-encoding
resulting in ':' being sent as '%3A'. Now we systematically decode the
field names and values to fix this issue.
Now that we support the http-no-delay mode, we can optimize HTTP
chunking again by always waiting for more data to come until the
last chunk is met.
This patch may or may not be backported to 1.4, it's not a big deal,
it will mainly help for chunks which are aligned with the buffer size.
There are some very rare server-to-server applications that abuse the HTTP
protocol and expect the payload phase to be highly interactive, with many
interleaved data chunks in both directions within a single request. This is
absolutely not supported by the HTTP specification and will not work across
most proxies or servers. When such applications attempt to do this through
haproxy, it works but they will experience high delays due to the network
optimizations which favor performance by instructing the system to wait for
enough data to be available in order to only send full packets. Typical
delays are around 200 ms per round trip. Note that this only happens with
abnormal uses. Normal uses such as CONNECT requests or WebSockets are not
affected.
When "option http-no-delay" is present in either the frontend or the backend
used by a connection, all such optimizations will be disabled in order to
make the exchanges as fast as possible. Of course this offers no guarantee on
the functionality, as it may break at any other place. But if it works via
HAProxy, it will work as fast as possible. This option should never be used
by default, and should never be used at all unless such a buggy application
is discovered. The impact of using this option is an increase of bandwidth
usage and CPU usage, which may significantly lower performance in high
latency environments.
This change should be backported to 1.4 since the first report of such a
misuse was in 1.4. Next patch will also be needed.
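For reference, such a buggy application would typically be isolated in its own
backend, roughly like this (names and address are illustrative):

backend interactive_app
    option http-no-delay
    server app1 192.0.2.10:8080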
Commit 57f5c1 used to provide a nice improvement on chunked encoding since
it ensured that we did not set a PUSH flag for every chunk or buffer data
part of a chunked transfer.
Some applications appear to erroneously abuse HTTP chunking in order to
get interactive exchanges between a user agent and an origin server with
very small chunks. While it happens to work through haproxy, it's terribly
slow due to the latency added after passing each chunk to the system, which
could wait up to 200ms before pushing them onto the wire.
So we need an interactive mode for such usages. In the meantime, step back
on the optimization, but not completely, so that we still keep the flag as long as
we know we're not finished with the current chunk.
This change should be backported to 1.4 too as the issue was discovered
with it.
This status code is used in response to requests matching "monitor-uri".
Some users need to adjust it to fit their needs (eg: make some strings
appear there). As it's already defined as a chunked string and used
exactly like other status codes, it makes sense to make it configurable
with the usual "errorfile", "errorloc", ...
Some people like to make the monitoring URL testable from unsafe locations.
Reporting haproxy's existence there can sometimes be problematic. This patch
should not be backported to 1.4 because it is possible, even though unlikely,
that some scripts rely on this word to appear there.
When doing fix 24581bae02 to correctly handle
response cookies, an unfortunate typo was inserted in the less likely code
path, resulting in a risk of crash when cookie-based persistence is enabled
and the server emits a cookie with several spaces around the equal sign.
This bug was noticed during a code backport. Its effects were never reported
because this situation is very unlikely to appear, but it can be provoked on
purpose by the server.
This patch must be backported to 1.4 versions which contain the fix above
(anything > 1.4.8), and to similar 1.3 versions > 1.3.25. 1.5-dev versions
after 1.5-dev2 are affected too.
Despite much care around handling the content-length as a 64-bit integer,
forwarding was broken on 32-bit platforms due to the 32-bit nature of
the ->to_forward member of the "buffer" struct. The issue is that this
member is declared as a long, so while it works OK on 64-bit platforms,
32-bit platforms truncate the content-length to the lower 32 bits.
One solution could consist in turning to_forward to a long long, but it
is used a lot in the critical path, so it's not acceptable to perform
all buffer size computations on 64-bit there.
The fix consists in changing the to_forward member to a strict 32-bit
integer and ensuring in buffer_forward() that only the amount of bytes
that can fit into it is considered. Callers of buffer_forward() are
responsible for checking that their data were taken into account. We
arbitrarily ensure we never consider more than 2G at once.
That's the way it was intended to work on 32-bit platforms except that
it did not.
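The clamping idea can be sketched as follows; this is illustrative C only, and
the names, types and return convention are assumptions, not haproxy's actual
buffer API:

#include <stdint.h>

#define FWD_MAX (1U << 31)     /* never schedule more than 2G at once */

struct fwdbuf {
    uint32_t to_forward;       /* strictly 32-bit forward counter */
};

/* Schedule up to 'bytes' for forwarding and return how much was actually
 * taken into account; the caller re-submits the remainder later.
 */
static uint64_t sched_forward(struct fwdbuf *b, uint64_t bytes)
{
    uint64_t room = FWD_MAX - b->to_forward;

    if (bytes > room)
        bytes = room;

    b->to_forward += (uint32_t)bytes;
    return bytes;
}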
This issue was tracked down hard at Exosec with Bertrand Jacquin,
Thierry Fournier and Julien Thomas. It remained undetected for a long
time because files larger than 4G are almost always transferred in
chunked-encoded format, and most platforms dealing with huge contents
these days run on 64-bit.
The bug affects all 1.5 and 1.4 versions, and must be backported.
Since we now have the copy of the target in the session, use it instead
of relying on the SI for it. The SI drops the target upon unregister()
so applets such as stats were logged as "NOSRV".
Johannes Smith reported some wrong retries count in logs associated with bad
requests. The cause was that the conn_retries field in the stream interface
was only initialized when attempting to connect, but is used when logging,
possibly with an uninitialized value holding last connection's conn_retries.
This could have been avoided by making use of a stream interface initializer.
This bug is 1.5-specific.
And also rename "req_acl_rule" "http_req_rule". At the beginning that
was a bit confusing to me, especially the "req_acl" list which in fact
holds what we call rules. After some digging, it appeared that some
part of the code is 100% HTTP and not just related to authentication
anymore, so let's move that part to HTTP and keep the auth-only code
in auth.c.
Right now, http-request rules are not evaluated if the URL matches the
stats request. This is quite unexpected. For instance, in the config
below, an abuser present in the abusers list will not be prevented access
to the stats.
listen pub
    bind :8181
    acl abuser src -f abusers.lst
    http-request deny if abuser
    stats uri /stats
It is not a big deal but it's not documented as such either. For 1.5, let's
have both lists be evaluated in turn, until one blocks. For 1.4 we'll simply
update the doc to indicate that.
Also instead of duplicating the code, the patch factors out the list walking
code. The HTTP auth has been moved slightly earlier, because it was set after
the header addition code, but we don't need to add headers to a request we're
dropping.
It's very annoying that frontend and backend stats are merged because we
don't know what we're observing. For instance, if a "listen" instance
makes use of a distinct backend, it's impossible to know what the bytes_out
means.
Some points take care of not updating counters twice if the backend points
to the frontend, indicating a "listen" instance. The thing becomes more
complex when we try to add support for server side keep-alive, because we
have to maintain a pointer to the backend used for last request, and to
update its stats. But we can't perform such comparisons anymore because
the counters will not match anymore.
So in order to get rid of this situation, let's have both frontend AND
backend stats in the "struct proxy". We simply update the relevant ones
during activity. Some of them are only accounted for in the backend,
while others are just for frontend. Maybe we can improve a bit on that
later, but the essential part is that those counters now reflect what
they really mean.
This patch turns internal server addresses to sockaddr_storage to
store IPv6 addresses, and makes the connect() function use it. This
code already works but some caveats with getaddrinfo/gethostbyname
still need to be sorted out while the changes had to be merged at
this stage of internal architecture changes. So for now the config
parser will not emit an IPv6 address yet so that user experience
remains unchanged.
This change should have absolutely zero user-visible effect, otherwise
it's a bug introduced during the merge, that should be reported ASAP.
This one has been removed and is now totally superseded by ->target.
To get the server, one must use target_srv(&s->target) instead of
s->srv now.
The function ensures that non-server targets still return NULL.
s->prev_srv is used by assign_server() only, but all code paths leading
to it now take s->prev_srv from the existing s->srv. So assign_server()
can do that copy into its own stack.
If at one point a different srv is needed, we still have a copy of the
last server on which we failed a connection attempt in s->target.
When dealing with HTTP keep-alive, we'll have to know if we can reuse
an existing connection. For that, we'll have to check if the current
connection was made on the exact same target (referenced in the stream
interface).
Thus, we need to first assign the next target to the session, then
copy it to the stream interface upon connect(). Later we'll check for
equivalence between those two operations.
This is in fact where those parts belong. The old data_state was replaced
by applet.state and is now initialized when the applet is registered. It's
worth noting that the applet does not need to know the session nor the
buffer anymore since everything is brought by the stream interface.
It is possible that having a separate applet struct would simplify the
code but that's not a big deal.
With HTTP keep-alive, logging the right server name will be quite
complex because the assigned server will possibly change before we log.
Also, when we want to log accesses to an applet, it's not easy because
the applet becomes NULL again before logging.
The logged server's name is now taken from the target stored in the
stream interface. That way we can log an applet, a server name, or we
could even log a proxy or anything else if we wanted to. Ideally the
session should contain a desired target which is the one which should
be logged.
I/O handlers are still delicate to manipulate. They have no type, they're
just raw functions which have no knowledge of themselves. Let's have them
declared as applets once for all. That way we can have multiple applets
share the same handler functions and we can store their names there. When
we later need to add more parameters (eg: usage stats), we'll be able to
do so in the applets themselves.
The CLI functions have been prefixed with "cli" instead of "stats" as it's
clearly what is going on there.
The applet descriptor in the stream interface should get all the applet
specific data (st0, ...) but this will be done in the next patch so that
we don't pollute this one too much.
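The resulting shape can be sketched like this (field and applet names are
illustrative, not the exact declarations):

struct stream_interface;                  /* opaque here */

struct applet_desc {
    const char *name;                     /* reported in logs/stats */
    void (*handler)(struct stream_interface *si);
};

static void cli_io_handler(struct stream_interface *si)
{
    /* raw I/O handler, possibly shared by several applets */
}

static const struct applet_desc cli_applet = {
    .name    = "<CLI>",
    .handler = cli_io_handler,
};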
Similar to the stats socket bug, we must check that the proxy is not disabled
before trying to enable/disable a server.
Even if a disabled proxy is not displayed, someone can inject a faulty proxy
name in the POST parameters. So, we must ensure that no disabled proxy can be
used.
Bryan Talbot reported that POST requests with a query string were not
correctly processed if the hash parameter was the first one, because
the delimiter that was looked for to trigger the parsing was '&' instead
of '?'.
Also, while checking the code, it became apparent that it was enough for
a query string to be present in the request for POST parameters to be
ignored, even if the url_param was in the body and not in the URL.
The code has then been fixed like this :
1) look for URL param. If found, return it.
2) if no URL param was found and method is POST, then look it up into
the body
The code now seems to pass all request combinations.
This patch must be backported to 1.4 since 1.4 is equally broken right now.
Till now, the forwarding code was making use of the hdr_content_len member
to hold the size of the last chunk parsed. As such, it was reset after being
scheduled for forwarding. The issue is that this entry was reset before the
data could be viewed by backend.c in order to parse a POST body, so the
"balance url_param check_post" did not work anymore.
In order to fix this, we need two things :
- the chunk size (reset upon every forward)
- the total body size (not reset)
hdr_content_len was thus replaced by the former (hence the size of the patch)
as it makes more sense to have it stored that way than the other way around.
This patch should be backported to 1.4 with care, considering that it affects
the forwarding code.
It seems like if a response message is chunked and the chunk size wraps
at the end of the buffer and the crlf sequence is incomplete, then we
can forward a wrong chunk size due to incorrect handling of the wrapped
size. It seems extremely unlikely to occur on real traffic (no reason to
have half of the CRLF after a chunk) but nothing prevents it from being
possible.
This fix must be backported to 1.4.
req_acl was used instead of req_acl_final. As a matter of luck, both
happen to be the same at this point, but this is not granted in the
future.
This fix should be backported to 1.4.
Some browsers send POST requests in several packets, which was not supported
by the "stats admin" function.
This patch allows waiting for more data when they are not yet fully received
(we are still limited to a certain size defined by the buffer size minus its
reserved space).
It also adds support for the "Expect: 100-Continue" header.
Stefan Behte reported a strange case where depending on the position of
the Connection header in the header list, some headers added after it
were or were not usable in "balance hdr()". The reason is that when the
last header is removed, the list's tail was not updated, so any header
added after that one was not visible from the list.
This fix must be backported to 1.4 and possibly 1.3.
It's better to avoid sticking on empty parameter values, as this almost
always indicates a missing parameter. Otherwise it's easy to enter a
situation where all new visitors stick to the same server.
Since haproxy 1.4.9, combining option httpclose and option
http-pretend-keepalive can leave the connections open until the backend
keep-alive timeout is reached, resulting in poor performance.
The same can occur when the proxy is in tunnel mode.
This patch ensures that the server side connection is closed after the
response and that http-pretend-keepalive is ignored in tunnel mode.
We've had several issues related to data transfers. First, if a
client aborted an upload before the server started to respond, it
would get a 502 followed by a 400. The same was true (in the other
way around) if the server suddenly aborted while the client was
uploading the data.
The flags reported in the logs were misleading. Request errors could
be reported while the transfer was stopped during the data phase. The
status codes could also be overwritten by a 400 even though the start
of the response was transferred to the client.
The stats were also wrong in case of data aborts. The server or the
client could sometimes be miscredited for being the author of the
abort depending on where the abort was detected. Some client aborts
could also be accounted as request errors and some server aborts as
response errors.
Now it seems like all such issues are fixed. Since we don't have a
specific state for data flowing from the client to the server
before the server responds, we're still counting the client aborted
transfers as "CH", and they become "CD" when the server starts to
respond. Ideally a "P" state would be desired.
This patch should be backported to 1.4.
HTTP pipelining currently needs to monitor the response buffer to wait
for some free space to be able to send a response. It was not possible
for the HTTP analyser to be called based on response buffer activity.
Now we introduce a new buffer flag BF_WAKE_ONCE which is set when the
HTTP request analyser is set on the response buffer and some activity
is detected. This is not clean at all but one of the only ways to fix
the issue before we make it possible to register events for analysers.
Also it appeared that one realign condition did not cover all cases.
This counter will help quickly spot whether there are new errors or not.
It is also assigned to each capture so that a script can keep track of
which capture was taken when.
It is possible to block on incorrectly chunked requests or responses,
but this becomes very hard to debug when it happens once in a while.
This patch adds the ability to also capture incorrectly chunked requests
and responses. The chunk will appear in the error buffer and will be
verifiable with the usual "show errors". The incorrect byte will match
the error location.
Error captures did only support contiguous messages. This is annoying
for capturing chunking errors, so let's ensure the function is able to
copy wrapped messages.
When haproxy parses chunk-encoded data that are scheduled to be sent, it is
possible that the other end is closed (mainly due to a client abort returning
as an error). The message state thus changes to HTTP_MSG_ERROR and the error
is reported as a chunk parsing error ("PD--") while it is not. Detect this
case before setting the flags and set the appropriate flag in this case.
Debugging parsing errors can be greatly improved if we know what the parser
state was and what the buffer flags were (especially for closed inputs/outputs
and full buffers). Let's add that to the error snapshots.
When forwarding chunk-encoded data, each chunk gets a TCP PUSH flag when
going onto the wire simply because the send() function does not know that
some data remain after it (next chunk). Now we set the BF_EXPECT_MORE flag
on the buffer if the chunk size is not null. That way we can reduce the
number of packets sent, which is particularly noticeable when forwarding
compressed data, especially as it requires less ACKs from the client.
When a header is removed, the previous header's next pointer is updated
to reflect the next of the current header. However, when cycling through
the loop, we update the prev pointer to point to the deleted header, which
means that if we delete another header, it's the deleted header's next
pointer that will be updated, leaving the deleted header in the list with
a null length, which is forbidden.
We must just not update the prev pointer after a removal.
This bug was present when either "reqdel" or "rspdel" removed two consecutive
headers. It could also occur when removing cookies in either requests or
responses, but since that was the last stage of header processing, the issue
remained unnoticed.
Issue reported by Hank A. Paulson.
This fix must be ported to 1.4 and possibly 1.3.
Cookies in indirect mode are removed from the cookie header. Three pointers
ought to be updated when appsession cookies are processed next, but were not.
The result is that a memcpy() can be called with a negative value causing the
process to crash. It is not sure whether this can be remotely exploited or not.
(cherry picked from commit c5f3749aa3ccfdebc4992854ea79823d26f66213)
In out of memory conditions, the ->destroy function would free all
possibly allocated pools from the current appsession, including those
that were not yet allocated nor assigned, which used to point to a
previous allocation, obviously resulting in a segfault.
(cherry picked from commit 75eae485921d3a6ce197915c769673834ecbfa5c)
In case of out of memory, it was possible to write to a null pointer
when capturing response cookies due to a missing "else" block. The
request handling was fine though.
(cherry picked from commit 62e3604d7dd27741c0b4c9e27d9e7c73495dfc32)
Enhance pattern conv and fetch argument parsing: fetch and conv callbacks now use typed args.
Add more details to error messages in the pattern expression parsing function.
Update existing pattern convs and fetches to the new prototype.
Create the stick table key type "binary".
Manage truncation and padding if the pattern's fetch-converted result doesn't match the table key size.
This option makes haproxy preserve any persistence cookie emitted by
the server, which allows the server to change it or to unset it, for
instance, after a logout request.
(cherry picked from commit 52e6d75374c7900c1fe691c5633b4ae029cae8d5)
This match returns true when the request calling it is the first one of
a connection.
(cherry picked from commit 922ca979c50653c415852531f36fe409190ad76b)
When we're enabling a server again (unix CLI or stats interface), we must not mark
it completely up because it can take a while before a failure is detected. So we
mark it one step above failure, which means it's up but will be marked down upon
first failure.
(cherry picked from commit 83c3e06452457ed5660fc814cbda5bf878bf19a2)
The stats web interface must be read-only by default to prevent security
holes. As it is now allowed to enable/disable servers, a new keyword
"stats admin" is introduced to activate this admin level, conditioned by ACLs.
(cherry picked from commit 5334bab92ca7debe36df69983c19c21b6dc63f78)
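A configuration sketch of the resulting usage (ACL name and admin network are
illustrative):

backend app
    stats enable
    stats uri /stats
    acl is_admin src 10.0.0.0/8
    stats admin if is_admin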
Based on a patch provided by Judd Montgomery, it is now possible to
enable/disable servers from the stats web interface. This makes it possible to select
several servers in a backend and apply the action to them at the same time.
Currently, there are 2 known limitations :
- The POST data are limited to one packet
(don't alter too many servers at a time).
- Expect: 100-continue is not supported.
(cherry picked from commit 7693948766cb5647ac03b48e782cfee2b1f14491)
If a maxidle or maxlife parameter is set on the persistence cookie in
insert mode and the client did not provide a recent enough cookie,
then we emit a new cookie with a new last_seen date and the same
first_seen (if maxlife is set). Recent enough here designates a
cookie that would be rounded to the same date. That way, we can
refresh a cookie when required without doing it in all responses.
If the request did not contain such parameters, they are set anyway.
This means that a monitoring request that is forced to a server will
get an expiration date anyway, but this should not be a problem given
that the client is able to set its cookie in this case. This also
makes it possible to force an expiration date on visitors who previously
did not have one.
If a request comes with a dated cookie while no date check is performed,
then a new cookie is emitted with no date, so that we don't risk dropping
the user too fast due to a very old date when we re-enable the date check.
All requests that were targeting the correct server and which had their
expiration date added/updated/removed in the response cookie are logged
with the 'U' ("updated") flag instead of the 'I' ("inserted"). So very
often we'll see "VU" instead of "VN".
(cherry picked from commit 8b3c6ecab6d37be5f3655bc3a2d2c0f9f37325eb)
If a cookie comes in with a first or last date, and they are configured on
the backend, they're checked. If a date is expired or too far in the future,
then the cookie is ignored and the specific reason appears in the cookie
field of the logs.
(cherry picked from commit faa3019107eabe6b3ab76ffec9754f2f31aa24c6)
The set-cookie status flags were not very handy and limited. Reorder
them to save some room for additional values and add the "U" flags
(for Updated expiration date) that will be used with expirable cookies
in insert mode.
(cherry picked from commit 5bab52f821bb0fa99fc48ad1b400769e66196ece)
In all cookie persistence modes but prefix, we now support cookies whose
value is suffixed with some contents after a vertical bar ('|'). This will
be used to pass an optional expiration date. So as of now we only consider
the part of the cookie value which is used before the vertical bar.
(cherry picked from commit a4486bf4e5b03b5a980d03fef799f6407b2c992d)
Some configs may involve httpclose in a frontend and http-pretend-keepalive
in a backend. httpclose used to take priority over keepalive, thus voiding
its effect. This change ensures that when both are combined, keepalive is
still announced to the server while close is announced to the client.
(cherry picked from commit 2be7ec90fa9caf66294f446423bbab2d00db9004)
Some broken browsers still happen to send a CRLF after a POST. Those which
send a CRLF in a second packet have it queued into the system's buffers,
which causes an RST to be emitted by some systems upon close of the response
(eg: Linux). The client may then receive the RST without the last response
segments, resulting in a truncated response.
This change leaves request polling enabled on a POST so that we can flush
any late data from the request buffers.
A more complete workaround would consist in reading from the request for a
long time, until we get confirmation that the close has been ACKed. This
is much more complex and should only be studied for newer versions.
(cherry picked from commit 12e316af4f0245fde12dbc224ebe33c8fea806b2)
This patch addresses exactly the same issues as the previous one, but
for responses this time. It also introduces implicit support for the
Set-Cookie2 header, for which there's almost nothing specific to do
since it is a clean header. This one allows multiple cookies in a
same header, by respecting the HTTP messaging semantics.
The new parser has been tested with insertion, rewrite, passive,
removal, prefixing and captures, and it looks OK. It's still able
to rewrite (or delete) multiple cookies at once. Just as with the
request parser, it tries hard to fix formatting of the cookies it
displaces.
This patch too should be backported to 1.4 and possibly to 1.3.
The request cookie parser did not allow spaces to appear in cookie
values nor around the equal sign. The various RFCs on the subject
say different things, some suggesting that a space is allowed after
the equal sign and being worded in a way that lets one believe it
is allowed before too. Some spaces may appear inside values and be
part of the values. The quotes allow delimiters to be embedded in
values. The spaces before and after attributes should be trimmed.
The new parser addresses all those points and has been carefully tested.
It fixes misplaced spaces around equal signs before processing the cookies
or forwarding them. It also tries its best to perform clean removals by
always keeping the delimiter after the value being removed and leaving one
space after it.
The variables inside the parser have been renamed to make the code a lot
more understandable, and one multi-function pointer has been eliminated.
Since this patch fixes real possible issues, it should be backported to 1.4
and possibly 1.3, since one (single) case of wrong spaces has been reported
in 1.3.
The code handling the Set-Cookie has not been touched yet.
The header parser has a bug which causes commas to be matched within
quotes while it was not expected. The way the code was written could
make one think it was OK. The resulting effect is that the following
config would use the second IP address instead of the third when facing
this request :
source 0.0.0.0 usesrc hdr_ip(X-Forwarded-For,2)
GET / HTTP/1.0
X-Forwarded-for: "127.0.0.1, 127.0.0.2", 127.0.0.3
This fix must be backported to 1.4 and 1.3.
Fix 4fe4190278 was a bit too strong. It
has caused some chunked-encoded responses to be truncated when a recv()
call could return multiple chunks followed by a close. The reason is
that when a chunk is parsed, only its contents are scheduled to be
forwarded. Thus, the reader sees auto_close+shutr and sets shutw_now.
The sender in turn sends the last scheduled data and does shutw().
Another nasty effect is that it has reduced the keep-alive rate. If
a response did not completely fit into the buffer, then the auto_close
bit was left on and the sender would close upon completion.
The fix consists in not making use of auto_close when chunked encoding
is used nor when keep-alive is used, which makes sense. However it is
maintained on error processing.
Thanks to Cyril Bonté for reporting the issue early.
While it's usually desired to wait for a server response even
when the client closes its request channel, it can be problematic
with long polling requests. In order to let the server decide what
to do in such a case, if option abortonclose is set, we simply
forward the shutdown to the server. That way, it can decide to
take the appropriate action. Most servers will still process the
request, while some will probably want to abort.
Obviously, this only works as long as the client has not sent
another pipelined request over the same connection.
(was commit 0e25d86da49827ff6aa3c94132c01292b5ba4854 in 1.4)
Having a single tracking pointer for both frontend and backend counters
does not work. Instead let's have one for each. The keyword has changed
to "track-be-counters" and "track-fe-counters", and the ACL "trk_*"
changed to "trkfe_*" and "trkbe_*".
This patch adds support for the following session counters :
- http_req_cnt : HTTP request count
- http_req_rate: HTTP request rate
- http_err_cnt : HTTP request error count
- http_err_rate: HTTP request error rate
The equivalent ACLs have been added to check the tracked counters
for the current session or the counters of the current source.
When resetting a session's request analysers, we must take them from the
listener, not from the frontend. At the moment there is no difference
but this might change.
Till now, the frontend relied on the backend's options for INDEPSTR,
while at the time of accept, the frontend and backend are the same.
So we now use the frontend's pointer instead of the backend and we
don't have any dependency on the backend anymore in the frontend's
accept code.
The conn_retries attribute is now assigned when switching from SI_ST_INI
to SI_ST_REQ. This eliminates one of the last dependencies on the backend
in the frontend's accept() function.
The conn_retries still lies in the session and its initialization depends
on the backend when it may not yet be known. Let's first move it to the
stream interface.
It was particularly embarrassing that the server timeout was assigned
to buffers during an accept() just to be potentially changed later in
case of a use_backend rule. The frontend side has nothing to do with
server timeouts.
Now we initialize them right after the connect() succeeds. Later this
should change for a unique stream-interface timeout setting only.
The connection timeout stored in the buffer has not been used since the
stream interfaces were introduced. Let's get rid of it as it's one of the
things that complicate factoring of the accept() functions.
The 'client.c' file now only contained frontend-specific functions,
so it has naturally been renamed 'frontend.c'. Same for client.h. This
has also been an opportunity to remove some cross references from
files that should not have depended on it.
In the end, this file should contain a protocol-agnostic accept()
code, which would initialize a session, task, etc... based on an
accept() from a lower layer. Right now there are still references
to TCP.
By using msg->sol as the beginning of a message, wrong messages were
displayed in debug mode when they were truncated on the last line,
because msg->sol points to the beginning of the last line. Use
data+msg->som instead.
This would only be wrong when the server has not completely responded yet.
Fix two other occurrences of wrong rsp<->sl associations which were harmless
but wrong anyway.
Latest BF_READ_ATTACHED fix has unveiled a nice issue with the way
HTTP requests and responses are forwarded. The case where the request
aborts after the server has responded (POST with early response)
forgot to re-enable auto-close on the response. In fact it still
worked thanks to a side effect as long as BF_READ_ATTACHED was there
to force the states to be resynced (and the flags). Since last fix,
the missing auto-close causes CLOSE_WAIT connections when the client
aborts too late during a data transfer.
The right fix consists in considering the situation where the client
experiences an error and to explicitly abort the transfer. There is
no need to wake the response analysers up for that since they'd have
no added value and the analysers flags are cleared. However for a
future usage, that might help (eg: stickiness, ...).
This fix should be backported to 1.4 if the previous one is backported
too. After all the non-reg tests, the risks to see a problem arise
without both patches seems low, and both patches touch sensible areas
of the code. So there's no hurry.
Both dispatch and http_proxy modes were broken since 1.4-dev5 when
the adjustment of server health based on response codes was introduced.
In fact, in these modes, s->srv == NULL. The result is a plain segfault.
It should have been noted critical, but the fact that it remained 6
months without being noticed indicates that almost nobody uses these
modes anymore. Also, the crash is immediate upon first request.
Further versions should not be affected anymore since it's planned to
have a dummy server instead of these annoying NULL pointers.
It is now possible to stick on an IP address found in a HTTP header. Right
now only the last occurrence of the header can be used, which is generally
enough for most uses. Also, the header extraction rule only knows how to
convert the header to IP. Later it will be usable as a plain string with
an implicit conversion, and the syntax will not change.
Networks patterns loaded from files for longest match ACL testing
will now be arranged into a prefix tree. This is possible thanks to
the new prefix features in ebtree v6.0. Longest match testing is
slightly slower than exact data matching. However, the measured impact
of running at 42000 requests per second and testing whether the IP
address found in a header belongs to a list of 52000 networks or
not is 3% CPU (increase from 66% to 69%). This is low enough to
permit true geolocation based on huge tables.
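For instance, a geolocation-style rule loading a large network list from a
file might look like this (file, backend and header names are illustrative):

acl from_fr hdr_ip(X-Forwarded-For) -f fr_networks.lst
use_backend fr_farm if from_fr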
Now if some ACL patterns are loaded from a file and the operation is
an exact string match, the data will be arranged in a tree, yielding
a significant performance boost on large data sets. Note that this
only works when case is sensitive.
A new dedicated function, acl_lookup_str(), has been created for this
matching. It is called for every possible input data to test and it
looks the tree up for the data. Since the keywords are loosely typed,
we would have had to add a new column to all keywords to adjust the
function depending on the type. Instead, we just compare on the match
function. We call acl_lookup_str() when we could use acl_match_str().
The tree lookup is performed first, then the remaining patterns are
attempted if the tree returned nothing.
A quick test shows that when matching a header against a list of 52000
network names, haproxy uses 68% of one core on a core2-duo 3.2 GHz at
42000 requests per second, versus 66% without any rule, which means
only a 2% CPU increase for 52000 rules. Doing the same test without
the tree leads to 100% CPU at 6900 requests/s. Also it was possible
to run the same test at full speed with about 50 sets of 52000 rules
without any measurable performance drop.
When trying to display an invalid request or response we received,
we must at least check that we have identified something looking
like a start of message, otherwise we can dereference a NULL pointer.
This is used to disable persistence depending on some conditions (for
example using an ACL matching static files or a specific User-Agent).
You can see it as a complement to "force-persist".
In the configuration file, the force-persist/ignore-persist declaration
order defines the rules priority.
Used with the "appsession" keyword, it can also help reduce memory usage,
as the session won't be hashed when persistence is ignored.
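A typical sketch, skipping persistence for static objects (ACL name and
extensions are illustrative):

acl url_static path_end .png .gif .jpg .css .js
ignore-persist if url_static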
I met a strange behaviour with appsession.
I firstly thought this was a regression due to one of my previous patch
but after testing with a 1.3.15.12 version, I also could reproduce it.
To illustrate, the configuration contains :
appsession PHPSESSID len 32 timeout 1h
Then I call a short PHP script containing :
setcookie("P", "should not match")
When calling this script through haproxy, the cookie "P" matches the appsession rule :
Dumping hashtable 0x11f05c8
table[1572]: should+not+match
Shouldn't it be ignored ?
If you confirm, I'll send a patch for 1.3 and 1.4 branches to check that the
cookie length is equal to the appsession name length.
This is due to the comparison length, where the cookie length is taken into
account instead of the appsession name length. Using the appsession name
length would allow ASPSESSIONIDXXX (+ check that memcmp won't go after the
buffer size).
Also, while testing, I noticed that HEAD requests were not available for
URIs containing the appsession parameter. The 1.4.3 patch fixes a horrible
segfault I missed in a previous patch when appsession is not in the
configuration and HAProxy is compiled with DEBUG_HASH.
Some servers do not completely conform with RFC2616 requirements for
keep-alive when they receive a request with "Connection: close". More
specifically, they don't bother using chunked encoding, so the client
never knows whether the response is complete or not. One immediately
visible effect is that haproxy cannot maintain client connections alive.
The second issue is that truncated responses may be cached on clients
in case of network error or timeout.
Óscar Frías Barranco reported this issue on Tomcat 6.0.20, and
Patrik Nilsson with Jetty 6.1.21.
Cyril Bonté proposed this smart idea of pretending we run keep-alive
with the server and closing it at the last moment as is already done
with option forceclose. The advantage is that we only change one
emitted header but not the overall behaviour.
Since some servers such as nginx are able to close the connection
very quickly and save network packets when they're aware of the
close negotiation in advance, we don't enable this behaviour by
default.
"option http-pretend-keepalive" will have to be used for that, in
conjunction with "option http-server-close".
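A minimal sketch of enabling it (server name and address are illustrative):

backend app
    option http-server-close
    option http-pretend-keepalive
    server tomcat1 192.0.2.10:8080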
Using get_ip_from_hdr2() we can look for occurrence #X or #-X and
extract the IP it contains. This is typically designed for use with
the X-Forwarded-For header.
Using "usesrc hdr_ip(name,occ)", it becomes possible to use the IP address
found in <name>, and possibly specify occurrence number <occ>, as the
source to connect to a server. This is possible both in a server and in
a backend's source statement. This is typically used to use the source
IP previously set by an upstream proxy.
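For example (addresses are illustrative; occurrence -1 designates the last
one):

backend transparent_app
    source 0.0.0.0 usesrc hdr_ip(X-Forwarded-For,-1)
    server app1 192.0.2.20:80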
Bernhard Krieger reported truncated HTTP responses in presence of some
specific chunk-encoded data, and kindly offered complete traces of the
issue which made it easy to reproduce it.
Those traces showed that the chunks were of exactly 8192 bytes, chunk
size and CRLF included, which was exactly half the size of the buffer.
In this situation, the function http_chunk_skip_crlf() could erroneously
try to parse a CRLF after the chunk believing there were more data
pending, because the number of bytes present in the buffer was considered
instead of the number of remaining bytes to be parsed.
Those two codes can be triggered on demand by client requests.
We must not fail a server on them.
Ideally we should ignore a certain amount of status codes which do
not indicate life nor death.
Hi Willy,
Please find a small patch to prevent haproxy segfaulting when logging captured headers in CLF format.
Example config to reproduce the bug :
listen test :10080
    log 127.0.0.1 local7 debug err
    mode http
    option httplog clf
    capture request header NonExistantHeader len 16
--
Cyril Bonté
In case of pipelined requests, if the client aborts before reading response
N-1, haproxy waits forever for the data to leave the buffer before parsing
the next response.
Often we need to understand why some transfers were aborted or what
constitutes server response errors. With those two counters, it is
now possible to detect an unexpected transfer abort during a data
phase (eg: too short HTTP response), and to know what part of the
server response errors may in fact be assigned to aborted transfers.
A copy-paste typo and a missing check were causing the logs to
report "PR" instead of "SD" when a server closes before sending
full data. Also, the log would erroneously report 502 while in
fact the correct response will already have been transmitted.
The bounce realign function was algorithmically good but as expected
it was not cache-friendly. Using it with large requests caused so many
cache thrashing that the function itself could drain 70% of the total
CPU time for only 0.5% of the calls !
Revert back to a standard memcpy() using a specially allocated swap
buffer. We're now back to 2M req/s on pipelined requests.
It is wrong to merge FE and BE stats for a proxy because when we consult a
BE's stats, it reflects the FE's stats even though the BE has received no
traffic. The most common example happens with listen instances, where the
backend gets credited for all the traffic even when a use_backend rule makes
use of another backend.
The trash buffer may now be smaller than a buffer because we can tune
it at run time. This causes a risk when we're trying to use it as a
temporary buffer to realign unaligned requests, because we may have to
put up to a full buffer into it.
Instead of doing a double copy, we're now relying on an open-coded
bouncing copy algorithm. The principle is that we move one byte at
a time to its final place, and if that place also holds a byte, then
we move it too, and so on. We finish when we've moved all the buffer.
It limits the number of memory accesses, but since it proceeds one
byte at a time and with random walk, it's not cache friendly and
should be slower than a double copy. However, it's only used in
extreme situations and the difference will not be noticeable.
It has been extensively tested and works reliably.
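For illustration, the bouncing/cycle-following principle boils down to an
in-place rotation where each byte is moved straight to its final place and the
byte found there is moved next; this standalone sketch is not the haproxy
function itself:

#include <stdio.h>
#include <string.h>
#include <stddef.h>

/* In-place left rotation of buf[0..len-1] by 'shift' bytes, one byte at
 * a time, following permutation cycles until everything has moved.
 */
static void bounce_rotate(char *buf, size_t len, size_t shift)
{
    size_t moved = 0;

    if (!len || !(shift %= len))
        return;

    for (size_t start = 0; moved < len; start++) {
        size_t to = start;
        char saved = buf[start];

        while (1) {
            size_t from = to + shift;

            if (from >= len)
                from -= len;
            if (from == start)
                break;
            buf[to] = buf[from];
            to = from;
            moved++;
        }
        buf[to] = saved;
        moved++;
    }
}

int main(void)
{
    char buf[] = "DEFABC";     /* wrapped data: should read ABCDEF */

    bounce_rotate(buf, strlen(buf), 3);
    printf("%s\n", buf);       /* prints ABCDEF */
    return 0;
}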
The fix below was incomplete :
commit d5fd51c75b
[BUG] http_server_error() must not purge a previous pending response
This can cause parts of responses to be truncated in case of
pipelined requests if the second request generates an error
before the first request is completely flushed.
Pending response data being rejected was still sent, causing inappropriate
error responses in case of error while parsing a response header. We must
purge pending data from the response buffer that were not scheduled to be
sent (l - send_max).
Now we establish the tunnel only once the status 200 response is
received. That way we can still support an authentication request
in response to a CONNECT, then a client's authentication response.
A 101 response is accompanied with an Upgrade header indicating
a new protocol that is spoken on the connection after the exchange
completes. At least we should switch to tunnel mode after such a
response.
I'm not sure if the fix is correct:
- if (req_acl->cond)
- ret = acl_exec_cond(req_acl->cond, px, s, txn, ACL_DIR_REQ);
+ if (!req_acl->cond)
+ continue;
Doesn't it ignore rules with no condition attached? I think that the
proper solution would be the following.
One check was missing for the 'polarity' of the test. Now 'unless'
works. BTW, 'unless' provides a nice way to perform one-line auth :
acl valid-user http_auth(user-list)
http-request auth unless valid-user
We're already able to know if a request is a proxy request or a
normal one, and we have an option "http-use-proxy-header" which states
that proxy headers must be checked. So let's switch to use the proxy
authentication headers and responses when this option is set and we're
facing a proxy request. That allows haproxy to enforce auth in front
of a proxy.
Support the new syntax (http-request allow/deny/auth) in
http stats.
The syntax is now the same as in the frontend/backend
http-request access control:
acl src_nagios src 192.168.66.66
acl stats_auth_ok http_auth(L1)
stats http-request allow if src_nagios
stats http-request allow if stats_auth_ok
stats http-request auth realm LB
The old syntax is still supported, but now it is emulated
via private ACLs and an additional userlist.
Just as for the req* rules, we can now condition rsp* rules with ACLs.
ACLs match on response, so volatile request information cannot be used.
A warning is emitted if a configuration contains such an anomaly.
From now on, if request filters have ACLs defined, these ACLs will be
evaluated to condition the filter. This will be used to conditionally
remove/rewrite headers based on ACLs.
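A sketch of what this permits (header and ACL names are illustrative):

acl internal src 10.0.0.0/8
reqidel ^X-Debug-Token: unless internal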
Krzysztof Oledzki suggested to disable keep-alive when a process
is going down due to a reload, in order to avoid ever-lasting
sessions. This is a simple and very efficient solution as it
ensures that at most one more request will be handled on a
keep-alive connection after the process has received a SIGUSR1
signal.
We must trim any excess data from the response buffer when recycling
a keep-alive connection, because we may have blocked an invalid response
from a server that we don't want to accidentally forward once we disable
the analysers, nor do we want those data to come along with next response.
A typical example of such data would be from a buggy server responding to
a HEAD with some data, or sending more than the advertised content-length.
For deciding to set the BF_EXPECT_MORE, we reused the same code as in
http_wait_for_request(), but here we must ignore buf->lr which is not
yet set and useless. This might only have caused random sub-optimal
behaviours.
Despite what is explicitly stated in HTTP specifications,
browsers still use the undocumented Proxy-Connection header
instead of the Connection header when they connect through
a proxy. As such, proxies generally implement support for
this stupid header name, breaking the standards and making
it harder to support keep-alive between clients and proxies.
Thus, we add a new "option http-use-proxy-header" to tell
haproxy that if it sees requests which look like proxy
requests, it should use the Proxy-Connection header instead
of the Connection header.
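Enabling it is a one-liner in a frontend dedicated to explicit-proxy traffic
(names are illustrative):

frontend explicit_proxy
    bind :3128
    mode http
    option http-use-proxy-header
    default_backend outgoing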
When using "option persist" or "force-persist", we want to know from the
logs if the cookie referenced a valid server or a down server. Till here
the flag reported a valid server even if the server was down, which is
misleading. Now we correctly report that the requested server was down.
We can typically see "--DI" when using "option persist" with redispatch,
ad "SCDN" when using force-persist on a down server.
This is used to force access to down servers for some requests. This
is useful when validating that a change on a server correctly works
before enabling the server again.
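A sketch of the typical usage, letting admin stations reach a server being
validated (ACL name and network are illustrative):

acl from_admins src 192.168.100.0/24
force-persist if from_admins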
We used to delay the response if there is a new request in the buffer.
However, if the pending request is incomplete, we should not delay the
pending responses.
This can cause parts of responses to be truncated in case of
pipelined requests if the second request generates an error
before the first request is completely flushed.
This one is the next step of previous patch. It correctly computes
the response mode and the Connection flag transformations depending
on the request mode and version, and the response version and headers.
We're now also able to add "Connection: keep-alive", and to convert
server's close during a keep-alive connection to a server-close
connection.
We need to improve Connection header handling in the request for it
to support the upcoming keep-alive mode. Now we have two flags which
keep in the session the information about the presence of a
Connection: close and a Connection: keep-alive headers in the initial
request, as well as two others which keep the current state of those
headers so that we don't have to parse them again. Knowing the initial
value is essential to know when the client asked for keep-alive while
we're forcing a close (eg in server-close mode). Also the Connection
request parser is now able to automatically remove single header values
at the same time they are parsed. This provides greater flexibility and
reliability.
All combinations of listen/front/back in all modes and with both
1.0 and 1.1 have been tested.
Calling this function after http_find_header2() automatically deletes
the current value of the header, and removes the header itself if the
value is the only one. The context is automatically adjusted for a
next call to http_find_header2() to return the next header. No other
change nor test should be made on the transient context though.