Patch 6cbbdbf3 fixed the missing "-" delimiters in logs, but it caused
them to be emitted with "http-request add-header", even though it was
correctly fixed for the unique-id format. Fix this by simply removing
LOG_OPT_MANDATORY in this case.
Commit 2b0108ad accidentally got rid of the ability to emit a "-" for
empty log fields. This can happen for captured request and response
cookies, as well as for fetches. Since we don't want to have this done
for headers however, we set the default log method when parsing the
format. It is still possible to force the desired mode using +M/-M.
In select_compression_response_header(), some tests are rather confusing
as the "fail" label is used to deinitialize the compression context for
the session while it's branched only before initialization succeeds. The
test is always false here, and the dereferencing of the comp_algo pointer,
which might be null, is also confusing. Remove that code, which is not
needed anymore since commit ec3e3890 got rid of the last remaining issues.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
srv cannot be null in http_perform_server_redirect(), as it's taken
from the stream interface's target which is always valid for a
server-based redirect, and it was already dereferenced above, so in
practice, gcc already removes the test anyway.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
As stated in both RFC2616 and the http-bis drafts, Cache-Control:
no-transform must be looked up in the response since we're modifying
the response. However, its presence in the request is irrelevant to
any changes in the response :
7.2.1.6. no-transform

    The "no-transform" request directive indicates that an intermediary
    (whether or not it implements a cache) MUST NOT change the Content-
    Encoding, Content-Range or Content-Type request header fields, nor
    the request representation.

7.2.2.9. no-transform

    The "no-transform" response directive indicates that an intermediary
    (regardless of whether it implements a cache) MUST NOT change the
    Content-Encoding, Content-Range or Content-Type response header
    fields, nor the response representation.
Note: according to the specs, we're supposed to emit the following
response header :
    Warning: 214 transformation applied
However no other product seems to do it, so the effect on user agents
is unclear.
The "reqtarpit" rule is not very handy to use. Now that we have more
flexibility with "http-request", let's finally make the tarpit rules
usable there.
There are still semantic differences between apply_filters_to_request()
and http_req_get_intercept_rule(), because the former updates the counters
while the latter does not. So we currently have almost identical code paths
for similar conditions, but this should be cleaned up later.
These are exactly the same as the classic redirect rules except
that they can be interleaved with other http-request rules for
more flexibility.
The redirect parser should probably be changed to stop at the condition
so that the caller puts its own condition pointer. At the moment, the
redirect rule and condition are parsed at once by build_redirect_rule()
and the condition is assigned to the http_req_rule.
We now have http_apply_redirect_rule() which does all the redirect-specific
job instead of having this inside http_process_req_common().
Also, one of the benefits gained from unifying this code is that both
keep-alive and close responses now emit the PR-- flags. The fix for the
flags could probably be backported to 1.4, though it's very minor.
The previous function http_perform_redirect() was becoming confusing
so it was renamed http_perform_server_redirect() since it only applies
to server-based redirection.
Several bugs were introduced recently due to a misunderstanding of how
this function works and what it was supposed to do. Since it's supposed
to only return the pointer to a rule which aborts further processing of
the request, let's rename it to avoid further issues.
The function was also slightly cleaned up without any functional change.
It happens that all of them call parse_logformat_line() which sets
proxy->to_log with a number of flags affecting the line format for
all three users. For example, having a unique-id specified disables
the default log-format since fe->to_log is tested when the session
is established.
Similarly, having "option logasap" will cause "+" to be inserted in
unique-id or headers referencing some of the fields depending on
LW_BYTES.
This patch first removes most of the dependency on fe->to_log whenever
possible. The first possible cleanup is to stop checking fe->to_log
for being null, considering that it always contains at least LW_INIT
whenever the log-format is in use.
Also, some checks are wrong. s->logs.logwait cannot be nulled by
"logwait &= ~LW_*" since LW_INIT is always there. This results in
getting the wrong log at the end of a request or session when a
unique-id or add-header is set, because logwait is still not null
but the log-format is not checked.
Further cleanups are required. Most LW_* flags should be removed or at
least replaced with what they really mean (eg: depend on client-side
connection, depend on server-side connection, etc...) and this should
only affect logging, not other mechanisms.
This patch fixes the default log-format and tries to limit interferences
between the log formats, but does not pretend to do more for the moment,
since it's the most visible breakage.
After the response headers are sent and the request processing is done,
the buffers are wiped out and the stream interface is closed. We must
then disable the request analysers, otherwise some processing will
happen on a closed stream interface and empty buffers which do not
match, causing all sort of crashes. This issue was introduced with
recent work on the stats, and was reported by Seri.
The previous commit was still wrong : it broke add-header and set-header
because we don't want to stop rule processing on these actions.
The http_check_access_rule() function should be redesigned ; it was
initially designed for allow/deny rules, but now it is executing other
non-final rules and at the same time returning a pointer to the last
final rule. That becomes a bit confusing and will need to be addressed
before we implement redirect and return.
The commit that added http-request add-header/set-header unfortunately
introduced a regression in the handling of the stats page, which is not
matched anymore.
Thanks to Dmitry Sivachenko for reporting this.
These two new statements allow passing information extracted from the request
to the server. It's particularly useful for passing SSL information to the
server, but may be used for various other purposes such as combining headers
together to emulate internal variables.
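For example, a minimal illustrative snippet could look like this (the header
names are arbitrary, and %H is the hostname tag of the log-format engine) :

    frontend www
        bind :80
        # overwrite any previous occurrence of the header
        http-request set-header X-Via %H
        # always append a new occurrence, keeping existing ones
        http-request add-header X-Debug front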
At the moment, we need trash chunks almost everywhere and the only
correctly implemented one is in the sample code. Let's move this to
the chunks so that all other places can use this allocator.
Additionally, the get_trash_chunk() function now really returns two
different chunks. Previously it used to always overwrite the same
chunk and point it to a different buffer, which was a bit tricky
because it's not obvious that two consecutive results do alias each
other.
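A minimal sketch of the idea, assuming two static buffers of a fixed size
(the real allocator sizes them from global.tune.bufsize) :

    struct chunk {
        char *str;          /* beginning of the string itself */
        int size;           /* total buffer size */
        int len;            /* current length */
    };

    #define TRASH_SIZE 16384    /* illustrative only */

    static char trash_buf1[TRASH_SIZE];
    static char trash_buf2[TRASH_SIZE];

    /* Return a trash chunk, alternating between two really distinct
     * buffers so that two consecutive results never alias each other.
     */
    struct chunk *get_trash_chunk(void)
    {
        static struct chunk chunk1 = { trash_buf1, TRASH_SIZE, 0 };
        static struct chunk chunk2 = { trash_buf2, TRASH_SIZE, 0 };
        static int flip;

        struct chunk *c = (flip ^= 1) ? &chunk1 : &chunk2;
        c->len = 0;
        return c;
    }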
The HTTP header injections that are performed in dumpstats when responding
or when redirecting a POST request do not belong in dumpstats. They
do not use any state from the stats, and are 100% HTTP. Let's emit those
headers in the HTTP core, and have dumpstats only produce stats.
The dumpstats code looks like a spaghetti plate. Several functions are
supposed to be able to do several things but rely on complex states to
dispatch the work to independent functions. Most of the HTML output is
performed within the switch/case statements of the whole state machine.
Let's clean this up by adding new functions to emit the data and have
a few more iterators to avoid relying on so complex states.
The new stats dump sequence looks like this for CLI and for HTTP :
cli_io_handler()
    -> stats_dump_sess_to_buffer()      // "show sess"
    -> stats_dump_errors_to_buffer()    // "show errors"
    -> stats_dump_raw_info_to_buffer()  // "show info"
       -> stats_dump_raw_info()
    -> stats_dump_raw_stat_to_buffer()  // "show stat"
       -> stats_dump_csv_header()
       -> stats_dump_proxy()
          -> stats_dump_px_hdr()
          -> stats_dump_fe_stats()
          -> stats_dump_li_stats()
          -> stats_dump_sv_stats()
          -> stats_dump_be_stats()
          -> stats_dump_px_end()

http_stats_io_handler()
    -> stats_http_redir()
    -> stats_dump_http()                // also emits the HTTP headers
       -> stats_dump_html_head()        // emits the HTML headers
       -> stats_dump_csv_header()       // emits the CSV headers (same as above)
       -> stats_dump_http_info()        // note: ignores non-HTML output
       -> stats_dump_proxy()            // same as above
       -> stats_dump_http_end()         // emits HTML trailer
When a server responds prematurely to a POST request, haproxy used to
cause the transfer to be aborted before the end. This is problematic
because this causes the client to receive a TCP reset when it tries to
push more data, generally preventing it from receiving the response
which contains the reason for the premature response (eg: "entity too
large" or an authentication request).
From now on we take care of allowing the upload traffic to flow to the
server even when the response has been received, since the server is
supposed to drain it. That way the client receives the server response.
This bug has been present since 1.4 and the fix should probably be
backported there.
The two ACL fetches "resp_ver" and "status", if used in a request despite
the warning, would return a match of zero length. This is inappropriate,
better return a non-match to be more consistent with other ACL processing.
This returns the concatenation of the base32 fetch and the src fetch.
The resulting type is of type binary, with a size of 8 or 20 bytes
depending on the source address family. This can be used to track
per-IP, per-URL counters.
This returns a 32-bit hash of the value returned by the "base"
fetch method above. This is useful to track per-URL activity on
high traffic sites without having to store all URLs. Instead a
shorter hash is stored, saving a lot of memory. The output type
is an unsigned integer.
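A rough sketch of the idea ; the hash function below is only illustrative,
not necessarily the one haproxy uses :

    #include <stdint.h>
    #include <string.h>

    /* illustrative 32-bit hash over the "base" (Host concatenated with path) */
    static uint32_t base32_hash(const char *host, const char *path)
    {
        uint32_t h = 0;
        while (*host)
            h = h * 31 + (unsigned char)*host++;
        while (*path)
            h = h * 31 + (unsigned char)*path++;
        return h;
    }

    /* base32+src : concatenate the 4-byte hash with the source address,
     * yielding 8 bytes for IPv4 or 20 bytes for IPv6.
     */
    static int base32_src(uint32_t hash, const void *addr, int alen,
                          unsigned char out[20])
    {
        memcpy(out, &hash, 4);
        memcpy(out + 4, addr, alen);    /* alen is 4 or 16 */
        return 4 + alen;
    }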
Until now it was only possible to use track-sc1/sc2 with "src" which
is the IPv4 source address. Now we can use track-sc1/sc2 with any fetch
as well as any transformation type. It works just like the "stick"
directive.
Samples are automatically converted to the correct types for the table.
Only "tcp-request content" rules may use L7 information, and such information
must already be present when the tracking is set up. For example it becomes
possible to track the IP address passed in the X-Forwarded-For header.
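An illustrative configuration for this X-Forwarded-For example (the table
parameters are arbitrary) :

    backend per_client
        stick-table type ip size 200k expire 30m store http_req_rate(10s)
        # L7 data must already be present when tracking is set up,
        # hence "tcp-request content" with an HTTP guard
        tcp-request content track-sc1 hdr(X-Forwarded-For) if HTTP

The sample is automatically converted to the table's "ip" type.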
HTTP request processing now also considers tracking from backend rules
because we want to be able to update the counters even when the request
was already parsed and tracked.
Some more controls need to be performed (eg: samples do not distinguish
between L4 and L6).
If a client aborts a request with an error (typically a TCP reset), we must
log a 400. Till now we did not set the status nor close the stream interface,
causing the request to attempt to be forwarded and logging a 503.
Should be backported to 1.4 which is affected as well.
To ensure that we only count when a response was compressed, we also
check for the SN_COMP_READY flag which indicates that the compression
was effectively initialized. Comp_algo alone is meaningless.
Compression was not disabled on 1xx, 204, 304 nor on responses to HEAD
requests. This is not really a problem, but it reports more compressed
responses than were really produced.
Let's only look up the content-type header once. This involves
inverting the condition which is not dramatic.
Also, we now always check the value length before comparing it, and we
always reset the ctx.idx before looking a header up. Otherwise that
could make header lookups depend on their on-wire order. It would be
a minor issue however since at worst it would cause some responses not
to be compressed.
The compression is disabled when the HTTP status code is not 200 ; indeed,
compression of some HTTP status codes can create issues (eg: 206, 416).
Multipart messages should not be compressed either.
If a client aborts with an abortonclose flag, the close is forwarded
to the server, and when the server response is processed, the analyser
thinks it's the server which closed first, so it logs flags "SD" or "SH"
and counts a server error. In order to avoid this, we now first detect
that the client has closed and log a client abort instead.
This likely is the reason why many people have been observing a small rate
of SD/SH flags without being able to find what the error was.
This fix should probably be backported to 1.4.
Depending on the content-types and accept-encoding fields, some responses
might or might not be compressed. Let's have a counter of the number of
compressed responses and report it in the stats to help improve compression
usage.
Some cosmetic issues were fixed in the CSV output too (missing commas at the
end).
Some users need more than 64 characters to log large cookies. The limit
was set to 63 characters (and not 64 as previously documented). Now it
is possible to change this using the global "tune.http.cookielen" setting
if required.
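For example (illustrative values) :

    global
        # allow captured cookies up to 128 characters
        tune.http.cookielen 128

    frontend www
        bind :80
        capture cookie SESSIONID= len 128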
New option 'maxcompcpuusage' in global section.
Sets the maximum CPU usage HAProxy can reach before stopping the
compression for new requests or decreasing the compression level of
current requests. It works like 'maxcomprate' but is driven by the
measured idle CPU percentage.
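Illustrative global settings combining both limits (values are arbitrary) :

    global
        # limit compression input to about 10 MB/s in total
        maxcomprate 10240
        # back off on compression above 75% CPU usage
        maxcompcpuusage 75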
This patch makes changes in the http_response_forward_body state
machine. It checks whether the compression algorithm has consumed data
before swapping the temporary and the input buffers. This prevents
zero-sized zlib chunks.
Disabling compression based on the content-type was improperly done since the
introduction of the COMP_READY flag, sometimes resulting in truncated responses.
There was a possible memory leak in the zlib code when the first response of
a keep-alive session was compressed, because the next request would reset the
compression algo, preventing a later call to session_free() from releasing it.
The fix is to release the assigned resources in
http_end_txn_clean_session().
Instead of storing a couple of (int, ptr) in the struct connection
and the struct session, we use a different method : we only store a
pointer to an integer which is stored inside the target object and
which contains a unique type identifier. That way, the pointer allows
us to retrieve the object type (by dereferencing it) and the object's
address (by computing the displacement in the target structure). The
NULL pointer always corresponds to OBJ_TYPE_NONE.
This reduces the size of the connection and session structs. It also
simplifies target assignment and comparison.
In order to improve the generated code, we try to put the obj_type
element at the beginning of all the structs (listener, server, proxy,
si_applet), so that the original and target pointers are always equal.
A lot of code was touched by massive replaces, but the changes are not
that important.
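A simplified sketch of the trick (details differ from the real code) :

    #include <stddef.h>

    enum obj_type {
        OBJ_TYPE_NONE = 0,
        OBJ_TYPE_LISTENER,
        OBJ_TYPE_PROXY,
        OBJ_TYPE_SERVER,
        /* ... */
    };

    struct server {
        enum obj_type obj_type;   /* placed first so that the type pointer
                                   * equals the object pointer */
        /* ... other fields ... */
    };

    /* dereference the pointer to retrieve the type; NULL = OBJ_TYPE_NONE */
    static inline enum obj_type obj_type(const enum obj_type *t)
    {
        return t ? *t : OBJ_TYPE_NONE;
    }

    /* recover the enclosing object by computing the displacement */
    static inline struct server *objt_server(enum obj_type *t)
    {
        if (obj_type(t) != OBJ_TYPE_SERVER)
            return NULL;
        return (struct server *)((char *)t - offsetof(struct server, obj_type));
    }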
Some servers are not totally HTTP-compliant when it comes to parsing the
Connection header. This is particularly true with WebSocket where it happens
from time to time that a server doesn't support having a "close" token along
with the "Upgrade" token in the Connection header. This broken behaviour has
also been noticed on some clients though the problem is less frequent on the
response path.
Sometimes the workaround consists in enabling "option http-pretend-keepalive"
to leave the request Connection header untouched, but this is not always the
most convenient solution. This patch introduces a new solution : haproxy now
also looks for the "Upgrade" token in the Connection header and if it finds
it, then it refrains from adding any other token to the Connection header
(though "keep-alive" and "close" may still be removed if found). The same is
done for the response headers.
This way, WebSocket works with fewer changes even when facing non-compliant
clients or servers. At least it fixes the DISCONNECT issue that was seen
on the websocket.org test.
Note that haproxy does not change its internal mode, it just refrains from
adding new tokens to the connection header.
si_fd() is not used a lot, and breaks builds on OpenBSD 5.2 which
defines this name for its own purpose. It's easy enough to remove
this one-liner function, so let's do it.
This patch adds input and output rate calculation to the HTTP compression
feature.
Compression can be limited with a maximum rate value in kilobytes per
second. The rate is set with the global 'maxcomprate' option. You can
change this value dynamically with 'set rate-limit http-compression
global' on the UNIX socket.
This optimisation causes haproxy to time out requests that result
in two TCP packets, one packet containing the header, and one
packet containing the actual data. This is a very typical type
of response from a lot of servers.
[Willy: I suspect the fix might have an impact on the compression code
which I'm not sure completely handles calls with 0 bytes to forward]
Some old browsers that have a user-agent starting with "Mozilla/4" do
not support compression correctly, so disable compression for those.
Internet explorer 6 after Windows XP service pack 2, IE 7, and IE 8,
do however support compression and still have a user agent starting
with Mozilla/4, so we try to enable compression for those.
MSIE has a user-agent of this form :
    Mozilla/4.0 (compatible; MSIE <version>; ...)
98% of MSIE 6 SP2 user agents start with
    Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1
The remaining 2% have additional flags before "SV1".
This simplified matching, looking for "MSIE" at exactly position 25
and "SV1" at exactly position 51, gives a few false negatives, so
sometimes a compression opportunity is lost.
A test against 3 hours of traffic to around 3000 news sites worldwide
gives less than 0.007% (70ppm) missed compression opportunities.
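The check described above boils down to something like this (a sketch of
the heuristic, not the literal haproxy code) :

    #include <string.h>

    /* User-Agent looks like :
     *   Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1...
     * with "MSIE" at offset 25 and "SV1" at offset 51 for the vast
     * majority of compression-capable MSIE user agents.
     */
    static int ua_is_compressible_msie(const char *ua, int len)
    {
        return len >= 54 &&
               memcmp(ua + 25, "MSIE", 4) == 0 &&
               memcmp(ua + 51, "SV1", 3) == 0;
    }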
Sample conversions rely on two alternative buffers which were previously
allocated as static bufs of size BUFSIZE. Now they're initialized to the
global buffer size. It was the same for HTTP authentication. Note that it
seems that none of them was prone to any mistake when dealing with the
buffer size, but better stay on the safe side by maintaining the old
assumption that a trash buffer is always "large enough".
The trash is used everywhere to store the results of temporary strings
built out of s(n)printf, or as a storage for a chunk when chunks are
needed.
Using global.tune.bufsize is not the most convenient thing either.
So let's replace trash with a chunk and directly use it as such. We can
then use trash.size as the natural way to get its size, and get rid of
many intermediary chunks that were previously used.
The patch is huge because it touches many areas but it makes the code
a lot more clear and even outlines places where trash was used without
being that obvious.
We don't want the lower layer to forward a close while we're compressing,
and we want the system to fuse outgoing TCP segments using MSG_MORE as
much as possible to save round trips that can emerge from sending short
packets with a PUSH flag.
A test on a busy remote DSL line, consisting in compressing on the fly
a 100MB file full of zeroes, showed a transfer rate of only a few kB/s
due to these round trips.
We will need to be able to switch server connections on a session and
to keep idle connections. In order to achieve this, the preliminary
requirement is that the connections can survive the session and be
detached from them.
Right now they're still allocated at exactly the same place, so when
there is a session, there are always 2 connections. We could soon
improve on this by allocating the outgoing connection only during a
connect().
This current patch touches a lot of code and intentionally does not
change any functionality. Performance tests show no regression (even
a very minor improvement). The doc has not yet been updated.
A number of older browsers have many issues with compressed contents. It
happens that all these older browsers announce themselves as "Mozilla/4"
and that despite not being all broken, the amount of working browsers
announcing themselves this way compared to all other ones is so tiny
that it's not worth wasting cycles trying to adapt to every specific
one.
So let's simply disable compression for these older browsers.
More information in this very detailed article :
http://zoompf.com/2012/02/lose-the-wait-http-compression
This commit introduces HTTP compression using the zlib library.
http_response_forward_body has been modified to call the compression
functions.
This feature includes 3 algorithms: identity, gzip and deflate:
* identity: this is mostly for debugging, and it was useful for
developing the compression feature. With Content-Length in input, it
is making each chunk with the data available in the current buffer.
With chunks in input, it is rechunking ; the output chunks will be
bigger or smaller depending on the size of the input chunk and the
size of the buffer. Identity does not apply any change to data.
* gzip: same as identity, but applying a gzip compression. The data
are deflated using the Z_NO_FLUSH flag in zlib. When there is no more
data in the input buffer, it flushes the data in the output buffer
(Z_SYNC_FLUSH). At the end of data, when it receives the last chunk in
input, or when there is no more data to read, it writes the end of
data with Z_FINISH and the ending chunk (see the sketch after this list).
* deflate: same as gzip, but with deflate algorithm and zlib format.
Note that this algorithm has ambiguous support on many browsers and
no support at all from recent ones. It is strongly recommended not
to use it for anything else than experimentation.
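A minimal sketch of the flush sequence described above, assuming zlib's
standard API (error handling reduced to the strict minimum) :

    #include <zlib.h>
    #include <string.h>

    /* 15+16 selects the gzip wrapper; Z_BEST_SPEED as explained below */
    static int init_gzip(z_stream *strm)
    {
        memset(strm, 0, sizeof(*strm));
        return deflateInit2(strm, Z_BEST_SPEED, Z_DEFLATED, 15 + 16, 8,
                            Z_DEFAULT_STRATEGY);
    }

    /* Compress one block. <flush> is Z_NO_FLUSH while input data remain,
     * Z_SYNC_FLUSH once the input buffer is drained, and Z_FINISH on the
     * last block. Returns the number of bytes produced, or -1 on error.
     */
    static int compress_block(z_stream *strm, const void *in, unsigned ilen,
                              void *out, unsigned olen, int flush)
    {
        strm->next_in   = (Bytef *)in;
        strm->avail_in  = ilen;
        strm->next_out  = (Bytef *)out;
        strm->avail_out = olen;

        if (deflate(strm, flush) == Z_STREAM_ERROR)
            return -1;
        return olen - strm->avail_out;
    }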
You can't choose the compression ratio at the moment ; it will be set to
Z_BEST_SPEED (1), as tests have shown very little benefit in terms of
compression ratio when going above for HTML contents, at the cost of
a massive CPU impact.
Compression will be activated depending on the Accept-Encoding request
header. With identity, it does not take care of that header.
To build HAProxy with zlib support, use USE_ZLIB=1 in the make
parameters.
This work was initially started by David Du Colombier at Exceliance.
This state's name is confusing as it is only used with chunked encoding
and makes newcomers think it's also related to the content-length. Let's
call it CHUNK_CRLF to clear any doubt on this.
These functions are not that long and the compiler inlines them well. Doing
so has sped up the chunked encoding parser by 41% !
Note that http_forward_trailers was also declared static because it's not
exported.
Commit ceb4ac9c states that IPv6 values are accepted by "hdr_ip" acl,
but the code didn't allow it. This patch provides the ability to accept IPv6
values.
Jaroslaw Bojar diagnosed an issue when haproxy switches to tunnel mode
after a transfer. The response data are sent with the MSG_MORE flag,
causing them to be needlessly queued in the kernel. In order to fix this,
we set the CF_NEVER_WAIT flag on the channels when switching to tunnel
mode.
One issue remained with client-side keep-alive : if the response is sent
before the end of the request, it suffers the same issue for the same
reason. This is easily addressed by setting the CF_SEND_DONTWAIT flag
on the channel when the response has been parsed and we're waiting for
the other side.
The same issue is present in 1.4 so the fix must be backported.
With this commit, we now separate the channel from the buffer. This will
allow us to replace buffers on the fly without touching the channel. Since
nobody is supposed to keep a reference to a buffer anymore, doing so is not
a problem and will also permit some copy-less data manipulation.
Interestingly, these changes have shown a 2% performance increase on some
workloads, probably due to a better cache placement of data.
These "buf" were confusing as they were really refering to channels. At
most places, a buffer was really all what was needed, so a struct buffer
was used instead. It is possible that the performance has slightly increased
by the removal of pointer offset in many pointer operations by directly
using the buffer pointer instead of the channel pointer.
Baptiste Assmann reported a bug causing a crash on recent versions when
sticking rules were set on layer 7 in a TCP proxy. The bug is easier to
reproduce with the "defer-accept" option on the "bind" line in order to
have some contents to parse when the connection is accepted. The issue
is that the acl_prefetch_http() function called from HTTP fetches relies
on hdr_idx to be preinitialized, which is not the case if there is no L7
ACL.
The solution consists in adding a new SMP_CAP_L7 flag to fetches to indicate
that they are expected to work on L7 data, so that the proxy knows that the
hdr_idx has to be initialized. This is already how ACL and HTTP mode are
handled.
The bug was present since 1.5-dev9.
While working on the changes required to make the health checks use the
new connections, it started to become obvious that some naming was not
logical at all in the connections. Specifically, it is not logical to
call the "data layer" the layer which is in charge of all the handshake
and which does not yet provide a data layer once established, until a
session has allocated all the required buffers.
In fact, it's more a transport layer, which makes much more sense. The
transport layer offers a medium on which data can transit, and it offers
the functions to move these data when the upper layer requests this. And
it is the upper layer which iterates over the transport layer's functions
to move data which should be called the data layer.
The use case where it's obvious is with embryonic sessions : an incoming
SSL connection is accepted. Only the connection is allocated, not the
buffers nor stream interface, etc... The connection handles the SSL
handshake by itself. Once this handshake is complete, we can't use the
data functions because the buffers and stream interface are not there
yet. Hence we have to first call a specific function to complete the
session initialization, after which we'll be able to use the data
functions. This clearly proves that SSL here is only a transport layer
and that the stream interface constitutes the data layer.
A similar change will be performed to rename app_cb => data, but the
two could not be in the same commit for obvious reasons.
Since at least commit a458b679, msg->sov could become negative in
http_parse_chunk_size() if a chunk size wrapped around the buffer.
The effect is that at some point channel_forward() was called with
a negative size, causing all data to be transferred without being
analyzed anymore.
Since haproxy does not support keep-alive with the server yet, this
issue is not really noticeable, as the server closes the connection
in response. Still, when tunnel mode or pretend-keepalive is used,
it is possible to see the problem.
This issue was reported and diagnosed by William Lallemand at
Exceliance.
Cyril Bonté reported a mangled debug output when an invalid request
was sent with a faulty request line. The reason was the use of the
msg->sl.rq.l offset which was not yet initialized in this case. So
we change the way to report such an error so that first we initialize
it to zero before parsing a message, then we use that to know whether
we can trust it or not. If it's still zero, then we display the whole
buffer, truncated by debug_hdr() to the first CR or LF character, which
results in the first line only.
The same operation was performed for the response, which was wrong too.
We need to have the source and destination addresses in the connection.
They were lying in the stream interface so let's move them. The flags
SI_FL_FROM_SET and SI_FL_TO_SET have been moved as well.
It's worth noting that tcp_connect_server() almost does not use the
stream interface anymore except for a few flags.
It has been identified that once we detach the connection from the SI,
it will probably be needed to keep a copy of the server-side addresses
in the SI just for logging purposes. This has not been implemented right
now though.
This is a massive rename of most functions which should make use of the
word "channel" instead of the word "buffer" in their names.
It concerns the following ones (new names) :
unsigned long long channel_forward(struct channel *buf, unsigned long long bytes);
static inline void channel_init(struct channel *buf)
static inline int channel_input_closed(struct channel *buf)
static inline int channel_output_closed(struct channel *buf)
static inline void channel_check_timeouts(struct channel *b)
static inline void channel_erase(struct channel *buf)
static inline void channel_shutr_now(struct channel *buf)
static inline void channel_shutw_now(struct channel *buf)
static inline void channel_abort(struct channel *buf)
static inline void channel_stop_hijacker(struct channel *buf)
static inline void channel_auto_connect(struct channel *buf)
static inline void channel_dont_connect(struct channel *buf)
static inline void channel_auto_close(struct channel *buf)
static inline void channel_dont_close(struct channel *buf)
static inline void channel_auto_read(struct channel *buf)
static inline void channel_dont_read(struct channel *buf)
Some functions provided by channel.[ch] have kept their "buffer" name because
they are really designed to act on the buffer according to some information
gathered from the channel. They have been moved together to the same place in
the file for better readability but they were not changed at all.
The "buffer" memory pool was also renamed "channel".
Get rid of these confusing BF_* flags. Now channel naming should clearly
be used everywhere appropriate.
No code was changed, only a renaming was performed. The comments about
channel operations were updated.
These functions do not depend on the channel flags anymore thus they're
much better suited to be used on plain buffers. Move them from channel
to buffer.
This flag is quite complex to get right and updating it everywhere is a
major pain, especially since the buffer/channel split. This is the first
step of getting rid of it. Instead now it's dynamically computed whenever
needed.
This flag was very problematic because it was composite in that both changes
to the pipe or to the buffer had to cause this flag to be updated, which is
not always simple (eg: there may not even be a channel attached to a buffer
at all).
There were not that many users of this flag, mostly setters. So the flag got
replaced with a macro which reports whether the channel is empty or not, by
checking both the pipe and the buffer.
One part of the change is sensitive : the flag was also part of BF_MASK_STATIC,
which is used by process_session() to rescan all analysers in case the flag's
status changes. At first glance, none of the analysers seems to change its
mind based on this flag when it is subject to change, so it seems fine not to
add variation checks here. Otherwise it's possible that checking the buffer's
output size is more useful than checking the flag's replacement.
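That replacement boils down to a test like the following sketch (field names
assumed from the buffer/pipe model described in this series) :

    struct pipe;                    /* kernel pipe holding spliced data */

    struct buffer {
        unsigned int o;             /* pending outgoing bytes */
        /* ... */
    };

    struct channel {
        struct buffer buf;          /* still embedded at this point */
        struct pipe *pipe;          /* non-NULL when spliced data remain */
        /* ... */
    };

    /* replaces the former BF_OUT_EMPTY flag */
    static inline int channel_is_empty(const struct channel *chn)
    {
        return !chn->buf.o && !chn->pipe;
    }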
The "raw_sock" prefix will be more convenient for naming functions as
it will be prefixed with the data layer and suffixed with the data
direction. So let's rename the files now to avoid any further confusion.
The #include directive was also removed from a number of files which do
not need it anymore.
At the moment, the struct is still embedded into the struct channel, but
all the functions have been updated to use struct buffer only when possible,
otherwise struct channel. Some functions would likely need to be split
between a buffer-layer primitive and a channel-layer function.
Later the buffer should become a pointer in the struct channel, but doing so
requires a few changes to the buffer allocation calls.
This is a massive rename. We'll then split channel and buffer.
This change needs a lot of cleanups. At many locations, the parameter
or variable is still called "buf" which will become ambiguous. Also,
the "struct channel" is still defined in buffers.h.
fdtab[].state was only used to know whether a connection was in progress
or an error was encountered. Instead we now use connection->flags to store
a flag for both. This way, connection management will be able to update the
connection status on I/O.
Commit fa7e1025 (1.3.16-rc1) introduced a minor bug by comparing req->flags
with BF_READ_ERROR instead of checking for the bit. The result is that the
error message is always returned even in case of client error. This has no
real impact but it must be fixed.
It may be backported to 1.4 and 1.3.
This one returns the concatenation of the first Host header entry with
the path. It can make content-switching rules easier, help with fighting
DDoS on certain URLs and improve shared caches efficiency.
Doing so allows us to support sticking on URL, URL's IP, URL's port and
path.
Both fetch functions should be improved to support an optional depth
allowing to stick to a server depending on just a few directory
components. This would help with portals, some prefetch-capable
caches and with outgoing connections using multiple internet links.
Commit 827aee91 merged in 1.5-dev5 introduced a regression causing
the srv pointer to be tested twice instead of srv then srv->cookie.
The result is that if a server has no cookie in prefix mode, haproxy
will crash when trying to modify it.
Such a config is very unlikely to happen, except maybe with a backup
server, which would cause haproxy to die with the last server in the
farm.
No backport is needed, only 1.5-dev was affected.
It was not possible to kill remaining sessions from the admin interface,
which is annoying especially when switching to maintenance mode. Now it's
possible.
httponly  This option tells haproxy to add an "HttpOnly" cookie attribute
          when a cookie is inserted. This attribute is used so that a
          user agent doesn't share the cookie with non-HTTP components.
          Please check RFC6265 for more information on this attribute.

secure    This option tells haproxy to add a "Secure" cookie attribute when
          a cookie is inserted. This attribute is used so that a user agent
          never emits this cookie over non-secure channels, which means
          that a cookie learned with this flag will be presented only over
          SSL/TLS connections. Please check RFC6265 for more information on
          this attribute.
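For example (illustrative backend) :

    backend app
        cookie SERVERID insert indirect nocache httponly secure
        server s1 10.0.0.1:443 cookie s1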
Cookies were mixed with many other options while they're not used as options.
Move them to a dedicated bitmask (ck_opts). This has released 7 flags in the
proxy options and leaves some room for new proxy flags.
Commit d1de8af362 was incomplete, because
perform_http_redirect() also needs to rewind the buffer since it's called
after data are scheduled for forwarding.
No backport needed.
When "option forwardfor" is enabled in a frontend that uses backends,
"if-none" ignores the header name provided in the frontend.
This prevents haproxy from adding the X-Forwarded-For header if the option
is not used in the backend.
This may introduce security issues for servers/applications that rely on the
header provided by haproxy.
A minimal configuration which can reproduce the bug:
    defaults
        mode http

    listen OK
        bind :9000
        option forwardfor if-none
        server s1 127.0.0.1:80

    listen BUG-frontend
        bind :9001
        option forwardfor if-none
        default_backend BUG-backend

    backend BUG-backend
        server s1 127.0.0.1:80
The state and the private pointer are not specific to the applets, since SSL
will require exactly both of them. Move them to the connection layer now and
rename them. We also now ensure that both are NULL on first call.
We start to move everything needed to manage a connection to a special
entity "struct connection". We have the data layer operations and the
control operations there. We'll also have more info in the future such
as file descriptors and applet contexts, so that in the end it becomes
detachable from the stream interface, which will allow connections to
be reused between sessions.
For now, we start with minimal changes.
msg->som was zero before the body and was used to carry the beginning
of a chunk size for chunked-encoded messages, at a moment when msg->sol
is always zero.
Remove msg->som and replace it with msg->sol where needed.
Since the recent buffer reorg, msg->som is redundant with buf->p but still
appears at a number of places. This tiny patch allows to confirm that som
follows two states :
- 0 from the moment the message starts to be parsed
- relative offset to ->p for start of chunk when parsing chunks
During this second state, ->sol is never used, so we should probably merge
the two.
This is a left-over from the buffer changes. Msg->sol is always null at the
end of the parsing, so we must not use it anymore to read headers or find
the beginning of a message. As a side effect, the dump of the request in
debug mode is working again because it was relying on msg->sol not being
null.
Maybe it will even be mergeable with another of the message pointers.
The recent split between the buffers and HTTP messages in 1.5-dev9 caused
a major trouble : in the past, we used to keep a pointer to HTTP data in the
buffer struct itself, which was the cause of most of the pain we had to deal
with buffers.
Now the two are split but we lost the information about the beginning of
the HTTP message once it's being forwarded. While it seems normal, it happens
that several parts of the code currently rely on this ability to inspect a
buffer containing old contents :
- balance uri
- balance url_param
- balance url_param check_post
- balance hdr()
- balance rdp-cookie()
- http-send-name-header
All these happen after the data are scheduled for being forwarded, which
also causes a server to be selected. So for a long time we've been relying
on supposedly sent data that we still had a pointer to.
Now that we don't have such a pointer anymore, we only have one possibility :
when we need to inspect such data, we have to rewind the buffer so that ->p
points to where it previously was. We're lucky : no data can leave the buffer
before it's connected outside, and since no inspection can begin until
it's empty, we know that the skipped data are exactly ->o. So we rewind the
buffer by ->o to get the headers, then advance it back by the same amount.
Proceeding this way is particularly important when dealing with chunked-
encoded requests, because the ->som and ->sov fields may be reused by the
chunk parser before the connection attempt is made, so we cannot rely on
them.
Also, we need to be able to come back after retries and redispatches, which
might change the size of the request if http-send-name-header is set. All of
this is accounted for by the output queue so in the end it does not look like
a bad solution.
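The rewind dance looks roughly like this sketch (wrapping is ignored here,
and b_adv()/b_rew() are the helper names proposed elsewhere in this series) :

    struct buffer {
        char *p;            /* origin, between outgoing and incoming data */
        unsigned int i;     /* incoming data, after p */
        unsigned int o;     /* outgoing data, before p */
        /* ... */
    };

    static inline void b_adv(struct buffer *b, unsigned int adv)
    {
        b->i -= adv;
        b->o += adv;
        b->p += adv;        /* wrapping ignored in this sketch */
    }

    static inline void b_rew(struct buffer *b, unsigned int adv)
    {
        b->i += adv;
        b->o -= adv;
        b->p -= adv;        /* wrapping ignored in this sketch */
    }

    /* inspect headers which were already scheduled for forwarding */
    void inspect_old_headers(struct buffer *buf)
    {
        unsigned int rewind = buf->o;

        b_rew(buf, rewind); /* ->p points back to the headers */
        /* ... header lookup relative to buf->p goes here ... */
        b_adv(buf, rewind); /* restore ->p */
    }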
No backport is needed.
Before it was possible to resize the buffers using global.tune.bufsize,
the trash has always been the size of a buffer by design. Unfortunately,
the recent buffer sizing at runtime forgot to adjust the trash, resulting
in it being too short for content rewriting if buffers were enlarged from
the default value.
The bug was encountered in 1.4 so the fix must be backported there.
This flag indicates that we're not interested in keeping half-open
connections on a stream interface. It has the benefit of allowing
the socket layer to cause an immediate write close when detecting
an incoming read close. This releases resources much faster and
saves one syscall (either a shutdown or setsockopt).
This flag is only set by HTTP on the interface going to the server
since we don't want to continue pushing data there when it has
closed.
Another benefit is that it responds with a FIN to a server's FIN
instead of responding with an RST as it used to, which is much
cleaner.
Performance gains of 7.5% have been measured on HTTP connection
rate on empty objects.
Commit 5e205524 was a bit overzealous by unconditionally enabling
quick ack when a request is not yet in the buffer, because it also
does so when nothing has been received yet, causing a useless ACK
to be emitted.
Improve the situation by doing this only if the input buffer is
empty (indicating that nothing was sent by the client).
In case of keep-alive, an empty buffer means we already have a
response in flight which will serve as an ACK.
Commit e164e7a removed get_src/get_dst setting in the stream interfaces but
forgot to set it in proto_tcp. Get the feature back because we need it for
logging, transparent mode, ACLs etc... We now rely on the stream interface
direction to know what syscall to use.
One benefit of doing it this way is that we don't use getsockopt() anymore
on outgoing stream interfaces nor on UNIX sockets.
We'll soon have an SSL socket layer, and in order to ease the difference
between the two, we use the name "sock_raw" to designate the one which
directly talks to the sockets without any conversion.
http_auth and http_auth_group used to share the same fetch function, while
they're doing very different things. The first one only checks whether the
supplied credentials are valid wrt a userlist while the second not only
checks this but also checks group ownership from a list of patterns.
Recent acl/pattern merge caused a simplification here by which the fetch
function would always return a boolean, so the group match was always fine
if the user:password was valid, regardless of the patterns provided with
the ACL.
The proper solution consists in splitting the function in two, depending
on what is desired.
It's also worth noting that check_user() would probably be split, one to
check user:password, and the other one to check for group ownership for
an already valid user:password combination. At this point it is not certain
if the group mask is still useful or not considering that the passwd check
is always made.
This bug was reported and diagnosed by Cyril Bonté. It first appeared
in 1.5-dev9 so it does not need any backporting.
I introduced a regression in commit 19979e176e while reworking the admin
actions results.
"Unexpected result" was displayed even if the action was applied due to a
misplaced initialization. This small patch should fix it.
Note: no need to backport.
There is no more reason for the realign function being HTTP specific,
it only operates on a buffer now. Let's move it to buffers.c instead.
It's likely that buffer_bounce_realign is broken (not used), this will
have to be inspected. The function is worth rewriting as it can be
cheaper than buffer_slow_realign() to realign large wrapping buffers.
A number of important information were missing from the error captures, so
let's improve them. Now we also log source port, session flags, transaction
flags, message flags, pending output bytes, expected buffer wrapping position,
total bytes transferred, message chunk length, and message body length.
As such, the output format has slightly evolved and the source address moved
to the third line :
  [08/May/2012:11:14:36.341] frontend echo (#1): invalid request
    backend echo (#1), server <NONE> (#-1), event #1
    src 127.0.0.1:40616, session #4, session flags 0x00000000
    HTTP msg state 26, msg flags 0x00000000, tx flags 0x00000000
    HTTP chunk len 0 bytes, HTTP body len 0 bytes
    buffer flags 0x00909002, out 0 bytes, total 28 bytes
    pending 28 bytes, wrapping at 8030, error at position 7:

    00000  GET / /?t=20000 HTTP/1.1\r\n
    00026  \r\n

  [08/May/2012:11:13:13.426] backend echo (#1) : invalid response
    frontend echo (#1), server local (#1), event #0
    src 127.0.0.1:40615, session #1, session flags 0x0000044e
    HTTP msg state 32, msg flags 0x0000000e, tx flags 0x08200000
    HTTP chunk len 0 bytes, HTTP body len 20 bytes
    buffer flags 0x00008002, out 81 bytes, total 92 bytes
    pending 11 bytes, wrapping at 7949, error at position 9:

    00000  Foo: bar\r\r\n
Since the beginning of buffer&msg changes, the error position (err_pos)
had not completely been converted and some offsets still appear wrong.
Now we ensure that everywhere msg->err_pos is relative to buf->p and
we always report buf->i bytes starting at buf->p in all error captures,
which ensures that err_pos is there.
This is not exactly a bug and is specific to latest changes so no backport
is needed.
Commit 81f2fb added support for wrapping buffer captures, but unfortunately
the code used to perform two memcpy() over the same destination, causing a
loss of the start of the buffer rendering some error snapshots unusable.
This bug is present in 1.4 too and must be backported.
These operators are used regardless of the socket protocol family. Move
them to a "sock_ops" struct. ->read and ->write have been moved there too
as they have no reason to remain at the protocol level.
This is mainly a massive renaming in the code to get it in line with the
calling convention. Next patch will rename a few files to complete this
operation.
All parsing errors were known but impossible to return. Now by making use
of memprintf(), we're able to build meaningful error messages that the
caller can display.
It's easy to merge pattern and ACL fetches of cookies. It allows us
to remove two distinct fetch functions. The new function internally
uses an occurrence number to serve both purposes, but it didn't appear
worth exposing it outside so there is no keyword argument to set it.
However one of the benefits is that the "cookie" fetch for stick tables
now automatically adapts to requests and responses, so there is no more
need for set-cookie().
HTTP header fetch is now done using smp_fetch_hdr() for both ACLs and
patterns. This one also supports an occurrence number, making it possible
to specify explicit occurrences for ACLs and patterns.
src_port, dst_port and url_param have converged between ACLs and patterns.
This means that src_port is now available in patterns and that urlp_* has
been added to ACLs. Some code has moved to accommodate static function
definitions, but there were few changes.
Patterns were using a bitmask to indicate if request or response was desired
in fetch functions and keywords. ACLs were using a bitmask in fetch keywords
and a single bit in fetch functions. ACLs were also using an ACL_PARTIAL bit
in fetch functions indicating that a non-final fetch was performed, which was
an abuse of the existing direction flag.
The change now consists in using :
- a capabilities field for fetch keywords => SMP_CAP_REQ/RES to indicate
if a keyword supports requests, responses, both, etc...
- an option field for fetch functions to indicate what the caller expects
(request/response, final/non-final)
The ACL_PARTIAL bit was reversed to get SMP_OPT_FINAL as it's more explicit
to know we're working on a final buffer than on a non-final one.
ACL_DIR_* were removed, as well as PATTERN_FETCH_*. L4 fetches were improved
to support being called on responses too since they're still available.
The <dir> field of all fetch functions was changed to <opt> which is now
unsigned.
The patch is large but mostly made of cosmetic changes to accommodate this, as
almost no logic change happened.
Having the args everywhere will make it easier to share fetch functions
between patterns and ACLs. The only place where we could have needed
the expr was in the http_prefetch function which can do well without.
We need the pattern fetchers and converters to correctly set the output type
so that they can be used by ACL fetchers. By using the sample type instead of
the keyword type, we also open the possibility to create some multi-type
pattern fetch methods later (eg: "src" being v4/v6). Right now the type in
the keyword is used to validate the configuration.
Now there is no more reference to union pattern_data. All pattern fetch and
conversion functions now make use of the common sample type. Note: none of
them adjust the type right now so it's important to do it next otherwise
we would risk sharing such functions with ACLs and seeing them fail.
These ones were either unused or improperly used. Some integers were marked
read-only, which does not make much sense. Buffers are not read-only, they're
"constant" in that they must be kept intact after any possible change.
This one is not needed anymore as we can return the data and its type in the
sample provided by the caller. ACLs now always return the proper type. BOOL
is already returned when the result is expected to be processed as a boolean.
temp_pattern has been unexported now.
The new sample types are necessary for the acl-pattern convergence.
These types are boolean and signed int. Some types were renamed for
less ambiguity (ip->ipv4, integer->uint).
This flag was used to force a boolean match even if there was no pattern
to match. It was used only by http_auth() and designed only for this one.
It's easier and cleaner to make the fetch function perform the test and
report the boolean result as a few other functions already do. It simplifies
the acl_exec_cond() logic and will help merging ACLs and patterns.
This is used to validate that arguments are coherent. For instance,
payload_lv expects that the last arg (if any) is not more negative
than the sum of the first two. The error is reported if any.
We don't need the pattern-specific args parsers anymore, make use of the
common parser instead. We still need to improve this by adding a validation
function to report abnormal argument values or combinations. We don't report
precise parsing errors yet but this was not previously done either.
arg_i was almost unused, and since we migrated to use struct arg everywhere,
the rare cases where arg_i was needed could be replaced by switching to
arg->type = ARGT_STOP.
The types and minimal number of ACL keyword arguments are now stored in
their declaration. This will allow many more fantasies if some ACL use
several arguments or types.
Doing so required to rework all ACL keyword declarations to add two
parameters. So this was a good opportunity for a general cleanup and
to sort all entries in alphabetical order.
We still have two pending issues :
- parse_acl_expr() checks for errors but has no way to report them to
the user ;
- the types of some arguments are still not resolved and kept as strings
(eg: ARGT_FE/BE/TAB) for compatibility reasons, which must be resolved
in acl_find_targets()
The ACL parser now uses the argument parser to build a typed argument list.
Right now arguments are all strings and only one argument is supported since
this is what ACLs currently support.
Latest changes have made it possible to remove all differences between
request and response processing, making it worth merging request and
response ACL fetch functions to reduce code size.
Most likely with minor adaptation it will be possible to use the same hdr_*
functions to match in the response path, and cook_* for the response cookie
too.
ACLs are volatile since they require a fetch of request buffer data which is
then copied to a temporary shared place. The issue is minor though since auth
is generally checked very early.
All ACLs which need to process HTTP contents first call this function which
performs all the preliminary tests and also triggers the request parsing if
needed. A macro was written to simplify the code.
As a side effect, it's not required anymore to check for the HTTP ACL before
checking for HTTP contents.
This function will be called by all ACL fetch functions. Right now all ACL
fetch functions have to perform the exact same tests to check whether data
are available. Also, only one of them is able to actually parse an HTTP
request.
Using the prefetch function, it will be possible to try to parse a request
on the fly and to avoid the fetch if some data are missing. This will
significantly reduce the amount of tests in all ACL fetch functions.
buffer_wrap_add was convenient for the migration but is not handy at all.
Let's have new wrappers that report input begin/end and output begin/end
instead.
It looks like we'll also need a b_adv(ofs) to advance a buffer's pointer.
buffer_ignore may only be used when the output of a buffer is empty,
but it's not granted it is always the case when sending HTTP error
responses. Better use buffer_cut_tail() instead, and use buffer_ignore
only on non-wrapping data.
The buffer pointer is now taken from the http_msg in the following
functions :
http_parse_chunk_size
http_forward_trailers
http_skip_chunk_crlf
Most internal pointers were converted to const as the result of the
operation.
The buffer pointer is now taken from the http_msg in the following functions :
- http_remove_header2
- http_header_add_tail
- http_header_add_tail2
- http_parse_connection_header
- http_change_connection_header
msg->sol is now a relative pointer just like all other ones. There is no
more absolute references to the buffer outside the struct buffer itself.
Next two cleanups should include removing buffer references to functions
which already have an msg, and removal of wrapping detection in request
and response parsing which cannot wrap by definition.
ACLs and patterns only rely on a struct http_msg and don't know the pointer
to the actual data. struct http_msg will soon only hold relative references
so that's not possible. We need http_msg to hold a reference to the struct
buffer before having relative pointers everywhere.
It is likely that doing so will also result in opportunities to simplify
a number of functions arguments. The following functions are already
candidate :
http_buffer_heavy_realign
http_capture_bad_message
http_change_connection_header
http_forward_trailers
http_header_add_tail
http_header_add_tail2
http_msg_analyzer
http_parse_chunk_size
http_parse_connection_header
http_remove_header2
http_send_name_header
http_skip_chunk_crlf
http_upgrade_v09_to_v10
These offsets were relative to the buffer itself. Now they're relative to
the buffer's origin (buf->p) which normally corresponds to the start of
current message.
This saves a big dependency between the HTTP message struct and the buffers.
It appeared during this change that ->col is not used anymore (it will have
to be removed). Next step is to turn ->eol and ->sol from absolute to relative.
The buffer's pointer <lr> was only used by HTTP parsers which also use a
struct http_msg to keep track of the parser's state. We've reached a point
where it makes no sense to keep ->lr in the buffer, as the split between
buffer and msg is only arbitrary for historical reasons.
This change ensures that touching buffers will not impact HTTP messages
anymore, making the buffers more content-agnostic. However, it becomes
very important not to forget to update msg->next when some data get
forwarded or moved (and in general each time buf->p is updated).
The new pointer in http_msg becomes relative to buffer->p so that
parsing multiple messages becomes easier. It is possible that at one
point ->som and ->next will be merged.
Note: http_parse_reqline() and http_parse_stsline() have been temporarily
modified to know the message starting point in the buffer (->p).
This change gets rid of buf->r which is always equal to buf->p + buf->i.
It removed some wrapping detection at a number of places, but required addition
of new relative offset computations at other locations. A large number of places
can be simplified now with extreme care, since most of the time, either the
pointer has to be computed once or we need a difference between the old ->w and
old ->r to compute free space. The cleanup will probably happen with the rewrite
of the buffer_input_* and buffer_output_* functions anyway.
buf->lr still has to move to the struct http_msg and be relative to buf->p
for the rework to be complete.
This change introduces the buffer's base pointer, which is the limit between
incoming and outgoing data. It's the point where the parsing should start
from. A number of computations have already been greatly simplified, but
more simplifications are expected to come from the removal of buf->r.
The changes appear good and have revealed occasional improper use of some
pointers. It is possible that this patch has introduced bugs or revealed
some, although preliminary testings tend to indicate that everything still
works as it should.
We don't have buf->l anymore. We have buf->i for pending data and
the total length is retrieved by adding buf->o. Some computation
already become simpler.
Despite extreme care, bugs are not excluded.
It's worth noting that msg->err_pos as set by HTTP request/response
analysers becomes relative to pending data and not to the beginning
of the buffer. This has not been completed yet so differences might
occur when outgoing data are left in the buffer.
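The relationship between the old and new pointers can be summarized by
this sketch (wrapping ignored) :

    struct buffer {
        char *p;            /* origin : boundary between out and in data */
        unsigned int i;     /* incoming (pending) data, after p */
        unsigned int o;     /* outgoing data, before p */
        /* ... */
    };

    static inline unsigned int buffer_len(const struct buffer *b)
    {
        return b->i + b->o; /* former b->l */
    }

    static inline char *buffer_in_end(const struct buffer *b)
    {
        return b->p + b->i; /* former b->r */
    }

    static inline char *buffer_out_begin(const struct buffer *b)
    {
        return b->p - b->o; /* former b->w */
    }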
Too many flags are stored in the transaction structure. Some flags are
clearly message-specific and exist in two versions (request and response).
Move them to a new "flags" field in the http_message struct instead.
There were a few unchecked write() calls in the debug code that cause
gcc 4.x to emit warnings on recent libc. We don't want to check them
as we can't make anything from the result, let's simply surround them
with an empty if statement.
Note that one of the warnings was for chdir("/") which normally cannot
fail since it follows a successful chroot (which means the perms are
necessarily there). Anyway let's move the call up to protect it too.
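For the record, the workaround looks like this (a sketch of the pattern
described above, not the exact code) :

    #include <unistd.h>

    /* Silence the unused-result warning on write() by explicitly
     * testing the return value, even though nothing useful can be
     * done with it here.
     */
    static void send_debug(int fd, const void *msg, size_t len)
    {
        if (write(fd, msg, len) < 0) {
            /* nothing we can do about a failed debug write */
        }
    }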
The issue only happens when DEBUG_FULL is enabled, which causes
http_msg_analyzer() to complain if it's called twice with an invalid
message, for instance because of two consecutive ACLs using req_proto_http.
The code is commented out when DEBUG_FULL is disabled, so this is not a bug,
just an annoyance for the developer.
The three warnings below are totally wrong since the variables depend on another
one which is only turned on when the variables are initialized. Still this gcc-4.1.2
isn't able to see this and prefers to complain wrongly. So let's initialize the
variables to shut it up since we're not in the fast path.
src/proto_http.c: In function 'acl_fetch_any_cookie_cnt':
src/proto_http.c:8393: warning: 'val_end' may be used uninitialized in this function
src/proto_http.c: In function 'http_process_req_stat_post':
src/proto_http.c:2577: warning: 'st_next_param' may be used uninitialized in this function
src/proto_http.c:2577: warning: 'st_cur_param' may be used uninitialized in this function
It's very annoying that we have to deal with the crappy size_t and with ints
at some places because these ones don't mix well. Patch 6f61b2 changed the
chunk len to int but its size remains size_t and some functions are having
trouble being used by several callers depending on the type of their arguments.
Let's turn extract_cookie_value() to int from now on, and plan a massive cleanup
later to remove all size_t.
These callbacks are used to retrieve the source and destination address
of a socket. The address flags are now held on the stream interface and
not on the session anymore. The addresses are collected when needed.
This still needs to be improved to store the IP and port separately so
that it is not needed to perform a getsockname() when only the IP address
is desired for outgoing traffic.
The Unique ID is an ID generated from several pieces of information. You can use
a log-format string to customize it, with the "unique-id-format" keyword,
and insert it in the request header, with the "unique-id-header" keyword.
%Fi: Frontend IP
%Fp: Frontend Port
%Si: Server IP
%Sp: Server Port
%Ts: Timestamp
%rt: HTTP request counter
%H: hostname
%pid: PID
+X: Hexadecimal representation
The +X mode in logformat displays hexadecimal for the following flags
%Ci %Cp %Fi %Fp %Bi %Bp %Si %Sp %Ts %ct %pid
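For example, a configuration could look like this (a hedged sketch based
on the keywords and variables above; the header name is arbitrary and the
exact flag syntax follows the logformat conventions of the time) :

    frontend fe
        unique-id-format %{+X}o\ %Ci:%Cp_%Fi:%Fp_%Ts_%rt:%pid
        unique-id-header X-Unique-ID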
rename logformat_write_string() to lf_text()
Optimize size computation
The ACL matches rely on the extract_cookie_value() function as used for
patterns. This permits ACLs to match cookie values based on the cookie
name instead of having to perform substring matching on the cookie header.
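For example (a hedged sketch, assuming the criterion is exposed as
cook(<name>) and that the backend name is arbitrary) :

    frontend fe
        acl cookie_fr cook(lang) fr
        use_backend fr_site if cookie_fr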
Sometimes it is desirable to forward a particular request to a specific
server without having to declare a dedicated backend for this server. This
can be achieved using the "use-server" rules. These rules are evaluated after
the "redirect" rules and before evaluating cookies, and they have precedence
on them. There may be as many "use-server" rules as desired. All of these
rules are evaluated in their declaration order, and the first one which
matches will assign the server.
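Example (a sketch of the syntax described above; all names are arbitrary) :

    backend static
        acl site1 hdr_beg(host) -i www1.
        use-server srv1 if site1
        server srv1 192.168.0.1:80 check
        server srv2 192.168.0.2:80 check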
memcmp()/strcmp() calls were needed in different parts of code to determine
the status code. Each new status code introduces new calls, which can become
inefficient and source of bugs.
This patch reorganizes the code to rely on a numeric status code internally
and to be hopefully more generic.
Previously, the stats admin page required POST parameters to be provided
exactly in the same order as the HTML form.
This patch allows those parameters to be handled in any order.
Also, note that haproxy won't alter server states anymore if backend or server
names are ambiguous (duplicated names in the configuration) to prevent
unexpected results (the same should probably be applied to the stats socket).
Olufemi Omojola provided a config and a core showing a possible crash
when captures are configured on a TCP-mode frontend which branches to
an HTTP backend. The reason is that being in TCP mode, the frontend
does not allocate capture pools for the request, but the HTTP backend
tries to use them and dies on the NULL.
While such a config has long been unlikely to happen, it looks like
people using websocket tend to do this more often now.
Change the control to use the pointer instead of the number of captures
to know when to log.
This bug was reported in 1.4.20, so it must be backported there.
Merge http_sess_log() and tcp_sess_log() into sess_log() and move it to
log.c.
A new field in logformat_type defines whether a logformat variable can
be used in TCP or HTTP mode.
doc: log-format in tcp mode
Note that due to the way log buffer allocation currently works, trying to
log an HTTP request without "option httplog" is still not possible. This
will change in the near future.
Commits 5c6209 and 072930 were aimed at avoiding undesirable PUSH flags
when forwarding chunked data, but had the undesired effect of causing
data advertised by content-length to be affected by the delayed ACK too.
This can happen when the data to be forwarded are small enough to fit into
a single send() call, otherwise the BF_EXPECT_MORE flag would be removed.
Content-length data don't need the BF_EXPECT_MORE flag since the low-level
forwarder already knows it can safely rely on bf->to_forward to set the
appropriate TCP flags.
Note that the issue is only observed in requests at the moment, though the
later introduction of server-side keep-alive could trigger the issue on the
response path too.
Special thanks to Randy Shults for reporting this issue with a lot of
details helping to reproduce it.
The fix must be backported to 1.4.
When a request completes on a server and the server connection is closed
while the client connection stays open, the HTTP engine releases all server
connection slots and scans the queues to offer the connection slot to
another pending request.
An issue happens when the released connection allows other requests to be
dequeued : may_dequeue_tasks() relies on srv->served which is only decremented
by sess_change_server() which itself is only called after may_dequeue_tasks().
This results in no connection being woken up until another connection terminates
so that may_dequeue_tasks() is called again.
This fix is minimalist and only moves sess_change_server() earlier (which is
safe). It should be reworked and the code factored out so that the same occurrence
in session.c shares the same code.
This bug has been there since the introduction of option-http-server-close and
the fix must be backported to 1.4.
Since commit 115acb97, chunk size was limited to 256MB. There is no reason for
such a limit and the comment on the code suggests a missing zero. However,
increasing the limit past 2 GB causes trouble due to some 32-bit subtracts
in various computations becoming negative (eg: buffer_max_len). So let's limit
the chunk size to 2 GB - 1 max.
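The bound involved can be sketched like this (illustrative only, not the
parser's exact code; 0x7FFFFFFF is 2^31 - 1, ie 2 GB - 1) :

    #define MAX_CHUNK_SIZE 0x7FFFFFFFULL   /* 2 GB - 1 */

    /* Reject chunk sizes that would make downstream 32-bit signed
     * computations (eg: buffer_max_len) go negative.
     */
    static int chunk_size_valid(unsigned long long chunk_size)
    {
        return chunk_size <= MAX_CHUNK_SIZE;
    }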
commit a1cc3811 introduced an undesirable \0\n ending on HTTP log messages. This
is because of an extra character count passed to __send_log() which causes the LF
to be appended past the \0. Some syslog daemons thus log an extra empty line. The
fix is obvious. Fix the function comments to remind what they expect on their input.
This is past 1.5-dev7 regression so there's no backport needed.
http_sess_log() now uses the logformat linked list to build the log
string; snprintf() is not used, for performance reasons.
CLF mode also uses logformat.
NOTE: as of now, empty fields in CLF are "" and not "-" anymore.
Marcello Gorlani reported that commit 5e205524ad
(BUG: http: re-enable TCP quick-ack upon incomplete HTTP requests) broke build
on FreeBSD.
Moving the include lower fixes the issue. This must be backported to 1.4 too.
These ones are invalid and blocked unless "option accept-invalid-http-request"
is specified in the frontend. In any case, the faulty request is logged.
Note that some of the remaining invalid chars are still not checked against,
those are the invalid ones between 32 and 127 :
34 ('"'), 60 ('<'), 62 ('>'), 92 ('\'), 94 ('^'),
96 ('`'), 123 ('{'), 124 ('|'), 125 ('}')
Using a lookup table might be better at some point.
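Such a table could look like this (purely hypothetical sketch, since the
current code does not use one; a non-zero entry marks one of the remaining
invalid characters listed above) :

    static const unsigned char invalid_uri_char[256] = {
        ['"'] = 1, ['<'] = 1, ['>'] = 1, ['\\'] = 1, ['^'] = 1,
        ['`'] = 1, ['{'] = 1, ['|'] = 1, ['}'] = 1,
    };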
The HTTP request parser was considering that any non-LWS char was
part of the URI. Unfortunately, this allows control chars to be sent
in the URI, sometimes resulting in backend servers misbehaving, for
instance when they interpret \0 as an end of string and respond
with plain HTTP/0.9 without headers, which haproxy blocks as invalid
responses.
RFC3986 clearly states the list of allowed characters in a URI. Even
non-ASCII chars are not allowed. Unfortunately, after having run 10
years with these chars allowed, we can't block them right now without
an optional workaround. So the first step consists in only blocking
control chars. A later patch will allow non-ASCII only when an appropriate
option is enabled in the frontend.
Control chars are 0..31 and 127, with the exception of 9, 10 and 13
(\t, \n, \r).
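The test described above can be written as follows (a sketch, not the
exact parser code) :

    /* Control characters are 0..31 and 127, except HT (9), LF (10)
     * and CR (13) which the parser handles as delimiters.
     */
    static inline int is_uri_ctl_char(unsigned char c)
    {
        return (c < 32 || c == 127) && c != 9 && c != 10 && c != 13;
    }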
New option "http-send-name-header" specifies the name of a header which
will hold the server name in outgoing requests. This is the name of the
server the connection is really sent to, which means that upon redispatches,
the header's value is updated so that it always matches the server's name.
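Example (a sketch; the header and server names are arbitrary) :

    backend dynamic
        http-send-name-header X-Server
        server app1 10.0.0.1:80 check
        server app2 10.0.0.2:80 check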
This pattern previously was limited to type IP. With the new header
extraction function, it becomes possible to extract strings, so that
the header can be returned as a string. This will not change anything
to existing configs, as string will automatically be converted to IP
when needed. However, new configs will be able to use IPv6 addresses
from headers in stick-tables, as well as stick on any non-IP header
(eg: host, user-agent, ...).
The new function does not return IP addresses but header values instead,
so that the caller is free to do what it wants with them. The conversion
is not quite clean yet, as the previous test which considered that address
0.0.0.0 meant "no address" is still used. A different IP parsing function
should be used to take this into account.
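This makes configurations like the following possible (a hedged sketch;
names and sizes are arbitrary) :

    backend bk
        stick-table type string len 64 size 1m expire 30m
        stick on hdr(Host)
        server s1 10.0.0.1:80
        server s2 10.0.0.2:80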
Now strings and data blocks are stored in the temp_pattern's chunk
and matched against this one.
The rdp_cookie currently makes extensive use of acl_fetch_rdp_cookie()
and will be a good candidate for the initial rework so that ACLs use
the patterns framework and not the other way around.
IPv4 and IPv6 addresses are now stored into temp_pattern instead of
the dirty hack consisting in storing them in the consumer's target
address.
Some refactoring should now be possible since the methods used to fetch
source and destination addresses are similar between patterns and ACLs.
All ACL fetches which return integer value now store the result into
the temporary pattern struct. All ACL matches which rely on integer
also get their value there.
Note: the pattern data types are not set right now.
By default we disable TCP quick-acking on HTTP requests so that we
avoid sending a pure ACK immediately followed by the HTTP response.
However, if the client sends an incomplete request in a short packet,
its TCP stack might wait for this packet to be ACKed before sending
the rest of the request, delaying incoming requests by up to 40-200ms.
We can detect this undesirable situation when parsing the request :
- if an incomplete request is received
- if a full request is received and uses chunked encoding or advertises
a content-length larger than the data available in the buffer
In these situations, we re-enable TCP quick-ack if we had previously
disabled it.
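The re-enabling roughly corresponds to the following (Linux-specific
sketch; the actual code sits in the request parser's exit paths) :

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Incomplete request : let the stack ACK it quickly so that the
     * client sends the rest without waiting.
     */
    static void reenable_quickack(int fd)
    {
    #ifdef TCP_QUICKACK
        int one = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
    #endif
    }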
This patch removes the previous limitation of two loggers.
Loggers are now stored in linked lists.
Using "global log", the content of the global loggers list is appended
to the current proxy's list. Each "log" entry is added at the end of
the proxy's list.
"no log" flushes the logger list.
Stream interfaces used to distinguish between client and server addresses
because they were previously of different types (sockaddr_storage for the
client, sockaddr_in for the server). This is not the case anymore, and this
distinction is confusing at best and has caused a number of regressions to
be introduced in the process of converting everything to full-ipv6. We can
now remove this and have a much cleaner code.
This patch introduces hdr_len, path_len and url_len for matching these
respective parts lengths against integers. This can be used to detect
abuse or empty headers.
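For example (a sketch of the new criteria in use) :

    acl empty_xff hdr_len(x-forwarded-for) eq 0
    acl huge_url  url_len gt 8192
    block if empty_xff or huge_url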
Commit 588bd4 fixed header parsing so that trailing spaces were not part
of the returned string. Unfortunately, if a header only had spaces, the
last spaces were trimmed past the beginning of the value, causing a negative
length to be returned.
A quick code review shows that there should be no impact since the only
places where the vlen is used are either compared to a specific value or
with explicit contents (eg: digits).
This must be backported to 1.4.
These requests are mainly monitor requests, as well as stats requests when
the stats are processed by the frontend. Having this counter helps explain
the difference in number of sessions that is sometimes observed between a
frontend and a backend.
Sometimes a bad content-length header is encountered and this causes
an abort. It's hard to debug without a trace, so let's take a capture
of the contents when this happens.
If a server starts to respond but stops before the body, then we
capture the truncated response. We don't do this on the request
because it would happen too often upon stupid attacks.
Trailing spaces after headers were not trimmed, only the leading ones
were. An issue was detected today with a content-length value which
was padded with spaces and which was rejected. Recent updates to the
http-bis draft made it a lot more clear that such spaces must be ignored,
so this is what this patch does.
It should be backported to 1.4.
Many inet_ntop calls were partially right, which was hard to detect given
the complex combinations. Some of them were relying on the listener's proto
instead of the address itself, which could have been different when dealing
with an accept-proxy connection.
The new addr_to_str() function does the dirty job and returns the family, which
makes it particularly suited to calls from switch/case statements. A large number
of if/else statements were removed and the stats output could even be cleaned up
in the case of session dump.
As a side effect of doing this, the resulting code is smaller by almost 1kB.
All changed parts have been tested and provided expected output.
If "option forwardfor" has the "if-none" argument, then the header is
only added when the request did not already have one. This option has
security implications, and should not be set blindly.
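Example (a sketch) :

    backend trusted
        option forwardfor if-none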
This is used to perform cookie-based stickiness with table replication
between multiple masters and across restarts. This partially overrides
some of the appsession capabilities.
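A possible setup could look like this (a hedged sketch : the peers
section, the "cookie"/"set-cookie" fetch names and the SRVID cookie name
are illustrative and may not match the exact syntax of the time) :

    peers mypeers
        peer lb1 192.168.0.1:1024
        peer lb2 192.168.0.2:1024

    backend bk
        stick-table type string len 32 size 200k peers mypeers
        stick store-response set-cookie(SRVID)
        stick on cookie(SRVID)
        server s1 10.0.0.1:80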
The motivation for this is to allow iteration of all the connections
of a server without the expense of iterating over the global list
of connections.
The first use of this will be to implement an option to close connections
associated with a server when it is marked as being down or in maintenance
mode.
gcc (Debian 4.6.0-2) 4.6.1 20110329 (prerelease)
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
...
src/proto_http.c:3029:14: warning: variable ‘del_cl’ set but not used [-Wunused-but-set-variable]
In file included from ebtree/eb64tree.c:23:0:
ebtree/eb64tree.h: In function ‘__eb64_lookup’:
ebtree/eb64tree.h:128:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
ebtree/eb64tree.h: In function ‘__eb64i_lookup’:
ebtree/eb64tree.h:180:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
In file included from ebtree/ebpttree.h:26:0,
from ebtree/ebimtree.c:23:
ebtree/eb64tree.h: In function ‘__eb64_lookup’:
ebtree/eb64tree.h:128:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
ebtree/eb64tree.h: In function ‘__eb64i_lookup’:
ebtree/eb64tree.h:180:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
In file included from ebtree/ebpttree.h:26:0,
from ebtree/ebistree.h:25,
from ebtree/ebistree.c:23:
ebtree/eb64tree.h: In function ‘__eb64_lookup’:
ebtree/eb64tree.h:128:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
ebtree/eb64tree.h: In function ‘__eb64i_lookup’:
ebtree/eb64tree.h:180:6: warning: variable ‘node_bit’ set but not used [-Wunused-but-set-variable]
Bashkim Kasa reported that the stats admin page did not work when colons
were used in server or backend names. This was caused by url-encoding
resulting in ':' being sent as '%3A'. Now we systematically decode the
field names and values to fix this issue.
Now that we support the http-no-delay mode, we can optimize HTTP
chunking again by always waiting for more data to come until the
last chunk is met.
This patch may or may not be backported to 1.4, it's not a big deal,
it will mainly help for chunks which are aligned with the buffer size.
There are some very rare server-to-server applications that abuse the HTTP
protocol and expect the payload phase to be highly interactive, with many
interleaved data chunks in both directions within a single request. This is
absolutely not supported by the HTTP specification and will not work across
most proxies or servers. When such applications attempt to do this through
haproxy, it works but they will experience high delays due to the network
optimizations which favor performance by instructing the system to wait for
enough data to be available in order to only send full packets. Typical
delays are around 200 ms per round trip. Note that this only happens with
abnormal uses. Normal uses such as CONNECT requests or WebSockets are not
affected.
When "option http-no-delay" is present in either the frontend or the backend
used by a connection, all such optimizations will be disabled in order to
make the exchanges as fast as possible. Of course this offers no guarantee on
the functionality, as it may break at any other place. But if it works via
HAProxy, it will work as fast as possible. This option should never be used
by default, and should never be used at all unless such a buggy application
is discovered. The impact of using this option is an increase of bandwidth
usage and CPU usage, which may significantly lower performance in high
latency environments.
This change should be backported to 1.4 since the first report of such a
misuse was in 1.4. Next patch will also be needed.
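Enabling it is a one-liner (sketch; the backend name is arbitrary) :

    backend interactive_app
        option http-no-delay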
Commit 57f5c1 used to provide a nice improvement on chunked encoding since
it ensured that we did not set a PUSH flag for every chunk or buffer data
part of a chunked transfer.
Some applications appear to erroneously abuse HTTP chunking in order to
get interactive exchanges between a user agent and an origin server with
very small chunks. While it happens to work through haproxy, it's terribly
slow due to the latency added after passing each chunk to the system, which
could wait up to 200ms before pushing them onto the wire.
So we need an interactive mode for such usages. In the mean time, step back
on the optim, but not completely, so that we still keep the flag as long as
we know we're not finished with the current chunk.
This change should be backported to 1.4 too as the issue was discovered
with it.
This status code is used in response to requests matching "monitor-uri".
Some users need to adjust it to fit their needs (eg: make some strings
appear there). As it's already defined as a chunked string and used
exactly like other status codes, it makes sense to make it configurable
with the usual "errorfile", "errorloc", ...
Some people like to make the monitoring URL testable from unsafe locations.
Reporting haproxy's existence there can sometimes be problematic. This patch
should not be backported to 1.4 because it is possible, even though unlikely,
that some scripts rely on this word to appear there.
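For example, the monitor response can now be customized like any other
status code (a sketch; the URI and file path are arbitrary) :

    frontend fe
        monitor-uri /haproxy_test
        errorfile 200 /etc/haproxy/errors/200-monitor.http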
When doing fix 24581bae02 to correctly handle
response cookies, an unfortunate typo was inserted in the less likely code
path, resulting in a risk of crash when cookie-based persistence is enabled
and the server emits a cookie with several spaces around the equal sign.
This bug was noticed during a code backport. Its effects were never reported
because this situation is very unlikely to appear, but it can be provoked on
purpose by the server.
This patch must be backported to 1.4 versions which contain the fix above
(anything > 1.4.8), and to similar 1.3 versions > 1.3.25. 1.5-dev versions
after 1.5-dev2 are affected too.
Despite much care around handling the content-length as a 64-bit integer,
forwarding was broken on 32-bit platforms due to the 32-bit nature of
the ->to_forward member of the "buffer" struct. The issue is that this
member is declared as a long, so while it works OK on 64-bit platforms,
32-bit platforms truncate the content-length to the lower 32 bits.
One solution could consist in turning to_forward to a long long, but it
is used a lot in the critical path, so it's not acceptable to perform
all buffer size computations on 64-bit there.
The fix consists in changing the to_forward member to a strict 32-bit
integer and ensure in buffer_forward() that only the amount of bytes
that can fit into it is considered. Callers of buffer_forward() are
responsible for checking that their data were taken into account. We
arbitrarily ensure we never consider more than 2G at once.
That's the way it was intended to work on 32-bit platforms except that
it did not.
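The principle of the fix can be sketched as follows (illustrative only,
this is not the actual buffer_forward() code) :

    /* Schedule at most 2^31 - 1 bytes at once ; the caller must compare
     * the return value with what it requested and come back for the rest.
     */
    static unsigned long long forward_sketch(unsigned int *to_forward,
                                             unsigned long long bytes)
    {
        unsigned int taken;

        taken = (bytes > 0x7FFFFFFFULL) ? 0x7FFFFFFFU : (unsigned int)bytes;
        *to_forward += taken;
        return taken;
    }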
This issue was tracked down hard at Exosec with Bertrand Jacquin,
Thierry Fournier and Julien Thomas. It remained undetected for a long
time because files larger than 4G are almost always transferred in
chunked-encoded format, and most platforms dealing with huge contents
these days run on 64-bit.
The bug affects all 1.5 and 1.4 versions, and must be backported.
Since we now have the copy of the target in the session, use it instead
of relying on the SI for it. The SI drops the target upon unregister()
so applets such as stats were logged as "NOSRV".
Johannes Smith reported some wrong retries count in logs associated with bad
requests. The cause was that the conn_retries field in the stream interface
was only initialized when attempting to connect, but is used when logging,
possibly with an uninitialized value holding last connection's conn_retries.
This could have been avoided by making use of a stream interface initializer.
This bug is 1.5-specific.
And also rename "req_acl_rule" "http_req_rule". At the beginning that
was a bit confusing to me, especially the "req_acl" list which in fact
holds what we call rules. After some digging, it appeared that some
part of the code is 100% HTTP and not just related to authentication
anymore, so let's move that part to HTTP and keep the auth-only code
in auth.c.
Right now, http-request rules are not evaluated if the URL matches the
stats request. This is quite unexpected. For instance, in the config
below, an abuser present in the abusers list will not be prevented access
to the stats.
    listen pub
        bind :8181
        acl abuser src -f abusers.lst
        http-request deny if abuser
        stats uri /stats
It is not a big deal but it's not documented as such either. For 1.5, let's
have both lists be evaluated in turn, until one blocks. For 1.4 we'll simply
update the doc to indicate that.
Also instead of duplicating the code, the patch factors out the list walking
code. The HTTP auth has been moved slightly earlier, because it was set after
the header addition code, but we don't need to add headers to a request we're
dropping.
It's very annoying that frontend and backend stats are merged because we
don't know what we're observing. For instance, if a "listen" instance
makes use of a distinct backend, it's impossible to know what the bytes_out
means.
Some code paths take care of not updating counters twice if the backend points
to the frontend, indicating a "listen" instance. The thing becomes more
complex when we try to add support for server side keep-alive, because we
have to maintain a pointer to the backend used for last request, and to
update its stats. But we can't perform such comparisons anymore because
the counters will not match anymore.
So in order to get rid of this situation, let's have both frontend AND
backend stats in the "struct proxy". We simply update the relevant ones
during activity. Some of them are only accounted for in the backend,
while others are just for frontend. Maybe we can improve a bit on that
later, but the essential part is that those counters now reflect what
they really mean.
This patch turns internal server addresses to sockaddr_storage to
store IPv6 addresses, and makes the connect() function use it. This
code already works but some caveats with getaddrinfo/gethostbyname
still need to be sorted out while the changes had to be merged at
this stage of internal architecture changes. So for now the config
parser will not emit an IPv6 address yet so that user experience
remains unchanged.
This change should have absolutely zero user-visible effect, otherwise
it's a bug introduced during the merge, that should be reported ASAP.
This one has been removed and is now totally superseded by ->target.
To get the server, one must use target_srv(&s->target) instead of
s->srv now.
The function ensures that non-server targets still return NULL.
s->prev_srv is used by assign_server() only, but all code paths leading
to it now take s->prev_srv from the existing s->srv. So assign_server()
can do that copy into its own stack.
If at one point a different srv is needed, we still have a copy of the
last server on which we failed a connection attempt in s->target.
When dealing with HTTP keep-alive, we'll have to know if we can reuse
an existing connection. For that, we'll have to check if the current
connection was made on the exact same target (referenced in the stream
interface).
Thus, we need to first assign the next target to the session, then
copy it to the stream interface upon connect(). Later we'll check for
equivalence between those two operations.
This is in fact where those parts belong. The old data_state was replaced
by applet.state and is now initialized when the applet is registered. It's
worth noting that the applet does not need to know the session nor the
buffer anymore since everything is brought by the stream interface.
It is possible that having a separate applet struct would simplify the
code but that's not a big deal.
With HTTP keep-alive, logging the right server name will be quite
complex because the assigned server will possibly change before we log.
Also, when we want to log accesses to an applet, it's not easy because
the applet becomes NULL again before logging.
The logged server's name is now taken from the target stored in the
stream interface. That way we can log an applet, a server name, or we
could even log a proxy or anything else if we wanted to. Ideally the
session should contain a desired target which is the one which should
be logged.
I/O handlers are still delicate to manipulate. They have no type, they're
just raw functions which have no knowledge of themselves. Let's have them
declared as applets once for all. That way we can have multiple applets
share the same handler functions and we can store their names there. When
we later need to add more parameters (eg: usage stats), we'll be able to
do so in the applets themselves.
The CLI functions have been prefixed with "cli" instead of "stats" as that's
clearly what is going on there.
The applet descriptor in the stream interface should get all the applet
specific data (st0, ...) but this will be done in the next patch so that
we don't pollute this one too much.
Similar to the stats socket bug, we must check that the proxy is not disabled
before trying to enable/disable a server.
Even if a disabled proxy is not displayed, someone can inject a faulty proxy
name in the POST parameters. So, we must ensure that no disabled proxy can be
used.
Bryan Talbot reported that POST requests with a query string were not
correctly processed if the hash parameter was the first one, because
the delimiter that was looked for to trigger the parsing was '&' instead
of '?'.
Also, while checking the code, it became apparent that it was enough for
a query string to be present in the request for POST parameters to be
ignored, even if the url_param was in the body and not in the URL.
The code has then been fixed like this :
1) look for URL param. If found, return it.
2) if no URL param was found and method is POST, then look it up in
the body
The code now seems to pass all request combinations.
This patch must be backported to 1.4 since 1.4 is equally broken right now.
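For reference, the affected configurations are of this kind (a sketch;
the parameter and server names are arbitrary) :

    backend bk
        balance url_param uid check_post
        server s1 10.0.0.1:80
        server s2 10.0.0.2:80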