Before, the functions to filter the HTTP body (and TCP data) were called as soon
as at least one filter was attached to the stream. If no filter is interested in
these data, this needlessly slows down data parsing.
A good example is the HTTP compression filter. Depending on the request and
response headers, the response compression may or may not be enabled. So it is
much better to call it only when enabled.
Now, to filter HTTP/TCP data, a filter must use the function
register_data_filter. For TCP streams, this function can be called only
once. But for HTTP streams, when needed, it must be called for each HTTP request
or HTTP response.
Only registered filters will be called during data parsing. At any time, a
filter can be unregistered by calling the function unregister_data_filter.
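As an illustration only, a filter could perform this registration from a
per-message callback along these lines (the callback name, the helper and the
exact register_data_filter() arguments are assumptions, not the final API):
    /* hypothetical per-message hook: register for body filtering only
     * when this message is actually interesting for the filter */
    static int my_flt_http_headers(struct stream *s, struct filter *filter,
                                   struct http_msg *msg)
    {
        if (my_filter_wants_body(msg))                  /* assumed helper */
            register_data_filter(s, msg->chn, filter);  /* assumed args   */
        return 1; /* let the analysis continue */
    }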
Now, body parsing is done in the http_msg_forward_body and
http_msg_forward_chunked_body functions, regardless of whether we parse a
request or a response.
The parsing result is still handled in the http_request_forward_body and
http_response_forward_body functions.
This patch will ease future optimizations, mainly on filters.
This new analyzer will be called for each HTTP request/response, before the
parsing of the body. It is identified by AN_FLT_HTTP_HDRS.
Special care was taken with the following combination:
* the frontend is a TCP proxy
* filters are defined in the frontend section
* the selected backend is an HTTP proxy
So, this patch explicitly adds the AN_FLT_HTTP_HDRS analyzer on the request and
the response channels when the backend is an HTTP proxy and when there are
filters attached to the stream.
This patch simplifies http_request_forward_body and http_response_forward_body
functions.
For chunked HTTP requests/responses, the body filtering can be really
expensive. In the worst case (many 1-byte chunks), the filter overhead is
3 calls per chunk. While the http_data callback is useful, the others are just
informative.
So these callbacks have been removed. Of course, existing filters (trace and
compression) have been updated accordingly. For the HTTP compression filter, the
update is quite large; its implementation is now closer to the old one.
When no filter is attached to the stream, the CPU footprint due to the calls to
the filters_* functions is huge, especially for chunk-encoded messages. Using
macros to check whether any filters are attached is a great improvement.
Furthermore, instead of checking whether the filter list is empty, we introduce
a flag indicating whether filters are attached to a stream.
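Purely as a sketch (the macro, flag and field names below are assumptions), the
fast path could look like this:
    /* flag set on the stream when at least one filter is attached */
    #define STRM_FLT_FL_HAS_FILTERS  0x0001                          /* assumed */
    #define HAS_FILTERS(strm) ((strm)->strm_flt.flags & STRM_FLT_FL_HAS_FILTERS)

    /* callers then skip the filters_* functions entirely when unused */
    if (HAS_FILTERS(s))
        ret = flt_http_data(s, msg);    /* assumed function name */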
HTTP compression has been rewritten to use the filter API. This is more a PoC
than anything else for now. It allocates memory to work, so, if only for that,
it should be reworked.
In the meantime, the implementation has been refactored to allow its use with
other filters. However, there are limitations that should be respected:
- No filter placed after the compression one is allowed to change input data
(in 'http_data' callback).
- No filter placed before the compression one is allowed to change forwarded
data (in 'http_forward_data' callback).
For now, these limitations are informal, so you should be careful when you use
several filters.
Regarding the configuration, the 'compression' keywords are still supported and
must be used to configure the HTTP compression behavior. In the absence of a
'filter' line for the compression filter, it is added to the filter chain when
the first 'compression' line is parsed. This is convenient when you do not use
other filters. But if another filter exists, an error is reported so that the
user must explicitly declare the filter.
For example:
listen tst
...
compression algo gzip
compression offload
...
filter flt_1
filter compression
filter flt_2
...
HTTP compression will be moved into a true filter. To prepare the ground, some
functions have been moved into a dedicated file. The idea is to keep everything
about compression algos in compression.c and everything related to the filtering
in flt_http_comp.c.
For now, a header has been added to help during the transition. It will be
removed later.
Unused empty ACL keyword list was removed. The "compression" keyword
parser was moved from cfgparse.c to flt_http_comp.c.
This patch adds support for filters in HAProxy. The main idea is to have a
way to "easily" extend HAProxy by adding some "modules", called filters, that
will be able to change HAProxy's behavior in a programmatic way.
To do so, many entry points have been added in the code to let filters hook into
different steps of the processing. A filter must define a flt_ops structure
(see include/types/filters.h for details). This structure contains all available
callbacks that a filter can define:
struct flt_ops {
    /*
     * Callbacks to manage the filter lifecycle
     */
    int  (*init)  (struct proxy *p);
    void (*deinit)(struct proxy *p);
    int  (*check) (struct proxy *p);
    /*
     * Stream callbacks
     */
    void (*stream_start)     (struct stream *s);
    void (*stream_accept)    (struct stream *s);
    void (*session_establish)(struct stream *s);
    void (*stream_stop)      (struct stream *s);
    /*
     * HTTP callbacks
     */
    int  (*http_start)         (struct stream *s, struct http_msg *msg);
    int  (*http_start_body)    (struct stream *s, struct http_msg *msg);
    int  (*http_start_chunk)   (struct stream *s, struct http_msg *msg);
    int  (*http_data)          (struct stream *s, struct http_msg *msg);
    int  (*http_last_chunk)    (struct stream *s, struct http_msg *msg);
    int  (*http_end_chunk)     (struct stream *s, struct http_msg *msg);
    int  (*http_chunk_trailers)(struct stream *s, struct http_msg *msg);
    int  (*http_end_body)      (struct stream *s, struct http_msg *msg);
    void (*http_end)           (struct stream *s, struct http_msg *msg);
    void (*http_reset)         (struct stream *s, struct http_msg *msg);
    int  (*http_pre_process)   (struct stream *s, struct http_msg *msg);
    int  (*http_post_process)  (struct stream *s, struct http_msg *msg);
    void (*http_reply)         (struct stream *s, short status,
                                const struct chunk *msg);
};
To declare and use a filter, in the configuration, the "filter" keyword must be
used in a listener/frontend section:
frontend test
...
filter <FILTER-NAME> [OPTIONS...]
The filter referenced by <FILTER-NAME> must declare a configuration parser on
its own name to fill the flt_ops and filter_conf fields in the proxy's
structure. An example will be provided later to make it perfectly clear.
For now, filters cannot be used in backend section. But this is only a matter of
time. Documentation will also be added later. This is the first commit of a long
list about filters.
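Pending the promised example, here is a rough sketch of what a filter
declaration could look like; the callback names are purely illustrative and the
parser prototype is an assumption:
    /* illustrative "trace" filter */
    static struct flt_ops trace_ops = {
        .init         = trace_init,
        .deinit       = trace_deinit,
        .stream_start = trace_stream_start,
        .http_data    = trace_http_data,
    };

    /* configuration parser registered on the "trace" keyword (prototype
     * assumed); it fills the proxy fields mentioned above */
    static int parse_trace_flt(char **args, struct proxy *px, void **conf)
    {
        px->flt_ops     = &trace_ops;
        px->filter_conf = NULL;   /* no private configuration in this sketch */
        return 0;
    }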
It is possible to have several filters on the same listener/frontend. These
filters are stored in an array of at most MAX_FILTERS elements (defined in
include/types/filters.h). Again, this will be replaced later by a list of
filters.
The filter API has been highly refactored. Main changes are:
* Now, HAProxy supports an unlimited number of filters per proxy. To do so,
filters are stored in a list.
* Because filters are stored in a list, the filter state has been moved from the
channel structure to the filter structure. This is cleaner because there is no
more filter-related info in the channel structure.
* It is possible to define filters on backends only. For such filters,
stream_start/stream_stop callbacks are not called. Of course, it is possible
to mix frontend and backend filters.
* Now, TCP streams are also filtered. All callbacks without the 'http_' prefix
are called for all kinds of streams. In addition, 2 new callbacks were added to
filter data exchanged through a TCP stream:
- tcp_data: it is called when new data are available or when old unprocessed
data are still waiting.
- tcp_forward_data: it is called when some data can be consumed.
* New callbacks attached to channels were added:
- channel_start_analyze: it is called when a filter is ready to process data
exchanged through a channel. 2 new analyzers (a frontend and a backend one)
are attached to channels to call this callback. For a frontend filter, it
is called before any other analyzer. For a backend filter, it is called
when a backend is attached to a stream. So some processing cannot be
filtered in that case.
- channel_analyze: it is called before each analyzer attached to a channel,
except analyzers responsible for data sending.
- channel_end_analyze: it is called when all other analyzers have finished
their processing. A new analyzer is attached to channels to call this
callback. For a TCP stream, this is always the last one called. For an HTTP
one, the callback is called when a request/response ends, so it is called
once per request/response.
* 'session_established' callback has been removed. Everything that is done in
this callback can be handled by 'channel_start_analyze' on the response
channel.
* 'http_pre_process' and 'http_post_process' callbacks have been replaced by
'channel_analyze'.
* 'http_start' callback has been replaced by 'http_headers'. This new one is
called just before headers sending and parsing of the body.
* 'http_end' callback has been replaced by 'channel_end_analyze'.
* It is possible to set a forwarder for TCP channels. It was already possible to
do it for HTTP ones.
* Forwarders can partially consume forwardable data. For this reason a new
HTTP message state was added before HTTP_MSG_DONE: HTTP_MSG_ENDING.
Now all filters can define the corresponding callbacks (http_forward_data
and tcp_forward_data). Each filter owns 2 offsets relative to buf->p, next and
forward, tracking, respectively, input data already parsed but not yet forwarded
by the filter, and parsed data considered as forwarded by the filter. At any
time, we have the guarantee that a filter cannot parse or forward more input
than the previous ones. And, of course, it cannot forward more input than it has
parsed. 2 macros have been added to retrieve these offsets: FLT_NXT and FLT_FWD.
In addition, 2 functions have been added to change the 'next size' and the
'forward size' of a filter. When a filter parses input data, it can alter them,
so their size can vary. This action has an effect on all previous filters that
must be handled. To do so, the function 'filter_change_next_size' must be
called, passing the size variation. In the same spirit, if a filter alters
forwarded data, it must call the function 'filter_change_forward_size'.
'filter_change_next_size' can be called in the 'http_data' and 'tcp_data'
callbacks and only these ones. And 'filter_change_forward_size' can be called
in the 'http_forward_data' and 'tcp_forward_data' callbacks and only these
ones. The data changes are the filter's responsibility, but with some
limitations: it must not change already parsed/forwarded data, nor data that
previous filters have not parsed/forwarded yet.
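A hedged sketch of how an 'http_data' callback might use these offsets; the
callback prototype and the buffer accessor are assumptions, only FLT_NXT and
filter_change_next_size come from the description above:
    static int my_flt_http_data(struct stream *s, struct filter *filter,
                                struct http_msg *msg)
    {
        /* bytes between this filter's next offset and the end of input
         * are new for this filter (buffer field name assumed) */
        unsigned int avail = msg->chn->buf->i - FLT_NXT(filter, msg->chn);

        /* ... inspect or alter the new data here ... */

        /* if the filter grew/shrunk the data by <delta> bytes, propagate
         * the variation to previous filters:
         *     filter_change_next_size(filter, msg->chn, delta);
         */
        return avail; /* number of bytes parsed by this filter */
    }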
Because filters can be used on backends, when the backend is set for a
stream, we add the filters defined for this backend to the filter list of the
stream. But we must only do that when the backend and the frontend of the stream
are not the same. Otherwise the same filters are added a second time, leading to
undefined behavior.
The HTTP compression code had to be moved.
This simplifies the http_response_forward_body function. To do so, the way the
data are forwarded has changed. Now, a filter (and only one) can forward data.
In a commit to come, this limitation will be removed to let all filters take
part in data forwarding. There are 2 new functions that filters should use to
deal with this feature:
* flt_set_http_data_forwarder: This function sets the filter (using its id)
that will forward data for the specified HTTP message. This is possible only if
it was not already set by another filter _AND_ if no data have been forwarded
yet (msg->msg_state <= HTTP_MSG_BODY). It returns -1 if an error occurs.
* flt_http_data_forwarder: This function returns the filter id that will
forward data for the specified HTTP message. If there is no forwarder set, it
returns -1.
When an HTTP data forwarder is set for the response, the HTTP compression is
disabled. Of course, this is not definitive.
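For illustration, a filter could claim the forwarder role before any body data
is forwarded, along these lines (the callback context and argument lists are
assumptions, only the two function names come from the description):
    /* inside an http_headers-like callback, while msg->msg_state <= HTTP_MSG_BODY */
    static int my_flt_http_headers(struct stream *s, struct http_msg *msg,
                                   int my_filter_id)
    {
        if (flt_http_data_forwarder(msg) == -1) {
            /* nobody forwards this message yet, try to take the role */
            if (flt_set_http_data_forwarder(msg, my_filter_id) == -1)
                return -1; /* error */
        }
        return 1; /* continue */
    }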
When working on the previous bug, it appeared that the case that was
triggering the bug could also happen between two backends, one of which
doesn't support http-reuse. The reason is that while the idle connection
is moved to the private pool, upon reuse we only check if it holds the
CO_FL_PRIVATE flag. And we don't set this flag when there's no reuse.
So let's always set it in this case, it will guarantee that no undesired
connection sharing may happen.
This fix must be backported to 1.6.
Gregor Kovač reported that http_date() did not return the right day of the
week. For example "Sat, 22 Jan 2016 17:43:38 GMT" instead of "Fri, 22 Jan
2016 17:43:38 GMT". Indeed, gmtime() returns a 'struct tm' result, where
tm_wday begins on Sunday, whereas the code assumed it began on Monday.
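A minimal sketch of the corrected lookup (function and variable names are
illustrative; the array order follows the struct tm convention):
    #include <time.h>

    /* returns the three-letter day name for a broken-down GMT time;
     * tm_wday runs from 0 to 6 starting on Sunday, so the table must
     * start with "Sun" and be indexed directly */
    static const char *wkday_name(const struct tm *tm)
    {
        static const char day[7][4] = { "Sun", "Mon", "Tue", "Wed",
                                        "Thu", "Fri", "Sat" };
        return day[tm->tm_wday];
    }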
This patch must be backported to haproxy 1.5 and 1.6.
The function http_reply_and_close has been added in proto_http.c to wrap calls
to stream_int_retnclose. This function will be modified when the filters are
added.
When the response body is forwarded, if the server closes the input before the
end, an error is thrown. But if the data processing is too slow, all data could
already have been received and be pending in the input buffer. So it is a bug to
stop processing in this context: the server didn't really close the input before
the end.
As an example, this could happen when HAProxy is configured to do compression
offloading. If the server closes the connection explicitly after the response
(keep-alive disabled by the server) and if HAProxy receives the data faster than
they are compressed, then the response could be truncated.
This patch fixes the bug by checking if some pending data remain in the input
buffer before returning an error. If yes, the processing continues.
Several cases of "<=" instead of "<" were found in the url_param parser,
mostly affecting the case where the parameter is wrapping. They shouldn't
affect header operations, just body parsing in a wrapped pipelined request.
The code is a bit complicated with certain operations done multiple times
in multiple functions, so it's not certain that others were not missed. This
code must be re-audited.
It should only be backported to 1.6 once carefully tested, because it is
possible that other bugs relied on these ones.
Krishna Kumar reported that the following configuration doesn't permit
HTTP reuse between two clients :
frontend private-frontend
mode http
bind :8001
default_backend private-backend
backend private-backend
mode http
http-reuse always
server bck 127.0.0.1:8888
The reason for this is that in http_end_txn_clean_session() we check the
stream's backend's http-reuse option before deciding whether the
backend connection should be moved back to the server's pool or not. But
since we're doing this after the call to http_reset_txn(), the backend is
reset to match the frontend, which doesn't have the option. However it
will work fine in a setup involving a "listen" section.
We just need to keep a pointer to the current backend before calling
http_reset_txn(). The code does that and replaces the few remaining
references to s->be inside the same function so that if any part of
code were to be moved later, this trap doesn't happen again.
This fix must be backported to 1.6.
Currently, the urlp fetching samples were able to find parameters with an empty
value, but the return code depended on the value length. The final result was
that ACLs using urlp couldn't match empty values.
Example of an ACL which always returned "false":
acl MATCH_EMPTY urlp(foo) -m len 0
The fix consists in unconditionally returning 1 when the parameter is found.
This fix must be backported to 1.6 and 1.5.
There is a bug where "option http-keep-alive" doesn't force a response
to stay in keep-alive if the server sends the FIN along with the response
on the second or subsequent response. The reason is that the auto-close
was forced enabled when recycling the HTTP transaction and it's never
disabled along the response processing chain before the SHUTR gets a
chance to be forwarded to the client side. The MSG_DONE state of the
HTTP response properly disables it but too late.
There's no more reason for enabling auto-close here, because either it
doesn't matter in non-keep-alive modes because the connection is closed,
or it is automatically enabled by process_stream() when it sees there's
no analyser on the stream.
This bug also affects 1.5 so a backport is desired.
When a server timeout is detected on the second or nth request of a keep-alive
connection, HAProxy closes the connection without writing a response.
Some clients would fail with a remote disconnected exception and some
others would retry potentially unsafe requests.
This patch removes the special case and makes sure a 504 timeout is
written back whenever a server timeout is handled.
Signed-off-by: lsenta <laurent.senta@gmail.com>
Cyril Bonté reported a reproducible sequence which can lead to a crash
when using backend connection reuse. The problem comes from the fact that
we systematically add the server connection to an idle pool at the end of
the HTTP transaction regardless of the fact that it might already be there.
This is possible for example when processing a request which doesn't use
a server connection (typically a redirect) after a request which used a
connection. Then after the first request, the connection was already in
the idle queue and we're putting it a second time at the end of the second
request, causing a corruption of the idle pool.
Interestingly, the memory debugger in 1.7 immediately detected a suspicious
double free on the connection, leading to a very early detection of the
cause instead of its consequences.
Thanks to Cyril for quickly providing a working reproducer.
This fix must be backported to 1.6 since connection reuse was introduced
there.
The 'OPTIONS' method was not in the list of supported HTTP methods, so
find_http_meth returned HTTP_METH_OTHER instead of HTTP_METH_OPTIONS.
[wt: this fix needs to be backported at least to 1.5, 1.4 and 1.3]
This flag is used by custom actions to know that they're called for the
first time. The only case where it's not set is when they're resuming
from a yield. It will be needed to let them know when they have to
allocate some resources.
In HTTP it's more difficult to know when to pass the flag or not
because all actions are supposed to be final and there's no inspection
delay. Also, the input channel may very well be closed without this
being an error. So we only set the flag when option abortonclose is
set and the input channel is closed, which is the only case where the
user explicitly wants to forward a close down the chain.
Since commit bc4c1ac ("MEDIUM: http/tcp: permit to resume http and tcp
custom actions"), some actions may yield and be called back when new
information is available. Unfortunately some of them may continue to
yield because they simply don't know that it's the last call from the
rule set. For this reason we'll need to pass a flag to the custom
action to convey such information, and possibly others at the same time.
When we call the function smp_prefetch_http(), if the txn is not initialized,
it doesn't work. This patch fixes this. Now, smp_prefetch_http() permits using
HTTP with any proxy mode.
Added the definition of CHECK_HTTP_MESSAGE_FIRST and the declaration of
smp_prefetch_http to the header.
Changed smp_prefetch_http implementation to remove the static qualifier.
When converting the "method" fetch to a string, we used to get an empty
string if the first character was not an upper case. This was caused by
the lookup function which returns HTTP_METH_NONE when a lookup is not
possible, and this method being mapped to an empty string in the array.
This is a totally stupid mechanism, there's no reason for having the
result depend on the first char. In fact the message parser already
checks that the syntax matches an HTTP token so we can only land there
with a valid token, hence only HTTP_METH_OTHER should be returned.
This fix should be backported to all actively supported branches.
Before this patch, two types of custom actions existed: ACT_ACTION_CONT and
ACT_ACTION_STOP. ACT_ACTION_CONT is a non-terminal action and ACT_ACTION_STOP
is a terminal action.
Note that ACT_ACTION_STOP is not used in HAProxy.
This patch removes this behavior. Only one type of custom action exists, and it
is called ACT_CUSTOM. Now, the custom action can return a code indicating the
required behavior: ACT_RET_CONT means that HAProxy continues the current rule
list evaluation, and ACT_RET_STOP means that HAProxy stops the current rule
list evaluation.
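For illustration, a custom action under the new model might look like the
sketch below; the prototype is an assumption and do_the_work() is a
hypothetical helper:
    static enum act_return my_custom_action(struct act_rule *rule, struct proxy *px,
                                            struct session *sess, struct stream *s,
                                            int flags)
    {
        if (!do_the_work(s))           /* hypothetical helper */
            return ACT_RET_STOP;       /* stop evaluating the current rule list */
        return ACT_RET_CONT;           /* continue with the next rule */
    }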
Jesse Hathaway reported a crash that Cyril Bonté diagnosed as being
caused by the manipulation of srv_conn after setting it to NULL. This
happens in http-server-close mode when the server returns either a 401
or a 407, because the connection was previously closed then it's being
assigned the CO_FL_PRIVATE flag.
This bug only affects 1.6-dev as it was introduced by connection reuse code
with commit 387ebf8 ("MINOR: connection: add a new flag CO_FL_PRIVATE").
This patch is inspired by Bowen Ni's proposal and it is based on his first
implementation:
With Lua integration in HAProxy 1.6, one can change the request method,
path, uri, header, response header etc except response line.
I'd like to contribute the following methods to allow modification of the
response line.
[...]
There are two new keywords in 'http-response' that allows you to rewrite
them in the native HAProxy config. There are also two new APIs in Lua that
allows you to do the same rewriting in your Lua script.
Example:
Use it in HAProxy config:
*http-response set-code 404*
Or use it in Lua script:
*txn.http:res_set_reason("Redirect")*
I didn't take the full patch because the manipulation of the "reason" is
useless: standard reasons are associated with each returned code, and unknown
codes can take a generic reason.
So, this patch can set the status code, and the reason is automatically adapted.
This patch normalizes the return codes of the configuration parsers. Before
these changes, the tcp action parser returned -1 on failure and 0 on
success, while the http action parser returned 0 on failure and 1 on success.
The normalization uses (see the sketch below):
- ACT_RET_PRS_OK for success
- ACT_RET_PRS_ERR for failure
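A minimal sketch of a parser following this convention (the prototype, the
enum name and the memprintf() usage are assumptions):
    static enum act_parse_ret parse_my_action(const char **args, int *cur_arg,
                                              struct proxy *px,
                                              struct act_rule *rule, char **err)
    {
        if (!args[*cur_arg]) {
            memprintf(err, "missing argument for 'my-action'");
            return ACT_RET_PRS_ERR;   /* failure */
        }
        /* ... fill the rule here ... */
        (*cur_arg)++;
        return ACT_RET_PRS_OK;        /* success */
    }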
Each (http|tcp)-(request|response) action uses the same method
for looking up the action keyword during the configuration parsing.
This patch mutualizes the code.
This patch merges the configuration keyword structs. Each declared configuration
keyword struct is similar to the others. This patch simplifies the code.
An action function can return 3 statuses:
- error, if the action encounters a fatal error (like out of memory)
- yield, if the action must finish its work later
- continue, in other cases
For performance considerations, some actions are not processed by a remote
function; they are processed directly inline. Some of these actions do the same
thing but for different processing parts (request / response).
This patch gives the same name to the same actions, and normalizes
the other action names.
This patch is ONLY a rename, it doesn't modify the code.
This patch groups the action names in one file. Some actions are called
many times and need an action embedded in the action caller. The main
goal is to have only one header file grouping all definitions.
This mark permits detecting whether the action tag is over the allowed range.
- Normally, this case doesn't appear
- If it appears, it is processed by the default case of the switch
This patch removes the generic opaque type for storing the configuration of the
action "set-src" (HTTP_REQ_ACT_SET_SRC), and uses the dedicated type "struct expr".
This patch is the first of a series which merges all the action structs. The
"tcp-request content", "tcp-response content", "http-request" and
"http-response" functions have the same values and the same processing for some
defined actions, but the structs and the prototypes of the declared functions
are different.
This patch tries to unify all of these entries.
The union name "data" is a little bit heavy while we read the source
code because we can read "data.data.sint". The rename from "data" to "u"
makes the read easiest like "data.u.sint".
This patch removes the data information duplicated in both the struct
sample_data and the struct sample. Now, only the struct sample_data
contains data, and the struct sample uses the struct sample_data for storing
its own data.
appsessions started to be deprecated with the introduction of stick
tables, and the latter are much more powerful and flexible, and in
addition they are replicated between nodes and maintained across
reloads. Let's now remove appsession completely.
This strategy is less extreme than "always", it only dispatches first
requests to validated reused connections, and moves a connection from
the idle list to the safe list once it has seen a second request, thus
proving that it could be reused.
In connect_server(), if we don't have a connection attached to the
stream-int, we first look into the server's idle_conns list and pick
the first one there, detaching it from its owner if it had one.
If we used to have a connection, we close it.
This mechanism works well but doesn't scale : as servers increase,
the likeliness that the connection attached to the stream interface
doesn't match the server and gets closed increases.
This flag is set on an outgoing connection when this connection gets
some properties that must not be shared with other connections, such
as dynamic transparent source binding, SNI or a proxy protocol header,
or an authentication challenge from the server. This will be needed
later to implement connection reuse.
This function is now dedicated to idle connections only, which means
that it must not be used without any endpoint nor anything not a
connection. The connection remains attached to the stream interface.
Since we now always call this function with the reuse parameter cleared,
let's simplify the function's logic as it cannot return the existing
connection anymore. The savings on this inline function are appreciable
(240 bytes) :
$ size haproxy.old haproxy.new
text data bss dec hex filename
1020383 40816 36928 1098127 10c18f haproxy.old
1020143 40816 36928 1097887 10c09f haproxy.new
This patch removes the 32-bit unsigned integer and the 32-bit signed
integer. It replaces these types with a single signed 64-bit type.
This makes integer usage easier and clarifies signed and unsigned use.
With the previous version, signed and unsigned were used one in place of the
other, and sometimes the converters lost the sign. For example, divisions
were processed as "unsigned", so if one entry was negative, the result was
wrong.
Note that the integer pattern matching and dotted version pattern matching
are already working with signed 64 bits integer values.
There is one user-visible change : the "uint()" and "sint()" sample fetch
functions which used to return a constant integer have been replaced with
a new more natural, unified "int()" function. These functions were only
introduced in the latest 1.6-dev2 so there's no impact on regular
deployments.
The man page says that gmtime() and localtime() can return a NULL value,
and this was not tested. It appears that all the values of a 32-bit integer
are valid, but it is better to check the return of these functions.
However, with the integer moving from 32 bits to 64 bits, some 64-bit values
can be unsupported.
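A minimal sketch of the added check (helper name illustrative):
    #include <time.h>

    /* returns 1 and fills *out on success, 0 if the date cannot be
     * represented by gmtime() */
    static int safe_gmtime(time_t date, struct tm *out)
    {
        struct tm *tm = gmtime(&date);

        if (!tm)
            return 0;
        *out = *tm;
        return 1;
    }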
This option enables overriding the source IP address in an HTTP request. It is
useful when we want to set a custom source IP (e.g. a front proxy rewrites the
address, but provides the correct one in headers) or when we want to mask the
source IP address for privacy or compliance.
It acts on any expression which produces a correct IP address.
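A minimal configuration sketch (the header name is illustrative):
    # trust the client address provided by a front proxy in a request header
    http-request set-src hdr(x-forwarded-for)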
This modification makes possible to use sample_fetch_string() in more places,
where we might need to fetch sample values which are not plain strings. This
way we don't need to fetch string, and convert it into another type afterwards.
When using aliased types, the caller should explicitly check which exact type
was returned (e.g. SMP_T_IPV4 or SMP_T_IPV6 for SMP_T_ADDR).
All usages of sample_fetch_string() are converted to use new function.
This patch adds support for variables during the processing of each stream. The
variable scope can be set to 'session', 'transaction', 'request' or 'response'.
The variable type is the type returned by the assignment expression. The type
can change during processing.
The allocated memory can be controlled for each scope and each request, and for
the global process.
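A small configuration sketch, assuming the scope names above map to the usual
prefixes:
    # store the Host header in a transaction-scoped variable and reuse it later
    http-request  set-var(txn.host) req.hdr(host)
    http-response set-header X-Orig-Host %[var(txn.host)]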
This patch permits registering a new keyword with the "tcp-request content",
"tcp-request connection", "tcp-response content", "http-request" and
"http-response" keyword lists, identified only by matching the start of the
keyword.
For example, we register the keyword "set-var" with the option "match_pfx",
and the configuration keyword "set-var(var_name)" matches this entry.
Commit 9fbe18e ("MEDIUM: http: add a new option http-buffer-request")
introduced a regression due to a misplaced check causing the admin
mode of the HTTP stats not to work anymore.
That patch tried to ensure that when we need a request body for the
stats applet, and we have already waited for this body, we don't wait
for it again, but the condition was applied too early, causing the
entire processing of the body to be disabled, and it was based on the wrong
HTTP state (MSG_BODY), resulting in the test never matching.
Thanks to Chad Lavoie for reporting the problem.
This bug is 1.6-only, no backport is needed.
There are two reasons for not keeping the client connection alive upon a
redirect :
- save the client from uploading all data
- avoid keeping a connection alive if the redirect goes to another domain
The first case should consider an exception when all the data from the
client have been read already. This specifically happens on response
redirects after a POST to a server. This is an easy situation to detect.
It could later be improved to cover the cases where option
http-buffer-request is used.
Sometimes it's problematic not to have "http-response redirect" rules,
for example to perform a browser-based redirect based on certain server
conditions (eg: match of a header).
This patch adds "http-response redirect location <fmt>" which gives
enough flexibility for most imaginable operations. The connection to
the server is closed when this is performed so that we don't risk to
forward any pending data from the server.
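A small configuration sketch (the condition and target are illustrative):
    # send clients of an overloaded or failing server to a static page
    http-response redirect location /maintenance.html if { status 503 }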
Any pending response data are trimmed so that we don't risk
forwarding anything pending to the client. It's harmless to also do that
for requests so we don't need to consider the direction.
In order to support http-response redirect, the parsing needs to be
adapted a little bit to only support the "location" type, and to
adjust the log-format parser so that it knows the direction of the
sample fetch calls.
This function was made to perform a redirect on requests only, it was
using a message or txn->req in an inconsistent way and did not consider
the possibility that it could be used for the other direction. Let's
clean it up to have both a request and a response messages.
This patch adds an HTTP response capture keyword with the same behavior
as the previous patch called "MEDIUM: capture: Allow capture with slot
identifier".
This patch modifies the current http-request capture function
and adds a new keyword "id" that permits identifying a capture slot.
If the identifier doesn't exist, the action fails silently.
Note that this patch removes an unused list initialisation, which seems
to be inherited from a copy/paste. It's harmless and does not need to
be backported.
LIST_INIT((struct list *)&rule->arg.act.p[0]);
This patch adds "capture-req" and "capture-res". These two converters
capture their entry in the allocated slot given in argument and pass
the input on the output.
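A configuration sketch combining both mechanisms (slot sizes and header names
are illustrative):
    # two request capture slots, ids 0 and 1
    declare capture request len 64
    declare capture request len 64
    http-request capture req.hdr(Host) id 0
    # the converter form captures its input and passes it through unchanged
    http-request set-header X-Path %[path,capture-req(1)]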
These ones were already obsoleted in 1.4, marked for removal in 1.5,
and not documented anymore. They used to emit warnings, and do still
require quite some code to stay in place. Let's remove them now.
The "name" and "name_len" arguments in function "smp_fetch_url_param"
could be left uninitialized for subsequent calls.
[wt: no backport needed, this is an 1.6 regression introduced by
commit 4fdc74c ("MINOR: http: split the url_param in two parts") ]
This patch is part of the body_param fetch. The goal is to have a
generic url-encoded parser which can be used for parsing both the query string
and the body.
There are some situations where it's desirable to scan multiple occurrences
of a same parameter name in the query string. This change ensures this can
work, even with an empty name, which will then iterate over all parameters.
This patch removes the structs "session", "stream" and "proxy" from
the sample-fetches and converters function prototypes.
This permits to remove some weight in the prototype call.
There's an issue related with shutting down POST transfers or closing the
connection after the end of the upload : the shutdown is forwarded to the
server regardless of the abortonclose option. The problem it causes is that
during a scan, brute force or whatever, it becomes possible that all source
ports are exhausted with all sockets in TIME_WAIT state.
There are multiple issues at once in fact :
- no action is done for the close, it automatically happens at the lower
layers thanks to channel_auto_close(), so we cannot act on NOLINGER ;
- we *do* want to continue to send a clean shutdown in tunnel mode because
some protocols transported over HTTP may need this, regardless of option
abortonclose, thus we can't set the option unconditionally
- for all other modes, we do want to close the dirty way because we're
certain whether we've sent everything or not, and we don't want to eat
all source ports.
The solution is a bit complex and applies to DONE/TUNNEL states :
1) disable automatic close for everything not a tunnel and not just
keep-alive / server-close. Force-close is now covered, as is HTTP/1.0
which implicitly works in force-close mode ;
2) when processing option abortonclose, we know we can disable lingering
if the client has closed and the connection is not in tunnel mode.
Since the last case above leads to a situation where the client side reports
an error, we know the connection will not be reused, so leaving the flag on
the stream-interface is safe. A client closing in the middle of the data
transmission already aborts the transaction so this case is not a problem.
This fix must be backported to 1.5 where the problem was detected.
Due to the code being mostly inspired from the tcp-request parser, it
does some crap because both don't work the same way. The "len" argument
could be mismatched and then the length could be used uninitialized.
This is only possible in frontends of course, but it will finally
make it possible to capture arbitrary http parts, including URL
parameters or parts of the message body.
It's worth noting that an ugly (char **) cast had to be done to
call sample_fetch_string() which is caused by a 5- or 6- levels
of inheritance of this type in the API. Here it's harmless since
the function uses it as a const, but this API madness must be
fixed, starting with the one or two rare functions that modify
the args and inflict this on each and every keyword parser.
(cherry picked from commit 484a4f38460593919a1c1d9a047a043198d69f45)
Body processing is still fairly limited, but this is a start. It becomes
possible to apply regex to find contents in order to decide where to route
a request for example. Only the first chunk is parsed for now, and the
response is not yet available (the parsing function must be duplicated for
this).
req.body : binary
This returns the HTTP request's available body as a block of data. It
requires that the request body has been buffered and made available using
"option http-buffer-request". In case of a chunk-encoded body, currently only
the first chunk is analyzed.
req.body_len : integer
This returns the length of the HTTP request's available body in bytes. It may
be lower than the advertised length if the body is larger than the buffer. It
requires that the request body has been buffered and made available using
"option http-buffer-request".
req.body_size : integer
This returns the advertised length of the HTTP request's body in bytes. It
will represent the advertised Content-Length header, or the size of the first
chunk in case of chunked encoding. In order to parse the chunks, it requires
that the request body has been buffered and made available using
"option http-buffer-request".
It is sometimes desirable to wait for the body of an HTTP request before
taking a decision. This is what is being done by "balance url_param" for
example. The first use case is to buffer requests from slow clients before
connecting to the server. Another use case consists in taking the routing
decision based on the request body's contents. This option placed in a
frontend or backend forces the HTTP processing to wait until either the whole
body is received, or the request buffer is full, or the first chunk is
complete in case of chunked encoding. It can have undesired side effects with
some applications abusing HTTP by expecting unbuffered transmissions between
the frontend and the backend, so this should definitely not be used by
default.
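A configuration sketch of the first use case mentioned above (names
illustrative):
    backend post-backend
        mode http
        option http-buffer-request
        # hash POST requests on a parameter found in the buffered body
        balance url_param session_id check_post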
Note that it would not work for the response because we don't reset the
message state before starting to forward. For the response we need to
1) reset the message state to MSG_100_SENT or BODY, and 2) reset
body_len in case of chunked encoding to avoid counting it twice.
Due to the fact that we were still considering only msg->sov for the
first byte of data after calling http_parse_chunk_size(), we used to
miscompute the input data size and to count the CRLF and the chunk size
as part of the input data. The effect is that it was possible to release
the processing with 3 or 4 missing bytes, especially if they're typed by
hand during debugging sessions. This can cause the stats page to return
some errors in admin mode, and the url_param balance algorithm to fail
to properly hash a body input.
This fix must be backported to 1.5.
Recently some browsers started to implement a "pre-connect" feature
consisting in speculatively connecting to some recently visited web sites
just in case the user would like to visit them. This results in many
connections being established to web sites, which end up in 408 Request
Timeout if the timeout strikes first, or 400 Bad Request when the browser
decides to close them first. These ones pollute the log and feed the error
counters. There was already "option dontlognull" but it's insufficient in
this case. Instead, this option does the following things :
- prevent any 400/408 message from being sent to the client if nothing
was received over a connection before it was closed ;
- prevent any log from being emitted in this situation ;
- prevent any error counter from being incremented
That way the empty connection is silently ignored. Note that it is better
not to use this unless it is clear that it is needed, because it will hide
real problems. The most common reason for not receiving a request and seeing
a 408 is due to an MTU inconsistency between the client and an intermediary
element such as a VPN, which blocks too large packets. These issues are
generally seen with POST requests as well as GET with large cookies. The logs
are often the only way to detect them.
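The commit text does not restate the option's name; assuming it is the one
documented as "option http-ignore-probes", usage would be:
    frontend www
        mode http
        option http-ignore-probes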
This patch should be backported to 1.5 since it avoids false alerts and
makes it easier to monitor haproxy's status.
There's not much reason for continuing to accept HTTP/0.9 requests
nowadays except for manual testing. Now we disable support for these
by default, unless option accept-invalid-http-request is specified,
in which case they continue to be upgraded to 1.0.
While RFC2616 used to allow an undeterminate amount of digits for the
major and minor components of the HTTP version, RFC7230 has reduced
that to a single digit for each.
If a server can't properly parse the version string and falls back to 0.9,
it could then send a head-less response whose payload would be taken for
headers, which could confuse downstream agents.
Since there's no more reason for supporting a version scheme that was
never used, let's upgrade to the updated version of the standard. It is
still possible to enforce support for the old behaviour using options
accept-invalid-http-request and accept-invalid-http-response.
It would be wise to backport this to 1.5 as well just in case.
The spec mandates that content-length must be removed from messages if
Transfer-Encoding is present, not just for valid ones.
This must be backported to 1.5 and 1.4.
The rules related to how to handle a bad transfer-encoding header (one
where "chunked" is not at the final place) have evolved to mandate an
abort when this happens in the request. Previously it was only a close
(which is still valid for the server side).
This must be backported to 1.5 and 1.4.
While Transfer-Encoding is HTTP/1.1, we must still parse it in HTTP/1.0
in case an agent sends it, because it's likely that the other side might
use it as well, causing confusion. This will also result in getting rid
of the Content-Length header in such abnormal situations and in having
a clean connection.
This must be backported to 1.5 and 1.4.
RFC7230 clarified the behaviour to adopt when facing both a
content-length and a transfer-encoding: chunked in a message. While
haproxy already complied with the method for getting the message
length right, and used to detect improper content-length duplicates,
it still did not remove the content-length header when facing a
transfer-encoding: chunked. Usually it is not a problem since other
agents (clients and servers) are required to parse the message
according to the rules that have been in place since RFC2616 in
1999.
However Régis Leroy reported the existence of at least one such
non-compliant agent, so haproxy could be abused to get out of sync
with it on pipelined requests (HTTP request smuggling attack), making
it consider part of a payload as a subsequent request.
The best thing to do is then to remove the content-length according
to RFC7230. It used to be in the todo list with a fixme in the code
while waiting for the standard to stabilize, let's apply it now that
it's published.
Thanks to Régis for bringing that subject to our attention.
This fix must be backported to 1.5 and 1.4.
When one of these functions replaces a part of the query string by
a shorter or longer new one, the header parsing is broken. This is
because the start of the first header is not updated.
In the same way, the total length of the request line is not updated.
I don't see any bug caused by this omission, but I guess it is better
to store the correct length.
This bug is only in the development version.
Commit 350f487 ("CLEANUP: session: simplify references to chn_{prod,cons}(&s->{req,res})")
introduced a regression causing the cli_conn to be picked from the server
side instead of the client side, so the XFF header is not appended anymore
since the connection is NULL.
Thanks to Reinis Rozitis for reporting this bug. No backport is needed
as it's 1.6-specific.
Commit bc4c1ac ("MEDIUM: http/tcp: permit to resume http and tcp custom
actions") introduced the ability to interrupt and restart processing in
the middle of a TCP/HTTP ruleset. But it doesn't do it in a consistent
way : it checks current_rule_list, immediately dereferences current_rule,
which is only set in certain cases and never cleared. So that broke the
tcp-request content rules when the processing was interrupted due to
missing data, because current_rule was not yet set (segfault) or could
have been inherited from another ruleset if it was used in a backend
(random behaviour).
The proper way to do it is to always set current_rule before dereferencing
it. But we don't want to set it for all rules because we don't want any
action to provide a checkpointing mechanism. So current_rule is set to NULL
before entering the loop, and only used if not NULL and if current_rule_list
matches the current list. This way they both serve as a guard for the other
one. This fix also makes the current rule point to the rule instead of its
list element, as it's much easier to manipulate.
No backport is needed, this is 1.6-specific.
This patch adds support for error codes 429 and 405 to HAProxy and a
"deny_status XXX" option to "http-request deny" where you can specify which
code is returned, with 403 being the default. We really want to do this the
"haproxy way" and hope to have this patch included in the mainline. We'll
be happy to address any feedback on how this is implemented.
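A small configuration sketch (the condition is illustrative):
    # reply 405 instead of the default 403 for unexpected methods
    http-request deny deny_status 405 unless { method GET HEAD POST }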
Many such functions need a session, and till now they used to dereference
the stream. Once we remove the stream from the embryonic session, this
will not be possible anymore.
So as of now, sample fetch functions will be called with this :
- sess = NULL, strm = NULL : never
- sess = valid, strm = NULL : tcp-req connection
- sess = valid, strm = valid, strm->txn = NULL : tcp-req content
- sess = valid, strm = valid, strm->txn = valid : http-req / http-res
The registerable http_req_rules / http_res_rules used to require a
struct http_txn at the end. It's redundant with struct stream and
propagates very deep into some parts (ie: it was the reason for lua
requiring l7). Let's remove it now.
All of them can now retrieve the HTTP transaction *if it exists* from
the stream and be sure to get NULL there when called with an embryonic
session.
The patch is a bit large because many locations were touched (all fetch
functions had to have their prototype adjusted). The opportunity was
taken to also uniformize the call names (the stream is now always "strm"
instead of "l4") and to fix indent where it was broken. This way when
we later introduce the session here there will be less confusion.
Now this one is dynamically allocated. It means that 280 bytes of memory
are saved per TCP stream, but more importantly that it will become
possible to remove the l7 pointer from fetches and converters since
it will be deduced from the stream and will support being null.
A lot of care was taken because it's easy to forget a test somewhere,
and the previous code used to always trust s->txn for being valid, but
all places seem to have been visited.
All HTTP fetch functions check the txn first so we shouldn't have any
issue there even when called from TCP. When branching from a TCP frontend
to an HTTP backend, the txn is properly allocated at the same time as the
hdr_idx.
This one will not necessarily be allocated for each stream, and we want
to use the fact that it equals null to know it's not present so that we
can always deduce its presence from the stream pointer.
This commit only creates the new pool.
The header captures are now general purpose captures since tcp rules
can use them to capture various contents. That removes a dependency
on http_txn that appeared in some sample fetch functions and in the
order by which captures and http_txn were allocated.
Interestingly the reset of the header captures was done in too many
places as http_init_txn() used to do it while it was previously done
in every calling place.
The stream may never be null given that all these functions are called
from sample_process(). Let's remove this now confusing test which
sometimes happens after a dereference was already done.
When s->si[0].end was dereferenced as a connection or anything in
order to retrieve information about the originating session, we'll
now use sess->origin instead so that when we have to chain multiple
streams in HTTP/2, we'll keep accessing the same origin.
Just like for the listener, the frontend is session-wide so let's move
it to the session. There are a lot of places which were changed but the
changes are minimal in fact.
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved over there. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
Commit a0dc23f ("MEDIUM: http: implement http-request set-{method,path,query,uri}")
forgot to null-terminate the list, resulting in crashes when these actions
are used if the platform doesn't pad the struct with nulls.
Thanks to Gunay Arslan for reporting a detailed trace showing the
origin of this bug.
No backport to 1.5 is needed.
It's documented that these sample fetch functions should count all headers
and/or all values when called with no name but in practice it's not what is
being done as a missing name causes an immediate return and an absence of
result.
This bug is present in 1.5 as well and must be backported.
Thanks to MSIE/IIS, the "deflate" name is ambigous. According to the RFC
it's a zlib-wrapped deflate stream, but IIS used to send only a raw deflate
stream, which is the only format MSIE understands for "deflate". The other
widely used browsers do support both formats. For this reason some people
prefer to emit a raw deflate stream on "deflate" to serve more users even
it that means violating the standards. Haproxy only follows the standard,
so they cannot do this.
This patch makes it possible to have one algorithm name in the configuration
and another one in the protocol. This will make it possible to have a new
configuration token to add a different algorithm so that users can decide if
they want a raw deflate or the standard one.
This function is a callback for HTTP actions. This function
creates the replacement string from a build_logline() format
and transforms the header.
This patch splits this function in two parts. With this modification,
the header transformation and the replacement string are separated.
We can now transform the header with another replacement string
source than a build_logline() format.
The first part is the replacement engine. It takes a replacement action
number and a replacement string and processes the action.
The second part is the function which is called by the 'http-request
action' to replace a request line part. This function builds the
string used as the replacement.
This split permits using the replacement engine in other parts of the
code than the request action. The Lua code uses it for its own HTTP action.
These functions used an invalid header parser:
- the trailing white-spaces were embedded in the replacement regex,
- the double-quotes (") containing commas (,) were not respected.
This patch replaces this parser with the "official" parser http_find_header2().
The function http_replace_value used the wrong variable to detect the end
of the input string.
This is a regression introduced by the patch "MEDIUM: regex: Remove null
terminated strings." (c9c2daf2).
We need to backport this patch into the 1.5 stable branch.
WT: there is no possibility to overwrite existing data as we only read
past the end of the request buffer, to copy into the trash. The copy
is bounded by buffer_replace2(), just like the replacement performed
by exp_replace(). However if a buffer happens to contain non-zero data
up to the next unmapped page boundary, there's a theoretical risk of
crashing the process despite this not being reproducible in tests.
The risk is low because "http-request replace-value" did not work due
to this bug so that probably means it's not used yet.
This bug was introduced by the commit "MEDIUM: http/tcp: permit to
resume http and tcp custom actions" (bc4c1ac6ad).
Before that patch, the return code of the function was ignored.
After that patch, if the function returns 0, it is treated as a YIELD.
The function http_action_set_req_line() returns 0 in the success case.
This patch changes the return code of this function.
It is common for REST applications to return status codes other than
200, so compress the other common 200-level responses which might
contain content.
These 4 combinations are needlessly complicated since the session already
has direct access to the associated stream interfaces without having to
check an indirect pointer.
The purpose of these two macros will be to pass via the session to
find the relevant stream interfaces so that we don't need to store
the ->cons nor ->prod pointers anymore. Currently they're only defined
so that all references could be removed.
Note that many places need a second pass of clean up so that we don't
have any chn_prod(&s->req) anymore and only &s->si[0] instead, and
conversely for the 3 other cases.
This new flag "SI_FL_ISBACK" is set only on the back SI and is cleared
on the front SI. That way it's possible only by looking at the SI to
know what side it is.
We'll soon remove direct references to the channels from the stream
interface since everything belongs to the same session, so let's
first not dereference si->ib / si->ob anymore and use macros instead.
The channels were pointers to outside structs and this is not needed
anymore since the buffers have moved, but this complicates operations.
Move them back into the session so that both channels and stream interfaces
are always allocated for a session. Some places (some early sample fetch
functions) used to validate that a channel was NULL prior to dereferencing
it. Now instead we check if chn->buf is NULL and we force it to remain NULL
until the channel is initialized.
Commit bc4c1ac ("MEDIUM: http/tcp: permit to resume http and tcp custom actions")
unfortunately broke the stats applet by moving the clearing of the analyser bit
after processing the applet headers. It used to work only in HTTP/1.1 and not
in HTTP/1.0. This is 1.6-specific, no backport is needed.
Later, the processing of some actions will need to be interrupted and resumed
later. This patch permits resuming the actions. The actions that need
to run in resume mode are not yet available; they will come soon with the
Lua patches. So the code added by this patch is untestable for the moment.
The list of "tcp_exec_req_rules" cannot resume because it is called by the
unresumable function "accept_session".
Actually, this function returns a pointer to the rule that stopped
the evaluation of the rule list. Later we will integrate the possibility
of interrupting and resuming the processing of some actions. The current
response mode is not sufficient to return the "interrupt" information.
The returned pointer is never used, so I change the return type of
this function to an enum. With this enum, the function is ready to
return the "interrupt" value.
The functions "val_payload_lv" and "val_hdr" are useful with
lua. The lua automatic binding for sample fetchs needs to
compare check functions.
The "arg_type_names" permit to display error messages.
Some usages of the converters need to know the attached session. Lua
needs the session for retrieving its running context. This patch adds
the "session" as an argument of the converter prototype.
These new sample fetches retrieve the list of header names as they appear
in the request or response. This can be used for debugging, for statistics
as well as an aid to better detect the presence of proxies or plugins on
some browsers, which alter the request compared to a regular browser by
adding or reordering headers.
Commit c600204 ("BUG/MEDIUM: regex: fix risk of buffer overrun in
exp_replace()") added a control of failure on the response headers,
but forgot to check for the error during request processing. So if
the filters fail to apply, we could keep the request. It might
cause some headers to silently fail to be added for example. Note
that it's tagged MINOR because a standard configuration cannot make
this case happen.
The fix should be backported to 1.5 and 1.4 though.
The two http-req/http-resp actions "replace-hdr" and "replace-value" were
expecting exactly one space after the colon, which is wrong. It was causing
the first char not to be seen/modified when no space was present, and empty
headers not to be modified either. Instead of using name->len+2, we must use
ctx->val which points to the first character of the value even if there is
no value.
This fix must be backported into 1.5.
This commit implements the following new actions :
- "set-method" rewrites the request method with the result of the
evaluation of format string <fmt>. There should be very few valid reasons
for having to do so as this is more likely to break something than to fix
it.
- "set-path" rewrites the request path with the result of the evaluation of
format string <fmt>. The query string, if any, is left intact. If a
scheme and authority is found before the path, they are left intact as
well. If the request doesn't have a path ("*"), this one is replaced with
the format. This can be used to prepend a directory component in front of
a path for example. See also "set-query" and "set-uri".
Example :
# prepend the host name before the path
http-request set-path /%[hdr(host)]%[path]
- "set-query" rewrites the request's query string which appears after the
first question mark ("?") with the result of the evaluation of format
string <fmt>. The part prior to the question mark is left intact. If the
request doesn't contain a question mark and the new value is not empty,
then one is added at the end of the URI, followed by the new value. If
a question mark was present, it will never be removed even if the value
is empty. This can be used to add or remove parameters from the query
string. See also "set-path" and "set-uri".
Example :
# replace "%3D" with "=" in the query string
http-request set-query %[query,regsub(%3D,=,g)]
- "set-uri" rewrites the request URI with the result of the evaluation of
format string <fmt>. The scheme, authority, path and query string are all
replaced at once. This can be used to rewrite hosts in front of proxies,
or to perform complex modifications to the URI such as moving parts
between the path and the query string. See also "set-path" and
"set-query".
All of them are handled by the same parser and the same exec function,
which is why they're merged all together. For once, instead of adding
even more entries to the huge switch/case, we used the new facility to
register action keywords. A number of the existing ones should probably
move there as well.
This function (and its sister regex_exec_match2()) abstract the regex
execution but make it impossible to pass flags to the regex engine.
Currently we don't use them but we'll need to support REG_NOTBOL soon
(to indicate that we're not at the beginning of a line). So let's add
support for this flag and update the API accordingly.
The way http-request/response set-header works is stupid. For a naive
reuse of the del-header code, it removes all occurrences of the header
to be set before computing the new format string. This makes it almost
unusable because it is not possible to append values to an existing
header without first copying them to a dummy header, performing the
copy back and removing the dummy header.
Instead, let's share the same code as add-header and perform the optional
removal after the string is computed. That way it becomes possible to
write things like :
http-request set-header X-Forwarded-For %[hdr(X-Forwarded-For)],%[src]
Note that this change is not expected to have any undesirable impact on
existing configs since if they rely on the bogus behaviour, they don't
work as they always retrieve an empty string.
This fix must be backported to 1.5 to stop the spread of ugly configs.
This fetch extracts the request's query string, which starts after the first
question mark. If no question mark is present, this fetch returns nothing. If
a question mark is present but nothing follows, it returns an empty string.
This means it's possible to easily know whether a query string is present
using the "found" matching method. This fetch is the completemnt of "path"
which stops before the question mark.
It applies to the channel and it doesn't erase outgoing data, only
pending unread data, which is strictly equivalent to what recv()
does with MSG_TRUNC, so that new name is more accurate and intuitive.
channel_reserved is confusingly named. It is used to know whether or
not the rewrite area is left intact for situations where we want to
ensure we can use it before proceeding. Let's rename it to fix this
confusion.
In 1.4-dev7, a header removal mechanism was introduced with commit 68085d8
("[MINOR] http: add http_remove_header2() to remove a header value."). Due
to a typo in the function, the beginning of the headers gets desynchronized
if the header preceding the deleted one ends with an LF/CRLF combination
different from the one of the removed header. The reason is that while
rewinding the pointer, we go back by a number of bytes taking into account
the LF/CRLF status of the removed header instead of the previous one. The
case where it fails is in http-request del-header/set-header where the
multiple occurrences of a header are present and their LF/CRLF ending
differs from the preceding header. The loop then stops because no more
headers are found given that the names and length do not match.
Another point to take into consideration is that removing headers using
a loop of http_find_header2() and this function is inefficient since we
remove values one at a time while it could be simpler and faster to
remove full header lines. This is something that should be addressed
separately.
This fix must be backported to 1.5 and 1.4. Note that http-send-name-header
relies on this function as well so it could be possible that some of the
issues encountered with it in 1.4 come from this bug.
Doing so ensures that even when no memory is available, we leave the
channel in a sane condition. There's a special case in proto_http.c
regarding the compression, we simply pre-allocate the tmpbuf to point
to the dummy buffer. Not reusing &buf_empty for this allows the rest
of the code to differentiate an empty buffer that's not used from an
empty buffer that results from a failed allocation, which has the same
semantics as a full buffer.
b_alloc() now allocates a buffer and initializes it to the size specified
in the pool minus the size of the struct buffer itself. This ensures that
callers do not need to care about buffer details anymore. Also this never
applies memory poisoning, which is slow and useless on buffers.
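A minimal sketch of the size computation described above, with an assumed
pool name and field layout:

    /* The caller receives a buffer whose usable size is the pool entry
     * size minus the struct buffer header itself.
     */
    struct buffer *b = pool_alloc2(pool2_buffer);    /* pool name assumed */
    if (b) {
        b->size = pool2_buffer->size - sizeof(struct buffer);
        b->i = b->o = 0;
        b->p = b->data;
    }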
Since, during the parsing stage, curproxy always represents the proxy being
operated on, referring to proxy instead is a mistake.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
When enabling stats, response analysers were set on the request
analyser list, which 1) has no effect, and 2) means we don't have
the response analysers properly set.
In practice these response analysers are set when the connection
to the server or applet is established so we don't need/must not
set them here.
Fortunately this bug had no impact since the flags are distinct,
but it definitely is confusing.
It should be backported to 1.5.
Colin Ingarfield reported some unexplainable flags in the logs.
For example, a "LR" termination state was set on a request which was forwarded
to a server, where "LR" means that the request should have been handled
internally by haproxy.
This case happens when at least client side keep-alive is enabled. Next
requests in the connection will inherit the flags from the previous request.
2 fields are impacted : "termination_state" and "Tt" in the timing events,
where a "+" can be added, when a previous request was redispatched.
This is not critical for the service itself but can confuse troubleshooting.
The fix must be backported to 1.5 and 1.4.
When the HTTP parser is in state HTTP_MSG_ERROR, we don't know if it was
already initialized or not. If the error happens before HTTP_MSG_RQBEFORE,
random offsets might be present and we don't want to display such random
strings in debug mode.
While it's theoretically possible to randomly crash the process when running
in debug mode here, this bug was not tagged MAJOR because it would not
make sense to run in debug mode in production.
This fix must be backported to 1.5 and 1.4.
MAX_SESS_STKCTR allows one to define the number of stick counters that can
be used in parallel in track-sc* rules. The naming of this macro creates
some confusion because the value there is sometimes used as a max instead
of a count, and the config parser accepts values from 0 to MAX_SESS_STKCTR
and the processing ignores anything tracked on the last one. This means
that by default, track-sc3 is allowed and ignored.
This fix must be backported to 1.5 where the problem there only affects
TCP rules.
Commit 179085c ("MEDIUM: http: move Connection header processing earlier")
introduced a regression : the backend's HTTP mode is not considered anymore
when setting the session's HTTP mode, because wait_for_request() is only
called once, when the frontend receives the request (or when the frontend
is in TCP mode, when the backend receives the request).
The net effect is that in some situations when the frontend and the backend
do not work in the same mode (eg: keep-alive vs close), the backend's mode
is ignored.
This patch moves all that processing to a dedicated function, which is
called from the original place, as well as from session_set_backend()
when switching from an HTTP frontend to an HTTP backend in different
modes.
This fix must be backported to 1.5.
Ryan Brock reported that server stickiness did not work for WebSocket
because the cookies and headers are not modified on 1xx responses. He
found that his browser correctly presents the cookies learned on 101
responses, which was not specifically defined in the WebSocket spec,
nor in the cookie spec. 101 is a very special case. Being part of 1xx,
it's an interim response. But within 1xx, it's special because it's
the last HTTP/1 response that transits on the wire, which is different
from 100 or 102 which may appear multiple times. So in that sense, we
can consider it as a final response regarding HTTP/1, and it makes
sense to allow header processing there. Note that we still ensure not
to mangle the Connection header, which is critical for HTTP upgrade to
continue to work smoothly with agents that are a bit picky about what
tokens are found there.
The rspadd rules are now processed for 101 responses as well, but the
cache-control checks are not performed (since no body is delivered).
Ryan confirmed that this patch works for him.
It would make sense to backport it to 1.5 given that it improves end
user experience on WebSocket servers.
Commit bb2e669 ("BUG/MAJOR: http: correctly rewind the request body
after start of forwarding") was incorrect/incomplete. It used to rely on
CF_READ_ATTACHED to stop updating msg->sov once data start to leave the
buffer, but this is unreliable because since commit a6eebb3 ("[BUG]
session: clear BF_READ_ATTACHED before next I/O") merged in 1.5-dev1,
this flag is only ephemeral and is cleared once all analysers have
seen it. So we can start updating msg->sov again each time we pass
through this place with new data. With a sufficiently large amount of
data, it is possible to make msg->sov wrap and validate the if()
condition at the top, causing the buffer to advance by about 2GB and
crash the process.
Note that the offset cannot be controlled by the attacker because it is
a sum of millions of small random sizes depending on how many bytes were
read by the server and how many were left in the buffer, only because
of the speed difference between reading and writing. Also, nothing is
written, the invalid pointer resulting from this operation is only read.
Many thanks to James Dempsey for reporting this bug and to Chris Forbes for
narrowing down the faulty area enough to make its root cause analysable.
This fix must be backported to haproxy 1.5.
pat_parse_meth() had some remains of an early implementation attempt for
the patterns, it initialises a trash and never sets the pattern value there.
The result is that a non-standard method cannot be matched anymore. The bug
appeared during the pattern rework in 1.5, so this fix must be backported
there. Thanks to Joe Williams of GitHub for reporting the bug.
This results in a string-based HTTP method match returning true when
it doesn't match and conversely. This bug was reported by Joe Williams.
The fix must be backported to 1.5, though it still doesn't work because
of at least 3-4 other bugs in the long path which leads to building this
pattern list.
Before commit bbba2a8ecc
(1.5-dev24-8), the tarpit section set the timeout and returned; after this
commit, the tarpit section sets the timeout then goes to the "done" label,
which resets the timeout.
Thanks to Bryan Talbot for the bug report and analysis.
This should be backported to 1.5.
A couple of typos fixed in 'http-response replace-header':
- an error when counting the number of arguments
- a typo in the alert message
This should be backported to 1.5.
Add support for http-request track-sc, similar to what is done in
tcp-request for backends. A new act_prm field was added to HTTP
request rules to store the track params (table, counter). Just
like for TCP rules, the table is resolved while checking for
config validity. The code was mostly copied from the TCP code
with the exception that here we also count the HTTP request count
and rate by hand. Probably something could be factored out in
the future.
It seems like tracking flags should be improved to mark each hook
which tracks a key so that we can have some check points where to
increase counters of the past if not done yet, a bit like is done
for TRACK_BACKEND.
We're using the internal memory representation of base32 here, which is
wrong since these data might be exported to headers for logs or be used
to stick to a server and replicated to other peers. Let's convert base32
to big endian (network representation) when building the binary block.
This mistake is also present in 1.5, so it would be better to backport the fix.
Daniel Dubovik reported an interesting bug showing that the request body
processing was still not 100% fixed. If a POST request contained short
enough data to be forwarded at once before trying to establish the
connection to the server, we had no way to correctly rewind the body.
The first visible case is that balancing on a header does not always work
on such POST requests since the header cannot be found. But there are even
nastier implications which are that http-send-name-header would apply to
the wrong location and possibly even affect part of the request's body
due to an incorrect rewinding.
There are two options to fix the problem :
- first one is to force the HTTP_MSG_F_WAIT_CONN flag on all hash-based
balancing algorithms and http-send-name-header, but there's always a
risk that any new algorithm forgets to set it ;
- the second option is to account for the amount of skipped data before
the connection establishes so that we always know the position of the
request's body relative to the buffer's origin.
The second option is much more reliable and fits very well in the spirit
of the past changes to fix forwarding. Indeed, at the moment we have
msg->sov which points to the start of the body before headers are forwarded
and which equals zero afterwards (so it still points to the start of the
body before forwarding data). A minor change consists in always making it
point to the start of the body even after data have been forwarded. It means
that it can get a negative value (so we need to change its type to signed).
In order to avoid wrapping, we only do this as long as the other side of
the buffer is not connected yet.
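A worked illustration with hypothetical numbers may help picture the new
meaning of msg->sov :

    /* 300 bytes of headers, then body:
     *   before forwarding headers       sov = 300   (body starts ahead of buf->p)
     *   headers forwarded, p advances   sov = 0
     *   1000 body bytes forwarded       sov = -1000
     * The start of the body therefore stays at buf->p + msg->sov, which is
     * exactly what a redispatch needs in order to rewind and hash it again.
     */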
Doing this definitely fixes the issues above for the requests. Since the
response cannot be rewound we don't need to perform any change there.
This bug was introduced/remained unfixed in 1.5-dev23 so the fix must be
backported to 1.5.
As usual, when touching any is* function, Solaris complains about the
type of the element being checked. Better backport this to 1.5 since
nobody knows what the emitted code looks like since macros are used
instead of functions.
We used to only clear flags when reusing the static sample before calling
sample_process(), but that's not enough because there's a context in samples
that can be used by some fetch functions such as auth, headers and cookies,
and not reinitializing it risks that a pointer of a different type is used
in the wrong context.
An example configuration which triggers the case consists in mixing hdr()
and http_auth_group() which both make use of contexts :
http-request add-header foo2 %[hdr(host)],%[http_auth_group(foo)]
The solution is simple, initialize all the sample and not just the flags.
This fix must be backported into 1.5 since it was introduced in 1.5-dev19.
Baptiste Assmann reported a corner case in the releasing of stick-counters:
we release content-aware counters before logging. In the past it was not a
problem, but since we can now log them, it prevents one from logging
their value. Simply switching the log production and the release of the
counter fixes the issue.
This should be backported into 1.5.
The sample fetch function "base" makes use of the trash which is also
used by set-header/add-header etc... everything which builds a formatted
line. So we end up with some junk in the header if base is in use. Let's
fix this as all other fetches by using a trash chunk instead.
This bug was reported by Baptiste Assmann, and also affects 1.5.
This is the 3rd regression caused by the changes below. The latest to
date was reported by Finn Arne Gangstad. If a server responds with no
content-length and the client's FIN is never received, either we leak
the client-side FD or we spin at 100% CPU if timeout client-fin is set.
Enough is enough. The amount of tricks needed to cover these side-effects
starts to look like used toilet paper stacked over a chocolate cake. I
don't want to eat that cake anymore!
All this to avoid reporting a server-side timeout when a client stops
uploading data and haproxy expires faster than the server... A lot of
"ifs" resulting in a technically valid log that doesn't always please
users, and whose alternative causes that many issues for all other
users.
So let's revert this crap merged since 1.5-dev25 :
Revert "CLEANUP: http: don't clear CF_READ_NOEXP twice"
This reverts commit 1592d1e72a.
Revert "BUG/MEDIUM: http: clear CF_READ_NOEXP when preparing a new transaction"
This reverts commit 77d29029af.
Revert "BUG/MEDIUM: session: don't clear CF_READ_NOEXP if analysers are not called"
This reverts commit 0943757a21.
Revert "BUG/MEDIUM: http: disable server-side expiration until client has sent the body"
This reverts commit 3bed5e9337.
Revert "BUG/MEDIUM: http: correctly report request body timeouts"
This reverts commit b9edf8fbec.
Revert "BUG/MEDIUM: http/session: disable client-side expiration only after body"
This reverts commit b1982e27aa.
If a cleaner AND SAFER way to do something equivalent is found in 1.6-dev,
we *might*
consider backporting it to 1.5, but given the vicious bugs that have surfaced
since, I doubt it will happen any time soon.
Fortunately, that crap never made it into 1.4 so no backport is needed.
The new regex functions can take a string and a length. The HAProxy buffers
are not null-terminated, and using the regex_exec* functions implies
appending this null character. This patch replaces these functions with the
ones taking a string and a length as input.
Only the file "proto_http.c" is changed because it is executed much more
often than the others. The file "checks.c" has a very low usage and it is
not worth changing it; furthermore, the buffers used by "checks.c" are
null-terminated.
This patch removes all references to standard regex in haproxy. The only
remaining references are in the regex.[ch] files.
In the file src/checks.c, the original function used a "pmatch" array
which was in fact unused. This patch removes it.
When I renamed the modify-header action to replace-value, one of them
was mistakenly set to "replace-val" instead. Additionally, differentiation
of the two actions must be done on args[0][8] and not *args[8]. Thanks
Thierry for spotting...
This patch adds two new actions to http-request and http-response rulesets :
- replace-header : replace a whole header line, suited for headers
which might contain commas
- replace-value : replace a single header value, suited for headers
defined as lists.
The match consists in a regex, and the replacement string takes a log-format
and supports back-references.
Using the last rate counters, we now compute the queue, connect, response
and total times per server and per backend with a 95% accuracy over the last
1024 samples. The operation is cheap so we don't need to condition it.
This patch adds support for captures with no header name. The purpose
is to allow extra captures to be defined and logged along with the
header captures.
Commit b1982e2 ("BUG/MEDIUM: http/session: disable client-side expiration
only after body") was tricky and caused an issue which was fixed by commit
0943757 ("BUG/MEDIUM: session: don't clear CF_READ_NOEXP if analysers are
not called"). But that's not enough, another issue was introduced and further
emphasized by last fix.
The issue is that the CF_READ_NOEXP flag needs to be cleared when waiting
for a new request over that connection, otherwise we cannot expire anymore
an idle connection waiting for a new request.
This explains the neverending keepalives reported by at least 3 different
persons since dev24. No backport is needed.
Currently exp_replace() (which is used in reqrep/reqirep) is
vulnerable to a buffer overrun. I have been able to reproduce it using
the attached configuration file and issuing the following command:
wget -O - -S -q http://localhost:8000/`perl -e 'print "a"x4000'`/cookie.php
Str was being checked only in while (str) and it was possible to
read past that when more than one character was being accessed in the
loop.
WT:
Note that this bug is only marked MEDIUM because configurations
capable of triggering this bug are very unlikely to exist at all due
to the fact that most rewrites consist in static string additions
that largely fit into the reserved area (8kB by default).
This fix should also be backported to 1.4 and possibly even 1.3 since
it seems to have been present since 1.1 or so.
Config:
-------
global
    maxconn 500
    stats socket /tmp/haproxy.sock mode 600

defaults
    timeout client 1000
    timeout connect 5000
    timeout server 5000
    retries 1
    option redispatch

listen stats
    bind :8080
    mode http
    stats enable
    stats uri /stats
    stats show-legends

listen tcp_1
    bind :8000
    mode http
    maxconn 400
    balance roundrobin
    reqrep ^([^\ :]*)\ /(.*)/(.*)\.php(.*) \1\ /\3.php?arg=\2\2\2\2\2\2\2\2\2\2\2\2\2\4
    server srv1 127.0.0.1:9000 check port 9000 inter 1000 fall 1
    server srv2 127.0.0.1:9001 check port 9001 inter 1000 fall 1
Servers used to have 3 flags to store a state, now they have 4 states
instead. This avoids lots of confusion for the 4 remaining undefined
states.
The encoding from the previous to the new states can be represented
this way :
      SRV_STF_RUNNING
      |  SRV_STF_GOINGDOWN
      |  |  SRV_STF_WARMINGUP
      |  |  |
      0  x  x   SRV_ST_STOPPED
      1  0  0   SRV_ST_RUNNING
      1  0  1   SRV_ST_STARTING
      1  1  x   SRV_ST_STOPPING
Note that the case where all bits were set used to exist and was randomly
dealt with. For example, the task was not stopped, the throttle value was
still updated and reported in the stats and in the http_server_state header.
It was the same if the server was stopped by the agent or for maintenance.
It's worth noting that the internal function names are still quite confusing.
Till now, the server's state and flags were all saved as a single bit
field. It causes some difficulties because we'd like to have an enum
for the state and separate flags.
This commit starts by splitting them in two distinct fields. The first
one is srv->state (with its counter-part srv->prev_state) which are now
enums, but which still contain bits (SRV_STF_*).
The flags now lie in their own field (srv->flags).
The function srv_is_usable() was updated to use the enum as input, since
it already used to deal only with the state.
Note that currently, the maintenance mode is still in the state for
simplicity, but it must move as well.
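A hedged sketch of the resulting layout; the member names come from the
description above, the enum name and surrounding code are illustrative:

    struct server {
        /* ... */
        enum srv_state state;        /* at this stage still built from SRV_STF_* bits */
        enum srv_state prev_state;   /* previous value of state                       */
        unsigned int flags;          /* remaining independent flags                   */
        /* ... */
    };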
It's the final part of the 2 previous patches. We prevent the server from
timing out if we still have some data to pass to it. That way, even if the
server runs with a short timeout and the client with a large one, the server
side timeout will only start to count once the client sends everything. This
ensures we don't report a 504 before the server gets the whole request.
It is not certain whether the 1.4 state machine is fully compatible with
this change. Since the purpose is only to ensure that we never report a
server error before a client error if some data are missing from the client
and when the server-side timeout is smaller than or equal to the client's,
it's probably not worth attempting the backport.
This is the continuation of previous patch "BUG/MEDIUM: http/session:
disable client-side expiration only after body".
This one takes care of properly reporting the client-side read timeout
when waiting for a body from the client. Since the timeout may happen
before or after the server starts to respond, we have to take care of
the situation in three different ways :
- if the server does not read our data fast enough, we emit a 504
if we're waiting for headers, or we simply break the connection
if headers were already received. We report either sH or sD
depending on whether we've seen headers or not.
- if the server has not yet started to respond, but has read all of
the client's data and we're still waiting for more data from the
client, we can safely emit a 408 and abort the request ;
- if the server has already started to respond (thus it's a transfer
timeout during a bidirectional exchange), then we silently break
the connection, and only the session flags will indicate in the
logs that something went wrong with client or server side.
This bug is tagged MEDIUM because it touches very sensible areas, however
its impact is very low. It might be worth performing a careful backport
to 1.4 once it has been confirmed that everything is correct and that it
does not introduce any regression.
For a very long time, back in the v1.3 days, we used to rely on a trick
to avoid expiring the client side while transferring a payload to the
server. The problem was that if a client was able to quickly fill the
buffers, and these buffers took some time to reach the server, the
client should not expire while not sending anything.
In order to cover this situation, the client-side timeout was disabled
once the connection to the server was OK, since it implied that we would
at least expire on the server if required.
But there is a drawback to this : if a client stops uploading data before
the end, its timeout is not enforced and we only expire on the server's
timeout, so the logs report a 504.
Since 1.4, we have message body analysers which ensure that we know whether
all the expected data was received or not (HTTP_MSG_DATA or HTTP_MSG_DONE).
So we can fix this problem by disabling the client-side or server-side
timeout at the end of the transfer for the respective side instead of
having it unconditionally in session.c during all the transfer.
With this, the logs now report the correct side for the timeout. Note that
this patch is not enough, because another issue remains : the HTTP body
forwarders do not abort upon timeout, they simply rely on the generic
handling from session.c. So for now, the session is still aborted when
reaching the server timeout, but the culprit is properly reported. A
subsequent patch will address this specific point.
This bug was tagged MEDIUM because of the changes performed. The issue
it fixes is minor however. After some cooling down, it may be backported
to 1.4.
It was reported by and discussed with Rachel Chavez and Patrick Hemmer
on the mailing list.
On the mailing list, seri0528@naver.com reported an issue when
using balance url_param or balance uri. The request would sometimes
stall forever.
Cyril Bonté managed to reproduce it with the configuration below :
listen test :80
    mode http
    balance url_param q
    hash-type consistent
    server s demo.1wt.eu:80
and found it appeared with this commit : 80a92c0 ("BUG/MEDIUM: http:
don't start to forward request data before the connect").
The bug is subtle but real. The problem is that the HTTP request
forwarding analyzer refrains from starting to parse the request
body when some LB algorithms might need the body contents, in order
to preserve the data pointer and avoid moving things around during
analysis in case a redispatch is later needed. And in order to detect
that the connection establishes, it watches the response channel's
CF_READ_ATTACHED flag.
The problem is that a request analyzer is not subscribed to a response
channel, so it will only see changes when woken for other (generally
correlated) reasons, such as the fact that part of the request could
be sent. And since the CF_READ_ATTACHED flag is cleared once leaving
process_session(), it is important not to miss it. It simply happens
that sometimes the server starts to respond in a sequence that validates
the connection in the middle of process_session(), that it is detected
after the analysers, and that the newly assigned CF_READ_ATTACHED is
not used to detect that the request analysers need to be called again,
then the flag is lost.
The CF_WAKE_WRITE flag doesn't work either because it's cleared upon
entry into process_session(), ie if we spend more than one call not
connecting.
Thus we need a new flag to tell the connection initiator that we are
specifically interested in being notified about connection establishment.
This new flag is CF_WAKE_CONNECT. It is set by the requester, and is
cleared once the connection succeeds, where CF_WAKE_ONCE is set instead,
causing the request analysers to be scanned again.
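A hedged sketch of the new contract; the flag names come from the text
above, the channel chosen and the surrounding code are illustrative:

    /* request analyser: ask to be woken up when the connection establishes */
    s->rep->flags |= CF_WAKE_CONNECT;

    /* connection initiator, once the connection succeeds */
    if (s->rep->flags & CF_WAKE_CONNECT) {
        s->rep->flags &= ~CF_WAKE_CONNECT;
        s->rep->flags |= CF_WAKE_ONCE;  /* request analysers are scanned again */
    }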
For future versions, some better options will have to be considered :
- let all analysers subscribe to both request and response events ;
- let analysers subscribe to stream interface events (reduces number
of useless calls)
- change CF_WAKE_WRITE's semantics to persist across calls to
process_session(), but that is different from validating a
connection establishment (eg: no data sent, or no data to send)
The bug was introduced in 1.5-dev23, no backport is needed.
Till now we used to return a pointer to a rule, but that makes it
complicated to later add support for registering new actions which
may fail. For example, the redirect may fail if the response is too
large to fit into the buffer.
So instead let's return a verdict. But we needed the pointer to the
last rule to get the address of a redirect and to get the realm used
by the auth page. So these pieces of code have moved into the function
and they produce a verdict.
Both use exactly the same mechanism, except for the choice of the
default realm to be emitted when none is selected. It can be achieved
by simply comparing the ruleset with the stats' for now. This achieves
a significant code reduction and further, removes the dependence on
the pointer to the final rule in the caller.
Some of the remaining interleaving of request processing after the
http-request rules can now safely be removed, because all remaining
actions are mutually exclusive.
So we can move together all those related to an intercepting rule,
then proceed with stats, then with req*.
We still keep an issue with stats vs reqrep which forces us to
keep the stats split in two (detection and action). Indeed, from the
beginning, stats are detected before rewriting and not after. But a
reqdeny rule would stop stats, so in practice we have to first detect,
then perform the action. Maybe we'll be able to kill this in version
1.6.
Till now the Connection header was processed in the middle of the http-request
rules and some reqadd rules. It used to force some http-request actions to be
cut in two parts.
Now with keep-alive, not only that doesn't make any sense anymore, but it's
becoming a total mess, especially since we need to know the headers contents
before proceeding with most actions.
The real reason it was not moved earlier is that the "block" or "http-request"
rules can see a different version if some fields are changed there. But that
is already not reliable anymore since the values observed by the frontend
differ from those in the backend.
This patch is the equivalent of commit f118d9f ("REORG: http: move HTTP
Connection response header parsing earlier") but for the request side. It
has been tagged MEDIUM as it could theoretically slightly affect some setups
relying on corner cases or invalid setups, though this does not make real
sense and is highly unlikely.
The session's backend request counters were incremented after the block
rules while these rules could increment the session's error counters,
meaning that we could have more errors than requests reported in a stick
table! Commit 5d5b5d8 ("MEDIUM: proto_tcp: add support for tracking L7
information") is the most responsible for this.
This bug is 1.5-specific and does not need any backport.
"block" rules used to build the whole response and forgot to increment
the denied_req counters. By jumping to the general "deny" label created
in previous patch, it's easier to fix this.
The issue was already present in 1.3 and remained unnoticed, in part
because few people use "block" nowadays.
Continue the cleanup of http-request post-processing to remove some
of the interleaved tests. Here we set up a few labels to deal with
the deny and tarpit actions and avoid interleaved ifs.
We still have a plate of spaghetti in the request processing rules.
All http-request rules are executed at once, then some responses are
built interlaced with other rules that used to be there in the past.
Here, reqadd is executed after an http-req redirect rule is *decided*,
but before it is *executed*.
So let's match the doc and config checks, to put the redirect actually
before the reqadd completely.
There's no point in open-coding the sending of 100-continue in
the stats initialization code, better simply rely on the function
designed to process the message body which already does it.
Commit 844a7e7 ("[MEDIUM] http: add support for proxy authentication")
merged in v1.4-rc1 added the ability to emit a status code 407 in auth
responses, but forgot to set the same status in the logs, which still
contain 401.
The bug is harmless, no backport is needed.
Last fix did address the issue for inlined patterns, but it was not
enough because the flags are lost as well when updating patterns
dynamically over the CLI.
Also if the same file was used once with -i and another time without
-i, their references would have been merged and both would have used
the same matching method.
It appears that the patterns have two types of flags. The first
ones relate to the pattern matching, and the second ones to the
pattern storage. The pattern matching flags are the same for all
the patterns of one expression, so they are now stored in the
expression. The storage flags are information returned by the
pattern matching function; this information is relative to each
entry and is stored in the "struct pattern".
Now, the expression matching flags are forwarded to the parse
and index functions. These flags are stored during the
configuration parsing, and they are used during the parse and
index actions.
This issue was introduced in dev23 with the major pattern rework,
and is a continuation of commit a631fc8 ("BUG/MAJOR: patterns: -i
and -n are ignored for inlined patterns"). No backport is needed.
These flags are only passed to pattern_read_from_file() which
loads the patterns from a file. The functions used to parse the
patterns from the current line do not provide the means to pass
the pattern flags so they're lost.
This issue was introduced in dev23 with the major pattern rework,
and was reported by Graham Morley. No backport is needed.
The latest commit about set-map/add-acl/... causes this warning for
me :
src/proto_http.c: In function 'parse_http_req_cond':
src/proto_http.c:8863: warning: implicit declaration of function 'strndup'
src/proto_http.c:8863: warning: incompatible implicit declaration of built-in function 'strndup'
src/proto_http.c:8890: warning: incompatible implicit declaration of built-in function 'strndup'
src/proto_http.c:8917: warning: incompatible implicit declaration of built-in function 'strndup'
src/proto_http.c:8944: warning: incompatible implicit declaration of built-in function 'strndup'
Use my_strndup() instead of strndup() which is not portable. No backport
needed.
The http_(res|req)_keywords_register() functions allow registering
new keywords.
You need to declare a keyword list:
struct http_req_action_kw_list test_kws = {
    .scope = "testscope",
    .kw = {
        { "test", parse_test },
        { NULL, NULL },
    }
};
and a parsing function:
int parse_test(const char **args, int *cur_arg, struct proxy *px,
               struct http_req_rule *rule, char **err)
{
    rule->action = HTTP_REQ_ACT_CUSTOM_STOP;
    rule->action_ptr = action_function;  /* your custom action callback */
    return 0;
}
http_req_keywords_register(&test_kws);
The HTTP_REQ_ACT_CUSTOM_STOP action stops evaluation of rules after
your rule, HTTP_REQ_ACT_CUSTOM_CONT permits the evaluation of rules
after your rule.
This patch allows manipulation of ACL and MAP content thanks to any
information available in a session: source IP address, HTTP request or
response header, etc...
It's an update "on the fly" of the content of the map/acls. This means
it does not survive a reload or restart of HAProxy.
Commit bed410e ("MAJOR: http: centralize data forwarding in the request path")
has woken up an issue in redirects, where msg->next is not reset when flushing
the input buffer. The result is an attempt to forward a negative amount of
data, making haproxy crash.
This bug does not seem to affect versions prior to dev23, so no backport is
needed.
These ones report a string as "HTTP/1.0" or "HTTP/1.1" depending on the
version of the request message or the response message, respectively.
The purpose is to be able to emit custom log lines reporting this version
in a persistent way.
Currently, the parsing of the HTTP Connection header for the response
is performed at the same place as the rule sets, which means that after
parsing the beginning of the response, we still have no information on
whether the response is keep-alive compatible or not. Let's do that
earlier.
Note that this is the same code that was moved in the previous function,
both of them are always called in a row so no change of behaviour is
expected.
A future change might consist in having a late analyser to perform the
late header changes such as mangling the connection header. It's quite
painful that currently this is mixed with the rest of the processing
such as filters.
This allows the stats page to work in keep-alive mode and to be
compressed. At compression ratios up to 80%, it's quite interesting
for large pages.
We ensure to skip filters because we don't want to unexpectedly block
a response nor to mangle response headers.
All the code inherited from version 1.1 still holds a lot of sessions
called "t" because in 1.1 they were tasks. This naming is very annoying
and sometimes even confusing, for example in code involving tables.
Let's get rid of this once and for all, and before 1.5-final.
Nothing changed beyond just carefully renaming these variables.
It's useless to process 100-continue in the middle of response filters
because there's no info in the 100 response itself, and it could even
make things worse. So better use it as it is, an interim response
waiting for the next response, thus we just have to put it into
http_wait_for_response(). That way we ensure to have a valid response
in this function.
Since commit d7ad9f5 ("MAJOR: channel: add a new flag CF_WAKE_WRITE to
notify the task of writes"), we got another bug with 100-continue responses.
If the final response comes in the same packet as the 100, then the rest of
the buffer is not processed since there is no wake-up event.
In fact the change above uncovered the real culprit which is more
likely session.c which should detect that an earlier analyser was set
and should loop back to it.
A cleaner fix would be better, but setting the flag works fine.
This issue was introduced in 1.5-dev22, no backport is needed.
Patches c623c17 ("MEDIUM: http: start to centralize the forwarding code")
and bed410e ("MAJOR: http: centralize data forwarding in the request path")
merged into 1.5-dev23 cause transfers to be silently aborted after the
server timeout due to the fact that the analysers are woken up when the
timeout strikes and they believe they have nothing more to do, so they're
terminating the transfer.
No backport is needed.
This basically reimplements commit f3221f9 ("MEDIUM: stats: add support
for HTTP keep-alive on the stats page") which was reverted by commit
51437d2 after Igor Chan reported a broken stats page caused by the bug
fix by previous commit.
Commit f003d37 ("BUG/MINOR: http: don't report client aborts as server errors")
attempted to fix a longstanding issue by which some client aborts could be
logged as server errors. Unfortunately, one of the tests involved there also
catches truncated server responses, which are reported as client aborts.
Instead, only check that the client has really closed using the abortonclose
option, just as is done in the request path (which means that the close was
propagated to the server).
The faulty fix above was introduced in 1.5-dev15, and was backported into
1.4.23.
Thanks to Patrick Hemmer for reporting this issue with traces showing the
root cause of the problem.
Commit ad90351 ("MINOR: http: Add the "language" converter to for use with accept-language")
introduced a typo in parse_qvalue :
    if (*end)
        *end = qvalue;
while it should be :
    if (end)
        *end = qvalue;
since end is tested for being NULL. This crashes when selecting the
compression algorithm since end is NULL here. No backport is needed,
this is just in latest 1.5-dev.
The forwarding code is never obvious to enter into for newcomers, so
better improve the documentation about how states are chained and what
happens for each of them.
Doing so avoids calling channel_forward() for each part of the chunk
parsing and lowers the number of calls to channel_forward() to only
one per buffer, resulting in about 11% performance increase on small
chunks forwarding rate.
The call to flush the compression buffers only needs to be done when
entering the final states or when leaving with missing data. After
that, if trailers are present, they have to be forwarded.
Now we have valid buffer offsets, we can use them to safely parse the
input and only forward when needed. Thus we can get rid of the
consumed_data accumulator, and the code now works both for chunked and
content-length, even with a server feeding one byte at a time (which
systematically broke the previous one).
It's worth noting that 0<CRLF> must always be sent after end of data
(ie: chunk_len==0), and that the trailing CRLF is sent only in
content-length mode, because in chunked mode we'll have to pass trailers.
This is basically a revert of commit 667c2a3 ("BUG/MAJOR: http: compression
still has defects on chunked responses").
The latest changes applied to message pointers should have got rid of all
the issues that were making the compression of partial chunks unreliable.
Currently, we forward headers only if the incoming message is still before
HTTP_MSG_CHUNK_SIZE, otherwise they'll be considered as data. In practice
this is always true for the response since there's no data inspection, and
for the request there is no compression so there's no problem with forwarding
them as data.
But the principle is incorrect and will make it difficult to later add data
processing features. So better fix it now.
The new principle is simple :
- if headers were not yet forwarded, forward them now.
- while doing so, check if we need to update the state
If for some reason, the compression returns an error, the compression
is not deinitialized which also means that any pending data are not
flushed and could be lost, especially in the chunked-encoded case.
No backport is needed.
When a parsing error was encountered in a chunked response, we failed
to properly deinitialize the compression context. There was no impact
till now since compression of chunked responses was disabled. No backport
is needed.
Thanks to the last updates on the message pointers, it is now safe again to
enable forwarding of the request headers while waiting for the connection to
complete because we know how to safely rewind this part.
So this patch slightly modifies what was done in commit 80a92c0 ("BUG/MEDIUM:
http: don't start to forward request data before the connect") to let up to
msg->sov bytes be forwarded when waiting for the connection. The resulting
effect is that a POST request may now be sent with the connect's ACK, which
still saves a packet and may even be useful later when TFO is supported.
In order to avoid abusively relying on buf->o to guess how many bytes to
rewind during a redispatch, we now clear msg->sov. Thus the meaning of this
field is exactly "how many bytes of headers are left to be forwarded". It
is still possible to rewind because msg->eoh + msg->eol equal that value
before scheduling the forwarding, so we can always subtract them.
http_hdr_rewind() returns the number of bytes to rewind before buf->p to
find the beginning of headers. At the moment it's not exact as it still
relies on buf->o, assuming that no other data from a past message were
pending there, but this is what was done until now.
The purpose is to centralize further ->sov changes aiming at avoiding
to rely on buf->o.
We used to have msg->sov updated for every chunk that was parsed. The issue
is that we want to be able to rewind after chunks were parsed in case we need
to redispatch a request and perform a new hash on the request or insert a
different server header name.
Currently, msg->sov and msg->next make parallel progress. We reached a point
where they're always equal because msg->next is initialized from msg->sov,
and has msg->sov's value subtracted from it each time msg->sov bytes are forwarded.
So we can now ensure that msg->sov can always be replaced by msg->next for
every state after HTTP_MSG_BODY where it is used as a position counter.
This allows us to keep msg->sov untouched whatever the number of chunks that
are parsed, as is needed to extract data from POST request (eg: url_param).
However, we still need to know the starting position of the data relative to
the body, which differs by the chunk size length. We use msg->sol for this
since it's now always zero and unused in the body.
So with this patch, we have the following situation :
- msg->sov = msg->eoh + msg->eol = size of the headers including last CRLF
- msg->sol = length of the chunk size if any. So msg->sov + msg->sol = DATA.
- msg->next corresponds to the byte being inspected based on the current
state and is always >= msg->sov before starting to forward anything.
Since sov and next are updated in case of header rewriting, a rewind will
fix them both when needed. Of course, ->sol has no reason for changing in
such conditions, so it's fine to keep it relative to msg->sov.
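A worked example with hypothetical numbers, for a chunked request whose
headers end with a CRLF:

    /*   headers: 250 bytes total     ->  msg->eoh + msg->eol = msg->sov = 250
     *   first chunk header "1a\r\n"  ->  msg->sol = 4
     *   start of chunk data          =   msg->sov + msg->sol = 254
     *   msg->next                    >=  msg->sov until forwarding starts
     */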
In theory, even if a redispatch has to be performed, a transformation
occurring on the request would still work because the data moved would
still appear at the same place relative to buf->p.
This function is only a parser, it must start to parse at the next character
and only update the outgoing relative pointers, but not expect the buffer to
be aligned with the next byte to be parsed.
It's important to fix this otherwise we cannot use this function to parse
chunks without starting to forward data.
There are still some pending issues in the gzip compressor, and fixing
them requires a better handling of intermediate parsing states.
Another issue to deal with is the rewinding of a buffer during a redispatch
when a load balancing algorithm involves L7 data because the exact amount of
data to rewind is not clear. At the moment, this is handled by unwinding all
pending data, which cannot work in responses due to pipelining.
Last, having a first analysis which parses the body and another one which
restarts from where the parsing was left is wrong. Right now it only works
because we never both parse and transform in the same direction. But that
is wrong anyway.
In order to address the first issue, we'll have to use msg->eoh + msg->eol
to find the end of headers, and we still need to store the information about
the forwarded header length somewhere (msg->sol might be reused for this).
msg->sov may only be used for the start of data and not for subsequent chunks
if possible. This first implies that we stop sharing it with header length,
and stop using msg->sol there. In fact we don't need it already as it is
always zero when reaching the HTTP_MSG_BODY state. It was only updated to
reflect a copy of msg->sov.
So now as a first step into that direction, this patch ensure that msg->sol
is never re-assigned after being set to zero and is not used anymore when
we're dealing with HTTP processing and forwarding. We'll later reuse it
differently but for now it's secured.
The patch does nothing magic, it only removes msg->sol everywhere it was
already zero and avoids setting it. In order to keep the sov-sol difference,
it now resets sov after forwarding data. In theory there's no problem here,
but the patch is still tagged major because that code is complex.
One of the issues we face when we need to either forward headers only
before compressing, or rewind the stream during a redispatch is to know
the proper length of the request headers. msg->eoh always has the total
length up to the last CRLF, and we never know whether the request ended
with a single LF or a standard CRLF. This makes it hard to rewind the
headers without explicitly checking the bytes in the buffer.
Instead of doing so, we now use msg->eol to carry the length of the last
CRLF (either 1 or 2). Since it is not modified at all after HTTP_MSG_BODY,
and was only left in an undefined state, it is safe to use at any moment.
Thus, the complete header length to forward or to rewind now is always
msg->eoh + msg->eol.
Content-length encoded message bodies are trivial to deal with, but
chunked-encoded will require improvements, so let's separate the code
flows between the two to ease next steps. The behaviour is not changed
at all, the code is only rearranged.
This is the continuation of previous patch. Now that full buffers are
not rejected anymore, let's wait for at least the advertised chunk or
body length to be present or the buffer to be full. When either
condition is met, the message processing can go forward.
Thus we don't need to use url_param_post_limit anymore, which was passed
in the configuration as an optional <max_wait> parameter after the
"check_post" value. This setting was necessary when the feature was
implemented because there was no support for parsing message bodies.
The argument is now silently ignored if set in the configuration.
http_process_request_body() currently expects a request body containing
exactly an expected message body. This was done in order to support load
balancing on a unique POST parameter but the way it's done still suffers
from some limitations. One of them is that there is no guarantee that the
accepted message will contain the appropriate string if it starts with
another parameter. But at the same time it will reject a message when the
buffer is full.
So as a first step, we don't reject anymore message bodies that fill the
buffer.
language(<value[;value[;value[;...]]]>[,<default>])
Returns the value with the highest q-factor from a list as
extracted from the "accept-language" header using "req.fhdr".
Values with no q-factor have a q-factor of 1. Values with a
q-factor of 0 are dropped. Only values which belong to the
list of semi-colon delimited <values> will be considered. If
no value matches the given list and a default value is
provided, it is returned. Note that language names may have
a variant after a dash ('-'). If this variant is present in
the list, it will be matched, but if it is not, only the base
language is checked. The match is case-sensitive, and the
output string is always one of those provided in arguments.
The ordering of arguments is meaningless, only the ordering
of the values in the request counts, as the first value among
multiple sharing the same q-factor is used.
Example :
# this configuration switches to the backend matching a
# given language based on the request :
acl de req.fhdr(accept-language),language(de;es;fr;en) de
acl es req.fhdr(accept-language),language(de;es;fr;en) es
acl fr req.fhdr(accept-language),language(de;es;fr;en) fr
acl en req.fhdr(accept-language),language(de;es;fr;en) en
use_backend german if de
use_backend spanish if es
use_backend french if fr
use_backend english if en
default_backend choose_your_language
OpenBSD complains about this use of sprintf() :
src/proto_http.o(.text+0xb0e6): In function `http_process_request':
src/proto_http.c:4127: warning: sprintf() is often misused, please use snprintf()
Here there's no risk as the strings are way shorter than the buffer size
but let's fix it anyway.
RFC 1945 (4.1) defines an HTTP/0.9 request ("Simple-Request") as:
Simple-Request = "GET" SP Request-URI CRLF
HAProxy tries to automatically upgrade HTTP/0.9 requests to HTTP/1.0,
by appending "HTTP/1.0" to the request and setting the
Request-URI to "/" if it was not present. The latter however is
RFC-incompatible, as HTTP/0.9 requests must already have a Request-URI
according to the definition above. Additionally,
http_upgrade_v09_to_v10() does not check whether the request method is
indeed GET (the mandatory method for HTTP/0.9).
As a result, any single- or double-word request line is regarded as a
valid HTTP request. We fix this by failing in http_upgrade_v09_to_v10()
if the request method is not GET or the request URI is not present.
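A hedged sketch of the added check; the helper names used here are
placeholders, not the actual parser fields:

    /* inside http_upgrade_v09_to_v10(): refuse the upgrade unless the line
     * really is a Simple-Request, i.e. "GET SP Request-URI CRLF" */
    if (!req_method_is_get(msg) || !req_has_uri(msg))
        return 0;   /* not valid HTTP/0.9, let the caller reject the request */
    /* ... otherwise append " HTTP/1.0" as before ... */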
The function url2sa() quickly converts a URL like http://<ip>:<port> into a
struct sockaddr_storage. This patch adds:
- https support
- the ability to return the parsed length
- IPv6 support
- synchronous DNS resolution, only during haproxy startup.
The fast IPv4 conversion path is kept. IPv6 is slower, because I use
the standard IPv6 parser function.
Till now we didn't consider "q=". It's problematic because the first
effect is that compression tokens were not even matched if it was
present.
It is important to parse it correctly because we still want to allow
a user-agent to send "q=0" to explicitly disable a compressor, or to
specify its preferences.
Now, q-values are respected in order of precedence, and when several
q-values are equal, the first occurrence is used.
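Example (illustrative): with "Accept-Encoding: gzip;q=0, deflate", gzip is
now correctly excluded because of its null q-factor and deflate is selected,
whereas such a header could previously prevent any compressor from being
matched at all.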
This patch replaces a lot of pointers with pattern matching identifiers. If
the declared ACL uses all the predefined pattern matching functions, the
register function picks up the functions provided by "pattern.c" and
identified by the PAT_LATCH_*.
If the ACL uses its own functions, they can be declared, and the ACL
registration does not change them.
The find_smp function searches for the smp using the value of the pat_ref_elt
pointer. The pat_find_smp_* functions are no longer used; the function
pattern_find_smp(), which knows all the pattern indexations, can be used instead.
All the pattern delete functions can use their reference to the original
"struct pat_ref_elt" to find the element to be removed. The functions
pat_del_list_str() and pat_del_meth() were deleted because, after
applying this modification, they had the same code as pat_del_list_ptr().
This patch moves the expect_type variable from "struct pattern" to
"struct pattern_head". This variable is set during the declaration of
ACLs and MAPs. With this change, the function "pat_parse_len()" becomes
useless and can be replaced by "pat_parse_int()".
Implicit ACLs by default rely on the fetch's output type, so let's simply do
the same for all other ones. It has been verified that they all match.
Some functions need to change the sample associated with a pattern. This
new pointer permits returning a pointer to the sample pointer, so the
caller can use or change the value.
This commit adds a delete function for patterns. It looks up all
instances of the pattern to delete and deletes them all. The fetch
keyword declarations have been extended to point to the appropriate
delete function.
The match function knows the format of the pattern. The pattern can be
stored in a list or in a tree, and the pattern matching function itself
uses the right entry point and indexation type.
Each pattern matching function returns the struct pattern that matched. If
the flag "fill" is set, the struct pattern is filled, otherwise the
content of this struct must not be used.
With this feature, the general pattern matching functions no longer need
exceptions for building the "struct pattern".
The methods are currently stored using two types: an integer if the method
is known, and a string if the method is not known. The fetch is declared as
UINT, but in some cases it can provide a STR.
This patch creates a new type called METH. This type contains an integer
for known methods and a string for the other methods. It can be used with
automatic converters.
The pattern matching can expect a method.
During the free or prune functions, the http_meth pattern is freed. This
patch initialises the freed pointer to NULL.
The operations applied on types SMP_T_CSTR and SMP_T_STR are the same,
but the check code and the declarations are duplicated, because actions
must be declared for both SMP_T_C* and SMP_T_*. The declared actions and
checks are the same, which makes the code more complex. Only the "conv"
functions can change from "C*" to "*".
Now, if a function needs to modify an input string, it can call the new
function smp_dup(). This one duplicates the data into a trash buffer.
The pattern parse functions put the parsed result in a "struct pattern"
without memory allocation. If the pattern must reference the input data
without changes, the pattern points to the parsed string. If buffers are
needed to store translated data, the trash buffer is used. The indexation
function allocates the memory later, if needed.
Before this patch, the indexation function checked the declared pattern
matching function and indexed the data according to this function. This
makes it hard to add new indexation modes.
This commit adds dedicated indexation functions. Each struct pattern is
associated with one indexation function. This function indexes the data
according to the type of pattern and the type of match.
After the previous patches, the "pat_parse_strcat()" function disappears,
and the "pat_parse_int()" and "pat_parse_dotted_ver()" functions no longer
use the "opaque" argument and take only one string as input.
So, after this patch, each pattern parser no longer uses the opaque
variable and takes only one string as input. This patch changes the
prototype of the pattern parsing functions.
Now, the "char **args" is replaced by a "char *arg", the "int *opaque"
is removed, and these functions return 1 on success and 0 on failure.
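A hedged sketch of a parser following the new convention; the one-string
input and the 1/0 return value come from the description above, while the
error-reporting argument and the struct pattern members used here are
assumptions about the surrounding API:

    #include <string.h>

    /* Illustrative parser: one input string, no opaque state, no allocation.
     * The pattern simply references the input; the indexation function will
     * copy or allocate later if it needs to. */
    int pat_parse_example(const char *arg, struct pattern *pattern, char **err)
    {
        if (!*arg) {
            memprintf(err, "expects a non-empty string");
            return 0;                        /* failure */
        }
        pattern->ptr.str = (char *)arg;      /* reference the parsed string */
        pattern->len     = strlen(arg);
        return 1;                            /* success */
    }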
This patch removes the limit of 32 groups. It also makes it possible to use
the standard "pat_parse_str()" function in place of "pat_parse_strcat()".
The "pat_parse_strcat()" function is no longer used and is removed. Before
this patch, the groups were stored in a bitfield; now they are stored in a
list of strings. The matching is slower, but the number of groups is low
and the list of allowed groups is generally short.
The fetch function "smp_fetch_http_auth_grp()", used with the name
"http_auth_group", returns the authenticated username. It can be used as a
string for displaying the username, or with the ACL "http_auth_group" for
checking the group of the user.
Maybe the names of the ACL and fetch methods are no longer suitable, but
I keep the current names to preserve compatibility with existing
configurations.
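A hedged configuration sketch (userlist, group, user and frontend names are
illustrative):

    userlist ops
        group admin users alice
        user alice insecure-password secret

    frontend www
        mode http
        acl auth_ok  http_auth(ops)
        acl is_admin http_auth_group(ops) admin
        http-request auth realm Ops if !auth_ok
        http-request deny if !is_admin

Requests are first authenticated against the "ops" userlist, then only
members of the "admin" group are allowed through.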
The function "userlist_postinit()" is created from verification code
stored in the big function "check_config_validity()". The code is
adapted to the new authentication storage system and it is moved in the
"src/auth.c" file. This function is used to check the validity of the
users declared in groups and to check the validity of groups declared
on the "user" entries.
This resolve function is executed before the check of all proxy because
many acl needs solved users and groups.
The binary samples are sometimes copied as-is into HTTP headers.
A sample can contain bytes that the HTTP RFC does not allow in
header content, for example if it was extracted from binary data.
The resulting HTTP request can thus be invalid.
This issue does not yet happen because haproxy currently (mistakenly)
hex-encodes binary data, so it is not really possible to retrieve
invalid HTTP chars.
The solution consists in hex-encoding all non-printable chars, prefixing
each encoded byte with a '%' sign.
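A minimal sketch of that escaping rule; the helper name, the buffer
handling and the use of isprint() as the "printable" test are illustrative,
not the actual HAProxy code:

    #include <ctype.h>
    #include <stddef.h>

    /* Illustrative: copy src into dst, emitting any non-printable byte as
     * '%' followed by its two-digit hexadecimal value.  Returns the output
     * length; dst is always NUL-terminated. */
    static size_t escape_nonprintable(char *dst, size_t dstlen,
                                      const unsigned char *src, size_t srclen)
    {
        static const char hex[] = "0123456789ABCDEF";
        size_t i, o = 0;

        for (i = 0; i < srclen && o + 3 < dstlen; i++) {
            if (isprint(src[i])) {
                dst[o++] = src[i];
            } else {
                dst[o++] = '%';
                dst[o++] = hex[src[i] >> 4];
                dst[o++] = hex[src[i] & 0x0F];
            }
        }
        dst[o] = '\0';
        return o;
    }

For example, a raw 0x0A byte in a binary sample would be emitted as "%0A"
in the header value.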
No backport is needed since existing code is not affected yet.
Currently there are two places where the compression context is released,
one in session_free() and another one in http_end_txn_clean_session().
Both of them call http_end_txn(), either directly or via http_reset_txn(),
and this function is made for this exact purpose. So let's centralize the
call there instead.
Currently, "balance url_param check_post" randomly works. If the client
sends chunked data and there's another chunk after the one containing the
data, http_request_forward_body() will advance msg->sov and move the start
of data to the beginning of the last chunk, and get_server_ph_post() will
not find the data.
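The kind of setup affected looks like this (hedged sketch, parameter name
and addresses are illustrative):

    backend app
        mode http
        balance url_param userid check_post
        server s1 192.168.0.10:8080
        server s2 192.168.0.11:8080

With chunked POST bodies, the hash parameter could end up being searched in
the wrong chunk, as described above.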
In order to avoid this, we add an HTTP_MSGF_WAIT_CONN flag whose goal is
to prevent the forwarding code from parsing until the connection is
confirmed, so that we're certain not to fail on a redispatch. Note that
we need to force channel_auto_connect() since the output buffer is empty
and a previous analyser might have stopped auto-connect.
The flag is currently set whenever some L7 POST analysis is needed for a
connect() so that it correctly addresses all corner cases involving a
possible rewind of the buffer, waiting for a better fix.
Note that this has been broken for a very long time. Even all 1.4 versions
seem broken but differently, with ->sov pointing to the end of the arguments.
So the fix should be considered for backporting to all stable releases,
possibly including 1.3 which works differently.
Finn Arne Gangstad reported that commit 6b726adb35 ("MEDIUM: http: do
not report connection errors for second and further requests") breaks
support for serving static files by abusing the errorfile 503 statement.
Indeed, a second request over a connection sent to any server or backend
returning 503 would silently be dropped.
The proper solution consists in adding a flag on the session indicating
that the server connection was reused, and to only avoid the error code
in this case.
Summary:
Track and report last session time on the stats page for each server
in every backend, as well as the backend.
This attempts to address the requirement in the ROADMAP
- add a last activity date for each server (req/resp) that will be
displayed in the stats. It will be useful with soft stop.
The stats page reports this as time elapsed since last session. This
change does not adequately address the requirement for long running
sessions (websocket, RDP, etc.).
Useless strncpy calls were done in those two sample fetches; the
"struct chunk" allows us to dump the specified length.
The encode_string() in capture.req.uri was judged inappropriate and was
removed.
The return type was fixed to SMP_T_CSTR.
Add two sample fetches allowing the extraction of the method and the URI of
an HTTP request.
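A hedged usage sketch (the surrounding log-format string is illustrative):

    frontend www
        mode http
        log-format "%ci:%cp [%t] %[capture.req.method] %[capture.req.uri]"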
FIXME: the sample fetches parser can't add the LW_REQ requirement, at
the moment this flag is used automatically when you use sample fetches.
Note: also fixed the alphabetical order of other capture.req.* keywords
in the doc.
Yesterday's commit 70dffda ("MAJOR: http: switch to keep-alive mode by default")
broke HTTP/1.0 handling without keep-alive when keep-alive is enabled both in
the frontend and in the backend.
Before this patch, it used to work because tunnel mode was the default one,
so if no mode was present in the frontend and a mode was set in the backend,
the backend was the first one to parse the header. This is what the original
patch tried to do with keep-alive by default, causing the version and the
connection header to be ignored if both the frontend and the backend were
running in keep-alive mode.
The fix consists in always parsing the header in non-tunnel mode, and
processing the rest of the logic at least once, and again if the
backend works in a different mode than the frontend.
This is 1.5-specific, no backport is needed.
The authentication function "get_http_auth()" extracts credentials from
the request and keeps these values in a shared cache. This function sets
a flag in the session indicating that the authentication has already been
parsed and that the values stored in the cache are available. If this flag
is set, the Authorization header is not re-parsed and the shared cache is
used.
If two requests are processed simultaneously, the first one checks the
credentials. After this, the second request also checks its credentials
and changes the data stored in the shared cache. When the first request
re-checks the credentials (for many reasons), they have changed. The change
can introduce a segfault.
This patch deactivates the cache upon success. When we need
authentication information from one request, it is re-parsed and
re-decoded. However, a failure to retrieve credentials is still
cached to avoid useless lookups.
This fix needs to be backported to 1.4 as well.
Since we support HTTP keep-alive, there is no more reason for staying
in tunnel mode by default. It is confusing for new users and creates
more issues than it solves. Option "http-tunnel" is available to force
its use if really desired.
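For configurations that really depend on the old behaviour, a hedged
example of forcing it back:

    defaults
        mode http
        option http-tunnel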
Switching to KA by default has implied changing the value of some
option flags and some transaction flags so that value zero (default)
matches keep-alive. That explains why more code has been changed than
expected. Tests have been run on the 25 combinations of frontend and
backend options, plus a few with option http-pretend-keepalive, and
no anomaly was found.
The relation between frontend and backends remains the same. Options
have been updated to take precedence over http-keep-alive which is now
implicit.
All references in the doc to haproxy not supporting keep-alive have
been fixed, and the doc for config options has been updated.
There's no particular reason for having keep-alive + httpclose combine
into forceclose when set in different frontend/backend sections, since
keep-alive does not close anything by default. Let's have this
combination remain httpclose only.
At the very beginning of haproxy, there was "option httpclose" to make
haproxy add a "Connection: close" header in both directions to invite
both sides to agree on closing the connection. It did not work with some
rare products, so "option forceclose" was added to do the same and actively
close the connection. Then client-side keep-alive was supported, so option
http-server-close was introduced. Now we have keep-alive with a fourth
option, not to mention the implicit tunnel mode.
The connection configuration has become a total mess because all the
options above may be combined together, despite almost everyone thinking
they cancel each other, as judging from the common problem reports on the
mailing list. Unfortunately, re-reading the doc shows that it's not clear
at all that options may be combined, and the opposite seems more obvious
since they're compared. The most common issue is options being set in the
defaults section that are not negated in other sections, but are just
combined when the user expects them to be overloaded. The migration to
keep-alive by default will only make things worse.
So let's start to address the first problem. A transaction can only work in
5 modes today :
- tunnel : haproxy doesn't bother with what follows the first req/resp
- passive close : option httpclose
- forced close : option forceclose
- server close : option http-server-close with keep-alive on the client side
- keep-alive : option http-keep-alive, end to end
All 16 combinations for each section fall into one of these cases. Same for
the 256 combinations resulting from different frontend+backend modes.
With this patch, we're doing something slightly different, which will not
change anything for users with valid configs, and will only change the
behaviour for users with unsafe configs. The principle is that these options
may not be combined anymore, and that the latest one always overrides all the
other ones, including those inherited from the defaults section. The "no
option xxx" statement is still supported to cancel one option and fall back
to the default one. It is mainly needed to ignore defaults sections (eg:
force the tunnel mode). The frontend+backend combinations have not changed.
So, for example, the following configuration used to put the connection
into forceclose :

    defaults http
        mode http
        option httpclose

    frontend foo
        option http-server-close

=> http-server-close + httpclose = forceclose before this patch! Now
the frontend's config replaces the defaults config and results in
the more expected http-server-close.
All 25 combinations of the 5 modes in (frontend,backend) have been
successfully tested.
In order to prepare for upcoming changes, a new "option http-tunnel" was
added. It currently only voids all other options, and has the lowest
precedence when mixed with another option in another frontend/backend.
When using some log-format directives in header insertion without HTTP mode,
the config parser used to report a cryptic message about option httplog being
downgraded to tcplog and with "(null):0" as the file name and line number.
This is because the lfs_file and lfs_line were not properly set for some valid
use cases of log-format directives. Now we cover http-request and http-response
as well.
One year ago, commit 5d5b5d8 ("MEDIUM: proto_tcp: add support for tracking
L7 information") brought support for tracking L7 information in tcp-request
content rules. Two years earlier, commit 0a4838c ("[MEDIUM] session-counters:
correctly unbind the counters tracked by the backend") used to flush the
backend counters after processing a request.
While that earliest patch was correct at the time, it became wrong after
the second patch was merged. The code does what it says, but the concept
is flawed. "TCP request content" rules are evaluated for each HTTP request
over a single connection. So if such a rule in the frontend decides to
track any L7 information or to track L4 information when an L7 condition
matches, then it is applied to all requests over the same connection even
if they don't match. This means that a rule such as :
tcp-request content track-sc0 src if { path /index.html }
will count one request for index.html, and another one for each of the
objects present on this page that are fetched over the same connection
which sent the initial matching request.
Worse, it is possible to make the code do stupid things by using multiple
counters:
tcp-request content track-sc0 src if { path /foo }
tcp-request content track-sc1 src if { path /bar }
Just sending two requests first, one with /foo, one with /bar, shows
twice the number of requests for all subsequent requests. Just because
both of them persist after the end of the request.
So the decision to flush backend-tracked counters was not the correct
one. In practice, what is important is to flush the counters tracked by
content-based rules, since they are the ones evaluated for each request.
Doing so requires new flags in the session however, to keep track of
which stick-counter was tracked by what ruleset. A later change might
make this easier to maintain over time.
This bug is 1.5-specific, no backport to stable is needed.
It's easier and safer to rely on conn_ctrl_ready() everywhere than to
check the flag itself. It will also simplify adding extra checks later
if needed. Some useless controls for !ctrl have been removed, as the
CTRL_READY flag itself guarantees ctrl is set.
Recent commit d7ad9f5 ("MAJOR: channel: add a new flag CF_WAKE_WRITE to
notify the task of writes") introduced this new CF_WAKE_WRITE flag that
an analyser which requires some free space to write must set if it wants
to be notified.
Unfortunately, some places were missing. More specifically, the
compression engine can rarely be stuck by a lack of output space,
especially when dealing with non-compressible data. It then has to
stop until some pending data are flushed and for this it must set
the CF_WAKE_WRITE flag. But these cases were missed by the commit
above.
Fortunately, this change was introduced very recently and never
released, so the impact was limited.
Huge thanks to Sander Klein who first reported this issue and who kindly
and patiently provided lots of traces and test data that made it possible
to reproduce, analyze, then fix this issue.
Patrick Hemmer reported that using unique_id_format and logs did not
report the same unique ID counter since commit 9f09521 ("BUG/MEDIUM:
unique_id: HTTP request counter must be unique!"). This is because
the increment was done while producing the log message, so it was
performed twice.
A better solution consists in fetching a new value once per request
and saving it in the request or session context for all of this
request's life.
It happens that sessions already have a unique ID field which is used
for debugging and reporting errors, and which differs from the one
sent in logs and unique_id header.
So let's change this to reuse this field to have coherent IDs everywhere.
As of now, a session gets a new unique ID once it is instantiated. This
means that TCP sessions will also benefit from a unique ID that can be
logged. And this ID is renewed for each extra HTTP request received on
an existing session. Thus, all TCP sessions and HTTP requests will have
distinct IDs that will be stable along all their life, and coherent
between all places where they're used (logs, unique_id header,
"show sess", "show errors").
This feature is 1.5-specific, no backport to 1.4 is needed.
It's a bit hazardous to wipe out all channel flags; this flag should
be left intact as it protects against recursive calls. Fortunately,
we have no possibility to meet this situation with current applets,
but better fix it before it becomes an issue.
This bug has been there for a long time, but it doesn't seem worth
backporting the fix.
Since commit 6b66f3e ([MAJOR] implement autonomous inter-socket forwarding)
introduced in 1.3.16-rc1, we've been relying on a stupid mechanism to wake
up the task after a write, which was an exact copy-paste of the reader side.
The principle was that if we empty a buffer and there's no forwarding
scheduled or if the *producer* is not in a connected state, then we wake
the task up.
That does not make any sense. It happens to wake up too late sometimes (eg,
when the request analyser waits for some room in the buffer to start to
work), and leads to unneeded wakeups in client-side keep-alive, because
the task is woken up when the response is sent, while the analysers are
simply waiting for a new request.
In order to fix this, we introduce a new channel flag : CF_WAKE_WRITE. It
is designed so that an analyser can explicitly request being notified when
some data were written. It is used only when the HTTP request or response
analysers need to wait for more room in the buffers. It is automatically
cleared upon wake up.
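A hedged sketch of the usage pattern: CF_WAKE_WRITE and the channel's flags
field come from the description above, while the helper and its "no room"
argument are purely illustrative:

    /* Illustrative analyser fragment: when no room is left to produce data,
     * request a wake-up after the next successful write and stop for now. */
    static int wait_for_output_room(struct channel *chn, int room_available)
    {
        if (!room_available) {
            chn->flags |= CF_WAKE_WRITE;  /* notify the task once data is written */
            return 0;                     /* stop analysing until room is freed */
        }
        return 1;                         /* enough room: keep processing */
    }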
The flag is also automatically set by the functions which try to write into
a buffer from an applet when they fail (bi_putblk() etc...).
That allows us to remove the stupid condition above and avoid some wakeups.
In http-server-close and in http-keep-alive modes, this reduces from 4 to 3
the average number of wakeups per request, and increases the overall
performance by about 1.5%.
This reverts commit f3221f99ac.
Igor reported some very strange breakage of his stats page which is
clearly caused by the chunking, though I don't see at first glance
what could be wrong. Better revert it for now.
In theory the principle is simple as we just need to send HTTP chunks
if the client is 1.1 compatible. In practice it's harder because we
have to append a CR LF after each block of data and we're never sure
to have the room for this. In order not to have to deal with this, we
instead send the CR LF prior to each chunk size. The only issue is for
the first chunk, and for this reason we avoid sending the empty header
line when using chunked encoding.
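An illustration of the resulting framing (chunk sizes and payloads are made
up); the leading CRLF of the first chunk is what terminates the header
block, and each subsequent leading CRLF closes the previous chunk's data:

    HTTP/1.1 200 OK\r\n
    Transfer-Encoding: chunked\r\n
    \r\n5\r\nhello     <- first chunk: its leading CRLF ends the header block
    \r\n3\r\nfoo       <- next chunk: its leading CRLF closes the previous data
    \r\n0\r\n\r\n      <- final empty chunk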
We used to unconditionally disable client-side polling after the client
has posted its request. The goal was to avoid subscribing the file
descriptor to the poller for nothing.
This is perfect for the HTTP close mode where we know we won't have to
read on the client side anymore. However, when keep-alive is maintained
with the client, this makes the situation worse. Indeed, after the first
response, we'll have to wait for the client to send a next request and
since this is never immediate, we'll certainly poll. So what happens is
that polling is enabled after a response and disabled after a request,
so the polling is constantly alternating, which is very expensive with
epoll_ctl().
The solution implemented in this patch consists in only disabling the
polling if the client-side is not in keep-alive mode. That way we have
the best of both worlds. In close, we really close, and in keep-alive,
we poll only once.
The performance gained by this change is important, with haproxy jumping
from 158kreq/s to 184kreq/s (+16%) in HTTP keep-alive mode on a machine
which at best does 222k/s in raw TCP mode.
With this patch and the previous one, a keep-alive run with a fast
enough server (or enough concurrent connections to cover the connect
time) does no epoll_ctl() anymore during a run of ab -k. The net
measured gain is 19%.
Compression is normally disabled on HTTP/1.0 since it does not
support chunked encoded responses. But the test was incomplete, and
Bertrand Jacquin reported a case where if the server responded using
1.1 to a 1.0 request, then haproxy still used to compress (and of
course the client could not understand the response).
No backport is needed, this is 1.5-specific.
In HTTP keep-alive mode, if we receive a 401, we still have a chance
of being able to send the visitor again to the same server over the
same connection. This is required by some broken protocols such as
NTLM, and anyway whenever there is an opportunity for sending the
challenge to the proper place, it's better to do it (at least it
helps with debugging).
Idle connections are not monitored right now. So if a server closes after
a response without advertising it, it won't be detected until a next
request wants to use the connection. This is a bit problematic because
it unnecessarily maintains file descriptors and sockets in an idle
state.
This patch implements a very simple idle connection manager for the stream
interface. It presents itself as an I/O callback. The HTTP engine enables
it when it recycles a connection. If a close or an error is detected on the
underlying socket, it tries to drain as much data as possible from the socket,
detects the close and responds with a close as well, then detaches from the
stream interface.
Since commit b805f71 (MEDIUM: sample: let the cast functions set their
output type), the output type of a fetch function is automatically
considered and passed to the next converter. A bug introduced in
1.5-dev9 with commit f853c46 (MEDIUM: pattern/acl: get rid of
temp_pattern in ACLs) was revealed by this last one : the output type
remained string instead of UINT, causing the cast function to try to
cast the contents and to crash on a NULL deref.
Note: this fix was made after a careful review of all fetch functions.
A few non-trivial ones had their comments amended to clearly indicate
the output type.
There are very few users of http_proxy, and all of them complain about
the same thing : the request is passed unmodified to the server (in its
proxy form), and it is not possible to fix it using reqrep rules because
http_proxy happens after.
So let's have http_proxy fix the URL it has analysed to get rid of the
scheme and the host part. This will do what users of this feature expect.
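For instance (host and path are illustrative), a request received in proxy
form is now rewritten before being forwarded:

    GET http://www.example.net/index.html HTTP/1.1     (received from the client)
    GET /index.html HTTP/1.1                           (forwarded after this patch)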
In HTTP keep-alive, if we face a connection error to the server while sending
the request, the error should not be reported, and the client-side connection
should simply be closed, so that client knows it can retry. This can happen if
the server has too short a keep-alive timeout and quits at the same moment the
new request comes in.
When a connection to the server is complete, if the transaction
requests keep-alive mode, we don't shut the connection and we just
reinitialize the stream interface in order to be able to reuse the
connection afterwards.
Note that the server connection count is decremented, just like the
backend's, and that we still try to wake up waiters. But that makes
sense considering that we'll eventually be able to immediately pass
idle connections to waiters.
When allocating a new connection, only the caller knows whether it's
acceptable to reuse the previous one or not. Let's pass this information
to si_alloc_conn() which will do the cleanup if the connection is not
acceptable.
It's common to observe a recv() call on the client side just after
the connect() has been issued to the server side when running in
server close mode. The reason is that the whole request has been sent
and the shutw() has been queued in the channel, so the request message
switches to the MSG_CLOSED state, which didn't disable reading. Let's
do it now. That way the reading will only be re-enabled after the
response is transferred to the client. However if abortonclose is set,
we still leave it enabled.