Commit Graph

917 Commits

Author SHA1 Message Date
Willy Tarreau
472b1ee115 BUG/MEDIUM: http: accept full buffers on smp_prefetch_http
Bertrand Jacquin reported a bug when using tcp_request content rules
on large POST HTTP requests. The issue is that smp_prefetch_http()
first tries to validate an input buffer, but only if the buffer is
not full. This test is wrong since it must only be performed after
the parsing has failed, otherwise we don't accept POST requests which
fill the buffer as valid HTTP requests.

This bug is 1.5-specific, no backport needed.
2013-10-14 22:47:00 +02:00
Willy Tarreau
7959a55e15 MINOR: http: compute response time before processing headers
At the moment, HTTP response time is computed after response headers are
processed. This can misleadingly assign to the server some heavy local
processing (eg: regex), and also prevents response headers from passing
information related to the response time (which can sometimes be useful
for stats).

Let's retrieve the response time before processing the headers instead.

Note that in order to remain compatible with what was previously done,
we disable the response time when we get a 502 or any bad response. This
should probably be changed in 1.6 since it does not make sense anymore
to lose this information.
2013-09-23 16:53:11 +02:00
William Lallemand
5b7ea3afa1 BUG/MEDIUM: unique_id: junk in log on empty unique_id
When a request failed, the unique_id was allocated but not generated.
The string was not initialized and junk was printed in the log with %ID.

This patch changes the behavior of the unique_id. The unique_id is now
generated when a request fails.

This bug was reported by Patrick Hemmer.
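
For illustration only (not part of the commit), a minimal sketch of how a
unique ID is typically generated and logged; the unique-id-format string
follows the documentation example and the log-format is arbitrary:

    unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
    log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %ST\ %ID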
2013-08-31 08:01:14 +02:00
Willy Tarreau
9f09521f2d BUG/MEDIUM: unique_id: HTTP request counter must be unique!
The HTTP request counter is incremented non atomically, which means that
many requests can log the same ID. Let's increment it when it is consumed
so that we avoid this case.

This bug was reported by Patrick Hemmer. It's 1.5-specific and does not
need to be backported.
2013-08-13 17:52:20 +02:00
Willy Tarreau
ef38c39287 MEDIUM: sample: systematically pass the keyword pointer to the keyword
We're having a lot of duplicate code just because of minor variants between
fetch functions that could be dealt with if the functions had the pointer to
the original keyword, so let's pass it as the last argument. An earlier
version used to pass a pointer to the sample_fetch element, but this is not
the best solution for two reasons :
  - fetch functions will solely rely on the keyword string
  - some other smp_fetch_* users do not have the pointer to the original
    keyword and were forced to pass NULL.

So finally we're passing a pointer to the keyword as a const char *, which
perfectly fits the original purpose.
2013-08-01 21:17:13 +02:00
Willy Tarreau
276fae9ab9 MINOR: samples: add the http_date([<offset>]) sample converter.
Converts an integer supposed to contain a date since epoch to
a string representing this date in a format suitable for use
in HTTP header fields. If an offset value is specified, then
it is a number of seconds that is added to the date before the
conversion is operated. This is particularly useful to emit
Date header fields, Expires values in responses when combined
with a positive offset, or Last-Modified values when the
offset is negative.
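
For illustration (not from the commit), the converter would typically be
used in a header expression like the following; the one-hour offset is an
arbitrary example:

    # sketch: emit an Expires header one hour in the future
    http-response set-header Expires %[date(3600),http_date]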
2013-07-25 15:00:38 +02:00
Willy Tarreau
506d050600 BUG/MAJOR: http: sample prefetch code was not properly migrated
When ACLs and samples were converged in 1.5-dev18, function
"acl_prefetch_http" was not properly converted after commit 8ed669b1.
It used to return -1 when contents did not match HTTP traffic, which
was considered as a "true" boolean result by the ACL execution code,
possibly causing crashes due to missing data when checking for HTTP
traffic in TCP rules.

Another issue is that when the function returned zero, it did not
set the SMP_F_MAY_CHANGE flag, so it could randomly exit on partial
requests before waiting for a complete one.

Last issue is that when it returned 1, it did not set smp->data.uint,
so this last one would retain a random value from a past execution.
This could randomly cause some matches to fail as well.

Thanks to Remo Eichenberger for reporting this issue with a detailed
explanation and configuration.

This bug is 1.5-specific, no backport is needed.
2013-07-06 13:36:34 +02:00
Willy Tarreau
5b15f9004d BUG/MEDIUM: http: "option checkcache" fails with the no-cache header
The checkcache option checks for cacheable responses with a set-cookie
header. Since the response processing code was refactored in 1.3.8
(commit a15645d4), the check was broken because the no-cache value
is only checked as no-cache="set-cookie", and not alone.

Thanks to Hervé Commowick for reporting this stupid bug!

The fix should be backported to 1.4 and 1.3.
2013-07-04 12:49:28 +02:00
Lukas Tribus
67db8df12b MEDIUM: http: add IPv6 support for "set-tos"
As per RFC3260 #4 and BCP37 #4.2 and #5.2, the IPv6 counterpart of TOS
is "traffic class".

Add support for IPv6 traffic class in "set-tos" by moving the "set-tos"
related code to the new inline function inet_set_tos(), handling IPv4
(IP_TOS), IPv6 (IPV6_TCLASS) and IPv4-mapped sockets (IP_TOS, like
::ffff:127.0.0.1).

Also define - if missing - the IN6_IS_ADDR_V4MAPPED() macro in
include/common/compat.h for compatibility.
2013-06-23 18:01:38 +02:00
Lukas Tribus
2dd1d1a93f BUG/MINOR: http: fix "set-tos" not working in certain configurations
s->req->prod->conn->addr.to.ss_family contains only useful data if
conn_get_to_addr() is called early. If that's not the case (nothing in the
configuration needs the destination address like logs, transparent, ...)
then "set-tos" doesn't work.

Fix this by checking s->req->prod->conn->addr.from.ss_family instead.
Also fix a minor doc issue about set-tos in http-response.
2013-06-23 18:01:31 +02:00
Willy Tarreau
dc13c11c1e BUG/MEDIUM: prevent gcc from moving empty keywords lists into BSS
Benoit Dolez reported a failure to start haproxy 1.5-dev19. The
process would immediately report an internal error with missing
fetches from some crap instead of ACL names.

The cause is that some versions of gcc seem to trim static structs
containing a variable array when moving them to BSS, and only keep
the fixed size, which is just a list head for all ACL and sample
fetch keywords. This was confirmed at least with gcc 3.4.6. And we
can't move these structs to const because they contain a list element
which is needed to link all of them together during the parsing.

The bug indeed appeared with 1.5-dev19 because it's the first one
to have some empty ACL keyword lists.

One solution is to impose -fno-zero-initialized-in-bss to everyone
but this is not really nice. Another solution consists in ensuring
the struct is never empty so that it does not move there. The easy
solution consists in having a non-null list head since it's not yet
initialized.

A new "ILH" list head type was thus created for this purpose : create
an Initialized List Head so that gcc cannot move the struct to BSS.
This fixes the issue for this version of gcc and does not create any
burden for the declarations.
2013-06-21 23:29:02 +02:00
Willy Tarreau
67dad2715b BUG/CRITICAL: fix a possible crash when using negative header occurrences
When a config makes use of hdr_ip(x-forwarded-for,-1) or any such thing
involving a negative occurrence count, the header is still parsed in the
order it appears, and an array of up to MAX_HDR_HISTORY entries is created.
When more entries are used, the entries simply wrap and continue this way.

A problem happens when the incoming header field count exactly divides
MAX_HDR_HISTORY, because the computation removes the number of requested
occurrences from the count, but does not care about the risk of wrapping
with a negative number. Thus we can dereference the array with a negative
number and randomly crash the process.

The bug is located in http_get_hdr() in haproxy 1.5, and get_ip_from_hdr2()
in haproxy 1.4. It affects configurations making use of one of the following
functions with a negative <value> occurrence number :

   - hdr_ip(<name>, <value>)  (in 1.4)
   - hdr_*(<name>, <value>)   (in 1.5)

It also affects "source" statements involving "hdr_ip(<name>)" since that
statement implicitly uses -1 for <value> :

   - source 0.0.0.0 usesrc hdr_ip(<name>)

A workaround consists in rejecting dangerous requests early using
hdr_cnt(<name>), which is available both in 1.4 and 1.5 :

   block if { hdr_cnt(<name>) ge 10 }

This bug has been present since the introduction of the negative offset
count in 1.4.4 via commit bce70882. It has been reported by David Torgerson
who offered some debugging traces showing where the crash happened, thus
making it significantly easier to find the bug!

CVE-2013-2175 was assigned to this bug.

This fix must absolutely be backported to 1.4.
2013-06-17 12:00:22 +02:00
Willy Tarreau
3c4beb1feb CLEANUP: http: remove the bogus urlp_ip ACL match
This one is wrong, never matches and cannot work. It was brought by a blind
copy-paste from the url_* version in 1.5-dev9, but there is no underlying
fetch returning an IP type for this.
2013-06-12 22:26:04 +02:00
Willy Tarreau
c32484ed35 MEDIUM: acl: remove 15 additional useless ACLs that are equivalent to their fetches
The following 15 ACLs were missed from previous review, and are not needed
either.

hdr_cnt, hdr_ip, hdr_val, rep_ssl_hello_type, req_len, req_ssl_hello_type,
scook_cnt, scook_val, shdr_cnt, shdr_ip, shdr_val, url_ip, url_port,
urlp_val, req_proto_http.
2013-06-12 22:23:40 +02:00
Willy Tarreau
6d4e4e8dd2 MEDIUM: acl: remove a lot of useless ACLs that are equivalent to their fetches
The following 116 ACLs were removed because they're redundant with their
fetch function since last commit which allows the fetch function to be
used instead for types BOOL, INT and IP. Most places are now left with
an empty ACL keyword list that was not removed so that it's easier to
add other ACLs later.

always_false, always_true, avg_queue, be_conn, be_id, be_sess_rate, connslots,
nbsrv, queue, srv_conn, srv_id, srv_is_up, srv_sess_rate, res.comp, fe_conn,
fe_id, fe_sess_rate, dst_conn, so_id, wait_end, http_auth, http_first_req,
status, dst, dst_port, src, src_port, sc1_bytes_in_rate, sc1_bytes_out_rate,
sc1_clr_gpc0, sc1_conn_cnt, sc1_conn_cur, sc1_conn_rate, sc1_get_gpc0,
sc1_gpc0_rate, sc1_http_err_cnt, sc1_http_err_rate, sc1_http_req_cnt,
sc1_http_req_rate, sc1_inc_gpc0, sc1_kbytes_in, sc1_kbytes_out, sc1_sess_cnt,
sc1_sess_rate, sc1_tracked, sc1_trackers, sc2_bytes_in_rate,
sc2_bytes_out_rate, sc2_clr_gpc0, sc2_conn_cnt, sc2_conn_cur, sc2_conn_rate,
sc2_get_gpc0, sc2_gpc0_rate, sc2_http_err_cnt, sc2_http_err_rate,
sc2_http_req_cnt, sc2_http_req_rate, sc2_inc_gpc0, sc2_kbytes_in,
sc2_kbytes_out, sc2_sess_cnt, sc2_sess_rate, sc2_tracked, sc2_trackers,
sc3_bytes_in_rate, sc3_bytes_out_rate, sc3_clr_gpc0, sc3_conn_cnt,
sc3_conn_cur, sc3_conn_rate, sc3_get_gpc0, sc3_gpc0_rate, sc3_http_err_cnt,
sc3_http_err_rate, sc3_http_req_cnt, sc3_http_req_rate, sc3_inc_gpc0,
sc3_kbytes_in, sc3_kbytes_out, sc3_sess_cnt, sc3_sess_rate, sc3_tracked,
sc3_trackers, src_bytes_in_rate, src_bytes_out_rate, src_clr_gpc0,
src_conn_cnt, src_conn_cur, src_conn_rate, src_get_gpc0, src_gpc0_rate,
src_http_err_cnt, src_http_err_rate, src_http_req_cnt, src_http_req_rate,
src_inc_gpc0, src_kbytes_in, src_kbytes_out, src_sess_cnt, src_sess_rate,
src_updt_conn_cnt, table_avl, table_cnt, ssl_c_ca_err, ssl_c_ca_err_depth,
ssl_c_err, ssl_c_used, ssl_c_verify, ssl_c_version, ssl_f_version, ssl_fc,
ssl_fc_alg_keysize, ssl_fc_has_crt, ssl_fc_has_sni, ssl_fc_use_keysize,
2013-06-11 21:22:58 +02:00
Willy Tarreau
51347ed94c MEDIUM: http: add the "set-mark" action on http-request/http-response rules
"set-mark" is used to set the Netfilter MARK on all packets sent to the
client to the value passed in <mark> on platforms which support it. This
value is an unsigned 32 bit value which can be matched by netfilter and
by the routing table. It can be expressed both in decimal or hexadecimal
format (prefixed by "0x"). This can be useful to force certain packets to
take a different route (for example a cheaper network path for bulk
downloads). This works on Linux kernels 2.6.32 and above and requires
admin privileges.
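
A hedged usage sketch; the mark value and the matched path are assumptions,
not from the commit:

    # sketch: mark bulk download traffic so the routing table can steer it
    http-request set-mark 0x2 if { path_beg /downloads }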
2013-06-11 19:34:13 +02:00
Willy Tarreau
42cf39e3b9 MEDIUM: http: add support for "set-tos" in http-request/http-response
This manipulates the TOS field of the IP header of outgoing packets sent
to the client. This can be used to set a specific DSCP traffic class based
on some request or response information. See RFC2474, 2597, 3260 and 4594
for more information.
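
A minimal usage sketch; the TOS value and the condition are arbitrary examples:

    # sketch: tag packets sent back to clients from an assumed internal range
    http-request set-tos 16 if { src 10.0.0.0/8 }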
2013-06-11 19:04:37 +02:00
Willy Tarreau
9a355ec257 MEDIUM: http: add support for action "set-log-level" in http-request/http-response
Some users want to disable logging for certain non-important requests such as
stats requests or health-checks coming from other equipment. Other users want
to log with a higher importance (eg: notice) some special traffic (POST requests,
authenticated requests, requests coming from suspicious IPs) or some abnormally
large responses.

This patch responds to all these needs at once by adding a "set-log-level" action
to http-request/http-response. The 8 syslog levels are supported, as well as "silent"
to disable logging.
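
For illustration, a sketch with assumed paths:

    # sketch: silence health checks, raise the level for POST requests
    http-request set-log-level silent if { path /health }
    http-request set-log-level notice if { method POST }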
2013-06-11 17:50:26 +02:00
Willy Tarreau
abcd5145f8 MEDIUM: log: add a log level override value in struct session
This log level will be used in a further patch to change the log level
depending on the request or response.
2013-06-11 17:50:26 +02:00
Willy Tarreau
f4c43c13be MEDIUM: http: add the "set-nice" action to http-request and http-response
This new action changes the nice factor of the task processing the current
request.
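
A usage sketch; the path and the nice value are assumptions:

    # sketch: de-prioritize an assumed /reports section
    http-request set-nice 512 if { path_beg /reports }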
2013-06-11 17:50:26 +02:00
Willy Tarreau
e365c0b92b MEDIUM: http: add a new "http-response" ruleset
Some actions were clearly missing to process response headers. This
patch adds a new "http-response" ruleset which provides the following
actions :
  - allow : stop evaluating http-response rules
  - deny : stop and reject the response with a 502
  - add-header : add a header in log-format mode
  - set-header : set a header in log-format mode
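
For illustration, a hedged sketch combining these actions; the header names
are examples only:

    http-response set-header X-Frame-Options DENY
    http-response deny if { res.hdr(x-debug) -m found }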
2013-06-11 16:06:12 +02:00
Willy Tarreau
04ff9f105f MINOR: http: add full-length header fetch methods
The req.hdr and res.hdr fetch methods do not work well on headers which
are allowed to contain commas, such as User-Agent, Date or Expires.
More specifically, full-length matching is impossible if a comma is
present.

This patch introduces 4 new fetch functions which are designed to work
with these full-length headers :
  - req.fhdr, req.fhdr_cnt
  - res.fhdr, res.fhdr_cnt

These do not stop at commas and make it possible to return full-length header
values.
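
A sketch of how these could be used; the X-Full-UA header name is an
assumption:

    # sketch: full-value User-Agent handling, commas included
    acl ua_msie req.fhdr(user-agent) -m sub MSIE
    http-request set-header X-Full-UA %[req.fhdr(user-agent)]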
2013-06-10 18:39:42 +02:00
Willy Tarreau
570f221cbb MINOR: log: add a new flag 'L' for locally processed requests
People who use "option dontlog-normal" are bothered with redirects and
stats being logged and reported as errors in the logs ("PR" = proxy
blocked the request).

This patch introduces a new flag 'L' for when a request is locally
processed, which is not considered an error by the log filters. That
way we know a request was intercepted and processed by haproxy without
logging the line when "option dontlog-normal" is in effect.
2013-06-10 16:42:09 +02:00
Willy Tarreau
379357af58 BUG/MAJOR: http: always ensure response buffer has some room for a response
Since 1.5-dev12 and commit 3bf1b2b8 (MAJOR: channel: stop relying on
BF_FULL to take action), the HTTP parser switched to channel_full()
instead of BF_FULL to decide whether a buffer had enough room to start
parsing a request or response. The problem is that channel_full()
intentionally ignores outgoing data, so a corner case exists where a
large response might still be left in a response buffer with just a
few bytes left (much less than the reserve), enough to accept a second
response past the last data, but not enough to permit the HTTP processor
to add some headers. Since all the processing relies on this space being
available, we can get some random crashes when clients pipeline requests.

The analysis of a core from haproxy configured with 20480 bytes buffers
shows this : with enough "luck", when sending back the response for the
first request, the client is slow, the TCP window is congested, the socket
buffers are full, and haproxy's buffer fills up. We still have 20230 bytes
of response data in a 20480 response buffer. The second request is sent to
the server which returns 214 bytes which fit in the small 250 bytes left
in this buffer. And the buffer arrangement makes it possible to escape all
the controls in http_wait_for_response() :

    |<------ response buffer = 20480 bytes ------>|
    [ 2/2  | 3 | 4 |          1/2                 ]
           ^ start of circular buffer

      1/2 = beginning of previous response (18240)
      2/2 = end of previous response       (1990)
        3 = current response               (214)
        4 = free space                     (36)

  - channel_full() returns false (20230 bytes are going to leave)
  - the response headers do not wrap at the end of the buffer
  - the remaining linear room after the headers is larger than the
    reserve, because it's the previous response which wraps :
  => response is processed

Header rewriting causes it to reach 260 bytes, 10 bytes larger than what
the buffer could hold. So all computations during header addition are
wrong and lead to the corruption we've observed.

All the conditions are very hard to meet (which explains why it took
almost one year for this bug to show up) and are almost impossible to
reproduce on purpose on a test platform. But the bug is clearly there.

This issue was reported by Dinko Korunic who kindly devoted a lot of
time to provide countless traces and cores, and to experiment with
troubleshooting patches to knock the bug down. Thanks Dinko!

No backport is needed, but all 1.5-dev versions between dev12 and dev18
included must be upgraded. A workaround consists in setting option
forceclose to prevent pipelined requests from being processed.
2013-06-08 13:14:17 +02:00
Willy Tarreau
7fe3300b76 BUG/MEDIUM: stats: fix a regression when dealing with POST requests
In 1.5-dev17 (commit 1facd6d6), we reorganized the way HTTP stats
requests are handled. When moving the code, we dropped a "return 0"
which happens upon an incomplete POST request, so we now end up with
the next return 1, which causes processing to go on with the next
analyser. This causes incomplete POST requests to be forwarded to
servers, resulting in either a 404 or a 503 depending
on the configuration.

This patch fixes this regression to restore the previous behaviour.
It's not enough though, as it happens that the stats code is handled
after all http header processing but in the same function. The net
effect is that incomplete requests cause the headers manipulation to
be performed multiple times, possibly resulting in multiple headers
in the request buffer. Since the stats requests are not meant to be
forwarded, it's not an issue yet but this is something to take care
of later.

A remaining issue that's not handled yet is that if the client does
not send the complete POST headers, then the request is finally
forwarded. This is not a regression, it has always been there and
seems to be caused by the lack of timeout processing when waiting
for the POST body. The solution to this issue would be to move the
handling of stats requests into a dedicated analyser placed after
http_process_request_body().

Bug reported by Guillaume de Lafond.
2013-04-21 08:16:10 +02:00
de Lafond Guillaume
88c278fadf MEDIUM: stats: add proxy name filtering on the statistic page
This patch adds a "scope" box in the statistics page in order to
display only proxies with a name that contains the requested value.
The scope filter is preserved across all clicks on the page.
2013-04-15 22:50:33 +02:00
Willy Tarreau
667c2a3d2a BUG/MAJOR: http: compression still has defects on chunked responses
The compression state machine happens to start work it cannot undo if
there's no more data in the input buffer, and has trouble accounting
for it. Fixing it requires more than a few lines, as the confusion is
in part caused by the way the pointers to the various places in the
message are handled internally. So as a temporary fix, let's disable
compression on chunk-encoded responses. This will give us more time
to perform the required changes.
2013-04-14 23:32:53 +02:00
Willy Tarreau
8d1c5164f3 BUG/MINOR: http: add-header/set-header did not accept the ACL condition
Sander Klein reported this bug. The test for the extra argument on these
rules prevent any condition from being added. The bug was introduced with
the feature itself in 1.5-dev16.
2013-04-03 14:13:58 +02:00
Willy Tarreau
a4312fa28e MAJOR: sample: maintain a per-proxy list of the fetch args to resolve
While ACL args were resolved after all the config was parsed, it was not the
case with sample fetch args because they're almost everywhere now.

The issue is that ACLs now solely rely on sample fetches, so their args
resolving doesn't work anymore. And many fetches involving a server, a
proxy or a userlist don't work at all.

The real issue is that at the bottom layers we have no information about
proxies, line numbers, even ACLs in order to report understandable errors,
and that at the top layers we have no visibility over the locations where
fetches are referenced (think log node).

After failing multiple unsatisfying solutions attempts, we now have a new
concept of args list. The principle is that every proxy has a list head
which contains a number of indications such as the config keyword, the
context where it's used, the file and line number, etc... and a list of
arguments. This list head is of the same type as the elements, so it
serves as a template for adding new elements. This way, it is filled from
top to bottom by the callers with the information they have (eg: line
numbers, ACL name, ...) and the lower layers just have to duplicate it and
add an element when they face an argument they cannot resolve yet.

Then at the end of the configuration parsing, a loop passes over each
proxy's list and resolves all the args in sequence. And this way there is
all necessary information to report verbose errors.

The first immediate benefit is that for the first time we got very precise
location of issues (arg number in a keyword in its context, ...). Second,
in order to do this we had to parse log-format and unique-id-format a bit
earlier, so that was a great opportunity for doing so when the directives
are encountered (unless it's a default section). This way, the recorded
line numbers for these args are the ones of the place where the log format
is declared, not the end of the file.

Userlists report slightly more information now. They're the only remaining
ones in the ACL resolving function.
2013-04-03 02:13:02 +02:00
Willy Tarreau
0a0daecbb2 MEDIUM: http: remove val_usr() to validate user_lists
This one was incorrect since it tried to validate the user-lists
before end of parsing.
2013-04-03 02:13:02 +02:00
Willy Tarreau
ff5afcc32b MINOR: http: replace acl_parse_ver with acl_parse_str
The HTTP version parser used in ACLs has long been a string and
still had its own parser. This makes no sense, switch it to use
the standard string parser.
2013-04-03 02:13:01 +02:00
Willy Tarreau
d86e29d2a1 CLEANUP: acl: remove unused references to ACL_USE_*
Now that acl->requires is not used anymore, we can remove all references
to it as well as all ACL_USE_* flags.
2013-04-03 02:13:00 +02:00
Willy Tarreau
18ed2569f5 MINOR: http: add new direction-explicit sample fetches for headers and cookies
Since "hdr" and "cookie" were ambiguously referring to the request or response
depending on the context, we need a way to explicitly specify the direction.
By prefixing the fetches names with "req." and "res.", we can now restrict such
fetches to the appropriate direction. At the moment the fetches are explicitly
declared, but later we might think about having an automatic match when "req." or
"res." appears. These explicit fetches are now used by the relevant ACLs.
2013-04-03 02:12:59 +02:00
Willy Tarreau
9baae63d8d MAJOR: acl: remove fetch argument validation from the ACL struct
ACL fetch being inherited from the sample fetch keyword, we don't need
anymore to specify what function to use to validate the fetch arguments.

Note that the job is still done in the ACL parsing code based on elements
from the sample fetch structs.
2013-04-03 02:12:59 +02:00
Willy Tarreau
c48c90dfa5 MAJOR: acl: remove the arg_mask from the ACL definition and use the sample fetch's
Now that ACLs solely rely on sample fetch functions, make them use the
same arg mask. All inconsistencies have been fixed separately prior to
this patch, so this patch almost only adds a new pointer indirection
and removes all references to ARG*() in the definitions.

The parsing is still performed by the ACL code though.
2013-04-03 02:12:58 +02:00
Willy Tarreau
8ed669b12a MAJOR: acl: make all ACLs reference the fetch function via a sample.
ACL fetch functions used to directly reference a fetch function. Now
that all ACL fetches have their sample fetches equivalent, we can make
ACLs reference a sample fetch keyword instead.

In order to simplify the code, a sample keyword name may be NULL if it
is the same as the ACL's, which is the most common case.

A minor change appeared, http_auth always expects one argument though
the ACL allowed it to be missing and reported as such afterwards, so
fix the ACL to match this. This is not really a bug.
2013-04-03 02:12:58 +02:00
Willy Tarreau
409bcde176 MEDIUM: http: unify acl and sample fetch functions
The following sample fetch functions were only usable by ACLs but are now
usable by sample fetches too :

    cook, cook_cnt, cook_val, hdr_cnt, hdr_ip, hdr_val, http_auth,
    http_auth_group, http_first_req, method, req_proto_http, req_ver,
    resp_ver, scook, scook_cnt, scook_val, shdr, shdr_cnt, shdr_ip,
    shdr_val, status, urlp, urlp_val,

Most of them won't bring much benefit at the moment, or are even aliases of
existing ones, however they'll be needed for ACL->SMP convergence.

A new val_usr() function was added to resolve userlist names into pointers.

The http_auth_group ACL forgot to make its first argument mandatory, so
there was a check in cfgparse to report a vague error. Now that args are
correctly parsed, let's report something more precise.

All urlp* ACLs now support an optional 3rd argument like their sample
counterpart, which is the optional delimiter.

The fetch functions have been renamed "smp_fetch_*".

Some args controls on the sample keywords have been relaxed so that we
can soon use them for ACLs :

  - cookie now accepts to have an optional name ; it will return the
    first matching cookie if the name is not set ;
  - same for set-cookie and hdr
2013-04-03 02:12:57 +02:00
Willy Tarreau
434c57c95c MINOR: log: indicate it when some unreliable sample fetches are logged
If a log-format involves some sample fetches that may not be present at
the logging instant, we can now report a warning.

Note that this is done both for log-format and for add-header and carefully
respects the original fetch keyword's capabilities.
2013-04-03 02:12:56 +02:00
Willy Tarreau
80aca90ad2 MEDIUM: samples: use new flags to describe compatibility between fetches and their usages
Samples fetches were relying on two flags SMP_CAP_REQ/SMP_CAP_RES to describe
whether they were compatible with requests rules or with response rules. This
was never reliable because we need a finer granularity (eg: an HTTP request
method needs to parse an HTTP request, and is available past this point).

Some fetches are also dependent on the context (eg: "hdr" uses the request or
the response depending on where it's involved, causing some ambiguity).

In order to solve this, we need to precisely indicate in fetches what they
use, and their users will have to compare with what they have.

So now we have a bunch of bits indicating where the sample is fetched in the
processing chain, with a few variants indicating for some of them if it is
permanent or volatile (eg: an HTTP status is stored into the transaction so
it is permanent, despite being caught in the response contents).

The fetches also have a second mask indicating their validity domain. This one
is computed from a conversion table at registration time, so there is no need
for doing it by hand. This validity domain consists in a bitmask with one bit
set for each usage point in the processing chain. Some provisions were made
for upcoming controls such as connection-based TCP rules which apply on top of
the connection layer but before instantiating the session.

Then everywhere a fetch is used, the bit for the control point is checked in
the fetch's validity domain, and it becomes possible to finely ensure that a
fetch will work or not.

Note that we need these two separate bitfields because some fetches are usable
both in request and response (eg: "hdr", "payload"). So the keyword will have
a "use" field made of a combination of several SMP_USE_* values, which will be
converted into a wider list of SMP_VAL_* flags.

The knowledge of permanent vs dynamic information has disappeared for now, as
it was never used. Later we'll probably reintroduce it differently when
dealing with variables. Its only use at the moment could have been to avoid
caching a dynamic rate measurement, but nothing is cached as of now.
2013-04-03 02:12:56 +02:00
Willy Tarreau
e0db1e8946 MEDIUM: acl: remove flag ACL_MAY_LOOKUP which is improperly used
This flag is used on ACL matches that support looking up patterns
in trees. At the moment, only strings and IPs support tree-based lookups,
but the flag is randomly set also on integers and binary data, and is not
even always set on strings nor IPs.

Better get rid of this mess by only relying on the matching function to
decide whether or not it supports tree-based lookups, this is safer and
easier to maintain.
2013-04-03 02:12:56 +02:00
Willy Tarreau
aae75e3279 BUG/CRITICAL: using HTTP information in tcp-request content may crash the process
During normal HTTP request processing, request buffers are realigned if
there are less than global.maxrewrite bytes available after them, in
order to leave enough room for rewriting headers after the request. This
is done in http_wait_for_request().

However, if some HTTP inspection happens during a "tcp-request content"
rule, this realignment is not performed. In theory this is not a problem
because empty buffers are always aligned and TCP inspection happens at
the beginning of a connection. But with HTTP keep-alive, it also happens
at the beginning of each subsequent request. So if a second request was
pipelined by the client before the first one had a chance to be forwarded,
the second request will not be realigned. Then, http_wait_for_request()
will not perform such a realignment either because the request was
already parsed and marked as such. The consequence of this, is that the
rewrite of a sufficient number of such pipelined, unaligned requests may
leave less room past the request being processed than the configured
reserve, which can lead to a buffer overflow if request processing appends
some data past the end of the buffer.

A number of conditions are required for the bug to be triggered :
  - HTTP keep-alive must be enabled ;
  - HTTP inspection in TCP rules must be used ;
  - some request appending rules are needed (reqadd, x-forwarded-for)
  - since empty buffers are always realigned, the client must pipeline
    enough requests so that the buffer always contains something till
    the point where there is no more room for rewriting.

While such a configuration is quite unlikely to be met (which is
confirmed by the bug's lifetime), a few people do use these features
together for very specific usages. And more importantly, writing such
a configuration and the request to attack it is trivial.

A quick workaround consists in forcing keep-alive off by adding
"option httpclose" or "option forceclose" in the frontend. Alternatively,
disabling HTTP-based TCP inspection rules is enough if the application
supports it.

At first glance, this bug does not look like it could lead to remote code
execution, as the overflowing part is controlled by the configuration and
not by the user. But some deeper analysis should be performed to confirm
this. And anyway, corrupting the process' memory and crashing it is quite
trivial.

Special thanks go to Yves Lafon from the W3C who reported this bug and
deployed significant efforts to collect the relevant data needed to
understand it in less than one week.

CVE-2013-1912 was assigned to this issue.

Note that 1.4 is also affected so the fix must be backported.
2013-04-03 02:12:55 +02:00
Willy Tarreau
2d43e18b69 BUG/MAJOR: http: fix regression introduced by commit d655ffe
Sander Klein reported that since last snapshot, some downloads would
hang from nginx but succeed from apache. The culprit was not too hard
to find given the low number of recent changes affecting the data path.

Commit d655ffe slightly reorganized the HTTP state machine and
introduced this regression. The reason is that we must never jump
into the MSG_DONE case without first flushing remaining data because
this is not done anymore afterwards. This part is scheduled for
being reorganized since it's totally ugly especially since we added
compression, and this regression is an illustration of its lack of readability.

The issue is entirely dependent on the server close sequence, which
explains why it was reproducible only with nginx here.
2013-04-03 00:22:25 +02:00
Willy Tarreau
ffb6f08bab BUG/MAJOR: http: fix regression introduced by commit a890d072
This commit fixed a bug and introduced a new one at the same time.
It's a stupid typo, the index to store the context is [0], not [2].

The effect is that parsing the header can loop forever if multiple
headers are found. This issue was reported by Lukas Tribus.
2013-04-02 23:19:30 +02:00
Willy Tarreau
a890d072fc BUG/MAJOR: http: use a static storage for sample fetch context
Baptiste Assmann reported that the cook*() ACLs do not work anymore.

The reason is the way we store the hdr_ctx between subsequent calls
to smp_fetch_cookie() since commit 3740635b (1.5-dev10).

The smp->ctx.a[] storage holds up to 8 pointers. It is not meant for
generic storage. We used to store hdr_ctx in the ctx, but while it used
to just fit for smp_fetch_hdr(), it does not for smp_fetch_cookie()
since we stored it at offset 2.

The correct solution is to use this storage to store a pointer to the
current hdr_ctx struct which is statically allocated.
2013-04-02 12:01:06 +02:00
Willy Tarreau
d655ffe863 OPTIM: http: optimize the response forward state machine
By replacing the if/else series with a switch/case, we could save
another 20% on the worst case (chunks of 1 byte).
2013-04-02 02:01:00 +02:00
Willy Tarreau
0161d62d23 OPTIM: http: improve branching in chunk size parser
By tweaking a bit some conditions in http_parse_chunk_size(), we could
improve the overall performance in the worst case by 15%.
2013-04-02 02:00:57 +02:00
Yves Lafon
e267421e93 MINOR: http: status 301 should not be marked non-cacheable
Also, browsers' behaviour is inconsistent regarding the Cache-Control
header field on a 301.
2013-03-30 11:22:41 +01:00
Yves Lafon
3e8d1ae2d2 MEDIUM: http: implement redirect 307 and 308
I needed to emit a 307 and noticed it was not available so I did it,
as well as 308.
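
A hedged usage sketch (the hostname is an example); unlike 301/302, code 307
preserves the request method across the redirect:

    redirect prefix https://www.example.com code 307 if !{ ssl_fc }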
2013-03-29 19:17:41 +01:00
Yves Lafon
4e8ec500e5 MINOR: http: status code 303 is HTTP/1.1 only
Don't return a 303 redirect with "HTTP/1.0" as it's HTTP/1.1 only.
2013-03-29 19:08:09 +01:00
Willy Tarreau
2fef9b1ef6 BUG/MEDIUM: http: fix another issue caused by http-send-name-header
An issue reported by David Coulson is that when using http-send-name-header,
the response processing would randomly be performed. The issue was first
diagnosed by Cyril Bonté as being related to a time race when processing
the closing of the response.

In practice, the issue is a bit trickier. It happens that
http_send_name_header() did not update msg->sol after a rewrite. This
counter is supposed to point to the beginning of the message's body
once headers are scheduled for being forwarded. And not updating it
means that the first forwarding of the request headers in
http_request_forward_body() does not send the correct count, leaving
some bytes in chn->to_forward.

Then if the server sends its response in a single packet with the
close, the stream interface switches to state SI_ST_DIS which in
turn moves to SI_ST_CLO in process_session(), closing the outgoing
connection. This is detected by http_request_forward_body(),
which then switches the request message to the error state, and syncs
all FSMs and removes any response analyser.

The response analyser being removed, no processing is performed on
the response buffer, which is tunnelled as-is to the client.

Of course, the correct fix consists in having http_send_name_header()
update msg->sol. Normally this ought not to have been needed, but it
is an abuse to modify data already scheduled for being forwarded, so
it is expected that such specific handling has to be done there. Better
not have generic functions deal with such cases, so that it does not
become the standard.

Note: 1.4 does not have this issue even if it does not update the
pointer either, because it forwards from msg->som which is not
updated at the moment the connect() succeeds. So no backport is
required.
2013-03-26 01:21:47 +01:00
Willy Tarreau
3bfeadb3f6 BUG/MEDIUM: http: add-header should not emit "-" for empty fields
Patch 6cbbdbf3 fixed the missing "-" delimiters in logs but it caused
them to be emitted with "http-request add-header", even though it was
correctly fixed for the unique-id format. Fix this by simply removing
LOG_OPT_MANDATORY in this case.
2013-03-24 07:33:22 +01:00
Willy Tarreau
6cbbdbf3f3 BUG/MEDIUM: log: emit '-' for empty fields again
Commit 2b0108ad accidentally got rid of the ability to emit a "-" for
empty log fields. This can happen for captured request and response
cookies, as well as for fetches. Since we don't want to have this done
for headers however, we set the default log method when parsing the
format. It is still possible to force the desired mode using +M/-M.
2013-02-05 18:55:09 +01:00
Willy Tarreau
192e59fb07 CLEANUP: http: don't try to deinitialize http compression if it fails before init
In select_compression_response_header(), some tests are rather confusing
as the "fail" label is used to deinitialize the compression context for
the session while it's branched only before initialization succeeds. The
test is always false here and the dereferencing of the comp_algo pointer
which might be null is also confusing. Remove that code which is not needed
anymore since commit ec3e3890 got rid of the latest issues.

Reported-by: Dinko Korunic <dkorunic@reflected.net>
2013-01-24 16:19:19 +01:00
Willy Tarreau
4521ba689c CLEANUP: http: remove a useless null check
srv cannot be null in http_perform_server_redirect(), as it's taken
from the stream interface's target which is always valid for a
server-based redirect, and it was already dereferenced above, so in
practice, gcc already removes the test anyway.

Reported-by: Dinko Korunic <dkorunic@reflected.net>
2013-01-24 16:19:18 +01:00
Baptiste Assmann
116eefed8f MINOR: config: http-request configuration error message misses new keywords
"redirect" and "tarpit" keywords were missing from http-request configuration
error message.
2013-01-05 16:53:49 +01:00
Willy Tarreau
56e9ffa6a6 BUG/MINOR: http-compression: lookup Cache-Control in the response, not the request
As stated in both RFC2616 and the http-bis drafts, Cache-Control:
no-transform must be looked up in the response since we're modifying
the response. However, its presence in the request is irrelevant to
any changes in the response :

  7.2.1.6. no-transform
   The "no-transform" request directive indicates that an intermediary
   (whether or not it implements a cache) MUST NOT change the Content-
   Encoding, Content-Range or Content-Type request header fields, nor
   the request representation.

  7.2.2.9. no-transform
   The "no-transform" response directive indicates that an intermediary
   (regardless of whether it implements a cache) MUST NOT change the
   Content-Encoding, Content-Range or Content-Type response header
   fields, nor the response representation.

Note: according to the specs, we're supposed to emit the following
response header :

  Warning: 214 transformation applied

However no other product seems to do it, so the effect on user agents
is unclear.
2013-01-05 16:31:58 +01:00
Willy Tarreau
ccbcc37a01 MEDIUM: http: add support for "http-request tarpit" rule
The "reqtarpit" rule is not very handy to use. Now that we have more
flexibility with "http-request", let's finally make the tarpit rules
usable there.

There are still semantic differences between apply_filters_to_request()
and http_req_get_intercept_rule() because the former updates the counters
while the latter does not. So we currently have very similar code paths
for similar conditions, but this should be cleaned up later.
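
A usage sketch; the ACL file path is an assumption:

    # sketch: tarpit requests coming from a list of known abusers
    http-request tarpit if { src -f /etc/haproxy/abusers.lst }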
2012-12-28 14:47:19 +01:00
Willy Tarreau
81499eb67d MEDIUM: http: add support for "http-request redirect" rules
These are exactly the same as the classic redirect rules except
that they can be interleaved with other http-request rules for
more flexibility.

The redirect parser should probably be changed to stop at the condition
so that the caller puts its own condition pointer. At the moment, the
redirect rule and condition are parsed at once by build_redirect_rule()
and the condition is assigned to the http_req_rule.
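
For illustration, such a rule can now be interleaved with other http-request
rules (hostname and header are examples):

    http-request set-header X-Forwarded-Proto http
    http-request redirect prefix https://www.example.com if !{ ssl_fc }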
2012-12-28 14:47:19 +01:00
Willy Tarreau
4baae248fc REORG: config: move the http redirect rule parser to proto_http.c
We'll have to use this elsewhere soon, let's move it to the proper
place.
2012-12-28 14:47:19 +01:00
Willy Tarreau
71241abfd3 MINOR: http: move redirect rule processing to its own function
We now have http_apply_redirect_rule() which does all the redirect-specific
job instead of having this inside http_process_req_common().

Also, one of the benefits gained from unifying this code is that both
keep-alive and close responses now emit the PR-- flags. The fix for the
flags could probably be backported to 1.4 though it's very minor.

The previous function http_perform_redirect() was becoming confusing
so it was renamed http_perform_server_redirect() since it only applies
to server-based redirection.
2012-12-28 14:47:19 +01:00
Willy Tarreau
96257ec5c8 CLEANUP: http: rename the misleading http_check_access_rule
Several bugs were introduced recently due to a misunderstanding of how
this function works and what it was supposed to do. Since it's supposed
to only return the pointer to a rule which aborts further processing of
the request, let's rename it to avoid further issues.

The function was also slightly cleaned up without any functional change.
2012-12-28 14:47:19 +01:00
Willy Tarreau
d79a3b248e BUG/MINOR: log: make log-format, unique-id-format and add-header more independant
It happens that all of them call parse_logformat_line() which sets
proxy->to_log with a number of flags affecting the line format for
all three users. For example, having a unique-id specified disables
the default log-format since fe->to_log is tested when the session
is established.

Similarly, having "option logasap" will cause "+" to be inserted in
unique-id or headers referencing some of the fields depending on
LW_BYTES.

This patch first removes most of the dependency on fe->to_log whenever
possible. The first possible cleanup is to stop checking fe->to_log
for being null, considering that it always contains at least LW_INIT
when any such usage is made of the log-format!

Also, some checks are wrong. s->logs.logwait cannot be nulled by
"logwait &= ~LW_*" since LW_INIT is always there. This results in
getting the wrong log at the end of a request or session when a
unique-id or add-header is set, because logwait is still not null
but the log-format is not checked.

Further cleanups are required. Most LW_* flags should be removed or at
least replaced with what they really mean (eg: depend on client-side
connection, depend on server-side connection, etc...) and this should
only affect logging, not other mechanisms.

This patch fixes the default log-format and tries to limit interferences
between the log formats, but does not pretend to do more for the moment,
since it's the most visible breakage.
2012-12-28 09:51:00 +01:00
Willy Tarreau
cbc743e36c BUG/MEDIUM: stats: disable request analyser when processing POST or HEAD
After the response headers are sent and the request processing is done,
the buffers are wiped out and the stream interface is closed. We must
then disable the request analysers, otherwise some processing will
happen on a closed stream interface and empty buffers which do not
match, causing all sort of crashes. This issue was introduced with
recent work on the stats, and was reported by Seri.
2012-12-28 08:36:50 +01:00
Willy Tarreau
1a1e8072f9 BUG/MINOR: stats: http-request rules still don't cope with stats
Since commit 20b0de5, we also had another remaining issue : an
"http-request allow" rule would prevent a stats rule from being
processed.
2012-12-27 10:34:21 +01:00
Willy Tarreau
8b80f0c9a2 BUG/MINOR: stats: last fix was still wrong
Previous commit was still wrong, it broke add-header and set-header
because we don't want to leave on these actions.

The http_check_access_rule() function should be redesigned, it was
initially thought for allow/deny rules but now it is executing other
non-final rules and at the same time returning a pointer to the last
final rule. That becomes a bit confusing and will need to be addressed
before we implement redirect and return.
2012-12-25 21:55:37 +01:00
Willy Tarreau
418c1a0a95 BUG/MEDIUM: stats: fix stats page regression introduced by commit 20b0de5
This commit adding http-request add-header/set-header unfortunately introduced
a regression to the handling of the stats page which is not matched anymore.

Thanks to Dmitry Sivachenko for reporting this.
2012-12-25 20:52:58 +01:00
Willy Tarreau
20b0de56d4 MEDIUM: http: add http-request 'add-header' and 'set-header' to build headers
These two new statements make it possible to pass information extracted from the request
to the server. It's particularly useful for passing SSL information to the
server, but may be used for various other purposes such as combining headers
together to emulate internal variables.
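
An illustrative sketch; the header name is an assumption and the fetch is one
of those listed elsewhere in this log:

    # sketch: pass the client certificate verification result to the server
    http-request set-header X-SSL-Client-Verify %[ssl_c_verify]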
2012-12-24 15:56:20 +01:00
Willy Tarreau
5c2e198390 MINOR: http: prepare to support more http-request actions
We'll need to support per-action arguments, so we need to have an
"arg" union in http_req_rule.
2012-12-24 12:26:26 +01:00
Willy Tarreau
354898bba9 MINOR: stats: replace STAT_FMT_CSV with STAT_FMT_HTML
We need to switch the default mode if we want to add new output formats
later. Let CSV be the default and HTML be an option.
2012-12-23 21:46:30 +01:00
Willy Tarreau
47ca54505c MINOR: chunks: centralize the trash chunk allocation
At the moment, we need trash chunks almost everywhere and the only
correctly implemented one is in the sample code. Let's move this to
the chunks so that all other places can use this allocator.

Additionally, the get_trash_chunk() function now really returns two
different chunks. Previously it used to always overwrite the same
chunk and point it to a different buffer, which was a bit tricky
because it's not obvious that two consecutive results do alias each
other.
2012-12-23 21:46:07 +01:00
Willy Tarreau
1facd6d67e REORG: stats: move the HTTP header injection to proto_http
The HTTP header injection that are performed in dumpstats when responding
or when redirecting a POST request have nothing to do in dumpstats. They
do not use any state from the stats, and are 100% HTTP. Let's build these
headers in the HTTP core, and have dumpstats only produce stats.
2012-12-22 22:50:01 +01:00
Willy Tarreau
d9bdcd5139 REORG: stats: massive code reorg and cleanup
The dumpstats code looks like a spaghetti plate. Several functions are
supposed to be able to do several things but rely on complex states to
dispatch the work to independent functions. Most of the HTML output is
performed within the switch/case statements of the whole state machine.

Let's clean this up by adding new functions to emit the data and have
a few more iterators to avoid relying on such complex states.

The new stats dump sequence looks like this for CLI and for HTTP :

  cli_io_handler()
      -> stats_dump_sess_to_buffer()      // "show sess"
      -> stats_dump_errors_to_buffer()    // "show errors"
      -> stats_dump_raw_info_to_buffer()  // "show info"
         -> stats_dump_raw_info()
      -> stats_dump_raw_stat_to_buffer()  // "show stat"
         -> stats_dump_csv_header()
         -> stats_dump_proxy()
            -> stats_dump_px_hdr()
            -> stats_dump_fe_stats()
            -> stats_dump_li_stats()
            -> stats_dump_sv_stats()
            -> stats_dump_be_stats()
            -> stats_dump_px_end()

  http_stats_io_handler()
      -> stats_http_redir()
      -> stats_dump_http()              // also emits the HTTP headers
         -> stats_dump_html_head()      // emits the HTML headers
         -> stats_dump_csv_header()     // emits the CSV headers (same as above)
         -> stats_dump_http_info()      // note: ignores non-HTML output
         -> stats_dump_proxy()          // same as above
         -> stats_dump_http_end()       // emits HTML trailer
2012-12-22 20:45:02 +01:00
Willy Tarreau
40f151aa79 BUG/MINOR: http: don't abort client connection on premature responses
When a server responds prematurely to a POST request, haproxy used to
cause the transfer to be aborted before the end. This is problematic
because this causes the client to receive a TCP reset when it tries to
push more data, generally preventing it from receiving the response
which contains the reason for the premature response (eg: "entity too
large" or an authentication request).

From now on we take care of allowing the upload traffic to flow to the
server even when the response has been received, since the server is
supposed to drain it. That way the client receives the server response.

This bug has been present since 1.4 and the fix should probably be
backported there.
2012-12-20 12:10:09 +01:00
Willy Tarreau
f26b252ee4 MINOR: http: make resp_ver and status ACLs check for the presence of a response
The two ACL fetches "resp_ver" and "status", if used in a request despite
the warning, would return a match of zero length. This is inappropriate,
better return a non-match to be more consistent with other ACL processing.
2012-12-14 08:35:45 +01:00
Willy Tarreau
4a55060aa6 MINOR: http: add the "base32+src" fetch method.
This returns the concatenation of the base32 fetch and the src fetch.
The resulting type is of type binary, with a size of 8 or 20 bytes
depending on the source address family. This can be used to track
per-IP, per-URL counters.
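
A hedged example of tracking with it; the stick-table parameters are
arbitrary:

    # sketch: per-URL, per-IP request rate tracking
    stick-table type binary len 20 size 1m expire 10m store http_req_rate(10s)
    tcp-request content track-sc1 base32+src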
2012-12-09 14:53:32 +01:00
Willy Tarreau
ab1f7b72fb MINOR: http: add the "base32" pattern fetch function
This returns a 32-bit hash of the value returned by the "base"
fetch method above. This is useful to track per-URL activity on
high traffic sites without having to store all URLs. Instead a
shorter hash is stored, saving a lot of memory. The output type
is an unsigned integer.
2012-12-09 14:08:48 +01:00
Willy Tarreau
5d5b5d8eaf MEDIUM: proto_tcp: add support for tracking L7 information
Until now it was only possible to use track-sc1/sc2 with "src" which
is the IPv4 source address. Now we can use track-sc1/sc2 with any fetch
as well as any transformation type. It works just like the "stick"
directive.

Samples are automatically converted to the correct types for the table.

Only "tcp-request content" rules may use L7 information, and such information
must already be present when the tracking is set up. For example it becomes
possible to track the IP address passed in the X-Forwarded-For header.

HTTP request processing now also considers tracking from backend rules
because we want to be able to update the counters even when the request
was already parsed and tracked.

Some more controls need to be performed (eg: samples do not distinguish
between L4 and L6).
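
For illustration, the X-Forwarded-For example mentioned above could look like
this; the table parameters are arbitrary:

    stick-table type ip size 200k expire 30m store http_req_rate(10s)
    tcp-request content track-sc1 hdr_ip(x-forwarded-for,-1)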
2012-12-09 14:08:47 +01:00
Willy Tarreau
dc979f2492 BUG/MINOR: http: don't log a 503 on client errors while waiting for requests
If a client aborts a request with an error (typically a TCP reset), we must
log a 400. Till now we did not set the status nor close the stream interface,
causing the request to attempt to be forwarded and logging a 503.

Should be backported to 1.4 which is affected as well.
2012-12-04 10:52:22 +01:00
Willy Tarreau
14cba4b0b1 MEDIUM: connection: add an error code in connections
This will be needed to improve error reporting, especially for SSL.
2012-12-03 14:22:13 +01:00
Willy Tarreau
8139b9959f MINOR: compression: make the stats a bit more robust
To ensure that we only count when a response was compressed, we also
check for the SN_COMP_READY flag which indicates that the compression
was effectively initialized. Comp_algo alone is meaningless.
2012-11-27 09:34:00 +01:00
Willy Tarreau
9101535038 BUG/MINOR: http: disable compression when message has no body
Compression was not disabled on 1xx, 204, 304 nor HEAD requests. This
is not really a problem, but it makes the stats report more compressed
responses than were really produced.
2012-11-27 09:34:00 +01:00
Willy Tarreau
0a80a8dbb2 MINOR: http: factor out the content-type checks
Let's only look up the content-type header once. This involves
inverting the condition which is not dramatic.

Also, we now always check the value length before comparing it, and we
always reset the ctx.idx before looking a header up. Otherwise that
could make header lookups depend on their on-wire order. It would be
a minor issue however since at worst it would cause some responses not
to be compressed.
2012-11-26 16:36:00 +01:00
William Lallemand
d300261bab MINOR: compression: disable on multipart or status != 200
Compression is now disabled when the HTTP status code is not 200, since
compressing some HTTP codes can create issues (ex: 206, 416).

Multipart messages should not be compressed anyway.
2012-11-26 16:02:58 +01:00
William Lallemand
859550e068 BUG/MINOR: compression: Content-Type is case insensitive
The Content-Type parameter must be case insensitive.
2012-11-26 16:02:58 +01:00
Willy Tarreau
f003d375ec BUG/MINOR: http: don't report client aborts as server errors
If a client aborts with an abortonclose flag, the close is forwarded
to the server and when server response is processed, the analyser thinks
it's the server who has closed first, and logs flags "SD" or "SH" and
counts a server error. In order to avoid this, we now first detect that
the client has closed and log a client abort instead.

This likely is the reason why many people have been observing a small rate
of SD/SH flags without being able to find what the error was.

This fix should probably be backported to 1.4.
2012-11-26 13:50:02 +01:00
Willy Tarreau
5e16cbc3bd MINOR: stats: report the total number of compressed responses per front/back
Depending on the content-types and accept-encoding fields, some responses
might or might not be compressed. Let's have a counter of the number of
compressed responses and report it in the stats to help improve compression
usage.

Some cosmetic issues were fixed in the CSV output too (missing commas at the
end).
2012-11-24 14:54:13 +01:00
William Lallemand
00bf1dee9c BUG/MEDIUM: compression: does not forward trailers
Commit bf3ae617 introduced a regression in the forwarding of trailers
in compression mode.
2012-11-23 11:12:33 +01:00
Willy Tarreau
193b8c6168 MINOR: http: allow the cookie capture size to be changed
Some users need more than 64 characters to log large cookies. The limit
was set to 63 characters (and not 64 as previously documented). Now it
is possible to change this using the global "tune.http.cookielen" setting
if required.
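
A minimal sketch; the value is an example:

    global
        tune.http.cookielen 256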
2012-11-22 00:44:27 +01:00
William Lallemand
072a2bf537 MINOR: compression: CPU usage limit
New option 'maxcompcpuusage' in global section.
Sets the maximum CPU usage HAProxy can reach before stopping the
compression for new requests or decreasing the compression level of
current requests. It works like 'maxcomprate' but is based on the measured
idle CPU time ('Idle') instead of the incoming data rate.
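
A minimal sketch; the threshold is an example:

    global
        maxcompcpuusage 50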
2012-11-21 02:15:16 +01:00
William Lallemand
8b52bb3878 MEDIUM: compression: use pool for comp_ctx
Use a pool for comp_ctx; it is allocated during comp_algo->init().
The allocation of comp_ctx is accounted for in the zlib_memory_available.
2012-11-21 01:56:47 +01:00
William Lallemand
bf3ae61789 MEDIUM: compression: don't compress when no data
This patch makes changes in the http_response_forward_body state
machine. It checks whether the compression algorithm has consumed data
before swapping the temporary and the input buffer, which prevents
zero-sized zlib chunks.
2012-11-19 14:57:29 +01:00
Willy Tarreau
b97b6190e1 BUG: compression: properly disable compression when content-type does not match
Disabling compression based on the content-type was improperly done since the
introduction of the COMP_READY flag, sometimes resulting in truncated responses.
2012-11-19 14:55:02 +01:00
Willy Tarreau
543db62e1f BUG/MEDIUM: compression: release the zlib pools between keep-alive requests
There was a possible memory leak in the zlib code when the first response of
a keep-alive session was compressed, because the next request would reset the
compression algo, preventing a later call to session_free() from releasing it.
The reason is that it is necessary to release the assigned resources in
http_end_txn_clean_session().
2012-11-15 16:41:22 +01:00
William Lallemand
ec3e3890f0 BUG/MINOR: compression: deinit zlib only when required
The zlib stream was deinitialized even when the init failed.
2012-11-15 15:42:17 +01:00
William Lallemand
c04ca58222 BUG/MEDIUM: compression: no Content-Type header but type in configuration
HAProxy was compressing data when there was no Content-Type header in
the response but a compression type was specified in the configuration.
2012-11-15 15:42:11 +01:00
Willy Tarreau
3fdb366885 MAJOR: connection: replace struct target with a pointer to an enum
Instead of storing a couple of (int, ptr) in the struct connection
and the struct session, we use a different method : we only store a
pointer to an integer which is stored inside the target object and
which contains a unique type identifier. That way, the pointer allows
us to retrieve the object type (by dereferencing it) and the object's
address (by computing the displacement in the target structure). The
NULL pointer always corresponds to OBJ_TYPE_NONE.

This reduces the size of the connection and session structs. It also
simplifies target assignment and comparison.

In order to improve the generated code, we try to put the obj_type
element at the beginning of all the structs (listener, server, proxy,
si_applet), so that the original and target pointers are always equal.

A lot of code was touched by massive replaces, but the changes are not
that important.
2012-11-12 00:42:33 +01:00
Willy Tarreau
50fc7777c6 MEDIUM: http: refrain from sending "Connection: close" when Upgrade is present
Some servers are not totally HTTP-compliant when it comes to parsing the
Connection header. This is particularly true with WebSocket where it happens
from time to time that a server doesn't support having a "close" token along
with the "Upgrade" token in the Connection header. This broken behaviour has
also been noticed on some clients though the problem is less frequent on the
response path.

Sometimes the workaround consists in enabling "option http-pretend-keepalive"
to leave the request Connection header untouched, but this is not always the
most convenient solution. This patch introduces a new solution : haproxy now
also looks for the "Upgrade" token in the Connection header and if it finds
it, then it refrains from adding any other token to the Connection header
(though "keep-alive" and "close" may still be removed if found). The same is
done for the response headers.

This way, WebSocket works with far fewer changes even when facing
non-compliant clients or servers. At least it fixes the DISCONNECT issue
that was seen on the websocket.org test.

Note that haproxy does not change its internal mode, it just refrains from
adding new tokens to the connection header.
2012-11-11 22:40:00 +01:00
Willy Tarreau
7f7ad91056 BUILD: stream_interface: remove si_fd() and its references
si_fd() is not used a lot, and breaks builds on OpenBSD 5.2 which
defines this name for its own purpose. It's easy enough to remove
this one-liner function, so let's do it.
2012-11-11 20:53:29 +01:00
William Lallemand
d85f917daf MINOR: compression: maximum compression rate limit
This patch adds input and output rate calculation to the HTTP compression
feature.

Compression can be limited with a maximum rate value in kilobytes per
second. The rate is set with the global 'maxcomprate' option. You can
change this value dynamically with 'set rate-limit http-compression
global' on the UNIX socket.
2012-11-10 17:47:27 +01:00
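For illustration only, a sketch combining the static limit with the runtime
command mentioned above (the socket path and numbers are assumptions):

   global
       stats socket /var/run/haproxy.sock level admin
       maxcomprate 1024      # kilobytes per second

   # raise the limit at runtime through the UNIX socket, e.g.:
   #   echo "set rate-limit http-compression global 2048" | socat stdio unix-connect:/var/run/haproxy.sock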
William Lallemand
f3747837e5 MINOR: compression: tune.comp.maxlevel
This option allows you to set the maximum compression level usable by
the compression algorithm. It affects CPU usage.
2012-11-10 17:47:07 +01:00
Finn Arne Gangstad
0a410e81fb BUG: http: revert broken optimisation from 82fe75c1a7
This optimisation causes haproxy to time out requests that result
in two TCP packets, one packet containing the header, and one
packet containing the actual data. This is a very typical type
of response from a lot of servers.

[Willy: I suspect the fix might have an impact on the compression code
 which I'm not sure completely handles calls with 0 bytes to forward]
2012-11-10 17:38:36 +01:00
William Lallemand
4c49fae985 MINOR: compression: init before deleting headers
Initialize the compression algorithm before modifying the response
headers, so that if the compression init fails, the headers won't be
modified.
2012-11-08 15:23:30 +01:00
William Lallemand
1c2d622d82 CLEANUP: use struct comp_ctx instead of union
Replace union comp_ctx by struct comp_ctx.

Use struct comp_ctx * in the init/add_data/flush/reset/end prototypes of
compression.h functions.
2012-11-05 10:23:16 +01:00
Finn Arne Gangstad
cbb9a4b128 MINOR: compression: Enable compression for IE6 w/SP2, IE7 and IE8
Some old browsers that have a user-agent starting with "Mozilla/4" do
not support compression correctly, so disable compression for those.

Internet Explorer 6 after Windows XP Service Pack 2, IE 7 and IE 8
do, however, support compression and still have a user agent starting
with Mozilla/4, so we try to enable compression for those.

MSIE has a user-agent of this form:
Mozilla/4.0 (compatible; MSIE <version>; ...)

98% of MSIE 6 SP2 user agents start with
Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1
The remaining 2% have additional flags before "SV1".

This simplified matching, looking for MSIE at exactly position 25
and SV1 at exactly position 51, gives a few false negatives, so a
compression opportunity is sometimes lost.

A test against 3 hours of traffic to around 3000 news sites worldwide
gives less than 0.007% (70ppm) missed compression opportunities.
2012-10-29 22:03:14 +01:00
Willy Tarreau
7e2c647ee7 MEDIUM: remove remains of BUFSIZE in HTTP auth and sample conversions
Sample conversions rely on two alternative buffers which were previously
allocated as static bufs of size BUFSIZE. Now they're initialized to the
global buffer size. It was the same for HTTP authentication. Note that it
seems that none of them was prone to any mistake when dealing with the
buffer size, but better stay on the safe side by maintaining the old
assumption that a trash buffer is always "large enough".
2012-10-29 20:44:36 +01:00
Willy Tarreau
19d14ef104 MEDIUM: make the trash be a chunk instead of a char *
The trash is used everywhere to store the results of temporary strings
built out of s(n)printf, or as a storage for a chunk when chunks are
needed.

Using global.tune.bufsize is not the most convenient thing either.

So let's replace trash with a chunk and directly use it as such. We can
then use trash.size as the natural way to get its size, and get rid of
many intermediary chunks that were previously used.

The patch is huge because it touches many areas but it makes the code
a lot more clear and even outlines places where trash was used without
being that obvious.
2012-10-29 16:57:30 +01:00
Willy Tarreau
08b4d79d31 BUG: compression: disable auto-close and enable MSG_MORE during transfer
We don't want the lower layer to forward a close while we're compressing,
and we want the system to fuse outgoing TCP segments using MSG_MORE as
much as possible to save round trips that can emerge from sending short
packets with a PUSH flag.

A test on a remote busy DSL line, consisting of compressing a 100MB file
full of zeroes on the fly, showed a transfer rate of only a few kB/s due
to these round trips.
2012-10-27 01:36:34 +02:00
Willy Tarreau
70737d142f MINOR: compression: add an offload option to remove the Accept-Encoding header
This is used when it is desired that backend servers don't compress
(eg: because of buggy implementations).
2012-10-27 01:13:24 +02:00
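Assuming the option is exposed as the 'compression offload' keyword, a minimal
backend sketch might look like this (names and address are illustrative):

   backend app
       compression algo gzip
       compression offload      # strip Accept-Encoding so the server never compresses itself
       server s1 192.168.1.10:80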
Willy Tarreau
f2943dccd0 MAJOR: session: detach the connections from the stream interfaces
We will need to be able to switch server connections on a session and
to keep idle connections. In order to achieve this, the preliminary
requirement is that the connections can survive the session and be
detached from them.

Right now they're still allocated at exactly the same place, so when
there is a session, there are always 2 connections. We could soon
improve on this by allocating the outgoing connection only during a
connect().

This current patch touches a lot of code and intentionally does not
change any functionality. Performance tests show no regression (even
a very minor improvement). The doc has not yet been updated.
2012-10-26 20:15:20 +02:00
Willy Tarreau
c919dc66a3 CLEANUP: remove trashlen
trashlen is a copy of global.tune.bufsize, so let's stop using it as
a duplicate and fall back to the original bufsize; it's less confusing
this way.
2012-10-26 20:04:27 +02:00
Willy Tarreau
3c7b97b9f9 BUG/MINOR: http: compression should consider all Accept-Encoding header values
Commit 82fe75c1 came with a minor bug limiting the check to the first
Accept-Encoding header value only.
2012-10-26 14:52:02 +02:00
Willy Tarreau
05d846092f MINOR: compression: automatically disable compression for older browsers
A number of older browsers have many issues with compressed contents. It
happens that all these older browsers announce themselves as "Mozilla/4"
and that, even though not all of them are broken, the number of working
browsers announcing themselves this way is so tiny compared to all the
other ones that it's not worth wasting cycles trying to adapt to every
specific one.

So let's simply disable compression for these older browsers.

More information in this very detailed article:

   http://zoompf.com/2012/02/lose-the-wait-http-compression
2012-10-26 02:54:31 +02:00
William Lallemand
82fe75c1a7 MEDIUM: HTTP compression (zlib library support)
This commit introduces HTTP compression using the zlib library.

http_response_forward_body has been modified to call the compression
functions.

This feature includes 3 algorithms: identity, gzip and deflate:

  * identity: this is mostly for debugging, and it was useful for
  developing the compression feature. With Content-Length in input, it
  makes each chunk from the data available in the current buffer.
  With chunks in input, it re-chunks: the output chunks will be
  bigger or smaller depending on the size of the input chunks and the
  size of the buffer. Identity does not apply any change to the data.

  * gzip: same as identity, but applying a gzip compression. The data
  are deflated using the Z_NO_FLUSH flag in zlib. When there is no more
  data in the input buffer, it flushes the data in the output buffer
  (Z_SYNC_FLUSH). At the end of data, when it receives the last chunk in
  input, or when there is no more data to read, it writes the end of
  data with Z_FINISH and the ending chunk.

  * deflate: same as gzip, but with deflate algorithm and zlib format.
  Note that this algorithm has ambiguous support on many browsers and
  no support at all from recent ones. It is strongly recommended not
  to use it for anything else than experimentation.

You can't choose the compression level at the moment; it is set to
Z_BEST_SPEED (1), as tests have shown very little benefit in terms of
compression ratio when going above this for HTML contents, at the cost
of a massive CPU impact.

Compression will be activated depending on the Accept-Encoding request
header. With identity, that header is not taken into account.

To build HAProxy with zlib support, use USE_ZLIB=1 in the make
parameters.

This work was initially started by David Du Colombier at Exceliance.
2012-10-26 02:30:48 +02:00
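For reference, a hedged sketch of how the feature is typically enabled (the
build target, content types and address are assumptions, not part of the
commit):

   # build with zlib support:
   #   make TARGET=linux2628 USE_ZLIB=1

   backend app
       compression algo gzip
       compression type text/html text/plain text/css
       server s1 192.168.1.10:80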
Willy Tarreau
54d23dfc07 CLEANUP: http: rename HTTP_MSG_DATA_CRLF state
This state's name is confusing as it is only used with chunked encoding
and makes newcomers think it's also related to the content-length. Let's
call it CHUNK_CRLF to clear any doubt on this.
2012-10-26 01:13:52 +02:00
Willy Tarreau
24e6d972aa OPTIM: http: inline http_parse_chunk_size() and http_skip_chunk_crlf()
These functions are not that long and the compiler inlines them well. Doing
so has sped up the chunked encoding parser by 41% !

Note that http_forward_trailers was also declared static because it's not
exported.
2012-10-26 01:12:40 +02:00
Cyril Bonté
69fa99292e MEDIUM: http: accept IPv6 values with (s)hdr_ip acl
Commit ceb4ac9c states that IPv6 values are accepted by the "hdr_ip" ACL,
but the code didn't allow it. This patch provides the ability to accept IPv6
values.
2012-10-25 14:41:33 +02:00
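An illustrative ACL matching an IPv6 value with hdr_ip (the header occurrence,
addresses and action are assumptions):

   acl from_trusted hdr_ip(x-forwarded-for,-1) 2001:db8::/32 192.168.0.0/16
   block if !from_trusted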
Willy Tarreau
fc47f91c9c BUG/MEDIUM: http: set DONTWAIT on data when switching to tunnel mode
Jaroslaw Bojar diagnosed an issue when haproxy switches to tunnel mode
after a transfer. The response data are sent with the MSG_MORE flag,
causing them to be needlessly queued in the kernel. In order to fix this,
we set the CF_NEVER_WAIT flag on the channels when switching to tunnel
mode.

One issue remained with client-side keep-alive : if the response is sent
before the end of the request, it suffers the same issue for the same
reason. This is easily addressed by setting the CF_SEND_DONTWAIT flag
on the channel when the response has been parsed and we're waiting for
the other side.

The same issue is present in 1.4 so the fix must be backported.
2012-10-20 10:41:37 +02:00
Willy Tarreau
9b28e03b66 MAJOR: channel: replace the struct buffer with a pointer to a buffer
With this commit, we now separate the channel from the buffer. This will
allow us to replace buffers on the fly without touching the channel. Since
nobody is supposed to keep a reference to a buffer anymore, doing so is not
a problem and will also permit some copy-less data manipulation.

Interestingly, these changes have shown a 2% performance increase on some
workloads, probably due to a better cache placement of data.
2012-10-13 09:07:52 +02:00
Willy Tarreau
cdbdd52a38 CLEANUP: http: use 'chn' to name channel variables, not 'buf'
These "buf" were confusing as they were really refering to channels. At
most places, a buffer was really all what was needed, so a struct buffer
was used instead. It is possible that the performance has slightly increased
by the removal of pointer offset in many pointer operations by directly
using the buffer pointer instead of the channel pointer.
2012-10-12 23:02:51 +02:00
Willy Tarreau
394db379eb REORG: http: rename msg->buf to msg->chn since it's a channel
It's extremely confusing to have all those msg->buf->buf everywhere after
the extraction of the buffer from the channel. Let's clean this up.
2012-10-12 22:40:39 +02:00
Willy Tarreau
1b6c00cb99 BUG/MAJOR: ensure that hdr_idx is always reserved when L7 fetches are used
Baptiste Assmann reported a bug causing a crash on recent versions when
sticking rules were set on layer 7 in a TCP proxy. The bug is easier to
reproduce with the "defer-accept" option on the "bind" line in order to
have some contents to parse when the connection is accepted. The issue
is that the acl_prefetch_http() function called from HTTP fetches relies
on hdr_idx to be preinitialized, which is not the case if there is no L7
ACL.

The solution consists in adding a new SMP_CAP_L7 flag to fetches to indicate
that they are expected to work on L7 data, so that the proxy knows that the
hdr_idx has to be initialized. This is already how ACL and HTTP mode are
handled.

The bug was present since 1.5-dev9.
2012-10-05 22:46:09 +02:00
bedis
4c75cca8ba MINOR: samples: update the url_param fetch to match parameters in the path
It now supports an optional delimiter allowing the parameter to be looked
up in the path, before the query string.
2012-10-05 15:17:23 +02:00
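A hedged sketch of sticking on a path parameter using the new delimiter
argument (parameter name, delimiter and table sizing are assumptions):

   backend app
       stick-table type string len 32 size 200k expire 30m
       stick on url_param(JSESSIONID,;)
       server s1 192.168.1.10:80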
Willy Tarreau
f7bc57ca6e REORG: connection: rename the data layer the "transport layer"
While working on the changes required to make the health checks use the
new connections, it started to become obvious that some naming was not
logical at all in the connections. Specifically, it is not logical to
call the "data layer" the layer which is in charge for all the handshake
and which does not yet provide a data layer once established until a
session has allocated all the required buffers.

In fact, it's more a transport layer, which makes much more sense. The
transport layer offers a medium on which data can transit, and it offers
the functions to move these data when the upper layer requests this. And
it is the upper layer which iterates over the transport layer's functions
to move data which should be called the data layer.

The use case where it's obvious is with embryonic sessions : an incoming
SSL connection is accepted. Only the connection is allocated, not the
buffers nor stream interface, etc... The connection handles the SSL
handshake by itself. Once this handshake is complete, we can't use the
data functions because the buffers and stream interface are not there
yet. Hence we have to first call a specific function to complete the
session initialization, after which we'll be able to use the data
functions. This clearly proves that SSL here is only a transport layer
and that the stream interface constitutes the data layer.

A similar change will be performed to rename app_cb => data, but the
two could not be in the same commit for obvious reasons.
2012-10-04 22:26:09 +02:00
Willy Tarreau
b8ffd378f0 BUG/MAJOR: http: chunk parser was broken with buffer changes
Since at least commit a458b679, msg->sov could become negative in
http_parse_chunk_size() if a chunk size wrapped around the buffer.
The effect is that at some point channel_forward() was called with
a negative size, causing all data to be transferred without being
analyzed anymore.

Since haproxy does not support keep-alive with the server yet, this
issue is not really noticeable, as the server closes the connection
in response. Still, when tunnel mode is used or when pretend-keepalive
is used, it is possible to see the problem.

This issue was reported and diagnosed by William Lallemand at
Exceliance.
2012-09-27 15:08:56 +02:00
Willy Tarreau
e92693af26 BUG: http: do not print garbage on invalid requests in debug mode
Cyril Bonté reported a mangled debug output when an invalid request
was sent with a faulty request line. The reason was the use of the
msg->sl.rq.l offset which was not yet initialized in this case. So
we change the way to report such an error so that first we initialize
it to zero before parsing a message, then we use that to know whether
we can trust it or not. If it's still zero, then we display the whole
buffer, truncated by debug_hdr() to the first CR or LF character, which
results in the first line only.

The same operation was performed for the response, which was wrong too.
2012-09-24 21:16:42 +02:00
Willy Tarreau
eb6cead1de MINOR: standard: make memprintf() support a NULL destination
Doing so removes many checks that were systematically made because
the callees don't know if the caller passed a valid pointer.
2012-09-24 10:53:16 +02:00
Willy Tarreau
2e1dca8f52 MEDIUM: http: add "redirect scheme" to ease HTTP to HTTPS redirection
For instance :

   redirect scheme https if !{ is_ssl }
2012-09-12 08:43:15 +02:00
Willy Tarreau
783f25800c BUILD: http: rename error_message http_error_message to fix conflicts on RHEL
Duncan Hall reported a build issue on CentOS where error_message conflicts
with another system declaration when SSL is enabled. Rename the function.
2012-09-04 12:19:04 +02:00
Willy Tarreau
74172ff9c3 CLEANUP: frontend: remove the old proxy protocol decoder
This one used to rely on a stream analyser which was inappropriate.
It's not used anymore.
2012-09-03 20:47:35 +02:00
Willy Tarreau
986a9d2d12 MAJOR: connection: move the addr field from the stream_interface
We need to have the source and destination addresses in the connection.
They were lying in the stream interface so let's move them. The flags
SI_FL_FROM_SET and SI_FL_TO_SET have been moved as well.

It's worth noting that tcp_connect_server() almost does not use the
stream interface anymore except for a few flags.

It has been identified that once we detach the connection from the SI,
it will probably be needed to keep a copy of the server-side addresses
in the SI just for logging purposes. This has not been implemented right
now though.
2012-09-03 20:47:34 +02:00
Willy Tarreau
3cefd521fa REORG: connection: move the target pointer from si to connection
The target is per connection and is directly used by the connection, so
we need it there. It's not needed anymore in the SI however.
2012-09-03 20:47:34 +02:00
Willy Tarreau
8263d2b259 CLEANUP: channel: use "channel" instead of "buffer" in function names
This is a massive rename of most functions which should make use of the
word "channel" instead of the word "buffer" in their names.

It concerns the following ones (new names):

unsigned long long channel_forward(struct channel *buf, unsigned long long bytes);
static inline void channel_init(struct channel *buf)
static inline int channel_input_closed(struct channel *buf)
static inline int channel_output_closed(struct channel *buf)
static inline void channel_check_timeouts(struct channel *b)
static inline void channel_erase(struct channel *buf)
static inline void channel_shutr_now(struct channel *buf)
static inline void channel_shutw_now(struct channel *buf)
static inline void channel_abort(struct channel *buf)
static inline void channel_stop_hijacker(struct channel *buf)
static inline void channel_auto_connect(struct channel *buf)
static inline void channel_dont_connect(struct channel *buf)
static inline void channel_auto_close(struct channel *buf)
static inline void channel_dont_close(struct channel *buf)
static inline void channel_auto_read(struct channel *buf)
static inline void channel_dont_read(struct channel *buf)
unsigned long long channel_forward(struct channel *buf, unsigned long long bytes)

Some functions provided by channel.[ch] have kept their "buffer" name because
they are really designed to act on the buffer according to some information
gathered from the channel. They have been moved together to the same place in
the file for better readability but they were not changed at all.

The "buffer" memory pool was also renamed "channel".
2012-09-03 20:47:33 +02:00
Willy Tarreau
03cdb7c678 CLEANUP: channel: use CF_/CHN_ prefixes instead of BF_/BUF_
Get rid of these confusing BF_* flags. Now channel naming should clearly
be used everywhere appropriate.

No code was changed, only a renaming was performed. The comments about
channel operations were updated.
2012-09-03 20:47:33 +02:00
Willy Tarreau
af81935b82 REORG: channel: move buffer_{replace,insert_line}* to buffer.{c,h}
These functions do not depend on the channel flags anymore thus they're
much better suited to be used on plain buffers. Move them from channel
to buffer.
2012-09-03 20:47:33 +02:00
Willy Tarreau
3bf1b2b816 MAJOR: channel: stop relying on BF_FULL to take action
This flag is quite complex to get right and updating it everywhere is a
major pain, especially since the buffer/channel split. This is the first
step toward getting rid of it. Instead, it is now dynamically computed
whenever needed.
2012-09-03 20:47:33 +02:00
Willy Tarreau
a75bcef867 REORG: buffer: move buffer_flush, b_adv and b_rew to buffer.h
These ones now operate on real buffers, not on channels anymore.
2012-09-03 20:47:32 +02:00
Willy Tarreau
8e21bb9e52 MAJOR: channel: remove the BF_OUT_EMPTY flag
This flag was very problematic because it was composite: changes to either
the pipe or the buffer had to cause this flag to be updated, which is
not always simple (eg: there may not even be a channel attached to a buffer
at all).

There were not that many users of this flag, mostly setters. So the flag got
replaced with a macro which reports whether the channel is empty or not, by
checking both the pipe and the buffer.

One part of the change is sensitive: the flag was also part of BF_MASK_STATIC,
which is used by process_session() to rescan all analysers in case the flag's
status changes. At first glance, none of the analysers seems to change its
mind based on this flag when it is subject to change, so it seems fine not to
add variation checks here. Otherwise it's possible that checking the buffer's
output size is more useful than checking the flag's replacement.
2012-09-03 20:47:32 +02:00
Willy Tarreau
c7e4238df0 REORG: buffers: split buffers into chunk,buffer,channel
Many parts of the channel definition still make use of the "buffer" word.
2012-09-03 20:47:32 +02:00
Willy Tarreau
75bf2c925f REORG: sock_raw: rename the files raw_sock*
The "raw_sock" prefix will be more convenient for naming functions as
it will be prefixed with the data layer and suffixed with the data
direction. So let's rename the files now to avoid any further confusion.

The #include directive was also removed from a number of files which do
not need it anymore.
2012-09-02 21:54:56 +02:00
Willy Tarreau
572bf9095d REORG/MAJOR: extract "struct buffer" from "struct channel"
At the moment, the struct is still embedded into the struct channel, but
all the functions have been updated to use struct buffer only when possible,
otherwise struct channel. Some functions would likely need to be split
between a buffer-layer primitive and a channel-layer function.

Later the buffer should become a pointer in the struct channel, but doing so
requires a few changes to the buffer allocation calls.
2012-09-02 21:54:56 +02:00
Willy Tarreau
7421efb85f REORG/MAJOR: use "struct channel" instead of "struct buffer"
This is a massive rename. We'll then split channel and buffer.

This change needs a lot of cleanups. At many locations, the parameter
or variable is still called "buf" which will become ambiguous. Also,
the "struct channel" is still defined in buffers.h.
2012-09-02 21:54:55 +02:00
Willy Tarreau
505e34a36d MAJOR: get rid of fdtab[].state and use connection->flags instead
fdtab[].state was only used to know whether a connection was in progress
or an error was encountered. Instead we now use connection->flags to store
a flag for both. This way, connection management will be able to update the
connection status on I/O.
2012-09-02 21:51:26 +02:00
Willy Tarreau
a9fddca778 MINOR: http: add the urlp_val ACL match
It's derived from other urlp_* matches, but there was no way to check for
an integer value, and it seems to be significantly used.
2012-07-31 07:55:32 +02:00
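An illustrative use of the new match (the parameter name, threshold and
backend are made up):

   acl big_order urlp_val(quantity) gt 100
   use_backend slow_path if big_order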
Willy Tarreau
dae2a8a5a5 BUG/MINOR: tarpit: fix condition to return the HTTP 500 message
Commit fa7e1025 (1.3.16-rc1) introduced a minor bug by comparing req->flags
with BF_READ_ERROR instead of checking for the bit. The result is that the
error message is always returned even in case of client error. This has no
real impact but this must be fixed.

It may be backported to 1.4 and 1.3.
2012-07-31 07:55:31 +02:00
Willy Tarreau
a7ad50cdb1 MEDIUM: pattern: add the "base" sample fetch method
This one returns the concatenation of the first Host header entry with
the path. It can make content-switching rules easier, help with fighting
DDoS on certain URLs and improve shared caches efficiency.
2012-07-26 19:08:38 +02:00
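A hedged content-switching example built on this fetch, assuming the usual
_beg ACL derivative is available (host, path and backend names are
illustrative):

   acl static_req base_beg www.example.com/static/
   use_backend static_cache if static_req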
Willy Tarreau
6812bcfc94 MINOR: replace acl_fetch_{path,url}* with smp_fetch_*
Doing so allows us to support sticking on URL, URL's IP, URL's port and
path.

Both fetch functions should be improved to support an optional depth
allowing sticking to a server based on just a few directory
components. This would help with portals, some prefetch-capable
caches and with outgoing connections using multiple internet links.
2012-07-26 19:06:40 +02:00
Willy Tarreau
a05903174f BUG/MAJOR: cookie prefix doesn't support cookie-less servers
Commit 827aee91 merged in 1.5-dev5 introduced a regression causing
the srv pointer to be tested twice instead of srv then srv->cookie.
The result is that if a server has no cookie in prefix mode, haproxy
will crash when trying to modify it.

Such a config is very unlikely to happen, except maybe with a backup
server, which would cause haproxy to die with the last server in the
farm.

No backport is needed, only 1.5-dev was affected.
2012-06-06 16:07:00 +02:00
Willy Tarreau
4f8a83cb6e MEDIUM: stats: add the ability to kill sessions from the admin interface
It was not possible to kill remaining sessions from the admin interface,
which is annoying especially when switching to maintenance mode. Now it's
possible.
2012-06-04 00:26:23 +02:00
Willy Tarreau
d72822442d MEDIUM: stats: add support for soft stop/soft start in the admin interface
One important missing feature on the web interface is the ability to perform
a soft stop/soft start. This is now possible.
2012-06-04 00:22:44 +02:00
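Both of these admin-interface features require admin mode to be enabled on
the stats page; a minimal hedged sketch (port, URI and ACL are assumptions):

   listen stats
       bind :8404
       stats enable
       stats uri /stats
       acl local src 127.0.0.1
       stats admin if local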
Willy Tarreau
4992dd2d30 MINOR: http: add support for "httponly" and "secure" cookie attributes
   httponly  This option tells haproxy to add an "HttpOnly" cookie attribute
             when a cookie is inserted. This attribute is used so that a
             user agent doesn't share the cookie with non-HTTP components.
             Please check RFC6265 for more information on this attribute.

   secure    This option tells haproxy to add a "Secure" cookie attribute when
             a cookie is inserted. This attribute is used so that a user agent
             never emits this cookie over non-secure channels, which means
             that a cookie learned with this flag will be presented only over
             SSL/TLS connections. Please check RFC6265 for more information on
             this attribute.
2012-05-31 21:02:17 +02:00
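For illustration, both attributes combined with cookie insertion might look
like this (cookie and server names are assumptions):

   backend app
       cookie SESSID insert indirect nocache httponly secure
       server s1 192.168.1.10:80 cookie s1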
Willy Tarreau
674021329c REORG/MINOR: use dedicated proxy flags for the cookie handling
Cookies were mixed with many other options while they're not used as options.
Move them to a dedicated bitmask (ck_opts). This has released 7 flags in the
proxy options and leaves some room for new proxy flags.
2012-05-31 20:40:20 +02:00
Willy Tarreau
cde18fc1ba BUG/MINOR: perform_http_redirect also needs to rewind the buffer
Commit d1de8af362 was incomplete, because
perform_http_redirect() also needs to rewind the buffer since it's called
after data are scheduled for forwarding.

No backport needed.
2012-05-30 08:00:56 +02:00
Cyril Bonté
a32d275ab0 BUG/MEDIUM: option forwardfor if-none doesn't work with some configurations
When "option forwardfor" is enabled in a frontend that uses backends,
"if-none" ignores the header name provided in the frontend.
This prevents haproxy from adding the X-Forwarded-For header if the option is not
used in the backend.

This may introduce security issues for servers/applications that rely on the
header provided by haproxy.

A minimal configuration which can reproduce the bug:
defaults
	mode http

listen OK
	bind :9000

	option forwardfor if-none
	server s1 127.0.0.1:80

listen BUG-frontend
	bind :9001

	option forwardfor if-none

	default_backend BUG-backend

backend BUG-backend
	server s1 127.0.0.1:80
2012-05-30 06:43:24 +02:00
Willy Tarreau
949811319b REORG/MEDIUM: stream_interface: move applet->state and private to connection
The state and the private pointer are not specific to the applets, since SSL
will require both of them as well. Move them to the connection layer now and
rename them. We also now ensure that both are NULL on first call.
2012-05-21 17:09:48 +02:00
Willy Tarreau
fb7508aefb REORG/MINOR: stream_interface: move si->fd to struct connection
The socket fd is used only when in socket mode and with a connection.
2012-05-21 16:47:54 +02:00
Willy Tarreau
73b013b070 MINOR: stream_interface: introduce a new "struct connection" type
We start to move everything needed to manage a connection to a special
entity "struct connection". We have the data layer operations and the
control operations there. We'll also have more info in the future such
as file descriptors and applet contexts, so that in the end it becomes
detachable from the stream interface, which will allow connections to
be reused between sessions.

For now on, we start with minimal changes.
2012-05-21 16:31:45 +02:00
Willy Tarreau
ea95316bf1 MEDIUM: http: msg->sov and msg->sol will never wrap
These ones are offsets now, so they cannot wrap. Let's remove the useless
wrapping detection and simplify the forwarding code.
2012-05-18 23:50:43 +02:00
Willy Tarreau
2692736aa3 MEDIUM: http: get rid of msg->som which is not used anymore
msg->som was zero before the body and was used to carry the beginning
of a chunk size for chunked-encoded messages, at a moment when msg->sol
is always zero.

Remove msg->som and replace it with msg->sol where needed.
2012-05-18 23:50:43 +02:00
Willy Tarreau
06a000f56e CLEANUP: http: make it more obvious that msg->som is always null outside of chunks
Since the recent buffer reorg, msg->som is redundant with buf->p but still
appears at a number of places. This tiny patch makes it possible to confirm
that som follows two states:
  - 0 from the moment the message starts to be parsed
  - relative offset to ->p for start of chunk when parsing chunks

During this second state, ->sol is never used, so we should probably merge
the two.
2012-05-18 23:04:32 +02:00
Willy Tarreau
09d1e254c9 MAJOR: http: stop using msg->sol outside the parsers
This is a left-over from the buffer changes. Msg->sol is always null at the
end of the parsing, so we must not use it anymore to read headers or find
the beginning of a message. As a side effect, the dump of the request in
debug mode is working again because it was relying on msg->sol not being
null.

Maybe it will even be mergeable with another of the message pointers.
2012-05-18 22:43:55 +02:00
Willy Tarreau
d1de8af362 BUG/MAJOR: fix regression on content-based hashing and http-send-name-header
The recent split between the buffers and HTTP messages in 1.5-dev9 caused
a major trouble : in the past, we used to keep a pointer to HTTP data in the
buffer struct itself, which was the cause of most of the pain we had to deal
with buffers.

Now the two are split but we lost the information about the beginning of
the HTTP message once it's being forwarded. While it seems normal, it happens
that several parts of the code currently rely on this ability to inspect a
buffer containing old contents :
  - balance uri
  - balance url_param
  - balance url_param check_post
  - balance hdr()
  - balance rdp-cookie()
  - http-send-name-header

All these happen after the data are scheduled for being forwarded, which
also causes a server to be selected. So for a long time we've been relying
on supposedly sent data that we still had a pointer to.

Now that we don't have such a pointer anymore, we only have one possibility :
when we need to inspect such data, we have to rewind the buffer so that ->p
points to where it previously was. We're lucky, no data can leave the buffer
before it is connected outside, and since no inspection can begin until
it's empty, we know that the skipped data are exactly ->o. So we rewind the
buffer by ->o to get headers and advance it back by the same amount.

Proceeding this way is particularly important when dealing with chunked-
encoded requests, because the ->som and ->sov fields may be reused by the
chunk parser before the connection attempt is made, so we cannot rely on
them.

Also, we need to be able to come back after retries and redispatches, which
might change the size of the request if http-send-name-header is set. All of
this is accounted for by the output queue so in the end it does not look like
a bad solution.

No backport is needed.
2012-05-18 22:23:01 +02:00
David du Colombier
7af4605ef7 BUG/MAJOR: trash must always be the size of a buffer
Before it was possible to resize the buffers using global.tune.bufsize,
the trash had always been the size of a buffer by design. Unfortunately,
the recent buffer sizing at runtime forgot to adjust the trash, resulting
in it being too short for content rewriting if buffers were enlarged from
the default value.

The bug was encountered in 1.4 so the fix must be backported there.
2012-05-16 14:21:55 +02:00
Willy Tarreau
7bb68abb9f OPTIM/MEDIUM: stream_interface: add a new SI_FL_NOHALF flag
This flag indicates that we're not interested in keeping half-open
connections on a stream interface. It has the benefit of allowing
the socket layer to cause an immediate write close when detecting
an incoming read close. This releases resources much faster and
saves one syscall (either a shutdown or setsockopt).

This flag is only set by HTTP on the interface going to the server
since we don't want to continue pushing data there when it has
closed.

Another benefit is that it responds with a FIN to a server's FIN
instead of responding with an RST as it used to, which is much
cleaner.

Performance gains of 7.5% have been measured on HTTP connection
rate on empty objects.
2012-05-13 14:52:22 +02:00
Willy Tarreau
93548be149 OPTIM: proto_http: don't enable quick-ack on empty buffers
Commit 5e205524 was a bit overzealous by unconditionally enabling
quick ack when a request is not yet in the buffer, because it also
does so when nothing has been received yet, causing a useless ACK
to be emitted.

Improve the situation by doing this only if the input buffer is
empty (indicating that nothing was sent by the client).

In case of keep-alive, an empty buffer means we already have a
response in flight which will serve as an ACK.
2012-05-13 08:44:16 +02:00
Willy Tarreau
59b9479667 BUG/MEDIUM: stream_interface: restore get_src/get_dst
Commit e164e7a removed get_src/get_dst setting in the stream interfaces but
forgot to set it in proto_tcp. Get the feature back because we need it for
logging, transparent mode, ACLs etc... We now rely on the stream interface
direction to know what syscall to use.

One benefit of doing it this way is that we don't use getsockopt() anymore
on outgoing stream interfaces nor on UNIX sockets.
2012-05-11 16:48:10 +02:00
Willy Tarreau
c63190d429 REORG: use the name sock_raw instead of stream_sock
We'll soon have an SSL socket layer, and in order to ease the difference
between the two, we use the name "sock_raw" to designate the one which
directly talks to the sockets without any conversion.
2012-05-11 14:23:52 +02:00
Willy Tarreau
4a3fd4c8df BUG/MAJOR: acl: http_auth_group() must not accept any user from the userlist
http_auth and http_auth_group used to share the same fetch function, while
they're doing very different things. The first one only checks whether the
supplied credentials are valid wrt a userlist while the second not only
checks this but also checks group ownership from a list of patterns.

Recent acl/pattern merge caused a simplification here by which the fetch
function would always return a boolean, so the group match was always fine
if the user:password was valid, regardless of the patterns provided with
the ACL.

The proper solution consists in splitting the function in two, depending
on what is desired.

It's also worth noting that check_user() would probably need to be split in
two: one to check user:password, and the other to check group ownership for
an already valid user:password combination. At this point it is not certain
if the group mask is still useful or not considering that the passwd check
is always made.

This bug was reported and diagnosed by Cyril Bonté. It first appeared
in 1.5-dev9 so it does not need any backporting.
2012-05-10 23:18:26 +02:00
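A hedged sketch of checking group membership explicitly from the
configuration side (userlist, user, group and realm names are assumptions;
insecure-password is used only to keep the example short):

   userlist site_users
       group admins users alice
       user alice insecure-password letmein

   frontend www
       bind :80
       acl is_admin http_auth_group(site_users) admins
       http-request auth realm Restricted if !is_admin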
Cyril Bonté
20a804ac6d BUG/MINOR: stats admin: "Unexpected result" was displayed unconditionally
I introduced a regression in commit 19979e176e while reworking the admin
actions results.
"Unexpected result" was displayed even if the action was applied due to a
misplaced initialization. This small patch should fix it.

Note: no need to backport.
2012-05-10 21:51:17 +02:00
Willy Tarreau
a7fe8e527c MINOR: http: replace http_message_realign() with buffer_slow_realign()
There is no more reason for the realign function being HTTP specific,
it only operates on a buffer now. Let's move it to buffers.c instead.

It's likely that buffer_bounce_realign is broken (not used), this will
have to be inspected. The function is worth rewriting as it can be
cheaper than buffer_slow_realign() to realign large wrapping buffers.
2012-05-08 21:28:17 +02:00
Willy Tarreau
515393649c MINOR: acl: add the cook_val() match to match a cookie against an integer 2012-05-08 21:28:16 +02:00
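An illustrative use of the new match (cookie name, threshold and backend are
made up):

   acl big_basket cook_val(items) gt 10
   use_backend heavy_backend if big_basket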
Willy Tarreau
d04b1bce69 MEDIUM: http: improve error capture reports
A number of important information were missing from the error captures, so
let's improve them. Now we also log source port, session flags, transaction
flags, message flags, pending output bytes, expected buffer wrapping position,
total bytes transferred, message chunk length, and message body length.

As such, the output format has slightly evolved and the source address moved
to the third line :

[08/May/2012:11:14:36.341] frontend echo (#1): invalid request
  backend echo (#1), server <NONE> (#-1), event #1
  src 127.0.0.1:40616, session #4, session flags 0x00000000
  HTTP msg state 26, msg flags 0x00000000, tx flags 0x00000000
  HTTP chunk len 0 bytes, HTTP body len 0 bytes
  buffer flags 0x00909002, out 0 bytes, total 28 bytes
  pending 28 bytes, wrapping at 8030, error at position 7:

  00000  GET / /?t=20000 HTTP/1.1\r\n
  00026  \r\n

[08/May/2012:11:13:13.426] backend echo (#1) : invalid response
  frontend echo (#1), server local (#1), event #0
  src 127.0.0.1:40615, session #1, session flags 0x0000044e
  HTTP msg state 32, msg flags 0x0000000e, tx flags 0x08200000
  HTTP chunk len 0 bytes, HTTP body len 20 bytes
  buffer flags 0x00008002, out 81 bytes, total 92 bytes
  pending 11 bytes, wrapping at 7949, error at position 9:

  00000  Foo: bar\r\r\n
2012-05-08 21:28:16 +02:00
Willy Tarreau
69d8c5d99e BUG/MINOR: http: ensure that msg->err_pos is always relative to buf->p
Since the beginning of buffer&msg changes, the error position (err_pos)
had not completely been converted and some offsets still appear wrong.
Now we ensure that everywhere msg->err_pos is relative to buf->p and
we always report buf->i bytes starting at buf->p in all error captures,
which ensures that err_pos is there.

This is not exactly a bug and is specific to latest changes so no backport
is needed.
2012-05-08 21:28:15 +02:00
Willy Tarreau
d6c2e8c916 BUG/MINOR: http: error snapshots are wrong if buffer wraps
Commit 81f2fb added support for wrapping buffer captures, but unfortunately
the code used to perform two memcpy() over the same destination, causing a
loss of the start of the buffer rendering some error snapshots unusable.

This bug is present in 1.4 too and must be backported.
2012-05-08 21:28:15 +02:00
Willy Tarreau
060781fb4a REORG: stream_interface: create a struct sock_ops to hold socket operations
These operations are used regardless of the socket protocol family. Move
them to a "sock_ops" struct. ->read and ->write have been moved there too
as they have no reason to remain at the protocol level.
2012-05-08 21:28:14 +02:00
Willy Tarreau
cd3b094618 REORG: rename "pattern" files
They're now called "sample" everywhere to match their description.
2012-05-08 20:57:21 +02:00
Willy Tarreau
1278578487 REORG: use the name "sample" instead of "pattern" to designate extracted data
This is mainly a massive renaming in the code to get it in line with the
calling convention. Next patch will rename a few files to complete this
operation.
2012-05-08 20:57:20 +02:00
Willy Tarreau
7dcb6480db MEDIUM: acl: extend the pattern parsers to report meaningful errors
By passing the error pointer to all ACL parsers, we can make them report
useful errors and not simply fail.
2012-05-08 20:57:20 +02:00
Willy Tarreau
b7451bb660 MEDIUM: acl: report parsing errors to the caller
All parsing errors were known but impossible to return. Now by making use
of memprintf(), we're able to build meaningful error messages that the
caller can display.
2012-05-08 20:57:20 +02:00
Willy Tarreau
28376d62cb MEDIUM: http: merge ACL and pattern cookie fetches into a single one
It's easy to merge pattern and ACL fetches of cookies. It allows us
to remove two distinct fetch functions. The new function internally
uses an occurrence number to serve both purposes, but it didn't appear
worth exposing it outside so there is no keyword argument to set it.
However one of the benefits is that the "cookie" fetch for stick tables
now automatically adapts to requests and responses, so there is no more
need for set-cookie().
2012-05-08 20:57:19 +02:00
Willy Tarreau
185b5c4a7b MEDIUM: http: merge acl and pattern header fetch functions
HTTP header fetch is now done using smp_fetch_hdr() for both ACLs and
patterns. This one also supports an occurrence number, making it possible
to specify explicit occurrences for ACLs and patterns.
2012-05-08 20:57:19 +02:00
Willy Tarreau
25c1ebc0c9 MEDIUM: acl/pattern: start merging common sample fetch functions
src_port, dst_port and url_param have converged between ACLs and patterns.
This means that src_port is now available in patterns and that urlp_* has
been added to ACLs. Some code has moved to accommodate static function
definitions, but there were few changes.
2012-05-08 20:57:17 +02:00
Willy Tarreau
32a6f2e572 MEDIUM: acl/pattern: use the same direction scheme
Patterns were using a bitmask to indicate if request or response was desired
in fetch functions and keywords. ACLs were using a bitmask in fetch keywords
and a single bit in fetch functions. ACLs were also using an ACL_PARTIAL bit
in fetch functions indicating that a non-final fetch was performed, which was
an abuse of the existing direction flag.

The change now consists in using :
  - a capabilities field for fetch keywords => SMP_CAP_REQ/RES to indicate
    if a keyword supports requests, responses, both, etc...
  - an option field for fetch functions to indicate what the caller expects
    (request/response, final/non-final)

The ACL_PARTIAL bit was reversed to get SMP_OPT_FINAL as it's more explicit
to know we're working on a final buffer than on a non-final one.

ACL_DIR_* were removed, as well as PATTERN_FETCH_*. L4 fetches were improved
to support being called on responses too since they're still available.

The <dir> field of all fetch functions was changed to <opt> which is now
unsigned.

The patch is large but mostly made of cosmetic changes to accommodate this, as
almost no logic change happened.
2012-05-08 20:57:17 +02:00
Willy Tarreau
24e32d8c6b MEDIUM: acl: replace acl_expr with args in acl fetch_* functions
Having the args everywhere will make it easier to share fetch functions
between patterns and ACLs. The only place where we could have needed
the expr was in the http_prefetch function which can do well without.
2012-05-08 20:57:16 +02:00
Willy Tarreau
b8c8f1f611 MEDIUM: pattern: retrieve the sample type in the sample, not in the keyword description
We need the pattern fetchers and converters to correctly set the output type
so that they can be used by ACL fetchers. By using the sample type instead of
the keyword type, we also open the possibility to create some multi-type
pattern fetch methods later (eg: "src" being v4/v6). Right now the type in
the keyword is used to validate the configuration.
2012-05-08 20:57:16 +02:00
Willy Tarreau
342acb4775 MEDIUM: pattern: integrate pattern_data into sample and use sample everywhere
Now there is no more reference to union pattern_data. All pattern fetch and
conversion functions now make use of the common sample type. Note: none of
them adjust the type right now so it's important to do it next otherwise
we would risk sharing such functions with ACLs and seeing them fail.
2012-05-08 20:57:15 +02:00
Willy Tarreau
21e5b0e3cb MEDIUM: get rid of SMP_F_READ_ONLY and SMP_F_MUST_FREE
These ones were either unused or improperly used. Some integers were marked
read-only, which does not make much sense. Buffers are not read-only, they're
"constant" in that they must be kept intact after any possible change.
2012-05-08 20:57:15 +02:00
Willy Tarreau
197e10aaae MEDIUM: acl: get rid of the SET_RES flags
We now simply rely on a boolean result from a fetch to declare a match.
Booleans are not compared against patterns, they fix the result.
2012-05-08 20:57:15 +02:00
Willy Tarreau
f853c46bc3 MEDIUM: pattern/acl: get rid of temp_pattern in ACLs
This one is not needed anymore as we can return the data and its type in the
sample provided by the caller. ACLs now always return the proper type. BOOL
is already returned when the result is expected to be processed as a boolean.

temp_pattern has been unexported now.
2012-05-08 20:57:14 +02:00
Willy Tarreau
3740635b88 MAJOR: acl: make use of the new sample struct and get rid of acl_test
This change is invasive in lines of code but not much in terms of
functionalities as it's mainly a replacement of struct acl_test
with struct sample.
2012-05-08 20:57:14 +02:00
Willy Tarreau
422aa0792d MEDIUM: pattern: add new sample types to replace pattern types
The new sample types are necessary for the acl-pattern convergence.
These types are boolean and signed int. Some types were renamed for
less ambiguity (ip->ipv4, integer->uint).
2012-05-08 20:57:14 +02:00
Willy Tarreau
8f7406e9b4 MEDIUM: acl: remove the ACL_TEST_F_NULL_MATCH flag
This flag was used to force a boolean match even if there was no pattern
to match. It was used only by http_auth() and designed only for this one.
It's easier and cleaner to make the fetch function perform the test and
report the boolean result as a few other functions already do. It simplifies
the acl_exec_cond() logic and will help merging ACLs and patterns.
2012-05-08 20:57:13 +02:00
Willy Tarreau
21d68a6895 MEDIUM: pattern: add an argument validation callback to pattern descriptors
This is used to validate that arguments are coherent. For instance,
payload_lv expects that the last arg (if any) is not more negative
than the sum of the first two. The error is reported if any.
2012-05-08 20:57:13 +02:00
Willy Tarreau
9fcb984b17 MEDIUM: pattern: use the standard arg parser
We don't need the pattern-specific args parsers anymore, make use of the
common parser instead. We still need to improve this by adding a validation
function to report abnormal argument values or combinations. We don't report
precise parsing errors yet but this was not previously done either.
2012-05-08 20:57:13 +02:00
Willy Tarreau
f995410355 MEDIUM: pattern: get rid of arg_i in all functions making use of arguments
arg_i was almost unused, and since we migrated to use struct arg everywhere,
the rare cases where arg_i was needed could be replaced by switching to
arg->type = ARGT_STOP.
2012-05-08 20:57:12 +02:00
Willy Tarreau
ecfb8e8ff9 MEDIUM: pattern: replace type pattern_arg with type arg
arg is more complete than pattern_arg since it also covers ACL args,
so let's use this one instead.
2012-05-08 20:57:12 +02:00
Willy Tarreau
61612d49a7 MAJOR: acl: store the ACL argument types in the ACL keyword declaration
The types and minimal number of ACL keyword arguments are now stored in
their declaration. This will allow many more fantasies if some ACL use
several arguments or types.

Doing so required to rework all ACL keyword declarations to add two
parameters. So this was a good opportunity for a general cleanup and
to sort all entries in alphabetical order.

We still have two pending issues :
  - parse_acl_expr() checks for errors but has no way to report them to
    the user ;
  - the types of some arguments are still not resolved and kept as strings
    (eg: ARGT_FE/BE/TAB) for compatibility reasons, which must be resolved
    in acl_find_targets()
2012-05-08 20:57:11 +02:00
Willy Tarreau
34db108423 MAJOR: acl: make use of the new argument parsing framework
The ACL parser now uses the argument parser to build a typed argument list.
Right now arguments are all strings and only one argument is supported since
this is what ACLs currently support.
2012-05-08 20:57:11 +02:00
Willy Tarreau
d53e2428d1 MEDIUM: http/acl: make acl_fetch_hdr_{ip,val} rely on acl_fetch_hdr()
These two functions will now exploit the return of acl_fetch_hdr() instead
of doing the same work again and again.
2012-05-08 20:57:10 +02:00
Willy Tarreau
e333ec9f24 MEDIUM: http/acl: merge all request and response ACL fetches of headers and cookies
Latest changes have made it possible to remove all differences between
request and response processing, making it worth merging request and
response ACL fetch functions to reduce code size.

Most likely with minor adaptation it will be possible to use the same hdr_*
functions to match in the response path, and cook_* for the response cookie
too.
2012-05-08 20:57:10 +02:00
Willy Tarreau
7744e0cf43 BUG/MINOR: http_auth: ACLs are volatile, not permanent
ACLs are volatile since they require a fetch of request buffer data which is
then copied to a temporary shared place. The issue is minor though since auth
is generally checked very early.
2012-05-08 20:57:10 +02:00