Commit Graph

1396 Commits

Author SHA1 Message Date
Willy Tarreau
691b1f429e CLEANUP: stream-int: remove obsolete si_ctrl function
This function makes no sense anymore and will cause trouble when converting
the remains of connection/applet to end points. Let's replace it now
with its contents.
2013-12-09 15:40:22 +01:00
Willy Tarreau
cf644ed37a MEDIUM: stream-int: make ->end point to the connection or the appctx
The long-term goal is to have a context for applets as an alternative
to the connection and not as a complement. At the moment, the context
is still stored in the stream interface; we only put a pointer
to the applet's context in si->end and initialize the context with object
type OBJ_TYPE_APPCTX, which allows us not to allocate an entry when
deciding to switch to an applet.

Special care is taken to never dereference si->conn anymore when
dealing with an applet. That's why it's important that si->end is
always set to the proper type:

    si->end == NULL             => not connected to anything
   *si->end == OBJ_TYPE_APPCTX  => connected to an applet
   *si->end == OBJ_TYPE_CONN    => real connection (server, proxy, ...)

The session management code used to check the applet from the connection's
target. Now it uses the stream interface's end point and does not touch the
connection at all. Similarly, we stop checking the connection's addresses
and file descriptors when reporting the applet's status in the stats dump.
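
For illustration, a minimal C sketch of the idea (hypothetical, simplified
names; not the actual HAProxy definitions): the end pointer stores the
address of an object-type tag placed as the first member of whatever it
points to, so the owner can be identified without touching si->conn:

    enum obj_type { OBJ_TYPE_NONE = 0, OBJ_TYPE_CONN, OBJ_TYPE_APPCTX };

    struct connection { enum obj_type obj_type; int fd;   /* ... */ };
    struct appctx     { enum obj_type obj_type; void *ctx; /* ... */ };

    /* classify what a stream interface's end pointer is attached to */
    static const char *si_end_kind(enum obj_type *end)
    {
        if (!end)
            return "not connected to anything";
        switch (*end) {
        case OBJ_TYPE_APPCTX: return "connected to an applet";
        case OBJ_TYPE_CONN:   return "real connection (server, proxy, ...)";
        default:              return "unknown";
        }
    }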
2013-12-09 15:40:22 +01:00
Willy Tarreau
4a59f2f954 MAJOR: stream interface: remove the ->release function pointer
Since the last commit, we now have a pointer to the applet in the
applet context, so we don't need the si->release function pointer
anymore; it can be extracted from applet->applet.release. At many
places, the ->release function was still tested for real connections
while it is only limited to applets, so most of them were simply
removed. For the remaining valid uses, a new inline function
si_applet_release() was added to simplify the check and the call.
2013-12-09 15:40:22 +01:00
Willy Tarreau
48099c7a07 MEDIUM: stream-interface: set the pointer to the applet into the applet context
In preparation for a later move of all the applet context outside of the
stream interface, we'll need to have access to the applet itself from the
context. Let's have a pointer to it inside the context.
2013-12-09 15:40:22 +01:00
Willy Tarreau
7d67d7b9e5 MINOR: stream-int: add a new pointer to the end point
The end point will correspond to either an applet context or a connection,
depending on the object type. For now the pointer remains null.
2013-12-09 15:40:22 +01:00
Willy Tarreau
372d6708fb MINOR: stream-int: split si_prepare_embedded into si_prepare_none and si_prepare_applet
si_prepare_embedded() was used both to attach an applet and to detach
anything from a stream interface. Split it into si_prepare_none() to
detach and si_prepare_applet() to attach an applet.

si->conn->target is now assigned from within these two functions instead
of their respective callers.
2013-12-09 15:40:22 +01:00
Willy Tarreau
9b6c2c721e MINOR: stream-int: rename ->applet to ->appctx
Since this is the applet context, call it ->appctx to avoid the confusion
with the pointer to the applet. Many places were changed but it's only a
renaming.
2013-12-09 15:40:22 +01:00
Willy Tarreau
0788f47cc1 MINOR: obj: introduce a new type appctx
The object type was added to "struct appctx". The purpose will be
to identify an appctx when the applet context is detached from the
stream interface. For now, it's still attached, so this patch only
adds the new type and does not replace its use.
2013-12-09 15:40:22 +01:00
Willy Tarreau
452d3bb0c4 MINOR: stream-interface: move the applet context to its own struct
In preparation for making the applet context dynamically allocatable,
we create a "struct appctx". Nothing else was changed, it's the same
struct as the one which was in the stream interface.
2013-12-09 15:40:22 +01:00
Willy Tarreau
f4acee332b MEDIUM: stream interface: move the peers' ptr into the applet context
A long time ago when peers were introduced, there was no applet nor
applet context. Applet contexts were introduced but the peers still
did not make use of them and the "ptr" pointer remained present in
every stream interface in addition to the other contexts.

Simply move this pointer to its own location in the context.

Note that this pointer is still a void* because its type and contents
vary depending on the peer session state. This could probably be cleaned
up in the future given that all other contexts already store
much more than a single pointer.
2013-12-09 15:40:22 +01:00
Willy Tarreau
51c2184755 MINOR: connection: add a field to store an object type
This will soon be used to differentiate connections from applet
contexts. Object type "connection" has also been added.
2013-12-09 15:40:22 +01:00
Willy Tarreau
66337a0784 MINOR: obj: provide a safe and an unsafe access to pointed objects
Most of the time, the caller of objt_<type>(ptr) will know that <ptr>
is valid and of the correct type (eg: in an "if" condition). Let's provide
an unsafe variant that does not perform the check again for these usages.
The new functions are called "__objt_<type>".
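
As a rough sketch of the convention (hypothetical, using the connection type
as an example; the real macros cover every object type), the safe accessor
re-validates the tag while the unsafe one only casts:

    enum obj_type { OBJ_TYPE_NONE = 0, OBJ_TYPE_CONN, OBJ_TYPE_APPCTX };
    struct connection { enum obj_type obj_type; /* ... */ };

    /* safe: returns NULL unless <ptr> really tags a connection */
    static inline struct connection *objt_conn(enum obj_type *ptr)
    {
        if (!ptr || *ptr != OBJ_TYPE_CONN)
            return NULL;
        return (struct connection *)ptr;
    }

    /* unsafe: the caller has already checked the type, so skip the test */
    static inline struct connection *__objt_conn(enum obj_type *ptr)
    {
        return (struct connection *)ptr;
    }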
2013-12-09 15:40:22 +01:00
Willy Tarreau
6fe1541285 MINOR: stream-int: make the shutr/shutw functions void
This is to be more consistent with the other functions. The only
reason why these functions used to return a value was to let the
caller adjust polling by itself, but now their only callers were
the si_shutr()/si_shutw() inline functions. Now these functions
do not depend anymore on the connection.

The connection variants of these functions now call
conn_data_stop_recv()/conn_data_stop_send() before returning, in order
not to require a return code anymore. The applet version does not
need this at all.
2013-12-09 15:40:22 +01:00
Willy Tarreau
8b3d7dfd7c MEDIUM: stream-int: split the shutr/shutw functions between applet and conn
These functions induce a lot of ifs everywhere because they consider two
different cases: one where the connection exists and has a file
descriptor, and the default one where at most an
applet has to be notified.

Let's have them in si_ops and automatically decide which one to use.

The connection shutdown sequence has been slightly simplified, and we
now clear the flags at the end.

Also we remove SHUTR_NOW after a shutw with nolinger, as it's cleaner
not to keep it.
2013-12-09 15:40:22 +01:00
Willy Tarreau
347a35d19e MAJOR: stats: move the HTTP stats handling to its applet
There is a big problem with the way POST is handled for the admin
stats page. The POST parameters are extracted from some http-request
rules, and if not found they return zero, hoping to be called again
when more data passes. This results in the HTTP analyser being called
several times and all the rules prior to the stats being executed
multiple times as well. That includes rewrite rules.

So instead of doing this, we now move all the processing of the stats
into the stats applet.

That way we just set the stats applet in the HTTP analyser when a stats
request is detected, and the applet takes the time it needs to read the
arguments and respond. We could even imagine improving the applet to
support requests larger than a single buffer.

The code was almost only moved and minimally changed. Several new HTTP
states were added to the stats applet to emit headers, redirects and
to read POST. It was necessary to do this because the headers sent
depend on the parsing of the POST request. In the end it's beneficial
because we removed two stream_int_retnclose() calls.
2013-12-09 15:40:22 +01:00
Willy Tarreau
96d44918f7 MEDIUM: stats: prepare the HTTP stats I/O handler to support more states
In preparation for moving the POST processing to the applet, we first
add new states to the HTTP I/O handler. Till now st0 was only 0/1 for
start/end. We now replace it with an enum.
2013-12-09 15:40:22 +01:00
Willy Tarreau
9f68148321 MEDIUM: peers: don't rely on conn->xprt_ctx anymore
We make the peers code use applet->ptr instead of conn->xprt_ctx to
store the pointer to the current peer. That way it does not depend
on a connection anymore.
2013-12-09 15:40:21 +01:00
Willy Tarreau
787add2932 MINOR: session: add a simple function to retrieve a session from a task
This function only casts t->context to (struct session *). It will
avoid some ugly and unsafe casts in upcoming changes.
2013-12-09 15:40:21 +01:00
Willy Tarreau
a94d2d7653 MEDIUM: stats: don't use conn->xprt_st anymore
We're trying to move the applets out of the struct connection. So
let's remove the dependence on xprt_st and introduce si->applet.st2
to store the missing contextual data instead.
2013-12-09 15:40:21 +01:00
Willy Tarreau
08382955fe CLEANUP: stream_interface: remove unused field err_loc
This field was still fed with a pointer to the server that caught an
error but was not used anymore. Let's remove it.
2013-12-09 15:40:21 +01:00
Willy Tarreau
37e340ce4b BUG/MEDIUM: stick: completely remove the unused flag from the store entries
The store[] array in the session holds a flag which probably aimed to
differentiate store entries learned from the request from those learned
from the response, and allowing responses to overwrite only the request
ones (eg: have a server set a response cookie which overwrites the request
one).

But this flag is set when response data is stored, and is never cleared.
So in practice, haproxy always runs with this flag set, meaning that
responses prevent themselves from overriding the request data.

It is desirable anyway to keep the ability not to override data, because
the override is performed only based on the table and not on the key, so
it would otherwise be impossible to retrieve two different
keys to store into the same table. For example, if a client sets a cookie
and a server another one, both need to be updated in the table in the
proper order. This is especially true when multiple keys may be tracked
on each side into the same table (eg: list of IP addresses in a header).

So the correct fix which also maintains the current behaviour consists in
simply removing this flag and never trying to optimize for the overwrite case.

This fix also has the benefit of significantly reducing the session size,
by 64 bytes due to alignment issues caused by this flag!

The bug has been there forever (since 1.4-dev7), so a backport to 1.4
would be appropriate.
2013-12-06 23:14:53 +01:00
Willy Tarreau
98aec9ff47 BUG/MINOR: checks: tcp-check actions are enums, not flags
In recent commit 5ecb77f (MEDIUM: checks: add send/expect tcp based check),
bitfields were mistakenly used at some places for the actions. Fortunately,
the only two actions right now are 1 and 2 so they don't share any bit in
common and the bug has no impact.

No backport is needed.
2013-12-06 16:16:41 +01:00
Baptiste Assmann
5ecb77f4c7 MEDIUM: checks: add send/expect tcp based check
This is a generic health check which can be used to match a
banner or send a request and analyse a server response.
It works in a send/expect way, and many exchanges can be done between
HAProxy and a server to decide the server status, making HAProxy able to
speak the server's protocol.

It can send arbitrary regular or binary strings and match content as a
regular or binary string or a regex.

Signed-off-by: Baptiste Assmann <bedis9@gmail.com>
2013-12-06 11:50:47 +01:00
Baptiste Assmann
bb77c8e26d MINOR: tools: function my_memmem() to lookup binary contents
This function simply looks for a memory block inside another one.
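
A minimal sketch of such a helper (the signature shown here is illustrative;
only the name my_memmem is confirmed by the commit title):

    #include <stddef.h>
    #include <string.h>

    /* return a pointer to the first occurrence of <needle> (length <nlen>)
     * inside <haystack> (length <hlen>), or NULL if it is not found
     */
    static const void *my_memmem(const void *haystack, size_t hlen,
                                 const void *needle, size_t nlen)
    {
        const char *h = haystack;

        if (!nlen)
            return haystack;          /* an empty needle matches at the start */

        while (hlen >= nlen) {
            if (memcmp(h, needle, nlen) == 0)
                return h;
            h++;
            hlen--;
        }
        return NULL;
    }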

Signed-off-by: Baptiste Assmann <bedis9@gmail.com>
2013-12-06 11:50:47 +01:00
Willy Tarreau
126d40691a MINOR: tools: add a generic binary hex string parser
We currently use such a hex parser in pat_parse_bin() to parse hex
string patterns. We'll need another generic one so let's move it to
standard.c and have pat_parse_bin() make use of it.
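
A sketch of what such a generic parser can look like (hypothetical name and
signature, shown only to illustrate the principle of turning a string such
as "48A7" into raw bytes):

    #include <stddef.h>

    /* convert one hex digit, return -1 on invalid input */
    static int hex2i(int c)
    {
        if (c >= '0' && c <= '9') return c - '0';
        if (c >= 'a' && c <= 'f') return c - 'a' + 10;
        if (c >= 'A' && c <= 'F') return c - 'A' + 10;
        return -1;
    }

    /* parse the hex string <src> of length <len> (must be even) into <dst>,
     * which must hold at least <len>/2 bytes. Returns the number of bytes
     * written, or -1 on invalid input.
     */
    static int parse_binary(const char *src, size_t len, unsigned char *dst)
    {
        size_t i;
        int hi, lo;

        if (len & 1)
            return -1;

        for (i = 0; i < len; i += 2) {
            hi = hex2i(src[i]);
            lo = hex2i(src[i + 1]);
            if (hi < 0 || lo < 0)
                return -1;
            dst[i / 2] = (hi << 4) | lo;
        }
        return (int)(len / 2);
    }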
2013-12-06 11:50:47 +01:00
Thierry FOURNIER
0ffe78cfe3 MEDIUM: map: merge identical maps
This patch permits using the same struct pattern for two identical maps.
This preserves memory, and permits updating only one
"struct pattern" when dynamic map updates are supported.
2013-12-06 11:40:53 +01:00
Thierry FOURNIER
275db69c07 BUG/MINOR: map: The map list was declared in the map.h file
This bug is harmless and post-dev19, it does not require any backport.
2013-12-06 11:37:28 +01:00
Thierry FOURNIER
d18cd0f110 MEDIUM: http: The redirect strings follows the log format rules.
We handle "http-request redirect" with a log-format string now, but we
leave "redirect" unaffected.

Note that the control of the special "/" case is moved from the runtime
execution to the configuration parsing. If the format rule list is
empty, the build_logline() function does nothing.
2013-12-02 23:31:33 +01:00
Thierry FOURNIER
d5f624dde7 MEDIUM: sample: add the "map" converter
Add a new converter with the following prototype :

  map(<map_file>[,<default_value>])
  map_<match_type>(<map_file>[,<default_value>])
  map_<match_type>_<output_type>(<map_file>[,<default_value>])

It searches for the input value in <map_file> using the <match_type>
matching method, and returns the associated value converted to the type
<output_type>. If the input value cannot be found in the <map_file>,
the converter returns the <default_value>. If the <default_value> is
not set, the converter fails and acts as if no input value could be
fetched. If the <match_type> is not set, it defaults to "str".
Likewise, if the <output_type> is not set, it defaults to "str". For
convenience, the "map" keyword is an alias for "map_str" and maps a
string to another string. The following array contains the
list of all the map* converters.

                 +----+----------+---------+-------------+------------+
                 |     `-_   out |         |             |            |
                 | input  `-_    |   str   |     int     |     ip     |
                 | / match   `-_ |         |             |            |
                 +---------------+---------+-------------+------------+
                 | str   / str   | map_str | map_str_int | map_str_ip |
                 | str   / sub   | map_sub | map_sub_int | map_sub_ip |
                 | str   / dir   | map_dir | map_dir_int | map_dir_ip |
                 | str   / dom   | map_dom | map_dom_int | map_dom_ip |
                 | str   / end   | map_end | map_end_int | map_end_ip |
                 | str   / reg   | map_reg | map_reg_int | map_reg_ip |
                 | int   / int   | map_int | map_int_int | map_int_ip |
                 | ip    / ip    | map_ip  | map_ip_int  | map_ip_ip  |
                 +---------------+---------+-------------+------------+

The names are intentionally chosen to reflect the same match methods
as ACLs use.
2013-12-02 23:31:33 +01:00
Thierry FOURNIER
4b5e422759 MINOR: map: Define map types
Define the types used with maps, and add a new argument type that can
reference the map. This pointer contains the map configuration entries.
2013-12-02 23:31:33 +01:00
Thierry FOURNIER
fdbf4842b6 MINOR: sample: add a private field to the struct sample_conv
These flags will be used for maps, and possibly later to pass some
extra information to other converters if needed.
2013-12-02 23:31:33 +01:00
Thierry FOURNIER
b805f71d1b MEDIUM: sample: let the cast functions set their output type
This patch allows each sample cast function to specify the sample
output type. The goal is to be able to emit an output type IPv4 or
IPv6 depending on what is found in the input if the next converter
is able to process them both.

The patch also adds a new pseudo type called "ADDR". This type is an
alias for IPV4 and IPV6 which is only used as an input type by converters
that want to express their compatibility with both address formats. It may
not be emitted.

The goal is to unify as much as possible the processing of IPv4 and IPv6
in order not to add extra keywords for the maps which act as converters,
but will match samples like ACLs do with their patterns.
2013-12-02 23:31:33 +01:00
Willy Tarreau
6f8fe310cf MINOR: pattern: import acl_find_match_name() into pattern.h
It's only dedicated to pattern match lookups, so it was renamed
pat_find_match_name().
2013-12-02 23:31:33 +01:00
Willy Tarreau
0cba607400 MINOR: acl/pattern: use types different from int to clarify who does what.
We now have the following enums and all related functions return them and
consume them :

   enum pat_match_res {
	PAT_NOMATCH = 0,         /* sample didn't match any pattern */
	PAT_MATCH = 3,           /* sample matched at least one pattern */
   };

   enum acl_test_res {
	ACL_TEST_FAIL = 0,           /* test failed */
	ACL_TEST_MISS = 1,           /* test may pass with more info */
	ACL_TEST_PASS = 3,           /* test passed */
   };

   enum acl_cond_pol {
	ACL_COND_NONE,		/* no polarity set yet */
	ACL_COND_IF,		/* positive condition (after 'if') */
	ACL_COND_UNLESS,	/* negative condition (after 'unless') */
   };

It's just in order to avoid doubts when reading some code.
2013-12-02 23:31:33 +01:00
Thierry FOURNIER
a65b343eee MEDIUM: pattern: rename "acl" prefix to "pat"
This patch just renames functions, types and enums. No code was changed.
A significant number of files were touched, especially the ACL arrays,
so it is likely that some external patches will not apply anymore.

One important thing is that we had to split ACL_PAT_* into two groups :
  - ACL_TEST_{PASS|MISS|FAIL}
  - PAT_{MATCH|UNMATCH}

A future patch will enforce enums on all these places to avoid confusion.
2013-12-02 23:31:33 +01:00
Thierry FOURNIER
d163e1ce30 MEDIUM: pattern: create pattern expression
This new structure contains the data needed for pattern matching. It's
the first step toward the complete independence of the pattern matching.
2013-12-02 23:31:33 +01:00
Thierry FOURNIER
ed66c297c2 REORG: acl/pattern: extract pattern matching from the acl file and create pattern.c
This patch just moves code without any change.

An ACL is just the association between a sample and a pattern. The pattern
contains the match method and the parse method. These two things are
different. This patch cleans the code by splitting it.
2013-12-02 23:31:33 +01:00
Thierry FOURNIER
dd69a04666 MEDIUM: acl: associate "struct sample_storage" to each "struct acl_pattern"
This will be used later with maps. Each map will associate an entry with
a sample_storage value.

This patch changes the "parse" prototype and all the parsing methods.
The goal is to associate "struct sample_storage" to each entry of
"struct acl_pattern". Only the "parse" function can add the sample value
into the "struct acl_pattern".
2013-12-02 23:31:33 +01:00
Thierry FOURNIER
8ed9697064 MINOR: sample: Define new struct sample_storage
This struct is used to store a sample constant. This struct is
smaller than struct sample. It only contains
a constant and doesn't need the "ctx" nor the "flags".
2013-12-02 23:31:33 +01:00
Thierry FOURNIER
29d47b87c4 MINOR: acl: Extract the pattern matching function
The map feature will need to match acl patterns. This patch extracts
the matching function from the global ACL function "acl_exec_cond".

The code was only moved to its own function, no functional changes were made.
2013-12-02 23:31:33 +01:00
Thierry FOURNIER
3a103c5a6b MINOR: acl: Extract the pattern parsing and indexation from the "acl_read_patterns_from_file()" function
With this split, the pattern indexation can apply to any source. The map
feature needs this functionality because the map cannot be loaded with the
same file format as the ones supported by acl_read_patterns_from_file().

The code was only moved to its own function, no functional changes were made.
2013-12-02 23:31:33 +01:00
Thierry FOURNIER
319e495a96 MINOR: acl: export acl arrays
The map feature needs to use the acl parser and converters.
2013-12-02 23:31:32 +01:00
Thierry FOURNIER
d559dd8390 MINOR: tools: Add a function to convert buffer to an ipv6 address
The inet_pton function needs an input string with a final \0. This
function copies the input string to a temporary buffer, adds the final
\0 and converts it to an address.
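
A sketch of the approach (hypothetical helper name; the real function may
differ in its error handling), copying the input so inet_pton() always sees
a terminated string:

    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>

    /* convert <len> bytes of <buf> to an IPv6 address stored in <dst>.
     * Returns 1 on success, 0 on failure (too long or not an address).
     */
    static int buf2ip6(const char *buf, size_t len, struct in6_addr *dst)
    {
        char tmp[INET6_ADDRSTRLEN];

        if (len >= sizeof(tmp))
            return 0;
        memcpy(tmp, buf, len);
        tmp[len] = '\0';                 /* inet_pton() needs the final \0 */
        return inet_pton(AF_INET6, tmp, dst) == 1;
    }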
2013-12-02 23:31:32 +01:00
Thierry FOURNIER
9c1d67ecbd MINOR: sample: provide the original sample_conv descriptor struct to the argument checker function.
Note that this argument checker is still unused but will be used by
maps.
2013-12-02 23:31:32 +01:00
Thierry FOURNIER
348971ea28 MEDIUM: acl: use the fetch syntax 'fetch(args),conv(),conv()' into the ACL keyword
If the acl keyword is a "fetch", the dedicated parsing function
"sample_parse_expr()" is used. Otherwise, the acl parsing function
"parse_acl_expr()" is extended to understand the syntax of a series
of converters placed after the "fetch" keyword.

Before this patch, each acl used a "struct sample_fetch" and executed
it with the "<fetch>->process()" function. Now, the dedicated function
"sample_process()" is called.

These syntaxes are now available:

   acl bad req.hdr(host),lower -m str www
   http-request redirect prefix /go-away if bad

   acl bad hdr_beg(host),lower www
   http-request redirect prefix /go-away if bad
2013-12-02 23:31:32 +01:00
Thierry FOURNIER
8af6ff12b5 MINOR: sample: export sample_casts
Just export the sample cast matrix "sample_casts" to prepare for the
generic sample conversion parser.
2013-12-02 23:31:32 +01:00
Thierry FOURNIER
20f4996738 MINOR: sample: export the generic sample conversion parser
Just export the function "find_sample_conv()" to prepare for the
generic sample conversion parser.
2013-12-02 23:31:32 +01:00
Willy Tarreau
34c2fb6f89 BUG/MINOR: config: report the correct track-sc number in tcp-rules
When parsing track-sc* actions in tcp-request rules, we now automatically
compute the track-sc identifier number using %d when displaying an error
message. But the ID has become wrong since we introduced sc0: we continue
to report id+1 in error messages, causing some confusion.

No backport is needed.
2013-12-02 23:31:32 +01:00
Willy Tarreau
830bf61815 BUG/MINOR: connection: fix typo in error message report
"unknownn" -> "unknown"
2013-12-01 20:29:58 +01:00
Thierry FOURNIER
1c0054fe83 BUG/MINOR: arg: fix error reporting for add-header/set-header sample fetch arguments
The 'add-header %[samples]' parsing errors associated with http-request
and http-response are displayed with the wrong keyword.

Configuration entry:

   http-request set-header mon-header %[res.hdr(user-agent)]

Original error message:

   [WARNING] 323/150920 (16559) : parsing [haproxy.conf:36] : 'log-format' : sample fetch <res.hdr ...

After commit error message:

   [WARNING] 323/150929 (16580) : parsing [haproxy.conf:36] : 'http-request' : sample fetch <res.hdr ...
2013-11-28 18:25:18 +01:00
Simon Horman
8c3d0be987 MEDIUM: Add DRAIN state and report it on the stats page
Add a DRAIN sub-state for a server which
will be shown on the stats page instead of UP if
its effective weight is zero.

Also, log if a server enters or leaves the DRAIN state
as the result of an agent check.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:16 +01:00
Simon Horman
671b6f02b5 MEDIUM: Add enable and disable agent unix socket commands
The syntax of these new commands is:

enable agent <backend>/<server>
disable agent <backend>/<server>

These commands allow temporarily stopping and subsequently
re-starting an auxiliary agent check. The effect of this is as follows:

New checks are only initialised when the agent is in the enabled state. Thus,
"disable agent" will prevent any new agent checks from being initiated until
the agent is re-enabled using "enable agent".

When an agent is disabled the processing of an auxiliary agent check that
was initiated while the agent was set as enabled is as follows: All
results that would alter the weight, specifically "drain" or a weight
returned by the agent, are ignored. The processing of the agent check is
otherwise unchanged.

The motivation for this feature is to allow the weight changing effects
of the agent checks to be paused to allow the weight of a server to be
configured using set weight without being overridden by the agent.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:16 +01:00
Simon Horman
58c32978b2 MEDIUM: Set rise and fall of agent checks to 1
This is achieved by moving rise and fall from struct server to struct check.

After this move the behaviour of the primary check, server->check is
unchanged. However, the secondary agent check, server->agent now has
independent rise and fall values, each of which is set to 1.

The result is that receiving "fail", "stopped" or "down" just once from the
agent will mark the server as down. And receiving a weight just once will
allow the server to be marked up if its primary check is in good health.

This opens up the scope to allow the rise and fall values of the agent
check to be configurable, however this has not been implemented at this
stage.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:16 +01:00
Simon Horman
d60d69138b MEDIUM: checks: Add supplementary agent checks
Allow an auxiliary agent check to be run independently of the
regular health check. This is enabled by the agent-check
server setting.

The agent-port, which specifies the TCP port to use for the agent's
connections, is required.

The agent-inter, which specifies the interval between agent checks and
timeout of agent checks, is optional. If not set the value for regular
checks is used.

e.g.
server	web1_1 127.0.0.1:80 check agent-port 10000

If either the health or agent check determines that a server is down
then it is marked as being down, otherwise it is marked as being up.

An agent health check is performed by opening a TCP socket and reading an
ASCII string. The string should have one of the following forms:

* An ASCII representation of a positive integer percentage.
  e.g. "75%"

  Values in this format will set the weight proportional to the initial
  weight of a server as configured when haproxy starts.

* The string "drain".

  This will cause the weight of a server to be set to 0, and thus it
  will not accept any new connections other than those that are
  accepted via persistence.

* The string "down", optionally followed by a description string.

  Mark the server as down and log the description string as the reason.

* The string "stopped", optionally followed by a description string.

  This currently has the same behaviour as "down".

* The string "fail", optionally followed by a description string.

  This currently has the same behaviour as "down".
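
A minimal sketch of how such a reply could be interpreted (a hypothetical
helper, not the actual check parser):

    #include <stdlib.h>
    #include <string.h>

    enum agent_action { AGENT_NONE, AGENT_WEIGHT, AGENT_DRAIN, AGENT_DOWN };

    /* interpret one agent reply; on AGENT_WEIGHT, *weight receives the
     * percentage of the initial weight to apply
     */
    static enum agent_action parse_agent_reply(const char *line, int *weight)
    {
        char *end;
        long pct;

        if (strncmp(line, "drain", 5) == 0)
            return AGENT_DRAIN;
        if (strncmp(line, "down", 4) == 0 || strncmp(line, "stopped", 7) == 0 ||
            strncmp(line, "fail", 4) == 0)
            return AGENT_DOWN;

        pct = strtol(line, &end, 10);
        if (end != line && *end == '%' && pct >= 0) {
            *weight = (int)pct;       /* e.g. "75%" -> 75% of initial weight */
            return AGENT_WEIGHT;
        }
        return AGENT_NONE;
    }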

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:16 +01:00
Willy Tarreau
d32c399747 MINOR: stats: report correct throttling percentage for servers in slowstart
The column used to report the throttle percentage when a server is in
slowstart is based on the time only. This is wrong, because server weights
in slowstart are updated at most once a second, so the reported value is
wrong at least for one second during each step, which means all the time
when using short delays (< 20s).

The second point is that it's disturbing to see a weight < 100% without
any throttle at the end of the period (during the last second), because
the effective weight has not yet been updated.

Instead, we now compute the exact ratio between eweight and uweight and
report it. It's always accurate and describes the value being used instead
of using only the date.

It can be backported to 1.4 though it's not particularly important.
2013-11-21 15:30:45 +01:00
Willy Tarreau
004e045f31 BUG/MAJOR: server: weight calculation fails for map-based algorithms
A crash was reported by Igor at owind when changing a server's weight
on the CLI. Lukas Tribus could reproduce a related bug where setting
a server's weight would result in the new weight being multiplied by
the initial one. The two bugs are the same.

The incorrect weight calculation results in the total farm weight being
larger than what was initially allocated, causing the map index to be out
of bounds on some hashes. It's easy to reproduce using "balance url_param"
with a variable param, or with "balance static-rr".

It appears that the calculation is made at many places and is not always
right and not always wrong the same way. Thus, this patch introduces a
new function "server_recalc_eweight()" which is dedicated to this task
of computing ->eweight from many other elements including uweight and
current time (for slowstart), and all users now switch to use this
function.

The patch is a bit large but the code was not trivially fixable in a way
that could guarantee this situation would not occur anymore. The fix is
much more readable and has been verified to work with all algorithms,
with both consistent and map-based hashes, and even with static-rr.

Slowstart was tested as well, just like enable/disable server.

The same bug is very likely present in 1.4 as well, so the patch will
probably need to be backported even though it will not apply as-is.

Thanks to Lukas and Igor for the information they provided to reproduce it.
2013-11-21 15:09:02 +01:00
Simon Horman
125d099662 MEDIUM: Move health element to struct check
This is in preparation for associating an agent check
with a server which runs as well as the server's existing check.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 09:36:07 +01:00
Simon Horman
cd5d7b678e MEDIUM: Add state to struct check
Add state to struct check. This is currently used to store one bit,
CHK_RUNNING, which is set if a check is running and clear otherwise.
This bit was previously SRV_CHK_RUNNING of the state element of struct
server.

This is in preparation for associating an agent check
with a server which runs as well as the server's existing check.

Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
2013-11-19 09:36:04 +01:00
Simon Horman
4a741432be MEDIUM: Paramatise functions over the check of a server
Parametrise the following functions over the check of a server

* set_server_down
* set_server_up
* srv_getinter
* server_status_printf
* set_server_check_status
* set_server_disabled
* set_server_enabled

Generally the server parameter of these functions has been removed.
Where it is still needed it is obtained using check->server.

This is in preparation for associating an agent check
with a server which runs as well as the server's existing check.
By parametrising these functions they may act on each of the checks
without further significant modification.

Explanation of the SSP_O_HCHK portion of this change:

* Prior to this patch SSP_O_HCHK serves a single purpose which
  is to tell server_status_printf() whether it should print
  the details of the check of a server or not.

  With the parametrisation that this patch adds there are two cases.
  1) Printing the details of the check in which case a
     valid check parameter is needed.
  2) Not printing the details of the check in which case
     the contents of the check parameter are unused.

  In case 1) we could pass SSP_O_HCHK and a valid check and;
  In case 2) we could pass !SSP_O_HCHK and any value for check
  including NULL.

  If NULL is used for case 2) then SSP_O_HCHK becomes superfluous
  and as NULL is used for case 2) SSP_O_HCHK has been removed.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 09:35:54 +01:00
Simon Horman
28b5ffc76f MEDIUM: Move result element to struct check
Move the result element from struct server to struct check.
This allows check results to be independent of the check's server.

This is in preparation for associating an agent check
with a server which runs as well as the server's existing check.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 09:35:52 +01:00
Simon Horman
6618300e13 MEDIUM: Split up struct server's check element
This is in preparation for associating an agent check
with a server which runs as well as the server's existing check.

The split has been made by:
* Moving elements of struct server's check element that will
  be shared by both checks into a new check_common element
  of struct server.
* Moving the remaining elements to a new struct check and
  making struct server's check element a struct check.
* Adding a server element to struct check, a back-pointer
  to the server element it is a member of.
  - At this time the server could be obtained using
    container_of, however, this will not be so easy
    once a second struct check element is added to struct server
    to accommodate an agent health check.
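
An illustrative sketch of the resulting layout (simplified, hypothetical
field names): the back-pointer makes the owning server reachable from either
check without container_of():

    struct server;                        /* forward declaration */

    struct check_common {
        /* elements shared by the health check and the agent check */
        int inter;                        /* check interval */
    };

    struct check {
        struct server *server;            /* back-pointer to the owning server */
        int state;                        /* e.g. CHK_RUNNING */
        int health, rise, fall;
        short status;                     /* last check result */
    };

    struct server {
        struct check_common check_common; /* shared parameters */
        struct check check;               /* main health check */
        struct check agent;               /* auxiliary agent check */
        /* ... */
    };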

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 09:35:48 +01:00
Simon Horman
c69d547638 CLEANUP: Remove unused 'last_slowstart_change' field from struct peer
This was inadvertently added by "MEDIUM: checks: Add agent health check".
It appears to have never been used.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 08:04:59 +01:00
Simon Horman
a360844735 CLEANUP: Make parameters of srv_downtime and srv_getinter const
The parameters of srv_downtime and srv_getinter are not modified
and thus may be const.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 08:04:58 +01:00
Willy Tarreau
a0f4271497 MEDIUM: backend: add support for the wt6 hash
This function was designed for haproxy while testing other functions
in the past. Initially it was not planned to be used given the not
very interesting numbers it showed on real URL data : it is not as
smooth as the other ones. But later tests showed that the other ones
are extremely sensitive to the server count and the type of input data,
especially DJB2 which must not be used on numeric input. So in fact
this function is still a generally average performer and it can make
sense to merge it in the end, as it can provide an alternative to
sdbm+avalanche or djb2+avalanche for consistent hashing or when hashing
on numeric data such as a source IP address or a visitor identifier in
a URL parameter.
2013-11-14 16:37:50 +01:00
Bhaskar Maddala
b6c0ac94a4 MEDIUM: backend: Implement avalanche as a modifier of the hashing functions.
Summary:
Avalanche is supported not as a native hashing choice, but as a modifier
on the hashing function. Note that this means that possible configs
written after 1.5-dev4 using "hash-type avalanche" will get an informative
error instead. But as discussed on the mailing list it seems nobody ever
used it anyway, so let's fix it before the final 1.5 release.

The default values were selected for backward compatibility with previous
releases, as discussed on the mailing list, which means that the consistent
hashing will still apply the avalanche hash by default when no explicit
algorithm is specified.

Examples
  (default) hash-type map-based
	Map based hashing using sdbm without avalanche

  (default) hash-type consistent
	Consistent hashing using sdbm with avalanche

Additional Examples:

  (a) hash-type map-based sdbm
	Same as default for map-based above
  (b) hash-type map-based sdbm avalanche
	Map based hashing using sdbm with avalanche
  (c) hash-type map-based djb2
	Map based hashing using djb2 without avalanche
  (d) hash-type map-based djb2 avalanche
	Map based hashing using djb2 with avalanche
  (e) hash-type consistent sdbm avalanche
	Same as default for consistent above
  (f) hash-type consistent sdbm
	Consistent hashing using sdbm without avalanche
  (g) hash-type consistent djb2
	Consistent hashing using djb2 without avalanche
  (h) hash-type consistent djb2 avalanche
	Consistent hashing using djb2 with avalanche
2013-11-14 16:37:50 +01:00
Bhaskar
98634f0c7b MEDIUM: backend: Enhance hash-type directive with an algorithm options
Summary:
In testing at tumblr, we found that using djb2 hashing instead of the
default sdbm hashing resulted in better workload distribution to our backends.

This commit implements a change that allows the user to specify the hash
function they want to use. It does not limit itself to consistent hashing
scenarios.

The supported hash functions are sdbm (default), and djb2.

For a discussion of the feature and analysis, see mailing list thread
"Consistent hashing alternative to sdbm" :

      http://marc.info/?l=haproxy&m=138213693909219

Note: This change does NOT touch new behaviours such as the avalanche
hash always being applied before consistent hashing.
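
For reference, both are the classic public-domain string hashes; a minimal
sketch of each (the exact HAProxy signatures may differ):

    /* djb2: hash = hash * 33 + c, starting from 5381 */
    static unsigned int hash_djb2(const char *key, int len)
    {
        unsigned int hash = 5381;

        while (len--)
            hash = ((hash << 5) + hash) + (unsigned char)*key++;
        return hash;
    }

    /* sdbm: hash = c + (hash << 6) + (hash << 16) - hash */
    static unsigned int hash_sdbm(const char *key, int len)
    {
        unsigned int hash = 0;

        while (len--)
            hash = (unsigned char)*key++ + (hash << 6) + (hash << 16) - hash;
        return hash;
    }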
2013-11-14 16:37:50 +01:00
Thierry FOURNIER
a054d410db BUILD/MINOR: missing header file
In the header file "types/proto_http.h", the list are used
but the header file "mini-clist.h" is not included.
2013-10-23 15:53:56 +02:00
Thierry FOURNIER
de6617b486 MINOR: http: some exported functions were not in the header file
Export the following functions:
 - find_hdr_value_end
 - http_header_match2
 - http_remove_header2
 - http_header_add_tail2
2013-10-23 12:21:38 +02:00
Thierry FOURNIER
ef37a66628 CLEANUP: The function "regex_exec" needs the string length but in many case they expect null terminated char.
If haproxy is compiled with the USE_PCRE_JIT option, the length of the
string is used. If it is compiled without this option the function doesn't
use the length and expects a null terminated string.

The prototype of the function is ambiguous, and depends on the
compilation option. The developer can think that the length is always
used, and many bugs can be created.

This patch makes sure that the length is used. The regex_exec function
adds the final '\0' if it is needed.
2013-10-23 12:19:51 +02:00
Thierry FOURNIER
ed5a4aefae CLEANUP: regex: Create regex_comp function that compiles regex using compilation options
The current file "regex.h" define an abstraction for the regex. It
provides the same struct name and the same "regexec" function for the
3 regex types supported: standard libc, basic pcre and jit pcre.

The regex compilation function is not provided by this file. If the
developer wants to use a regex, he must write regex compilation code
containing "#define *JIT*".

This patch provides a unique regex compilation function according to
the compilation options.

In addition, the "regex.h" file checks the presence of the "#define
PCRE_CONFIG_JIT" when "USE_PCRE_JIT" is enabled. If this flag is not
present, the pcre lib doesn't support JIT and "#error" is emitted.
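
A sketch of such a unified compilation wrapper (simplified and hypothetical;
the struct and option handling in the real code differ), selecting the
backend at build time:

    #ifdef USE_PCRE_JIT
    #include <pcre.h>
    struct my_regex { pcre *reg; pcre_extra *extra; };
    #else
    #include <regex.h>
    struct my_regex { regex_t regex; };
    #endif

    /* compile <str> into <preg>; returns 1 on success, 0 on failure */
    static int regex_comp(struct my_regex *preg, const char *str, int cs)
    {
    #ifdef USE_PCRE_JIT
        const char *error;
        int erroffset;

        preg->reg = pcre_compile(str, cs ? 0 : PCRE_CASELESS,
                                 &error, &erroffset, NULL);
        if (!preg->reg)
            return 0;
        /* with a JIT-enabled libpcre this compiles the pattern to native code */
        preg->extra = pcre_study(preg->reg, PCRE_STUDY_JIT_COMPILE, &error);
        return 1;
    #else
        int flags = REG_EXTENDED | (cs ? 0 : REG_ICASE);

        return regcomp(&preg->regex, str, flags) == 0;
    #endif
    }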
2013-10-14 14:42:50 +02:00
Thierry FOURNIER
e28f1ecf2b BUILD/MINOR: missing header file
In the header file "common/regex.h", the C keyword NULL is used. This
keyword is referenced into the header file "stdlib.h", but this is not
included.
2013-10-10 11:38:35 +02:00
Godbach
2b8fd54287 DOC: fix typo in comments
Hi Willy,

There is a patch to fix typo in comments, please check the attachment
for you information.

The commit log is as below:

commit 9824d1b3740ac2746894f1aa611c795366c84210
Author: Godbach <nylzhaowei@gmail.com>
Date:   Mon Sep 30 11:05:42 2013 +0800

    DOC: fix typo in comments

      0x20000000 -> 0x40000000
      vuf -> buf
      ethod -> Method

    Signed-off-by: Godbach <nylzhaowei@gmail.com>

--
Best Regards,
Godbach

From 9824d1b3740ac2746894f1aa611c795366c84210 Mon Sep 17 00:00:00 2001
From: Godbach <nylzhaowei@gmail.com>
Date: Mon, 30 Sep 2013 11:05:42 +0800
Subject: [PATCH] DOC: fix typo in comments

  0x20000000 -> 0x40000000
  vuf -> buf
  ethod -> Method

Signed-off-by: Godbach <nylzhaowei@gmail.com>
2013-10-01 09:49:21 +02:00
Willy Tarreau
cc1e04b1e8 MINOR: tcp: add new "close" action for tcp-response
This new action immediately closes the connection with the server
when the condition is met. The first such rule executed ends the
rules evaluation. The main purpose of this action is to force a
connection to be finished between a client and a server after an
exchange when the application protocol expects some long time outs
to elapse first. The goal is to eliminate idle connections which
take significant resources on servers with certain protocols.
2013-09-11 23:28:51 +02:00
Willy Tarreau
3a925c155d MEDIUM: stick-tables: flush old entries upon soft-stop
When a process with large stick tables is replaced by a new one and remains
present until the last connection finishes, it keeps these data in memory
for nothing since they will never be used anymore by incoming connections,
except during syncing with the new process. This is especially problematic
when dealing with long session protocols such as WebSocket as it becomes
possible to stack many processes and eat a lot of memory.

So the idea here is to know if a table still needs to be synced or not,
and to purge all unused entries once the sync is complete. This means that
after a few hundred milliseconds when everything has been synchronized with
the new process, only a few entries will remain allocated (only the ones
held by sessions during the restart) and all the remaining memory will be
freed.

Note that we carefully do that only after the grace period is expired so as
not to impact a possible proxy that needs to accept a few more connections
before leaving.

Doing this required adding a sync counter to the stick tables, to know how
many peer sync sessions are still in progress in order not to flush the entries
until all synchronizations are completed.
2013-09-04 17:54:01 +02:00
Evan Broder
be55431f9f MINOR: ssl: Add statement 'verifyhost' to "server" statements
verifyhost allows you to specify a hostname that the remote server's
SSL certificate must match. Connections that don't match will be
closed with an SSL error.
2013-09-01 07:55:49 +02:00
Willy Tarreau
9f09521f2d BUG/MEDIUM: unique_id: HTTP request counter must be unique!
The HTTP request counter is incremented non atomically, which means that
many requests can log the same ID. Let's increment it when it is consumed
so that we avoid this case.

This bug was reported by Patrick Hemmer. It's 1.5-specific and does not
need to be backported.
2013-08-13 17:52:20 +02:00
Willy Tarreau
47060b6ae0 MINOR: cli: make it possible to enter multiple values at once with "set table"
The "set table" statement allows to create new entries with their respective
values. Till now it was limited to a single data type per line, requiring as
many "set table" statements as the desired data types to be set. Since this
is only a parser limitation, this patch gets rid of it. It also allows the
creation of a key with no data types (all reset to their default values).
2013-08-01 21:17:19 +02:00
Willy Tarreau
b4c8493a9f MINOR: session: make the number of stick counter entries more configurable
In preparation of more flexibility in the stick counters, make their
number configurable. It still defaults to 3 which is the minimum
accepted value. Changing the value alone is not sufficient to get
more counters, some bitfields still need to be updated and the TCP
actions need to be updated as well, but this change tries to make that
easier, which is nice for experimentation purposes.
2013-08-01 21:17:14 +02:00
Willy Tarreau
cadd8c9ec3 MINOR: payload: split smp_fetch_rdp_cookie()
This function is also called directly from backend.c, so let's stop
building fake args to call it as a sample fetch, and have a lower
layer more generic function instead.
2013-08-01 21:17:13 +02:00
Willy Tarreau
ef38c39287 MEDIUM: sample: systematically pass the keyword pointer to the keyword
We're having a lot of duplicate code just because of minor variants between
fetch functions that could be dealt with if the functions had the pointer to
the original keyword, so let's pass it as the last argument. An earlier
version used to pass a pointer to the sample_fetch element, but this is not
the best solution for two reasons :
  - fetch functions will solely rely on the keyword string
  - some other smp_fetch_* users do not have the pointer to the original
    keyword and were forced to pass NULL.

So finally we're passing a pointer to the keyword as a const char *, which
perfectly fits the original purpose.
2013-08-01 21:17:13 +02:00
Godbach
a34bdc0ea4 BUG/MEDIUM: server: set the macro for server's max weight SRV_UWGHT_MAX to SRV_UWGHT_RANGE
The max weight of server is 256 now, but SRV_UWGHT_MAX is still 255. As a result,
FWRR will not work well when server's weight is 256. The description is as below:

There are some macros related to server's weight in include/types/server.h:
    #define SRV_UWGHT_RANGE 256
    #define SRV_UWGHT_MAX   (SRV_UWGHT_RANGE - 1)
    #define SRV_EWGHT_MAX   (SRV_UWGHT_MAX   * BE_WEIGHT_SCALE)

Since the weight of a server can reach 256 and BE_WEIGHT_SCALE equals 16,
the max eweight of a server should be 256*16 = 4096, which exceeds SRV_EWGHT_MAX
(SRV_UWGHT_MAX*BE_WEIGHT_SCALE = 255*16 = 4080). When a server
with weight 256 is inserted into the FWRR tree during initialization, the key value
of this server should be SRV_EWGHT_MAX - s->eweight = 4080 - 4096 = -16, which
is close to UINT_MAX as an unsigned value, so the server with the highest weight
will not be elected as the first server to process a request.

In addition, it is a better choice to compare with SRV_UWGHT_MAX than a magic
number 256 while checking the weight. The max number of servers for
round-robin algorithm is also updated.

Signed-off-by: Godbach <nylzhaowei@gmail.com>
2013-07-22 09:29:34 +02:00
Lukas Tribus
67db8df12b MEDIUM: http: add IPv6 support for "set-tos"
As per RFC3260 #4 and BCP37 #4.2 and #5.2, the IPv6 counterpart of TOS
is "traffic class".

Add support for IPv6 traffic class in "set-tos" by moving the "set-tos"
related code to the new inline function inet_set_tos(), handling IPv4
(IP_TOS), IPv6 (IPV6_TCLASS) and IPv4-mapped sockets (IP_TOS, like
::ffff:127.0.0.1).

Also define - if missing - the IN6_IS_ADDR_V4MAPPED() macro in
include/common/compat.h for compatibility.
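
A sketch of such a helper under these assumptions (simplified; the real
inline takes the client address stored in the session):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>

    /* set the TOS / traffic class on socket <fd> whose peer address is <from> */
    static void set_conn_tos(int fd, const struct sockaddr_storage *from, int tos)
    {
        if (from->ss_family == AF_INET)
            setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
        else if (from->ss_family == AF_INET6) {
            const struct sockaddr_in6 *in6 = (const struct sockaddr_in6 *)from;

            if (IN6_IS_ADDR_V4MAPPED(&in6->sin6_addr))
                /* v4-mapped peers (::ffff:a.b.c.d) still use the IPv4 option */
                setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
            else
                setsockopt(fd, IPPROTO_IPV6, IPV6_TCLASS, &tos, sizeof(tos));
        }
    }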
2013-06-23 18:01:38 +02:00
Willy Tarreau
dc13c11c1e BUG/MEDIUM: prevent gcc from moving empty keywords lists into BSS
Benoit Dolez reported a failure to start haproxy 1.5-dev19. The
process would immediately report an internal error with missing
fetches from some crap instead of ACL names.

The cause is that some versions of gcc seem to trim static structs
containing a variable array when moving them to BSS, and only keep
the fixed size, which is just a list head for all ACL and sample
fetch keywords. This was confirmed at least with gcc 3.4.6. And we
can't move these structs to const because they contain a list element
which is needed to link all of them together during the parsing.

The bug indeed appeared with 1.5-dev19 because it's the first one
to have some empty ACL keyword lists.

One solution is to impose -fno-zero-initialized-in-bss to everyone
but this is not really nice. Another solution consists in ensuring
the struct is never empty so that it does not move there. The easy
solution consists in having a non-null list head since it's not yet
initialized.

A new "ILH" list head type was thus created for this purpose : create
an Initialized List Head so that gcc cannot move the struct to BSS.
This fixes the issue for this version of gcc and does not create any
burden for the declarations.
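
A generic C illustration of the underlying issue (not HAProxy's actual
macro): an all-zero static initializer lets the object go to BSS, while any
non-zero initializer forces it into the initialized data section, so its
full size is always emitted:

    #include <stddef.h>

    struct list { struct list *n, *p; };

    struct kw_list {
        struct list list;   /* links all keyword lists together at parse time */
        int nb_kw;
        /* struct kw kw[VAR_ARRAY];  -- the trailing part some gcc versions trim */
    };

    /* all-zero initializer: eligible for BSS */
    struct kw_list empty_kws = { { NULL, NULL }, 0 };

    /* initialized (non-zero) list head: the object is placed in .data instead */
    struct kw_list linked_kws = { { &linked_kws.list, &linked_kws.list }, 0 };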
2013-06-21 23:29:02 +02:00
Godbach
430f291a99 CLEANUP: session: remove event_accept() which was not used anymore
Remove event_accept() from include/proto/proto_http.h and use the correct function
name in the other two files instead of event_accept().

Signed-off-by: Godbach <nylzhaowei@gmail.com>
2013-06-20 08:07:35 +02:00
Willy Tarreau
be4a3eff34 MEDIUM: counters: use sc0/sc1/sc2 instead of sc1/sc2/sc3
It was a bit inconsistent to have gpc start at 0 and sc start at 1,
so make sc start at zero like gpc. No previous release was issued
with sc3 anyway, so no existing setup should be affected.
2013-06-17 15:04:07 +02:00
Thierry FOURNIER
7eeb435494 CLEANUP: protect checks.h from multiple inclusions
The types/checks.h include file isn't protected against multiple inclusions,
so let's surround it with "#ifndef _TYPES_CHECKS_H/#endif" to fix this.
2013-06-14 19:53:02 +02:00
Thierry FOURNIER
d3879e8b57 CLEANUP: fix missing include <string.h> in proto/listener.h
The file proto/listener.h makes use of strdup() but doesn't include
<string.h> so it's sensible to include file ordering.
2013-06-14 19:52:17 +02:00
Willy Tarreau
4f0d919bd4 MEDIUM: tcp: add "tcp-request connection expect-proxy layer4"
This configures the client-facing connection to receive a PROXY protocol
header before any byte is read from the socket. This is equivalent to
having the "accept-proxy" keyword on the "bind" line, except that using
the TCP rule allows the PROXY protocol to be accepted only for certain
IP address ranges using an ACL. This is convenient when multiple layers
of load balancers are passed through by traffic coming from public
hosts.
2013-06-11 20:40:55 +02:00
Willy Tarreau
51347ed94c MEDIUM: http: add the "set-mark" action on http-request/http-response rules
"set-mark" is used to set the Netfilter MARK on all packets sent to the
client to the value passed in <mark> on platforms which support it. This
value is an unsigned 32 bit value which can be matched by netfilter and
by the routing table. It can be expressed both in decimal or hexadecimal
format (prefixed by "0x"). This can be useful to force certain packets to
take a different route (for example a cheaper network path for bulk
downloads). This works on Linux kernels 2.6.32 and above and requires
admin privileges.
2013-06-11 19:34:13 +02:00
Willy Tarreau
42cf39e3b9 MEDIUM: http: add support for "set-tos" in http-request/http-response
This manipulates the TOS field of the IP header of outgoing packets sent
to the client. This can be used to set a specific DSCP traffic class based
on some request or response information. See RFC2474, 2597, 3260 and 4594
for more information.
2013-06-11 19:04:37 +02:00
Willy Tarreau
9a355ec257 MEDIUM: http: add support for action "set-log-level" in http-request/http-response
Some users want to disable logging for certain non-important requests such as
stats requests or health-checks coming from another equipment. Other users want
to log with a higher importance (eg: notice) some special traffic (POST requests,
authenticated requests, requests coming from suspicious IPs) or some abnormally
large responses.

This patch responds to all these needs at once by adding a "set-log-level" action
to http-request/http-response. The 8 syslog levels are supported, as well as "silent"
to disable logging.
2013-06-11 17:50:26 +02:00
Willy Tarreau
abcd5145f8 MEDIUM: log: add a log level override value in struct session
This log level will be used in a further patch to change the log level
depending on the request or response.
2013-06-11 17:50:26 +02:00
Willy Tarreau
f4c43c13be MEDIUM: http: add the "set-nice" action to http-request and http-response
This new action changes the nice factor of the task processing the current
request.
2013-06-11 17:50:26 +02:00
Willy Tarreau
e365c0b92b MEDIUM: http: add a new "http-response" ruleset
Some actions were clearly missing to process response headers. This
patch adds a new "http-response" ruleset which provides the following
actions :
  - allow : stop evaluating http-response rules
  - deny : stop and reject the response with a 502
  - add-header : add a header in log-format mode
  - set-header : set a header in log-format mode
2013-06-11 16:06:12 +02:00
Willy Tarreau
2b57cb8f30 MEDIUM: protocol: implement a "drain" function in protocol layers
Since commit cfd97c6f was merged into 1.5-dev14 (BUG/MEDIUM: checks:
prevent TIME_WAITs from appearing also on timeouts), some valid health
checks sometimes used to show some TCP resets. For example, this HTTP
health check sent to a local server :

  19:55:15.742818 IP 127.0.0.1.16568 > 127.0.0.1.8000: S 3355859679:3355859679(0) win 32792 <mss 16396,nop,nop,sackOK,nop,wscale 7>
  19:55:15.742841 IP 127.0.0.1.8000 > 127.0.0.1.16568: S 1060952566:1060952566(0) ack 3355859680 win 32792 <mss 16396,nop,nop,sackOK,nop,wscale 7>
  19:55:15.742863 IP 127.0.0.1.16568 > 127.0.0.1.8000: . ack 1 win 257
  19:55:15.745402 IP 127.0.0.1.16568 > 127.0.0.1.8000: P 1:23(22) ack 1 win 257
  19:55:15.745488 IP 127.0.0.1.8000 > 127.0.0.1.16568: FP 1:146(145) ack 23 win 257
  19:55:15.747109 IP 127.0.0.1.16568 > 127.0.0.1.8000: R 23:23(0) ack 147 win 257

After some discussion with Chris Huang-Leaver, it appeared clear that
what we want is to only send the RST when we have no other choice, which
means when the server has not closed. So we still keep SYN/SYN-ACK/RST
for pure TCP checks, but don't want to see an RST emitted as above when
the server has already sent the FIN.

The solution against this consists in implementing a "drain" function at
the protocol layer, which, when defined, causes as much as possible of
the input socket buffer to be flushed to make recv() return zero so that
we know that the server's FIN was received and ACKed. On Linux, we can make
use of MSG_TRUNC on TCP sockets, which has the benefit of draining everything
at once without even copying data. On other platforms, we read up to one
buffer of data before the close. If recv() manages to get the final zero,
we don't disable lingering. Same for hard errors. Otherwise we do.

In practice, on HTTP health checks we generally find that the close was
pending and is returned upon first recv() call. The network trace becomes
cleaner :

  19:55:23.650621 IP 127.0.0.1.16561 > 127.0.0.1.8000: S 3982804816:3982804816(0) win 32792 <mss 16396,nop,nop,sackOK,nop,wscale 7>
  19:55:23.650644 IP 127.0.0.1.8000 > 127.0.0.1.16561: S 4082139313:4082139313(0) ack 3982804817 win 32792 <mss 16396,nop,nop,sackOK,nop,wscale 7>
  19:55:23.650666 IP 127.0.0.1.16561 > 127.0.0.1.8000: . ack 1 win 257
  19:55:23.651615 IP 127.0.0.1.16561 > 127.0.0.1.8000: P 1:23(22) ack 1 win 257
  19:55:23.651696 IP 127.0.0.1.8000 > 127.0.0.1.16561: FP 1:146(145) ack 23 win 257
  19:55:23.652628 IP 127.0.0.1.16561 > 127.0.0.1.8000: F 23:23(0) ack 147 win 257
  19:55:23.652655 IP 127.0.0.1.8000 > 127.0.0.1.16561: . ack 24 win 257

This change should be backported to 1.4 which is where Chris encountered
this issue. The code is different, so probably the tcp_drain() function
will have to be put in the checks only.
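
A sketch of the Linux fast path described above (hypothetical helper; the
real code also provides a portable fallback that reads into a buffer):

    #include <sys/socket.h>
    #include <errno.h>
    #include <limits.h>

    /* try to drain pending input on <fd>; returns 1 if the server's FIN was
     * seen (recv() returned 0), 0 if more data may remain, -1 on a hard error
     */
    static int tcp_drain(int fd)
    {
        ssize_t ret;

        /* on Linux, MSG_TRUNC with a NULL buffer discards the pending bytes
         * without copying them
         */
        ret = recv(fd, NULL, INT_MAX, MSG_TRUNC | MSG_DONTWAIT | MSG_NOSIGNAL);
        if (ret == 0)
            return 1;
        if (ret < 0)
            return (errno == EAGAIN || errno == EWOULDBLOCK) ? 0 : -1;
        return 0;                     /* drained some bytes, more may follow */
    }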
2013-06-10 20:33:23 +02:00
Willy Tarreau
04ff9f105f MINOR: http: add full-length header fetch methods
The req.hdr and res.hdr fetch methods do not work well on headers which
are allowed to contain commas, such as User-Agent, Date or Expires.
More specifically, full-length matching is impossible if a comma is
present.

This patch introduces 4 new fetch functions which are designed to work
with these full-length headers :
  - req.fhdr, req.fhdr_cnt
  - res.fhdr, res.fhdr_cnt

These ones do not stop at commas and permit returning full-length header
values.
2013-06-10 18:39:42 +02:00
Willy Tarreau
570f221cbb MINOR: log: add a new flag 'L' for locally processed requests
People who use "option dontlog-normal" are bothered with redirects and
stats being logged and reported as errors in the logs ("PR" = proxy
blocked the request).

This patch introduces a new flag 'L' for when a request is locally
processed, that is not considered as an error by the log filters. That
way we know a request was intercepted and processed by haproxy without
logging the line when "option dontlog-normal" is in effect.
2013-06-10 16:42:09 +02:00
Willy Tarreau
bf43f6eb32 MINOR: defaults: allow REQURI_LEN and CAPTURE_LEN to be redefined
Some users want to change these default settings but it was not
convenient.
2013-06-10 10:30:09 +02:00
Willy Tarreau
379357af58 BUG/MAJOR: http: always ensure response buffer has some room for a response
Since 1.5-dev12 and commit 3bf1b2b8 (MAJOR: channel: stop relying on
BF_FULL to take action), the HTTP parser switched to channel_full()
instead of BF_FULL to decide whether a buffer had enough room to start
parsing a request or response. The problem is that channel_full()
intentionally ignores outgoing data, so a corner case exists where a
large response might still be left in a response buffer with just a
few bytes left (much less than the reserve), enough to accept a second
response past the last data, but not enough to permit the HTTP processor
to add some headers. Since all the processing relies on this space being
available, we can get some random crashes when clients pipeline requests.

The analysis of a core from haproxy configured with 20480 bytes buffers
shows this : with enough "luck", when sending back the response for the
first request, the client is slow, the TCP window is congested, the socket
buffers are full, and haproxy's buffer fills up. We still have 20230 bytes
of response data in a 20480 response buffer. The second request is sent to
the server which returns 214 bytes which fit in the small 250 bytes left
in this buffer. And the buffer arrangement makes it possible to escape all
the controls in http_wait_for_response() :

    |<------ response buffer = 20480 bytes ------>|
    [ 2/2  | 3 | 4 |          1/2                 ]
           ^ start of circular buffer

      1/2 = beginning of previous response (18240)
      2/2 = end of previous response       (1990)
        3 = current response               (214)
        4 = free space                     (36)

  - channel_full() returns false (20230 bytes are going to leave)
  - the response headers does not wrap at the end of the buffer
  - the remaining linear room after the headers is larger than the
    reserve, because it's the previous response which wraps :
  => response is processed

Header rewriting causes it to reach 260 bytes, 10 bytes larger than what
the buffer could hold. So all computations during header addition are
wrong and lead to the corruption we've observed.

All the conditions are very hard to meet (which explains why it took
almost one year for this bug to show up) and are almost impossible to
reproduce on purpose on a test platform. But the bug is clearly there.

This issue was reported by Dinko Korunic who kindly devoted a lot of
time to provide countless traces and cores, and to experiment with
troubleshooting patches to knock the bug down. Thanks Dinko!

No backport is needed, but all 1.5-dev versions between dev12 and dev18
included must be upgraded. A workaround consists in setting option
forceclose to prevent pipelined requests from being processed.
2013-06-08 13:14:17 +02:00
Willy Tarreau
ba2ffd18b5 MEDIUM: counters: add a new "gpc0_rate" counter in stick-tables
This counter is special in that instead of reporting the gpc0 cumulative
count, it returns its increase rate over the configured period.
2013-05-29 15:54:14 +02:00