This function no longer makes sense and will cause trouble when converting
the remains of connection/applet to end points. Let's replace it now
with its contents.
The long-term goal is to have a context for applets as an alternative
to the connection and not as a complement. At the moment, the context
is still stored in the stream interface, and we only put a pointer
to the applet's context in si->end, initialize the context with object
type OBJ_TYPE_APPCTX, and this allows us not to allocate an entry when
deciding to switch to an applet.
Special care is taken to never dereference si->conn anymore when
dealing with an applet. That's why it's important that si->end is
always set to the proper type :
si->end == NULL => not connected to anything
*si->end == OBJ_TYPE_APPCTX => connected to an applet
*si->end == OBJ_TYPE_CONN => real connection (server, proxy, ...)
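For illustration, dispatching on the end point may look like the sketch
below (the obj_type member name is an assumption here):

    if (!si->end) {
            /* not connected to anything */
    }
    else if (*si->end == OBJ_TYPE_APPCTX) {
            /* connected to an applet */
            struct appctx *appctx = container_of(si->end, struct appctx, obj_type);
    }
    else if (*si->end == OBJ_TYPE_CONN) {
            /* real connection (server, proxy, ...) */
            struct connection *conn = container_of(si->end, struct connection, obj_type);
    }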
The session management code used to check the applet from the connection's
target. Now it uses the stream interface's end point and does not touch the
connection at all. Similarly, we stop checking the connection's addresses
and file descriptors when reporting the applet's status in the stats dump.
Since the last commit, we now have a pointer to the applet in the
applet context. So we don't need the si->release function pointer
anymore; it can be extracted from applet->applet.release. In many
places, the ->release function was still tested for real connections
even though it only applies to applets, so most of these tests were
simply removed. For the remaining valid uses, a new inline function
si_applet_release() was added to simplify the check and the call.
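A minimal sketch of what this helper might look like, assuming an
objt_appctx() accessor and an ->applet pointer in the context (exact
names may differ):

    static inline void si_applet_release(struct stream_interface *si)
    {
            struct appctx *appctx = objt_appctx(si->end);

            if (appctx && appctx->applet->release)
                    appctx->applet->release(si);
    }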
In preparation for a later move of all the applet context outside of the
stream interface, we'll need to have access to the applet itself from the
context. Let's have a pointer to it inside the context.
si_prepare_embedded() was used both to attach an applet and to detach
anything from a stream interface. Split it into si_prepare_none() to
detach and si_prepare_applet() to attach an applet.
si->conn->target is now assigned from within these two functions instead
of their respective callers.
Since this is the applet context, call it ->appctx to avoid the confusion
with the pointer to the applet. Many places were changed but it's only a
renaming.
The object type was added to "struct appctx". The purpose will be
to identify an appctx when the applet context is detached from the
stream interface. For now, it's still attached, so this patch only
adds the new type and does not replace its use.
In preparation of making the applet context dynamically allocatable,
we create a "struct appctx". Nothing else was changed, it's the same
struct as the one which was in the stream interface.
A long time ago when peers were introduced, there was no applet nor
applet context. Applet contexts were introduced, but the peers still
did not make use of them, and the "ptr" pointer remains present in
every stream interface in addition to the other contexts.
Simply move this pointer to its own location in the context.
Note that this pointer is still a void* because its type and contents
vary depending on the peers session state. This could probably
be cleaned up in the future given that all other contexts already store
much more than a single pointer.
Most of the time, the caller of objt_<type>(ptr) will know that <ptr>
is valid and of the correct type (eg: in an "if" condition). Let's provide
an unsafe variant that does not perform the check again for these usages.
The new functions are called "__objt_<type>".
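As an illustration, the two variants for connections could look like
this sketch (the obj_type member name is an assumption):

    static inline struct connection *objt_conn(enum obj_type *t)
    {
            if (!t || *t != OBJ_TYPE_CONN)
                    return NULL;
            return container_of(t, struct connection, obj_type);
    }

    /* unsafe variant: the caller already checked the type */
    static inline struct connection *__objt_conn(enum obj_type *t)
    {
            return container_of(t, struct connection, obj_type);
    }

    /* typical usage: the type was just checked in the "if" condition */
    if (si->end && *si->end == OBJ_TYPE_CONN)
            conn = __objt_conn(si->end);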
This is to be more consistent with the other functions. The only
reason why these functions used to return a value was to let the
caller adjust polling by itself, but now their only callers are
the si_shutr()/si_shutw() inline functions. Now these functions
do not depend on the connection anymore.
The connection variants of these functions now call
conn_data_stop_recv()/conn_data_stop_send() before returning, in order
not to require a return code anymore. The applet version does not
need this at all.
These functions induce a lot of ifs everywhere because they consider two
different cases, one which is where the connection exists and has a file
descriptor, and the other one which is the default case where at most an
applet has to be notified.
Let's have them in si_ops and automatically decide which one to use.
The connection shutdown sequence has been slightly simplified, and we
now clear the flags at the end.
Also we remove SHUTR_NOW after a shutw with nolinger, as it's cleaner
not to keep it.
There is a big problem with the way POST is handled for the admin
stats page. The POST parameters are extracted from some http-request
rules, and if not found they return zero, hoping to be called again
when more data passes. This results in the HTTP analyser being called
several times and all the rules prior to the stats being executed
multiple times as well. That includes rewrite rules.
So instead of doing this, we now move all the processing of the stats
into the stats applet.
That way we just set the stats applet in the HTTP analyser when a stats
request is detected, and the applet takes the time it needs to read the
arguments and respond. We could even imagine improving the applet to
support requests larger than a single buffer.
The code was almost only moved and minimally changed. Several new HTTP
states were added to the stats applet to emit headers, redirects and
to read POST. It was necessary to do this because the headers sent
depend on the parsing of the POST request. In the end it's beneficial
because we removed two stream_int_retnclose() calls.
In preparation for moving the POST processing to the applet, we first
add new states to the HTTP I/O handler. Till now st0 was only 0/1 for
start/end. We now replace it with an enum.
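The state names below are hypothetical, just to illustrate the idea of
the enum replacing the old 0/1 values (the actual names may differ):

    enum {
            STAT_HTTP_INIT = 0,   /* initial state (the old "0") */
            STAT_HTTP_HEAD,       /* send response headers */
            STAT_HTTP_DUMP,       /* dump the stats */
            STAT_HTTP_POST,       /* read and parse the POST request */
            STAT_HTTP_DONE,       /* final state (the old "1") */
    };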
We make the peers code use applet->ptr instead of conn->xprt_ctx to
store the pointer to the current peer. That way it does not depend
on a connection anymore.
We're trying to move the applets out of the struct connection. So
let's remove the dependence on xprt_st and introduce si->applet.st2
to store the missing contextual data instead.
The store[] array in the session holds a flag which probably aimed to
differentiate store entries learned from the request from those learned
from the response, allowing responses to overwrite only the request
ones (eg: have a server set a response cookie which overwrites the request
one).
But this flag is set when response data is stored, and is never cleared.
So in practice, haproxy always runs with this flag set, meaning that
responses prevent themselves from overriding the request data.
It is desirable anyway to keep the ability not to override data, because
the override is performed only based on the table and not on the key, so
that would mean that it would be impossible to retrieve two different
keys to store into a same table. For example, if a client sets a cookie
and a server another one, both need to be updated in the table in the
proper order. This is especially true when multiple keys may be tracked
on each side into the same table (eg: list of IP addresses in a header).
So the correct fix, which also maintains the current behaviour, consists in
simply removing this flag and never trying to optimize for the overwrite case.
This fix also has the benefit of significantly reducing the session size,
by 64 bytes due to alignment issues caused by this flag!
The bug has been there forever (since 1.4-dev7), so a backport to 1.4
would be appropriate.
In recent commit 5ecb77f (MEDIUM: checks: add send/expect tcp based check),
bitfields were mistakenly used at some places for the actions. Fortunately,
the only two actions right now are 1 and 2 so they don't share any bit in
common and the bug has no impact.
No backport is needed.
This is a generic health check which can be used to match a
banner or send a request and analyse a server response.
It works in a send/expect way and many exchanges can be done between
HAProxy and a server to decide the server status, making HAProxy able to
speak the server's protocol.
It can send arbitrary regular or binary strings and match content as a
regular or binary string or a regex.
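As an example of what this enables, a redis-style exchange could be
expressed like this (shown as an illustration):

    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send QUIT\r\n
    tcp-check expect string +OK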
Signed-off-by: Baptiste Assmann <bedis9@gmail.com>
We currently use such a hex parser in pat_parse_bin() to parse hex
string patterns. We'll need another generic one so let's move it to
standard.c and have pat_parse_bin() make use of it.
This patch permits the use of the same struct pattern for two identical maps.
This preserves memory, and permits updating only one
"struct pattern" when dynamic map updates are supported.
We handle "http-request redirect" with a log-format string now, but we
leave "redirect" unaffected.
Note that the control of the special "/" case is moved from the runtime
execution to the configuration parsing. If the format rule list is
empty, the build_logline() function does nothing.
Add a new converter with the following prototype :
map(<map_file>[,<default_value>])
map_<match_type>(<map_file>[,<default_value>])
map_<match_type>_<output_type>(<map_file>[,<default_value>])
It searches for the input value in <map_file> using the <match_type>
matching method, and returns the associated value converted to the type
<output_type>. If the input value cannot be found in the <map_file>,
the converter returns the <default_value>. If the <default_value> is
not set, the converter fails and acts as if no input value could be
fetched. If the <match_type> is not set, it defaults to "str".
Likewise, if the <output_type> is not set, it defaults to "str". For
convenience, the "map" keyword is an alias for "map_str" and maps a
string to another string. The following array contains the
list of all the map* converters.
+----+----------+---------+-------------+------------+
| `-_ out | | | |
| input `-_ | str | int | ip |
| / match `-_ | | | |
+---------------+---------+-------------+------------+
| str / str | map_str | map_str_int | map_str_ip |
| str / sub | map_sub | map_sub_int | map_sub_ip |
| str / dir | map_dir | map_dir_int | map_dir_ip |
| str / dom | map_dom | map_dom_int | map_dom_ip |
| str / end | map_end | map_end_int | map_end_ip |
| str / reg | map_reg | map_reg_int | map_reg_ip |
| int / int | map_int | map_int_int | map_int_ip |
| ip / ip | map_ip | map_ip_int | map_ip_ip |
+---------------+---------+-------------+------------+
The names are intentionally chosen to reflect the same match methods
as ACLs use.
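For example, assuming "http-request set-header" is available, a map
file of networks to country codes could be used like this (file path
and header name are illustrative):

    http-request set-header X-Country %[src,map_ip(/etc/haproxy/geoip.map,unknown)]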
This patch allows each sample cast function to specify the sample
output type. The goal is to be able to emit an output type IPv4 or
IPv6 depending on what is found in the input if the next converter
is able to process them both.
The patch also adds a new pseudo type called "ADDR". This type is an
alias for IPV4 and IPV6 which is only used as an input type by converters
that want to express their compatibility with both address formats. It may
not be emitted.
The goal is to unify as much as possible the processing of IPv4 and IPv6
in order not to add extra keywords for the maps which act as converters,
but will match samples like ACLs do with their patterns.
We now have the following enums and all related functions return them and
consume them :
enum pat_match_res {
        PAT_NOMATCH = 0,         /* sample didn't match any pattern */
        PAT_MATCH = 3,           /* sample matched at least one pattern */
};

enum acl_test_res {
        ACL_TEST_FAIL = 0,       /* test failed */
        ACL_TEST_MISS = 1,       /* test may pass with more info */
        ACL_TEST_PASS = 3,       /* test passed */
};

enum acl_cond_pol {
        ACL_COND_NONE,           /* no polarity set yet */
        ACL_COND_IF,             /* positive condition (after 'if') */
        ACL_COND_UNLESS,         /* negative condition (after 'unless') */
};
It's just in order to avoid doubts when reading some code.
This patch just renames functions, types and enums. No code was changed.
A significant number of files were touched, especially the ACL arrays,
so it is likely that some external patches will not apply anymore.
One important thing is that we had to split ACL_PAT_* into two groups :
- ACL_TEST_{PASS|MISS|FAIL}
- PAT_{MATCH|UNMATCH}
A future patch will enforce enums on all these places to avoid confusion.
This patch just moves code without any change.
The ACL are just the association between sample and pattern. The pattern
contains the match method and the parse method. These two things are
different. This patch cleans the code by splitting it.
This will be used later with maps. Each map will associate an entry with
a sample_storage value.
This patch changes the "parse" prototype and all the parsing methods.
The goal is to associate "struct sample_storage" to each entry of
"struct acl_pattern". Only the "parse" function can add the sample value
into the "struct acl_pattern".
This struct is used to store a sample constant. It is smaller than
struct sample: it only contains a constant and needs neither the
"ctx" nor the "flags".
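A sketch of what such a struct may contain (field names are
assumptions, following the sample types used elsewhere):

    struct sample_storage {
            int type;                        /* SMP_T_* type of the constant */
            union {
                    long long int   sint;    /* signed integer */
                    unsigned int    uint;    /* unsigned integer */
                    struct in_addr  ipv4;    /* IPv4 address */
                    struct in6_addr ipv6;    /* IPv6 address */
                    struct chunk    str;     /* string */
            } data;
    };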
The map feature will need to match acl patterns. This patch extracts
the matching function from the global ACL function "acl_exec_cond".
The code was only moved to its own function, no functional changes were made.
With this split, the pattern indexation can apply to any source. The map
feature needs this functionality because the map cannot be loaded with the
same file format as the ones supported by acl_read_patterns_from_file().
The code was only moved to its own function, no functional changes were made.
The inet_pton function needs an input string with a final \0. This
function copies the input string to a temporary buffer, adds the final
\0 and converts it to an address.
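A minimal sketch of such a helper (the name and exact signature are
assumptions; needs <arpa/inet.h> and <string.h>):

    static int buf_to_ip(const char *buf, size_t len, struct in_addr *dst)
    {
            char tmp[INET_ADDRSTRLEN];

            if (len >= sizeof(tmp))
                    return 0;
            memcpy(tmp, buf, len);
            tmp[len] = '\0';           /* inet_pton() needs the final \0 */
            return inet_pton(AF_INET, tmp, dst) > 0;
    }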
If the acl keyword is a "fetch", the dedicated parsing function
"sample_parse_expr()" is used. Otherwise, the acl parsing function
"parse_acl_expr()" is extended to understand the syntax of a series
of converters placed after the "fetch" keyword.
Before this patch, each acl uses a "struct sample_fetch" and executes
it with the "<fetch>->process()" function. Now, the dedicated function
"sample_process()" is called.
The following syntax is now available:
acl bad req.hdr(host),lower -m str www
http-request redirect prefix /go-away if bad
acl bad hdr_beg(host),lower www
http-request redirect prefix /go-away if bad
When parsing track-sc* actions in tcp-request rules, we automatically
compute the track-sc identifier number using %d when displaying an error
message. But the ID has become wrong since we introduced sc0: we continued
to report id+1 in error messages, causing some confusion.
No backport is needed.
Add a DRAIN sub-state for a server which
will be shown on the stats page instead of UP if
its effective weight is zero.
Also, log if a server enters or leaves the DRAIN state
as the result of an agent check.
Signed-off-by: Simon Horman <horms@verge.net.au>
The syntax of these new commands is:
enable agent <backend>/<server>
disable agent <backend>/<server>
These commands allow temporarily stopping and subsequently
re-starting an auxiliary agent check. The effect of this is as follows:
New checks are only initialised when the agent is in the enabled state. Thus,
disable agent will prevent any new agent checks from being initiated until
the agent is re-enabled using enable agent.
When an agent is disabled, the processing of an auxiliary agent check that
was initiated while the agent was set as enabled is as follows: all
results that would alter the weight, specifically "drain" or a weight
returned by the agent, are ignored. The processing of agent checks is
otherwise unchanged.
The motivation for this feature is to allow the weight changing effects
of the agent checks to be paused to allow the weight of a server to be
configured using set weight without being overridden by the agent.
Signed-off-by: Simon Horman <horms@verge.net.au>
This is achieved by moving rise and fall from struct server to struct check.
After this move the behaviour of the primary check, server->check is
unchanged. However, the secondary agent check, server->agent, now has
independent rise and fall values, each of which is set to 1.
The result is that receiving "fail", "stopped" or "down" just once from the
agent will mark the server as down. And receiving a weight just once will
allow the server to be marked up if its primary check is in good health.
This opens up the scope to allow the rise and fall values of the agent
check to be configurable, however this has not been implemented at this
stage.
Signed-off-by: Simon Horman <horms@verge.net.au>
Allow an auxiliary agent check to be run independently of the
regular health check. This is enabled by the agent-check
server setting.
The agent-port, which specifies the TCP port to use for the agent's
connections, is required.
The agent-inter, which specifies the interval between agent checks and
timeout of agent checks, is optional. If not set the value for regular
checks is used.
e.g.
server web1_1 127.0.0.1:80 check agent-port 10000
If either the health or agent check determines that a server is down
then it is marked as being down, otherwise it is marked as being up.
An agent health check is performed by opening a TCP socket and reading an
ASCII string. The string should have one of the following forms:
* An ASCII representation of a positive integer percentage.
e.g. "75%"
Values in this format will set the weight proportional to the initial
weight of a server as configured when haproxy starts.
* The string "drain".
This will cause the weight of a server to be set to 0, and thus it
will not accept any new connections other than those that are
accepted via persistence.
* The string "down", optionally followed by a description string.
Mark the server as down and log the description string as the reason.
* The string "stopped", optionally followed by a description string.
This currently has the same behaviour as "down".
* The string "fail", optionally followed by a description string.
This currently has the same behaviour as "down".
Signed-off-by: Simon Horman <horms@verge.net.au>
The column used to report the throttle percentage when a server is in
slowstart is based on the time only. This is wrong, because server weights
in slowstart are updated at most once a second, so the reported value is
wrong for at least one second during each step, which means all the time
when using short delays (< 20s).
The second point is that it's disturbing to see a weight < 100% without
any throttle at the end of the period (during the last second), because
the effective weight has not yet been updated.
Instead, we now compute the exact ratio between eweight and uweight and
report it. It's always accurate and describes the value being used instead
of using only the date.
It can be backported to 1.4 though it's not particularly important.
A crash was reported by Igor at owind when changing a server's weight
on the CLI. Lukas Tribus could reproduce a related bug where setting
a server's weight would result in the new weight being multiplied by
the initial one. The two bugs are the same.
The incorrect weight calculation results in the total farm weight being
larger than what was initially allocated, causing the map index to be out
of bounds on some hashes. It's easy to reproduce using "balance url_param"
with a variable param, or with "balance static-rr".
It appears that the calculation is made at many places and is not always
right and not always wrong the same way. Thus, this patch introduces a
new function "server_recalc_eweight()" which is dedicated to this task
of computing ->eweight from many other elements including uweight and
current time (for slowstart), and all users now switch to use this
function.
The patch is a bit large but the code was not trivially fixable in a way
that could guarantee this situation would not occur anymore. The fix is
much more readable and has been verified to work with all algorithms,
with both consistent and map-based hashes, and even with static-rr.
Slowstart was tested as well, just like enable/disable server.
The same bug is very likely present in 1.4 as well, so the patch will
probably need to be backported even though it will not apply as-is.
Thanks to Lukas and Igor for the information they provided to reproduce it.
This is in preparation for associating an agent check
with a server, to run alongside the server's existing check.
Signed-off-by: Simon Horman <horms@verge.net.au>
Add state to struct check. This is currently used to store one bit,
CHK_RUNNING, which is set if a check is running and clear otherwise.
This bit was previously SRV_CHK_RUNNING of the state element of struct
server.
This is in preparation for associating an agent check
with a server, to run alongside the server's existing check.
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Parametrise the following functions over the check of a server
* set_server_down
* set_server_up
* srv_getinter
* server_status_printf
* set_server_check_status
* set_server_disabled
* set_server_enabled
Generally the server parameter of these functions has been removed.
Where it is still needed it is obtained using check->server.
This is in preparation for associating an agent check
with a server, to run alongside the server's existing check.
By parametrising these functions they may act on each of the checks
without further significant modification.
Explanation of the SSP_O_HCHK portion of this change:
* Prior to this patch SSP_O_HCHK serves a single purpose, which
is to tell server_status_printf() whether it should print
the details of the check of a server or not.
With the parametrisation that this patch adds there are two cases.
1) Printing the details of the check in which case a
valid check parameter is needed.
2) Not printing the details of the check in which case
the contents check parameter are unused.
In case 1) we could pass SSP_O_HCHK and a valid check and;
In case 2) we could pass !SSP_O_HCHK and any value for check
including NULL.
If NULL is used for case 2) then SSP_O_HCHK becomes superfluous,
and as NULL is used for case 2) SSP_O_HCHK has been removed.
Signed-off-by: Simon Horman <horms@verge.net.au>
Move result element from struct server to struct check
This allows check results to be independent of the check's server.
This is in preparation for associating an agent check
with a server, to run alongside the server's existing check.
Signed-off-by: Simon Horman <horms@verge.net.au>
This is in preparation for associating an agent check
with a server, to run alongside the server's existing check.
The split has been made by:
* Moving elements of struct server's check element that will
be shared by both checks into a new check_common element
of struct server.
* Moving the remaining elements to a new struct check and
making struct server's check element a struct check.
* Adding a server element to struct check, a back-pointer
to the server element it is a member of.
- At this time the server could be obtained using
container_of, however, this will not be so easy
once a second struct check element is added to struct server
to accommodate an agent health check.
Signed-off-by: Simon Horman <horms@verge.net.au>
This was inadvertently added by "MEDIUM: checks: Add agent health check".
It appears to have never been used.
Signed-off-by: Simon Horman <horms@verge.net.au>
This function was designed for haproxy while testing other functions
in the past. Initially it was not planned to be used given the not
very interesting numbers it showed on real URL data : it is not as
smooth as the other ones. But later tests showed that the other ones
are extremely sensitive to the server count and the type of input data,
especially DJB2 which must not be used on numeric input. So in fact
this function is still a generally average performer and it can make
sense to merge it in the end, as it can provide an alternative to
sdbm+avalanche or djb2+avalanche for consistent hashing or when hashing
on numeric data such as a source IP address or a visitor identifier in
a URL parameter.
Summary:
Avalanche is supported not as a native hashing choice, but a modifier
on the hashing function. Note that this means that possible configs
written after 1.5-dev4 using "hash-type avalanche" will get an informative
error instead. But as discussed on the mailing list it seems nobody ever
used it anyway, so let's fix it before the final 1.5 release.
The default values were selected for backward compatibility with previous
releases, as discussed on the mailing list, which means that the consistent
hashing will still apply the avalanche hash by default when no explicit
algorithm is specified.
Examples
(default) hash-type map-based
Map based hashing using sdbm without avalanche
(default) hash-type consistent
Consistent hashing using sdbm with avalanche
Additional Examples:
(a) hash-type map-based sdbm
Same as default for map-based above
(b) hash-type map-based sdbm avalanche
Map based hashing using sdbm with avalanche
(c) hash-type map-based djb2
Map based hashing using djb2 without avalanche
(d) hash-type map-based djb2 avalanche
Map based hashing using djb2 with avalanche
(e) hash-type consistent sdbm avalanche
Same as default for consistent above
(f) hash-type consistent sdbm
Consistent hashing using sdbm without avalanche
(g) hash-type consistent djb2
Consistent hashing using djb2 without avalanche
(h) hash-type consistent djb2 avalanche
Consistent hashing using djb2 with avalanche
Summary:
In testing at tumblr, we found that using djb2 hashing instead of the
default sdbm hashing resulted in better workload distribution to our backends.
This commit implements a change that allows the user to specify the hash
function they want to use. It does not limit itself to consistent hashing
scenarios.
The supported hash functions are sdbm (default), and djb2.
For a discussion of the feature and analysis, see mailing list thread
"Consistent hashing alternative to sdbm" :
http://marc.info/?l=haproxy&m=138213693909219
Note: This change does NOT modify the new features; for instance,
the avalanche hash is still always applied before applying
consistent hashing.
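For reference, djb2 is the classic Bernstein hash; haproxy's own
variant may differ in details, but it looks like this sketch:

    unsigned int hash_djb2(const char *key, int len)
    {
            unsigned int hash = 5381;

            /* hash = hash * 33 + c for each character */
            while (len--)
                    hash = ((hash << 5) + hash) + (unsigned char)*key++;
            return hash;
    }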
If haproxy is compiled with the USE_PCRE_JIT option, the length of the
string is used. If it is compiled without this option the function doesn't
use the length and expects a null terminated string.
The prototype of the function is ambiguous, and depends on the
compilation option. A developer may assume that the length is always
used, which can create many bugs.
This patch makes sure that the length is used. The regex_exec function
adds the final '\0' if it is needed.
The current file "regex.h" defines an abstraction for the regex. It
provides the same struct name and the same "regexec" function for the
3 regex types supported: standard libc, basic pcre and jit pcre.
The regex compilation function is not provided by this file. If the
developer wants to use a regex, he must write regex compilation code
containing "#define *JIT*".
This patch provides a unique regex compilation function according to
the compilation options.
In addition, the "regex.h" file checks the presence of the "#define
PCRE_CONFIG_JIT" when "USE_PCRE_JIT" is enabled. If this flag is not
present, the pcre lib doesn't support JIT and "#error" is emitted.
This new action immediately closes the connection with the server
when the condition is met. The first such rule executed ends the
rules evaluation. The main purpose of this action is to force a
connection to be finished between a client and a server after an
exchange, when the application protocol expects some long timeouts
to elapse first. The goal is to eliminate idle connections which
take significant resources on servers with certain protocols.
When a process with large stick tables is replaced by a new one and remains
present until the last connection finishes, it keeps these data in memory
for nothing since they will never be used anymore by incoming connections,
except during syncing with the new process. This is especially problematic
when dealing with long session protocols such as WebSocket as it becomes
possible to stack many processes and eat a lot of memory.
So the idea here is to know if a table still needs to be synced or not,
and to purge all unused entries once the sync is complete. This means that
after a few hundred milliseconds when everything has been synchronized with
the new process, only a few entries will remain allocated (only the ones
held by sessions during the restart) and all the remaining memory will be
freed.
Note that we carefully do that only after the grace period is expired so as
not to impact a possible proxy that needs to accept a few more connections
before leaving.
Doing this required to add a sync counter to the stick tables, to know how
many peer sync sessions are still in progress in order not to flush the entries
until all synchronizations are completed.
verifyhost allows you to specify a hostname that the remote server's
SSL certificate must match. Connections that don't match will be
closed with an SSL error.
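Example (an illustration; certificate paths and names are placeholders):

    server srv1 192.168.1.10:443 ssl verify required ca-file /etc/ssl/certs/ca.pem verifyhost www.example.com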
The HTTP request counter is incremented non-atomically, which means that
many requests can log the same ID. Let's increment it when it is consumed
so that we avoid this case.
This bug was reported by Patrick Hemmer. It's 1.5-specific and does not
need to be backported.
The "set table" statement allows to create new entries with their respective
values. Till now it was limited to a single data type per line, requiring as
many "set table" statements as the desired data types to be set. Since this
is only a parser limitation, this patch gets rid of it. It also allows the
creation of a key with no data types (all reset to their default values).
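For example, over the stats socket, several data types can now be set
in one line, or none at all (an illustration; table and key names are
placeholders):

    set table front key 10.0.0.5 data.conn_cnt 3 data.gpc0 1
    set table front key 10.0.0.6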
In preparation of more flexibility in the stick counters, make their
number configurable. It still defaults to 3 which is the minimum
accepted value. Changing the value alone is not sufficient to get
more counters; some bitfields still need to be updated and the TCP
actions need to be updated as well, but this change makes that update
easier, which is nice for experimentation purposes.
This function is also called directly from backend.c, so let's stop
building fake args to call it as a sample fetch, and have a lower-layer,
more generic function instead.
We have a lot of duplicate code just because of minor variants between
fetch functions that could be dealt with if the functions had the pointer to
the original keyword, so let's pass it as the last argument. An earlier
version used to pass a pointer to the sample_fetch element, but this is not
the best solution for two reasons :
- fetch functions will solely rely on the keyword string
- some other smp_fetch_* users do not have the pointer to the original
keyword and were forced to pass NULL.
So finally we're passing a pointer to the keyword as a const char *, which
perfectly fits the original purpose.
The max weight of a server is 256 now, but SRV_UWGHT_MAX is still 255. As a result,
FWRR will not work well when a server's weight is 256. The description is as follows:
There are some macros related to server's weight in include/types/server.h:
#define SRV_UWGHT_RANGE 256
#define SRV_UWGHT_MAX (SRV_UWGHT_RANGE - 1)
#define SRV_EWGHT_MAX (SRV_UWGHT_MAX * BE_WEIGHT_SCALE)
Since the weight of a server can reach 256 and BE_WEIGHT_SCALE equals 16,
the max eweight of a server should be 256*16 = 4096, which exceeds SRV_EWGHT_MAX,
which equals SRV_UWGHT_MAX*BE_WEIGHT_SCALE = 255*16 = 4080. When a server
with weight 256 is inserted into the FWRR tree during initialization, the key value
of this server should be SRV_EWGHT_MAX - s->eweight = 4080 - 4096 = -16, which
is close to UINT_MAX in an unsigned type, so the server with the highest weight will
not be elected as the first server to process requests.
In addition, it is a better choice to compare with SRV_UWGHT_MAX than with a magic
number 256 while checking the weight. The max number of servers for
round-robin algorithm is also updated.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
As per RFC3260 #4 and BCP37 #4.2 and #5.2, the IPv6 counterpart of TOS
is "traffic class".
Add support for IPv6 traffic class in "set-tos" by moving the "set-tos"
related code to the new inline function inet_set_tos(), handling IPv4
(IP_TOS), IPv6 (IPV6_TCLASS) and IPv4-mapped sockets (IP_TOS, like
::ffff:127.0.0.1).
Also define - if missing - the IN6_IS_ADDR_V4MAPPED() macro in
include/common/compat.h for compatibility.
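A rough sketch of what inet_set_tos() does (simplified; error handling
omitted):

    static inline void inet_set_tos(int fd, struct sockaddr_storage *from, int tos)
    {
            if (from->ss_family == AF_INET)
                    setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
            else if (from->ss_family == AF_INET6) {
                    if (IN6_IS_ADDR_V4MAPPED(&((struct sockaddr_in6 *)from)->sin6_addr))
                            /* v4-mapped addresses use IP_TOS like plain IPv4 */
                            setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
                    else
                            setsockopt(fd, IPPROTO_IPV6, IPV6_TCLASS, &tos, sizeof(tos));
            }
    }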
Benoit Dolez reported a failure to start haproxy 1.5-dev19. The
process would immediately report an internal error with missing
fetches from some crap instead of ACL names.
The cause is that some versions of gcc seem to trim static structs
containing a variable array when moving them to BSS, and only keep
the fixed size, which is just a list head for all ACL and sample
fetch keywords. This was confirmed at least with gcc 3.4.6. And we
can't move these structs to const because they contain a list element
which is needed to link all of them together during the parsing.
The bug indeed appeared with 1.5-dev19 because it's the first one
to have some empty ACL keyword lists.
One solution is to impose -fno-zero-initialized-in-bss to everyone
but this is not really nice. Another solution consists in ensuring
the struct is never empty so that it does not move there. The easy
solution consists in having a non-null list head since it's not yet
initialized.
A new "ILH" list head type was thus created for this purpose : create
an Initialized List Head so that gcc cannot move the struct to BSS.
This fixes the issue for this version of gcc and does not create any
burden for the declarations.
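The trick may look like this sketch (the exact definition may differ):

    /* an Initialized List Head: non-null pointers so that gcc cannot
     * consider the struct as zero-initialized and move it to BSS
     */
    #define ILH { .n = (struct list *)1, .p = (struct list *)1 }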
Remove event_accept() in include/proto/proto_http.h and use the correct
function name in the other two files instead of event_accept().
Signed-off-by: Godbach <nylzhaowei@gmail.com>
It was a bit inconsistent to have gpc start at 0 and sc start at 1,
so make sc start at zero like gpc. No previous release was issued
with sc3 anyway, so no existing setup should be affected.
This configures the client-facing connection to receive a PROXY protocol
header before any byte is read from the socket. This is equivalent to
having the "accept-proxy" keyword on the "bind" line, except that using
the TCP rule allows the PROXY protocol to be accepted only for certain
IP address ranges using an ACL. This is convenient when multiple layers
of load balancers are passed through by traffic coming from public
hosts.
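Example (the ACL file path is a placeholder):

    tcp-request connection expect-proxy layer4 if { src -f /etc/haproxy/lb_nets.lst }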
"set-mark" is used to set the Netfilter MARK on all packets sent to the
client to the value passed in <mark> on platforms which support it. This
value is an unsigned 32 bit value which can be matched by netfilter and
by the routing table. It can be expressed either in decimal or hexadecimal
format (prefixed by "0x"). This can be useful to force certain packets to
take a different route (for example a cheaper network path for bulk
downloads). This works on Linux kernels 2.6.32 and above and requires
admin privileges.
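A usage example, assuming the action is exposed through http-request:

    http-request set-mark 0x2 if { path_end .iso }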
This manipulates the TOS field of the IP header of outgoing packets sent
to the client. This can be used to set a specific DSCP traffic class based
on some request or response information. See RFC2474, 2597, 3260 and 4594
for more information.
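A usage example, assuming the action is exposed through http-request
(184 is 0xB8, i.e. DSCP EF shifted into the TOS field):

    http-request set-tos 184 if { path_beg /voip }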
Some users want to disable logging for certain non-important requests such as
stats requests or health-checks coming from other equipment. Other users want
to log with a higher importance (eg: notice) some special traffic (POST requests,
authenticated requests, requests coming from suspicious IPs) or some abnormally
large responses.
This patch responds to all these needs at once by adding a "set-log-level" action
to http-request/http-response. The 8 syslog levels are supported, as well as "silent"
to disable logging.
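A usage example (addresses are illustrative):

    http-request set-log-level silent if { src 10.0.0.0/8 }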
Some actions were clearly missing to process response headers. This
patch adds a new "http-response" ruleset which provides the following
actions :
- allow : stop evaluating http-response rules
- deny : stop and reject the response with a 502
- add-header : add a header in log-format mode
- set-header : set a header in log-format mode
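An example combining two of these actions (header names are
illustrative):

    http-response set-header X-Frame-Options DENY
    http-response deny if { res.hdr(X-Debug) -m found }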
Since commit cfd97c6f was merged into 1.5-dev14 (BUG/MEDIUM: checks:
prevent TIME_WAITs from appearing also on timeouts), some valid health
checks sometimes used to show some TCP resets. For example, this HTTP
health check sent to a local server :
19:55:15.742818 IP 127.0.0.1.16568 > 127.0.0.1.8000: S 3355859679:3355859679(0) win 32792 <mss 16396,nop,nop,sackOK,nop,wscale 7>
19:55:15.742841 IP 127.0.0.1.8000 > 127.0.0.1.16568: S 1060952566:1060952566(0) ack 3355859680 win 32792 <mss 16396,nop,nop,sackOK,nop,wscale 7>
19:55:15.742863 IP 127.0.0.1.16568 > 127.0.0.1.8000: . ack 1 win 257
19:55:15.745402 IP 127.0.0.1.16568 > 127.0.0.1.8000: P 1:23(22) ack 1 win 257
19:55:15.745488 IP 127.0.0.1.8000 > 127.0.0.1.16568: FP 1:146(145) ack 23 win 257
19:55:15.747109 IP 127.0.0.1.16568 > 127.0.0.1.8000: R 23:23(0) ack 147 win 257
After some discussion with Chris Huang-Leaver, it appeared clear that
what we want is to only send the RST when we have no other choice, which
means when the server has not closed. So we still keep SYN/SYN-ACK/RST
for pure TCP checks, but don't want to see an RST emitted as above when
the server has already sent the FIN.
The solution consists in implementing a "drain" function at
the protocol layer, which, when defined, causes as much as possible of
the input socket buffer to be flushed to make recv() return zero so that
we know that the server's FIN was received and ACKed. On Linux, we can make
use of MSG_TRUNC on TCP sockets, which has the benefit of draining everything
at once without even copying data. On other platforms, we read up to one
buffer of data before the close. If recv() manages to get the final zero,
we don't disable lingering. Same for hard errors. Otherwise we do.
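On Linux the drain may look like this sketch (with MSG_TRUNC, a TCP
recv() discards pending data without copying it into a buffer):

    int ret;

    do {
            /* Linux-specific: NULL buffer + MSG_TRUNC discards the data */
            ret = recv(fd, NULL, INT_MAX, MSG_TRUNC | MSG_NOSIGNAL | MSG_DONTWAIT);
    } while (ret > 0);

    if (ret == 0) {
            /* the server's FIN was received and ACKed: keep lingering enabled */
    }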
In practice, on HTTP health checks we generally find that the close was
pending and is returned upon first recv() call. The network trace becomes
cleaner :
19:55:23.650621 IP 127.0.0.1.16561 > 127.0.0.1.8000: S 3982804816:3982804816(0) win 32792 <mss 16396,nop,nop,sackOK,nop,wscale 7>
19:55:23.650644 IP 127.0.0.1.8000 > 127.0.0.1.16561: S 4082139313:4082139313(0) ack 3982804817 win 32792 <mss 16396,nop,nop,sackOK,nop,wscale 7>
19:55:23.650666 IP 127.0.0.1.16561 > 127.0.0.1.8000: . ack 1 win 257
19:55:23.651615 IP 127.0.0.1.16561 > 127.0.0.1.8000: P 1:23(22) ack 1 win 257
19:55:23.651696 IP 127.0.0.1.8000 > 127.0.0.1.16561: FP 1:146(145) ack 23 win 257
19:55:23.652628 IP 127.0.0.1.16561 > 127.0.0.1.8000: F 23:23(0) ack 147 win 257
19:55:23.652655 IP 127.0.0.1.8000 > 127.0.0.1.16561: . ack 24 win 257
This change should be backported to 1.4 which is where Chris encountered
this issue. The code is different, so probably the tcp_drain() function
will have to be put in the checks only.
The req.hdr and res.hdr fetch methods do not work well on headers which
are allowed to contain commas, such as User-Agent, Date or Expires.
More specifically, full-length matching is impossible if a comma is
present.
This patch introduces 4 new fetch functions which are designed to work
with these full-length headers :
- req.fhdr, req.fhdr_cnt
- res.fhdr, res.fhdr_cnt
These ones do not stop at commas and permit returning full-length header
values.
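For example, counting occurrences of a full-length header (an
illustration):

    acl multiple_xff req.fhdr_cnt(X-Forwarded-For) gt 1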
People who use "option dontlog-normal" are bothered by redirects and
stats being logged and reported as errors in the logs ("PR" = proxy
blocked the request).
This patch introduces a new flag 'L' for when a request is locally
processed, which is not considered as an error by the log filters. That
way we know a request was intercepted and processed by haproxy without
logging the line when "option dontlog-normal" is in effect.
Since 1.5-dev12 and commit 3bf1b2b8 (MAJOR: channel: stop relying on
BF_FULL to take action), the HTTP parser switched to channel_full()
instead of BF_FULL to decide whether a buffer had enough room to start
parsing a request or response. The problem is that channel_full()
intentionally ignores outgoing data, so a corner case exists where a
large response might still be left in a response buffer with just a
few bytes left (much less than the reserve), enough to accept a second
response past the last data, but not enough to permit the HTTP processor
to add some headers. Since all the processing relies on this space being
available, we can get some random crashes when clients pipeline requests.
The analysis of a core from haproxy configured with 20480 bytes buffers
shows this : with enough "luck", when sending back the response for the
first request, the client is slow, the TCP window is congested, the socket
buffers are full, and haproxy's buffer fills up. We still have 20230 bytes
of response data in a 20480 response buffer. The second request is sent to
the server which returns 214 bytes which fit in the small 250 bytes left
in this buffer. And the buffer arrangement makes it possible to escape all
the controls in http_wait_for_response() :
|<------ response buffer = 20480 bytes ------>|
[ 2/2 | 3 | 4 | 1/2 ]
^ start of circular buffer
1/2 = beginning of previous response (18240)
2/2 = end of previous response (1990)
3 = current response (214)
4 = free space (36)
- channel_full() returns false (20230 bytes are going to leave)
- the response headers do not wrap at the end of the buffer
- the remaining linear room after the headers is larger than the
reserve, because it's the previous response which wraps :
=> response is processed
Header rewriting causes it to reach 260 bytes, 10 bytes larger than what
the buffer could hold. So all computations during header addition are
wrong and lead to the corruption we've observed.
All the conditions are very hard to meet (which explains why it took
almost one year for this bug to show up) and are almost impossible to
reproduce on purpose on a test platform. But the bug is clearly there.
This issue was reported by Dinko Korunic who kindly devoted a lot of
time to provide countless traces and cores, and to experiment with
troubleshooting patches to knock the bug down. Thanks Dinko!
No backport is needed, but all 1.5-dev versions between dev12 and dev18
included must be upgraded. A workaround consists in setting option
forceclose to prevent pipelined requests from being processed.