The indexation functions now accept duplicates, so that lists and trees
always behave consistently. The "add" command now always adds the entry
regardless of any previous existence, and the new entry will not be used
as long as an older one exists, because both trees and lists retrieve
keys in insertion order. Thus the "add" operation always succeeds (as
long as there is enough memory).
If an ACL is shared with a map, the "add acl" command must be blocked
because it does not take a sample among its parameters. The absence of
this parameter could cause errors in the corresponding maps.
The pointer <regstr> was only used to compare and identify the original
regex string with the patterns. Now the patterns have a reference map
containing this original string, so it is useless to store this value
twice.
Before this patch, this function tried to add values on a best-effort
basis: if the parsing of a value failed, the operation continued until
the end. Now the function stops on the first error and leaves the
pattern in a coherent state.
This patch adds a new display type which returns an allocated string;
once the string is flushed into the buffers, it is freed. This permits
returning the content of "memprintf(err, ...)" messages.
The pat_ref_add functions have been changed to return errors.
The format of ACL files is not the same as the format of map files. In
some cases the same file can be used, but this is ambiguous for the user
because the patterns are not the expected ones.
The ACL and map code did the same work for file parsing. This patch
merges this code into a single function.
Note that the function map_read_entries_from_file() in the file "map.c"
is moved to the function pat_ref_read_from_file_smp() in the file
"pattern.c". The code of this function is not modified; only the name
and the argument order have changed.
Each displayed pattern is associated with the value of its pattern
reference. This value can be used to delete the entry. It is useful with
complex regexes: users are not forced to type the regex with all its
ambiguous and escaped characters on the CLI.
The find_smp function searches the smp using the value of the
pat_ref_elt pointer. The pat_find_smp_* functions are no longer used:
the function pattern_find_smp() knows all the pattern indexation methods
and can perform the lookup for any of them.
All the pattern delete functions can use their reference to the original
"struct pat_ref_elt" to find the element to be removed. The functions
pat_del_list_str() and pat_del_meth() were deleted because, after
applying this modification, they had the same code as
pat_del_list_ptr().
Before this patch, the "get map/acl" function try to convert and display
the sample. This behavior is not efficient because some type like the
regex cannot be reversed and displayed as string.
This patch display the original stored reference.
Now each pattern entry knows the original "struct pat_ref_elt" from
which it was built. This permits deleting each pattern entry without
confusion: after this patch, each reference can be targeted through its
pointer.
The function pattern_add() is only used by pat_ref_push(). This patch
removes the function pattern_add() and merges its code into the function
pat_ref_push().
Pattern references are stored with two identifiers: the unique_id and
the reference.
The reference identifies a file. Each file with the same name points to
the same reference, so a single file can be registered several times. If
the file is modified, all its dependencies are modified as well. The
reference can be used with maps or ACLs.
The unique_id identifies an inline ACL and is unique for each ACL. You
cannot force the same id twice in the configuration file: doing so
reports an error.
The format of the ACL and map listings on the stats socket has changed
to display these new ids.
This patch moves the expect_type variable from "struct pattern" to
"struct pattern_head". This variable is set during the declaration of
ACLs and maps. With this change, the function "pat_parse_len()" becomes
useless and can be replaced by "pat_parse_int()".
Implicit ACLs by default rely on the fetch's output type, so let's simply do
the same for all other ones. It has been verified that they all match.
Sometimes the same pattern file is used with the same index, parse and
parse_smp functions. When these two conditions are met, the patterns are
identical and the same struct can be shared.
This patch adds the following stats socket commands:
show acl [<id>]
clear acl <id>
get acl <id> <pattern>
del acl <id> <pattern>
add acl <id> <pattern>
The mechanism used for maps is ported to the pattern functions.
Some functions need to change the sample associated with a pattern. This
new pointer permits returning a pointer to the sample pointer, so the
caller can use or change the value.
This commit adds a delete function for patterns. It looks up all
instances of the pattern to delete and deletes them all. The fetch
keyword declarations have been extended to point to the appropriate
delete function.
This commit adds a second tree node in the pattern struct and uses it to
index IPv6 addresses, replicating the behaviour already used with the
lists. If an IPv4 sample does not match the IPv4 tree, it is converted
to IPv6 by prefixing it with "::ffff:", and the match function then
looks it up in the IPv6 tree. If an IPv6 sample does not match the IPv6
tree and is prefixed by "2002:IPv4", "::ffff:IPv4" or "::0000:IPv4", it
is converted to an IPv4 address and the match function then looks it up
in the IPv4 tree.
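As an illustration, here is a minimal sketch of the IPv4-to-IPv6 mapping
performed before the second lookup (the helper name is illustrative, not
the actual haproxy code):

    #include <string.h>
    #include <netinet/in.h>

    /* Map an IPv4 address to its "::ffff:a.b.c.d" IPv6 form so it
     * can be looked up in the IPv6 tree. */
    static void map_v4_to_v6(const struct in_addr *v4, struct in6_addr *v6)
    {
        memset(v6, 0, sizeof(*v6));
        v6->s6_addr[10] = 0xff;
        v6->s6_addr[11] = 0xff;
        memcpy(&v6->s6_addr[12], &v4->s_addr, 4);
    }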
The match functions now know the format of the pattern. A pattern can be
stored in a list or in a tree, and the pattern matching function itself
uses the right entry point and indexation type.
Each pattern matching function returns the struct pattern that matched.
If the "fill" flag is set, the struct pattern is filled; otherwise the
content of this struct must not be used. With this feature, the general
pattern matching function no longer needs special cases for building the
"struct pattern".
The original "get map" display function put the comma separator after
each displayed word. This is not convenient because we cannot know
whether the displayed word is the last one.
The new system puts the comma separator before each displayed word, and
a single "\n" is emitted at the end of the function.
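A minimal sketch of the separator-before idiom:

    #include <stdio.h>

    /* emit the separator before every word except the first, then a
     * single trailing "\n": no need to know which word is the last */
    static void dump_words(const char **words, int n)
    {
        int i;

        for (i = 0; i < n; i++)
            printf("%s%s", i ? "," : "", words[i]);
        printf("\n");
    }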
Before this commit, the pattern_exec_match() function returned the
associated sample, the associated struct pattern or the associated
struct pattern_tree. This was complex to use because the caller had to
check which type of information was returned.
Now the function always returns a "struct pattern". If <fill> is not
set, only the value of the pointer can be used as a boolean (NULL or
not). If <fill> is set, the <smp> pointer and the pattern information
can be used.
If information must be duplicated, it is stored in the trash buffer;
otherwise the pattern can point to existing strings.
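A self-contained analogue of the new convention (the real prototypes and
types differ):

    #include <string.h>

    struct pattern { const char *ptr; int len; };

    /* NULL means "no match"; the returned struct's contents are only
     * meaningful when <fill> was set by the caller */
    static struct pattern *exec_match(const char *key, int fill)
    {
        static struct pattern p;

        if (strcmp(key, "example") != 0)
            return NULL;
        if (fill) {
            p.ptr = key;
            p.len = (int)strlen(key);
        }
        return &p;
    }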
The HTTP method is currently stored using two types: an integer if the
method is known, and a string if it is not. The fetch is declared as
UINT, but in some cases it can provide STR.
This patch creates a new type called METH. This type contains an integer
for known methods and a string for the other ones. It can be used with
automatic converters, and the pattern matching can expect a method.
During the free or prune functions, the http_meth pattern is freed. This
patch initializes the freed pointer to NULL.
The operations applied on the types SMP_T_CSTR and SMP_T_STR are the
same, but the check code and the declarations are duplicated, because an
action must be declared for both SMP_T_C* and SMP_T_*. The declared
actions and checks are the same, which complexifies the code. Only the
"conv" functions can change a type from "C*" to "*".
Now, if a function needs to modify the input string, it can call the new
function smp_dup(), which duplicates the data into a trash buffer.
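The idea, as a self-contained sketch (the real smp_dup() operates on a
struct sample):

    #include <string.h>

    static char trash_buf[8192];

    /* copy a possibly read-only input into a writable scratch buffer
     * before modifying it, as smp_dup() does with the trash buffer */
    static char *dup_to_trash(const char *in, size_t len)
    {
        if (len > sizeof(trash_buf) - 1)
            len = sizeof(trash_buf) - 1;
        memcpy(trash_buf, in, len);
        trash_buf[len] = '\0';
        return trash_buf;
    }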
The pattern parse functions put the parsed result in a "struct pattern"
without allocating memory. If the pattern references the input data
unchanged, it points to the parsed string. If buffers are needed to
store translated data, the trash buffer is used. It is the indexation
function which allocates memory later, if needed.
Before this patch, the indexation function checked the declared pattern
matching function and indexed the data according to it. This makes it
inconvenient to add new indexation modes.
This commit adds dedicated indexation functions. Each struct pattern is
associated with one indexation function, which permits indexing the data
according to the type of pattern and the type of match.
This commit separates the "struct list" used for chaining from the
"struct pattern" which contains the pattern data. Later, this change
will permit manipulating lists and trees with the same "struct pattern".
Each pattern parser now takes only one string. This change is propagated
to the prototype of the function "pattern_register()", which is now
called with just one string and no longer needs to browse the array of
args.
After the previous patches, the "pat_parse_strcat()" function has
disappeared, and the "pat_parse_int()" and "pat_parse_dotted_ver()"
functions no longer use the "opaque" argument and take only one string
as input.
So, after this patch, no pattern parser uses the opaque variable anymore
and each takes only one string as input. This patch changes the
prototype of the pattern parsing functions: the "char **args" is
replaced by a "char *arg", the "int *opaque" is removed, and these
functions return 1 on success and 0 on failure.
The goal of this patch is to simplify the prototype of the
"pat_parse_*()" functions: replace the "char **args" argument with a
simple "char *arg" and remove the "opaque" argument.
"pat_parse_int()" and "pat_parse_dotted_ver()" are the only pattern
parsers using the "opaque" argument and more than one string from the
char **args array. These specificities are only used with ACLs. The
other systems using these pattern parsers (MAP and CLI) just use one
string to describe a range.
These two functions can read a range, but both the min and the max must
be specified. This patch extends the syntax to describe a range with an
implicit min or max, as used by the operators "lt", "le", "gt" and "ge".
The syntax is the following:
":x" -> no min to "x"
"x:" -> "x" to no max
This patch moves the parsing of the comparison operator from the
functions "pat_parse_int()" and "pat_parse_dotted_ver()" to the ACL
parser. The ACL parser reads the operator and the values and builds a
volatile string readable by the functions "pat_parse_int()" and
"pat_parse_dotted_ver()". The transformation is done with these rules:
If the parser is "pat_parse_int()":
"eq x" -> "x"
"le x" -> ":x"
"lt x" -> ":y" (with y = x - 1)
"ge x" -> "x:"
"gt x" -> "y:" (with y = x + 1)
If the parser is "pat_parse_dotted_ver()":
"eq x.y" -> "x.y"
"le x.y" -> ":x.y"
"lt x.y" -> ":w.z" (with w.z = x.y - 1)
"ge x.y" -> "x.y:"
"gt x.y" -> "w.z:" (with w.z = x.y + 1)
Note that if "y" is not present, it is assumed to be "0".
Now "pat_parse_int()" and "pat_parse_dotted_ver()" accept only one
pattern and the variable "opaque" is no longer used. The prototype of
the pattern parsers can be changed.
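A hedged sketch of the ACL-side rewrite (the real code builds the string
into a local buffer before calling the parser):

    #include <stdio.h>
    #include <string.h>

    /* turn "<op> x" into the range string handed to pat_parse_int() */
    static void op_to_range(const char *op, long x, char *buf, size_t sz)
    {
        if (strcmp(op, "eq") == 0)
            snprintf(buf, sz, "%ld", x);
        else if (strcmp(op, "le") == 0)
            snprintf(buf, sz, ":%ld", x);
        else if (strcmp(op, "lt") == 0)
            snprintf(buf, sz, ":%ld", x - 1);
        else if (strcmp(op, "ge") == 0)
            snprintf(buf, sz, "%ld:", x);
        else if (strcmp(op, "gt") == 0)
            snprintf(buf, sz, "%ld:", x + 1);
    }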
This patch removes the limit of 32 groups. It also permits using the
standard "pat_parse_str()" function in place of "pat_parse_strcat()",
which is no longer used and is removed. Before this patch the groups
were stored in a bitfield; now they are stored in a list of strings. The
matching is slower, but the number of groups is low and the list of
allowed groups is generally short.
The fetch function "smp_fetch_http_auth_grp()", used under the name
"http_auth_group", returns the validated username. It can be used as a
string to display the username, or with the ACL "http_auth_group" to
check the group of the user.
Maybe the names of the ACL and fetch methods are no longer suitable, but
I keep the current names to preserve compatibility with existing
configurations.
The function "userlist_postinit()" is created from verification code
stored in the big function "check_config_validity()". The code is
adapted to the new authentication storage system and it is moved in the
"src/auth.c" file. This function is used to check the validity of the
users declared in groups and to check the validity of groups declared
on the "user" entries.
This resolve function is executed before the check of all proxy because
many acl needs solved users and groups.
The ACL keyword returned by find_acl_kw() is checked for having a
valid ->parse() function. This dates back to 2007 when ACLs were
reworked in order to differentiate old and new keywords. This check is
inappropriate and confusing since all keywords have a parser now.
The bin2str cast gives the hexadecimal representation of the binary
content when it is used as string. This was inherited from the
stick-table casts without realizing that it was a mistake. Indeed,
it breaks string processing on binary contents, preventing any _reg,
_beg, etc from working.
For example, with an HTTP GET request, the fetch "req.payload(0,3)"
returns the 3 bytes "G", "E", and "T" in binary. If this fetch is
used with regex, it is automatically converted to "474554" and the
regex is applied on this string, so it never matches.
This commit changes the cast so that bin2str does not convert the
contents anymore, and returns a string type. The contents can thus
be matched as is, and the NULL character continues to mark the end
of the string to avoid any issue with some string-based functions.
This commit could almost have been marked as a bug fix since it
does what the doc says.
Note that in case someone would rely on the hex encoding, then the
same behaviour could be achieved by appending ",hex" after the sample
fetch function (brought by previous patch).
This new converter turns the BIN type into its hexadecimal
representation as a STR type. It is used to keep compatibility with the
original bin2str cast, and will be useful when bin2str changes to copy
the string as-is without encoding anymore.
Binary samples are sometimes copied as-is into HTTP headers. A sample
can contain bytes disallowed by the HTTP RFC for header contents, for
example if it was extracted from binary data. The resulting HTTP request
can thus be invalid.
This issue does not yet happen because haproxy currently (mistakenly)
hex-encodes binary data, so it is not really possible to retrieve
invalid HTTP chars.
The solution consists in hex-encoding all non-printable chars prefixed
by a '%' sign.
No backport is needed since existing code is not affected yet.
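A sketch of the encoding (assuming isprint() defines "printable"; the
exact character set used by the patch may differ):

    #include <ctype.h>
    #include <stdio.h>

    /* Copy <in> into <out>, hex-encoding every non-printable byte
     * (and '%' itself) as "%XX" so the result stays legal in a
     * header value. <out> must hold up to 3 * len + 1 bytes. */
    static void encode_hdr_value(const unsigned char *in, size_t len,
                                 char *out)
    {
        while (len--) {
            if (isprint(*in) && *in != '%')
                *out++ = (char)*in;
            else
                out += sprintf(out, "%%%02X", (unsigned int)*in);
            in++;
        }
        *out = '\0';
    }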
cfg_parse_listen() currently checks for duplicated proxy names.
Now that we have a tree for this, we can use it.
The config load time was further reduced by a factor of 1.6, making it
about 4.5 times faster than what it was without the trees.
In fact it was the last CPU-intensive processing involving proxy
names. Now the only remaining point is the automatic fullconn
computation which can be bypassed by having a fullconn in the
defaults section, reducing the load time by another 10x.
Large configurations can take time to parse when thousands of backends
are in use. Let's store all the proxies in trees.
findproxy_mode() has been modified to use the tree for lookups, which
has divided the parsing time by about 2.5. But many lookups are still
present at many places and need to be dealt with.
Currently there are two places where the compression context is released,
one in session_free() and another one in http_end_txn_clean_session().
Both of them call http_end_txn(), either directly or via http_reset_txn(),
and this function is made for this exact purpose. So let's centralize the
call there instead.
Currently, "balance url_param check_post" randomly works. If the client
sends chunked data and there's another chunk after the one containing the
data, http_request_forward_body() will advance msg->sov and move the start
of data to the beginning of the last chunk, and get_server_ph_post() will
not find the data.
In order to avoid this, we add an HTTP_MSGF_WAIT_CONN flag whose goal is
to prevent the forwarding code from parsing until the connection is
confirmed, so that we're certain not to fail on a redispatch. Note that
we need to force channel_auto_connect() since the output buffer is empty
and a previous analyser might have stopped auto-connect.
The flag is currently set whenever some L7 POST analysis is needed for a
connect() so that it correctly addresses all corner cases involving a
possible rewind of the buffer, waiting for a better fix.
Note that this has been broken for a very long time. Even all 1.4 versions
seem broken but differently, with ->sov pointing to the end of the arguments.
So the fix should be considered for backporting to all stable releases,
possibly including 1.3 which works differently.
Julien Vehent reported that the log format '%{+Q}hr' displays the value
terminated by two '"' characters, like this: '"value""'. This patch just
removes the second quote.
This bug is old and 1.5-specific, but users of older 1.5 versions may be
interested in a backport.
The parser checked for the end of a line by comparing against the null
character only. In fact, the end of line can also be '\r' or '\n'. The
effect is that empty lines were loaded and indexed in maps.
The bug was introduced by commit d5f624dd ("MEDIUM: sample:
add the "map" converter") in 1.5-dev20. No backport is needed.
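The fix boils down to also treating '\r' and '\n' as line terminators,
as in this sketch:

    #include <stddef.h>

    /* length of the useful part of a line: stop on '\0' but also on
     * '\r' or '\n' so empty lines yield a zero length */
    static size_t line_len(const char *line)
    {
        size_t len = 0;

        while (line[len] && line[len] != '\r' && line[len] != '\n')
            len++;
        return len;
    }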
smp_fetch_res_comp_algo() returns the name of the compression algorithm
in use. The output type is set to SMP_T_STR instead of SMP_T_CSTR, which
causes any transformation to be operated without a cast. Fortunately,
the current converters do not overwrite a zero-sized area, so the result
is an empty string. Fix this to have SMP_T_CSTR instead so that the cast
is always performed using a copy before any transformation is done.
I was testing haproxy-1.5-dev22 on SmartOS (an illumos-based system)
and ran into a problem. There's a small window after non-blocking
connect() is called, but before the TCP connection is established,
where recv() may return ENOTCONN. On Linux, the behaviour here seems
to be always to return EAGAIN. The fix is relatively trivial, and
appears to make haproxy work reliably on current SmartOS (see patch
below). It's possible that other UNIX platforms exhibit this
behaviour as well.
Note: the equivalent was already done for send() in commit 0ea0cf6
("BUG: raw_sock: also consider ENOTCONN in addition to EAGAIN").
Both patches should be backported to 1.4.
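The handling looks like the following sketch (an illustrative wrapper,
not the actual raw_sock code):

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    static ssize_t try_recv(int fd, void *buf, size_t count)
    {
        ssize_t ret = recv(fd, buf, count, 0);

        /* ENOTCONN may show up while a non-blocking connect() is
         * still pending (seen on illumos): treat it like EAGAIN,
         * i.e. poll for readiness and retry later. */
        if (ret < 0 && errno == ENOTCONN)
            errno = EAGAIN;
        return ret;
    }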
Let's set IP_FREEBIND on IPv6 sockets as well; this works since Linux 3.3
and doesn't require CAP_NET_ADMIN privileges (IPV6_TRANSPARENT does).
This allows unprivileged users to bind to non-local IPv6 addresses, which
can be useful when setting up the listening sockets or when connecting
to backend servers with a specific, non-local source IPv6 address (at that
point we usually dropped root privileges already).
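In socket setup code this is a one-liner (Linux-specific sketch):

    #include <sys/socket.h>
    #include <netinet/in.h>

    /* works on AF_INET and, since Linux 3.3, on AF_INET6 sockets
     * too, without requiring CAP_NET_ADMIN */
    static int set_freebind(int fd)
    {
        int one = 1;
        return setsockopt(fd, IPPROTO_IP, IP_FREEBIND, &one, sizeof(one));
    }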
Disabled backends don't have their symbols resolved. We must not initialize
their peers section since they're not valid and instead still contain the
section's name.
There are other places where such unions are still in use, and other similar
errors might still happen. Ideally we should get rid of all of them in the
quite sensible config stage.
Since commit 0ce3aa0c ("MEDIUM: acl: implement payload and payload_lv"),
the payload and payload_lv ACL patterns were declared as strings because
at this date there was no support for binary patterns. At this time, these
ACLs were not reliably usable due to the binary-to-string cast involved,
and because it was not possible to specify the direction of the match.
Since recent evolutions, the new fetch methods "req.payload" and
"res.payload" have leveraged the ambiguity and were of type "binary",
with an implicit ACL mapping of the same type. The doc also states
that "payload" is an alias for "req.payload" etc... while these two
don't share the same type.
Better fix this mess before it's too late. "payload" and "payload_lv"
return a binary content, so their ACLs must by default use a binary
pattern. That way they behave like their "req." and "res." sisters.
This change might break some configs making use of these, but there's
almost a zero probability that anyone managed to use them to match
exact strings, so in practice the change should be safe.
Finn Arne Gangstad reported that commit 6b726adb35 ("MEDIUM: http: do
not report connection errors for second and further requests") breaks
support for serving static files by abusing the errorfile 503 statement.
Indeed, a second request over a connection sent to any server or backend
returning 503 would silently be dropped.
The proper solution consists in adding a flag on the session indicating
that the server connection was reused, and to only avoid the error code
in this case.
Since 1.5-dev20, we have a working server-side keep-alive and an option
"prefer-last-server" to indicate that we explicitly want to reuse the
same server as the last one. Unfortunately this breaks the redispatch
feature because assign_server() insists on reusing the same server as
the first one attempted even if the connection failed to establish.
A simple solution consists in only considering the last connection if
it was connected. Otherwise there is no reason for being interested in
reusing the same server.
Commits e0d1bfb ("[MINOR] Allow shutdown of sessions when a server
becomes unavailable") and eb2c24a ("MINOR: checks: add on-marked-up
option") mentionned that the directive was supported in default-server
but while it can be stated there, it's ignored because the config value
is not copied from the default server upon creation of a new server.
Moving the statement to the "server" lines works fine though. Thanks
to Baptiste Assmann for reporting and diagnosing this bug.
These features were introduced in 1.5-dev6 and 1.5-dev10 respectively,
so no backport is needed.
Igor Chan reported a very interesting bug which was triggered by the
recent dynamic size change in SSL.
The OpenSSL API refuses to send less data than any failed previous
attempt. So what's happening is that if an SSL_write() in streaming
mode sends 5kB of data and the openssl layer cannot send them all,
it returns SSL_ERROR_WANT_WRITE, which haproxy reacts to by enabling
polling on the file descriptor. In the mean time, haproxy may detect
that the buffer was almost full and will disable streaming mode. Upon
write notification, it will try to send again, but less data this
time (limited to tune.ssl_max_record). OpenSSL disagrees with this
and returns a generic error SSL_ERROR_SSL.
The solution which was found consists in adding a flag to the SSL
context to remind that we must not shrink writes after a failed
attempt. Thus, if EAGAIN is encountered, the next send() will not
be limited in order to retry the same size as before.
Cyril Bonté reported that despite commit 0dbbf317 which attempted
to fix the crash when a peers section has no name, we still get a
segfault after the error message when parsing the peers. The reason
is that the returned error code is ERR_FATAL and not ERR_ABORT, so
the parsing continues while the section was not initialized.
This is 1.5-specific, no backport is needed.
Peers with integer stick tables are breaking the keys received. This is due to
the fact that the sender converts the key with htonl() but the receiver doesn't
convert the value back to its original format.
Peers appeared in haproxy-1.5, no backport is needed.
Sometimes it can be useful to generate a random value, at least
for debugging purposes, but also to take routing decisions or to
pass such a value to a backend server.
The ability to globally override the default client and server cipher
suites has been requested multiple times since the introduction of SSL.
This commit adds two new keywords to the global section for this :
- ssl-default-bind-ciphers
- ssl-default-server-ciphers
It is still possible to preset them at build time by setting the macros
LISTEN_DEFAULT_CIPHERS and CONNECT_DEFAULT_CIPHERS.
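For example (the cipher strings below are purely illustrative):

    global
        ssl-default-bind-ciphers   EECDH+AESGCM:EDH+AESGCM:!aNULL
        ssl-default-server-ciphers EECDH+AESGCM:EDH+AESGCM:!aNULL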
The new tune.idletimer value allows one to set a different value for
idle stream detection. The default value remains set to one second.
It is possible to disable it using zero, and to change the default
value at build time using DEFAULT_IDLE_TIMER.
Disabling the streamer flags after an idle period will help TCP proxies
to better adapt to the streams they're forwarding, especially with SSL
where this will allow the SSL sender to use smaller records. This is
typically used to optimally relay HTTP and derivatives such as SPDY or
HTTP/2 in pure TCP mode when haproxy is used as an SSL offloader.
This idea was first proposed by Ilya Grigorik on the haproxy mailing
list, and his tests seem to confirm the improvement :
https://www.mail-archive.com/haproxy@formilux.org/msg12576.html
tcp-check must not reinitialize the SSL stack upon each check!
It's done once after the config parsing and leaks memory and eats
performance when done upon every check.
This bug was introduced in 1.5-dev22, no backport is needed.
It happens that the latest change broke some monitoring tools which expect the
field to be found at the same position as indicated in the doc. Let's move
it to the last column instead.
I forgot to remove one human_time() in the CSV output for the backend's
lastsess entry in the previous patch, which caused the value to be reported
as "1m18s" for example instead of 78.
Summary:
Track and report last session time on the stats page for each server
in every backend, as well as the backend.
This attempts to address the requirement in the ROADMAP
- add a last activity date for each server (req/resp) that will be
displayed in the stats. It will be useful with soft stop.
The stats page reports this as time elapsed since last session. This
change does not adequately address the requirement for long-running
sessions (websocket, RDP, etc.).
By having the stream interface pass the CF_STREAMER flag to the
snd_buf() primitive, we're able to tell the send layer whether
we're sending large chunks or small ones.
We use this information in SSL to adjust the max record dynamically.
This results in small chunks respecting tune.ssl.maxrecord at the
beginning of a transfer or for small transfers, with an automatic
switch to full records if the exchanges last long. This allows the
receiver to parse HTML contents on the fly without having to retrieve
16kB of data, which is even more important with small initcwnd since
the receiver does not need to wait for round trips to start fetching
new objects. However, sending large files still produces large chunks.
For example, with tune.ssl.maxrecord = 2859, we see 5 write(2885)
sent in two segments each and 6 write(16421).
This idea was first proposed on the haproxy mailing list by Ilya Grigorik.
This prevents us from passing other useful info and requires the
upper levels to know these flags. Let's use a new flags category
instead : CO_SFL_*. For now, only MSG_MORE has been remapped.
When no check type is configured (so the basic connection check), we
want the connection success to be immediately reported. Unfortunately,
it did not happen because in this case the connection is not registered
for read nor for write, and the wake_srv() callback does not handle this
case where no data transfer was requested. However, having option tcp-check
hides this problem because the check type follows a different setup mode,
by having check->type != 0 and the connection believing it must try to
send data.
The effect was that without any option, checks would succeed only at the
end of the check interval. So let's just add the wake-up condition.
This bug appeared with the recent polling changes, no backport is needed.
As a workaround, using "option tcp-check" fixes the problem.
Useless strncpy calls were done in those two sample fetches; the
"struct chunk" allows us to dump the specified length.
The encode_string() in capture.req.uri was judged inappropriate and was
deleted.
The return type was fixed to SMP_T_CSTR.
A typo made the first step of a tcp-check behave as a connect step. This patch
prevents this behavior. The bug was introduced in 1.5-dev22 with
"tcp-check connect" and only affects these directives. No backport is
needed.
Add two sample fetches allowing extraction of the method and the URI of an
HTTP request.
FIXME: the sample fetches parser can't add the LW_REQ requirement, at
the moment this flag is used automatically when you use sample fetches.
Note: also fixed the alphabetical order of other capture.req.* keywords
in the doc.
Released version 1.5-dev22 with the following main changes :
- MEDIUM: tcp-check new feature: connect
- MEDIUM: ssl: Set verify 'required' as global default for servers side.
- MINOR: ssl: handshake optim for long certificate chains.
- BUG/MINOR: pattern: pattern comparison executed twice
- BUG/MEDIUM: map: segmentation fault with the stats's socket command "set map ..."
- BUG/MEDIUM: pattern: Segfault in binary parser
- MINOR: pattern: move functions for grouping pat_match_* and pat_parse_* and add documentation.
- MINOR: standard: The parse_binary() returns the length consumed and his documentation is updated
- BUG/MINOR: payload: the patterns of the acl "req.ssl_ver" are no parsed with the good function.
- BUG/MEDIUM: pattern: "pat_parse_dotted_ver()" set bad expect_type.
- BUG/MINOR: sample: The c_str2int converter does not fail if the entry is not an integer
- BUG/MEDIUM: http/auth: Sometimes the authentication credentials can be mix between two requests
- MINOR: doc: Bad cli function name.
- MINOR: http: smp_fetch_capture_header_* fetch captured headers
- BUILD: last release inadvertently prepended a "+" in front of the date
- BUG/MEDIUM: stream-int: fix the keep-alive idle connection handler
- BUG/MEDIUM: backend: do not re-initialize the connection's context upon reuse
- BUG: Revert "OPTIM/MEDIUM: epoll: fuse active events into polled ones during polling changes"
- BUG/MINOR: checks: successful check completion must not re-enable MAINT servers
- MINOR: http: try to stick to same server after status 401/407
- BUG/MINOR: http: always disable compression on HTTP/1.0
- OPTIM: poll: restore polling after a poll/stop/want sequence
- OPTIM: http: don't stop polling for read on the client side after a request
- BUG/MEDIUM: checks: unchecked servers could not be enabled anymore
- BUG/MEDIUM: stats: the web interface must check the tracked servers before enabling
- BUG/MINOR: channel: CHN_INFINITE_FORWARD must be unsigned
- BUG/MINOR: stream-int: do not clear the owner upon unregister
- MEDIUM: stats: add support for HTTP keep-alive on the stats page
- BUG/MEDIUM: stats: fix HTTP/1.0 breakage introduced in previous patch
- Revert "MEDIUM: stats: add support for HTTP keep-alive on the stats page"
- MAJOR: channel: add a new flag CF_WAKE_WRITE to notify the task of writes
- OPTIM: session: set the READ_DONTWAIT flag when connecting
- BUG/MINOR: http: don't clear the SI_FL_DONT_WAKE flag between requests
- MINOR: session: factor out the connect time measurement
- MEDIUM: session: prepare to support earlier transitions to the established state
- MEDIUM: stream-int: make si_connect() return an established state when possible
- MINOR: checks: use an inline function for health_adjust()
- OPTIM: session: put unlikely() around the freewheeling code
- MEDIUM: config: report a warning when multiple servers have the same name
- BUG: Revert "OPTIM: poll: restore polling after a poll/stop/want sequence"
- BUILD/MINOR: listener: remove a glibc warning on accept4()
- BUG/MAJOR: connection: fix mismatch between rcv_buf's API and usage
- BUILD: listener: fix recent accept4() again
- BUG/MAJOR: ssl: fix breakage caused by recent fix abf08d9
- BUG/MEDIUM: polling: ensure we update FD status when there's no more activity
- MEDIUM: listener: fix polling management in the accept loop
- MINOR: protocol: improve the proto->drain() API
- MINOR: connection: add a new conn_drain() function
- MEDIUM: tcp: report in tcp_drain() that lingering is already disabled on close
- MEDIUM: connection: update callers of ctrl->drain() to use conn_drain()
- MINOR: connection: add more error codes to report connection errors
- MEDIUM: tcp: report connection error at the connection level
- MEDIUM: checks: make use of chk_report_conn_err() for connection errors
- BUG/MEDIUM: unique_id: HTTP request counter is not stable
- DOC: fix misleading information about SIGQUIT
- BUG/MAJOR: fix freezes during compression
- BUG/MEDIUM: stream-interface: don't wake the task up before end of transfer
- BUILD: fix VERDATE exclusion regex
- CLEANUP: polling: rename "spec_e" to "state"
- DOC: add a diagram showing polling state transitions
- REORG: polling: rename "spec_e" to "state" and "spec_p" to "cache"
- REORG: polling: rename "fd_spec" to "fd_cache"
- REORG: polling: rename the cache allocation functions
- REORG: polling: rename "fd_process_spec_events()" to "fd_process_cached_events()"
- MAJOR: polling: rework the whole polling system
- MAJOR: connection: remove the CO_FL_WAIT_{RD,WR} flags
- MEDIUM: connection: remove conn_{data,sock}_poll_{recv,send}
- MEDIUM: connection: add check for readiness in I/O handlers
- MEDIUM: stream-interface: the polling flags must always be updated in chk_snd_conn
- MINOR: stream-interface: no need to call fd_stop_both() on error
- MEDIUM: connection: no need to recheck FD state
- CLEANUP: connection: use conn_ctrl_ready() instead of checking the flag
- CLEANUP: connection: use conn_xprt_ready() instead of checking the flag
- CLEANUP: connection: fix comments in connection.h to reflect new behaviour.
- OPTIM: raw-sock: don't speculate after a short read if polling is enabled
- MEDIUM: polling: centralize polled events processing
- MINOR: polling: create function fd_compute_new_polled_status()
- MINOR: cli: add more information to the "show info" output
- MEDIUM: listener: add support for limiting the session rate in addition to the connection rate
- MEDIUM: listener: apply a limit on the session rate submitted to SSL
- REORG: stats: move the stats socket states to dumpstats.c
- MINOR: cli: add the new "show pools" command
- BUG/MEDIUM: counters: flush content counters after each request
- BUG/MEDIUM: counters: fix stick-table entry leak when using track-sc2 in connection
- MINOR: tools: add very basic support for composite pointers
- MEDIUM: counters: stop relying on session flags at all
- BUG/MINOR: cli: fix missing break in command line parser
- BUG/MINOR: config: correctly report when log-format headers require HTTP mode
- MAJOR: http: update connection mode configuration
- MEDIUM: http: make keep-alive + httpclose be passive mode
- MAJOR: http: switch to keep-alive mode by default
- BUG/MEDIUM: http: fix regression caused by recent switch to keep-alive by default
- BUG/MEDIUM: listener: improve detection of non-working accept4()
- BUILD: listener: add fcntl.h and unistd.h
- BUG/MINOR: raw_sock: correctly set the MSG_MORE flag
A new tcp-check rule type: connect.
It allows HAProxy to test applications which listen on multiple ports or
multiple applications load-balanced through the same backend.
Due to a typo, the MSG_MORE flag used to replace MSG_NOSIGNAL and
MSG_DONTWAIT. Fortunately, sockets are always marked non-blocking,
so the loss of MSG_DONTWAIT is harmless, and the NOSIGNAL is covered
by the interception of the SIGPIPE. So no issue could have been
caused by this bug.
On ARM, glibc does not implement accept4() and simply returns ENOSYS
which was not caught as a reason to fall back to accept(), resulting
in a spinning process since poll() would keep waking up.
Let's change the error detection mechanism to save the broken status
of the syscall into a local variable that is used to fall back to the
legacy accept().
In addition to this, since the code was becoming a bit messy, the
accept4() was removed, so now the fallback code and the legacy code
are the same. This will also increase bug report accuracy if needed.
This is 1.5-specific, no backport is needed.
Yesterday's commit 70dffda ("MAJOR: http: switch to keep-alive mode by default")
broke HTTP/1.0 handling without keep-alive when keep-alive is enabled both in
the frontend and in the backend.
Before this patch, it used to work because tunnel mode was the default one,
so if no mode was present in the frontend and a mode was set in the backend,
the backend was the first one to parse the header. This is what the original
patch tried to do with keep-alive by default, causing the version and the
connection header to be ignored if both the frontend and the backend were
running in keep-alive mode.
The fix consists in always parsing the header in non-tunnel mode, and
processing the rest of the logic at least once, and again if the
backend works in a different mode than the frontend.
This is 1.5-specific, no backport is needed.
The authentication function "get_http_auth()" extracts credentials from
the request and keeps these values in a shared cache. The function sets
a flag in the session indicating that the authentication has already
been parsed and that the values stored in the cache are available. If
this flag is set, the Authorization header is not re-parsed and the
shared cache is used.
If two requests are processed simultaneously, the first one checks the
credentials. After this, the second request also checks its credentials
and changes the data stored in the shared cache. When the first request
re-checks its credentials (for whatever reason), they have changed. This
change can introduce a segfault.
This patch deactivates the cache upon success. When authentication
information is needed for a request, it is re-parsed and re-decoded.
However, a failure to retrieve credentials is still cached to avoid
useless lookups.
This fix needs to be backported to 1.4 as well.
Since we support HTTP keep-alive, there is no more reason for staying
in tunnel mode by default. It is confusing for new users and creates
more issues than it solves. Option "http-tunnel" is available to force
its use if really desired.
Switching to KA by default has implied to change the value of some
option flags and some transaction flags so that value zero (default)
matches keep-alive. That explains why more code has been changed than
expected. Tests have been run on the 25 combinations of frontend and
backend options, plus a few with option http-pretend-keepalive, and
no anomaly was found.
The relation between frontend and backends remains the same. Options
have been updated to take precedence over http-keep-alive which is now
implicit.
All references in the doc to haproxy not supporting keep-alive have
been fixed, and the doc for config options has been updated.
There's no particular reason for having keep-alive + httpclose combine
into forceclose when set in different frontend/backend sections, since
keep-alive does not close anything by default. Let's have this
combination remain httpclose only.
At the very beginning of haproxy, there was "option httpclose" to make
haproxy add a "Connection: close" header in both directions to invite
both sides to agree on closing the connection. It did not work with some
rare products, so "option forceclose" was added to do the same and actively
close the connection. Then client-side keep-alive was supported, so option
http-server-close was introduced. Now we have keep-alive with a fourth
option, not to mention the implicit tunnel mode.
The connection configuration has become a total mess because all the
options above may be combined together, despite almost everyone thinking
they cancel each other, as judging from the common problem reports on the
mailing list. Unfortunately, re-reading the doc shows that it's not clear
at all that options may be combined, and the opposite seems more obvious
since they're compared. The most common issue is options being set in the
defaults section that are not negated in other sections, but are just
combined when the user expects them to be overloaded. The migration to
keep-alive by default will only make things worse.
So let's start to address the first problem. A transaction can only work in
5 modes today :
- tunnel : haproxy doesn't bother with what follows the first req/resp
- passive close : option httpclose
- forced close : option forceclose
- server close : option http-server-close with keep-alive on the client side
- keep-alive : option http-keep-alive, end to end
All 16 combinations for each section fall into one of these cases. Same for
the 256 combinations resulting from frontend+backend different modes.
With this patch, we're doing something slightly different, which will not
change anything for users with valid configs, and will only change the
behaviour for users with unsafe configs. The principle is that these options
may not be combined anymore, and that the latest one always overrides all the
other ones, including those inherited from the defaults section. The "no
option xxx" statement is still supported to cancel one option and fall back
to the default one. It is mainly needed to ignore defaults sections (eg:
force the tunnel mode). The frontend+backend combinations have not changed.
So for example, the following configuration used to put the connection
into forceclose :
defaults http
mode http
option httpclose
frontend foo
option http-server-close
=> http-server-close+httpclose = forceclose before this patch! Now
the frontend's config replaces the defaults config and results in
the more expected http-server-close.
All 25 combinations of the 5 modes in (frontend,backend) have been
successfully tested.
In order to prepare for upcoming changes, a new "option http-tunnel" was
added. It currently only voids all other options, and has the lowest
precedence when mixed with another option in another frontend/backend.
If no CA file is specified on a server line, the config parser will show
an error.
This patch adds a command-line option '-dV' to re-set verify 'none' as
the global default on the server side (the previous behavior). It also
adds the 'ssl-server-verify' global statement to set the global default
to 'none' or 'required'.
WARNING: this changes the default verify mode from "none" to "required" on
the server side, and it *will* break insecure setups.
When using some log-format directives in header insertion without HTTP mode,
the config parser used to report a cryptic message about option httplog being
downgraded to tcplog and with "(null):0" as the file name and line number.
This is because the lfs_file and lfs_line were not properly set for some valid
use cases of log-format directives. Now we cover http-request and http-response
as well.
Yesterday's commit 12833bb ("MINOR: cli: add the new "show pools" command")
missed a "break" statement causing trouble to the "show map" command. Spotted
by Thierry Fournier.
Till now, we had one flag per stick counter to indicate if it was
tracked in a backend or in a frontend. We just had to add another
flag per stick-counter to indicate if it relies on contents or just
connection. These flags are quite painful to maintain and tend to
easily conflict with other flags if their number is changed.
The correct solution consists in moving the flags to the stkctr struct
itself, but currently this struct is made of 2 pointers, so adding a
new entry there to store only two bits will cause at least 16 more bytes
to be eaten per counter due to alignment issues, and we definitely don't
want to waste tens to hundreds of bytes per session just for things that
most users don't use.
Since we only need to store two bits per counter, an intermediate
solution consists in replacing the entry pointer with a composite
value made of the original entry pointer and the two flags in the
2 unused lower bits. If later a need for other flags arises, we'll
have to store them in the struct.
A few inline functions have been added to abstract the retrieval
and assignment of the pointers and flags, resulting in very few
changes. That way there is no more dependence on the number of
stick-counters and their position in the session flags.
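A hedged sketch of the composite-pointer helpers (names approximate the
real inline functions):

    #include <stdint.h>

    #define STKCTR_TRACK_BACKEND  0x1  /* tracked by the backend */
    #define STKCTR_TRACK_CONTENT  0x2  /* tracked by a content rule */

    /* The entry pointer is at least 4-byte aligned, so its two lowest
     * bits are free to carry the two flags above. */
    static inline void *stkctr_entry(void *comp)
    {
        return (void *)((uintptr_t)comp & ~(uintptr_t)0x3);
    }

    static inline unsigned int stkctr_flags(void *comp)
    {
        return (unsigned int)((uintptr_t)comp & 0x3);
    }

    static inline void *stkctr_compose(void *entry, unsigned int flags)
    {
        return (void *)((uintptr_t)entry | (uintptr_t)(flags & 0x3));
    }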
In 1.5-dev19, commit e25c917 ("MEDIUM: counters: add support for tracking
a third counter") introduced the third track counter. However, there was
a hard-coded test in the accept() error path to release only sc0 and sc1.
So it seems that if tracking sc2 at the connection level and deciding to
reject once the track-sc2 has been done, there could be some leaking of
stick-table entries which remain marked used forever, thus which can never
be purged nor expired. There's no memory leak though, it's just that
entries are unexpirable forever.
The simple solution consists in removing the test and always calling
the inline function which iterates over all entries.
One year ago, commit 5d5b5d8 ("MEDIUM: proto_tcp: add support for tracking
L7 information") brought support for tracking L7 information in tcp-request
content rules. Two years earlier, commit 0a4838c ("[MEDIUM] session-counters:
correctly unbind the counters tracked by the backend") used to flush the
backend counters after processing a request.
While that earliest patch was correct at the time, it became wrong after
the second patch was merged. The code does what it says, but the concept
is flawed. "TCP request content" rules are evaluated for each HTTP request
over a single connection. So if such a rule in the frontend decides to
track any L7 information or to track L4 information when an L7 condition
matches, then it is applied to all requests over the same connection even
if they don't match. This means that a rule such as :
tcp-request content track-sc0 src if { path /index.html }
will count one request for index.html, and another one for each of the
objects present on this page that are fetched over the same connection
which sent the initial matching request.
Worse, it is possible to make the code do stupid things by using multiple
counters:
tcp-request content track-sc0 src if { path /foo }
tcp-request content track-sc1 src if { path /bar }
Just sending two requests first, one with /foo, one with /bar, shows
twice the number of requests for all subsequent requests. Just because
both of them persist after the end of the request.
So the decision to flush backend-tracked counters was not the correct
one. In practice, what is important is to flush content-based rules
since they are the ones evaluated for each request.
Doing so requires new flags in the session however, to keep track of
which stick-counter was tracked by what ruleset. A later change might
make this easier to maintain over time.
This bug is 1.5-specific, no backport to stable is needed.
show pools
Dump the status of internal memory pools. This is useful to track memory
usage when suspecting a memory leak for example. It does exactly the same
as the SIGQUIT when running in foreground except that it does not flush
the pools.
Just like the previous commit, we sometimes want to limit the rate of
incoming SSL connections. While it can be done for a frontend, it was
not possible for a whole process, which makes sense when multiple
processes are running on a system to serve multiple customers.
The new global "maxsslrate" setting is usable to fix a limit on the
session rate going to the SSL frontends. The limit applies before
the SSL handshake and not after, so that it saves the SSL stack from
expensive key computations that would finally be aborted before being
accounted for.
The same setting may be changed at run time on the CLI using
"set rate-limit ssl-session global".
It's sometimes useful to be able to limit the connection rate on a machine
running many haproxy instances (eg: per customer) but it removes the ability
for that machine to defend itself against a DoS. Thus, better also provide a
limit on the session rate, which does not include the connections rejected by
"tcp-request connection" rules. This permits to have much higher limits on
the connection rate without having to raise the session rate limit to insane
values.
The limit can be changed on the CLI using "set rate-limit sessions global",
or in the global section using "maxsessrate".
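For example (values are illustrative):

    global
        maxsessrate 1000
        maxsslrate  200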
In addition to previous outputs, we also emit the cumulated number of
connections, the cumulated number of requests, the maximum allowed
SSL connection concurrency, the current number of SSL connections and
the cumulated number of SSL connections. This will help troubleshoot
systems which experience memory shortage due to SSL.
If the string does not start with a number, the converter now fails.
Otherwise, it converts as many leading characters as possible into a
number and stops at the first character which is not a digit.
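A hedged sketch of the corrected behaviour:

    #include <stdlib.h>

    /* fail when no leading digit is found; otherwise convert the
     * leading digits and ignore whatever follows them */
    static int str2int_strict(const char *s, long *out)
    {
        char *end;

        *out = strtol(s, &end, 10);
        return end != s; /* 0 = failure: no leading number */
    }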
This is a regression introduced by the patches "MINOR: pattern: Each
pattern sets the expected input type" and "MEDIUM: acl: Last patch
change the output type". The expected value is SMP_T_CSTR instead of
SMP_T_UINT.
This bug impacts all the ACLs using the parser "pat_parse_dotted_ver()";
the two ACLs are "req_ssl_ver()" and "req.ssl_ver()".
This is a recent bug, no backport is needed.
This function is used to compute the new polling state based on
the previous state. All pollers have to do this in their update
loop, so better centralize the logic for it.
Currently, each poll loop handles the polled events the same way,
resulting in a lot of duplicated, complex code. Additionally, epoll
was the only one to handle newly created FDs immediately.
So instead, let's move that code to fd.c in a new function dedicated
to this task : fd_process_polled_events(). All pollers now use this
function.
This is the reimplementation of the "done" action : when we experience
a short read, we're almost certain that we've exhausted the system's
buffers and that we'll meet an EAGAIN if we attempt to read again. If
the FD is not yet polled, the stream interface already takes care of
stopping the speculative read. When the FD is already being polled, we
have two options :
- either we're running from a level-triggered poller, in which case
we'd rather report that we've reached the end so that we don't
speculate over the poller and let it report next time data are
available ;
- or we're running from an edge-triggered poller in which case we
have no choice and have to see the EAGAIN to re-enable events.
At the moment we don't have any edge-triggered poller, so it's desirable
to avoid speculative I/O that we know will fail.
Note that this must not be ported to SSL since SSL hides the real
readiness of the file descriptor.
Thanks to this change, we observe no EAGAIN anymore during keep-alive
transfers, and failed recvfrom() are reduced by half in http-server-close
mode (the client-facing side is always being polled and the second recv
can be avoided). Doing so results in about 5% performance increase in
keep-alive mode. Similarly, we used to have up to about 1.6% of EAGAIN
on accept() (1/maxaccept), and these have completely disappeared under
high loads.
It's easier and safer to rely on conn_xprt_ready() everywhere than to
check the flag itself. It will also simplify adding extra checks later
if needed. Some useless controls for !xprt have been removed, as the
XPRT_READY flag itself guarantees xprt is set.
It's easier and safer to rely on conn_ctrl_ready() everywhere than to
check the flag itself. It will also simplify adding extra checks later
if needed. Some useless controls for !ctrl have been removed, as the
CTRL_READY flag itself guarantees ctrl is set.
We already have everything in the connection flags using the
CO_FL_DATA_*_ENA bits combined with the fd's ready state, so
we do not need to check fdtab[fd].ev anymore. This considerably
simplifies the connection handling logic since it doesn't
have to mix connection flags with past polling states.
We don't need to call fd_stop_both() since we already call
conn_cond_update_polling() which will do it. This call was introduced by
commit d29a066 ("BUG/MAJOR: connection: always recompute polling status
upon I/O").
We used to only update the polling flags in data phase, but after that
we could update other flags. It does not seem possible to trigger a
bug here but it's not very safe either. Better always keep them up to
date.
The recv/send callbacks must check for readiness themselves instead of
having their callers do it. This will strengthen the test and will also
ensure we never refrain from calling a handshake handler because a
direction is being polled while the other one is ready.
We simply remove these functions and replace their calls with the
appropriate ones :
- if we're in the data phase, we can simply report wait on the FD
- if we're in the socket phase, we may also have to signal the
desire to read/write on the socket because it might not be
active yet.
These flags were used to report the readiness of the file descriptor.
Now this readiness is directly checked at the file descriptor itself.
This removes the need for constantly synchronizing updates between the
file descriptor and the connection and ensures that all layers share
the same level of information.
For now, the readiness is updated in conn_{sock,data}_poll_* by directly
touching the file descriptor. This must move to the lower layers instead
so that these functions can disappear as well. In this state, the change
works but is incomplete. It's sensible enough to avoid making it more
complex.
Now the sock/data updates become much simpler because they just have to
enable/disable access to a file descriptor and not to care anymore about
its readiness.
This commit heavily changes the polling system in order to definitely
fix the frequent breakage of SSL which needs to remember the last
EAGAIN before deciding whether to poll or not. Now we have a state per
direction for each FD, as opposed to a previous and current state
previously. An FD can have up to 8 different states for each direction,
each of which being the result of a 3-bit combination. These 3 bits
indicate a wish to access the FD, the readiness of the FD and the
subscription of the FD to the polling system.
This means that it will now be possible to remember the state of a
file descriptor across disable/enable sequences that generally happen
during forwarding, where enabling reading on a previously disabled FD
would result in forgetting the EAGAIN flag it met last time.
Several new state manipulation functions have been introduced or
adapted :
- fd_want_{recv,send} : enable receiving/sending on the FD regardless
of its state (sets the ACTIVE flag) ;
- fd_stop_{recv,send} : stop receiving/sending on the FD regardless
of its state (clears the ACTIVE flag) ;
- fd_cant_{recv,send} : report a failure to receive/send on the FD
corresponding to EAGAIN (clears the READY flag) ;
- fd_may_{recv,send} : report the ability to receive/send on the FD
as reported by poll() (sets the READY flag) ;
Some functions are used to report the current FD status :
- fd_{recv,send}_active
- fd_{recv,send}_ready
- fd_{recv,send}_polled
Some functions were removed :
- fd_ev_clr(), fd_ev_set(), fd_ev_rem(), fd_ev_wai()
The POLLHUP/POLLERR flags are now reported as ready so that the I/O layers
knows it can try to access the file descriptor to get this information.
In order to simplify the conditions to add/remove cache entries, a new
function fd_alloc_or_release_cache_entry() was created to be used from
pollers while scanning for updates.
The following pollers have been updated :
ev_select() : done, built, tested on Linux 3.10
ev_poll() : done, built, tested on Linux 3.10
ev_epoll() : done, built, tested on Linux 3.10 & 3.13
ev_kqueue() : done, built, tested on OpenBSD 5.2
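As a sketch, the three bits per direction combine like this (flag names
are illustrative, not necessarily the source's):

    /* one 3-bit state per direction, hence up to 8 states */
    #define FD_EV_ACTIVE  0x01  /* we want to perform I/O on the FD   */
    #define FD_EV_READY   0x02  /* the FD is ready (no EAGAIN so far) */
    #define FD_EV_POLLED  0x04  /* the FD is subscribed to the poller */

    /* e.g. fd_cant_recv() clears READY, fd_may_recv() sets it back,
     * fd_want_recv()/fd_stop_recv() toggle ACTIVE, and the poller
     * owns POLLED. */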
We're completely changing the way FDs will be polled. There will be no
more speculative I/O since we'll know the exact FD state, so these will
only be cached events.
First, let's fix a few field names which become confusing. "spec_e" was
used to store a speculative I/O event state. Now we'll store the whole
R/W states for the FD there. "spec_p" was used to store a speculative
I/O cache position. Now let's clearly call it "cache".
We're completely changing the way FDs will be polled. First, let's fix
a few field names which become confusing. "spec_e" was used to store a
speculative I/O event state. Now we'll store the whole R/W states for
the FD there.
Recent commit d7ad9f5 ("MAJOR: channel: add a new flag CF_WAKE_WRITE to
notify the task of writes") was not correct. It used to wake up the task
as soon as there was some write activity and the flag was set, even if there
were still some data to be forwarded. This resulted in process_session()
being called a lot when transferring chunk-encoded HTTP responses made of
very large chunks.
The purpose of the flag is to wake up only a task waiting for some
room and not the other ones, so it's totally counter-productive to
wake it up as long as there are data to forward because the task
will not be allowed to write anyway.
Also, the commit above was taking some risks by not considering
certain events anymore (eg: state != SI_ST_EST). While such events
are not used at the moment, if some new features were developed
in the future relying on these, it would be better that they could
be notified when subscribing to the WAKE_WRITE event, so let's
restore the condition.
Recent commit d7ad9f5 ("MAJOR: channel: add a new flag CF_WAKE_WRITE to
notify the task of writes") introduced this new CF_WAKE_WRITE flag that
an analyser which requires some free space to write must set if it wants
to be notified.
Unfortunately, some places were missed. More specifically, the
compression engine can rarely be stuck by a lack of output space,
especially when dealing with non-compressible data. It then has to
stop until some pending data are flushed and for this it must set
the CF_WAKE_WRITE flag. But these cases were missed by the commit
above.
Fortunately, this change was introduced very recently and never
released, so the impact was limited.
Huge thanks to Sander Klein who first reported this issue and who kindly
and patiently provided lots of traces and test data that made it possible
to reproduce, analyze, then fix this issue.
Patrick Hemmer reported that using unique_id_format and logs did not
report the same unique ID counter since commit 9f09521 ("BUG/MEDIUM:
unique_id: HTTP request counter must be unique!"). This is because
the increment was done while producing the log message, so it was
performed twice.
A better solution consists in fetching a new value once per request
and saving it in the request or session context for all of this
request's life.
It happens that sessions already have a unique ID field which is used
for debugging and reporting errors, and which differs from the one
sent in logs and unique_id header.
So let's change this to reuse this field to have coherent IDs everywhere.
As of now, a session gets a new unique ID once it is instantiated. This
means that TCP sessions will also benefit from a unique ID that can be
logged. And this ID is renewed for each extra HTTP request received on
an existing session. Thus, all TCP sessions and HTTP requests will have
distinct IDs that will be stable throughout their life, and coherent
between all places where they're used (logs, unique_id header,
"show sess", "show errors").
This feature is 1.5-specific, no backport to 1.4 is needed.
The fetch "req.ssl_ver" is not declared as explicit acl. If it is used
as implicit ACL, the acl engine detect SMP_T_UINT output type and choose
to use the default interger parser: pat_parse_int(). This fetch needs the
parser pat_parse_dotted_ver().
This patch declare explicit ACL named "req.ssl_ver" that use the good
parser function pat_parse_dotted_ver().
Checks used not to precisely report the errors that were detected at the
connection layer (eg: too many SSL connections). Using chk_report_conn_err()
makes this possible.
Now when a connection error happens, it is reported in the connection
so that upper layers know exactly what happened. This is particularly
useful with health checks and resources exhaustion.
Actually the value returned by this function is never used. All the
callers just check whether the result is non-zero. Before this patch, the
function returned the length of the produced content. This value is not
useful because it is returned twice: once in the return value and
once in the <binstrlen> argument. Now the function returns
the number of bytes consumed from <source>.
The pat_parse_* functions must return 0 on failure and the number of
elements consumed from **text on success. The function pat_parse_bin()
returned 0 or the parsed length instead, which caused a segfault. Simply
applying the double operator "!" on the result of pat_parse_bin() makes
the return value match the expected convention.
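In other words, the fix is a one-liner of this form (sketch only; the
helper name is made up for illustration) :

    /* normalize a parsed length into an element count */
    static int normalize_result(int parsed_len)
    {
        return !!parsed_len; /* 0 on failure, 1 element consumed on success */
    }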
Now we can more safely rely on the connection state to decide how to
drain and what to do when data are drained. Callers don't need to
manipulate the file descriptor's state anymore.
Note that it also removes the need for the fix ea90063 ("BUG/MEDIUM:
stream-int: fix the keep-alive idle connection handler") since conn_drain()
correctly sets the polling flags.
When an incoming shutdown or error is detected, we know that we
can safely close without disabling lingering. Do it in tcp_drain()
so that we don't have to do it from each and every caller.
It was not possible to know if the drain() function had hit an
EAGAIN, so now we change the API of this function to return :
< 0 if EAGAIN was met
= 0 if some data remain
> 0 if a shutdown was received
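A caller can then act on the result along these lines (a sketch; the
exact conn_drain() prototype may differ) :

    int ret = conn_drain(conn); /* hypothetical call site */
    if (ret < 0) {
        /* EAGAIN was met: keep polling and try again later */
    }
    else if (ret == 0) {
        /* some data remain: drain again upon next activity */
    }
    else {
        /* a shutdown was received: safe to close without lingering */
    }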
The accept loop used to force fd_poll_recv() even in places where it
was not completely appropriate (eg: unexpected errors). It does not
yet cause trouble but will do with the upcoming polling changes. Let's
use it only where relevant now. EINTR/ECONNABORTED do not result in
poll() anymore but the failed connection is simply skipped (this code
dates from 1.1.32 when error codes were first considered).
Some rare unexplained busy loops were observed on versions up to 1.5-dev19.
It happens that if a file descriptor happens to be disabled for both read and
write while it was speculatively enabled for both and this without creating a
new update entry, there will be no way to remove it from the speculative I/O
list until some other changes occur. It is suspected that a double sequence
such as enable_both/disable_both could have led to this situation where an
update cancels itself and does not clear the spec list in the poll loop.
While it is unclear what I/O sequence may cause this situation to arise, it
is safer to always add the FD to the update list if nothing could be done on
it so that the next poll round will automatically take care of it.
This is 1.5-specific, no backport is needed.
Recent commit abf08d9 ("BUG/MAJOR: connection: fix mismatch between rcv_buf's
API and usage") accidentely broke SSL by relying on an uninitialized value to
enter the read loop.
Many thanks to Cyril Bonté and Steve Ruiz for reporting this issue.
The value of the variable "appctx->ctx.map.ent" is used after the loop,
but its value has changed. The variable "value" is initialized and
contains the correct value.
This is a recent bug, no backport is needed.
Steve Ruiz reported some reproducible crashes with HTTP health checks
on a certain page returning a huge length. The traces he provided
clearly showed that the recv() call was performed twice for a total
size exceeding the buffer's length.
Cyril Bonté tracked down the problem to be caused by the full buffer
size being passed to rcv_buf() in event_srv_chk_r() instead of passing
just the remaining amount of space. Indeed, this change happened during
the connection rework in 1.5-dev13 with the following commit :
f150317 MAJOR: checks: completely use the connection transport layer
But one of the problems is also that the comments at the top of the
rcv_buf() functions suggest that the caller only has to ensure the
requested size doesn't overflow the buffer's size.
Also, these functions already have to care about the buffer's size to
handle wrapping free space when there are pending data in the buffer.
So let's change the API instead to more closely match what could be
expected from these functions :
- the caller asks for the maximum amount of bytes it wants to read ;
This means that only the caller is responsible for enforcing the
reserve if it wants to (eg: checks don't).
- the rcv_buf() functions fix their computations to always consider
this size as a max, and always perform validity checks based on
the buffer's free space.
As a result, the code is simplified and reduced, and made more robust
for callers which now just have to care about whether they want the
buffer to be filled or not.
Since the bug was introduced in 1.5-dev13, no backport to stable versions
is needed.
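Conceptually, the receiving functions now clamp the caller's request
themselves; a sketch with illustrative helper and field names :

    /* inside rcv_buf(): treat <count> strictly as a maximum */
    int room = buffer_free_space(buf); /* illustrative helper */
    if (count > room)
        count = room;                  /* validity check on free space */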
The accept4() Linux syscall requires _GNU_SOURCE on ix86, otherwise
it emits a warning. On other archs including x86_64, this problem
doesn't happen. Thanks to Charles Carter from Sigma Software for
reporting this.
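The fix simply consists in defining the macro before the system
includes :

    #define _GNU_SOURCE      /* required on ix86 to expose accept4() */
    #include <sys/socket.h>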
If the pattern is set as case insensitive, the string comparison
is executed twice: the first comparison is case-insensitive, the
second is case-sensitive.
This is a recent bug, no backport is needed.
A config where multiple servers have the same name in the same backend is
prone to a number of issues : logs are not really exploitable, stats get
really tricky and even harder to change, etc...
In fact, it can be safe to have the same name between multiple servers only
when their respective IDs are known and used. So now we detect this situation
and emit a warning for the first conflict detected per server if any of the
servers uses an automatic ID.
The code which enables tunnel mode or TCP transfers is rarely used
and at most once per session. Putting it in an unlikely() clause
reduces the length of the hot path of process_session() which is
already quite long, and also slightly reduces its overall size.
Some measurements show a steady gain of about 0.2% thanks to this.
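For reference, this relies on the classic __builtin_expect construct;
the condition below is illustrative :

    #define unlikely(x) __builtin_expect(!!(x), 0)

    if (unlikely(mode_is_tunnel)) { /* illustrative condition */
        /* rarely-taken tunnel/TCP setup, moved off the hot path */
    }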
This function is called twice per request, and does almost always nothing.
Better use an inline version to avoid entering it when we can.
About 0.5% additional performance was gained this way.
si_connect() used to only return SI_ST_CON. But it already detects
connection reuse and is the function which avoids calling connect().
So it already knows when a connection is valid and reused. Thus we make
it return SI_ST_EST when a connection is reused. This means that
connect_server() can return this state and sess_update_stream_int()
as well.
Thanks to this change, we don't need to leave process_session() in
SI_ST_CON state to immediately enter it again to switch to SI_ST_EST.
Implementing this removes one call to process_session() per request
in keep-alive mode. We're now at 2 calls per request, which is the
minimum (one for the request and another one for the response). The
number of calls to http_wait_for_response() has also dropped from 2
to one.
Tests indicate a performance gain of about 2.6% in request rate in
keep-alive mode. There should be no gain in http-server-close mode since
we don't use this faster path.
At the moment it is possible in sess_prepare_conn_req() to switch to the
established state when the target is an applet. But sess_update_stream_int()
will soon also have the ability to set the established state via
connect_server() when a connection is reused, leading to a synchronous
connect.
So prepare the code to handle this SI_ST_ASS -> SI_ST_EST transition, which
really matches what's done in the lower layers.
Currently there are 3 places in the code where t_connect is set after
switching to state SI_ST_EST, and a fourth one will soon come. Since
all these places lead to an immediate call to sess_establish() to
complete the session establishment, better move that measurement
there.
It's a bit hazardous to wipe out all channel flags, this flag should
be left intact as it protects against recursive calls. Fortunately,
we have no possibility to meet this situation with current applets,
but better fix it before it becomes an issue.
This bug has been there for a long time, but it doesn't seem worth
backporting the fix.
As soon as we connect to the server, we want to limit the number of
recvfrom() on the response path because most of the time a single
call will retrieve enough information.
At the moment this is only done in the HTTP response parser, after
some reads have already failed, which is too late. We need to do
that at the earliest possible instant. It was already done for the
request side by frontend_accept() for the first request, and by
http_reset_txn() for the next requests.
Thanks to this change, there are no more failed recvfrom() calls in
keep-alive mode.
Since commit 6b66f3e ([MAJOR] implement autonomous inter-socket forwarding)
introduced in 1.3.16-rc1, we've been relying on a stupid mechanism to wake
up the task after a write, which was an exact copy-paste of the reader side.
The principle was that if we empty a buffer and there's no forwarding
scheduled or if the *producer* is not in a connected state, then we wake
the task up.
That does not make any sense. It happens to wake up too late sometimes (eg,
when the request analyser waits for some room in the buffer to start to
work), and leads to unneeded wakeups in client-side keep-alive, because
the task is woken up when the response is sent, while the analysers are
simply waiting for a new request.
In order to fix this, we introduce a new channel flag : CF_WAKE_WRITE. It
is designed so that an analyser can explicitly request being notified when
some data were written. It is used only when the HTTP request or response
analysers need to wait for more room in the buffers. It is automatically
cleared upon wake up.
The flag is also automatically set by the functions which try to write into
a buffer from an applet when they fail (bi_putblk() etc...).
That allows us to remove the stupid condition above and avoid some wakeups.
In http-server-close and in http-keep-alive modes, this reduces from 4 to 3
the average number of wakeups per request, and increases the overall
performance by about 1.5%.
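An analyser lacking room then subscribes explicitly, along these lines
(sketch; the room check is illustrative) :

    if (no_room_in_buffer) {         /* illustrative condition */
        chn->flags |= CF_WAKE_WRITE; /* wake me once some data are written */
        return 0;                    /* yield; the flag is cleared on wakeup */
    }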
This reverts commit f3221f99ac.
Igor reported some very strange breakage of his stats page which is
clearly caused by the chunking, though I don't see at first glance
what could be wrong. Better revert it for now.
In theory the principle is simple as we just need to send HTTP chunks
if the client is 1.1 compatible. In practice it's harder because we
have to append a CR LF after each block of data and we're never sure
to have the room for this. In order not to have to deal with this, we
instead send the CR LF prior to each chunk size. The only issue is for
the first chunk and for this reason we avoid sending the empty header
line when using chunked encoding.
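For illustration, the resulting byte stream looks like this (CRLF shown
explicitly; sizes are examples) :

    HTTP/1.1 200 OK<CRLF>
    Transfer-Encoding: chunked<CRLF>   <- no empty line emitted here
    <CRLF>1000<CRLF>                   <- leading CRLF ends the headers and
    ... 4096 bytes of data ...            precedes the first chunk size
    <CRLF>0<CRLF><CRLF>                <- last chunk, framed the same way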
Since the applet rework and the removal of the inter-task applets,
we must not clear the stream-interface's owner task anymore otherwise
we risk a crash when maintaining keep-alive with an applet. This is
not possible right now so there is no impact yet, but this bug is not
easy to track down. No backport is needed.
When enabling a tracked server via the web interface, we must first
check if the server tracks another one and the state of this tracked
server, just like the command line does.
Failure to do so causes incorrect logs to be emitted when the server
is enabled :
[WARNING] 361/212556 (2645) : Server bck2/srv3 is DOWN via bck2/srv2. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 361/212603 (2645) : Server bck2/srv3 is DOWN for maintenance.
--> enable server now
[WARNING] 361/212606 (2645) : Server bck2/srv3 is UP (leaving maintenance).
With this fix, it's correct now :
[WARNING] 361/212805 (2666) : Server bck2/srv3 is DOWN via bck2/srv2. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 361/212813 (2666) : Server bck2/srv3 is DOWN for maintenance.
--> enable server now
[WARNING] 361/212821 (2666) : Server bck2/srv3 is DOWN via bck2/srv2. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
It does not seem necessary to backport this fix, considering that it
depends on extremely fragile behaviours, there are more risks of breakage
caused by a backport than the current inconvenience.
Recent fix 02541e8 (BUG/MEDIUM: checks: servers must not start in
slowstart mode) failed to consider one case : a server which is not
checked at all can be disabled and has to support being enabled
again. So we must also enter the set_server_up() function when the
checks are totally disabled.
No backport is needed.
We used to unconditionally disable client-side polling after the client
has posted its request. The goal was to avoid subscribing the file
descriptor to the poller for nothing.
This is perfect for the HTTP close mode where we know we won't have to
read on the client side anymore. However, when keep-alive is maintained
with the client, this makes the situation worse. Indeed, after the first
response, we'll have to wait for the client to send a next request and
since this is never immediate, we'll certainly poll. So what happens is
that polling is enabled after a response and disabled after a request,
so the polling is constantly alternating, which is very expensive with
epoll_ctl().
The solution implemented in this patch consists in only disabling the
polling if the client-side is not in keep-alive mode. That way we have
the best of both worlds. In close, we really close, and in keep-alive,
we poll only once.
The performance gained by this change is important, with haproxy jumping
from 158kreq/s to 184kreq/s (+16%) in HTTP keep-alive mode on a machine
which at best does 222k/s in raw TCP mode.
With this patch and the previous one, a keep-alive run with a fast
enough server (or enough concurrent connections to cover the connect
time) does no epoll_ctl() anymore during a run of ab -k. The net
measured gain is 19%.
Compression is normally disabled on HTTP/1.0 since it does not
support chunked encoded responses. But the test was incomplete, and
Bertrand Jacquin reported a case where if the server responded using
1.1 to a 1.0 request, then haproxy still used to compress (and of
course the client could not understand the response).
No backport is needed, this is 1.5-specific.
In HTTP keep-alive mode, if we receive a 401, we still have a chance
of being able to send the visitor again to the same server over the
same connection. This is required by some broken protocols such as
NTLM, and anyway whenever there is an opportunity for sending the
challenge to the proper place, it's better to do it (at least it
helps with debugging).
If a server is switched to maintenance mode while a check is in progress,
the successful completion of the check must not switch it back up. This
is still a consequence of using the same function set_server_up() for
every state change. Bug reported by Igor at owind.
This fix should be backported to 1.4 which is affected as well.
This reverts commit 2f877304ef.
This commit is OK for clear text traffic but causes trouble with SSL
when buffers are smaller than SSL buffers. Since the issue it addresses
will be gone once the polling redesign is complete, there's no reason
for trying to workaround temporary inefficiencies. Better remove it.
If we reuse a server-side connection, we must not reinitialize its context nor
try to enable send_proxy. At the moment HTTP keep-alive over SSL fails on the
first attempt because the SSL context was cleared, so it only worked after a
retry.
Commit 2737562 (MEDIUM: stream-int: implement a very simplistic idle
connection manager) implemented an idle connection handler. In the
case where all data is drained from the server, it fails to disable
polling, resulting in a busy spinning loop.
Thanks to Sander Klein and Guillaume Castagnino for reporting this bug.
No backport is needed.
Idle connections are not monitored right now. So if a server closes after
a response without advertising it, it won't be detected until a next
request wants to use the connection. This is a bit problematic because
it unnecessarily maintains file descriptors and sockets in an idle
state.
This patch implements a very simple idle connection manager for the stream
interface. It presents itself as an I/O callback. The HTTP engine enables
it when it recycles a connection. If a close or an error is detected on the
underlying socket, it tries to drain as much data as possible from the socket,
detect the close and responds with a close as well, then detaches from the
stream interface.
In 1.5-dev20, commit bb9665e (BUG/MEDIUM: checks: ensure we can enable
a server after boot) tried to fix a side effect of having both regular
checks and agent checks condition the up state propagation to servers.
Unfortunately it was still not fine because after this fix, servers
which make use of slowstart start in this mode. We must not check
the agent's health if agent checks are not enabled, and likewise,
we must not check the regular check's health if they are not enabled.
Reading the code, it seems like we could avoid entering this function
at all if (s->state & SRV_RUNNING) is not satisfied. Let's reserve
this for a later patch if needed.
Thanks to Sander Klein for reporting this abnormal situation.
Since commit b805f71 (MEDIUM: sample: let the cast functions set their
output type), the output type of a fetch function is automatically
considered and passed to the next converter. A bug introduced in
1.5-dev9 with commit f853c46 (MEDIUM: pattern/acl: get rid of
temp_pattern in ACLs) was revealed by this last one : the output type
remained string instead of UINT, causing the cast function to try to
cast the contents and to crash on a NULL deref.
Note: this fix was made after a careful review of all fetch functions.
A few non-trivial ones had their comments amended to clearly indicate
the output type.
There are very few users of http_proxy, and all of them complain about
the same thing : the request is passed unmodified to the server (in its
proxy form), and it is not possible to fix it using reqrep rules because
http_proxy happens after.
So let's have http_proxy fix the URL it has analysed to get rid of the
scheme and the host part. This will do what users of this feature expect.
A null pointer assignment was missing after a free in commit 7148ce6 (MEDIUM:
pattern: Extract the index process from the pat_parse_*() functions), causing
a double free after loading a file of string patterns.
This bug was introduced in 1.5-dev20, no backport is needed.
Thanks to Sander Klein for reporting this bug and providing the config
needed to trigger it.
The memset() was put here to corrupt memory for a debugging test,
it's not needed anymore and was unfortunately committed. It does
not harm anyway, it probably just slightly affects performance.
On several browsers, the monospace font used to display numbers in tips
is not very readable. Since the numbers are aligned anyway, there is too
little benefit in using such a font.
In HTTP keep-alive, if we face a connection error to the server while sending
the request, the error should not be reported, and the client-side connection
should simply be closed, so that client knows it can retry. This can happen if
the server has too short a keep-alive timeout and quits at the same moment the
new request comes in.
When the load balancing algorithm in use is not deterministic, and a previous
request was sent to a server to which haproxy still holds a connection, it is
sometimes desirable that subsequent requests on a same session go to the same
server as much as possible. Note that this is different from persistence, as
we only indicate a preference which haproxy tries to apply without any form
of guarantee. The real use is for keep-alive connections sent to servers. When
this option is used, haproxy will try to reuse the same connection that is
attached to the server instead of rebalancing to another server, causing a
close of the connection. This can make sense for static file servers. It does
not make much sense to use this in combination with hashing algorithms.
This commit allows an existing server-side connection to be reused if
it matches the same target. Basic controls are performed ; right now
we do not allow reusing a connection when dynamic source binding is
in use or when the destination address or port is dynamic (eg: proxy
mode). Later we'll have to also disable connection sharing when PROXY
protocol is being used or when non-idempotent requests are processed.
When a connection to the server is complete, if the transaction
requests keep-alive mode, we don't shut the connection and we just
reinitialize the stream interface in order to be able to reuse the
connection afterwards.
Note that the server connection count is decremented, just like the
backend's, and that we still try to wake up waiters. But that makes
sense considering that we'll eventually be able to immediately pass
idle connections to waiters.
When allocating a new connection, only the caller knows whether it's
acceptable to reuse the previous one or not. Let's pass this information
to si_alloc_conn() which will do the cleanup if the connection is not
acceptable.
This new option enables HTTP keep-alive processing on the connections.
It can be overwritten by http-server-close, httpclose and forceclose.
Right now full-chain keep-alive is not yet implemented, but we need
the option to work on it. The doc will come later.
It's common to observe a recv() call on the client side just after
the connect() has been issued to the server side when running in
server close mode. The reason is that the whole request has been sent
and the shutw() has been queued in the channel, so the request message
switches to the MSG_CLOSED state, which didn't disable reading. Let's
do it now. That way the reading will only be re-enabled after the
response is transferred to the client. However if abortonclose is set,
we still leave it enabled.
strace shows a lot of EAGAIN on small response messages. This
is caused by the fact that the READ_DONTWAIT flag is not set
on response message, it's only there when we want to flush
pending data.
For small responses, it's a waste of CPU cycles to call recv()
for nothing since most of the time, everything we'll need will
be in the first response. Also, this will offer more opportunities
for using splice() to transfer data.
Right now we see many places doing their own setsockopt(SO_LINGER).
Better only do it just before the close() in fd_delete(). For this
we add a new flag on the file descriptor, indicating if it's safe or
not to linger. If not (eg: after a connect()), then the setsockopt()
call is automatically performed before a close().
The flag automatically turns to safe when receiving a read0.
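The mechanism can be sketched like this; the flag check is illustrative,
while the setsockopt() call is the standard way to disable lingering :

    /* in fd_delete(), just before closing */
    if (!fd_is_linger_safe(fd)) {     /* illustrative check */
        struct linger nolinger = { .l_onoff = 1, .l_linger = 0 };
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &nolinger, sizeof(nolinger));
    }
    close(fd);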
conn_xprt_ready() reports if the transport layer is ready.
conn_ctrl_ready() reports if the control layer is ready.
The stream interface uses si_conn_ready() to report that the
underlying connection is ready. This will be used for connection
reuse in keep-alive mode.
Since the recent addition of map updates, haproxy does not build anymore
on Solaris because "s_addr" is a #define :
src/dumpstats.c: In function `stats_map_lookup':
src/dumpstats.c:4688: error: syntax error before '.' token
src/dumpstats.c:4781: error: `S_un' undeclared (first use in this function)
src/dumpstats.c:4781: error: (Each undeclared identifier is reported only once
src/dumpstats.c:4781: error: for each function it appears in.)
make: *** [src/dumpstats.o] Error 1
Simply rename the variable.
The is* macros must not be passed a plain char on Solaris; unsigned char
is OK. Casting char to int is wrong as well since we can get a negative value.
src/log.c: In function `parse_logformat_string':
src/log.c:454: warning: subscript has type `char'
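The portable form casts through unsigned char first :

    #include <ctype.h>

    /* wrong: isspace(*p) or isspace((int)*p) when *p may be negative */
    if (isspace((unsigned char)*p)) /* correct on all platforms */
        p++;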
Gcc 3.4 warns that mask may be used uninitialized in pattern.c. This
is wrong since it's used in the same condition as its assignment,
although it's not necessarily obvious for the compiler. Fix this by
initializing the value.
This was introduced by recent commit 01cdcd4a so no backport is needed.
Since recent commit f79c817 (MAJOR: connection: add two new flags to
indicate readiness of control/transport) and the surrounding commits,
the session initialization has been slightly delayed and the control
layer of the connection is not yet initialized when processing the
rules.
We need to move that minimal initialization a bit above.
The bug was introduced with latest changes, no backport is needed.
If a server is disabled in configuration and another one tracks it,
this last one must not inherit the MAINT flag otherwise it needs to
be explicitly enabled afterwards. Just remove this to fix the issue.
Since commit 58c3297 (MEDIUM: Set rise and fall of agent checks to 1),
due to a bogus condition, it became impossible to re-enable a server
that was disabled in the configuration if no agent was enabled. The
reason is that in this case, the agent's health was zero while the
condition expected it to be at least one to consider the action.
Let's fix this by only considering the health of checks that are enabled.
The agent is able to retrieve some weight information from the server
and will eventually be able to force the server into maintenance mode.
It doesn't seem logical to have it depend on the health check being
configured, as for some servers it might very well make sense to only
fetch the weight from the server's load regardless of the health.
So let's stop disabling the agent checks when health checks are disabled.
Till now, a configuration required at least one health check in the
whole config file to create the agent tasks. Now we start them even
if no health check is enabled.
Health checks can now be paused. This is the status they get when the
server is put into maintenance mode, which is more logical than relying
on the server's state at some places. It will be needed to allow agent
checks to run when health checks are disabled (currently not possible).
start_checks() only used to consider the health checks intervals to
compute the start interval, so if an agent had a faster check than
all health checks, it would be significantly delayed.
Having the check state partially stored in the server doesn't help.
Some functions such as srv_getinter() rely on the server being checked
to decide what check frequency to use, instead of relying on the check
being configured. So let's get rid of SRV_CHECKED and SRV_AGENT_CHECKED
and only use the check's states instead.
At the moment, health checks and agent checks are tied : no agent
check is emitted if no health check is enabled. Other parameters
are considered in the condition for letting checks run. It will
help us selectively enable checks (agent and regular ones) and
know whether they're enabled/disabled and configured or not. Now
we can already emit an error when trying to enable an unconfigured
agent.
The flag CHK_STATE_RUNNING is misleading as one may believe it means
the state is enabled (just like SRV_RUNNING). Let's rename these two
flags CHK_ST_INPROGRESS and CHK_ST_DISABLED.
We used to have up to 4 sets of flags which were almost all exclusive
to report a check result. And the names were inherited from the old
server states, adding to the confusion. Let's replace that with an
enum handling only the possible combinations :
SRV_CHK_UNKNOWN => CHK_RES_UNKNOWN
SRV_CHK_FAILED => CHK_RES_FAILED
SRV_CHK_PASSED => CHK_RES_PASSED
SRV_CHK_PASSED | SRV_CHK_DISABLE => CHK_RES_CONDPASS
Server tracking uses the same "tracknext" list for servers tracking
another one and for the servers being tracked. This caused an issue
which was fixed by commit f39c71c ([CRITICAL] fix server state tracking:
it was O(n!) instead of O(n)), consisting in ensuring that a server is
being checked before walking down the list, so that we don't propagate
the up/down information via servers being part of the track chain.
But the root cause is the fact that all servers share the same list.
The correct solution consists in having a list head for the tracked
servers and a list of next tracking servers. This simplifies the
propagation logic, especially for the case where status changes might
be passed to individual servers via the CLI.
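A sketch of the resulting layout, with illustrative field names :

    struct server {
        struct server *track;     /* the server we are tracking, if any */
        struct server *trackers;  /* head of the list of servers tracking us */
        struct server *tracknext; /* next server in our tracker's list */
    };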
The get_trash_chunk() function is convenient and is sometimes used even
to get a temporary string. While the chunk is initialized, the string
may contain some random garbage that some code might retrieve if it uses
chunk->str directly without checking ->len. This is what happened in checks
after commit 25e2ab5 (MEDIUM: checks: centralize error reporting). It's not
easy to guess it at first so better pre-initialize the string with a zero.
It's becoming increasingly difficult to ignore unwanted function returns in
debug code with gcc. Now even when you try to work around it, it suggests a
way to write your code differently. For example :
src/frontend.c:187:65: warning: if statement has empty body [-Wempty-body]
if (write(1, trash.str, trash.len) < 0) /* shut gcc warning */;
^
src/frontend.c:187:65: note: put the semicolon on a separate line to silence this warning
1 warning generated.
This is totally unacceptable, this code already had to be written this way
to shut it up in earlier versions. And now it comments the form ? What's the
purpose of the C language if you can't write anymore the code that does what
you want ?
Emeric proposed to just keep a global variable to drain such useless results
so that gcc stops complaining all the time it believes people who write code
are monkeys. The solution is acceptable because the useless assignment is done
only in debug code so it will not impact performance. This patch implements
this, until gcc becomes even "smarter" to detect that we tried to cheat.
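The workaround looks like this (sketch; the variable name is
illustrative) :

    /* global sink for return values we deliberately ignore in debug code */
    static int gcc_result_sink;

    gcc_result_sink = write(1, trash.str, trash.len); /* no more warning */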
The ACL expression parser recently became a huge mess like a
spaghetti plate. The keyword is looked up at the beginning, then
sample fetches are processed, then an expression is initialized,
then arguments and converters are parsed but only if the keyword
was an ACL one, etc... Lots of "if" and redundant variables
everywhere making it hard to read and follow.
Let's move the args/conv parsing just after the keyword lookup.
At least now it's consistent that when we leave this if/else
statement, we have a sample expression initialized and fully
parsed wherever the elements came from.
Just like for the last commit, we need to fix the ACL argument parser so
that it lets the lower layer do the job of referencing unresolved arguments
and correctly report the type of missing arguments.
Some errors may be reported about missing mandatory arguments when some
sample fetch arguments are marked as mandatory and implicit (eg: proxy
names such as in table_cnt or be_conn).
In practice the argument parser already handles all the situations very
well, it's just that the sample fetch parser wants to go beyond its role
and performs some controls that it should not. Simply removing these
useless controls lets make_arg_list() create the correct argument types
when such types are encountered.
This regression was introduced by the recent use of sample_parse_expr()
in ACLs which makes use of its own argument parser, while previously
the arguments were parsed in the ACL function itself. No backport is
needed.
Doing so ensures that we're consistent between all the functions in the whole
chain. This is important so that we can extract the argument parsing from this
function.
This patch adds map manipulation commands to the socket interface.
add map <map> <key> <value>
Add the value <value> in the map <map>, at the entry corresponding to
the key <key>. This command does not verify if the entry already
exists.
clear map <map>
Remove entries from the map <map>
del map <map> <key>
Delete all the map entries corresponding to the <key> value in the map
<map>.
set map <map> <key> <value>
Modify the value corresponding to each key <key> in a map <map>. The
new value is <value>.
show map [<map>]
Dump info about map converters. Without argument, the list of all
available maps is returned. If a <map> is specified, its content is
dumped.
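For example, over a configured stats socket (the paths below are
illustrative) :

    echo "add map /etc/haproxy/geo.map 10.0.0.1 eu-west" | \
        socat stdio /var/run/haproxy.sock
    echo "show map /etc/haproxy/geo.map" | \
        socat stdio /var/run/haproxy.sock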
We'll need to pass patterns on the CLI for lookups. Till now there was no
need for a backslash, so it's time to support them just like in the
config file.
With this patch, patterns can be compiled for two modes :
- match
- lookup
The match mode is used for example in ACLs or maps. The lookup mode
is used to lookup a key for pattern maintenance. For example, looking
up a network is different from looking up one address belonging to
this network.
A special case is made for regex. In lookup mode they return the input
regex string and do not compile the regex.
Now, the pat_parse_*() functions parse the incoming data. The input
"pattern" struct can be preallocated. If the parser needs to add some
buffers, it allocates memory.
The function pattern_register() runs the call to the parser, processes
the key indexation and associates the "sample_storage" used by maps.
This patch removes the compatibility check between the input type and
the match method. Now, it checks whether a cast from the input type to
the output type exists, and the pattern_exec_match() function applies
casts before each pattern matching.
This is used later for increasing the compatibility with incoming
sample types. When multiple compatible types are supported, one
is arbitrarily used (eg: UINT).
Applying inet_pton() to input contents is not reliable because the
function requires a zero-terminated string. While inet_pton() will
stop when contents do not match an IPv6 address anymore, it could
theoretically read past the end of a buffer if the data to be converted
was at the end of a buffer (this cannot happen right now thanks to
the reserve at the end of the buffer). At least the conversion does
not work.
Fix this by using buf2ip6() instead, which copies the string into a
padded area.
This bug came with recent commit b805f71 (MEDIUM: sample: let the
cast functions set their output type), no backport is needed.
There is a mix-up between the input type of the data and the input type
of the map file. This mix-up causes all pattern matching functions based
on "string" (reg, beg, end, ...) to fail.
This bug came with commit d5f624d (MEDIUM: sample: add the "map" converter),
no backport is needed.
The agent refrains from reading the server's response until the server
closes, but if the server waits for the client to close, the response
is never read. Let's try to fetch a whole line before deciding to wait
more.
The function stktable_init() will return 0 if create_pool() returns NULL. Since
the returned value of this function is ignored, HAProxy will crash if the pool
of stick table is NULL and stksess_new() is called to allocate a new stick
session. It is a better choice to check the returned value and make HAProxy
exit with an alert message if any error is caught.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
The original code is indented with spaces and not aligned with the previous
line. The convention in HAProxy is to indent with tabs.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
We must not report incomplete data if the buffer is not full, otherwise
we can abort some processing on the stats socket when dealing with massive
amounts of commands.
There is a compiler warning after commit 1b6e75fa84 ("MEDIUM: haproxy-
systemd-wrapper: Use haproxy in same directory"):
src/haproxy-systemd-wrapper.c: In function ‘locate_haproxy’:
src/haproxy-systemd-wrapper.c:28:10: warning: ignoring return value of ‘readlink’, declared with attribute warn_unused_result [-Wunused-result]
Fix the compiler warning by checking the return value of readlink().
SSL and keep-alive will need to be able to fail on allocation errors,
and the stream interface did not allow to report such a cause. The flag
will then be "RC" as already documented.
This reduces its size which is not reused by anything else. However it
will significantly improve the debugger's output since we'll now get
real state values.
The default case had to be enabled in the parsers because gcc tries
to optimize the switch/case and noticed some values were missing from
the enums and emitted a warning.
Here again we had some oversized and misaligned entries. The method
and the status don't need 4 bytes each, and there was a hole after
the status that does not exist anymore. That's 8 additional bytes
saved from http_txn and as much for the session.
Also some fields were slightly moved to present better memory access
patterns resulting in a steady 0.5% performance increase.
When dumping a session, it can be useful to know what applet it is
connected to instead of having just the appctx pointer. We also
report st0/st1/st2 to help debugging.
Currently, all states, all status codes and a few constants used in
the peers are all prefixed with "PEER_SESSION_". It's confusing because
there is no way to know which one is a state, a status code or anything
else. Thus, let's rename them this way :
PEER_SESS_ST_* : states
PEER_SESS_SC_* : status codes
Additionally the states have been numbered from zero and contiguously.
This will allow us not to have to deal with the stream interface
initialization anymore and to ease debugging using enums.
Some applet users don't need to initialize their applet, they just want
to route the traffic there just as if it were a server. Since applets
are now connected to from session.c, let's simply ensure that when
connecting, the applet in si->end matches the target, and allocate
one there if it's not already done. In case of error, we force the
status code to resource and connection so that it's clear that it
happens because of a memory shortage.
From now on, a call to stream_int_register_handler() causes a call
to si_alloc_appctx() and returns an initialized appctx for the
current stream interface. If one was previously allocated, it is
released. If the stream interface was attached to a connection, it
is released as well.
The appctx are allocated from the same pools as the connections, because
they're substantially smaller in size, and we can't have both a connection
and an appctx on an interface at any moment.
In case of memory shortage, the call may return NULL, which is already
handled by all consumers of stream_int_register_handler().
The field appctx was removed from the stream interface since we only
rely on the endpoint now. On 32-bit, the stream_interface size went down
from 108 to 44 bytes. On 64-bit, it went down from 144 to 64 bytes. This
represents a memory saving of 160 bytes per session.
It seems that a later improvement could be to move the call to
stream_int_register_handler() to session.c for most cases.
The task returned by stream_int_register_handler() is never used, however we
always need to access the appctx afterwards. So make it return the appctx
instead. We already plan for it to fail, which is the reason for the addition
of a few tests and the possibility for the HTTP analyser to return a status
code 500.
We're about to remove si->appctx, so first let's replace all occurrences
of its usage with a dynamic extract from si->end. A lot of code was changed
by search-n-replace, but the behaviour was intentionally not altered.
The code surrounding calls to stream_int_register_handler() was slightly
changed since we can only use si->end *after* the registration.
We used to have two very similar functions for sending a PROXY protocol
line header. The reason is that the default one relies on the stream
interface to retrieve the other end's address, while the "local" one
performs a local address lookup and sends that instead (used by health
checks).
Now that the send_proxy_ofs is stored in the connection and not the
stream interface, we can make the local_send_proxy rely on it and
support partial sends. This also simplifies the code by removing the
local_send_proxy function, making health checks use send_proxy_ofs,
resulting in the removal of the CO_FL_LOCAL_SPROXY flag, and the
associated test in the connection handler. The other flag,
CO_FL_SI_SEND_PROXY was renamed without the "SI" part so that it
is clear that it is not dedicated anymore to a usage with a stream
interface.
Till now the send_proxy_ofs field remained in the stream interface,
but since the dynamic allocation of the connection, it makes a lot
of sense to move that into the connection instead of the stream
interface, since it will not be statically allocated for each
session.
Also, it turns out that moving it to the connection fills an alignment
hole on 64 bit architectures so it does not consume more memory, and
removing it from the stream interface was an opportunity to correctly
reorder fields and reduce the stream interface's size from 160 to 144
bytes (-10%). This is 32 bytes saved per session.
The outgoing connection is now allocated dynamically upon the first attempt
to touch the connection's source or destination address. If this allocation
fails, we fail on SN_ERR_RESOURCE.
As we didn't use si->conn anymore, it was removed. The endpoints are released
upon session_free(), on the error path, and upon a new transaction. That way
we are able to carry the existing server's address across retries.
The stream interfaces are not initialized anymore before session_complete(),
so we could even think about allocating them dynamically as well, though
that would not provide much savings.
The session initialization now makes use of conn_new()/conn_free(). This
slightly simplifies the code and makes it more logical. The connection
initialization code is now shorter by about 120 bytes because it's done
at once, allowing the compiler to remove all redundant initializations.
The si_attach_applet() function now takes care of first detaching the
existing endpoint, and it is called from stream_int_register_handler(),
so we can safely remove the calls to si_release_endpoint() in the
application code around this call.
A call to si_detach() was made upon stream_int_unregister_handler() to
ensure we always free the allocated connection if one was allocated in
parallel to setting an applet (eg: detect HTTP proxy while proceeding
with stats maybe).
si_prepare_conn() is not appropriate in our case as it both initializes and
attaches the connection to the stream interface. Due to the asymmetry between
accept() and connect(), it causes some fields such as the control and transport
layers to be reinitialized.
Now that we can separately initialize these fields using conn_prepare(), let's
break this function to only attach the connection to the stream interface.
Also, by analogy, si_prepare_none() was renamed si_detach(), and
si_prepare_applet() was renamed si_attach_applet().
We don't want to assign the control nor transport layers anymore
at the same time as the data layer, because it prevents one from
keeping existing settings when reattaching a connection to an
existing stream interface.
Let's have conn_attach() replace conn_assign() for this purpose.
Thus, conn_prepare() + conn_attach() do exactly the same as the
previous conn_assign().
Now that we can assign conn->xprt regardless of the initialization state,
we can reintroduce conn_prepare() to set only the protocol, the transport
layer and initialize the transport layer's state.
The first function is used to (re)initialize a stream interface and
the second to force it into a known state. These are intended for
cleaning up the stream interface initialization code in session.c
and peers.c and avoiding future issues with missing initializations.
Currently the control and transport layers of a connection are supposed
to be initialized when their respective pointers are not NULL. This will
not work anymore when we plan to reuse connections, because there is an
asymmetry between the accept() side and the connect() side :
- on accept() side, the fd is set first, then the ctrl layer then the
transport layer ; upon error, they must be undone in the reverse order,
then the FD must be closed. The FD must not be deleted if the control
layer was not yet initialized ;
- on the connect() side, the fd is set last and there is no reliable way
to know if it has been initialized or not. In practice it's initialized
to -1 first but this is hackish and supposes that local FDs only will
be used forever. Also, there are even less solutions for keeping trace
of the transport layer's state.
Also it is possible to support delayed close() when something (eg: logs)
tracks some information requiring the transport and/or control layers,
making it even more difficult to clean them.
So the proposed solution is to add two flags to the connection :
- CO_FL_CTRL_READY is set when the control layer is initialized (fd_insert)
and cleared after it's released (fd_delete).
- CO_FL_XPRT_READY is set when the transport layer is initialized (xprt->init)
and cleared after it's released (xprt->close).
The functions have been adapted to rely on this and not on the pointers
anymore. conn_xprt_close() was unused and dangerous : it did not close
the control layer (eg: the socket itself) but still marks the transport
layer as closed, preventing any future call to conn_full_close() from
finishing the job.
The problem comes from conn_full_close() in fact. It needs to close the
xprt and ctrl layers independently. After that we're still having an issue :
we don't know based on ->ctrl alone whether the fd was registered or not.
For this we use the two new flags CO_FL_XPRT_READY and CO_FL_CTRL_READY. We
now rely on this and not on conn->xprt nor conn->ctrl anymore to decide what
remains to be done on the connection.
In order not to miss some flag assignments, we introduce conn_ctrl_init()
to initialize the control layer, register the fd using fd_insert() and set
the flag, and conn_ctrl_close() which unregisters the fd and removes the
flag, but only if the transport layer was closed.
Similarly, at the transport layer, conn_xprt_init() calls ->init and sets
the flag, while conn_xprt_close() checks the flag, calls ->close and clears
the flag, regardless xprt_ctx or xprt_st. This also ensures that the ->init
and the ->close functions are called only once each and in the correct order.
Note that conn_xprt_close() does nothing if the transport layer is still
tracked.
conn_full_close() now simply calls conn_xprt_close() then conn_ctrl_close()
in turn, which do nothing if CO_FL_XPRT_TRACKED is set.
In order to handle the error path, we also provide conn_force_close() which
ignores CO_FL_XPRT_TRACKED and closes the transport and the control layers
in turn. All relevant instances of fd_delete() have been replaced with
conn_force_close(). Now we always know what state the connection is in and
we can expect to split its initialization.
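The close logic can thus be sketched as follows (simplified; the fd
field access is illustrative) :

    static void conn_ctrl_close_sketch(struct connection *conn)
    {
        /* only release the fd once the transport layer is gone */
        if ((conn->flags & CO_FL_CTRL_READY) &&
            !(conn->flags & CO_FL_XPRT_READY)) {
            fd_delete(conn->t.sock.fd); /* field name illustrative */
            conn->flags &= ~CO_FL_CTRL_READY;
        }
    }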
Everywhere conn_prepare() is used, the call to conn_init() has already
been done. We can now safely replace all instances of conn_prepare()
with conn_assign() which does not reset the transport layer, and remove
conn_prepare().
In order to reduce the dependency over stream-interfaces, we now
attach the incoming connection to the embryonic session's target
instead of the stream-interface's connection. This means we won't
need to initialize stream interfaces anymore after we implement
dynamic connection allocation. The session's target is reset to
NULL after the session has been converted to a complete session.
When we know we're not going to use a connection on a stream interface
because we're using an applet instead, do not allocate a connection, or
release the preallocated one. We do that for peers and CLI only at the
moment, and not for HTTP stats which in the future might be adapted to
support keep-alive.
The connection pointer is simply set to NULL, which pool_free2() already
supports.
The connection will only remain there as a pre-allocated entity whose
goal is to be placed in ->end when establishing an outgoing connection.
All connection initialization can be made on this connection, but all
information retrieved should be applied to the end point only.
This change is huge because there were many users of si->conn. Now the
only users are those who initialize the new connection. The difficulty
appears in a few places such as backend.c, proto_http.c, peers.c where
si->conn is used to hold the connection's target address before assigning
the connection to the stream interface. This is why we have to keep
si->conn for now. A future improvement might consist in dynamically
allocating the connection when it is needed.
This function makes no sense anymore and will cause trouble to convert
the remains of connection/applet to end points. Let's replace it now
with its contents.
The long-term goal is to have a context for applets as an alternative
to the connection and not as a complement. At the moment, the context
is still stored into the stream interface, and we only put a pointer
to the applet's context in si->end, initialize the context with object
type OBJ_TYPE_APPCTX, and this allows us not to allocate an entry when
deciding to switch to an applet.
A special care is taken to never dereference si->conn anymore when
dealing with an applet. That's why it's important that si->end is
always set to the proper type :
si->end == NULL => not connected to anything
*si->end == OBJ_TYPE_APPCTX => connected to an applet
*si->end == OBJ_TYPE_CONN => real connection (server, proxy, ...)
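Code inspecting an endpoint can then be sketched as :

    /* dispatch on the endpoint's object type (names as in the text above) */
    if (!si->end) {
        /* not connected to anything */
    }
    else if (*si->end == OBJ_TYPE_APPCTX) {
        /* applet endpoint: safe to use the applet context */
    }
    else if (*si->end == OBJ_TYPE_CONN) {
        /* real connection: safe to dereference the connection */
    }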
The session management code used to check the applet from the connection's
target. Now it uses the stream interface's end point and does not touch the
connection at all. Similarly, we stop checking the connection's addresses
and file descriptors when reporting the applet's status in the stats dump.
Since last commit, we now have a pointer to the applet in the
applet context. So we don't need the si->release function pointer
anymore, it can be extracted from applet->applet.release. At many
places, the ->release function was still tested for real connections
while it is only limited to applets, so most of them were simply
removed. For the remaining valid uses, a new inline function
si_applet_release() was added to simplify the check and the call.
si_prepare_embedded() was used both to attach an applet and to detach
anything from a stream interface. Split it into si_prepare_none() to
detach and si_prepare_applet() to attach an applet.
si->conn->target is now assigned from within these two functions instead
of their respective callers.
Since this is the applet context, call it ->appctx to avoid the confusion
with the pointer to the applet. Many places were changed but it's only a
renaming.
A long time ago when peers were introduced, there was no applet nor
applet context. Applet contexts were introduced but the peers still
did not make use of them and the "ptr" pointer remains present in
every stream interface in addition to the other contexts.
Simply move this pointer to its own location in the context.
Note that this pointer is still a void* because its type and contents
varies depending on the peers session state. Probably that this could
be cleaned up in the future given that all other contexts already store
much more than a single pointer.
This is to be more consistent with the other functions. The only
reason why these functions used to return a value was to let the
caller adjust polling by itself, but now their only callers were
the si_shutr()/si_shutw() inline functions. Now these functions
do not depend anymore on the connection.
The connection variants of these functions now call
conn_data_stop_recv()/conn_data_stop_send() before returning, in order
not to require a return code anymore. The applet version does not
need this at all.
These functions induce a lot of ifs everywhere because they consider two
different cases, one which is where the connection exists and has a file
descriptor, and the other one which is the default case where at most an
applet has to be notified.
Let's have them in si_ops and automatically decide which one to use.
The connection shutdown sequence has been slightly simplified, and we
now clear the flags at the end.
Also we remove SHUTR_NOW after a shutw with nolinger, as it's cleaner
not to keep it.
Now that applets work like real connections, there is no reason for
them to evade the response analysers. The stats applet emits valid
HTTP responses, it can flow through the HTTP response analyser just
fine. This now allows http-response/rsprep/rspadd rules to be applied
on top of stats. Cookie insertion does nothing since applets are not
servers and thus do not have a cookie. We can imagine compression to be
applied later if the stats output is emitted in chunks and in HTTP/1.1.
A minor visible effect of this change is that there is no more "-1" in
the timers presented in the logs when viewing the stats, all timers are
real.
Instead of having applets bypass the whole connection process, we now
follow the common path through sess_prepare_conn_req(). It is this
function which detects an applet and sets the output state to SI_ST_EST
instead of initiating a connection to a server. It is made possible
because we now have s->target pointing to the applet.
We used to rely on the stream interface's target to detect an applet
from within the session while trying to process the connection request,
but this is incorrect, as this target is the one currently connected
and not the next one to process. This will make a difference when we
later support keep-alive. The only "official" value indicating where
we want to connect is the session's target, which can be :
- &applet : connect to this applet
- NULL : connect using the normal LB algos
- anything else : direct connection to some entity
Since we're interested in detecting the specific case of applets, it's
OK to make use of s->target then.
Also, applets are being isolated from connections, and as such there
will not be any ->connect method available when an applet is running,
so we can get rid of this test as well.
At the moment, stats require some preliminary storage just to store
some flags and codes that are parsed very early and used later. In
fact that doesn't make much sense and makes it very hard to allocate
the applet dynamically.
This patch changes this. Now stats_check_uri() only checks for the
validity of the request and the fact that it matches the stats uri.
It's handle_stats() which parses it. It makes more sense because
handle_stats() used to already perform some preliminary processing
such as verifying that POST contents are not missing, etc...
There is only one minor hiccup in doing so : the reqrep rules might
be processed in between. This has been addressed by moving
http_handle_stats() just after stats_check_uri() and setting s->target
at the same time. Now that s->target is totally operational, it's used
to mark the current request as being targeted at the stats, and this
information is used after the request processing to remove the HTTP
analysers and only let the applet handle the request.
Thus we guarantee that the storage for the applet is filled with the
relevant information and not overwritten when we switch to the applet.
There is a big problem with the way POST is handled for the admin
stats page. The POST parameters are extracted from some http-request
rules, and if not found, they return zero, hoping to be called again
when more data passes. This results in the HTTP analyser being called
several times and all the rules prior to the stats being executed
multiple times as well. That includes rewrite rules.
So instead of doing this, we now move all the processing of the stats
into the stats applet.
That way we just set the stats applet in the HTTP analyser when a stats
request is detected, and the applet takes the time it needs to read the
arguments and respond. We could even imagine improving the applet to
support requests larger than a single buffer.
The code was almost only moved and minimally changed. Several new HTTP
states were added to the stats applet to emit headers, redirects and
to read POST. It was necessary to do this because the headers sent
depend on the parsing of the POST request. In the end it's beneficial
because we removed two stream_int_retnclose() calls.
In preparation for moving the POST processing to the applet, we first
add new states to the HTTP I/O handler. Till now st0 was only 0/1 for
start/end. We now replace it with an enum.
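For illustration, such an enum may look like the sketch below; the exact
state names used by the stats applet are only indicative here:

    enum {
        STAT_HTTP_INIT = 0, /* nothing done yet */
        STAT_HTTP_HEAD,     /* send headers before the dump */
        STAT_HTTP_DUMP,     /* dumping stats */
        STAT_HTTP_POST,     /* reading POST data */
        STAT_HTTP_DONE,     /* dump is finished */
    };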
Currently a connection is required on the remote side to emit a proxy
protocol header line. Let's support NULL addresses to emit an UNKNOWN
tag as well.
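For reference, the v1 header for an unknown family carries no address
information at all; a minimal sketch of the emission:

    /* emitted when no usable source/destination address is available */
    const char hdr[] = "PROXY UNKNOWN\r\n";
    send(fd, hdr, sizeof(hdr) - 1, MSG_DONTWAIT | MSG_NOSIGNAL);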
These two fetch methods predate the samples and used to store the
destination address into the server-facing connection's address field
because we had no other place at this time.
This will become problematic with the current connection changes, so
let's fix this.
We make the peers code use applet->ptr instead of conn->xprt_ctx to
store the pointer to the current peer. That way it does not depend
on a connection anymore.
This field was used by dumpstats to retrieve a pointer to the current
session, which may already be found from ->owner. With this change,
the stats code doesn't need the connection at all anymore.
We're trying to move the applets out of the struct connection. So
let's remove the dependence on xprt_st and introduce si->applet.st2
to store the missing contextual data instead.
The free() function must free the "struct pat_idx_elt".
This bug was introduced by commit ed66c29 (REORG: acl/pattern: extract
pattern matching from the acl file and create pattern.c), no backport
is needed.
The commit 37e340c (BUG/MEDIUM: stick: completely remove the unused flag
from the store entries) was incomplete. We also need to ensure that only
the first store-response for a table is applied and that it may coexist
with a possible store-request that was already done on this table.
This patch with the previous one should be backported to 1.4.
The store[] array in the session holds a flag which probably aimed to
differentiate store entries learned from the request from those learned
from the response, and allowing responses to overwrite only the request
ones (eg: have a server set a response cookie which overwrites the request
one).
But this flag is set when response data is stored, and is never cleared.
So in practice, haproxy always runs with this flag set, meaning that
responses prevent themselves from overriding the request data.
It is desirable anyway to keep the ability not to override data, because
the override is performed only based on the table and not on the key, so
that would mean that it would be impossible to retrieve two different
keys to store into the same table. For example, if a client sets a cookie
and a server another one, both need to be updated in the table in the
proper order. This is especially true when multiple keys may be tracked
on each side into the same table (eg: list of IP addresses in a header).
So the correct fix which also maintains the current behaviour consists in
simply removing this flag and never trying to optimize for the overwrite case.
This fix also has the benefit of significantly reducing the session size,
by 64 bytes due to alignment issues caused by this flag!
The bug has been there forever (since 1.4-dev7), so a backport to 1.4
would be appropriate.
Now instead of seeing many send() calls from multiple "tcp-check send"
rules, we fill the output buffer and try to send all only when we're
not in a send state or when the output buffer is too small for sending
the next message.
This results in a lot less syscalls and avoids filling the network with
many small packets. It will also improve the behaviour of some bogus
servers which expect a complete request in the first packet.
In recent commit 5ecb77f (MEDIUM: checks: add send/expect tcp based check),
bitfields were mistakenly used at some places for the actions. Fortunately,
the only two actions right now are 1 and 2 so they don't share any bit in
common and the bug has no impact.
No backport is needed.
ACL parse errors are not easy to understand since recent commit 348971e
(MEDIUM: acl: use the fetch syntax 'fetch(args),conv(),conv()' into the
ACL keyword) :
[ALERT] 339/154717 (26437) : parsing [check-bug.cfg:10] : error detected while parsing a 'stats admin' rule : unknown ACL or sample keyword 'env(a,b,c)': invalid arg 2 in fetch method 'env' : end of arguments expected at position 2, but got ',b,c'..
This error is only relevant to sample fetch keywords, so the new form is
a bit easier to understand :
[ALERT] 339/160011 (26626) : parsing [check-bug.cfg:12] : error detected while parsing a 'stats admin' rule : invalid arg 2 in fetch method 'env' : end of arguments expected at position 2, but got ',b,c' in sample expression 'env(a,b,c),upper'.
No backport is needed.
William Lallemand reported a double free on the args parser used in fetches
and ACLs. The cause is that the arg expression is not fully initialized nor
deinitialized when killed and that one of the pointers was already freed once
in certain error conditions.
Simply set it to NULL after the first call to free().
The bug was apparently introduced in 1.5-dev9 with commit 2ac5718
(MEDIUM: add a new typed argument list parsing framework).
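The fix boils down to the classic idiom below (a sketch, field name
illustrative):

    free(expr->arg_p);
    expr->arg_p = NULL; /* a later cleanup pass may call free() again,
                         * which is then a harmless no-op */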
If a "tcp-check send" experiences an EAGAIN on a send() call, it will
nevertheless go to the next rule, and will not try to send again if the next
rule is an expect.
Change this so that we always try to send whatever remains in the buffer
before doing anything else.
A config with just a "tcp-check expect string XXX" loops at 100% CPU
because the connect() wakes the function and there's nothing to send,
but it does not disable the polling.
Rearrange the polling setup to fix this. This was just caused by latest
commit, no backport is needed.
This is a generic health check which can be used to match a
banner or send a request and analyse a server response.
It works in a send/expect fashion, and many exchanges can be performed
between HAProxy and a server to determine the server status, making
HAProxy able to speak the server's protocol.
It can send arbitrary regular or binary strings and match content as a
regular or binary string or a regex.
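For example, checking a redis server might be done this way (a sketch
using the new directives):

    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send QUIT\r\n
    tcp-check expect string +OK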
Signed-off-by: Baptiste Assmann <bedis9@gmail.com>
We currently use such a hex parser in pat_parse_bin() to parse hex
string patterns. We'll need another generic one so let's move it to
standard.c and have pat_parse_bin() make use of it.
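The heart of such a parser can be sketched as the standalone function
below; the helper actually moved to standard.c uses a different name,
signature and error reporting:

    #include <string.h>

    /* Converts the hex string <src> to binary into <dst> of size <dstlen>.
     * Returns the number of output bytes, or -1 on invalid input.
     */
    static int hex2bin(const char *src, unsigned char *dst, size_t dstlen)
    {
        size_t i, len = strlen(src);

        if (len % 2 || len / 2 > dstlen)
            return -1;

        for (i = 0; i < len; i++) {
            int v;

            if (src[i] >= '0' && src[i] <= '9')
                v = src[i] - '0';
            else if (src[i] >= 'a' && src[i] <= 'f')
                v = src[i] - 'a' + 10;
            else if (src[i] >= 'A' && src[i] <= 'F')
                v = src[i] - 'A' + 10;
            else
                return -1;

            if (i & 1)
                dst[i / 2] |= v;
            else
                dst[i / 2] = (unsigned char)(v << 4);
        }
        return (int)(len / 2);
    }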
This patch permits using the same struct pattern for two identical maps.
This saves memory, and permits updating only one "struct pattern" when
dynamic map updates are supported.
Commit 348971e (MEDIUM: acl: use the fetch syntax 'fetch(args),conv(),conv()'
into the ACL keyword) introduced a regression in the ACL parser. The second
argument of an ACL keyword is now mistakenly confused with a converter.
This bug is post-dev19 and does not require any backport.
We happened to perform this call twice on some checks, once in the
recv event handler, and another one in the main function. Remove
the one from the event handler which does not make any more sense
there.
When pure TCP checks are used, we see a useless call to recvfrom()
in strace resulting from an unconditional poll on recv after the
connect() succeeds. Let's remove this one and properly report
connection success in the write events.
Error reporting in health checks is unreliable, as the number of recent
patches shows. The main reason is that the code required to detect the
exact situation where the error occurred is not simple, and the errors
have to be handled closer to where they occur in order to be accurate
(rely on getsockopt(SO_ERROR) and errno).
To solve this, we introduce chk_report_conn_err(). It does its best to
consider a possible errno passed in argument, a possible timeout passed
as well, then it completes this with getsockopt() if needed, and takes
into account the current status of the connection. The result is that
by simply calling this function with errno when it's known, we can emit
accurate log messages from every location. We can now see messages
like "Connection error during SSL handshake (No route to host)" which
were not previously possible.
The only case where errno is supposed to be valid is when the connection
has just got the CO_FL_ERROR flag and errno is not zero, because it will
have been set by the same function that has set the flag. For all other
situations, we need to check the socket using getsockopt(), but only do
it once, since it clears the pending error code. For this reason, we
assign the error code to errno in order not to lose it. The same call
is made at the entry of event_srv_chk_r(), event_srv_chk_w(), and
wake_srv_chk() so that we get a chance to collect errors reported by
the poller or by failed syscalls.
Note that this fix relies on the 4 previous patches, so backporters
must be very careful.
At some places, we report an error by just detecting FD_POLL_ERR.
The problem is that the caller never knows if it must use errno or
call getsockopt(SO_ERROR). And since this last one clears the
pending error from the queue, it cannot be used unconditionally.
An elegant solution consists in clearing errno prior to inspecting
FD_POLL_ERR. The caller then knows that if it gets CO_FL_ERROR and
errno == 0, it must call getsockopt().
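In code, the resulting idiom may be sketched like this (function name
illustrative):

    #include <errno.h>
    #include <sys/socket.h>

    /* Returns the error pending on <fd>. <cur_errno> is the errno value
     * observed right after the I/O events; it takes precedence because
     * getsockopt(SO_ERROR) may only be consumed once.
     */
    static int fetch_sock_error(int fd, int cur_errno)
    {
        int skerr = 0;
        socklen_t lskerr = sizeof(skerr);

        if (cur_errno)
            return cur_errno;
        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &skerr, &lskerr) == 0)
            return skerr;
        return errno;
    }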
Since commit 348971e (MEDIUM: acl: use the fetch syntax
'fetch(args),conv(),conv()' into the ACL keyword), ACLs wait on input
that may change. This is visible in the configuration below :
tcp-request inspect-delay 3s
tcp-request content accept if REQ_CONTENT
Nothing will pass before the end of the timer. This is because
historically, sample_process() was dedicated to stick tables where
it was absolutely necessary to wait for a stable sample. Now samples
are used by many other things and we can't afford this. So let's move
this check to the stick tables after the call to sample_process()
instead.
This is post-1.5-dev19 work, no backport is required.
When we get a hard error from a syscall indicating the socket is dead,
it makes sense to set the CO_FL_SOCK_WR_SH and CO_FL_SOCK_RD_SH flags
to indicate that the socket may not be used anymore. It will ease the
error processing in health checks where the state of socket is very
important. We'll also be able to avoid some setsockopt(nolinger) after
an error.
For now, the rest of the code is not impacted because CO_FL_ERROR is
always tested prior to these flags.
The PROXY protocol header emission was not tolerant to signals, so it
might cause a connection to report an error if a signal comes in at the
exact moment the send is performed.
This is 1.5-specific and does not need any backport.
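The usual fix is to restart the interrupted syscall, as in this sketch:

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* attempts to send <len> bytes from <buf>, restarting when the call
     * is interrupted by a signal instead of reporting an error.
     */
    static ssize_t send_nointr(int fd, const void *buf, size_t len)
    {
        ssize_t ret;

        do {
            ret = send(fd, buf, len, MSG_DONTWAIT | MSG_NOSIGNAL);
        } while (ret == -1 && errno == EINTR);
        return ret;
    }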
The tcp_connect_probe() function may be called upon I/O activity when
no recv/send callbacks were called (eg: recv not possible, nothing to
send). It only relies on connect() to observe the connection establishment
progress but that does not work when some network errors are pending on
the socket (eg: a delayed connection refused).
For this reason we need to run a getsockopt() in the case where the
poller reports FD_POLL_ERR on the socket. We use this opportunity to
update errno so that the conn->data->wake() function has all relevant
info when it sees CO_FL_ERROR.
At the moment no code is impacted by this bug because recv polling is
always enabled during a connect, so recvfrom() always sees the error
first. But this may change with the health check cleanup.
No backport is needed.
The connection flag CO_FL_ERROR is tested in both si_conn_recv_cb()
and si_conn_send_cb(). If CO_FL_ERROR has been set, the out_error branch
is executed. But the only job of the out_error branch is to set
CO_FL_ERROR on the connection flags, so under such conditions it is
better to return directly than to jump to out_error. As a result, the
out_error branch becomes needless and can be removed.
In addition, the return type of si_conn_send_loop() is also changed to void.
The caller should check conn->flags for errors just like stream_int_chk_snd_conn()
does as below:
    static void stream_int_chk_snd_conn(struct stream_interface *si)
    {
        ...
        conn_refresh_polling_flags(si->conn);
    -   if (si_conn_send(si->conn) < 0) {
    +   si_conn_send(si->conn);
    +   if (si->conn->flags & CO_FL_ERROR) {
        ...
    }
Signed-off-by: Godbach <nylzhaowei@gmail.com>
We now have to report 2 conflicting information on the stats page :
- NOLB = server which returns 404 and stops load balancing ;
- DRAIN = server with a weight forced to zero
The DRAIN state was previously detected from eweight==0 and represented in
blue so that a temporarily disabled server was noticed. This was done by
commit cc8bb92 (MINOR: stats: show soft-stopped servers in different color).
This choice suffered from a small defect however, which is that a server
with a zero weight was reported in this color whatever its state (even down
or switching).
Also, one of the motivations for the color above was because the NOLB state
is barely detectable as it's very close to the UP state.
Since commit 8c3d0be (MEDIUM: Add DRAIN state and report it on the stats page),
we have the new DRAIN state to show servers with a zero weight. The colors are
unfortunately very close to those of the MAINT state, and some users were
confused by the disappearance of the blue bars.
Additionally, the NOLB state had precedence over DRAIN, which could be an
issue since DRAIN is the only thing the admin can act on, so once NOLB was
shown, there was nothing to indicate that the weight was forced to zero.
By switching the two priorities we can report DRAIN (forced mode) before
NOLB (detected mode).
The best solution to fix all this is to reuse the previous blue color for
all cases where weight == 0, whether it's set by config / agent / cli (DRAIN)
or detected by a 404 response (NOLB). However we only use this color when the
server is 100% UP. If it's going down we switch to the usual yellow color
showing failed checks, and when it's down it keeps its usual red color.
That way, a blue bar on the display indicates a server not taking new
sessions but perfectly up. And other colors keep their usual meaning.
In commit 8c3d0be (MEDIUM: Add DRAIN state and report it on the stats page),
the drain state was updated on every weight change except those that can be
sent via the web interface. This caused inconsistent state combinations to
be reported in the stats depending on the sequence (web then cli vs cli
then web).
It would seem that a call to set_server_drain_state() from within
server_recalc_eweight() would simplify things but that's not completely
certain yet.
The last fix on checks (02b0f58: BUG/MEDIUM: checks: fix a long-standing
issue with reporting connection errors) tried to isolate error codes
retrieved from the socket in order to report appropriate messages. The
only thing is that we must not pre-initialize err to errno since we're
not in I/O context anymore and errno will be the one of the last syscall
(whatever it was). However we can complete the message with more info
from the transport layer (eg: SSL can inform us we were in a handshake).
Also add a catch-all case for CO_FL_ERROR when the connection was
established. No check currently seems to leave this case open, but better
catch it because it's hard to find all possible cases.
Error handling in checks is complex because some stuff must be done in
the central task (mandatory at least for timeouts) and other stuff is
done closer to the data.
Since checks have their own buffers now, we could move everything to
the main task and only keep the low-level I/O for sending/retrieving
data to/from this buffer. It would also avoid sending logs from the
I/O context!
In 1.5-dev14 we fixed a bug induced by the new connection system which caused
handshake failures not to be reported in health checks. It was done with
commit 6c560da (BUG/MEDIUM: checks: report handshake failures). This fix
caused another issue which is that every check getting a TCP RST after a
valid response was flagged as an error. This was fixed using commit c5c61fc
(BUG/MEDIUM: checks: ignore late resets after valid responses).
But because of this, we completely miss the status report. These two fixes
only set the check result as failed and did not call set_server_check_status()
to pass the information to upper layers.
The impact is that some failed checks are reported as INI or are simply not
updated if they happen fast enough (eg: TCP RST in response to connect()
without data in a pure TCP check). So the server appears down but the check
status says "L4OK".
After commit 6c560da, the handshake failures have been correctly dealt with
and every error causes process_chk() to be called with the appropriate
information still present on the socket. So let's get the error code in
process_chk() instead and stop mangling it in wake_srv_chk().
Now both L4 and L6 checks are correctly reported.
This bug was first introduced in 1.5-dev12 so no backport is needed.
Since commit 4a74143 (MEDIUM: Paramatise functions over the check of a
server), the check type is inherited from the current proxy's check type
at the moment where the server is declared instead of when reviewing
server configs. This causes an issue where a health check is disabled
when the server is declared before the checks. In fact the server will
inherit the last known check type declared before the "server" line :
    backend foo
        # this server is not checked at all
        server s1 1.1.1.1:80 check
        option tcpchk
        # this server is tcp-checked :
        server s2 1.1.1.2:80 check
        option httpchk
        # this server is http-checked :
        server s3 1.1.1.3:80 check
The fix consists in assigning the check type during the config review
phase where the config is stable. No backport is needed.
Recent commit 06d97f9 (MEDIUM: log-format: relax parsing of '%' followed
by unsupported characters) caused the following warning on some compilers
since isblank is not always present :
src/log.c: In function 'parse_logformat_string':
src/log.c:453: warning: implicit declaration of function 'isblank'
As usual, replace it with the two values (space and tab).
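In other words, a portable replacement boils down to this sketch (macro
name illustrative):

    /* isblank() is a C99 addition which is missing from some libcs */
    #define IS_BLANK(c) ((c) == ' ' || (c) == '\t')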
We need to initialize the rdr_fmt list unconditionally. Using only
a redirect rule without an http-request redirect may cause a crash
during deinit because the list is iterated starting from a NULL head.
We handle "http-request redirect" with a log-format string now, but we
leave "redirect" unaffected.
Note that the control of the special "/" case is moved from the runtime
execution to the configuration parsing. If the format rule list is
empty, the build_logline() function does nothing.
At the moment when a '%' character is followed by any unhandled character,
it is considered as a variable name, and if it cannot be resolved, a warning
is emitted and the configuration goes on.
When we start using log-format for redirect rules, it may happen that some
people accidentally use '%' instead of '%%' without understanding the cause
of the issue. Thus we do two things here :
- if a single '%' is followed by a blank or a digit, we fix it and emit a
warning explaining how this should be done ; this ensures that existing
configs continue to work ;
- if a single '%' is followed by an unknown variable name, we report it
and explain how to emit a verbatim '%' in case this is what the user
desired.
Add a new converter with the following prototype :
map(<map_file>[,<default_value>])
map_<match_type>(<map_file>[,<default_value>])
map_<match_type>_<output_type>(<map_file>[,<default_value>])
It searches for the input value in <map_file> using the <match_type>
matching method, and returns the associated value converted to the type
<output_type>. If the input value cannot be found in the <map_file>,
the converter returns the <default_value>. If the <default_value> is
not set, the converter fails and acts as if no input value could be
fetched. If the <match_type> is not set, it defaults to "str".
Likewise, if the <output_type> is not set, it defaults to "str". For
convenience, the "map" keyword is an alias for "map_str" and maps a
string to another string. The following array contains the
list of all the map* converters.
+---------------+---------+-------------+------------+
| `-_ out       |         |             |            |
| input `-_     |   str   |     int     |     ip     |
|  / match  `-_ |         |             |            |
+---------------+---------+-------------+------------+
| str / str     | map_str | map_str_int | map_str_ip |
| str / sub     | map_sub | map_sub_int | map_sub_ip |
| str / dir     | map_dir | map_dir_int | map_dir_ip |
| str / dom     | map_dom | map_dom_int | map_dom_ip |
| str / end     | map_end | map_end_int | map_end_ip |
| str / reg     | map_reg | map_reg_int | map_reg_ip |
| int / int     | map_int | map_int_int | map_int_ip |
| ip / ip       | map_ip  | map_ip_int  | map_ip_ip  |
+---------------+---------+-------------+------------+
The names are intentionally chosen to reflect the same match methods
as ACLs use.
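For example, selecting a backend from the Host header could be written
as in the sketch below; the map file and backend names are illustrative,
and it assumes a context where sample expressions are allowed:

    # /etc/haproxy/domains.map contains one "key value" pair per line :
    #     example.com      bk_example
    #     static.example   bk_static

    frontend fe_main
        bind :80
        use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/domains.map,bk_default)]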
This patch allows each sample cast function to specify the sample
output type. The goal is to be able to emit an output type IPv4 or
IPv6 depending on what is found in the input if the next converter
is able to process them both.
The patch also adds a new pseudo type called "ADDR". This type is an
alias for IPV4 and IPV6 which is only used as an input type by converters
who want to express their compatibility with both address formats. It may
not be emitted.
The goal is to unify as much as possible the processing of IPv4 and IPv6
in order not to add extra keywords for the maps which act as converters,
but will match samples like ACLs do with their patterns.
Make the stick-table key converter automatically adapt to the address
family of the input sample. Samples such as "src" will return an address
with a sample type depending on the input family. We'll have to support
such combinations when we add support for maps because the output type
will not necessarily be fixed.
We now have the following enums and all related functions return them and
consume them :
enum pat_match_res {
        PAT_NOMATCH = 0,    /* sample didn't match any pattern */
        PAT_MATCH = 3,      /* sample matched at least one pattern */
};

enum acl_test_res {
        ACL_TEST_FAIL = 0,  /* test failed */
        ACL_TEST_MISS = 1,  /* test may pass with more info */
        ACL_TEST_PASS = 3,  /* test passed */
};

enum acl_cond_pol {
        ACL_COND_NONE,      /* no polarity set yet */
        ACL_COND_IF,        /* positive condition (after 'if') */
        ACL_COND_UNLESS,    /* negative condition (after 'unless') */
};
It's just in order to avoid doubts when reading some code.
This patch just renames functions, types and enums. No code was changed.
A significant number of files were touched, especially the ACL arrays,
so it is likely that some external patches will not apply anymore.
One important thing is that we had to split ACL_PAT_* into two groups :
- ACL_TEST_{PASS|MISS|FAIL}
- PAT_{MATCH|UNMATCH}
A future patch will enforce enums on all these places to avoid confusion.
This patch just moves code without any change.
An ACL is just the association between a sample and a pattern. The pattern
contains the match method and the parse method. These two things are
different. This patch cleans the code by splitting it.
This will be used later with maps. Each map will associate an entry with
a sample_storage value.
This patch changes the "parse" prototype and all the parsing methods.
The goal is to associate "struct sample_storage" to each entry of
"struct acl_pattern". Only the "parse" function can add the sample value
into the "struct acl_pattern".
The map feature will need to match acl patterns. This patch extracts
the matching function from the global ACL function "acl_exec_cond".
The code was only moved to its own function, no functional changes were made.
With this split, the pattern indexation can apply to any source. The map
feature needs this functionality because the map cannot be loaded with the
same file format as the ones supported by acl_read_patterns_from_file().
The code was only moved to its own function, no functional changes were made.
The inet_pton function needs an input string with a final \0. This
function copies the input string to a temporary buffer, adds the final
\0 and converts it to an address.
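A standalone sketch of such a wrapper (the real helper lives in
standard.c and its name is only indicative here):

    #include <arpa/inet.h>
    #include <string.h>

    /* Converts the first <len> chars of <buf>, which may not be
     * null-terminated, to an IPv4 address in <dst>. Returns 1 on
     * success, 0 on failure.
     */
    static int buf2ip(const char *buf, size_t len, struct in_addr *dst)
    {
        char tmp[INET_ADDRSTRLEN];

        if (len >= sizeof(tmp))
            return 0;
        memcpy(tmp, buf, len);
        tmp[len] = '\0';
        return inet_pton(AF_INET, tmp, dst) > 0;
    }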
If the acl keyword is a "fetch", the dedicated parsing function
"sample_parse_expr()" is used. Otherwise, the acl parsing function
"parse_acl_expr()" is extended to understand the syntax of a series
of converters placed after the "fetch" keyword.
Before this patch, each acl uses a "struct sample_fetch" and executes
it with the "<fetch>->process()" function. Now, the dedicated function
"sample_process()" is called.
These syntaxes are now available:
acl bad req.hdr(host),lower -m str www
http-request redirect prefix /go-away if bad
acl bad hdr_beg(host),lower www
http-request redirect prefix /go-away if bad
Some errors were still reported as log-format instead of their respective
contexts (acl, request header, stick, ...). This is harmless and does not
require any backport.
When parsing track-sc* actions in tcp-request rules, we now automatically
compute the track-sc identifier number using %d when displaying an error
message. But the ID has become wrong since we introduced sc0: we continue
to report id+1 in error messages, causing some confusion.
No backport is needed.
A very old bug resulting from some code refactoring causes
assign_server_address() to refrain from retrieving the destination
address from the client-side connection when transparent mode is
enabled and we're connecting to a server which has address 0.0.0.0.
The impact is low since such configurations are unlikely to ever
be encountered. The fix should be backported to older branches.
When a server tracks another one, its state on the stats page always reports
"via xx/yy". That's convenient to know what server to act on to change the
state. But it is also possible to force the tracking server itself into
maintenance mode and in this case we should not report "via xx/yy" because
the tracked server can't do anything to change the server's state, which
is confusing. In practice there is nothing wrong in leaving it as-is,
except that it's highly misleading when looking at the stats page.
Note that we only change the HTML output, not the CSV one. The states are
already different : "MAINT" vs "MAINT(via)" and we expect anyone coding a
monitoring system based on the CSV output to know the differences between
all possible states.
This is the continuation of previous fix bc16cd8 "BUG/MAJOR: fix haproxy
crash when using server tracking instead of checks", the soft-stop/start
states were not addressed by this fix.