Both static-rr and hash with a map-based type call init_server_map() to allocate
the server map, so the server map should be freed during cleanup if one of
these load balance algorithms is used.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
[wt: removed the unneeded "if" before the free]
This minor bug was found using the coccinelle script "da.cocci". The
len was initialized twice instead of setting the size. It's harmless
since no operations are performed on this empty string but needs to
be fixed anyway.
At the moment, HTTP response time is computed after response headers are
processed. This can misleadingly assign to the server some heavy local
processing (eg: regex), and also prevents response headers from passing
information related to the response time (which can sometimes be useful
for stats).
Let's retrieve the response time before processing the headers instead.
Note that in order to remain compatible with what was previously done,
we disable the response time when we get a 502 or any bad response. This
should probably be changed in 1.6 since it does not make sense anymore
to lose this information.
health_adjust() should requeue the task after changing its expire timer.
I noticed it on devel servers without load. We have a long inter (10 seconds)
and a short fastinter (100ms). But according to the webserver logs, after a failed
request the next check was still issued with the same 10s interval.
This patch should probably be backported to 1.4 which has the same feature.
This fetch method returns the response buffer len, similarly
to req.len for the request. Previously it was only possible
to rely on "res.payload(0,size) -m found" to find if at least
that amount of data was available, which was a bit tricky.
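For example, one could now write the following illustrative (untested) rule to
accept a response only once some data has been received :
  tcp-response inspect-delay 5s
  tcp-response content accept if { res.len gt 0 }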
This new action immediately closes the connection with the server
when the condition is met. The first such rule executed ends the
rules evaluation. The main purpose of this action is to force a
connection to be finished between a client and a server after an
exchange when the application protocol expects some long timeouts to
elapse first. The goal is to eliminate idle connections which
take significant resources on servers with certain protocols.
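An illustrative sketch, assuming the action is exposed as "close" on
tcp-response content rules (keyword to be confirmed against the documentation) :
  backend legacy_app
    tcp-response inspect-delay 10s
    tcp-response content close if { res.len gt 0 }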
When a process with large stick tables is replaced by a new one and remains
present until the last connection finishes, it keeps these data in memory
for nothing since they will never be used anymore by incoming connections,
except during syncing with the new process. This is especially problematic
when dealing with long session protocols such as WebSocket as it becomes
possible to stack many processes and eat a lot of memory.
So the idea here is to know if a table still needs to be synced or not,
and to purge all unused entries once the sync is complete. This means that
after a few hundred milliseconds when everything has been synchronized with
the new process, only a few entries will remain allocated (only the ones
held by sessions during the restart) and all the remaining memory will be
freed.
Note that we carefully do that only after the grace period is expired so as
not to impact a possible proxy that needs to accept a few more connections
before leaving.
Doing this required adding a sync counter to the stick tables, to know how
many peer sync sessions are still in progress in order not to flush the entries
until all synchronizations are completed.
David Berard reported that send-proxy was broken on FreeBSD and tracked the
issue to be an error returned by send(). We already had the same issue in
the past in another area which was addressed by the following commit :
0ea0cf6 BUG: raw_sock: also consider ENOTCONN in addition to EAGAIN
In fact, on Linux send() returns EAGAIN when the connection is not yet
established while other OSes return ENOTCONN. Let's consider ENOTCONN the
same as EAGAIN for send-proxy there.
David confirmed that this change properly fixed the issue.
Another place was affected as well (health checks with send-proxy), and
was fixed.
This fix does not need any backport since it only affects 1.5.
verifyhost allows you to specify a hostname that the remote server's
SSL certificate must match. Connections that don't match will be
closed with an SSL error.
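Example (illustrative sketch, the address, CA file and hostname are placeholders) :
  backend secure
    server srv1 192.168.1.10:443 ssl verify required ca-file /etc/ssl/ca.pem verifyhost www.example.com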
With a facility of 1 or 2 digits, the send size was wrong and bytes with an
unknown value were sent.
The size was calculated using the start of the buffer and not the start
of the data, which varies with the number of digits of the facility.
This bug was reported by Samuel Stoller and Lukas Tribus.
When a request failed, the unique_id was allocated but not generated.
The string was not initialized and junk was printed in the log with %ID.
This patch changes the behavior of the unique_id. The unique_id is now
generated when a request fails.
This bug was reported by Patrick Hemmer.
The HTTP request counter is incremented non atomically, which means that
many requests can log the same ID. Let's increment it when it is consumed
so that we avoid this case.
This bug was reported by Patrick Hemmer. It's 1.5-specific and does not
need to be backported.
Mathew Levett reported an issue which is a bit nasty and hard to track
down. RDP cookies contain both the IP and the port, and haproxy matches
them exactly. So if a server has no port specified (or a remapped port),
it will never match a port specified in a cookie. Better warn the user
when this is detected.
Apollon Oikonomopoulos reported a build failure on Hurd where PATH_MAX
is not defined. The only place where it is referenced is ssl_sock.c,
all other places use MAXPATHLEN instead, with a fallback to 128 when
the OS does not define it. So let's switch to MAXPATHLEN as well.
Mark Brooks reported the following issue :
"My table looks like this -
0x24a8294: key=192.168.136.10 use=0 exp=1761492 server_id=3
0x24a8344: key=192.168.136.11 use=0 exp=1761506 server_id=2
0x24a83f4: key=192.168.136.12 use=0 exp=1761520 server_id=3
0x24a84a4: key=192.168.136.13 use=0 exp=1761534 server_id=2
0x24a8554: key=192.168.136.14 use=0 exp=1761548 server_id=3
0x24a8604: key=192.168.136.15 use=0 exp=1761563 server_id=2
0x24a86b4: key=192.168.136.16 use=0 exp=1761580 server_id=3
0x24a8764: key=192.168.136.17 use=0 exp=1761592 server_id=2
0x24a8814: key=192.168.136.18 use=0 exp=1761607 server_id=3
0x24a88c4: key=192.168.136.19 use=0 exp=1761622 server_id=2
0x24a8974: key=192.168.136.20 use=0 exp=1761636 server_id=3
0x24a8a24: key=192.168.136.21 use=0 exp=1761649 server_id=2
im running the command -
socat unix-connect:/var/run/haproxy.stat stdio <<< 'clear table VIP_Name-2 data.server_id eq 2'
Id assume that the entries with server_id = 2 would be removed but its
removing everything each time."
The cause of the issue is a missing test for skip_entry when deciding
whether to clear the key or not. The test was present when only the
last node is to be removed, so removing only the first node from a
list of two always did the right thing, explaining why it remained
unnoticed in basic unit tests.
The bug was introduced by commit 8fa52f4e which attempted to fix a
previous issue with this feature where only the last node was removed.
This bug is 1.5-specific and does not require any backport.
Load balance algorithms such as roundrobin, leastconn and first check the
server after it has been selected, with the following condition:
if (!s->maxconn || (!s->nbpend && s->served < srv_dynamic_maxconn(s)))
But static-rr uses a different one in map_get_server_rr(), as below:
if (!srv->maxconn || srv->cur_sess < srv_dynamic_maxconn(srv))
Given this difference, it is a better choice for static-rr to use the
same check condition as the other algorithms.
This change will only affect static-rr. Though all hash algorithms with type
map-based will use the same server map as static-rr, they call another
function, map_get_server_hash(), to get the server.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
When using req.payload and res.payload to look up for specific content at an
arbitrary location, we're often facing the problem of not knowing the input
buffer length. If the length argument is larger than the buffer length, the
function does not match, and if it is smaller, there is a risk of not getting
the expected content. This is especially true when looking for data in SOAP
requests.
So let's make some provisions for scanning the whole buffer by specifying a
length of 0 bytes. This greatly simplifies the processing of random-sized
input data.
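For instance, the whole request buffer can now be scanned for a token
regardless of its size (illustrative sketch, the pattern is a placeholder) :
  tcp-request inspect-delay 5s
  tcp-request content accept if { req.payload(0,0) -m sub SOAPAction }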
The "set table" statement allows to create new entries with their respective
values. Till now it was limited to a single data type per line, requiring as
many "set table" statements as the desired data types to be set. Since this
is only a parser limitation, this patch gets rid of it. It also allows the
creation of a key with no data types (all reset to their default values).
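Example on the CLI (illustrative, the table name and data types are placeholders
and must match the stick-table declaration) :
  set table front key 10.0.0.1 data.gpc0 5 data.conn_cnt 3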
Since commit 654694e1, it has been possible to feed some data into
stick tables from the CLI. That commit considered that frequency
counters would only have their previous value set, so that they
progressively fade out. But this does not match any real world
use case in fact. The only reason for feeding a freq counter is
to pass some data learned outside. We certainly don't want to see
such data start to vanish immediately, otherwise it will force the
external scripts to loop very frequently to limit the losses.
So let's set the current value instead in order to guarantee that
the data remains stable over the full period, then starts to fade
out between 1* and 2* the period.
sc_* sample fetches now take an optional parameter which allows looking up
the key in an alternate table. This is convenient to pass multiple pieces of
information for the same key at once (eg: have multiple gpc0 for the
same key, or support being fed complementary information from the CLI).
Example :
listen front
bind :8000
tcp-request content track-sc0 src table local-ip
http-response set-header src-id %[sc0_get_gpc0]+%[sc0_get_gpc0(global-ip)]
server dummy 127.0.0.1:8001
backend local-ip
stick-table size 1k type ip store gpc0
backend global-ip
stick-table size 1k type ip store gpc0
One very annoying issue when trying to extend the sticky counters beyond
the current 3 counters is that it requires a massive copy-paste of fetch
functions (even though the code itself is not copy-pasted anymore), just so
that the fetch names exist.
So let's have an alternate form like "sc_*(num)" to allow passing the
counter number as an argument without having to redefine new fetch names.
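For example, instead of sc0_get_gpc0, the following illustrative rules use the
argument form :
  tcp-request content track-sc0 src
  tcp-request content reject if { sc_get_gpc0(0) gt 10 }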
The MAX_SESS_STKCTR macro defines the number of usable sticky counters,
which defaults to 3.
In preparation of more flexibility in the stick counters, make their
number configurable. It still defaults to 3 which is the minimum
accepted value. Changing the value alone is not sufficient to get
more counters, some bitfields still need to be updated and the TCP
actions need to be updated as well, but this update tries to be
easier, which is nice for experimentation purposes.
smp_fetch_sc0_trackers, smp_fetch_sc1_trackers and smp_fetch_sc2_trackers
were merged into a single function which relies on the fetch name to decide
what to return.
This is also a bug fix for this feature which has never worked since its bogus
introduction by commit "2406db4 MEDIUM: counters: add sc1_trackers/sc2_trackers"
(1.5-dev10).
Instead of returning the value in the sample, it was returned as the fetch
result!
There is no need to backport this fix anyway since it's 1.5-specific and
nobody uses the feature.
smp_fetch_sc0_bytes_out_rate, smp_fetch_sc1_bytes_out_rate, smp_fetch_sc2_bytes_out_rate,
smp_fetch_src_bytes_out_rate and smp_fetch_bytes_out_rate were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_kbytes_out, smp_fetch_sc1_kbytes_out, smp_fetch_sc2_kbytes_out,
smp_fetch_src_kbytes_out and smp_fetch_kbytes_out were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_bytes_in_rate, smp_fetch_sc1_bytes_in_rate, smp_fetch_sc2_bytes_in_rate,
smp_fetch_src_bytes_in_rate and smp_fetch_bytes_in_rate were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_kbytes_in, smp_fetch_sc1_kbytes_in, smp_fetch_sc2_kbytes_in,
smp_fetch_src_kbytes_in and smp_fetch_kbytes_in were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_http_err_rate, smp_fetch_sc1_http_err_rate, smp_fetch_sc2_http_err_rate,
smp_fetch_src_http_err_rate and smp_fetch_http_err_rate were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_http_err_cnt, smp_fetch_sc1_http_err_cnt, smp_fetch_sc2_http_err_cnt,
smp_fetch_src_http_err_cnt and smp_fetch_http_err_cnt were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_http_req_rate, smp_fetch_sc1_http_req_rate, smp_fetch_sc2_http_req_rate,
smp_fetch_src_http_req_rate and smp_fetch_http_req_rate were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_http_req_cnt, smp_fetch_sc1_http_req_cnt, smp_fetch_sc2_http_req_cnt,
smp_fetch_src_http_req_cnt and smp_fetch_http_req_cnt were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_sess_rate, smp_fetch_sc1_sess_rate, smp_fetch_sc2_sess_rate,
smp_fetch_src_sess_rate and smp_fetch_sess_rate were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_sess_cnt, smp_fetch_sc1_sess_cnt, smp_fetch_sc2_sess_cnt,
smp_fetch_src_sess_cnt and smp_fetch_sess_cnt were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_conn_cur, smp_fetch_sc1_conn_cur, smp_fetch_sc2_conn_cur,
smp_fetch_src_conn_cur and smp_fetch_conn_cur were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_conn_rate, smp_fetch_sc1_conn_rate, smp_fetch_sc2_conn_rate,
smp_fetch_src_conn_rate and smp_fetch_conn_rate were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_conn_cnt, smp_fetch_sc1_conn_cnt, smp_fetch_sc2_conn_cnt,
smp_fetch_src_conn_cnt and smp_fetch_conn_cnt were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_clr_gpc0, smp_fetch_sc1_clr_gpc0, smp_fetch_sc2_clr_gpc0,
smp_fetch_src_clr_gpc0 and smp_fetch_clr_gpc0 were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_inc_gpc0, smp_fetch_sc1_inc_gpc0, smp_fetch_sc2_inc_gpc0,
smp_fetch_src_inc_gpc0 and smp_fetch_inc_gpc0 were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_gpc0, smp_fetch_sc1_gpc0, smp_fetch_sc2_gpc0,
smp_fetch_src_gpc0 and smp_fetch_gpc0 were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_get_gpc0, smp_fetch_sc1_get_gpc0, smp_fetch_sc2_get_gpc0,
smp_fetch_src_get_gpc0 and smp_fetch_get_gpc0 were merged into a single
function which relies on the fetch name to decide what to return.
This function aims at simplifying the prefetching of the table and entry
when using any of the session counters fetches. The principle is that the
src_* variant produces a stkctr that is used instead of the one from the
session. That way we can call the same function from all session counter
fetch functions and always have a single function to support sc[0-9]_/src_.
This function is also called directly from backend.c, so let's stop
building fake args to call it as a sample fetch, and have a lower
layer more generic function instead.
We're having a lot of duplicate code just because of minor variants between
fetch functions that could be dealt with if the functions had the pointer to
the original keyword, so let's pass it as the last argument. An earlier
version used to pass a pointer to the sample_fetch element, but this is not
the best solution for two reasons :
- fetch functions will solely rely on the keyword string
- some other smp_fetch_* users do not have the pointer to the original
keyword and were forced to pass NULL.
So finally we're passing a pointer to the keyword as a const char *, which
perfectly fits the original purpose.
Converts an integer supposed to contain a date since epoch to
a string representing this date in a format suitable for use
in HTTP header fields. If an offset value is specified, then
it is a number of seconds that is added to the date before the
conversion is operated. This is particularly useful to emit
Date header fields, Expires values in responses when combined
with a positive offset, or Last-Modified values when the
offset is negative.
Returns the current date as the epoch (number of seconds since 01/01/1970).
If an offset value is specified, then it is a number of seconds that is added
to the current date before returning the value. This is particularly useful
to compute relative dates, as both positive and negative offsets are allowed.
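Example combining the two (illustrative) :
  http-response set-header Expires %[date(3600),http_date]
This emits an Expires header set one hour in the future.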
There is no more reason for having "always_true", "always_false" and "env"
in acl.c while they're the most basic sample fetch keywords, so let's move
them to sample.c where it's easier to find them.
sample_process() used to return NULL on changing data, regardless of the
SMP_OPT_FINAL flag. Let's change this so that it is now possible to
include such data in logs or HTTP headers. Also, one inconvenient
thing was that it used to always set the sample flags to zero, making
it incompatible with ACLs which may need to call it multiple times. Only
do this for locally-allocated samples.
We now support having a comma-delimited converter list, which can start
right after the fetch keyword. The immediate benefit is that it allows
to use converters in log-format expressions, for example :
set-header source-net %[src,ipmask(24)]
The parser is also slightly improved and should be more resilient against
configuration errors. Also, optional arguments in converters were mistakenly
not allowed till now, so this was fixed.
Splicing is avoided for small transfers because it's generally cheaper
to perform a couple of recv+send calls than pipe+splice+splice. This
has the consequence that the last chunk of a large transfer may be
transferred using recv+send if it's less than 4 kB. But when the pipe
is already set up, it's better to use splice() to read the pending data,
since they will get merged with the pending ones. This is what now
happens every time the reader is slower than the writer.
Note that this change alone could have fixed most of the CPU hog bug,
except at the end when only the close was pending.
As explained in previous patch, we incorrectly call chk_snd() when
performing a read even if the write event is already subscribed to
poll(). This is counter-productive because we're almost sure to get
an EAGAIN.
A quick test shows that this fix halves the number of failed splice()
calls without adding any extra work on other syscalls.
This could have been tagged as an improvement, but since this behaviour
made the analysis of the previous bug more complex, it still qualifies as
a fix.
Mark Janssen reported an issue in 1.5-dev19 which was introduced
in 1.5-dev12 by commit 96199b10. From time to time, randomly, the
CPU usage spikes to 100% for seconds to minutes.
A deep analysis of the traces provided shows that it happens when
waiting for the response to a second pipelined HTTP request, or
when trying to handle the received shutdown advertised by epoll()
after the last block of data. Each time, splice() was involved with
data pending in the pipe.
The cause of this was that such events could not be taken into account
by splice nor by recv and were left pending :
- the transfer of the last block of data, optionally with a shutdown
was not handled by splice() because of the validation that to_forward
is higher than MIN_SPLICE_FORWARD ;
- the next recv() call was inhibited because of the test on presence
of data in the pipe. This is also what prevented the recv() call
from handling a response to a pipelined request until the client
had ACKed the previous response.
No less than 4 different methods were experimented to fix this, and the
current one was finally chosen. The principle is that if an event is not
caught by splice(), then it MUST be caught by recv(). So we remove the
condition on the pipe's emptiness to perform a recv(), and in order to
prevent recv() from being used in the middle of a transfer, we mark
supposedly full pipes with CO_FL_WAIT_ROOM, which makes sense because
the reason for stopping a splice()-based receive is that the pipe is
supposed to be full.
The net effect is that we don't wake up and sleep in loops during these
transient states. This happened much more often than expected, sometimes
for a few cycles at end of transfers, but rarely long enough to be
noticed, unless a client timed out with data pending in the pipe. The
effect on CPU usage is visible even when transferring 1MB objects in
pipeline, where the CPU usage drops from 10 to 6% on a small machine at
medium bandwidth.
Some further improvements are needed :
- the last chunk of a splice() transfer is never done using splice due
to the test on to_forward. This is wrong and should be performed with
splice if the pipe has not yet been emptied ;
- si_chk_snd() should not be called when the write event is already being
polled, otherwise we're almost certain to get EAGAIN.
Many thanks to Mark for all the traces he cared to provide, they were
essential for understanding this issue which was not reproducible
without.
Only 1.5-dev is affected, no backport is needed.
The max weight of a server is 256 now, but SRV_UWGHT_MAX is still 255. As a result,
FWRR will not work well when a server's weight is 256. The details are as follows:
There are some macros related to server's weight in include/types/server.h:
#define SRV_UWGHT_RANGE 256
#define SRV_UWGHT_MAX (SRV_UWGHT_RANGE - 1)
#define SRV_EWGHT_MAX (SRV_UWGHT_MAX * BE_WEIGHT_SCALE)
Since the weight of a server can reach 256 and BE_WEIGHT_SCALE equals 16,
the max eweight of a server should be 256*16 = 4096, which exceeds SRV_EWGHT_MAX
(SRV_UWGHT_MAX * BE_WEIGHT_SCALE = 255*16 = 4080). When a server
with weight 256 is inserted into the FWRR tree during initialization, the key value
of this server becomes SRV_EWGHT_MAX - s->eweight = 4080 - 4096 = -16, which
is close to UINT_MAX in unsigned arithmetic, so the server with the highest weight
will not be elected as the first server to process requests.
In addition, it is a better choice to compare with SRV_UWGHT_MAX than with the
magic number 256 when checking the weight. The max number of servers for the
round-robin algorithm is also updated.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
When ACLs and samples were converged in 1.5-dev18, function
"acl_prefetch_http" was not properly converted after commit 8ed669b1.
It used to return -1 when contents did not match HTTP traffic, which
was considered as a "true" boolean result by the ACL execution code,
possibly causing crashes due to missing data when checking for HTTP
traffic in TCP rules.
Another issue is that when the function returned zero, it did not
set the SMP_F_MAY_CHANGE flag, so it could randomly exit on partial
requests before waiting for a complete one.
Last issue is that when it returned 1, it did not set smp->data.uint,
so this last one would retain a random value from a past execution.
This could randomly cause some matches to fail as well.
Thanks to Remo Eichenberger for reporting this issue with a detailed
explanation and configuration.
This bug is 1.5-specific, no backport is needed.
The checkcache option checks for cacheable responses with a set-cookie
header. Since the response processing code was refactored in 1.3.8
(commit a15645d4), the check was broken because the no-cache value
is only checked as no-cache="set-cookie", and not alone.
Thanks to Hervé Commowick for reporting this stupid bug!
The fix should be backported to 1.4 and 1.3.
Lukas Benes reported that http-send-name-header causes a segfault if no
server is available because we're dereferencing the session's target which
is NULL. The tiniest reproducer looks like this :
listen foo
bind :1234
mode http
http-send-name-header srv
This obvious fix must be backported to 1.4 which is affected as well.
Commit e25c917a introduced a third tracking counter but forgot
to check it when storing values at the end of the session. The
impact is that if neither the first nor the second one is
changed, none of them is saved.
Both fdinfo and fdtab are allocated in init() while haproxy is starting,
but only fdtab is freed in deinit(); fdinfo should also be freed.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
As per RFC3260 #4 and BCP37 #4.2 and #5.2, the IPv6 counterpart of TOS
is "traffic class".
Add support for IPv6 traffic class in "set-tos" by moving the "set-tos"
related code to the new inline function inet_set_tos(), handling IPv4
(IP_TOS), IPv6 (IPV6_TCLASS) and IPv4-mapped sockets (IP_TOS, like
::ffff:127.0.0.1).
Also define - if missing - the IN6_IS_ADDR_V4MAPPED() macro in
include/common/compat.h for compatibility.
s->req->prod->conn->addr.to.ss_family contains only useful data if
conn_get_to_addr() is called early. If that's not the case (nothing in the
configuration needs the destination address, like logs, transparent, ...)
then "set-tos" doesn't work.
Fix this by checking s->req->prod->conn->addr.from.ss_family instead.
Also fix a minor doc issue about set-tos in http-response.
Benoit Dolez reported a failure to start haproxy 1.5-dev19. The
process would immediately report an internal error with missing
fetches from some crap instead of ACL names.
The cause is that some versions of gcc seem to trim static structs
containing a variable array when moving them to BSS, and only keep
the fixed size, which is just a list head for all ACL and sample
fetch keywords. This was confirmed at least with gcc 3.4.6. And we
can't move these structs to const because they contain a list element
which is needed to link all of them together during the parsing.
The bug indeed appeared with 1.5-dev19 because it's the first one
to have some empty ACL keyword lists.
One solution is to impose -fno-zero-initialized-in-bss to everyone
but this is not really nice. Another solution consists in ensuring
the struct is never empty so that it does not move there. The easy
solution consists in having a non-null list head since it's not yet
initialized.
A new "ILH" list head type was thus created for this purpose : create
an Initialized List Head so that gcc cannot move the struct to BSS.
This fixes the issue for this version of gcc and does not create any
burden for the declarations.
When abortonclose is used and an error is detected on the client side,
better force an RST to the server. That way we propagate to the server
the same vision we got from the client, and we ensure that we won't keep
TIME_WAITs.
Remove event_accept() from include/proto/proto_http.h and use the correct
function name in the other two files instead of event_accept().
Signed-off-by: Godbach <nylzhaowei@gmail.com>
It was a bit inconsistent to have gpc start at 0 and sc start at 1,
so make sc start at zero like gpc. No previous release was issued
with sc3 anyway, so no existing setup should be affected.
When a config makes use of hdr_ip(x-forwarded-for,-1) or any such thing
involving a negative occurrence count, the header is still parsed in the
order it appears, and an array of up to MAX_HDR_HISTORY entries is created.
When more entries are used, the entries simply wrap and continue this way.
A problem happens when the incoming header field count exactly divides
MAX_HDR_HISTORY, because the computation removes the number of requested
occurrences from the count, but does not care about the risk of wrapping
with a negative number. Thus we can dereference the array with a negative
number and randomly crash the process.
The bug is located in http_get_hdr() in haproxy 1.5, and get_ip_from_hdr2()
in haproxy 1.4. It affects configurations making use of one of the following
functions with a negative <value> occurrence number :
- hdr_ip(<name>, <value>) (in 1.4)
- hdr_*(<name>, <value>) (in 1.5)
It also affects "source" statements involving "hdr_ip(<name>)" since that
statement implicitly uses -1 for <value> :
- source 0.0.0.0 usesrc hdr_ip(<name>)
A workaround consists in rejecting dangerous requests early using
hdr_cnt(<name>), which is available both in 1.4 and 1.5 :
block if { hdr_cnt(<name>) ge 10 }
This bug has been present since the introduction of the negative offset
count in 1.4.4 via commit bce70882. It has been reported by David Torgerson
who offered some debugging traces showing where the crash happened, thus
making it significantly easier to find the bug!
CVE-2013-2175 was assigned to this bug.
This fix must absolutely be backported to 1.4.
Commit 5adeda1 (acl: add option -m to change the pattern matching method)
was not completely correct with regards to boolean fetches. It only used
the sample type to determine if the test had to be performed as a boolean
instead of relying on the match function. Due to this, a test such as the
following would not correctly match as the pattern would be ignored :
acl srv_down srv_is_up(s2) -m int 0
No backport is needed as this was merged first in 1.5-dev18.
This is useful in order to take different actions across restarts without
touching the configuration (eg: soft-stop), or to pass some information
such as the local host name to the next hop.
This one is wrong, never matches and cannot work. It was brought by a blind
copy-paste from the url_* version in 1.5-dev9, but there is no underlying
fetch returning an IP type for this.
The following 15 ACLs were missed in the previous review, and are not needed
either.
hdr_cnt, hdr_ip, hdr_val, rep_ssl_hello_type, req_len, req_ssl_hello_type,
scook_cnt, scook_val, shdr_cnt, shdr_ip, shdr_val, url_ip, url_port,
urlp_val, req_proto_http.
Commit bef91e71 added the possibility to automatically use some fetch
functions instead of ACL functions, but so far the fetch output type was
never used and setting the match method using -m was always mandatory.
Some fetch types are non-ambiguous and can intuitively be associated
with some ACL types :
SMP_T_BOOL -> bool
SMP_T_UINT/SINT -> int
SMP_T_IPV4/IPV6 -> ip
So let's have the ACL expression parser detect these ones automatically.
Other types are more ambiguous, especially everything related to strings,
as there are many string matching methods available and none of them is
the obvious standard matching method for any string. These ones will still
have to be specified using -m.
This configures the client-facing connection to receive a PROXY protocol
header before any byte is read from the socket. This is equivalent to
having the "accept-proxy" keyword on the "bind" line, except that using
the TCP rule allows the PROXY protocol to be accepted only for certain
IP address ranges using an ACL. This is convenient when multiple layers
of load balancers are passed through by traffic coming from public
hosts.
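Example (illustrative, assuming the rule is expressed as "expect-proxy layer4"
on a tcp-request connection rule) :
  frontend fe
    bind :443
    tcp-request connection expect-proxy layer4 if { src 10.0.0.0/8 }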
"set-mark" is used to set the Netfilter MARK on all packets sent to the
client to the value passed in <mark> on platforms which support it. This
value is an unsigned 32 bit value which can be matched by netfilter and
by the routing table. It can be expressed both in decimal or hexadecimal
format (prefixed by "0x"). This can be useful to force certain packets to
take a different route (for example a cheaper network path for bulk
downloads). This works on Linux kernels 2.6.32 and above and requires
admin privileges.
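Example (illustrative) :
  http-request set-mark 0x2 if { path_beg /downloads }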
This manipulates the TOS field of the IP header of outgoing packets sent
to the client. This can be used to set a specific DSCP traffic class based
on some request or response information. See RFC2474, 2597, 3260 and 4594
for more information.
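Example (illustrative; 184 is the EF DSCP expressed as a TOS byte value) :
  http-response set-tos 184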
Some users want to disable logging for certain non-important requests such as
stats requests or health-checks coming from other equipment. Other users want
to log with a higher importance (eg: notice) some special traffic (POST requests,
authenticated requests, requests coming from suspicious IPs) or some abnormally
large responses.
This patch responds to all these needs at once by adding a "set-log-level" action
to http-request/http-response. The 8 syslog levels are supported, as well as "silent"
to disable logging.
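Example (illustrative) :
  http-request set-log-level silent if { path /health }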
Some actions were clearly missing to process response headers. This
patch adds a new "http-response" ruleset which provides the following
actions :
- allow : stop evaluating http-response rules
- deny : stop and reject the response with a 502
- add-header : add a header in log-format mode
- set-header : set a header in log-format mode
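Example (illustrative sketch, the header names are placeholders) :
  backend app
    http-response set-header X-Via haproxy
    http-response deny if { res.hdr(X-Internal) -m found }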
Since commit cfd97c6f was merged into 1.5-dev14 (BUG/MEDIUM: checks:
prevent TIME_WAITs from appearing also on timeouts), some valid health
checks sometimes used to show some TCP resets. For example, this HTTP
health check sent to a local server :
19:55:15.742818 IP 127.0.0.1.16568 > 127.0.0.1.8000: S 3355859679:3355859679(0) win 32792 <mss 16396,nop,nop,sackOK,nop,wscale 7>
19:55:15.742841 IP 127.0.0.1.8000 > 127.0.0.1.16568: S 1060952566:1060952566(0) ack 3355859680 win 32792 <mss 16396,nop,nop,sackOK,nop,wscale 7>
19:55:15.742863 IP 127.0.0.1.16568 > 127.0.0.1.8000: . ack 1 win 257
19:55:15.745402 IP 127.0.0.1.16568 > 127.0.0.1.8000: P 1:23(22) ack 1 win 257
19:55:15.745488 IP 127.0.0.1.8000 > 127.0.0.1.16568: FP 1:146(145) ack 23 win 257
19:55:15.747109 IP 127.0.0.1.16568 > 127.0.0.1.8000: R 23:23(0) ack 147 win 257
After some discussion with Chris Huang-Leaver, it appeared clear that
what we want is to only send the RST when we have no other choice, which
means when the server has not closed. So we still keep SYN/SYN-ACK/RST
for pure TCP checks, but don't want to see an RST emitted as above when
the server has already sent the FIN.
The solution against this consists in implementing a "drain" function at
the protocol layer, which, when defined, causes as much as possible of
the input socket buffer to be flushed to make recv() return zero so that
we know that the server's FIN was received and ACKed. On Linux, we can make
use of MSG_TRUNC on TCP sockets, which has the benefit of draining everything
at once without even copying data. On other platforms, we read up to one
buffer of data before the close. If recv() manages to get the final zero,
we don't disable lingering. Same for hard errors. Otherwise we do.
In practice, on HTTP health checks we generally find that the close was
pending and is returned upon first recv() call. The network trace becomes
cleaner :
19:55:23.650621 IP 127.0.0.1.16561 > 127.0.0.1.8000: S 3982804816:3982804816(0) win 32792 <mss 16396,nop,nop,sackOK,nop,wscale 7>
19:55:23.650644 IP 127.0.0.1.8000 > 127.0.0.1.16561: S 4082139313:4082139313(0) ack 3982804817 win 32792 <mss 16396,nop,nop,sackOK,nop,wscale 7>
19:55:23.650666 IP 127.0.0.1.16561 > 127.0.0.1.8000: . ack 1 win 257
19:55:23.651615 IP 127.0.0.1.16561 > 127.0.0.1.8000: P 1:23(22) ack 1 win 257
19:55:23.651696 IP 127.0.0.1.8000 > 127.0.0.1.16561: FP 1:146(145) ack 23 win 257
19:55:23.652628 IP 127.0.0.1.16561 > 127.0.0.1.8000: F 23:23(0) ack 147 win 257
19:55:23.652655 IP 127.0.0.1.8000 > 127.0.0.1.16561: . ack 24 win 257
This change should be backported to 1.4 which is where Chris encountered
this issue. The code is different, so probably the tcp_drain() function
will have to be put in the checks only.
The req.hdr and res.hdr fetch methods do not work well on headers which
are allowed to contain commas, such as User-Agent, Date or Expires.
More specifically, full-length matching is impossible if a comma is
present.
This patch introduces 4 new fetch functions which are designed to work
with these full-length headers :
- req.fhdr, req.fhdr_cnt
- res.fhdr, res.fhdr_cnt
These ones do not stop at commas and permit returning full-length header
values.
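Example (illustrative) :
  http-request set-header X-Full-UA %[req.fhdr(User-Agent)]
  http-request deny if { req.fhdr_cnt(User-Agent) gt 1 }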
People who use "option dontlog-normal" are bothered with redirects and
stats being logged and reported as errors in the logs ("PR" = proxy
blocked the request).
This patch introduces a new flag 'L' for when a request is locally
processed, that is not considered as an error by the log filters. That
way we know a request was intercepted and processed by haproxy without
logging the line when "option dontlog-normal" is in effect.
Since 1.5-dev12 and commit 3bf1b2b8 (MAJOR: channel: stop relying on
BF_FULL to take action), the HTTP parser switched to channel_full()
instead of BF_FULL to decide whether a buffer had enough room to start
parsing a request or response. The problem is that channel_full()
intentionally ignores outgoing data, so a corner case exists where a
large response might still be left in a response buffer with just a
few bytes left (much less than the reserve), enough to accept a second
response past the last data, but not enough to permit the HTTP processor
to add some headers. Since all the processing relies on this space being
available, we can get some random crashes when clients pipeline requests.
The analysis of a core from haproxy configured with 20480 bytes buffers
shows this : with enough "luck", when sending back the response for the
first request, the client is slow, the TCP window is congested, the socket
buffers are full, and haproxy's buffer fills up. We still have 20230 bytes
of response data in a 20480 response buffer. The second request is sent to
the server which returns 214 bytes which fit in the small 250 bytes left
in this buffer. And the buffer arrangement makes it possible to escape all
the controls in http_wait_for_response() :
|<------ response buffer = 20480 bytes ------>|
[ 2/2 | 3 | 4 | 1/2 ]
^ start of circular buffer
1/2 = beginning of previous response (18240)
2/2 = end of previous response (1990)
3 = current response (214)
4 = free space (36)
- channel_full() returns false (20230 bytes are going to leave)
- the response headers do not wrap at the end of the buffer
- the remaining linear room after the headers is larger than the
reserve, because it's the previous response which wraps :
=> response is processed
Header rewriting causes it to reach 260 bytes, 10 bytes larger than what
the buffer could hold. So all computations during header addition are
wrong and lead to the corruption we've observed.
All the conditions are very hard to meet (which explains why it took
almost one year for this bug to show up) and are almost impossible to
reproduce on purpose on a test platform. But the bug is clearly there.
This issue was reported by Dinko Korunic who kindly devoted a lot of
time to provide countless traces and cores, and to experiment with
troubleshooting patches to knock the bug down. Thanks Dinko!
No backport is needed, but all 1.5-dev versions between dev12 and dev18
included must be upgraded. A workaround consists in setting option
forceclose to prevent pipelined requests from being processed.
I left a mistake in my previous patch bringing the crt-list feature,
which breaks clients with no SNI support.
Also remove the useless wildp = NULL as per a previous discussion.
We were using "tune.ssl.maxrecord 2000" and discovered an interesting
problem: SSL data sent from the server to the client showed occasional
corruption of the payload data.
The root cause was:
When ssl_max_record is smaller than the requested send amount
the ring buffer wrapping wasn't properly adjusting the
number of bytes to send.
I solved this by selecting the initial size based on the number
of output bytes that can be sent without splitting _before_ checking
against ssl_max_record.
We're often missing a third counter to track base, src and base+src at
the same time. Here we introduce track_sc3 to have this third counter.
It would be wise not to add much more counters because that slightly
increases the session size and processing time though the real issue
is more the declaration of the keywords in the code and in the doc.
By properly assigning the flags and values, it becomes easier to add
more tracked counters, for example for experimentation. It also slightly
reduces the code and the number of tests. No counters were added with
this patch.
This new pattern fetch returns the client certificate's SHA-1 fingerprint
(i.e. SHA-1 hash of DER-encoded certificate) in a binary chunk.
This can be useful to pass it to a server in a header or to stick a client
to a server across multiple SSL connections.
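Example passing it to the server in a request header (illustrative, assuming
the fetch is exposed as ssl_c_sha1) :
  http-request set-header X-SSL-Client-SHA1 %[ssl_c_sha1,hex]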
FreeBSD uses (IPPROTO_IP, IP_BINDANY) and (IPPROTO_IPV6, IPV6_BINDANY)
to enable transparent proxy on a socket.
This patch adds support for the relevant setsockopt() calls.
This patch does not change the logic of the code, it only changes the
way OS-specific defines are tested.
At the moment the transparent proxy code heavily depends on Linux-specific
defines. This first patch introduces a new define "CONFIG_HAP_TRANSPARENT"
which is set every time the defines used by transparent proxy are present.
This also means that with an up-to-date libc, it should not be necessary
anymore to force CONFIG_HAP_LINUX_TPROXY during the build, as the flags
will automatically be detected.
The CTTPROXY flags still remain separate because this older API doesn't
work the same way.
A new line has been added in the version output for haproxy -vv to indicate
what transparent proxy support is available.
Improve the crt-list file format to allow a rule to negate a certain SNI :
<crtfile> [[!]<snifilter> ...]
This can be useful when a domain supports a wildcard but you don't want to
deliver the wildcard cert for certain specific domains.
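Example crt-list contents (illustrative) :
  wildcard.example.pem   *.example.com !static.example.com
  static.example.pem     static.example.com
Here the wildcard certificate is served for all example.com subdomains except
static.example.com, which gets its own certificate.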
Global compression settings (windowsize and memlevel) were only considered
for the gzip algorithm but not the deflate algorithm. Since a single allocator
is used for both algos, if gzip was first initialized with parameters
smaller than the default, then initializing deflate afterwards with default
settings would result in overusing the small allocated areas.
To fix this, we make use of deflateInit2() for deflate_init() as well.
Thanks to Godbach for reporting this bug, introduced in 1.5-dev13 by commit
8b52bb38. No backport is needed.
It happens that openssl's API can differ between versions, causing some
serious trouble if the version used at runtime is not the same as used
for building.
Now we report the two versions separately along with a warning if the
version differs (except the patch version).
The source function will not work with the following line in the default section:
source 0.0.0.0 usesrc clientip
even though the related iptables and ip rule settings have been configured correctly.
But it works well in a backend section.
The reason is the operation at line 1815 of cfgparse.c, shown below:
curproxy->conn_src.opts = defproxy.conn_src.opts & ~CO_SRC_TPROXY_MASK;
clears the three low bits of conn_src.opts, which store the 'usesrc'
configuration. Without the correct bits set, the source address function cannot
work. They should be copied to the backend proxy without being modified.
Since conn_src.tproxy_addr was not copied from defproxy to the backend proxy
while initializing the backend proxy, the source function does not work
with an explicit source address set in the default section either.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
Note: the bug was introduced in 1.5-dev16 with commit ef9a3605
In 1.5-dev17 (commit 1facd6d6), we reorganized the way HTTP stats
requests are handled. When moving the code, we dropped a "return 0"
which happens upon incomplete POST request, so we now end up with
the next return 1 which causes processing to go on with next
analyser. This causes incomplete POST requests to try to forward
the request to servers, resulting in either a 404 or a 503 depending
on the configuration.
This patch fixes this regression to restore the previous behaviour.
It's not enough though, as it happens that the stats code is handled
after all http header processing but in the same function. The net
effect is that incomplete requests cause the header manipulations to
be performed multiple times, possibly resulting in multiple headers
in the request buffer. Since the stats requests are not meant to be
forwarded, it's not an issue yet but this is something to take care
of later.
A remaining issue that's not handled yet is that if the client does
not send the complete POST headers, then the request is finally
forwarded. This is not a regression, it has always been there and
seems to be caused by the lack of timeout processing when waiting
for the POST body. The solution to this issue would be to move the
handling of stats requests into a dedicated analyser placed after
http_process_request_body().
Bug reported by Guillaume de Lafond.
Implements the "res.comp" ACL which is a boolean returning 1 when a
response has been compressed by HAProxy or 0 otherwise.
Implements the "res.comp_algo" fetch which contains the name of the
algorithm HAProxy used to compress the response.
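Example (illustrative) :
  http-response set-header X-Comp-Algo %[res.comp_algo] if { res.comp }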
Bryan Talbot reported that a config with only "stats bind-process 1" in
the global section would crash during parsing. This bug was introduced
with this new statement in 1.5-dev13. The stats frontend must be allocated
if it was not yet.
No backport is needed. The workaround consists in having previously declared
any other stats keyword (generally stats socket).
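In other words, a global section such as the following (illustrative) is the
known-working form, with the stats socket declared before bind-process :
  global
    stats socket /var/run/haproxy.stat
    stats bind-process 1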
A "soft-stopped" server (weight 0) can be missed easily in the
webinterface. Fix this by using a specific class (and color).
Signed-off-by: Lukas Tribus <luky-37@hotmail.com>
When a server state change happens, a status bar appears with a link. Since
1.5-dev16 (commit e7dbfc66), it was not visible anymore because of
the CSS properties set on the whole "div" tag used by the tooltips. Fix this
by using a specific class for the tooltips.
The confirmation link on the web interface forgets to set the displaying
options, so that when one clicks on "[X] Action processed successfully",
the auto-refresh, up/down and scope search settings are lost. We need to
pass all these info in the new link.
commit 88c278fadf provided a new input field on the statistics page which was
focused by default. The autofocus prevented scrolling down the page with the
keyboard immediately after loading it, without first leaving that field.
It also interfered with "stats refresh" by scrolling up to this field on each
refresh.
This small patch removes the autofocus keyword and adds a tabindex="1" as
suggested by Vincent Bernat. It also cleans up the generated html code.
This patch adds a "scope" box in the statistics page in order to
display only proxies with a name that contains the requested value.
The scope filter is preserved across all clicks on the page.
Since ea68d36 we show whether JIT is enabled, based on the USE-flag
(USE_PCRE_JIT). This is too naive; libpcre may be built without JIT
support (which is the default).
Fix this by calling pcre_config(), which has the accurate information
we are looking for.
Example of a libpcre without JIT support after this patch:
> ./haproxy -vv | grep PCRE
> OPTIONS = USE_STATIC_PCRE=1 USE_PCRE_JIT=1
> Built with PCRE version : 8.32 2012-11-30
> PCRE library supports JIT : no (libpcre build without JIT?)
The compression state machine happens to start work that it cannot undo if
there's no more data in the input buffer, and has trouble accounting
for it. Fixing it requires more than a few lines, as the confusion is
in part caused by the way the pointers to the various places in the
message are handled internally. So as a temporary fix, let's disable
compression on chunk-encoded responses. This will give us more time
to perform the required changes.
Will Glass-Husain reported this breakage which was caused by commit
dec9814e merged in 1.5-dev12, and which was incomplete, resulting in
both "clear table" and "show table" doing the same thing.
No backport is needed.
Improve error log reporting in the format parser by always giving the
file name, line number, and directive name instead of the hard-coded
"log-format".
Previously we got this when dealing with log-format errors:
[WARNING] 101/183012 (8561) : Warning: log-format variable name 'r' is not suited to HTTP mode
[WARNING] 101/183012 (8561) : log-format: sample fetch <hdr(a,1)> may not be reliably used with 'log-format' because it needs 'HTTP request headers,HTTP response headers' which is not available here.
[WARNING] 101/183012 (8561) : Warning: no such variable name 'k' in log-format
Now we have this :
[WARNING] 101/183016 (8593) : parsing [fmt.cfg:8] : 'log-format' variable name 'r' is reserved for HTTP mode
[WARNING] 101/183016 (8593) : parsing [fmt.cfg:8] : 'log-format' : sample fetch <hdr(a,1)> may not be reliably used here because it needs 'HTTP request headers,HTTP response headers' which is not available here.
[WARNING] 101/183016 (8593) : parsing [fmt.cfg:15] : no such variable name 'k' in 'unique-id-format'
Commit a4312fa2 merged into dev18 improved log-format management by
processing "log-format" and "unique-id-format" where they were declared,
so that the faulty args could be reported with their correct line numbers.
Unfortunately, the log-format parser considers the proxy mode (TCP/HTTP)
and now if the directive is set before the "mode" statement, it can be
rejected and report warnings.
So we really need to parse these directives at the end of a section at
least. Right now we do not have an "end of section" event, so we need
to store the file name and line number for each of these directives,
and take care of them at the end.
One of the benefits is that now the line numbers can be inherited from
the line passing "option httplog" even if it's in a defaults section.
Future improvements should be performed to report line numbers in every
log-format processed by the parser.
When the parameter passed to a consistent hash is not found, we fall back to
round-robin using chash_get_next_server(). This one stores the last visited
server in lbprm.chash.last, which can be NULL upon the first invocation or if
the only server was recently brought up.
The loop used to scan for a server is able to skip the previously attempted
server in case of a redispatch, by passing this previous server in srvtoavoid.
For this reason, the loop stops when the currently considered server is
different from srvtoavoid and different from the original chash.last.
A problem happens in a special sequence : if a connection to a server fails,
then all servers are removed from the farm, then the original server is added
again before the redispatch happens, we have chash.last = NULL and srvtoavoid
set to the only server in the farm. Then this server is always equal to
srvtoavoid and never to NULL, and the loop never stops.
The fix consists in assigning the stop point to the first encountered node if
it was not yet set.
This issue cannot happen with the map-based algorithm since it's based on an
index and not a stop point.
This issue was reported by Henry Qian who kindly provided lots of critically
useful information to figure out the conditions to reproduce the issue.
The fix needs to be backported to 1.4 which is also affected.
Till now we used to call the function until the connection was established, which
means that the header rewriting was performed for nothing upon each event (eg:
uploaded contents) until the server responded or timed out.
Now we only call the function when we assign the server.
When syncing data between two haproxy instances, the received keys were
allocated on the stack but no room was reserved for the string type. So in
practice all strings of 16 or less bytes were properly handled (due to
the union with in6_addr which reserves some room), but above this some
local variables were overwritten. Up to 8 additional bytes could be
written without impact, but random behaviours are expected above, including
crashes and memory corruption.
If large enough strings are allowed in the configuration, it is possible that
code execution could be triggered when the function's return pointer is
overwritten.
A quick workaround consists in ensuring that no stick-table uses a string
larger than 16 bytes (default is 32). Another workaround is to disable peer
sync when strings are in use.
Thanks to Will Glass-Husain for bringing up this issue with a backtrace to
understand the issue.
The bug only affects 1.5-dev3 and above. No backport is needed.
We'll need to centralize some pool_alloc()/pool_free() calls in the
upcoming fix, so before that we need to reduce the number of places
where we leave the critical code.
According to "man pcreapi", pcre_compile() does not accept being passed
a NULL pointer in errptr or erroffset. It immediately returns NULL, causing
any expression to fail. So let's pass real variables and make use of them
to improve error reporting.
When an ACL keyword needs a mandatory argument and this argument is of
type proxy or table, it is allowed not to specify it so that current
proxy is used by default.
In order to achieve this, the ACL expression parser builds a dummy
argument from scratch and marks it unresolved.
However, since recent changes on the ACL and samples, an unresolved
argument needs to be added to the unresolved list. This specific code
did not do it, resulting in random data being used as a proxy pointer
if no argument was passed for a proxy name, possibly even causing a
crash.
A quick workaround consists in explicitly naming proxies in ACLs.
Commit 96199b10 reintroduced the splice() mechanism in the new connection
system. However, it failed to account for the number of transferred bytes,
allowing more bytes than scheduled to be transferred to the client. This
can cause an issue with small-chunked responses, where each packet from
the server may contain several chunks, because a single splice() call may
succeed, then try to splice() a second time as the pipe is not full, thus
consuming the next chunk size.
This patch also reverts commit baf2a5 ("OPTIM: splice: detect shutdowns...")
because it introduced a related regression. The issue is that splice() may
return less data than available also if the pipe is full, so having EPOLLRDHUP
after splice() returns less than expected is not a sufficient indication that
the input is empty.
In both cases, the issue may be detected by the presence of "SD" termination
flags in the logs, and worked around by disabling splicing (using "-dS").
This problem was reported by Sander Klein, and no backport is needed.
haproxy -vv shows build information about USE flags and lib versions.
This patch introduces information about PCRE and the new JIT feature.
It also makes USE_PCRE_JIT=1 appear in the haproxy -vv "OPTIONS".
This is useful since with the introduction of JIT we will see libpcre
related issues.
Sander Klein reported this bug. The test for the extra argument on these
rules prevents any condition from being added. The bug was introduced with
the feature itself in 1.5-dev16.
Released version 1.5-dev18 with the following main changes :
- DOCS: Add explanation of intermediate certs to crt paramater
- DOC: typo and minor fixes in compression paragraph
- MINOR: config: http-request configuration error message misses new keywords
- DOC: minor typo fix in documentation
- BUG/MEDIUM: ssl: ECDHE ciphers not usable without named curve configured.
- MEDIUM: ssl: add bind-option "strict-sni"
- MEDIUM: ssl: add mapping from SNI to cert file using "crt-list"
- MEDIUM: regex: Use PCRE JIT in acl
- DOC: simplify bind option "interface" explanation
- DOC: tfo: bump required kernel to linux-3.7
- BUILD: add explicit support for TFO with USE_TFO
- MEDIUM: New cli option -Ds for systemd compatibility
- MEDIUM: add haproxy-systemd-wrapper
- MEDIUM: add systemd service
- BUG/MEDIUM: systemd-wrapper: don't leak zombie processes
- BUG/MEDIUM: remove supplementary groups when changing gid
- BUG/MEDIUM: config: fix parser crash with bad bind or server address
- BUG/MINOR: Correct logic in cut_crlf()
- CLEANUP: checks: Make desc argument to set_server_check_status const
- CLEANUP: dumpstats: Make cli_release_handler() static
- MEDIUM: server: Break out set weight processing code
- MEDIUM: server: Allow relative weights greater than 100%
- MEDIUM: server: Tighten up parsing of weight string
- MEDIUM: checks: Add agent health check
- BUG/MEDIUM: ssl: openssl 0.9.8 doesn't open /dev/random before chroot
- BUG/MINOR: time: frequency counters are not totally accurate
- BUG/MINOR: http: don't process abortonclose when request was sent
- BUG/MEDIUM: stream_interface: don't close outgoing connections on shutw()
- BUG/MEDIUM: checks: ignore late resets after valid responses
- DOC: fix bogus recommendation on usage of gpc0 counter
- BUG/MINOR: http-compression: lookup Cache-Control in the response, not the request
- MINOR: signal: don't block SIGPROF by default
- OPTIM: epoll: make use of EPOLLRDHUP
- OPTIM: splice: detect shutdowns and avoid splice() == 0
- OPTIM: splice: assume by default that splice is working correctly
- BUG/MINOR: log: temporary fix for lost SSL info in some situations
- BUG/MEDIUM: peers: only the last peers section was used by tables
- BUG/MEDIUM: config: verbosely reject peers sections with multiple local peers
- BUG/MINOR: epoll: use a fix maxevents argument in epoll_wait()
- BUG/MINOR: config: fix improper check for failed memory alloc in ACL parser
- BUG/MINOR: config: free peer's address when exiting upon parsing error
- BUG/MINOR: config: check the proper variable when parsing log minlvl
- BUG/MEDIUM: checks: ensure the health_status is always within bounds
- BUG/MINOR: cli: show sess should always validate s->listener
- BUG/MINOR: log: improper NULL return check on utoa_pad()
- CLEANUP: http: remove a useless null check
- CLEANUP: tcp/unix: remove useless NULL check in {tcp,unix}_bind_listener()
- BUG/MEDIUM: signal: signal handler does not properly check for signal bounds
- BUG/MEDIUM: tools: off-by-one in quote_arg()
- BUG/MEDIUM: uri_auth: missing NULL check and memory leak on memory shortage
- BUG/MINOR: unix: remove the 'level' field from the ux struct
- CLEANUP: http: don't try to deinitialize http compression if it fails before init
- CLEANUP: config: slowstart is never negative
- CLEANUP: config: maxcompcpuusage is never negative
- BUG/MEDIUM: log: emit '-' for empty fields again
- BUG/MEDIUM: checks: fix a race condition between checks and observe layer7
- BUILD: fix a warning emitted by isblank() on non-c99 compilers
- BUILD: improve the makefile's support for libpcre
- MEDIUM: halog: add support for counting per source address (-ic)
- MEDIUM: tools: make str2sa_range support all address syntaxes
- MEDIUM: config: make use of str2sa_range() instead of str2sa()
- MEDIUM: config: use str2sa_range() to parse server addresses
- MEDIUM: config: use str2sa_range() to parse peers addresses
- MINOR: tests: add a config file to ease address parsing tests.
- MINOR: ssl: add a global tunable for the max SSL/TLS record size
- BUG/MINOR: syscall: fix NR_accept4 system call on sparc/linux
- BUILD/MINOR: syscall: add definition of NR_accept4 for ARM
- MINOR: config: report missing peers section name
- BUG/MEDIUM: tools: fix bad character handling in str2sa_range()
- BUG/MEDIUM: stats: never apply "unix-bind prefix" to the global stats socket
- MINOR: tools: prepare str2sa_range() to return an error message
- BUG/MEDIUM: checks: don't call connect() on unsupported address families
- MINOR: tools: prepare str2sa_range() to accept a prefix
- MEDIUM: tools: make str2sa_range() parse unix addresses too
- MEDIUM: config: make str2listener() use str2sa_range() to parse unix addresses
- MEDIUM: config: use a single str2sa_range() call to parse bind addresses
- MEDIUM: config: use str2sa_range() to parse log addresses
- CLEANUP: tools: remove str2sun() which is not used anymore.
- MEDIUM: config: add complete support for str2sa_range() in dispatch
- MEDIUM: config: add complete support for str2sa_range() in server addr
- MEDIUM: config: add complete support for str2sa_range() in 'server'
- MEDIUM: config: add complete support for str2sa_range() in 'peer'
- MEDIUM: config: add complete support for str2sa_range() in 'source' and 'usesrc'
- CLEANUP: minor cleanup in str2sa_range() and str2ip()
- CLEANUP: config: do not use multiple errmsg at once
- MEDIUM: tools: support specifying explicit address families in str2sa_range()
- MAJOR: listener: support inheriting a listening fd from the parent
- MAJOR: tools: support environment variables in addresses
- BUG/MEDIUM: http: add-header should not emit "-" for empty fields
- BUG/MEDIUM: config: ACL compatibility check on "redirect" was wrong
- BUG/MEDIUM: http: fix another issue caused by http-send-name-header
- DOC: mention the new HTTP 307 and 308 redirect statuses
- MEDIUM: poll: do not use FD_* macros anymore
- BUG/MAJOR: ev_select: disable the select() poller if maxsock > FD_SETSIZE
- BUG/MINOR: acl: ssl_fc_{alg,use}_keysize must parse integers, not strings
- BUG/MINOR: acl: ssl_c_used, ssl_fc{,_has_crt,_has_sni} take no pattern
- BUILD: fix usual isdigit() warning on solaris
- BUG/MEDIUM: tools: vsnprintf() is not always reliable on Solaris
- OPTIM: buffer: remove one jump in buffer_count()
- OPTIM: http: improve branching in chunk size parser
- OPTIM: http: optimize the response forward state machine
- BUILD: enable poll() by default in the makefile
- BUILD: add explicit support for Mac OS/X
- BUG/MAJOR: http: use a static storage for sample fetch context
- BUG/MEDIUM: ssl: improve error processing and reporting in ssl_sock_load_cert_list_file()
- BUG/MAJOR: http: fix regression introduced by commit a890d072
- BUG/MAJOR: http: fix regression introduced by commit d655ffe
- BUG/CRITICAL: using HTTP information in tcp-request content may crash the process
- MEDIUM: acl: remove flag ACL_MAY_LOOKUP which is improperly used
- MEDIUM: samples: use new flags to describe compatibility between fetches and their usages
- MINOR: log: indicate it when some unreliable sample fetches are logged
- MEDIUM: samples: move payload-based fetches and ACLs to their own file
- MINOR: backend: rename sample fetch functions and declare the sample keywords
- MINOR: frontend: rename sample fetch functions and declare the sample keywords
- MINOR: listener: rename sample fetch functions and declare the sample keywords
- MEDIUM: http: unify acl and sample fetch functions
- MINOR: session: rename sample fetch functions and declare the sample keywords
- MAJOR: acl: make all ACLs reference the fetch function via a sample.
- MAJOR: acl: remove the arg_mask from the ACL definition and use the sample fetch's
- MAJOR: acl: remove fetch argument validation from the ACL struct
- MINOR: http: add new direction-explicit sample fetches for headers and cookies
- MINOR: payload: add new direction-explicit sample fetches
- CLEANUP: acl: remove ACL hooks which were never used
- MEDIUM: proxy: remove acl_requires and just keep a flag "http_needed"
- MINOR: sample: provide a function to report the name of a sample check point
- MAJOR: acl: convert all ACL requires to SMP use+val instead of ->requires
- CLEANUP: acl: remove unused references to ACL_USE_*
- MINOR: http: replace acl_parse_ver with acl_parse_str
- MEDIUM: acl: move the ->parse, ->match and ->smp fields to acl_expr
- MAJOR: acl: add option -m to change the pattern matching method
- MINOR: acl: remove the use_count in acl keywords
- MEDIUM: acl: have a pointer to the keyword name in acl_expr
- MEDIUM: acl: support using sample fetches directly in ACLs
- MEDIUM: http: remove val_usr() to validate user_lists
- MAJOR: sample: maintain a per-proxy list of the fetch args to resolve
- MINOR: ssl: add support for the "alpn" bind keyword
- MINOR: http: status code 303 is HTTP/1.1 only
- MEDIUM: http: implement redirect 307 and 308
- MINOR: http: status 301 should not be marked non-cacheable
The ALPN extension is meant to replace the now deprecated NPN extension.
This patch implements support for it. It requires a version of openssl
with support for this extension. Patches are available here right now :
http://html5labs.interopbridges.com/media/167447/alpn_patches.zip
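As a purely illustrative example (certificate path and protocol list are made
up), the new "alpn" bind keyword would be used like this :
bind :443 ssl crt /etc/haproxy/site.pem alpn http/1.1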
While ACL args were resolved after all the config was parsed, it was not the
case with sample fetch args because they're almost everywhere now.
The issue is that ACLs now solely rely on sample fetches, so their args
resolving doesn't work anymore. And many fetches involving a server, a
proxy or a userlist don't work at all.
The real issue is that at the bottom layers we have no information about
proxies, line numbers, even ACLs in order to report understandable errors,
and that at the top layers we have no visibility over the locations where
fetches are referenced (think log node).
After several unsatisfying solution attempts, we now have a new
concept of args list. The principle is that every proxy has a list head
which contains a number of indications such as the config keyword, the
context where it's used, the file and line number, etc... and a list of
arguments. This list head is of the same type as the elements, so it
serves as a template for adding new elements. This way, it is filled from
top to bottom by the callers with the information they have (eg: line
numbers, ACL name, ...) and the lower layers just have to duplicate it and
add an element when they face an argument they cannot resolve yet.
Then at the end of the configuration parsing, a loop passes over each
proxy's list and resolves all the args in sequence. And this way there is
all necessary information to report verbose errors.
The first immediate benefit is that for the first time we got very precise
location of issues (arg number in a keyword in its context, ...). Second,
in order to do this we had to parse log-format and unique-id-format a bit
earlier, so that was a great opportunity for doing so when the directives
are encountered (unless it's a default section). This way, the recorded
line numbers for these args are the ones of the place where the log format
is declared, not the end of the file.
Userlists report slightly more information now. They're the only remaining
ones in the ACL resolving function.
Now it becomes possible to directly use sample fetches as the ACL fetch
methods. In this case, the matching method is mandatory. This makes it
possible to form more ACL combinations from existing fetches and will limit
the need for new ACLs when everything can be built from sample fetches
and matches.
The acl_expr struct used to hold a pointer to the ACL keyword. But since
we now have all the relevant pointers, we don't need that anymore, we just
need the pointer to the keyword as a string in order to return warnings
and error messages.
So let's change this in order to remove the dependency on the acl_keyword
struct from acl_expr.
During this change, acl_cond_kw_conflicts() used to return a pointer to an
ACL keyword but had to be changed to return a const char* for the same reason.
ACL expressions now support "-m" in addition to "-i" and "-f". This new
option is followed by the name of the pattern matching method to be used
on the extracted pattern. This makes it possible to reuse existing sample
fetch methods with other matching methods (eg: regex). A "found" matching
method ignores any pattern and only verifies that the required sample was
found (useful for cookies).
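For instance (ACL, header and cookie names are made up), an existing sample
fetch can be combined with an explicit matching method :
acl app_is_foo req.hdr(X-App) -m sub foo
acl has_tracking req.cook(tracking) -m found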
The HTTP version parser used in ACLs has long been a string and
still had its own parser. This makes no sense, switch it to use
the standard string parser.
The ACLs now use the fetch's ->use and ->val to decide upon compatibility
between the place where they are used and where the information are fetched.
The code is capable of reporting warnings about very fine incompatibilities
between certain fetches and an exact usage location, so it is expected that
some new warnings will be emitted on some existing configurations.
Two degrees of detection are provided :
- detecting ACLs that never match
- detecting keywords that are ignored
All tests show that this seems to work well, though bugs are still possible.
Proxy's acl_requires was a copy of all bits taken from ACLs, but we'll
get rid of ACL flags and only rely on sample fetches soon. The proxy's
acl_requires was only used to allocate an HTTP context when needed, and
was even forced in HTTP mode. So better have a flag which exactly says
what it's supposed to be used for.
These hooks, which established the relation between ACL_USE_* and the location
where the ACL were used, were never used because they were superseded with the
sample capabilities. Remove them now.
Similarly to the previous commit fixing "hdr" and "cookie" in HTTP, we have to
deal with "payload" and "payload_lv" which are request-only for ACLs and
request/response for sample fetches depending on the context, and to a lesser
extent with other req_* and rep_* fetches. So let's add explicit "req." and
"res." variants and make the ACLs rely on those instead.
Since "hdr" and "cookie" were ambiguously referring to the request or response
depending on the context, we need a way to explicitly specify the direction.
By prefixing the fetches names with "req." and "res.", we can now restrict such
fetches to the appropriate direction. At the moment the fetches are explicitly
declared by later we might think about having an automatic match when "req." or
"res." appears. These explicit fetches are now used by the relevant ACLs.
ACL fetch being inherited from the sample fetch keyword, we don't need
anymore to specify what function to use to validate the fetch arguments.
Note that the job is still done in the ACL parsing code based on elements
from the sample fetch structs.
Now that ACLs solely rely on sample fetch functions, make them use the
same arg mask. All inconsistencies have been fixed separately prior to
this patch, so this patch almost only adds a new pointer indirection
and removes all references to ARG*() in the definitions.
The parsing is still performed by the ACL code though.
ACL fetch functions used to directly reference a fetch function. Now
that all ACL fetches have their sample fetches equivalent, we can make
ACLs reference a sample fetch keyword instead.
In order to simplify the code, a sample keyword name may be NULL if it
is the same as the ACL's, which is the most common case.
A minor change appeared: http_auth always expects one argument, though
the ACL allowed it to be missing and only reported it as such afterwards,
so the ACL was fixed to match this. This is not really a bug.
The following sample fetch functions were only usable by ACLs but are now
usable by sample fetches too :
cook, cook_cnt, cook_val, hdr_cnt, hdr_ip, hdr_val, http_auth,
http_auth_group, http_first_req, method, req_proto_http, req_ver,
resp_ver, scook, scook_cnt, scook_val, shdr, shdr_cnt, shdr_ip,
shdr_val, status, urlp, urlp_val,
Most of them won't bring much benefit at the moment, or are even aliases of
existing ones, however they'll be needed for ACL->SMP convergence.
A new val_usr() function was added to resolve userlist names into pointers.
The http_auth_group ACL forgot to make its first argument mandatory, so
there was a check in cfgparse to report a vague error. Now that args are
correctly parsed, let's report something more precise.
All urlp* ACLs now support an optional 3rd argument like their sample
counterpart, which is the optional delimiter.
The fetch functions have been renamed "smp_fetch_*".
Some args controls on the sample keywords have been relaxed so that we
can soon use them for ACLs :
- cookie now accepts an optional name ; it will return the
first matching cookie if the name is not set ;
- same for set-cookie and hdr
The following sample fetch functions were only usable by ACLs but are now
usable by sample fetches too :
dst_conn, so_id,
The fetch functions have been renamed "smp_fetch_*".
The following sample fetch functions were only usable by ACLs but are now
usable by sample fetches too :
fe_conn, fe_id, fe_sess_rate
The fetch functions have been renamed "smp_fetch_*".
The following sample fetch functions were only usable by ACLs but are now
usable by sample fetches too :
avg_queue, be_conn, be_id, be_sess_rate, connslots, nbsrv,
queue, srv_conn, srv_id, srv_is_up, srv_sess_rate
The fetch functions have been renamed "smp_fetch_*".
The file acl.c is a real mess, it both contains functions to parse and
process ACLs, and some sample extraction functions which act on buffers.
Some other payload analysers were arbitrarily dispatched to proto_tcp.c.
So now we're moving all payload-based fetches and ACLs to payload.c
which is capable of extracting data from buffers and relies on everything
that is protocol-independent. That way we can safely inflate this file
and only use the other ones when some fetches are really specific (eg:
HTTP, SSL, ...).
As a result of this cleanup, the following new sample fetches became
available even if they're not really useful :
always_false, always_true, rep_ssl_hello_type, rdp_cookie_cnt,
req_len, req_ssl_hello_type, req_ssl_sni, req_ssl_ver, wait_end
The function 'acl_fetch_nothing' was wrong and never used anywhere so it
was removed.
The "rdp_cookie" sample fetch used to have a mandatory argument while it
was optional in ACLs, which are supposed to iterate over RDP cookies. So
we're making it optional as a fetch too, and it will return the first one.
If a log-format involves some sample fetches that may not be present at
the logging instant, we can now report a warning.
Note that this is done both for log-format and for add-header and carefully
respects the original fetch keyword's capabilities.
Samples fetches were relying on two flags SMP_CAP_REQ/SMP_CAP_RES to describe
whether they were compatible with requests rules or with response rules. This
was never reliable because we need a finer granularity (eg: an HTTP request
method needs to parse an HTTP request, and is available past this point).
Some fetches also depend on the context (eg: "hdr" uses the request or the
response depending on where it's involved, causing some ambiguity).
In order to solve this, we need to precisely indicate in fetches what they
use, and their users will have to compare with what they have.
So now we have a bunch of bits indicating where the sample is fetched in the
processing chain, with a few variants indicating for some of them if it is
permanent or volatile (eg: an HTTP status is stored into the transaction so
it is permanent, despite being caught in the response contents).
The fetches also have a second mask indicating their validity domain. This one
is computed from a conversion table at registration time, so there is no need
for doing it by hand. This validity domain consists of a bitmask with one bit
set for each usage point in the processing chain. Some provisions were made
for upcoming controls such as connection-based TCP rules which apply on top of
the connection layer but before instantiating the session.
Then everywhere a fetch is used, the bit for the control point is checked in
the fetch's validity domain, and it becomes possible to finely ensure that a
fetch will work or not.
Note that we need these two separate bitfields because some fetches are usable
both in request and response (eg: "hdr", "payload"). So the keyword will have
a "use" field made of a combination of several SMP_USE_* values, which will be
converted into a wider list of SMP_VAL_* flags.
The knowledge of permanent vs dynamic information has disappeared for now, as
it was never used. Later we'll probably reintroduce it differently when
dealing with variables. Its only use at the moment could have been to avoid
caching a dynamic rate measurement, but nothing is cached as of now.
This flag is used on ACL matches that support looking up patterns in
trees. At the moment, only strings and IPs support tree-based lookups,
but the flag is randomly set also on integers and binary data, and is not
even always set on strings or IPs.
Better get rid of this mess by only relying on the matching function to
decide whether or not it supports tree-based lookups, this is safer and
easier to maintain.
During normal HTTP request processing, request buffers are realigned if
there are less than global.maxrewrite bytes available after them, in
order to leave enough room for rewriting headers after the request. This
is done in http_wait_for_request().
However, if some HTTP inspection happens during a "tcp-request content"
rule, this realignment is not performed. In theory this is not a problem
because empty buffers are always aligned and TCP inspection happens at
the beginning of a connection. But with HTTP keep-alive, it also happens
at the beginning of each subsequent request. So if a second request was
pipelined by the client before the first one had a chance to be forwarded,
the second request will not be realigned. Then, http_wait_for_request()
will not perform such a realignment either because the request was
already parsed and marked as such. The consequence of this, is that the
rewrite of a sufficient number of such pipelined, unaligned requests may
leave less room past the request being processed than the configured
reserve, which can lead to a buffer overflow if request processing appends
some data past the end of the buffer.
A number of conditions are required for the bug to be triggered :
- HTTP keep-alive must be enabled ;
- HTTP inspection in TCP rules must be used ;
- some request appending rules are needed (reqadd, x-forwarded-for)
- since empty buffers are always realigned, the client must pipeline
enough requests so that the buffer always contains something till
the point where there is no more room for rewriting.
While such a configuration is quite unlikely to be met (which is
confirmed by the bug's lifetime), a few people do use these features
together for very specific usages. And more importantly, writing such
a configuration and the request to attack it is trivial.
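As a purely hypothetical sketch (proxy name, timers and header are invented)
combining the conditions above :
frontend fe_web
    mode http
    option http-server-close
    tcp-request inspect-delay 10s
    tcp-request content accept if HTTP
    reqadd X-Injected:\ some-value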
A quick workaround consists in forcing keep-alive off by adding
"option httpclose" or "option forceclose" in the frontend. Alternatively,
disabling HTTP-based TCP inspection rules is enough if the application
supports it.
At first glance, this bug does not look like it could lead to remote code
execution, as the overflowing part is controlled by the configuration and
not by the user. But some deeper analysis should be performed to confirm
this. And anyway, corrupting the process' memory and crashing it is quite
trivial.
Special thanks go to Yves Lafon from the W3C who reported this bug and
deployed significant efforts to collect the relevant data needed to
understand it in less than one week.
CVE-2013-1912 was assigned to this issue.
Note that 1.4 is also affected so the fix must be backported.
Sander Klein reported that since last snapshot, some downloads would
hang from nginx but succeed from apache. The culprit was not too hard
to find given the low number of recent changes affecting the data path.
Commit d655ffe slightly reorganized the HTTP state machine and
introduced this regression. The reason is that we must never jump
into the MSG_DONE case without first flushing remaining data because
this is not done anymore afterwards. This part is scheduled for
being reorganized since it's totally ugly, especially since we added
compression, and this regression is an illustration of its lack of
readability. The issue is entirely dependent on the server close sequence,
which explains why it was reproducible only with nginx here.
This commit fixed a bug and introduced a new one at the same time.
It's a stupid typo, the index to store the context is [0], not [2].
The effect is that parsing the header can loop forever if multiple
headers are found. This issue was reported by Lukas Tribus.
fe61656b added the ability to load a list of certificates from a file,
but error control was incomplete and misleading, as some errors such
as missing files were not reported, and errors reported with Alert()
instead of memprintf() were inappropriate and mixed with upper errors.
Also, the code really supports a single SNI filter right now, so let's
correct it and the doc for that, leaving room for later change if needed.
It designates a list of PEM files with an optional list of SNI filters
per certificate, with the following format for each line :
<crtfile>[ <snifilter>]*
Wildcards are supported in the SNI filter. The certificates will be
presented to clients who provide a valid TLS Server Name Indication
field matching one of these filters. If no SNI filter is specified, the
CN and alt subjects are used.
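An illustrative crt-list file (file names and SNI patterns are invented) :
site1.pem www.example.com
site2.pem *.example.org
default.pem
It would then be referenced from a bind line such as :
bind :443 ssl crt-list /etc/haproxy/crt-list.txt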
Formerly, if A was replaced by B, and then B by C before
A finished exiting, we didn't wait for B to finish so it
ended up as a zombie process.
Fix this by waiting for every child we spawn.
Signed-off-by: Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
Baptiste Assmann reported that the cook*() ACLs do not work anymore.
The reason is the way we store the hdr_ctx between subsequent calls
to smp_fetch_cookie() since commit 3740635b (1.5-dev10).
The smp->ctx.a[] storage holds up to 8 pointers. It is not meant for
generic storage. We used to store hdr_ctx in the ctx, but while it used
to just fit for smp_fetch_hdr(), it does not for smp_fetch_cookie()
since we stored it at offset 2.
The correct solution is to use this storage to store a pointer to the
current hdr_ctx struct which is statically allocated.
Seen on Solaris 8, calling vsnprintf() with a null-size results
in the output size not being computed. This causes some random
behaviour including crashes when trying to display error messages
when loading an invalid configuration.
src/standard.c: In function `str2sa_range':
src/standard.c:734: warning: subscript has type `char'
This one was recently introduced by commit c120c8d3.
Some recent glibc updates have added controls on FD_SET/FD_CLR/FD_ISSET
that crash the program if it tries to use a file descriptor larger than
FD_SETSIZE.
For this reason, we now control the compatibility between global.maxsock
and FD_SETSIZE, and refuse to use select() if too many FDs are expected
to be used. Note that on Solaris, FD_SETSIZE is already forced
to 65536, and that FreeBSD and OpenBSD allow it to be redefined, though
this is not needed thanks to kqueue which is much more efficient.
In practice, since poll() is enabled on all targets, it should not cause
any problem, unless it is explicitly disabled.
This change must be backported to 1.4 because the crashes caused by glibc
have already been reported on this version.
Some recent glibc updates have added controls on FD_SET/FD_CLR/FD_ISSET
that crash the program if it tries to use a file descriptor larger than
FD_SETSIZE.
Do not rely on FD_* macros anymore and replace them with bit fields.
An issue reported by David Coulson is that when using http-send-name-header,
the response processing would randomly be performed. The issue was first
diagnosed by Cyril Bonté as being related to a time race when processing
the closing of the response.
In practice, the issue is a bit trickier. It happens that
http_send_name_header() did not update msg->sol after a rewrite. This
counter is supposed to point to the beginning of the message's body
once headers are scheduled for being forwarded. And not updating it
means that the first forwarding of the request headers in
http_request_forward_body() does not send the correct count, leaving
some bytes in chn->to_forward.
Then if the server sends its response in a single packet with the
close, the stream interface switches to state SI_ST_DIS which in
turn moves to SI_ST_CLO in process_session(), and to close the
outgoing connection. This is detected by http_request_forward_body(),
which then switches the request message to the error state, and syncs
all FSMs and removes any response analyser.
The response analyser being removed, no processing is performed on
the response buffer, which is tunnelled as-is to the client.
Of course, the correct fix consists in having http_send_name_header()
update msg->sol. Normally this ought not to have been needed, but it
is an abuse to modify data already scheduled for being forwarded, so
it is expected that such specific handling has to be done there. Better
not have generic functions deal with such cases, so that it does not
become the standard.
Note: 1.4 does not have this issue even if it does not update the
pointer either, because it forwards from msg->som which is not
updated at the moment the connect() succeeds. So no backport is
required.
The check was made on "cond" instead of "rule->cond", so it never
emitted any warning since either the rule was NULL or it was set to
the last condition met.
This is 1.5-specific and the bug was introduced by commit 4baae248
in 1.5-dev17, so no backport is needed.
Patch 6cbbdbf3 fixed the missing "-" delimiters in logs but it caused
them to be emitted with "http-request add-header", even though it was
correctly fixed for the unique-id format. Fix this by simply removing
LOG_OPT_MANDATORY in this case.
Now that all addresses are parsed using str2sa_range(), it becomes easy
to add support for environment variables and use them everywhere an address
is needed. Environment variables are used as $VAR or ${VAR} as in shell.
Any number of variables may compose an address, allowing various fantasies
such as "fd@${FD_HTTP}" or "${LAN_DC1}.1:80".
These ones are usable in logs, bind, servers, peers, stats socket, source,
dispatch, and check address.
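For example (the variable names follow the ones above and are arbitrary) :
bind fd@${FD_HTTP}
server dc1 ${LAN_DC1}.1:80 check
The variables are taken from the process environment, so they would be
exported before starting haproxy (eg: FD_HTTP=4 LAN_DC1=10.1.2).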
Using the address syntax "fd@<num>", a listener may inherit a file
descriptor that the caller process has already bound and passed as
this number. The fd's socket family is detected using getsockname(),
and the usual initialization is performed through the existing code
for that family, but the socket creation is skipped.
Whether the parent has performed the listen() call or not is not
important as this is detected.
For UNIX sockets, we immediately clear the path after preparing a
socket so that we never remove it in case an abort would happen due
to a late error during startup.
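A minimal sketch (the descriptor number is arbitrary) : the parent process
creates, binds and listens on the socket, starts haproxy with that socket
passed as fd 3, and the configuration simply references it :
bind fd@3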
This change allows one to force the address family in any address parsed
by str2sa_range() by specifying it as a prefix followed by '@' then the
address. Currently supported address prefixes are 'ipv4@', 'ipv6@', 'unix@'.
This also helps force name resolution for host names (when getaddrinfo is used),
and forces the family of the empty address (eg: 'ipv4@' = 0.0.0.0 while
'ipv6@' = ::).
The main benefit is that unix sockets can now get a local name without
being forced to begin with a slash. This is useful during development as
it is no longer necessary to place the stats socket in /tmp.
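Illustrative uses of these prefixes (addresses and the socket name are only
examples) :
bind ipv4@:80
bind ipv6@:80
stats socket unix@haproxy.stats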
Several of the parsing functions made use of multiple errmsg/err_msg
variables which had to be freed, while there is already one in each
function that is freed upon exit. Adapt the code to use the existing
variable exclusively.
Don't use a statically allocated address both for str2ip and str2sa_range,
use the same one. The inet and unix code paths have been split a little
better to improve readability.
The 'source' and 'usesrc' statements now completely rely on str2sa_range() to
parse an address. A test is made to ensure that the address family supports
connect().
Now that str2sa_range() knows how to parse UNIX addresses, make str2listener()
use it. It simplifies the function. Next step consists in unifying the error
handling to further simplify the call.
Tests have been done and show that unix sockets are correctly handled, with
and without prefixes, both for global stats and normal "bind" statements.
str2sa_range() now considers that any address beginning with '/' is a UNIX
address. It is compatible with all callers at the moment since all of them
perform this test and use a different parser for such addresses. However,
some parsers (eg: servers) still don't check for unix addresses.
We'll need str2sa_range() to support a prefix for unix sockets. Since
we don't always want to use it (eg: stats socket), let's not take it
unconditionally from global but let the caller pass it.
At the moment, all address families supported on a "server" statement support
a connect() method, but this will soon change with the generalization of
str2sa_range(). Checks currently call ->connect() unconditionally so let's
add a check for this.
The "unix-bind prefix" feature was made for explicit "bind" statements. Since
the stats socket was changed to use str2listener(), it implicitly inherited
from this feature. But both are defined in the global section, and we don't
want them to be position-dependant.
So let's make str2listener() explicitly not apply the unix-bind prefix to the
global stats frontend.
This only affects 1.5-dev so it does not need any backport.
Commit d4448bc8 brought support for parsing port ranges, but invalid
characters are not properly handled and can result in a crash while
parsing the configuration if an invalid character is present in the
port, because the return value is set to NULL then dereferenced.
Add new tunable "tune.ssl.maxrecord".
Over SSL/TLS, the client can decipher the data only once it has received
a full record. With large records, it means that clients might have to
download up to 16kB of data before starting to process them. Limiting the
record size can improve page load times on browsers located over high
latency or low bandwidth networks. It is suggested to find optimal values
which fit into 1 or 2 TCP segments (generally 1448 bytes over Ethernet
with TCP timestamps enabled, or 1460 when timestamps are disabled), keeping
in mind that SSL/TLS adds some overhead. Typical values of 1419 and 2859
gave good results during tests. Use "strace -e trace=write" to find the
best value.
This trick was first suggested by Mike Belshe :
http://www.belshe.com/2010/12/17/performance-and-the-tls-record-size/
Then requested again by Ilya Grigorik who provides some hints here :
http://ofps.oreilly.com/titles/9781449344764/_transport_layer_security_tls.html#ch04_00000101
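For instance, one of the values mentioned above would be set in the global
section (the value itself should be tuned per site) :
global
    tune.ssl.maxrecord 2859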
When parsing the config, we now use str2sa_range() to detect when
ranges or port offsets were improperly used. Among the new checks
are "log", "source", "addr", "usesrc" which previously didn't check
for extra parameters.
Right now we have multiple methods for parsing IP addresses in the
configuration. This is quite painful. This patch aims at adapting
str2sa_range() to make it support all formats, so that the callers
perform the appropriate tests on the return values. str2sa() was
changed to simply return str2sa_range().
The output values are now the following ones (taken from the comment
on top of the function).
Converts <str> to a locally allocated struct sockaddr_storage *, and a port
range or offset consisting in two integers that the caller will have to
check to find the relevant input format. The following formats are supported :
String format | address | port | low | high
addr | <addr> | 0 | 0 | 0
addr: | <addr> | 0 | 0 | 0
addr:port | <addr> | <port> | <port> | <port>
addr:pl-ph | <addr> | <pl> | <pl> | <ph>
addr:+port | <addr> | <port> | 0 | <port>
addr:-port | <addr> |-<port> | <port> | 0
The detection of a port range or increment by the caller is made by
comparing <low> and <high>. If both are equal, then port 0 means no port
was specified. The caller may pass NULL for <low> and <high> if it is not
interested in retrieving port ranges.
Note that <addr> above may also be :
- empty ("") => family will be AF_INET and address will be INADDR_ANY
- "*" => family will be AF_INET and address will be INADDR_ANY
- "::" => family will be AF_INET6 and address will be IN6ADDR_ANY
- a host name => family and address will depend on host name resolving.
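As a hedged illustration (addresses and ports are made up) of where these
forms typically appear in a configuration :
bind :8000-8005                     # addr:pl-ph, listen on a port range
bind :::443                         # "::" address, IPv6 wildcard
server app1 10.0.0.10:+1000 check   # addr:+port, relative port offset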
If an address is improperly formatted on a bind or server line
and haproxy is built to use getaddrinfo, then a crash may occur
upon the call to freeaddrinfo().
Thanks to Jon Meredith for helping me patch this for SmartOS,
I am not a C/GDB wizard.
Support for server side TFO was actually introduced in linux-3.7,
linux-3.6 just has client support.
This patch fixes documentation and a code comment about the
kernel requirement. It also fixes a wrong tfo related code
comment in src/proto_tcp.c.
Commit a2b9dad introduced use of isblank() which is not present everywhere
and requires -std=c99 or various other defines. Here on gcc-3.4 with glibc
2.3 it emits a warning though it works :
src/checks.c: In function 'event_srv_chk_r':
src/checks.c:1007: warning: implicit declaration of function 'isblank'
This macro matches only 2 values, better replace it with the explicit match.
Support an agent health check performed by opening a TCP socket to a
pre-defined port and reading an ASCII string. The string should have one of
the following forms:
* An ASCII representation of a positive integer percentage.
e.g. "75%"
Values in this format will set the weight proportional to the initial
weight of a server as configured when haproxy starts.
* The string "drain".
This will cause the weight of a server to be set to 0, and thus it will
not accept any new connections other than those that are accepted via
persistence.
* The string "down", optionally followed by a description string.
Mark the server as down and log the description string as the reason.
* The string "stopped", optionally followed by a description string.
This currently has the same behaviour as "down".
* The string "fail", optionally followed by a description string.
This currently has the same behaviour as "down".
An agent health check may be configured using "option lb-agent-chk".
The use of an alternate check port, used to obtain the agent health check
information described above as opposed to the port of the service,
may be useful in conjunction with this option.
e.g.
option lb-agent-chk
server http1_1 10.0.0.10:80 check port 10000 weight 100
Signed-off-by: Simon Horman <horms@verge.net.au>
Detect:
* Empty weight string, including no digits before '%' in relative
weight string
* Trailing garbage, including between the last integer and '%'
in relative weights
The motivation for this is to allow the weight string to be safely
logged if successfully parsed by this function
Signed-off-by: Simon Horman <horms@verge.net.au>
Allow relative weights greater than 100%,
capping the absolute value to 256 which is
the largest supported absolute weight.
Signed-off-by: Simon Horman <horms@verge.net.au>
Break out set weight processing code.
This is in preparation for reusing the code.
Also, remove duplicate check in nested if clauses.
(px->lbprm.algo & BE_LB_PROP_DYN) is checked by
the immediate outer if clause, so there is no need
to check it a second time.
Signed-off-by: Simon Horman <horms@verge.net.au>
Currently, to reload haproxy configuration, you have to use "-sf".
There is a problem with this way of doing things. First of all, in the systemd world,
reload commands should be "oneshot" ones, which means they should not become the new main
process but rather a tool which makes a call to it and then exits. With the current approach,
the reload command is the new main command and moreover, it makes the previous one exit.
Systemd only tracks the main program, and seeing it end, it assumes it either finished or failed,
and kills everything remaining as a garbage collector. We then end up with no haproxy running
at all.
This patch adds a wrapper around haproxy; no changes at all have been made to haproxy itself,
so it's not intrusive and doesn't change anything for other hosts. What this wrapper does
is basically launch haproxy as a child, listen to the SIGUSR2 signal (not to conflict with
haproxy itself), and spawn a new haproxy with "-sf" as a child to take over from the
first one.
Signed-off-by: Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
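A rough sketch of the intended interaction (paths are illustrative, not the
shipped unit file) :
haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
A reload is then requested by sending SIGUSR2 to the wrapper (eg: from a
unit's ExecReload directive), which makes it spawn a new haproxy with "-sf"
against the previous processes :
kill -USR2 <pid of the wrapper>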
This patch adds a new option "-Ds" which is exactly like "-D", but instead of
forking n times to get n jobs running and then exiting, prefers to wait for all the
children it just created. With this done, haproxy becomes more systemd-compliant,
without changing anything for other systems.
Signed-off-by: Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
When observe layer7 is enabled on a server, a response may cause the server
to be marked down while a check is in progress. When the check finally
completes, the connection is not properly released in process_chk() because
the server's state makes it think that no check was in progress due to the
previously reported failure.
When a new check gets scheduled, it reuses the same connection structure
which is reinitialized. When the server finally closes the previous
connection, epoll_wait() notifies conn_fd_handler() which sees that the
old connection is still referenced by fdtab[fd], but it can not do anything
with this fd which does not match conn->t.sock.fd. So epoll_wait() keeps
reporting this fd forever.
The solution is to make process_chk() always take care of closing
the connection and not make it rely on the connection layer to do so.
Special thanks go to James Cole and Finn Arne Gangstad who encountered
the issue almost at the same time and took care of reporting a very
detailed analysis with rich information to help understand the issue.
Commit 2b0108ad accidentally got rid of the ability to emit a "-" for
empty log fields. This can happen for captured request and response
cookies, as well as for fetches. Since we don't want to have this done
for headers however, we set the default log method when parsing the
format. It is still possible to force the desired mode using +M/-M.
OpenSSL needs to access /dev/urandom to initialize its internal random
number generator. It does so when it needs a random number for the first
time, which fails if that happens during a handshake performed after the
chroot(), causing all incoming SSL connections to fail.
This fix consists in calling RAND_bytes() to produce a random before
the chroot, which will in turn open /dev/urandom before it's too late,
and avoid the issue.
If the random generator fails to work while processing the config,
haproxy now fails with an error instead of causing SSL connections to
fail at runtime.
This new option ensures that there is no possible fallback to a default
certificate if the client does not provide an SNI which is explicitly
handled by a certificate.
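A hypothetical bind line using it (certificate path invented) :
bind :443 ssl crt /etc/haproxy/certs/site.pem strict-sni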
In select_compression_response_header(), some tests are rather confusing
as the "fail" label is used to deinitialize the compression context for
the session while it's branched only before initialization succeeds. The
test is always false here and the dereferencing of the comp_algo pointer
which might be null is also confusing. Remove that code which is not needed
anymore since commit ec3e3890 got rid of the latest issues.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
Commit 290e63aa moved the unix parameters out of the global stats socket
to the bind_conf struct. As such the stats admin level was also moved
overthere, but it remained in the stats global section where it was not
used, except by a nasty memcpy() used to initialize the ux struct in the
bind_conf with too large data. Fortunately, the extra data copied were
the previous level over the new level so it did not have any impact, but
it could have been worse.
This bug is 1.5 specific, no backport is needed.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
A test is obviously wrong in uri_auth(). If strdup(pass) returns an error
while strdup(user) passes, the NULL pointer is still stored into the
structure. If strdup(user) returns NULL instead, the allocated memory is
not released before returning the error.
The issue was present in 1.4 so the fix should be backported.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
This function may write the \0 one char too far in the static array.
There is no effect right now as the function has never been used except
maybe in code that was never released. Out-of-tree code might possibly
be affected though (hence the MEDIUM flag).
No backport is needed.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
sig is checked for < 0 or > MAX_SIGNAL, but the signal array is
MAX_SIGNAL in size. At the moment, MAX_SIGNAL is 256. If a system supports
more than MAX_SIGNAL signals, then sending signal MAX_SIGNAL to the process
will corrupt one integer in its memory and might also crash the process.
This bug is also present in 1.4 and 1.3, and the fix must be backported.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
srv cannot be null in http_perform_server_redirect(), as it's taken
from the stream interface's target which is always valid for a
server-based redirect, and it was already dereferenced above, so in
practice, gcc already removes the test anyway.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
utoa_pad() is directly fed into tmplog, which is checked for NULL.
First, when NULLs are possible, they should be put into a temp variable
in order to preserve tmplog, and second, this return value can never be
NULL because the value passed is tv_usec/1000 (between "0" and "999")
with a 4-char output. However better fix the check in case this code gets
improperly copy-pasted for another usage later.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
Currently s->listener is set for all sessions, but this may not remain
the case forever so we already check s->listener for validity. One
check was missed.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
health_adjust() checks for incorrect bounds for the status argument.
With current code, the argument is always a constant from the valid
enum so there is no impact and the check is basically a NOP. However
users running local patches (eg: new checks) might want to recheck
their code.
This fix should be backported to 1.4 which introduced the issue.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
logsrv->minlvl gets the numeric log level from the equivalent string.
Upon error, ->level was checked due to a wrong copy-paste. The effect
is that a wrong name will silently be ignored and due to minlvl=-1,
will act as if the option was not set.
No backport is needed, this is 1.5-specific.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
An error caused by an invalid port does not cause the raddr string to
be freed. This is harmless at the moment since we exit, but may have
an impact later if we ever support hot config changes.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
The wrong variable is checked after a calloc() so a memory shortage would
result in a segfault while loading the config instead of a clean error.
This fix may be backported to 1.4 and 1.3 which are both affected.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
epoll_wait() takes a number of returned events, not the number of
fds to consider. We must not pass it the number of the smallest fd,
as it leads to value zero being used, which is invalid in epoll_wait().
The effect may sometimes be observed with peers sections trying to
connect and causing 2-seconds CPU loops upon a soft reload because
epoll_wait() immediately returns -1 EINVAL instead of waiting for the
timeout to happen.
This fix should be backported to 1.4 too (into ev_epoll and ev_sepoll).
If a peers section contains several instances of the local peer name, only
the first one was considered and the next ones were silently ignored. This
can cause some trouble to debug such a configuration. Now the extra entries
are rejected with an error message indicating where the first occurrence was
found.
Without it, haproxy will retain the group membership of root, which may
give more access than intended to the process. For example, haproxy would
still be in the wheel group on Fedora 18, as seen with :
# haproxy -f /etc/haproxy/haproxy.cfg
# ps a -o pid,user,group,command | grep hapr
3545 haproxy haproxy haproxy -f /etc/haproxy/haproxy.cfg
4356 root root grep --color=auto hapr
# grep Group /proc/3545/status
Groups: 0 1 2 3 4 6 10
# getent group wheel
wheel:x:10:root,misc
[WT: The issue has been investigated by an independent security research team
     which concluded that it could not be exploited by itself.
     Additionally, dropping groups is not allowed to unprivileged users,
     though this mode of deployment is quite common. Thus a warning is
     emitted in this case to inform the user. The fix could be backported
     into all supported versions as the issue has always been there. ]
Due to a typo in the peers section lookup code, the last declared peers
section was used instead of the one matching the requested name. This bug
has been there since the very first commit on peers section (1.5-dev2).
When using log-format to log the result of sample fetch functions
which rely on the transport layer (eg: ssl*), we have no way to tell
the proxy not to release the connection before logs have caught the
necessary information. As a result, it happens that logging SSL fetch
functions sometimes doesn't return anything for example if the server
is not available and the connection is immediately aborted.
This issue will be fixed with the upcoming patches to finish handling
of sample fetches.
So for the moment, always mark the LW_XPRT flag on the proxy so that
when any fetch method is used, the proxy does not release the transport
layer too fast.
Kernel versions between 2.6.25 and 2.6.27.12 have a bogus splice() which returns EAGAIN
on incoming shutdowns. On these versions, we have to call recv() after such a return
in order to find whether splice is OK or not. Since 2.6.27.13 we don't need to do
this anymore, saving one useless recv() call after each splice() returning EAGAIN,
and we can avoid this logic by defining ASSUME_SPLICE_WORKS.
Building with linux2628 automatically enables splice and the flag above since the
kernel is safe. People enabling splice for custom kernels will be able to disable
this logic by hand too.
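As a hedged build example (the generic target and the way of passing the
extra define are assumptions about the usual Makefile variables) :
make TARGET=linux2628
make TARGET=generic USE_LINUX_SPLICE=1 DEFINE=-DASSUME_SPLICE_WORKS
The first form enables splice and the above assumption automatically; the
second enables splice by hand for a custom kernel known to be recent enough.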
Since last commit introducing EPOLLRDHUP, the splicing code is able to
detect an incoming shutdown without calling splice() == 0. This avoids
one useless syscall.
epoll may report pending shutdowns using EPOLLRDHUP. Since this
flag is missing from a number of libcs despite being available
since kernel 2.6.17, let's define it ourselves.
Doing so saves one syscall by allowing us to avoid the read()==0 when
the server closes the connection along with the response.
SIGPROF is blocked then restored to default settings, which sometimes
makes profiling harder. Let's not block it by default nor restore it,
it doesn't serve any purpose anyway.