As usual, when touching any is* function, Solaris complains about the
type of the element being checked. Better backport this to 1.5, since
nobody knows what the emitted code looks like given that macros are used
instead of functions.
check.c doesn't build on Solaris since 98637e5 ("MEDIUM: Add external check")
because sigset_t is unknown. Simply include signal.h to fix the issue.
No need to backport, this is 1.6-only.
When bind() fails (function uxst_bind_listener()), the fail path doesn't
consider the abstract namespace and tries to unlink paths held in
uninitialized memory (tempname and backname). See the strace excerpt;
the strings still hold the path from test1.
===============================================================================================
23722 bind(5, {sa_family=AF_FILE, path=@"test2"}, 110) = -1 EADDRINUSE (Address already in use)
23722 unlink("/tmp/test1.sock.23722.tmp") = -1 ENOENT (No such file or directory)
23722 close(5) = 0
23722 unlink("/tmp/test1.sock.23722.bak") = -1 ENOENT (No such file or directory)
===============================================================================================
This patch should be backported to 1.5.
With all the goodies supported by logformat, people find that the limit
of 1024 chars for log lines is too short. Some servers do not support
larger lines and can simply drop them, so changing the default value is
not always the best choice.
This patch takes a different approach. Log line length is specified per
log server on the "log" line, with a value between 80 and 65535. That
way it's possible to satisfy all needs, even with some fat local servers
and small remote ones.
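For example (addresses and sizes below are purely illustrative), a fat local
server and a constrained remote one can now coexist :
  global
    log 127.0.0.1:514 len 8192 local0
    log 10.0.0.1:514  len 1024 local1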
I've been facing multiple configurations which involved track-sc* rules
in tcp-request content without the "if ..." to force it to wait for the
contents, resulting in random behaviour with contents sometimes retrieved
and sometimes not.
Reading the doc doesn't make it clear either that the tracking will be
performed only if data are already there and that waiting on an ACL is
the only way to avoid this.
Since this behaviour is not natural and we now have the ability to fix
it, this patch ensures that if input data are still moving, instead of
silently dropping them, we naturally wait for them to stabilize up to
the inspect-delay. This way it's no longer needed to implement an
ACL-based condition just to force waiting for data, even though the
behaviour is unchanged when an ACL is present.
The most obvious usage will be when track-sc is followed by any HTTP
sample expression, there's no need anymore for adding "if HTTP".
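For illustration only (the frontend, table and key below are made up), a rule
like the following now waits up to the inspect-delay on its own :
  frontend fe
    tcp-request inspect-delay 10s
    # "if HTTP" used to be required here to wait for the request contents
    tcp-request content track-sc0 base32+src table per_url_rates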
It's probably worth backporting this to 1.5 to avoid further configuration
issues. Note that it requires previous patch.
stktable_fetch_key() does not indicate whether it returns NULL because
the input sample was not found or because it's unstable. It causes trouble
with track-sc* rules. Just like with sample_fetch_string(), we want it to
be able to give more information to the caller about what it found. Thus,
now we use the pointer to a sample passed by the caller, and fill it with
the information we have about the sample. That way, even if we return NULL,
the caller has the ability to check whether a sample was found and if it is
still changing or not.
We used to only clear flags when reusing the static sample before calling
sample_process(), but that's not enough because there's a context in samples
that can be used by some fetch functions such as auth, headers and cookies,
and not reinitializing it risks that a pointer of a different type is used
in the wrong context.
An example configuration which triggers the case consists in mixing hdr()
and http_auth_group() which both make use of contexts :
http-request add-header foo2 %[hdr(host)],%[http_auth_group(foo)]
The solution is simple: initialize the whole sample and not just the flags.
This fix must be backported into 1.5 since it was introduced in 1.5-dev19.
Baptiste Assmann reported a corner case in the releasing of stick-counters:
we release content-aware counters before logging. In the past it was not a
problem, but now that we can log them, it prevents one from logging
their value. Simply swapping the log production and the release of the
counters fixes the issue.
This should be backported into 1.5.
'ssl_sock_get_common_name' applied to a connection was also renamed
'ssl_sock_get_remote_common_name'. Currently, this function is only used
with protocol PROXYv2 to retrieve the client certificate's common name.
A further usage could be to retrieve the server certificate's common name
on an outgoing connection.
The sample fetch function "base" makes use of the trash which is also
used by set-header/add-header etc... everything which builds a formatted
line. So we end up with some junk in the header if base is in use. Let's
fix this like all other fetches, by using a trash chunk instead.
This bug was reported by Baptiste Assmann, and also affects 1.5.
Commit 81ae195 ("[MEDIUM] add support for logging via a UNIX socket")
merged in 1.3.14 introduced a few minor issues with log sockets. All
of them happen only when a failure is encountered when trying to set
up the logging socket (eg: socket family is not available or is
temporarily short in resources).
The first socket which experiences an error causes the socket setup
loop to abort, possibly preventing any log from being sent if it was
the first logger. The second issue is that if this socket finally
succeeds after a second attempt, errors are reported for the wrong
logger (eg: logger #1 failed instead of #2). The last point is that
we now have multiple loggers, and it's a waste of time to walk over
their list for every log while they're almost always properly set up.
So in order to fix all this, let's merge the two lists. If a logger
experiences an error, it simply sends an alert and skips to the next
one. That way they don't prevent messages from being sent and are
all properly accounted for.
This is the 3rd regression caused by the changes below. The latest to
date was reported by Finn Arne Gangstad. If a server responds with no
content-length and the client's FIN is never received, either we leak
the client-side FD or we spin at 100% CPU if timeout client-fin is set.
Enough is enough. The amount of tricks needed to cover these side-effects
starts to look like used toilet paper stacked over a chocolate cake. I
don't want to eat that cake anymore!
All this to avoid reporting a server-side timeout when a client stops
uploading data and haproxy expires faster than the server... A lot of
"ifs" resulting in a technically valid log that doesn't always please
users, and whose alternative causes so many issues for all other
users.
So let's revert this crap merged since 1.5-dev25 :
Revert "CLEANUP: http: don't clear CF_READ_NOEXP twice"
This reverts commit 1592d1e72a4a2d25a554c299ae95a3e6cad80bf1.
Revert "BUG/MEDIUM: http: clear CF_READ_NOEXP when preparing a new transaction"
This reverts commit 77d29029af1c44216b190dd7442964b9d8f45257.
Revert "BUG/MEDIUM: session: don't clear CF_READ_NOEXP if analysers are not called"
This reverts commit 0943757a2144761c60e416b5ed07baa76934f5a4.
Revert "BUG/MEDIUM: http: disable server-side expiration until client has sent the body"
This reverts commit 3bed5e9337fd6eeab0f0006ebefcbe98ee5c4f9f.
Revert "BUG/MEDIUM: http: correctly report request body timeouts"
This reverts commit b9edf8fbecc9d1b5c82794735adcc367a80a4ae2.
Revert "BUG/MEDIUM: http/session: disable client-side expiration only after body"
This reverts commit b1982e27aaff2a92a389a9f1bc847e3bb8fdb4f2.
If a cleaner AND SAFER way to do something equivalent appears in 1.6-dev, we *might*
consider backporting it to 1.5, but given the vicious bugs that have surfaced
since, I doubt it will happen any time soon.
Fortunately, that crap never made it into 1.4 so no backport is needed.
For some browsers (Firefox), an expired OCSP Response causes unwanted behavior.
Haproxy now stops serving the OCSP Response if the nextupdate date minus
the supported time skew (#define OCSP_MAX_RESPONSE_TIME_SKEW) is
in the past.
I am not entirely sure that this is a bug, but it seems
to me that it may cause a problem if an agent-check is
configured and there is some kind of error making a connection for it.
Signed-off-by: Simon Horman <horms@verge.net.au>
The support is all based on static responses. This doesn't add any
request / response logic to HAProxy, but allows a way to update
information through the socket interface.
Currently certificates specified using "crt" or "crt-list" on "bind" lines
are loaded as PEM files.
For each PEM file, haproxy checks for the presence of a file at the same path
suffixed by ".ocsp". If such a file is found, support for the TLS Certificate
Status Request extension (also known as "OCSP stapling") is automatically
enabled. The content of this file is optional. If not empty, it must contain
a valid OCSP Response in DER format. In order to be valid an OCSP Response
must comply with the following rules: it has to indicate a good status,
it has to be a single response for the certificate of the PEM file, and it
has to be valid at the moment of addition. If these rules are not respected
the OCSP Response is ignored and a warning is emitted. In order to identify
which certificate an OCSP Response applies to, the issuer's certificate is
necessary. If the issuer's certificate is not found in the PEM file, it will
be loaded from a file at the same path as the PEM file suffixed by ".issuer"
if it exists, otherwise loading will fail with an error.
It is possible to update an OCSP Response from the unix socket using:
set ssl ocsp-response <response>
This command is used to update an OCSP Response for a certificate (see "crt"
on "bind" lines). Same controls are performed as during the initial loading of
the response. The <response> must be passed as a base64 encoded string of the
DER encoded response from the OCSP server.
Example:
openssl ocsp -issuer issuer.pem -cert server.pem \
-host ocsp.issuer.com:80 -respout resp.der
echo "set ssl ocsp-response $(base64 -w 10000 resp.der)" | \
socat stdio /var/run/haproxy.stat
This feature is automatically enabled on openssl 0.9.8h and above.
This work was performed jointly by Dirkjan Bussink of GitHub and
Emeric Brun of HAProxy Technologies.
The pcreposix layer (in the pcre project) executes strlen() to find
the length of the string. When we use the "regex_exec*2" functions,
the length is used to add a final \0, then pcreposix runs strlen()
again to compute the length.
If we use the native PCRE API, the length is provided as an
argument, and these operations disappear.
This is useful because PCRE regexes are more widely used than POSIX ones.
The new regex functions can take a string and a length. The HAProxy buffers
are not null-terminated, and using the regex_exec* functions implies
appending this null character. This patch replaces these calls with the
functions which take a string and a length as input.
Only the file "proto_http.c" is changed because it is executed much more
often than the others. The file "checks.c" has a very low usage, so it is
not worth changing it. Furthermore, the buffers used by "checks.c" are
null-terminated.
This patch removes all references to standard regexes in haproxy. The last
remaining references are only in the regex.[ch] files.
In the file src/checks.c, the original function uses a "pmatch" array.
In fact this array is unused. This patch removes it.
This patch renames "regex_exec" to "regex_exec2" and adds new
"regex_exec", "regex_exec_match" and "regex_exec_match2" functions. The
*_match functions can match a regex and return an array containing the
matching parts. In all cases, these functions use the compiled method
(JIT, PCRE or POSIX).
JIT requires a subject with its length, while PCREPOSIX and native POSIX
regexes require a null-terminated subject. The regex_exec* functions are
therefore split into two versions. The first version takes a null-terminated
string, but executes strlen() on the subject if compiled with JIT. The
second version (suffixed with "2") takes the subject and its length, and
adds a null character to the subject if compiled with PCREPOSIX or native
POSIX functions.
The documentation of POSIX regex and pcreposix says that the function
returns 0 if the string matches, otherwise it returns REG_NOMATCH. The
REG_NOMATCH macro takes the value 1 with POSIX regex and the value 17
with pcreposix. The documentation of the native PCRE API (used with
JIT) says it returns a negative number if there is no match, otherwise
it returns 0 or a positive number.
This patch also fixes the return codes of the regex_exec* functions. Now,
these functions return true if the string matches, otherwise they return
false.
When I renamed the modify-header action to replace-value, one of them
was mistakenly set to "replace-val" instead. Additionally, differentiation
of the two actions must be done on args[0][8] and not *args[8]. Thanks
Thierry for spotting...
This patch adds two new actions to http-request and http-response rulesets :
- replace-header : replace a whole header line, suited for headers
which might contain commas
- replace-value : replace a single header value, suited for headers
defined as lists.
The match consists of a regex, and the replacement string takes a log-format
and supports back-references.
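For instance (patterns borrowed from typical examples, adapt as needed), one
may rewrite a single value of a list-type header or a whole header line :
  http-request replace-value X-Forwarded-For ^192\.168\.(.*)$ 172.16.\1
  http-request replace-header Cookie foo=([^;]*);(.*) foo=\1;ip=%[src];\2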
The time statistics computed by previous patches are now reported in the
HTML stats in the tips related to the total sessions for backend and servers,
and as separate columns for the CSV stats.
Using the last rate counters, we now compute the queue, connect, response
and total times per server and per backend with a 95% accuracy over the last
1024 samples. The operation is cheap so we don't need to condition it.
Now that we can quote unsafe strings, it becomes possible to dump the health
check responses on the CSV page as well. The two new fields are "last_chk"
and "last_agt".
qstr() and cstr() will be used to quote-encode strings. The first one
does it unconditionally. The second one is aimed at CSV files where the
quote-encoding is only needed when the field contains a quote or a comma.
This helper is similar to addr_to_str but tries to convert the port rather
than the address of a struct sockaddr_storage.
This is in preparation for supporting an external agent check.
Signed-off-by: Simon Horman <horms@verge.net.au>
The "accept-proxy" statement of bind lines was still limited to version
1 of the protocol, while send-proxy-v2 is now available on the server
lines. This patch adds support for parsing v2 of the protocol on incoming
connections. The v2 header is automatically recognized so there is no
need for a new option.
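For example, a listener such as this one (port chosen arbitrarily) now
transparently accepts both v1 and v2 senders :
  bind :8443 accept-proxy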
This is in order to simplify the PPv2 header parsing code to look more
like the one provided as an example in the spec. No code change was
performed beyond just merging the proxy_addr union into the proxy_hdr_v2
struct.
If haproxy receives a connection over a unix socket and forwards it to
another haproxy instance using proxy protocol v1, it sends an UNKNOWN
protocol, which is rejected by the other side. Make the receiver accept
the UNKNOWN protocol as per the spec, and only use the local connection's
address for this.
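Concretely, a v1 sender which cannot describe the inner protocol emits a
line like the one below, which the receiver now accepts instead of rejecting
the connection :
  PROXY UNKNOWN\r\n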
As discussed with Dmitry Sivachenko, if a server farm has more than one
active server, uses a guaranteed non-deterministic algorithm (round robin),
and the connection was initiated from a non-persistent connection, there's
no point insisting on reconnecting to the same server after a connect failure;
better redispatch upon the very first retry instead of insisting on the same
server multiple times.
The retry delay is only useful when sticking to a same server. During
a redispatch, it's useless and counter-productive if we're sure to
switch to another server, which is almost guaranteed when there's
more than one server and the balancing algorithm is round robin, so
better not pass via the turn-around state in this case. It could be
done as well for leastconn, but there's a risk of always killing the
delay after the recovery of a server in a farm where it's almost
guaranteed to take most incoming traffic. So better only kill the
delay when using round robin.
As discussed with Dmitry Sivachenko, the default 1-second connect retry
delay can be large for situations where the connect timeout is much smaller,
because it means that an active connection reject will take more time to be
retried than a silent drop, and that does not make sense.
This patch changes this so that the retry delay is the minimum of 1 second
and the connect timeout. That way people running with sub-second connect
timeout will benefit from the shorter reconnect.
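For example (values are illustrative), with the backend below a failed
connection attempt is now retried after roughly 100ms instead of a full
second :
  backend be_app
    timeout connect 100ms
    retries 3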
This new directive captures the specified fetch expression, converts
it to text and puts it into the next capture slot. The capture slots
are shared with header captures so that it is possible to dump all
captures at once or selectively in logs and header processing.
The purpose is to permit logs to contain whatever payload is found in
a request, for example bytes at a fixed location or the SNI of forwarded
SSL traffic.
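As an illustration (names are made up and the exact directive spelling may
differ depending on the version), capturing the SNI of forwarded SSL traffic
could look like :
  frontend fe_ssl
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content capture req.ssl_sni len 32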
This patch adds support for captures with no header name. The purpose
is to allow extra captures to be defined and logged along with the
header captures.
Currently, all callers to sample_fetch_string() call it with SMP_OPT_FINAL.
Now we improve it to support the case where this option is not set, and to
make it return the original sample as-is. The purpose is to let the caller
check the SMP_F_MAY_CHANGE flag in the result and know that it should wait
to get complete contents. Currently this has no effect on existing code.
Similar to previous patches, HTTP header captures are performed when
a TCP frontend switches to an HTTP backend, but they could not be
reported. So let's relax the check to explicitly allow them to be present
in TCP frontends.
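For example (names are illustrative), a TCP frontend chained to an HTTP
backend can now declare :
  frontend fe_tcp
    mode tcp
    capture request header Host len 32
    default_backend bk_http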
Log format is defined in the frontend, and some frontends may be chained to
an HTTP backend. Sometimes it's very convenient to be able to log the HTTP
status code of these HTTP backends. This status is definitely present in
the internal structures; it's just that we used to limit its use to HTTP
frontends. So let's simply relax the check to allow it to be used in TCP
frontends as well.
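So a configuration like the following (names are illustrative, %ST being the
status code tag) becomes valid :
  frontend fe_tcp
    mode tcp
    log global
    log-format "%ci:%cp [%t] %ft %b/%s %ST %B"
    default_backend bk_http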