Commit Graph

1173 Commits

Christopher Faulet
e6006245de BUG/MEDIUM: filters: Fix channels synchronization in flt_end_analyze
When a filter is used, there are two channel analyzers surrounding all the
others, flt_start_analyze and flt_end_analyze. This is the right place to
acquire and release resources used by filters, when needed. In addition, the
last one is used to synchronize both channels, especially for HTTP streams. We
must wait until the analysis is finished on both channels of an HTTP
transaction before restarting it for the next one.

But this part was buggy, leading to unexpected behaviours. First, depending on
which channel ends first, the request or the response could be switched into a
"forward forever" mode. Then, the HTTP transaction could be cleaned up too
early, while processing was still in progress on a channel.

To fix the bug, the flag CF_FLT_ANALYZE has been added. It is set on channels
in flt_start_analyze and is kept as long as at least one filter is still
analyzing the channel. So we can trigger the channel synchronization once this
flag has been removed from both channels. In addition, the flag TX_WAIT_CLEANUP
has been added on the transaction to know whether the transaction must be
cleaned up or not during the channels' synchronization. This way, we are sure
to reset everything once all the processing is finished.
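
The synchronization can be pictured with this simplified sketch (an
illustration of the idea, not the literal patch):

    /* in flt_end_analyze(): once no filter analyzes the channel anymore,
     * the flag is dropped; cleanup happens only when both channels are
     * done and the cleanup was requested */
    chn->flags &= ~CF_FLT_ANALYZE;
    if (!(s->req.flags & CF_FLT_ANALYZE) &&
        !(s->res.flags & CF_FLT_ANALYZE) &&
        (s->txn->flags & TX_WAIT_CLEANUP))
        http_end_txn_clean_session(s);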

This patch should be backported to 1.7.
2017-03-15 19:09:06 +01:00
Willy Tarreau
2019f95997 CLEANUP: http: make http_server_error() not set the status anymore
Given that all call places except one had to set txn->status prior to
calling http_server_error(), it's simpler to make this function rely
on txn->status than have it store it from an argument.
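
An illustrative sketch of the new calling convention (the error/finst codes
are placeholders):

    /* before: http_server_error(s, si, SF_ERR_INTERNAL, SF_FINST_R,
     *                           500, http_error_message(s, HTTP_ERR_500));
     * after:  set the status on the transaction first */
    txn->status = 500;
    http_server_error(s, si, SF_ERR_INTERNAL, SF_FINST_R,
                      http_error_message(s));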
2017-03-14 11:09:04 +01:00
Jarno Huuskonen
800d1761d0 MINOR: http-request tarpit deny_status.
Implements deny_status for the http-request tarpit rule (allows setting a
custom HTTP status code). This commit depends on:
MEDIUM: http_error_message: txn->status / http_get_status_idx.
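
For example, a configuration could now do something like this (illustrative;
the ACL file is made up):

  http-request tarpit deny_status 429 if { src -f /etc/haproxy/abusers.lst }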
2017-03-14 10:41:54 +01:00
Jarno Huuskonen
9e6906b9ec MEDIUM: http_error_message: txn->status / http_get_status_idx.
This commit removes the second argument (msgnum) from http_error_message and
changes http_error_message to use s->txn->status / http_get_status_idx for
mapping status codes from 200..504 to HTTP_ERR_200..HTTP_ERR_504 (enum).
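
The mapping boils down to something like this (a simplified sketch; the exact
set of handled codes is given by the HTTP_ERR_* enum):

    static int http_get_status_idx(unsigned int status)
    {
        switch (status) {
        case 200: return HTTP_ERR_200;
        case 400: return HTTP_ERR_400;
        case 403: return HTTP_ERR_403;
        case 408: return HTTP_ERR_408;
        case 500: return HTTP_ERR_500;
        case 502: return HTTP_ERR_502;
        case 503: return HTTP_ERR_503;
        case 504: return HTTP_ERR_504;
        default:  return HTTP_ERR_500;
        }
    }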

This is needed for the http-request tarpit deny_status commit.
2017-03-14 10:41:41 +01:00
Willy Tarreau
19b1412e02 MINOR: http: don't close when redirect location doesn't start with "/"
In 1.4-dev5, when we started to implement keep-alive, commit a9679ac
("[MINOR] http: make the conditional redirect support keep-alive")
added a specific check to support keep-alive on redirect rules, but
only when the location started with a "/", indicating the client
would come back to the same server.

But nowadays most applications put http:// or https:// in front of
each and every location, and continuing to perform a close there is
counter-productive, especially when multiple objects are fetched at
once from a same origin which redirects them to the correct origin
(eg: after a forced http to https upgrade).

It's about time to get rid of this old trick as it causes more harm
than good in an era where persistent connections are omnipresent.

Special thanks to Ciprian Dorin Craciun for providing convincing
arguments with a pretty valid use case and proposing this draft
patch which addresses the issue he was facing.

This change, although not exactly a bug fix, should be backported
to 1.7 to adapt better to existing infrastructure.
2017-02-28 09:48:11 +01:00
Christopher Faulet
cdade94cf5 BUG/MINOR: http: Return an error when a replace-header rule failed on the response
Historically, http-response rules couldn't produce errors generating HTTP
responses during their evaluation. This possibility was "implicitly" added with
http-response redirect rules (51d861a4). But, at the time, replace-header rules
were left untouched. When such a rule failed, the rules processing was just
stopped (as for an accept rule).

Conversely, when a replace-header rule fails on the request, it generates an
HTTP response (400 Bad Request).

With this patch, errors on replace-header rule are now handled in the same way
for HTTP requests and HTTP responses.

This patch should be backported to 1.7 and 1.6.
2017-02-08 19:08:38 +01:00
Christopher Faulet
07a0fecced BUG/MEDIUM: http: Prevent replace-header from overwriting a buffer
This is the same fix as the one concerning the redirect rules (0d94576c).

The buffer used to expand the <replace-fmt> argument must be protected to
prevent it from being overwritten during build_logline() execution (the
function used to expand the format string).

This patch should be backported to 1.7, 1.6 and 1.5. It relies on commit b686afd
("MINOR: chunks: implement a simple dynamic allocator for trash buffers") for
the trash allocator, which has to be backported as well.
2017-02-08 19:08:38 +01:00
Christopher Faulet
f1cc5d0eaf BUG/MEDIUM: filters: Do not truncate HTTP response when body length is undefined
Some users have experienced trouble using the compression filter when the
HTTP response body length is undefined. They complained about receiving
truncated responses.

In fact, the bug can be triggered if there is at least one filter attached to
the stream but none registered to analyze the HTTP response body. In this case,
when the body length is undefined, data should be forwarded without any
parsing. But, because of a wrong check, we were starting to parse them. Because
this was not expected, the end of the response was not correctly detected and
the response could be truncated. So now, we rely on the HAS_DATA_FILTER macro
instead of the HAS_FILTER one to choose whether to parse the HTTP response
body or not.
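
The corrected check boils down to something like this (simplified sketch):

    /* body length undefined and no filter registered for data: switch
     * to tunnel mode and forward everything without parsing */
    if (!(msg->flags & HTTP_MSGF_XFER_LEN) && !HAS_DATA_FILTERS(s, chn))
        msg->msg_state = HTTP_MSG_TUNNEL;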

Furthermore, in http_response_forward_body, the test used to avoid forwarding
the server closure to the client has been updated to reflect the conditions
listed in the associated comment.

And finally, in http_msg_forward_body, when the body length is undefined, we
keep parsing until the server closes the connection, without calling any
filters, so filters can safely stop filtering data during this parsing.

This fix should be backported to 1.7.
2017-02-08 19:08:38 +01:00
Thierry FOURNIER
0d94576c74 BUG/MEDIUM: http: prevent redirect from overwriting a buffer
See 4b788f7d34

If we use the action "http-request redirect" with a Lua sample-fetch or
converter, and the Lua function calls one of the Lua log functions, the
header name is corrupted: it contains an extract of the last logged data.

This is due to an overwrite of the trash buffer, because its scope is not
respected in the "add-header" function. The scope of the trash buffer must
be limited to the function using it, but build_logline() can execute a lot
of other functions which may use the trash buffer.

This patch fixes the usage of the trash buffer. It limits the scope of this
global buffer to the local function: we first build the header value using
build_logline(), and then store the header name.
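
The pattern can be sketched like this (simplified, using the allocator from
commit b686afd; 'fmt' stands for the rule's format list):

    struct chunk *value = alloc_trash_chunk();   /* private chunk */
    if (!value)
        goto fail;
    /* expand the format first: nested sample fetches and converters may
     * use the shared trash buffer, but cannot touch our private chunk */
    value->len = build_logline(s, value->str, value->size, fmt);
    /* the shared trash can now safely be used for the header name */
    free_trash_chunk(value);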

Thanks to Jesse Schulman for the bug report.

This patch must be backported to 1.7, 1.6 and 1.5, and it relies
on commit b686afd ("MINOR: chunks: implement a simple dynamic allocator for
trash buffers") for the trash allocator, which has to be backported as well.
2017-02-08 11:16:46 +01:00
Jarno Huuskonen
59af2df102 MINOR: proto_http.c 502 error txt typo.
[wt: should be backported to 1.7 and 1.6 as it was introduced in 1.6-dev4]
2017-01-11 12:44:41 +01:00
Jarno Huuskonen
16ad94adf6 MINOR: Use "500 Internal Server Error" for 500 error/status code message.
"Internal Server Error" is the message specified in RFC 2616/7231.
2017-01-11 12:44:40 +01:00
Robin H. Johnson
52f5db2a44 MINOR: http: custom status reason.
The older 'rsprep' directive allows modification of the status reason.

Extend 'http-response set-status' to take an optional string of the new
status reason.

  http-response set-status 418 reason "I'm a coffeepot"

Matching updates in Lua code:
- AppletHTTP.set_status
- HTTP.res_set_status

Signed-off-by: Robin H. Johnson <robbat2@gentoo.org>
2017-01-06 11:57:44 +01:00
Willy Tarreau
2afff9c2d6 BUG/MAJOR: http: fix risk of getting invalid reports of bad requests
Commits 5f10ea3 ("OPTIM: http: improve parsing performance of long
URIs") and 0431f9d ("OPTIM: http: improve parsing performance of long
header lines") introduced a bug in the HTTP parser: when a partial
request is read, if the first part ends on an 8-byte boundary (or
4-byte on 32-bit machines), the end lies in the header field value
part, and the buffer happens to contain a CR character exactly after
the last block, then the parser could be confused and read this CR
character as being part of the current request, then switch to a new
state waiting for an LF character. Then when the next part of the
request appeared, it would read the character following what was
erroneously mistaken for a CR, see that it is not an LF and fail on a
bad request. In some cases, it can even be worse and the header
following the hole can be improperly indexed, causing all sorts of
unexpected behaviours like a content-length being ignored or a header
appended at the wrong position. The reason is that there was no
control of the end of parsing just after breaking out of the loop.

One way to reproduce it is with this config :

  global
      stats socket /tmp/sock1 mode 666 level admin
      stats timeout 1d

  frontend  px
      bind :8001
      mode http
      timeout client 10s
      redirect location /

And sending requests this way :

  $ tcploop 8001 C P S:"$(dd if=/dev/zero bs=16384 count=1 2>/dev/null | tr '\000' '\r')"
  $ tcploop 8001 C P S:"$(dd if=/dev/zero bs=16384 count=1 2>/dev/null | tr '\000' '\r')"
  $ tcploop 8001 C P \
    S:"GET / HTTP/1.0\r\nX-padding: 0123456789.123456789.123456789.123456789.123456789.123456789.1234567" P \
    S:"89.123456789\r\n\r\n" P

Then a "show errors" on the socket will report :

  $ echo "show errors" | socat - /tmp/sock1
  Total events captured on [04/Jan/2017:15:09:15.755] : 32

  [04/Jan/2017:15:09:13.050] frontend px (#2): invalid request
    backend <NONE> (#-1), server <NONE> (#-1), event #31
    src 127.0.0.1:59716, session #91, session flags 0x00000080
    HTTP msg state 17, msg flags 0x00000000, tx flags 0x00000000
    HTTP chunk len 0 bytes, HTTP body len 0 bytes
    buffer flags 0x00808002, out 0 bytes, total 111 bytes
    pending 111 bytes, wrapping at 16384, error at position 107:

    00000  GET / HTTP/1.0\r\n
    00016  X-padding: 0123456789.123456789.123456789.123456789.123456789.12345678
    00086+ 9.123456789.123456789\r\n
    00109  \r\n

This fix must be backported to 1.7.

Many thanks to Aleksey Gordeev and Axel Reinhold for providing detailed
network captures and configurations exhibiting the issue.
2017-01-05 20:26:55 +01:00
Willy Tarreau
10e61cbf41 BUG/MINOR: http: report real parser state in error captures
Error captures almost always report a state 26 (MSG_ERROR) making it
very hard to know what the parser was expecting. The reason is that
we have to switch to MSG_ERROR to trigger the dump, and then during
the dump we capture the current state which is already MSG_ERROR. With
this change we now copy the current state into an err_state field that
will be reported as the faulty state.

This patch looks a bit large because the parser doesn't update the
current state until it runs out of data, so the current state is never
known when jumping to the error label! Thus the code had to be updated
to take copies of the current state before switching to MSG_ERROR based
on the switch/case values.

As a bonus, it now shows the current state in human-readable form and
not only in numeric form; in the past this was not an issue since it
was always 26 (MSG_ERROR).

At least now we can get exploitable invalid request/response reports :

  [05/Jan/2017:19:28:57.095] frontend f (#2): invalid request
    backend <NONE> (#-1), server <NONE> (#-1), event #1
    src 127.0.0.1:39894, session #4, session flags 0x00000080
    HTTP msg state MSG_RQURI(4), msg flags 0x00000000, tx flags 0x00000000
    HTTP chunk len 0 bytes, HTTP body len 0 bytes
    buffer flags 0x00908002, out 0 bytes, total 20 bytes
    pending 20 bytes, wrapping at 16384, error at position 5:

    00000  GET /\e HTTP/1.0\r\n
    00017  \r\n
    00019  \n

  [05/Jan/2017:19:28:33.827] backend b (#3): invalid response
    frontend f (#2), server s1 (#1), event #0
    src 127.0.0.1:39718, session #0, session flags 0x000004ce
    HTTP msg state MSG_HDR_NAME(17), msg flags 0x00000000, tx flags 0x08300000
    HTTP chunk len 0 bytes, HTTP body len 0 bytes
    buffer flags 0x80008002, out 0 bytes, total 59 bytes
    pending 59 bytes, wrapping at 16384, error at position 31:

    00000  HTTP/1.1 200 OK\r\n
    00017  Content-length : 10\r\n
    00038  \r\n
    00040  0a\r\n
    00044  0123456789\r\n
    00056  0\r\n

This should be backported to 1.7 and 1.6 at least to help with bug
reports.
2017-01-05 19:48:50 +01:00
Christopher Faulet
0184ea71a6 BUG/MAJOR: channel: Fix the definition order of channel analyzers
It is important to define analyzers (AN_REQ_* and AN_RES_*) in the same order
they are evaluated in process_stream. This order is really important because,
during analyzer evaluation, we run them from the lowest bit to the highest
one. This way, when an analyzer adds/removes another one during its
evaluation, we know if it is located before or after it. So, when it adds an
analyzer which is located before it, we can switch to it immediately, even if
it has already been called once but removed since.
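
The principle can be sketched as follows (a rough illustration, not the
actual process_stream code; run_analyzer() is a hypothetical dispatcher):

    while (chn->analysers) {
        /* pick the lowest pending bit; an analyzer re-added at a lower
         * position than the current one is caught on the next pass */
        unsigned int an_bit = chn->analysers & -chn->analysers;
        if (!run_analyzer(s, chn, an_bit))  /* clears its bit when done */
            break;                          /* analyzer asked to wait */
    }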

Over time, with the introduction of new analyzers, this order was broken. The
main problems come from the filter analyzers: we used values unrelated to
their evaluation order, and furthermore we used the same values for request
and response analyzers.

So, to fix the bug, the filter analyzers have been split into two distinct
lists, to have different analyzers for the request channel than those for the
response channel. And of course, we have moved them to the right place.

Some other analyzers have been reordered to respect the evaluation order:

  * AN_REQ_HTTP_TARPIT has been moved just before AN_REQ_SRV_RULES
  * AN_REQ_PRST_RDP_COOKIE has been moved just before AN_REQ_STICKING_RULES
  * AN_RES_STORE_RULES has been moved just after AN_RES_WAIT_HTTP

Note that today we have 29 analyzers, all stored in a 32-bit bitfield. So we
can still add 4 more analyzers before having a problem. A good way to fend off
the problem for a while could be to use a different bitfield for request and
response analyzers.

[wt: all of this must be backported to 1.7, and part of it must be backported
 to 1.6 and 1.5]
2017-01-05 17:58:22 +01:00
Thierry FOURNIER / OZON.IO
43ad11dc75 MINOR: Do not forward the header "Expect: 100-continue" when the option http-buffer-request is set
When the option "http-buffer-request" is set, HAProxy sends the
"HTTP/1.1 100 Continue" response itself in order to retrieve the POST
content. When HAProxy forwards the request, it sends the body directly
after the headers, and the header "Expect: 100-continue" was sent with
the headers. This header is useless because the body will be sent in all
cases, and the server's response is not removed by HAProxy.

This patch removes the header "Expect: 100-continue" when HAProxy has
sent the 100-continue response itself.
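
A simplified sketch of the removal (abridged; the real code only does this
when HAProxy did send the interim response itself):

    struct hdr_ctx ctx;
    ctx.idx = 0;
    /* HAProxy already answered "100 Continue" itself, so the header is
     * pointless for the server: strip it before forwarding */
    if (http_find_header2("Expect", 6, req->buf->p, &txn->hdr_idx, &ctx))
        http_remove_header2(msg, &txn->hdr_idx, &ctx);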
2016-12-12 17:33:42 +01:00
Matthieu Guegan
35088f960d BUG/MINOR: http: don't send an extra CRLF after a Set-Cookie in a redirect
While investigating a keep-alive issue with CloudFlare, we[1] found that
when using the 'set-cookie' option in a redirect (302), HAProxy was adding
an extra `\r\n`.

Triggering rule :

`http-request redirect location / set-cookie Cookie=value if [...]`

Expected result :

```
HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: /
Set-Cookie: Cookie=value; path=/;
Connection: close
```

Actual result :

```
HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: /
Set-Cookie: Cookie=value; path=/;

Connection: close
```

This extra `\r\n` seems to be harmless with another HAProxy instance in
front of it (sanitizing) or when using a browser. But we confirm that
the CloudFlare NGINX implementation is not able to handle it. It
seems that both the 'Content-length: 0' and the extra carriage return
break the RFC (to be confirmed).

When looking into the code, this carriage return was already present in
1.3.x versions, but just before closing the connection, which was OK, I
think. Then, with 1.4.x, the keep-alive feature was added and this piece
of code remained unchanged.

[1] all credit for the bug finding goes to CloudFlare Support Team

[wt: the bug was indeed present since the Set-Cookie was introduced
 in 1.3.16, by commit 0140f25 ("[MINOR] redirect: add support for
 "set-cookie" and "clear-cookie"") so backporting to all supported
 versions is desired]
2016-12-05 19:27:48 +01:00
Christopher Faulet
6962f4e0d6 BUG/MINOR: http: Call XFER_DATA analyzer when HTTP txn is switched in tunnel mode
This allows a filter to start analyzing data in HTTP and to fall back to TCP
when data are tunneled.

[wt: backport desired in 1.7 - no impact right now but may impact the ability
 to backport future fixes]
2016-11-29 17:03:04 +01:00
Christopher Faulet
3235957685 BUG/MINOR: http: Keep the same behavior between 1.6 and 1.7 for tunneled txn
In HAProxy 1.6, when the "http-tunnel" option is enabled, HTTP transactions
are tunneled as soon as possible after the headers parsing/forwarding. When
the transfer length of the response can be determined, this happens when all
data are forwarded. But for responses with an undetermined transfer length,
this happens when the headers are forwarded. This behavior is questionable,
but this is not the purpose of this fix...

In HAProxy 1.7, the first use case works like in 1.6, but the second one does
not, because of the data filtering. HAProxy was always trying to forward data
until the server closed the connection, so the transaction was never switched
into tunnel mode. This is the expected behavior when there is a data filter,
but in the default case (no data filter), it should work like in 1.6.

This patch fixes the bug. We analyze response data until the server closes the
connection only when there is a data filter.

[wt: backport needed in 1.7]
2016-11-29 17:03:01 +01:00
Christopher Faulet
d1cd209b21 BUG/MEDIUM: http: Fix tunnel mode when the CONNECT method is used
When a 2xx response to a CONNECT request is returned, the connection must be
switched into tunnel mode immediately after the headers, and the
Transfer-Encoding and Content-Length headers must be ignored. So, from the
HTTP parser's point of view, there is no body.

The bug comes from the fact that the flag HTTP_MSGF_XFER_LEN was not set on
the response (this flag means that the body size can be determined; in our
case it can: it is 0). So, during data forwarding, the connection was never
switched into tunnel mode and we were blocked in a state where we were
waiting for the server to close the connection to end the response.

Setting the flag HTTP_MSGF_XFER_LEN on the response fixed the bug.
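
The fix can be sketched as follows (simplified):

    /* a 2xx reply to a CONNECT has no body: its length is known (0),
     * so the message can be flagged accordingly */
    if (txn->meth == HTTP_METH_CONNECT &&
        txn->status >= 200 && txn->status < 300) {
        msg->flags |= HTTP_MSGF_XFER_LEN;
        msg->body_len = msg->chunk_len = 0;
    }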

The code of http_wait_for_response has been slightly updated to be more
readable.

[wt: 1.7-only, this is not needed in 1.6]
2016-11-29 17:00:14 +01:00
Willy Tarreau
35069f84af MINOR: cli: make "show errors" capable of dumping only request or response
When dealing with many proxies, it's hard to spot response errors because
all internet-facing frontends constantly receive attacks. This patch now
makes it possible to demand that only request or response errors are dumped,
by appending "request" or "response" to the "show errors" command.
2016-11-25 09:16:37 +01:00
Willy Tarreau
234ba2d8eb MINOR: cli: make "show errors" support a proxy name
Till now it was necessary to know the proxy's ID, while we now have the
ability to look up a proxy by its name.
2016-11-25 08:56:55 +01:00
Thierry FOURNIER / OZON.IO
8a4e4420fb MEDIUM: log-format: Use standard HAProxy log system to report errors
The log-format function emits its own error messages using Alert(). This
patch replaces this behavior with the standard HAProxy error system
(with memprintf).

The benefits are:
 - cleaning the log system

 - the logformat can ignore the caller (previously the caller had to set
   a flag designating the calling function).

 - Make the usage of the logformat function easier for future components.
2016-11-25 07:32:58 +01:00
Thierry FOURNIER / OZON.IO
59fd511555 MEDIUM: log-format/conf: take into account the parse_logformat_string() return code
This patch takes into account the return code of the parse_logformat_string()
function. Now the configuration parser will fail if the log-format is not
strictly valid.
2016-11-24 18:54:26 +01:00
Thierry FOURNIER / OZON.IO
6fe0e1b977 CLEANUP: log-format: remove unused arguments
The log-format function parse_logformat_string() takes file and line
for building parsing logs. These two parameters are embedded in the
struct proxy curproxy, which is the current parsing context.

This patch removes these two unused arguments.
2016-11-24 18:54:26 +01:00
Willy Tarreau
30e5e18bbb CLEANUP: cli: remove assignments to st0 and st2 in keyword parsers
Now it's not needed anymore to set STAT_ST_INIT nor CLI_ST_CALLBACK
in the parsers, remove it in the various places.
2016-11-24 16:59:28 +01:00
Willy Tarreau
12207b360a REORG: cli: move "show errors" out of cli.c
It really belongs to proto_http.c since it's a dump for HTTP request
and response errors. Note that it's possible that some parts do not
need to be exported anymore since it really is the only place where
errors are manipulated.
2016-11-24 16:59:28 +01:00
William Lallemand
9ed6203aef REORG: cli: split dumpstats.h in stats.h and cli.h
proto/dumpstats.h has been split into 4 files:

  * proto/cli.h contains prototypes for the CLI
  * proto/stats.h contains prototypes for the stats
  * types/cli.h contains definitions for the CLI
  * types/stats.h contains definitions for the stats
2016-11-24 16:59:27 +01:00
Willy Tarreau
e6d9c21059 OPTIM: http: optimize lookup of comma and quote in header values
http_find_header2() relies on find_hdr_value_end() to find the comma
delimiting a header field value, which also properly handles double
quotes and backslashes within quotes. In fact, double quotes are very
rare, and commas only happen once every several characters, especially
with cookies where a full block can be found at once. So it makes
sense to optimize this function to speed up the lookup of the first
block before the quote.

This change increases the performance from 212k to 217k req/s when
requests contain a 1kB cookie (+2.5%). We don't care about going
back into the fast parser after the first quote, as it may
needlessly make the parser more complex for very marginal gains.
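
The technique relies on classic bit-twiddling; a sketch (read_u32() stands
for an unaligned 32-bit load, ptr/end delimit the value being scanned):

    #define HAS_ZERO(v)    (((v) - 0x01010101U) & ~(v) & 0x80808080U)
    #define HAS_CHAR(v, c) HAS_ZERO((v) ^ ((c) * 0x01010101U))

    while (ptr + 4 <= end) {
        unsigned int w = read_u32(ptr);           /* unaligned load */
        if (HAS_CHAR(w, ',') || HAS_CHAR(w, '"'))
            break;               /* delimiter nearby: finish per byte */
        ptr += 4;
    }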
2016-11-05 18:23:38 +01:00
Willy Tarreau
5f10ea30f4 OPTIM: http: improve parsing performance of long URIs
Searching the trailing space in long URIs takes some time. This can
happen especially on static files and some blogs. By skipping valid
character ranges by 32-bit blocks, it's possible to increase the
HTTP performance from 212k to 216k req/s on requests featuring a
100-character URI, which is an increase of 2%. This is done for
architectures supporting unaligned accesses (x86_64, x86, armv7a).
There's only a 32-bit version because URIs are rarely long and very
often short, so it's more efficient to limit the systematic overhead
than to try to optimize for the rarest requests.
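
The same family of tricks can skip valid URI bytes four at a time; a sketch
(HAS_LESS() flags any byte below n, valid for n <= 128):

    #define HAS_ZERO(v)    (((v) - 0x01010101U) & ~(v) & 0x80808080U)
    #define HAS_LESS(v, n) (((v) - ((n) * 0x01010101U)) & ~(v) & 0x80808080U)

    while (ptr + 4 <= end) {
        unsigned int w = read_u32(ptr);
        /* stop if any byte is a control character, a space or DEL */
        if (HAS_LESS(w, 0x21) || HAS_ZERO(w ^ 0x7f7f7f7fU))
            break;
        ptr += 4;
    }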
2016-11-05 18:00:35 +01:00
Willy Tarreau
0431f9d476 OPTIM: http: improve parsing performance of long header lines
A performance test with 1kB cookies was capping at 194k req/s. After
implementing multi-byte skipping, the performance increased to 212k req/s,
or 9.2% faster. This patch implements this for architectures supporting
unaligned accesses (x86_64, x86, armv7a). Maybe other architectures can
benefit from this but they were not tested yet.
2016-11-05 18:00:17 +01:00
Willy Tarreau
2235b261b6 OPTIM: http: move all http character class tables into a single one
We used to have 7 different character classes, each 256 bytes long,
resulting in almost 2kB being used in the L1 cache. It's as cheap to
test a bit as to check that a byte is not null, so let's store a 7-bit
composite value and check for the respective bits there instead.

The executable is now 4 kB smaller and the performance on small
objects increased by about 1% to 222k requests/second with a config
involving 4 http-request rules including 1 header lookup, one header
replacement, and 2 variable assignments.
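
The idea in a nutshell (a sketch; the actual class names and bit values in
the code differ):

    /* one 256-byte table, one bit per former class */
    #define CLASS_CTL    0x01
    #define CLASS_SEP    0x02
    #define CLASS_TOKEN  0x04   /* ... up to 7 bits in total */

    extern const unsigned char http_char_classes[256];

    #define HTTP_IS_TOKEN(x) \
        (http_char_classes[(unsigned char)(x)] & CLASS_TOKEN)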
2016-11-05 15:58:08 +01:00
Erwan Velu
b12ff9a201 CLEANUP: proto_http: Removing useless variable assignment
delta is set to 0 just before being assigned to a buffer. This patch
just removes this useless line; shorter is better.
2016-08-30 14:24:48 +02:00
Thierry FOURNIER / OZON.IO
4cac359a39 MEDIUM: log: Decompose %Tq in %Th %Ti %TR
Tq is the time between the instant the connection is accepted and the
moment a complete, valid request is received. This time includes the
handshake (SSL / Proxy-Protocol), the idle time when the browser does
preconnect, and the request reception.

This patch decomposes %Tq into 3 measurements named %Th, %Ti and %TR,
which respectively return the handshake time, the idle time and the
duration of valid request reception. It also adds %Ta, which reports
the request's active time, which is the total time without %Th nor %Ti.
It replaces %Tt as the total time, reporting accurate measurements for
HTTP persistent connections.

%Th is available for TCP and HTTP sessions; %Ti, %TR and %Ta are only
available for HTTP connections.

In addition to this, we have new timestamps %tr, %trg and %trl, which
log the date at which receipt of the request started, respectively in
the default format, in GMT time and in local time (by analogy with %t,
%T and %Tl). All of them are obviously only available for HTTP. These
values are more relevant as they more accurately represent the request
date without being skewed by a browser's preconnect nor a keep-alive
idle time.

The HTTP log format and the CLF log format have been modified to
use %tr, %TR, and %Ta respectively instead of %t, %Tq and %Tt. This
way the default log formats now produce the expected output for users
who don't want to manually fiddle with the log-format directive.

Example with the following log-format :

   log-format "%ci:%cp [%tr] %ft %b/%s h=%Th/i=%Ti/R=%TR/w=%Tw/c=%Tc/r=%Tr/a=%Ta/t=%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

The request was sent by hand using "openssl s_client -connect" :

   Aug 23 14:43:20 haproxy[25446]: 127.0.0.1:45636 [23/Aug/2016:14:43:20.221] test~ test/test h=6/i=2375/R=261/w=0/c=1/r=0/a=262/t=2643 200 145 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"

=> 6 ms of SSL handshake, 2375 ms waiting before sending the first char (in
fact the time to type the first line), 261 ms before the end of the request,
no time spent in queue, 1 ms spent connecting to the server, immediate
response, total active time for this request = 262 ms. Total time from accept
to close: 2643 ms.

The timing now decomposes like this :

                 first request               2nd request
      |<-------------------------------->|<-------------- ...
      t         tr                       t    tr ...
   ---|----|----|----|----|----|----|----|----|--
      : Th   Ti   TR   Tw   Tc   Tr   Td : Ti   ...
      :<---- Tq ---->:                   :
      :<-------------- Tt -------------->:
                :<--------- Ta --------->:
2016-08-23 15:18:08 +02:00
Willy Tarreau
3146a4cde2 BUG/MINOR: peers: don't count track-sc multiple times on errors
Ruoshan Huang found that the call to session_inc_http_err_ctr() in the
recent http-response patch was not a good idea because it also increments
counters that are already tracked (eg: http-request track-sc or previous
http-response track-sc). Better open-code the update, it's simple.
2016-07-26 15:25:32 +02:00
Ruoshan Huang
e4edc6b628 MEDIUM: http: implement http-response track-sc* directive
This enables tracking of sticky counters from the current response. The only
difference from "http-request track-sc" is that the <key> sample expression
can only make use of samples in the response (eg. res.*, status etc.) and
samples below Layer 6.
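
For example (illustrative configuration; table and names are made up):

  backend app
      stick-table type ip size 1m expire 10m store http_err_cnt
      http-response track-sc0 src if { status ge 500 }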
2016-07-26 14:31:14 +02:00
Christopher Faulet
a9300a3d5a BUG/MINOR: Rework slightly commit 9962f8fc to clean code and avoid mistakes
In commit 9962f8fc (BUG/MEDIUM: http: unbreak uri/header/url_param hashing),
we take care to update the 'msg->sov' value when the parser switches to the
HTTP_MSG_DONE state. This works when no filter is used. But, if a filter is
used and it loops on the 'http_end' callback, the following block is
evaluated twice consecutively:

    if (unlikely(!(chn->flags & CF_WROTE_DATA) || msg->sov > 0))
            msg->sov -= ret;

Today, in practice, because this happens when all data are parsed and
forwarded, the second test always fails (after the first update, msg->sov is
always lower than or equal to 0). But it is useless and error-prone. So, to
avoid misunderstanding, the code has been slightly changed. Now, in all
cases, we try to update msg->sov only once per iteration.

No backport is needed.
2016-06-28 16:34:50 +02:00
Willy Tarreau
9962f8fc44 BUG/MEDIUM: http: unbreak uri/header/url_param hashing
Vedran Furac reported that "balance uri" doesn't work anymore in recent
1.7-dev versions. Dragan Dosen found that the first faulty commit was
dbe34eb ("MEDIUM: filters/http: Move body parsing of HTTP messages in
dedicated functions"), merged in 1.7-dev2.

After this patch, the hashing is performed on uninitialized data,
indicating that the buffer is not correctly rewound. In fact, all forms
of content-based hashing are broken since the commit above. Upon code
inspection, it appears that the new functions http_msg_forward_chunked_body()
and http_msg_forward_body() forget to rewind the buffer in the success
case, when the parser changes to state HTTP_MSG_DONE. The rewinding code
was reinserted in both functions and the fix was confirmed by two tests,
with and without chunking.

No backport is needed.
2016-06-28 11:57:06 +02:00
Willy Tarreau
29bdb1c7ff BUG/MINOR: http: fix misleading error message for response captures
Kay Fuchs reported that the error message is misleading in response
captures because it suggests that "len" is accepted while it's not.

This needs to be backported to 1.6.
2016-06-24 15:36:34 +02:00
Christopher Faulet
1eea6d7ba8 BUG/MINOR: filters: Fix HTTP parsing when a filter loops on data forwarding
A filter can choose to loop on data forwarding. When this loop occurs in the
HTTP_MSG_ENDING state, the http_forward_data callbacks are called twice
because of a goto to the wrong label.

A filter can also choose to loop at the end of an HTTP message, in the
http_end callback. Here the goto is right but the label is not at the right
place. We must be sure to update the msg->sov value.
2016-06-21 18:53:09 +02:00
Ruoshan Huang
dd01678a79 BUG/MINOR: fix http-response set-log-level parsing error
`http-response set-log-level` doesn't work, as the config parsing always
sets the log level to -1: the level argument pointer is one byte ahead when
passed to get_log_level().
2016-06-17 17:57:58 +02:00
Dragan Dosen
db5af61f3c BUG/MINOR: http: url32+src should check cli_conn before using it
In function smp_fetch_url32_src(), it's better to check the value of
cli_conn before we go any further.

This patch needs to be backported to 1.6 and 1.5.
2016-06-16 12:53:25 +02:00
Dragan Dosen
e5f4133b19 BUG/MINOR: http: url32+src should use the big endian version of url32
This is similar to the commit 5ad6e1dc ("BUG/MINOR: http: base32+src
should use the big endian version of base32"). Now we convert url32 to big
endian when building the binary block.

This patch needs to be backported to 1.6 and 1.5.
2016-06-16 12:53:25 +02:00
Thierry Fournier
4b788f7d34 BUG/MEDIUM: http: add-header: buffer overwritten
If we use the action "http-request add-header" with a Lua sample-fetch or
converter, and the Lua function calls one of the Lua log functions, the
header name is corrupted: it contains an extract of the last logged data.

This is due to an overwrite of the trash buffer, because its scope is not
respected in the "add-header" function. The scope of the trash buffer must
be limited to the function using it, but build_logline() can execute a lot
of other functions which may use the trash buffer.

This patch fixes the usage of the trash buffer. It limits the scope of this
global buffer to the local function: we first build the header value using
build_logline(), and then store the header name.

Thanks to Michael Ezzell for reporting it.

This patch must be backported to 1.6.
2016-06-08 10:34:22 +02:00
Thierry Fournier
53c1a9b7cb BUG/MINOR: http: add-header: header name copied twice
The header name was copied twice into the buffer: the first copy is a
printf-like function writing the name and the HTTP separators into the
buffer, and the second is a memcpy. This seems to be inherited from some
changes. This patch removes the printf-like format.

This patch must be backported to 1.6 and 1.5.
2016-06-08 10:34:07 +02:00
William Lallemand
2e785f23cb MEDIUM: tcp: add 'set-src' to 'tcp-request connection'
The 'set-src' action was not available for TCP actions. The action code
has been converted into a function in proto_tcp.c to be used for both
'http-request' and 'tcp-request connection' actions.

Both http and tcp keywords are registered in proto_tcp.c
2016-06-01 11:44:11 +02:00
Willy Tarreau
58727ec088 BUG/MAJOR: http: fix breakage of "reqdeny" causing random crashes
Commit 108b1dd ("MEDIUM: http: configurable http result codes for
http-request deny") introduced in 1.6-dev2 was incomplete. It introduced
a new field "rule_deny_status" into struct http_txn, which is filled only
by the actions "http-request deny" and "http-request tarpit". It's then
used in the deny code path to emit the proper error message, but it is
used uninitialized when the deny comes from a "reqdeny" rule, causing
random behaviours ranging from returning a 200 or an empty response to
crashing the process. Often upon startup only a 200 was returned, but
after the fields were reused the crash would happen. This can be sped up
using -dM.

There's no need at all for storing this status in the http_txn struct
anyway since it's used immediately after being set. Let's store it in
a temporary variable instead which is passed as an argument to function
http_req_get_intercept_rule().

As an extra benefit, removing it from struct http_txn reduced the size
of this struct by 8 bytes.

This fix must be backported to 1.6 where the bug was detected. Special
thanks to Falco Schmutz for his detailed report including an exploitable
core and a reproducer.
2016-05-25 16:23:59 +02:00
Vincent Bernat
6e61589573 BUG/MAJOR: fix listening IP address storage for frontends
When compiled with GCC 6, the IP address specified for a frontend was
ignored and HAProxy was listening on all addresses instead. This is
caused by an incomplete copy of a "struct sockaddr_storage".

With the GNU Libc, "struct sockaddr_storage" is defined as this:

    struct sockaddr_storage
      {
        sa_family_t ss_family;
        unsigned long int __ss_align;
        char __ss_padding[(128 - (2 * sizeof (unsigned long int)))];
      };

Doing an aggregate copy (ss1 = ss2) is different from using memcpy():
only members of the aggregate have to be copied. Notably, padding may or
may not be copied. In GCC 6, some optimizations use this fact, and if a
"struct sockaddr_storage" contains a "struct sockaddr_in", the port and
the address are part of the padding (between sa_family and __ss_align)
and may not be copied over.

Therefore, we replace any aggregate copy by a memcpy(). There is another
place using the same pattern. We also fix a function receiving a "struct
sockaddr_storage" by copy instead of by reference. Since it only needs
read-only access, the function is converted to take a reference.
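
A minimal sketch of why the aggregate copy is unsafe here and what the
fix does:

    #include <string.h>
    #include <sys/socket.h>

    static void copy_ss(struct sockaddr_storage *dst,
                        const struct sockaddr_storage *src)
    {
        /* *dst = *src;  -- the compiler may skip the padding, which is
         * precisely where a sockaddr_in's port and address live */
        memcpy(dst, src, sizeof(*dst));   /* copies every byte */
    }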
2016-05-19 10:43:24 +02:00
Willy Tarreau
3de5bd603c BUG/MEDIUM: http: fix risk of CPU spikes with pipelined requests from dead client
Since client-side HTTP keep-alive was introduced in 1.4-dev, a good
number of corner cases had to be dealt with. One of them remained and
caused some occasional CPU spikes that Cyril Bonté had the opportunity
to observe from time to time and even recently to capture for deeper
analysis.

What happens is the following scenario :
  1) a client sends a first request which causes the server to respond
     using chunked encoding

  2) the server starts to respond with a large response that doesn't
     fit into a single buffer

  3) the response buffer fills up with the response

  4) the client reads the response while it is being sent by the server.

  5) haproxy eventually receives the end of the response from the server,
     which occupies the whole response buffer (must at least override the
     reserve), parses it and prepares to receive a second request while
     sending the last data blocks to the client. It then reinitializes the
     whole http_txn and calls http_wait_for_request(), which arms the
     http-request timeout.

  6) at this exact moment the client emits a second request, probably
     asking for an object referenced in the first page while parsing it
     on the fly, and silently disappears from the internet (internet
     access cut or software having trouble with pipelined requests).

  7) the second request arrives into the request buffer and the response
     data stall in the response buffer, preventing the reserve from being
     used for anything else.

  8) haproxy calls http_wait_for_request() to parse the next request which
     has just arrived, but since it sees the response buffer is full, it
     refrains from doing so and waits for some data to leave or a timeout
     to strike.

  9) the http-request timeout strikes, causing http_wait_for_request() to
     be called again, and the function immediately returns since it cannot
     even produce an error message, and the loop is maintained here.

 10) the client timeout strikes, aborting the loop.

At first glance a check for timeout would be needed *before* considering
the buffer in http_wait_for_request(), but the issue is not there in fact,
because when doing so we see in the logs a client timeout error while
waiting for a request, which is wrong. The real issue is that we must not
consider the first transaction terminated until at least the reserve is
released so that http_wait_for_request() has no problem starting to process
a new request. This allows the first response to be reported in timeout and
not the second request. A more restrictive control could consist in not
considering the request complete until the response buffer has no more
outgoing data but this brings no added value and needlessly increases the
number of wake-ups when dealing with pipelining.

Note that the same issue exists with the request, we must ensure that any
POST data are finished being forwarded to the server before accepting a new
request. This case is much harder to trigger however as servers rarely
disappear and if they do so, they impact all their sessions at once.

No specific reproducer is usable because the bug is very hard to reproduce
and depends on the system as well with settings varying across reboots. On
a test machine, socket buffers were reduced to 4096 (tcp_rmem and tcp_wmem),
haproxy's buffers were set to 16384 and tune.maxrewrite to 12288. The proxy
must work in http-server-close mode, with a request timeout smaller than the
client timeout. The test is run like this :

  $ (printf "GET /15000 HTTP/1.1\r\n\r\n"; usleep 100000; \
     printf "GET / HTTP/1.0\r\n\r\n"; sleep 15) | nc6 --send-only 0 8002

The server returns chunks of the requested size (15000 bytes here, but
78000 in a previous test was the only working value). Strace must show the
last recvfrom() succeed and the last sendto() being shorter than desired or
better, not being called.

This fix must be backported to 1.6, 1.5 and 1.4. It was made in a way that
should make it possible to backport it. It depends on channel_congested()
which also needs to be backported. Further cleanup of http_wait_for_request()
could be made given that some of the tests are now useless.
2016-05-02 16:39:22 +02:00
Willy Tarreau
f51d03cf14 BUG/MEDIUM: http: fix incorrect reporting of server errors
Commit dbe34eb ("MEDIUM: filters/http: Move body parsing of HTTP
messages in dedicated functions") introduced a bug in function
http_response_forward_body() by getting rid of the while(1) loop.
The code immediately following the loop was only reachable on missing
data but now it's also reachable under normal conditions, which used
to be dealt with by the skip_resync_state label returning zero. The
side effect is that in http_server_close situations, the channel's
SHUTR flag is seen and considered as a server error which is reported
if any other error happens (eg: client timeout).

This bug is specific to 1.7, no backport is needed.
2016-05-02 16:39:22 +02:00