8564 Commits

Author SHA1 Message Date
Willy Tarreau
ff47b3f41d BUG/MEDIUM: http: don't automatically forward request close
Maximilian Böhm and Lucas Rolff reported some frequent HTTP/2 POST
failures affecting version 1.8.2 that were not affecting 1.8.1. Lukas
Tribus determined that they appeared following commit a48c141
("BUG/MAJOR: connection: refine the situations where we don't send shutw()").

It turns out that the HTTP request forwarding engine lets a shutr from
the client be automatically forwarded to the server unless chunked
encoding is in use. It's a bit tricky to meet this condition as it only
happens if the shutr is not reported in the initial request. So if a
request is large enough or the body is delayed after the headers (eg:
Expect: 100-continue), the function quits with channel_auto_close()
left enabled. The patch above was not really related in fact. It's just
that a previous bug was causing this shutw to be skipped at the lower
layers, and the two bugs used to cancel each other out.

In the HTTP request we should only pass the close in tunnel mode, as
other cases either need to keep the connection alive (eg: for reuse)
or will force-close it. Also the forced close will properly take care
of avoiding the painful time-wait, which is not possible with the early
close.
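
As an illustration, here is a reduced sketch of the rule (HAProxy's real
code drives this through channel_auto_close()/channel_dont_close(); the
enum below is a stand-in) :

  enum txn_mode { MODE_TUNNEL, MODE_KEEPALIVE, MODE_FORCED_CLOSE };

  /* only a tunnelled transaction may let the client's close propagate */
  static inline int may_forward_close(enum txn_mode mode)
  {
      return mode == MODE_TUNNEL;
  }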

This patch must be backported to 1.8 as it directly impacts HTTP/2, and
may be backported to older versions to save them from being abused by
clients causing TIME_WAITs between haproxy and the server.

Thanks to Lukas and Lucas for running many tests with captures allowing
the bug to be narrowed down.
2017-12-29 17:23:40 +01:00
William Lallemand
e134041910 MINOR: don't close stdio anymore
Closing the standard IO FDs (0,1,2) can be troublesome, especially in
the case of the master-worker.

Instead of closing those FDs, they are now pointing to /dev/null which
prevents sending debugging messages to the wrong FDs.
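
A minimal sketch of the technique (not the actual HAProxy code) :

  #include <fcntl.h>
  #include <unistd.h>

  static void stdio_to_devnull(void)
  {
      int fd = open("/dev/null", O_RDWR);

      if (fd < 0)
          return;
      dup2(fd, 0);   /* stdin  */
      dup2(fd, 1);   /* stdout */
      dup2(fd, 2);   /* stderr */
      if (fd > 2)
          close(fd); /* keep only 0..2 pointing to /dev/null */
  }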

This patch could be backported in 1.8.
2017-12-29 16:33:41 +01:00
PiBa-NL
149a81a443 BUG/MEDIUM: mworker: don't close stdio several times
This patch makes sure that a frontend socket that gets created after
initialization won't be closed when the master gets re-executed.

When used in daemon mode, the master-worker is closing the FDs 0, 1, 2
after the fork of the children.

When the master was reloading, those FDs were assigned again during the
parsing of the configuration (probably for some listeners), and the
workers were closing them, thinking they were the stdio.

This patch must be backported to 1.8.
2017-12-29 16:31:10 +01:00
Willy Tarreau
d790143d99 BUG/MEDIUM: h2: ensure we always know the stream before sending a reset
The recent patch introducing the H2_CS_FRAME_E state to emit stream
resets was not totally correct in that in the rare case where there is
no room left to emit the reset, the next call to process it later could
use an uninitialized stream. This only affects responses to frames that
are sent on closed streams though.

This fix must be backported to 1.8.
2017-12-29 11:34:40 +01:00
Davor Ocelic
e9ed281e9f DOC/MINOR: configuration: typo, formatting fixes
- Add simple typo and formatting fixes
- Eliminate a couple > 80 column lines

Changes do not affect technical content and can be backported.
2017-12-27 19:03:32 +01:00
Willy Tarreau
ab83750a29 BUG/MEDIUM: h2: improve handling of frames received on closed streams
The h2spec utility found certain situations where we're returning an
RST_STREAM while a GOAWAY is expected. While we can't always reliably
decide which one to use (eg: after a stream has been closed for a long
time), in practice we often still have the stream available until it's
destroyed at the application level. This provides the flags we need to
verify the conditions that led to its closure, namely if RST was sent
or received, or if it was regularly closed using a double ES.

The first step consists in marking all closed streams as having already
sent an RST_STREAM frame. This will ensure that we can send an RST_STREAM
for a late transmission on a stream we have forgotten about instead of
risking breaking the connection. The next steps consist in re-arranging
the H2_SS_CLOSED checks so that we can deliver a GOAWAY frame for the
few cases where an unexpected frame was received after a double ES.

By carefully taking care of these specificities, we can reduce by 4 the
number of remaining compliance issues.

Note: some tests start to become a bit long and to be repeated at various
places. Probably adding a bitmask of allowed/forbidden frame types
per state and/or per situation could significantly help. It's likely
that some deeper tests in the frame handlers could also be removed now
as they can't be triggered anymore.
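
For illustration only, such a per-state bitmask could look like this
(hypothetical, not part of the patch) :

  #include <stdint.h>

  enum h2_ft { H2_FT_DATA, H2_FT_HEADERS, H2_FT_PRIORITY, H2_FT_RST_STREAM,
               H2_FT_SETTINGS, H2_FT_WINDOW_UPDATE };

  /* frames still acceptable on a stream in CLOSED state for a short
   * time per RFC7540#5.1 */
  static const uint32_t h2_closed_ok = (1U << H2_FT_PRIORITY) |
                                       (1U << H2_FT_RST_STREAM) |
                                       (1U << H2_FT_WINDOW_UPDATE);

  static inline int h2_frame_ok(uint32_t state_mask, enum h2_ft type)
  {
      return !!(state_mask & (1U << type));
  }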

This fix should be backported to 1.8.
2017-12-27 18:44:22 +01:00
Willy Tarreau
a20a519b8f BUG/MEDIUM: h2: properly handle and report some stream errors
Some stream errors applied to half-closed and closed streams are not
properly reported, especially after the stream transitions to the
closed state. The reason is that the code checks for this "error"
stream state in order to send an RST frame. But if the stream was
just closed or was already closed, there's no way to validate this
condition, and the error is never reported to the peer.

In order to address this situation, we'll add a new FRAME_E demux state
which indicates that the previously parsed frame triggered a stream error
of type STREAM CLOSED that needs to be reported. Proceeding like this
will ensure that we don't lose that information even if we can't
immediately send the message. It also removes the confusion where FRAME_A
could be used either for ACKs or for RST.

The state transition has been added after every h2s_error() on the demux
path. It seems that we might need to have two distinct h2s_error()
functions, one for the mux and another one for the demux, though it
would provide little benefit. It also becomes more apparent that the
H2_SS_ERROR state is only used to detect the need to report an error
on the mux direction. Maybe this will have to be revisited later.
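
The shape of the change on the demux path, reduced to stand-in
structures (state and error names as in the message above) :

  enum h2_cs { H2_CS_FRAME_H, H2_CS_FRAME_P, H2_CS_FRAME_A, H2_CS_FRAME_E };
  #define H2_ERR_STREAM_CLOSED 0x5

  struct h2s { int errcode; };
  struct h2c { enum h2_cs st0; };

  static void h2s_error(struct h2s *h2s, int err) { h2s->errcode = err; }

  static void demux_report_stream_closed(struct h2c *h2c, struct h2s *h2s)
  {
      h2s_error(h2s, H2_ERR_STREAM_CLOSED);
      h2c->st0 = H2_CS_FRAME_E; /* RST_STREAM emitted once room allows */
  }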

This simple change managed to eliminate 5 bugs reported by h2spec.

This fix must be backported to 1.8.
2017-12-27 18:34:50 +01:00
Willy Tarreau
b26881a5d5 BUG/MEDIUM: checks: properly set servers to stopping state on 404
Paul Lockaby reported that since 1.8, disable-on-404 doesn't work
anymore in that the server stays up despite returning 404. Cyril spotted
that this was caused by a copy-paste error introduced by commit 5a13351
("BUG/MEDIUM: log: check result details truncated.") causing
set_server_running() to be called instead of set_server_stopping() in
this case.
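
A reduced model of the one-line fix (stand-in types; the real change is
in the check status handler) :

  struct server { int stopping; };

  static void set_server_stopping(struct server *s) { s->stopping = 1; }

  static void on_disable_on_404(struct server *s)
  {
      /* the copy-paste error called set_server_running() here */
      set_server_stopping(s);
  }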

It can be reproduced with the simple test config below :

  defaults
     mode http
     timeout connect 1s
     timeout client  10s
     timeout server  10s

  listen http
     bind :8888
     option httpchk GET /
     http-check disable-on-404
     server s1 127.0.0.1:9001 check
     server s2 127.0.0.1:9002 check
     http-response add-header x-served-by %s

  listen s1
     bind :9001
     server next 127.0.0.1:9002
     http-response set-status 404

  frontend s2
     bind :9002
     http-request redirect location /

s1 is supposed to be stopping and s2 up, which was not the case. After
calling the correct function, only s2 is used now.

This needs to be backported to 1.8.
2017-12-23 11:16:49 +01:00
Willy Tarreau
a48c141f44 BUG/MAJOR: connection: refine the situations where we don't send shutw()
Since commit f9ce57e ("MEDIUM: connection: make conn_sock_shutw() aware
of lingering"), we refrain from performing the shutw() on the socket if
there is no lingering risk. But there is a problem with this in tunnel
and in TCP modes where a client is explicitly allowed to send a shutw
to the server, even though it is risky.

Not doing it creates this situation reported by Ricardo Fraile and
diagnosed by Christopher : a typical HTTP client (eg: curl) connecting
via the config below to an HTTP server would receive its response,
immediately close while the server remains in keep-alive mode. The
shutr() received by haproxy from the client is "propagated" to the
server side but not acted upon because fdtab[fd].linger_risk is set,
so we expect that the next close will immediately complete this
operation.

  listen proxy-tcp
    bind 127.0.0.1:8888
    mode tcp
    timeout connect 5s
    timeout server  10s
    timeout client  10s
    server server1 127.0.0.1:8000

But since the whole stream will not end until the server closes in
turn, the server doesn't close and haproxy expires on server timeout.
This problem already struck once by reviving an older bug and was
partially fixed with commit 8059351 ("BUG/MEDIUM: http: don't disable
lingering on requests with tunnelled responses") though it was not
enough.

The problem is that linger_risk is not suited here. In fact we need to
know whether or not it is desired to close normally or silently, and
whether or not a shutr() has already been received on this connection.

This is the approach this patch takes, and it solves the problem for
the various difficult modes (tcp, http-server-close, pretend-keepalive).
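
A simplified model of the decision (flag name hypothetical): a real
shutdown(SHUT_WR) is performed for a clean close or when the peer's
shutr was already seen, otherwise the final close() handles it silently.

  #include <sys/socket.h>

  #define CF_SHUTR_SEEN 0x01 /* read side already shut by the peer */

  static void sock_shutw(int fd, unsigned flags, int clean)
  {
      if (clean || (flags & CF_SHUTR_SEEN))
          shutdown(fd, SHUT_WR);
      /* otherwise the final close() will end it silently */
  }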

This fix needs to be backported to 1.8. Many thanks to Ricardo for
providing very detailed traces and configurations.
2017-12-22 18:54:05 +01:00
Willy Tarreau
d4569d1937 BUG/MEDIUM: cache: don't cache the response on no-cache="set-cookie"
If the server mentions no-cache="set-cookie" in the response headers,
we must guarantee that any set-cookie field will not be stored. We
cannot edit the stored response on the fly to trim the set-cookie
header so we can refrain from storing a response containing such a
header. In theory we could use TX_SCK_PRESENT for this but this one
is only set when the cookie is being watched by the configuration.
Since these responses are not very frequent and often accompanied
by a set-cookie header, let's simply refrain from caching whenever
such a directive is present.
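
A simplified version of the check (real header parsing is more careful
than a substring match) :

  #include <string.h>

  static int response_uncacheable(const char *cache_control)
  {
      return cache_control &&
             strstr(cache_control, "no-cache=\"set-cookie\"") != NULL;
  }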

This needs to be backported to 1.8.
2017-12-22 18:03:04 +01:00
Willy Tarreau
504455c533 BUG/MEDIUM: cache: respect the request cache-control header
Till now if a client emitted a request featuring a cache-control header,
this one was not respected and a stale object could still be delivered.
This patch ensures that :
  - cache-control: no-cache disables retrieval from the cache but does
    not prevent the newly fetched object from being stored ;
  - cache-control: no-store can safely be served from the cache but
    prevents the newly fetched object from being stored ;
  - cache-control: max-age/max-stale/min-fresh act like no-cache
  - pragma: no-cache acts like cache-control: no-cache.
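
A reduced sketch of these rules, mapping the directives to two booleans
(names illustrative, the real parser is stricter; pragma: no-cache goes
through the same path) :

  #include <string.h>

  struct cache_rules { int may_retrieve; int may_store; };

  static void apply_req_cache_control(char *cc, struct cache_rules *r)
  {
      for (char *tok = strtok(cc, ", "); tok; tok = strtok(NULL, ", ")) {
          if (!strcmp(tok, "no-cache") ||
              !strncmp(tok, "max-age=", 8) ||
              !strncmp(tok, "max-stale", 9) ||
              !strncmp(tok, "min-fresh=", 10))
              r->may_retrieve = 0; /* bypass the cache lookup */
          else if (!strcmp(tok, "no-store"))
              r->may_store = 0;    /* never store the response */
      }
  }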

This needs to be backported to 1.8.
2017-12-22 17:56:18 +01:00
Willy Tarreau
c9bd34c7e0 BUG/MEDIUM: cache: replace old object on store
Currently the cache aborts a store operation if the object to store
already exists in the cache. This is used to avoid storing multiple
copies at the same time on concurrent accesses. It causes an issue
though, which is that existing unexpired objects cannot be updated.
This happens when any request criterion disables the retrieval from
the cache (eg: with max-age or any other cache-control condition).

For now, let's simply replace the previous existing entry by unlinking
it from the index. This could possibly be improved in the future if
needed.

This fix needs to be backported to 1.8.
2017-12-22 17:56:18 +01:00
Willy Tarreau
7704b1e89a BUG/MEDIUM: cache: do not try to retrieve host-less requests from the cache
All HTTP/1.1 requests missing the Host header share the same hash key 0
and will all be served the first cached object. Let's add a check on the
call to sha1_hosturi() to prevent this from happening.

This must be backported to 1.8.
2017-12-22 17:56:17 +01:00
Willy Tarreau
0ad8e0dfea MINOR: http: add a function to check request's cache-control header field
The new function check_request_for_cacheability() is used to check if
a request may be served from the cache, and/or allows the response to
be stored into the cache. For this it checks the cache-control and
pragma header fields, and adjusts the existing TX_CACHEABLE flag and the
new TX_CACHE_IGNORE flag.

For now, just like its response side counterpart, it only checks the
first value of the header field. These functions should be reworked to
improve their parsers and validate all elements.
2017-12-22 17:56:17 +01:00
Willy Tarreau
faf2909f9f BUG/MINOR: cache: do not force the TX_CACHEABLE flag before checking cacheability
The cache used to set this flag before calling
check_response_for_cacheability() due to the way the flags were previously
set (too late), but this is a bad idea as it loses the information of the
implicit caching rules related to the method and the status code. Let's
only rely on what was determined during the request and response parsing
instead and not change it.

This fix must be backported to 1.8, and it requires that the following
patches are also merged :
 - MINOR: http: adjust the list of supposedly cacheable methods
 - MINOR: http: update the list of cacheable status codes as per RFC7231
 - MINOR: http: start to compute the transaction's cacheability from the request
 - BUG/MINOR: http: do not ignore cache-control: public
2017-12-22 15:49:15 +01:00
Willy Tarreau
d3900cc31d BUG/MINOR: http: properly detect max-age=0 and s-maxage=0 in responses
In 1.3.8, commit a15645d ("[MAJOR] completed the HTTP response processing.")
improved the response parser by taking care of the cache-control header
field. The parser is wrong because it is split in two parts, one checking
for elements containing an equal sign and the other one for those without.
The "max-age=0" and "s-maxage=0" tests were located at the wrong place and
thus have never matched. In practice the side effect was very minimal given
that this code used to be enabled only when checking if a cookie had the
risk of being cached or not. Recently in 1.8 it was also used to decide if
the response could be cached but in practice the cache takes care of these
values by itself so there is very limited impact.
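
A reduced illustration of the corrected placement: the two tests belong
in the branch handling elements with an equal sign.

  #include <string.h>

  static int blocks_caching(char *cc /* writable copy of the field */)
  {
      for (char *tok = strtok(cc, ", "); tok; tok = strtok(NULL, ", ")) {
          if (!strchr(tok, '=')) { /* elements without '=' */
              if (!strcmp(tok, "no-store"))
                  return 1;
          } else {                 /* elements with '=' : the fixed branch */
              if (!strcmp(tok, "max-age=0") || !strcmp(tok, "s-maxage=0"))
                  return 1;
          }
      }
      return 0;
  }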

This fix can be backported to all stable versions.
2017-12-22 15:49:15 +01:00
Willy Tarreau
12b32f212f BUG/MINOR: http: do not ignore cache-control: public
In check_response_for_cacheability(), we don't check the
cache-control flags if the response is already supposed not to be
cacheable. This was introduced very early when cache-control:public
was not checked, and it basically results in this last one not being
able to properly mark the response as cacheable if it uses a status
code which is non-cacheable by default. Till now the impact is very
limited as it doesn't check that cookies set on non-default status
codes are not cacheable, and it prevents the cache from caching such
responses.

Let's fix this by doing two things :
  - remove the test for !TX_CACHEABLE in the aforementioned function
  - however take care of 1xx status codes here (which used to be
    implicitly dealt with by the test above) and remove the explicit
    check for 101 in the caller

This fix must be backported to 1.8.
2017-12-22 14:43:26 +01:00
Willy Tarreau
83ece462b4 MINOR: http: start to compute the transaction's cacheability from the request
There has always been something odd with the way the cache-control flags
are checked. Since it was made for checking for the risk of leaking cookies
only, all the processing was done in the response. Because of this it is not
possible to reuse the transaction flags correctly for use with the cache.

This patch starts to change this by moving the method check to the request
side so that we know very early whether the transaction is expected to be
cacheable, and this status evolves along with the checked headers. For now
it's not enough for the cache to rely on yet, but at least it makes the
flag more consistent along the transaction processing.
2017-12-22 14:43:26 +01:00
Willy Tarreau
c55ddce65c MINOR: http: update the list of cacheable status codes as per RFC7231
Since RFC2616, the following codes were added to the list of codes
cacheable by default : 204, 404, 405, 414, 501. For now this is only
checked by the checkcache option to detect cacheable cookies.
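
The resulting list, sketched (the full set of status codes cacheable by
default per RFC7231#6.1) :

  static int status_cacheable_by_default(int status)
  {
      switch (status) {
      case 200: case 203: case 204: case 206:
      case 300: case 301:
      case 404: case 405: case 410: case 414:
      case 501:
          return 1;
      default:
          return 0;
      }
  }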
2017-12-22 14:43:26 +01:00
Willy Tarreau
24ea0bcb1d MINOR: http: adjust the list of supposedly cacheable methods
We used to have a rule inherited from RFC2616 saying that the POST
method was the only uncacheable one, but things have changed since
and RFC7231+7234 made it clear that in fact only GET/HEAD/OPTIONS/TRACE
are cacheable. Currently this rule is only used to detect cacheable
cookies.
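
Sketched as a simple check (per RFC7231/RFC7234) :

  #include <string.h>

  static int method_is_cacheable(const char *m)
  {
      return !strcmp(m, "GET")     || !strcmp(m, "HEAD") ||
             !strcmp(m, "OPTIONS") || !strcmp(m, "TRACE");
  }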
2017-12-22 14:43:26 +01:00
Eric Salama
fe7456f3b7 BUG/MEDIUM: lua: fix crash when using bogus mode in register_service()
When using an incorrect 'mode' as 2nd argument of core.register_service(),
HAProxy crashes while displaying the error message.

To be backported to 1.8, 1.7 and 1.6.
2017-12-22 14:34:54 +01:00
Emeric Brun
e31148031f BUG/MEDIUM: checks: a server passed in maint state was not forced down.
When setting a server in maint mode, the required next_state was not
set before calling the 'lb_down' function, so the system state was
never committed.

This patch should be backported to 1.8.
2017-12-21 15:23:55 +01:00
Willy Tarreau
7aa15b072e BUG/MEDIUM: stream: don't consider abortonclose on muxes which close cleanly
The H2 mux can cleanly report an error when a client closes, which is not
the case for the pass-through mux which only reports shutr. That was the
reason why "option abortonclose" was created since there was no way to
distinguish a clean shutdown after sending the request from an abort.

The problem is that in case of H2, the streams are always shut read after
the request is complete (when the END_STREAM flag is received), and that
when this lands on a backend configured with "option abortonclose", this
aborts the request. Disabling abortonclose is not always an option when
H1 and H2 have to coexist.

This patch makes use of the newly introduced mux capabilities reported
via the stream interface's SI_FL_CLEAN_ABRT indicating that the mux is
safe and that there is no need to turn a clean shutread into an abort.
This way abortonclose has no effect on requests initiated from an H2
mux.
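
How the three pieces fit together, in a reduced form (flag names from
the commits below, values hypothetical) :

  #define MX_FL_CLEAN_ABRT 0x01 /* mux reports aborts via CS_FL_ERROR */
  #define SI_FL_CLEAN_ABRT 0x02 /* capability copied to the stream-int */

  static unsigned si_flags_from_mux(unsigned mux_flags)
  {
      return (mux_flags & MX_FL_CLEAN_ABRT) ? SI_FL_CLEAN_ABRT : 0;
  }

  /* abortonclose: a clean shutr is an abort only if the mux can't tell */
  static int shutr_is_abort(unsigned si_flags, int abortonclose)
  {
      return abortonclose && !(si_flags & SI_FL_CLEAN_ABRT);
  }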

This patch as well as these 3 previous ones need to be backported to
1.8 :
 - BUG/MINOR: h2: properly report a stream error on RST_STREAM
 - MINOR: mux: add flags to describe a mux's capabilities
 - MINOR: stream-int: set flag SI_FL_CLEAN_ABRT when mux supports clean aborts
2017-12-20 17:01:24 +01:00
Willy Tarreau
984fca9363 MINOR: stream-int: set flag SI_FL_CLEAN_ABRT when mux supports clean aborts
By copying the info in the stream interface that the mux cleanly reports
aborts, we'll have the ability to check this flag wherever needed regardless
of the presence of a mux or not.
2017-12-20 16:56:32 +01:00
Willy Tarreau
28f1cb9da2 MINOR: mux: add flags to describe a mux's capabilities
This new field will be used to describe certain properties of some
muxes. For now we only add MX_FL_CLEAN_ABRT to indicate that a mux
is able to unambiguously report aborts using CS_FL_ERROR, contrary
to others which may only report it via a read0. This will be used to
improve handling of the abortonclose option with H2. Other flags
may come later to report multiplexing capabilities or not, support
of client/server sides etc.
2017-12-20 16:31:30 +01:00
Willy Tarreau
2153d3ce73 BUG/MINOR: h2: properly report a stream error on RST_STREAM
We want to report such an error since H2 allows us to differentiate
between an end of stream and an abort.

To be backported to 1.8.
2017-12-20 14:38:19 +01:00
Ryan O'Hara
8cb9993469 CONTRIB: halog: Fix compiler warnings in halog.c
There were several unused variables in halog.c that each caused a
compiler warning [-Wunused-but-set-variable]. This patch simply
removes the declarations of said variables and any instance where the
unused variable was assigned a value.
2017-12-20 09:36:58 +01:00
Ryan O'Hara
957d12028e CONTRIB: iprange: Fix compiler warning in iprange.c
The declaration of main() in iprange.c did not specify a type, causing
a compiler warning [-Wimplicit-int]. This patch simply declares main()
to be type 'int' and calls exit(0) at the end of the function.
2017-12-20 09:36:58 +01:00
Etienne Carriere
aec8989e53 MINOR: spoe: add force-set-var option in spoe-agent configuration
For security reasons, the spoe filter was only able to change the values
of existing variables. In specific cases (eg: with Lua code), the names
of the variables are unknown at the configuration parsing phase.
The force-set-var option can be enabled to register all variables.
2017-12-20 08:55:18 +01:00
Bertrand Jacquin
72fa1ec24e MEDIUM: netscaler: add support for standard NetScaler CIP protocol
It looks like two versions of the protocol exist, as reported by
Andreas Mahnke. This patch adds support for both the legacy and the
standard CIP protocol according to NetScaler specifications.
2017-12-20 07:04:07 +01:00
Bertrand Jacquin
a341a2f479 MEDIUM: netscaler: do not analyze original IP packet size
Original information about the client is stored in the CIP encapsulated
IP header, hence there is no need to consider the original IP packet
length to determine if data is missing. Instead this change detects
missing data by checking that the remaining buffer is large enough to
contain minimal IP and TCP headers and holds as much data as the CIP
header announces.
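
A simplified form of the check (constants are the protocol minimums,
not HAProxy's code) :

  #include <stddef.h>

  #define MIN_IP_HDR  20 /* minimal IPv4 header */
  #define MIN_TCP_HDR 20 /* minimal TCP header  */

  static int cip_data_missing(size_t remaining, size_t cip_len)
  {
      if (remaining < MIN_IP_HDR + MIN_TCP_HDR)
          return 1;               /* can't even hold IP + TCP */
      return remaining < cip_len; /* less data than CIP announces */
  }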
2017-12-20 07:04:07 +01:00
Bertrand Jacquin
67de5a295c MINOR: netscaler: check in one-shot if buffer is large enough for IP and TCP header
There is minimal gain in checking first the IP header length and then
the TCP header length since we always want to capture information about
both protocols.

IPv4 length calculation was incorrect since IPv4 ip_len actually defines
the total length of the IPv4 header and following data.
2017-12-20 07:04:07 +01:00
Bertrand Jacquin
43a66a96b3 BUG/MAJOR: netscaler: address truncated CIP header detection
The buffer pointer is manually incremented in order to progress in the
trash buffer, but calculations are made omitting this manual offset.

This leads to random packets being rejected with the following error:

  HTTP/1: Truncated NetScaler Client IP header received

Instead, once the original IP header is found, use the IP header length
without considering the CIP encapsulation.
2017-12-20 07:04:07 +01:00
Bertrand Jacquin
c7cc69ac36 BUG/MEDIUM: netscaler: use the appropriate IPv6 header size
IPv6 header has a fixed size of 40 bytes, not 20.
2017-12-20 07:04:07 +01:00
Bertrand Jacquin
7d668f9e76 MINOR: netscaler: rename cip_len to clarify its usage
cip_len was meant to be the length of the data encapsulated in the CIP
protocol, i.e. the size of the IP and TCP headers.
2017-12-20 07:04:07 +01:00
Bertrand Jacquin
4b4c286bee MINOR: netscaler: remove the use of cip_magic only used once 2017-12-20 07:04:07 +01:00
Bertrand Jacquin
b387591f32 MINOR: netscaler: respect syntax
As per doc/coding-style.txt
2017-12-20 07:04:07 +01:00
Davor Ocelic
4094ce1a23 DOC/MINOR: intro: typo, wording, formatting fixes
- Fix a couple typos
- Introduce a couple simple rewordings
- Eliminate > 80 column lines

Changes do not affect technical content and can be backported.
2017-12-20 07:01:36 +01:00
Christopher Faulet
789691778f BUG/MEDIUM: mworker: Set FD_CLOEXEC flag on log fd
A log socket (UDP or UNIX) is opened by the master during its startup,
when the first log message is sent. So, to prevent FD leaks, we must
ensure we correctly close it during a reload. By setting the FD_CLOEXEC
bit on it, we are sure it will automatically be closed during a reload.
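
The gist of the fix, sketched :

  #include <fcntl.h>

  static void set_cloexec(int fd)
  {
      fcntl(fd, F_SETFD, fcntl(fd, F_GETFD) | FD_CLOEXEC);
  }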

This patch must be backported to 1.8.
2017-12-19 14:03:30 +01:00
Willy Tarreau
60a2ee7945 MINOR: sample: rename the "len" converter to "length"
This converter was recently introduced by commit ed0d24e ("MINOR:
sample: add len converter").

As found by Cyril, it causes an issue in "http-request capture"
statements. The non-obvious problem is that an old syntax for sample
expressions and converters used to support a series of words, each
representing a converter. This used to be how the "stick" directives
were created initially. By having a converter called "len", a
statement such as "http-request capture foo len 10" considers "len"
as a converter and not as the capture length.

This obsolete syntax needs to be changed in 1.9 but it's too late
for other versions. It's worth noting that the same problem can
happen if converters are registered on the fly using Lua. Other
language keywords that currently have to be avoided in converters
include "id", "table", "if", "unless".
2017-12-15 07:13:48 +01:00
Cyril Bonté
9fc9e53763 BUG: MINOR: http: don't check http-request capture id when len is provided
Randomly, haproxy could fail to start when a "http-request capture"
action is defined, without any change to the configuration. The issue
depends on the memory content, which may raise a fatal error like :
  unable to find capture id 'xxxx' referenced by http-request capture
rule

Commit fd608dd2 already prevents the condition from happening, but this
one should be included for completeness and to reflect the code on the
response side.

The issue was introduced recently by commit 29730ba5 and should only be
backported to haproxy 1.8.
2017-12-14 22:46:27 +01:00
Cyril Bonté
3906d5739c BUG: MAJOR: lb_map: server map calculation broken
Adrian Williams reported that several balancing methods were broken and
sent all requests to one backend. This is a regression in haproxy 1.8 where
the server score was not correctly recalculated.

This fix must be backported to the 1.8 branch.
2017-12-14 17:36:39 +01:00
Etienne Carriere
ed0d24ebed MINOR: sample: add len converter
Add a len converter that returns the length of a string.
2017-12-14 14:36:10 +01:00
Willy Tarreau
b78b80efe5 BUG/MINOR: stream-int: don't try to receive again after receiving an EOS
When an end of stream has been reported, we should not try to receive again
as the mux layer might not be prepared for this and could report unexpected
errors.

This is more of a strengthening measure that follows the introduction of
conn_stream that came in 1.8. It's desired to backport this into 1.8 though
it's uncertain at this time whether it may have caused real issues.
2017-12-14 13:43:52 +01:00
Willy Tarreau
91bfdd7e04 BUG/MEDIUM: h2: fix stream limit enforcement
Commit 4974561 ("BUG/MEDIUM: h2: enforce the per-connection stream limit")
implemented a stream limit enforcement on the connection but it was not
correctly done as it would count streams still known by the connection,
which includes the lingering ones that are already marked closed. We need
to count only the non-closed ones, which this patch does. The effect is
that some streams are rejected a bit before the limit.
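
The corrected accounting, reduced to a stand-in structure: only streams
not yet closed are counted against the limit.

  struct h2s_node { int closed; struct h2s_node *next; };

  static int h2_open_streams(const struct h2s_node *head)
  {
      int n = 0;

      for (; head; head = head->next)
          if (!head->closed)
              n++;
      return n;
  }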

This fix needs to be backported to 1.8.
2017-12-14 13:43:52 +01:00
Willy Tarreau
805935147a BUG/MEDIUM: http: don't disable lingering on requests with tunnelled responses
The HTTP forwarding engine needs to disable lingering on requests in
case the connection to the server has to be suddenly closed due to
http-server-close being used, so that we don't accumulate lethal
TIME_WAIT sockets on the outgoing side. A problem happens when the
server doesn't advertise a response size, because the response
message quickly goes through the MSG_DONE and MSG_TUNNEL states,
and once the client has transferred all of its data, it turns to
MSG_DONE and immediately sets NOLINGER and closes before the server
has a chance to respond. The problem is that this destroys some of
the pending DATA being uploaded, the server doesn't receive all of
them, detects an error and closes.

This early NOLINGER is inappropriate in this situation because it
happens before the response is transmitted. This state transition
to MSG_TUNNEL doesn't happen when the response size is known since
we stay in MSG_DATA (and related states) during all the transfer.

Given that the issue is only related to connections not advertising
a response length and that by definition these connections cannot be
reused, there's no need for NOLINGER when the response's transfer
length is not known, which can be verified when entering the CLOSED
state. That's what this patch does.

This fix needs to be backported to 1.8 and very likely to 1.7 and
older as it affects the very rare case where a client immediately
closes after the last uploaded byte (typically a script). However given
that the risk of occurrence in HTTP/1 is extremely low, it is probably
wise to wait before backporting it to versions older than 1.8.
2017-12-14 13:43:52 +01:00
Willy Tarreau
13e4e94dae BUG/MEDIUM: h2: don't close after the first DATA frame on tunnelled responses
Tunnelled responses are those without a content-length nor a chunked
encoding. They are specially dealt with in the current code but the
behaviour is not correct. The fact that the chunk size is left to zero
with a state artificially set to CHUNK_SIZE validates the test on
whether or not to set the end of stream flag. Thus the first DATA
frame always carries the ES flag and subsequent ones remain blocked.

This patch fixes it in two ways :
  - update h1m->curr_len to the size of the current buffer so that it
    is properly subtracted later to find the real end ;
  - don't set the state to CHUNK_SIZE when there's no content-length
    and instead set it to CHUNK_SIZE only when there's chunking.

This fix needs to be backported to 1.8.
2017-12-14 13:43:52 +01:00
Willy Tarreau
c4134ba8b0 BUG/MEDIUM: h2: don't switch the state to HREM before end of DATA frame
We used to switch the stream's state to HREM when seeing an ES bit on
the DATA frame before actually being able to process that frame, possibly
resulting in the DATA frame being processed after the stream was seen as
half-closed and possibly being rejected. The state must not change before
the frame is really processed.

Also fixes a harmless typo in the flag name which should have DATA and
not HEADERS in its name (but all values are equal).

Must be backported to 1.8.
2017-12-14 13:43:52 +01:00
Willy Tarreau
6847262211 MINOR: h2: don't demand that a DATA frame is complete before processing it
Since the last commit it's no longer required that DATA frames be complete,
so better to start with what we have. Only the HEADERS frame requires this. This
may be backported as part of the upload fixes.
2017-12-14 13:43:52 +01:00
Willy Tarreau
8fc016d0fe BUG/MEDIUM: h2: support uploading partial DATA frames
We currently have a problem with DATA frames when they don't fit into
the destination buffer. While it was imagined that in theory this never
happens, in practice it does when "option http-buffer-request" is set,
because the headers don't leave the target buffer before trying to read
so if the frame is full, there's never enough room.

This fix consists in reading what can be read from the frame and advancing
the input buffer. Once the contents left are only the padding, the frame
is completely processed. This also solves another problem we had which is
that it was possible to fill a request buffer beyond its reserve because
the <count> argument was not respected in h2_rcv_buf(). Thus it's possible
that some POST requests sent at once with a headers+body filling exactly a
buffer could result in "400 bad req" when trying to add headers.
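
A simplified model of partial DATA handling (names hypothetical): copy
what fits, advance the offset, and consider the frame done only once
just the padding remains.

  #include <stddef.h>
  #include <string.h>

  struct frame { const char *in; size_t len; size_t ofs; size_t padding; };

  static size_t h2_read_data(struct frame *f, char *out, size_t room)
  {
      size_t avail = f->len - f->ofs - f->padding;
      size_t n = avail < room ? avail : room;

      memcpy(out, f->in + f->ofs, n);
      f->ofs += n;
      return n; /* done when f->ofs + f->padding == f->len */
  }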

This fix must be backported to 1.8.
2017-12-14 13:43:52 +01:00