I've tried to follow all the pool_alloc2/pool_free2 calls in the code
to track memory leaks. I've found one which only happens when memory
is already exhausted while allocating a new appsession cookie.
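A minimal sketch of the pattern (the pool and field names below are
only illustrative, not the actual appsession code):

    asession = pool_alloc2(pool2_appsess);            /* hypothetical pool */
    if (asession == NULL)
        return;                                       /* nothing allocated yet */
    asession->sessid = pool_alloc2(apools.sessid);    /* hypothetical pool */
    if (asession->sessid == NULL) {
        pool_free2(pool2_appsess, asession);          /* the free that was missing */
        return;
    }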
Sometimes it can be desirable to return a location which is the same
as the request with a slash appended when the request did not end
with one. A typical use of this is for sending a 301 so that people
don't reference links without the trailing slash. The name of the
new option is "append-slash" and it can be used on "redirect"
statements in prefix mode.
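For example, a configuration along these lines would use it (the ACL
is only illustrative):

    acl missing_slash path_reg ^/article/[^/]*$
    redirect code 301 prefix / append-slash if missing_slash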
When using server redirection, it is possible to specify a path
consisting of only one slash. While this is discouraged (risk of
loop) it may sometimes be useful combined with content switching.
Prepending a '/' then causes two slashes to be returned in the
response. So we now do as with the other redirects and do not
prepend a slash when the path is a lone '/'.
Some message pointers were not usable once the message reached the
HTTP_MSG_DONE state. This is the case for ->som which points to the
body because it is needed to parse chunks. There is one case where
we need the beginning of the message: server redirect. We have to
call http_get_path() after the request has been parsed. So we rely
on ->sol without counting on ->som. In order to achieve this, we're
making ->rq.{u,v} relative to the beginning of the message instead
of the buffer. That simplifies the code and makes it cleaner.
Preliminary tests show this is OK.
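A rough illustration of the new convention (field names from memory,
they may differ slightly):

    /* rq.u is now an offset from the start of the message, so the URI
     * stays reachable from ->sol even once the message is complete */
    char *uri  = msg->sol + msg->sl.rq.u;   /* no longer buf->data + rq.u */
    char *path = http_get_path(txn);        /* usable for server redirect */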
This might have been introduced with chunk extensions. Note that
the server redirect still does not work because http_get_path()
cannot get the correct path once the request message is in the
HTTP_MSG_DONE state (->som does not point to the start of message
anymore).
If we accept a new request and that request produces an immediate
response (error, redirect, ...), then we may fail to send it in
case of pipelined requests if the response buffer is full. To avoid
this, we check the availability of at least maxrewrite bytes in the
response buffer before accepting a new pipelined request.
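The check is basically of this form (sketch, the buffer field names
are indicative only):

    /* don't parse a new pipelined request while the response buffer
     * cannot absorb at least maxrewrite bytes, otherwise a pending
     * error or redirect could not be emitted */
    if (rep->size - rep->l < global.tune.maxrewrite)
        return 0;    /* wait for some room in the response buffer */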
During a redirect, we used to send the last chunk of response with
stream_int_cond_close(). But this is wrong in case of pipeline,
because if the response already contains something, this function
will refrain from touching the buffer. Use a concatenation function
instead.
Also, this call might still fail when the buffer is full, so we need
a second fix: refrain from parsing an HTTP request as long as the
response buffer is full, otherwise we may not even be able to return
a pending redirect or an error code.
That patch was incorrect because under some circumstances, the
capture memory could be freed by session_free() and then again
by http_end_txn(), causing a double free and an eventual segfault.
The pool use count was also reported wrong due to this bug.
The cleanup code was removed from session_free() to remain only
in http_end_txn().
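Sketched with approximate field names, the cleanup now looks like
this and only exists in http_end_txn() (fe being the session's
frontend):

    if (txn->req.cap) {
        struct cap_hdr *h;
        for (h = fe->req_cap; h; h = h->next)
            pool_free2(h->pool, txn->req.cap[h->index]);
        pool_free2(fe->req_cap_pool, txn->req.cap);
        txn->req.cap = NULL;    /* session_free() must not free it again */
    }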
Hank A. Paulson reported a massive memory leak when using keep-alive
mode. The information he provided made it easy to find that captured
request and response headers were erased but not released when renewing
a request.
Several HTTP analysers used to set those flags to values that
were useful, without considering the possibility that they might
not be called again to clean up what they did. First, replace
direct flag manipulation with more explicit macros. Second,
enforce a rule stating that any analyser which changes one of
these flags from the default must restore it after completion,
so that other analysers see correct flags.
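For instance, the macros are of this general shape (a simplified
sketch, the real names and flags may differ):

    #define buffer_auto_close(buf)  do { (buf)->flags |=  BF_AUTO_CLOSE; } while (0)
    #define buffer_dont_close(buf)  do { (buf)->flags &= ~BF_AUTO_CLOSE; } while (0)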
With both this fix and the previous one about analyser bits,
we should not see any more stuck sessions.
Commit 0dfdf19b6438c2cea47b1dea0442d65bacfc77cf introduced a
regression because the connection header is now parsed and checked
depending on the configured options, but the options are set after
calling it instead of being set before.
Historically, "option httpclose" has always worked the same way. It
only mangles the "Connection" header in the request and the response
if needed, but does not affect the connection by itself, and ignores
any further data. It is dangerous to change this behaviour without
leaving any other alternative. If an active close is desired, it's
better to make use of "option forceclose" which does exactly what
it intends to do.
So as of now, "option httpclose" will only mangle the headers as
before, and will only affect the connection by itself when combined
with another connection-related option (eg: keepalive or server-close).
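In practice this means that to actively close server connections, one
would now write something like:

    defaults
        mode http
        option forceclose    # actively close once the response is complete
        # option httpclose   # alone, this only mangles the Connection header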
We basically have to mimic the code of process_session() here, so
when the remote output is closed, we must abort, otherwise we'll end
up with data which cannot leave the buffer.
By default this function returned 0 indicating an end of analysis.
This was not a problem as long as it was the last analyser in the
chain but becomes quite a big one now since it skips the forwarder
with auto_close enabled, causing some data to pass under the nose
of the last one undetected.
There were still several situations leading to CLOSE_WAIT sockets
remaining there forever because some complex transitions were
obviously not caught due to the impossibility of resyncing changes
between the request and response FSMs.
This patch now centralizes the global transaction state and feeds
it from both request and response transitions. That way, whichever
side finishes first, there is no issue converging to the correct
state.
Some heavy use of the new debugging function has helped a lot. Maybe
those calls could be removed after some time. First tests are very
positive.
This function outputs to fd #-1 the status of request and response
buffers, the transaction states, the stream interface states, etc...
That way, it's easy to find that output in an strace report, correctly
placed WRT the other syscalls.
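The trick roughly looks like this (the real function dumps much more
state):

    /* a write() to fd -1 always fails, but the attempt still shows up
     * in strace, correctly ordered among the real syscalls */
    char line[256];
    int len = snprintf(line, sizeof(line), "[%d] req: st=%d, rsp: st=%d\n",
                       __LINE__, txn->req.msg_state, txn->rsp.msg_state);
    write(-1, line, len);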
The data forwarders are analysers. As such, they have to check for
various situations in which they have to abort, one of them being
the lack of data with closed input. Now we don't leave the functions
anymore without performing these checks. This has solved the new
CLOSE_WAIT issue that became more noticeable since last patch.
It may happen that we forward a close just after we sent the last
chunk, because we forgot to clear the AUTO_CLOSE flag.
This issue caused some pages to be truncated depending on some
timing races. Issue initially reported by Cyril Bonté.
The cookie parser could be fooled by spaces or commas in cookie names
and values, causing the persistence cookie not to be matched if located
just after such a cookie. Now spaces found in values are considered as
part of the value, and spaces, commas and semi-colons found in values
or names are skipped till the next cookie name.
This fix must be backported to 1.3.
It makes sense to permit a client to keep its connection when
performing a redirect to the same host. We only check that the
redirect location begins with a slash before using keep-alive
(if the client supports it).
By default we automatically wait for enough data to fill large
packets if buf->to_forward is not null. This causes a problem
with POST/Expect requests which have a data size but no data
immediately available. Instead of causing noticeable delays on
such requests, simply add a flag to disable waiting when sending
requests.
In server-close mode particularly, the response buffer is marked for
no-auto-close after a response passed through. This prevented a POST
request from being aborted on errors, timeouts or anything if the
response was received before the request was complete.
If we enable reading of a request immediately after completing
another one, we end up performing small reads until the request
buffer is complete. This takes time and makes it harder to realign
the buffer when needed. Just enable reading when we need to.
The rq.u field is relative to buf->data, not to msg->sol. We have
to subtract msg->som everywhere this error was made. Maybe it will
be simpler to have a pointer to the buffer in the message and find
appropriate data there.
Many times we see a lot of short responses in HTTP (typically 304 on a
reload). It is a waste of network bandwidth to send that many small packets
when we know we can merge them. When we know that another HTTP request is
following a response, we set BF_EXPECT_MORE on the response buffer, which
will turn MSG_MORE on exactly once. That way, multiple short responses can
leave pipelined if their corresponding requests were also pipelined.
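Schematically (simplified, the real send path is more involved):

    /* when another request is known to follow, tag the response buffer */
    rep->flags |= BF_EXPECT_MORE;

    /* and in the send path, turn it into MSG_MORE exactly once */
    ret = send(fd, data, len,
               MSG_DONTWAIT | ((b->flags & BF_EXPECT_MORE) ? MSG_MORE : 0));
    b->flags &= ~BF_EXPECT_MORE;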
We used to forward more trailers than required, causing a
desynchronization of the output. Now we schedule all for forwarding
as soon as we encounter them.
This option enables HTTP keep-alive on the client side and close mode
on the server side. This offers the best latency on the slow client
side, and still saves as many resources as possible on the server side
by actively closing connections. Pipelining is supported on both requests
and responses, though there is currently no reason to get pipelined
responses.
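Assuming this describes the new "http-server-close" keyword, a
typical use would be:

    defaults
        mode http
        option http-server-close   # keep-alive to the client, close to the server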
When too large a message lies in a buffer before parsing a new
request/response, we can now wait for previous outgoing data to
leave the buffer before attempting to parse again. After that
we can consider the opportunity to realign the buffer if needed.
The HTTP parser needed the msg structure to hold pre-initialized pointers.
This causes trouble with keep-alive because if some data is still in the
buffer, the pointers can be anywhere after the data and later become invalid
when the buffer gets realigned.
It was not needed to rely on that since we have two valid pieces of
information in the buffer itself:
- buf->lr : last visited place
- buf->w + buf->send_max : beginning of next message
So by doing the maths only on those values, we can avoid doing tricks
on msg->som.
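So the parser can locate itself with something like this (sketch,
buffer wrapping ignored for clarity):

    char *msg_start = buf->w + buf->send_max;   /* start of the next message */
    char *parse_ptr = buf->lr;                  /* resume where we stopped */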
When we catch an error from the server, speed up the connection
abort since we don't want to remain long with pending data in the
socket, and we want to be able to reuse our source port ASAP.
The "forceclose" option used to close the output channel to the
server once it started to respond. While this happened to work with
most servers, some of them considered this as a connection abort and
immediately stopped responding.
Now that we're aware of the end of a request and response, we're able
to trivially handle this option and properly close both sides when the
server's response is complete.
During this change it appeared that forwarding could be allowed when
the BF_SHUTW_NOW flag was set on a buffer, which obviously is not
acceptable and was causing some trouble. This has been fixed too and
is the reason for the MEDIUM status on this patch.
There were still issues with the buffer alignment. Now we ensure
that we always align it before a request or response is completely
parsed if there is less than maxrewrite bytes free at the end. In
practice, it's not called that often and ensures we can always work
as expected.
In many places where we perform header insertion, an error check is
performed but, due to a mistake, it can never match any error:
    if (unlikely(error) < 0)
instead of
    if (unlikely(error < 0))
This prevents error 400 responses from being sent when the buffer is
full due to many header additions. This must be backported to 1.3.
The body parser will be used in close and keep-alive modes. It follows
the stream to keep in sync with both the request and the response message.
Both chunked transfer-coding and content-length are supported according to
RFC2616.
The multipart/byterange encoding has not yet been implemented and, if not
accompanied by one of the two other ones, will be forwarded till the close,
as required by the specification.
Both the request and the response analysers converge into an HTTP_MSG_DONE
state where it will be possible to force a close (option forceclose) or to
restart with a fresh new transaction and maintain keep-alive.
This change is important. All tests are OK but any possible behaviour
change with "option httpclose" might find its root here.
When parsing body for URL parameters, we must not consider that
data are available from buf->data but from buf->data + msg->som.
This is not a problem right now but may become with keep-alive.
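In other words, something like (sketch):

    /* with keep-alive the message may not start at the buffer's origin */
    char *msg_start = buf->data + msg->som;    /* not just buf->data */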
When parsing a request that does not start at the beginning of the
buffer, we may experience a buffer full issue. In order to avoid
this, we try to realign the buffer if it is not really full. That
will be required when we have to deal with pipelined requests.