This counter will help quickly spot whether there are new errors or not.
It is also assigned to each capture so that a script can keep track of
which capture was taken when.
It is possible to block on incorrectly chunked requests or responses,
but this becomes very hard to debug when it happens once in a while.
This patch adds the ability to also capture incorrectly chunked requests
and responses. The chunk will appear in the error buffer and will be
verifiable with the usual "show errors". The incorrect byte will match
the error location.
Error captures only supported contiguous messages. This is annoying when
capturing chunking errors, so let's ensure the function is able to
copy wrapped messages.
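For illustration only (this is not HAProxy's actual capture code, the names
below are invented), copying a wrapped message out of a circular buffer
amounts to at most two memcpy() calls:

    #include <string.h>

    /* Copy 'len' bytes starting at offset 'ofs' from a circular buffer of
     * 'size' bytes into the flat capture area 'dst'.  When the requested
     * region wraps past the end of the buffer, it is copied in two parts.
     */
    static void capture_wrapped(char *dst, const char *buf, size_t size,
                                size_t ofs, size_t len)
    {
        size_t part1 = size - ofs;                 /* bytes before the wrap */

        if (len <= part1) {
            memcpy(dst, buf + ofs, len);           /* contiguous case */
        } else {
            memcpy(dst, buf + ofs, part1);         /* tail of the buffer */
            memcpy(dst + part1, buf, len - part1); /* wrapped beginning */
        }
    }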
Until now, when an error message was returned to a client, all buffer
contents were left intact. Since the analysers had been removed, the
potentially invalid data that had been read could still be sent.
Now we ensure we only keep the already scheduled data in the buffer
and we truncate it after that. That means that responses with data
that must be blocked will really be blocked, and that incorrectly
chunked data will be stopped at the point where the chunking fails.
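The principle, as a minimal self-contained sketch (the structure and field
names are invented, this is not HAProxy's buffer API): bytes already
scheduled for sending are preserved, everything read beyond them is dropped.

    struct xferbuf {
        char   data[8192];
        size_t scheduled;   /* bytes already scheduled to be sent */
        size_t pending;     /* bytes read but not yet scheduled   */
    };

    /* Drop everything that was read but not yet scheduled, so that blocked
     * responses or incorrectly chunked data stop exactly at the point where
     * the error was detected. */
    static void truncate_unscheduled(struct xferbuf *b)
    {
        b->pending = 0;
    }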
When haproxy parses chunk-encoded data that are scheduled to be sent, it is
possible that the other end is closed (mainly due to a client abort returning
as an error). The message state thus changes to HTTP_MSG_ERROR and the error
is reported as a chunk parsing error ("PD--") even though it is not one.
Detect this situation before setting the flags and report the appropriate
flag instead.
Debugging parsing errors can be greatly improved if we know what the parser
state was and what the buffer flags were (especially for closed inputs/outputs
and full buffers). Let's add that to the error snapshots.
When forwarding chunk-encoded data, each chunk gets a TCP PUSH flag when
going onto the wire simply because the send() function does not know that
some data remain after it (next chunk). Now we set the BF_EXPECT_MORE flag
on the buffer if the chunk size is not null. That way we can reduce the
number of packets sent, which is particularly noticeable when forwarding
compressed data, especially as it requires fewer ACKs from the client.
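At the socket level this translates into the MSG_MORE hint on Linux: the
kernel delays the PUSH until the last part has been written. A generic,
hedged sketch (not HAProxy's stream_sock code):

    #include <sys/types.h>
    #include <sys/socket.h>

    /* Send one chunk; 'more_expected' must be non-zero when another chunk
     * is known to follow, so the kernel may coalesce it with the next write
     * instead of immediately pushing a small segment.  MSG_MORE is
     * Linux-specific. */
    static ssize_t send_chunk(int fd, const void *data, size_t len,
                              int more_expected)
    {
        return send(fd, data, len, more_expected ? MSG_MORE : 0);
    }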
When the number of servers is a multiple of the size of the input set,
map-based hash can be inefficient. This typically happens with 64
servers when doing URI hashing. The "avalanche" hash-type applies an
avalanche hash before performing a map lookup in order to smooth the
distribution. The result is slightly less smooth than the map for small
numbers of servers, but still better than consistent hashing.
We'll use this hash in other places, so let's make it globally available.
The function has also been renamed because its former name "chash_hash"
was not appropriate.
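For illustration, a classic 32-bit full-avalanche mix (in the spirit of
Bob Jenkins' integer hash) applied before the map lookup could look like
this; the constants and names are a generic sketch, not necessarily the
exact ones used in the code:

    #include <stdint.h>

    /* Full-avalanche mix: every input bit influences about half of the
     * output bits, smoothing out regular patterns in the input keys. */
    static uint32_t avalanche32(uint32_t a)
    {
        a = (a + 0x7ed55d16) + (a << 12);
        a = (a ^ 0xc761c23c) ^ (a >> 19);
        a = (a + 0x165667b1) + (a << 5);
        a = (a + 0xd3a2646c) ^ (a << 9);
        a = (a + 0xfd7046c5) + (a << 3);
        a = (a ^ 0xb55a4f09) ^ (a >> 16);
        return a;
    }

    /* Map-based lookup: smooth the raw hash first, then take the modulo. */
    static unsigned pick_server(uint32_t raw_hash, unsigned nb_servers)
    {
        return avalanche32(raw_hash) % nb_servers;
    }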
When a header is removed, the previous header's next pointer is updated
to reflect the next of the current header. However, when cycling through
the loop, we update the prev pointer to point to the deleted header, which
means that if we delete another header, it's the deleted header's next
pointer that will be updated, leaving the deleted header in the list with
a null length, which is forbidden.
We simply must not update the prev pointer after a removal.
This bug was present when either "reqdel" or "rspdel" removed two
consecutive headers. It could also occur when removing cookies in either
requests or responses, but since cookie removal was the last header
processing step, the issue remained unnoticed.
Issue reported by Hank A. Paulson.
This fix must be ported to 1.4 and possibly 1.3.
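A minimal self-contained illustration of the pitfall on a generic
singly-linked list (this is not the actual header-index code): prev must
only advance when the current element is kept.

    #include <stddef.h>

    struct hdr {
        struct hdr *next;
        int         remove_me;   /* stands for "matches reqdel/rspdel" */
    };

    /* Remove all matching elements following 'prev' (freeing omitted for
     * brevity).  The bug was to advance 'prev' even after a removal, so a
     * second consecutive removal updated the already-deleted element
     * instead of the live list. */
    static void delete_matching(struct hdr *prev)
    {
        struct hdr *cur;

        while ((cur = prev->next) != NULL) {
            if (cur->remove_me) {
                prev->next = cur->next;  /* unlink cur */
                /* do NOT advance prev: the next candidate is the new
                 * prev->next, and prev is still the last kept element */
            } else {
                prev = cur;              /* cur is kept, advance */
            }
        }
    }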
Cookies in indirect mode are removed from the cookie header. Three pointers
had to be updated when appsession cookies were processed next, but they were
not. As a result, memcpy() could be called with a negative length, causing
the process to crash. It is unclear whether this can be remotely exploited.
(cherry picked from commit c5f3749aa3ccfdebc4992854ea79823d26f66213)
In out-of-memory conditions, the ->destroy function would free all pools
possibly allocated for the current appsession, including those that had not
been allocated nor assigned yet and still pointed to a previous allocation,
obviously resulting in a segfault.
(cherry picked from commit 75eae485921d3a6ce197915c769673834ecbfa5c)
In case of out of memory, it was possible to write to a null pointer
when capturing response cookies due to a missing "else" block. The
request handling was fine though.
(cherry picked from commit 62e3604d7dd27741c0b4c9e27d9e7c73495dfc32)
When running with -vv or -V -d, the list of usable polling systems
is reported. The final selection did not take into account possible
failures during the tests, which was misleading and could make one think
that a non-working poller would be used, while that is not the case. Fix
that to really report the correct ones.
(cherry picked from commit 6d0e354e0171f08b7b3868ad2882c3663bd068a7)
Since unix sockets are now supported by "bind", the default backlog size
was not large enough to accept the traffic. The size is now inherited from
the listener so that unix listeners behave like the tcp listeners.
This also affects the "stats socket" backlog, which is now determined by
"stats maxconn".
Some distros' libc is built for CPUs earlier than i686 and as such does
not offer support for the Linux kernel's faster vsyscalls. This code adds
a new build option, USE_VSYSCALLS, to bypass libc for the most commonly
used system calls. A net gain of about 10% can be observed with this change
alone.
It only works when /proc/sys/abi/vsyscall32 equals exactly 2. When it's
set to 1, the VDSO is randomized and cannot be used.
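A small hedged sketch of how this prerequisite could be verified at run
time (error handling kept minimal):

    #include <stdio.h>

    /* Return 1 when /proc/sys/abi/vsyscall32 reads exactly 2, i.e. the
     * fixed (non-randomized) vsyscall entry may be used; 0 otherwise. */
    static int vsyscall32_usable(void)
    {
        FILE *f = fopen("/proc/sys/abi/vsyscall32", "r");
        int val = -1;

        if (!f)
            return 0;
        if (fscanf(f, "%d", &val) != 1)
            val = -1;
        fclose(f);
        return val == 2;
    }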
Analysers were re-evaluated when some flags were still present in the
buffers, even if they had not changed since the previous pass, resulting
in a waste of CPU cycles.
Ensuring that the flags have changed saved some useless calls:
    function                       min calls per session (before -> after)
    http_request_forward_body                5 -> 4
    http_response_forward_body               3 -> 2
    http_sync_req_state                     10 -> 8
    http_sync_res_state                      8 -> 6
    http_resync_states                       8 -> 6
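The idea, as a generic sketch (structure, mask and field names invented
for illustration): remember the flags seen at the previous pass and only
re-run the analysers when the relevant bits actually changed.

    struct chan {
        unsigned flags;        /* current buffer flags       */
        unsigned prev_flags;   /* flags at the previous pass */
    };

    #define ANALYSER_RELEVANT_MASK  0x0000ffffu  /* bits analysers react to */

    /* Return non-zero when the analysers must be re-evaluated. */
    static int must_rerun_analysers(struct chan *c)
    {
        unsigned changed = (c->flags ^ c->prev_flags) & ANALYSER_RELEVANT_MASK;

        c->prev_flags = c->flags;
        return changed != 0;
    }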
The stream_sock's accept() used to close the FD upon error, but this
was also sometimes performed by the frontend's accept() called via the
session's accept(). Those interlaced calls were also responsible for the
spaghetti-looking error unrolling code in session.c and stream_sock.c.
Now the frontend must not close the FD anymore; the session is responsible
for that. It also takes care of either just closing the FD or also removing
it from the FD lists, depending on its state. The socket-level accept() does not
have to care about that anymore.
Some Alert() messages remained in the accept() path, where they
would have no chance of being detected. Remove some of them (the impossible
ones) and replace the relevant ones with send_log() so that the admin
has a chance to catch them.
Enhance pattern converter and fetch argument parsing; fetch and converter
callbacks now use typed arguments.
Add more details to the error messages of the pattern expression parsing
function.
Update existing pattern converters and fetches to the new prototype.
Create the stick table key type "binary".
Manage truncation and padding when the pattern's fetch-converted result
does not match the table key size.
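A hedged sketch of the truncation/padding step (names invented): the
fetch-converted sample is copied into the fixed-size binary key, shorter
samples being zero-padded and longer ones truncated.

    #include <string.h>

    /* Build a fixed-size binary table key from a converted sample of
     * arbitrary length: truncate if too long, zero-pad if too short. */
    static void make_table_key(char *key, size_t key_size,
                               const char *sample, size_t sample_len)
    {
        size_t copy = sample_len < key_size ? sample_len : key_size;

        memcpy(key, sample, copy);
        if (copy < key_size)
            memset(key + copy, 0, key_size - copy);
    }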
If a read shutdown is encountered on the first packet of a connection
right after the data and the last analyser is unplugged at the same
time, then that last data chunk may never be forwarded. In practice,
right now it cannot happen on requests due to the way they're scheduled,
nor can it happen on responses due to the way their analysers work.
But this behaviour has been observed with new response analysers being
developed.
The reason is that when the read shutdown is encountered and an analyser
is present, data cannot be forwarded but the BF_SHUTW_NOW flag is set.
After that, the analyser gets called and unplugs itself, hoping that
process_session() will automatically forward the data. This does not
happen due to BF_SHUTW_NOW.
Simply removing the test on this flag is not enough because then aborted
requests still get forwarded, due to the forwarding code undoing the
abort.
The solution here consists in checking BF_SHUTR_NOW instead of BF_SHUTW_NOW.
BF_SHUTR_NOW is only set on aborts and remains set until ->shutr() is called.
This is enough to catch recent aborts but not prevent forwarding in other
cases. Maybe a new special buffer flag "BF_ABORT" might be desirable in the
future.
This patch does not need to be backported because older versions don't
have the analysers which make the problem appear.
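In condensed form (flag values and helper invented, this is not the real
forwarding code), the change boils down to which flag blocks the final
forwarding step:

    #define BF_SHUTR_NOW  0x0001  /* illustrative value: abort requested */
    #define BF_SHUTW_NOW  0x0002  /* illustrative value: shutw scheduled */

    /* Decide whether remaining data may be forwarded once the last analyser
     * has unplugged itself.  Checking BF_SHUTW_NOW (the old test) also
     * blocked the legitimate "shutr right after the data" case; checking
     * BF_SHUTR_NOW only blocks real aborts. */
    static int may_forward(unsigned buf_flags, int analysers_left)
    {
        return !analysers_left && !(buf_flags & BF_SHUTR_NOW);
    }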
Some options depend on the target architecture or on compilation options.
When such an option is used on a compiled version that doesn't support it,
it's probably better to identify it as an unsupported option due to
compilation options instead of an unknown option.
Edit: better check on the empty capability than on the option bits. -Willy
There were a lot of snprintf() calls everywhere in the UNIX bind code. Now
we proceed as for TCP and indicate the socket path at the end between square
brackets. The code is smaller and more readable.
Add the address and port to the error message of the proxy socket that caused
the error. This can be helpful when several listening addresses are used in a
proxy.
Edit: since we now also support unix sockets (which already report their
path), better move the address reporting to proto_tcp.c by analogy.
-Willy
MAXPATHLEN may be used in other places; it's inconvenient to have it
redefined in a few files. Also, since checking it requires including
sys/param.h, some versions of it cause a macro declaration conflict
with MIN/MAX which are defined in tools.h. The solution consists in
including sys/param.h in both files so that we ensure it's loaded
before the macros are defined and MAXPATHLEN is checked.
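In practice the pattern is a guarded fallback once sys/param.h has been
included early enough; a minimal sketch (the fallback value is
illustrative):

    #include <sys/param.h>  /* may or may not define MAXPATHLEN and MIN/MAX */

    #ifndef MAXPATHLEN
    #define MAXPATHLEN 128  /* fallback when the system headers lack it */
    #endif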