This one acts similarly to its tcp-request counterpart. It immediately
closes the request without emitting any response. It can be useful under
certain DoS conditions, as well as to close an HTTP/2 connection.
This method is reserved for the HTTP/2 connection preface; it must never
be used as a regular method and must be rejected. In normal situations this
doesn't happen, but it may be visible if a TCP frontend has alpn "h2"
enabled and forwards to an HTTP backend which tries to parse the request.
Before this patch the wrong request would be passed to the backend server;
now it properly returns "400 Bad Request".
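Illustrated as a minimal sketch (the function and variable names here are
hypothetical, not HAProxy's), the rejection amounts to a check like this
during HTTP/1 request parsing:

    #include <string.h>

    /* "PRI" is the HTTP/2 connection preface method ("PRI * HTTP/2.0")
     * and must never be accepted as a regular HTTP/1 method. */
    static int check_method(const char *meth, size_t meth_len)
    {
        if (meth_len == 3 && memcmp(meth, "PRI", 3) == 0)
            return 400;   /* reject with 400 Bad Request */
        return 0;         /* method is acceptable so far */
    }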
This patch should probably be backported to stable versions.
When write activity is reported on a channel, it is important to keep this
information for the stream because it takes part in triggering the
analyzers. When some data are written, the flag CF_WRITE_PARTIAL is set. It
participates in the task's timeout updates and in the stream's waking. It is
also part of the CF_MASK_ANALYSER mask used to trigger channel analyzers. In
the past, it was cleared by process_stream. Because of a bug (fixed in
commit 95fad5ba4 ["BUG/MAJOR: stream-int: don't re-arm recv if send
fails"]), it is now cleared before each send and in stream_int_notify. So it
is possible to lose this information before process_stream is called,
preventing the analyzers from being called and possibly leading to a stalled
stream.
Today, this happens with HTTP/2 when you request the stats page or when you
use the cache filter. In fact, it happens whenever the response is sent by
an applet. With HTTP/1, everything seems to work as expected.
To fix the problem, we need to distinguish the write activity reported to
the lower layers from the one reported to the stream. So the flag
CF_WRITE_EVENT has been added to notify the stream of write activity on a
channel. It is set when a send succeeds and reset by process_stream. It is
also part of CF_MASK_ANALYSER. Finally, it is checked in stream_int_notify
to wake up a stream, and in channel_check_timeouts.
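The idea can be summed up by the following sketch (the flag values, the
channel type and the helper names are illustrative, not HAProxy's actual
definitions):

    #include <stdint.h>

    #define CF_WRITE_PARTIAL  0x00000004u  /* may be cleared before each send */
    #define CF_WRITE_EVENT    0x40000000u  /* sticky until process_stream runs */

    struct channel { uint32_t flags; };

    /* called by the lower layers after a successful send */
    static void report_write_activity(struct channel *chn)
    {
        chn->flags |= CF_WRITE_PARTIAL | CF_WRITE_EVENT;
    }

    /* only process_stream consumes CF_WRITE_EVENT, so the information
     * cannot be lost before the analyzers get a chance to run */
    static void process_stream_consume(struct channel *chn)
    {
        chn->flags &= ~CF_WRITE_EVENT;
    }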
This bug is probably present in 1.7 but it seems to have no effect there.
So for now, there is no need to backport it.
This adds a new keyword on the "server" line, "allow-0rtt". If set, we'll
try to send early data to the server, as long as the client itself sent
early data. If the server rejects the early data, we no longer have them
and can't resend them, so the only option we have is to send back a 425,
and we need to be sure the client knows how to interpret it correctly.
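A minimal configuration sketch (the backend name, server address and
verify setting are illustrative):

    backend be_0rtt
        mode http
        server srv1 192.0.2.10:443 ssl verify none allow-0rtt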
All the references to connections in the data path from streams and
stream_interfaces were changed to use conn_streams. Most functions named
"something_conn" were renamed to "something_cs" for this. Sometimes the
connection is still what matters (eg during connection establishment), so
such functions were not always renamed. The change is significant and
minimal at the same time, and has been quite thoroughly tested by now. As of
this patch, all accesses to the connection from upper layers go through the
pass-through mux.
Locks have been added to the pat_ref and pattern_expr structures to protect
all accesses to an instance of one of them. Moreover, a global lock has been
added to protect the LRU cache used for pattern matching.
Patterns are now duplicated after a successful match, to avoid modification
by other threads while the result is used.
Finally, the function reloading a pattern list has been modified to be
thread-safe.
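The duplicate-under-lock idea looks roughly like this (a minimal sketch
with illustrative types; HAProxy's real pattern structures and matching
code differ):

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    /* illustrative stand-in for a pattern expression */
    struct expr {
        pthread_rwlock_t lock;
        char *sample;                      /* data a match would return */
    };

    /* return a private copy so the caller is immune to concurrent
     * updates once the lock is released */
    char *match_and_dup(struct expr *e, const char *key)
    {
        char *res = NULL;

        pthread_rwlock_rdlock(&e->lock);
        if (strcmp(e->sample, key) == 0)   /* stand-in for real matching */
            res = strdup(e->sample);       /* duplicate under the lock */
        pthread_rwlock_unlock(&e->lock);
        return res;
    }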
This is done by passing the stream's relevant proxy (the frontend or the
backend, depending on the context) to lock the error snapshot used to store
the error info.
The stick table API was slightly reworked:
A global spin lock on the stick table was added to perform lookups and
insertions in a thread-safe way. The handling of the refcount on entries
is now done directly by the stick table functions under the protection
of this lock, and was removed from the callers' code.
The "stktable_store" function is no longer exported, and users should
now use "stktable_set_entry" for all insertions. The latter performs
a lookup followed by a store if the entry is not found, so the code
using "stktable_store" was reworked.
The lookup and set_entry functions automatically increase the refcount
of the returned/stored entry, as sketched below.
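A minimal sketch of this lookup-then-insert pattern under a single lock
(the types and the list-based storage are illustrative; the real table
uses trees):

    #include <pthread.h>
    #include <stddef.h>

    /* illustrative types; HAProxy's real stksess/stktable differ */
    struct stksess { int key; int ref_cnt; struct stksess *next; };
    struct stktable { pthread_spinlock_t lock; struct stksess *head; };

    static struct stksess *lookup(struct stktable *t, int key)
    {
        for (struct stksess *s = t->head; s; s = s->next)
            if (s->key == key)
                return s;
        return NULL;
    }

    /* lookup followed by a store if not found, all under one lock; the
     * refcount is taken for the caller before the lock is released */
    struct stksess *stktable_set_entry(struct stktable *t, struct stksess *nts)
    {
        struct stksess *ts;

        pthread_spin_lock(&t->lock);
        ts = lookup(t, nts->key);
        if (!ts) {
            nts->next = t->head;           /* not found: store the entry */
            t->head = nts;
            ts = nts;
        }
        ts->ref_cnt++;                     /* handled by the table, not callers */
        pthread_spin_unlock(&t->lock);
        return ts;
    }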
The function "sticktable_touch" was renamed "sticktable_touch_local"
and is now able to decrease the refcount if last arg is set to true. It
is allowing to release the entry without taking the lock twice.
A new function "sticktable_touch_remote" is now used to insert
entries coming from remote peers at the right place in the update tree.
The code of peer update was re-worked to use this new function.
This function is also able to decrease the refcount if wanted.
The function "stksess_kill" also handle a parameter to decrease
the refcount on the entry.
A read/write lock is added on each entry to protect the data content
updates of the entry.
For now, we have a list of each type per thread, so there is no need to
lock them. This is the easiest solution for now, but not the best one,
because there is no sharing between threads: an idle connection on one
thread cannot be used by a stream on another thread. So it could be a good
idea to rework this patch later.
Now, each proxy contains a lock that must be used when necessary to protect
it. Moreover, all the proxy's counters are now updated using atomic
operations.
First, we use atomic operations to update the jobs/totalconn/actconn
variables, the listeners' nbconn variable and the listeners' counters. Then
we add a lock on listeners to protect access to their information. And
finally, listener queues (global and per proxy) are also protected by a
lock. Here, because accesses to these queues are unusual, we use the same
lock for all queues instead of a global one for the global queue and one
lock per proxy for the others.
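For illustration, the counter updates follow the usual atomic pattern (a
sketch using C11 atomics; HAProxy actually uses its own HA_ATOMIC_* macros,
and the set of variables shown here is illustrative):

    #include <stdatomic.h>

    static _Atomic unsigned int actconn;   /* active connections */
    static _Atomic unsigned int jobs;      /* active jobs */

    static void count_new_session(void)
    {
        /* no proxy or listener lock is needed for plain counters */
        atomic_fetch_add(&actconn, 1);
        atomic_fetch_add(&jobs, 1);
    }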
"check_http_req_capture" and "check_http_res_capture" functions have been added
to check validity of "http-request capture" and "http-response capture"
rules. Code for these functions come from cfgparse.c.
When compiled with OpenSSL >= 1.1.1, before attempting to do the handshake,
try to read any early data. If any early data is present, then we'll create
the session, read the data, and handle the request before doing the
handshake.
For this, we add a new connection flag, CO_FL_EARLY_SSL_HS, which is not
part of the CO_FL_HANDSHAKE set, allowing a session to proceed even
before the SSL handshake is completed.
As early data have security implications, we let the origin server know
the request came in early data by adding the "Early-Data" header, as
specified in this draft from the HTTP working group:
https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-replay
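The header addition amounts to something like the following sketch
(http_header_add() is a hypothetical helper; the header name comes from the
draft):

    /* tag requests received as TLS early data so the origin server can
     * reply with 425 (Too Early) if replaying them would be harmful */
    if (conn->flags & CO_FL_EARLY_SSL_HS)
        http_header_add(req, "Early-Data", "1");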
Calls to build_logline() are audited in order to use dynamic trash buffers
allocated by alloc_trash_chunk() instead of global trash buffers.
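The resulting pattern looks roughly like this (a sketch; the
alloc_trash_chunk()/free_trash_chunk() pair comes from commit b686afd
mentioned below, while the surrounding variables are illustrative):

    /* take a private trash chunk for the duration of the formatting
     * instead of writing into the shared global trash buffer */
    struct chunk *tmp = alloc_trash_chunk();
    if (!tmp)
        return 0;
    tmp->len = build_logline(s, tmp->str, tmp->size, &fmt_list);
    /* ... use tmp->str / tmp->len ... */
    free_trash_chunk(tmp);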
This is similar to commits 07a0fec ("BUG/MEDIUM: http: Prevent
replace-header from overwriting a buffer") and 0d94576 ("BUG/MEDIUM: http:
prevent redirect from overwriting a buffer").
This patch should be backported to 1.7, 1.6 and 1.5. It relies on commit
b686afd ("MINOR: chunks: implement a simple dynamic allocator for trash
buffers") for the trash allocator, which has to be backported as well.
The chunk crlf parser used to depend on the channel and on the HTTP
message, even though it's not really needed. Let's remove this dependency
so that it can be used within the H2 to H1 gateway.
As part of this small API change, it was renamed to h1_skip_chunk_crlf()
to mention that it doesn't depend on http_msg anymore.
The chunk parser used to depend on the channel and on the HTTP message,
but it doesn't really need to, as they're only used to retrieve the buffer
as well as to return the number of bytes parsed and the chunk size.
Here instead we pass the (few) relevant pieces of information as arguments
so that the function may be reused without a channel nor an HTTP message
(ie from the H2 to H1 gateway).
As part of this API change, it was renamed to h1_parse_chunk_size() to
mention that it doesn't depend on http_msg anymore.
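The decoupled parser then only needs a buffer and offsets, along the lines
of this sketch (the exact prototype may differ):

    /* parse a chunk size between <start> and <stop> in <buf>, store it
     * in <res>, and return the number of bytes parsed */
    int h1_parse_chunk_size(const struct buffer *buf, int start, int stop,
                            unsigned int *res);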
Functions http_parse_chunk_size(), http_skip_chunk_crlf() and
http_forward_trailers() were moved to h1.h and h1.c respectively so
that they can be called from outside. The parts that were inline
remained inline as it's critical for performance (+41% perf
difference reported in an earlier test). For now the "http_" prefix
remains in their name since they still depend on the http_msg type.
Certain types and enums are very specific to the HTTP/1 parser, and we'll
need to share them with the HTTP/2 to HTTP/1 translation code. Let's move
them to h1.c/h1.h. Those with very few occurrences or only used locally
were renamed to explicitly mention the relevant HTTP version:
  enum ht_state      -> h1_state
  http_msg_state_str -> h1_msg_state_str
  HTTP_FLG_*         -> H1_FLG_*
  http_char_classes  -> h1_char_classes
Others like HTTP_IS_*, HTTP_MSG_* are left to be done later.
For HTTP/2 we'll need some buffer-only equivalent functions to some of
the ones applying to channels and still squatting the bi_* / bo_*
namespace. Since these names have kept being misleading for quite some
time now and are really getting annoying, it's time to rename them. This
commit will use "ci/co" as the prefix (for "channel in", "channel out")
instead of "bi/bo". The following ones were renamed :
bi_getblk_nc, bi_getline_nc, bi_putblk, bi_putchr,
bo_getblk, bo_getblk_nc, bo_getline, bo_getline_nc, bo_inject,
bi_putchk, bi_putstr, bo_getchr, bo_skip, bi_swpbuf
The url_dec sample converter uses the url_decode function to decode a URL.
This function fails by returning -1 when an invalid character is found. But
the sample converter never checked the return value and used it as the
length of the decoded string. Because the conversion always appeared to
succeed, the invalid sample (with a string length set to -1) could be used
by other sample fetches or sample converters, leading to undefined behavior
such as a segfault.
The fix is pretty simple: the url_dec sample converter just needs to return
an error when url_decode fails.
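In code, the fix boils down to checking the return value, as in this
sketch (the sample field names follow HAProxy's sample API of that era and
may differ in detail):

    int len = url_decode(smp->data.u.str.str);
    if (len < 0)
        return 0;                  /* invalid sequence: fail the converter */
    smp->data.u.str.len = len;     /* only use the length when valid */
    return 1;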
This patch must be backported to 1.7 and 1.6.
A previous fix was made to prevent the connection to the server if a
redirect was performed during the request processing while we want to keep
the client connection alive. This fix introduced a pernicious bug: if a
client closes its connection immediately after sending a request, it is
possible to keep the stream alive infinitely. This happens when the
connection closure is caught when the request is received, before the
request parsing.
To be more specific, this happens because the close event is not
"forwarded", first because of the call to "channel_dont_connect" in the
function "http_apply_redirect_rule", then because, wanting to keep the
client connection alive, we explicitly call "channel_dont_close" in the
function "http_request_forward_body".
So, to fix the bug, instead of blocking the server connection, we force its
shutdown. This forces the stream to re-evaluate all connection states, so
it will detect that the client has closed its connection.
This patch must be backported to 1.7.
The server state and weight were reworked to handle
"pending" values updated by checks/CLI/Lua/agent.
These values are committed, to be propagated to the
LB stack.
In further development related to multi-threading, the
commit will be handled at a sync point.
Pending values are named using the prefix 'next_', and the
current values used by the LB stack are named 'cur_'.
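The naming scheme can be pictured like this (a sketch; only the 'next_'
and 'cur_' prefixes come from this rework, the field names are
illustrative):

    struct server {
        /* values currently used by the LB stack */
        int cur_state;
        int cur_eweight;
        /* pending values updated by checks/CLI/Lua/agent, committed
         * to the LB stack at a sync point */
        int next_state;
        int next_eweight;
    };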
This string is used in sample fetches so it is safe to use a preallocated trash
chunk instead of a buffer dynamically allocated during HAProxy startup.
Till now, connections used to rely exclusively on file descriptors. It
was planned in the past that alternative solutions would be implemented,
leading to member "union t" presenting only sock.fd for now.
With QUIC, the connection will need to continue to exist but will not
rely on a file descriptor but on a connection ID.
So this patch introduces a "connection handle" which is either a file
descriptor or a connection ID, to replace the existing "union t". We've
now removed the intermediate "struct sock" which was never used. There
is no functional change at all, though the struct connection was inflated
by 32 bits on 64-bit platforms due to alignment.
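Conceptually, the handle becomes a union of the two addressing schemes (a
sketch; the actual definition may differ):

    #include <stdint.h>

    /* a connection handle is either a file descriptor or, later with
     * QUIC, a connection ID */
    union conn_handle {
        int fd;           /* file descriptor, for socket transports */
        uint64_t cid;     /* connection ID, for QUIC-style transports */
    };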
The two macros EXPECT_LF_HERE and EAT_AND_JUMP_OR_RETURN were exported
for use outside the HTTP parser. They now take extra arguments to avoid
implicit pointers and jump labels. These will be used to reimplement a
minimalist HTTP/1 parser in the H1->H2 gateway.
In commit "MINOR: http: Switch requests/responses in TUNNEL mode only by
checking txn flags", it is possible to have an infinite loop on HTTP_MSG_CLOSING
state.
The previous patch ("MINOR: http: Rely on analyzers mask to end processing in
forward_body functions") contains a bug for keep-alive transactions.
For these transactions, AN_REQ_FLT_END and AN_RES_FLT_END analyzers must be
removed only when all outgoing data was forwarded.
Instead of relying on the request or response state, we use the
"chn->analysers" mask, as all other analyzers do. So now, http_resync_states
does not return anything anymore.
The debug message in http_resync_states has been improved.
When the body length of an HTTP response is undefined, the HTTP parser is
blocked in the body parsing. Before HAProxy 1.7, in this case, because
AN_RES_HTTP_XFER_BODY is never set, there is no visible effect. When the
server closes its connection to terminate the response, HAProxy catches it
as a normal closure. Since 1.7, we always set this analyzer to enter at
least once into http_response_forward_body. But, in the present case, when
the server connection is closed, http_response_forward_body is called one
time too many. The response is correctly sent to the client, but an error
is caught and logged with "SD--" flags.
To reproduce the bug, you can use the configuration "tests/test-fsm.cfg".
Tests 3 and 21 hit the bug.
The idea of the fix is to switch the response to TUNNEL mode without
switching the request. This is possible because of the previous patches.
First, we need to detect responses with an undefined body length during
state synchronization. Excluding tunnelled transactions, when the response
length is undefined, TX_CON_WANT_CLO is always set on the transaction. So,
when states are synchronized, if TX_CON_WANT_CLO is set, the response is
switched to TUNNEL mode and the request remains unchanged.
Then, in http_msg_forward_body, we add a specific check to switch the
response to DONE mode if the body length is undefined and there is no data
filter.
This patch depends on the following previous commits:
* MINOR: http: Switch requests/responses in TUNNEL mode only by checking txn flags
* MINOR: http: Reorder/rewrite checks in http_resync_states
This patch must be backported to 1.7 with the 2 previous ones.
Today, the only way to have a request or a response in the HTTP_MSG_TUNNEL
state is to have the flag TX_CON_WANT_TUN set on the transaction. So this is
a symmetric state: the request and the response are switched into it at the
same time. It can therefore be entered by checking the transaction flags
alone, instead of relying on the other side's state. This is the purpose of
this patch.
This way, if for any reason we need to switch only one side to TUNNEL mode,
it will be possible. And to prepare for asymmetric cases, we check the
channel flags in the DONE _AND_ TUNNEL states.
WARNING: This patch will be used to fix a bug. The fix will be committed in
one of the very next commits. So if the fix is backported, this one must be
backported too.
The previous patch removed the forced symmetry of the TUNNEL mode during
state synchronization. Here, we take care to remove the body analyzer only
on the channel in TUNNEL mode. In fact, today, this change has no effect
because both sides are switched at the same time. But this way, with some
changes, it will be possible to keep the body analyzer on one side (to
finish the state synchronization) while the other one is in TUNNEL mode.
WARNING: This patch will be used to fix a bug. The fix will be committed in
one of the very next commits. So if the fix is backported, this one must be
backported too.
Only 100 was considered informational instead of all 1xx. This can be
a problem when facing a 102 ("Processing") or the upcoming 103 for
early hints. Let's properly handle all 1xx now, leaving a special case
for 101 which is used for the protocol upgrade.
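In code, the check becomes something along these lines (a sketch, not the
actual parser code):

    /* return non-zero if <status> is an interim (informational) response
     * to be skipped while waiting for the final one; 101 is excluded
     * since it performs the protocol upgrade */
    static inline int is_interim_response(int status)
    {
        return status >= 100 && status < 200 && status != 101;
    }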
This fix should be backported to 1.7, 1.6 and 1.5. In 1.4 the code is
different but the backport should be made there as well.
To reset an HTTP transaction, we need to be sure all data, for both the
request and the response, have been sent. There are tests on the request
and response buffers for that in the http_resync_states function. But the
return code was wrong: we must return 0 to wait.
This patch must be backported to 1.7.
The pool used to log the URI was created with a size of 0 because it was
created before the configuration directive 'tune.http.logurilen' was
parsed.
The fix consists in postponing the pool_create, as is done for cookie
captures.
Regression introduced with 'MINOR: log: Add logurilen tunable'.
A filter can choose to loop when an HTTP message is in the state
HTTP_MSG_ENDING. But the transaction is terminated with an error if the
input is closed (CF_SHUTR set on the channel). At this step, we have
received all data, so we can wait.
So now, we also check the parser state before leaving. This fix only
affects configurations that use a filter able to wait in the
http_forward_data or http_end callbacks, once all data have been parsed.
Jean Lubatti reported a crash on haproxy using a config involving cookies
and tarpit rules. It just happens that since 1.7-dev3, with commit 83a2c3d
("BUG/MINOR: allow to log cookie for tarpit and denied request"), the
function manage_client_side_cookies() was called after erasing the request
buffer in case of a tarpit action. The problem is that this function must
absolutely not be called with an empty buffer since it moves parts of it. A
typical reproducer consists in sending:
"GET / HTTP/1.1\r\nCookie: S=1\r\n\r\n"
on such a config:
listen crash
bind :8001
mode http
reqitarpit .
cookie S insert indirect
server s1 127.0.0.1:8000 cookie 1
The fix simply consists in moving the call to the function before the call
to buffer_erase().
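In other words, the tarpit path now does (a sketch; the surrounding code
and exact arguments differ):

    /* the cookie parser moves parts of the buffer and thus must run
     * before the buffer is erased, not after */
    manage_client_side_cookies(s, req);
    buffer_erase(req->buf);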
Many thanks to Jean for testing instrumented code and providing a usable
core.
This fix must be backported to all stable versions since the fix introducing
this bug was backported as well.
The default length of the request URI in log messages is 1024. In some use
cases, you need to keep the long trail of GET parameters. The only way to
increase this length was to recompile with DEFINE=-DREQURI_LEN=2048.
This commit introduces a tune.http.logurilen configuration directive,
allowing this length to be tuned at runtime.
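For instance (the value is illustrative):

    global
        tune.http.logurilen 4096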
The sample fetch returns all headers, including the trailing empty line,
which is used to determine whether the block of headers is truncated or
not.