385 Commits

Author SHA1 Message Date
Aurelien DARRAGON
0846638f7f MEDIUM: stream: interrupt costly rulesets after too many evaluations
It is not rare to see configurations with a large number of "tcp-request
content" or "http-request" rules, for instance. A large number of rules
combined with cpu-demanding actions (e.g.: actions that work on content)
may create thread contention, as all the rules from a given ruleset are
evaluated under the same polling loop if the evaluation is not interrupted.

Thus, in this patch we add extra logic around "tcp-request content",
"tcp-response content", "http-request" and "http-response" rulesets, so
that when a certain number of rules have been evaluated under a single
polling loop, we force the evaluating function to yield. As such, the rule
which was about to be evaluated is saved, and the function resumes
evaluating rules from the saved pointer when it returns (in the next
polling loop).

We use task_wakeup(task, TASK_WOKEN_MSG) to explicitly wake the task so
that no time is wasted and the processing is resumed ASAP. TASK_WOKEN_MSG
is mandatory here because process_stream() expects TASK_WOKEN_MSG for
explicit analyzers re-evaluation.

A rules_bcount stream attribute was added to count how many rules were
evaluated since the last interruption (yield). Also, the SF_RULE_FYIELD
flag was added to indicate that s->current_rule was assigned due to a
forced yield and not a regular yield.

By default, haproxy will enforce a yield every 50 rules; this behavior
can be configured using the "tune.max-rules-at-once" global keyword.
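
For instance (illustrative snippet; the value 100 is an arbitrary
example, 50 being the default):

  global
    tune.max-rules-at-once 100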

There is a limitation though: for now, if the ACT_OPT_FINAL flag is set
on act_opts, we consider it is not safe to yield (as is already the
case for automatic yields). In this case, instead of yielding and taking
the risk of not being called back, we skip the yield and hope it will
not create contention. This is something we should ideally try to
improve in order to yield in all conditions.
2025-02-03 17:09:48 +01:00
Christopher Faulet
1c6512f8fc DEBUG: http-ana: Remove debug counters from HTTP analyzers
Several debug counters were added in HTTP analyzers to help debugging a
strange issue about early aborts. But these counters are a bit overkill
now, especially because it is now possible to rely on the termination
event log. So just remove them.

Note that these counters are still there in 3.1.
2025-02-03 08:28:45 +01:00
Christopher Faulet
274c9d21a6 BUG/MINOR: tevt/http-ana: Remove badly placed event reports
When specific events for the stream location were added, some reports about
message interception were not removed. These reports are now removed.

No need to backport.
2025-02-03 08:20:41 +01:00
Christopher Faulet
2dc02f75b1 MEDIUM: tevt/stconn/stream: Add dedicated termination events for stream location
It is the last patch to introduce dedicated termination events for each
location. In this one, events for the stream location are introduced. The
old enum is also removed because it is now unused.

Here, more accurate events are added. The "intercepted" event was split.
2025-01-31 10:41:50 +01:00
Christopher Faulet
00a07c8b54 MINOR: tevt/stream/stconn: Report termination events for stream and sc
In this patch, events for the stream location are reported. These events are
first reported on the corresponding stream-connector: front events on scf
and back events on scb. Then all events are merged in the stream. But
only 4 events are saved on the stream.

Several internal events are for now grouped with the type
"tevt_type_intercepted". More events will be added to have a better
resolution. But at least the places to report these events are identified.

For now, when an event is reported on a SC, it is also reported on the stream
and vice versa.
2025-01-31 10:41:50 +01:00
Aurelien DARRAGON
e768a531b7 CLEANUP: tree-wide: define and use acl_match_cond() helper
acl_match_cond() combines acl_exec_cond() + acl_pass() and a check on the
condition->pol (to check if the cond is inverted) in order to return
either 0 if the cond doesn't match or 1 if it matches (or if the cond is
NULL).

Thanks to this we can actually simplify some redundant constructs that
iterate over rules and evaluate if the condition matches or not.

Conditions for "tcp-request content" and "tcp-response content" rules
couldn't be simplified because they perform an extra check for missing
data, and thus still need to leverage acl_exec_cond().

It's best to display the patch using "-w", like "git show xxxx -w",
because some blocks had to be re-indented after the cleanup, which
makes the patch hard to review by default.
2025-01-27 11:11:43 +01:00
Amaury Denoyelle
071ae8ce3d BUG/MEDIUM: stats/server: use watcher to track server during stats dump
If a server A is deleted while a stats dump is currently running on it,
deletion is delayed thanks to reference counting. Server A is nonetheless
removed from the proxy list. However, this list is a singly linked list. If the
next server B is deleted and freed immediately, server A would still
point to it. This problem has been solved by the prev_deleted list in
servers.

This model seems correct, but it is difficult to completely ensure its
validity. In particular, it implies that when the stats dump is resumed,
server A's elements will be accessed despite the server being in a
half-deleted state.

Thus, it has been decided to completely ditch the refcount mechanism for
stats dump. Instead, use the watcher element to register every stats
dump currently tracking a server instance. Each time a server is deleted
on the CLI, each stats dump element which may point to it is updated
to access the next server instance, or NULL if this is the last server.
This ensures that a server which was deleted via CLI but not completely
freed is never accessed on stats dump resumption.

Currently, no race condition related to dynamic servers and stats dump
is known. However, as described above, the previous model is deemed too
fragile, as such this patch is labelled as bug-fix. It should be
backported up to 2.6, after a reasonable period of observation. It
relies on the following patch:
  MINOR: list: define a watcher type
2024-12-10 16:19:33 +01:00
Christopher Faulet
62f37801c8 BUG/MEDIUM: http-ana: Reset request flag about data sent to perform a L7 retry
It is possible to lose the request after several L7 retries, leading to
crashes, because the request channel flag stating that some data were sent
is not properly reset.

When a L7 retry is performed, some flags on different entities must be reset
to be sure a new connection will be properly retried, just like the
first one, mainly because there was no connection establishment failure. One
of them, on the request channel, is not reset: the flag stating that some
data were already sent. It is annoying because this flag is used during the
connection establishment to know if an error is triggered at the connection
level or at the data level. In the latter case, the error must be handled by
the HTTP response analyzer, to eventually perform another L7 retry.

Because the CF_WROTE_DATA flag is not removed when a L7 retry is performed, a
subsequent connection establishment error may be handled as a L7 error while
in fact the request was never sent. It also means the request was never
saved in the buffer used to perform L7 retries. Thus, on the next L7
retries, the request is just lost. This inevitably leads to a bunch of
undefined behaviors. One of them is a crash, when the request is used to
perform the load-balancing.

This patch should fix issue #2793. It must be backported to all stable
versions.
2024-11-29 14:46:38 +01:00
Christopher Faulet
dc15581c02 BUG/MEDIUM: http-ana: Don't release too early the L7 buffer
In some cases, the buffer used to store the request to be able to perform a
L7 retry is released too early, leading to a crash because a retry
is performed with an empty request.

First, there is a test on invalid 101 responses that may be caught by the
"junk-response" retry policy. Then, it is possible to get an error
(empty-response, bad status code...) after an interim response. In both
cases, the L7 buffer is already released while it should not be.

To fix the issue, the L7 buffer is now released at the end of the
AN_RES_WAIT_HTTP analyser, but only when a response was successfully
received and processed. In all error cases, the stream is quickly released,
with the L7 buffer. So there is no leak and it is safer this way.

This patch may fix the issue #2793. It must be backported as far as 2.4.
2024-11-25 22:18:19 +01:00
Christopher Faulet
2a5da31cce BUG/MINOR: http-ana: Adjust the server status before the L7 retries
The server status must be adjusted, if necessary, at each retry. It is
properly performed when the "observe layer4" directive is set. But for
layer 7, only the last attempt was considered.

When the L7 retries were implemented, all retries were added before the
server status adjustment. So only the last attempt was considered. To fix
the issue, we must adjust the server status first, and then try to perform a
L7 retry.
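
For reference, the "observe" directive involved here is set on server
lines; an illustrative example (not taken from the commit):

  server srv1 192.0.2.10:80 check observe layer7 error-limit 10 on-error mark-down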

This patch should fix the issue #2679. It must be backported to all stable
versions.
2024-11-20 09:22:06 +01:00
Christopher Faulet
5863d33fce BUG/MINOR: http_ana: Report -1 for %Tr for invalid response only
The server response time is erroneously reported as -1 when it is
intercepted by HAProxy.

As stated in the documentation, the server response time is reported as -1
when the last response header was never seen. It happens when a server
timeout is triggered before the server managed to process the request. It
also happens if the response is invalid. This may be reported by the mux
during the response parsing, but also by the HTTP analyzers. However, in
this last case, the response time must only be reported as -1 on 502.

This patch must be backported to all stable versions. It should fix the
issue #2384.
2024-11-19 15:29:40 +01:00
Christopher Faulet
1be7140ade MINOR: http-ana: Add support for "set-cookie-fmt" option to redirect rules
It is now possible to use a log-format string to define the "Set-Cookie"
header value of a response generated by a redirect rule. There is no special
check on the resulting format, and none is possible during configuration
parsing. It is probably not a big deal because the already existing
"set-cookie" and "clear-cookie" options don't perform any check.

Here is an example:

  http-request redirect location https://someurl.com/ set-cookie haproxy="%[var(txn.var)]"

This patch should fix the issue #1784.
2024-11-19 15:20:02 +01:00
Christopher Faulet
b2877db47c MINOR: http-ana: Add option to keep query-string on a location-based redirect
On a prefix-based redirect, there is an option to drop the query-string of
the location. Here it is the opposite: an option is added to preserve the
query-string of the original URI for a location-based redirect.

By setting the "keep-query" option, for a location-based redirect only, the
query-string of the original URI is appended to the location. If there is no
query-string, nothing is added (no empty '?'). If there is already a
non-empty query-string on the location, the original one is appended with
a '&' separator.
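
Here is an example (illustrative snippet, not taken from the commit):

  http-request redirect location https://example.com/search keep-query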

This patch should fix issue #2728.
2024-11-19 15:20:02 +01:00
Christopher Faulet
a930e99f46 BUG/MINOR: Don't report early srv aborts on request forwarding in DONE state
L7-retries may be ignored if server aborts are detected during the request
forwarding, when the request is already in DONE state.

When a request was fully processed (so in HTTP_MSG_DONE state) and is
waiting to be forwarded to the server, there is a test to detect server
aborts, to be able to report the error. However, this test must be skipped
if the response was not received yet, to let the response analyzers handle
the abort. It is important to properly handle the retries. This test must
only be performed if the response analysis was finished. It means the
response must be at least in HTTP_MSG_BODY state.

This patch should be backported as far as 2.8.
2024-11-15 11:00:05 +01:00
Christopher Faulet
64554a55f4 MINOR: stream: Add http-buffer-request option in the waiting entities
When the http-buffer-request option is set on a proxy, the processing will
be paused to wait for the full request payload or a full buffer. So it is an
entity that blocks the processing, just like a rule or a filter that yields.
So now, it is reported as a waiting entity if an error or a timeout occurs.

To do so, a stream entity type is added for this option. There is no
pointer. And the "waiting_entity" sample fetch returns the option name.
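
For reference, the option is enabled like this (illustrative snippet):

  backend be
    option http-buffer-request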
2024-10-31 20:24:50 +01:00
Christopher Faulet
c64712b085 MINOR: stream: Use an enum to identify last and waiting entities for streams
Instead of using 1 for last/waiting rule and 2 for last/waiting filter, an
enum is used. It is less ambiguous this way.
2024-10-31 20:24:37 +01:00
Christopher Faulet
537f20eb3e MINOR: stream: Save the entity waiting to continue its processing
When a rule or a filter yields because it waits for something to be able to
continue its processing, this entity is saved in the stream. If an error or
a timeout occurs, info on this entity may be retrieved via the
"waiting_entity" sample fetch, for instance to dump it in the logs. This
info may be useful to find the root cause of some bugs because it is a way
to know the processing was temporarily stopped. This may explain timeouts,
for instance.
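
A possible use in a custom log-format (hypothetical example):

  log-format "%ci [%tr] %ST waiting=%[waiting_entity]"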

The sample fetch is not documented yet.
2024-10-31 16:40:09 +01:00
Christopher Faulet
c9fa78e747 MINOR: stream: Replace last_rule_file/line fields by a more generic field
The last evaluated rule is now saved in a generic structure, named
last_entity, with a type to identify it. The idea is to be able to store
other kinds of entities that may interrupt a specific processing.

The type of the last evaluated rule is set to 1. It will be replaced later by
an enum to be more explicit. In addition, the pointer to the rule itself is
saved instead of its location.

The sample fetch "last_entity" was added to retrieve the information about
it. In this case, it is the rule location: the config file containing the
rule followed by the line where the rule is defined, separated by a
colon. This sample fetch is not documented yet.
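
A possible use in a custom log-format (hypothetical example):

  log-format "%ci [%tr] %ST last=%[last_entity]"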
2024-10-31 16:36:39 +01:00
Christopher Faulet
0b7605491e MINOR: stream: Save last evaluated rule on invalid yield
When an action yields while it is not allowed, an internal error is
reported. This interrupts the processing. So info about the last evaluated
rule must be filled.

This patch may be backported if needed. If so, the commit ("MINOR: stream:
Save last evaluated rule on invalid yield") must be backported first.
2024-10-31 09:30:52 +01:00
Christopher Faulet
65ea29dcf8 BUG/MINOR: http-ana: Report internal error if an action yields on a final eval
This was already performed for tcp actions at content level, but not for
HTTP actions. It is always a bug, so it must be reported accordingly.

This patch may be backported to all stable versions.
2024-10-31 09:30:52 +01:00
Christopher Faulet
5970c6abec BUG/MINOR: http-ana: Fix wrong client abort reports during responses forwarding
When the response forwarding is aborted, we must not report a client abort
if an EOS was seen on the client side. Only an abort performed by the stream
must be considered.

This bug was introduced when the SHUTR flag was split into 2 flags.

This patch must be backported as far as 2.8.
2024-10-24 12:07:50 +02:00
Christopher Faulet
c8aecc393b DEBUG: stream: Add debug counters to track some client/server aborts
Not all aborts are tracked for now, only those that are a bit ambiguous. Mainly,
aborts during the data forwarding are concerned. Those triggered during the
request or the response analysis are easier to analyze with the stream
termination state.
2024-10-22 16:46:37 +02:00
Christopher Faulet
6790067e79 BUG/MINOR: http-ana: Don't report a server abort if response payload is invalid
If a parsing error is reported by the mux on the response payload, a proxy
error (PRXCOND) must be reported instead of a server abort (SRVCL). Because
of this bug, invalid responses may be reported as "SD--" or "SL--" in logs
instead of "PD--" or "PL--".

This patch must be backported to all stable versions.
2024-10-17 13:53:40 +02:00
Christopher Faulet
cea1379cf1 BUG/MINOR: http-ana: Disable fast-fwd for unfinished req waiting for upgrade
If a request is waiting for a protocol upgrade but it is not finished, the
data fast-forwarding is disabled. Otherwise, the request analyzers will miss
the end of the message.

This case is possible since the commit 01fb1a54 ("BUG/MEDIUM: mux-h1/mux-h2:
Reject upgrades with payload on H2 side only"). Indeed, before, a protocol
upgrade was not allowed for requests with a payload. But it is now possible
and this comes with a side-effect. It is not really satisfying but for now
there is no other way to sync the muxes and the applicative stream. It seems
to be a reasonable fix for now, waiting for a deeper refactoring.

This patch must be backported with the commit above.
2024-10-02 10:31:40 +02:00
Christopher Faulet
91e785edc9 MINOR: stream: Rely on a per-stream max connection retries value
Instead of directly relying on the backend parameter to limit the number of
connection retries, we now use a per-stream value. This value is by default
inherited from the backend value when it is set. So for now, there is no
change except the stream value is used instead of the backend value. But
thanks to this change, it will be possible to dynamically change this value.
2024-09-30 16:55:53 +02:00
Aurelien DARRAGON
e3eb6a9035 MEDIUM: log: consider log-steps proxy setting for existing log origins
During tcp/http transaction processing, haproxy may produce logs at
different steps during the processing (accept, connect, request,
response, close). But the behavior is hardly configurable because
haproxy will only emit a single log per transaction, and by default
it will try to produce the log once all log aliases or fetches used
in the logformat could be satisfied, which means the log is often
emitted during connection teardown, unless "option logasap" is used.

We were often asked to have a way to emit multiple logs for a single
transaction, for instance emitting a log during accept, then request,
response and close; see GH #401 for more context.

Thanks to "log-steps" keyword introduced by commit "MINOR: log:
introduce "log-steps" proxy keyword", it is now possible to explictly
configure when logs should be generated by haproxy when processing a
transaction. This commit adds the required checks so that log-steps
proxy option is properly considered for existing logs generated by
haproxy. If "log-steps" is not specified on the proxy, the old behavior
is preserved.

Note: a slight cpu overhead should only be visible when the "log-steps"
keyword is used, due to the implementation relying on an eb32 lookup
instead of a basic bitfield check as described in "MINOR: proxy: add
log_steps struct member". However, the default behavior shouldn't be
affected.

When combining log-steps with log-profiles, the user has the ability to
explicitly control how and when haproxy should generate logs during
request handling.
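
A possible configuration (illustrative snippet, assuming the
comma-separated list syntax; the exact step names are those documented
with the "log-steps" keyword):

  frontend fe
    bind :8080
    mode http
    log-steps accept,request,response,close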
2024-09-26 16:53:07 +02:00
Aurelien DARRAGON
3c15ee05e9 MINOR: log: introduce log_orig flags
Rename 'enum log_orig' to 'enum log_orig_id', since this enum specifically
contains the log origin ids.

Add 'struct log_orig' which wraps 'enum log_orig_id' with optional flags
(no flags defined for now).

Add a log_orig() helper func that takes id and flags as parameters and
returns a log_orig struct initialized with the input arguments.

Update functions taking log origin as parameter so they explicitly take
log orig id or log orig wrapper as argument depending on the level of
context expected by the function.
2024-09-26 16:53:07 +02:00
Willy Tarreau
6e92988e20 MINOR: vars: remove the emptiness tests in callers before pruning
All callers of vars_prune_* currently check the list for emptiness.
Let's leave that to vars_prune() itself, it will ease some changes in
the code. Thanks to the previous inlining of the vars_prune() function,
there's no performance loss, and even a very tiny 0.1% gain.
2024-09-15 23:44:16 +02:00
Christopher Faulet
0ba6202796 BUG/MEDIUM: http-ana: Report error on write error waiting for the response
When we are waiting for the server response, if an error is pending on the
frontend side (a write error on the client), it is handled as an abort and
all regular response analyzers are removed, except the one responsible for
releasing the filters, if any. However, while it is handled as an abort, the
error is not reported, as usual, via the http_reply_and_close() function. It
is an issue because, in that case, the channel buffers are not reset.

Because of this bug, it is possible to block a stream infinitely. The
request side is waiting for the response side and the response side is
blocked because filters must be released and this cannot be done because
data remain blocked in the channel buffers.

So, in that case, calling http_reply_and_close() with no message is enough
to unblock the stream.

This patch must be backported as far as 2.8.
2024-08-02 08:42:28 +02:00
Aurelien DARRAGON
2a91bd52ad MINOR: log: provide sending log context to process_send_log() when available
This is another piece of prerequisite work in preparation for log-profiles: in this
patch we make process_send_log() aware of the log origin, primarily aiming
for sess and txn logging steps such as error, accept, connect, close, as
well as relevant sess and stream pointers.
2024-06-13 15:43:09 +02:00
Christopher Faulet
746e6f8597 BUG/MINOR: http-ana: Don't crush stream termination condition on internal error
When an internal error is reported from an HTTP analyzer, we must take care to
not set the stream termination condition if it was already set. For
instance, it happens when a message rewrite fails. In this case
SF_ERR_PXCOND is set by the rule. The HTTP analyzer must not crush it with
SF_ERR_INTERNAL.

The regression was introduced with the commit 0fd25514d6 ("MEDIUM: http-ana:
Set termination state before returning haproxy response").

The bug was discovered while working on issue #2568. It must be backported to
2.9.
2024-05-22 09:04:38 +02:00
Amaury Denoyelle
341bf913d4 MINOR: stats: use STAT_F_* prefix for flags
Some flags are defined during statistics generation and output. They use
the prefix STAT_* which is also used for other purposes. Rename them
with the new prefix STAT_F_* to differentiate them from the other
usages.
2024-04-22 16:25:18 +02:00
Amaury Denoyelle
b8c1fdf24e REORG: stats: extract HTML related functions
Extract functions related to the HTML stats webpage from stats.c into a new
module named stats-html. This allows reducing stats.c to roughly half
of its original size.
2024-04-18 17:04:08 +02:00
Christopher Faulet
3fc38593d5 BUG/MEDIUM: http-ana: Deliver 502 on keep-alive for fresh server connection
In HTTP keep-alive, if we face a connection error to the server while
sending the request, the error should not be reported, and the client-side
connection should simply be closed, so that client knows it can retry.

If the error happens during the connection stage, there are two cases. We
have a connection timeout or an allocation error. In this case, the 503
response must be skipped if it is not the first request on the client-side
connection. Or we have a connection error. In this case, the 503 response
must be skipped if it is a reused server connection. Otherwise, during the
connection stage, the 503-Service-unavailable response is delivered to the
client. This part works properly.

If the error happens after this stage, the delivery of the 502-Bad-gateway
response should only be based on the server-side connection status. For a
reused server connection, the client-side connection must be closed with no
response. However, for a fresh server-side connection, a 502-Bad-gateway
response must be delivered to the client. Unfortunately, this part is
buggy. Only the client-side connection state is considered and the response
is skipped if it is not the first request for the same client connection.

The bug is not so visible in HTTP/1.1, but in H2 and H3 it is pretty annoying
because, for a connection, requests are multiplexed, in parallel. It means
there is no first request. So, because of this bug, for H2 and H3,
502-Bad-gateway responses caused by a connection error before receiving
the response are always skipped.

To fix the issue, in http_wait_for_response() analyser, we must only rely on
SF_SRV_REUSED stream flag to skip the 502 response or not. This flag is set
if the server connection was reused.

The bug has been there for a while. The SF_SRV_REUSED flag was added in
version 1.5 especially to fix this kind of bug. But only the 503 case was
fixed.

This patch should fix the issue #2285. It must be backported to every stable
versions.
2024-04-10 15:42:22 +02:00
Aurelien DARRAGON
6810c41f8e MEDIUM: tree-wide: add logformat expressions wrapper
log format expressions are broadly used within the code: once they are
parsed from an input string, they are converted to a linked list of
logformat nodes.

We're starting to face some limitations because we're simply storing the
converted expression as a generic logformat_node list.

The first issue we're facing is that storing logformat expressions that
way doesn't allow us to add metadata alongside the list, which is part
of the prerequisites for implementing log-profiles.

Another issue with storing logformat expressions as generic lists of
logformat_node elements is that it's starting to become really hard to
tell when we rely on logformat expressions or not in the code given that
there isn't always a comment near the list declaration or manipulation
to indicate that it's relying on logformat expressions under the hood,
so this adds some complexity for code maintenance.

This patch looks quite impressive due to changes in a lot of header and
source files (since logformat expressions are broadly used), but it does
a simple thing: it defines the lf_expr structure which itself holds a
generic list of logformat nodes, and then declares some helpers to
manipulate lf_expr elements and fixes the code so that we now exclusively
manipulate logformat_node lists as lf_expr elements outside of log.c.

For now, lf_expr struct only contains the list of logformat nodes (no
additional metadata), but now that we have dedicated type and helpers,
doing so in the future won't be problematic at all and won't require
extensive code changes.
2024-04-04 19:10:01 +02:00
Amaury Denoyelle
fd3ce173aa BUG/MEDIUM: http_ana: ignore NTLM for reuse aggressive/always and no H1
Backend connections can be marked as private to prevent their sharing by
multiple clients. Now, this has become an exception as only two reasons
for data traffic can trigger this (checks are ignored here):
* http-reuse never
* HTTP response with NTLM header

The first case is easy to manage as the connection is flagged as private
since its inception. However, the second case is dynamic as the
connection can be flagged anytime during its lifetime. When using a
backend protocol such as HTTP/2 with reuse mode aggressive or always, we
face a design issue as the connection would be marked as private,
despite potentially being shared by several clients at the same time.

This is conceptually invalid, but worse, it can trigger crashes on the MUX
stream detach callback depending on the order of release of the streams,
by calling session_check_idle_conn() with a NULL session. It could also
be possible to have several NTLM responses on a single connection for
different sessions. In this case, the connection owner is still being
updated without attaching the connection to its correct session, which
ultimately would cause a crash on session_check_idle_conn with an
invalid session.

Here are two backtrace examples from GDB for such cases:

Thread 1 (Thread 0x7ff73e9fc700 (LWP 648859)):
 #0  session_check_idle_conn (conn=0x7ff72f597800, sess=0x0) at include/haproxy/session.h:209
 #1  h2_detach (sd=<optimized out>) at src/mux_h2.c:4520
 #2  0x000056151742be24 in sc_detach_endp (scp=scp@entry=0x7ff73e9f0f18) at src/stconn.c:376
 #3  0x000056151742c208 in sc_destroy (sc=<optimized out>) at src/stconn.c:444
 #4  0x0000561517370871 in stream_free (s=s@entry=0x7ff72a2dbd80) at src/stream.c:728
 #5  0x000056151737541f in process_stream (t=t@entry=0x7ff72d5e2620, context=0x7ff72a2dbd80, state=<optimized out>) at src/stream.c:2645
 #6  0x0000561517456cbb in run_tasks_from_lists (budgets=budgets@entry=0x7ff73e9f10d0) at src/task.c:632
 #7  0x00005615174576b9 in process_runnable_tasks () at src/task.c:876
 #8  0x000056151742275a in run_poll_loop () at src/haproxy.c:2996
 #9  0x0000561517422db1 in run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:3195
 #10 0x00007ff789e081ca in start_thread () from /lib64/libpthread.so.0
 #11 0x00007ff789a39e73 in clone () from /lib64/libc.so.6

(gdb)
Thread 1 (Thread 0x7ff52e7fc700 (LWP 681458)):
 #0  0x0000556ebd6e7e69 in session_check_idle_conn (conn=0x7ff5787ff100, sess=0x7ff51d2539a0) at include/haproxy/session.h:209
 #1  h2_detach (sd=<optimized out>) at src/mux_h2.c:4520
 #2  0x0000556ebd7f3e24 in sc_detach_endp (scp=scp@entry=0x7ff52e7f0f18) at src/stconn.c:376
 #3  0x0000556ebd7f4208 in sc_destroy (sc=<optimized out>) at src/stconn.c:444
 #4  0x0000556ebd738871 in stream_free (s=s@entry=0x7ff520e28200) at src/stream.c:728
 #5  0x0000556ebd73d41f in process_stream (t=t@entry=0x7ff565783700, context=0x7ff520e28200, state=<optimized out>) at src/stream.c:2645
 #6  0x0000556ebd81ecbb in run_tasks_from_lists (budgets=budgets@entry=0x7ff52e7f10d0) at src/task.c:632
 #7  0x0000556ebd81f6b9 in process_runnable_tasks () at src/task.c:876
 #8  0x0000556ebd7ea75a in run_poll_loop () at src/haproxy.c:2996
 #9  0x0000556ebd7eadb1 in run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:3195
 #10 0x00007ff5752081ca in start_thread () from /lib64/libpthread.so.0
 #11 0x00007ff574e39e73 in clone () from /lib64/libc.so.6
(gdb)

To solve this issue, simply ignore NTLM responses when using a
multiplexer with streams support and the connection is not already
attached to the session. The connection is not marked as private and
will continue to be shared freely across clients. This is considered
conceptually valid as NTLM usage (rfc 4559) with HTTP is broken and was
designed only with HTTP/1.1 in mind. A side-effect of the change is that
SESS_FL_PREFER_LAST is also not set anymore on NTLM detection, which
allows following requests to be load-balanced across several server
instances.

The original behavior is kept for HTTP/1 or if the connection is already
attached to the session. This last case happens when using HTTP/2 with
default http-reuse safe mode since the following patch :

  0d21deaded1a4bbd1e1700ab8386af1f1441ea73
  MEDIUM: backend: add reused conn to sess if mux marked as HOL blocking

This should be backported to all stable releases. Down to 2.4, it
can be taken as-is. For older versions, the above patch is not present. In
this case the condition should be restricted to HTTP/1 usage only:

  if (srv_conn && strcmp(srv_conn->mux->name, "H1") == 0) {
2024-03-20 14:26:57 +01:00
Christopher Faulet
60fcc27577 MEDIUM: htx/http-ana: No longer close connection on early HAProxy response
When a response was returned by HAProxy, a dedicated HTX flag was
set. Thanks to this flag, it was possible to add a "connection: close"
header to the response if the request was not fully received and to close
the connection. In the same way, when a redirect rule was applied,
keep-alive was forcefully disabled for unfinished requests.

All these mechanisms are now useless because the H1 mux is able to drain the
response. So the HTX_FL_PROXY_RESP flag is removed and no special processing
is performed on the HAProxy response when the request is unfinished.
2024-02-28 16:02:33 +01:00
Willy Tarreau
9d827e1049 MEDIUM: http_act: check status codes against the bit fields for err/fail
This drops the hard-coded 4xx and 5xx status codes for err_cnt and
fail_cnt, in favor of the new bit fields that will soon be configurable.
There should be no difference at all since the bit fields are initialized
to the exact same sets (400-499 for err, 500-599 minus 501 and 505 for
fail).
2024-01-11 15:10:08 +01:00
Aurelien DARRAGON
64b7d8e173 BUG/MEDIUM: stats: unhandled switching rules with TCP frontend
Consider the following configuration:

  backend back
    mode http

  frontend front
    mode tcp
    bind localhost:8080
    stats enable
    stats uri /stats
    tcp-request content switch-mode http if FALSE
    use_backend back

Firing a request to /stats on the haproxy process started with the above
configuration will cause a segfault in http_handle_stats().

The cause for the crash is that in this case, the upgrade doesn't simply
switch to HTTP mode, but also changes the stream backend (changing from
the frontend itself to the targeted HTTP backend).

However, there is an inconsistency in the stats logic between the check
for the stats URI and the actual handling of the stats page.

Indeed, http_stats_check_uri() checks uri parameters from the proxy
undergoing the http analyzers, whereas http_handle_stats() uses s->be
instead.

During stream analysis, from the frontend perspective, s->be defaults to
the frontend. But if the frontend is in TCP mode and the stream is
upgraded to HTTP via backend switching rules, then s->be will be assigned
to the actual HTTP-capable backend in stream_set_backend().

What this means is that when the http analyzer first checks if the current
URI matches the one from the "stats uri" directive, it will check against
the "stats uri" directive from the frontend, but later, since the stats
handler reads the uri from s->be, it will actually use the value from the
backend, and the previous safety checks are thus garbage, resulting in
unexpected behavior. (In our test case, since the backend didn't define
"stats uri", it is set to NULL, and http_handle_stats() dereferences it.)

To fix this, we should ensure that prechecks and actual stats processing
always rely on the same proxy source for stats config directives.

This is what is done in this patch, thanks to the previous commit, since
we can make sure that the stat applet will use ->http_px as its parent
proxy. So here we simply propagate the current proxy being analyzed
through all the stats processing functions.

This patch depends on:
 - MINOR: stats: store the parent proxy in stats ctx (http)

It should be backported up to 2.4.
For 2.4: the fix is less trivial since stats ctx was directly stored
within the applet struct at that time, so this alternative patch must be
used instead (without "MINOR: stats: store the parent proxy in stats ctx
(http)" dependency):

    diff --git a/include/haproxy/applet-t.h b/include/haproxy/applet-t.h
    index 014e01ed9..1d9a63359 100644
    --- a/include/haproxy/applet-t.h
    +++ b/include/haproxy/applet-t.h
    @@ -121,6 +121,7 @@ struct appctx {
     		 * keep the grouped together and avoid adding new ones.
     		 */
     		struct {
    +			struct proxy *http_px;  /* parent proxy of the current applet (only relevant for HTTP applet) */
     			void *obj1;             /* context pointer used in stats dump */
     			void *obj2;             /* context pointer used in stats dump */
     			uint32_t domain;        /* set the stats to used, for now only proxy stats are supported */
    diff --git a/src/http_ana.c b/src/http_ana.c
    index b557da89d..1025d7711 100644
    --- a/src/http_ana.c
    +++ b/src/http_ana.c
    @@ -63,8 +63,8 @@ static enum rule_result http_req_restrict_header_names(struct stream *s, struct
     static void http_manage_client_side_cookies(struct stream *s, struct channel *req);
     static void http_manage_server_side_cookies(struct stream *s, struct channel *res);

    -static int http_stats_check_uri(struct stream *s, struct http_txn *txn, struct proxy *backend);
    -static int http_handle_stats(struct stream *s, struct channel *req);
    +static int http_stats_check_uri(struct stream *s, struct http_txn *txn, struct proxy *px);
    +static int http_handle_stats(struct stream *s, struct channel *req, struct proxy *px);

     static int http_handle_expect_hdr(struct stream *s, struct htx *htx, struct http_msg *msg);
     static int http_reply_100_continue(struct stream *s);
    @@ -428,7 +428,7 @@ int http_process_req_common(struct stream *s, struct channel *req, int an_bit, s
     		}

     		/* parse the whole stats request and extract the relevant information */
    -		http_handle_stats(s, req);
    +		http_handle_stats(s, req, px);
     		verdict = http_req_get_intercept_rule(px, &px->uri_auth->http_req_rules, s);
     		/* not all actions implemented: deny, allow, auth */

    @@ -3959,16 +3959,16 @@ void http_check_response_for_cacheability(struct stream *s, struct channel *res)

     /*
      * In a GET, HEAD or POST request, check if the requested URI matches the stats uri
    - * for the current backend.
    + * for the current proxy.
      *
      * It is assumed that the request is either a HEAD, GET, or POST and that the
      * uri_auth field is valid.
      *
      * Returns 1 if stats should be provided, otherwise 0.
      */
    -static int http_stats_check_uri(struct stream *s, struct http_txn *txn, struct proxy *backend)
    +static int http_stats_check_uri(struct stream *s, struct http_txn *txn, struct proxy *px)
     {
    -	struct uri_auth *uri_auth = backend->uri_auth;
    +	struct uri_auth *uri_auth = px->uri_auth;
     	struct htx *htx;
     	struct htx_sl *sl;
     	struct ist uri;
    @@ -4003,14 +4003,14 @@ static int http_stats_check_uri(struct stream *s, struct http_txn *txn, struct p
      * s->target which is supposed to already point to the stats applet. The caller
      * is expected to have already assigned an appctx to the stream.
      */
    -static int http_handle_stats(struct stream *s, struct channel *req)
    +static int http_handle_stats(struct stream *s, struct channel *req, struct proxy *px)
     {
     	struct stats_admin_rule *stats_admin_rule;
     	struct stream_interface *si = &s->si[1];
     	struct session *sess = s->sess;
     	struct http_txn *txn = s->txn;
     	struct http_msg *msg = &txn->req;
    -	struct uri_auth *uri_auth = s->be->uri_auth;
    +	struct uri_auth *uri_auth = px->uri_auth;
     	const char *h, *lookup, *end;
     	struct appctx *appctx;
     	struct htx *htx;
    @@ -4020,6 +4020,7 @@ static int http_handle_stats(struct stream *s, struct channel *req)
     	memset(&appctx->ctx.stats, 0, sizeof(appctx->ctx.stats));
     	appctx->st1 = appctx->st2 = 0;
     	appctx->ctx.stats.st_code = STAT_STATUS_INIT;
    +	appctx->ctx.stats.http_px = px;
     	appctx->ctx.stats.flags |= uri_auth->flags;
     	appctx->ctx.stats.flags |= STAT_FMT_HTML; /* assume HTML mode by default */
     	if ((msg->flags & HTTP_MSGF_VER_11) && (txn->meth != HTTP_METH_HEAD))
    diff --git a/src/stats.c b/src/stats.c
    index d1f3daa98..1f0b2bff7 100644
    --- a/src/stats.c
    +++ b/src/stats.c
    @@ -2863,9 +2863,9 @@ static int stats_dump_be_stats(struct stream_interface *si, struct proxy *px)
     	return stats_dump_one_line(stats, stats_count, appctx);
     }

    -/* Dumps the HTML table header for proxy <px> to the trash for and uses the state from
    - * stream interface <si> and per-uri parameters <uri>. The caller is responsible
    - * for clearing the trash if needed.
    +/* Dumps the HTML table header for proxy <px> to the trash and uses the state from
    + * stream interface <si>. The caller is responsible for clearing the trash if
    + * needed.
      */
     static void stats_dump_html_px_hdr(struct stream_interface *si, struct proxy *px)
     {
    @@ -3015,17 +3015,19 @@ static void stats_dump_html_px_end(struct stream_interface *si, struct proxy *px
      * input buffer. Returns 0 if it had to stop dumping data because of lack of
      * buffer space, or non-zero if everything completed. This function is used
      * both by the CLI and the HTTP entry points, and is able to dump the output
    - * in HTML or CSV formats. If the later, <uri> must be NULL.
    + * in HTML or CSV formats.
      */
     int stats_dump_proxy_to_buffer(struct stream_interface *si, struct htx *htx,
    -			       struct proxy *px, struct uri_auth *uri)
    +			       struct proxy *px)
     {
     	struct appctx *appctx = __objt_appctx(si->end);
    -	struct stream *s = si_strm(si);
     	struct channel *rep = si_ic(si);
     	struct server *sv, *svs;	/* server and server-state, server-state=server or server->track */
     	struct listener *l;
    +	struct uri_auth *uri = NULL;

    +	if (appctx->ctx.stats.http_px)
    +		uri = appctx->ctx.stats.http_px->uri_auth;
     	chunk_reset(&trash);

     	switch (appctx->ctx.stats.px_st) {
    @@ -3045,7 +3047,7 @@ int stats_dump_proxy_to_buffer(struct stream_interface *si, struct htx *htx,
     					break;

     				/* match '.' which means 'self' proxy */
    -				if (strcmp(scope->px_id, ".") == 0 && px == s->be)
    +				if (strcmp(scope->px_id, ".") == 0 && px == appctx->ctx.stats.http_px)
     					break;
     				scope = scope->next;
     			}
    @@ -3227,10 +3229,16 @@ int stats_dump_proxy_to_buffer(struct stream_interface *si, struct htx *htx,
     }

     /* Dumps the HTTP stats head block to the trash for and uses the per-uri
    - * parameters <uri>. The caller is responsible for clearing the trash if needed.
    + * parameters from the parent proxy. The caller is responsible for clearing
    + * the trash if needed.
      */
    -static void stats_dump_html_head(struct appctx *appctx, struct uri_auth *uri)
    +static void stats_dump_html_head(struct appctx *appctx)
     {
    +	struct uri_auth *uri;
    +
    +	BUG_ON(!appctx->ctx.stats.http_px);
    +	uri = appctx->ctx.stats.http_px->uri_auth;
    +
     	/* WARNING! This must fit in the first buffer !!! */
     	chunk_appendf(&trash,
     	              "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\"\n"
    @@ -3345,17 +3353,21 @@ static void stats_dump_html_head(struct appctx *appctx, struct uri_auth *uri)
     }

     /* Dumps the HTML stats information block to the trash for and uses the state from
    - * stream interface <si> and per-uri parameters <uri>. The caller is responsible
    - * for clearing the trash if needed.
    + * stream interface <si> and per-uri parameters from the parent proxy. The caller
    + * is responsible for clearing the trash if needed.
      */
    -static void stats_dump_html_info(struct stream_interface *si, struct uri_auth *uri)
    +static void stats_dump_html_info(struct stream_interface *si)
     {
     	struct appctx *appctx = __objt_appctx(si->end);
     	unsigned int up = (now.tv_sec - start_date.tv_sec);
     	char scope_txt[STAT_SCOPE_TXT_MAXLEN + sizeof STAT_SCOPE_PATTERN];
     	const char *scope_ptr = stats_scope_ptr(appctx, si);
    +	struct uri_auth *uri;
     	unsigned long long bps = (unsigned long long)read_freq_ctr(&global.out_32bps) * 32;

    +	BUG_ON(!appctx->ctx.stats.http_px);
    +	uri = appctx->ctx.stats.http_px->uri_auth;
    +
     	/* Turn the bytes per second to bits per second and take care of the
     	 * usual ethernet overhead in order to help figure how far we are from
     	 * interface saturation since it's the only case which usually matters.
    @@ -3629,8 +3641,7 @@ static void stats_dump_json_end()
      * a pointer to the current server/listener.
      */
     static int stats_dump_proxies(struct stream_interface *si,
    -                              struct htx *htx,
    -                              struct uri_auth *uri)
    +                              struct htx *htx)
     {
     	struct appctx *appctx = __objt_appctx(si->end);
     	struct channel *rep = si_ic(si);
    @@ -3650,7 +3661,7 @@ static int stats_dump_proxies(struct stream_interface *si,
     		px = appctx->ctx.stats.obj1;
     		/* skip the disabled proxies, global frontend and non-networked ones */
     		if (!px->disabled && px->uuid > 0 && (px->cap & (PR_CAP_FE | PR_CAP_BE))) {
    -			if (stats_dump_proxy_to_buffer(si, htx, px, uri) == 0)
    +			if (stats_dump_proxy_to_buffer(si, htx, px) == 0)
     				return 0;
     		}

    @@ -3666,14 +3677,12 @@ static int stats_dump_proxies(struct stream_interface *si,
     }

     /* This function dumps statistics onto the stream interface's read buffer in
    - * either CSV or HTML format. <uri> contains some HTML-specific parameters that
    - * are ignored for CSV format (hence <uri> may be NULL there). It returns 0 if
    - * it had to stop writing data and an I/O is needed, 1 if the dump is finished
    - * and the stream must be closed, or -1 in case of any error. This function is
    - * used by both the CLI and the HTTP handlers.
    + * either CSV or HTML format. It returns 0 if it had to stop writing data and
    + * an I/O is needed, 1 if the dump is finished and the stream must be closed,
    + * or -1 in case of any error. This function is used by both the CLI and the
    + * HTTP handlers.
      */
    -static int stats_dump_stat_to_buffer(struct stream_interface *si, struct htx *htx,
    -				     struct uri_auth *uri)
    +static int stats_dump_stat_to_buffer(struct stream_interface *si, struct htx *htx)
     {
     	struct appctx *appctx = __objt_appctx(si->end);
     	struct channel *rep = si_ic(si);
    @@ -3688,7 +3697,7 @@ static int stats_dump_stat_to_buffer(struct stream_interface *si, struct htx *ht

     	case STAT_ST_HEAD:
     		if (appctx->ctx.stats.flags & STAT_FMT_HTML)
    -			stats_dump_html_head(appctx, uri);
    +			stats_dump_html_head(appctx);
     		else if (appctx->ctx.stats.flags & STAT_JSON_SCHM)
     			stats_dump_json_schema(&trash);
     		else if (appctx->ctx.stats.flags & STAT_FMT_JSON)
    @@ -3708,7 +3717,7 @@ static int stats_dump_stat_to_buffer(struct stream_interface *si, struct htx *ht

     	case STAT_ST_INFO:
     		if (appctx->ctx.stats.flags & STAT_FMT_HTML) {
    -			stats_dump_html_info(si, uri);
    +			stats_dump_html_info(si);
     			if (!stats_putchk(rep, htx, &trash))
     				goto full;
     		}
    @@ -3733,7 +3742,7 @@ static int stats_dump_stat_to_buffer(struct stream_interface *si, struct htx *ht
     		case STATS_DOMAIN_PROXY:
     		default:
     			/* dump proxies */
    -			if (!stats_dump_proxies(si, htx, uri))
    +			if (!stats_dump_proxies(si, htx))
     				return 0;
     			break;
     		}
    @@ -4112,11 +4121,14 @@ static int stats_process_http_post(struct stream_interface *si)
     static int stats_send_http_headers(struct stream_interface *si, struct htx *htx)
     {
     	struct stream *s = si_strm(si);
    -	struct uri_auth *uri = s->be->uri_auth;
    +	struct uri_auth *uri;
     	struct appctx *appctx = __objt_appctx(si->end);
     	struct htx_sl *sl;
     	unsigned int flags;

    +	BUG_ON(!appctx->ctx.stats.http_px);
    +	uri = appctx->ctx.stats.http_px->uri_auth;
    +
     	flags = (HTX_SL_F_IS_RESP|HTX_SL_F_VER_11|HTX_SL_F_XFER_ENC|HTX_SL_F_XFER_LEN|HTX_SL_F_CHNK);
     	sl = htx_add_stline(htx, HTX_BLK_RES_SL, flags, ist("HTTP/1.1"), ist("200"), ist("OK"));
     	if (!sl)
    @@ -4166,11 +4178,14 @@ static int stats_send_http_redirect(struct stream_interface *si, struct htx *htx
     {
     	char scope_txt[STAT_SCOPE_TXT_MAXLEN + sizeof STAT_SCOPE_PATTERN];
     	struct stream *s = si_strm(si);
    -	struct uri_auth *uri = s->be->uri_auth;
    +	struct uri_auth *uri;
     	struct appctx *appctx = __objt_appctx(si->end);
     	struct htx_sl *sl;
     	unsigned int flags;

    +	BUG_ON(!appctx->ctx.stats.http_px);
    +	uri = appctx->ctx.stats.http_px->uri_auth;
    +
     	/* scope_txt = search pattern + search query, appctx->ctx.stats.scope_len is always <= STAT_SCOPE_TXT_MAXLEN */
     	scope_txt[0] = 0;
     	if (appctx->ctx.stats.scope_len) {
    @@ -4263,7 +4278,7 @@ static void http_stats_io_handler(struct appctx *appctx)
     	}

     	if (appctx->st0 == STAT_HTTP_DUMP) {
    -		if (stats_dump_stat_to_buffer(si, res_htx, s->be->uri_auth))
    +		if (stats_dump_stat_to_buffer(si, res_htx))
     			appctx->st0 = STAT_HTTP_DONE;
     	}

    @@ -4888,6 +4903,7 @@ static int cli_parse_show_stat(char **args, char *payload, struct appctx *appctx

     	appctx->ctx.stats.scope_str = 0;
     	appctx->ctx.stats.scope_len = 0;
    +	appctx->ctx.stats.http_px = NULL; // not under http context
     	appctx->ctx.stats.flags = STAT_SHNODE | STAT_SHDESC;

     	if ((strm_li(si_strm(appctx->owner))->bind_conf->level & ACCESS_LVL_MASK) >= ACCESS_LVL_OPER)
    @@ -4954,7 +4970,7 @@ static int cli_io_handler_dump_info(struct appctx *appctx)
      */
     static int cli_io_handler_dump_stat(struct appctx *appctx)
     {
    -	return stats_dump_stat_to_buffer(appctx->owner, NULL, NULL);
    +	return stats_dump_stat_to_buffer(appctx->owner, NULL);
     }

     static int cli_io_handler_dump_json_schema(struct appctx *appctx)
2023-12-21 14:21:53 +01:00
Aurelien DARRAGON
ef9d692544 MINOR: stats: store the parent proxy in stats ctx (http)
Some HTTP related stats functions need to know the parent proxy, mainly
to get a pointer on the related uri_auth set by the proxy or to check
scope settings.

The current design (probably historical, as only the http context existed
back then) took the other approach: it propagates the uri pointer from the
http context deep down the calling stack up to the relevant functions.
For non-http contexts (cli), the pointer is set to NULL.

Doing so is not very pretty and not easy to maintain. Moreover, there were
still some places in the code where the uri pointer was learned directly
from the stream proxy because the argument was not available in those
functions. This is error-prone, because if one day we decide to
change the source proxy in the parent function, we might still have some
functions down the stack that ignore the topmost argument and still do it
on their own, and we'll probably end up with inconsistencies.

So in this patch, we take a safer approach: the caller responsible for
creating the stats applet should set the http_px pointer so that any stats
function running under the applet that needs to know if it's running in
http context or needs to access parent proxy info may do so thanks to
the dedicated ctx->http_px pointer.
2023-12-21 14:20:03 +01:00
Christopher Faulet
0fd25514d6 MEDIUM: http-ana: Set termination state before returning haproxy response
When, for any reason and at any step, HAProxy decides to interrupt an HTTP
transaction, it returns a dedicated response to the client (possibly empty)
and it sets the stream flags used to produce the session termination state.
Both operations were performed in any order, depending on the code
path. Most of the time, the HAProxy response was produced first.

With this patch, the stream flags for the termination state are now set
first. This way, these flags become visible from http-after-response rule
sets. Only errors raised while the HAProxy response is generated are
reported later.
2023-11-29 11:11:12 +01:00
Christopher Faulet
b2f82b2b51 MINOR: http-fetch: Add a sample to retrieve the server status code
The code returned by the "status" sample fetch is the one in the HTTP
response at the moment the sample is evaluated. It may be the status code in
the server response or the one from the HAProxy reply in case of an error,
deny, redirect...

However, it could be handy to retrieve the status code returned by the
server, when an HTTP response was really received from it. It is the purpose
of the "server_status" sample fetch. The server status code itself is stored
in the HTTP txn.
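
A possible use (hypothetical example):

  http-after-response set-header X-Origin-Status %[server_status]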
2023-11-29 11:11:12 +01:00
Ilya Shipitsin
80813cdd2a CLEANUP: assorted typo fixes in the code and comments
This is the 37th iteration of typo fixes.
2023-11-23 16:23:14 +01:00
Christopher Faulet
322d660d08 MINOR: tree-wide: Only rely on co_data() to check channel emptyness
Because the channel_is_empty() function now only checks the channel's
buffer, we can remove it and rely on co_data() instead. Of course, all tests
must be inverted.

channel_is_empty() is thus removed.
2023-10-17 18:51:13 +02:00
Christopher Faulet
d0b04920d1 BUG/MINOR: http-ana/stats: Specify that HTX redirect messages have a C-L header
Redirect responses sent during the HTTP analysis have no payload. However
there is still a "Content-Length" header. It is important to set the
corresponding flag on the HTX start-line to be sure to preserve this header
when the response is sent to the client. The same is true with the stats
applet, when it returns a redirect response.

It is especially important because we now ignore in-flight modifications of
"Content-Length" or "Transfer-Encoding" headers without updating the HTX
start-line flags.

This patch may be backported to all stable versions but it is probably
useless because only the 2.9-dev is affected by the bug.
2023-10-17 18:11:04 +02:00
Christopher Faulet
d3e379b3ce BUG/MEDIUM: http-ana: Try to handle response before handling server abort
In the request analyser responsible for forwarding the request, we try to
detect the server abort to stop the request forwarding. However, we must be
careful to not block the response processing, if any. Indeed, it is possible
to get the response and the server abort at the same time. In this case, we
must try to forward the response to the client first.

So to fix the issue, in the request analyser we no longer handle the server
abort if the response channel is not empty. In the end, the response
analyser is able to detect the server abort if it is relevant. Otherwise,
the stream will be woken up after the response forwarding and the server
abort should be handled at this stage.

This patch should only be backported as far as 2.7 because the risk of
breakage is high. And it is probably a good idea to wait a bit before
backporting it.
2023-09-21 09:36:37 +02:00
Aurelien DARRAGON
4457783ade MINOR: http_ana: position the FINAL flag for http_after_res execution
Ensure that the ACT_OPT_FINAL flag is always set when executing actions
from http_after_res context.

This will permit lua functions to be executed as http_after_res actions
since hlua_ctx_resume() automatically disables "yielding" when such a flag
is set: the hlua handler will only allow one-shot executions at this point
(lua or not, we don't want to reschedule http_after_res actions).
2023-09-06 11:42:34 +02:00
Christopher Faulet
624979cf3b BUG/MAJOR: http-ana: Get a fresh trash buffer for each header value replacement
When a "replace-header" action is used, we loop on all headers in the
message to change value of all headers matching a name. The new value is
placed in a trash. However, there is a race here because if the message must
be defragmented, another trash is used. If several defragmentation are
performed because several headers must be updated at same time, the first
trash, used to store the new value, may be crushed. Indeed, there are only 2
pre-allocated trash used in rotation. and the trash to store the new value
is never renewed. As consequece, random data may be inserted into the header
value.
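
For reference, a typical use of the affected action (illustrative
snippet, not taken from the commit):

  http-request replace-header Cookie foo=([^;]*);(.*) foo=\1;\2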

Here, to fix the issue, we must take care to refresh the trash buffer when
we evaluate a new header. This way, one trash buffer is used for the new
value, and possibly another one for the htx defragmentation. But that's all.

Thanks to Christian Ruppert for his detailed report.

This patch must be backported to all stable versions. On the 2.0, the patch must be
applied on src/proto_htx.c and the function is named
htx_transform_header_str().
2023-08-04 17:06:31 +02:00
Remi Tricot-Le Breton
ca4fd73938 BUG/MINOR: cache: A 'max-age=0' cache-control directive can be overriden by a s-maxage
When an s-maxage cache-control directive is present, it overrides any
other max-age or expires value (see section 5.2.2.9 of RFC7234). So if
we have a max-age=0 alongside a strictly positive s-maxage, the response
should be cached.
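
For example, a response carrying the following header must be cached for
300 seconds (illustrative header, not from the commit):

  Cache-Control: max-age=0, s-maxage=300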

This bug was raised in GitHub issue #2203.
The fix can be backported to all stable branches.
2023-07-04 22:15:00 +02:00
Willy Tarreau
69530f59ae MEDIUM: clock: replace timeval "now" with integer "now_ns"
This puts an end to the occasional confusion between the "now" date
that is internal, monotonic and not synchronized with the system's
date, and "date" which is the system's date and not necessarily
monotonic. Variable "now" was removed and replaced with a 64-bit
integer "now_ns" which is a counter of nanoseconds. It wraps every
585 years, so if all goes well (i.e. if humanity does not need
haproxy anymore in 500 years), it will just never wrap. This implies
that now_ns is never null and that the zero value can reliably be used
as "not set yet" for a timestamp if needed. This will also simplify
date checks where it becomes possible again to do "date1<date2".

All occurrences of "tv_to_ns(&now)" were simply replaced by "now_ns".
Due to the intricacies between now, global_now and now_offset, all 3
had to be turned to nanoseconds at once. It's not a problem since all
of them were solely used in 3 functions in clock.c, but they make the
patch look bigger than it really is.

The clock_update_local_date() and clock_update_global_date() functions
are now much simpler as there's no need anymore to perform conversions
nor to round the timeval up or down.

The wrapping continues to happen by presetting the internal offset in
the short future so that the 32-bit now_ms continues to wrap 20 seconds
after boot.

The start_time used to calculate the uptime can still be turned to
nanoseconds now. One open question concerns global_now_ms, which is used
only for the freq counters. It's unclear whether there's more value in
using two variables that need to be synchronized sequentially like today,
or in just using global_now_ns divided by 1 million. Both approaches will
work equally well on modern systems; the difference might come from
smaller ones. Better not to change anything for now.

One benefit of the new approach is that we now have an internal date
with a resolution of the nanosecond and the precision of the microsecond,
which can be useful to extend some measurements given that timestamps
also have this resolution.
2023-04-28 16:08:08 +02:00