This field is used by stream_new() to optionally set the applet the
stream will connect to, for simple proxies such as the CLI. But it has
never been configurable to anything and is always strictly equal to the
frontend's ->default_target. Let's just drop it and make stream_new()
only use the frontend's. It makes more sense anyway, as we don't want
the proxy to work differently depending on the "bind" line. This idea
was introduced in 1.6 in the hope that the h2 implementation would use
applets for decoding (an approach that was abandoned after the very
first attempt in 1.8).
When bind_confs were created, some elements such as the analysers mask
ought to have moved there, but that wasn't the case. Now that it's
getting clearer that bind_conf provides all binding parameters and
the listener is essentially a listener on an address, it's becoming
really confusing to keep such parameters in the listener, so let's
move the mask to the bind_conf. We also take this opportunity to
pre-set the mask to the frontend's upon initialization. Now several
loops have one less argument to take care of.
In applets, we stop processing when a write error (CF_WRITE_ERROR) or a shutdown
for writes (CF_SHUTW) is detected. However, any write error leads to an
immediate shutdown for writes. Thus, it is enough to only test if CF_SHUTW is
set.
When a read error (CF_READ_ERROR) is reported, a shutdown for reads is
always performed (CF_SHUTR). Thus, there is no reason to check if
CF_READ_ERROR is set if CF_SHUTR is also checked.
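To make the implication concrete, here is a minimal sketch (the flag
values below are made up for the example; only the "error implies
shutdown" relationship matters):

    /* hypothetical flag values, for illustration only */
    #define CF_SHUTW       0x0800  /* shutdown for writes */
    #define CF_WRITE_ERROR 0x1000  /* when set, CF_SHUTW is always set too */
    #define CF_SHUTR       0x2000  /* shutdown for reads */
    #define CF_READ_ERROR  0x4000  /* when set, CF_SHUTR is always set too */

    /* before: flags & (CF_WRITE_ERROR|CF_SHUTW); after: the single test */
    static inline int stop_sending(unsigned int flags)
    {
        return !!(flags & CF_SHUTW);
    }

    /* before: flags & (CF_READ_ERROR|CF_SHUTR); after: the single test */
    static inline int stop_receiving(unsigned int flags)
    {
        return !!(flags & CF_SHUTR);
    }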
The CF_READ_ATTACHED flag is only used as an input event for stream
analyzers (it is part of CF_MASK_ANALYSER). A read event can be reported
instead, and this flag can be removed. We must only take care to report
a read event when the client connection is upgraded from TCP to HTTP.
It appears that CF_ANA_TIMEOUT is a flag only used in CF_MASK_ANALYSER.
All analyzer timeouts rely on the analysis expiration date
(chn->analyse_exp). Worse, once set, this flag is never removed. Thus
this flag can be removed and replaced by a read event (CF_READ_EVENT).
Thanks to previous changes, the CF_WRITE_ACTIVITY flag can be removed.
Everywhere it was used, its value (CF_WRITE_EVENT|CF_WRITE_ERROR) is now
used directly.
Thanks to previous changes, the CF_READ_ACTIVITY flag can be removed.
Everywhere it was used, its value (CF_READ_EVENT|CF_READ_ERROR) is now
used directly.
Just like CF_READ_PARTIAL, CF_WRITE_PARTIAL is now merged with
CF_WRITE_EVENT. There is a subtlety in sc_notify(): the "connect" event
(formerly CF_WRITE_NULL) is now detected with
(CF_WRITE_EVENT + sc->state < SC_ST_EST).
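Spelled out as code, the new detection reads roughly as follows (a
hedged sketch; "oc" stands for the output channel, and the exact
expression in sc_notify() may differ):

    /* formerly: (oc->flags & CF_WRITE_NULL)
     * now: a write event reported while the connection isn't established yet */
    if ((oc->flags & CF_WRITE_EVENT) && sc->state < SC_ST_EST) {
            /* handle the "connect" event */
    }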
The CF_READ_PARTIAL flag is now merged with CF_READ_EVENT. It means
CF_READ_EVENT is set when a read0 is received (formerly CF_READ_NULL) or
when data are received (formerly CF_READ_PARTIAL).
There is nothing special here, except the conditions to wake the stream
up in sc_notify(). Indeed, the test was slightly changed to reflect the
recent changes: the read0 event is now formalized by (CF_READ_EVENT + CF_SHUTR).
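Similarly, a hedged sketch of the read0 test ("ic" stands for the input
channel; the exact expression in sc_notify() may differ):

    /* formerly: (ic->flags & CF_READ_NULL)
     * now: a read event combined with a shutdown for reads means read0 */
    if ((ic->flags & CF_READ_EVENT) && (ic->flags & CF_SHUTR)) {
            /* read0 was received */
    }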
As for CF_READ_NULL, it appears that CF_WRITE_NULL and other write events
on a channel are mainly used to wake up the stream and may be replaced
by a single write event.
In this patch, we introduce the CF_WRITE_EVENT flag as a replacement for
CF_WRITE_NULL. There is no breaking change for now, it is just a
rename. Gradually, other write events will be merged with this one.
The CF_READ_NULL flag is not really useful as such. It is a transient
event used to wake up the stream. As we will see, all read events on a
channel can be reduced to a single one, and they are all used to wake up
the stream.
In this patch, we introduce the CF_READ_EVENT flag as a replacement for
CF_READ_NULL. There is no breaking change for now, it is just a
rename. Gradually, other read events will be merged with this one.
The number of stick-counter entries usable by track-sc rules is currently
set at build time. There is no good value for this since the vast majority
of users don't need any, most need only a few, and rare users need more.
Adding more counters for everyone increases memory and CPU usage for no
reason.
This patch moves the per-session and per-stream arrays to a pool whose
size is defined at boot time. This way it becomes possible to set the
number of entries at boot time via a new global setting,
"tune.stick-counters", that sets the limit for the whole process. When
not set, the MAX_SESS_STKCTR value still applies (or 3 if it is not
defined), as before.
It is also possible to lower the value to 0 to save a bit of memory if
stick counters are not used at all.
Note that a few low-level sample-fetch functions had to be protected due
to the ability to use sample-fetches in the global section to set some
variables.
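For illustration, a hedged configuration sketch of the new global setting
(the value 8 is only an example):

    global
        # allow up to 8 simultaneously tracked stick-counters per stream,
        # instead of the built-in default (3); 0 disables them entirely
        tune.stick-counters 8

With a higher value, the corresponding track-sc indexes become usable
without rebuilding.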
This patch adds support for the following actions in the
http-after-response ruleset:
* set-map, del-map and del-acl
* set-log-level
* sc-inc-gpc, sc-inc-gpc0 and sc-inc-gpc1
* sc-set-gpt and sc-set-gpt0
This patch should solve issue #1980.
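For illustration, a hedged configuration sketch exercising a few of these
actions (the backend, map file and conditions are made up; the
sc-inc-gpc0 line assumes a counter is tracked elsewhere):

    backend be_app
        http-after-response set-log-level silent if { status 404 }
        http-after-response sc-inc-gpc0(0) if { status ge 500 }
        http-after-response set-map(/etc/haproxy/seen.map) %[src] %[status]
        server s1 127.0.0.1:8081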
In http_create_txn(), vars_init_head() was performed on both the
s->vars_txn and s->vars_reqres lists.
But this is wrong: these two lists are already initialized upon stream
creation in stream_new().
Moreover, between stream_new() and http_create_txn(), some variables may
be defined (e.g.: by the frontend), resulting in lists that are not empty.
Because of this "extra" list initialization, already defined variables
can be lost.
This causes txn-dependent code to be unable to access previously defined
variables, as well as a memory leak, because http_destroy_txn() relies on
these lists to perform the purge.
This proved to be the case when a frontend sets variables and a Lua
sample fetch is used in the backend section, as described in GH #1935.
Many thanks to Darragh O'Toole for his detailed report.
Removing the extra vars_init_head() calls (x2) in http_create_txn() to
fix the issue.
Adding some comments in the code in an attempt to prevent future misuses
of the s->vars_reqres and s->vars_txn lists.
It should be backported to every stable version.
(This is an old bug that seems to have existed since 1.6-dev6.)
[cf: On 2.0 and 1.8, for the legacy HTTP code, the vars_init() calls are
     used during TXN cleanup, when the stream is reused. So these calls
     must be moved from http_init_txn() to http_reset_txn(), not removed.]
The commit bc7c207f74 ("BUG/MAJOR: stick-tables: do not try to index a
server name for applets") tried to catch the applet case when indexing
the server name. However, there is still an issue: the applets are
unconditionally cast to servers, and this bug has existed for a while.
It's just luck that it doesn't crash.
Now, when store rules are processed, we skip the rule if the stream's
target is not a server or, of course, if it is a server but the
"non-stick" option is set. However, we still take care to release the
sticky session.
This patch must be backported to all stable versions.
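A hedged sketch of the intended guard when the store rules are walked
(flag and helper names approximate the HAProxy internals; this is not the
literal patch):

    /* skip the rule unless the target really is a stickable server */
    struct server *srv = objt_server(s->target);

    if (!srv || (srv->flags & SRV_F_NON_STICK)) {
            /* still release the sticky session before skipping the rule */
            continue;
    }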
Since commit 03cdf55e6 ("MINOR: stream: Stickiness server lookup by name.")
in 2.0-dev6, server names may be used instead of their IDs in order to
perform stickiness. However, the commit above may end up trying to insert
an empty server name into the dictionary when the target is an applet
instead of a server, resulting in an immediate segfault. This is typically
what happens when a "stick-store" rule is present in a backend featuring a
"stats" directive. As there doesn't seem to be an easy way around it,
this seems to imply that "stick-store" is not much used anymore.
The solution here is to only try to insert non-null keys into the
dictionary. The patch moves the check of the key type before the
first lock so that the test on the key can be performed under the lock
instead of locking twice (the patch is more readable with diff -b).
Note that before 2.4, there's no <key> variable there as it was
introduced by commit 92149f9a8 ("MEDIUM: stick-tables: Add srvkey
option to stick-table"), but the __objt_server(s->target)->id still
needs to be tested.
This needs to be backported as far as 2.0.
Add an anonymization for an element missed in the first merge
for 'show sess all'.
No backport needed, unless the anonymization mechanism is backported.
The frontend SC is attached before the backend one is allocated. Thus an
allocation error on the backend SC must be handled before an error on the
frontend SC.
This patch must be backported to 2.6.
Modify stream.c in order to hash the following confidential data if the
anonymized mode is enabled:
- configuration elements such as frontend/backend/server names
- IP addresses
It was a mistake to put these two fields in the struct task. This
was added in 1.9 via commit 9efd7456e ("MEDIUM: tasks: collect per-task
CPU time and latency"). These fields are used solely by streams in
order to report the measurements via the lat_ns* and cpu_ns* sample
fetch functions when task profiling is enabled. For the rest of the
tasks, this is pure CPU waste when profiling is enabled, and memory
waste 100% of the time, as the point where these latencies and usages
are measured is in the profiling array.
Let's move the fields to the stream instead, and have process_stream()
retrieve the relevant info from the thread's context.
The struct task is now back to 120 bytes, i.e. almost two cache lines,
with 32 bits still available.
When task profiling is enabled, the reported CPU time for short requests
and responses (e.g. redirect) is always zero in the logs, because
process_stream() is only called once and the CPU time is measured after
it returns. This is particularly annoying when dealing with denies, and in
general with anything that deals with parasitic traffic, because it can be
difficult to figure out where the CPU time is spent.
The solution taken in this patch consists in having process_stream()
update the cpu time itself before logging and quitting. It's very simple.
It will not take into account the time taken to produce the log nor to
free the stream, but that's marginal compared to always logging zero.
The task's wake_date is also reset so that the scheduler doesn't have to
perform these operations again. This is dependent on the following patch:
MINOR: sched: store the current profile entry in the thread context
It should be backported to 2.6 as it does help for troubleshooting.
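A minimal, self-contained sketch of the idea (the names are made up; this
is not the HAProxy code): accumulate the time elapsed since the task's
wake date just before logging, then clear the wake date so the same slice
isn't accounted for again by the scheduler.

    #include <stdint.h>
    #include <time.h>

    /* hypothetical structures, for illustration only */
    struct fake_task   { uint64_t wake_date; };
    struct fake_stream { uint64_t cpu_time;  };

    static uint64_t mono_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }

    /* called just before emitting the log so that short requests no longer
     * report a zero CPU time; clearing wake_date prevents double counting */
    static void update_cpu_before_log(struct fake_stream *s, struct fake_task *t)
    {
        if (t->wake_date) {
            s->cpu_time += mono_ns() - t->wake_date;
            t->wake_date = 0;
        }
    }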
This helper will be called for muxes that provide it and will be used
to let the mux provide extra information about the stream attached to
a stream descriptor. A line prefix is passed as an argument so that the
mux is free to break long lines without breaking indentation. No prefix
means no line breaks should be produced (e.g. for short dumps).
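As an illustration of the shape such a hook could take (the structure
name, signature and comments below are assumptions, not the actual
HAProxy API):

    struct buffer;   /* output buffer the dump is appended to */
    struct sedesc;   /* stream endpoint descriptor owning the stream */

    struct mux_ops_sketch {
        /* ... other mux operations ... */

        /* dump extra info about the stream attached to <sd> into <buf>;
         * <pfx> is inserted after each line break to keep indentation,
         * and a NULL <pfx> means the output must stay on a single line */
        int (*show_sd)(struct buffer *buf, struct sedesc *sd, const char *pfx);
    };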
When an appctx is found looping over itself, we report a number of pieces
of information, but not the pointers to the applet's definition nor to its
handler, which can be quite handy in some cases. Let's add them and try to
decode the symbols.
From now on, the FD's update_mask only refers to local thread IDs. However,
there remains a limitation: in updt_fd_polling(), we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
The "ctx" and "st2" parts in the appctx were marked for removal in 2.7
and were emulated using memcpy/memset etc for possible external code.
Let's remove this now.
The tasks currently rely on a mask but do not have an assigned thread ID,
contrary to tasklets. However, in practice they're either running on a
single thread or on any thread, so it will be worth simplifying all this
in order to ease the transition to thread groups.
This patch introduces a "tid" field in the task struct, that's either
the number of the thread the task is attached to, or a negative value
if the task is not bound to a thread (i.e. its mask is all_threads_mask).
The new ID is only set and updated but not used yet.
There's a subtle bug in stream_free() when releasing captures. The
pools may be NULL when no capture is defined, and the calls to
pool_free() are unconditional. The only reason this doesn't cause
trouble is that the pointer to be freed is always NULL in this case and
we don't go further down the chain. That's particularly ugly and it
complicates debugging, so let's only call these ones when the pointers
are set.
There's no impact on running code, it only fools those trying to debug
pools manually. There's no need to backport it, though, unless it helps
for debugging sessions.
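A hedged sketch of the guarded release (field names follow the
proxy/stream structures as I understand them; not necessarily the literal
patch):

    /* only release capture storage when a capture block was allocated;
     * with no capture defined these pointers are always NULL */
    if (s->req_cap)
            pool_free(fe->req_cap_pool, s->req_cap);
    if (s->res_cap)
            pool_free(fe->rsp_cap_pool, s->res_cap);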
On a destructive connection upgrade, instead of using the new mux name to
abort the old stream, we can rely on the stream connector flags. If it is
detached after the upgrade, it means the stream will not be reused by the
new mux and it must be aborted.
This patch may be backported to 2.6.
When the protocol is changed for a client connection at the stream level
(from TCP to H1/H2), there are two cases: the stream may be reused or
not. The first case, when the stream is reused, works. The second one has
been buggy since the conn-stream refactoring and leads to a crash.
In this case, the new mux doesn't reuse the stream; it must be silently
aborted. However, its front stream connector still references the
connection, so it must be detached. But this must be performed in two
stages, to be sure not to lose the context for the upgrade and to be able
to roll back on error. So now, before the upgrade, we prepare to detach
the stconn, and it is finally detached if the upgrade succeeds. There is a
trick here: we pretend the stconn is detached, but its state is preserved.
This patch must be backported to 2.6.
stream.c and mux_fcgi.c may cause a warning for a possible NULL deref
at -Os, while that is not possible thanks to the previous test. Let's
just switch to __htx_get_head_blk() instead.
Now that data consumption from the endpoint is the default setting,
we can generalize the pre-clearing of the wont_consume flag, which is
no longer specific to applets. In practice it's not needed anymore to do
it, but since streams might be initiated from asynchronous applets,
these might have blocked their consumption side before creating the
stream, thus it's safer to preserve the clearing of the flag.
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. The HTTP analyser and the backend functions
were all updated after being reviewed. Function stream_update_both_cs()
was renamed to stream_update_both_sc().
There's no more reason for keeping the code and definitions in
conn_stream, so let's move all that to stconn. The alphabetical ordering
of include files was adjusted.
This file contains all the stream-connector functions that are specific
to application layers of type stream. Let's name it accordingly so
that it's easier to figure out what's located there.
The alphabetical ordering of include files was preserved.
An equivalent applet_need_more_data() was added as well since that function
is mostly used from applet code. It makes it much clearer that the applet
is waiting for data from the stream layer.
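A hedged sketch of how an applet's I/O handler would typically use it
(the surrounding handler and the condition are made up for the example):

    /* inside a hypothetical applet I/O handler: the request is not complete
     * yet, so state that the applet is waiting for more data from the
     * stream layer before yielding */
    if (!request_is_complete) {          /* hypothetical condition */
            applet_need_more_data(appctx);
            return;
    }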
These ones are essentially for the stream endpoint, so let's give them a
name that matches the intent. Equivalent versions were provided in the
applet namespace to improve code legibility.
The following flags are not at all related to the endpoint but to the
connector itself:
- SE_FL_RXBLK_ROOM
- SE_FL_RXBLK_BUFF
- SE_FL_RXBLK_CHAN
As such they have no business staying in the endpoint descriptor and
they must move to the stream connector. They've also been renamed
accordingly to better match what they correspond to (the same name
as the function that sets them).
The rare occurrences of cs_rx_blocked() were replaced by an explicit
test on the list of flags. The reason is that cs_rx_blocked() used to
include some tests that are not needed at certain places since their
result is already known there. For the same reason SE_FL_RXBLK_ANY wasn't
converted. As such it will later be possible to carefully review these
few locations and eliminate the unneeded flags from the tests. No
dedicated function was made to test them since they're explicit enough.
It now looks like ci_putchk() and friends could very well place the flag
themselves on the connector when they detect a buffer full condition, as
this would significantly simplify the high-level API. But all usages must
first be reviewed before this simplification can be done. For now it
remains done by applet_put*() instead.
In plenty of places we combine multiple flag checks to determine if we
can receive (endp_ready, rx_blocked, cf_shutr, etc.). Let's group them
under a single function that is meant to replace the existing tests.
Some tests were only checking the rxblk flags at the connection level,
so for now they were not converted; this requires a bit of auditing
first, and probably a test to determine whether or not to check for
cf_shutr (e.g. there is none if no stream is present).
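A hedged sketch of what such a combined predicate could look like (all
helper names below are made up; the real function may group a different
set of checks):

    /* illustrative pseudo-helper: one predicate replacing the scattered
     * "can we receive?" checks */
    static inline int sc_can_receive_sketch(const struct stconn *sc)
    {
            return sc_endp_ready(sc) &&      /* endpoint ready to produce  */
                   !sc_rx_blocked(sc) &&     /* no rx blocking flag raised */
                   !sc_shutr_seen(sc);       /* no shutdown for reads      */
    }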