In resume_listener(), proto->resume() errors were not properly handled:
the function kept flowing down as if no errors were detected.
Instead, we now perform an early return when such errors are detected to
prevent undefined behavior.
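As an illustration, the intended control flow looks roughly like this
(simplified sketch; locking and the rest of the state handling are omitted):

    if (l->rx.proto->resume) {
        ret = l->rx.proto->resume(l);
        if (!ret)
            goto end; /* failure: stop here instead of flowing down */
    }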
This could be backported up to 2.4.
--
Backport notes:
This commit depends on:
- "MINOR: listener: make sure we don't pause/resume bypassed listeners"
-> 2.4 ... 2.7:
Replace this:
| if (l->bind_conf->maxconn && l->nbconn >= l->bind_conf->maxconn) {
| l->rx.proto->disable(l);
By this:
| if (l->maxconn && l->nbconn >= l->maxconn) {
| l->rx.proto->disable(l);
Legacy suspend() return value handling in pause_listener() has been altered
over time.
First with fb76bd5ca ("BUG/MEDIUM: listeners: correctly report pause() errors")
Then with e03204c8e ("MEDIUM: listeners: implement protocol level
->suspend/resume() calls")
We aim to restore original function behavior and comply with resume_listener()
function description.
This is required for resume_listener() and pause_listener() to work as a whole.
Now, it is made explicit that pause_listener() may stop a listener if the
listener doesn't support the LI_PAUSED state (depending on the protocol
family, ie: ABNS sockets), in this case l->state will be set to LI_ASSIGNED
and this won't be considered as an error.
This could be backported up to 2.4 after a reasonable observation period
to make sure that this change doesn't cause unwanted side-effects.
--
Backport notes:
This commit depends on:
- "MINOR: listener: make sure we don't pause/resume bypassed listeners"
-> 2.4: manual change required because "MINOR: proxy/listener: support
for additional PAUSED state" was not backported: the contextual patch
lines don't match.
Replace this:
| if (px && !px->li_ready) {
| /* PROXY_LOCK is required */
By this:
| if (px && !px->li_ready) {
| ha_warning("Paused %s %s.\n", proxy_cap_str(px->cap), px->id);
Some listeners are kept in LI_ASSIGNED state but are not supposed to be
started since they were bypassed on initial startup (eg: in protocol_bind_all()
or in enable_listener()...)
Introduce the LI_F_FINALIZED flag: when it is non-zero, it means that the
listener made it past the LI_LISTEN state (finalized)
at least once so we can safely pause / resume. This way we won't risk starting
a previously bypassed listener which never made it that far and thus was not
expected to be lazy-started by accident.
As listener_pause() and listener_resume() are currently partially broken, such
unexpected lazy-start won't happen. But we're trying to restore pause() and
resume() behavior so this patch will be required before going any further.
We had to re-introduce the listener's 'flags' struct member since it was recently
moved into the bind_conf struct. But here we do have a legitimate need for these
listener-only flags.
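For illustration only, the flag could be raised where the listener state is
centrally updated; a rough sketch (not necessarily the exact hunk of the patch):

    void listener_set_state(struct listener *l, enum li_state st)
    {
        /* ... existing accounting kept as-is ... */
        l->state = st;
        if (st == LI_READY || st == LI_LIMITED || st == LI_FULL)
            l->flags |= LI_F_FINALIZED; /* pause/resume is now safe */
    }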
This should only be backported if explicitly required by another commit.
--
Backport notes:
-> 2.4 and 2.5:
The 2-bytes hole we're using in the current patch does not apply, let's
use the 4-byte hole located under the 'option' field.
Replace this:
|@@ -226,7 +226,8 @@ struct li_per_thread {
| struct listener {
| enum obj_type obj_type; /* object type = OBJ_TYPE_LISTENER */
| enum li_state state; /* state: NEW, INIT, ASSIGNED, LISTEN, READY, FULL */
|- /* 2-byte hole here */
|+ uint16_t flags; /* listener flags: LI_F_* */
| int luid; /* listener universally unique ID, used for SNMP */
| int nbconn; /* current number of connections on this listener */
| unsigned int thr_idx; /* thread indexes for queue distribution : (t2<<16)+t1 */
By this:
|@@ -209,6 +209,8 @@ struct listener {
| short int nice; /* nice value to assign to the instantiated tasks */
| int luid; /* listener universally unique ID, used for SNMP */
| int options; /* socket options : LI_O_* */
|+ uint16_t flags; /* listener flags: LI_F_* */
|+ /* 2-bytes hole here */
| __decl_thread(HA_RWLOCK_T lock);
|
| struct fe_counters *counters; /* statistics counters */
-> 2.4 only:
We need to adjust some contextual lines.
Replace this:
|@@ -477,7 +478,7 @@ int pause_listener(struct listener *l, int lpx, int lli)
| if (!lli)
| HA_RWLOCK_WRLOCK(LISTENER_LOCK, &l->lock);
|
|- if (l->state <= LI_PAUSED)
|+ if (!(l->flags & LI_F_FINALIZED) || l->state <= LI_PAUSED)
| goto end;
|
| if (l->rx.proto->suspend)
By this:
|@@ -477,7 +478,7 @@ int pause_listener(struct listener *l, int lpx, int lli)
| !(proc_mask(l->rx.settings->bind_proc) & pid_bit))
| goto end;
|
|- if (l->state <= LI_PAUSED)
|+ if (!(l->flags & LI_F_FINALIZED) || l->state <= LI_PAUSED)
| goto end;
|
| if (l->rx.proto->suspend)
And this:
|@@ -535,7 +536,7 @@ int resume_listener(struct listener *l, int lpx, int lli)
| if (MT_LIST_INLIST(&l->wait_queue))
| goto end;
|
|- if (l->state == LI_READY)
|+ if (!(l->flags & LI_F_FINALIZED) || l->state == LI_READY)
| goto end;
|
| if (l->rx.proto->resume)
By this:
|@@ -535,7 +536,7 @@ int resume_listener(struct listener *l, int lpx, int lli)
| !(proc_mask(l->rx.settings->bind_proc) & pid_bit))
| goto end;
|
|- if (l->state == LI_READY)
|+ if (!(l->flags & LI_F_FINALIZED) || l->state == LI_READY)
| goto end;
|
| if (l->rx.proto->resume)
-> 2.6 and 2.7 only:
struct listener 'flags' member still exists, let's use it.
Remove this from the current patch:
|@@ -226,7 +226,8 @@ struct li_per_thread {
| struct listener {
| enum obj_type obj_type; /* object type = OBJ_TYPE_LISTENER */
| enum li_state state; /* state: NEW, INIT, ASSIGNED, LISTEN, READY, FULL */
|- /* 2-byte hole here */
|+ uint16_t flags; /* listener flags: LI_F_* */
| int luid; /* listener universally unique ID, used for SNMP */
| int nbconn; /* current number of connections on this listener */
| unsigned int thr_idx; /* thread indexes for queue distribution : (t2<<16)+t1 */
Then, replace this:
|@@ -251,6 +250,9 @@ struct listener {
| EXTRA_COUNTERS(extra_counters);
| };
|
|+/* listener flags (16 bits) */
|+#define LI_F_FINALIZED 0x0001 /* listener made it to the READY||LIMITED||FULL state at least once, may be suspended/resumed safely */
|+
| /* Descriptor for a "bind" keyword. The ->parse() function returns 0 in case of
| * success, or a combination of ERR_* flags if an error is encountered. The
| * function pointer can be NULL if not implemented. The function also has an
By this:
|@@ -221,6 +221,7 @@ struct li_per_thread {
| };
|
| #define LI_F_QUIC_LISTENER 0x00000001 /* listener uses proto quic */
|+#define LI_F_FINALIZED 0x00000002 /* listener made it to the READY||LIMITED||FULL state at least once, may be suspended/resumed safely */
|
| /* The listener will be directly referenced by the fdtab[] which holds its
| * socket. The listener provides the protocol-specific accept() function to
This is an alternative fix that tries to address the same issue as
d1ebee177 ("BUG/MINOR: listener: close tiny race between
resume_listener() and stopping") while allowing resume_listener() to be
more versatile.
Indeed, because of the previous fix, resume_listener() is not able to
rebind stopped listeners, and this breaks the original behavior that is
documented in the function description:
"If the listener was only in the assigned
state, it's totally rebound. This can happen if a pause() has completely
stopped it. If the resume fails, 0 is returned and an error might be
displayed."
With relax_listener(), we now make sure to check l->state under the
listener lock so we don't call resume_listener() when the conditions are not
met.
As such, concurrently stopped listeners may not be rebound using
relax_listener().
Note: the documented race can't happen since 1b927eb3c ("MEDIUM: proto: stop
protocols under thread isolation during soft stop"), but older versions are
concerned as 1b927eb3c was not marked for backports.
Moreover, the patch also prevents the race between protocol_pause_all() and
resuming from LIMITED or FULL states.
This commit depends on:
- "MINOR: listener: add relax_listener() function"
This should be backported with d1ebee177 up to 2.4
(d1ebee177 is marked to be backported for all stable versions but the current
patch does not apply for versions < 2.4)
There is a need for a small difference between resuming and relaxing
a listener.
When resuming, we expect that the listener may completely resume, this includes
unpausing or rebinding if required.
Resuming a listener is a best-effort operation: no matter the current state,
try our best to bring the listener up to the LI_READY state.
There are some cases where we only want to "relax" listeners that were
previously restricted using limit_listener() or listener_full() functions.
Here we don't want to resuscitate listeners, we're simply interested in
cancelling out the previous restriction.
To this day, listener_resume() on an unbound listener is broken, that's why
the need for this wasn't felt yet.
But we're trying to restore historical listener_resume() behavior, so we better
prepare for this by introducing an explicit relax_listener() function that
only does what is expected in such cases.
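A rough sketch of what relax_listener() is expected to do, based on the
description above (simplified; error handling and tracing omitted):

    int relax_listener(struct listener *l, int lpx, int lli)
    {
        int ret = 1;

        if (!lli)
            HA_RWLOCK_WRLOCK(LISTENER_LOCK, &l->lock);

        if (l->state != LI_FULL && l->state != LI_LIMITED)
            goto end; /* not previously restricted: nothing to relax,
                       * and no resurrection of stopped listeners */

        ret = resume_listener(l, lpx, 1); /* listener lock already held */

      end:
        if (!lli)
            HA_RWLOCK_WRUNLOCK(LISTENER_LOCK, &l->lock);
        return ret;
    }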
This commit depends on:
- "MINOR: listener/api: add lli hint to listener functions"
Add listener lock hint (AKA lli) to (stop/resume/pause)_listener() functions.
All these functions implicitly take the listener lock when they are called:
It could be useful to be able to call them while already holding the lock, so
we're adding lli hint to make them take the lock only when it is missing.
This should only be backported if explicitly required by another commit.
--
-> 2.4 and 2.5 common backport notes:
These 2 commits need to be backported first:
- 187396e34 "CLEANUP: listener: function comment typo in stop_listener()"
- a57786e87 "BUG/MINOR: listener: null pointer dereference suspected by
coverity"
-> 2.4 special backport notes:
In addition to the previously mentioned dependencies, the patch needs to be
slightly adapted to match the corresponding contextual lines:
Replace this:
|@@ -471,7 +474,8 @@ int pause_listener(struct listener *l, int lpx)
| if (!lpx && px)
| HA_RWLOCK_WRLOCK(PROXY_LOCK, &px->lock);
|
|- HA_RWLOCK_WRLOCK(LISTENER_LOCK, &l->lock);
|+ if (!lli)
|+ HA_RWLOCK_WRLOCK(LISTENER_LOCK, &l->lock);
|
| if (l->state <= LI_PAUSED)
| goto end;
By this:
|@@ -471,7 +474,8 @@ int pause_listener(struct listener *l, int lpx)
| if (!lpx && px)
| HA_RWLOCK_WRLOCK(PROXY_LOCK, &px->lock);
|
|- HA_RWLOCK_WRLOCK(LISTENER_LOCK, &l->lock);
|+ if (!lli)
|+ HA_RWLOCK_WRLOCK(LISTENER_LOCK, &l->lock);
|
| if ((global.mode & (MODE_DAEMON | MODE_MWORKER)) &&
| !(proc_mask(l->rx.settings->bind_proc) & pid_bit))
Replace this:
|@@ -169,7 +169,7 @@ void protocol_stop_now(void)
| HA_SPIN_LOCK(PROTO_LOCK, &proto_lock);
| list_for_each_entry(proto, &protocols, list) {
| list_for_each_entry_safe(listener, lback, &proto->receivers, rx.proto_list)
|- stop_listener(listener, 0, 1);
|+ stop_listener(listener, 0, 1, 0);
| }
| HA_SPIN_UNLOCK(PROTO_LOCK, &proto_lock);
| }
By this:
|@@ -169,7 +169,7 @@ void protocol_stop_now(void)
| HA_SPIN_LOCK(PROTO_LOCK, &proto_lock);
| list_for_each_entry(proto, &protocols, list) {
| list_for_each_entry_safe(listener, lback, &proto->receivers, rx.proto_list)
| if (!listener->bind_conf->frontend->grace)
|- stop_listener(listener, 0, 1);
|+ stop_listener(listener, 0, 1, 0);
| }
| HA_SPIN_UNLOCK(PROTO_LOCK, &proto_lock);
Replace this:
|@@ -2315,7 +2315,7 @@ void stop_proxy(struct proxy *p)
| HA_RWLOCK_WRLOCK(PROXY_LOCK, &p->lock);
|
| list_for_each_entry(l, &p->conf.listeners, by_fe)
|- stop_listener(l, 1, 0);
|+ stop_listener(l, 1, 0, 0);
|
| if (!(p->flags & (PR_FL_DISABLED|PR_FL_STOPPED)) && !p->li_ready) {
| /* might be just a backend */
By this:
|@@ -2315,7 +2315,7 @@ void stop_proxy(struct proxy *p)
| HA_RWLOCK_WRLOCK(PROXY_LOCK, &p->lock);
|
| list_for_each_entry(l, &p->conf.listeners, by_fe)
|- stop_listener(l, 1, 0);
|+ stop_listener(l, 1, 0, 0);
|
| if (!p->disabled && !p->li_ready) {
| /* might be just a backend */
resume method was not explicitly defined for uxst protocol family.
Here we can safely use the default_resume_listener, just like the uxdg family.
This could be backported up to 2.4.
In protocol_bind_all() (involved in startup sequence):
We only free errmsg (set by fam->bind() attempt) when we make use of it.
But this could lead to some memory leaks because there are some cases
where we ignore the error message (e.g: verbose=0 with ERR_WARN messages).
As long as errmsg is set, we should always free it.
As mentioned earlier, this really is a minor leak because it can only occur on
specific conditions (error paths) during the startup phase.
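The intended discipline can be sketched as follows (error-message handling
only; the alert/warning texts follow the backport notes below, the rest of the
bind logic is unchanged):

    if (lerr & ERR_ALERT)
        ha_alert("Binding [%s:%d] for %s %s: %s\n",
                 listener->bind_conf->file, listener->bind_conf->line,
                 proxy_type_str(px), px->id, errmsg);
    else if (lerr & ERR_WARN)
        ha_warning("Binding [%s:%d] for %s %s: %s\n",
                   listener->bind_conf->file, listener->bind_conf->line,
                   proxy_type_str(px), px->id, errmsg);

    /* always release the message, even when verbosity made us ignore it */
    ha_free(&errmsg);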
This may be backported up to 2.4.
--
Backport notes:
-> 2.4 only:
Replace this:
| ha_warning("Binding [%s:%d] for %s %s: %s\n",
| listener->bind_conf->file, listener->bind_conf->line,
| proxy_type_str(px), px->id, errmsg);
By this:
| else if (lerr & ERR_WARN)
| ha_warning("Starting %s %s: %s\n",
| proxy_type_str(px), px->id, errmsg);
In uxst_bind_listener() and uxdg_bind_listener(), when the function
fails because the listener is not bound, both functions set
the error message but don't set the err status before returning.
Because of this, such an error is not properly handled by the upper functions.
Make sure this error is properly caught by returning a composition of
ERR_FATAL and ERR_ALERT.
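A sketch of the intended fix in both functions (the exact message string and
label name are illustrative):

    if (!(listener->rx.flags & RX_F_BOUND)) {
        msg = "receiving socket not bound";
        err |= ERR_FATAL | ERR_ALERT;   /* previously left unset */
        goto uxst_return;
    }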
This could be backported up to 2.4.
The half-closed timeout is now directly retrieved from the proxy
settings. There is no longer usage for the .hcto field in the stconn
structure. So let's remove it.
We now directly use the proxy settings to set the half-close timeout of a
stream-connector. The function sc_set_hcto() must be used to do so. This
timeout is only set when a shutw is performed. So it is not really a big
deal to use a dedicated function to do so.
It was done by hand by callers when a shutdown for read or write was
performed. It is now always handled by the functions performing the
shutdown. This way the callers don't take care of it. This will avoid some
bugs.
Expiration dates in trace messages are now relative to now_ms. It is much
easier to read traces this way. And an expired date is now negative. So, it
is also easy to detect when a timeout was reached.
Because read and write timeouts are now detected using the .lra and .fsb fields
of the stream-endpoint descriptor, it is better to also use these fields to
report read and write expiration date in trace messages. Especially because
old rex and wex fields will be removed.
We stop to use the channel's expiration dates to detect read and write
timeouts on the channels. We now rely on the stream-endpoint descriptor to
do so. All the stuff is handled in process_stream().
The stream relies on 2 helper functions to know if the receives or sends may
expire: sc_rcv_may_expire() and sc_snd_may_expire().
An endpoint should now set SE_FL_EXP_NO_DATA flag if it does not expect any
data from the opposite endpoint. This way, the stream will be able to
disable any read timeout on the opposite endpoint. Applets should use
applet_expect_no_data() and applet_expect_data() functions to set or clear
the flag. For now, only dns and sink forwarder applets are concerned.
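These helpers essentially toggle the flag on the applet's endpoint descriptor,
along the lines of this sketch:

    static inline void applet_expect_no_data(struct appctx *appctx)
    {
        se_fl_set(appctx->sedesc, SE_FL_EXP_NO_DATA);
    }

    static inline void applet_expect_data(struct appctx *appctx)
    {
        se_fl_clr(appctx->sedesc, SE_FL_EXP_NO_DATA);
    }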
The stream endpoint descriptor now owns two dates, lra (last read activity) and
fsb (first send blocked).
The first one is updated every time a read activity is reported, including data
received from the endpoint, successful connect, end of input and shutdown for
reads. A read activity is also reported when receives are unblocked. It will be
used to detect read timeouts.
The other one is updated when no data can be sent to the endpoint and reset
when some data are sent. It is the date of the first send blocked by the
endpoint. It will be used to detect write timeouts.
Helper functions are added to report read/send activity and to retrieve lra/fsb
date.
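The helpers roughly follow this pattern (sketch; the helper names shown are
illustrative, not necessarily the committed ones):

    static inline void sc_ep_report_read_activity(struct stconn *sc)
    {
        sc->sedesc->lra = now_ms;          /* last read activity */
    }

    static inline void sc_ep_report_blocked_send(struct stconn *sc)
    {
        if (!tick_isset(sc->sedesc->fsb))
            sc->sedesc->fsb = now_ms;      /* keep the first blocked date */
    }

    static inline void sc_ep_report_send_activity(struct stconn *sc)
    {
        sc->sedesc->fsb = TICK_ETERNITY;   /* data flowed again: reset it */
    }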
Read and write timeouts (.rto and .wto) are now replaced by a unique
timeout, called .ioto. Since the recent refactoring on channel's timeouts,
both use the same value, the client timeout on client side and the server
timeout on the server side. Thus, this part may be simplified. Now it
represents the I/O timeout.
After I/O handling, in sc_notify(), the stream's task is no longer
requeued. The stream may be woken up, but its task is not requeued. It is
useless nowadays and only avoids a call to process_stream() for edge
cases. It is not really a big deal if the stream is woken up for nothing
because its task expired. At worst, it will be responsible to compute its
new expiration date.
These timers are related to the I/O. Thus it is logical to move them into
the SE descriptor. The patch is a bit huge but it is just a
replacement. However it is error-prone.
From the stconn or the stream, helper functions are used to get, set or
reset these timers. This simplifies the timer manipulations.
Read and write timeouts concern the I/O. Thus, it is logical to move them into
the stconn. At the end, the stream is responsible to detect the timeouts. So
it is logical to have these values in the stconn and not in the SE
descriptor. But it may change depending on the refactoring.
So, now:
* scf->rto is used instead of req->rto
* scf->wto is used instead of res->wto
* scb->rto is used instead of res->rto
* scb->wto is used instead of req->wto
This patch removes CF_READ_ERROR and CF_WRITE_ERROR flags. We now rely on
SE_FL_ERR_PENDING and SE_FL_ERROR flags. SE_FL_ERR_PENDING is used for write
errors and SE_FL_ERROR for read or unrecoverable errors.
When a connection error is reported, SE_FL_ERROR and SE_FL_EOS are now set and a
read event and a write event are reported to be sure the stream will properly
process the error. At the stream-connector level, it is similar. When an error
is reported during a send, a write event is triggered. On the read side, nothing
more is performed because an error at this stage is enough to wake the stream
up.
A major change is brought with this patch. We stop checking the flags of the
opposite channel to report an abort or a timeout. It also means that when a read or
write error is reported on one side, we no longer update the other side. Thus
a read error on the server side no longer leads to a write error on the
client side. This should make error reporting easier.
This flag was introduced in 1.3 to fix a design issue. It was untouched since
then but there is no reason to still have this trick. Note it could be good
to review what happens in HTTP when the server is waiting for the end of the
request. It could be good to be sure a client timeout is always reported.
In bb581423b ("BUG/MEDIUM: httpclient/lua: crash when the lua task timeout
before the httpclient"), a new logic was implemented to make sure that
when a lua ctx is destroyed, related httpclients are correctly destroyed too
to prevent such httpclients from being resuscitated on a destroyed lua ctx.
This was implemented by adding a list of httpclients within the lua ctx,
and a new function, hlua_httpclient_destroy_all(), that is called under
hlua_ctx_destroy() and runs through the httpclients list in the lua context
to properly terminate them.
This was done with the assumption that no concurrent Lua garbage collection
cycles could occur on the same resources, which seems OK since the "lua"
context is about to be freed and is not explicitly being used by other threads.
But when 'lua-load' is used, the main lua stack is shared between multiple
OS threads, which means that all lua ctx in the process are linked to the
same parent stack.
Yet it seems that lua GC, which can be triggered automatically under
lua_resume() or manually through lua_gc(), does not limit itself to the
"coroutine" stack (the stack referenced in lua->T) when performing the cleanup,
but is able to perform some cleanup on the main stack plus coroutines stacks
that were created under the same main stack (via lua_newthread()) as well.
This can be explained by the fact that lua_newthread() coroutines are not meant
to be thread-safe by design.
Source: http://lua-users.org/lists/lua-l/2011-07/msg00072.html (lua co-author)
It did not cause other issues so far because most of the time when using
'lua-load', the global lua lock is taken when performing critical operations
that are known to interfere with the main stack.
But here in hlua_httpclient_destroy_all(), we don't run under the global lock.
Now that we properly understand the issue, the fix is pretty trivial:
We could simply guard the hlua_httpclient_destroy_all() under the global
lua lock, this would work but it could increase the contention over the
global lock.
Instead, we switched 'lua->hc_list' which was introduced with bb581423b
from a simple list to an mt_list so that concurrent accesses between
hlua_httpclient_destroy_all and hlua_httpclient_gc() are properly handled.
The issue was reported by @Mark11122 on Github #2037.
This must be backported with bb581423b ("BUG/MEDIUM: httpclient/lua: crash
when the lua task timeout before the httpclient") as far as 2.5.
In hlua_httpclient_send(), we replace hc->req.url with a new url.
But we forgot to free the original url that was allocated in
hlua_httpclient_new() or in the previous httpclient_send() call.
Because of this, each httpclient request performed under lua scripts would
result in a small leak. When stress-testing a lua action which uses httpclient,
the leak is clearly visible since we're leaking several Mbytes per minute.
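The fix boils down to releasing the previous URL before installing the new one,
roughly (assuming an ist-based field, as in the httpclient API):

    istfree(&hc->req.url);            /* drop the URL from the previous call */
    hc->req.url = istdup(ist(url));   /* then install the new one */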
This bug was discovered by chance when trying to reproduce GH issue #2037.
It must be backported up to 2.5.
Before looking for a secondary cache entry for a given request we
checked that the first entry was complete, which might prevent us from
using a valid entry if the first one with the same primary key is not
full yet.
Likewise, if the primary entry is complete but not the secondary entry
we try to use, we might end up using a partial entry from the cache as
a response.
This bug was raised in GitHub #2048.
It can be backported up to branch 2.4.
Since commit cc9bf2e5f "MEDIUM: cache: Change caching conditions"
responses that do not have an explicit expiration time are not cached
anymore. But this mechanism wrongly used the TX_CACHE_IGNORE flag
instead of the TX_CACHEABLE one. The effect this had is that a cacheable
response that corresponded to a request having a "Cache-Control:
no-cache" for instance would not be cached.
Contrary to what was said in the other commit message, the "checkcache"
option should not be impacted by the use of the TX_CACHEABLE flag
instead of the TX_CACHE_IGNORE one. The response is indeed considered as
not cacheable if it has no expiration time, regardless of the presence
of a cookie in the response.
This should fix GitHub issue #2048.
This patch can be backported up to branch 2.4.
HAPROXY_STARTUP_VERSION: contains the version used to start, in
master-worker mode this is the version which was used to start the
master, even after updating the binary and reloading.
This patch could be backported in every version since it is useful when
debugging.
This patch handles the case where the fd could be -1 when proc_self was
lost for some reason (environment variable corrupted or upgrade from < 1.9).
This could result in an out-of-bounds array access, fdtab[-1], and would crash.
Must be backported to every maintained version.
Previous versions ( < 1.9 ) of the master-worker process didn't have the
"HAPROXY_PROCESSES" environment variable which contains the list of
processes, fd etc.
The part which describes the master is created at first startup so if
you started the master with an old version you would never have
it.
Since patch 68836740 ("MINOR: mworker: implement a reload failure
counter"), the failedreloads member of the proc_self structure for the
master is set to 0. However if this structure does not exist, it will
result in a NULL dereference and crash the master.
This patch fixes the issue by creating the proc_self structure for the
master when it does not exist. It also shows a warning which states to
restart the master if that is the case, because we can't guarantee that
it will be working correctly.
This MUST be backported as far as 2.5, and could be backported in every
other stable branches.
When parsing the HAPROXY_PROCESSES environment variable, strtok was
done directly from the ptr resulting from getenv(), which replaces the ;
by \0, showing confusing environment variables when debugging in /proc
or in a corefile.
Example:
(gdb) x/39s *environ
[...]
0x7fff6935af64: "HAPROXY_PROCESSES=|type=w"
0x7fff6935af7e: "fd=3"
0x7fff6935af83: "pid=4444"
0x7fff6935af8d: "rpid=1"
0x7fff6935af94: "reloads=0"
0x7fff6935af9e: "timestamp=1676338060"
0x7fff6935afb3: "id="
0x7fff6935afb7: "version=2.4.0-8076da-1010+11"
This patch fixes the issue by doing a strdup on the variable.
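A minimal sketch of the idea (delimiters and parsing details are illustrative):

    char *env, *copy, *tok;

    env = getenv("HAPROXY_PROCESSES");
    if (!env)
        return 0;

    copy = strdup(env);     /* tokenize a private copy, not the environment */
    if (!copy)
        return -1;

    for (tok = strtok(copy, "|"); tok; tok = strtok(NULL, "|")) {
        /* parse "type=...;fd=...;pid=..." entries here */
    }
    free(copy);             /* the real environment string stays intact */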
Could be backported in previous versions (mworker_proc_to_env_list
exists since 1.9)
Implement a way to test if some options are enabled at run-time. For now,
following options may be detected:
POLL, EPOLL, KQUEUE, EVPORTS, SPLICE, GETADDRINFO, REUSEPORT,
FAST-FORWARD, SERVER-SSL-VERIFY-NONE
These options are those that can be disabled on the command line. This way
it is possible, from a reg-test for instance, to know if a feature is
supported or not :
feature cmd "$HAPROXY_PROGRAM -cc '!(globa.tune & GTUNE_NO_FAST_FWD)'"
The option was renamed to only permit disabling the fast-forward. First
there is no reason to enable it because it is the default behavior. Then it
introduced a bug because there is no way to be sure the command line has
precedence over the configuration this way. So, the option is now named
"tune.disable-fast-forward" and does not support any argument. And of
course, the command line option "-dF" now has precedence over the
configuration.
No backport needed.
For server connections, both the frontend and backend were considered to
enable the httpclose option. However, it is ambiguous because on client side
only the frontend is considered. In addition, with 2 frontends, one with the
option enabled and the other without it, the HTTP connection mode may differ
while it is a backend setting.
Thus, now, for the server side, only the backend is considered. Of course,
if the option is set for a listener, the option will be enabled if the
listener is the backend's connection.
We must never exit from the stream processing function with an expired
task. Otherwise, we are pretty sure this will end with a spinning loop. It
is really better to abort as far as possible and with the original buggy
state. This will ease the debug sessions.
If the TX buffer (->tx.buf) attached to the connection is not drained, there
are chances that this will be detected by qc_txb_release() which triggers
a BUG_ON_HOT() when this is the case as follows
[00|quic|2|c_conn.c:3477] UDP port unreachable : qc@0x5584f18d6d50 pto_count=0 cwnd=6816 ppif=1046 pif=1046
[00|quic|5|ic_conn.c:749] qc_kill_conn(): entering : qc@0x5584f18d6d50
[00|quic|5|ic_conn.c:752] qc_kill_conn(): leaving : qc@0x5584f18d6d50
[00|quic|5|c_conn.c:3532] qc_send_ppkts(): leaving : qc@0x5584f18d6d50 pto_count=0 cwnd=6816 ppif=1046 pif=1046
FATAL: bug condition "buf && b_data(buf)" matched at src/quic_conn.c:3098
Consume the remaining data in the TX buffer by calling b_del().
This bug arrived with this commit:
a2c62c314 MINOR: quic: Kill the connections on ICMP (port unreachable) packet receipt
Also take the opportunity with this patch to modify the comments for qc_send_ppkts()
which should have arrived with the a2c62c314 commit.
Must be backported to 2.7 where this latter commit is supposed to be backported.
Traces from this function would miss a TRACE_LEAVE() on the success path,
which had for consequences, 1) that it was difficult to figure where the
function was left, and 2) that we never had the allocated stream ID
clearly visible (actually the one returned by h2c_frt_stream_new() is
the right one but it's not obvious).
This can be backported to 2.7 and 2.6.
Functions which are called with dummy streams pass it down the traces
and that leads to somewhat confusing "h2s=0x1234568(0,IDL)" for example
while the nature of the called function makes this stream useless at that
place. Better not report a random pointer, especially since it always
requires to look at the code before remembering how this should be
interpreted.
Now what we're doing is that the idle stream only prints "h2s=IDL" which
is shorter and doesn't report a pointer, closed streams do not report
anything since the stream ID 0 already implies it, and other ones are
reported normally.
This could be backported to 2.7 and 2.6 as it improves traces legibility.
With the previous commit, quic-conn instances are now handled as jobs to prevent the
termination of haproxy process. This ensures that QUIC connections are
closed when all data are acknowledged by the client and there is no more
active streams.
The quic-conn layer emits a CONNECTION_CLOSE once the MUX has been
released and all streams are acknowledged. Then, the timer is scheduled
to definitely free the connection after the idle timeout period. This
allows to treat late-arriving packets.
Adjust this procedure to deactivate this timer when process stopping is
in progress. In this case, quic-conn timer is set to expire immediately
to free the quic-conn instance as soon as possible. This allows to
quickly close haproxy process.
This should be backported up to 2.7.
To prevent data loss for QUIC connections, haproxy global variable jobs
is incremented each time a quic-conn socket is allocated. This allows
the QUIC connection to terminate all its transfer operation during proxy
soft-stop. Without this patch, the process will be terminated without
waiting for QUIC connections.
Note that this is done in qc_alloc_fd(). This means only QUIC connection
with their owned socket will properly support soft-stop. In the other
case, the connection will be interrupted abruptly as before. Similarly,
jobs decrement is conducted in qc_release_fd().
This should be backported up to 2.7.
Properly implement support for haproxy soft-stop on QUIC MUX. This code
is similar to H2 MUX :
* on timeout refresh, if soft-stop is in progress, schedule the timeout to
expire with regards to the close-spread-end window.
* after input/output processing, if soft-stop in progress, shutdown the
connection. This is randomly spread by close-spread-end window. In the
case of H3 connection, a GOAWAY is emitted and the connection is kept
until all data are sent for opened streams. If the client tries to use
new streams, they are rejected in conformance with the GOAWAY
specification.
This ensures that MUX is able to forward all content properly before
closing the connection. The lower quic-conn layer is then responsible
for retransmission and should be closed when all data are acknowledged.
This will be implemented in the next commit to fully support soft-stop
for QUIC connections.
This should be backported up to 2.7.
Implement client-fin timeout for MUX quic. This timeout is used once an
applicative layer shutdown has been called. In HTTP/3, this corresponds
to the emission of a GOAWAY.
This should be backported up to 2.7.
Define a new function qc_process(). This function will regroup several
internal operations which should be called both from the I/O tasklet and the wake()
callback. For the moment, only streams purge is conducted there.
This patch is useful to support haproxy soft stop. This should be
backported up to 2.7.
Factorize shutdown operation in a dedicated function qc_shutdown(). This
will allow to call it from multiple places. A new flag QC_CF_APP_SHUT is
also defined to ensure it will only be executed once even if called
multiple times per connection.
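In essence (sketch; the application-level callback naming is illustrative):

    static void qc_shutdown(struct qcc *qcc)
    {
        if (qcc->flags & QC_CF_APP_SHUT)
            return;                              /* already done once */

        if (qcc->app_ops && qcc->app_ops->shutdown)
            qcc->app_ops->shutdown(qcc->ctx);    /* e.g. emit the H3 GOAWAY */

        qcc->flags |= QC_CF_APP_SHUT;
    }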
This commit will be useful to properly support haproxy soft stop.
This should be backported up to 2.7.
When a GOAWAY has been emitted, an ID is announced to represent handled
streams. The H3 RFC suggests that higher streams should be reset with the
error code H3_REQUEST_CANCELLED. This allows the peer to replay requests
on another connection.
For the moment, the impact of this change is limited as GOAWAY is only
used on connection shutdown just before the MUX is freed. However, for
soft-stop support, a GOAWAY can be emitted in anticipation while keeping
the MUX to finish the active streams. In this case, new streams opened
by the client are reset.
As a consequence of this change, app_ops.attach() operation has been
delayed to the very end of qcs_new(). This ensures that all qcs members
are initialized to support RESET_STREAM sending.
This should be backported up to 2.7.
h3s stores the current demux frame type and length as a state info. It
should be big enough to store a QUIC variable-length integer which is
the maximum H3 frame type and size.
Without this patch, there is a risk of integer overflow if H3 frame size
is bigger than INT_MAX. This can typically causes demux state mismatch
and demux frame error. However, no occurence has been found yet of this
bug with the current implementation.
This should be backported up to 2.6.
When the MUX is freed, the quic-conn layer may stay active until all
streams acknowledgment are processed. In this interval, if a new stream
is opened by the client, the quic-conn is thus now responsible to handle
it. This is done by the emission of a STOP_SENDING + RESET_STREAM.
Prior to this patch, the received packet was not acknowledged. This is
undesirable if the quic-conn is able to properly reject the request as
this can lead to unneeded retransmission from the client.
This must be backported up to 2.6.
When the MUX is freed, the quic-conn layer may stay active until all
streams acknowledgment are processed. In this interval, if a new stream
is opened by the client, the quic-conn is thus now responsible to handle
it. This is done by the emission of a STOP_SENDING.
This process has been completed to also emit a RESET_STREAM with the
same error code H3_REQUEST_REJECTED. This is done to conform with the H3
specification to invite the client to retry its request on a new
connection.
This should be backported up to 2.6.
When the MUX is freed, the quic-conn layer may stay active until all
streams acknowledgment are processed. In this interval, if a new stream
is opened by the client, the quic-conn is thus now responsible to handle
it. This is done by the emission of a STOP_SENDING.
This process is closely related to HTTP/3 protocol despite being handled
by the quic-conn layer. This highlights a flaw in our QUIC architecture
which should be adjusted. To reflect this situation, the function
qc_stop_sending_frm_enqueue() is renamed qc_h3_request_reject(). Also,
internal H3 treatment such as uni-directional bypass has been moved
inside the function.
This commit is only a refactor. However, bug fix on next patches will
rely on it so it should be backported up to 2.6.
This was revealed by Amaury when setting tune.quic.frontend.max-streams-bidi to 8
and asking a client to open 12 streams. haproxy has to send short packets
with small MAX_STREAMS frames encoded with 2 bytes, in addition to a packet number
encoded with only one byte. In this case, <len_frms> is the length of the encoded
frames to be added to the packet plus the length of the packet number.
Ensure the length of the packet is at least QUIC_PACKET_PN_MAXLEN by adding a PADDING
frame with (QUIC_PACKET_PN_MAXLEN - <len_frms>) as size. For instance with
a two-byte MAX_STREAMS frame and a one-byte packet number length, this adds
one byte of padding.
See https://datatracker.ietf.org/doc/html/rfc9001#name-header-protection-sample.
Must be backported to 2.7 and 2.6.
When receiving an Initial packet a peer must drop it if the datagram is smaller
than 1200 bytes. Before this patch, the entire datagram was dropped instead.
In such a case, drop only the packet, after having parsed its length.
Must be backported to 2.6 and 2.7
This bug arrives with this commit:
982896961 MINOR: quic: split and rename qc_lstnr_pkt_rcv()
The first block of code consists in possibly setting this variable to true.
But it was already initialized to true before entering this code section.
Should be initialized to false.
Also take the opportunity to remove an unused "err" label.
Must be backported to 2.6 and 2.7.
Before probing the Initial packet number space, verify that we can at least
send 1200 bytes per datagram. This may not be the case due to the amplification limit.
Must be backported to 2.6 and 2.7.
This should help in diagnosing issues revealed by the interop runner which counts
the number of handshakes from the number of Initial packets sent by the server.
Must be backported to 2.7.
The aim of this function is to rearm the idle timer. The ->expire
field of the timer task was updated without being requeued.
Some connections could be unexpectedly terminated.
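The fix amounts to requeuing the task after updating its expiration, along
these lines (field names are illustrative):

    qc->idle_timer_task->expire = tick_add(now_ms, MS_TO_TICKS(qc->max_idle_timeout));
    task_queue(qc->idle_timer_task);   /* actually (re)insert it in the wait queue */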
Must be backported to 2.6 and 2.7.
This is very helpful during retransmission when receiving ICMP port unreachable
errors after the peer has left. This is the only case at present where
qc_send_hdshk_pkts() or qc_send_app_probing() may fail (when they call
qc_send_ppkts() which fails with ECONNREFUSED as errno).
Also make the callers of qc_dgrams_retransmit() stop their packet processing. This
is the case for quic_conn_app_io_cb() and quic_conn_io_cb().
These modifications definitively stop any packet processing when receiving
ICMP port unreachable errors.
Must be backported to 2.7.
The send*() syscalls through which such ICMP packet receptions are reported
fail with ECONNREFUSED as errno.
man(7) udp
ECONNREFUSED
No receiver was associated with the destination address.
This might be caused by a previous packet sent over the socket.
We must kill the underlying connection asap.
Must be backported to 2.7.
This code was there because the timer task was not running on the same thread
as the one which parses the QUIC packets. Now that this is no longer the case,
we can wake up this task directly.
Must be backported to 2.7.
Move quic_rx_pkts_del() out of quic_conn.h to make it benefit from the TRACE API.
Add traces which have already helped in diagnosing an issue encountered with
ngtcp2 which sent too many 1RTT packets before the handshake completion. This
has been fixed here after having discussed with Tasuhiro on QUIC dev slack:
https://github.com/ngtcp2/ngtcp2/pull/663
Must be backported to 2.7.
Some counters could potentially be incremented even if the send*() syscall returned
no error, when ret >= 0 and ret != sz. This could be the case for instance if
a first call to send*() returned -1 with errno set to EINTR (or any previous syscall
had set errno to a non-zero value) and if the next call to send*() returned
something positive and smaller than <sz>.
Must be backported to 2.7 and 2.6.
Add traces inside h3_decode_qcs(). Every error path now has its
dedicated trace, which should simplify debugging. Each early return has
been converted to a goto invocation.
To complete the demux tracing, demux frame type and length are now
printed using the h3s instance whenever its possible on trace
invocation. A new internal value H3_FT_UNINIT is used as a frame type to
mark demuxing as inactive.
This should be backported up to 2.7.
Since the recent changes on the clocks, now.tv_sec is not to be used
between processes because it's a clock which is local to the process and
does not contain a real unix timestamp. This patch fixes the issue by
using "data.tv_sec" which is the wall clock instead of "now.tv_sec'.
It prevents having incoherent timestamps.
It also introduces some checks on negatives values in order to never
displays a netative value if it was computed from a wrong value set by a
previous haproxy version.
It must be backported as far as 2.0.
Implement support for clients that emit the stream FIN with an empty
STREAM frame. For that, qcc_recv() offset comparison has been adjusted.
If offset has already been received but the FIN bit is now transmitted,
do not skip the rest of the function and call application layer
decode_qcs() callback.
Without this, streams will be kept open forever as HTX EOM is never
transferred to the upper stream layer.
This behavior was observed with mvfst client prior to its patch
38c955a024aba753be8bf50fdeb45fba3ac23cfd
Fix hq-interop (HTTP 0.9 over QUIC)
This notably caused the interop multiplexing test to fail as unclosed
streams on haproxy side prevented the emission of new MAX_STREAMS frame
to the client.
This should be backported up to 2.6. It also relies on the previous commit:
381d8137e3
MINOR: h3/hq-interop: handle no data in decode_qcs() with FIN set
Properly handle a STREAM frame with no data but the FIN bit set at the
application layer. H3 and hq-interop decode_qcs() callbacks have been
adjusted to not return early in this case.
If the FIN bit is accepted, a HTX EOM must be inserted for the upper
stream layer. If the FIN is rejected because the stream cannot be
closed, a proper CONNECTION_CLOSE error will be triggered.
A new utility function qcs_http_handle_standalone_fin() has been
implemented in the qmux_http module. This allows to simply add the HTX
EOM on the qcs HTX buffer. If the HTX buffer is empty, an EOT is first added
to ensure it will be transmitted above.
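A rough sketch of the helper (HTX calls as described above; the rx buffer
member name is illustrative and allocation-failure handling is omitted):

    void qcs_http_handle_standalone_fin(struct qcs *qcs)
    {
        struct buffer *appbuf = &qcs->rx.app_buf;
        struct htx *htx = htx_from_buf(appbuf);

        if (htx_is_empty(htx)) {
            /* EOM needs at least one block to be carried: add an EOT first */
            htx_add_endof(htx, HTX_BLK_EOT);
        }
        htx->flags |= HTX_FL_EOM;
        htx_to_buf(htx, appbuf);
    }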
This commit will allow to properly handle FIN notify through an empty
STREAM frame. However, it is not sufficient as currently qcc_recv() skips
the decode_qcs() invocation when the offset is already received. This
will be fixed in the next commit.
This should be backported up to 2.6 along with the next patch.
Several times during debugging it has been difficult to find a way to
reliably indicate if a thread had been started and if it was still
running. It's really not easy because the elements we look at are not
necessarily reliable (e.g. harmless bit or idle bit might not reflect
what we think during a signal). And such notions can be subjective
anyway.
Here we define two thread flags, TH_FL_STARTED which is set as soon as
a thread enters run_thread_poll_loop() and drops the idle bit, and
another one, TH_FL_IN_LOOP, which is set when entering run_poll_loop()
and cleared when leaving it. This should help init/deinit code know
whether it's called from a non-initialized thread (i.e. tid must not
be trusted), or shared functions know if they're being called from a
running thread or from init/deinit code outside of the polling loop.
As reported in github issue #1881, there are situations where an excess
of TLS handshakes can cause a livelock. What's happening is that normally
we process at most one TLS handshake per loop iteration to maintain the
latency low. This is done by tagging them with TASK_HEAVY, queuing these
tasklets in the TL_HEAVY queue. But if something slows down the loop, such
as a connect() call when no more ports are available, we could end up
processing no more than a few hundred or thousands handshakes per second.
If the limit becomes lower than the rate of incoming handshakes, we will
accumulate them and at some point users will get impatient and give up or
retry. Then a new problem happens: the queue fills up with even more
handshake attempts, only one of which will be handled per iteration, so
we can end up processing only outdated handshakes at a low rate, with
basically nothing else in the queue. This can for example happen in
parallel with health checks that don't require incoming handshakes to
succeed to continue to cause some activity that could maintain the high
latency stuff active.
Here we're taking a slightly different approach. First, instead of always
allowing only one handshake per loop (and usually it's critical for
latency), we take the current situation into account:
- if configured with tune.sched.low-latency, the limit remains 1
- if there are other non-heavy tasks, we set the limit to 1 + one
per 1024 tasks, so that a heavily loaded queue of 4k handshakes
per thread will be able to drain them at ~4 per loop with a
limited impact on latency
- if there are no other tasks, the limit grows to 1 + one per 128
tasks, so that a heavily loaded queue of 4k handshakes per thread
will be able to drain them at ~32 per loop with still a very
limited impact on latency since only I/O will get delayed.
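The resulting per-loop budget for TASK_HEAVY tasklets can be summarized by a
sketch like this (function and parameter names are illustrative, not the
committed ones):

    static unsigned int heavy_budget(unsigned int queued_heavy,
                                     unsigned int other_tasks, int low_latency)
    {
        if (low_latency)
            return 1;                          /* strict latency mode */
        if (other_tasks)
            return 1 + queued_heavy / 1024;    /* ~4/loop with 4k handshakes queued */
        return 1 + queued_heavy / 128;         /* ~32/loop when nothing else waits */
    }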
It was verified on a 56-core Xeon-8480 that this did not degrade the
latency; all requests remained below 1ms end-to-end in full close+
handshake, and even 500us under low-lat + busy-polling.
This must be backported to 2.4.
There's a per-thread "long_rq" counter that is used to indicate how
often we leave the scheduler with tasks still present in the run queue.
The purpose is to know when tune.runqueue-depth served to limit latency,
due to a large number of tasks being runnable at once.
However there's a bug there, it's not always set: if after the first
run, one heavy task was processed and later only heavy tasks remain,
we'll loop back to not_done_yet where we try to pick more tasks, but
none are eligible (since heavy ones have already run) so we directly
return without incrementing the counter. This is what causes ultra-low
values on long_rq during massive SSL handshakes, that are confusing
because they make one believe that tl_class_mask doesn't have the HEAVY
flag anymore. Let's just fix that by not returning from the middle of
the function.
This can be backported as far as 2.4.
In 2.7, the method used to check for a sleeping thread changed with
commit e7475c8e7 ("MEDIUM: tasks/fd: replace sleeping_thread_mask with
a TH_FL_SLEEPING flag"). Previously there was a global sleeping mask
and now there is a flag per thread. The commit above partially broke
the watchdog by looking at the current thread's flags via th_ctx
instead of the reported thread's flags, and using an AND condition
instead of an OR to update and leave. This can cause a wrong thread
to be killed when the load is uneven. For example, when enabling
busy polling and sending traffic over a single connection, all
threads have their run time grow, and if the one receiving the
signal is also processing some traffic, it will not match the
sleeping/harmless condition and will set the stuck flag, then die
upon next invocation. While it's reproducible in tests, it's unlikely
to be met in field.
This fix should be backported to 2.7.
The -dF option can now be used to disable data fast-forward. It does the
same as the global option "tune.fast-forward off". Some reg-tests may rely
on this optim. To detect the feature and skip such script, the following
vtest command must be used:
feature cmd "$HAPROXY_PROGRAM -cc '!(globa.tune & GTUNE_NO_FAST_FWD)'"
The new global option "tune.fast-forward" can be set to "off" to disable the
data fast-forward. It is a debug option, thus it is internally marked as
experimental. The directive "expose-experimental-directives" must be set
first to use this one. By default, the data fast-forward is enabled.
It could be useful to force the stream to be woken up when data are
received, to be sure everything works fine in this case. The data
fast-forward is an optim. It must work without it. But some code may rely on
the fact the stream will not be woken up. With this option, it is possible
to spot some hidden bugs.
At the stream level, the read expiration date is unset if a shutr was
received but not if the end of input was reached. If we know no more data
are expected, there is no reason to leave the read expiration date armed,
except to respect clientfin/serverfin timeout on some circumstances.
This patch could slowly be backported as far as 2.2.
During the payload forwarding, since the commit f2b02cfd9 ("MAJOR: http-ana:
Review error handling during HTTP payload forwarding"), when an error
occurred on one side, we don't rely anymore on a specific HTTP message state
to detect it on the other side. However, nothing was added to detect the
error. Thus, when this happens, a spinning loop may be experienced and an
abort because of the watchdog.
To fix the bug, we must detect the opposite side is closed by checking the
opposite SC state. Concretely, in http_end_request() and http_end_response(),
we wait for the other side iff the HTTP message state is lower than
HTTP_MSG_DONE (the message is not finished) and the SC state is not
SC_ST_CLO (the opposite side is not closed). In these functions, we don't
care if there was an error on the opposite side. We only take care to detect
when we must stop waiting for the other side.
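For the request side, the waiting condition now roughly looks like this
(sketch; the response side is symmetric and the label name is illustrative):

    /* in http_end_request(): keep waiting for the response side only if it
     * is neither finished (>= HTTP_MSG_DONE) nor closed (SC_ST_CLO) */
    if (txn->rsp.msg_state < HTTP_MSG_DONE && s->scb->state != SC_ST_CLO)
        goto end;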
This patch should fix the issue #2042. No backport needed.
The ssl_bind_kw structure is exclusively used for crt-list keywords; it
must be renamed to remove the confusion.
The structure was renamed ssl_crtlist_kws.
The HTTP header parsers surprisingly accept empty header field names,
and this is a leftover from the original code that was agnostic to this.
When muxes were introduced, for H2 first, the HPACK decompressor needed
to feed headers lists, and since empty header names were strictly
forbidden by the protocol, the lists of headers were purposely designed
to be terminated by an empty header field name (a principle that is
similar to H1's empty line termination). This principle was preserved
and generalized to other protocols migrated to muxes (H1/FCGI/H3 etc)
without anyone ever noticing that the H1 parser was still able to deliver
empty header field names to this list. In addition to this it turns out
that the HPACK decompressor, despite a comment in the code, may
successfully decompress an empty header field name, and this mistake
was propagated to the QPACK decompressor as well.
The impact is that an empty header field name may be used to truncate
the list of headers and thus make some headers disappear. While for
H2/H3 the impact is limited as haproxy sees a request with missing
headers, and headers are not used to delimit messages, in the case of
HTTP/1, the impact is significant because the presence (and sometimes
contents) of certain sensitive headers is detected during the parsing.
Thus, some of these headers may be seen, marked as present, their value
extracted, but never delivered to upper layers and obviously not
forwarded to the other side either. This can have for consequence that
certain important header fields such as Connection, Upgrade, Host,
Content-length, Transfer-Encoding etc are possibly seen as different
between what haproxy uses to parse/forward/route and what is observed
in http-request rules and of course, forwarded. One direct consequence
is that it is possible to exploit this property in HTTP/1 to make
affected versions of haproxy forward more data than is advertised on
the other side, and bypass some access controls or routing rules by
crafting extraneous requests. Note, however, that responses to such
requests will normally not be passed back to the client, but this can
still cause some harm.
This specific risk can be mostly worked around in configuration using
the following rule that will rely on the bug's impact to precisely
detect the inconsistency between the known body size and the one
expected to be advertised to the server (the rule works from 2.0 to
2.8-dev):
http-request deny if { fc_http_major 1 } !{ req.body_size 0 } !{ req.hdr(content-length) -m found } !{ req.hdr(transfer-encoding) -m found } !{ method CONNECT }
This will exclusively block such carefully crafted requests delivered
over HTTP/1. HTTP/2 and HTTP/3 do not need content-length, and a body
that arrives without being announced with a content-length will be
forwarded using transfer-encoding, hence will not cause discrepancies.
In HAProxy 2.0 in legacy mode ("no option http-use-htx"), this rule will
simply have no effect but will not cause trouble either.
A clean solution would consist in modifying the loops iterating over
these headers lists to check the header name's pointer instead of its
length (since both are zero at the end of the list), but this requires
to touch tens of places and it's very easy to miss one. Functions such
as htx_add_header(), htx_add_trailer(), htx_add_all_headers() would be
good starting points for such a possible future change.
Instead the current fix focuses on blocking empty headers where they
are first inserted, hence in the H1/HPACK/QPACK decoders. One benefit
of the current solution (for H1) is that it allows "show errors" to
report a precise diagnostic when facing such invalid HTTP/1 requests,
with the exact location of the problem and the originating address:
$ printf "GET / HTTP/1.1\r\nHost: localhost\r\n:empty header\r\n\r\n" | nc 0 8001
HTTP/1.1 400 Bad request
Content-length: 90
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>
$ socat /var/run/haproxy.stat <<< "show errors"
Total events captured on [10/Feb/2023:16:29:37.530] : 1
[10/Feb/2023:16:29:34.155] frontend decrypt (#2): invalid request
backend <NONE> (#-1), server <NONE> (#-1), event #0, src 127.0.0.1:31092
buffer starts at 0 (including 0 out), 16334 free,
len 50, wraps at 16336, error at position 33
H1 connection flags 0x00000000, H1 stream flags 0x00000810
H1 msg state MSG_HDR_NAME(17), H1 msg flags 0x00001410
H1 chunk len 0 bytes, H1 body len 0 bytes :
00000 GET / HTTP/1.1\r\n
00016 Host: localhost\r\n
00033 :empty header\r\n
00048 \r\n
I want to address sincere and warm thanks for their great work to the
team composed of the following security researchers who found the issue
together and reported it: Bahruz Jabiyev, Anthony Gavazzi, and Engin
Kirda from Northeastern University, Kaan Onarlioglu from Akamai
Technologies, Adi Peleg and Harvey Tuch from Google. And kudos to Amaury
Denoyelle from HAProxy Technologies for spotting that the HPACK and
QPACK decoders would let this pass despite the comment explicitly
saying otherwise.
This fix must be backported as far as 2.0. The QPACK changes can be
dropped before 2.6. In 2.0 there is also the equivalent code for legacy
mode, which doesn't suffer from the list truncation, but it would better
be fixed regardless.
CVE-2023-25725 was assigned to this issue.
A parenthesis was misplaced in a memcmp() call.
As a consequence, clients could not reuse a UDP address for a new connection.
Must be backported to 2.7.
The commit d5983cef8 ("MINOR: listener: remove the useless ->default_target
field") revealed a bug in the SPOE. No default-target must be defined for
the SPOE agent frontend. SPOE applets are used on the frontend side and a
TCP connection is established on the backend side.
Because of this bug, since the commit above, the stream target is set to the
SPOE applet instead of the backend connection, leading to a spinning loop on
the applet when it is released because we are unable to close the backend side.
This patch should fix the issue #2040. It only affects the 2.8-DEV but to
avoid any future bug, it should be backported to all stable versions.
When a client timeout is reported by the H1 mux, it is not an error but an
abort. Thus, the H1C_F_ERROR flag must not be set. It is especially important
not to inhibit the send. Because of this bug, a 408-Request-time-out is
reported in logs but the error message is not sent to the client.
This patch must be backported to 2.7.
We should not report LAST data in log if the response is in TUNNEL mode on
client close/timeout because there is no way to be sure it is the last
data. It means, it can only be reported in DONE, CLOSING or CLOSE states.
No backport needed.
There is already a test on CF_EOI and CF_SHUTR. The last one is always set
when a read error is reported. Thus there is no reason to check
CF_READ_ERROR.
This change was performed on all applet I/O handlers but one was missed. In
the CLI I/O handler used to commit a CA/CRL file, we can remove the test on
CF_WRITE_ERROR because there is already a test on CF_SHUTW.
This has been detected by libasan as follows:
=================================================================
==3170559==ERROR: AddressSanitizer: global-buffer-overflow on address 0x55cf77faad08 at pc 0x55cf77a87370 bp 0x7ffc01bdba70 sp 0x7ffc01bdba68
READ of size 8 at 0x55cf77faad08 thread T0
#0 0x55cf77a8736f in cli_find_kw src/cli.c:335
#1 0x55cf77a8a9bb in cli_parse_request src/cli.c:792
#2 0x55cf77a8c385 in cli_io_handler src/cli.c:1024
#3 0x55cf77d19ca1 in task_run_applet src/applet.c:245
#4 0x55cf77c0b6ba in run_tasks_from_lists src/task.c:634
#5 0x55cf77c0cf16 in process_runnable_tasks src/task.c:861
#6 0x55cf77b48425 in run_poll_loop src/haproxy.c:2934
#7 0x55cf77b491cf in run_thread_poll_loop src/haproxy.c:3127
#8 0x55cf77b4bef2 in main src/haproxy.c:3783
#9 0x7fb8b0693d09 in __libc_start_main ../csu/libc-start.c:308
#10 0x55cf7764f4c9 in _start (/home/flecaille/src/haproxy-untouched/haproxy+0x1914c9)
0x55cf77faad08 is located 0 bytes to the right of global variable 'cli_kws' defined in 'src/quic_conn.c:7834:27' (0x55cf77faaca0) of size 104
SUMMARY: AddressSanitizer: global-buffer-overflow src/cli.c:335 in cli_find_kw
Shadow bytes around the buggy address:
According to cli_find_kw() code and cli_kw_list struct definition, the second
member of this structure ->kw[] must be a null-terminated array.
Add a last element with default initializers to <cli_kws> global variable which
is impacted by this bug.
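For reference, registered keyword lists follow this shape, with the all-zero
entry acting as the terminator cli_find_kw() stops on (the usage string shown
is illustrative):

    static struct cli_kw_list cli_kws = {{ },{
        { { "show", "quic", NULL }, "show quic [all] : display QUIC connections",
          cli_parse_show_quic, cli_io_handler_dump_quic },
        {{},}   /* mandatory null-terminating entry */
    }};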
This bug arrived with this commit:
15c74702d MINOR: quic: implement a basic "show quic" CLI handler
Must be backported to 2.7 where this previous commit has already been
backported.
This is not really a bug fix because it does not address any known issue, and
it is flagged as MEDIUM because it is sensitive. Still, if there are some
extra calls to process_stream(), it can become an issue because, in
si_update_rx(), we may disable reading for the SC when outgoing data are
blocked in the input channel. But it is not really process_stream()'s job to
take care of that, and this may block data receipt.
It is an old code, mainly here to avoid wakeup in loop on the stats
applet. Today, it seems useless and can lead to bugs. An endpoint is
responsible to block the SC if it waits for some room and the opposite
endpoint is responsible to unblock it when some data are sent. The stream
should not interfere on this part.
This patch could be backported to 2.7 after a period of observation. And it
should only be backported to lower versions if an issue is reported.
For an unknown reason, the change of the uptime calculation for the HTML
page didn't make it into commit 6093ba47c ("BUG/MINOR: clock: do not mix
wall-clock and monotonic time in uptime calculation"). Let's address it
as well, otherwise the stats page will display an incorrect uptime.
No backport needed unless the patch above is backported.
Uptime calculation for master process was incorrect as it used
<start_date> as its timestamp base time. Fix this by using the scheduler
time <start_time> for this.
The impact of this bug is minor as the timestamp base time is only used for
the "show proc" CLI output. It was highlighted by the following commit, which
caused a negative value to be displayed for the master process uptime on
"show proc" output:
28360dc53f
MEDIUM: clock: force internal time to wrap early after boot
This should be backported up to 2.0.
An incorrect printf format specifier "%lu" was used in the "show quic" handler
for a uint64_t value. This breaks the build on 32-bit architectures. To fix
this portability issue, force an explicit cast to unsigned long long together
with the "%llu" specifier.
This must be backported up to 2.7.
With a connection, when data are received, if these data are sent to the
opposite side because the fast-forwarding is possible, the stream may be
woken up on some conditions (at the end of sc_app_chk_snd_conn()):
* The channel is shut for write
* The SC is not in the "established" state
* The stream must explicitly be woken up on write and all data was sent
* The connection was just established.
A bug on the last condition was introduced with the commit d89884153
("MEDIUM: channel: Use CF_WRITE_EVENT instead of CF_WRITE_PARTIAL"). The
stream is now woken up on any write events.
This patch fixes this issue and restores the original behavior. No backport
is needed.
Filtering of closing/draining connections on "show quic" was not
properly implemented. As a consequence, the extra argument "all", meant to
display all connections, had no effect. This patch fixes this and restores
the output of all connections.
This must be backported up to 2.7.
Reduce default "show quic" output by masking connection on
closing/draing state due to a CONNECTION_CLOSE emission/reception. These
connections can still be displayed using the special argument "all".
This should be backported up to 2.7.
Complete "show quic" handler by displaying information about
quic_stream_desc entries. These structures are used to emit stream data
and store them until acknowledgment is received.
This should be backported up to 2.7.
Complete "show quic" handler by displaying various information related
to each encryption level and packet number space. Most notably, ack
ranges and bytes in flight are present to help debug retransmission
issues.
This should be backported up to 2.7.
Complete "show quic" handler by displaying information related to the
quic_conn owned socket. First, the FD is printed, followed by the
address of the local and remote endpoint.
This should be backported up to 2.7.
Complete "show quic" handler. Source and destination CIDs are printed
for every connection. This is completed by state info reflecting whether the
handshake is completed, whether a CONNECTION_CLOSE has been emitted or
received, and the allocation status of the attached MUX. Finally the idle
timer expiration is also printed.
This should be backported up to 2.7.
Implement a basic "show quic" CLI handler. This command will be useful
to display various information on all the active QUIC frontend
connections.
This work is heavily inspired by "show sess". Most notably, a global
list of quic_conn has been introduced to be able to loop over them. This
list is stored per thread in ha_thread_ctx.
Also add three CLI handlers for "show quic" in order to allocate and
free the command context. The dump handler runs on thread isolation.
Each quic_conn is referenced using a back-ref to handle deletion during
handler yielding.
For the moment, only a list of raw quic_conn pointers is displayed. The
handler will be completed over time with more information as needed.
This should be backported up to 2.7.
Commit 0aba11e9e ("MINOR: quic: remove unnecessary quic_session_accept()")
overlooked one problem, in session_accept_fd() at the end, there's a bunch
of FD-specific stuff that either sets up or resets the socket at the TCP
level. The tests are mostly performed for AF_INET/AF_INET6 families but
they're only for one part (i.e. to avoid setting up TCP options on UNIX
sockets). Other pieces continue to configure the socket regardless of its
family. All of this directly acts on the FD, which is not correct since
the FD is not valid here, it corresponds to the QUIC handle. The issue
is much more visible when "option nolinger" is enabled in the frontend,
because the access to fdtab[cfd].state immediately crashes on the first
connection, as can be seen in github issue #2030.
This patch bypasses this setup for FD-less connections, such as QUIC.
However some of them could definitely be relevant to the QUIC stack, or
even to UNIX sockets sometimes. A better long-term solution would consist
in implementing a setsockopt() equivalent at the protocol layer that would
be used to configure the socket, either the FD or the QUIC conn depending
on the case. Some of them would not always be implemented but that would
allow unifying all this code.
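A minimal sketch of the bypass, assuming a negative descriptor denotes an
FD-less connection (illustrative, not the actual patch):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* TCP socket tuning only makes sense when the accepted handle is a real
 * file descriptor; QUIC connections have no valid FD at this point.
 */
static void setup_accepted_conn(int cfd)
{
	int one = 1;

	if (cfd < 0)
		return;	/* FD-less connection: nothing to do at the socket level */

	/* example of the kind of per-FD option that was applied blindly before */
	setsockopt(cfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}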
This fix must be backported everywhere the commit above is backported,
namely 2.6 and 2.7.
Thanks to github user @twomoses for the nicely detailed report.
The commit 7f59d68fe ("BUG/MEDIIM: stconn: Flush output data before
forwarding close to write side") introduced a regression. When the read side
is closed, the close is not forwarded to the write side if there are some
pending outgoind data. The idea is to foward data first and the close the
write side. However, when fast-forwarding is enabled and last data block is
received with the read0, the close is never forwarded.
We cannot revert the commit above because it really fix an issue. However,
we can schedule the shutdown for write by setting CF_SHUTW_NOW flag on the
write side. Indeed, it is the purpose of this flag.
To not replicate ugly and hardly maintainable code block at different places
in stconn.c, an helper function is used. Thus, sc_cond_forward_shutw() must
be called to know if the close can be fowarded or not. It returns 1 if it is
possible. In this case, the caller is responsible to forward the close to
the write side. Otherwise, if the close cannot be forwarded, 0 is
returned. It happens when it should not be performed at all. Or when it
should only be delayed, waiting for the input channel to be flushed. In this
last case, the CF_SHUTW_NOW flag is set in the output channel.
This patch should fix the issue #2033. It must be backported with the commit
above, thus at least as far as 2.2.
When a new server was added through the cli using "server add" command,
the maxconn/minconn consistency check historically implemented in
check_config_validity() for static servers was missing.
As a result, when adding a server with the maxconn parameter without the
minconn set, the server was unable to handle any connection because
srv_dynamic_maxconn() would always return 0.
Consider the following reproducer:
| global
| stats socket /tmp/ha.sock mode 660 level admin expose-fd listeners
|
| defaults
| timeout client 5s
| timeout server 5s
| timeout connect 5s
|
| frontend test
| mode http
| bind *:8081
| use_backend farm
|
| listen dummyok
| bind localhost:18999
| mode http
| http-request return status 200 hdr test "ok"
|
| backend farm
| mode http
Start haproxy and perform the following :
echo "add server farm/t1 127.0.0.1:18999 maxconn 100" | nc -U /tmp/ha.sock
echo "enable server farm/t1" | nc -U /tmp/ha.sock
curl localhost:8081 # -> 503 after 5s connect timeout
Thanks to ("MINOR: cfgparse/server: move (min/max)conn postparsing logic into
dedicated function"), we are now able to perform the consistency check after
the new dynamic server has been parsed.
This is enough to fix the issue documented here that was reported by
Thomas Pedoussaut on the ML.
This commit depends on:
- ("MINOR: cfgparse/server: move (min/max)conn postparsing logic into
dedicated function")
It must be backported to 2.6 and 2.7
In the check_config_validity() function, we performed some consistency checks
to adjust the minconn/maxconn attributes for each declared server.
We move this logic into a dedicated function named srv_minmax_conn_apply()
to be able to perform those checks later in the process' life when needed
(ie: dynamic servers).
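A minimal sketch of the rule being factored out, using a simplified struct
(not necessarily the exact implementation):

struct srv_conn_limits {
	int minconn;
	int maxconn;
};

static void srv_minmax_conn_apply(struct srv_conn_limits *srv)
{
	if (srv->minconn > srv->maxconn) {
		/* only minconn was set (or it exceeds maxconn): use it as a
		 * static limit */
		srv->maxconn = srv->minconn;
	}
	else if (srv->maxconn && !srv->minconn) {
		/* maxconn without minconn: make the limit static so the
		 * dynamic maxconn computation cannot collapse to 0 */
		srv->minconn = srv->maxconn;
	}
}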
Deduplicate the code which checks the OCSP update in the ckch_store and
in the crtlist_entry.
Also, jump immediately to error handling when ERR_FATAL is caught.
GH issue #2034 clearly indicates yet another case of time roll-over
that went badly. Issues that happen only once every 50 days are hard
to detect and debug, and are usually reported more or less synchronized
from multiple sources. This patch finally does what had long been planned
but never done yet, which is to force the time to wrap early after boot
so that any such remaining issue can be spotted quicker. The margin delay
here is 20s (it may be changed by setting BOOT_TIME_WRAP_SEC to another
value). This value seems sufficient to permit failed health checks to
succeed and traffic to come in and possibly start to update some time
stamps (accept dates in logs, freq counters, stick-tables expiration
dates etc).
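The principle can be sketched as follows (names are illustrative, not the
actual clock.c code):

#include <stdint.h>

#define BOOT_TIME_WRAP_SEC 20

/* Choose an offset so that the 32-bit millisecond tick reaches its wrap
 * point BOOT_TIME_WRAP_SEC seconds after boot.
 */
static uint32_t boot_wrap_offset(uint32_t boot_ms)
{
	/* boot_ms + offset == 2^32 - 20000, i.e. 20s before wrapping */
	return (uint32_t)(0u - boot_ms) - BOOT_TIME_WRAP_SEC * 1000u;
}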
It could theoretically be helpful to have this in 2.7, but as can be
seen with the two patches below, we've already had incorrect use cases
of the internal monotonic time when the wall-clock one was needed, so
we could expect to detect other ones in the future. Note that this will
*not* induce bugs, it will only make them happen much faster (i.e. no
need to wait for 50 days before seeing them). If it were to eventually
be backported, these two previous patches must also be backported:
BUG/MINOR: clock: use distinct wall-clock and monotonic start dates
BUG/MEDIUM: cache: use the correct time reference when comparing dates
The cache makes use of dates advertised by external components, such
as "last-modified" or "date". As such these are wall-clock dates, and
not internal dates. However, all comparisons are mistakenly made based
on the internal monotonic date which is designed to drift from the wall
clock one in order to catch up with stolen time (which can sometimes be
intense in VMs). As such after some run time some objects may fail to
validate or fail to expire depending on the direction of the drift. This
is particularly visible when applying an offset to the internal time to
force it to wrap soon after startup, as it will be shifted up to 49.7
days in the future depending on the current date; what happens in this
case is that the reg-test "cache_expires.vtc" fails on the 3rd test by
returning stale contents from the cache at the date of this commit.
It is really important that all external dates are compared against
"date" and not "now" for this reason.
This fix needs to be backported to all versions.
We've had a start date even before the internal monotonic clock existed,
but once the monotonic clock was added, the start date was not updated
to distinguish the wall clock time units and the internal monotonic time
units. The distinction is important because both clocks do not necessarily
progress at the same speed. The very rare occurrences of the wall-clock
date are essentially for human consumption and communication with third
parties (e.g. report the start date in "show info" for monitoring
purposes). However currently this one is also used to measure the distance
to "now" as being the process' uptime. This is actually not correct. It
only works because for now the two dates are initialized at the exact
same instant at boot but could still be wrong if the system's date shows
a big jump backwards during startup for example. In addition the current
situation prevents us from enforcing an arbitrary offset at boot to reveal
some heisenbugs.
This patch adds a new "start_time" at boot that is set from "now" and is
used in uptime calculations. "start_date" instead is now set from "date"
and will always reflect the system date for human consumption (e.g. in
"show info"). This way we're now sure that any drift of the internal
clock relative to the system date will not impact the reported uptime.
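A minimal sketch of the resulting split (simplified types, not the exact
clock.c code):

#include <sys/time.h>

/* uptime is measured on the internal clock (start_time), while start_date
 * is only kept to report the wall-clock boot date to humans.
 */
static long uptime_sec(const struct timeval *now, const struct timeval *start_time)
{
	return now->tv_sec - start_time->tv_sec;
}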
This could possibly be backported though it's unlikely that anyone has
ever noticed the problem.
At some moments expired stick table records stop being removed. This
happens when the internal time wraps around the 32-bit limit, or every
49.7 days. What precisely happens is that some elements that are collected
close to the end of the time window (2^32 - table's "expire" setting)
might have been updated and will be requeued further, at the beginning
of the next window. Here, three bad situations happen:
- the incorrect integer-based comparison that is not aware of wrapping
will result in the scan restarting from the freshly requeued element,
skipping all those at the end of the window. The net effect of this
is that at each wakeup of the expiration task, only one element from
the end of the window will be expired, and other ones will remain
there for a very long time, especially if they have to wait for all
the predecessors to be picked one at a time after slow wakeups due
to a long expiration ; this is what was observed in issue #2034
making the table fill up and appear as not expiring at all, and it
seems that issue #2024 reports the same problem at the same moment
(since such issues happen for everyone roughly at the same time
when the clock doesn't drift too much).
- the elements that were placed at the beginning of the next window
are skipped as well for as long as there are refreshed entries at
the end of the previous window, so these ones participate in filling
the table as well. This is caused by the restart from the current,
updated node, which is generally placed after most other, less recently
updated elements.
- once the last element at the end of the window is picked, suddenly
there is a large amount of expired entries at the beginning of the
next window that all have to be requeued. If the expiration delay
is large, the number can be big and it can take a long time, which
can very likely explain the periodic crashes reported in issue #2025.
Limiting the batch size as done in commit dfe79251d ("BUG/MEDIUM:
stick-table: limit the time spent purging old entries") would make
sense for process_table_expire() as well.
This patch addresses the incorrect tree scan algorithm to make sure that:
- there's always a next element to compare against, even when dealing
with the last one in the tree, the first one must be used ;
- time comparisons used to decide whether to restart from the current
element use tick_is_lt() as it is the only case where we know the
current element will be placed before any other one (since the tree
respects insertion ordering for duplicates)
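For reference, the wrap-aware ordering used by tick_is_lt() boils down to a
signed comparison of the tick difference (sketch, not necessarily the exact
helper):

/* tick <t1> is "before" <t2> when their signed 32-bit difference is
 * negative, which remains true across the 2^32 wrap point.
 */
static inline int tick_is_lt(int t1, int t2)
{
	return (t1 - t2) < 0;
}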
In order to reproduce the issue, it was found that injecting traffic on
a random key that spans over half of the size of a table whose expiration
is set to 15s while the date is going to wrap in 20s does exhibit an
increase of the table's size 5s after startup, when entries start to be
pushed to the next window. It's more effective when a second load
generator constantly hammers a same key to be certain that none of them
is ready to expire. This doesn't happen anymore after this patch.
This fix needs to be backported to all stable versions. The bug has been
there for as long as the stick tables were introduced in 1.4-dev7 with
commit 3bd697e07 ("[MEDIUM] Add stick table (persistence) management
functions and types"). A cleanup could consists in deduplicating that
code by having process_table_expire() call __stktable_trash_oldest(),
with that one improved to support an optional time check.
Display a warning when some text exists between the filename and the
options. This part is completely ignored, so if there are filters here,
they were never parsed.
This could be backported to every version. In the older versions, the
parsing was done in ssl_sock_load_cert_list_file() in ssl_sock.c.
Aurélien reported that the BUG_ON(!new_ts.nbgrp) added in 2.8-dev3 by
commit 50440457e ("MEDIUM: config: restrict shards, not bind_conf to one
group each") can trigger on some invalid configs where the thread_set on
the "bind" line couldn't be resolved. The reason is that we still enter
the parsing loop (as it was done previously) and we possibly have no
group to work on (which was the purpose of this assertion). We thus
need to bypass this whole block under such a condition.
No backport is needed.
Aurélien reported a bug making a statement such as "thread 2-2" fail for
a config made of exactly 2 threads. What happens is that the parser for
the "thread" keyword scans a range of thread numbers from either 1..64
or 0,-1,-2 for special values, and presets the bit masks accordingly in
the thread set, except that due to the 1..64 range, the shift length must
be reduced by one. Not doing this causes empty masks for single-bit values
that are exactly equal to the number of threads in the group, and the
parsing then fails.
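A sketch of the off-by-one (illustrative, not the actual parser code):

/* thread numbers are 1-based (1..64), so the bit for thread <t> is (t - 1);
 * shifting by <t> itself yields a wrong (possibly empty) mask when <t>
 * equals the group size.
 */
static unsigned long long thread_bit(int t)
{
	return 1ULL << (t - 1);
}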
No backport is needed as this was introduced in 2.8-dev3 by commit
bef43dfa6 ("MINOR: thread: add a simple thread_set API").
Due to multithreading concurrency, it is difficult at this time to figure
out how this counter may become negative. This simple patch only ensures this
will never be the case.
This issue arrived with this commit:
"9969adbcdc MINOR: stats: add by HTTP version cumulated number of sessions and requests"
So, this patch should be backported wherever the latter has been backported.
When stats_putchk() fails to perform the dump because available data space in
htx is less than the number of bytes pending in the dump buffer, we wait
for more room in the htx (ie: sc_need_room()) to retry the dump attempt
on the next applet invocation.
To provide consistent output, we have to make sure that the stat ctx is not
updated (or at least correctly reverted) in case stats_putchk() fails so
that the new dumping attempt behaves just like the previous (failed) one.
STAT_STARTED does not follow this logic: the flag is set in
stats_dump_fields_json() as soon as some data is written to the output buffer.
It's done too early: we need to delay this step until stats_putchk() has
successfully returned if we want to correctly handle the retry attempts.
Because of this, JSON output could suffer from extraneous ',' characters which
could make json parsers unhappy.
For example, this is the kind of errors you could get when using
`python -m json.tool` on such badly formatted outputs:
"Expecting value: line 1 column 2 (char 1)"
Unfortunately, fixing this means that the flag needs to be enabled at
multiple places, which is what we're doing in this patch
(in stats_dump_proxy_to_buffer(), where stats_dump_one_line() is involved
through the underlying stats_dump_{fe,li,sv,be} functions).
This also raises the need for a cleanup to reduce code duplication around
the stats_dump_proxy_to_buffer() function and simplify things a bit.
It could be backported to 2.6 and 2.7
In ("MINOR: stats: introduce stats field ctx"), we forgot
to apply the patch to servers.
This prevents "BUG/MINOR: stats: fix show stat json buffer limitation"
from working with servers dump.
We're adding the missing part related to servers dump.
This commit should be backported with the aforementioned commits.
When ctx->field was introduced with ("MINOR: stats: introduce stats field ctx")
a mistake was made for the STAT_PX_ST_LI state in stats_dump_proxy_to_buffer():
current_field reset is placed after the for loop, ie: after multiple lines
are dumped. Instead it should be placed right after each li line is dumped.
This could cause some output inconsistencies (missing fields), especially when
http dump is used with JSON output and "socket-stats" option is enabled
on the proxy, because when the htx is full we restore ctx->field with
current_field (which contains an outdated value in this case).
This should be backported with ("MINOR: stats: introduce stats field ctx")
In ("BUG/MEDIUM: stats: Rely on a local trash buffer to dump the stats"),
we forgot to apply the patch in resolvers.c which provides the
stats_dump_resolvers() function that is involved when dumping with "resolvers"
domain.
As a consequence, resolvers dump was broken because stats_dump_one_line(),
which is used in stats_dump_resolv_to_buffer(), implicitly uses trash_chunk
from stats.c to prepare the dump, and stats_putchk() is then called with
global trash (currently empty) as output data.
Given that the trash_dump variable is static and thus only available within
stats.c, we change the stats_putchk() function prototype so that it does not take
the output buffer as an argument. Instead, stats_putchk() will implicitly use
the local trash_dump variable declared in stats.c.
It will also prevent further mixups between stats_dump_* functions and
stats_putchk().
This needs to be backported with ("BUG/MEDIUM: stats: Rely on a local trash
buffer to dump the stats")
In ("BUG/MINOR: stats: use proper buffer size for http dump"),
we used trash.size as source buffer size before applying the htx
overhead computation.
It is safer to use res->buf.size instead since res_htx (the <htx> argument
passed to stats_putchk() in the http context) is made from res->buf:
in http_stats_io_handler:
| res_htx = htx_from_buf(&res->buf);
This will prevent the hang bug from showing up again if res->buf.size were to be
less than trash.size (which is set according to tune.bufsize).
This should be backported with ("BUG/MINOR: stats: use proper buffer size for http dump")
When building STREAM frames in a packet buffer, if a frame is too large
it will be split in two. A shortened version will be used and the
original frame will be modified to represent the remaining space.
To ensure there is enough space to store the frame data length encoded
as a QUIC integer, we use the function max_available_room(). This
function can return 0 if only a small space is left, which is
insufficient for the frame header and the shortened data. Prior to this
patch, this wasn't checked and an empty, unneeded STREAM frame was built
and sent for nothing.
Change this by checking the value returned by max_available_room(). If it is
0, do not try to split this frame and continue to the next ones in the
packet.
On 2.6, this patch serves as an optimization which will prevent the building
of unneeded empty STREAM frames.
On 2.7, this behavior has the side-effect of triggering a BUG_ON()
statement in quic_build_stream_frame(). This BUG_ON() ensures that we do
not use a quic_frame with the OFF bit set if its offset is 0. This can happen
if the condition described above is reproduced for a STREAM frame at
offset 0. An empty, unneeded frame is built as described. The problem is
that the original frame is modified with its OFF bit set even though the
offset is still 0.
This must be backported up to 2.6.
Now that we're using thread_sets there's no need to restrict an entire
bind_conf to 1 group, the real concern being the FD, we can move that
restriction to the shard only. This means that as long as we have enough
shards and that they're properly aligned on group boundaries (i.e. shards
are an integer divider of the number of threads), we can support "bind"
lines spanning more than one group.
The check is still performed so that a shard does not span more than one
group, and an error is emitted when this happens. But at least now it becomes
possible to have this:
global
nbthread 256
frontend foo
bind :1111 shards 4
bind :2222 shards by-thread
Let's now retrieve the first thread group and its mask from the
thread_set so that we don't need these fields in the bind_conf anymore.
For now we're still limited to the first group (like before) but that
allows to get rid of these fields and to make sure that there's nothing
"special" being done there anymore.
Instead of reading and storing a single group and a single mask for a
"thread" directive on a bind line, we now store the complete range in
a thread set that's stored in the bind_conf. The bind_parse_thread()
function now just calls parse_thread_set() to complete the current set,
which starts empty, and thread_resolve_group_mask() was updated to
support retrieving thread group numbers or absolute thread numbers
directly from the pre-filled thread_set, and continue to feed bind_tgroup
and bind_thread. The CLI parsers which were pre-initialized to set the
bind_tgroup to 1 cannot do it anymore as it would prevent one from
restricting the thread set. Instead check_config_validity() now detects
the CLI frontend and passes the info down to thread_resolve_group_mask()
that will automatically use only the group 1's threads for these
listeners. The same is done for the peers listeners for now.
At this step it's already possible to start with all previous valid
configs as well as extended ones supporting comma-delimited thread
sets. In addition the parser already accepts large ranges spanning
multiple groups, but since the underlying listeners infrastructure
is not ready, for now we're maintaining a specific check against this
at the higher level of the config validity check.
The patch is a bit large because thread resolution is performed in
multiple steps, so we need to adjust all of them at once to preserve
functional and technical consistency.
The purpose is to be able to store large thread sets, defined by ranges
that may cross group boundaries, as well as define lists of groups and
masks. The thread_set struct implements the storage, and the parser is
in parse_thread_set(), with a focus on "bind" lines, but not only.
During 2.5 development, a fallback was implemented for bind "thread"
directives that would not map to existing threads, with commit e3f4d7496
("MEDIUM: config: resolve relative threads on bind lines to absolute ones").
The approach consisted in remapping the threads to other ones. But now
that relative threads and not absolute threads are stored in this mask,
this case cannot happen anymore, and this confusing hack is not needed
anymore.
This flag is only used to tag a QUIC listener, which we now know by
its bind_conf's xprt as well. It's only used to decide whether or not
to perform an extra initialization step on the listener. Let's drop it
as well as the flags field.
With the various fields and options moved, the listener struct reduced
by 48 bytes total.
LI_O_TCP_L4_RULES and LI_O_TCP_L5_RULES are only set from the proxy
based on the presence or absence of tcp_req l4/l5 rules. It's basically
as cheap to check the list as it is to check the flag, except that there
is no need to maintain a copy. Let's get rid of them, and this may ease
addition of more dynamic stuff later.
These two flags are entirely for internal use and are even per proxy
in practice since they're used for peers and CLI to indicate (for the
first one) that the listener(s) are not subject to connection limits,
and for the second that the listener(s) should not be stopped on
soft-stop. No need to keep them in the listeners, let's move them to
the bind_conf under names BC_O_UNLIMITED and BC_O_NOSTOP.
These are only set per bind line and used when creating a sessions,
we can move them to the bind_conf under the names BC_O_ACC_PROXY and
BC_O_ACC_CIP respectively.
It's set per bind line ("tfo") and only used in tcp_bind_listener() so
there's no point keeping the address family tests, let's just store the
flag in the bind_conf under the name BC_O_TCP_FO.
This option is set per bind line, and was only stored when the
address family is AF_INET or AF_INET6. That's pointless since it's
used only in tcp_bind_listener() which is only used for such families
as well, so it can now be moved to the bind_conf under the name
BC_O_DEF_ACCEPT.
It's currently declared per-frontend, though it would make sense to
support it per-line but in no case per-listener. Let's move the option
to a bind_conf option BC_O_NOLINGER.
This field is used by stream_new() to optionally set the applet the
stream will connect to for simple proxies like the CLI for example.
But it has never been configurable to anything and is always strictly
equal to the frontend's ->default_target. Let's just drop it and make
stream_new() only use the frontend's. It makes more sense anyway as
we don't want the proxy to work differently based on the "bind" line.
This idea was brought in 1.6 hoping that the h2 implementation would
use applets for decoding (which was dropped after the very first
attempt in 1.8).
The accept callback directly derives from the upper layer, generally
it's session_accept_fd(). As such it's also defined per bind line
so it makes sense to move it there.
The maxconn is set per bind line so let's move it there. This might
possibly even slightly reduce inter-thread contention since this one
is read-mostly and it was stored next to nbconn which changes for
each connection setup or teardown.
Like for previous values, maxaccept is really per-bind_conf, so let's
move it there. Some frontends (peers, log) set it to 1 so the assignment
was slightly moved.
These two arguments were only set and only used with tcpv4/tcpv6. Let's
just store them into the bind_conf instead of duplicating them for all
listeners since they're fixed per "bind" line.
When bind_conf were created, some elements such as the analysers mask
ought to have moved there but that wasn't the case. Now that it's
getting clearer that bind_conf provides all binding parameters and
the listener is essentially a listener on an address, it's starting
to get really confusing to keep such parameters in the listener, so
let's move the mask to the bind_conf. We also take this opportunity
for pre-setting the mask to the frontend's upon initialization. Now
several loops have one less argument to take care of.
The SCID (source connection ID) used by a peer (client or server) is sent into the
long header of a QUIC packet in clear. But it is also sent into the transport
parameters (initial_source_connection_id). As these latter are encrypted into the
packet, one must check that these two pieces of information do not differ
due to a packet header corruption. Furthermore, as such a connection is
unusable, it must be killed and must stop processing RX/TX packets as soon as
possible.
Implement qc_kill_con() to flag a connection as unusable and to kill it asap,
waking up the idle timer task to release the connection.
Add a check to quic_transport_params_store() to detect that the SCIDs do not
match and make it call qc_kill_con().
Add several checks for connections to be killed at several critical locations,
especially in the TLS stack callbacks which receive CRYPTO data or derive
secrets, and before preparing packets after having received others.
Must be backported to 2.6 and 2.7.
It is a bad idea to make the TLS ClientHello callback call qc_conn_finalize().
If the latter fails, this would generate a TLS alert and make the connection
send packets even though it is not functional. But qc_conn_finalize()'s job was
to install the transport parameters sent by the QUIC listener. This
installation cannot be done at just any time: it must be done after having
possibly negotiated the QUIC version and before sending the first Handshake
packets. It seems the best moment to do that is when the Handshake TX secrets
are derived. This was found by inspecting the ngtcp2 code. Calling
SSL_set_quic_transport_params() too late would make the ServerHello be sent
without the transport parameters.
The code for the connection update which was done from qc_conn_finalize() has
been moved to quic_transport_params_store(). So, this update is done as soon as
possible.
Add QUIC_FL_CONN_TX_TP_RECEIVED to flag the connection as having received the
peer transport parameters. Indeed this is required when the ClientHello message
is split across several packets.
Add QUIC_FL_CONN_FINALIZED to protect the connection from calling
qc_conn_finalize() more than once. The latter is called only when the
connection has received the transport parameters and after returning from
SSL_do_handshake(), which is the function which triggers the TLS ClientHello
callback call.
Remove the calls to qc_conn_finalize() from the TLS ClientHello callbacks.
Must be backported to 2.6 and 2.7.
This bug was revealed by some C1 interop tests (heavy handshake packet
corruption) when receiving 1-RTT packets with a key phase update.
This led the packet to be decrypted with the next key phase secrets.
But the latter are initialized only after the handshake is complete.
In fact, 1-RTT packets must never be processed before the handshake is complete.
Relying on the "qc->mux_state == QC_MUX_NULL" condition to check the
handshake is complete is wrong during 0-RTT sessions when the mux
is initialized before the handshake is complete.
Must be backported to 2.7 and 2.6.
This is not really a bug fix but an improvement. When the Handshake packet number
space has been detected as needing to be probed, we should also try to probe
the Initial packet number space if there are still packets in flight. Furthermore
we should also try to send up to two datagrams.
Must be backported to 2.6 and 2.7.
This function is called only when probing a single packet number space
(Handshake) or the same one twice (Application). So, there is no risk of
preparing the same frame twice needlessly because we wanted to probe two
packet number spaces. The condition "ignore the packets which have been
coalesced to another one" is not necessary. More importantly, the bug occurs
when we want to prepare an Application packet which has been coalesced to a
Handshake packet. This is always the case when the first Application packet
is sent: it is always coalesced to a Handshake packet with an ACK frame. So,
when lost, this first Application packet was never resent. It contains the
HANDSHAKE_DONE frame which confirms the completion of the handshake to the
client.
Must be backported to 2.6 and 2.7.
During the handshake, and as long as the handshake has not been confirmed,
the acknowledgement delays reported by the peer may be larger than
max_ack_delay. max_ack_delay SHOULD be ignored when computing the PTO until
the handshake is confirmed. But the current code used the wrong condition,
"before the handshake is completed".
Replace the enum value QUIC_HS_ST_COMPLETED by QUIC_HS_ST_CONFIRMED to
fix this issue. In quic_loss.c, the parameter passed to quic_pto_pktns()
is renamed to avoid any possible confusion.
Must be backported to 2.7 and 2.6.
This may happen during retransmission of frames which can be split (CRYPTO
or STREAM frames). One may have to split a frame to be retransmitted due to
the QUIC protocol properties (packet size limitation and packet field
encoding sizes). The remaining part of a frame which cannot be retransmitted
must be detached from the original frame it is copied from. If not, when the
part actually sent is acknowledged, the remaining part will be acknowledged
too even though it was never sent!
Must be backported to 2.7 and 2.6.
Add a new cum_sess_ver[] array of counters to count the number of cumulated
HTTP sessions by version (h1, h2 or h3).
Implement proxy_inc_fe_cum_sess_ver_ctr() to increment these counters.
This function is called each time an HTTP mux is correctly initialized. The
QUIC code must first verify that the mux's application operations are those
of h3 before calling proxy_inc_fe_cum_sess_ver_ctr().
The ST_F_SESS_OTHER stat field, for the cumulated number of sessions other
than HTTP sessions, is deduced from the overall session counter (covering all
sessions, not only HTTP ones) from which the HTTP session counters are
subtracted.
Add a new cum_req[] array of counters to count the number of cumulated HTTP
requests by version, plus those other than HTTP requests. This new member
replaces ->cum_req.
Modify proxy_inc_fe_req_ctr(), which increments these counters, to take an
HTTP version as parameter, the special value 0 meaning "other than an HTTP
request". This is the case for instance for syslog.c, from which
proxy_inc_fe_req_ctr() is called with 0 as the version parameter.
The computation of the ST_F_REQ_TOT stat field, for the cumulated number of
requests, is modified to sum all the cum_req[] counters.
As this patch is useful for QUIC, it must be backported to 2.7.
On high traffic benchmarks, it's visible that the CPU is dominated by
calls to memcpy(), and many of those come from htx functions. It was
measured that 63% of those coming from htx are made on 8-byte blocks
which really are not worth a call to the function since a single
read-write cycle does it fine.
This commit adds an inline htx_memcpy() function that explicitly
checks for this length and just copies the data without that call.
It's even likely that it could be detected on const sizes, though
that was not done. This is already effective in reducing the number
of calls to memcpy().
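A minimal sketch of the idea (not HAProxy's exact code):

#include <string.h>

/* A constant-size memcpy() is turned by the compiler into a single
 * load/store pair, so the common 8-byte case avoids a call into libc.
 */
static inline void htx_memcpy(void *dst, const void *src, size_t len)
{
	if (len == 8)
		memcpy(dst, src, 8);   /* constant size: inlined by the compiler */
	else
		memcpy(dst, src, len);
}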
Define a new configuration option "tune.quic.max-frame-loss". This is
used to specify the limit for which a single frame instance can be
detected as lost. If exceeded, the connection is closed.
This should be backported up to 2.7.
Add a <loss_count> new field in quic_frame structure. This field is set
to 0 and incremented each time a sent packet is declared lost. If
<loss_count> reaches a hard-coded limit, the connection is deemed to be
failing and is closed immediately with a CONNECTION_CLOSE using
INTERNAL_ERROR.
By default, limit is set to 10. This should ensure that overall memory
usage is limited if a peer behaves incorrectly.
This should be backported up to 2.7.
Define a new function qc_frm_free() to handle frame deallocation. New
BUG_ON() statements ensure that the deallocated frame is not referenced by
another frame. To support this, all LIST_DELETE() calls have been replaced by
LIST_DEL_INIT(). This should ensure that frame deallocation is robust.
As a complement, qc_frm_unref() has been moved into the quic_frame module.
This is justified as it is a utility function related to frame
deallocation. It allows it to be used in quic_pktns_tx_pkts_release() before
calling qc_frm_free().
This should be backported up to 2.7.
Define two utility functions for quic_frame allocation :
* qc_frm_alloc() is used to allocate a new frame
* qc_frm_dup() is used to allocate a new frame by duplicating an
existing one
These functions are useful to centralize quic_frame initialization.
Note that pool_zalloc() is replaced by a proper pool_alloc() + explicit
initialization code.
This commit will simplify implementation of the per frame retransmission
limitation. Indeed, a new counter will be added in quic_frame structure
which must be initialized to 0.
This should be backported up to 2.7.
Care must be taken when reading/writing the offset of STREAM frames. A
special OFF bit is set in the frame type to indicate that the field is
present. If not set, it is assumed that the offset is 0.
To represent this, the offset field of the quic_stream structure must always
be initialized with a value consistent with its frame type's OFF bit.
The previous code had no bug, in part because pool_zalloc() was used to
allocate quic_frame instances. To be able to use pool_alloc(), the offset is
now always explicitly set to 0. If a non-null value is used, the OFF bit is
set at the same time. A new BUG_ON() statement is added in the frame builder
to ensure that the caller has set the OFF bit if the offset is non-null.
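A sketch of the pairing, with illustrative structure and constant names
(0x04 is the STREAM OFF bit per RFC 9000):

#include <stdint.h>

#define STREAM_FRAME_TYPE_OFF_BIT 0x04

struct stream_frm {
	uint64_t offset;
	uint8_t type;
};

/* keep the offset field and the OFF bit of the frame type consistent now
 * that frames are no longer zero-initialized
 */
static void stream_frm_set_offset(struct stream_frm *frm, uint64_t off)
{
	frm->offset = off;
	if (off)
		frm->type |= STREAM_FRAME_TYPE_OFF_BIT; /* checked by the new BUG_ON() */
}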
This should be backported up to 2.7.
A dedicated <fin> field was used in quic_stream structure. However, this
info is already encoded in the frame type field as specified by QUIC
protocol.
In fact, only the packet reception code used the <fin> field. On the
sending side, we only checked the FIN bit. To align both sides, remove the
<fin> field and only use the FIN bit.
This should be backported up to 2.7.
In an attempt to fix GH #1873, ("BUG/MEDIUM: stats: Rely on a local trash
buffer to dump the stats") explicitly reduced output buffer size to leave
enough space for htx overhead under http context.
Github user debtsandbooze, who first reported the issue, came back to us
and said he was still able to make the http dump "hang" with the new fix.
After some tests, it became clear that htx_add_data_atonce() could fail from
time to time in stats_putchk(), even if htx was completely empty:
In http context, buffer size is maxed out at channel_htx_recv_limit().
Unfortunately, channel_htx_recv_limit() is not what we're looking for here
because that limit doesn't account for the proper htx overhead.
Using buf_room_for_htx_data() instead of channel_htx_recv_limit() to compute
max "usable" data space seems to be the last piece of work required for
the previous fix to work properly.
This should be backported everywhere the aforementioned commit is.
idle and harmless bits in the tgroup_ctx structure were not explicitly
set during boot.
| struct tgroup_ctx ha_tgroup_ctx[MAX_TGROUPS] = { };
As the structure is first statically initialized,
.threads_harmless and .threads_idle are automatically zero-
initialized by the compiler.
Unfortunately, this means that such threads are considered neither idle
nor harmless by the thread_isolate(_full)() functions until they enter
the polling loop (thread_harmless_now() and thread_idle_now() are
respectively called before entering the polling loop).
Because of this, any attempt to call thread_isolate() or thread_isolate_full()
during a startup phase with nbthread >= 2 will cause thread_isolate() to
loop until every secondary thread makes it through its first polling loop.
If the startup phase is aborted during boot (ie: "-c" option to check the
configuration), secondary threads may be initialized but will never be started
(ie: they won't enter the polling loop), thus thread_isolate()
would loop forever in such cases.
We can easily reveal the bug with this patch reproducer:
| diff --git a/src/haproxy.c b/src/haproxy.c
| index e91691658..0b733f6ee 100644
| --- a/src/haproxy.c
| +++ b/src/haproxy.c
| @@ -2317,6 +2317,10 @@ static void init(int argc, char **argv)
| if (pr || px) {
| /* At least one peer or one listener has been found */
| qfprintf(stdout, "Configuration file is valid\n");
| + printf("haproxy will loop...\n");
| + thread_isolate();
| + printf("we will never reach this\n");
| + thread_release();
| deinit_and_exit(0);
| }
| qfprintf(stdout, "Configuration file has no error but will not start (no listener) => exit(2).\n");
Now we start haproxy with a valid config:
$> haproxy -c -f valid.conf
Configuration file is valid
haproxy will loop...
^C
------------------------------------------------------------------------------
This did not cause any issue so far because no early deinit paths require
full thread isolation. But this may change when new features or requirements
are introduced, so we should fix this before it becomes a real issue.
To fix this, we explicitly assign .threads_harmless and .threads_idle
to the .threads_enabled value in the thread_map_to_groups() function during
boot. This is the proper place to do it since, as long as .threads_enabled is
not explicitly set, its default value is also 0 (zero-initialized by the
compiler). Code snippet from the thread_isolate() function:
ulong te = _HA_ATOMIC_LOAD(&ha_tgroup_info[tgrp].threads_enabled);
ulong th = _HA_ATOMIC_LOAD(&ha_tgroup_ctx[tgrp].threads_harmless);
if ((th & te) == te)
break;
Thus thread_isolate(_full()) won't be looping forever in thread_isolate()
even if it were to be used before thread_map_to_groups() is executed.
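A sketch of the fix with simplified types (not the exact haproxy structs):

struct tgroup_info { unsigned long threads_enabled; };
struct tgroup_ctx  { unsigned long threads_harmless; unsigned long threads_idle; };

/* at boot, threads that never entered the polling loop must already be
 * counted as harmless/idle so thread_isolate() does not wait for them forever
 */
static void init_tgroup_ctx(struct tgroup_ctx *ctx, const struct tgroup_info *info)
{
	ctx->threads_harmless = info->threads_enabled;
	ctx->threads_idle     = info->threads_enabled;
}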
No backport needed unless this is a requirement.
This commit is identical to the preceding patch. However, these traces
are from another patch with a different backport scope :
56a86ddfb9
MINOR: h3: add missing traces on closure
This must be backported up to 2.7 where above patch is scheduled.
First H3 traces argument must be a connection instance or a NULL. Some
new traces were added recently passing a qcc instance, which caused a crash
when traces are activated.
This trace was added by the following patch :
87f8766d3f
BUG/MEDIUM: h3: handle STOP_SENDING on control stream
This must be backported up to 2.6 along with the above patch.
When using WolfSSL, there are some cases where the SSL_CTX_sess_new_cb is
called with an existing session ID. These cases are not met with
OpenSSL.
When the ID is found in the session tree during the insertion, the
shared_block len is not set to 0 and is not used. However, if the
block is later reused, since the len is not set to 0, the release callback
will be called and an ebmb_delete will be attempted on the block, even though
it's not in the tree, provoking a crash.
The code was buggy from the beginning, but the case never happened with
OpenSSL, which changes the ID.
Must be backported to every maintained branch.
Add traces for function h3_shutdown() / h3_send_goaway(). This should
help to debug problems related to connection closure.
This should be backported up to 2.7.
This commit is similar to the previous one. It reports an error if a
RESET_STREAM is received for the remote control stream. This will
generate a CONNECTION_CLOSE with H3_CLOSED_CRITICAL_STREAM error.
Note that contrary to the previous bug related to STOP_SENDING, this bug
was not encountered in a real environment. As such, it is labelled as
MINOR. However, it could trigger the same crash as the previous patch.
This should be backported up to 2.6.
Before this patch, STOP_SENDING reception was considered valid even on
H3 control stream. This causes the emission in return of RESET_STREAM
and eventually the closure and freeing of the QCS instance. This then
causes a crash during connection closure as a GOAWAY frame is emitted on
the control stream which is now released.
To fix this crash, STOP_SENDING on the control stream is now properly
rejected as specified by RFC 9114. The new app_ops close callback is
used which in turn will generate a CONNECTION_CLOSE with error
H3_CLOSED_CRITICAL_STREAM.
This bug was detected in github issue #2006. Note however that it is
triggered by an incorrect client behavior. It may be useful to determine
which client behaves like this. If this case is too frequent,
STOP_SENDING should probably be silently ignored.
To reproduce this issue, quiche was patched to emit a STOP_SENDING on
its send() function in quiche/src/lib.rs:
pub fn send(&mut self, out: &mut [u8]) -> Result<(usize, SendInfo)> {
- self.send_on_path(out, None, None)
+ let ret = self.send_on_path(out, None, None);
+ self.streams.mark_stopped(3, true, 0);
+ ret
}
This must be backported up to 2.6 along with the preceding commit :
MINOR: mux-quic/h3: define close callback
Define a new qcc_app_ops callback named close(). This will be used to
notify app-layer about the closure of a stream by the remote peer. Its
main usage is to ensure that the closure is allowed by the application
protocol specification.
For the moment, close is not implemented by H3 layer. However, this
function will be mandatory to properly reject a STOP_SENDING on the
control stream and preventing a later crash. As such, this commit must
be backported with the next one on 2.6.
This is related to github issue #2006.
h3_resp_trailers_send() may be called due to an HTX EOT block present
without a preceding HTX TRAILER block. In this case, no HEADERS frame
will be generated by the H3 layer and the MUX will emit an empty STREAM frame
with FIN set.
However, before skipping these, some operations are conducted on the qcs
buffer to realign it and try to encode the QPACK field section line in a
buffer copy. These operations are thus unneeded if no trailer is
generated. Even worse, the function will fail if there is not enough
space in the buffer for the superfluous QPACK section line.
To improve this situation, this patch adds an early goto statement to
skip most operations in h3_resp_trailers_send() if no HTX trailer block
is found.
This patch is related to github issue #2006.
This should be backported up to 2.7.
Replace ABORT_NOW() by proper error management in
h3_resp_trailers_send() for QPACK encoding operation.
If a QPACK encoding operation fails, it means there is not enough space
in qcs buffer. In this case, flag qcs instance with QC_SF_BLK_MROOM and
return an error. The MUX is responsible for removing this flag once buffer
space is available.
This should fix the crash reported by gabrieltz on github issue #2006.
This must be backported up to 2.7.
In http_build_7239_header_nodename(), ip6 address dumping is performed
at a single place to prevent code duplication:
A goto statement combined with a local pointer variable (ip6_addr) were used
to perform ipv6 dump from different calling places inside the function.
However, when the goto was performed (ie: sample expression handling), the
ip6_addr pointer was assigned the address of a limited-scope variable which
is not valid within the dumping code.
Because of this, we have an undefined behavior that could result in a bug or
a crash depending on the platform that is running haproxy.
This was found by Coverity (GH #2018)
To fix this, we add a simple ip6 printing helper that takes the ip6_addr
pointer as an argument.
This prevents any scope related bug as the function is executed under the
proper context.
if/else guards inside the function were reviewed to make sure that the goto
removal won't affect existing behavior.
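For illustration, the lifetime issue boils down to this pattern (hypothetical
code, not HAProxy's):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>

/* <ip6_addr> is pointed at a block-scoped variable, then the goto jumps past
 * the end of that block, so the final dump reads a dead object: undefined
 * behavior.
 */
static void dump_node_ip6(int from_sample_expr)
{
	char out[INET6_ADDRSTRLEN];
	const struct in6_addr *ip6_addr;
	struct in6_addr from_conn = IN6ADDR_LOOPBACK_INIT;

	if (from_sample_expr) {
		struct in6_addr tmp = IN6ADDR_ANY_INIT; /* stands for the fetched sample */
		ip6_addr = &tmp;
		goto print;          /* <tmp>'s lifetime ends when its block is left */
	}
	ip6_addr = &from_conn;
 print:
	inet_ntop(AF_INET6, ip6_addr, out, sizeof(out)); /* may read a dead object */
	printf("for=\"[%s]\"\n", out);
}

The helper added by this patch performs the dump from within each scope, so
the pointer never outlives the object it refers to.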
----------
No backport needed, except if the commit ("MINOR: proxy/http_ext: introduce
proxy forwarded option") is backported.
Given that this commit needs to be backported with
"MINOR: proxy/http_ext: introduce proxy forwarded option", we're using it as a
reminder for another bug that was introduced with
"MINOR: proxy/http_ext: introduce proxy forwarded option" but has been silently
fixed since with
"MEDIUM: proxy/http_ext: implement dynamic http_ext".
If "MINOR: proxy/http_ext: introduce proxy forwarded option" needs to be
backported without
"MEDIUM: proxy/http_ext: implement dynamic http_ext", you should manually apply
the following patch on top of it:
| diff --git a/src/http_ext.c b/src/http_ext.c
| index fcb5a07bc..3921357a3 100644
| --- a/src/http_ext.c
| +++ b/src/http_ext.c
| @@ -609,7 +609,7 @@ static inline void http_build_7239_header_node(struct buffer *out,
| if (forby->np_mode)
| chunk_appendf(out, "\"");
| offset_save = out->data;
| - http_build_7239_header_node(out, s, curproxy, addr, &curproxy->http.fwd.p_by);
| + http_build_7239_header_nodename(out, s, curproxy, addr, forby);
| if (offset_save == out->data) {
| /* could not build nodename, either because some
| * data is not available or user is providing bad input
| @@ -619,7 +619,7 @@ static inline void http_build_7239_header_node(struct buffer *out,
| if (forby->np_mode) {
| chunk_appendf(out, ":");
| offset_save = out->data;
| - http_build_7239_header_nodeport(out, s, curproxy, addr, &curproxy->http.fwd.p_by);
| + http_build_7239_header_nodeport(out, s, curproxy, addr, forby);
| if (offset_save == out->data) {
| /* could not build nodeport, either because some data is
| * not available or user is providing bad input
(If you don't, forwarded option won't work properly and will crash
haproxy (stack overflow) when building 'for' or 'by' parameter)
As reported by Coverity, this function may be called with no h2c. Thus, the
pointer must always be checked before any access. One test was missing in
TRACE_PRINTF_LOC().
This patch should fix the issue #2015. No backport needed, except if the
commit 11e8a8c2a ("MEDIUM: mux-h2/trace: add tracing support for headers")
is backported.
Despite the doc saying that the 'use-fcgi-app' keyword may only be used in a
backend or listen section, we forgot to prevent its usage in a defaults
section. This is wrong because fcgi relies on a filter, and filters cannot be
defined in a defaults section.
Make sure such usage reports an error to the user and complies with the doc.
This could be backported up to 2.2.
This is a simple refactor to remove specific http_ext post-parsing treatment from
cfgparse.
Related work is now performed internally through check_http_ext_postconf()
function that is registered via REGISTER_POST_PROXY_CHECK() in http_ext.c.
Proxy http-only options implemented in http_ext were statically stored
within the proxy struct.
We're making some changes so that http_ext options are now stored in
dynamically allocated structs.
http_ext related structs are only allocated when needed to save some space
whenever possible, and they are automatically freed upon proxy deletion.
The related PX_O_HTTP{7239,XFF,XOT} option flags were removed because we now
consider an http_ext option as 'active' if it is allocated (ptr is not NULL).
A few checks (and BUG_ONs) were added to make these changes safe because
they add some (acceptable) complexity to the previous design.
Also, proxy.http was renamed to proxy.http_ext to make things more explicit.
http option forwarded (rfc7239) supports sample expressions when
configuring 'host', 'for' and 'by' parameters.
However, since we are in an http-backend-only context, right after the http
header is processed, we have a limited resolution scope for the sample
expression provided by the user.
To prevent any confusion, a warning is emitted when parsing the option
if the user relies on a sample expression (more precisely a fetch) which
would yield unexpected results at runtime when processing the option.
The forwarded header option (rfc7239) deals with sample expressions in two
steps: first a sample expression string is extracted from the config file,
and later in the startup sequence this string is converted into the resulting
sample_expr.
We need to perform these two steps because we cannot compile the expr
too early in the parsing sequence (or we would miss some context).
Because of this, we have two distinct structure members (expr and expr_s)
for each 7239 field supporting sample expressions.
This is not cool, because we're bloating the http forwarded config structure,
and thus, bloating the proxy config structure.
To address this, we now merge both expr and expr_s members inside a single
union to regain some space. This forces us to perform some additional logic
to make sure to use the proper structure member at different parsing steps.
Thanks to this, we're also able to free/release related config hints and
sample expression strings as soon as the sample expression
compilation is finished.
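A sketch of the merge with illustrative member names:

struct sample_expr;                    /* opaque here */

/* the raw string and the compiled expression never need to exist at the
 * same time, so one union slot holds whichever form the current parsing
 * stage uses
 */
union fwd_param_expr {
	char *expr_s;                  /* raw string, set at config parse time */
	struct sample_expr *expr;      /* compiled form, replaces expr_s at startup */
};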
Adding new http converter: rfc7239_n2np.
Takes a string representing a 7239 forwarded header node (extracted from
either the 'for' or 'by' 7239 header fields) as input and translates it
to either an unsigned integer or a '_' prefixed obfuscated identifier,
according to RFC 7239.
Example:
# extract 'by' field from forwarded header, extract node port from
# resulting node identifier and store the result in req.fnp
http-request set-var(req.fnp) req.hdr(forwarded),rfc7239_field(by),rfc7239_n2np
#input: "by=\"127.0.0.1:9999\""
# output: 9999
#input: "by=\"_name:_port\""
# output: "_port"
Depends on:
- "MINOR: http_ext: introduce http ext converters"
Adding new http converter: rfc7239_n2nn.
Takes a string representing a 7239 forwarded header node (extracted from
either the 'for' or 'by' 7239 header fields) as input and translates it
to either an ipv4 address, an ipv6 address or a string ('_' prefixed if
obfuscated or "unknown" if unknown), according to RFC 7239.
Example:
# extract 'for' field from forwarded header, extract nodename from
# resulting node identifier and store the result in req.fnn
http-request set-var(req.fnn) req.hdr(forwarded),rfc7239_field(for),rfc7239_n2nn
#input: "for=\"127.0.0.1:9999\""
# output: 127.0.0.1
#input: "for=\"_name:_port\""
# output: "_name"
Depends on:
- "MINOR: http_ext: introduce http ext converters"
Adding new http converter: rfc7239_field.
Takes a string representing 7239 forwarded header single value as
input and extracts a single field/parameter from the header according
to user selection.
Example:
# extract host field from forwarded header and store it in req.fhost var
http-request set-var(req.fhost) req.hdr(forwarded),rfc7239_field(host)
#input: "proto=https;host=\"haproxy.org:80\""
# output: "haproxy.org:80"
# extract for field from forwarded header and store it in req.ffor var
http-request set-var(req.ffor) req.hdr(forwarded),rfc7239_field(for)
#input: "proto=https;host=\"haproxy.org:80\";for=\"127.0.0.1:9999\""
# output: "127.0.0.1:9999"
Depends on:
- "MINOR: http_ext: introduce http ext converters"
Adding new http converter: rfc7239_is_valid.
Takes a string representing 7239 forwarded header single value as
input and returns bool:TRUE if header is RFC compliant and
bool:FALSE otherwise.
Example:
acl valid req.hdr(forwarded),rfc7239_is_valid
#input: "for=127.0.0.1;proto=http"
# output: TRUE
#input: "proto=custom"
# output: FALSE
Depends on:
- "MINOR: http_ext: introduce http ext converters"
This commit is really simple, it adds the required skeleton code to allow
new http_ext converter to be easily registered through STG_REGISTER facility.
Just like forwarded (7239) header and forwardfor header, move parsing,
logic and management of 'originalto' option into http_ext dedicated class.
We're only doing this to standardize proxy http options management.
Existing behavior remains untouched.
Just like forwarded (7239) header, move parsing, logic and management
of 'forwardfor' option into http_ext dedicated class.
We're only doing this to standardize proxy http options management.
Existing behavior remains untouched.
Introducing http_ext class for http extension related work that
doesn't fit into existing http classes.
HTTP extension "forwarded", introduced with 7239 RFC is now supported
by haproxy.
The option supports various modes from simple to complex usages involving
custom sample expressions.
Examples :
# Those servers want the ip address and protocol of the client request
# Resulting header would look like this:
# forwarded: proto=http;for=127.0.0.1
backend www_default
mode http
option forwarded
#equivalent to: option forwarded proto for
# Those servers want the requested host and hashed client ip address
# as well as client source port (you should use a seed for xxh32 if
# ip privacy is a concern)
# Resulting header would look like this:
# forwarded: host="haproxy.org";for="_000000007F2F367E:60138"
backend www_host
mode http
option forwarded host for-expr src,xxh32,hex for_port
# Those servers want custom data in host, for and by parameters
# Resulting header would look like this:
# forwarded: host="host.com";by=_haproxy;for="[::1]:10"
backend www_custom
mode http
option forwarded host-expr str(host.com) by-expr str(_haproxy) for for_port-expr int(10)
# Those servers want random 'for' obfuscated identifiers for request
# tracing purposes while protecting sensitive IP information
# Resulting header would look like this:
# forwarded: for=_000000002B1F4D63
backend www_for_hide
mode http
option forwarded for-expr rand,hex
By default (no argument provided), the forwarded option will try to mimic
common x-forwarded-for setups (source client ip address + source protocol).
The option is not available for frontends.
'no option forwarded' is supported.
More info about 7239 RFC here: https://www.rfc-editor.org/rfc/rfc7239.html
More info about the feature in doc/configuration.txt
This should address feature request GH #575
Depends on:
- "MINOR: http_htx: add http_append_header() to append value to header"
- "MINOR: sample: add ARGC_OPT"
- "MINOR: proxy: introduce http only options"
Just like http_append_header(), but this time to insert a new value before
an existing one.
If the header already contains one or multiple values, ',' is automatically
inserted after the new value.
This function can be called as an alternative to http_replace_header_value()
to append a new value to an existing header instead of replacing the whole
header content.
If the header already contains one or multiple values: a ',' is automatically
appended before the new value.
This function is not meant for prepending (providing empty ctx value),
in which case we should consider implementing dedicated prepend
alternative function.
The functions in charge of processing headers have their names in the
traces and they're among the longest of the mux_h2.c file, while even
containing some redundancy. These names are not used outside, let's
shorten them:
- h2c_decode_headers -> h2c_dec_hdrs
- h2s_bck_make_req_headers -> h2s_snd_bhdrs
- h2s_frt_make_resp_headers -> h2s_snd_fhdrs
Now the traces are a bit more readable:
[00|h2|5|mux_h2.c:4822] h2c_dec_hdrs(): h2c=0x1870510(F,FRP) dsi=1 rcvh :method: GET
[00|h2|5|mux_h2.c:4822] h2c_dec_hdrs(): h2c=0x1870510(F,FRP) dsi=1 rcvh :path: /
[00|h2|5|mux_h2.c:4822] h2c_dec_hdrs(): h2c=0x1870510(F,FRP) dsi=1 rcvh :scheme: http
[00|h2|5|mux_h2.c:4822] h2c_dec_hdrs(): h2c=0x1870510(F,FRP) dsi=1 rcvh :authority: localhost:14446
[00|h2|5|mux_h2.c:4822] h2c_dec_hdrs(): h2c=0x1870510(F,FRP) dsi=1 rcvh user-agent: curl/7.54.1
[00|h2|5|mux_h2.c:4822] h2c_dec_hdrs(): h2c=0x1870510(F,FRP) dsi=1 rcvh accept: */*
Now we can make use of TRACE_PRINTF() to iterate over headers as they
are received or dumped. It's worth noting that the dumps may occasionally
be interrupted due to a full buffer or a realign, but in this case it
will be visible because the trace will restart from the first one. All
these headers (and trailers) may be interleaved with other connections'
so they're all preceded by the pointer to the connection and optionally
the stream (or alternately the stream ID) to help discriminating them.
Since it's not easy to read the header directions, sent headers are
prefixed with "sndh" and received headers are prefixed with "rcvh", both
of which are rare enough in the traces to conveniently support a quick
grep.
In order to avoid code duplication, h2_encode_headers() was implemented
as a wrapper on top of hpack_encode_header(), which optionally emits the
header to the trace if the trace is active. In addition, for headers that
are encoded using a different method, h2_trace_header() was added as well.
Header names are truncated to 256 bytes and values to 1024 bytes. If
the lengths are larger, they will be truncated and suffixed with
"(... +xxx)" where "xxx" is the number of extra bytes.
Example of what an end-to-end H2 request gives:
[00|h2|5|mux_h2.c:4818] h2c_decode_headers(): h2c=0x1c13120(F,FRP) dsi=1 rcvh :method: GET
[00|h2|5|mux_h2.c:4818] h2c_decode_headers(): h2c=0x1c13120(F,FRP) dsi=1 rcvh :path: /
[00|h2|5|mux_h2.c:4818] h2c_decode_headers(): h2c=0x1c13120(F,FRP) dsi=1 rcvh :scheme: http
[00|h2|5|mux_h2.c:4818] h2c_decode_headers(): h2c=0x1c13120(F,FRP) dsi=1 rcvh :authority: localhost:14446
[00|h2|5|mux_h2.c:4818] h2c_decode_headers(): h2c=0x1c13120(F,FRP) dsi=1 rcvh user-agent: curl/7.54.1
[00|h2|5|mux_h2.c:4818] h2c_decode_headers(): h2c=0x1c13120(F,FRP) dsi=1 rcvh accept: */*
[00|h2|5|mux_h2.c:4818] h2c_decode_headers(): h2c=0x1c13120(F,FRP) dsi=1 rcvh cookie: blah
[00|h2|5|mux_h2.c:5491] h2s_bck_make_req_headers(): h2c=0x1c1cd90(B,FRH) h2s=0x1c1e3d0(1,IDL) sndh :method: GET
[00|h2|5|mux_h2.c:5572] h2s_bck_make_req_headers(): h2c=0x1c1cd90(B,FRH) h2s=0x1c1e3d0(1,IDL) sndh :authority: localhost:14446
[00|h2|5|mux_h2.c:5596] h2s_bck_make_req_headers(): h2c=0x1c1cd90(B,FRH) h2s=0x1c1e3d0(1,IDL) sndh :path: /
[00|h2|5|mux_h2.c:5647] h2s_bck_make_req_headers(): h2c=0x1c1cd90(B,FRH) h2s=0x1c1e3d0(1,IDL) sndh user-agent: curl/7.54.1
[00|h2|5|mux_h2.c:5647] h2s_bck_make_req_headers(): h2c=0x1c1cd90(B,FRH) h2s=0x1c1e3d0(1,IDL) sndh accept: */*
[00|h2|5|mux_h2.c:5647] h2s_bck_make_req_headers(): h2c=0x1c1cd90(B,FRH) h2s=0x1c1e3d0(1,IDL) sndh cookie: blah
[00|h2|5|mux_h2.c:4818] h2c_decode_headers(): h2c=0x1c1cd90(B,FRP) dsi=1 rcvh :status: 200
[00|h2|5|mux_h2.c:4818] h2c_decode_headers(): h2c=0x1c1cd90(B,FRP) dsi=1 rcvh content-length: 0
[00|h2|5|mux_h2.c:4818] h2c_decode_headers(): h2c=0x1c1cd90(B,FRP) dsi=1 rcvh x-req: size=102, time=0 ms
[00|h2|5|mux_h2.c:4818] h2c_decode_headers(): h2c=0x1c1cd90(B,FRP) dsi=1 rcvh x-rsp: id=dummy, code=200, cache=1, size=0, time=0 ms (0 real)
[00|h2|5|mux_h2.c:5210] h2s_frt_make_resp_headers(): h2c=0x1c13120(F,FRH) h2s=0x1c1c780(1,HCR) sndh :status: 200
[00|h2|5|mux_h2.c:5231] h2s_frt_make_resp_headers(): h2c=0x1c13120(F,FRH) h2s=0x1c1c780(1,HCR) sndh content-length: 0
[00|h2|5|mux_h2.c:5231] h2s_frt_make_resp_headers(): h2c=0x1c13120(F,FRH) h2s=0x1c1c780(1,HCR) sndh x-req: size=102, time=0 ms
[00|h2|5|mux_h2.c:5231] h2s_frt_make_resp_headers(): h2c=0x1c13120(F,FRH) h2s=0x1c1c780(1,HCR) sndh x-rsp: id=dummy, code=200, cache=1, size=0, time=0 ms (0 real)
At some point the frontend/backend names would be useful but that's a more
general comment than just the H2 traces.
By default, passing a NULL cb to the trace functions will result in the
source's default one to be used. For some cases we won't want to use any
callback at all, not even the default one. Let's define a trace_no_cb()
function for this, that does absolutely nothing.
Sometimes it would be necessary to prepare some messages, pre-process some
blocks or maybe duplicate some contents before they vanish for the purpose
of tracing them. However we don't want to do that for everything that is
submitted to the traces, it's important to do it only for what will really
be traced.
The __trace() function has all the knowledge for this, to the point of even
checking the lockon pointers. This commit splits the function in two, one
with the trace decision logic, and the other one for the trace production.
The first one is now usable through wrappers such as _trace_enabled() and
TRACE_ENABLED() which will indicate whether traces are going to be produced
for the current source, level, event mask, parameters and tracking.
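As a self-contained analogue of the idea (illustrative only, not haproxy's
actual trace API; names and arguments are made up):
  #include <stdbool.h>
  #include <stdio.h>
  /* stand-in for the decision half: source, level, event mask, tracking... */
  static bool trace_enabled(int level, unsigned int mask)
  {
      return level <= 5 && (mask & 0x1);
  }
  static void report(int level, unsigned int mask, const char *payload)
  {
      char buf[256];
      /* only pay for the copy/formatting if the trace will be produced */
      if (!trace_enabled(level, mask))
          return;
      snprintf(buf, sizeof(buf), "dup: %s", payload);
      puts(buf);
  }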
There are ifdefs at several places to only define TRC_ARGS_QCON when
QUIC is defined, but nothing prevents this code from building without.
Let's just remove those ifdefs, the single "if" they avoid is not worth
the extra maintenance burden.
Since 2.6 we have a free_logsrv() function that is used to release log
servers. It must be called from deinit() instead of manually iterating
over the log servers, otherwise some parts of the structure are not
freed (namely the ring name), as reported by ASAN.
This should be backported to 2.6.
Commit 9f4f6b038 ("OPTIM: hpack-huff: reduce the cache footprint of the
huffman decoder") replaced the large tables with more space efficient
byte arrays, but one table, rht_bit15_11_11_4, has a 64-byte hole in it
that wasn't materialized by filling it with zeroes to make the offsets
match, nor by adjusting the offset from the caller. This resulted in
some control chars not properly being decoded and being seen as byte 0,
and the associated messages to be rejected, as can be seen in issue #1971.
This commit fixes it by adjusting the offset used for the higher part of
the table so that we don't need to store 64 zeroes that will never be
accessed.
This needs to be backported to 2.7.
Thanks to Christopher for spotting the bug, and to Juanga Covas for
providing precious traces showing the problem.
A major regression was introduced by following patch
commit 71fd03632f
MINOR: mux-quic/h3: send SETTINGS as soon as transport is ready
H3 finalize operation is now called at an early stage in the middle of
qc_init(). However, some qcc members are not yet initialized. In
particular the stream tree, which will cause a crash when the H3 control
stream is accessed.
To fix this, qcc_install_app_ops() has been delayed at the end of
qc_init(). This ensures that qcc is properly initialized when app_ops
operations are used.
This must be backported wherever above patch is. For the record, it has
been tagged up to 2.7.
Since the rework of QUIC streams send scheduling, each stream has to be
inserted in QUIC-mux send-list to be able to emit content. This was not
the case for GOAWAY, which prevented it from being sent. This regression has
been introduced by the following patch :
commit 20f2a425ff
MAJOR: mux-quic: rework stream sending priorization
This new patch fixes the issue by inserting H3 control stream in mux
send-list. The impact is deemed minor as for the moment GOAWAY is only
sent just before connection/mux cleanup with a CONNECTION_CLOSE.
However, it might cause some connections to hang up indefinitely.
This should be backported up to 2.7.
As specified by HTTP3 RFC, SETTINGS frame should be sent as soon as
possible. Before this patch, this was only done on the first qc_send()
invocation. This significantly delayed SETTINGS emission until the first
H3 response was ready to be transferred.
This patch fixes this by ensuring SETTINGS is emitted when MUX-QUIC is
being setup.
As a side point, return value of finalize operation is checked. This
means that an error during SETTINGS emission will cause the connection
init to fail.
This should be backported up to 2.7.
Add a BUG_ON() in conn_free(), to check that when we're freeing a
connection, it is not still in the idle connections tree, otherwise the
next thread that will try to use it will probably crash.
This patch fixes two leaks in the 'update ssl ocsp-response' cli
command. One rather significant one since a whole trash buffer was
allocated for every call of the command, and another more marginal one
in an error path.
This patch does not need to be backported.
The munmap() call performed on exit was incorrect since it used to apply
to the buffer instead of the area, so neither the pointer nor the size
were page-aligned. This patch corrects this and also adds a call to
msync() since munmap() alone doesn't guarantee that data will be dumped.
This should be backported to 2.6.
When receiving a QUIC datagram, destination address is retrieved via
recvmsg() and stored in quic-conn as qc.local_addr. This address is then
reused when using the quic-conn owned socket.
When listener socket mode is preferred, send operation did not specify
the source address of the emitted datagram. If listener socket is bound
on a wildcard address, the kernel is free to choose any address assigned
to the local machine. This may be different from the address selected by
the client on its first datagram, which will prevent the client from
emitting its next replies.
To address this, this patch fixes the UDP source address via sendmsg().
This process is similar to the reception and relies on ancillary
message, so the socket is left untouched after the operation. This is
heavily platform specific and may not be supported by some kernels.
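For IPv4 on Linux, the idea looks like this standalone sketch (illustrative
only, not the haproxy emission code; IPv6 uses IPV6_PKTINFO the same way):
  #define _GNU_SOURCE
  #include <netinet/in.h>
  #include <string.h>
  #include <sys/socket.h>
  /* send <buf> to <dst> while forcing the datagram source address to <src>,
   * leaving the socket's binding untouched
   */
  static ssize_t send_from(int fd, const void *buf, size_t len,
                           const struct sockaddr_in *dst, const struct in_addr *src)
  {
      union { struct cmsghdr hdr; char data[CMSG_SPACE(sizeof(struct in_pktinfo))]; } ctl;
      struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
      struct msghdr msg = { .msg_name = (void *)dst, .msg_namelen = sizeof(*dst),
                            .msg_iov = &iov, .msg_iovlen = 1,
                            .msg_control = ctl.data, .msg_controllen = sizeof(ctl.data) };
      struct cmsghdr *cmsg;
      struct in_pktinfo *pi;
      memset(&ctl, 0, sizeof(ctl));
      cmsg = CMSG_FIRSTHDR(&msg);
      cmsg->cmsg_level = IPPROTO_IP;
      cmsg->cmsg_type  = IP_PKTINFO;
      cmsg->cmsg_len   = CMSG_LEN(sizeof(*pi));
      pi = (struct in_pktinfo *)CMSG_DATA(cmsg);
      pi->ipi_spec_dst = *src;
      return sendmsg(fd, &msg, 0);
  }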
This change has only an impact if listener socket only is used for QUIC
communications. This is the default behavior for 2.7 branch but not
anymore on 2.8. Set tune.quic.socket-owner to listener to ensure this mode
is used.
This should be backported up to 2.7.
It is forbidden to request h3 clients to close their Control and QPACK
unidirectional streams. If not, the client closes the connection with
H3_CLOSED_CRITICAL_STREAM(0x104). Perhaps this could prevent some clients
such as Chrome from coming back for a while.
But at quic_conn level there is no means to identify the streams for which we
cannot send a STOP_SENDING frame. Such a possibility is not even mentioned in
RFC 9000.
At this time there is no other choice than to stop sending STOP_SENDING frames
for all the h3 unidirectional streams, inspecting the ->app_ops quic_conn
value.
Must be backported to 2.7 and 2.6.
The wrong return value was checked, resulting in dead code and
potential bugs.
It should fix GitHub issue #2005.
This patch should be backported up to 2.5.
In case HPACK cannot be decoded, logs are emitted but there's no info
in the H2 traces, so let's add them.
This may be backported to all supported versions.
As reported by Dominik Froehlich in github issue #1968, some H2 request
parsing errors do not result in a log being emitted. This is annoying
for debugging because while an RST_STREAM is correctly emitted to the
client, there's no way without enabling traces to find it on the
haproxy side.
After some testing with various abnormal requests, a few places were
found where logs were missing and could be added. In this case, we
simply use sess_log() so some sample fetch functions might not be
available since the stream is not created. But at least there will
be a BADREQ in the logs. A good example of this consists in sending
forbidden headers or forbidden header syntax (e.g. presence of LF in a value).
Some quick tests can be done this way:
- protocol error (LF in value):
curl -iv --http2-prior-knowledge -H "$(printf 'a:b\na')" http://0:8001/
- too large header block after decoding:
curl -v --http2-prior-knowledge -H "a:$(perl -e "print('a'x10000)")" -H "a:$(perl -e "print('a'x10000)")" http://localhost:8001/
This should be backported where needed, most likely 2.7 and 2.6 at
least for a start, and progressively to other versions.
The debug handler may deadlock with some threads waiting for isolation.
This may happen during a "show threads" command or even during a panic.
The reason is the call to thread_harmless_end() which waits for rdv_requests
to turn to zero before releasing its position in thread_dump_state,
while that one may not progress if another thread was interrupted in
thread_isolate() and is waiting for that thread to drop thread_dump_state.
In order to address this, we now use thread_harmless_end_sig() introduced
by previous commit:
MINOR: threads: add a thread_harmless_end() version that doesn't wait
However there's a catch: since commit f7afdd910 ("MINOR: debug: mark
oneself harmless while waiting for threads to finish"), there's a second
pair of thread_harmless_now()/thread_harmless_end() that surround the
loop around thread_dump_state. Marking a thread harmless before this
loop and dropping that without checking rdv_requests there could break
the harmless promise made to the other thread if it returns first and
proceeds with its isolated work. Hence we just drop this pair which was
only preventive for other signal handlers, while as indicated in that
patch's commit message, other signals are handled asynchronously and do
not require that extra protection.
This fix must be backported to 2.7.
The problem can be seen by running "show threads" in fast loops (100/s)
while reloading haproxy very quickly (10/s) and sending lots of traffic
to it (100krps, 15 Gbps). In this case the soft stop calls pool_gc()
which isolates a lot and manages to race with the dumps after a few
tens of seconds, leaving the process with all threads at 100%.
A few loops waiting for threads to synchronize such as thread_isolate()
rightfully filter the thread masks via the threads_enabled field that
contains the list of enabled threads. However, it doesn't use an atomic
load on it. Before 2.7, the equivalent variables were marked as volatile
and were always reloaded. In 2.7 they're fields in ha_tgroup_ctx[], and
the risk that the compiler keeps them in a register inside a loop is not
null at all. In practice when ha_thread_relax() calls sched_yield() or
an x86 PAUSE instruction, it could be verified that the variable is
always reloaded. If these are avoided (e.g. architecture providing
neither solution), it's visible in asm code that the variables are not
reloaded. In this case, if a thread exits just between the moment the
two values are read, the loop could spin forever.
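A minimal standalone example of the pattern (not the actual haproxy code):
  /* without an atomic (or volatile) load, the compiler may hoist the read of
   * <threads_enabled> out of the loop and spin forever if another thread
   * clears a bit meanwhile
   */
  static unsigned long threads_enabled = ~0UL;
  static void wait_for_others(unsigned long waiting_mask)
  {
      /* broken: while (waiting_mask & threads_enabled); */
      while (waiting_mask & __atomic_load_n(&threads_enabled, __ATOMIC_RELAXED))
          ;
  }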
This patch adds the required _HA_ATOMIC_LOAD() on the relevant
threads_enabled fields. It must be backported to 2.7.
Commit c1640f79f ("BUG/MEDIUM: fd/threads: fix incorrect thread selection
in wakeup broadcast") fixed an incorrect range being used to pick a thread
when broadcasting a wakeup for a foreign thread, but the selection was still
wrong as the number of threads and their mask was taken from the current
thread instead of the target thread. In addition, the code dealing with the
wakeup of a thread from the same group was still relying on MAX_THREADS
instead of tg->count.
This could theoretically cause random crashes with more than one thread
group though this was never encountered.
This needs to be backported to 2.7.
Implement the conversion of H3 request trailers as HTX blocks. This is
done through a new function h3_trailers_to_htx(). If the request
contains forbidden trailers it is rejected with a stream error.
This should be backported up to 2.7.
First, the inspect-delay is now tested if the action is used on a
tcp-response content rule. Then, when an expression's scope is checked, we
now take care to detect the right scope depending on the ruleset used
(tcp-request, tcp-response, http-request or http-response).
This patch could be backported to 2.7.
It is now possible to set a constant for the limit or period parameters on
set-bandwidth-limit actions. The limit must follow the HAProxy size format
and is expressed in bytes. The period must follow the HAProxy time format
and is expressed in milliseconds. Of course, it is still possible to use
sample expressions instead.
The documentation was updated accordingly.
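For example (illustrative snippet; it assumes a bandwidth limitation filter
named "mylimit" is already declared on the proxy):
  http-response set-bandwidth-limit mylimit limit 1m period 1s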
It is not really a bug. Only examples were written this way in the
documentation. But it could be good to backport this change in 2.7.
This patch implements the conversion of an HTX response containing
trailers into an H3 HEADERS frame. This is done through a new function
named h3_resp_trailers_send().
This was tested with a nginx configuration using <add_trailer>
statement.
It may be possible that the HTX buffer only contains an EOT block without
a preceding trailer. In this case, the conversion will produce nothing
but fin will be reported. This causes QUIC mux to generate an empty
STREAM frame with FIN bit set.
This should be backported up to 2.7.
Slightly adjust b_quic_enc_int(). This function is used to encode an
integer as a QUIC varint in a struct buffer.
A new parameter is added to the function API to specify the width of the
encoded integer. By default, 0 should be used to ensure that the minimum
space is used. Other valid values are 1, 2, 4 or 8. An error is reported
if the width is not large enough.
This new parameter will be useful when buffer space is reserved prior to
encoding an unknown integer value. The maximum size of 8 bytes will be
reserved and some data can be put after. When finally encoding the
integer, the width can be requested to be 8 bytes.
With this new parameter, a small refactoring of the function has been
conducted to remove some useless internal variables.
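For reference, RFC 9000 varints use a 2-bit length prefix selecting a 1, 2, 4
or 8 byte encoding; a standalone sketch of encoding with a forced width could
look like this (illustrative only, this is not the b_quic_enc_int() API):
  #include <stddef.h>
  #include <stdint.h>
  /* encode <val> on exactly <width> bytes (1, 2, 4 or 8); returns the number
   * of bytes written, or 0 if <width> is invalid or too small for <val>
   */
  static size_t quic_varint_enc(unsigned char *out, uint64_t val, size_t width)
  {
      static const uint64_t max[9] = {
          [1] = 0x3FULL, [2] = 0x3FFFULL, [4] = 0x3FFFFFFFULL,
          [8] = 0x3FFFFFFFFFFFFFFFULL,
      };
      size_t i;
      if (width != 1 && width != 2 && width != 4 && width != 8)
          return 0;
      if (val > max[width])
          return 0;
      for (i = 0; i < width; i++)
          out[i] = val >> (8 * (width - 1 - i));
      out[0] |= (width == 2 ? 0x40 : width == 4 ? 0x80 : width == 8 ? 0xC0 : 0x00);
      return width;
  }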
This should be backported up to 2.7. It will be mostly useful to
implement H3 trailers encoding.
Connection headers are not used in HTTP/3. As specified by RFC 9114, a
received message containing one of those is considered as malformed and
rejected. When converting an HTX message to HTTP/3, these headers are
silently skipped.
This must be backported up to 2.6. Note that assignment to <h3s.err>
must be removed on 2.6 as stream level error has been introduced in 2.7,
so this field does not exist in 2.6. A connection error will be used
instead automatically.
Pierre Cheynier reported a very rare race condition on soft-stop in the
listeners. What happens is that if a previously limited listener is
being resumed by another thread finishing an accept loop, and at the
same time a soft-stop is performed, the soft-stop will turn the
listener's state to LI_INIT, and once the listener's lock is released,
resume_listener() in the second thread will try to resume this listener
which has an fd==-1, yielding a crash in listener_set_state():
FATAL: bug condition "l->rx.fd == -1" matched at src/listener.c:288
The reason is that resume_listener() only checks for LI_READY, but doesn't
consider being called with a non-initialized or a stopped listener. Let's
also make sure we don't try to resuscitate such a listener there.
This will have to be backported to all versions.
When the JWT token signature is using ECDSA algorithm (ES256 for
instance), the signature is a direct concatenation of the R and S
parameters instead of OpenSSL's DER format (see section
3.4 of RFC7518).
The code that verified the signatures wrongly assumed that they came in
OpenSSL's format and it did not actually work.
We now have the extra step of converting the signature into a complete
ECDSA_SIG that can be fed into OpenSSL's digest verification functions.
The ECDSA signatures in the regtest had to be recalculated and it was
made via the PyJWT python library so that we don't end up checking
signatures that we built ourselves anymore.
This patch should fix GitHub issue #2001.
It should be backported up to branch 2.5.
Existing logic for checking whether a regex subexpression for pathinfo
is matched results in valid matches being ignored and non-matches having
a new zero length string stored in params->pathinfo. This patch reverses
the logic so params->pathinfo is set when the subexpression is matched.
Without this patch the example configuration in the documentation:
path-info ^(/.+\.php)(/.*)?$
does not result in PATH_INFO being sent to the FastCGI application, as
expected, when the second subexpression is matched (in which case both
pmatch[2].rm_so and pmatch[2].rm_eo will be non-negative integers).
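A standalone illustration of the corrected check (not the fcgi-app code
itself):
  #include <regex.h>
  #include <stdio.h>
  int main(void)
  {
      regex_t re;
      regmatch_t pmatch[3];
      const char *uri = "/app/index.php/extra/path";
      regcomp(&re, "^(/.+\\.php)(/.*)?$", REG_EXTENDED);
      /* the optional second capture matched only if rm_so is non-negative */
      if (regexec(&re, uri, 3, pmatch, 0) == 0 && pmatch[2].rm_so != -1)
          printf("PATH_INFO=%.*s\n",
                 (int)(pmatch[2].rm_eo - pmatch[2].rm_so), uri + pmatch[2].rm_so);
      regfree(&re);
      return 0;
  }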
This patch may be backported as far as 2.2, the first release that made
the capture of this second subexpression optional.
This sample fetch returns a boolean. True if the support for QUIC transport
protocol was built and if this protocol was not disabled by "no-quic"
global option.
Must be backported to 2.7.
Add "no-quic" to "global" section to disable the use of QUIC transport protocol
by all configured QUIC listeners. These are listeners with QUIC addresses on
their "bind" lines. Internally, the socket addresses binding is skipped by
protocol_bind_all() for receivers with <proto_quic4> or <proto_quic6> as
protocol (see protocol struct).
Add information about "no-quic" global option to the documentation.
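Example:
  global
      no-quic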
Must be backported to 2.7.
Set "disable_active_migration" transport parameter to inform the peer
haproxy listeners does not the connection migration feature.
Also drop all received datagrams with a modified source address.
Must be backported to 2.7.
In mux-h2 and mux-quic we still had two places manually setting
SE_FL_ERR_PENDING or SE_FL_ERROR depending on the EOS state, instead
of using se_fl_set_error() which takes care of the condition. Better
use the specialized function for this, it will allow to centralize
the conditions. Note that this will be needed to fix a bug.
While we do support quic4@ and quic6@ for listening addresses, it was
not possible to specify that we want to use an FD inherited from the
parent with QUIC. It's just a matter of making it possible to enable
a dgram-type socket and a stream-type transport, so let's add this.
Now it becomes possible to write "quic+fd@12", "quic+ipv4@addr" etc.
FDs inherited from a parent process do not deal well with suspend/resume
since commit 59b5da487 ("BUG/MEDIUM: listener: never suspend inherited
sockets") introduced in 2.3. The problem is that we now report that they
cannot be suspended at all, and they return a failure. As such, if a new
process fails to bind and sends SIGTTOU to the previous process, that
one will notice the failure and instantly switch to soft-stop, leaving
no chance to the new process to give up later and signal its failure.
What we need to do, however, is to stop receiving new connections from
such inherited FDs, which just means that the FD must be unsubscribed
from the poller (and resubscribed later if finally it has to stay).
With this a new process can start on the already bound FD without
problem thanks to the absence of polling, and when the old process
stops the new process will be alone on it.
This may be backported as far as 2.4.
Patrick Hemmer reported an interesting case where the status present in
the logs doesn't reflect what was reported to the user. During analysis
we could figure that it was in fact solely caused by the code dealing
with the set-status action. Indeed, set-status does update the status
in the HTX message itself but not in the HTTP transaction. However, at
most places where the status is needed to take a decision, it is
retrieved from the transaction, and the logs proceed like this as well,
though the "status" sample fetch function does retrieve it from the HTX
data. This particularly means that once a set-status has been used to
modify the status returned to the user, logs do not match that status,
and the response code distribution doesn't match either. However a
subsequent rule using the status as a condition will still match because
the "status" sample fetch function does also extract the status from the
HTX stream. Here's an example that fails:
frontend f
bind :8001
mode http
option httplog
log stdout daemon
http-after-response set-status 400
This will return a 400 to the client but log a 503 and increment
http_rsp_5xx.
In the end the root cause is that we need to make txn->status the only
authoritative place to get the status, and as such it must be updated
by the set-status rule. Ideally "status" should just use txn->status
but with the two synchronized this way it's not needed.
This should be backported since it addresses some consistency issues
between logs and what's observed. The set-status action appeared in
1.9 so all stable versions are eligible.
When the payload length cannot be determined, the htx extra field is set to
the magic value ULLONG_MAX. It is not obvious. Thus, a dedicated HTX value
is now used. Now, HTX_UNKOWN_PAYLOAD_LENGTH must be used in this case,
instead of ULLONG_MAX.
Since commit 473e0e54 ("BUG/MINOR: mux-h2: send a CANCEL instead of ES on
truncated writes"), a CANCEL may be reported when the response length is
unkown. It happens for H1 reponses without "Content-lenght" or
"Transfer-encoding" header.
Indeed, in this case, the end of the reponse is detected when the server
connection is closed. On the fontend side, the H2 multiplexer handles this
event as an abort and sensd a RST_STREAM frame with CANCEL error code.
The issue is not with the above commit but with the commit 4877045f1
("MINOR: mux-h2: make streams know if they need to send more data"). The
H2_SF_MORE_HTX_DATA flag must only be set if the payload length can be
determined.
This patch should fix the issue #1992. It must be backported to 2.7.
The error handling in the HTTP payload forwarding is far from ideal because
both sides (request and response) are tested each time. It is especially ugly
on the request side. To report a server error instead of a client error,
there are some workarounds to delay the error handling. The reason is that
the request analyzer is evaluated before the response one. In addition,
errors are tested before the data analysis. It means it is possible to
truncate data because errors may be handled too early.
So the error handling at this stage was totally reviewed. Aborts are now
handled after the data analysis. We also stop to finish the response on
request error or the opposite. As a side effect, the HTTP_MSG_ERROR state is
now useless. As another side effect, the termination flags are now set by
the HTTP analysers and not process_stream().
It was inherited from the legacy HTTP mode, but the message parsing is
handled by the underlying mux now. Thus, if a message is in HTTP_MSG_ERROR
state, it is just an analysis error and not a parsing error. So there is no
reason to block the HTTP sample fetch evaluation in this case.
This patch could be backported in all stable versions (For the 2.0, only the
htx part must be updated).
When HAProxy is waiting for the request body and an abort or an error is
detected, we can now use http_set_term_flags() function to set the termination
flags of the stream instead of handling it by hand.
When we wait for the request body, we are still in the request analysis. So
a SF_FINST_R flag must be reported in logs. Even if some data are already
received, at this stage, nothing is sent to the server.
This patch could be backported in all stable versions.
We use the new function to set the HTTP termination flags in the most
obvious places. The other places are a bit specific and will be handled one
by one in dedicated patches.
There is already a function to set termination flags but it is not well
suited for HTTP streams. So a function, dedicated to the HTTP analysis, was
added. This way, this new function will be called for HTTP analysers on
error. And if the error is not caught at this stage, the generic function
will still be called from process_stream().
Here, by default a PRXCOND error is reported and depending on the stream
state, the reason will be set accordingly:
* If the backend SC is in INI state, SF_FINST_T is reported on tarpit and
SF_FINST_R otherwise.
* SF_FINST_Q if the server connection is queued
* SF_FINST_C in any connection attempt state (REQ/TAR/ASS/CONN/CER/RDY).
Except for applets, a SF_FINST_R is reported.
* Once the server connection is established, SF_FINST_H is reported while
the response side is in HTTP_MSG_DATA state.
* SF_FINST_L is reported if the response is in HTTP_MSG_DONE state or
higher and a client error/timeout was reported.
* Otherwise SF_FINST_D is reported.
Since 2.6 with commit 34e4085f8 ("MEDIUM: peers: Balance applets across
threads") the initialization of a peers appctx may be postponed with a
wakeup, causing some partially initialized appctx to be visible. The
"show peers" command used to only care about peers without appctx, but
now it must also take care of those with no stconn, otherwise it can
occasionally crash while dumping them.
This fix must be backported to 2.6. Thanks to Patrick Hemmer for
reporting the problem.
During multiple tests we've already noticed that shared stats counters
have become a real bottleneck under large thread counts. With QUIC it's
pretty visible, with qc_snd_buf() taking 2.5% of the CPU on a 48-thread
machine at only 25 Gbps, and this CPU is entirely spent in the atomic
increment of the byte count and byte rate. It's also visible in H1/H2
but slightly less since we're working with larger buffers, hence less
frequent updates. These counters are exclusively used to report the
byte count in "show info" and the byte rate in the stats.
Let's move them to the thread_ctx struct and make the stats reader
just collect each thread's stats when requested. That's way more
efficient than competing on a single cache line.
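A minimal standalone sketch of the approach (not the actual thread_ctx
layout; slightly stale sums are fine for statistics):
  #include <stdint.h>
  #define MAX_THREADS 64
  /* one counter per thread, each on its own cache line: writers never contend */
  static struct { uint64_t bytes_out; char pad[56]; } th_ctx[MAX_THREADS];
  static inline void count_bytes(int tid, uint64_t n)
  {
      th_ctx[tid].bytes_out += n;
  }
  /* the (rare) stats reader sums all per-thread values on demand */
  static uint64_t total_bytes_out(int nbthread)
  {
      uint64_t total = 0;
      int i;
      for (i = 0; i < nbthread; i++)
          total += th_ctx[i].bytes_out;
      return total;
  }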
After this, qc_snd_buf has totally disappeared from the perf profile
and tests made in h1 show roughly 1% performance increase on small
objects.
When updating an OCSP response, in case of HTTP error (host unreachable
for instance) we do not want to reinsert the entry at the same place in
the update tree otherwise we might retry immediately the update of the
same response. This patch adds an arbitrary 1min time to the next_update
of a response in such a case.
After an HTTP error, instead of waking the update task up after an
arbitrary 10s time, we look for the first entry of the update tree and
sleep for the appropriate time.
In the unlikely event that the ocsp update task is started but the
update tree is empty, put the update task to sleep indefinitely.
The only way this can happen is if the same certificate is loaded under
two different names while the second one has the 'ocsp-update on'
option. Since the certificate names are distinct we will have two
ckch_stores but a single certificate_ocsp because they are identified by
the OCSP_CERTID which is built out of the issuer certificate and the
certificate id (which are the same regardless of the .pem file name).
If incompatibilities are found in a certificate's ocsp-update mode, we now
raise a single alert that will be considered fatal from here on. This
is changed because in case of incompatibilities we will end up with an
undefined behaviour. The ocsp response might or might not be updated
depending on the order in which the multiple ocsp-update options are
taken into account.
An arbitrary 5 minutes minimum interval between two updates of the same
OCSP response is defined but it was not properly used when inserting
entries in the update tree.
This patch does not need to be backported.
Since commit 36d9097cf ("MINOR: fd: Add BUG_ON checks on fd_insert()"),
there is currently a test in fd_insert() to detect that we're not trying
to reinsert an FD that had already been inserted. This test catches the
following anomalies:
frontend fail1
bind fd@0
bind fd@0
and:
frontend fail2
bind fd@0 shards 2
What happens is that clone_listener() is called on a listener already
having an FD, and when sock_{inet,unix}_bind_receiver() are called, the
same FD will be registered multiple times and rightfully crash in the
sanity check.
It wouldn't be correct to block shards though (e.g. they could be used
in a default-bind line). What looks like a safer and more future-proof
approach simply is to dup() the FD so that each listener has one copy.
This is also the only solution that might allow later to support more
than 64 threads on an inherited FD.
This needs to be backported as far as 2.4. Better wait for at least
one extra -dev version before backporting though, as the bug should
not be triggered often anyway.
The do_resolv action triggers a resolution and must wait for the
result. Concretely, if no cache entry is available, it creates a resolution
and wakes up the resolvers task. Then it yields. When the action is
recalled, if the resolution is still running, it yields again.
However, if the resolution is not running, the action does not check whether
it actually ran. Thus, it is possible to ignore the resolution because the
action was recalled before the resolvers task had a chance to be executed. If
there is no result yet, the action must yield.
This patch should fix the issue #1993. It must be backported as far as 2.0.
These both functions are buggy and don't respect the documentation. They
must wait for more data, if possible.
For Channel.data(), it must happen if not enough data was received or if no
length was specified and no data was received. The first case is properly
handled but not the second one. An empty string is returned instead. In
addition, if there is no data and the channel can't receive more data, 'nil'
value must be returned.
In the same spirit, for Channel.line(), we must try to wait for more data
when no line is found if not enough data was received or if no length was
specified. Here again, only the first case is properly handled. And for this
function too, 'nil' value must be returned if there is no data and the
channel can't receive more data.
This patch is related to the issue #1993. It must be backported as far as
2.5.
It is possible to have an "upgrade:" header and the corresponding value in
the "connection:" header for a non-101 response. It happens for
426-Upgrade-Required messages. However, on HAProxy side, a parsing error is
reported for this kind of message because no websocket key header
("sec-websocket-accept:") is found in the response.
So a possible fix could be to not perform this test for non-101
responses. However, having flags about protocol upgrade on this kind of
response could lead to other bugs. Instead, corresponding flags are
removed. Thus, during the H1 response post-parsing, H1_MF_CONN_UPG and
H1_MF_UPG_WEBSOCKET flags are removed from any non-101 response.
This patch should fix the issue #1997. It must be backported as far as 2.4.
Sending is done with several iterations over qcs streams in qc_send().
The first loop is conducted over streams in <qcc.send_list>. After this
first iteration, some streams may still have data in their Tx buffer but
were blocked by a full qc_stream_desc buffer. In this case, they have
released their qc_stream_desc buffer in qcc_streams_sent_done().
New iterations can be done for these streams which can allocate new
qc_stream_desc buffer if available. Before this patch, this was done
through another stream list <qcc.send_retry_list>. Now, we can reuse the
new <qcc.send_list> for this usage.
This is safe to use as after first iteration, we have guarantee that
either one of the following is true if there are still streams in
<qcc.send_list> :
* transport layer has rejected data due to congestion
* stream is left because it is blocked on stream flow control
* stream still has data and has released a fulfilled qc_stream_desc
buffer. Immediate retry is useful for these streams : they will
allocate a new qc_stream_desc buffer if possible to continue sending.
This must be backported up to 2.7.
When a STOP_SENDING or RESET_STREAM must be send, its corresponding qcs
is inserted into <qcc.send_list> via qcc_reset_stream() or
qcc_abort_stream_read().
This allows to remove the iteration on full qcs tree in qc_send().
Instead, STOP_SENDING and RESET_STREAM emission is done in the loop over
<qcc.send_list> as with STREAM frames. This should improve slightly the
performance, most notably when large number of streams are opened.
This must be backported up to 2.7.
Complete qcc_send_stream() function to allow to specify if the stream
should be handled in priority. Internally this will insert the qcs
instance in front of <qcc.send_list> to be able to treat it before other
streams.
This functionality is useful when some QUIC streams should be sent
before others. Most notably, this is used to guarantee that H3 SETTINGS
is done first via the control stream.
This must be backported up to 2.7.
Implement a mechanism to register streams ready to send data in new
STREAM frames. Internally, this is implemented with a new list
<qcc.send_list> which contains qcs instances.
A qcs can be registered safely using the new function qcc_send_stream().
This is done automatically in qc_send_buf() which covers most cases.
Also, application layer is free to use it for internal usage streams.
This is currently the case for H3 control stream with SETTINGS sending.
The main point of this patch is to handle stream sending fairly. This is
in stark contrast with previous code where streams with lower ID were
always prioritized. This could cause other streams to be indefinitely
blocked behind a stream which has a lot of data to transfer. Now,
streams are handled in an order scheduled by se_desc layer.
This commit is the first one of a series which will bring other
improvements also relying on the send_list implementation.
This must be backported up to 2.7 when deemed sufficiently stable.
Add new traces when QUIC flow-control limits are reached at stream or
connection level. This may help to explain an interrupted transfer.
This should be backported up to 2.6.
QUIC stream did not transfer its response if it was an empty HTTP
response without headers nor entity body. This is caused by an
incomplete condition on qc_send() which skips streams with empty
<tx.buf>.
Fix this by extending the condition. Sending will be conducted on a
stream if <tx.buf> is not empty or FIN notification must be provided.
This allows to send the last STREAM frame for this stream.
Such HTTP responses should be extremely rare so this bug is labelled as
MINOR. It was encountered with an HTTP/0.9 request on an empty payload.
The bug was triggered as HTTP/0.9 does not support headers in response
messages.
Also, note that the condition to wake up the MUX tasklet has been changed
similarly in qc_send_buf(). It is not mandatory for this to work properly
however, most probably because another tasklet_wakeup() is done
before/after.
This should be backported up to 2.6.
In applets, we stop processing when a write error (CF_WRITE_ERROR) or a shutdown
for writes (CF_SHUTW) is detected. However, any write error leads to an
immediate shutdown for writes. Thus, it is enough to only test if CF_SHUTW is
set.
When a read error (CF_READ_ERROR) is reported, a shutdown for reads is
always performed (CF_SHUTR). Thus, there is no reason to check if
CF_READ_ERROR is set if CF_SHUTR is also checked.
CF_READ_ATTACHED flag is only used in input events for stream analyzers,
CF_MASK_ANALYSER. A read event can be reported instead and this flag can be
removed. We must only take care to report a read event when the client
connection is upgraded from TCP to HTTP.
It appears CF_ANA_TIMEOUT is a flag only used in CF_MASK_ANALYSER. All
analyzer timeouts rely on the analysis expiration date (chn->analyse_exp).
Worst, once set, this flag is never removed. Thus this flag can be removed
and replaced by a read event (CF_READ_EVENT).
Thanks to previous changes, CF_WRITE_ACTIVITY flags can be removed.
Everywhere it was used, its value is now directly used
(CF_WRITE_EVENT|CF_WRITE_ERROR).
Thanks to previous changes, CF_READ_ACTIVITY flags can be removed.
Everywhere it was used, its value is now directly used
(CF_READ_EVENT|CF_READ_ERROR).
Just like CF_READ_PARTIAL, CF_WRITE_PARTIAL is now merged with
CF_WRITE_EVENT. There is a subtlety in sc_notify(). The "connect" event
(formerly CF_WRITE_NULL) is now detected with
(CF_WRITE_EVENT + sc->state < SC_ST_EST).
CF_READ_PARTIAL flag is now merged with CF_READ_EVENT. It means
CF_READ_EVENT is set when a read0 is received (formerly CF_READ_NULL) or when
data are received (formerly CF_READ_ACTIVITY).
There is nothing special here, except conditions to wake the stream up in
sc_notify(). Indeed, the test was changed a bit to reflect recent
changes. The read0 event is now formalized by (CF_READ_EVENT + CF_SHUTR).
As for CF_READ_NULL, it appears CF_WRITE_NULL and other write events on a
channel are mainly used to wake up the stream and may be replaced by one
write event.
In this patch, we introduce CF_WRITE_EVENT flag as a replacement to
CF_WRITE_NULL. There is no breaking change for now, it is just a
rename. Gradually, other write events will be merged with this one.
CF_READ_NULL flag is not really useful on its own. It is a transient event
used to wakeup the stream. As we will see, all read events on a channel may
be resumed to only one and are all used to wake up the stream.
In this patch, we introduce CF_READ_EVENT flag as a replacement to
CF_READ_NULL. There is no breaking change for now, it is just a
rename. Gradually, other read events will be merged with this one.
If CF_READ_NULL flag is set on a channel, it implies a shutdown for reads
was performed and CF_SHUTR is also set on this channel. Thus, there is no
reason to test if any of these flags is present, testing CF_SHUTR is enough.
When calling 'update ssl ocsp-response' with an unknown certificate file
name, the error message would mention a "ckch_store" which is an
internal structure unknown by users.
The ocsp_uri field of the certificate_ocsp structure was a 16k buffer
when it could have been allocated to just the required size to store the
OCSP URI. This field is now behaving the same way as the sctl and
ocsp_response buffers of the ckch_store structure.
If a given certificate is used multiple times in a configuration, the
ocsp_cid field would have been overwritten during each
ssl_sock_load_ocsp call even if it was previously filled.
This patch does not need to be backported.
If a configuration such as the following was included in a crt-list
file, it would not have raised a warning about 'ocsp-update'
inconsistencies for the concerned certificate:
cert.pem [ocsp-update on]
cert.pem
because the second line has a NULL entry->ssl_conf.
In the unlikely event that the OCSP update task is killed in the middle
of an update process (request sent but no response received yet) the
cur_ocsp member of the update context would keep an unneeded reference
to a certificate_ocsp object. It must then be freed during the task's
cleanup.
If the ocsp issuer certificate was actually taken from the certificate
chain in ssl_sock_load_ocsp, we don't need to keep an extra reference on
it since we already keep a reference to the full certificate chain.
When calling OCSP_basic_verify to check the validity of the received
OCSP response, we need to provide an untrusted certificate chain as well
as an X509_STORE holding only trusted certificates. Since the
certificate chain and the issuer certificate are all provided by the
user, we assume that they are valid and we add them all to a temporary
store. This enables to focus only on the response's validity.
When ocsp-update is enabled for a given certificate, its
certificate_ocsp object is inserted in two separate trees (the actual
ocsp response one and the ocsp update one). But since the same instance
is used for the two trees, its ownership is kept by the regular ocsp
response one. The ocsp update task should then never have to free the
ocsp entries. The crash actually occurred because of this. The update
task was freeing entries whose reference counter was not increased while
a reference was still held by the SSL_CTXs.
The only time during which the ocsp update task will need to increase
the reference counter is during an actual update, because at this moment
the entry is taken out of the update tree and a 'flying' reference to
the certificate_ocsp is kept in the ocsp update context.
This bug could be reproduced by calling './haproxy -f conf.cfg -c' with
any of the used certificates having the 'ocsp-update on' option. For
some reason asan caught the bug easily but valgrind did not.
This patch does not need to be backported.
This CLI command crashed when called for a certificate which did not
have an OCSP response during startup because it assumed that the
ocsp_issuer pointer of the ckch_data object would be valid. It was only
true for already known OCSP responses though.
The ocsp issuer certificate is now taken either from the ocsp_issuer
pointer or looked for in the certificate chain. This is the same logic
as the one in ssl_sock_load_ocsp.
This patch does not need to be backported.
Released version 2.8-dev1 with the following main changes :
- MEDIUM: 51d: add support for 51Degrees V4 with Hash algorithm
- MINOR: debug: support pool filtering on "debug dev memstats"
- MINOR: debug: add a balance of alloc - free at the end of the memstats dump
- LICENSE: wurfl: clarify the dummy library license.
- MINOR: event_hdl: add event handler base api
- DOC/MINOR: api: add documentation for event_hdl feature
- MEDIUM: ssl: rename the struct "cert_key_and_chain" to "ckch_data"
- MINOR: quic: remove qc from quic_rx_packet
- MINOR: quic: complete traces in qc_rx_pkt_handle()
- MINOR: quic: extract datagram parsing code
- MINOR: tools: add port for ipcmp as optional criteria
- MINOR: quic: detect connection migration
- MINOR: quic: ignore address migration during handshake
- MINOR: quic: startup detect for quic-conn owned socket support
- MINOR: quic: test IP_PKTINFO support for quic-conn owned socket
- MINOR: quic: define config option for socket per conn
- MINOR: quic: allocate a socket per quic-conn
- MINOR: quic: use connection socket for emission
- MEDIUM: quic: use quic-conn socket for reception
- MEDIUM: quic: move receive out of FD handler to quic-conn io-cb
- MINOR: mux-quic: rename duplicate function names
- MEDIUM: quic: requeue datagrams received on wrong socket
- MINOR: quic: reconnect quic-conn socket on address migration
- MINOR: quic: activate socket per conn by default
- BUG/MINOR: ssl: initialize SSL error before parsing
- BUG/MINOR: ssl: initialize WolfSSL before parsing
- BUG/MINOR: quic: fix fd leak on startup check quic-conn owned socket
- BUG/MEDIIM: stconn: Flush output data before forwarding close to write side
- MINOR: server: add srv->rid (revision id) value
- MINOR: stats: add server revision id support
- MINOR: server/event_hdl: add support for SERVER_ADD and SERVER_DEL events
- MINOR: server/event_hdl: add support for SERVER_UP and SERVER_DOWN events
- BUG/MEDIUM: checks: do not reschedule a possibly running task on state change
- BUG/MINOR: checks: make sure fastinter is used even on forced transitions
- CLEANUP: assorted typo fixes in the code and comments
- MINOR: mworker: display an alert upon a wait-mode exit
- BUG/MEDIUM: mworker: fix segv in early failure of mworker mode with peers
- BUG/MEDIUM: mworker: create the mcli_reload socketpairs in case of upgrade
- BUG/MINOR: checks: restore legacy on-error fastinter behavior
- MINOR: check: use atomic for s->consecutive_errors
- MINOR: stats: properly handle ST_F_CHECK_DURATION metric
- MINOR: mworker: remove unused legacy code in mworker_cleanlisteners
- MINOR: peers: unused code path in process_peer_sync
- BUG/MINOR: init/threads: continue to limit default thread count to max per group
- CLEANUP: init: remove useless assignment of nbthread
- BUILD: atomic: atomic.h may need compiler.h on ARMv8.2-a
- BUILD: makefile/da: also clean Os/ in Device Atlas dummy lib dir
- BUG/MEDIUM: httpclient/lua: double LIST_DELETE on end of lua task
- CLEANUP: pools: move the write before free to the uaf-only function
- CLEANUP: pool: only include pool-os from pool.c not pool.h
- REORG: pool: move all the OS specific code to pool-os.h
- CLEANUP: pools: get rid of CONFIG_HAP_POOLS
- DEBUG: pool: show a few examples in -dMhelp
- MINOR: pools: make DEBUG_UAF a runtime setting
- BUG/MINOR: promex: create haproxy_backend_agg_server_status
- MINOR: promex: introduce haproxy_backend_agg_check_status
- DOC: promex: Add missing backend metrics
- BUG/MAJOR: fcgi: Fix uninitialized reserved bytes
- REGTESTS: fix the race conditions in iff.vtc
- CI: github: reintroduce openssl 1.1.1
- BUG/MINOR: quic: properly handle alloc failure in qc_new_conn()
- BUG/MINOR: quic: handle alloc failure on qc_new_conn() for owned socket
- CLEANUP: mux-quic: remove unused attribute on qcs_is_close_remote()
- BUG/MINOR: mux-quic: remove qcs from opening-list on free
- BUG/MINOR: mux-quic: handle properly alloc error in qcs_new()
- CI: github: split ssl lib selection based on git branch
- REGTESTS: startup: check maxconn computation
- BUG/MINOR: startup: don't use internal proxies to compute the maxconn
- REGTESTS: startup: change the expected maxconn to 11000
- CI: github: set ulimit -n to a greater value
- REGTESTS: startup: activate automatic_maxconn.vtc
- MINOR: sample: add param converter
- CLEANUP: ssl: remove check on srv->proxy
- BUG/MEDIUM: freq-ctr: Don't compute overshoot value for empty counters
- BUG/MEDIUM: resolvers: Use tick_first() to update the resolvers task timeout
- REGTESTS: startup: add alternatives values in automatic_maxconn.vtc
- BUG/MEDIUM: h3: reject request with invalid header name
- BUG/MEDIUM: h3: reject request with invalid pseudo header
- MINOR: http: extract content-length parsing from H2
- BUG/MEDIUM: h3: parse content-length and reject invalid messages
- CI: github: remove redundant ASAN loop
- CI: github: split matrix for development and stable branches
- BUG/MEDIUM: mux-h1: Don't release H1 stream upgraded from TCP on error
- BUG/MINOR: mux-h1: Fix test instead a BUG_ON() in h1_send_error()
- MINOR: http-htx: add BUG_ON to prevent API error on http_cookie_register
- BUG/MEDIUM: h3: fix cookie header parsing
- BUG/MINOR: h3: fix memleak on HEADERS parsing failure
- MINOR: h3: check return values of htx_add_* on headers parsing
- MINOR: ssl: Remove unneeded buffer allocation in show ocsp-response
- MINOR: ssl: Remove unnecessary alloc'ed trash chunk in show ocsp-response
- BUG/MINOR: ssl: Fix memory leak of find_chain in ssl_sock_load_cert_chain
- MINOR: stats: provide ctx for dumping functions
- MINOR: stats: introduce stats field ctx
- BUG/MINOR: stats: fix show stat json buffer limitation
- MINOR: stats: make show info json future-proof
- BUG/MINOR: quic: fix crash on PTO rearm if anti-amplification reset
- BUILD: 51d: fix build issue with recent compilers
- REGTESTS: startup: disable automatic_maxconn.vtc
- BUILD: peers: peers-t.h depends on stick-table-t.h
- BUG/MEDIUM: tests: use tmpdir to create UNIX socket
- BUG/MINOR: mux-h1: Report EOS on parsing/internal error for not running stream
- BUG/MINOR:: mux-h1: Never handle error at mux level for running connection
- BUG/MEDIUM: stats: Rely on a local trash buffer to dump the stats
- OPTIM: pool: split the read_mostly from read_write parts in pool_head
- MINOR: pool: make the thread-local hot cache size configurable
- MINOR: freq_ctr: add opportunistic versions of swrate_add()
- MINOR: pool: only use opportunistic versions of the swrate_add() functions
- REGTESTS: ssl: enable the ssl_reuse.vtc test for WolfSSL
- BUG/MEDIUM: mux-quic: fix double delete from qcc.opening_list
- BUG/MEDIUM: quic: properly take shards into account on bind lines
- BUG/MINOR: quic: do not allocate more rxbufs than necessary
- MINOR: ssl: Add a lock to the OCSP response tree
- MINOR: httpclient: Make the CLI flags public for future use
- MINOR: ssl: Add helper function that extracts an OCSP URI from a certificate
- MINOR: ssl: Add OCSP request helper function
- MINOR: ssl: Add helper function that checks the validity of an OCSP response
- MINOR: ssl: Add "update ssl ocsp-response" cli command
- MEDIUM: ssl: Add ocsp_certid in ckch structure and discard ocsp buffer early
- MINOR: ssl: Add ocsp_update_tree and helper functions
- MINOR: ssl: Add crt-list ocsp-update option
- MINOR: ssl: Store 'ocsp-update' mode in the ckch_data and check for inconsistencies
- MEDIUM: ssl: Insert ocsp responses in update tree when needed
- MEDIUM: ssl: Add ocsp update task main function
- MEDIUM: ssl: Start update task if at least one ocsp-update option is set to on
- DOC: ssl: Add documentation for ocsp-update option
- REGTESTS: ssl: Add tests for ocsp auto update mechanism
- MINOR: ssl: Move OCSP code to a dedicated source file
- BUG/MINOR: ssl/ocsp: check chunk_strcpy() in ssl_ocsp_get_uri_from_cert()
- CLEANUP: ssl/ocsp: add spaces around operators
- BUG/MEDIUM: mux-h2: Refuse interim responses with end-stream flag set
- BUG/MINOR: pool/stats: Use ullong to report total pool usage in bytes in stats
- BUG/MINOR: ssl/ocsp: httpclient blocked when doing a GET
- MINOR: httpclient: don't add body when istlen is empty
- MEDIUM: httpclient: change the default log format to skip duplicate proxy data
- BUG/MINOR: httpclient/log: free of invalid ptr with httpclient_log_format
- MEDIUM: mux-quic: implement shutw
- MINOR: mux-quic: do not count stream flow-control if already closed
- MINOR: mux-quic: handle RESET_STREAM reception
- MEDIUM: mux-quic: implement STOP_SENDING emission
- MINOR: h3: use stream error when needed instead of connection
- CI: github: enable github api authentication for OpenSSL tags read
- BUG/MINOR: mux-quic: ignore remote unidirectional stream close
- CI: github: use the GITHUB_TOKEN instead of a manually generated token
- BUILD: makefile: build the features list dynamically
- BUILD: makefile: move common options-oriented macros to include/make/options.mk
- BUILD: makefile: sort the features list
- BUILD: makefile: initialize all build options' variables at once
- BUILD: makefile: add a function to collect all options' CFLAGS/LDFLAGS
- BUILD: makefile: start to automatically collect CFLAGS/LDFLAGS
- BUILD: makefile: ensure that all USE_* handlers appear before CFLAGS are used
- BUILD: makefile: clean the wolfssl include and lib generation rules
- BUILD: makefile: make sure to also ignore SSL_INC when using wolfssl
- BUILD: makefile: reference libdl only once
- BUILD: makefile: make sure LUA_INC and LUA_LIB are always initialized
- BUILD: makefile: do not restrict Lua's prepend path to empty LUA_LIB_NAME
- BUILD: makefile: never force -latomic, set USE_LIBATOMIC instead
- BUILD: makefile: add an implicit USE_MATH variable for -lm
- BUILD: makefile: properly report USE_PCRE/USE_PCRE2 in features
- CLEANUP: makefile: properly indent ifeq/ifneq conditional blocks
- BUILD: makefile: rework 51D to split v3/v4
- BUILD: makefile: support LIBCRYPT_LDFLAGS
- BUILD: makefile: support RT_LDFLAGS
- BUILD: makefile: support THREAD_LDFLAGS
- BUILD: makefile: support BACKTRACE_LDFLAGS
- BUILD: makefile: support SYSTEMD_LDFLAGS
- BUILD: makefile: support ZLIB_CFLAGS and ZLIB_LDFLAGS
- BUILD: makefile: support ENGINE_CFLAGS
- BUILD: makefile: support OPENSSL_CFLAGS and OPENSSL_LDFLAGS
- BUILD: makefile: support WOLFSSL_CFLAGS and WOLFSSL_LDFLAGS
- BUILD: makefile: support LUA_CFLAGS and LUA_LDFLAGS
- BUILD: makefile: support DEVICEATLAS_CFLAGS and DEVICEATLAS_LDFLAGS
- BUILD: makefile: support PCRE[2]_CFLAGS and PCRE[2]_LDFLAGS
- BUILD: makefile: refactor support for 51DEGREES v3/v4
- BUILD: makefile: support WURFL_CFLAGS and WURFL_LDFLAGS
- BUILD: makefile: make all OpenSSL variants use the same settings
- BUILD: makefile: remove the special case of the SSL option
- BUILD: makefile: only consider settings from enabled options
- BUILD: makefile: also list per-option settings in 'make opts'
- BUG/MINOR: debug: don't mask the TH_FL_STUCK flag before dumping threads
- MINOR: cfgparse-ssl: avoid a possible crash on OOM in ssl_bind_parse_npn()
- BUG/MINOR: ssl: Missing goto in error path in ocsp update code
- BUG/MINOR: stick-table: report the correct action name in error message
- CI: Improve headline in matrix.py
- CI: Add in-memory cache for the latest OpenSSL/LibreSSL
- CI: Use proper `if` blocks instead of conditional expressions in matrix.py
- CI: Unify the `GITHUB_TOKEN` name across matrix.py and vtest.yml
- CI: Explicitly check environment variable against `None` in matrix.py
- CI: Reformat `matrix.py` using `black`
- MINOR: config: add environment variables for default log format
- REGTESTS: Remove REQUIRE_VERSION=1.9 from all tests
- REGTESTS: Remove REQUIRE_VERSION=2.0 from all tests
- REGTESTS: Remove tests with REQUIRE_VERSION_BELOW=1.9
- BUG/MINOR: http-fetch: Only fill txn status during prefetch if not already set
- BUG/MAJOR: buf: Fix copy of wrapping output data when a buffer is realigned
- DOC: config: fix alphabetical ordering of http-after-response rules
- MINOR: http-rules: Add missing actions in http-after-response ruleset
- DOC: config: remove duplicated "http-response sc-set-gpt0" directive
- BUG/MINOR: proxy: free orgto_hdr_name in free_proxy()
- REGTEST: fix the race conditions in json_query.vtc
- REGTEST: fix the race conditions in add_item.vtc
- REGTEST: fix the race conditions in digest.vtc
- REGTEST: fix the race conditions in hmac.vtc
- BUG/MINOR: fd: avoid bad tgid assertion in fd_delete() from deinit()
- BUG/MINOR: http: Memory leak of http redirect rules' format string
- MEDIUM: stick-table: set the track-sc limit at boottime via tune.stick-counters
- MINOR: stick-table: implement the sc-add-gpc() action
This action increments the General Purpose Counter at the index <idx> of
the array associated to the sticky counter designated by <sc-id> by the
value of either integer <int> or the integer evaluation of expression
<expr>. Integers and expressions are limited to unsigned 32-bit values.
If an error occurs, this action silently fails and the actions evaluation
continues. <idx> is an integer between 0 and 99 and <sc-id> is an integer
between 0 and 2. It also silently fails if there is no GPC stored at
this index. The entry in the table is refreshed even if the value is zero.
The 'gpc_rate' is automatically adjusted to reflect the average growth
rate of the gpc value.
The main use of this action is to count scores or total volumes (e.g.
estimated danger per source IP reported by the server or a WAF, total
uploaded bytes, etc).
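For illustration, a minimal (untested) configuration using this action could
look as follows; the names, table sizes and the fixed score are arbitrary, and
the table must store a gpc array:
    backend st_src
        stick-table type ip size 100k expire 30m store gpc(1),gpc_rate(1,10s)
    frontend fe_main
        bind :8080
        http-request track-sc0 src table st_src
        # add a fixed score of 10 to GPC index 0 of the entry tracked by sc0
        http-request sc-add-gpc(0,0) 10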
The number of stick-counter entries usable by track-sc rules is currently
set at build time. There is no good value for this since the vast majority
of users don't need any, most need only a few and rare users need more.
Adding more counters for everyone increases memory and CPU usages for no
reason.
This patch moves the per-session and per-stream arrays to a pool of a size
defined at boot time. This way it becomes possible to set the number of
entries at boot time via a new global setting "tune.stick-counters" that
sets the limit for the whole process. When not set, the MAX_SESS_STR_CTR
value still applies, or 3 if not set, as before.
It is also possible to lower the value to 0 to save a bit of memory if
not used at all.
Note that a few low-level sample-fetch functions had to be protected due
to the ability to use sample-fetches in the global section to set some
variables.
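For example, the limit can be raised in the global section (the value below is
arbitrary):
    global
        # allow up to 5 simultaneous track-sc* entries per session/stream
        tune.stick-counters 5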
When the configuration contains such a line:
http-request redirect location /
a "struct logformat_node" object is created and it contains an "arg"
member which gets alloc'ed as well in which we copy the new location
(see add_to_logformat_list). This internal arg pointer was not freed in
the dedicated release_http_redir release function.
Likewise, the expression pointer was not released either.
This patch can be backported to all stable branches. It should apply
as-is all the way to 2.2 but it won't on 2.0 because release_http_redir
did not exist yet.
In 2.7, commit 0dc1cc93b ("MAJOR: fd: grab the tgid before manipulating
running") added a check to make sure we never try to delete an FD from
the wrong thread group. It already handles the specific case of an
isolated thread (e.g. stop a listener from the CLI) but forgot to take
into account the deinit() code iterating over all idle server connections
to close them. This results in the crash below during deinit() if thread
groups are enabled and idle connections exist on a thread group higher
than 1.
[WARNING] (15711) : Proxy decrypt stopped (cumulated conns: FE: 64, BE: 374511).
[WARNING] (15711) : Proxy stats stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] (15711) : Proxy GLOBAL stopped (cumulated conns: FE: 0, BE: 0).
FATAL: bug condition "fd_tgid(fd) != ti->tgid && !thread_isolated()" matched at src/fd.c:369
call trace(11):
| 0x4a6060 [c6 04 25 01 00 00 00 00]: main-0x1d60
| 0x67fcc6 [c7 43 68 fd ad de fd 5b]: sock_conn_ctrl_close+0x16/0x1f
| 0x59e6f5 [48 89 ef e8 83 65 11 00]: main+0xf6935
| 0x60ad16 [48 8b 1b 48 81 fb a0 91]: free_proxy+0x716/0xb35
| 0x62750e [48 85 db 74 35 48 89 dd]: deinit+0xbe/0x87a
| 0x627ce2 [89 ef e8 97 76 e7 ff 0f]: deinit_and_exit+0x12/0x19
| 0x4a9694 [bf e6 ff 9d 00 44 89 6c]: main+0x18d4/0x2c1a
There's no harm though since all traffic already ended. This must be
backported to 2.7.
Unlike fwdfor_hdr_name, orgto_hdr_name was not properly freed in
free_proxy().
This did not cause observable memory leaks because originalto proxy option is
only used for user configurable proxies, which are solely freed right before
process termination.
No backport needed unless some architectural changes causing regular proxies
to be freed and reused multiple times within single process lifetime are made.
This patch adds the support of following actions in the http-after-response
ruleset:
* set-map, del-map and del-acl
* set-log-level
* sc-inc-gpc, sc-inc-gpc0 and sc-inc-gpc1
* sc-set-gpt and sc-set-gpt0
This patch should solve the issue #1980.
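For illustration, such rules could now look like this (the map path and the
conditions are arbitrary):
    http-after-response set-log-level silent if { status 404 }
    http-after-response sc-inc-gpc0(0) if { status ge 500 }
    http-after-response set-map(/etc/haproxy/maps/status.map) %[path] %[status]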
When an HTTP sample fetch is evaluated, a prefetch is performed to check the
channel contains a valid HTTP message. If the HTTP analysis was not already
started, some info are filled.
It may be an issue when an error is returned before the response analysis
and when http-after-response rules are used because the original HTTP txn
status may be crushed. For instance, with the following configuration:
listen l1
log global
mode http
bind :8000
log-format ST=%ST
http-after-response set-status 400
#http-after-response set-var(res.foo) status
A "ST=503" is reported in the log messages, independently of the first
http-after-response rule. The same must happen if the second rule is
uncommented. However, for now, a "ST=400" is logged.
To fix the bug, during the prefetch, the HTTP txn status is only set if it
is undefined (-1). This way, we are sure the original one is never lost.
This patch should be backported as far as 2.2.
This patch provides a convenient way to override the default TCP, HTTP
and HTTP log formats. Instead of having a look into the documentation
to figure out what is the appropriate default log format three new
environment variables can be used: HAPROXY_TCP_LOG_FMT,
HAPROXY_HTTP_LOG_FMT and HAPROXY_HTTPS_LOG_FMT. Their content is
substituted verbatim.
These variables are set before parsing the configuration and are unset
just after all configuration files are successfully parsed.
Example:
# Instead of writing this long log-format line...
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC \
%CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r \
lr=last_rule_file:last_rule_line"
# ..the HAPROXY_HTTP_LOG_FMT can be used to provide the default
# http log-format string
log-format "${HAPROXY_HTTP_LOG_FMT} lr=last_rule_file:last_rule_line"
Please note that nothing prevents users from unsetting the variables or
overriding their content in a global section.
Signed-off-by: Sébastien Gross <sgross@haproxy.com>
sc-inc-gpc() learned to use arrays in 2.5 with commit 4d7ada8f9 ("MEDIUM:
stick-table: add the new arrays of gpc and gpc_rate"), but the error
message says "sc-set-gpc" instead of "sc-inc-gpc". Let's fix this to
avoid confusion.
This can be backported to 2.5.
When converting an OCSP request's information into base64, the return
value of a2base64 is checked but processing is not interrupted when it
returns a negative value, which was caught by coverity.
This patch fixes GitHub issue #1974.
It does not need to be backported.
Upon out of memory condition at boot, we could possibly crash when
parsing the "npn" bind line keyword since it's used unchecked. There's
no real need to backport this though it will not hurt.
Commit f0c86ddfe ("BUG/MEDIUM: debug: fix parallel thread dumps again")
added a clearing of the TH_FL_STUCK flag before dumping threads in case
of parallel dumps, but that was in part a sort of workaround for some
remains of the commit that introduced the flag in 2.0 before the watchdog
existed, and which would set it after dumping a thread: e6a02fa65 ("MINOR:
threads: add a "stuck" flag to the thread_info struct"), and in part an
attempt to avoid that a thread waiting for too long during the dump would
get the flag set. But that cannot happen: a thread waiting to be dumped has
the harmless bit set and doesn't get the stuck bit. What happens in fact is
that issuing "show threads" in fast loops ends up causing some threads to
keep the STUCK bit that was set at the end of the previous "show threads",
which confuses the output.
The problem with doing this is that the flag is cleared before the thread
is dumped, and since this flag is used to decide whether to show a backtrace
or not, we don't get backtraces anymore of stuck threads since the commit
above in 2.7.
This patch just removes the two points where the flag was cleared by the
commit above. It should be backported to 2.7.
Remove ABORT_NOW() on remote unidirectional stream closure. This is
required to ensure our implementation is flexible enough to not fail on
unknown stream types.
Note that for the moment MAX_STREAMS_UNI flow-control frame is never
emitted. This should be unnecessary for HTTP/3, which has a limited
usage of unidirectional streams, but may be required if other application
protocols are supported in the future.
ABORT_NOW() was triggered by s2n-quic which opens an unknown
unidirectional stream with greasing. This was detected by QUIC interop
runner for http3 testcase.
This must be backported up to 2.6.
Use a stream error when possible instead of always closing the whole
connection. This requires a new field <err> in h3s structure.
Change slightly the decoding loop to facilitate error propagation. It
will be interrupted as soon as <h3s.err> or <h3c.err> is non null. In
the latter case, a CONNECTION_CLOSE is requested through
qcc_emit_cc_app().
For stream error, H3 layer uses qcc_abort_stream_read() coupled with
qcc_reset_stream(). This is in conformance with RFC 9114 which
recommends to use STOP_SENDING + RESET_STREAM emission on stream error.
This commit is part of implementing H3 errors at the stream level.
This should be backported up to 2.7.
Implement STOP_SENDING. This is divided into two main functions :
* qcc_abort_stream_read() which can be used by the application protocol to
request a STOP_SENDING. This sets the flag QC_SF_READ_ABORTED.
* qcs_send_reset() is a static function called after the preceding one.
It will send a STOP_SENDING via qcc_send().
QC_SF_READ_ABORTED flag is now properly used : if activated on a stream
during qcc_recv(), <qcc.app_ops.decode_qcs> callback is skipped. Also,
abort reading on unknown unidirectional remote streams is now fully
supported with the emission of a STOP_SENDING as specified by RFC 9000.
This commit is part of implementing H3 errors at the stream level. This
will allow the H3 layer to request the peer to close its endpoint for
an error on a stream.
This should be backported up to 2.7.
Implement RESET_STREAM reception by mux-quic. On reception, the qcs instance
will be marked as remotely closed and its Rx buffer released. The stream
layer will be flagged on error if still attached.
This commit is part of implementing H3 errors at the stream level.
Indeed, on H3 stream errors, STOP_SENDING + RESET_STREAM should be
emitted. The STOP_SENDING will in turn generate a RESET_STREAM by the
remote peer which will be handled thanks to this patch.
This should be backported up to 2.7.
It is unnecessary to increase stream credit once its size is known.
Indeed, a peer cannot send a greater offset than the value advertised.
Otherwise, the connection will be closed on STREAM reception with
FINAL_SIZE_ERROR.
This commit is a small optimization and may prevent the emission of
unneeded MAX_STREAM_DATA frames on some occasions.
It should be backported up to 2.7.
Implement mux_ops shutw operation for QUIC mux. A RESET_STREAM is
emitted unless the stream is already closed due to all data or
RESET_STREAM already transmitted.
This operation is notably useful when upper stream layer wants to close
the connection early due to an error.
This was tested by using a HTTP server which listens with PROXY protocol
support. The corresponding server line in the haproxy configuration
deliberately does not specify send-proxy. This causes the server to close
the connection abruptly. Without this patch, nothing was done on the QUIC
stream, which was kept open until the whole connection was closed. Now, a
proper RESET_STREAM is emitted to report the error.
This should be backported up to 2.7.
The httpclient emits logs in the httplog format, however it still
displays the frontend, the backend and the server.
In the case of the httpclient we only need to know that we are using the
httpclient, so the backend and server information are irrelevant.
When the httpclient is used from extra code, the proxy name can be long
and will be displayed twice, which is not useful.
This is the same log-format as the httplog but the %b/%s is now -/- so
the format is still compatible with an httplog parser.
Before:
<134>Dec 22 15:19:27 haproxy[1013520]: -:- [22/Dec/2022:15:19:27.482] <HTTPCLIENT> <HTTPCLIENT>/<HTTPCLIENT> 2/0/4/6/10 200 848 - - ---- 0/0/0/0/0 0/0 {92.123.236.161} "GET http://r3.o.lencr.org/1234 HTTP/1.1"
After:
<134>Dec 22 15:19:27 haproxy[1013520]: -:- [22/Dec/2022:15:19:27.482] <HTTPCLIENT> -/- 2/0/4/6/10 200 848 - - ---- 0/0/0/0/0 0/0 {92.123.236.161} "GET http://r3.o.lencr.org/1234 HTTP/1.1"
Don't try to create a request with a body in httpclient_req_gen() if the
payload ist has a ptr but no len.
Sometimes people have their httpclient stuck because they use an ist
with a data ptr but no len. Check the len so this mistake doesn't block
the client.
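A minimal illustration of the mistake (variable names are arbitrary):
    struct ist body;

    body = ist2(payload, 0);           /* wrong: len 0 means "no payload" */
    body = ist2(payload, payload_len); /* right: carry the real length */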
The same change was already performed for the cli. The stats applet and the
prometheus exporter are also concerned. Both use the stats API and rely on
pool functions to get total pool usage in bytes. pool_total_allocated() and
pool_total_used() must return 64 bits unsigned integer to avoid any wrapping
around 4G.
This may be backported to all versions.
As stated in RFC9113#8.1, a HEADERS frame with the ES flag set that carries
an informational status code is malformed. However, there was no test on
this condition.
On 2.4 and higher, it is hard to predict consequences of this bug because
end of the message is only reported with a flag. But on 2.2 and lower, it
leads to a crash because there is an unexpected extra EOM block at the end
of an interim response.
Now, when an ES flag is detected on a HEADERS frame for an interim message, a
stream error is sent (RST_STREAM/PROTOCOL_ERROR).
This patch should solve the issue #1972. It should be backported as far as
2.0.
This patch effectively enables the ocsp auto update mechanism. If at
least one ocsp-update option is enabled in a crt-list, then the ocsp
auto update task is created. It will look into the dedicated ocsp update
tree for the next entry to be updated, use the http_client to send the
ocsp request to the proper responder, validate the received ocsp
response and update the ocsp response tree before finally reinserting
the entry in the ocsp update tree (with a next update time set to
now+1H).
The main task will then sleep until another entry needs to be updated.
The task gets scheduled after config check in order to avoid trying to
update ocsp responses while configuration is still being parsed (and
certificates and actual ocsp responses are loaded).
This patch contains the main function of the ocsp auto update mechanism
as well as an init and destroy function of the task used for this.
The task is not created in this patch but in a later one.
The function has two distinct parts and the branching to one or the
other is completely based on the fact that the cur_ocsp pointer of the
ssl_ocsp_task_ctx member is set.
If the pointer is not set, we need to look at the first item of the
update tree and see if it needs to be updated. If it does not, we simply
wait until the time is right and leave the task asleep. If it does need to
be updated, we simply build and send the corresponding ocsp request
thanks to the http_client. The task is then sent to sleep with an expire
time set to infinity. The http_client will wake it back up once the
response is received (or a timeout occurs). Just note that during this
whole process the certificate_ocsp object corresponding to the entry
being updated is taken out of the update tree and only stored in the
ssl_ocsp_task_ctx context.
Once the task is woken up by the http_client, it branches on the
response processing part of the function which basically checks that the
response is valid and inserts it into the ocsp_response tree. The task
then goes back to sleep until another entry needs to be updated.
When 'ocsp-update' is enabled for a given certificate, we need to insert
the certificate_ocsp member of this certificate in the OCSP update tree
as well as the already existing OCSP response tree. For such an entry to
be created, the certificate needs to contain an "OCSP URI" field, and we
also need to know the certificate's issuer, that is used to build the
OCSP_CERTID. When no OCSP response is known for a given certificate, an
empty certificate_ocsp object gets created so that it can be inserted in
the ocsp update tree.
The entry is inserted on the first spot of the update tree since its
expire time is 0. Then whenever the update task is started, it will try
to get responses for those certificates first.
In order for the update process to work, we also need to store some
information relative to the main certificate into the certificate_ocsp
structure. This avoids having to keep a reference to a ckch in an ocsp
tree entry.
This patch adds a reference to the certificate chain as well as the ocsp
issuer that might have been filled during init into the certificate_ocsp
object. It also gets the ocsp uri at this time since it is contained in
the server's certificate. We only take the first uri that might be
contained in the certificate though.
Those fields are only filled when ocsp auto update is enabled for the
concerned certificate.
The 'ocsp-update' option is parsed at the same time as all the other
bind line options but it does not actually have anything to do with the
bind line since it concerns the frontend certificate instead. For that
reason, we should have a means to identify inconsistencies in the
configuration and raise an error when a given certificate has two
different ocsp-update modes specified in one or more crt-lists.
The simplest way to do it is to store the ocsp update mode directly in
the ckch and not only in the ssl_bind_conf.
This option will define how the ocsp update mechanism behaves. The
option can either be set to 'on' or 'off' and can only be specified in a
crt-list entry so that we ensure that it concerns a single certificate.
The 'off' mode is the default one and corresponds to the old behavior
(no automatic update).
When the option is set to 'on', we will try to get an ocsp response
whenever an ocsp uri can be found in the frontend's certificate. The
only limitation of this mode is that the certificate's issuer will have
to be known in order for the OCSP certid to be built.
This patch only adds the parsing of the option. The full functionality
will come in a later commit.
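For illustration, a crt-list entry enabling the option could look like this
(paths and the SNI filter are arbitrary):
    /etc/haproxy/ssl/site.pem [ocsp-update on] example.com
    /etc/haproxy/ssl/other.pem [ocsp-update off]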
The OCSP update tree holds ocsp responses that will need to be updated
automatically. The entries are inserted in an eb64_tree where the keys
are the absolute time after which the entry will need to be updated.
The ocsp_response member of the cert_key_and_chain structure is only
used temporarily. During a standard init process where an ocsp response
is provided, this ocsp file is first copied into the ocsp_response
buffer without any ocsp-related parsing (see
ssl_sock_load_ocsp_response_from_file), and then the contents are
actually interpreted and inserted into the actual ocsp tree
(cert_ocsp_tree) later in the process (see ssl_sock_load_ocsp). If the
response was deemed valid, it is then copied into the actual
ocsp_response structure's 'response' field (see
ssl_sock_load_ocsp_response). From this point, the ocsp_response field
of the cert_key_and_chain object could be discarded since actual ocsp
operations will be based on the certificate_ocsp object.
The only remaining runtime use of the ckch's ocsp_response field was in
the CLI, and more precisely in the 'show ssl cert' mechanism.
This constraint could be removed by adding an OCSP_CERTID directly in
the ckch because the buffer was only used to get this id.
This patch then adds the OCSP_CERTID pointer in the ckch, it clears the
ocsp_response buffer early and simplifies the ckch_store_build_certid
function.
The new "update ssl ocsp-response <certfile>" CLI command allows to
update the stored OCSP response for a given certificate. It relies on
the http_client which is used to send an HTTP request to the OCSP
responder whose URI can be extracted from the certificate.
This command won't work for a certificate that did not have a stored
OCSP response yet.
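For example, assuming the CLI socket is reachable at the usual place (paths
are arbitrary):
    $ echo "update ssl ocsp-response /etc/haproxy/ssl/site.pem" | \
          socat stdio /var/run/haproxy.sock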
This helper function will check that an OCSP response is valid, meaning
that the proper "Content-Type: application/ocsp-response" header is
present and the data itself is a proper OCSP_RESPONSE that can be
checked thanks to the issuer certificate.
The tree that contains OCSP responses is never locked despite being used
at runtime for OCSP stapling as well as the CLI through "set ssl cert"
and "set ssl ocsp-response" commands.
Everything works though because the certificate_ocsp structure is
refcounted and the tree's entries are cleaned up when SSL_CTXs are
destroyed (thanks to an ex_data entry in which the certificate_ocsp
pointer is stored).
This new lock will come to use when the OCSP auto update mechanism is
fully implemented because this new feature will be based on another tree
that stores the same certificate_ocsp members and updates their contents
periodically.
When QUIC thread binding was fixed by commit f5a0c8abf ("MEDIUM: quic:
respect the threads assigned to a bind line"), one point was overlooked
regarding rxbuf allocation. Indeed, there's one rxbuf per listener and
per bound thread. Originally the loop would iterate over all threads,
but this is not needed anymore and causes lots of memory to be allocated
in scenarios where shards are used, the worst one being "shards by-thread"
which allocates N^2 buffers for N threads. This gives us 2304 buffers
(or 576 MB of RAM) for 48 threads!
Let's only allocate one buffer per bound thread on each listener to fix
this.
This should be backported to 2.7 and generally wherever the commit
above is backported. It depends on the previous commit below:
"BUG/MEDIUM: quic: properly take shards into account on bind lines"
Shards were completely forgotten in commit f5a0c8abf ("MEDIUM: quic:
respect the threads assigned to a bind line"). The thread mask is
taken from the bind_conf, but since shards were introduced in 2.5,
the per-listener mask is held by the receiver and can be smaller
than the bind_conf's mask.
The effect here is that the traffic is not distributed to the
appropriate thread. At first glance it's not dramatic since it remains
one of the threads eligible by the bind_conf, but it still means that
in some contexts such as "shards by-thread", some concurrency may
persist on listeners while they're expected to be alone. One identified
impact is that it requires more rxbufs than necessary, but there may
possibly be other not yet identified side effects.
This must be backported to 2.7 and everywhere the commit above is
backported.
qcs instances for bidirectional streams are inserted in
<qcc.opening_list>. They are removed from the list once a full HTTP request
has been parsed. This is required to implement the http-request timeout.
In case a stream is deleted before receiving a full HTTP request, it also
must be removed from <qcc.opening_list>. This was not the case in the first
implementation but has been fixed by the following patch :
641a65ff3c
BUG/MINOR: mux-quic: remove qcs from opening-list on free
This means that now a stream can be deleted from the list in two
different functions. Sadly, as LIST_DELETE was used in both cases,
nothing prevented a double-deletion from the list, even though
LIST_INLIST was used. Both calls are replaced with LIST_DEL_INIT which
is idempotent.
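For reference, the difference boils down to the following sketch (the list
member name is purely illustrative):
    /* LIST_DELETE() unlinks the node but leaves its pointers untouched, so a
     * second deletion walks dangling pointers. LIST_DEL_INIT() additionally
     * re-points the node to itself, making further deletions harmless. */
    LIST_DEL_INIT(&qcs->el_opening);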
This bug causes memory corruption which results in most cases in a
segfault, most of times outside of mux-quic code itself. It has been
found first by gabrieltz who reported it on the github issue #1903. Big
thanks to him for his testing.
This bug also causes failures on several 'M' transfer testcase of QUIC
interop-runner. The s2n-quic client is particularly useful in this case
as segfault triggers were most of the time on the LIST_DELETE
operation itself. This is probably due to the way it encapsulates the
HEADERS frame, with the FIN bit delayed into a following empty STREAM frame.
This must be backported wherever the above patch is, up to 2.6.
We don't need to know very accurately how much RAM is needed in a pool,
however we must not spend time competing with other threads trying to be
the one with the most accurate value. Let's use the "_opportunistic"
variants of swrate_add() which will simply cause some updates to be
dropped in case of thread contention. This should significantly improve
the situation when dealing with many threads and small per-thread caches.
Performance gains of up to 1-2% were observed on 48-thread systems thanks
to this alone.
Till now it was only possible to change the thread local hot cache size
at build time using CONFIG_HAP_POOL_CACHE_SIZE. But during benchmarks, a
huge contention was sometimes noticed in the lower level memory
allocators, indicating that larger caches could be beneficial, especially
on machines with CPUs having large L2 caches.
Given that the checks against this value are no longer on a hot path,
there was no reason to continue forcing it to be tuned at build time. So
this patch makes it configurable via tune.memory-hot-size.
It's worth noting that during the boot phase the value remains zero so
that it's possible to know if the value was set or not, which opens the
possibility that we try to automatically adjust it based on the per-cpu
L2 cache size or the use of certain protocols (none of this is done yet).
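For example, using the setting named above (the value is arbitrary and assumed
to be expressed in bytes):
    global
        tune.memory-hot-size 1048576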
It is possible to block the stats applet if a line exceeds the free space in
the response buffer while the buffer is empty. It is only an issue in HTTP
because of the HTX overhead and, AFAIK, only with json output.
In this case, the applet is unable to write anything in the response buffer
and waits for some free space to proceed further. On the other hand, because
the response channel is empty, nothing is sent and thus no space can be
freed. At this stage, the stream and the applet are blocked waiting for the
other side.
To avoid this situation, we must take care to not dump a line exceeding the
free space in the HTX message. It means we cannot rely anymore on the global
trash buffer. At least, not directly. The trick is to use a local trash
buffer, mapped on the global one but with a different size. We use b_make()
to do so. The local trash buffer is thread local to avoid any concurrency
issue.
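The trick can be sketched as follows (not the actual patch; names are
simplified and htx_free_data_space() is only assumed to be the right bound
here):
    /* reuse the global trash storage but advertise only the room left in the
     * HTX message, so no dumped line can ever exceed it */
    struct buffer local_trash;

    local_trash = b_make(trash.area, htx_free_data_space(htx), 0, 0);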
It is a valid fix. However it could be good to review the internal API of
the stats applet to not rely on a global variable.
This patch should solve issue #1873. It must be backported at least as far as
2.6. Older versions must be evaluated first but it is probably possible to
hit this bug with long proxy/server names.
During the request parsing, we must be sure to never handle errors at the
mux level if the connection is running (or closing). The error must be
handled by the upper layer.
It should never happen. But there is an edge case that was not properly
handled. If all data are received on the first packet with the read0 and the
request is truncated on the payload, in this case the stream-connector is
created, so the H1C is in RUNNING state. But an error is reported because
the request is truncated. In this specific case, the error is handled by the
mux while it should not.
This patch is related to #1966. It must be backported to 2.7.
When an error occurred during the request parsing while the stream is not
running, an EOS must be reported. It is not an issue for an embryonic
connection because the H1 stream is orphan. However, it is an issue with
connections upgraded from TCP to H1. In this case, the upgrade is not
performed because there is an early error. However the H1 stream is not
orphan and is not destroyed. The H1 multiplexer will wait for the detach
event. But without EOS, the upper layer is unable to perform the shutdown.
This patch is related to #1966. It must be backported to 2.7. Older versions
are not affected by this issue.
There is a possible segfault when accessing qc->timer_task in
quic_conn_io_cb() without testing it. It seems however very rare as it
requires several conditions to be met.
* quic_conn must be in CLOSING state after having sent a
CONNECTION_CLOSE which frees the qc.timer_task
* quic_conn handshake must still be in progress : in fact, qc.timer_task
is accessed on this path because the anti-amplification limit was
lifted.
I was thus far unable to trigger it, but benchmarking tests seem to have
fired it, with the following backtrace as a result :
#0 _task_wakeup (f=4096, caller=0x5620ed004a40 <_.46868>, t=0x0) at include/haproxy/task.h:195
195 state = _HA_ATOMIC_OR_FETCH(&t->state, f);
[Current thread is 1 (Thread 0x7fc714ff1700 (LWP 14305))]
(gdb) bt
#0 _task_wakeup (f=4096, caller=0x5620ed004a40 <_.46868>, t=0x0) at include/haproxy/task.h:195
#1 quic_conn_io_cb (t=0x7fc5d0e07060, context=0x7fc5d0df49c0, state=<optimized out>) at src/quic_conn.c:4393
#2 0x00005620ecedab6e in run_tasks_from_lists (budgets=<optimized out>) at src/task.c:596
#3 0x00005620ecedb63c in process_runnable_tasks () at src/task.c:861
#4 0x00005620ecea971a in run_poll_loop () at src/haproxy.c:2913
#5 0x00005620ecea9cf9 in run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:3102
#6 0x00007fc773c3f609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#7 0x00007fc77372d133 in clone () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) up
#1 quic_conn_io_cb (t=0x7fc5d0e07060, context=0x7fc5d0df49c0, state=<optimized out>) at src/quic_conn.c:4393
4393 task_wakeup(qc->timer_task, TASK_WOKEN_MSG);
(gdb) p qc
$1 = (struct quic_conn *) 0x7fc5d0df49c0
(gdb) p qc->timer_task
$2 = (struct task *) 0x0
This fix should be backported up to 2.6.
This is a follow up of "BUG/MINOR: stats: fix show stat json buffer limitation"
However this time this is purely preemptive as we did not reach the buffer
limitation yet. But now is the proper time so that this won't be an issue
in the upcoming versions.
No backport needed.
json output type is a lot more verbose than other output types.
Because of this and the increasing number of metrics implemented within
haproxy, we are starting to reach max bufsize limit (defaults to 16k)
when dumping stats to json since 2.6-dev1.
This results in stats output being truncated with
"[{"errorStr":"output buffer too short"}]"
This was reported by Gabriel in #1964.
Thanks to "MINOR: stats: introduce stats field ctx", we can now perform
multipart dumping (using multiple buffers) in case a single buffer is not
big enough to hold the complete stat line.
For now, only stats_dump_fields_json() makes use of it as it is by
far the most verbose stats output type.
(csv, typed and html outputs should be good for a while and may use this
capability if the need arises in some distant future)
--
It could be backported to 2.6 and 2.7.
This commit depends on:
- MINOR: stats: provide ctx for dumping functions
- MINOR: stats: introduce stats field ctx
Add a new value in stats ctx: field.
Implement field support in line dumping parent functions
stats_print_proxy_field_json() and stats_dump_proxy_to_buffer().
This will allow child dumping functions to support partial line dumping
when needed. ie: when dumping buffer is exhausted: do a partial send and
wait for a new buffer to finish the dump. Thanks to field ctx, the function
can start dumping where it left off on the previous (unterminated) invocation.
This is a minor refactor to allow stats_dump_info_* and stats_dump_fields_*
functions to directly access stat ctx pointer instead of explicitly
passing stat ctx struct members to them.
This will allow dumping functions to benefit from upcoming ctx updates.
The certificate chain that gets passed in the SSL_CTX through
SSL_CTX_set1_chain has its reference counter increased by OpenSSL
itself. But since the ssl_sock_load_cert_chain function might create a
brand new certificate chain if none exists in the ckch_data
(sk_X509_new_null), then we ended up returning a new certificate chain
to the caller that was never destroyed.
This patch can be backported to all stable branches but it might need to
be reworked for branches older than 2.4 because of commit ec805a32b9
that refactored the modified code.
When calling 'show ssl ocsp-response' from the CLI, a temporary buffer
was created in parse_binary when we could just use a local static buffer
instead. This does not change the behavior of the function, it just
simplifies it.
Check return values of htx_add_header()/htx_add_eof() during H3 HEADERS
conversion to HTX. In case of error, the connection is interrupted with
a CONNECTION_CLOSE.
This commit is useful to detect abnormal situation on headers parsing.
It should be backported up to 2.7.
If an error is triggered on H3 HEADERS parsing, the allocated buffer for
HTX data is not freed.
To prevent this memleak, all return paths have been centralized using
goto statements.
Also, as a small bonus, offer_buffers() is not called anymore if buffer
is not freed because sedesc has taken it. However this change has
probably no noticeable effect as dynamic buffers management is not
functional currently.
This should be backported up to 2.6.
Cookie headers are treated specifically to merge multiple occurrences into a
single HTX header. This is handled in an if-block condition inside the
'while (1)' loop for headers parsing. The length value of the ist
representing cookie header is set to -1 by http_cookie_register(). The
problem is that then a continue statement is used but without
incrementing 'hdr_idx' to pass on the next header.
This issue was revealed by the introduction of commit :
commit d6fb7a0e0f
BUG/MEDIUM: h3: reject request with invalid header name
Before the aforementioned patch, the bug was hidden : on the next while
iteration, all isteq() invocations won't match with cookie header length
now set to -1. htx_add_header() fails silently because length is
invalid. hdr_idx is finally incremented which allows parsing to proceed
normally with the next header.
Now, a cookie header with length -1 does not pass the test on header name
conformance introduced by the above patch. Thus, a spurious
RESET_STREAM is emitted. This behavior has been reported on the mailing
list by Shawn Heisey who found out that browsers disabled H3 usage due
to the RESET_STREAM received. Big thanks to him for his testing on the
master branch.
This issue is simply resolved by incrementing hdr_idx before the continue
statement. It could have been detected earlier if htx_add_header()
return value was checked. This will be the subject of a dedicated commit
outside of the backport scope.
This must be backported up to 2.6.
http_cookie_register() must be called on first invocation with the last
two arguments pointing both to a negative value. After it, they will be
updated to a valid index.
We must never have only the last argument as NULL as this will cause an
invalid array addressing. To clarify this a BUG_ON statement is
introduced.
This is linked to github issue #1967.
In the previous patch (86924532db "BUG/MINOR: mux-h1: Fix test instead a
BUG_ON() in h1_send_error()"), a BUG_ON() condition was inverted by mistake in
h1_send_error(). The stream-connector must be NULL to be able to destroy the H1
stream.
This patch must be backported with the commit above (to 2.7).
When an error occurred during the request parsing, the H1 multiplexer is
responsible for sending a response to the client and for releasing the H1
stream and the H1 connection. In HTTP mode, it is not an issue because at this
stage the H1 connection is in embryonic state. Thus it can be released
immediately.
However, it is a problem if the connection was first upgraded from a TCP
connection. In this case, a stream-connector is attached. The H1 stream is
not orphan. Thus it must not be released at this stage. It must be detached
first. Otherwise a BUG_ON() is triggered in h1s_destroy().
So now, the H1S is destroyed on early errors but only if the H1C is in
embryonic state.
This patch may be related to #1966. It must be backported to 2.7.
Ensure that if a request contains a content-length header it matches
the total size of the following DATA frames. This is in conformance with
HTTP/3 RFC 9114.
For the moment, this kind of error triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.
This must be backported up to 2.6. It relies on the previous commit :
MINOR: http: extract content-length parsing from H2
Extract function h2_parse_cont_len_header() in the generic HTTP module.
This allows to reuse it for all HTTP/x parsers. The function is now
available as http_parse_cont_len_header().
Most notably, this will be reused in the next bugfix for the H3 parser.
This is necessary to check that the content-length header matches the
length of DATA frames.
Thus, it must be backported to 2.6.
RFC 9114 dictates several requirements for pseudo header usage in H3
request. Previously only minimal checks were implemented. Enforce all
the following requirements with this patch :
* reject request with undefined or invalid pseudo header
* reject request with duplicated pseudo header
* reject non-CONNECT request with missing mandatory pseudo header
* reject request with pseudo header after standard ones
For the moment, this kind of error triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.
This must be backported up to 2.6.
Reject requests containing an invalid header name. This concerns every
header containing an uppercase letter or a non-HTTP-token character such as
a space.
For the moment, this kind of error triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.
Thanks to Yuki Mogi from FFRI Security, Inc. for having reported this.
This must be backported up to 2.6.
In resolv_update_resolvers_timeout(), the resolvers task timeout is updated
by checking running and waiting resolutions. However, to find the next
wakeup date, the MIN() operator is used to compare ticks. Ticks must never
be compared with such operators; tick helper functions must be used to
properly handle the TICK_ETERNITY value. In this case, tick_first() must be
used instead of MIN().
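The difference can be summarized with the following sketch (illustrative only,
not the actual patch; the structure and field names are assumptions):
    /* TICK_ETERNITY is 0, so MIN() wrongly picks it as the earliest date,
     * effectively disabling the timer: */
    t->expire = MIN(t->expire, tick_add(now_ms, res->timeout.resolve));

    /* tick_first() knows that TICK_ETERNITY means "no date": */
    t->expire = tick_first(t->expire, tick_add(now_ms, res->timeout.resolve));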
It is an old bug but it is pretty visible since the commit fdecaf6ae4
("BUG/MINOR: resolvers: do not run the timeout task when there's no
resolution"). Because of this bug, the resolvers task timeout may be set to
TICK_ETERNITY, stopping periodic resolutions.
This patch should solve the issue #1962. It must be backported to all stable
versions.
The function computing the excess of events of a frequency counter over the
current period for a target frequency must handle empty counters (no event
and no start date). In this case, no excess must be reported.
Because of this bug, long pauses may be experienced on the bandwidth
limitation filter.
This patch must be backported to 2.7.
With internal proxies having SSL activated (httpclient for example),
the automatic computation of the maxconn is wrong because these proxies
are always activated by default.
This patch fixes the issue by not counting these internal proxies during
the computation.
Must be backported as far as 2.5.
Use qcs_free() on allocation failure in qcs_new(). This ensures that all
qcs content is properly deallocated and prevents memleaks. Most notably,
the qcs instance is now removed from the qcc tree.
This bug is labelled as MINOR as it occurs only on qcs allocation
failure due to memory exhaustion.
This must be backported up to 2.6.
qcs instances for bidirectional streams are inserted in
<qcc.opening_list>. They are removed from the list once a full HTTP request
has been parsed. This is required to implement the http-request timeout.
If a qcs instance is freed before receiving a full HTTP request, it must
be removed from the <qcc.opening_list>. Else a segfault will occur in
qcc_refresh_timeout() when accessing a dangling pointer.
For the moment this bug was not reproduced in production. This is
because there exists only few rare cases where a qcs is freed before
HTTP request parsing. However, as error detection will be improved on
H3, this will occur more frequently in the near future.
This must be backported up to 2.6.
This patch is the follow up of previous fix :
BUG/MINOR: quic: properly handle alloc failure in qc_new_conn()
quic_conn owned socket FD is initialized as soon as possible in
qc_new_conn(). This guarantees that we can safely call
quic_conn_release() on allocation failure. This function uses internally
qc_release_fd() to free the socket FD unless it has been initialized to
an invalid FD value.
Without this patch, a segfault will occur if one inner allocation of
qc_new_conn() fails before qc.fd is initialized.
This change is linked to quic-conn owned socket implementation.
This should be backported up to 2.7.
qc_new_conn() is used to allocate a quic_conn instance and its various
internal members. If one allocation fails, quic_conn_release() is used
to cleanup things.
For the moment, pool_zalloc() is used which ensures that all content is
null. However, some members must be initialized to special values
to be able to use quic_conn_release() safely. This is the case for
quic_conn lists and its tasklet.
Also, some quic_conn internal allocation functions were doing their own
cleanup on failure without reset to NULL. This caused an issue with
quic_conn_release() which also frees this members. To fix this, these
functions now only return an error without cleanup. It is the caller
responsibility to free the allocated content, which is done via
quic_conn_release().
Without this patch, allocation failure in qc_new_conn() would often
result in segfault. This was reproduced easily using fail-alloc at 10%.
This should be backported up to 2.6.
The output buffer is not zero-initialized. If we don't clear reserved
bytes, fcgi requests sent to backend will leak sensitive data.
This patch must be backported as far as 2.2.
This patch introduces haproxy_backend_agg_check_status metric
as we wanted in 42d7c402d but with the right data source.
This patch could be backported as far as 2.4.
haproxy_backend_agg_server_check_status currently aggregates
haproxy_server_status instead of haproxy_server_check_status.
We deprecate this and create a new one,
haproxy_backend_agg_server_status to clarify what it really
does.
This patch could be backported as far as 2.4.
Since the massive pools cleanup that happened in 2.6, the pools
architecture was made quite more hierarchical and many alternate code
blocks could be moved to runtime flags set by -dM. One of them had not
been converted by then, DEBUG_UAF. It's not much more difficult actually,
since it only acts on a pair of function indirections on the slow path
(OS-level allocator) and a default setting for the cache activation.
This patch adds the "uaf" setting to the options permitted in -dM so
that it now becomes possible to set or unset UAF at boot time without
recompiling. This is particularly convenient, because every 3 months on
average, developers ask a user to recompile haproxy with DEBUG_UAF to
understand a bug. Now it will not be needed anymore, instead the user
will only have to disable pools and enable uaf using -dMuaf. Note that
-dMuaf only disables previously enabled pools, but it remains possible
to re-enable caching by specifying the cache after, like -dMuaf,cache.
A few tests with this mode show that it can be an interesting combination
which catches significantly less UAF but will do so with much less
overhead, so it might be compatible with some high-traffic deployments.
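For example (paths are arbitrary):
    $ haproxy -dMuaf -f /etc/haproxy/haproxy.cfg        # UAF checks, pools caching disabled
    $ haproxy -dMuaf,cache -f /etc/haproxy/haproxy.cfg  # UAF checks with the cache re-enabled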
The change is very small and isolated. It could be helpful to backport
this at least to 2.7 once confirmed not to cause build issues on exotic
systems, and even to 2.6 a bit later as this has proven to be useful
over time, and could be even more if it did not require a rebuild. If
a backport is desired, the following patches are needed as well:
CLEANUP: pools: move the write before free to the uaf-only function
CLEANUP: pool: only include pool-os from pool.c not pool.h
REORG: pool: move all the OS specific code to pool-os.h
CLEANUP: pools: get rid of CONFIG_HAP_POOLS
DEBUG: pool: show a few examples in -dMhelp
It's not always easy to remember what certain options do together nor
which ones are only relevant when combined with others, so let's add a
few examples with the "help" command on -dM.
This one was set in defaults.h only when neither DEBUG_NO_POOLS nor
DEBUG_UAF were set. This was not the most convenient location to look
for it, and it was only used in pool.c to decide on the initial value
of POOL_DBG_NO_CACHE.
Let's just use DEBUG_NO_POOLS || DEBUG_UAF directly on this flag and
get rid of the intermediary condition. This also has the benefit of
removing a double inversion, which is always nice for understanding.
Till now pool-os used to contain a mapping from pool_{alloc,free}_area()
to pool_{alloc,free}_area_uaf() in case of DEBUG_UAF, or the regular
malloc-based function. And the *_uaf() functions were in pool.c. But
since 2.4 with the first cleanup of the pools, there has been no more
calls to pool_{alloc,free}_area() from anywhere but pool.c, from exactly
one place each. As such, there's no more need to keep *_uaf() apart in
pool.c, we can inline it into pool-os.h and leave all the OS stuff there,
with pool.c calling either based on DEBUG_UAF. This is cleaner with less
round trips between both files and easier to find.
There's no need for the low-level pool functions to be known from all
callers anymore, they're only used by pool.c. Let's reduce the amount
of header files processed.
In UAF mode, pool_put_to_os() performs a write to the about-to-be-freed
memory area so as to make sure the page is properly mapped and catch a
possible double-free. However there's no point keeping that in an ifdef
in the generic function, because we now have a pool_free_area_uaf()
that is the UAF-specific version of pool_free_area() and the one that
is called immediately after this write. Let's move the code there, it
will be cleaner.
The lua httpclient cleanup can be called in 2 places, the
hlua_httpclient_gc() and the hlua_httpclient_destroy_all().
A LIST_DELETE() is performed to remove the hlua_hc struct from the list.
However, when the lua task ends and calls hlua_ctx_destroy(), it does a
LIST_DELETE() first, and then the gc tries to do a LIST_DELETE() again
in hlua_httpclient_gc(), provoking a crash.
This patch fixes the issue by doing a LIST_DEL_INIT() instead of
LIST_DELETE() in both cases.
Should fix issue #1958.
Must be backported where bb58142 is backported.
Jakub Vojacek reported in issue #1955 that haproxy 2.7.0 doesn't start
anymore on a 128-CPU machine with a default config. The reason is the
increase of the default MAX_THREADS value that came with thread groups.
Previously, the maximum number of threads was simply limited to this
value, and all of them fit into one group. Now the limit being higher,
all threads cannot fit by default into a single group, and haproxy fails
to start.
The solution adopted here is to continue to limit the number of threads
to the max supported per group, but to multiply it by the number of groups
(usually 1 by default). In addition, a diag warning is now emitted when
this happens, reminding the user to set nbthread or adjust thread-groups.
We can hardly do more than a diag warning if we don't want to make the
upgrade painful for users.
Thanks to Jakub for reporting this early. This must be backported to 2.7.
In process_peer_sync: a check was performed to know whether the peers section
handler should kill itself if the corresponding proxy was not started on the
current process.
This logic was initially implemented in early 1.6 development to prevent
some issues when peers were used in conjunction with nbproc > 1:
f83d3fe00a MEDIUM: init: stop any peers section not bound to the correct process
46dc1ca MEDIUM: peers: unregister peers that were never started
But later in 1.6 dev, a new commit has been introduced:
47c8c029db MEDIUM: init: completely deallocate unused peers
With the latter, the check implemented in 46dc1ca ("MEDIUM: peers: unregister
peers that were never started") will never succeed: it is dead code.
Since nbproc support has been dropped in 2.5, things have changed a bit:
f83d3fe00a logic was moved in mworker_cleanlisteners, but as in 46dc1ca :
peers task is safely destroyed before peers_fe is set to NULL.
Conversely, peers_fe is first set by init_peers_frontend() before peers task
is scheduled by peers_init_sync() in check_config_validity().
Again, it is safe to say that we will never reach !peers->peers_fe
in process_peer_sync(): this self-killing mechanism is not relevant anymore.
--
To cut a long story short: I stumbled on this while tracking down
current signal api usage.
This led me to a signal_unregister_handler() call performed in the
aforementioned dead code. To me this code was potentially unsafe because
signal_unregister_handler() is not thread safe and here it was used within a
task initialized via task_new_anywhere(). So I decided to check how bad this
could be (ie: conditions to be met for this code to run).. and here we are.
This cleanup is a follow up of "CLEANUP: peers: unused code path in
process_peer_sync"
There are some remnants of 1.6 peers specific code in mworker_cleanlisteners()
that was introduced with this patch series:
f83d3fe00a MEDIUM: init: stop any peers section not bound to the correct process
47c8c029db MEDIUM: init: completely deallocate unused peers
Back then, nbthread did not exist, nbproc was used instead.
Updating some comments to make them more relevant to current haproxy design.
(multithreaded single process)
Moreover, in 47c8c029db, task_free() was performed on peers_fe->task.
But by looking at the code, from 1.6 til now, peers_fe->task
is never used for peers proxies, it is only used for main proxies (referenced
in proxies_list).
Removing this extra task cleanup because it is misleading.
The ST_F_CHECK_DURATION metric is typed as an unsigned int variable, and it is
derived from check->duration, which is signed.
While most of the time check->duration > 0, it is not always true:
with HCHK_STATUS_HANA checks, check->duration is set to -1 to prevent server
logs from including irrelevant duration info (HCHK_STATUS_HANA checks are not
time related).
Because of this, stats could report UINT64_MAX value for ST_F_CHECK_DURATION
metric. This was quite confusing. To prevent this, we make sure not to assign
negative value to ST_F_CHECK_DURATION.
This is only a minor printing issue, no backport needed.
With the previous commit, 9e080bf ("BUG/MINOR: checks: make sure fastinter is used
even on forced transitions"), on-error mark-down|sudden-death|fail-check are
now working as expected.
However, on-error fastinter remains broken because srv_getinter(), used in
the above commit to check the expiration date, won't return fastinter interval
if server health is maxed out (which is the case with on-error fastinter mode).
To fix this, we introduce a check flag named CHK_ST_FASTINTER.
This flag is set when on-error is triggered. This way we can force
srv_getinter() to return fastinter interval whenever the flag is set.
The flag is automatically cleared as soon as the new check task expiry is
recalculated in process_chk_conn().
This restores original behavior prior to d114f4a ("MEDIUM: checks: spread the
checks load over random threads").
It must be backported to 2.7 along with the aforementioned commits.
In ticket #1956, it was reported that an upgrade from 2.6 to 2.7 via a
reload would stop the master process.
When upgrading the binary, the new process is considered reexec and does
not try to creates the socketpair for the mcli_reload listener, then
tries to bind on -1 since the socket doesn't exit. The failure provokes
an exit() of the master.
This patch fixes the issue by trying to create the mcli_reload sockets
only when they don't exist, instead of creating them at first start.
This way we also avoid possible fd leak since we always try to use the
existing FDs first.
Must be backported in 2.7.
During an early failure of the mworker mode, the
mworker_cleanlisteners() function is called and tries to cleanup the
peers, however the peers are in a semi-initialized state and will use
NULL pointers.
The fix checks the variables before trying to use them.
Bug revealed in issue #1956.
Could be backported as far as 2.0.
When the mworker wait mode fails it does an exit, but there is no
error message which says it exits.
Add a message specifying that the error is non-recoverable.
Could be backported to 2.7 and possibly earlier branches.
Aurélien also found that while the previous commit a56798ea4 ("BUG/MEDIUM:
checks: do not reschedule a possibly running task on state change")
addressed one specific case where the check's task had to be woken up
quickly, it's not always sufficient as the check will not yet be
considered as expired regarding the fastinter.
Let's make sure we do consider this specific case to update the timer
based on the new state if the new value is shorter. This particularly
means that even if the timer is not expired yet during a wakeup when
nothing is in progress, we need to check if applying the currently
effective interval right now to the current date would expire earlier
than what is programmed, then the timer needs to be updated. I.e.
make sure we never miss fastinter during a state transition before
the end of the current period.
The approach is not pretty, but it forces to repass via the existing
block dedicated to updating the timer if the current one is expired
and the updated one would appear earlier.
This must be backported to 2.7 along with the commit above.
Aurélien found an issue introduced in 2.7-dev8 with commit d114f4a68
("MEDIUM: checks: spread the checks load over random threads"), but which
in fact has deeper roots.
When a server's state is changed via __health_adjust(), if a fastinter
setting is set, the task gets rescheduled to run at the new date. The
way it's done is not thread safe, as nothing prevents another thread
where the task is already running from also updating the expire field
in parallel. But since such events are quite rare, this statistically
never happens. However, with the commit above, the tasks are no longer
required to go to the shared wait queue and are no longer marked as
shared between multiple threads. It's just that *any* thread may run
them at a time without implying that all of them are allowed to modify
them. And this change is sufficient to trigger the BUG_ON() condition
in the scheduler that detects the inconsistency between a task queued
in one thread and being manipulated in parallel by another one:
FATAL: bug condition "task->tid != tid" matched at
include/haproxy/task.h:670
call trace(13):
| 0x55f61cf520c9 [c6 04 25 01 00 00 00 00]: main-0x2ee7
| 0x55f61d0646e8 [8b 45 08 a8 40 0f 85 65]: back_handle_st_cer+0x78/0x4d7
| 0x55f61cff3e72 [41 0f b6 4f 01 e9 c8 df]: process_stream+0x2252/0x364f
| 0x55f61d0d2fab [48 89 c3 48 85 db 74 75]: run_tasks_from_lists+0x34b/0x8c4
| 0x55f61d0d38ad [29 44 24 18 8b 54 24 18]: process_runnable_tasks+0x37d/0x6c6
| 0x55f61d0a22fa [83 3d 0b 63 1e 00 01 0f]: run_poll_loop+0x13a/0x536
| 0x55f61d0a28c9 [48 8b 1d f0 46 19 00 48]: main+0x14d919
| 0x55f61cf56dfe [31 c0 e8 eb 93 1b 00 31]: main+0x1e4e/0x2d5d
At first glance it looked like it could be addressed in the scheduler
only, but in fact the problem clearly is at the application level, since
some shared fields are manipulated without protection. At minima, the
task's expiry ought to be touched only under the server's lock. While
it's arguable that the scheduler could make such updates easier, changing
it alone will not be sufficient here.
Looking at the sequencing closer, it becomes obvious that we do not need
this task_schedule() at all: a simple task_wakeup() is sufficient for the
callee to update its timers. Indeed, the process_chk_conn() function already
deals with spurious wakeups, and already uses srv_getinter() to calculate
the next wakeup date based on the current state. So here, instead of
having to queue the task from __health_adjust() to anticipate a new check,
we can simply wake the task up and let it decide when it needs to run
next. This is much cleaner as the expiry calculation remains performed at
a single place, from the task itself, as it should be, and it fixes the
problem above.
This should be backported to 2.7, but not to older versions where the
risks of breakage are higher than the chance to fix something that
ever happened.
We're using srv_update_status() as the only event source for UP/DOWN server
events in an attempt to simplify the support for these 2 events.
It seems srv_update_status() is the common path for server state changes anyway.
Tested with server state updated from various sources:
- the cli
- server-state file (maybe we could disable this or at least not publish
in the global event queue in the future if it results in slower startup for
setups relying on huge server state files)
- dns records (ie: srv template)
(again, this could be fine-tuned to only publish in the server-specific
subscriber list and no longer in the global subscription list if mass dns
updates tend to slow down srv_update_status())
- normal checks and observe checks (HCHK_STATUS_HANA)
(same as above, if check-related state update storms are expected)
- lua scripts
- html stats page (admin mode)
Basic support for ADD and DEL server events is added through this commit:
SERVER_ADD is published on dynamic server addition through cli.
SERVER_DEL is published on dynamic server deletion through cli.
This work depends on:
"MINOR: event_hdl: add event handler base api"
"MINOR: server: add srv->rid (revision id) value"
Make use of the new srv->rid value in stats.
The stat is referred to as ST_F_SRID; it is now used in the stats_fill_sv_stats
function in order to be included in csv and json stats dumps.
Moreover, "rid: $value" will be displayed next to server puid
in html stats page if "stats show-legend" is specified in the stats frontend.
(mouse hovering tooltip)
Depends on the following commit:
"MINOR: server: add srv->rid (revision id) value"
With the current design, we could not distinguish between a
previously existing deleted server and a new server reusing
the deleted server's name/id.
This can cause some confusion when auditing stats/events/logs,
because the new server will look similar to the old
one.
To address this, we're adding a new value in the server structure: rid.
The rid (revision id) value is an unsigned 32-bit value that is set upon
server creation. Value is derived from a global counter that starts
at 0 and is incremented each time one or multiple server deletions are
followed by a server addition (meaning that old name/id reuse could occur).
Thanks to this revision id, it is now easy to tell whether the server
we're looking at is the same as before or if it has been deleted and
re-added in the meantime.
(combining server name/id + server revision id yields a process-wide unique
identifier)
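A hypothetical sketch of the counter logic described above (names and
structure are illustrative, not the actual implementation):
    static unsigned int rid_counter;  /* global revision counter */
    static int deletion_seen;         /* one or more deletions since last add */

    static void account_server_deletion(void)
    {
        deletion_seen = 1;
    }

    static unsigned int new_server_rid(void)
    {
        if (deletion_seen) {
            rid_counter++;    /* old name/id reuse becomes possible */
            deletion_seen = 0;
        }
        return rid_counter;   /* stamped into srv->rid on creation */
    }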
In process_stream(), we wait to have an empty output channel to forward a
close to the write side (a shutw). However, at the stream-connector level,
when a close is detected on one side and we don't want to keep half-close
connections, the shutw is unconditionally forwarded to the write side. This
typically happens on server side.
At first glance, this bug may truncate messages. But depending on the muxes
and the stream states, the bug may be more visible. On recent versions
(2.8-dev and 2.7) and on 2.2 and 2.0, the stream may be frozen, waiting for
the client timeout, if the client mux is unable to forward data because the
client is too slow _AND_ the response channel is not empty _AND_ the server
closes its connection _AND_ the server mux has forwarded all data to the
upper layer _AND_ the client decides to send some data and to close its
connection. On 2.6 and 2.4, it is worse: instead of a freeze, the client mux
is woken up in a loop.
Of course, conditions are pretty hard to meet. Especially because it is highly
time dependent. For what it's worth, I reproduce it with tcploop on client and
server sides and a basic HTTP configuration for HAProxy:
* client: tcploop -v 8889 C S:"GET / HTTP/1.1\r\nConnection: upgrade\r\n\r\n" P5000 S:"1234567890" K
* server: tcploop -v 8000 L A R S:"HTTP/1.1 101 ok\r\nConnection: upgrade\r\n\r\n" P2000 S2660000 F R
On 2.8-dev, without this patch, the stream is frozen and when the client
connection times out, the client data are truncated and '--cL' is reported in
logs. With the patch, the client data are forwarded to the server and the
connection is closed. A '--CD' is reported in logs.
It is an old bug. It was probably introduced with the multiplexers. To fix
it, in stconn (formerly the stream-interface), we must wait for all output
data to be flushed before forwarding the close to the write side.
This patch must be backported as far as 2.2 and must be evaluated for 2.0.
A startup check is done for first QUIC listener to detect if quic-conn
owned socket is supported by the system. This is done by creating a
dummy socket reusing the listener address. This socket must be closed as
soon as the check is done.
The socket condition is invalid as it excludes zero which is a valid
file-descriptor value. Fix this bug by adjusting this condition.
In theory, this bug could prevent the usage of quic-conn owned socket as
startup check would report a false error. Also, the file-descriptor
would leak as it is not closed. In practice, this cannot happen when
startup check is done after a 'quic4/quic6' listener is instantiated as
file-descriptors are allocated in ascending order by the system.
This should fix github issue #1954.
quic-conn owned socket implementation is scheduled for backport on 2.7.
This commit must be backported with it, more specifically to fix the
following patch :
75839a44e7
MINOR: quic: startup detect for quic-conn owned socket support
Activate the QUIC connection socket to achieve the best performance. The
previous behavior can be restored via the tune.quic.socket-owner
configuration option.
This change is part of quic-conn owned socket implementation.
Contrary to its sibling patches, I suggest not backporting it to 2.7.
This should ensure that stable release behavior is preserved. If a user
faces issues with QUIC performance on 2.7, they can nonetheless change the
default configuration.
UDP addresses may change over time for a QUIC connection. When using
quic-conn owned socket, we have to detect address change to break the
bind/connect association on the socket.
For the moment, on change detected, QUIC connection socket is closed and
a new one is opened. In the future, we may improve this by trying to
keep the original socket and reexecute only bind/connect syscalls.
This change is part of quic-conn owned socket implementation.
It may be backported to 2.7 after a period of observation.
There is a small race condition when QUIC connection socket is
instantiated between the bind() and connect() system calls. This means
that the first datagram read on the socket may belong to another
connection.
To detect this rare case, we compare the DCID for each QUIC datagram
read on the QUIC socket. If it does not match the connection CID, the
datagram is requeued using quic_receiver_buf to be able to handle it on
the correct thread.
This change is part of quic-conn owned socket implementation.
It may be backported to 2.7 after a period of observation.
qc_rcv_buf and qc_snd_buf are names used for static functions in both
quic-sock and quic-mux. To remove this ambiguity, slightly modify names
used in MUX code.
In the future, we should properly define a unique prefix for all QUIC
MUX functions to avoid such problems.
This change is part of quic-conn owned socket implementation.
It may be backported to 2.7 after a period of observation.
This change is the second part for reception on QUIC connection socket.
All operations inside the FD handler have been delayed to the quic-conn
tasklet via the new function qc_rcv_buf().
With this change, buffer management on reception has been simplified. It
is now possible to use a local buffer inside qc_rcv_buf() instead of
quic_receiver_buf().
This change is part of quic-conn owned socket implementation.
It may be backported to 2.7 after a period of observation.
Try to use the quic-conn socket for reception if it is allocated. For
this, the socket is inserted in the fdtab. This will call the new
handler quic_conn_io_cb() which is responsible to process the recv()
system call. It will reuse datagram dispatch for simplicity. However,
this is guaranteed to be called on the quic-conn thread, so it will be
more efficient to use a dedicated buffer. This will be implemented in
another commit.
This patch should improve performance by reducing contention on the
receiver socket. However, more gain can be obtained when the datagram
dispatch operation will be skipped.
Older quic_sock_fd_iocb() is renamed to quic_lstnr_sock_fd_iocb() to
emphasize its usage for the receiver socket.
This change is part of quic-conn owned socket implementation.
It may be backported to 2.7 after a period of observation.
If quic-conn has a dedicated socket, use it for sending over the
listener socket. This should improve performance by reducing contention
over the shared listener socket.
This change is part of quic-conn owned socket implementation.
It may be backported to 2.7 after a period of observation.
Allocate the quic-conn owned socket if possible. This requires that this is
activated in the haproxy configuration. Also, this is done only if the local
address is known, so it depends on the support of IP_PKTINFO.
For the moment this socket is not used. This causes QUIC support to be
broken as received datagrams are not read. This commit will be completed
by a following patch to support recv operation on the newly allocated
socket.
This change is part of quic-conn owned socket implementation.
It may be backported to 2.7 after a period of observation.
Define the global configuration option "tune.quic.socket-owner". This option
can be used to activate or not the socket-per-QUIC-connection mode. The
default value is "listener", which disables this feature. It can be
activated with the value "connection".
This change is part of quic-conn owned socket implementation.
It may be backported to 2.7 after a period of observation.
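For example, a minimal illustrative configuration snippet:
    global
        tune.quic.socket-owner connection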
Extend the startup platform detection support test for quic-conn owned
socket. It is required to be able to retrieve destination address on a
recvfrom() system call so check if IP_PKTINFO or IP_RECVDSTADDR flags
are supported.
This change is part of quic-conn owned socket implementation.
It may be backported to 2.7 after a period of observation.
To be able to use individual sockets for QUIC connections, we rely on
the OS network stack which must support UDP sockets binding on the same
local address.
Add a detection code for this feature executed on startup. When the
first QUIC listener socket is bound, a test socket is created and
bound on the same address. If the bind call fails, we consider that
it's impossible to use individual sockets for QUIC connections.
A new global option GTUNE_QUIC_SOCK_PER_CONN is defined. If the startup
detection fails, this value is reset in the global options. For the
moment, there is no code to activate the option: this will be in a
follow-up patch with the introduction of a new configuration option.
This change is part of quic-conn owned socket implementation.
It may be backported to 2.7 after a period of observation.
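A standalone sketch of the detection idea, not the actual haproxy code (the
socket options actually used by haproxy for its QUIC sockets may differ):
    #include <sys/socket.h>
    #include <unistd.h>

    /* returns 1 if a second UDP socket could be bound on <addr>, else 0 */
    static int test_sock_per_conn(const struct sockaddr *addr, socklen_t len)
    {
        int one = 1, ok = 0;
        int fd = socket(addr->sa_family, SOCK_DGRAM, 0);

        if (fd < 0)
            return 0;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        if (bind(fd, addr, len) == 0)
            ok = 1;    /* the OS accepts several UDP sockets on this address */
        close(fd);
        return ok;     /* when 0, GTUNE_QUIC_SOCK_PER_CONN is cleared */
    }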
The QUIC protocol supports address migration, which allows the connection to
be maintained even if the client has changed its network address.
RFC 9000 stipulates that address migration is forbidden before handshake
has been completed. Add a check for this : drop silently every datagram
if client network address has changed until handshake completion.
This commit is one of the first steps towards QUIC connection migration
support.
This should be backported up to 2.7.
Detect connection migration attempted by the client. This is done by
comparing addresses stored in quic-conn with src/dest addresses of the
UDP datagram.
A new function qc_handle_conn_migration() has been added. For the
moment, no operation is conducted and the function will be completed
during connection migration implementation. The only notable thing is
the increment of a new counter "quic_conn_migration_done".
This should be backported up to 2.7.
Complete the ipcmp() function with a new argument <check_port>. If this
argument is true, the function will compare port values in addition to IP
addresses and return true only if both are identical.
This commit will simplify QUIC connection migration detection. As such,
it should be backported to 2.7.
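A hedged sketch of the extended comparison, for the IPv4 case only and with
simplified types (the real ipcmp() works on sockaddr_storage and also
handles IPv6):
    #include <netinet/in.h>
    #include <string.h>

    static int ipv4cmp(const struct sockaddr_in *a,
                       const struct sockaddr_in *b, int check_port)
    {
        if (memcmp(&a->sin_addr, &b->sin_addr, sizeof(a->sin_addr)) != 0)
            return 0;
        if (check_port && a->sin_port != b->sin_port)
            return 0;
        return 1;   /* same address, and same port if requested */
    }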
Extract individual datagram parsing code outside of datagrams list loop
in quic_lstnr_dghdlr(). This is moved in a new function named
quic_dgram_parse().
To complete this change, quic_lstnr_dghdlr() has been moved into the
quic_sock source file: it belongs to the QUIC socket lower layer and is
directly called by quic_sock_fd_iocb().
This commit will ease implementation of quic-conn owned socket.
New function quic_dgram_parse() will be easily usable after a receive
operation done on quic-conn IO-cb.
This should be backported up to 2.7.
quic_rx_packet struct had a reference to the quic_conn instance. This is
useless as qc instance is always passed through function argument. In
fact, pkt.qc is used only in qc_pkt_decrypt() on key update, even though
qc is also passed as argument.
Simplify this by removing qc field from quic_rx_packet structure
definition. Also clean up qc_pkt_decrypt() documentation and interface
to align it with other quic-conn related functions.
This should be backported up to 2.7.
Rename the structure "cert_key_and_chain" to "ckch_data" in order to
avoid confusion with the store which is often called "ckchs".
The "cert_key_and_chain *ckch" were renamed "ckch_data *data", so we now
have store->data instead of ckchs->ckch.
Marked medium because it changes the API.
Adding base code to provide subscribe/publish API for internal
events processing.
event_hdl provides two complementary APIs, both are implemented
in src/event_hdl.c and include/haproxy/event_hdl{-t.h,.h}:
One API targeting developers that want to register event handlers
that will be notified on specific events.
(SUBSCRIBE)
One API targeting developers that want to notify registered handlers
about an event.
(PUBLISH)
This feature is being considered to address the following scenarios:
- mailers code refactoring (getting rid of deprecated
tcp-check ruleset implementation)
- server events from lua code (registering user defined
lua function that is executed with relevant data when a
server is dynamically added/removed or on server state change)
- providing a stable and easy to use API for upcoming
developments that rely on specific events to perform actions.
(e.g: resource cleanup when a server is deleted from haproxy)
At this time though, we don't have much use cases in mind in addition to
server events handling, but the API is aimed at being multipurpose
so that new event families, with their own particularities, can be
easily implemented afterwards (and hopefully) without requiring breaking
changes to the API.
Moreover, you should know that the API was not designed to cope well
with high-rate event publishing, mostly because publishing means iterating
over an unsorted subscriber list.
So it won't scale well as the subscriber list grows, but this is intentional
in order to keep the code simple and versatile.
Instead, it is assumed that events implemented using this API
should be periodic events, and that events related to critical
io/networking processing should be handled using
dedicated facilities anyway.
(After all, this is meant to be a general purpose event API)
Apart from being easily extensible, one of the main goals of this API is
to make subscriber code as simple and safe as possible.
This is done by offering multiple event handling modes:
- SYNC mode:
publishing code directly
leverages handler code (callback function)
and handler code has a direct access to "live" event data
(pointers mostly, alongside with lock hints/context
so that accessing data pointers can be done properly)
- normal ASYNC mode:
handler is executed in a backward compatible way with sync mode,
so that it is easy to switch from and to SYNC/ASYNC mode.
Only here the handler has access to "offline" event data, and
not "live" data (ptrs) so that data consistency is guaranteed.
By offline, you should understand "snapshot" of relevant data
at the time of the event, so that the handler can consume it
later (even if the associated resource is not valid anymore)
- advanced ASYNC mode
same as normal ASYNC mode, but here handler is not a function
that is executed with event data passed as argument: handler is a
user defined tasklet that is notified when event occurs.
The tasklet may consume pending events and associated data
through its own message queue.
ASYNC mode should be considered first if you don't rely on live event
data and you want to make sure that your code has the lowest impact
possible on publisher code. (ie: you don't want to break stuff)
Internal API documentation will follow:
You will find more details about the notions we roughly approached here.
When digging into suspected memory leaks, it's cumbersome to count the
number of allocations and free calls. Here we're adding a summary at the
end showing the sum of allocs minus the sum of frees, excluding realloc
since we can't know how much it releases upon each call. This means that when
doing many realloc+free the count may be negative but in practice there
are very few reallocs so that's not a problem. Also the size/call is signed
and corresponds to the average size allocated (e.g. leaked) per call.
It seems to work reasonably well for now:
> debug dev memstats match buf
quic_conn.c:2978 P_FREE size: 1239547904 calls: 75656 size/call: 16384 buffer
quic_conn.c:2960 P_ALLOC size: 1239547904 calls: 75656 size/call: 16384 buffer
mux_quic.c:393 P_ALLOC size: 9112780800 calls: 556200 size/call: 16384 buffer
mux_quic.c:383 P_ALLOC size: 17783193600 calls: 1085400 size/call: 16384 buffer
mux_quic.c:159 P_FREE size: 8935833600 calls: 545400 size/call: 16384 buffer
mux_quic.c:142 P_FREE size: 9112780800 calls: 556200 size/call: 16384 buffer
h3.c:776 P_ALLOC size: 8935833600 calls: 545400 size/call: 16384 buffer
quic_stream.c:166 P_FREE size: 975241216 calls: 59524 size/call: 16384 buffer
quic_stream.c:127 P_FREE size: 7960592384 calls: 485876 size/call: 16384 buffer
stream.c:772 P_FREE size: 8798208 calls: 537 size/call: 16384 buffer
stream.c:768 P_FREE size: 2424832 calls: 148 size/call: 16384 buffer
stream.c:751 P_ALLOC size: 8852062208 calls: 540287 size/call: 16384 buffer
stream.c:641 P_FREE size: 8849162240 calls: 540110 size/call: 16384 buffer
stream.c:640 P_FREE size: 8847360000 calls: 540000 size/call: 16384 buffer
channel.h:850 P_ALLOC size: 2441216 calls: 149 size/call: 16384 buffer
channel.h:850 P_ALLOC size: 5914624 calls: 361 size/call: 16384 buffer
dynbuf.c:55 P_FREE size: 32768 calls: 2 size/call: 16384 buffer
Total BALANCE size: 0 calls: 5606906 size/call: 0 (excl. realloc)
Let's see how useful this becomes over time.
Sometimes when debugging it's convenient to be able to focus only on
certain pools. Just like we did for "show pools", let's add a filter
based on a prefix on "debug dev memstats match <prefix>".
These two have absolutely zero impact on the process and do not need to
be restricted to the expert mode. The first one calculates a string hash
that can be used by anyone when checking a dump; the second one may be
used by anyone tracking a memory leak, and is cumbersome to use due to
the "expert-mode on" that needs to be prepended. In addition this gives
bad habits to users and needlessly taints the process. So let's drop
this restriction for these two commands.
This command is used to hash a section name using the current anon key,
it was brought in 2.7 by commit 54966dffd ("MINOR: anon: store the
anonymizing key in the CLI's appctx"). However the help message only
says "return msg hashed" which is misleading because if anon mode is
not enabled, it returns the string as-is. Let's just mention this
condition in the help message, and also fix the alphabetical ordering
and alignment on the line.
"debug dev memstats" supports various options but silently ignores the
unknown ones. Let's make sure it returns indications about what it
expects, as the help message is quite limited otherwise.
Just like for the H2 multiplexer, info about the H1 connection task is now
displayed in "show fd" output. The task pointer is displayed and, if not
null, its expiration date.
It may be useful to backport it.
If shards are in use, we must fill the shard number on incoming updates,
otherwise some entries are assigned shard number zero, and may be broadcast
everywhere once updated, instead of being sent only to the peers having the
same shard number.
This fixes commit 36d156564 ("MINOR: peers: Support for peer shards"). No
backport is needed.
In peer_treat_updatemsg(), the lower layers of the stick-table code are
reimplemented, and the key length is never really known for an entry
being processed, it depends on the type being parsed and the moment
where it's done. This makes it quite difficult to stuff some shard
number calculation there.
This patch adds a keylen local variable that is always set to the length
of the current key depending on its type. It takes this opportunity for
reducing redundant expressions involving this length and always using the
new variable instead, limiting the risk of errors. Arguably that code
would have been way simpler by creating a dummy stktable_key and passing
it to stksess_new() as done anywhere else, but let's not change all that
a few days before the release.
The function used to calculate the shard number currently requires a
stktable_key on input for this. Unfortunately, it happens that peers
currently miss this calculation and they do not provide stktable_key
at all, instead they're open-coding all the low-level stick-table work
(hence why it's missing). Thus we'll need to be able to calculate the
shard number in keys coming from peers as well but the current API does
not make it possible.
This commit addresses this by inverting the order where the length and
the shard number are used. Now the low-level function is independent on
stksess and stktable_key, it takes a table, pointer and length and does
all the job. The upper function takes care of the type and key to get
its length, and is for use only from stick-table code.
This doesn't change anything except that the low-level one will be usable
from outside (hence why it's exported now).
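A hedged sketch of such a low-level helper, using simplified types of my own
(not haproxy's structures; the exact hash and mapping may differ):
    #include <stdint.h>
    #include <stddef.h>

    struct tbl {
        uint64_t hash_seed;      /* per-table seed computed at boot */
        unsigned int nb_shards;  /* 0 when sharding is not used */
    };

    /* hash64() stands for whatever 64-bit hash is used (e.g. xxhash) */
    extern uint64_t hash64(const void *key, size_t len, uint64_t seed);

    static unsigned int calc_shard_num(const struct tbl *t,
                                       const void *key, size_t len)
    {
        if (!t->nb_shards)
            return 0;
        return hash64(key, len, t->hash_seed) % t->nb_shards + 1;
    }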
If the client closes the connection while there is no pending outgoing data,
the H1 connection must be released. However, it was switched to CLOSING
state instead. Thus the client connection was closed on client timeout.
It is a side effect of the commit d1b573059a ("MINOR: mux-h1: Avoid useless
call to h1_send() if no error is sent"). Before, the extra call to h1_send()
was able to fix the H1C state.
To fix the bug and make the switch to a close state (CLOSING or CLOSED) less
error-prone, the h1_close() helper function is systematically used.
It is a 2.7-specific bug. No backport needed.
We need to initialize the shard value in __stksess_init() because there is
not necessarily a key to make it happen later, resulting in an uninitialized
shard value appearing in the entry, typically when entries are learned from
peers. This fixes commit 36d156564 ("MINOR: peers: Support for peer shards"),
no backport is needed.
Note however that it is not sufficient to completely fix the peers code, the
shard value remains zero because the setting of the key was open-coded in
the peers code and these parts were not identified when adding support for
shards.
Some issues such as #1929 seem to involve a task without timeout but we
can't find the condition to reproduce this in the code. However, not having
this info in the output doesn't help, so this patch adds the task pointer
and its timeout (when the task is non-null). It may be useful to backport
it.
qc_dgrams_retransmit() could reuse the same local list and could splice it
twice into the packet number space list of frames to be sent/resent. This
creates a loop in this list and makes qc_build_frms() possibly endlessly loop
when trying to build frames from the packet number space list of frames. Then
haproxy aborts.
This issue could be easily reproduced by patching the qc_build_frms() function
to set the <dlen> variable value to 0 after having built at least 10 CRYPTO
frames, and using ngtcp2 as client with 30% packet loss in both directions.
Thank you to @gabrieltz for having reported this issue in GH #1903.
Must be backported to 2.6.
ncbuf can be compiled for haproxy or standalone to run the unit test suite.
For the latter mode, the BUG_ON() macro has been re-implemented in a
simplified version.
The inclusion of the default or the redefined macro relied on DEBUG_DEV.
Change this to now rely on DEBUG_STRICT as this is activated for the
default build.
This change is safe as only BUG_ON_HOT() macro is used in ncbuf code,
which is activated only with the default value DEBUG_STRICT=2.
This should be backported up to 2.6.
ncbuf API relies on lot of small functions. Mark these functions as
inline to reduce call invocations and facilitate compiler optimizations
to reduce code size.
This should be backported up to 2.6.
ncb_blk structure is used to represent a block of data or a gap in a
non-contiguous buffer. This is used in several functions for ncbuf
implementation. Before this patch, ncb_blk was passed by value, which is
sub-optimal. Replace this by const pointer arguments.
This has the side-effect of suppressing a compiler warning reported in
older GCC versions:
CC src/http_conv.o
src/ncbuf.c: In function 'ncb_blk_next':
src/ncbuf.c:170: warning: 'blk.end' may be used uninitialized in this function
This should be backported up to 2.6.
Stick-tables support sharding to multiple peers but there was no way to
know to what shard an entry was going to be sent. Let's display this in
the "show table" output to ease debugging.
Instead of using memcpy() to concatenate the table's name to the key when
allocating an stksess, let's compute once and for all a per-table seed at boot
time and use it to calculate the key's hash. This saves two memcpy() and
the usage of a chunk, it's always nice in a fast path.
When tested under extreme conditions with a 80-byte long table name, it
showed a 1% performance increase.
Initially the code was placed into cli.c to keep activity.c small and
independent of the cli stuff, but that's no longer the case anyway and
keeping that code over there makes it harder to find. Let's move it to
its more natural place now.
It happened a few times that it was difficult to figure if a counter was
normal or not in "show activity" based on the uptime. Let's just emit the
uptime value along with the date.
With an OpenSSL library which uses the wrong OPENSSLDIR, HAProxy tries to
load the OPENSSLDIR/certs/ into @system-ca, but emits a warning when it
can't.
This patch fixes the issue by silencing the error when the SSL
configuration for the httpclient is not explicit.
Must be backported in 2.6.
After reading a datagram, it is enqueued for the thread attached to the
DCID. This is done via quic_lstnr_dgram_dispatch() function. If this
step fails, we remove the datagram from the buffer of quic_receiver_buf.
This step is faulty because we use b_del() instead of b_sub(). If
quic_receiver_buf was not empty, we will remove content from another
datagram while leaving the content of the last read datagram. This
probably produces valid datagram dropping and may even result in crash.
As stated, this bug can only happen if qc_lstnr_dgram_dispatch() fails
which happens on two occasions :
* on quic_dgram allocation failure, which should be pretty rare
* on datagram labelled as invalid for QUIC protocol. This may happen
more frequently depending on the network conditions. Thus, this bug
has been labelled as a medium one.
This should be backported up to 2.6.
There were a few leftovers of DEBUG_HPACK and HPACK_SHT_SIZE instead of
their QPACK equivalent in the QPACK debug code. There's no harm anyway,
but it could lead to confusing results if the tables are not sized
equally.
In issue #1939, Ilya mentions that cppchecks warned about use of "%d" to
report the QPACK table's index that's locally stored as an unsigned int.
While technically valid, this will never cause any trouble since indexes
are always small positive values, but better use %u anyway to silence
this warning.
In issue #1939, Ilya mentions that cppchecks warned about use of "%d" to
report the status state that's locally stored as an unsigned int. While
technically valid, this will never cause any trouble since in the end
what we store there are the applet's states (just a few enum values).
Better use %u anyway to silence this warning.
In GH issue #1940 cppcheck warns about a possible null-dereference in
check_user() when DEBUG_AUTH is enabled. Indeed, <ep> may potentially
be NULL because upon error crypt_r() and crypt() may return a null
pointer. However it's not directly dereferenced, it is only passed to
printf() with '%s' fmt. While it is in practice fine with the printf
implementations we care about (that check strings against null before
printing them), it is undefined behavior according to the spec, hence
the warning.
Let's check <ep> before passing it to printf. This should partly
solve GH #1940.
Emeric noticed that producing many randoms to fill a stick table was
saturating on the rand_lock. Since 2.4 we have the statistical PRNG
for low-quality randoms like this one, there is no point in using the
one that was originally implemented for the purpose of creating safe
UUIDs, since the doc itself clearly states that these randoms are not
secure and they have not been in the past either. With this change,
locking contention is completely gone.
This adds a USE_OPENSSL_WOLFSSL option, wolfSSL must be used with the
OpenSSL compatibility layer. This must be used with USE_OPENSSL=1.
WolfSSL build options:
./configure --prefix=/opt/wolfssl --enable-haproxy
HAProxy build options:
USE_OPENSSL=1 USE_OPENSSL_WOLFSSL=1 WOLFSSL_INC=/opt/wolfssl/include/ WOLFSSL_LIB=/opt/wolfssl/lib/ ADDLIB='-Wl,-rpath=/opt/wolfssl/lib'
Using at least the commit 54466b6 ("Merge pull request #5810 from
Uriah-wolfSSL/haproxy-integration") from WolfSSL. (2022-11-23).
This is still to be improved, reg-tests are not supported yet, and more
tests are to be done.
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
Gcc 6.5 is now well known for triggering plenty of false "may be used
uninitialized", particularly at -O1, and two of them happen in quic,
quic_tp and quic_conn. Both of them were reviewed and easily confirmed
as wrong (gcc seems to ignore the control flow after the function
returns and believes error conditions are not met). Let's just preset
the variables that bother it. In quic_tp the initialization was moved
out of the loop since there's no point inflating the code just to
silence a stupid warning.
These includes brought by commit 9c76637ff ("MINOR: anon: add new macros
and functions to anonymize contents") resulted in an increase of exactly
20% of the number of lines to build. These includes are not needed there;
only tools.c needs xxhash.h.
cfgparse-quic accesses some members of the "global" struct but only
includes global-t.h. It actually used to work via tools.h due to a
long dependency chain that brought it, but it will be fixed and will
break cfgparse-quic, so let's fix it first.
Commit 36d156564 ("MINOR: peers: Support for peer shards") reintroduced
a direct dependency on import/xxhash.h which was previously dropped by
commit d5fc8fcb8 ("CLEANUP: Add haproxy/xxhash.h to avoid modifying
import/xxhash.h"). This results in xxhash.h being included twice, which
breaks the build on older compilers which do not like seeing XXH32_hash_t
being defined twice.
Let's just use include/haproxy/xxhash.h instead.
No backport is needed.
If we choose to not send an error to the client, there is no reason to call
h1_send() for nothing. This happens because functions handling errors return
1 instead of 0 when nothing is performed.
It is cleaner to remove this flag in the internal functions handling errors,
responsible for updating the H1 connection state, instead of doing so in the
calling functions. This will hopefully avoid bugs in the future.
When a timeout is detected waiting for the request, a 408-Request-Time-Out
response is sent. However, an error was introduced by commit 6858d76cd3
("BUG/MINOR: mux-h1: Obey dontlognull option for empty requests"). Instead
of inhibiting the log message, the option was stopping the error sending.
Of course, we must do the opposite.
This patch must be backported as far as 2.4.
When an idle H1 connection starts to process a new request, we must take
care to remove the H1C_F_WAIT_NEXT_REQ flag. This flag is used to know an idle
H1 connection has already processed at least one request and is waiting for
a next one, but nothing was received yet.
Keeping this flag leads to a crash because some running H1 connections may
be erroneously released on a soft-stop. Indeed, only idle or closed
connections must be released.
This bug was reported into #1903. It is 2.7-specific. No backport needed.
In ssl_sock_bind_verifycbk(), when compiled without QUIC support, the
compiler may report an error during compilation about a possible NULL
dereference:
src/ssl_sock.c: In function ‘ssl_sock_bind_verifycbk’:
src/ssl_sock.c:1738:12: error: potential null pointer dereference [-Werror=null-dereference]
1738 | ctx->xprt_st |= SSL_SOCK_ST_FL_VERIFY_DONE;
| ~~~^~~~~~~~~
A BUG_ON() was added because it must never happen. But when compiled
without DEBUG_STRICT, there is nothing to help the compiler. Thus the
ALREADY_CHECKED() macro is used. The ssl-sock context and the bind config
are concerned.
This patch must be backported to 2.6.
In http_replace_req_uri(), if the URI was successfully replaced, it means
the start-line exists. However, the compiler reports an error about a
possible NULL pointer dereference:
src/http_htx.c: In function ‘http_replace_req_uri’:
src/http_htx.c:392:19: error: potential null pointer dereference [-Werror=null-dereference]
392 | sl->flags &= ~HTX_SL_F_NORMALIZED_URI;
So, the ALREADY_CHECKED() macro is used to silence the build error. This patch
must be backported with 84cdbe478a ("BUG/MINOR: http-htx: Don't consider an
URI as normalized after a set-uri action").
The recent refactoring about errors handling in the H1 multiplexer
introduced a bug on abort when the client wait for the server response. The
bug only exists if abortonclose option is not enabled. Indeed, in this case,
when the end of the message is reached, the mux stops receiving data
because these data are part of the next request. However, errors on the
sending path are no longer fatal. An error on the reading path must be
caught to do so. So, in case of a client abort, the error is not reported to
the upper layer and the H1 connection cannot be closed if some pending data
are blocked (remember, an error on sending path was detected, blocking
outgoing data).
To be sure to have a chance to detect the abort in this case, when an error
is detected on the sending path, we force the subscription for reads.
This patch, with the previous one, should fix the issue #1943. It is
2.7-specific, no backport is needed.
When the H1 task times out, we must be careful not to release the H1
connection if there is still an H1 stream with a stream-connector
attached. In this case, we must wait. There are some tests to prevent this
from happening. But the last one only tests the UPGRADING state while there
is also the CLOSING state with a SC attached. But, in the end, it is safer to
test if there is an H1 stream with a SC attached.
This patch should partially fix the issue #1943. However, it only prevents
the segfault. There is another bug under the hood. BTW, this one is
2.7-specific. No backport is needed.
An absolute URI is marked as normalized if it comes from an H2 client. This
way, we know we can send a relative URI to an H1 server. But, after a
set-uri action, the URI must no longer be considered as normalized.
Otherwise there is no way to send an absolute URI on the server side.
If it is important to update a normalized absolute URI without altering this
property, the host, path and/or query-string must be set separately.
This patch should fix the issue #1938. It should be backported as far as
2.4.
Except for the CONNECT method, where a normalization is performed, we expected
to have an exact match between the authority and the host header value.
However it was too strict. Indeed, default ports must be handled and the
matching must respect RFC3986.
There is already a scheme-based normalization performed on the URI later,
on the HTX message. And we cannot normalize the URI during H1 parsing to be
able to report proper errors on the original raw buffer. And a systematic
read-only normalization to validate the authority would consume CPU for only
a few requests. In the end, we decided to perform extra checks when the exact
match fails. Now, the following authority/host values are considered as
equivalent:
http: domain.com <=> domain.com:80 <=> domain.com:
https: domain.com <=> domain.com:443 <=> domain.com:
This patch depends on:
* MINOR: h1: Consider empty port as invalid in authority for CONNECT
* MINOR: http: Considere empty ports as valid default ports
It is a bug regarding RFC3986. Technically, it may be backported as far
as 2.4. However, this must be discussed first. If it is backported, the
commits above must be backported too.
Thanks to the previous commit ("MINOR: http: Considere empty ports as valid
default ports"), empty ports are now considered as valid default ports. Thus,
absolute URIs with empty port should be normalized.
So now, the following URIs are normalized:
http://example.com:/  --> http://example.com/
https://example.com:/ --> https://example.com/
This patch depend on:
* MINOR: h1: Consider empty port as invalid in authority for CONNECT
* MINOR: http: Considere empty ports as valid default ports
It is a bug regarding RFC3986. Technically, it may be backported as far
as 2.4. However, this must be discussed first. If backported, the commits
above must be backported too.
In RFC3986#6.2.3, the following URIs are considered as equivalent:
http://example.com
http://example.com/
http://example.com:/
http://example.com:80/
The third one is interesting because the port is empty and it is still
considered as a default port. Thus, http_get_host_port() no longer
returns IST_NULL when the port is empty. Now, an ist is returned; it points to
the first character after the colon (':') with a length of 0. In addition,
http_is_default_port() now considers an empty port as a default port,
regardless of the scheme.
This patch must not be backported; if it is, it must be backported along with
the previous one ("MINOR: h1: Consider empty port as invalid in authority for
CONNECT").
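A hedged sketch of the described behavior, assuming haproxy's ist helpers
(own helper name, simplified; not the actual http_is_default_port() code):
    /* an empty port is a default port for any scheme */
    static int is_default_port(const struct ist schm, const struct ist port)
    {
        if (!istlen(port))
            return 1;
        if (isteqi(schm, ist("https://")))
            return isteq(port, ist("443"));
        return isteq(port, ist("80"));
    }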
For now, this change is useless because http_get_host_port() returns
IST_NULL when the port is empty. But this will change. For other methods,
empty ports are valid. But not for CONNECT method. To still return a
400-Bad-Request if a CONNECT is performed with an empty port, istlen() is
used to test the port, instead of isttest().
Len cannot be equal to 1 when entering the escape handling code.
Yet, an extra "len == 1" check was performed.
Let's remove this useless check.
This was reported by Ilya with the help of cppcheck.
In parse_log_message(), if the log is rfc5424 compliant, the p pointer
is incremented but size is not. However size is still used in further
checks as if the p pointer had not been incremented.
This could lead to a logic error or a buffer overflow if the input buffer is
not null-terminated.
Fix this by making sure size is up to date where it is needed.
It could be backported up to 2.4.
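An illustrative fragment of the principle (variable names are assumptions,
the actual code differs): keep <size> consistent with <p> whenever the
pointer is advanced.
    p    += parsed;   /* skip the rfc5424 header that was just parsed */
    size -= parsed;   /* and account for it in the remaining size */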
ebpt_next_dup() was used 2 times in a row but only the first call was
checked against NULL, probably assuming that the 2 calls always yield the
same result here.
gcc is not OK with that, and it should be safer to store the result of
the first call in a temporary var to dereference it once checked against NULL.
This should fix GH #1869.
Thanks to Ilya for reporting this issue.
It may be backported up to 2.4.
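An illustrative fragment of the fix's principle (surrounding names are
assumptions): call ebpt_next_dup() once, store the result, and only
dereference it after the NULL check.
    struct ebpt_node *next = ebpt_next_dup(cur);

    if (next)
        use_key(next->key);   /* safe: checked against NULL first */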
The build with DEBUG_THREAD was broken by commit fc50b9dd1 ("BUG/MAJOR:
sched: protect task during removal from wait queue"). It took me a while
to figure out how to declare an aligned and initialized rwlock that wasn't
static, but it turns out that __decl_aligned_rwlock() does exactly this,
so that we don't have to assign an integer value when a struct is expected
in case of debugging.
No backport is needed.
The issue addressed by commit fbb934da9 ("BUG/MEDIUM: stick-table: fix
a race condition when updating the expiration task") is still present
when thread groups are enabled, but this time it lies in the scheduler.
What happens is that a task configured to run anywhere might already
have been queued into one group's wait queue. When updating a stick
table entry, sometimes the task will have to be dequeued and requeued.
For this a lock is taken on the current thread group's wait queue lock,
but while this is necessary for the queuing, it's not sufficient for
dequeuing since another thread might be in the process of expiring this
task under its own group's lock which is different. This is easy to test
using 3 stick tables with 1ms expiration, 3 track-sc rules and 4 thread
groups. The process crashes almost instantly under heavy traffic.
One approach could consist in storing the group number the task was
queued under in its descriptor (we don't need 32 bits to store the
thread id, it's possible to use one short for the tid and another
one for the tgrp). Sadly, no safe way to do this was figured, because
the race remains at the moment the thread group number is checked, as
it might be in the process of being changed by another thread. It seems
that a working approach could consist in always having it associated
with one group, and only allowing to change it under this group's lock,
so that any code trying to change it would have to iterately read it
and lock its group until the value matches, confirming it really holds
the correct lock. But this seems a bit complicated, particularly with
wait_expired_tasks() which already uses upgradable locks to switch from
read state to a write state.
Given that the shared tasks are not that common (stick-table expirations,
rate-limited listeners, maybe resolvers), it doesn't seem worth the extra
complexity for now. This patch takes a simpler and safer approach
consisting in switching back to a single wq_lock, but still keeping
separate wait queues. Given that shared wait queues are almost always
empty and that otherwise they're scanned under a read lock, the
contention remains manageable and most of the time the lock doesn't
even need to be taken since such tasks are not present in a group's
queue. In essence, this patch reverts half of the aforementioned
patch. This was tested and confirmed to work fine, without observing
any performance degradation under any workload. The performance with
8 groups on an EPYC 74F3 and 3 tables remains twice the one of a
single group, with the contention remaining on the table's lock first.
No backport is needed.
The global_listener_rwlock was introduced by recent commit 13e86d947
("BUG/MEDIUM: listener: Fix race condition when updating the global mngmt
task"), but it's declared static and is not used when threads are disabled,
thus causing a warning to be emitted in this case. Let's just condition it
to thread usage to shut the warning.
This will need to be backported where the patch above is backported.
In order to evenly pick idle connections from other threads, there is
a "next_takeover" index in the server, that is incremented each time
a connection is picked from another thread, and indicates which one to
start from next time.
With thread groups this doesn't work well because the index is the same
regardless of the group, and if a group has more threads than another,
there's even a risk to reintroduce an imbalance.
This patch introduces a new per-tgroup storage in servers which, for now,
only contains an instance of this next_takeover index. This way each
thread will now only manipulate the index specific to its own group, and
the takeover will become fair again. More entries may come soon.
In 2.2, some idle conns usage metrics were added by commit cf612a045
("MINOR: servers: Add a counter for the number of currently used
connections."), which mentioned that the operation doesn't need to be
atomic since we're not seeking exact values. This is true but at least
we should use atomic stores to make sure not to cause invalid values
to appear on archs that wouldn't guarantee atomicity when writing an
int, such as writing two 16-bit words. This is pretty unlikely on our
targets but better keep the code safe against this.
This may be backported as far as 2.2.
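An illustrative fragment only (the exact statement in the patch may differ;
<new_value> is a placeholder): use an atomic store so the counter can never
be observed half-written on archs where writing an int is not atomic.
    HA_ATOMIC_STORE(&srv->curr_used_conns, new_value);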
The function resolv_update_resolvers_timeout() always schedules a wakeup
of the process_resolvers() task based on the "timeout resolve" setting,
regardless of the presence of an ongoing resolution or not. This is causing
one wakeup every second by default even when there's no resolvers section
(due to the default one), and can even be worse: creating a section with
"timeout resolve 1" bombs the process with 1000 wakeups per second.
Let's condition the setting to the presence of a resolution to address
this.
This issue has been there forever, but it doesn't cause that much trouble,
and given how fragile and tricky this code is, it's probably wise to
refrain from backporting it until it's reported to really cause trouble.
If no cluster-secret is defined by the user, a random one is silently
generated.
This ensures that at least QUIC Retry tokens are generated if abnormal
conditions are detected. However, it is advisable to specify it in the
configuration for tokens to be valid even after a reload or across LBs
instances in the same cluster.
This should be backported up to 2.6.
QUIC Retry generation relies on global cluster-secret to produce token
valid even after a process restart and across several LBs instances.
Before this patch, Retry is automatically deactivated if no
cluster-secret is provided. This is the case even if a user has
configured a QUIC listener with quic-force-retry. Change this behavior
by now returning an error during configuration parsing. The user must
provide a cluster-secret if quic-force-retry is used.
This should be backported up to 2.6.
The "show pools" command is used a lot for debugging but didn't get much
love over the years. This patch brings new capabilities:
- sorting the output by pool names to ease finding them ("byname").
- sorting the output by reverse item size to spot the biggest ones ("bysize")
- sorting the output by reverse number of allocated bytes ("byusage")
The last one (byusage) also omits displaying the ones with zero allocation.
In addition, an optional max number of output entries may be passed so as
to dump only the N most relevant ones.
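For instance, something like the following (socket path illustrative) would
dump the 10 pools using the most memory:
    $ echo "show pools byusage 10" | socat stdio /var/run/haproxy.sock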
The previous patch below was not sufficient to prevent haproxy from
crashing when some Handshake packets had to be inspected before possibly
being retransmitted:
"BUG/MAJOR: quic: Crash upon retransmission of dgrams with several packets"
This patch introduced another issue: access to packets which have been
released because they were still attached to others (in the same datagram).
This was the case for instance when discarding the Initial packet number
space before inspecting a Handshake packet in the same datagram through its
->prev member in our case.
This patch implements quic_tx_packet_dgram_detach() which detaches a packet
from the adjacent ones in the same datagram, to be called when acknowledging
a packet (as done in the previous commit) and when releasing its memory. This
way, we are sure the released packets will not be accessed during
retransmissions.
Thank you to @gabrieltz for having reported this issue in GH #1903.
Must be backported to 2.6.
It is useful because when we're passing data to the runtime API, especially
via code, we can mistakenly send newlines leading to some lines being
wrongly interpreted as commands.
This is analogous to how it's done in a shell, example bash:
$ not_found arg1
bash: not_found: command not found...
$
Real world example: Following the official docs to add a cert:
$ echo -e "set ssl cert ./cert.pem <<\n$(cat ./cert.pem)\n" | socat stdio tcp4-connect:127.0.0.1:9999
Note, how the payload is sent via '<<\n$(cat ./cert.pem)\n'. If cert.pem
contains a newline between various PEM blocks, which is valid, the above
command would generate a flood of 'Unknown command' messages for every
line sent after the first newline. As a new user, this detail is not
clearly visible as the socket API doesn't say exactly what was 'unknown'
about it. The cli interface should be clear about guiding the user on
"what to do next".
This commit changes that by printing the parsed cmd in output like
'Unknown command: "<cmd>"' so the user gets clear "next steps", like
bash, regarding what indeed was the wrong command that HAProxy couldn't
interpret.
Previously:
$ echo -e "show version\nhelpp"| socat ./haproxy.sock - | head -n4
2.7-dev6
Unknown command, but maybe one of the following ones is a better match:
add map [@<ver>] <map> <key> <val> : add a map entry (payload supported instead of key/val)
Now:
$ echo -e "show version\nhelpp"| socat ./haproxy.sock - | head -n4
2.7-dev8-737bb7-156
Unknown command: 'helpp', but maybe one of the following ones is a better match:
add map [@<ver>] <map> <key> <val> : add a map entry (payload supported instead of key/val)
As revealed by some traces provided by @gabrieltz in GH #1903 issue,
there are clients (chrome I guess) which acknowledge only one packet among others
in the same datagram. This is the case for the first datagram sent by a QUIC
haproxy listener, made of an Initial packet followed by a Handshake one. In
this identified case, only the Handshake packet is acknowledged. But if the
client is able to respond with a Handshake packet (ACK frame) this is because
it has successfully parsed the Initial packet. So, why not also acknowledge
it? AFAIK, this is mandatory. On our side, when retransmitting this datagram,
the Handshake packet was accessed from the Initial packet after having been
released.
Anyway. There is an issue on our side. Obviously, we must not expect an
implementation to respect the RFC, especially when it wants to build an attack ;)
With this simple patch, for each TX packet we send, we also set the previous
one in addition to the next one. When a packet is acknowledged, we detach the
next one and the previous one in the same datagram from this packet, so that
they cannot be resent when resending these packets (the previous one, in our
case).
Thank you to @gabrieltz for having reported this issue.
Must be backported to 2.6.
The silent-drop action was extended with an additional optional parameter,
[rst-ttl <ttl> ], causing HAProxy to send a TCP RST with the specified TTL
towards the client.
With this behaviour, the connection state on your own client-
facing middle-boxes (load balancers, firewalls) will be purged,
but the client will still assume the TCP connection is up because
the TCP RST packet expires before reaching the client.
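For example (an illustrative rule; the ACL and values are assumptions):
    http-request silent-drop rst-ttl 2 if { src -f /etc/haproxy/blocklist.lst }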
Add more traces to follow CRYPTO data buffering in ncbuf. Offset for
quic_enc_level is now reported for event QUIC_EV_CONN_PRHPKTS. Also
ncb_advance() must never fail so a BUG_ON() statement is here to
guarantee it.
This was useful to track handshake failures reported too often. This is
related to github issue #1903.
This should be backported up to 2.6.
Release the quic_enc_level ncbuf in quic_stream_free(). In most cases, this
will already be done when handshake is completed via
qc_treat_rx_crypto_frms(). However, if a connection is released before
handshake completion, a leak was present without this patch.
Under normal circumstances, this leak should have been limited since the
majority of QUIC connections succeed at the handshake. However, another bug
caused handshakes to fail too frequently, especially with the chrome client.
This had the side-effect to dramatically increase this memory leak.
This should fix in part github issue #1903.
QUIC handshakes were frequently in error due to haproxy misuse of
ncbuf. This resulted in one of the following scenarios :
- handshake rejected with CONNECTION_CLOSE due to overlapping data
rejected
- CRYPTO data fully received by haproxy but handshake completion signal
not reported causing the client to emit PING repeatedly before timeout
This was produced because ncb_advance() result was not checked after
providing crypto data to the SSL stack in qc_provide_cdata(). However,
this operation can fail if a too small gap is formed. In the meantime,
quic_enc_level offset was always incremented. In some cases, this caused
next ncb_add() to report rejected overlapping data. In other cases, no
error was reported but SSL stack never received the end of CRYPTO data.
Slightly change the handling of new CRYPTO frames to avoid this bug :
when receiving a CRYPTO frame for the current offset, handle it
directly as previously done only if the quic_enc_level ncbuf is empty.
Otherwise, copy it to the buffer before treating contiguous data
via qc_treat_rx_crypto_frms().
This change ensures that ncb_advance() operation is now conducted only
in a data block : thus this is guaranteed to never fail.
This bug was easily reproduced with chromium as it fragments CRYPTO
frames randomly in several frames out of order.
This commit has two drawbacks :
- it is slightly worse for performance because sometimes even
data at the current offset will be copied (memcpy)
- if a client uses too many fragmented CRYPTO frames, this can cause
repeated ncb_add() errors on gap size. This can be reproduced with
chrome, albeit at a slightly less frequent rate than the fixed issue.
This change should fix in part github issue #1903.
This must be backported up to 2.6.
The ncb_advance() operation may be rejected if a too small gap would be
formed at the buffer front. This must be documented to avoid misusing
it.
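For reference, a minimal sketch of the expected caller pattern, assuming the
ncbuf API roughly as exposed in ncbuf.h (exact return values may differ):

enum ncb_ret ret;

ret = ncb_advance(buf, len);
if (ret != NCB_RET_OK) {
	/* advancing would leave a too small gap at the buffer front:
	 * do not move the level offset forward, handle the error instead.
	 */
	return 0;
}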
This should be backported up to 2.6.
Since 0909f62266 ("BUG/MEDIUM: peers: messages about unkown tables not
correctly ignored"), the 'sc' variable is no longer used in
peer_treat_updatemsg() and peer_treat_definemsg() functions. So, we must
remove them to avoid compilation warning.
This patch must be backported with the commit above.
nbthread, thread-group and thread-groups directives must only be defined in
the very first global sections. It means no other section must have been parsed
before. Indeed, some parts of the configuration depend on the value of these
settings and changing them afterwards is undefined.
In diag mode, the section position is checked and a warning is emitted if a
global section is defined after any non-global one. Now, this check is
always performed. But the warning is still only emitted in diag mode. In
addition, the result of this check is now stored in a global variable, to be
used from anywhere.
The aim of this patch is to be able to restrict usage of some global
directives to the very first global sections. It will be useful to avoid
undefined behaviors. Indeed, some config parts may depend on global settings
and it is a problem if these settings are changed afterwards.
Table definition messages and update messages are not correctly
ignored if the table is not configured on the local peer.
It is a bug because, upon receiving those messages, the parser
returns an error and the upper layer considers that the state of the
peer's connection is modified (as it is done in the case of a protocol
error) and immediately switches the automaton to process the new state.
But, even if the message is silently ignored because the connection's
state doesn't change and we continue to process the next message, some
processing is still not performed: for instance the ALIVE flag is not set
on the peer's connection as it should be done after receiving any valid
message. This results in a shutdown of the connection when the timeout
elapses as if no message had been received during this delay.
This patch fixes the behavior: those messages are now silently ignored
and the upper layer continues the processing as it is done for any valid
message.
This bug appeared with the peers code rework in 2.0 so
it should be backported as far as this version.
The registering of the keylog callback seems to provoke a loss of
performance. Disable the registration as well as the fetches if
tune.ssl.keylog is off.
Must be backported as far as 2.2.
In raw_sock_to_buf(), if a low-level error is reported, we no longer
immediately set an error on the connection if something was received. This
may happen when a RST is received with data. This way, we give the mux a
chance to process the received data first instead of immediately aborting.
This patch should fix some spurious health-check failures. It is pretty hard
to observe, but with a server immediately returning the response followed by
a RST, without waiting for the request, it is possible to have some health-check
errors. For instance, with the following tcploop server:
tcploop 8000 L Q W N1 A S:"HTTP/1.0 200 OK\r\n\r\n" F K
( Accept -> send response -> FIN -> Close)
we can have such strace output:
15:11:21.433005 socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 38
15:11:21.433141 fcntl(38, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
15:11:21.433233 setsockopt(38, SOL_TCP, TCP_NODELAY, [1], 4) = 0
15:11:21.433359 setsockopt(38, SOL_TCP, TCP_QUICKACK, [0], 4) = 0
15:11:21.433457 connect(38, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
15:11:21.434215 epoll_ctl(4, EPOLL_CTL_ADD, 38, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP, data={u32=38, u64=38}}) = 0
15:11:21.434468 epoll_wait(4, [{events=EPOLLOUT, data={u32=38, u64=38}}], 200, 21) = 1
15:11:21.434810 recvfrom(38, 0x7f32a83e5020, 16320, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
15:11:21.435405 sendto(38, "OPTIONS / HTTP/1.0\r\ncontent-leng"..., 41, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 41
15:11:21.435833 epoll_ctl(4, EPOLL_CTL_MOD, 38, {events=EPOLLIN|EPOLLRDHUP, data={u32=38, u64=38}}) = 0
15:11:21.435907 epoll_wait(4, [{events=EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLRDHUP, data={u32=38, u64=38}}], 200, 17) = 1
15:11:21.436024 recvfrom(38, "HTTP/1.0 200 OK\r\n\r\n", 16320, 0, NULL, NULL) = 19
15:11:21.436189 close(38) = 0
15:11:21.436402 write(2, "[WARNING] (163564) : Server bac"..., 184[WARNING] (163564) : Server back-http/www is DOWN, reason: Socket error, check duration: 5ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
The response was received, but it is ignored because an error was reported
too. The error handling must be refactored. But it is a titanic task. Thus,
for now, a good fix is to delay the error report when something was
received. The error will be reported on the next receive, if any.
This patch should fix the issue #1863, but it must be confirmed. At least it
fixes the above example. It must be backported to 2.6. For older versions,
it must be evaluated first.
In http_create_txn(): vars_init_head() was performed on both the s->vars_txn
and s->vars_reqres lists.
But this is wrong, these two lists are already initialized upon stream
creation in stream_new().
Moreover, between stream_new() and http_create_txn(), some variables may
be defined (e.g.: by the frontend), resulting in lists not being empty.
Because of this "extra" list initialization, already defined variables
can be lost.
This causes txn-dependent code not being able to access previously defined
variables, as well as a memory leak, because http_destroy_txn() relies on these
lists to perform the purge.
This proved to be the case when a frontend sets variables and a lua sample
fetch is used in a backend section as described in GH #1935.
Many thanks to Darragh O'Toole for his detailed report.
Remove the extra vars_init_head() calls (x2) in http_create_txn() to fix the issue.
Add some comments in the code in an attempt to prevent future misuses
of the s->vars_reqres and s->vars_txn lists.
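A minimal sketch of the intended initialization flow, assuming the
vars_init_head() helper and scope constants as used in stream_new():

/* stream_new(): the lists are initialized exactly once, here */
vars_init_head(&s->vars_txn,    SCOPE_TXN);
vars_init_head(&s->vars_reqres, SCOPE_REQ);

/* http_create_txn(): must NOT re-initialize them, otherwise any variable
 * set between stream creation and TXN creation is lost and will leak,
 * since http_destroy_txn() purges these lists.
 */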
It should be backported in every stable version.
(This is an old bug that seems to exist since 1.6-dev6)
[cf: On 2.0 and 1.8, for the legacy HTTP code, vars_init() are used during
the TXN cleanup, when the stream is reused. So, these calls must be
moved from http_init_txn() to http_reset_txn() and not removed.]
The goto label is not at the right place. When the H1S allocation fails, the
error is immediately handled. Thus, the "no_parsing" label must be set just
after the h1_send() call to skip the request parsing part.
It is 2.7-specific. No backport needed.
At the end of manage_global_listener_queue(), the task expire date is set to
TICK_ETERNITY. Thus, it is useless to call task_queue() just after because
the function does nothing in this case.
It is pretty similar to fbb934da90 ("BUG/MEDIUM: stick-table: fix a race
condition when updating the expiration task"). When the global management
task is running, at the end of its process function, it resets the expire
date by setting it to TICK_ETERNITY. At the same time, a listener may reach a
global limit and decide to schedule the task. Thus it is possible to queue
the task and trigger the BUG_ON() on the expire date because its value was
set to TICK_ETERNITY in the meantime:
FATAL: bug condition "task->expire == 0" matched at src/task.c:310
call trace(12):
| 0x662de8 [b8 01 00 00 00 c6 00 00]: __task_queue+0xc7/0x11e
| 0x63b03f [48 b8 04 00 00 00 05 00]: main+0x2535e
| 0x63ed1a [e9 d2 fd ff ff 48 8b 45]: listener_accept+0xf72/0xfda
| 0x6a36d3 [eb 01 90 c9 c3 55 48 89]: sock_accept_iocb+0x82/0x87
| 0x6af22f [48 8b 05 ca f9 13 00 8b]: fd_update_events+0x35a/0x497
| 0x42a7a8 [89 45 d8 83 7d d8 02 75]: main-0x1eb539
| 0x6158fb [48 8b 05 e6 06 1c 00 64]: run_poll_loop+0x2e7/0x319
| 0x615b6c [48 8b 05 ed 65 1d 00 48]: main-0x175
| 0x7ffff775bded [e9 69 fe ff ff 48 8b 4c]: libc:+0x8cded
| 0x7ffff77e1370 [48 89 c7 b8 3c 00 00 00]: libc:+0x112370
To fix the bug, a RW lock is introduced. It is used to fix the race
condition. A read lock is taken when the task is scheduled, in
listener_accept(), and a write lock is used at the end of the process function to
set the expire date to TICK_ETERNITY. This lock should not be used very
often and most of the time by "readers". So, the impact should be really
limited.
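A minimal sketch of the locking scheme described above (lock name, label and
delay are illustrative):

/* in listener_accept(), readers only schedule the task: */
HA_RWLOCK_RDLOCK(LISTENER_LOCK, &global_listener_rwlock);
task_schedule(global_listener_queue_task, tick_add(now_ms, 100)); /* illustrative delay */
HA_RWLOCK_RDUNLOCK(LISTENER_LOCK, &global_listener_rwlock);

/* at the end of manage_global_listener_queue(), the writer resets it: */
HA_RWLOCK_WRLOCK(LISTENER_LOCK, &global_listener_rwlock);
t->expire = TICK_ETERNITY;
HA_RWLOCK_WRUNLOCK(LISTENER_LOCK, &global_listener_rwlock);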
This patch should fix the issue #1930. It must be backported as far as 1.8
with some cautions because the code has evolved a lot since then.
h1_process_mux() is written to allow partial headers formatting. For now,
all headers are forwarded at once. But it is still good to keep this
ability at the H1 mux level. So we must rely on a H1S flag instead of a
local variable to know whether a WebSocket key was found in the headers, to be
able to generate a key if necessary.
There is no reason to backport this patch.
The SE_FL_ERR_PENDING flag is used when there is still data to be read. So we
must take care not to set SE_FL_ERROR too early. Thus, on the sending path,
SE_FL_ERROR must only be set if SE_FL_EOS was already set.
Similarly to the H1 and H2 multiplexers, FCGI_CF_ERR_PENDING is now used to
report an error when we try to send data and FCGI_CF_ERROR to report an
error when we try to read data. In other functions, we rely on these flags
instead of the connection ones. Only FCGI_CF_ERROR is considered as a final
error. FCGI_CF_ERR_PENDING does not block receive attempts.
In addition, the FCGI_CF_EOS flag was added. We rely on it to test if a read0
was received or not.
Some fields in h2c structures are not used: .mfl, .mft and .mff. Just remove
them.
The .msi field is also removed. It is tested but never set, except when a H2
connection is initialized. It also means the h2c_mux_busy() function is useless
because it always returns 0 (.msi is always -1). And thus, by transitivity,
H2_CF_DEM_MBUSY is also useless because it is never set. So the .msi field,
the h2c_mux_busy() function and the H2_CF_DEM_MBUSY flag are removed.
Similarly to the H1 multiplexer, H2_CF_ERR_PENDING is now used to report an
error when we try to send data and H2_CF_ERROR to report an error when we
try to read data. In other functions, we rely on these flags instead of
connection ones. Only H2_CF_ERROR is considered as a final error.
H2_CF_ERR_PENDING does not block receive attempt.
In addition, we rely on H2_CF_RCVD_SHUT flag to test if a read0 was received
or not.
SE_FL_ERROR must be set on the SE descriptor only if EOS was already
reported. So call the se_fl_set_error() function to properly set the
ERR_PENDING/ERROR flags. It is not really a bug because the mux-pt is really
simple. But it is better to do it the right way now.
When the H1 connection is aborted, we no longer set a final error. To do so,
the flag H1C_F_ABORTED was added. For now, it is only set when an error is
detected on the H1 stream. The idea is to use ERR_PENDING/ERROR for upgoing
errors and ABRT_PENDING/ABRTED for downgoing errors.
read0 is now handled with a H1 connection flag (H1C_F_EOS). Corresponding
flag was removed on the H1 stream and we fully rely on the SE descriptor at
the stream level.
Concretely, it means we rely on the H1 connection flags instead of the
connection one. H1C_F_EOS is only set in h1_recv() or h1_rcv_pipe() after a
read if a read0 was detected.
A new error is added on H1 stream to deal with internal errors. For now,
this error is only reported when we fail to create a stream-connector. This
way, the error is reported at the H1 stream level and not the H1 connection
level.
The H1C_F_ERR_PENDING flag will be used to refactor error handling at the H1
connection level. It will be used to notify errors during sends. Thus, the
flag notifying that an error must be sent before closing the connection is now
named H1C_F_ABRT_PENDING.
This introduces a naming convention: ERROR must be used to notify the upper
layer of an event in the lower ones while ABORT must be used in the opposite
direction.
When the request headers are not fully received, we must subscribe the H1
connection for reads to be able to receive more data. This was performed in
h1_process_demux(). It is now performed in h1_process_demux().
The H1 connection state is now handled in a dedicated field. The H1C_F_ST_*
flags are removed. All states are now exclusive. It is easier to know the
H1 connection states. It is alive, or usable, if it is not CLOSING or
CLOSED. It is CLOSING if it should be closed ASAP but a stream is still
attached and/or the output buffer is not empty. CLOSED is used when the H1
connection is ready to be closed. Other states are quite easy to understand.
There is no special changes in the H1 connection behavior. Except in
h1_send(). When a CLOSING connection is CLOSED, the function now reports an
activity. In addition, when an embryonic H1 stream is aborted, it is
destroyed. This way, the H1 connection can be switched to CLOSED state.
The H1 connection state will be handled in a dedicated field. To do so,
h1_cs enum was added. The different states are more or less equivalent to
H1C_F_ST_* flags:
* H1_CS_IDLE <=> H1C_F_ST_IDLE
* H1_CS_EMBRYONIC <=> H1C_F_ST_EMBRYONIC
* H1_CS_UPGRADING <=> H1C_F_ST_ATTACHED && !H1C_F_ST_READY
* H1_CS_RUNNING <=> H1C_F_ST_ATTACHED && H1C_F_ST_READY
* H1_CS_CLOSING <=> H1C_F_ST_SHUTDOWN && (H1C_F_ST_ATTACHED || b_data(&h1c->ibuf))
* H1_CS_CLOSED <=> H1C_F_ST_SHUTDOWN && !H1C_F_ST_ATTACHED && !b_data(&h1c->ibuf)
In addition, in this patch, the h1_is_alive() and h1_close() functions are
added. The first one will be used to know if a H1 connection is alive or
not. The second one will be used to set the connection in CLOSING or CLOSED
state, depending on the output buffer state and if there is still a H1
stream or not.
For now, the H1 connection state is not used.
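A sketch of the resulting enum, following the mapping above (comments
paraphrase the states described in these two commits):

enum h1_cs {
	H1_CS_IDLE,       /* idle connection, no stream attached */
	H1_CS_EMBRYONIC,  /* connection alive, request headers not fully parsed */
	H1_CS_UPGRADING,  /* stream attached, but not ready yet */
	H1_CS_RUNNING,    /* stream attached and ready */
	H1_CS_CLOSING,    /* to be closed ASAP: stream still attached
	                   * and/or output buffer not empty */
	H1_CS_CLOSED,     /* ready to be closed */
};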
Fields in the H1C structure are reorganised so that the output buffer does not
straddle two cache lines. There is a 4-byte hole after the flags, but
it will be partially filled by an enum representing the H1 connection state.
For now, at the transport-layer level, there is no shutr callback (in
xprt_ops). Thus, to ease the aborts refactoring, the code is removed from
the h1_shutr() function. The callback is not removed for now. It is only
kept to have a trace message. It may be handy for debugging sessions.
As noticed by Gabriel Tzagkarakis in issue #1903, the total pool size
in bytes is historically still in 32 bits, but at least we should report
the product of the number of objects and their size in 64 bits so that
the value doesn't wrap around 4G.
This may be backported to all versions.
With GCC 12.2.0 and O2 optimization activated, compiler reports the
following warning for qc_release_lost_pkts().
In function ‘quic_tx_packet_refdec’,
inlined from ‘qc_release_lost_pkts.constprop’ at src/quic_conn.c:2056:3:
include/haproxy/atomic.h:320:41: error: ‘__atomic_sub_fetch_4’ writing 4 bytes into a region of size 0 overflows the destination [-Werror=stringop-overflow=]
320 | #define HA_ATOMIC_SUB_FETCH(val, i) __atomic_sub_fetch(val, i, __ATOMIC_SEQ_CST)
| ^~~~~~~~~~~~~~~~~~
include/haproxy/quic_conn.h:499:14: note: in expansion of macro ‘HA_ATOMIC_SUB_FETCH’
499 | if (!HA_ATOMIC_SUB_FETCH(&pkt->refcnt, 1)) {
| ^~~~~~~~~~~~~~~~~~~
GCC thinks that quic_tx_packet_refdec() can be called with a NULL
argument from qc_release_lost_pkts() with <oldest_lost> as arg.
This warning is a false positive as <oldest_lost> cannot be NULL in
qc_release_lost_pkts() at this stage. This is due to the previous check
to ensure that <pkts> list is not empty.
This warning is silenced by using ALREADY_CHECKED() macro.
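A minimal sketch of the silencing pattern (the surrounding code is
simplified):

/* <pkts> is known to be non-empty here, so <oldest_lost> cannot be NULL:
 * tell the compiler so instead of letting it warn.
 */
ALREADY_CHECKED(oldest_lost);
quic_tx_packet_refdec(oldest_lost);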
This should be backported up to 2.6.
This should fix github issue #1852.
If a "ring" section initialization fails (e.g. due to a duplicate name,
invalid chars, or missing memory), any subsequent "server" statement that
appears in the same section will crash the config parser by dereferencing
the currently NULL cfg_sink. E.g:
ring x
ring x # fails on "already exists"
server srv 1.1.1.1 # crashes on cfg_sink==NULL
All other statements have a test for this but "server" was missing it,
so this patch adds it.
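A sketch of the kind of guard added to the "server" branch of the ring parser
(exact messages, flags and labels may differ):

else if (strcmp(args[0], "server") == 0) {
	if (!cfg_sink || cfg_sink->type != SINK_TYPE_BUFFER) {
		ha_alert("parsing [%s:%d] : unable to create server '%s'.\n",
		         file, linenum, args[1]);
		err_code |= ERR_ALERT | ERR_FATAL;
		goto err;
	}
	/* ... regular server parsing ... */
}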
Thanks to Joel Hutchinson for reporting this issue.
This must be backported as far as 2.2.
The exact same commands as those from the CLI may be pre-loaded at boot
time by passing them one per line after the "trace" keyword in the global
section; i.e. just copy-pasting all commands directly there will do the
job. Note that if a ring is mentioned, it needs to be declared before the
global section. Another option is to append another global section after
"ring".
For now the keyword is marked as experimental to discourage its broad
adoption by default. "expose-experimental-directives" needs to be placed
in the global section to expose it.
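An illustrative example (the traced source and the ring name are only
examples; the statements mirror the CLI "trace" commands):

ring buf1
    size 32768
    format timed

global
    expose-experimental-directives
    trace h2 sink buf1
    trace h2 level developer
    trace h2 start now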
In order to be able to reuse the "trace" statements elsewhere (e.g.
global section), we'll first need to split its parser. It turns out
that the whole thing is self-contained inside a single function that
emits a single message on warning/error or nothing on success. That's
quite easy to split in two parts, the one that does the job and produces
the status message and the one that sends it to the CLI. That's what
this patch does.
HEAD answers should not contain any body data. Currently when a
"transfer-encoding: chunked" header is returned, a last null-chunk is added to
the answer. Some clients choke on it and fail when trying to reuse the
connection. Check that the response should not be body-less before sending the
null-chunk.
This patch should fix #1932. It must be backported as far as 2.4.
The SSL_load_error_strings function was marked as deprecated in OpenSSL
1.1.0 so compiling HAProxy with OPENSSL_NO_DEPRECATED set and a recent
OpenSSL library would fail.
The manpages say that this function was replaced by OPENSSL_init_crypto
and OPENSSL_init_ssl which are already called at start up by the SSL
lib. We do not seem to be in a case where explicit call of those
functions is required.
This patch fixes GitHub issue #1813.
It can be backported to 2.6.
When the request data are copied in a mbuf, if the free space is too small
to copy all data at once, the data length is shortened. When this is
performed, we reserve the size of the STDIN record header and possibly the
same for the empty STDIN record if it is the last HTX block of the request.
However, there is no test to be sure the free space is large enough. Thus,
on this special case, when the mbuf is almost full, it is possible to
overflow the value length. Because of this bug, it is possible to experience
crashes from time to time.
This patch should fix the issue #1923. It must be backported as far as 2.4.
When the last HTX DATA block was copied in zero-copy, the empty STDIN
record, marking the end of the request data was never sent. Thanks to this
patch, it is now sent.
This patch must be backported as far as 2.4.
For a server subject to SRV resolution, when the server's address is set,
its dynamic cookie, if any, and its server key are computed. Both are based
on the ip/port pair. However, this happens before the server's port is
set. Thus the port is equal to 0 at this stage. It is a problem if several
servers share the same IP but with different ports because they will share
the same dynamic cookie and the same server key, disturbing this way the
connection persistency and the session stickiness.
This patch must be backported as far as 2.2.
DNS resolutions may be triggered via a "do-resolve" action or when a connection
failure is experienced during a healthcheck. Cached valid responses are used, if
possible. But if the entry is expired or if there is no valid response, a new
resolution should be performed. However, a resolution is only performed if the
"resolve" timeout is expired. Thus, when this comes from a healthcheck, it means
no extra resolution is performed at all.
Now, when the resolution is performed for a server (SRV or SRVEQ) and no valid
response is found, the resolution timer is reset (last_resolution is set to
TICK_ETERNITY). Of course, it is only performed if no resolution is already
running.
Note that this feature was broken 5 years ago when the resolvers code was
refactored (67957bd59e).
This patch should fix the issue #1906. It affects all stable versions. However,
it is probably a good idea to not backport it too far (2.6, maybe 2.4) and with
some delay.
When an error is triggered during arguments parsing of an http reply (for
instance, from a "return" rule), while a log-format body was expected but
not evaluated yet, HAProxy crashes when the body log-format string is
released because it was not properly initialized.
The list used for the log-format string must be initialized earlier.
This patch should fix the issue #1925. It must be backported as far as 2.2.
Commit 432cd1a ("MEDIUM: ssl: be stricter about chain error") removed
the ERR_GET_LIB(ret) != ERR_LIB_PEM check to be stricter about errors.
However, PEM_R_NO_START_LINE is better checked together with ERR_LIB_PEM.
So this patch completes the previous one.
The original problem was that the condition was wrongly inverted. This
original code from openssl:
if (ERR_GET_LIB(err) == ERR_LIB_PEM &&
ERR_GET_REASON(err) == PEM_R_NO_START_LINE)
became:
if (ret && (ERR_GET_LIB(ret) != ERR_LIB_PEM &&
ERR_GET_REASON(ret) != PEM_R_NO_START_LINE))
instead of:
if (ret && !(ERR_GET_LIB(ret) == ERR_LIB_PEM &&
ERR_GET_REASON(ret) == PEM_R_NO_START_LINE))
This must not be backported as it would break a lot of setups. That's too
bad because a lot of errors are lost. Not marked as a bug because of the
breakage it could cause on working setups.
Once in a while we spot a bug in the deinit code that is complex,
especially when it has to deal with incomplete initializations, and the
ability to bypass this step has regularly been raised. In addition for
fast-reloading setups it could theoretically save some time. Tests have
shown that very large configs can barely save ~100-150ms by skipping the
deinit step. However the ability not to crash if a bug is encountered can
occasionally help.
This patch adds an option to do exactly this. It's obviously not enabled
by default and the documentation discourages from using it, but this might
be useful in the future.
In ae053b30 - BUG/MEDIUM: wdt: don't trigger the watchdog when p is unitialized:
wdt is not triggering until prev_cpu_time
is initialized to prevent unexpected process
termination.
Unfortunately this is not enough, some tasks could start
immediately after process startup, and in such cases
prev_cpu_time could be uninitialized, because
prev_cpu_time is set after the polling loop while
process_runnable_tasks() is executed before the polling loop.
It happens to be the case with lua tasks registered using
register_task function from lua script.
Those tasks are registered in early init stage of haproxy and
they are scheduled to run before the first polling loop,
leading to prev_cpu_time being uninitialized (equals 0)
on the thread when the task is first executed.
Because of this, if such tasks get stuck right away
(e.g: blocking IO) the watchdog won't behave as expected
and the thread will remain stuck indefinitely.
(polling loop for the thread won't run at all as
the thread is already stuck)
To solve this, we're now making sure that prev_cpu_time is first
set before any tasks are processed on the thread.
This is done by setting initial prev_cpu_time value directly
in clock_init_thread_date()
Thanks to Abhijeet Rastogi for reporting this unexpected behavior.
It could be backported to every stable version
(everywhere ae053b30 is, because both are related).
This has never really been implemented in clients nor servers. We wanted
to drop it from 2.5 already but forgot, so let's do it now. The code was
only minimally changed. It could possibly be slightly simplified but it
would only be marginal, at the great risk of breaking something, thus
let's keep it in its proven state instead.
Tracked in github issue #1551.
Pierre Cheynier reported a rare crash that can affect stick-tables. When
an entry is created, the stick-table's expiration date is updated. But if
at exactly the same time the expiration task runs, it finishes by updating
its expiration timer without any protection, which may collide with the
call to task_queue() in another thread. In this case, it sometimes happens
that the first test for TICK_ETERNITY in task_queue() passes, then the
"expire" field is reset, then the BUG_ON() triggers, like below:
FATAL: bug condition "task->expire == 0" matched at src/task.c:279
call trace(13):
| 0x649d86 [c6 04 25 01 00 00 00 00]: __task_queue+0xc6/0xce
| 0x596bef [eb 90 ba 03 00 00 00 be]: stktable_requeue_exp+0x1ef/0x258
| 0x596c87 [48 83 bb 90 00 00 00 00]: stktable_touch_with_exp+0x27/0x312
| 0x563698 [48 8b 4c 24 18 4c 8b 4c]: stream_process_counters+0x3a8/0x6a2
| 0x569344 [49 8b 87 f8 00 00 00 48]: process_stream+0x3964/0x3b4f
| 0x64a80b [49 89 c7 e9 23 ff ff ff]: run_tasks_from_lists+0x3ab/0x566
| 0x64ad66 [29 44 24 14 8b 7c 24 14]: process_runnable_tasks+0x396/0x71e
| 0x6184b2 [83 3d 47 b3 a6 00 01 0f]: run_poll_loop+0x92/0x4ff
| 0x618acf [48 8b 1d aa 20 7d 00 48]: main+0x1877ef
| 0x7fc7d6ec1e45 [64 48 89 04 25 30 06 00]: libpthread:+0x7e45
| 0x7fc7d6c9e4af [48 89 c7 b8 3c 00 00 00]: libc:clone+0x3f/0x5a
This one is extremely difficult to reproduce in practice, but adding a
printf() in process_table_expire() before assigning the value, while
running with an expire delay of 1ms helps a lot and may trigger the
crash in less than one minute on an 8-thread machine. Interestingly,
depending on the sequencing, this bug could also have made a table fail
to expire if the expire field got reset after the last update but before
the call to task_queue(). It would require to be quite unlucky so that
the table is never touched anymore after the race though.
The solution taken by this patch is to take the table's lock when
updating its expire value in stktable_requeue_exp(), enclosing the call
to task_queue(), and to update the task->expire field while still under
the lock in process_table_expire(). Note that thanks to previous changes,
taking the table's lock for the update in stktable_requeue_exp() costs
almost nothing since we now have the guarantee that this is not done more
than 1000 times a second.
Since process_table_expire() sets the timeout after returning from
stktable_trash_expired() which just released the lock, the two functions
were merged so that the task's expire field is updated while still under
the lock. Note that this heavily depends on the two previous patches
below:
CLEANUP: stick-table: remove the unused table->exp_next
OPTIM: stick-table: avoid atomic ops in stktable_requeue_exp() when possible
This is a bit complicated due to the fact that in 2.7 some parts were
made lockless. In 2.6 and older, the second part (the merge of the
two functions) will be sufficient since the task_queue() call was
already performed under the table's lock, and the patches above are
not needed.
This needs to be backported as far as 1.8 scrupulously following
instructions above.
Since the task's time resolution is the millisecond we know there
will not be more than 1000 useful updates per second, so there's no
point in doing a CAS and a task_queue() for each call, better first
check if we're going to change the date. Now we're certain not to
perform such operations more than 1000 times a second for a given
table.
The loop was modified because this improvement will also be used to fix a
bug later.
The ->exp_next field of the stick-table was probably useful in 1.5 but
it currently only carries a copy of what the future value of the table's
task's expire value will be, while it's systematically copied over there
immediately after being assigned. As such it is nothing more than a local
variable. Let's remove it, as it costs atomic operations.
Building with -DFIND_OPTIMAL_MATCH would fail on undeclared "len".
This one likely vanished in some cleanup.
This is libslz upstream commit 1ea20360715e1ad0cd81db83fa4361310716b8cc
Commit 960fb74ca ("MEDIUM: ssl: {ca,crt}-ignore-err can now use error
constant name") provided a very convenient way to initialize only desired
macros. Unfortunately with gcc versions older than 8, it breaks with:
src/ssl_utils.c:473:12: error: initializer element is not constant
because it seems that the compiler cannot resolve strings to constants
at build time.
This patch takes a different approach, it stores the value of the macro
as a string and this string is converted to integer at boot time. This
way it works everywhere.
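A sketch of the idea (names are illustrative; the key point is the two-level
stringification which expands the macro to its numeric value as a string):

#define TOSTR1(x) #x
#define TOSTR(x)  TOSTR1(x)

static struct {
	const char *name;       /* constant name usable in the configuration */
	const char *value_str;  /* e.g. "79" once X509_V_ERR_INVALID_CA expands */
	int value;              /* resolved at boot time with atoi(value_str) */
} x509_v_errors[] = {
	{ "X509_V_ERR_INVALID_CA", TOSTR(X509_V_ERR_INVALID_CA), -1 },
	/* ... */
};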
Since commit 9b2598 ("BUG/MEDIUM: ssl: Verify error codes can exceed
63"), the ca_ignerr_bitfield and crt_ignerr_bietfield are incorrecly
accessed from __objt_listener(conn->target)->bind_conf which is not
avaiable from QUIC. The bind_conf variable was mistakenly replaced.
This patch fixes the issue by using again the bind_conf variable.
Must be backported where 9b2598 was backported.
cli_parse_add_server() is the CLI handler for the 'add server' command. This
function uses usermsgs_ctx to retrieve log messages from internal
ha_alert() calls and display them at the end of the handler.
At the beginning of the handler, the stderr prefix is set to "CLI" via the
usermsgs_clr() function. However, this is not reset at the end. This
causes inconsistency for the stderr output :
1. each ha_alert() invocation will reuse the "CLI" prefix if the 'add server'
command was executed before, even in a non-CLI context
2. usermsgs_ctx is thread local, so this is only true if this runs on
the same thread as the 'add server' handler.
To fix this, ensure that the "CLI" prefix is now reset after
cli_parse_add_server(). This is done thanks to the addition of the
cli_umsg()/cli_umsgerr() functions.
This can be backported up to 2.5 if we prefer to ensure output
consistency at the risk of changing stderr behaviors in stable versions.
In this case, the previous commit should be backported before :
MINOR: cli: define usermsgs print context
CLI 'add server' handler relies on usermsgs_ctx to display errors in
internal function on CLI output. This may be also extended to other
handlers.
However, to not clutter stderr from other contexts, usermsgs_ctx must
be reset when it is not needed anymore. This operation cannot be
conducted in the CLI parse handler as display is conducted after it.
To achieve this, define new CLI states CLI_ST_PRINT_UMSG /
CLI_ST_PRINT_UMSGERR. Their principle is nearly identical to the states for
dynamic message printing.
Rename CLI_ST_PRINT_FREE to CLI_ST_PRINT_DYNERR.
Most notably, this highlights that this is reserved to error printing.
This is done to ensure consistency between CLI_ST_PRINT/CLI_ST_PRINT_DYN
and CLI_ST_PRINT_ERR/CLI_ST_PRINT_DYNERR. The name is also consistent
with the function cli_dynerr() which activates it.
The ca-ignore-err and crt-ignore-err directives are now able to use the
openssl X509_V_ERR constant names instead of the numerical values.
This allows a configuration to survive an OpenSSL upgrade, because the
numerical ID can change between versions. For example
X509_V_ERR_INVALID_CA was 24 in OpenSSL 1 and is 79 in OpenSSL 3.
The list of errors must be updated when a new major OpenSSL version is
released.
The CRT and CA verify error codes were stored in 6 bits each in the
xprt_st field of the ssl_sock_ctx, meaning that only error codes up to 63
could be stored. Likewise, the ca-ignore-err and crt-ignore-err options
relied on two unsigned long longs that were used as bitfields for all
the ignored error codes. On the latest OpenSSL1.1.1 and with OpenSSLv3
and newer, verify errors have exceeded this value so these two storages
must be increased. The error codes will now be stored on 7 bits each and
the ignore-err bitfields are replaced by a big enough array and
dedicated bit get and set functions.
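A minimal sketch of such a bitfield spread over an array (sizes and names are
illustrative):

/* one bit per possible X509_V_ERR_* code */
static inline void ignerr_set(unsigned long long *bf, unsigned int err)
{
	bf[err / 64] |= 1ULL << (err % 64);
}

static inline int ignerr_get(const unsigned long long *bf, unsigned int err)
{
	return !!(bf[err / 64] & (1ULL << (err % 64)));
}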
It can be backported on all stable branches.
[wla: let it be tested a little while before backport]
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
In case of error, the ocsp item might already be in the ocsp certificate
tree but is simply freed instead of being destroyed through ssl_sock_free_ocsp.
This patch can be backported to all stable versions.
When calling ssl_get0_issuer_chain, if akid is not NULL but its keyid
is, then the AUTHORITY_KEYID is not freed.
This patch can be backported to all stable branches.
When running HAProxy with OpenSSLv3, the two BIGNUMs used to build our
own DH parameters are not freed. It was not necessary previously because
ownership of those parameters was transferred to OpenSSL through the
DH_set0_pqg call.
This patch should be backported to 2.6.
The memory for the SSL ca_file was allocated only once (in the function
httpclient_create_proxy()) and that pointer was assigned to each created
proxy that the HTTP client uses. This would not be a problem if this
memory was not freed in each individual proxy when it was deinitialized
in the function ssl_sock_free_srv_ctx().
Memory allocation:
src/http_client.c, function httpclient_create_proxy():
1277: if (!httpclient_ssl_ca_file)
1278: httpclient_ssl_ca_file = strdup("@system-ca");
1280: srv_ssl->ssl_ctx.ca_file = httpclient_ssl_ca_file;
Memory deallocation:
src/ssl_sock.c, function ssl_sock_free_srv_ctx():
5613: ha_free(&srv->ssl_ctx.ca_file);
This should be backported to version 2.6.
Each datagram is received by a random thread and dispatched to its
destination thread linked to the connection. Then, the datagram is
handled by the connection thread. Once this is done, datagram buffer
pointer is atomically set to NULL to mark it as consumed.
Consumed datagrams are purged before recvfrom() invocation on random
receiver threads. The check for NULL buffer must thus be done
atomically. This was not the case before this patch, which may have
triggered race conditions.
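A minimal sketch of the consumed-datagram check done atomically (field name as
described above):

/* the connection thread sets dgram->buf to NULL once it has consumed it */
if (!HA_ATOMIC_LOAD(&dgram->buf)) {
	/* safe to purge this datagram from the receiver buffer */
}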
This bug has been introduced by commit
91b2305ad7
MINOR: quic: implement datagram cleanup for quic_receiver_buf
This should be backported up to 2.6 after the previously mentioned commit.
Add a new counter "quic_rxbuf_full". It is incremented each time
quic_sock_fd_iocb() is interrupted on a full buffer.
This should help to debug github issue #1903. It is suspected that
QUIC receiver buffers are full, which in turn causes quic_sock_fd_iocb()
to be called repeatedly, resulting in a high CPU consumption.
syslog_io_handler does specific treatment to handle syslog tcp octet
counting:
Logic was good, but a sneaky mistake prevented
rfc-6587 octet counting from working properly.
trash.area was used as an input buffer.
It does not make sense here since it is uninitialized.
Compilation was unaffected because trash is a thread
local "global" variable.
buf->area should definitely be used instead.
This should be backported as far as 2.4.
Subscribing was not properly designed between the quic-conn and quic MUX
layers. Align this with other haproxy components : the <subs> field is
moved from the MUX to the quic-conn structure. All mention of the qcc MUX is
cleaned up in quic_conn_subscribe()/quic_conn_unsubscribe().
Thanks to this change, ACK reception notification has been simplified.
It's now unnecessary to check for the MUX existence before waking it.
Instead, if the <subs> quic-conn field is set, just wake up the upper layer
tasklet without mentioning the MUX. This should probably be extended to
other parts of the quic-conn code.
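A minimal sketch of the simplified notification, assuming the usual
wait_event/tasklet pattern used elsewhere in haproxy:

/* wake the subscriber directly from the quic-conn, no MUX involved */
if (qc->subs && qc->subs->events & SUB_RETRY_SEND) {
	struct wait_event *subs = qc->subs;

	subs->events &= ~SUB_RETRY_SEND;
	if (!subs->events)
		qc->subs = NULL;
	tasklet_wakeup(subs->tasklet);
}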
This should be backported up to 2.6.
A specialized listener accept was previously used for QUIC. This is now
unneeded and we can revert to the default one session_accept_fd().
One change of importance is that the call order between
conn_xprt_start() and conn_complete_session() is now reverted to the
default one. This means that MUX instance is now NULL during
qc_xprt_start() and its app-ops layer cannot be set here. This operation
has been delayed to qc_init() to prevent a segfault.
This should be backported up to 2.6.
The commit bc7c207f74 ("BUG/MAJOR: stick-tables: do not try to index a
server name for applets") tried to catch the applet case when we tried to index
the server name. However, there is still an issue. The applets are
unconditionally cast to servers and this bug has existed for a while. It's
just luck if it doesn't crash.
Now, when store rules are processed, we skip the rule if the stream's target
is not a server or, of course, if it is a server but the "non-stick" option
is set. However, we still take care to release the sticky session.
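A sketch of the intended guard in the store-rules processing loop (exact field
names may differ slightly):

/* skip the entry if the target is not a real server, or if it is a server
 * with the "non-stick" option, but release the sticky session in all cases.
 */
if (!objt_server(s->target) ||
    (__objt_server(s->target)->flags & SRV_F_NON_STICK)) {
	stksess_free(s->store[i].table, s->store[i].ts);
	s->store[i].ts = NULL;
	continue;
}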
This patch must be backported to all stable versions.
The error check on the certificate chain was ignoring all decoding errors,
silently ignoring some of them.
This patch fixes the issue by being stricter on errors when reading the
chain. This is a change of behavior, it could break existing setups that
have a wrong chain.
Add the SSL error string when failing to load a certificate in
ssl_sock_load_pem_into_ckch(). It's difficult to know what happens when no
descriptive error is emitted. This one is for the certificate before
trying to load the complete chain.
Add the SSL error string when failing to load a certificate in
ssl_sock_load_pem_into_ckch(). It's difficult to know what happens when no
descriptive error is emitted.
Example:
[ALERT] (1264006) : config : parsing [ssl_default_server.cfg:51] : 'bind /tmp/ssl.sock' in section 'listen' : unable to load certificate chain from file 'reg-tests/ssl//common.pem': ASN no PEM Header Error
Ring buffers may be implicitly created from log declarations when "tcp@",
"tcp6@", "tcp4@" or "uxst@" prefixes are used. These ring buffers rely on
unconfigurable proxies. While connect and server timeouts should be defined for
explicit ring buffers, it is not possible for implicit ones. However, a default
value must be set and TICK_ETERNITY is not an acceptable one.
Thus, now "1s" is set for the connect timeout and "5s" is set for the server one.
This patch may be backported as far as 2.4.
When a ring section is parsed, a proxy is created. For now, it has the
frontend (PR_CAP_FE) and the internal (PR_CAP_INT) capabilities, in addition
to the expected backend capability (PR_CAP_BE).
The PR_CAP_INT capability was added to silence warnings triggered because of
the PR_CAP_FE capability. Indeed, because the proxy is declared as a frontend,
warnings about missing bind lines and missing client timeout should be
triggered during the configuration parsing. These warnings are inhibited
because the PR_CAP_INT capability is set. It is an issue on the 2.4 because
the PR_CAP_INT capability does not exist. So warnings are always emitted.
But the true bug is that these proxies should not have the PR_CAP_FE and
PR_CAP_INT capabilities. Removing these capabilities is enough to remove any
warnings on the 2.4, with no regression on higher versions. However, it may
be a good idea to evaluate whether a dedicated frontend for sinks should be added
or not. This way, a true frontend would be used to start the sink applets. In
addition, proxy capabilities/modes have to be reviewed to have a less
ambiguous API. For instance a dedicated mode for sinks (PR_MODE_SINK ?) may
be added. Finally, it could be very nice to have all proxies in the same
list, including internal ones.
This patch should fix the issue #1900. It must be backported as far as 2.4.
We considered the resync process finished when a full resync request
ended with the reception of the "resync-finish" message. But in the case of
"shards", each node declared with a "shard" has only a partial view
of the table. And the resync process is ended whereas the original
peer tables content contains only a "shard" of the full content.
This patch allows retrieving the entire tables by requesting a resync
from all the different "shards".
To do so, we don't commit the end of a resync process upon receiving a
"resync-finish" if the node is part of a "shard"; we only flag this
peer and all peers using the same shard as "notup2date", as if we had
received a "resync-partial" message, and we re-schedule a resync
request as it is done when receiving a "resync-partial" message.
Doing this, the peers flagged "notup2date" won't be addressed for the
next resync request round and the next resync request will be sent to
a shard not yet requested.
Upon receiving a "resync-finish" message we also check if all peers using
"shards" are flagged "notup2date". It means that all peers have been
addressed and we can consider the resync process finished.
Note also that the "resync request" scheduler already handles a timeout:
if we are not able to retrieve a full resync after a delay, the
resync process is ended.
This patch should be backported to all versions handling "shard"
on peer lines.
Add "shards" new keyword for "peers" section to configure the number
of peer shards attached to such secions. This impact all the stick-tables
attached to the section.
Add "shard" new "server" parameter to configure the peers which participate to
all the stick-tables contents distribution. Each peer receive the stick-tables updates
only for keys with this shard value as distribution hash. The "shard" value
is stored in ->shard new server struct member.
cfg_parse_peers() which is the function which is called to parse all
the lines of a "peers" section is modified to parse the "shards" parameter
stored in ->nb_shards new peers struct member.
Add srv_parse_shard() new callback into server.c to pare the "shard"
parameter.
Implement stksess_getkey_hash() to compute the distribution hash for a
stick-table key as the 64-bits xxhash of the key concatenated to the stick-table
name. This function is called by stksess_setkey_shard(), itself
called by the already implemented function which create a new stick-table
key (stksess_new()).
Add ->idlen new stktable struct member to store the stick-table name length
to not have to compute it each time a stick-table key hash is computed.
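An illustrative configuration sketch of the two new keywords (addresses, ports
and shard numbers are only examples):

peers mypeers
    shards 3
    bind 192.0.2.1:10000
    server hap1                    shard 1   # local peer
    server hap2 192.0.2.2:10000    shard 2
    server hap3 192.0.2.3:10000    shard 3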
This patch completes the previous incomplete commit. The new counter
sendto_err_unknown is now displayed on the stats page/CLI show stats.
This is related to github issue #1903.
This should be backported up to 2.6.
Remove the ABORT_NOW() statement on unhandled sendto errors. Instead use a
dedicated counter sendto_err_unknown to report these cases.
If we detect an increment of this counter, strace can be used to detect the
errno value :
$ strace -p $(pidof haproxy) -f -e trace=sendto -Z
This should be backported up to 2.6.
This should help to debug github issue #1903.
When an HTTP response is compressed by HAProxy, the headers are updated.
However it is possible to encounter a rewrite error because the buffer is
full. In this case, the compression is aborted. Thus, we must be sure to
leave the response in a valid state.
For now, it is an issue because the "Content-Encoding" header is added
before all other header manipulations. So if the compression is aborted on
error, the "Content-Encoding" header may remain while the payload is not
compressed.
So now, we take care to leave a valid response on error by reordering
the header manipulations. It is too painful to really roll back all changes,
especially for an edge case.
This patch should be backported as far as 2.0. Note that on the 2.0, the
legacy HTTP part is also concerned.
Max stream data was not enforced nor respected for local/remote uni
streams. Previously, qcs instances incorrectly reused the limit defined
for bidirectional ones.
This is now fixed. Two fields are added in the qcc connection structure :
* value for local flow control to enforce on remote uni streams
* value for remote flow control to respect on local uni streams
These two values can be reused to properly initialize the msd field of a
qcs instance in qcs_new(). The rest of the code is similar.
This must be backported up to 2.6.
When haproxy is compiled without USE_SHM_OPEN, do not try to dump the
startup-logs in the "reload" output, because it won't show anything
interesting.
Upon applet_release(), the applet can be scheduled again and a call to
the IO handler is still possible. When the struct httpclient is already
freed, the IO handler could try to access it.
This patch fixes the issue by setting svcctx to NULL in the
applet_release, and checking its value in the IO handler.
Must be backported as far as 2.5.
When the lua task finishes before the httpclients that are associated to
it, there is a risk that an httpclient tries to task_wakeup() the lua
task which does not exist anymore.
To fix this issue the httpclients used in a lua task are stored in a
list, and they are destroyed at the end of the lua task.
Must be backported in 2.5 and 2.6.
The connect timeout in a ring section was not properly parsed. Thus, it was
never set and the server timeout may be overwritten, depending on the
directives order. The first char of the keyword must be tested, not the
third one.
This patch is related to the issue #1900. But it does not fix the issue. It
must be backported as far as 2.4.
When a ring is used as a log target, the original facility, if any, must be
preserved. The default facility must only be used if no facility was
found in the incoming log message.
This patch should fix the issue #1901. It must be backported as far as 2.4.
On Initial packet reception, the token is checked for validity through the
quic_retry_token_check() function. However, some related parts were left
in the parent function quic_rx_pkt_retrieve_conn(). Move this code
directly into quic_retry_token_check() to facilitate its call in various
contexts.
The API of quic_retry_token_check() has also been refactored. Instead of
working on a plain char* buffer, it now uses a quic_rx_packet instance.
This helps to reduce the number of parameters.
This change will allow to check Retry token even if data were received
with a FD-owned quic-conn socket. Indeed, in this case,
quic_rx_pkt_retrieve_conn() call will probably be skipped.
This should be backported up to 2.6.
Sometimes, a packet is dropped on reception. Several goto statements are
used, mostly to increment a proxy drop counter or drop the packet
silently. However, these labels are interleaved. Re-arrange the goto labels to
simplify this process, as sketched below :
* the drop label is used to drop a packet with counter incrementation. This
is the default method.
* drop_silent is the next label which does the same thing but skips the
counter incrementation. This is useful when we do not need to report
the packet dropping operation.
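A rough sketch of the re-arranged labels (counter and trace names are
illustrative):

 drop:
	HA_ATOMIC_INC(&prx_counters->dropped_pkt);
	/* fall through */
 drop_silent:
	TRACE_LEAVE(QUIC_EV_CONN_LPKT, qc);
	return;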
This should be backported up to 2.6.
This change is the continuation of the qc_lstnr_pkt_rcv() refactoring. This
function has finally been split into several ones.
The first half is renamed quic_rx_pkt_parse(). This function is
responsible for parsing a QUIC packet header and calculating the packet
length.
QUIC connection retrieval has been extracted and is now called directly
by quic_lstnr_dghdlr().
The second half of qc_lstnr_pkt_rcv() is renamed to qc_rx_pkt_handle().
This function is responsible for copying a QUIC packet content to a
quic-conn receive buffer.
A third function named qc_rx_check_closing() is responsible for detecting if
the connection is already in closing state. As this requires dropping the
whole datagram, it seems justified to be in a separate function.
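A rough pseudo-code sketch of the resulting Rx path in quic_lstnr_dghdlr()
(arguments are omitted with "...", control flow illustrative):

if (quic_rx_pkt_parse(pkt, ...) != 0)
	goto drop;

qc = quic_rx_pkt_retrieve_conn(pkt, dgram, ...);
if (!qc)
	goto drop_silent;

if (qc_rx_check_closing(qc, pkt))
	goto drop_silent;    /* the whole datagram is dropped */

qc_rx_pkt_handle(qc, pkt, dgram, ...);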
This change has no functional impact. It is part of a refactoring series
on qc_lstnr_pkt_rcv(). The objective is to facilitate the integration of
FD-owned quic-conn socket patches.
This should be backported up to 2.6.
Simplify qc_lstnr_pkt_rcv() by extracting code responsible to retrieve
the quic-conn instance. This code is put in a dedicated function named
quic_rx_pkt_retrieve_conn(). This new function could be skipped if a
FD-owned quic-conn socket is used.
The first traces of qc_lstnr_pkt_rcv() have been cleaned up as the qc instance
is always NULL here : thus the qc parameter can be removed without any
change.
This change has no functional impact. It is a part of a refactoring
series on qc_lstnr_pkt_rcv(). The objective is to facilitate the integration of
FD-owned socket patches.
This should be backported up to 2.6.
Received packet treatment differs depending on whether this is the
first packet of the encapsulating datagram or not. Previously, this was set
via a function argument. Simplify this by defining a new Rx packet flag
named QUIC_FL_RX_PACKET_DGRAM_FIRST.
This change does not have any functional impact. It will simplify the API when
qc_lstnr_pkt_rcv() is broken into several functions : their number of
arguments will be reduced thanks to this patch.
This should be backported up to 2.6.
The pn_offset field was only set if header protection could not be removed.
Extend the usage of this field : it is now set every time on packet
parsing in qc_lstnr_pkt_rcv().
This change helps to clean up the API of the Rx functions by removing
unnecessary variables and function arguments.
This change has no functional impact. It is a part of a refactoring
series on qc_lstnr_pkt_rcv(). The objective is to facilitate the integration of
FD-owned socket patches.
This should be backported up to 2.6.
Add a new field version to the quic_rx_packet structure. This is set on
header parsing in the qc_lstnr_pkt_rcv() function.
This change has no functional impact. It is a part of a refactoring
series on qc_lstnr_pkt_rcv(). The objective is to facilitate the integration of
FD-owned socket patches.
This should be backported up to 2.6.
When generating a Retry token, client CID is used as encryption input.
The client must reuse the same CID when emitting the token in a new
Initial packet.
A memory overflow can occur in quic_generate_retry_token() depending on
the size of the client CID. This is because the space reserved for <aad> only
accounted for QUIC_HAP_CID_LEN (size of haproxy's own generated CIDs).
However, the client CID size only depends on the client and is
instead limited to QUIC_CID_MAXLEN as specified in RFC9000.
This was reproduced with ngtcp2 and haproxy built with ASAN. Here is the error
log :
==14964==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fffee228cee at pc 0x7ffff785f427 bp 0x7fffee2289e0 sp 0x7fffee228188
WRITE of size 17 at 0x7fffee228cee thread T5
#0 0x7ffff785f426 in __interceptor_memcpy /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:827
#1 0x555555906ea7 in quic_generate_retry_token_aad src/quic_conn.c:5452
#2 0x555555907e72 in quic_retry_token_check src/quic_conn.c:5577
#3 0x55555590d01e in qc_lstnr_pkt_rcv src/quic_conn.c:6103
#4 0x5555559190fa in quic_lstnr_dghdlr src/quic_conn.c:7179
#5 0x555555eb0abf in run_tasks_from_lists src/task.c:590
#6 0x555555eb285f in process_runnable_tasks src/task.c:855
#7 0x555555d9118f in run_poll_loop src/haproxy.c:2853
#8 0x555555d91f88 in run_thread_poll_loop src/haproxy.c:3042
#9 0x7ffff709f8fc (/usr/lib/libc.so.6+0x868fc)
#10 0x7ffff7121a5f (/usr/lib/libc.so.6+0x108a5f)
This must be backported up to 2.6.
Fix several warnings such as this one:
src/qmux_trace.c:80:45: error: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 4 has type ‘uint64_t’ {aka ‘const long long unsigned int’} [-Werror=format=]
80 | chunk_appendf(&trace_buf, " qcs=%p .id=%lu .st=%s",
| ~~^
| |
| long unsigned int
| %llu
81 | qcs, qcs->id,
| ~~~~~~~
| |
| uint64_t {aka const long long unsigned int}
compilation terminated due to -Wfatal-errors.
Cast the remaining uint64_t variables as ullong with %llu as printf format and
the size_t ones as ulong with %lu as printf format.
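For instance, in the trace above the fix amounts to something like (simplified;
ullong is haproxy's typedef for unsigned long long):

chunk_appendf(&trace_buf, " qcs=%p .id=%llu",
              qcs, (ullong)qcs->id);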
Thank you to Ilya for having reported this issue in GH #1899.
Must be backported to 2.6
A previous commit tried to fix an uninitialized GCC warning in the ssl code for
the QUIC build. See the fix here :
48e46f98cc
BUILD: ssl_sock: bind_conf uninitialized in ssl_sock_bind_verifycbk()
However, this is incomplete as it still reports a possible NULL
dereference on the ctx variable (GCC v12.2.0). Here is the compilation
result :
src/ssl_sock.c: In function ‘ssl_sock_bind_verifycbk’:
src/ssl_sock.c:1739:12: error: potential null pointer dereference [-Werror=null-dereference]
1739 | ctx->xprt_st |= SSL_SOCK_ST_FL_VERIFY_DONE;
|
To fix this, remove the check on qc which can never happen either and replace
it with a BUG_ON(). This seems to satisfy GCC on my machine.
This must be backported up to 2.6.
If the uri is unexpected ("/" in place of "http://xxx/"), some parsing
functions fail. The failure is not handled.
This patch handles these errors. Note: the return code is a boolean, maybe
we can return a more precise error for Lua reporting ?
Must be backported in 2.6.
The HTTP client req_payload callback is set when a request payload
must be streamed. In lua, this callback is set when a body is passed as
argument to one of the httpclient functions (head/get/post/put/delete). However,
there is no reason to set it if the body string is empty.
This patch is related to the issue #1898. It may be backported as far as
2.5.
In the HTTP client, when the request body is streamed, at the end of the
payload, we must be sure to not set the EOM flag on an empty message.
Otherwise, because there is no data, the buffer is reset to be released and
the flag is lost. Thus, the HTTP client is never notified of the end of
payload for the request and the applet is blocked. If the HTTP client is
instantiated from a Lua script, it is even worse because we fall into a
wakeup loop between the lua script and the HTTP client applet. At the end,
HAProxy is killed because of the watchdog.
This patch should fix the issue #1898. It must be backported to 2.6.
Even if this cannot happen, ensure <bind_conf> is initialized in this
function to please some compilers.
Take the opportunity of this patch to replace an ABORT_NOW() with
a BUG_ON() because if the variable values they test are not initialized,
this is really because there is a bug.
Must be backported to 2.6.
Right now the QUIC thread mapping derives the thread ID from the CID
by dividing by global.nbthread. This is a problem because this makes
QUIC work on all threads and ignores the "thread" directive on the
bind lines. In addition, only 8 bits are used, which is no more
compatible with the up to 4096 threads we may have in a configuration.
Let's modify it this way:
- the CID now dedicates 12 bits to the thread ID
- on output we continue to place the TID directly there.
- on input, the value is extracted. If it corresponds to a valid
thread number of the bind_conf, it's used as-is.
- otherwise it's used as a rank within the current bind_conf's
thread mask so that in the end we still get a valid thread ID
for this bind_conf.
The extraction function now requires a bind_conf in order to get the
group and thread mask. It was better to use bind_confs now as the goal
is to make them support multiple listeners sooner or later.
Change the output of the "reload" command, it now displays "Success=0"
if the reload failed and "Success=1" if it succeed.
If the startup-logs is available (USE_SHM_OPEN=1), the command will
print a "--\n" line, followed by the content of the startup-logs.
Example:
$ echo "reload" | socat /tmp/master.sock -
Success=1
--
[NOTICE] (482713) : haproxy version is 2.7-dev7-4827fb-69
[NOTICE] (482713) : path to executable is ./haproxy
[WARNING] (482713) : config : 'http-request' rules ignored for proxy 'frt1' as they require HTTP mode.
[NOTICE] (482713) : New worker (482720) forked
[NOTICE] (482713) : Loading success.
$ echo "reload" | socat /tmp/master.sock -
Success=0
--
[NOTICE] (482886) : haproxy version is 2.7-dev7-4827fb-69
[NOTICE] (482886) : path to executable is ./haproxy
[ALERT] (482886) : config : parsing [test3.cfg:1]: unknown keyword 'Aglobal' out of section.
[ALERT] (482886) : config : Fatal errors found in configuration.
[WARNING] (482886) : Loading failure!
$
When compiled with USE_SHM_OPEN=1 the startup-logs are now able to use
a shm which is used to keep the logs when switching to mworker wait
mode. This allows keeping the logs of a failed reload.
When allocating the startup-logs at the first start of the process, haproxy
does a shm_open() with a unique path based on the PID of the process; the
file is unlinked immediately so we don't leave stray files behind. The fd
resulting from this shm is stored in the HAPROXY_STARTUPLOGS_FD
environment variable so it can be mmapped again when switching to wait
mode.
When forking children, the process copies the mmapped area to a malloc'ed
ring so we never share the same memory section between the master and
the workers. When switching to wait mode, the shm is not used anymore as
it is also copied to a malloc'ed structure.
This allows using the "show startup-logs" command over the master CLI
to get the logs of the latest startup or reload. This way the logs of
the latest failed reload are also kept.
This is only activated on the linux-glibc target for now.
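As a minimal sketch of the shm trick described above (simplified and with a
hypothetical helper name; only the environment variable is from the patch):
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* create a shm area, unlink it right away and export its fd through the
   * environment so that the process can mmap it again after re-exec.
   */
  static void *startup_logs_shm(size_t size)
  {
      char path[64], fdstr[16];
      void *area;
      int fd;

      snprintf(path, sizeof(path), "/haproxy_startup_logs_%d", (int)getpid());
      fd = shm_open(path, O_RDWR | O_CREAT | O_EXCL, 0600);
      if (fd < 0)
          return NULL;
      shm_unlink(path);                     /* no file left behind */

      if (ftruncate(fd, size) < 0) {
          close(fd);
          return NULL;
      }

      area = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (area == MAP_FAILED) {
          close(fd);
          return NULL;
      }

      /* the fd stays open on purpose: it is recovered after the re-exec */
      snprintf(fdstr, sizeof(fdstr), "%d", fd);
      setenv("HAPROXY_STARTUPLOGS_FD", fdstr, 1);
      return area;
  }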
Cast a unified ring + storage area to a ring from the area, without
reinitializing the data buffer. Reinitialize the waiters and the lock.
This helps retrieve a previously allocated ring, from an mmap for
example.
Each time data is read on the QUIC receiver socket, we try to reuse the
first datagram of the currently used quic_receiver_buf instead of
allocating a new one. This algorithm is suboptimal if there are several
unused datagrams, as only the first one is tested and its buffer removed
from the quic_receiver_buf.
If QUIC traffic is quite substantial, this can lead to a large
number of quic_dgram occurrences allocated from pool_head_quic_dgram and
a lack of free space in allocated quic_receiver_buf buffers.
To improve this, each time we want to reuse a datagram, we pop elements
until a non-yet released datagram is found or the list is empty. All
intermediary elements are freed and the last found datagram can be
reused. This operation has been extracted in a dedicated function named
quic_rxbuf_purge_dgrams().
This should improve the memory consumption incurred by quic_dgram instances
under heavy QUIC traffic. Note that there is still room for improvement: if the
first datagram is still in use, it may block several unused datagrams
after it. However this would require supporting removal of datagrams out of
order, which is currently not possible.
This should be backported up to 2.6.
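A simplified sketch of the purge logic (plain singly-linked list and a
made-up name standing in for HAProxy's list and pool primitives):
  #include <stdlib.h>

  struct dgram {
      struct dgram *next;
      int in_use;        /* still being handled, buffer not yet released */
  };

  /* free the leading datagrams whose buffers were already released and
   * return the first one still in use (or NULL if the list is now empty).
   */
  static struct dgram *purge_leading_dgrams(struct dgram **head)
  {
      struct dgram *dg;

      while ((dg = *head) != NULL && !dg->in_use) {
          *head = dg->next;
          free(dg);        /* returned to its pool in the real code */
      }
      return *head;
  }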
QUIC datagrams are read from a random thread. They are then redispatched
to the connection thread according to the first packet DCID. These
operations are implemented through a special buffer designed to avoid
locking.
Refactor this code with the following changes :
* <rxbuf> type is renamed <quic_receiver_buf>. Its list element is also
renamed to highlight its attach point to a receiver.
* <quic_dgram> and <quic_receiver_buf> definitions are moved to
quic_sock-t.h. This helps to reduce the size of quic_conn-t.h.
* <quic_dgram> list elements are renamed to highlight their attach point
into a <quic_receiver_buf> and a <quic_dghdlr>.
This should be backported up to 2.6.
rxbuf is the structure used to store QUIC datagrams and redispatch them
to the connection thread.
Each receiver manages a list of rxbuf. This was stored both as an array
and an mt_list. Currently, only the mt_list is needed, so remove the <rxbufs>
member from the receiver structure.
This should be backported up to 2.6.
Implement quic_tls_secrets_keys_alloc()/quic_tls_secrets_keys_free() to allocate
the memory for only one direction (RX or TX).
Modify ha_quic_set_encryption_secrets() to call these functions for one of these
directions (or both). So, from now on we can rely on the value of the secret keys
to know if they were derived.
Remove the QUIC_FL_TLS_SECRETS_SET flag which is no longer useful.
Consequently, the secrets are dumped by the traces only if derived.
Must be backported to 2.6.
This issue was reproduced with the -Q picoquic client option to split a big ClientHello
message into two Initial packets, and haproxy as server without any knowledge of
any previous 0RTT session (restarted after a first 0RTT session). The received 0RTT
packets were removed from their queue when the second Initial packet was parsed,
and the QUIC handshake state never progressed and remained at Initial state.
To avoid such situations, after having treated some Initial packets we always
check if there are 0RTT packets to parse and we never remove them from their
queue. This will be done after the handshake is completed or upon idle timeout
expiration.
Also add more traces to be able to analyze the handshake progression.
Tested with ngtcp2 and picoquic.
Must be backported to 2.6.
Implement quic_get_ncbuf() to dynamically allocate a new ncbuf to be attached to
any quic_cstream struct which needs such a buffer. Note that there is no quic_cstream
for 0RTT encryption level. quic_free_ncbuf() is added to release the memory
allocated for a non-contiguous buffer.
Modify qc_handle_crypto_frm() to call this function and allocate an ncbuf for
crypto data which are not received in order. The crypto data which are received in
order are not buffered but provided to the TLS stack (calling qc_provide_cdata()).
Modify qc_treat_rx_crypto_frms(), which is called after having provided the
in-order received crypto data to the TLS stack, to provide again the remaining
crypto data which have been buffered, if possible (if they are in order). Each time
buffered CRYPTO data are consumed, we try to release the memory allocated for
the non-contiguous buffer (ncbuf).
Also move rx.crypto.offset quic_enc_level struct member to rx.offset quic_cstream
struct member.
Must be backported to 2.6.
Add new quic_cstream struct definition to implement the CRYPTO data stream.
This is a simplification of the qcs object (QUIC streams) for the CRYPTO data
without any information about the flow control. They are not attached to any
tree, but to a QUIC encryption level, one per encryption level except for
the early data encryption level (for 0RTT). A stream descriptor is also allocated
for each CRYPTO data stream.
Must be backported to 2.6
The CPU usage pattern was found to be high (5%) on a machine with
48 threads and only 100 servers checked every second. That was
supposed to be only 100 connections per second, which should be very
cheap. It was figured that due to the check tasks unbinding from any
thread when going back to sleep, they're queued into the shared queue.
Not only does this require manipulating the global queue lock, but in
addition it means that all threads have to check the global queue
before going to sleep (hence take a lock again) to figure out how long
to sleep, and that they would all sleep only for the shortest amount
of time to the next check; one would pick it and all the other ones would
go back to sleep waiting for the next check.
That's perfectly visible in time-to-first-byte measurements. A quick
test consisting in retrieving the stats page in CSV over a 48-thread
process checking 200 servers every 2 seconds shows the following tail:
percentile ttfb(ms)
99.98 2.43
99.985 5.72
99.99 32.96
99.995 82.176
99.996 82.944
99.9965 83.328
99.997 83.84
99.9975 84.288
99.998 85.12
99.9985 86.592
99.999 88
99.9995 89.728
99.9999 100.352
One solution could consist in forcefully binding checks to threads at
boot time, but that's annoying, will cause trouble for dynamic servers
and may cause some skew in the load depending on some server patterns.
Instead here we take a different approach. A check remains bound to its
thread for as long as possible, but upon every wakeup, the thread's load
is compared with another random thread's load. If it's found that that
other thread's load is less than half of the current one's, the task is
bounced to that thread. In order to prevent that new thread from doing
the same, we set a flag "CHK_ST_SLEEPING" that indicates that it just
woke up and we're bouncing the task only on this condition.
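A minimal sketch of that rebalancing rule (hypothetical helper; the real code
uses HAProxy's per-thread load counters and statistical_prng() rather than
rand(), and only bounces a task that just woke up from sleep):
  #include <stdlib.h>

  /* on wakeup, decide whether the check task should stay on "cur_tid" or be
   * bounced to a random other thread whose load is less than half of ours.
   */
  static int pick_check_thread(int cur_tid, int nbthread,
                               const unsigned int *thread_load)
  {
      int other = rand() % nbthread;

      if (other != cur_tid && thread_load[other] * 2 < thread_load[cur_tid])
          return other;    /* migrate the task to the less loaded thread */
      return cur_tid;      /* stay where we are */
  }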
Tests have shown that the initial load was very unfair before, with a few
check threads having a load of 15-20 and the vast majority having zero.
With this modification, after two "inter" delays, the load is either zero
or one everywhere when checks start. The same test shows a CPU usage that
significantly drops, between 0.5 and 1%. The same latency tail measurement
is much better, roughly 10 times smaller:
percentile ttfb(ms)
99.98 1.647
99.985 1.773
99.99 4.912
99.995 8.76
99.996 8.88
99.9965 8.944
99.997 9.016
99.9975 9.104
99.998 9.224
99.9985 9.416
99.999 9.8
99.9995 10.04
99.9999 10.432
In fact one difference here is that many threads work while in the past
they were waking up and going down to sleep after having perturbed the
shared lock. Thus it is anticipated that this will scale way smoother
than before. Under strace it's clearly visible that all threads are
sleeping for the time it takes to relaunch a check, there's no more
thundering herd wakeups.
However it is also possible that in some rare cases such as very short
check intervals smaller than a scheduler's timeslice (such as 4ms),
some users might have benefited from the work being concentrated on
less threads and would instead observe a small increase of apparent
CPU usage due to more total threads waking up even if that's for less
work each and less total work. That's visible with 200 servers at 4ms
where show activity shows that a few threads were overloaded and others
doing nothing. It's not a problem, though, as in practice checks are not
supposed to eat much CPU nor to wake up fast enough to represent a
significant load anyway, and the main issue they could have been
causing (aside from the global lock) is an increase in last-percentile latency.
There's no point using ha_random32() which is heavy and uses shared
variables to calculate a random timer when we have statistical_prng()
which does the same and was made exactly for this.
In the past we've seen "show servers state" dump some internal bits for
the check states, that were causing regtests to fail. The relevant bits
have been added to the doc to fix the public API and make sure they do
not change by accident, but the output doesn't take care of masking the
undesired ones, causing regtests (and possibly user programs) to fail
when new bits are added. Let's add the mask for the only documented ones
(0x0F for check and 0x1F for agent respectively).
This could be backported wherever the server state is present, though
there's a tiny risk that some undocumented bits might have already
leaked to some user scripts, so it might be wise to wait a bit before
doing that or even not to backport too far.
In h1_process_demux(), aborts for incomplete messages were not properly
handled. It was not an issue because the abort was detected later in
h1_process(). But it will become an issue when performing the aborts refactoring.
First, when a read0 was detected, the SE_FL_EOI flag was set for messages in
DONE or TUNNEL state or for messages without known length (so responses in
close mode). The last statement is not accurate. The message must also be in
DATA state. Otherwise, SE_FL_EOI flag may be set on incomplete message.
Then, an error was reported, via SE_FL_ERROR flag, only when an incomplete
message was detected on the payload parsing. It must also be reported if
headers are incomplete. Here again, the error is detected later for now. But
it could be an issue later.
There is no reason to backport this patch.
There is no error handling when we read from or write to a pipe. The error is
caught later, in the mux I/O handler. But there is no reason not to do so
here.
There is no reason to backport it because no issue was reported for now
because of this "bug". In all cases, it must be evaluated first.
The same was performed for the H2 and H1 multiplexers. FCGI connection and
stream flags are moved in a dedicated header file. It will be mainly used to
be able to decode mux-fcgi flags from the flags utility.
In this patch, we move the flags and enums to mux_fcgi-t.h, as well as the
two state decoding inline functions.
Compilation is broken with DEBUG_THREAD since the following patch
76642223f0
MEDIUM: stick-table: switch the table lock to rwlock
Fix this by updating a legacy HA_SPIN_INIT() to HA_RWLOCK_INIT().
No backport needed unless the mentioned patch is backported.
We don't need to call stktable_requeue_exp() with the table's lock
held anymore, so let's move it out. It should slightly reduce the
contention on the write lock, though it is now already quite low.
We're generalizing the change performed in previous commit "MEDIUM:
stick-table: requeue the expiration task out of the exclusive lock"
to stktable_requeue_exp() so that it can also be used by callers of
__stktable_store(). At the moment there's still no visible change
since it's still called under the write lock. However, the previous
code in stktable_touch_with_exp() was updated to use this function.
__stktable_store() performs two distinct things: one is to insert a key
and the other one is to requeue the task's expiration date. Since the
latter might be done without a lock, let's first split the function in
two halves. For now this has no impact.
With 48 threads, a heavily loaded table with plenty of trackers and
rules and a short expiration timer of 10ms saturates the CPU at 232k
rps. By carefully using atomic ops we can make sure that t->exp_next
and t->task->expire converge to the earliest next expiration date and
that all of this can be performed under atomic ops without any lock.
That's what this patch is doing in stktable_touch_with_exp(). This is
sufficient to double the performance and reach 470k rps.
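A minimal sketch of that convergence idea, using C11 atomics and a plain
integer comparison standing in for HAProxy's tick comparison helpers:
  #include <stdatomic.h>

  /* lock-free update of a shared "next expiration" date: only move it
   * earlier, so that concurrent writers converge to the minimum.
   */
  static void converge_to_earliest(_Atomic unsigned int *exp_next,
                                   unsigned int new_exp)
  {
      unsigned int old = atomic_load_explicit(exp_next, memory_order_relaxed);

      while (new_exp < old) {
          if (atomic_compare_exchange_weak_explicit(exp_next, &old, new_exp,
                                                    memory_order_relaxed,
                                                    memory_order_relaxed))
              break;
          /* CAS failure reloaded <old>; the loop re-checks the condition */
      }
  }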
It's worth noting that __stktable_store() uses a mix of eb32_insert()
and task_queue, and that the second part of it could possibly benefit
from this, even though sometimes it's called under a lock that was
already held.
On a 24-core machine having some "stick-store response" rules, a lot of
time is spent in the write lock in stktable_set_entry(). Let's apply the
same mechanism as for the stktable_get_entry() consisting in looking up
the value under the read lock and upgrading it to a write lock only to
perform modifications. Here we even have the luxury of upgrading the
lock since there are no alloc/free in the path. All this increases the
performance by 40% (from 363k to 510k rps).
We don't need to be protected by the table's lock when touching t->current
if we do it using atomics, and that's great because it allows us to have
a cleaner stksess_new() that doesn't require a lock either, and to avoid
manipulating pools under a lock.
That's another 1% performance gain from 2.07 to 2.10M req/s under 48
threads.
On a 24-core machine doing lots of track-sc, it was found that the lock
in stktable_get_entry() was responsible for 25% of the CPU alone. It's
sad because most of its job is to protect the table during the lookup.
Here we're taking a slightly different approach: the lock is first taken
for reads during the lookup, and only in case of failure we switch it for
a write lock. We don't even perform an upgrade here since an allocation
is needed between the two, it would be wasted to do it under the lock,
and is generally not a good idea, so better release the read lock and
try again.
Here the performance under 48 threads with 3 trackers on the same table
jumped from 455k to 2.07M, or 4.55x! Note that the same approach should
be possible for stktable_set_entry().
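A self-contained sketch of that lookup pattern, with pthread rwlocks and a
trivial linked-list table standing in for HA_RWLOCK_* and the ebtree-based
stick-table (purely illustrative):
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdlib.h>
  #include <string.h>

  struct entry {
      char key[32];
      _Atomic int ref_cnt;
      struct entry *next;
  };

  struct table {
      pthread_rwlock_t lock;
      struct entry *head;
  };

  static struct entry *lookup(struct table *t, const char *key)
  {
      struct entry *e;

      for (e = t->head; e; e = e->next)
          if (strcmp(e->key, key) == 0)
              return e;
      return NULL;
  }

  static struct entry *get_entry(struct table *t, const char *key)
  {
      struct entry *e, *old;

      /* fast path: pure lookup under the read lock */
      pthread_rwlock_rdlock(&t->lock);
      e = lookup(t, key);
      if (e)
          e->ref_cnt++;              /* atomic refcount */
      pthread_rwlock_unlock(&t->lock);
      if (e)
          return e;

      /* slow path: allocate outside of any lock, then insert under the
       * write lock; no read->write upgrade since the allocation would
       * otherwise have to be done under the lock.
       */
      e = calloc(1, sizeof(*e));
      if (!e)
          return NULL;
      strncpy(e->key, key, sizeof(e->key) - 1);

      pthread_rwlock_wrlock(&t->lock);
      old = lookup(t, key);          /* someone may have inserted it */
      if (old) {
          free(e);
          e = old;
      } else {
          e->next = t->head;
          t->head = e;
      }
      e->ref_cnt++;
      pthread_rwlock_unlock(&t->lock);
      return e;
  }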
These functions do not modify anything in the table except the refcount
on success. Let's just lock the table for shared accesses and make use of
atomic ops to update the refcount. This brings a nice gain from 425k to
455k under 48 threads (7%), but some contention remains on the exclusive
locks in other parts.
Note that the refcount continues to be updated under the lock because it's
not yet certain whether there are races between it and some of the exclusive
lock on the table. The difference is marginal and we prefer to stay on the
safe side for now.
In __stktable_get_entry() now we're planning for the possibility that the
call to __stktable_store() doesn't add the newly allocated entry and instead
finds a previously inserted one. At the moment this doesn't exist because
the lookup + insert passes are made under the same lock. But it will soon
change.
This function is used to create an entry in the table. But it doesn't
consider the possibility that the entry already exists, because right
now it's only called in situations where it was verified under a lock
that it doesn't exist. Since we'll soon need to break that assumption
we need it to verify that the requested entry was added and to return
a pointer to the one in the tree so that the caller can detect any
possible conflict. At the moment this is not used.
stream_store_counters() calls stksess_kill_if_expired() for each active
counter. And this one takes an exclusive lock on the table before
checking if it has any work to do (hint: it almost never has since it
only wants to delete expired entries). However a lock is still needed for
now to protect the ref_cnt, but we can do it atomically under the read
lock.
Let's change the mechanism. Now what we do is to check out of the lock
if the entry is expired. If it is, we take the write lock, expire it,
and decrement the refcount. Otherwise we just decrement the refcount
under a read lock. With this change alone, the config based on 3
trackers without the previous patches saw a 2.6x improvement, but here
it doesn't yet change anything because some heavy contention remains
on the lookup part.
As previously mentioned, this function currently holds an exclusive lock
on the table during all the time it takes to check if the entry needs to
be updated and synchronized with peers. The reality is that many setups
do not use peers and that on highly loaded setups, the same entries are
hammered all the time so the key's expiration doesn't change between a
number of consecutive accesses.
With this patch we take a different approach. The function starts
without taking the lock, and will take it only if needed, keeping track
of it. This way we can avoid it most of the time, or even entirely.
Finally if the decrefcnt argument requires that the refcount is
decremented, we either do it using a non-atomic op if the table was
locked (since no other entry may touch it) or via an atomic under the
read lock only.
With this change alone, a 48-thread test with 3 trackers increased
from 193k req/s to 425k req/s, which is a 2.2x factor.
Taking the write lock prior to entering that function is a problem
because this function is full of conditions that most of the time can
lead to eliminating the lock.
This commit first moves the write lock inside the function and passes
the extra argument required to implement stktable_touch_remote() and
stktable_touch_local(). It also renames the function to remove the
underscores since there's no other variant and it's exported under
this name (probably an old rename that was not propagated). The code
was stressed under 48 threads using 3 trackers on the same table. It
already shows a tiny 3% improvement from 187k to 193k rps.
At plenty of places we decrement ts->ref_cnt under the write lock
because it's held. We don't technically need it to be done that way
if there's contention and an atomic could suffice. However until all
places are turned to atomic, we at least need to do that under a
read lock for now, so that we don't mix atomic and non-atomic uses.
Regardless it already brings ~1.5% req rate improvement with 3 trackers
on the same table under 48 threads at 184k->187k rps.
Right now a spinlock is used, but most accesses are for reads, so let's
switch the lock to an rwlock and switch all accesses to exclusive locks
for now. There should be no visible difference at this point.
Right now when dealing with freq_ctr updates, we're using the process-
wide monotonic time, and accessing it is expensive since every thread
needs to update it, so this adds some contention. However we don't need
it all the time, the thread's local time is most of the time strictly
equal to the global time, and may be off by one millisecond when the
global time is switched to the next one by another thread, and in this
case we don't want to use the local time because it would risk to cause
a rotation of the counter. But that's precisely the condition we're
already relying on for the slow path!
What this patch does is to add a check for the period against the
local time prior to anything else, and immediately return after
updating the counter if still within the period, otherwise fall back
to the existing code. Given that the function starts to inflate a bit,
it was split between a very short inline part that does the hot path,
and the slower fallback that's in a cold function. It was measured that
on a 24-CPU machine it was called ~0.003% of the time.
The resulting improvement sits between 2 and 3% at 500k req/s tracking
an http_req_rate counter.
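A rough sketch of the split, with made-up names (the real freq_ctr code also
keeps the previous period's value and handles tick wrapping):
  #include <stdatomic.h>

  struct freq_ctr_sketch {
      _Atomic unsigned int curr_tick;  /* start of the current period, in ms */
      _Atomic unsigned int curr_ctr;   /* events seen in the current period */
  };

  /* cold path: try to start a new period, then count the event */
  static void rotate_and_count(struct freq_ctr_sketch *f,
                               unsigned int now_ms, unsigned int inc)
  {
      unsigned int old = atomic_load_explicit(&f->curr_tick, memory_order_relaxed);

      if (now_ms - old >= 1000 &&
          atomic_compare_exchange_strong_explicit(&f->curr_tick, &old, now_ms,
                                                  memory_order_relaxed,
                                                  memory_order_relaxed))
          atomic_store_explicit(&f->curr_ctr, 0, memory_order_relaxed);

      atomic_fetch_add_explicit(&f->curr_ctr, inc, memory_order_relaxed);
  }

  /* hot path: only look at the thread-local time; fall back to the cold
   * function when the local date says the period is over.
   */
  static inline void update_freq_ctr_sketch(struct freq_ctr_sketch *f,
                                            unsigned int now_ms, unsigned int inc)
  {
      unsigned int start = atomic_load_explicit(&f->curr_tick, memory_order_relaxed);

      if (now_ms - start < 1000) {
          atomic_fetch_add_explicit(&f->curr_ctr, inc, memory_order_relaxed);
          return;
      }
      rotate_and_count(f, now_ms, inc);
  }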
Since commit 03cdf55e6 ("MINOR: stream: Stickiness server lookup by name.")
in 2.0-dev6, server names may be used instead of their IDs, in order to
perform stickiness. However the commit above may end up trying to insert
an empty server name in the dictionary when the server is an applet
instead, resulting in an immediate segfault. This is typically what
happens when a "stick-store" rule is present in a backend featuring a
"stats" directive. As there doesn't seem to be an easy way around it,
it seems to imply that "stick-store" is not much used anymore.
The solution here is to only try to insert non-null keys into the
dictionary. The patch moves the check of the key type before the
first lock so that the test on the key can be performed under the lock
instead of locking twice (the patch is more readable with diff -b).
Note that before 2.4, there's no <key> variable there as it was
introduced by commit 92149f9a8 ("MEDIUM: stick-tables: Add srvkey
option to stick-table"), but the __objt_server(s->target)->id still
needs to be tested.
This needs to be backported as far as 2.0.
In cd341d531, I added a FIXME comment because I noticed a
lua_pushvalue with 0 index, whereas lua doc states that 0 is never
an acceptable index.
After reviewing and testing the hlua_applet_http_send_response() code,
it turns out that this pushvalue is not even needed.
So it's safer to remove it as it could lead to undefined
behavior (since it is not supported by Lua API) and it grows lua stack
by 1 for no reason.
No backport needed.
In hlua code, we mark every function that may longjump using
MAY_LJMP macro so it's easier to identify them by reading the code.
However, some luaL_checktype() calls were performed without the MAY_LJMP.
According to lua doc:
Functions called luaL_check* always raise an error if
the check is not satisfied.
-> Adding the missing MAY_LJMP for those luaL_checktype() calls.
No backport needed.
Move code which activates IP_PKTINFO socket option (or affiliated
options) from sock_inet_bind_receiver() to quic_bind_listener()
function. This change is useful for two reasons :
* first, and the most important one : this activates IP_PKTINFO only for
QUIC receivers. The previous version impacted all datagram receivers,
used for example by log-forwarder. This should reduce memory usage for
these datagram sockets which do not need this option.
* second, USE_QUIC preprocessor statements are removed from
src/sock_inet.c, which cleans up the code.
IP_PKTINFO was introduced recently by the following patch :
97ecc7a8ea (quic-dev/qns)
MEDIUM: quic: retrieve frontend destination address
For the moment, this does not impact any stable release. However, as
previous patch is scheduled for 2.6 backporting, the current change must
also be backported to the same versions.
The tx_qrings[] and tx_qring_list in the receiver are not used
anymore since commit f2476053f ("MINOR: quic: replace custom buf on Tx
by default struct buffer"), the only place where they're referenced
was in quic_alloc_tx_rings_listener(), which by the way implies that
these were not even freed on exit.
Let's just remove them. This should be backported to 2.6 since the
commit above also was.
Retrieve the frontend destination address for a QUIC connection. This
address is retrieved from the first received datagram and then stored in
the associated quic-conn.
This feature relies on IP_PKTINFO or affiliated flags support on the
socket. This flag is set for each QUIC listener in
sock_inet_bind_receiver(). To retrieve the destination address,
recvfrom() has been replaced by the recvmsg() syscall. This operation and the
parsing of the msghdr structure have been extracted into a wrapper, quic_recv().
This change is useful to finalize the implementation of 'dst' sample
fetch. As such, quic_sock_get_dst() has been edited to return local
address from the quic-conn. As a best effort, if local address is not
available due to kernel non-support of IP_PKTINFO, address of the
listener is returned instead.
This should be backported up to 2.6.
Inspect return code of qc_send_mux(). If quic-conn layer reports an
error, this will interrupt the current emission process.
This should be backported up to 2.6.
Continue on the cleanup of QUIC stack and components.
quic_conn uses internally a ssl_sock_ctx to handle mandatory TLS QUIC
integration. However, this is merely a convenience, and it is not
equivalent to the stackable ssl xprt layer used in the context of HTTP/1 or 2.
To better emphasize this, ssl_sock_ctx usage in quic_conn has been
removed wherever it is not necessary : namely in functions not related
to TLS. quic_conn struct now contains its own wait_event for tasklet
quic_conn_io_cb().
This should be backported up to 2.6.
Channel.insert(channel, string, [,offset]):
When no offset is provided, hlua_channel_insert_data() inserts
string at the end of incoming data.
This behavior conflicts with the documentation that explicitly says
that the default behavior is to insert the string in front of incoming data.
This patch fixes hlua_channel_insert_data() behavior so that it fully
complies with the documentation.
Thanks to Smackd0wn for noticing it.
This could be backported to 2.6 and 2.5
Gcc 4.x, 5.x and 6.x report this when compiling http_fetch.c:
src/http_fetch.c: In function 'smp_fetch_meth':
src/http_fetch.c:357:6: warning: 'htx' may be used uninitialized in this function [-Wmaybe-uninitialized]
sl = http_get_stline(htx);
That's quite weird since there's no such code path, but presetting the
htx variable to NULL during declaration is enough to shut it up.
This may be backported to any version that has dbbdb25f1 ("BUG/MINOR:
http-fetch: Use integer value when possible in "method" sample fetch")
as it's the one that triggered this warning (hence at least 2.0).
In smp_fetch_meth(), smp_prefetch_htx() function may be called to parse the
HTX message and update the HTTP transaction accordingly. In this case, in
smp_fetch_meth() and on success, we must update the "meth" variable. Otherwise,
the variable is still equal to HTTP_METH_OTHER and the string version is
always used instead of the enum for known methods.
This patch must be backported as far as 2.0.
As seen in issue #1866, some environments will not allow to change the
current FD limit, and actually we don't need to do it, we only do it as
a byproduct of adjusting the limit to the one that fits. Here we're
replacing calls to setrlimit() with calls to raise_rlim_nofile(), which
will avoid making the setrlimit() syscall in case the desired value is
lower than the current process' one.
This depends on previous commit "MINOR: fd: add a new function to only
raise RLIMIT_NOFILE" and may need to be backported to 2.6, possibly
earlier, depending on users' experience in such environments.
In issue #1866 an issue was reported under docker, by which a user cannot
lower the number of FD needed. It looks like a restriction imposed in this
environment, but it results in an error while it ought not have to in the
case of shrinking.
This patch adds a new function raise_rlim_nofile() that takes the desired
new setting, compares it to the current one, and only calls setrlimit() if
one of the values in the new setting is larger than the older one. As such
it will continue to emit warnings and errors in case of failure to raise
the limit but will never shrink it.
This patch is only preliminary to another one, but will have to be
backported where relevant (likely only 2.6).
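A simplified illustration of the idea (the real raise_rlim_nofile() lives in
the fd code and keeps the existing warnings and errors on failure):
  #include <sys/resource.h>

  /* only issue setrlimit() when the requested limits are actually higher
   * than the current ones; shrinking is silently skipped.
   */
  static int raise_rlim_nofile_sketch(const struct rlimit *new_limit)
  {
      struct rlimit cur;

      if (getrlimit(RLIMIT_NOFILE, &cur) != 0)
          return -1;

      if (new_limit->rlim_cur > cur.rlim_cur ||
          new_limit->rlim_max > cur.rlim_max)
          return setrlimit(RLIMIT_NOFILE, new_limit);

      return 0;   /* nothing to raise: do not make the syscall at all */
  }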
Building h1.c with gcc-4.7 -Os produces the following warning:
src/h1.c: In function 'h1_headers_to_hdr_list':
src/h1.c:1101:36: warning: 'ptr' may be used uninitialized in this function [-Wmaybe-uninitialized]
In fact ptr may be taken from sl.rq.u.ptr which is only initialized after
passing through the relevant states, but gcc doesn't know which states
are visited. Adding an ALREADY_CHECKED() statement there is sufficient to
shut it up and doesn't affect the emitted code.
This may be backported to stable versions to make sure that builds on older
distros and systems are clean.
In hlua_lua2arg_check(), we allow for the first argument to not be
provided, if it has a type we know, this is true for frontend, backend,
and stick table. However, the stick table code was changed. It used
to be deduced from the proxy, but it is now directly provided in struct
args. So setting the proxy there no longer works, and we have to
explicitly set the stick table.
Not doing so will lead the code to use the proxy pointer as a stick
table pointer, which will likely cause crashes.
This should be backported up to 2.0.
In hlua_lua2arg_check(), on failure, before calling free_argp(), make
sure to always mark the failed argument as ARGT_STOP. We only want to
free arguments prior to that point, because we did not allocate the
strings after this one, and so we don't want to free them.
This should be backported up to 2.2.
It is possible to receive a STOP_SENDING frame for a locally closed
stream. This was not properly managed as this would result in a BUG_ON()
crash from qcs_idle_open() call under qcc_recv_stop_sending().
Now, STOP_SENDING frames are ignored when received on streams already
locally closed. This has two consequences depending on the reason of
closure :
* if a RESET_STREAM was already emitted and closed the stream, this
patch prevents to emit a new RESET_STREAM. This behavior is thus
better.
* if the stream was closed because all data were transmitted, no RESET_STREAM will
be built. This is contrary to RFC 9000, which advises to transmit
it, even in the "Data Sent" state. However, this is not mandatory so the
new behavior is acceptable, even if it could be improved.
This crash has been detected on haproxy.org. This can be artificially
reproduced by adding the following snippet at the end of qc_send_mux()
when doing a request with a small payload response :
qcc_recv_stop_sending(qc->qcc, 0, 0);
This must be backported up to 2.6.
xprt_quic module was too large and did not reflect the true architecture
by contrast to the other protocols in haproxy.
Extract code related to XPRT layer and keep it under xprt_quic module.
This code should only contain a simple API to communicate between the QUIC
lower layer and the connection/MUX.
The vast majority of the code has been moved into a new module named
quic_conn. This module is responsible for the implementation of the QUIC
lower layer. Conceptually, it overlaps with the TCP kernel implementation
when comparing QUIC and HTTP1/2 stacks of haproxy.
This should be backported up to 2.6.
Clean up quic sources by adjusting the list of included headers depending
on the actual dependencies of each source file.
On some occasions, xprt_quic.h was removed from the include list. This is
useful to help reduce the dependency on this single file and clean
up the QUIC haproxy architecture.
This should be backported up to 2.6.
Two prototypes in the quic_tls module were not identical to the actual
function definitions.
* quic_tls_decrypt2() : the second argument's const attribute is not
present, to be able to use it with EVP_CIPHER_CTX_ctrl(). As a
consequence of this change, the token field of quic_rx_packet is now
declared as non-const.
* quic_tls_generate_retry_integrity_tag() : the second argument type
differs between the two. Adjust this by fixing it to unsigned char
to match the EVP_EncryptUpdate() SSL function.
This situation did not seem to have any visible effect. However, this is
clearly an undefined behavior and should be treated as a bug.
This should be backported up to 2.6.
Some variables related to QUIC TLS were defined in a header file : their
definitions are now moved properly in the implementation file, with only
declarations in the header.
This should be backported up to 2.6.
ull is a typedef to unsigned long long. It is only defined in
xprt_quic-t.h. Its usage should be limited over time to reduce xprt_quic
dependency over the whole code. It can be replaced by ullong typedef
from compat.h.
For the moment, ull references have been replaced in the qmux_trace module.
They were only used for printf formatting and have been replaced by the true
variable type.
This change is useful to reduce dependencies on xprt_quic in other
files.
This should be backported up to 2.6.
This patch adds support to the following authentication methods:
- AUTH_REQ_GSS (7)
- AUTH_REQ_SSPI (9)
- AUTH_REQ_SASL (10)
Note that since AUTH_REQ_SASL allows multiple authentication mechanisms
such as SCRAM-SHA-256 or SCRAM-SHA-256-PLUS, the auth payload length may
vary since the method is sent in plaintext. In order to allow this, the
regex now matches any payload length.
This partially fixes Github issue #1508 since user authentication is
still broken but should restore pre-2.2 behavior.
This should be backported up to 2.2.
Signed-off-by: Fatih Acar <facar@scaleway.com>
In github issue #1878, Bart Butler reported observing turn-around states
(1 second pause) after connection retries going to different servers,
while this ought not happen.
In fact it does happen because back_handle_st_cer() enforces the TAR
state for any algo that's not round-robin. This means that even leastconn
has it, as well as hashes after the number of servers changed.
Prior to doing that, the call to stream_choose_redispatch() has already
had a chance to perform the correct choice and to check the algo and
the number of retries left. So instead we should just let that function
deal with the algo when needed (and focus on deterministic ones), and
let the former just obey. Bart confirmed that the fixed version works
as expected (no more delays during retries).
This may be backported to older releases, though it doesn't seem very
important. At least Bart would like to have it in 2.4 so let's go there
for now after it has cooked a few weeks in 2.6.
The maxconn value is decoded using atol(), so values like "3k" are
silently converted to the integer 3, while the user meant 3000.
This patch fixes this behavior by reporting a parsing error.
This patch could be backported to all maintained versions, but it
could break some configurations. The bug is really minor; I recommend
not backporting it, or backporting a patch which only throws a warning
instead of a fatal error.
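For illustration, a strict parser of the kind that would catch "3k" (a hedged
sketch, not the exact fix):
  #include <errno.h>
  #include <stdlib.h>

  /* return 0 and fill <out> for a plain decimal integer, -1 otherwise so
   * that the caller can report a parsing error (or warning) on input
   * like "3k".
   */
  static int parse_maxconn_strict(const char *arg, long *out)
  {
      char *end;

      errno = 0;
      *out = strtol(arg, &end, 10);
      if (errno || end == arg || *end != '\0')
          return -1;
      return 0;
  }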
Idle connections do not work on 32-bit machines due to an alignment issue
causing the connection nodes to be indexed with their lower 32-bits set to
zero and the higher 32 ones containing the 32 lower bits of the hash. The
cause is the use of ebmb_node with an aligned data, as on this platform
ebmb_node is only 32-bit aligned, leaving a hole before the following hash
which is a uint64_t:
$ pahole -C conn_hash_node ./haproxy
struct conn_hash_node {
struct ebmb_node node; /* 0 20 */
/* XXX 4 bytes hole, try to pack */
int64_t hash; /* 24 8 */
struct connection * conn; /* 32 4 */
/* size: 40, cachelines: 1, members: 3 */
/* sum members: 32, holes: 1, sum holes: 4 */
/* padding: 4 */
/* last cacheline: 40 bytes */
};
Instead, eb64 nodes should be used when it comes to simply storing a
64-bit key, and that is what this patch does.
For backports, a variant consisting in simply marking the "hash" member
with a "packed" attribute on the struct also does the job (tested), and
might be preferable if the fix is difficult to adapt. Only 2.6 and 2.5
are affected by this.
Previous commit 8a6767d26 ("BUG/MINOR: config: don't count trailing spaces
as empty arg (v2)") was still not enough. As reported by ClusterFuzz in
issue 52049 (https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=52049),
there remains a case where for the sake of reporting the correct argument
count, the function may produce virtual args that span beyond the end of
the output buffer if that one is too short. That's what's happening with
a config file of one empty line followed by a large number of args.
This means that what args[] points to cannot be relied on and that a
different approach is needed. Since no output is produced for spaces and
comments, we know that args[arg] continues to point to out+outpos as long
as only comments or spaces are found, which is what we're interested in.
As such it's safe to check the last arg's pointer against the one before
the trailing zero was emitted, in order to decide to count one final arg.
No backport is needed, unless the commit above is backported.
In parse_line(), spaces increment the arg count and it is incremented
again on '#' or end of line, resulting in an extra empty arg at the
end of arg's list. The visible effect is that the reported arg count
is in excess of 1. It doesn't seem to affect regular functions but
specialized ones like anonymisation depend on this count.
This is the second attempt for this problem; here is the explanation:
When called for the first line, no <out> was allocated yet so it's NULL,
letting the caller realloc a larger line if needed. However the words are
parsed and their respective args[arg] are filled with out+position, which
means that while the first arg is NULL, the other ones are not and fail the
test that was meant to avoid dereferencing a NULL. Let's simply check <out>
instead of <args> since the latter is always derived from the former and
cannot be NULL without the former also being.
This may need to be backported to stable versions.
In function hlua_applet_http_send_response(), a pushvalue
is performed with index '0'.
But according to lua doc (https://www.lua.org/manual/5.3/manual.html#4.3):
"Note that 0 is never an acceptable index".
Adding a FIXME comment near the pushvalue operation
so that it can get some chance to be reviewed later.
No backport needed.
When providing multiple optional arguments with lua-load or
lua-load-per-thread directives, arguments were pushed one by one
to the stack using lua_pushstring() without checking if the stack
could handle it.
This could easily lead to a program crash when providing too many
arguments. I can easily reproduce the crash starting from ~50 arguments.
Calling lua_checkstack() before pushing to the stack fixes the crash:
According to lua.org, lua_checkstack() does some housekeeping and
allows the stack to be expanded as long as some memory is available
and the hard limit isn't reached.
When no memory is available to expand the stack or the limit is reached,
lua_checkstack() returns an error: in this case we force hlua_load_state()
to return a meaningful error instead of crashing.
In practice though, cfgparse complains about too many words
way before such event may occur on a normal system.
TLDR: the ~50 arguments limitation is not an issue anymore.
No backport needed, except if 'MINOR: hlua: Allow argument on
lua-lod(-per-thread) directives' (ae6b568) is backported.
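A condensed sketch of the fix using the standard Lua C API (the surrounding
hlua code and the actual function names are omitted):
  #include <lua.h>

  /* push the optional arguments one by one, growing the stack first so
   * that an arbitrary argument count cannot overflow it.
   */
  static int push_lua_load_args(lua_State *L, char **args, int nb_args)
  {
      int i;

      for (i = 0; i < nb_args; i++) {
          if (!lua_checkstack(L, 1))
              return 0;   /* stack limit reached or out of memory */
          lua_pushstring(L, args[i]);
      }
      return 1;
  }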
Calling the function with an offset when "offset + len" was greater than or equal
to the targeted blk length caused the 'v' value to be improperly set.
And because 'v' is directly provided to htx_replace_blk_value(), blk consistency was compromised.
(It seems that blk was overrunning in htx_replace_blk_value() due to
this and header data was overwritten in this case).
This patch adds the missing checks to make the function behave as
expected when offset is set and offset+len is greater than or equal to the targeted blk length.
Some comments were added to the function as well.
It may be backported to 2.6 and 2.5
"stdout" and "stderr" are not hashed. In the same spirit, "fd@" and
"sockpair@" prefixes are not hashed too. There is no reason to hash such
addresses and keeping them readable may be useful to diagnose bugs.
No backport needed, except if anonymization mechanism is backported.
Add PA_O_DGRAM and PA_O_PORT_OFS options when str2sa_range() is called. This
way dgram sockets and addresses with port offsets are supported.
No backport needed, except if anonymization mechanism is backported.
Correct a comment in include/haproxy/global-t.h and include/haproxy/tools.h.
Replace the CLI command 'set global-key <key>' by 'set anon global-key <key>' in
order to make it easier to find when you don't remember it: the completion can
guide you when you just type 'set anon'.
No backport needed, except if anonymization mechanism is backported.
Add an option to dump the line numbers of the configuration file when
it's dumped. Other options can be easily added. Options are separated
by ',' when typing the command line:
'./haproxy -dC[key],line -f [file]'
No backport needed, except if anonymization mechanism is backported.
Add keywords recognized during the dump of the configuration file;
these keywords are followed by sensitive information.
Remove the 'localhost' exception for the second argument of the 'server'
keyword, considered as not essential; it could be confusing when comparing
with the CLI section (which has no 'localhost' exception).
No backport needed, except if anonymization mechanism is backported.
Add HA_ANON_CLI to the srv->hostname when using 'show servers state'.
It can contain sensitive information like 'www....com'
No backport needed, except if anonymization mechanism is backported.
Replace HA_ANON_CLI by hash_ipanon to anonymize addresses the same way
addresses are anonymized in the configuration file.
No backport needed, except if anonymization mechanism is backported.
Add a 'hasport' parameter to return a simple hash or the ipstring when the
ipstring has no port. Don't hash if scramble is null. Add the option
PA_O_PORT_RESOLVE to str2sa_range(). Add a UNIX case.
Those modifications permit using hash_ipanon in the CLI section
in order to dump the same anonymization of addresses in the
configuration file and with the CLI.
No backport needed, except if anonymization mechanism is backported.
Remove the error message in 'set anon on|off'; it's more user
friendly: users use 'set anon on' even if the mode is already
activated, and the same for 'set anon off'. That allows users
to write the command line in the anonymized mode they want
without errors.
No backport needed, except if anonymization mechanism is backported.
Add an anonymization for an element missed in the first merge
for 'show sess all'.
No backport needed, except if anonymization mechanism is backported.
The hlua_http_msg_insert_data() function is called when the
HTTPMessage.insert() method is used from a Lua script.
This function did not work properly for multiple reasons:
- An incorrect argument check was performed and prevented the user
from providing the optional offset argument.
- Input and output variables were inverted
and offset was not handled properly. The same bug
was also fixed in hlua_http_msg_del_data(), see:
'BUG/MINOR: hlua: fixing hlua_http_msg_del_data behavior'
The function now behaves as described in the documentation.
This could be backported to 2.6 and 2.5.
GH issue #1885 reported that HTTPMessage.remove() did not
work as expected.
It turns out that underlying hlua_http_msg_del_data() function
was not working properly due to input / output inversion as well
as incorrect user offset handling.
This patch fixes it so that the behavior is the one described in
the documentation.
This could be backported to 2.6 and 2.5.
This reverts commit 5529424ef1.
Since this patch, HAProxy crashes when the first line of the configuration
file contains more than one parameter because, on the first call of
parse_line(), the output line is not allocated. Thus elements in the
arguments array may point to an invalid memory area.
It may be considered as a bug to reference an invalid memory area and it should be
fixed. But for now, it is safer to revert this patch.
If the reverted commit is backported, this one must be backported too.
In parse_line(), spaces increment the arg count and it is incremented
again on '#' or end of line, resulting in an extra empty arg at the
end of arg's list. The visible effect is that the reported arg count
is in excess of 1. It doesn't seem to affect regular functions but
specialized ones like anonymisation depend on this count.
This may need to be backported to stable versions.
Fix the size check in ring_make_from_area() which is checking the size
of the pointer instead of the size of the structure.
No backport needed, 2.7 only.
To avoid any UAF when a resolution is released, a mechanism was added to
abort a resolution and delay its release to the end of the current
execution path. This mechanism depends on a hard assumption: any reference
to an aborted resolution must be removed. So, when a resolution is aborted,
it is removed from the resolver lists and inserted into a death row list.
However, a resolution may still be referenced in the query_ids tree. It is
the tree containing all resolutions with a pending request. Because aborted
resolutions are released outside the resolvers lock, it is possible to
release a resolution on one side while a query answer is received and
processed on another one. Thus, it is still possible to have a UAF because
of this bug.
To fix the issue, when a resolution is aborted, it is removed from any list,
but it is also removed from the query_ids tree.
This patch should solve the issue #1862 and may be related to #1875. It must
be backported as far as 2.2.
If stream_new() fails after the frontend SC is attached, the underlying SE
descriptor is not properly reset. Among other things, SE_FL_ORPHAN flag is
not set again. Because of this error, a BUG_ON() is triggered when the mux
stream on the frontend side is destroyed.
Thus, now, when stream_new() fails, SE_FL_ORPHAN flag is set on the SE
descriptor and its stream-connector is set to NULL.
This patch should solve the issue #1880. It must be backported to 2.6.
The frontend SC is attached before the backend one is allocated. Thus an
allocation error on backend SC must be handled before an error on the
frontend SC.
This patch must be backported to 2.6.
Upon a reload with the master CLI, the FD of the master CLI session is
received by the internal socketpair listener.
This session is used to display the status of the reload and will then
close.
The environment variable HAPROXY_LOAD_SUCCESS stores "1" if the
configuration was successfully loaded and the process started, "0" otherwise.
The "_loadstatus" master CLI command displays either
"Loading failure!\n" or "Loading success.\n"
As pointed out by chipitsine in GH #1879, coverity complains
about a sizeof with char ** type where it should be char *.
This was introduced in 'MINOR: hlua: Allow argument on
lua-lod(-per-thread) directives' (ae6b568)
Luckily this had no effect so far because on most platforms
sizeof(char **) == sizeof(char *), but this can not be safely
assumed for portability reasons.
The fix simply changes the argument to sizeof so that it refers to
'*per_thread_load[len]' instead of 'per_thread_load[len]'.
No backport needed.
When using the "reload" command over the master CLI, all connections to
the master CLI were cut; this was unfortunate because it could have been
used to implement a synchronous reload command.
This patch implements an architecture to keep the connection alive after
the reload.
The master CLI is now equipped with a listener which uses a socketpair,
the 2 FDs of this socketpair are stored in the mworker_proc of the
master, which the master keeps via the environment variable.
ipc_fd[1] is used as a listener for the master CLI. During the "reload"
command, the CLI will send the FD of the current session over ipc_fd[0],
then the reload is achieved, so the master won't handle the recv of the
FD. Once reloaded, ipc_fd[1] receives the FD of the session, so the
connection is preserved. Of course it is a new context, so everything
like the "prompt mode" is lost.
Only the FD which performs the reload is kept.
chipitsine reported in github issue #1872 that in the hash_anon and
hash_ipanon functions, index_hash can be equal to NB_L_HASH_WORD and can reach a
nonexistent table line; the table is declared as hash_word[NB_L_HASH_WORD][20],
so hash_word[NB_L_HASH_WORD] doesn't exist.
No backport needed, except if anonymization mechanism is backported.
Because memprintf() returns an error to the caller instead of displaying it
on screen, the function which performs the display of the message on the
right output is in charge of adding '\n' if necessary.
This patch may be backported.
At present option smtpchk closes the TCP connection abruptly on completion of service checking,
even if successful. This can result in a very high volume of errors in backend SMTP server logs.
This patch ensures an SMTP QUIT is sent and a positive 2xx response is received from the SMTP
server prior to disconnection.
This patch depends on the following one:
* MINOR: smtpchk: Update expect rule to fully match replies to EHLO commands
This patch should fix the issue #1812. It may be backported as far as 2.2
with the commit above. On the 2.2, the proxy_parse_smtpchk_opt() function is
located in src/check.c.
[cf: I updated reg-tests script accordingly]
The response to the EHLO command is a multiline reply. However the corresponding
expect rule only matches on the first line. For now, it is not an issue. But
to be able to send the QUIT command and gracefully close the connection, we
must be sure to consume the full EHLO reply first.
To do so, the regex has been updated to match all 2xx lines at a time.
Tests with forced wakeups on a 24c/48t machine showed that we're capping
at 7.3M loops/s, which means 6.6 microseconds of loop delay without
having anything to do.
This is caused by two factors:
- the load and update of the now_offset variable
- the update of the global_now variable
What is happening is that threads are not running within the one-
microsecond time precision provided by gettimeofday(), so each thread
waking up sees a slightly different date and causes undesired updates
to global_now. But worse, these undesired updates mean that we then
have to adjust the now_offset to match that, and adds significant noise
to this variable, which then needs to be updated upon each call.
By only allowing slightly less precision we can completely eliminate
that contention. Here we're ignoring the 5 lowest bits of the usec
part, meaning that the global_now variable may be off by up to 31 us
(16 on avg). The variable is only used to correct the time drift some
threads might be observing in environments where CPU clocks are not
synchronized, and it's used by freq counters. In both cases we don't
need that level of precision and even one millisecond would be pretty
fine. We're just 30 times better at almost no cost since the global_now
and now_offset variables now only need to be updated 30000 times a
second in the worst case, which is unnoticeable.
After this change, the wakeup rate jumped from 7.3M/s to 66M/s, meaning
that the loop delay went from 6.6us to 0.73us, that's a 9x improvement
when under load! With real tasks we're seeing a boost from 28M to 52M
wakeups/s. The clock_update_global_date() function now only takes
1.6%, it's good enough so that we don't need to go further.
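A tiny illustration of the rounding (made-up helper name, assumed usage):
drop the 5 lowest bits of the microsecond part so the global date only needs
updating when the rounded local date has moved past it.
  #include <stdint.h>

  #define GLOBAL_NOW_ROUND_MASK (~(uint64_t)0x1f)  /* up to 31 us, ~16 avg */

  static inline int global_now_needs_update(uint64_t local_now_us,
                                            uint64_t global_now_us)
  {
      return (local_now_us & GLOBAL_NOW_ROUND_MASK) >
             (global_now_us & GLOBAL_NOW_ROUND_MASK);
  }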
This patch modifies epoll, kqueue and evports (the 3 pollers that support
busy polling) to only update the local date in the inner polling loop,
the global one being done when leaving the loop. Testing with epoll on
a 24c/48t machine showed a boost from 53M to 352M loops/s, indicating
that the loop was spending 85% of its time updating the global date or
causing side effects (which was confirmed with perf top showing 67% in
clock_update_global_date() alone).
Pollers that support busy polling spend a lot of time (and cause
contention) updating the global date when they're looping over themselves
while it serves no purpose: what's needed is only an update on the local
date to know when to stop looping.
This patch splits clock_update_date() into a pair of local and global
update functions, so that pollers can be easily improved.
Patrick Hemmer reported an improper log behavior when using
log-format to escape log data (+E option):
Some bytes were truncated from the output:
- The escape_string() function now takes an extra parameter that
allows the caller to specify the input string stop pointer in
case the input string is not guaranteed to be zero-terminated.
- Minor checks were added into lf_text_len() to make sure the dst
string will not overflow.
- lf_text_len() now makes proper use of escape_string() function.
This should be backported as far as 1.8.
The commit 372b38f935 ("BUG/MEDIUM: mux-h1: Handle connection error after a
synchronous send") introduced a bug. In h1_snd_buf(), consumed data are not
properly accounted if a connection error is detected. Indeed, data are
consumed when the output buffer is filled. But, on connection error, we exit
from the loop without incrementing the total variable accordingly.
When this happens, this leaves the channel buffer in an inconsistent
state. The buffer may be empty with some output at the channel level.
Because an error is reported, it is harmless. But it is safer to fix this
bug now to avoid any regression in future.
This patch must be backported as far as 2.2.
The MUX QUIC snd_buf operation will return early if a qcs instance is
reset. In this case, HTX is left untouched and the callback returns
the whole buffer size. This leads to undefined behavior as the stream
layer is notified about a transfer but does not see its HTX buffer
emptied. In the end, the transfer may stall, which will lead to a leak of the
session.
To fix this, the HTX buffer is now reset when snd_buf is short-circuited.
This should fix the issue as now the stream layer can continue the
transfer until its completion.
This patch has already been tested by Tristan and is reported to solve
the github issue #1801.
This should be backported up to 2.6.
Factorize common code between the h3 and hq-interop snd_buf operations. This
is inserted in MUX QUIC's own snd_buf callback.
The h3/hq-interop API has been adjusted to directly receive an HTX
message instead of a plain buf. This led to extracting part of MUX QUIC
snd_buf into the qmux_http module.
This should be backported up to 2.6.
Extract function dealing with HTX outside of MUX QUIC. For the moment,
only rcv_buf stream operation is concerned.
The main objective is to be able to support both TCP and HTTP proxy mode
with a common base and add specialized modules on top of it.
This should be backported up to 2.6.
QUIC MUX implements several APIs to interface with stream, quic-conn and
app-ops layers. It is planned to better separate these roles, possibly
by using several files.
The first step is to extract QUIC MUX traces into a dedicated source
file. This will allow reusing traces in multiple files.
The main objective is to be
able to support both TCP and HTTP proxy mode with a common base and add
specialized modules on top of it.
This should be backported up to 2.6.
A qcs instance free may be postponed in stream detach operation if the
stream is not locally closed. This condition is there to finish
transferring data still present in the Tx buffer. Once all data have been
emitted to the quic-conn layer, the qcs instance can be released.
However, the stream is only closed locally if HTX EOM has been seen or
it has been reset. In case the transfer finished without EOM, a
detached qcs won't be freed even if there is no more activity on it.
This bug was not reproduced but was found by code analysis. Its precise
impact is unknown but it should not cause any leak as all qcs instances
are freed with their parent qcc connection : this should eventually happen
on MUX timeout or QUIC idle timeout.
To adjust this, condition to mark a stream as locally closed has been
extended. On qcc_streams_sent_done() notification, if its Tx buffer has
been fully transmitted, it will be closed if either FIN STREAM was set
or the stream is detached.
This must be backported up to 2.6.
Some tables are currently used to decode bit blocks and lengths. We do
see such lookups in perf top. We have 4 512-byte tables and one 64-byte
one. Looking closer, the second half of the table (length) has so few
variations that most of the time it will be computed in a single "if",
and never more than 3. This alone allows to cut the tables in half. In
addition, one table (bits 15-11) is only 32-element long, while another
one (bits 11-4) starts at 0x60, so we can merge the two as they do not
overlap, and further save size. We're now down to 4 256-entries tables.
This is visible in h3 and h2 where the max request rate is slightly higher
(e.g. +1.6% for h2). The huff_dec() function got slightly larger but the
overall code size shrunk:
$ nm --size haproxy-before | grep huff_dec
000000000000029e T huff_dec
$ nm --size haproxy-after | grep huff_dec
0000000000000345 T huff_dec
$ size haproxy-before haproxy-after
text data bss dec hex filename
7591126 569268 2761348 10921742 a6a70e haproxy-before
7591082 568180 2761348 10920610 a6a2a2 haproxy-after
The locally defined static variables 'httpclient_srv_raw' and
'httpclient_srv_ssl' are not used anywhere in the source code,
except that they are set in the httpclient_precheck() function.
nb_hreq is a counter on qcc for active HTTP requests. It is incremented
for each qcs where a full HTTP request was received. It is decremented
when the stream is closed locally :
- on HTTP response fully transmitted
- on stream reset
A bug will occur if a stream is reset without having processed a full
HTTP request. nb_hreq will be decremented whereas it was not
incremented. This will lead to a crash when building with
DEBUG_STRICT=2. If BUG_ON_HOT is not active, the nb_hreq counter will wrap,
which may break the timeout logic for the connection.
This bug was triggered on haproxy.org. It can be reproduced by
simulating the reception of a STOP_SENDING frame instead of a STREAM one
by patching qc_handle_strm_frm() :
+ if (quic_stream_is_bidi(strm_frm->id))
+ qcc_recv_stop_sending(qc->qcc, strm_frm->id, 0);
+ //ret = qcc_recv(qc->qcc, strm_frm->id, strm_frm->len,
+ // strm_frm->offset.key, strm_frm->fin,
+ // (char *)strm_frm->data);
To fix this bug, a qcs is now flagged with a new QC_SF_HREQ_RECV flag. It
is set when the full HTTP request is received. When the stream is closed
locally, nb_hreq will be decremented only if this flag was set, as in the
sketch below.
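A minimal illustration of the guarded decrement (the surrounding function
and field layout are assumptions):
  /* only decrement the active request counter if this stream ever
   * accounted for a fully received request */
  if (qcs->flags & QC_SF_HREQ_RECV)
          --qcs->qcc->nb_hreq;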
This must be backported up to 2.6.
This commit adds a new command line option -dC to dump the configuration
file. An optional key may be appended to -dC in order to produce an
anonymized dump using this key. The anonymizing process uses the same
algorithm as the CLI so that the same key will produce the same hashes
for the same identifiers. This way an admin may share an anonymized
extract of a configuration to match against live dumps. Note that key 0
will not anonymize the output. However, in any case, the configuration
is dumped after tokenizing, thus comments are lost.
Modify proxy.c in order to anonymize the following confidential data on
commands 'show servers state' and 'show servers conn':
- proxy name
- server name
- server address
Modify stream.c in order to hash the following confidential data if the
anonymized mode is enabled:
- configuration elements such as frontend/backend/server names
- IP addresses
In order to allow users to dump internal states using a specific key
without changing the global one, we're introducing a key in the CLI's
appctx. This key is preloaded from the global one when "set anon on"
is used (and if none exists, a random one is assigned). And the key
can optionally be assigned manually for the whole CLI session.
A "show anon" command was also added to show the anon state, and the
current key if the user has sufficient permissions. In addition, a
"debug dev hash" command was added to test the feature.
Add a uint32_t key in global to hash words with it. A new CLI command
'set global-key <key>' was added to change the global anonymizing key.
The global key may also be set in the configuration using the global
"anonkey" directive. For now this key is not used.
These macros and functions will be used to anonymize strings by producing
a short hash. This will allow matching config elements against dump elements
without revealing the original data. This will later be used to anonymize
configuration parts and CLI command output. For now only strings, identifiers
and addresses are supported, but the model is easily extensible.
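As a rough illustration of the principle only (the real helper names and
hash algorithm may differ), a keyed short hash could look like this:

#include <stdint.h>
#include <stddef.h>

/* FNV-1a style hash mixed with the anonymizing key so the same key always
 * maps the same identifier to the same short value */
static uint32_t anon_hash(uint32_t key, const char *str, size_t len)
{
        uint32_t h = 2166136261u ^ key;

        for (size_t i = 0; i < len; i++) {
                h ^= (unsigned char)str[i];
                h *= 16777619u;
        }
        return h;
}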
Ilya reported in issue #1816 a build warning on armhf (promoted to error
here since -Werror):
src/fd.c: In function fd_rm_from_fd_list:
src/fd.c:209:87: error: passing argument 3 of __ha_cas_dw discards volatile qualifier from pointer target type [-Werror=discarded-array-qualifiers]
209 | unlikely(!_HA_ATOMIC_DWCAS(((long *)&fdtab[fd].update), (uint32_t *)&cur_list.u32, &next_list.u32))
| ^~~~~~~~~~~~~~
This happens only on such an architecture because the DWCAS requires the
pointer not the value, and gcc seems to be needlessly picky about reading
a const from a volatile! This may safely be backported to older versions.
Ed Hein reported in github issue #1856 some occasional watchdog panics
in 2.4.18 showing extreme contention on the proxy's lock while the libc
was in malloc()/free(). One cause of this problem is that we call free()
under the proxy's lock in proxy_capture_error(), which makes no sense
since if we can free the object under the lock after it's been detached,
we can also free it after releasing the lock (since it's not referenced
anymore).
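The general pattern, sketched here with a plain pthread mutex purely for
illustration (not the haproxy locking macros, and the field name is
hypothetical):

#include <pthread.h>
#include <stdlib.h>

struct error_snapshot { int dummy; };

struct px {
        pthread_mutex_t lock;
        struct error_snapshot *invalid_req;   /* hypothetical field */
};

/* detach the old snapshot under the lock, free it after releasing it */
static void capture_error(struct px *p, struct error_snapshot *new_es)
{
        struct error_snapshot *old;

        pthread_mutex_lock(&p->lock);
        old = p->invalid_req;
        p->invalid_req = new_es;
        pthread_mutex_unlock(&p->lock);

        free(old);   /* safe: no longer referenced by the proxy */
}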
This should be backported to all relevant versions, likely all
supported ones.
This fixes 4 tiny and harmless typos in mux_quic.c, quic_tls.c and
ssl_sock.c. Originally sent via GitHub PR #1843.
Signed-off-by: cui fliter <imcusg@gmail.com>
[Tim: Rephrased the commit message]
[wt: further complete the commit message]
When calling 'add server' with a hostname from the cli (runtime),
str2sa_range() does not resolve hostname because it is purposely
called without PA_O_RESOLVE flag.
This leads to 'srv->addr_node.key' being NULL. According to Willy it
is fine behavior, as long as we handle it properly, and is already
handled like this in srv_set_addr_desc().
This patch fixes GH #1865 by adding an extra check before inserting
'srv->addr_node' into 'be->used_server_addr'. Insertion and removal
will be skipped if 'addr_node.key' is NULL.
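A minimal sketch of such a guard (the tree insertion helper shown is an
assumption):
  /* skip the used_server_addr insertion when the key could not be
   * resolved; the matching removal is guarded the same way */
  if (srv->addr_node.key)
          ebis_insert(&be->used_server_addr, &srv->addr_node);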
It must be backported to 2.6 and 2.5 only.
A stream is considered as remotely closed once we have received all the
data with the FIN bit set.
The condition to close the stream was wrong. In particular, if we
receive an empty STREAM frame with the FIN bit set, this would have closed
the stream even if we had not yet received all the data. The
condition is now adjusted to ensure that Rx buffer contains all the data
up to the stream final size.
In most cases, this bug is harmless. However, if compiled with
DEBUG_STRICT=2, a BUG_ON_HOT crash would have been triggered if close is
done too early. This was most notably the case sometimes on interop test
suite with quinn or kwik clients. This can also be artificially
reproduced by simulating reception of an empty STREAM frame with FIN bit
set in qc_handle_strm_frm() :
+ if (strm_frm->fin) {
+ qcc_recv(qc->qcc, strm_frm->id, 0,
+ strm_frm->len, strm_frm->fin,
+ (char *)strm_frm->data);
+ }
ret = qcc_recv(qc->qcc, strm_frm->id, strm_frm->len,
strm_frm->offset.key, strm_frm->fin,
(char *)strm_frm->data);
This must be backported up to 2.6.
Small cleanup on snd_buf for application protocol layer.
* do not export h3_snd_buf
* replace stconn by a qcs argument. This is better as h3/hq-interop only
uses the qcs instance.
This should be backported up to 2.6.
The same was performed for the H2 multiplexer. H1C and H1S flags are moved
in a dedicated header file. It will be mainly used to be able to decode
mux-h1 flags from the flags utility.
In this patch, we only move the flags to mux_h1-t.h.
H3 SETTINGS emission has recently been delayed. The idea is to send it
with the first STREAM to reduce sendto syscall invocation. This was
implemented in the following patch :
3dd79d378c
MINOR: h3: Send the h3 settings with others streams (requests)
This patch works fine under nominal conditions. However, it will cause a
crash if a HTTP/3 connection is released before having sent any data,
for example when receiving an invalid first request. In this case,
qc_release will first free qcc.app_ops HTTP/3 application protocol layer
via release callback. Then qc_send is called to emit any closing frames
built by app_ops release invocation. However, in qc_send, as no data has
been sent, it will try to complete the application layer protocol
initialization, with a SETTINGS emission for HTTP/3. Thus, qcc.app_ops is
reused, which is invalid as it has been just freed. This will cause a
crash with h3_finalize in the call stack.
This bug can be reproduced artificially by generating incomplete HTTP/3
requests. This will in time trigger the http-request timeout without any
data sent. This is done by editing the qc_handle_strm_frm function.
- ret = qcc_recv(qc->qcc, strm_frm->id, strm_frm->len,
+ ret = qcc_recv(qc->qcc, strm_frm->id, strm_frm->len - 1,
strm_frm->offset.key, strm_frm->fin,
(char *)strm_frm->data);
To fix this, the application layer closing API has been adjusted to be done
in two steps. A new shutdown callback is implemented: it is used by the
HTTP/3 layer to generate the GOAWAY frame in the qc_release prologue.
The application layer context qcc.app_ops is then freed later in qc_release
via the release operation, which is now only used to free app layer
resources. This fixes the problem as the intermediary qc_send
invocation will be able to reuse app_ops before it is freed.
This patch fixes the crash, but it would be better to adjust H3 SETTINGS
emission in case of early connection closing : in this case, there is no
need to send it. This should be implemented in a future patch.
This should fix the crash recently experienced by Tristan in github
issue #1801.
This must be backported up to 2.6.
With quicTLS the set_encryption_secrets callback is always called with
both the read_secret and the write_secret.
However this is not the case with libreSSL, which uses the
set_read_secret()/set_write_secret() mechanism. It still provides the
set_encryption_secrets() callback, which is called with a NULL
parameter for the write_secret during the read, and for the read_secret
during the write.
The secrets exchange was not designed in haproxy to be called separately for
read and write, so this patch allows calls with the read or write key set to
NULL.
httpclient_new_from_proxy() is a variant of httpclient_new() which
allows creating the requests from a different proxy.
The proxy and its 2 servers are now stored in the httpclient structure.
The proxy must have been created with httpclient_create_proxy() to be
used.
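A hypothetical usage sketch, assuming httpclient_new_from_proxy() mirrors
httpclient_new() with the proxy as an extra argument (the exact signatures
are not shown in this message):
  /* create a dedicated proxy for the httpclient, then issue a request
   * through it; names and signatures are assumptions */
  struct proxy *px = httpclient_create_proxy("<my-httpclient>");
  struct httpclient *hc = httpclient_new_from_proxy(px, &my_caller,
                                                    HTTP_METH_GET,
                                                    ist("https://example.com/"));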
The httpclient_postcheck() callback will finish the initialization of
all proxies created with PR_CAP_HTTPCLIENT.
httpclient_create_proxy() is a function which creates a proxy that could
be used for the httpclient. It will allocate a proxy, a raw server and
an ssl server.
This patch moves most of the code from httpclient_precheck() into a
generic function httpclient_create_proxy().
The proxy will have the PR_CAP_HTTPCLIENT capability.
This could be used for specific httpclient instances that need
different proxy settings.
The init of the tcp sink, particularly for SSL, was done
too early in the code, during parsing, and this can cause
a crash, especially if nbthread was not configured.
This was detected by William using ASAN on a new regtest
on log forward.
This patch adds the 'struct proxy' created for a sink
to a list, and this list is now submitted to the same init
code as the main proxies list or the log_forward's proxies
list. Doing this, we are assured to use the right init sequence.
It also removes the SSL init code from the post section parsing.
This patch should be backported as far as v2.2
Note: this fix uses 'goto' labels created by commit
'BUG/MAJOR: log-forward: Fix log-forward proxies not fully initialized'
but this code didn't exist before v2.3 so this patch needs to be
adapted for v2.2.
Originally in 1.8 we wanted to have an independent mux that could possibly
be disabled and would not impose dependencies on the outside. Everything
would fit into a single C file and that was fine.
Nowadays muxes are unavoidable, and not being able to easily inspect them
from outside is sometimes a bit of a pain. In particular, the flags utility
still cannot be used to decode their flags.
As a first step towards this, this patch moves the flags and enums to
mux_h2-t.h, as well as the two state decoding inline functions. It also
dropped the H2_SS_*_BIT defines that nobody uses. The mux_h2.c file remains
the only one to include that for now.
Please refer to GH #1859 for more info.
Coverity suspected improper proxy pointer handling.
Without the fix it is considered safe for the moment, but it might not
be the case in the future as we want to keep the ability to have
isolated listeners.
Making sure stop_listener(), pause_listener(), resume_listener()
and listener_release() functions make proper use
of px pointer in that context.
No need for backport except if multi-connection protocols (ie:FTP)
were to be backported as well.
Since this counter was added, it was incremented at the wrong place for
client streams. It was incremented when the stream-connector (formerly the
conn-stream) was created while it should be done when the H1 stream is
created. Thus, on parsing error, on H1>H2 upgrades or TCP>H1 upgrades, the
counter is not incremented. However, it is always decremented when the H1
stream is destroyed.
On the backend side, there is no issue.
This patch must be backported to 2.6.
As reported by Ilya and Coverity in issue #1858, since recent commit
eea152ee6 ("BUG/MINOR: signals/poller: ensure wakeup from signals")
which removed the test for the global signal flag from the pollers'
loop, the remaining "wake" flag doesn't need to be tested since it
already participates to zeroing the wait_time and will be caught
on the previous line.
Let's just remove that test now.
This patch addresses the issue #1626.
Adding support for PR_FL_PAUSED flag in the function stats_fill_fe_stats().
The command 'show stat' now properly reports a disabled frontend
using "PAUSED" state label.
This patch depends on the following commits:
- 7d00077fd5 "BUG/MEDIUM: proxy: ensure pause_proxy()
and resume_proxy() own PROXY_LOCK".
- 001328873c "MINOR: listener: small API change"
- d46f437de6 "MINOR: proxy/listener: support for additional PAUSED state"
It should be backported to 2.6, 2.5 and 2.4
This patch is a prerequisite for #1626.
Adding PAUSED state to the list of available proxy states.
The flag is set when the proxy is paused at runtime (pause_listener()).
It is cleared when the proxy is resumed (resume_listener()).
It should be backported to 2.6, 2.5 and 2.4
A minor API change was performed in listener(.c/.h) to restore consistency
between stop_listener() and (resume/pause)_listener() functions.
LISTENER_LOCK was never locked prior to calling stop_listener():
lli variable hint is thus not useful anymore.
Added PROXY_LOCK locking in (resume/pause)_listener() functions
with related lpx variable hint (prerequisite for #1626).
It should be backported to 2.6, 2.5 and 2.4
There was a race involving hlua_proxy_* functions
and some proxy management functions.
pause_proxy() and resume_proxy() can be used directly from lua code,
but that could lead to some race as lua code didn't make sure PROXY_LOCK
was owned before calling the proxy functions.
This patch makes sure it won't happen again elsewhere in the code
by locking PROXY_LOCK directly in resume and pause proxy functions
so that it's not the caller's responsibility anymore.
(based on stop_proxy() behavior that was already safe prior to the patch)
This should be backported to stable series.
Note that the API will likely differ < 2.4
Add self-wake in signal_handler() to fix a race condition with a signal
coming in between checking signal_queue_len and entering polling sleep.
The changes in commit 43c891dda ("BUG/MINOR: signals/poller: set the
poller timeout to 0 when there are signals") were insufficient.
Move the signal_queue_len check from the poll implementations to
run_poll_loop() to keep that logic in one place.
The poll loops are terminated either by the parameter wake being set or
wake up due to a write to their poller_wr_pipe by wake_thread() in
signal_handler().
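A rough sketch of the self-wake (wake_thread() and poller_wr_pipe are the
helpers named above; the handler body is only illustrative):

void signal_handler(int sig)
{
        /* ... enqueue the signal and bump signal_queue_len ... */
        wake_thread(tid);   /* writes to poller_wr_pipe, interrupting the poller */
}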
This fixes issue #1841.
Must be backported in every stable version.
This is the ->finalize application callback which prepares the unidirectional STREAM
frames for the h3 settings and wakes up the mux I/O handler to send them. As haproxy is
at the same time always waiting for the client request, this makes haproxy
call sendto() to send only about 20 bytes of stream data. Furthermore, in case
of heavy loss, this gives short h3 requests fewer chances to succeed.
Drawback: as at this time the mux sends its streams by their IDs ascending order
the stream 0 is always embedded before the unidirectional stream 3 for h3 settings.
Nevertheless, as these settings may be lost and received after other h3 request
streams, this is permitted by the RFC.
Perhaps there is a better way to do this. This will have to be checked with Amaury.
Must be backported to 2.6.
This was due to a missing check in h3_trace() about the first argument
presence (connection) and h3_parse_settings_frm() which calls TRACE_LEAVE()
without any argument. Then this argument was dereferenced.
Must be backported to 2.6
<qc> variable was confused with <qel>. The consequence was that it was
always the same packet number space which was displayed: the first one (or
the Initial packet number space).
Must be backported to 2.6.
It is possible to speed up the handshake completion, but only once
per connection, as mentioned in RFC 9002 "6.2.3. Speeding up Handshake Completion".
Add a flag to prevent this process from being run several times
(see https://www.rfc-editor.org/rfc/rfc9002#name-speeding-up-handshake-compl).
Must be backported to 2.6.
When receiving a signal before entering the poller, and without any
activity in the process, the poller will be entered with a timeout
calculated without checking the signals.
Since commit 4f59d3 ("MINOR: time: increase the minimum wakeup interval
to 60s") the issue is much more visible because it could be stuck for
60s.
When in mworker mode, if a worker quits and the SIGCHLD signal is delivered
at the right time to the master, the latter could be stuck for the duration of
the timeout.
This should fix issue #1841
Must be backported in every stable version.
By default we now dump stats between caller and callee, but by
specifying "aggr" on the command line, stats get aggregated by
callee again as it used to be before the feature was available.
It may sometimes be helpful when comparing total call counts,
though that's about all.
The following task/tasklet handlers often appear in "show profiling tasks"
but were not resolved since they are static:
qc_io_cb, quic_conn_app_io_cb, process_timer,
quic_accept_run, qc_idle_timer_task
This commit simply exports them so they can be resolved now. "process_timer"
was a bit too generic and was renamed to qc_process_timer.
The function appears like this in "show profiling tasks", so let's export
it:
function calls cpu_tot cpu_avg lat_tot lat_avg
main+0x1463f0 92 77.28us 839.0ns 2.018ms 21.93us <- wake_expired_tasks@src/task.c:429 task_drop_running
This removes all the hard-coded 8-bit and 256 entries to use a pair of
macros instead so that we can more easily experiment with larger table
sizes if needed.
Now that ->wake_date is common to tasks and tasklets, we don't need
anymore to carry a duplicate control block to read and update it for
tasks and tasklets. And given that this code was present early in the
if/else fork between tasks and tasklets, taking it out of the block
allows to move the task part into a more visible "else" branch that
also allows to factor the epilogue that resets th_ctx->current and
updates profile_entry->cpu_time, which also used to be duplicated.
Overall, doing just that saved 253 bytes in the function, or ~1/6,
which is not bad considering that it's on a hot path. And the code
got much more readable.
The memstats code currently defines its own file/function/line number,
type and extra pointer. We don't need to keep them separate and we can
easily replace them all with just a struct ha_caller. Note that the
extra pointer could be converted to a pool ID stored into arg8 or
arg32 and be dropped as well, but this would first require to define
IDs for pools (which we currently do not have).
It was a mistake to put these two fields in the struct task. This
was added in 1.9 via commit 9efd7456e ("MEDIUM: tasks: collect per-task
CPU time and latency"). These fields are used solely by streams in
order to report the measurements via the lat_ns* and cpu_ns* sample
fetch functions when task profiling is enabled. For the rest of the
tasks, this is pure CPU waste when profiling is enabled, and memory
waste 100% of the time, as the point where these latencies and usages
are measured is in the profiling array.
Let's move the fields to the stream instead, and have process_stream()
retrieve the relevant info from the thread's context.
The struct task is now back to 120 bytes, i.e. almost two cache lines,
with 32 bits still available.
When task profiling is enabled, the reported CPU time for short requests
and responses (e.g. redirect) is always zero in the logs, because
process_stream() is only called once and the CPU time is measured after
it returns. This is particularly annoying when dealing with denies and in
general anything that deals with parasitic traffic because it can be
difficult to figure where the CPU is spent.
The solution taken in this patch consists in having process_stream()
update the cpu time itself before logging and quitting. It's very simple.
It will not take into account the time taken to produce the log nor
freeing the stream, but that's marginal compared to always logging zero.
The task's wake_date is also reset so that the scheduler doesn't have to
perform these operations again. This is dependent on the following patch:
MINOR: sched: store the current profile entry in the thread context
It should be backported to 2.6 as it does help for troubleshooting.
The profile entry that corresponds to the current task/tasklet being
profiled is now stored into the thread's context. This will allow it
to be accessed from the tasks themselves. This is needed for an upcoming
fix.
When task profiling is enabled, the scheduler can measure and report
the cumulated time spent in each task and their respective latencies. But
this was wrong for tasks with few wakeups as well as for self-waking ones,
because the call date needed to measure how long it takes to process the
task is retrieved in the task itself (->wake_date was turned to the call
date), and we could face two conditions:
- a new wakeup while the task is executing would reset the ->wake_date
field before returning and cause abnormally low values to be reported;
that was likely the case for task_run_applet for self-waking applets;
- when the task dies, NULL is returned and the call date couldn't be
retrieved, so that CPU time was not being accounted for. This was
particularly visible with process_stream() which is usually called
only twice per request, and whose time was systematically halved.
The cleanest solution here is to keep in mind that the scheduler already
uses quite a bit of local context in th_ctx, and place the intermediary
values there so that they cannot vanish. The wake_date has to be reset
immediately once read, and only its copy is used along the function. Note
that this must be done both for tasks and tasklets, and that until recently
tasklets were also able to report wrong values due to their sole dependency
on TH_FL_TASK_PROFILING between tests.
One nice benefit for future improvements is that such information will now
be available from the task without having to be stored into the task itself
anymore.
Since the tasklet part was computed on wrapping 32-bit arithmetics and
the task one was on 64-bit, the values were now consistently moved to
32-bit as it's already largely sufficient (4s spent in a task is more
than twice what the watchdog would tolerate). Some further cleanups might
be necessary, but the patch aimed at staying minimal.
Task profiling output after 1 million HTTP requests previously looked like
this:
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
h1_io_cb 2012338 4.850s 2.410us 12.91s 6.417us
process_stream 2000136 9.594s 4.796us 34.26s 17.13us
sc_conn_io_cb 2000135 1.973s 986.0ns 30.24s 15.12us
h1_timeout_task 137 - - 2.649ms 19.34us
accept_queue_process 49 152.3us 3.107us 321.7yr 6.564yr
main+0x146430 7 5.250us 750.0ns 25.92us 3.702us
srv_cleanup_idle_conns 1 559.0ns 559.0ns 918.0ns 918.0ns
task_run_applet 1 - - 2.162us 2.162us
Now it looks like this:
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
h1_io_cb 2014194 4.794s 2.380us 13.75s 6.826us
process_stream 2000151 20.01s 10.00us 36.04s 18.02us
sc_conn_io_cb 2000148 2.167s 1.083us 32.27s 16.13us
h1_timeout_task 198 54.24us 273.0ns 3.487ms 17.61us
accept_queue_process 52 158.3us 3.044us 409.9us 7.882us
main+0x1466e0 18 16.77us 931.0ns 63.98us 3.554us
srv_cleanup_toremove_conns 8 282.1us 35.26us 546.8us 68.35us
srv_cleanup_idle_conns 3 149.2us 49.73us 8.131us 2.710us
task_run_applet 3 268.1us 89.38us 11.61us 3.871us
Note the two-fold difference on process_stream().
This feature is essentially used for debugging so it has extremely limited
impact. However it's used quite a bit more in bug reports and it would be
desirable that at least 2.6 gets this fix backported. It depends on at least
these two previous patches which will then also have to be backported:
MINOR: task: permanently enable latency measurement on tasklets
CLEANUP: task: rename ->call_date to ->wake_date
This field is misnamed because its real and important content is the
date the task was woken up, not the date it was called. It temporarily
holds the call date during execution but this remains confusing. In
fact before the latency measurements were possible it was indeed a call
date. Thus it will now be called wake_date.
This change is necessary because a subsequent fix will require the
introduction of the real call date in the thread ctx.
When tasklet latency measurement was enabled in 2.4 with commit b2285de04
("MINOR: tasks: also compute the tasklet latency when DEBUG_TASK is set"),
the feature was conditioned on DEBUG_TASK because the field would add 8
bytes to the struct tasklet.
This approach was not a very good idea because the struct ends on an int
anyway thus it does finish with a 32-bit hole regardless of the presence
of this field. What is true however is that adding it turned a 64-byte
struct to 72-byte when caller debugging is enabled.
This patch revisits this with a minor change. Now only the lowest 32
bits of the call date are stored, so they always fit in the remaining
hole, and this allows to remove the dependency on DEBUG_TASK. With
debugging off, we're now seeing a 48-byte struct, and with debugging
on it's exactly 64 bytes, thus still exactly one cache line. 32 bits
allow a latency of 4 seconds on a tasklet, which already indicates a
completely dead process, so there's no point storing the upper bits at
all. And even in the event it would happen once in a while, the lost
upper bits do not really add any value to the debug reports. Also, now
one tasklet wakeup every 4 billion will not be sampled due to the test
on the value itself. Similarly we just don't care, it's statistics and
the measurements are not 9-digit accurate anyway.
Add QUIC support to the ssl_sock_switchctx_cbk() variant used only when
no client_hello_cb is available.
This could be used with libreSSL implementation of QUIC for example.
It also works with quictls when HAVE_SSL_CLIENT_HELLO_CB is removed from
openssl-compat.h
When building HAProxy with USE_QUIC and libressl 3.6.0, the
ssl_sock_switchctx_cbk symbol is not found because libressl does not
implement the client_hello_cb.
A ssl_sock_switchctx_cbk version for the servername callback is available
but wasn't exported correctly.
This verification is done by ssl_sock_bind_verifycbk(), which is set at different
locations in the ssl_sock.c code. For QUIC connections, there is a good chance
the connection object is not initialized when entering this function. What must
be accessed is the SSL object to retrieve the connection or quic_conn objects,
then the bind_conf object of the listener. If the connection object is not found,
we try to find the quic_conn object.
Modify the ssl_sock_dump_errors() interface, which takes a connection object as
parameter, to also be passed a quic_conn object as parameter. Again this function
first tries to access the connection object if not NULL, or the quic_conn object otherwise.
There is a remaining thing to do for QUIC: store the certificate verification error
code as it is currently stored in the connection object. This error code is at least
used by the "bc_err" and "fc_err" sample fetches.
There are chances this bug is in relation with GH #1851. Thank you to @tasavis
for the report.
Must be merged into 2.6.
On frontend side, "h1-case-adjust-bogus-client" option is now supported in
TCP mode. It is important to be able to adjust the case of response headers
when a connection is routed to an HTTP backend. In this case, the client
connection is upgraded to H1.
On backend side, "h1-case-adjust-bogus-server" option is now also supported
in TCP mode to be able to perform HTTP health-checks with a case adjustment
of the request headers.
This patch should be backported as far as 2.0.
This trick has been deprecated since the health-check refactoring. It is now
invalid. It means the following line will trigger an error during the
configuration parsing:
option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
It must be replaced by:
option httpchk OPTIONS * HTTP/1.1
http-check send hdr Host www
ssl_tlsext_ticket_key_cb() is called when the "tls-ticket-keys" option is used on a
"bind" line. It needs to have access to the TLS ticket keys which have been
stored into the listener bind_conf struct. The fix consists in initializing the
<ref> variable (references to TLS secret keys) the correct way when this callback
is called for a QUIC connection. The bind_conf struct is stored into the quic_conn
object (QUIC connection).
This issue may be in relation with GH #1851. Thank you to @tasavis for the report.
Must be backported to 2.6.
Obviously, frames which are duplicated from others must not be retransmitted if
the original frame they were duplicated from was already acknowledged.
This should have been detected by qc_build_frms() which skips such frames,
except if the QUIC xprt does really bad things which are not supported by
the upper layer. This will have to be checked with Amaury.
To prevent the retransmission of these frames, which leads to crashes as reported by
hpn0t0ad with this gdb backtrace in GH #1809 where the frame builder tries to copy a huge
number of bytes to the packet buffer:
Thread 7 (Thread 0x7fddf373a700 (LWP 13)):
#0 __memmove_sse2_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:520
No locals.
#1 0x000055b17435705e in quic_build_stream_frame (buf=0x7fddf372ef78, end=<optimized out>, frm=0x7fdde08d3470, conn=<optimized out>) at src/quic_frame.c:515
to_copy = 18446697703428890384
stream = 0x7fdde08d3490
wrap = <optimized out>
which matches this part of quic_frame.c code:
wrap = (const unsigned char *)b_wrap(stream->buf);
if (stream->data + stream->len > wrap) {
size_t to_copy = wrap - stream->data;
memcpy(*buf, stream->data, to_copy);
*buf += to_copy;
we release the impacted frames as soon as possible, as there is really no need
to retransmit such frames.
Thank you to @hpn0t0ad for having provided us with useful traces in github
issue #1809.
Must be backported in 2.6.
Commit 5c83e3a156 made some adjustments
to clarify which TCP_INFO information is supported by each respective
OS.
There was a comment like so:
Note that fc_rtt and fc_rttvar are supported on any OS that has TCP_INFO,
not just linux/freebsd/netbsd, so we continue to expose them unconditionally.
But the diff didn't do so in a consistent manner.
In github issue #1850, Christian Ruppert reported a case of crash in
2.6 when failing to parse some http rules. This started to happen
with 2.6 commit dd7e6c6 ("BUG/MINOR: http-rules: completely free
incorrect TCP rules on error") but has some of its roots in 2.2
commit 2eb539687 ("MINOR: http-rules: Add release functions for
existing HTTP actions").
The cause is that when the release function is set for HTTP actions,
the rule->arg.http.fmt list head is not yet initialized, hence is
NULL, thus the release function crashes when it tries to iterate over
it. In fact this code was initially not written with the perspective
of releasing such elements upon error, so the arg list initialization
happened after error checking.
This patch simply moves the list initialization to just after setting the
release pointer, as illustrated in the sketch below.
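A minimal illustration of the ordering fix (the release callback name is an
assumption):
  /* set the release callback and immediately initialize the list it
   * iterates over, so an early error path never walks a NULL list head */
  rule->release_ptr = release_http_action;   /* hypothetical callback name */
  LIST_INIT(&rule->arg.http.fmt);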
This patch must be backported to 2.6 since the problem is visible
there. It could be backported to 2.5 but the issue is not triggered
there without the first mentioned patch above that landed in 2.6, so
it will not bring any obvious benefit.
We now have two functions, one for dumping connections and the other
one for dumping the streams. This will permit using it from show_sd.
A few optional line breaks were inserted where relevant to keep lines
homogenous when a prefix is passed.
It's very limited but at least provides the very basic info about QCS and
QCC when issuing "show sess all":
scf=0x7fa9642394a0 flags=0x00000080 state=EST endp=CONN,0x7fa9642351f0,0x02001001 sub=3
> qcs=0x7fa9642351f0 .flg=0x5 .id=396 .st=HCR .ctx=0x7fa9642353f0, .err=0
> qcc=0x7fa96405ce20 .flg=0 .nbsc=100 .nbhreq=100, .task=0x7fa964054260
co0=0x7fa96405cd50 ctrl=quic4 xprt=QUIC mux=QUIC data=STRM target=LISTENER:0x328c530
flags=0x00200300 fd=-1 fd.state=00 updt=0 fd.tmask=0x0
It will need to be improved but it's better than nothing already. This
should be backported to 2.6 if the other dumps are backported.
With this, it now becomes possible to see the state of each H2 stream from
"show sess all". Lines are still too long and need to be split, but that's
for another patch.
This helper will be called for muxes that provide it and will be used
to let the mux provide extra information about the stream attached to
a stream descriptor. A line prefix is passed in argument so that the
mux is free to break long lines without breaking indent. No prefix
means no line breaks should be produced (e.g. for short dumps).
The function will be reusable to dump streams, so let's extract it.
Note that due to "last_h2s" being originally printed as a prefix for
the stream dump, now the pointer is displayed by the caller instead.
When an appctx is found looping over itself, we report a number of info
but not the pointers to the definition nor the handler, which can be quite
handy in some cases. Let's add them and try to decode the symbol.
Commit 1776ffb97 ("MINOR: mux-fcgi: make the "show fd" helper also decode
the fstrm subscriber when known") improved the output of "show fd" for the
FCGI mux, but the output is sent to the trash buffer instead of the msg
argument. It turns out that this has no effect right now as the caller
passes the trash but this is risky.
This should be backported to 2.4.
Commit 150c4f8b7 ("MINOR: mux-h1: make the "show fd" helper also decode
the h1s subscriber when known") improved the output of "show fd" for the
H1 mux, but the output is sent to the trash buffer instead of the msg
argument. It turns out that this has no effect right now as the caller
passes the trash but this is risky.
This should be backported to 2.4.
Commit 98e40b981 ("MINOR: mux-h2: make the "show fd" helper also decode
the h2s subscriber when known") improved the output of "show fd" for the
H2 mux, but the output is sent to the trash buffer instead of the msg
argument. It turns out that this has no effect right now as the caller
passes the trash but this is risky.
This should be backported to 2.4.
Since everything is available for this, let's enable ALPN with the
usual "h2,http/1.1" on the https server. This will allow HTTPS requests
to use HTTP/2 when available.
It may be needed to permit to disable this (or to set the string) in
case some client code explicitly checks for the "HTTP/1.1" string, but
since httpclient is quite young it's unlikely that such code already
exists.
The servers were not set with default settings, meaning that a few
settings including the pool_max_delay were not set, thus disabling
connection pools, which is the cause of the fact that keep-alive was
disabled as reported in issue #1831. There might possibly be other
issues pending since all these fields were left to zero.
Note that this patch alone will not fix keep-alive because the applet
does not enforce SE_FL_NOT_FIRST and relies on the default http-reuse
safe, thus if servers are not shared, all requests are considered
first ones and do not reuse existing connections.
In 2.7, commit ecb40b2c3 ("MINOR: backend: always satisfy the first
req reuse rule with l7 retries") addressed this in a more elegant way
by fixing http-reuse to take into account the fact that properly
configured l7 retries provide exactly the capability that reuse safe
was trying to cover, and this patch is suitable for backporting.
This patch should be backported to 2.6 only.
There's a tiny issue in the I/O handler by which both a failed request
emission and missing response data will want to subscribe for more room
on output. That's not correct in that only the case where the request
buffer is full should cause this, the other one should just wait for
incoming data. This could theoretically cause spurious wakeups at
certain key points (e.g. connect() time maybe) though this could not
be reproduced but better fix this while it's easy enough.
It doesn't seem necessary to backport it right now, though this may
have to in case a concrete reproducible case is discovered.
If the caller dies before the server responds, the httpclient can crash
in hc_cli_res_end_cb() when unregistering because it dereferences
hc->caller which was already freed during the caller's unregistration.
The easiest way to reproduce it is by sending twice the following
request on the same CLI connection in expert mode, with httpterm
running on local port 8000:
httpclient GET http://127.0.0.1:8000/?t=600
Note the 600ms delay that's larger than socat's default 500.
The code checks for a NULL everywhere hc->caller is used, but the NULL
check was forgotten in this specific case. It must be placed in the second
half of httpclient_stop_and_destroy() which is responsible for signaling
the client that the caller leaves.
This must be backported to 2.6.
In 1.9-dev, a new flag was introduced on the start line with commit
f1ba18d7b ("MEDIUM: htx: Don't rely on h1_sl anymore except during H1
header parsing") to designate a response message: HTX_SL_F_IS_RESP.
Unfortunately as it was done in parallel to the mux_h2 support for
the backend, it was never integrated there. It was not used by then
so this remained unnoticed for a while.
However the http_client now uses it, and missing that flag prevents
it from using the H2 mux, so let's properly add it.
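As an illustration only (the helper usage is assumed, not copied from the
patch), the flag is simply OR-ed into the start-line flags when the H2 mux
builds a response HTX start line:
  /* mark the HTX start line as a response so consumers such as the
   * http_client can tell requests and responses apart */
  flags |= HTX_SL_F_IS_RESP;
  sl = htx_add_stline(htx, HTX_BLK_RES_SL, flags,
                      ist("HTTP/2.0"), status, ist(""));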
There's no point in backporting this far away, but since the http_client
is fully operational in 2.6 it would make sense to backport this fix at
least there to secure the code.
The frames which are retransmitted by qc_dgrams_retransmit() are duplicated
from sent but not acknowledged packets and added to local frame lists.
Some may not have been sent. If not reattached somewhere (linked to the
connection) they are lost forever (leak). We splice the remaining list
contents to the packet number space frame list to avoid such a situation.
Must be backported to 2.6.
The <force_ack> boolean variable passed to qc_do_build_pkt(), which builds a clear
packet, is there to force this function to build an ACK frame regardless of
other conditions. This is used during the handshake, when we acknowledge every
handshake packet received.
This variable was already taken into account by the local variable <must_ack>,
which is there at least to ignore any other condition than this one: "are
we building a probing packet?". Indeed we do not want to add ACK frames when
we probe the peers. This is to have more chances to embed the new duplicated frames
into other packets without splitting them. So, the test on the <force_ack> boolean
value is useless, silly and breaks the rule which consists in not acknowledging
when probing.
Must be backported to 2.6.
The "first req" rule consists in not delivering a connection's first
request to a connection that's not known for being safe so that we
don't deliver a broken page to a client if the server didn't intend to
keep it alive. That's what's used by "http-reuse safe" particularly.
But the reason this rule was created was precisely because haproxy was
not able to re-emit the request to the server in case of connection
breakage, which is precisely what l7 retries later brought. As such,
there's no reason for enforcing this rule when l7 retries are properly
enabled because such a blank page will trigger a retry and will not be
delivered to the client.
This patch simply checks that the l7 retries are enabled for the 3 cases
that can be triggered on a dead or dying connection (failure, empty, and
timeout), and if all 3 are enabled, then regular idle connections can be
reused.
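A sketch of the check under assumed names (the retry-on flags and the
variable below are assumptions based on the description, not the patch):
  /* if every retry-on case covering a dead or dying connection is enabled,
   * a broken reused connection will be retried transparently, so the
   * "first request" protection brings nothing */
  if ((be->retry_type & PR_RE_CONN_FAILED) &&
      (be->retry_type & PR_RE_DISCONNECTED) &&
      (be->retry_type & PR_RE_TIMEOUT))
          not_first_req = 1;   /* regular idle connections may be reused */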
This could almost be marked as a bug fix because a lot of users relying
on l7 retries do not necessarily think about using http-reuse always due
to the recommendation against it in the doc, while the protection that
the safe mode offers is never used in that mode, and it forces the http
client not to reuse existing persistent connections since it never sets
the "not first" flag.
It could also be decided that the protection is not used either when
the origin is an applet, as in this case this is internal code that
we can decide to let handle the retry by itself (all info are still
present). But at least the httpclient will be happy with this alone.
It would make sense to backport this at least to 2.6 in order to let
the httpclient reuse connections, maybe to older releases if some
users report low reuse counts.
When idle H1 connections cannot be stored into a server pool or are later
evicted, they're often seen closed with a FIN then an RST. The problem is
that this is sufficient to leave them in TIME_WAIT in the local sockets
table and port exhaustion may happen.
The reason is that in h1_release() we rely on h1_shutw_conn() which itself
decides whether to close in silent or normal mode only based on the
H1C_F_ST_SILENT_SHUT flag. This flag is only set by h1_shutw() based on
the requested mode. But when the connection is in the idle list, the mode
ought to always be silent.
What this patch does is to set the flag before trying to add to the idle
list, and remove it after removing from the idle list. This way if the
connection fails to be added or has to be killed, it's closed with an
RST.
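Conceptually (the insertion helper shown is an assumption), the ordering
looks like this:
  /* assume silent close while the connection sits in the idle list, so any
   * eviction closes it with an RST; the flag is cleared when it is reused */
  h1c->flags |= H1C_F_ST_SILENT_SHUT;
  if (!srv_add_to_idle_list(srv, conn, is_safe)) {
          h1_release(h1c);      /* still flagged: closed silently, with RST */
          return;
  }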
This must be backported as far as 2.4. It's not sure whether older
versions need an equivalent.
The PCRE2 JIT support is buggy. If HAProxy is compiled with USE_PCRE2_JIT
option while the PCRE2 library is compiled without the JIT support, any
matching will fail because pcre2_jit_compile() return value is not properly
handled. We must fall back on pcre2_match() if PCRE2_ERROR_JIT_BADOPTION
error is returned.
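A minimal sketch of the fallback described above (error handling trimmed;
the haproxy regex wrapper itself is not shown):

#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>

/* when the library was built without JIT support, pcre2_jit_match() fails
 * with PCRE2_ERROR_JIT_BADOPTION, so fall back on the interpreter */
static int match_with_fallback(pcre2_code *re, PCRE2_SPTR subj, PCRE2_SIZE len,
                               pcre2_match_data *md)
{
        int ret = pcre2_jit_match(re, subj, len, 0, 0, md, NULL);

        if (ret == PCRE2_ERROR_JIT_BADOPTION)
                ret = pcre2_match(re, subj, len, 0, 0, md, NULL);
        return ret;
}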
This patch should fix the issue #1848. It must be backported as far as 2.4.
If the service is rechecked before a reload, that may cause the config
to be parsed twice and file-backed rings to be lost.
Here we make sure that such a ring does contain information before
deciding to rotate it. This way the first process starting after some
writes will cause a rotate but not subsequent ones until new writes
are applied.
An attempt was also made to disable rotations on checks but this was a
bad idea, as the ring is still initialized and this causes the contents
to be lost. The choice of initializing the ring during parsing is
questionable but the config check ought to be as close as possible to a
real start, and we could imagine that the ring is used by some code
during startup (e.g. lua). So this approach was abandoned and config
checks also cause a rotation, as the purpose of this rotation is to
preserve latest information against accidental removal.
ckch_inst_free() unlinks the ckch_inst_link structure but never frees it.
It can't be fixed simply because cli_io_handler_commit_cafile_crlfile()
is using a cafile_entry list to iterate a list of ckch_inst entries
to free. So both cli_io_handler_commit_cafile_crlfile() and
ckch_inst_free() would modify the list at the same time.
In order to let the caller manipulate the ckch_inst_link,
ckch_inst_free() now checks if the element is still attached before
trying to detach and free it.
For this trick to work, the caller needs to do a LIST_DEL_INIT() during
the iteration over the ckch_inst_link.
list_for_each_entry was also replaced by a while (!LIST_ISEMPTY()) on the
head list in cli_io_handler_commit_cafile_crlfile() so the iteration
works correctly, because it could have been stuck on the first detached
element. list_for_each_entry_safe() is not enough to fix the issue since
multiple elements could have been removed.
Must be backported as far as 2.5.
Move these traces to QUIC_EV_CONN_SPPKTS trace event. They were displayed
at a useless location. Make them displayed just after having sent a packet
and when checking the anti-amplification limit.
Useful to diagnose issues in relation with the recovery.
This reverts commit 056ad01d55.
This reverts commit ddd480cbdc.
The architecture is ambiguous here: ckch_inst_free() is detaching and
freeing the "ckch_inst_link" linked list which must be free'd only from
the cafile_entry side.
The problem was also hidden by the fix ddd480c ("BUG/MEDIUM: ssl: Fix a
UAF when old ckch instances are released") which changed the ckchi_link
inner loop to a safe one. However this can't entirely fix the problem
since __ckch_inst_free_locked() could remove several nodes in the
ckchi_link linked list.
This revert voluntarily reintroduces a memory leak before really fixing
the problem.
Must be backported in 2.5 + 2.6.
When old ckch instances are released at the end of the "commit ssl ca-file" or
"commit ssl crl-file" commands, the link is released. But we walk through
the list using the unsafe macro; list_for_each_entry_safe() must be used.
This bug was introduced by commit 056ad01d5 ("BUG/MINOR: ssl: leak of
ckch_inst_link in ckch_inst_free()"). Thus this patch must be backported as
far as 2.5.
The commit 871dd8211 ("BUG/MINOR: tcpcheck: Disable QUICKACK only if data
should be sent after connect") introduced a regression. It removes the test
on the next rule to be able to disable TCP_QUICKACK when only a connect is
performed (so no next rule).
This patch must be backported as far as 2.2.
ckch_inst_free() unlinks the ckch_inst_link structure but never frees it.
It can cause a memory leak upon a ckch_inst_free() done from a CLI
operation.
Bug introduced by commit 4458b97 ("MEDIUM: ssl: Chain ckch instances in
ca-file entries").
Must be backported as far as 2.5.
Commit b0c4827 ("BUG/MINOR: ssl: free the cafile entries on deinit")
introduced a double free.
The node was never removed from the tree before its free.
Fix issue #1836.
Must be backported where b0c4827 was backported. (2.6 for now).
Without such a trace, we do not know when a datagram is sent. Only trace for
the packets inside the datagrams were displayed.
Must be backported to 2.6.
This bug arrived with this commit:
"MINOR: quic: Add reusable cipher contexts for header protection"
haproxy could crash because of missing cipher contexts initializations for
the header protection and draft-v2 Initial secrets. This was due to the fact
that these initializations, both for RX and TX secrets, were done outside of
qc_new_isecs(). The role of this function is definitely to initialize these
cipher contexts in addition to the derived secrets. Indeed this function is called
by qc_new_conn() which initializes the connection, but also by qc_conn_finalize()
which also calls qc_new_isecs() in case a different QUIC version was negotiated
by the peers from the one used by the client for its first Initial packet.
This was reported by "v2" QUIC interop test with at least picoquic as client.
Must be backported to 2.6.
There's no point trying to send() on a socket on which an error was already
reported. This wastes syscalls. Till now it was possible to occasionally
see an attempt to sendto() after epoll_wait() had reported EPOLLERR.
In 2.2, commit 5d7dcc2a8 ("OPTIM: epoll: always poll for recv if neither
active nor ready") was added to compensate for the fact that our iocbs
are almost always asynchronous now and do not have the opportunity to
update the FD correctly. As such, they just perform a wakeup, the FD is
turned to inactive, the tasklet wakes up, performs the I/O, updates the
FD, most of the time this is done within the same polling loop, and the
update cancels itself in the poller without having to switch the FD off
then on.
The issue was that when deciding to claim an FD was active for reads
if it was active for writes, we forgot one situation that unfortunately
causes excessive wakeups: dealing with errors. Indeed, errors are
reported and keep ringing as long as the FD is active for sending even
if the consumer disabled the FD for receiving. Usually this only causes
one extra wakeup for the time it takes to consider a potential write
subscriber and to call it, though with many tasks in a run queue, it
can last a bit longer and be reported more often.
The fix consists in checking that we really want to get more receive
events on this FD, that is:
- that no previous EPOLLERR was reported
- that the FD doesn't carry a sticky error
- that the FD is not shut for reads
With this, after the last epoll_wait() reports EPOLLERR, one last recv()
is performed to flush pending data and the FD is immediately unregistered.
It's probably not needed to backport this as its effects are not much
visible, though it should not harm.
Before, EPOLLERR was seen twice:
accept4(4, {sa_family=AF_INET, sin_port=htons(22314), sin_addr=inet_addr("127.0.0.1")}, [128 => 16], SOCK_NONBLOCK) = 8
accept4(4, 0x261b160, [128], SOCK_NONBLOCK) = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(8, "POST / HTTP/1.1\r\nConnection: close\r\nTransfer-encoding: chunk"..., 16320, 0, NULL, NULL) = 66
socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 9
connect(9, {sa_family=AF_INET, sin_port=htons(8002), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
epoll_ctl(3, EPOLL_CTL_ADD, 8, {events=EPOLLIN|EPOLLRDHUP, data={u32=8, u64=8}}) = 0
epoll_ctl(3, EPOLL_CTL_ADD, 9, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP, data={u32=9, u64=9}}) = 0
epoll_wait(3, [{events=EPOLLOUT, data={u32=9, u64=9}}], 200, 355) = 1
recvfrom(9, 0x25cfb30, 16320, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
sendto(9, "POST / HTTP/1.1\r\ntransfer-encoding: chunked\r\n\r\n", 47, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 47
epoll_ctl(3, EPOLL_CTL_MOD, 9, {events=EPOLLIN|EPOLLRDHUP, data={u32=9, u64=9}}) = 0
epoll_wait(3, [{events=EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLRDHUP, data={u32=9, u64=9}}], 200, 354) = 1
recvfrom(9, "HTTP/1.1 200 OK\r\ncontent-length: 0\r\nconnection: close\r\n\r\n", 16320, 0, NULL, NULL) = 57
sendto(8, "HTTP/1.1 200 OK\r\ncontent-length: 0\r\nconnection: close\r\n\r\n", 57, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 57
->epoll_wait(3, [{events=EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLRDHUP, data={u32=9, u64=9}}], 200, 354) = 1
epoll_ctl(3, EPOLL_CTL_DEL, 9, 0x7ffe0b65fb24) = 0
epoll_wait(3, [{events=EPOLLIN, data={u32=8, u64=8}}], 200, 354) = 1
recvfrom(8, "A\n0123456789\r\n0\r\n\r\n", 16320, 0, NULL, NULL) = 19
close(9) = 0
close(8) = 0
After, EPOLLERR is only seen only once, with one less call to epoll_wait():
accept4(4, {sa_family=AF_INET, sin_port=htons(22362), sin_addr=inet_addr("127.0.0.1")}, [128 => 16], SOCK_NONBLOCK) = 8
accept4(4, 0x20d0160, [128], SOCK_NONBLOCK) = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(8, "POST / HTTP/1.1\r\nConnection: close\r\nTransfer-encoding: chunk"..., 16320, 0, NULL, NULL) = 66
socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 9
connect(9, {sa_family=AF_INET, sin_port=htons(8002), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
epoll_ctl(3, EPOLL_CTL_ADD, 8, {events=EPOLLIN|EPOLLRDHUP, data={u32=8, u64=8}}) = 0
epoll_ctl(3, EPOLL_CTL_ADD, 9, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP, data={u32=9, u64=9}}) = 0
epoll_wait(3, [{events=EPOLLOUT, data={u32=9, u64=9}}], 200, 411) = 1
recvfrom(9, 0x2084b30, 16320, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
sendto(9, "POST / HTTP/1.1\r\ntransfer-encoding: chunked\r\n\r\n", 47, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 47
epoll_ctl(3, EPOLL_CTL_MOD, 9, {events=EPOLLIN|EPOLLRDHUP, data={u32=9, u64=9}}) = 0
epoll_wait(3, [{events=EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLRDHUP, data={u32=9, u64=9}}], 200, 411) = 1
recvfrom(9, "HTTP/1.1 200 OK\r\ncontent-length: 0\r\nconnection: close\r\n\r\n", 16320, 0, NULL, NULL) = 57
sendto(8, "HTTP/1.1 200 OK\r\ncontent-length: 0\r\nconnection: close\r\n\r\n", 57, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 57
epoll_ctl(3, EPOLL_CTL_DEL, 9, 0x7ffc95d46f04) = 0
epoll_wait(3, [{events=EPOLLIN, data={u32=8, u64=8}}], 200, 411) = 1
recvfrom(8, "A\n0123456789\r\n0\r\n\r\n", 16320, 0, NULL, NULL) = 19
close(9) = 0
close(8) = 0
In 2.6-dev4, a fix for truncated response was brought with commit 99bbdbcc2
("BUG/MEDIUM: mux-h1: only turn CO_FL_ERROR to CS_FL_ERROR with empty ibuf"),
trying to address the situation where an error is present at the connection
level but some data are still pending to be read by the stream. However,
this patch did not consider the case where the stream was no longer willing
to read the pending data, resulting in a situation where some aborted
transfers could lead to excessive CPU usage by causing constant stream
wakeups for which no error was reported. This perfectly matches what was
observed and reported in github issue #1842. It's not trivial to reproduce,
but aborting HTTP/1 pipelining in the middle of transfers seems to give
good results (using h2load and Ctrl-C in the middle).
The fix was incorrect as the error should be held only if there were data
that the stream was able to read. This is the approach taken by this patch,
which also checks via SE_FL_EOI | SE_FL_EOS that the stream will be able
to consume the pending data.
Note that the loop was provoked by the attempt by sc_conn_io_cb() itself
to call sc_conn_send() which resulted in a write subscription in
h1_subscribe() which immediately calls a tasklet_wakeup() since the
event is ready, and that it is now stopped by the presence of SE_FL_ERROR
that is checked in sc_conn_io_cb(). It seems that an extra check down the
send() path to refrain from subscribing when the connection is in error
could speed up error detection or at least avoid a risk of loops in this
case, but this is tricky. In addition, there's already SE_FL_ERR_PENDING
that seems more suitable for reporting when there are pending data, but
similarly, it probably isn't checked well enough to be suitable for
backports.
FWIW the issue may (unreliably) be reproduced by chaining haproxy to
httpterm and issuing:
(printf "GET /?s=10g HTTP/1.1\r\n\r\n"; sleep 0.1; printf "\r\n") | \
nc6 --half-close 0 8001 | head -c1000000000 >/dev/null
It's necessary to play with the size of the head command that's supposed
to trigger the error at some point. A variant involving h2load in h1 mode
and multiple pipelined streams, that is stopped with Ctrl-C also tends to
work.
As the fix above was backported as far as 2.0, it would be tempting to
backport this one as far. However tests have shown that the oldest
version that can trigger this issue is 2.5, maybe due to subtle
differences in older ones, so it's probably not worth going further
until an issue is reported. Note that in 2.5 and older, the SE_FL_*
flags are applied on the conn_stream instead, as CS_FL_*.
Special thanks go to Felipe W Damasio for providing lots of detailed data
allowing to quickly spot the root cause of the problem.
applet:getline() and applet:receive() functions for HTTP applets must rely
on the channel flags to detect the end of the message and not on HTX
flags. It means CF_EOI must be used instead of HTX_FL_EOM.
It is important because the HTX flag is transient. Because there is no flag
on HTTP applets to save the info, it is not reliable. However CF_EOI once
set is never removed. So it is safer to rely on it. Otherwise, calls to
these functions hang.
This patch must be backported as far as 2.4.
On a reload, if the previous resync was not finished, the worker that has
just become old must not try to start a new resync. Otherwise, it will compete
with the older workers, slowing down or blocking the resync. Only an up-to-date
worker must try to perform a local resync.
This patch must be backported as far as 2.0 (and maybe to 1.8 too).
When a worker is stopped, the resync timer is used to limit in time the
connection stage to the new worker to perform the local resync. However,
this timer must be stopped when the resync is in progress and it must be
re-armed if the resync is interrupted (for instance because of another
reload). Otherwise, if the resync is a bit long, an old worker may be killed
too early.
This bug was introduced by the commit 160fff665 ("BUG/MEDIUM: peers: limit
reconnect attempts of the old process on reload"). It must be backported as
far as 2.0.
Only the client timeout was set. Nothing prevented a peer applet from stalling
during a connect or while waiting for a message from a remote peer. To avoid any
issue, it is important to also set connection and server timeouts. The
connect timeout is set to 1s and the server timeout is set to 5s.
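In terms of code this roughly boils down to the following, using the proxy
timeout fields (the exact location in the peers initialization code is not
shown):

    /* avoid stalling forever on connect or while waiting for a peer */
    p->timeout.connect = MS_TO_TICKS(1000);  /* 1s */
    p->timeout.server  = MS_TO_TICKS(5000);  /* 5s */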
This patch must be backported to all supported versions.
A bug was introduced by the commit b042e4f6f ("BUG/MAJOR: spoe: properly
detach all agents when releasing the applet"). The fix is not correct. We
really want to know whether the released appctx is the last one or not. It is
important when async mode is used. If there are still running applets, we
just need to remove the reference on the current applet from streams in the
async waiting queue.
With the commit above, in async mode, if there are still running applets, it
will work as expected. Otherwise a processing timeout will be reported for
all these streams. So it is not too bad. But for other modes (sync and
pipelining), the async waiting queue is always empty. If at least one stream
is waiting to send a message, a new applet is created. It is an issue if the
SPOA is unhealthy because the number of running applets may explode.
However, the commit above tried to fix a real issue: when a new SPOE applet is
successfully created, we must remove the reference to the current appctx from
the streams in the async waiting queue.
This patch must be backported as far as 1.8.
Several frames could remain unbuilt in <frm_list>, the list built by
qc_build_frms(), after it stopped at the first building error. Only one frame
was then reinserted in the frame list passed as parameter to qc_do_build_pkt(),
and <frm_list> was spliced to the packet frame list even though its frames were
neither built nor attached to any packet. Such frames had their ->pkt member set
to NULL but were considered as built, then sent, leading to a crash in
qc_release_frm() where ->pkt is dereferenced.
This issue was again reported by useful traces provided by Tristan in GH #1808.
Must be backported to 2.6.
This function must duplicate frames to be resent from packets. Some of
them are still in flight, others have already been detected as lost.
In this case the original frame ->pkt member is NULL.
Add a trace to distinguish these cases.
Thank you to Tristan for having reported this issue in GH #1808.
Must be backported to 2.6.
Fix the resolution in the httpclient when a port is associated to a
domain. The do-resolve action doesn't support a port in its input.
Must be backported to 2.6. Requires the "host_only" converter to be
backported.
This reverts commit f61398a7ca.
After having checked a version with more traces and reproduced the issue
as reported by Tristan in GH #1808, there are remaining cases where
a duplicated but not yet sent frame has to be marked as acked because
the frame it was copied from was acknowledged before its copy was sent.
Must be backported to 2.6.
Since this commit:
"BUG/MINOR: quic: Wrong list_for_each_entry() use when building packets from
qc_do_build_pkt()"
there is no more reason that frames can be released without having been
sent, i.e. frames with non null ->pkt member. This ->pkt is the packet
the frame is attached to.
Must be backported to 2.6.
This function parses the QUIC packet from a UDP datagram. It was originally
supposed to be run by several threads. Here we remove a section of code
where the current thread checked that another thread had not already inserted
the new quic_conn it is trying to insert into the connections tree.
Must be backported to 2.6 to ease the future backports to come.
This was due to a missing I/O handler tasklet wakeup in process_timer() when
detecting packet loss. As qc_release_lost_pkts() could remove the lost packets
from the in flight packets count, qc_set_timer() could cancel the timer used
to wakeup the connection I/O handler. Then the connection could remain idle
until it ends.
Must be backported to 2.6.
Packets with a null "in flight" length are kept, like the other packets, as
sent but not yet acknowledged in the per packet number space trees.
But qc_release_lost_pkts() relied on this in flight length to release the
memory allocated for these packets. We must release the memory allocated for
all the lost packets regardless of their in flight lengths.
Modify this function to do nothing if the list of lost packets passed
as argument is empty. Stop using the <lost_bytes> variable to decide whether
some packets' memory must be released or not.
Modify the callers to stop checking if this list is empty.
Should help in fixing the memory leak reported by Tristan in GH #1801.
Must be backported to 2.6.
This reverts commit da9c441886.
Indeed this commit prevented ACK-only packets from being used like other packets
when they are acknowledged. Even though they are not ack-eliciting, they are
acknowledged alongside other packets. Such acknowledged ACK-only packets
must be used for instance to compute the RTT.
Must be backported to 2.6 if da9c441 was backported to 2.6.
Shut the connect() warning of resolvers_finalize_config() when the
configuration was not emitted manually.
This shuts the warning for the "default" resolvers which is created
automatically for the httpclient.
Must be backported in 2.6.
It is only a real problem for agent-checks when there is no agent string to
send. The condition to disable TCP_QUICKACK was only based on the action
type following the connect one. But it is not always accurate. Indeed, for
agent-checks, there is always a SEND action. But if there is no "agent-send"
string defined, nothing is sent. In this case, this adds 200ms of latency
for no reason.
To fix the bug, a flag is now used on the CONNECT action to indicate that there
are data to be sent after the connect. For health-checks, this flag
is set if the action following the connect is a SEND action. For
agent-checks, it is set if an "agent-send" string is defined.
This patch should fix the issue #1836. It must be backported as far as 2.2.
When doing a re-exec, the master was creating a "default" resolvers,
which could result in a warning emitted because the "default" resolvers
of the configuration file is not available anymore.
Skip the creation of the "default" resolvers in wait mode; this is not
useful in the master.
Must be backported as far as 2.6.
Patch c31577f ("MEDIUM: resolvers: continue startup if network is
unavailable") was not working correctly. Indeed
resolvers_finalize_config() was returning an ERR type, but a postparser
is supposed to return 0 or 1.
The return value was never right, however it was only a problem since c31577f.
Must be backported in every stable branch.
The build on OpenBSD is broken since commit 5c83e3a15 ("MINOR: tcp_sample:
clarifying samples support per os, for further expansion."), hence it
only affects 2.7 and 2.6.
It looks like this changed things in such a way that if TCP_INFO is defined
but the OS is not added to the list of supported OSes, the build fails.
Extend support for get_tcp_info to OpenBSD.
This must be backported to 2.6.
As seen in GH issue #1770, peers synchronization does not cope well with
very large buffers because, by default, the only two reasons for stopping
the processing of updates are either that the end was reached or that
the buffer is full. This can cause high latencies, and even rightfully
trigger the watchdog when the operations are numerous and slowed down
by competition on the stick-table lock.
This patch introduces a limit to the number of messages one may send
at once, which now defaults to 200, regardless of the buffer size. This
means taking and releasing the lock up to 400 times in a row, which is
costly enough to let some other parts work.
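As an illustration of the principle only (the constant and helper names below
are assumptions, not the exact peers.c code):

    #define PEER_MAX_UPDATES_AT_ONCE  200   /* assumed name for the default limit */

    static int peer_send_updates(struct peer *p)
    {
        int updates = 0;

        while (peer_next_update_ready(p)) {        /* assumed helper */
            if (updates >= PEER_MAX_UPDATES_AT_ONCE)
                return 1;   /* more work remains: yield and wake up again */
            peer_send_one_update(p);               /* lock, send, unlock */
            updates++;
        }
        return 0;           /* everything sent (or buffer full) */
    }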
After some observation this could be backported to 2.6. If so, however,
previous commits "BUG/MEDIUM: applet: fix incorrect check for abnormal
return condition from handler" and "BUG/MINOR: applet: make the call_rate
only count the no-progress calls" must be backported otherwise the call
rate might trigger the looping protection.
This is very similar to what we did in commit 6c539c4b8 ("BUG/MINOR:
stream: make the call_rate only count the no-progress calls"), it's
better to only count the call rate with no progress than to count all
calls and try to figure if there's no progress, because a fast running
applet might once satisfy the whole condition and trigger the bug. This
typically happens when artificially limiting the number of messages sent
at once by an applet, but could happen with plenty of highly interactive
applets.
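The principle, in a hedged form (the local variable names are assumptions):

    unsigned int rate = 0;

    /* only account for this wakeup in the call rate when the handler made
     * no progress at all; a fast but productive applet must not trip the
     * loop protection */
    if (!did_any_work)
        rate = update_freq_ctr(&appctx->call_rate, 1);
    /* 'rate' is then compared to the loop-protection threshold as before */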
This patch could be backported to stable versions if there are any
indications that it might be useful there.
We have quite numerous checks for abnormal applet handler behavior which
are supposed to trigger the loop protection. However, consecutive to
commit 15252cd9c ("MEDIUM: stconn: move the RXBLK flags to the stream
connector") that was merged into 2.6-dev12, one flag was incorrectly
renamed, and the check for an applet waiting for a buffer that is present
mistakenly turned to a check for missing room in the buffer. This erroneous
test could mistakenly trigger on applets that perform intensive I/Os doing
small exchanges each (e.g. cache, peers or HTTP client) if the load would
be sustained (>100k iops). For the cache this could represent higher than
13 Gbps on an object at least 1.6 GB large for example, which is quite
unlikely but theoretically possible.
This fix needs to be backported to 2.6.
Replace the ->rx.pqpkts quic_enc_level struct member MT_LIST by a LIST.
Same thing for the ->list quic_rx_packet struct member MT_LIST.
Update the code accordingly. This was a reminiscence of the multithreading
support (several threads per connection).
Must be backported to 2.6
Do not rely on the fact that the callers of qc_build_frm() handle the buffer
passed to the function the correct way (without leaving garbage in it).
Make qc_build_frm() update the buffer passed as argument only if
the frame it builds is well formed.
As far as I can see, there are no callers which do not handle such buffers
carefully.
Must be backported to 2.6.
It is list_for_each_entry_safe() which must be used if we want to delete
elements inside its code block. This could explain why some frames which were
not built were added to packets with a NULL ->pkt member.
Thank you to Tristan for having reported this issue through backtraces in GH #1808
Must be backported to 2.6.
First, these packets must not be inserted in the tree of TX packets.
They are never explicitly acknowledged (for instance an ACK-only
packet will never be acknowledged). Furthermore, if taken into account,
these packets may uselessly disturb the congestion control. We do not care
whether they are lost or not. Finally, as their ->in_flight_len member value is
null, they were not released by qc_release_lost_pkts() which relies on this
value to decide to release the memory allocated for such packets.
Must be backported to 2.6.
The master is re-executed with an empty proxies list if no master CLI is
configured.
This results in an infinite loop since the last patch:
3b68b602 ("BUG/MAJOR: log-forward: Fix log-forward proxies not fully initialized")
This patch avoids looping again on the log-forward proxies list if it is empty.
This patch should be backported as far as 2.3.
We used to emit a diag warning in case ranges were used both with the
process and thread part of a thread spec. Now with groups it's no
longer a problem, so let's just kill this warning.
Since 2.7-dev2 with commit 5b09341c02 ("MEDIUM: cpu-map: replace the
process number with the thread group number"), the thread group has
replaced the process number in the "cpu-map" directive. In part due to
a design limit in 2.4 and 2.5, a special case was made of thread 1 in
commit bda7c1decd ("MEDIUM: config: simplify cpu-map handling"), because
there was no other location to store a single-threaded setup's mask by
then. The combination of the two resulted in a problem with thread
groups, by which as soon as one line exhibiting thread number 1 alone
was found in a config, the mask would be applied to all threads in the
group.
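For example, with a configuration like the following (values are only
illustrative), the mask of the first line used to be applied to all four
threads of group 1 instead of thread 1 only:

    global
        cpu-map 1/1 0
        cpu-map 1/2 1
        cpu-map 1/3 2
        cpu-map 1/4 3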
The loop was reworked to avoid this obsolete special case, and was
factored for better legibility. One obsolete comment about nbproc
was also removed. No backport is needed.
Some clients send a CONNECTION_CLOSE frame without acknowledging the STREAM
data haproxy has sent. In this case, when closing the connection, if
there were remaining data in the QUIC stream buffers, they were not released.
Add a <closing> boolean option to qc_stream_desc_free() to force the release
of the stream buffer memory when closing the connection.
Thank you to Tristan for having reported such a memory leak issue in GH #1801.
Must be backported to 2.6.
In ticket #1805 a user is impacted by the size limitation of the CLI
buffer when updating a ca-file.
This patch allows a user to append new certificates to a ca-file instead
of trying to put them all with "set ssl ca-file"
The implementation uses a new function, ssl_store_dup_cafile_entry(), which
duplicates a cafile_entry and its X509_STORE.
ssl_store_load_ca_from_buf() was modified to take an append parameter so
we could share the function for "set" and "add".
In order to be able to append new CA in a cafile_entry,
ssl_store_load_ca_from_buf() was reworked and a "append" parameter was
added.
The function is able to keep the previous X509_STORE which was already
present in the cafile_entry.
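A hedged usage example over the stats socket (file names are illustrative; the
payload follows the usual CLI multi-line payload syntax):

    $ echo -e "add ssl ca-file site-ca.pem <<\n$(cat extra-ca.pem)\n" | \
          socat stdio /var/run/haproxy.sock
    $ echo "commit ssl ca-file site-ca.pem" | socat stdio /var/run/haproxy.sock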
"set ssl ca-file" does not return any error when a ca-file is empty or
only contains comments. This could be a problem if the file was
malformatted and did not contain any PEM header.
It must be backported as far as 2.5.
Implement quic_tls_rx_hp_ctx_init() and quic_tls_tx_hp_ctx_init() to initialize
such header protection cipher contexts for the RX and TX parts and for each
packet number space, only once per connection.
Make qc_new_isecs() call these two functions to initialize the cipher contexts
of the Initial secrets. Same thing for ha_quic_set_encryption_secrets() to
initialize the cipher contexts of the subsequent derived secrets (0-RTT, 1-RTT,
Handshake).
Modify qc_do_rm_hp() and quic_apply_header_protection() to reuse these
cipher contexts.
Note that there is no need to modify the key update for the header protection.
The header protection secrets are never updated.
Since commit 2071a99df ("MINOR: listener/ssl: set the SSL xprt layer only
once the whole config is known") the xprt is initialized for ssl directly
from a generic function used to parse bind args.
But the 'bind' lines from 'log-forward' sections were forgotten in commit
55f0f7bb5 ("MINOR: config: use the new bind_parse_args_list() to parse a
"bind" line").
This patch re-works 'log-forward' section parsing to use the generic
function to parse bind args and fix the issue.
Since the generic way to parse was introduced in 2.6, this patch
should be backported as far as this version.
Some initialization for log-forward proxies was missing, such
as the ssl configuration on 'log-forward' 'bind' lines.
After the loop running the proxy initialization code for proxies present
in the main proxies list, this patch forces looping again on this code
for proxies present in the log-forward proxies list.
Those two lists should be merged. This will be part of a global
re-work of proxy initialization including peers proxies and resolver
proxies.
This patch was made as a first attempt to fix the bug and to facilitate
the backport to older branches, waiting for a cleaner re-work of proxy
initialization on the dev branch.
This patch should be backported as far as 2.3.
This wrong trace came with this commit:
"BUG/MINOR: quic: Possible crashes when dereferencing ->pkt quic_frame struct member"
In qc_release_frm() we mark frames as acked. This has nothing to do with
references to frames.
Thank you to Willy for having caught this one.
Must be backported to 2.6 as these traces arrived with a bug fix to be backported
to 2.6.
When duplicated frames are split, we must propagate this information
to the newly allocated frame and add a reference to this new frame
to the reference list of the original frame.
Must be backported to 2.6
This was done at several places. First in qc_requeue_nacked_pkt_tx_frms().
The aim of this function is, if needed, to requeue all the TX frames of a lost
<pkt> packet passed as argument and detach them from the packet they have been
sent from. There are possible cases where the frm->pkt quic_frame struct member
could be NULL, as a result of the duplication of an original frame by
qc_dup_pkt_frms(). This function adds the duplicated frame to the original
frame reference list:
LIST_APPEND(&origin->reflist, &dup_frm->ref);
But, in qc_requeue_nacked_pkt_tx_frms(), the packet which contains the frame is
the one which is passed as argument (for debug purposes), so let us prefer
using this variable.
Also do not dereference the ->pkt quic_frame member in qc_release_frm() and
qc_frm_unref(), and add a trace to catch frames with a null ->pkt member.
These are logically frames which have not been sent yet.
Thank you to Tristan for having reported such crashes in GH #1808.
Must be backported to 2.6
If a POST upload is cancelled after having advertised a content-length,
or a response body is truncated after a content-length, we're not allowed
to send ES because in this case the total body length must exactly match
the advertised value. Till now that's what we were doing, and that was
causing the other side (possibly haproxy) to respond with an RST_STREAM
PROTOCOL_ERROR due to "ES on DATA frame before content-length".
We can behave a bit cleaner here. Let's detect that we haven't sent
everything, and send an RST_STREAM(CANCEL) instead, which is designed
exactly for this purpose.
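A minimal sketch of the intent, with a hypothetical helper name (the real code
paths in mux_h2.c differ):

    if (h2s->flags & H2_SF_MORE_HTX_DATA) {
        /* the advertised content-length was not reached: sending ES would
         * be a protocol error, so abort the stream instead */
        h2s_send_rst_cancel(h2s);             /* hypothetical helper */
    }
    else {
        flags |= H2_F_DATA_END_STREAM;        /* message really complete */
    }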
This patch could be backported to older versions, but only after a little bit
of exposure to make sure it doesn't wake up a bad behavior somewhere.
It relies on the following previous commit:
"MINOR: mux-h2: make streams know if they need to send more data"
H2 streams do not even know if they are expected to send more data or
not, which is problematic when closing because we don't know if we're
closing too early or not. Let's start by adding a new stream flag
"H2_SF_MORE_HTX_DATA" to indicate this on the tx path.
Traces indicating "switching to XXX" generally apply before the transition
so that the current connection state is visible in the trace. SETTINGS1
was incorrect in this regard, with the trace being emitted after. Let's
fix this.
No need to backport this, as this is purely cosmetic.
When switching to H2_CS_FRAME_H, we do not want to present the previous
frame's state, flags, length etc in traces, or we risk confusing the
analysis, making the reader think that the header information presented
is related to the new frame header being analysed. A naive approach could
have consisted in simply relying on the current parser state (FRAME_H
being that state), but traces are emitted before switching the state,
so traces cannot rely on this.
This was initially addressed by commit 73db434f7 ("MINOR: h2/trace: report
the frame type when known") which used to set dsi to -1 when the connection
becomes idle again, but was accidentally broken by commit 5112a603d
("BUG/MAJOR: mux_h2: Don't consume more payload than received for skipped
frames") which moved dsi after calling the trace function.
But in both cases there's a problem with this approach. If an RST or WU frame
cannot be uploaded due to a busy mux, and at the same time we complete
processing on a perfect end of frame with no single new frame header, we
can leave the demux loop with dsi=-1 and with RST or WU to be sent, and
these ones will be sent for stream ID -1. This is what was reported in
github issue #1830. This can be reproduced with a config chaining an h1->h2
proxy to an empty h2 frontend, and uploading a large body such as below:
$ (printf "POST / HTTP/1.1\r\nContent-length: 1000000000\r\n\r\n";
cat /dev/zero) | nc 0 4445 > /dev/null
This shows that we must never affect ->dsi which must always remain valid,
and instead we should set "something else". That something else could be
served by the demux frame type, but that one also needs to be preserved
for the RST_STREAM case. Instead, let's just add a connection flag to say
that the demuxing is in progress. This will be set once a new demux header
is set and reset after the end of a frame. This way the trace subsystem
can know that dft/dfl must not be displayed, without affecting the logic
relying on such values.
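A rough sketch of the idea, with a hypothetical flag name (the real one may
differ):

    /* when a complete frame header has just been parsed: */
    h2c->flags |= H2_CF_DEM_IN_PROGRESS;      /* hypothetical flag */

    /* when the end of the current frame is reached: */
    h2c->flags &= ~H2_CF_DEM_IN_PROGRESS;

    /* in the trace code, only report the demuxed frame info when valid: */
    if (h2c->flags & H2_CF_DEM_IN_PROGRESS)
        chunk_appendf(&trace_buf, " dft=%s dfl=%d",
                      h2_ft_str(h2c->dft), h2c->dfl);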
Given that the commits above are old and were backported to 1.8, this
new one also needs to be backported as far as 1.8.
Many thanks to David le Blanc (@systemmonkey42) for spotting, reporting,
capturing and analyzing this bug; his work permitted to quickly spot the
problem.
Erwan Le Goas reported that chaining certain commands on the CLI would
systematically crash the process; for example, "show version; show sess".
This happened since the conversion of cli context to appctx->svcctx,
because if applet_reserve_svcctx() is called a first time for a tiny
context, it's allocated in-situ, and later a keyword that wants a
larger one will see that it's not null and will reuse it and will
overwrite the end of the first one's context.
What is missing is a reset of the svcctx when looping back to
CLI_ST_GETREQ.
This needs to be backported to 2.6, and relies on previous commit
"MINOR: applet: add a function to reset the svcctx of an applet".
The CLI needs to reset the svcctx between commands, and there was nothing
done to handle this. Let's add appctx_reset_svcctx() to do that, it's the
closing equivalent of appctx_reserve_svcctx().
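A minimal sketch of what this helper can look like (the real one lives in
applet.h and may handle contexts allocated outside the appctx's built-in
storage differently):

    /* release the service context so that the next command starts clean */
    static inline void appctx_reset_svcctx(struct appctx *appctx)
    {
        appctx->svcctx = NULL;
    }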
This will have to be backported to 2.6 as it will be used by a subsequent
patch to fix a bug.
As specified by RFC 9114, multiple cookie headers must be concatenated
into a single entry before passing them to an HTTP/1.1 connection. To
implement this, reuse the same function as already used in the HTTP/2
module.
This should answer the feature requested in github issue #1818.
As specified by RFC 7540, multiple cookie headers are merged in a single
entry before passing them to an HTTP/1.1 connection. This step is
implemented during headers parsing in h2 module.
Extract this code in the generic http_htx module. This will allow to
reuse it quickly for HTTP/3 implementation which has the same
requirement for cookie headers.
MUX notification on TX has been changed recently: it will be notified
only when sending its own data, and not for example on retransmission by
the quic-conn layer. This is the subject of the patch:
b29a1dc2f4
BUG/MINOR: quic: do not notify MUX on frame retransmit
A new flag QUIC_FL_CONN_RETRANS_LOST_DATA has been introduced to
differentiate qc_send_app_pkts() invocations by the MUX from those made directly by the
quic-conn layer in quic_conn_app_io_cb(). However, this is a first
problem as internal quic-conn layer usage is not limited to
retransmission. For example for NEW_CONNECTION_ID emission.
Another, more important, problem is that send functions are also called
through quic_conn_io_cb() which has not been protected from MUX
notification. This could probably result in a crash when trying to notify
the MUX.
To fix both problems, quic-conn flagging has been inverted: when used
by the MUX, quic-conn is flagged with QUIC_FL_CONN_TX_MUX_CONTEXT. To
improve the API, the MUX must now use qc_send_mux() which ensures the flag is
set. qc_send_app_pkts() is now static and can only be used by the
quic-conn layer.
This must be backported wherever the previously mentioned patch is.
When duplicating frames in qc_dup_pkt_frms(), the ->pkt member was not correctly
initialized (it was copied from the original frame). This could not have any
impact because this member is initialized when the frame is added to a packet.
This was also the case for ->flags.
Also replace the pool_zalloc() call by a call to pool_alloc().
Must be backported to 2.6.
When using `option http-restrict-req-hdr-names delete`, HAProxy may
crash or delete the wrong header after receiving a request containing multiple
forbidden characters in a single header name; the exact behavior depends on the
number of request headers, the number of forbidden characters and the position
of the header containing them.
This patch fixes GitHub issue #1822.
Must be backported as far as 2.2 (buggy feature got included in 2.2.25,
2.4.18 and 2.5.8).
On STREAM emission, quic-conn notifies MUX through a callback named
qcc_streams_sent_done(). This also happens on retransmission: in this
case offsets are examined and the notification is ignored if already seen.
However, this behavior has slightly changed since
e53b489826
BUG/MEDIUM: mux-quic: fix server chunked encoding response
Indeed, if the offset diff is null, the frame is now not ignored. This is to
support FIN notification with a final empty STREAM frame. A side-effect
of this is that if the last stream frame is retransmitted, it won't be
ignored in qcc_streams_sent_done().
In most cases, this side-effect is harmless as the qcs instance will soon be
freed after being closed. But if qcs is still alive, this will cause a
BUG_ON crash as it is considered as locally closed.
This bug depends on timing conditions and seems to be extremely rare. But
it might be the reason for a crash seen during interop with the s2n client on
the http3 testcase:
FATAL: bug condition "qcs->st == QC_SS_CLO" matched at src/mux_quic.c:372
call trace(16):
| 0x558228912b0d [b8 01 00 00 00 c6 00 00]: main-0x1c7878
| 0x558228917a70 [48 8b 55 d8 48 8b 45 e0]: qcc_streams_sent_done+0xcf/0x355
| 0x558228906ff1 [e9 29 05 00 00 48 8b 05]: main-0x1d3394
| 0x558228907cd9 [48 83 c4 10 85 c0 0f 85]: main-0x1d26ac
| 0x5582289089c1 [48 83 c4 50 85 c0 75 12]: main-0x1d19c4
| 0x5582288f8d2a [48 83 c4 40 48 89 45 a0]: main-0x1e165b
| 0x5582288fc4cc [89 45 b4 83 7d b4 ff 74]: qc_send_app_pkts+0xc6/0x1f0
| 0x5582288fd311 [85 c0 74 12 eb 01 90 48]: main-0x1dd074
| 0x558228b2e4c1 [48 c7 c0 d0 60 ff ff 64]: run_tasks_from_lists+0x4e6/0x98e
| 0x558228b2f13f [8b 55 80 29 c2 89 d0 89]: process_runnable_tasks+0x7d6/0x84c
| 0x558228ad9aa9 [8b 05 75 16 4b 00 83 f8]: run_poll_loop+0x80/0x48c
| 0x558228ada12f [48 8b 05 aa c5 20 00 48]: main-0x256
| 0x7ff01ed2e609 [64 48 89 04 25 30 06 00]: libpthread:+0x8609
| 0x7ff01e8ca163 [48 89 c7 b8 3c 00 00 00]: libc:clone+0x43/0x5e
To reproduce it locally, code was artificially patched to produce
retransmission and avoid the qcs release.
In order to fix this and avoid a future class of similar problems, the best
way is to not call qcc_streams_sent_done() to notify the MUX for a
retransmission. To implement this, we test if any of
QUIC_FL_CONN_RETRANS_OLD_DATA or the new flag
QUIC_FL_CONN_RETRANS_LOST_DATA is set. A new wrapper
qc_send_app_retransmit() has been added to set the new flag as a
complement to already existing qc_send_app_probing().
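A sketch of what such a wrapper can look like, mirroring qc_send_app_probing()
(the exact signature is assumed from the description above):

    /* send lost (already sent) application data; the flag tells the lower
     * layers not to notify the MUX again for these frames */
    static int qc_send_app_retransmit(struct quic_conn *qc, struct list *frms)
    {
        int ret;

        qc->flags |= QUIC_FL_CONN_RETRANS_LOST_DATA;
        ret = qc_send_app_pkts(qc, frms);
        qc->flags &= ~QUIC_FL_CONN_RETRANS_LOST_DATA;
        return ret;
    }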
This must be backported up to 2.6.
Adjust the qc_send_app_pkts() function: remove the <old_data> arg and provide a
new wrapper function qc_send_app_probing() which should be used instead
when probing with old data.
This simplifies the interface of the default function, most notably for
the MUX which does not interfere with retransmission.
QUIC_FL_CONN_RETRANS_OLD_DATA flag is set/unset directly in the wrapper
qc_send_app_probing().
At the same time, function documentation has been updated to clarify
arguments and return values.
This commit will be useful for the next patch to differentiate MUX and
retransmission send context. As a consequence, the current patch should
be backported wherever the next one will be.
Complete some MUX traces by adding qcc or qcs instance as arguments when
this is possible. This will be useful when several connections are
interleaved.
Emit STREAM_LIMIT_ERROR if a client tries to open an unidirectional
stream with an ID greater than the value specified by our flow-control
limit. The code is similar to the bidirectional stream opening.
MAX_STREAMS_UNI emission is not implemented for the moment and is left as
a TODO. This should not be too urgent for the moment: in HTTP/3, a
client has only a limited use for unidirectional streams (H3 control
stream + 2 QPACK streams). This is covered by the value provided by
haproxy in transport parameters.
This patch has been tagged with BUG as it should have prevented the last
crash reported on github issue #1808 when opening a new unidirectional
stream with an invalid ID. However, it is probably not the main cause
of the bug contrary to the patch
commit 11a6f4007b
BUG/MINOR: quic: Wrong status returned by qc_pkt_decrypt()
This must be backported up to 2.6.
As specified by RFC 9204, encoder and decoder streams must not be
closed. If the peer behaves incorrectly and closes one of them, emit a
H3_CLOSED_CRITICAL_STREAM connection error.
To implement this, QPACK stream decoding API has been slightly adjusted.
Firstly, a fin parameter is passed to notify about the FIN STREAM bit.
Secondly, the qcs instance is passed via the unused void* context. This allows
the qcc_emit_cc_app() function to be used to report a CONNECTION_CLOSE error.
As specified by RFC 9114 the control stream must not be closed. If the
peer behaves incorrectly and closes it, emit a H3_CLOSED_CRITICAL_STREAM
connection error.
Replace a plain '=' operator by '|=' when setting quic_frame
QUIC_FL_TX_FRAME_LOST flag.
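For clarity, the change amounts to:

    frm->flags = QUIC_FL_TX_FRAME_LOST;    /* before: overwrites any other flag */
    frm->flags |= QUIC_FL_TX_FRAME_LOST;   /* after: only adds the LOST flag */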
For the moment, this change has no impact as only two exclusive flags
are defined for quic_frame. On the edited code path we are certain that
QUIC_FL_TX_FRAME_ACKED is not set due to a previous if statement, so a
plain equal or a binary OR is strictly identical.
This change will be useful if new flags are defined for quic_frame in
the future. Thanks to the binary OR, these new flags won't be reset
unless explicitly intended, which otherwise could easily lead to new bugs.
table_expire() returns the expiration delay for a stick-table entry associated
to an input sample. Its counterpart table_idle() returns the time the entry
remained idle since the last time it was updated.
Both converters may take a default value as second argument which is returned
when the entry is not present.
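For example (table and variable names below are only illustrative):

    backend st_src
        stick-table type ip size 100k expire 30m store http_req_cnt

    frontend fe
        bind :8080
        http-request set-var(txn.ttl)  src,table_expire(st_src,-1)
        http-request set-var(txn.idle) src,table_idle(st_src,-1)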
This function is responsible for all calls to pool_alloc(trash), whose
total size can be huge. As such it's quite a pain that it doesn't provide
more hints about its users. However, since the function is tiny, it fully
makes sense to inline it, the code is less than 0.1% larger with this.
This way we can now detect where the callers are via "show profiling",
e.g.:
0 1953671 0 32071463136| 0x59960f main+0x10676f p_free(-16416) [pool=trash]
0 1 0 16416| 0x59960f main+0x10676f p_free(-16416) [pool=trash]
1953672 0 32071479552 0| 0x599561 main+0x1066c1 p_alloc(16416) [pool=trash]
0 976835 0 16035723360| 0x576ca7 http_reply_to_htx+0x447/0x920 p_free(-16416) [pool=trash]
0 1 0 16416| 0x576ca7 http_reply_to_htx+0x447/0x920 p_free(-16416) [pool=trash]
976835 0 16035723360 0| 0x576a5d http_reply_to_htx+0x1fd/0x920 p_alloc(16416) [pool=trash]
1 0 16416 0| 0x576a5d http_reply_to_htx+0x1fd/0x920 p_alloc(16416) [pool=trash]
Storing the pointer to the pool along with the stats is quite useful as
it allows to report the name. That's what we're doing here. We could
store it in place of another field but that's not convenient as it would
require to change all functions that manipulate counters. Thus here we
store one extra field, as well as some padding because the struct turns
56 bytes long, thus better go to 64 directly. Example of output from
"show profiling memory":
2 0 48 0| 0x4bfb2c ha_quic_set_encryption_secrets+0xcc/0xb5e p_alloc(24) [pool=quic_tls_iv]
0 55252 0 10608384| 0x4bed32 main+0x2beb2 free(-192)
15 0 2760 0| 0x4be855 main+0x2b9d5 p_alloc(184) [pool=quic_frame]
1 0 1048 0| 0x4be266 ha_quic_add_handshake_data+0x2b6/0x66d p_alloc(1048) [pool=quic_crypto]
3 0 552 0| 0x4be142 ha_quic_add_handshake_data+0x192/0x66d p_alloc(184) [pool=quic_frame]
31276 0 6755616 0| 0x4bb8f9 quic_sock_fd_iocb+0x689/0x69b p_alloc(216) [pool=quic_dgram]
0 31424 0 6787584| 0x4bb7f3 quic_sock_fd_iocb+0x583/0x69b p_free(-216) [pool=quic_dgram]
152 0 32832 0| 0x4bb4d9 quic_sock_fd_iocb+0x269/0x69b p_alloc(216) [pool=quic_dgram]
Pools are being used so well that it becomes difficult to profile their
usage via the regular memory profiling. Let's add new entries for pools
there, named "p_alloc" and "p_free" that correspond to pool_alloc() and
pool_free(). Ideally it would be nice to only report those that fail
cache lookups but that's complicated, particularly on the free() path
since free lists are released in clusters to the shared pools.
It's worth noting that the alloc_tot/free_tot fields can easily be
determined by multiplying alloc_calls/free_calls by the pool's size, and
could be better used to store a pointer to the pool itself. However it
would require significant changes down the code that sorts output.
If this were to cause a measurable slowdown, an alternate approach could
consist in using a different value of USE_MEMORY_PROFILING to enable pools
profiling. Also, this profiler doesn't depend on intercepting regular malloc
functions, so we could also imagine enabling it alone or the other one alone
or both.
Tests show that the CPU overhead on QUIC (which is already an extremely
intensive user of pools) jumps from ~7% to ~10%. This is quite acceptable
in most deployments.
Right now it's not possible to feed memory profiling info from outside
activity.c, so let's export the function and move the enum and struct
to the include file.
This bug came with this big commit:
"MEDIUM: quic: xprt traces rework"
It is the <ret> variable value which must be returned by most of the xprt
functions. This led packets which could not be decrypted to be parsed anyway,
with weird frames being parsed, as found by Tristan in GH #1808.
To be backported where the commit above was backported.
When building a packet with ack-eliciting frames only, if we did not manage to
add at least one such frame to the packet, we did not notify the caller that
the packet was empty. This could lead the caller to believe everything was OK
and make it endlessly try to build packets again and again.
This issue was amplified by the recent changes where a while(1) loop has been
added to qc_send_app_pkt() which calls qc_do_build_pkt() through qc_prep_app_pkts()
until we could not prepare packets. Before this recent change, I guess only
one empty packet was sent.
This patch checks that non-empty packets could be built by qc_do_build_pkt()
and makes this function return an error when this is not the case. Also note
that such an issue could happen only when the packet building was limited by
the congestion control.
Thank you to Tristan for having reported this issue in GH #1808.
Must be backported to 2.6.
qc_detach() is used to free a qcs as notified by the sedesc. If there is no
more active stream and the connection is considered as dead, it will then be
freed. In this case, qcc must not be dereferenced in the TRACE macro, else
this will cause a crash.
Use a different code-path on release for qc_detach() to fix this bug.
This will fix the last occurrence of the crash on github issue #1808.
This has been introduced by the recent QUIC MUX traces rework. Thus, it does
not need to be backported.
In order to ensure that an instant restart of the process will not wipe
precious debugging information, and to leave time for an admin to archive
a copy of a ring, now upon startup, any previously existing file will be
renamed with the extra suffix ".bak", and any previously existing file
with suffix ".bak" will be removed.
The build broke on freebsd with S_IRUSR undefined after commit 0b8e9ceb1
("MINOR: ring: add support for a backing-file"). Maybe another include
is needed there, but the point is that we really don't care about these
symbolic names, file modes are more readable as 0600 than via these
cryptic names anyway, so let's go back to 0600. This will teach me not
to try to make things too clean.
No backport is needed.
This mmaps a file which will serve as the backing-store for the ring's
contents. The idea is to provide a way to retrieve sensitive information
(last logs, debugging traces) even after the process stops and even after
a possible crash. Right now this was possible by connecting to the CLI
and dumping the contents of the ring live, but this is not handy and
consumes quite a bit of resources before it is needed.
With a backing file, the ring is effectively a RAM-mapped file, so that
contents stored there are the same as those found in the file (the OS
doesn't guarantee immediate sync but if the process dies it will be OK).
Note that doing that on a filesystem backed by a physical device is a
bad idea, as it will induce slowdowns at high loads. It's really
important that the device is RAM-based.
Also, this may have security implications: if the file is corrupted by
another process, the storage area could be corrupted, causing haproxy
to crash or to overwrite its own memory. As such this should only be
used for debugging.