MINOR: listener: workaround for closing a tiny race between resume_listener() and stopping

This is an alternative fix that tries to address the same issue as
d1ebee177 ("BUG/MINOR: listener: close tiny race between
resume_listener() and stopping") while allowing resume_listener() to be
more versatile.

Indeed, because of the previous fix, resume_listener() is no longer able
to rebind stopped listeners, which breaks the original behavior documented
in the function description:

 "If the listener was only in the assigned
  state, it's totally rebound. This can happen if a pause() has completely
  stopped it. If the resume fails, 0 is returned and an error might be
  displayed."
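
For illustration, the rebind path the quoted description refers to roughly
looks like this (a simplified sketch only, with a made-up function name and
the error reporting elided; not the verbatim source):

    int resume_listener_sketch(struct listener *l)
    {
        int ret = 1;
        char *errmsg = NULL;

        if (l->state == LI_ASSIGNED) {
            /* a pause() completely stopped the listener: the receiver
             * has to be rebound from scratch before resuming it.
             */
            if (l->rx.proto->fam->bind(&l->rx, &errmsg) != ERR_NONE)
                return 0; /* resume failed, an error may be displayed */
        }

        if (l->rx.proto->resume)
            ret = l->rx.proto->resume(l);
        return ret;
    }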

With relax_listener(), we now make sure to check l->state under the
listener lock so we don't call resume_listener() when the conditions are not
met.

As such, concurrently stopped listeners may not be rebound using
relax_listener().
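
For reference, relax_listener() (added by the dependency commit listed
below) essentially boils down to the following: grab the listener lock
unless the caller already holds it (lli), only act on FULL or LIMITED
listeners, and delegate to resume_listener() with the lock held. This is
a simplified sketch, not the exact source:

    int relax_listener(struct listener *l, int lpx, int lli)
    {
        int ret = 1;

        if (!lli)
            HA_RWLOCK_WRLOCK(LISTENER_LOCK, &l->lock);

        if (l->state != LI_FULL && l->state != LI_LIMITED)
            goto end; /* nothing to re-enable; the listener may even have
                       * been stopped in parallel, leave it alone */

        ret = resume_listener(l, lpx, 1); /* lli=1: lock already held */

      end:
        if (!lli)
            HA_RWLOCK_WRUNLOCK(LISTENER_LOCK, &l->lock);
        return ret;
    }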

Note: the documented race cannot happen since 1b927eb3c ("MEDIUM: proto: stop
protocols under thread isolation during soft stop"), but older versions are
affected because 1b927eb3c was not marked for backport.
Moreover, the patch also prevents the race between protocol_pause_all() and
resuming from the LIMITED or FULL states.

This commit depends on:
  - "MINOR: listener: add relax_listener() function"

This should be backported with d1ebee177 up to 2.4.
(d1ebee177 is marked to be backported to all stable versions, but the current
patch does not apply to versions < 2.4.)
Aurelien DARRAGON 2023-02-06 17:19:58 +01:00 committed by Christopher Faulet
parent bcad7e6319
commit f5d98938ad
2 changed files with 6 additions and 10 deletions

src/listener.c

@@ -538,10 +538,6 @@ int resume_listener(struct listener *l, int lpx, int lli)
 	if (l->state == LI_READY)
 		goto end;
 
-	/* the listener might have been stopped in parallel */
-	if (l->state < LI_PAUSED)
-		goto end;
-
 	if (l->rx.proto->resume)
 		ret = l->rx.proto->resume(l);
 
@@ -599,7 +595,7 @@ int relax_listener(struct listener *l, int lpx, int lli)
 }
 
 /* Marks a ready listener as full so that the stream code tries to re-enable
- * it upon next close() using resume_listener().
+ * it upon next close() using relax_listener().
  */
 static void listener_full(struct listener *l)
 {
@@ -637,7 +633,7 @@ void dequeue_all_listeners()
 		/* This cannot fail because the listeners are by definition in
 		 * the LI_LIMITED state.
 		 */
-		resume_listener(listener, 0, 0);
+		relax_listener(listener, 0, 0);
 	}
 }
 
@@ -650,7 +646,7 @@ void dequeue_proxy_listeners(struct proxy *px)
 		/* This cannot fail because the listeners are by definition in
 		 * the LI_LIMITED state.
 		 */
-		resume_listener(listener, 0, 0);
+		relax_listener(listener, 0, 0);
 	}
 }
 
@@ -1221,7 +1217,7 @@ void listener_accept(struct listener *l)
 	     (!tick_isset(global_listener_queue_task->expire) ||
 	      tick_is_expired(global_listener_queue_task->expire, now_ms))))) {
 		/* at least one thread has to this when quitting */
-		resume_listener(l, 0, 0);
+		relax_listener(l, 0, 0);
 
 		/* Dequeues all of the listeners waiting for a resource */
 		dequeue_all_listeners();
@@ -1280,7 +1276,7 @@ void listener_release(struct listener *l)
 	_HA_ATOMIC_DEC(&l->thr_conn[tid]);
 
 	if (l->state == LI_FULL || l->state == LI_LIMITED)
-		resume_listener(l, 0, 0);
+		relax_listener(l, 0, 0);
 
 	/* Dequeues all of the listeners waiting for a resource */
 	dequeue_all_listeners();

src/cli.c

@@ -3035,7 +3035,7 @@ static int cli_parse_set_maxconn_frontend(char **args, char *payload, struct app
 	px->maxconn = v;
 	list_for_each_entry(l, &px->conf.listeners, by_fe) {
 		if (l->state == LI_FULL)
-			resume_listener(l, 1, 0);
+			relax_listener(l, 1, 0);
 	}
 
 	if (px->maxconn > px->feconn)