MINOR: tasks: catch TICK_ETERNITY with BUG_ON() in __task_queue()

__task_queue() must absolutely not be called with TICK_ETERNITY or it
will place a never-expiring node upfront in the timers queue, preventing
any timer from expiring until the process is restarted. Code was found
to cause this using "task_schedule(task, now_ms)", which does this for
one millisecond out of every 49.7 days (whenever now_ms wraps through
zero, i.e. TICK_ETERNITY), so let's add a condition against this. It
must never trigger, since any process susceptible to hitting it would
already be accumulating never-expiring tasks until it dies.
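
For illustration only, here is a minimal sketch of the two patterns
(the wake_me_soon() helper and its delay_ms argument are hypothetical,
not part of this commit): tick_add() skips the zero value, so it can
never return TICK_ETERNITY, whereas a raw now_ms passes through zero
once per 32-bit wrap:

/* Illustrative only: not part of this commit. Assumes HAProxy's
 * internal headers; the exact include list may differ per version.
 */
#include <haproxy/task.h>
#include <haproxy/ticks.h>

/* Hypothetical helper: arm <t> to be woken up in <delay_ms> ms. */
static inline void wake_me_soon(struct task *t, int delay_ms)
{
        /* Safe: tick_add() bumps a zero result to the next tick, so it
         * never returns TICK_ETERNITY and __task_queue() always gets a
         * real expiration date.
         */
        task_schedule(t, tick_add(now_ms, delay_ms));

        /* Broken: now_ms itself wraps through 0 (== TICK_ETERNITY) once
         * every 2^32 ms, i.e. every 49.7 days, and would then trip the
         * BUG_ON() added by this patch in __task_queue().
         */
        /* task_schedule(t, now_ms); */
}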

An extra test was added in wake_expired_tasks() to detect tasks whose
timeout would have been changed after being queued.

A future improvement could be to use a non-scalar type (union/struct)
for expiration dates, so as to avoid the risk of using them directly
like this. However, now_ms is itself such a valid time, so this
specific construct would still not be caught; a rough sketch of the
idea follows.
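
For illustration only, here is what such a non-scalar tick could look
like (struct ntick and task_schedule_at() are hypothetical names,
nothing like them exists in the tree), including the caveat above:
since now_ms would itself become one of these, the faulty construct
would still type-check:

/* Hypothetical opaque expiration date, for illustration only. */
struct ntick {
        int t;    /* raw tick value, not meant to be used directly */
};

/* Builds an expiration date <delay> ms after <now>; tick_add() keeps
 * the result away from TICK_ETERNITY.
 */
static inline struct ntick ntick_add(struct ntick now, int delay)
{
        struct ntick e = { .t = tick_add(now.t, delay) };
        return e;
}

/* A task_schedule() variant taking struct ntick rejects plain integers
 * at compile time. But since now_ms would itself have to become a
 * struct ntick (it is a valid time), task_schedule_at(task, now_ms)
 * would still compile, which is exactly the limitation noted above.
 */
static inline void task_schedule_at(struct task *task, struct ntick when)
{
        task_schedule(task, when.t);
}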

This could even be backported to stable versions to help detect other
occurrences, if any.
commit 7a9699916a
parent cb59e0bc3c
Author: Willy Tarreau
Date:   2021-09-30 16:38:09 +02:00

2 changed files with 13 additions and 3 deletions

@@ -553,6 +553,10 @@ static inline void tasklet_set_tid(struct tasklet *tl, int tid)
 /* Ensure <task> will be woken up at most at <when>. If the task is already in
  * the run queue (but not running), nothing is done. It may be used that way
  * with a delay : task_schedule(task, tick_add(now_ms, delay));
+ * It MUST NOT be used with a timer in the past, and even less with
+ * TICK_ETERNITY (which would block all timers). Note that passing it directly
+ * now_ms without using tick_add() will definitely make this happen once every
+ * 49.7 days.
  */
 static inline void task_schedule(struct task *task, int when)
 {

@@ -251,7 +251,7 @@ void __task_wakeup(struct task *t)
  *
  * Inserts a task into wait queue <wq> at the position given by its expiration
  * date. It does not matter if the task was already in the wait queue or not,
- * as it will be unlinked. The task must not have an infinite expiration timer.
+ * as it will be unlinked. The task MUST NOT have an infinite expiration timer.
  * Last, tasks must not be queued further than the end of the tree, which is
  * between <now_ms> and <now_ms> + 2^31 ms (now+24days in 32bit).
  *
@@ -268,6 +268,10 @@ void __task_queue(struct task *task, struct eb_root *wq)
                (wq == &sched->timers && (task->state & TASK_SHARED_WQ)) ||
                (wq != &timers && wq != &sched->timers));
 #endif
+        /* if this happens the process is doomed anyway, so better catch it now
+         * so that we have the caller in the stack.
+         */
+        BUG_ON(task->expire == TICK_ETERNITY);
 
         if (likely(task_in_wq(task)))
                 __task_unlink_wq(task);
@@ -341,7 +345,8 @@ void wake_expired_tasks()
                         __task_queue(task, &tt->timers);
                 }
                 else {
-                        /* task not expired and correctly placed */
+                        /* task not expired and correctly placed. It may not be eternal. */
+                        BUG_ON(task->expire == TICK_ETERNITY);
                         break;
                 }
         }
@@ -411,7 +416,8 @@ void wake_expired_tasks()
                         goto lookup_next;
                 }
                 else {
-                        /* task not expired and correctly placed */
+                        /* task not expired and correctly placed. It may not be eternal. */
+                        BUG_ON(task->expire == TICK_ETERNITY);
                         break;
                 }
         }