From c8da044b41991fbb56ba47b89a2d2d27f8ca6701 Mon Sep 17 00:00:00 2001
From: Willy Tarreau
Date: Mon, 15 Apr 2019 09:33:42 +0200
Subject: [PATCH] MINOR: tasks: restore the lower latency scheduling when
 niced tasks are present

In the past we used to reduce the number of tasks consulted at once
when some niced tasks were present in the run queue. This was dropped
in 1.8 when the scheduler started to take batches. With the recent
fixes it now becomes possible to restore this behaviour, which
guarantees a better latency between tasks when niced tasks are
present.

Thanks to this, with the default value of 200 for tune.runqueue-depth
and a parasitic load of 14000 requests per second, nice 0 gives 14000
rps, nice 1024 gives 12000 rps and nice -1024 gives 16000 rps. The
amplitude widens if the runqueue depth is lowered.
---
 src/task.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/task.c b/src/task.c
index cf72b55f8..4e6a151e7 100644
--- a/src/task.c
+++ b/src/task.c
@@ -325,6 +325,9 @@ void process_runnable_tasks()
 	nb_tasks_cur = nb_tasks;
 	max_processed = global.tune.runqueue_depth;
 
+	if (likely(niced_tasks))
+		max_processed = (max_processed + 3) / 4;
+
 	/* Note: the grq lock is always held when grq is not null */
 	while (task_per_thread[tid].task_list_size < max_processed) {
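
Note for readers (not part of the patch itself): below is a minimal
standalone sketch of the batch-size reduction the hunk introduces.
When any niced task is queued, the per-call budget drops to a quarter
of tune.runqueue-depth, rounded up. Only the (max_processed + 3) / 4
expression and the names runqueue_depth/niced_tasks come from the
patch; the batch_size() helper and the main() driver are invented
here purely for illustration.

/* Illustration only: how the scheduling budget shrinks when niced
 * tasks are present. Not HAProxy code.
 */
#include <stdio.h>

static unsigned int batch_size(unsigned int runqueue_depth,
                               unsigned int niced_tasks)
{
	unsigned int max_processed = runqueue_depth;

	/* same rounding-up division by 4 as in the patch */
	if (niced_tasks)
		max_processed = (max_processed + 3) / 4;
	return max_processed;
}

int main(void)
{
	/* with the default tune.runqueue-depth of 200, the batch
	 * drops from 200 to 50 as soon as one niced task exists
	 */
	printf("no niced tasks: %u\n", batch_size(200, 0)); /* 200 */
	printf("niced tasks   : %u\n", batch_size(200, 1)); /*  50 */
	return 0;
}

Smaller batches mean process_runnable_tasks() returns to the caller
more often, so priorities are re-evaluated more frequently and niced
tasks see their relative share adjusted sooner, at the cost of a bit
more scheduling overhead.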