Commit Graph

88 Commits

Author SHA1 Message Date
Willy Tarreau
6136989a22 MINOR: time: uninline report_idle() and move it to task.c
I don't know why I inlined this one; it makes no sense given that it's
only used for stats, and it starts a circular dependency on tinfo.h which
can be problematic in the future. In addition, all the stuff related to
idle time calculation should be with the rest of the scheduler, which
currently is in task.{c,h}, so let's move it there.
2021-10-01 18:37:50 +02:00
Willy Tarreau
beeabf5314 MINOR: task: provide 3 task_new_* wrappers to simplify the API
We'll need to improve the API to pass other arguments in the future, so
let's start adapting it better to the current use cases. task_new() is used:
  - 18 times as task_new(tid_bit)
  - 18 times as task_new(MAX_THREADS_MASK)
  - 2 times with a single bit (in a loop)
  - 1 in the debug code that uses a mask

This patch provides 3 new functions to achieve this:
  - task_new_here()     to create a task on the calling thread
  - task_new_anywhere() to create a task to be run anywhere
  - task_new_on()       to create a task to run on a specific thread

The change is trivial and will allow us to later concentrate the
required adaptations to these 3 functions only. It's still possible
to call task_new() if needed but a comment was added to encourage the
use of the new ones instead. The debug code was not changed and still
uses it.
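
For illustration, the three wrappers are essentially thin shells around
task_new(); a minimal sketch (not the exact patch) could look like this:

   static inline struct task *task_new_here(void)
   {
           /* run on the calling thread only */
           return task_new(tid_bit);
   }

   static inline struct task *task_new_anywhere(void)
   {
           /* may run on any thread */
           return task_new(MAX_THREADS_MASK);
   }

   static inline struct task *task_new_on(unsigned int thr)
   {
           /* run on thread <thr> only */
           return task_new(1UL << thr);
   }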
2021-10-01 18:36:29 +02:00
Willy Tarreau
6a2a912cb8 CLEANUP: tasks: remove the long-unused work_lists
Work lists were a mechanism introduced in 1.8 to asynchronously delegate
some work to be performed on another thread via a dedicated task.
The only user was the listeners, to deal with the queue. Nowadays
the tasklets have made this much more convenient, and have replaced
work_lists in the listeners. It seems there will be no valid use case
of work lists anymore, so better get rid of them entirely and keep the
scheduler code cleaner.
2021-10-01 18:30:14 +02:00
Willy Tarreau
7a9699916a MINOR: tasks: catch TICK_ETERNITY with BUG_ON() in __task_queue()
__task_queue() must absolutely not be called with TICK_ETERNITY or it
will place a never-expiring node upfront in the timers queue, preventing
any timer from expiring until the process is restarted. Code was found
to cause this using "task_schedule(task, now_ms)", which hits the
condition for one millisecond every 49.7 days (when now_ms wraps through
zero, i.e. TICK_ETERNITY), so let's add a check against this. The
BUG_ON() must never trigger in practice, since any process able to
trigger it would already accumulate never-expiring tasks until it dies.

An extra test was added in wake_expired_tasks() to detect tasks whose
timeout would have been changed after being queued.

A future improvement could be to use a non-scalar type (union/struct)
for expiration dates so as to avoid the risk of using them directly
like this. But now_ms is already such a valid time, so this specific
construct would still not be caught.

This could even be backported to stable versions to help detect other
occurrences if any.
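
For illustration, the added check boils down to a one-line assertion at
the top of __task_queue() (sketch, not the literal patch):

   /* never queue a task without a valid expiration date */
   BUG_ON(task->expire == TICK_ETERNITY);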
2021-09-30 17:09:39 +02:00
Tim Duesterhus
992007ec78 CLEANUP: tree-wide: fix prototypes for functions taking no arguments.
"f(void)" is the correct and preferred form for a function taking no
argument, while some places use the older "f()". These were reported
by clang's -Wmissing-prototypes, for example:

  src/cpuset.c:111:5: warning: no previous prototype for function 'ha_cpuset_size' [-Wmissing-prototypes]
  int ha_cpuset_size()
  include/haproxy/cpuset.h:42:5: note: this declaration is not a prototype; add 'void' to make it a prototype for a zero-parameter function
  int ha_cpuset_size();
      ^
                     void

This aggregate patch fixes this for the following functions:

   ha_backtrace_to_stderr(), ha_cpuset_size(), ha_panic(), ha_random64(),
   ha_thread_dump_all_to_trash(), get_exec_path(), check_config_validity(),
   mworker_child_nb(), mworker_cli_proxy_(create|stop)(),
   mworker_cleantasks(), mworker_cleanlisteners(), mworker_ext_launch_all(),
   mworker_reload(), mworker_(env|proc_list)_to_(proc_list|env)(),
   mworker_(un|)block_signals(), proxy_adjust_all_maxconn(),
   proxy_destroy_all_defaults(), get_tainted(),
   pool_total_(allocated|used)(), thread_isolate(_full|)(),
   thread(_sync|)_release(), thread_harmless_till_end(),
   thread_cpu_mask_forced(), dequeue_all_listeners(), next_timer_expiry(),
   wake_expired_tasks(), process_runnable_tasks(), init_acl(),
   init_buffer(), (de|)init_log_buffers(), (de|)init_pollers(),
   fork_poller(), pool_destroy_all(), pool_evict_from_local_caches(),
   pool_total_failures(), dump_pools_to_trash(), cfg_run_diagnostics(),
   tv_init_(process|thread)_date(), __signal_process_queue(),
   deinit_signals(), haproxy_unblock_signals()
2021-09-15 11:07:18 +02:00
Amaury Denoyelle
7b368339af MEDIUM: task: implement tasklet kill
Implement an equivalent of task_kill for tasklets. This function can be
used to request a tasklet deletion in a thread-safe way.

Currently this function is unused.
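
Once it gets used, a call is expected to look like this (sketch assuming
the prototype mirrors task_kill(); tl is a hypothetical tasklet pointer):

   /* safe from any thread; tl must not be reused afterwards */
   tasklet_kill(tl);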
2021-08-06 11:07:48 +02:00
Willy Tarreau
2b71810cb3 CLEANUP: lists/tree-wide: rename some list operations to avoid some confusion
The current "ADD" vs "ADDQ" is confusing because when thinking in terms
of appending at the end of a list, "ADD" naturally comes to mind, but
here it does the opposite, it inserts. Several times already it's been
incorrectly used where ADDQ was expected, the latest of which was a
fortunate accident explained in 6fa922562 ("CLEANUP: stream: explain
why we queue the stream at the head of the server list").

Let's use more explicit (but slightly longer) names now:

   LIST_ADD        ->       LIST_INSERT
   LIST_ADDQ       ->       LIST_APPEND
   LIST_ADDED      ->       LIST_INLIST
   LIST_DEL        ->       LIST_DELETE

The same is true for MT_LISTs, including their "TRY" variant.
LIST_DEL_INIT keeps its short name to encourage its use instead of the
lazier LIST_DELETE which is often less safe.

The change is large (~674 non-comment entries) but is mechanical enough
to remain safe. No permutation was performed, so any out-of-tree code
can easily map older names to new ones.

The list doc was updated.
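
A typical call site changes like this (illustrative list and element
names):

   /* before: "ADD" actually inserted at the head */
   LIST_ADD(&queue, &item->list);       /* insert at head */
   LIST_ADDQ(&queue, &item->list);      /* append at tail */

   /* after: the intent is explicit */
   LIST_INSERT(&queue, &item->list);    /* insert at head */
   LIST_APPEND(&queue, &item->list);    /* append at tail */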
2021-04-21 09:20:17 +02:00
Willy Tarreau
4781b1521a CLEANUP: atomic/tree-wide: replace single increments/decrements with inc/dec
This patch replaces roughly all occurrences of an HA_ATOMIC_ADD(&foo, 1)
or HA_ATOMIC_SUB(&foo, 1) with the equivalent HA_ATOMIC_INC(&foo) and
HA_ATOMIC_DEC(&foo) respectively. These are 507 changes over 45 files.
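
The transformation is purely mechanical, e.g. with an illustrative
counter:

   HA_ATOMIC_ADD(&jobs, 1);   /* becomes */   HA_ATOMIC_INC(&jobs);
   HA_ATOMIC_SUB(&jobs, 1);   /* becomes */   HA_ATOMIC_DEC(&jobs);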
2021-04-07 18:18:37 +02:00
Willy Tarreau
1db427399c CLEANUP: atomic: add an explicit _FETCH variant for add/sub/and/or
Currently our atomic ops return a value but it's never known whether
the fetch is done before or after the operation, which causes some
confusion each time the value is desired. Let's create an explicit
variant of these operations suffixed with _FETCH to explicitly mention
that the fetch occurs after the operation, and make use of it at the
few call places.
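
As an example, when the value after the operation is needed (variable and
helper names are illustrative):

   /* curr is the value *after* the increment, like __atomic_add_fetch() */
   unsigned int curr = HA_ATOMIC_ADD_FETCH(&nb_users, 1);
   if (curr == 1)
           setup_first_user();   /* hypothetical one-time initialization */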
2021-04-07 18:18:37 +02:00
Willy Tarreau
144f84a09d MEDIUM: task: extend the state field to 32 bits
It's been too short for quite a while now and is now full. Let's
extend it to 32 bits since we have room for this without wasting any
space, gaining 16 new bits for future flags.

The values were not reassigned just in case there would be a few
hidden u16 or short somewhere in which these flags are placed (as
it used to be the case with stream->pending_events).

The patch is tagged MEDIUM because this required updating the task's
process() prototype to use an int instead of a short, which touches
quite a bunch of places.
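
The prototype change itself is small; illustrative before/after:

   /* before: the state was truncated to a short */
   struct task *(*process)(struct task *t, void *ctx, unsigned short state);

   /* after: the full 32-bit state is passed */
   struct task *(*process)(struct task *t, void *ctx, unsigned int state);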
2021-03-05 08:30:08 +01:00
Willy Tarreau
e0d5942ddd MINOR: task: move the nice field to the struct task only
The nice field isn't needed anymore for the tasklet so we can move it
from the TASK_COMMON area into the struct task which already has a
hole around the expire entry.
2021-03-05 08:30:08 +01:00
Willy Tarreau
db4e238938 MINOR: task: stop abusing the nice field to detect a tasklet
It's cleaner to use a flag from the task's state to detect a tasklet
and it's even cheaper. One of the best benefits is that this will
allow us to move the nice field out of the common part since the
tasklet doesn't need it anymore. This commit uses the last task bit
available, but that's temporary as the purpose of the change is to
extend this field.
2021-03-05 08:30:08 +01:00
Willy Tarreau
b2285de049 MINOR: tasks: also compute the tasklet latency when DEBUG_TASK is set
It is extremely useful to be able to observe the wakeup latency of some
important I/O operations, so let's accept inflating the tasklet struct
by 8 extra bytes when DEBUG_TASK is set. With just this we have enough
to get live reports like this:

  $ socat - /tmp/sock1 <<< "show profiling"
  Per-task CPU profiling              : on      # set profiling tasks {on|auto|off}
  Tasks activity:
    function                      calls   cpu_tot   cpu_avg   lat_tot   lat_avg
    si_cs_io_cb                 8099492   4.833s    596.0ns   8.974m    66.48us
    h1_io_cb                    7460365   11.55s    1.548us   2.477m    19.92us
    process_stream              7383828   22.79s    3.086us   18.39m    149.5us
    h1_timeout_task                4157      -         -      348.4ms   83.81us
    srv_cleanup_toremove_connections 751   39.70ms   52.86us   10.54ms   14.04us
    srv_cleanup_idle_connections     21   1.405ms   66.89us   30.82us   1.467us
    task_run_applet                  16   1.058ms   66.13us   446.2us   27.89us
    accept_queue_process              7   34.53us   4.933us   333.1us   47.58us
2021-02-25 09:44:16 +01:00
Willy Tarreau
45499c56d3 MINOR: task: make grq_total atomic to move it outside of the grq_lock
Instead of decrementing grq_total once per task picked from the global
run queue, let's do it at once after the loop like we do for other
counters. This simplifies the code everywhere. It is not expected to
bring noticeable improvements however, since global tasks tend to be
less common nowadays.
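
The pattern is the usual batching of atomic updates outside of a loop; as
a sketch (the local counter and picking function are illustrative):

   while (budget-- && (t = pick_from_global_rq()))
           picked++;                /* count locally, no atomics here */
   if (picked)
           _HA_ATOMIC_SUB(&grq_total, picked);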
2021-02-25 09:44:16 +01:00
Willy Tarreau
c03fbeb358 CLEANUP: task: re-merge __task_unlink_rq() with task_unlink_rq()
There's no point keeping the two separate anymore, some tests are
duplicated for no reason.
2021-02-25 09:44:16 +01:00
Willy Tarreau
9c6dbf0eea CLEANUP: task: split the large tasklet_wakeup_on() function in two
This function has become large with the multi-queue scheduler. We need
to keep the fast path and the debugging parts inlined, but the rest now
moves to task.c just like what was done for task_wakeup(). This has reduced
the code size by 6kB due to less inlining of large parts that are always
context-dependent, and as a side effect, has increased the overall
performance by 1%.
2021-02-24 17:55:58 +01:00
Willy Tarreau
955a11ebfa MINOR: task: move the allocated tasks counter to the per-thread struct
The nb_tasks counter was still global, incremented and decremented
for each task_new()/task_free(), and read in process_runnable_tasks().
But it's only used for stats reporting, so updating it this often is
pointless and expensive. Let's move it to the task_per_thread struct and
have the stats sum it when needed.
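
Reading the total then becomes a cheap sum over the threads, performed
only when stats are dumped; a sketch (field names as assumptions):

   unsigned int total = 0;
   for (thr = 0; thr < global.nbthread; thr++)
           total += task_per_thread[thr].nb_tasks;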
2021-02-24 17:42:04 +01:00
Willy Tarreau
018564eaa2 CLEANUP: task: move the tree root detection from __task_wakeup() to task_wakeup()
Historically we used to call __task_wakeup() with a known tree root,
but this is no longer the case, and the code has remained needlessly
complicated, with the root computed in task_wakeup() and passed as an
argument to __task_wakeup(), which compares it again.

Let's get rid of this and just move the detection code there. This
eliminates some ifdefs and allows us to simplify the test conditions quite
a bit.
2021-02-24 17:42:04 +01:00
Willy Tarreau
2c41d77ebc MINOR: tasks: do not maintain the rqueue_size counter anymore
This one is exclusively used as a boolean nowadays and is non-zero only
when the thread-local run queue is not empty. It's better to check the
tree's root pointer directly and avoid updating this counter all the time.
2021-02-24 17:42:04 +01:00
Willy Tarreau
9c7b8085f4 MEDIUM: task: remove the tasks_run_queue counter and have one per thread
This counter is solely used for reporting in the stats and is the hottest
thread contention point to date. Moving it to the scheduler and having a
separate one for the global run queue dramatically improves the performance,
showing a 12% boost on the request rate on 16 threads!

In addition, the thread debugging output which used to rely on rqueue_size
was not totally accurate as it would only report task counts. Now we can
return the exact thread's run queue length.

It is also interesting to note that there are still a few other task/tasklet
counters in the scheduler that are not efficiently updated because some cover
a single area and others cover multiple areas. It looks like having a distinct
counter for each of the following entries would help and would keep the code
a bit cleaner:
  - global run queue (tree)
  - per-thread run queue (tree)
  - per-thread shared tasklets list
  - per-thread local lists

Maybe even splitting the shared tasklets list between pure tasklets and
tasks, instead of having a single list holding both, would simplify the
code because there remain a number of places where several counters have
to be updated.
2021-02-24 17:42:04 +01:00
Willy Tarreau
ca9f60c1ac MINOR: tasks/debug: add some extra controls of use-after-free in DEBUG_TASK
It's pretty easy to pre-initialize the index, change it on free() and check
it during the wakeup, so let's do this to ease detection of any accidental
task_wakeup() after a task_free() or tasklet_wakeup() after a tasklet_free().
If this ever happens we'll now get a backtrace and a core. The index's
parity is respected so that the call history remains exploitable.
2021-02-18 14:38:49 +01:00
Willy Tarreau
b23f04260b MINOR: tasks: add DEBUG_TASK to report caller info in a task
The idea is to know who woke a task up, by recording the last two
callers in a rotating mode. For now it's trivial with task_wakeup()
but tasklet_wakeup_on() will require quite a few more changes.

This typically gives this from the debugger:

  (gdb) p t->debug
  $2 = {
    caller_file = {0x0, 0x8c0d80 "src/task.c"},
    caller_line = {0, 260},
    caller_idx = 1
  }

or this:

  (gdb) p t->debug
  $6 = {
    caller_file = {0x7fffe40329e0 "", 0x885feb "src/stream.c"},
    caller_line = {284, 284},
    caller_idx = 1
  }

But it also provides a trivial macro allowing one to simply place a call
in a task/tasklet handler that needs to be observed:

   DEBUG_TASK_PRINT_CALLER(t);

Then starting haproxy this way would trivially yield such info:

  $ ./haproxy -db -f test.cfg | sort | uniq -c | sort -nr
   199992 h1_io_cb woken up from src/sock.c:797
    51764 h1_io_cb woken up from src/mux_h1.c:3634
       65 h1_io_cb woken up from src/connection.c:169
       45 h1_io_cb woken up from src/sock.c:777
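
From the dumps above, the recorded structure can be pictured as a
two-entry rotating buffer (sketch inferred from the debugger output, not
the literal declaration):

   struct {
           const char *caller_file[2];   /* last two wakeup call sites */
           int caller_line[2];
           int caller_idx;               /* index of the latest entry */
   } debug;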
2021-02-18 10:42:07 +01:00
Willy Tarreau
4d6c594998 BUG/MEDIUM: task: close a possible data race condition on a tasklet's list link
In issue #958 Ashley Penney reported intermittent crashes on AWS's ARM
nodes which would not happen on x86 nodes. After investigation it turned
out that the Neoverse N1 CPU cores used in the Graviton2 CPU are much
more aggressive than the usual Cortex A53/A72/A55 or any x86 regarding
memory ordering.

The issue that was triggered there is that if a tasklet_wakeup() call
is made on a tasklet scheduled to run on a foreign thread and that
tasklet is just being dequeued to be processed, there can be a race at
two places:
  - if MT_LIST_TRY_ADDQ() happens between MT_LIST_BEHEAD() and
    LIST_SPLICE_END_DETACHED() if the tasklet is alone in the list,
    because the emptiness test matches;

  - if MT_LIST_TRY_ADDQ() happens during LIST_DEL_INIT() in
    run_tasks_from_lists(), then depending on how LIST_DEL_INIT() ends
    up being implemented, it may even corrupt the adjacent nodes while
    they're being reused for the in-tree storage.

This issue was introduced in 2.2 when support for waking up remote
tasklets was added. Initially the attachment of a tasklet to a list
was enough to know its status and this used to be stable information.
Now it's not sufficient to rely on this anymore, thus we need to use
a different piece of information.

This patch solves this by adding a new task flag, TASK_IN_LIST, which
is atomically set before attaching a tasklet to a list, and is only
removed after the tasklet is detached from a list. It is checked
by tasklet_wakeup_on() so that it may only be done while the tasklet
is out of any list, and is cleared during the state switch when calling
the tasklet. Note that the flag is not set for pure tasks as it's not
needed.

However this introduces a new special case: the function
tasklet_remove_from_tasklet_list() needs to keep both states in sync
and cannot check both the state and the attachment to a list at the
same time. This function is already limited to being used by the thread
owning the tasklet, so in this case the test remains reliable. However,
just like its predecessors, this function is wrong by design and it
should probably be replaced with a stricter one, a lazy one, or be
totally removed (it's only used in checks to avoid calling a possibly
scheduled event, and when freeing a tasklet). Regardless, for now the
function exists so the flag is removed only if the deletion could be
done, which covers all cases we're interested in regarding the insertion.
This removal is safe against a concurrent tasklet_wakeup_on() since
MT_LIST_DEL() guarantees the atomic test, and will ultimately clear
the flag only if the task could be deleted, so the flag will always
reflect the last state.

This should be carefully backported as far as 2.2 after some
observation period. This patch depends on previous patch
"MINOR: task: remove __tasklet_remove_from_tasklet_list()".
2020-11-30 18:17:59 +01:00
Willy Tarreau
2da4c316c2 MINOR: task: remove __tasklet_remove_from_tasklet_list()
This function is only used at a single place directly within the
scheduler in run_tasks_from_lists() and it really ought not be called
by anything else, regardless of what its comment says. Let's delete
it, move the two lines directly into the call place, and take this
opportunity to factor the atomic decrement on tasks_run_queue. A comment
was added on the remaining function, tasklet_remove_from_tasklet_list(), to
mention the risks in using it.
2020-11-30 18:17:44 +01:00
Willy Tarreau
a868c2920b MINOR: task: remove tasklet_insert_into_tasklet_list()
This function is only called at a single place and adds more confusion
than it removes. It also makes one think it could be used outside of
the scheduler while it absolutely must not be. Let's just move its two
lines to the call place, making the code more readable there. In
addition this clearly shows that the preliminary LIST_INIT() is
useless since the entry is immediately overwritten.
2020-11-30 18:17:44 +01:00
Willy Tarreau
69a7b8fc6c CLEANUP: task: remove the unused and mishandled global_rqueue_size
This counter is only updated and never used, and in addition it's done
without any atomicity so it's very unlikely to be correct on multi-CPU
systems! Let's just remove it since it's not used.
2020-10-19 14:08:13 +02:00
Willy Tarreau
f2452b3c70 MINOR: tasks/debug: add a BUG_ON() check to detect requeued task on free
__task_free() cannot be called with a task still in the queue. This
test adds a check which confirms there is no concurrency issue in such
a case where a thread could requeue or wake up a task being freed.
2020-07-22 14:42:52 +02:00
Willy Tarreau
e5d79bccc0 MINOR: tasks/debug: add a few BUG_ON() to detect use of wrong timer queue
This aims at catching calls to task_unlink_wq() performed by the wrong
thread based on the shared status for the task, as well as calls to
__task_queue() with the wrong timer queue being used based on the task's
capabilities. This will at least help eliminate some hypothesis during
debugging sessions when suspecting that a wrong thread has attempted to
queue a task at the wrong place.
2020-07-22 14:42:52 +02:00
Willy Tarreau
2447bce554 MINOR: tasks/debug: make the thread affinity BUG_ON check a bit stricter
The BUG_ON() test in task_queue() only tests for the case where
we're queuing a task that doesn't run on the current thread. Let's
refine it a bit further to catch all cases where the task does not
run *exactly* on the current thread alone.
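
Illustrative before/after for the assertion (sketch):

   /* before: only catches tasks not bound to the current thread at all */
   BUG_ON(!(task->thread_mask & tid_bit));

   /* after: the task must run on the current thread, and on it alone */
   BUG_ON(task->thread_mask != tid_bit);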
2020-07-22 14:22:38 +02:00
Willy Tarreau
de4db17dee MINOR: lists: rename some MT_LIST operations to clarify them
Initially when mt_lists were added, their purpose was to be used with
the scheduler, where anyone may concurrently add the same tasklet, so
it sounded natural to implement a check in MT_LIST_ADD{,Q}. Later their
usage was extended and MT_LIST_ADD{,Q} started to be used in situations
where the element to be added was exclusively owned by the one performing
the operation so a conflict was impossible. This became more obvious with
the idle connections and the new macro was called MT_LIST_ADDQ_NOCHECK.

But this remains confusing and at many places it's not expected that
an MT_LIST_ADD could possibly fail, and worse, at some places we start
by initializing it before adding (and the test is superfluous) so let's
rename them to something more conventional to denote the presence of the
check or not:

   MT_LIST_ADD{,Q}    : unconditional operation, the caller owns the
                        element, and doesn't care about the element's
                        current state (exactly like LIST_ADD)
   MT_LIST_TRY_ADD{,Q}: only perform the operation if the element is not
                        already added or in the process of being added.

This means that the previously "safe" MT_LIST_ADD{,Q} are not "safe"
anymore. This also means that in case of backport mistakes in the
future causing this to be overlooked, the slower and safer functions
will still be used by default.

Note that the missing unchecked MT_LIST_ADD macro was added.

The rest of the code will have to be reviewed so that a number of
callers of MT_LIST_TRY_ADDQ are changed to MT_LIST_ADDQ to remove
the unneeded test.
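
As an example of the intended split (illustrative list and element names):

   /* element exclusively owned by the caller: unconditional, like LIST_ADDQ */
   MT_LIST_ADDQ(&queue, &item->list);

   /* element that may be queued concurrently (e.g. a tasklet): only
    * append it if it's not already attached or being attached
    */
   if (!MT_LIST_TRY_ADDQ(&shared_queue, &tl->list))
           return;   /* already queued by someone else */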
2020-07-10 08:50:41 +02:00
Willy Tarreau
43079e0731 MINOR: sched: split tasklet_wakeup() into tasklet_wakeup_on()
tasklet_wakeup() only checks tl->tid to know whether the task is
programmed to run on the current thread or on a specific thread. We'll
have to ease this selection in a subsequent patch, preferably without
modifying tl->tid, so let's have a new tasklet_wakeup_on() function
to specify the thread number to run on. Note that the logic has not
changed at all.
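
tasklet_wakeup() then simply becomes a trivial wrapper (sketch consistent
with the description above):

   static inline void tasklet_wakeup(struct tasklet *tl)
   {
           /* wake the tasklet on its currently programmed thread */
           tasklet_wakeup_on(tl, tl->tid);
   }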
2020-07-03 17:19:47 +02:00
Willy Tarreau
eb8c2c69fa MEDIUM: sched: implement task_kill() to kill a task
task_kill() may be used by any thread to kill any task with less overhead
than a regular wakeup. In order to achieve this, it bypasses the priority
tree and inserts the task directly into the shared tasklets list, cast as
a tasklet. The task_list_size is updated to make sure it is properly
decremented after execution of this task. The task will thus be picked by
process_runnable_tasks() after checking the tree and sent to the TL_URGENT
list, where it will be processed and killed.

If the task is bound to more than one thread, its first thread will be the
one notified.

If the task was already queued or running, nothing is done, only the flag
is added so that it gets killed before or after execution. Of course it's
the caller's responsibility to make sure any resources allocated by this
task were already cleaned up or taken over.
2020-07-01 16:35:53 +02:00
Willy Tarreau
59153fef86 MINOR: tasks: make run_tasks_from_lists() scan the queues itself
Now process_runnable_tasks is responsible for calculating the budgets
for each queue, dequeuing from the tree, and calling run_tasks_from_lists().
The latter scans the queues, picking tasks there and respecting budgets.
Note that its name was updated with a plural "s" for this reason.
2020-06-24 12:21:26 +02:00
Willy Tarreau
ba48d5c8f9 MINOR: tasks: pass the queue index to run_task_from_list()
Instead of passing it a pointer to the queue, pass it the queue's index
so that it can perform all the work around current_queue and tl_class_mask.
2020-06-24 12:21:26 +02:00
Willy Tarreau
49f90bf148 MINOR: tasks: add a mask of the queues with active tasklets
It is neither convenient nor scalable to check each and every tasklet
queue to figure whether it's empty or not while we often need to check
them all at once. This patch introduces a tasklet class mask which gets
one bit set per queue, each representing one class of service. A single
test on the mask allows one to figure out whether there's still some work
to be done. It will later be usable to better factor the runqueue code.

Bits are set when tasklets are queued. They're cleared when queues are
emptied. It is possible that a queue is empty but still has its bit set
if a tasklet was added then removed, but this is not a problem as this
is properly checked for in run_tasks_from_list().
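
The mechanism reduces to basic bit operations; a sketch (the field name
tl_class_mask comes from this series, the queue index name is assumed):

   /* when a tasklet is queued: mark its class as active */
   sched->tl_class_mask |= 1 << queue_idx;

   /* a single test tells whether any class still has work */
   if (!sched->tl_class_mask)
           return;   /* all tasklet queues are empty */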
2020-06-24 12:21:26 +02:00
Willy Tarreau
c0a08ba2df MINOR: tasks: make current_queue an index instead of a pointer
It will be convenient to have the tasklet queue number soon, better make
current_queue an index rather than a pointer to the queue. When not currently
running (e.g. from I/O), the index is -1.
2020-06-24 12:21:26 +02:00
Willy Tarreau
5bd73063ab BUG/MEDIUM: task: be careful not to run too many tasks at TL_URGENT
A test on large objects revealed a big performance loss from 2.1. The
cause was found to be related to cache locality between scheduled
operations that are batched using tasklets. It happens that we now
have several layers of tasklets and that queuing all these operations
leaves time for memory objects to cool down in the CPU cache, effectively
halving the performance.

A quick test consisting in putting most unknown tasklets into the BULK
queue almost fixed the performance regression, but this is a wrong
approach as it can also slow down some low-latency transfers or access
to applets like the CLI.

What this patch does instead is to queue unknown tasklets into the same
queue as the current one when tasklet_wakeup() is itself called from a
task/tasklet; otherwise it uses the URGENT queue for real I/O (when
sched->current is NULL). This results in the called tasklet being woken
up much sooner, often at the end of the current batch of tasklets.
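
The selection logic summarizes as follows (sketch):

   /* inherit the caller's class, or use URGENT when woken from I/O */
   if (sched->current)
           queue_idx = sched->current_queue;  /* woken from a task/tasklet */
   else
           queue_idx = TL_URGENT;             /* woken from real I/O */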

By doing so, a test on 2 cores 4 threads with 256 concurrent H1 conns
transferring 16m objects with 256kB buffers jumped from 55 to 88 Gbps.
It's even possible to go as high as 101 Gbps by evaluating the URGENT
queue after the BULK one, though this was not done as it was considered
dangerous for latency-sensitive operations.

This reinforces the importance of getting back the CPU transfer
mechanisms based on tasklet_wakeup_after() to work at the tasklet level
by supporting an immediate wakeup in certain cases.

No backport is needed, this is strictly 2.2.
2020-06-23 16:45:28 +02:00
Willy Tarreau
cea0e1bb19 REORG: include: move task.h to haproxy/task{,-t}.h
The TASK_IS_TASKLET() macro was moved to the proto file instead of the
type one. The proto part was a bit reordered to remove a number of ugly
forward declarations of static inline functions. About ten C and H
files had their dependency dropped since they were not using anything
from task.h.
2020-06-11 10:18:58 +02:00