Commit Graph

116 Commits

Author SHA1 Message Date
Aurelien DARRAGON
abb8412d20 DEBUG: pollers: add name hint for large memory areas used by pollers
Thanks to ("MINOR: tools: add vma_set_name() helper"), set a name hint
for large memory areas allocated by pollers upon init so that they can
be easily identified in /proc/<pid>/maps.

For now, only linux-compatible pollers are considered since vma_set_name()
requires a recent linux kernel (>= 5.17).

Depending on malloc() implementation, such memory areas will normally be
merged on the heap under MMAP_THRESHOLD (128 kB by default) and will
have a dedicated memory area once the threshold is exceeded. As such, when
large enough, they will appear like this in /proc/<pid>/maps:

7ec6b2d40000-7ec6b2d61000 rw-p 00000000 00:00 0                          [anon:ev_poll:fd_evts_wr]
7ec6b2d61000-7ec6b2d82000 rw-p 00000000 00:00 0                          [anon:ev_poll:fd_evts_rd]
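
For reference, a minimal sketch of how such a name hint can be applied, assuming vma_set_name() ultimately relies on the PR_SET_VMA_ANON_NAME prctl (the helper below is illustrative, not HAProxy's actual implementation):

  #include <stddef.h>
  #include <sys/prctl.h>

  /* Tag an anonymous mapping so it shows up with a name in
   * /proc/<pid>/maps. Requires Linux >= 5.17; on older kernels or
   * headers the hint is silently dropped. */
  static void name_poller_area(void *area, size_t size, const char *name)
  {
  #if defined(PR_SET_VMA) && defined(PR_SET_VMA_ANON_NAME)
          prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME,
                (unsigned long)area, size, (unsigned long)name);
  #else
          (void)area; (void)size; (void)name;
  #endif
  }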
2024-05-21 17:55:14 +02:00
Matthias Wirth
eea152ee68 BUG/MINOR: signals/poller: ensure wakeup from signals
Add self-wake in signal_handler() to fix a race condition with a signal
coming in between checking signal_queue_len and entering polling sleep.

The changes in commit 43c891dda ("BUG/MINOR: signals/poller: set the
poller timeout to 0 when there are signals") were insufficient.

Move the signal_queue_len check from the poll implementations to
run_poll_loop() to keep that logic in one place.

The poll loops are terminated either by the wake parameter being set or
by a wake-up caused by a write to their poller_wr_pipe from wake_thread()
in signal_handler().

This fixes issue #1841.

Must be backported in every stable version.
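
A hedged sketch of the self-wake idea described above (poller_wr_pipe[] and tid are existing HAProxy internals referenced here; the handler body itself is illustrative only):

  #include <unistd.h>

  extern int poller_wr_pipe[];        /* one write end per thread (assumed layout) */
  extern __thread unsigned int tid;   /* ID of the current thread (assumed) */

  static void sig_selfwake(int sig)
  {
          char c = (char)sig;

          /* Writing one byte makes a poller already committed to sleeping
           * return from its syscall immediately, closing the race between
           * the signal_queue_len check and entering the sleep. */
          if (write(poller_wr_pipe[tid], &c, 1) < 0) {
                  /* nothing useful can be done on failure */
          }
  }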
2022-09-09 11:15:22 +02:00
William Lallemand
43c891dda0 BUG/MINOR: signals/poller: set the poller timeout to 0 when there are signals
When receiving a signal before entering the poller, and without any
activity in the process, the poller will be entered with a timeout
calculated without checking the signals.

Since commit 4f59d3 ("MINOR: time: increase the minimum wakeup interval
to 60s") the issue is much more visible because it could be stuck for
60s.

When in mworker mode, if a worker quits and the SIGCHLD signal is
delivered to the master at just the right time, the master could be stuck
for the duration of the timeout.

This should fix issue #1841

Must be backported in every stable version.
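
The intent of the fix can be summarized by this small, hedged fragment (the surrounding poller code and variable names are assumed):

  int wait_time = compute_poll_timeout(next);

  /* a signal that arrived before entering the poller must not be delayed
   * by a long sleep: force a zero timeout so it is handled right away */
  if (signal_queue_len)
          wait_time = 0;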
2022-09-08 17:46:31 +02:00
Willy Tarreau
3638d174e5 MEDIUM: fd: make thread_mask now represent group-local IDs
With the change that was started on other masks, the thread mask was
still not fully converted, sometimes being used as a global mask and
sometimes as a local one. This finishes the code modifications so that
the mask is always considered as a group-local mask. This doesn't
change anything as long as there's a single group, but is necessary
for groups 2 and above since it's used against running_mask and so on.
2022-07-15 20:16:30 +02:00
Willy Tarreau
6d3c501c08 MEDIUM: fd/poller: turn update_mask to group-local IDs
From now on, the FD's update_mask only refers to local thread IDs. However,
there remains a limitation: in updt_fd_polling(), we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
2022-07-15 20:16:30 +02:00
Willy Tarreau
63022128a5 MEDIUM: fd/poller: turn polled_mask to group-local IDs
This changes the signification of each bit in the polled_mask so that
now each bit represents a local thread ID for the current group instead
of a global thread ID. As such, all tests now apply to ltid_bit instead
of tid_bit.

No particular check was made to verify that the FD's tgid matches the
current one because there should be no case where this is not true. A
check was added in epoll's __fd_clo() to confirm it never differs unless
expected (soft stop under thread isolation, or master in starting mode
going to exec mode), but that doesn't prevent the job from being done: it
only consists in checking, among the group's threads, which ones are still
polling this FD and removing them.

Some atomic loads were added at the various locations, and most repetitive
references to polled_mask[fd].xx were replaced with a local copy, making
the code much clearer.
2022-07-15 20:16:30 +02:00
Willy Tarreau
35ee710ece MEDIUM: fd/poller: make the update-list per-group
The update-list needs to be per-group because its inspection is based
on a mask and we need to be certain when scanning it if a mask is for
the same thread or another one. Once per-group there's no doubt about
it, even if the FD's polling changes, the entry remains valid. It will
be needed to check the tgid though.

Note that a soft-stop or pause/resume might not necessarily work here
with tgroups>1, because the operation might be delivered to a thread
that doesn't belong to the group and whose update mask will not reflect
one that is interesting here. We can't do better at this stage.
2022-07-15 19:57:28 +02:00
Willy Tarreau
e5715bface MEDIUM: poller: disable thread-groups for poll() and select()
These old legacy pollers are not designed for this. They're still
using a shared list of events for all threads, this will not scale at
all, so there's no point in enabling thread-groups there. Modern
systems have epoll, kqueue or event ports and do not need these ones.

We arrange for this to fail at boot time, but only when thread-groups > 1,
so that existing setups will remain unaffected.

If there's a compelling reason for supporting thread groups with these
pollers in the future, the rework should not be too hard, it would just
consume a lot of memory to have an fd_evts[] array per thread, but that
is doable.
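
A hedged sketch of what such a boot-time refusal can look like (the message wording and field names are assumptions):

  static int _do_init(struct poller *p)
  {
          if (global.nbtgroups > 1) {
                  ha_alert("poller '%s' does not support thread groups; "
                           "please use epoll, kqueue or evports, or keep "
                           "thread-groups at 1.\n", p->name);
                  return 0;   /* init failure: this poller is skipped */
          }
          /* ... regular initialization (fd_evts[] allocation, etc.) ... */
          return 1;
  }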
2022-07-15 19:43:10 +02:00
Willy Tarreau
b1093c6ba2 MEDIUM: poller: program the update in fd_update_events() for a migrated FD
When an FD is migrated, all pollers program an update. That's useless
code duplication, and once thread groups are supported, this will
require an extra round of locking just to verify the update_mask on
return. Let's just program the update directly from fd_update_events()
as it already does for closed FDs, which is more logical.
2022-07-15 19:43:10 +02:00
Willy Tarreau
058b2c1015 MINOR: poller: centralize poll return handling
When returning from the polling syscall, all pollers have a certain
dance to follow, made of wall clock updates, thread harmless updates,
idle time management and sleeping mask updates. Let's have a centralized
function to deal with all of this boring stuff: fd_leaving_poll(), and
make all the pollers use it.
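
Based on the steps listed above, the shared epilogue can be sketched roughly as follows (the exact contents are an assumption, not the real implementation):

  static inline void fd_leaving_poll(int wait_time, int status)
  {
          clock_leaving_poll(wait_time, status);            /* wall clock + idle time */
          thread_harmless_end();                            /* leave the "harmless" state */
          thread_idle_end();                                /* the thread is busy again */
          _HA_ATOMIC_AND(&sleeping_thread_mask, ~tid_bit);  /* clear the sleeping bit */
  }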
2022-07-01 19:15:14 +02:00
Willy Tarreau
740d749d77 BUILD: pollers: use an initcall to register the pollers
Pollers are among the few remaining blocks still using constructors
to register themselves. That's not needed anymore since the introduction
of initcalls, so better switch them to initcalls.
2022-04-25 19:00:55 +02:00
Willy Tarreau
20adfde9c8 MINOR: activity: get the run_time from the clock updates
Instead of fiddling with before_poll and after_poll in
activity_count_runtime(), the function is now called by
clock_entering_poll() which passes it the number of microseconds
spent working. This makes it possible to remove all calls to
activity_count_runtime() from the pollers.
2021-10-08 17:22:26 +02:00
Willy Tarreau
f9d5e1079c REORG: clock: move the updates of cpu/mono time to clock.c
The entering_poll/leaving_poll/measure_idle functions that were hard
to classify and used to move to various locations have now been placed
into clock.c since it's precisely about time-keeping. The functions
were renamed to clock_*. The samp_time and idle_time values are now
static since there is no reason for them to be read from outside.
2021-10-08 17:22:26 +02:00
Willy Tarreau
5554264f31 REORG: time: move time-keeping code and variables to clock.c
There is currently a problem related to time keeping. We're mixing
the functions to perform calculations with the os-dependent code
needed to retrieve and adjust the local time.

This patch extracts from time.{c,h} the parts that are solely dedicated
to time keeping. These are the "now" or "before_poll" variables for
example, as well as the various now_*() functions that make use of
gettimeofday() and clock_gettime() to retrieve the current time.

The "tv_*" functions moved there were also more appropriately renamed
to "clock_*".

Other parts used to compute stolen time are in other files, they will
have to be picked next.
2021-10-08 17:22:26 +02:00
Willy Tarreau
6dfab112e1 REORG: sched: move idle time calculation from time.h to task.h
time.h is a horrible place to put activity calculation; it's a
historical mistake because the functions were there. We already have
most of the parts in sched.{c,h} and these ones make an exception in
the middle, forcing time.h to include some thread stuff and to access
the before/after_poll and idle_pct values.

Let's move these 3 functions to task.h with the other ones. They were
prefixed with "sched_" instead of the historical "tv_" which already
made no sense anymore.
2021-10-01 18:37:51 +02:00
Willy Tarreau
88d1c5d3fb MEDIUM: threads: add a stronger thread_isolate_full() call
The current principle of running under isolation was made to access
sensitive data while being certain that no other thread was using them
in parallel, without necessarily having to place locks everywhere. The
main use case are "show sess" and "show fd" which run over long chains
of pointers.

The thread_isolate() call relies on the "harmless" bit that indicates
for a given thread that it's not currently doing such sensitive things,
which is advertised using thread_harmless_now() and which ends using
thread_harmless_end(), which also waits for possibly concurrent threads
to complete their work if they took this opportunity for starting
something tricky.

As some system calls were notoriously slow (e.g. mmap()), a bunch of
thread_harmless_now() / thread_harmless_end() were placed around them
to let waiting threads do their work while such other threads were not
able to modify memory contents.

But this is not sufficient for performing memory modifications. One such
example is the server deletion code. Modifying memory requires not only
that other threads are not playing with it, but also that they are not in
the process of touching it. The fact that a pool_alloc() or pool_free()
on some structure may call thread_harmless_now() and let another thread
start to release the same object's memory is not acceptable.

This patch introduces the concept of "idle threads". Threads entering
the polling loop are idle, as well as those that are waiting for all
others to become idle via the new function thread_isolate_full(). Once
thread_isolate_full() is granted, the thread is not idle anymore, and
it is released using thread_release() just like regular isolation. Its
users have to keep in mind that across this call nothing is granted as
another thread might have performed shared memory modifications. But
such users are extremely rare and are actually expecting this from their
peers as well.

Note that in case of backport, this patch depends on the previous patch:
  MINOR: threads: make thread_release() not wait for other ones to complete
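
A hedged usage sketch of the new call (do_delete_server() is a purely hypothetical helper used for illustration):

  static void delete_server_safely(struct server *srv)
  {
          thread_isolate_full();   /* wait for every other thread to become idle */

          /* no other thread runs here, not even inside a "harmless" section,
           * so shared structures may be modified and freed safely */
          do_delete_server(srv);   /* hypothetical helper */

          thread_release();        /* released just like regular isolation */
  }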
2021-08-04 14:49:36 +02:00
Willy Tarreau
200bd50b73 MEDIUM: fd: rely more on fd_update_events() to detect changes
This function already performs a number of checks prior to calling the
IOCB, and detects the change of thread (FD migration). Half of the
controls are still in each poller, and these pollers also maintain
activity counters for various cases.

Note that the unreliable test on thread_mask was removed so that only
the one performed by fd_set_running() is now used, since this one is
reliable.

Let's centralize all that fd-specific logic into the function and make
it return a status among:

  FD_UPDT_DONE,        // update done, nothing else to be done
  FD_UPDT_DEAD,        // FD was already dead, ignore it
  FD_UPDT_CLOSED,      // FD was closed
  FD_UPDT_MIGRATED,    // FD was migrated, ignore it now

Some pollers already used to call it last and have nothing to do after
it, regardless of the result. epoll has to delete the FD in case a
migration is detected. Overall this removes more code than it adds.
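
A hedged sketch of how an epoll-based event loop can consume these statuses (the event-flag conversion helper is an assumption):

  for (count = 0; count < status; count++) {
          int fd = epoll_events[count].data.fd;
          unsigned int evt = epoll_to_fd_evts(epoll_events[count].events); /* assumed helper */

          switch (fd_update_events(fd, evt)) {
          case FD_UPDT_MIGRATED:
                  /* the FD now belongs to another thread: stop reporting it here */
                  epoll_ctl(epoll_fd[tid], EPOLL_CTL_DEL, fd, NULL);
                  break;
          case FD_UPDT_DEAD:
          case FD_UPDT_CLOSED:
          case FD_UPDT_DONE:
                  /* nothing left to do, the helper handled everything */
                  break;
          }
  }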
2021-07-30 17:45:18 +02:00
Willy Tarreau
53a16187fd MINOR: poll/epoll: move detection of RDHUP support earlier
Let's move the detection of support for RDHUP earlier and out of the
FD update chain, as it complicates its simplification.
2021-07-30 17:41:55 +02:00
Willy Tarreau
79e90b9615 BUG/MINOR: pollers: always program an update for migrated FDs
If an MT-aware poller reports that a file descriptor was migrated, it
must stop reporting it. The simplest way to do this is to program an
update if not done yet. This will automatically mark the FD for update
on next round. Otherwise there's a risk that some events are reported
a bit too often and cause extra CPU usage with these pollers. Note
that epoll is currently OK regarding this. Select does not need this
because it uses a single shared events table, so in case of migration
no FD change is expected.

This should be backported as far as 2.2.
2021-07-30 14:21:43 +02:00
Willy Tarreau
177119bb11 BUG/MINOR: poll: fix abnormally high skip_fd counter
The skip_fd counter that is incremented when a migrated FD is reported
was abnormally high with poll. The reason is that it was accounted
for before preparing the polled events instead of being measured from
the reported events.

This mistake was done when the counters were introduced in 1.9 with
commit d80cb4ee1 ("MINOR: global: add some global activity counters to
help debugging"). It may be backported as far as 2.0.
2021-07-30 14:04:28 +02:00
Willy Tarreau
c37ccd70b4 BUG/MEDIUM: pollers: clear the sleeping bit after waking up, not before
A bug was introduced in 2.1-dev2 by commit 305d5ab46 ("MAJOR: fd: Get
rid of the fd cache."). Pollers "poll" and "evport" had the sleeping
bit accidentally removed before the syscall instead of after. This
results in them not being woken up by inter-thread wakeups, which is
particularly visible with the multi-queue accept() and with queues.

As a work-around, when these pollers are used, "nbthread 1" should
be used.

The fact that it has remained broken for 2 years is a great indication
that threads are definitely not enabled outside of epoll and kqueue,
which is why this patch is only tagged medium.

This must be backported as far as 2.2.
2021-07-30 10:57:09 +02:00
Willy Tarreau
4781b1521a CLEANUP: atomic/tree-wide: replace single increments/decrements with inc/dec
This patch replaces roughly all occurrences of an HA_ATOMIC_ADD(&foo, 1)
or HA_ATOMIC_SUB(&foo, 1) with the equivalent HA_ATOMIC_INC(&foo) and
HA_ATOMIC_DEC(&foo) respectively. These are 507 changes over 45 files.
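
The substitution pattern looks like this (the counters below are purely illustrative):

  HA_ATOMIC_ADD(&activity[tid].poll_io, 1);   /* before */
  HA_ATOMIC_INC(&activity[tid].poll_io);      /* after  */

  HA_ATOMIC_SUB(&jobs, 1);                    /* before */
  HA_ATOMIC_DEC(&jobs);                       /* after  */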
2021-04-07 18:18:37 +02:00
Willy Tarreau
61cfdf4fd8 CLEANUP: tree-wide: replace free(x);x=NULL with ha_free(&x)
This makes the code more readable and less prone to copy-paste errors.
In addition, it allows placing some __builtin_constant_p() predicates
to trigger a link-time error in case the compiler knows that the freed
area is constant. It will also produce a compile-time error when trying
to free something that is not a regular pointer (e.g. a function).

The DEBUG_MEM_STATS macro now also defines an instance for ha_free()
so that all these calls can be checked.

178 occurrences were converted. The vast majority of them were handled
by the following Coccinelle script, some slightly refined to better deal
with "&*x" or with long lines:

  @ rule @
  expression E;
  @@
  - free(E);
  - E = NULL;
  + ha_free(&E);

It was verified that the resulting code is the same, apart from a
handful of cases where the compiler optimized the temporary variable
that holds the copy of the pointer slightly differently.

A non-negligible amount of {free(str);str=NULL;str_len=0;} are still
present in the config part (mostly header names in proxies). These
ones should also be cleaned for the same reasons, and probably be
turned into ist strings.
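
A minimal sketch of what such a helper can look like (the real ha_free() is a macro that additionally uses __builtin_constant_p() and the DEBUG_MEM_STATS instance mentioned above):

  #include <stdlib.h>

  static inline void ha_free_helper(void **ptr)
  {
          free(*ptr);
          *ptr = NULL;
  }

  /* sketch only: the real macro adds compile/link-time checks */
  #define ha_free(x) ha_free_helper((void **)(x))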
2021-02-26 21:21:09 +01:00
Willy Tarreau
94a01e1cb7 CLEANUP: few extra typo and fixes over last one ("ot" -> "to")
As noticed by Tim, there were a few incorrect fixes in the previous patch
("ot" -> "to" and not "or").
2021-01-06 17:35:52 +01:00
Ilya Shipitsin
b8888ab557 CLEANUP: assorted typo fixes in the code and comments
This is 15th iteration of typo fixes
2021-01-06 17:32:03 +01:00
Willy Tarreau
38e8a1c7b8 MINOR: debug: add a new DEBUG_FD build option
When DEBUG_FD is set at build time, we'll keep a counter of per-FD events
in the fdtab. This counter is reported in "show fd" even for closed FDs if
not zero. The purpose is to help spot situations where an apparently closed
FD continues to be reported in loops, or where some events are dismissed.
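
A hedged sketch of the counting site inside the generic FD event handling (the field name is an assumption):

  #ifdef DEBUG_FD
          _HA_ATOMIC_INC(&fdtab[fd].event_count);  /* one more event seen on this FD */
  #endif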
2020-06-23 10:04:54 +02:00
Willy Tarreau
e406386542 MINOR: activity: rename confusing poll_* fields in the output
We have poll_drop, poll_dead and poll_skip which are confusingly named
like their poll_io and poll_exp counterparts except that they are not
per poll() call but per-fd. This patch renames them to poll_drop_fd(),
poll_dead_fd() and poll_skip_fd() for this reason.
2020-06-17 20:35:33 +02:00
Willy Tarreau
e545153c50 MINOR: activity: report the number of times poll() reports I/O
The "show activity" output mentions a number of indicators to explain
wake up reasons but doesn't have the number of times poll() sees some
I/O. And given that multiple events can happen simultaneously, it's
not always possible to deduce this metric by subtracting.

This patch adds a new "poll_io" counter that allows one to see how
often poll() returns with at least one active FD. This should help
detect stuck events and measure various ratios of poll sub-metrics.
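
A hedged sketch of where such a counter is taken, epoll-flavoured (names are assumptions):

  status = epoll_wait(epoll_fd[tid], epoll_events,
                      global.tune.maxpollevents, wait_time);
  if (status > 0)
          _HA_ATOMIC_INC(&activity[tid].poll_io);  /* at least one active FD reported */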
2020-06-17 20:25:18 +02:00
Willy Tarreau
b2551057af CLEANUP: include: tree-wide alphabetical sort of include files
This patch fixes all the leftovers from the include cleanup campaign. There
were not that many (~400 entries in ~150 files) but it was definitely worth
doing as it revealed a few duplicates.
2020-06-11 10:18:59 +02:00
Willy Tarreau
5b9cde4820 REORG: include: move THREAD_LOCAL and __decl_thread() to compiler.h
Since these are used as type attributes or conditional clauses, they
are used about everywhere and should not require a dependency on
thread.h. Moving them to compiler.h along with other similar statements
like ALIGN() etc looks more logical; this way they become part of the
base API. This allowed removing thread-t.h from ~12 files; one was
found to only require thread-t and not thread, and dict.c was found to
require thread.h.
2020-06-11 10:18:59 +02:00
Willy Tarreau
f268ee8795 REORG: include: split global.h into haproxy/global{,-t}.h
global.h was one of the messiest files, it has accumulated tons of
implicit dependencies and declares many globals that make almost all
other file include it. It managed to silence a dependency loop between
server.h and proxy.h by being well placed to pre-define the required
structs, forcing struct proxy and struct server to be forward-declared
in a significant number of files.

It was split in two: one part holds the global struct definition and the
few macros and flags, and the other contains the function prototypes.

The UNIX_MAX_PATH definition was moved to compat.h.
2020-06-11 10:18:58 +02:00
Willy Tarreau
0f6ffd652e REORG: include: move fd.h to haproxy/fd{,-t}.h
A few includes were missing in each file. A definition of
struct polled_mask was moved to fd-t.h. The MAX_POLLERS macro was
moved to defaults.h.

Stdio used to be silently inherited from whatever path but it's needed
for list_pollers() which takes a FILE* and which can thus not be
forward-declared.
2020-06-11 10:18:57 +02:00
Willy Tarreau
c2f7c5895c REORG: include: move common/ticks.h to haproxy/ticks.h
Nothing needed to be changed, there are no exported types.
2020-06-11 10:18:57 +02:00
Willy Tarreau
a04ded58dc REORG: include: move activity to haproxy/
This moves types/activity.h to haproxy/activity-t.h and
proto/activity.h to haproxy/activity.h.

The macros defining the bit field values for the profiling variable
were moved to the type file to be more future-proof.
2020-06-11 10:18:57 +02:00
Willy Tarreau
92b4f1372e REORG: include: move time.h from common/ to haproxy/
This one is included almost everywhere and used to rely on a few other
.h that are not needed (unistd, stdlib, standard.h). It could possibly
make sense to split it into multiple parts to distinguish operations
performed on timers and the internal time accounting, but at this point
it does not appear very important.
2020-06-11 10:18:56 +02:00
Willy Tarreau
3f567e4949 REORG: include: split hathreads into haproxy/thread.h and haproxy/thread-t.h
This splits the hathreads.h file into types+macros and functions. Given
that most users of this file used to include it only to get the definition
of THREAD_LOCAL and MAXTHREADS, the bare minimum was placed into thread-t.h
(i.e. types and macros).

All the thread management was left to haproxy/thread.h. It's worth noting
the drop of the trailing "s" in the name, to remove the permanent confusion
that arises between this one and the system implementation (no "s") and the
makefile's option (no "s").

For consistency, src/hathreads.c was also renamed thread.c.

A number of files were updated to only include thread-t which is the one
they really needed.

Some future improvements are possible like replacing empty inlined
functions with macros for the thread-less case, as building at -O0 disables
inlining and causes these ones to be emitted. But this really is cosmetic.
2020-06-11 10:18:56 +02:00
Willy Tarreau
4c7e4b7738 REORG: include: update all files to use haproxy/api.h or api-t.h if needed
All files that were including one of the following include files have
been updated to only include haproxy/api.h or haproxy/api-t.h once instead:

  - common/config.h
  - common/compat.h
  - common/compiler.h
  - common/defaults.h
  - common/initcall.h
  - common/tools.h

The choice is simple: if the file only requires type definitions, it includes
api-t.h, otherwise it includes the full api.h.

In addition, in these files, explicit includes for inttypes.h and limits.h
were dropped since these are now covered by api.h and api-t.h.

No other change was performed, given that this patch is large and
affects 201 files. At least one (tools.h) was already freestanding and
didn't get the new one added.
2020-06-11 10:18:42 +02:00
William Dauchy
42a50bd19b BUG/MINOR: pollers: remove uneeded free in global init
Since commit d4604adeaa ("MAJOR: threads/fd: Make fd stuffs
thread-safe"), we init pollers per thread using a helper. It was still
correct for mono-thread mode until commit cd7879adc2 ("BUG/MEDIUM:
threads: Run the poll loop on the main thread too"). We now use a deinit
helper for all threads, making those free calls unneeded.

Only poll and select are affected by this very minor issue.

It could be backported from v1.8 to v2.1.

Fixes: cd7879adc2 ("BUG/MEDIUM: threads: Run the poll loop on the main
thread too")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
2020-05-13 11:45:37 +02:00
Willy Tarreau
03e7853581 BUILD: remove obsolete support for -mregparm / USE_REGPARM
This used to be a minor optimization on ix86 where registers are scarce
and the calling convention not very efficient, but this platform is not
relevant enough anymore to warrant all this dirt in the code for the sake
of saving 1 or 2% of performance. Modern platforms don't use this at all
since their calling convention already defaults to using several registers
so better get rid of this once and for all.
2020-02-25 07:41:47 +01:00
Willy Tarreau
11ef0837af MINOR: pollers: add a new flag to indicate pollers reporting ERR & HUP
In practice it's all pollers except select(). It turns out that we're
keeping some legacy code only for select and enforcing it on all
pollers; let's offer the pollers the ability to declare that they
do not need that.
2019-12-27 14:04:33 +01:00
Willy Tarreau
6b3089856f MEDIUM: fd: do not use the FD_POLL_* flags in the pollers anymore
As mentioned in previous commit, these flags do not map well to
modern poller capabilities. Let's use the FD_EV_*_{R,W} flags instead.
This first patch only performs a 1-to-1 mapping making sure that the
previously reported flags are still reported identically while using
the closest possible semantics in the pollers.

It's worth noting that kqueue will now support improvements such as
returning distinctions between shut and errors on each direction,
though this is not exploited for now.
2019-09-06 19:09:56 +02:00
Willy Tarreau
5bee3e2f47 MEDIUM: fd: remove the FD_EV_POLLED status bit
Since commit 7ac0e35f2 in 1.9-dev1 ("MAJOR: fd: compute the new fd polling
state out of the fd lock") we've started to update the FD POLLED bit a
bit more aggressively. Lately with the removal of the FD cache, this bit
is always equal to the ACTIVE bit. There's no point continuing to watch
it and update it anymore, all it does is create confusion and complicate
the code. One interesting side effect is that it now becomes visible that
all fd_*_{send,recv}() operations systematically call updt_fd_polling(),
except fd_cant_recv()/fd_cant_send() which never saw it change.
2019-09-05 09:31:18 +02:00
Olivier Houchard
53055055c5 MEDIUM: pollers: Remember the state for read and write for each threads.
In the poller code, instead of just remembering whether we're currently
polling a fd or not, remember whether we're polling it for writing and/or
for reading. That way, we can avoid modifying the polling if it's already
polled as needed.
2019-07-31 14:54:41 +02:00
Olivier Houchard
305d5ab469 MAJOR: fd: Get rid of the fd cache.
Now that the architecture was changed so that attempts to receive/send data
always come from the upper layers, instead of them only trying to do so when
the lower layer let them know they could try, we can finally get rid of the
fd cache. We don't really need it anymore, and removing it gives us a small
performance boost.
2019-07-31 14:12:55 +02:00
Willy Tarreau
2ae84e445d MEDIUM: poller: separate the wait time from the wake events
We have been abusing the do_poll()'s timeout for a while, making it zero
whenever there is some known activity. The problem this poses is that it
complicates activity diagnostic by incrementing the poll_exp field for
each known activity. It also requires extra computations that could be
avoided.

This change passes a "wake" argument to say that the poller must not
sleep. This simplifies the operations and allows one to differentiate
expirations from activity.
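
A hedged sketch of the new calling convention (the exact prototype is an assumption):

  static void _do_poll(struct poller *p, int exp, int wake)
  {
          /* the caller, not the poller, decides that sleeping is forbidden */
          int wait_time = wake ? 0 : compute_poll_timeout(exp);

          /* ... enter the polling syscall with wait_time ... */
  }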
2019-05-28 17:25:21 +02:00
Olivier Houchard
cb6c9274ae MEDIUM: pollers: Use the new _HA_ATOMIC_* macros.
Use the new _HA_ATOMIC_* macros and add barriers where needed.
2019-03-11 17:02:38 +01:00
Willy Tarreau
48f8bc1368 MINOR: poller: move the call of tv_update_date() back to the pollers
The reason behind this will be to be able to compute a timeout when
busy polling.
2018-11-22 18:57:37 +01:00
Willy Tarreau
609aad9e73 REORG: time/activity: move activity measurements to activity.{c,h}
At the moment the situation with activity measurement is quite tricky
because the struct activity is defined in global.h and declared in
haproxy.c, with operations made in time.h and relying on freq_ctr
which are defined in freq_ctr.h which itself includes time.h. It's
barely possible to touch any of these files without breaking the whole
circular dependency.

Let's move all this stuff to activity.{c,h} and be done with it. The
measurement of active and stolen time is now done in a dedicated
function called just after tv_before_poll() instead of mixing the two,
which used to be a lazy (but convenient) decision.

No code was changed, stuff was just moved around.
2018-11-22 11:48:41 +01:00
Willy Tarreau
7e9c4ae4de MINOR: poller: move time and date computation out of the pollers
By placing this code into time.h (tv_entering_poll() and tv_leaving_poll())
we can remove the logic from the pollers and prepare for extending this to
offer more accurate time measurements.
2018-10-17 19:59:43 +02:00
Willy Tarreau
f37ba94768 MINOR: fd: centralize poll timeout computation in compute_poll_timeout()
The 4 pollers all contain the same code used to compute the poll timeout.
This is pointless, let's centralize this into fd.h. This also gets rid of
the useless SCHEDULER_RESOLUTION macro which used to work around a very old
linux 2.2 bug causing select() to wake up slightly before the timeout.
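
A hedged sketch of such a centralized helper, using HAProxy's tick API (the MAX_DELAY_MS cap is an assumption):

  static inline int compute_poll_timeout(int next)
  {
          int wait_time;

          if (!tick_isset(next))
                  wait_time = MAX_DELAY_MS;   /* nothing scheduled: cap the sleep */
          else if (tick_is_expired(next, now_ms))
                  wait_time = 0;              /* already expired: do not sleep */
          else {
                  wait_time = TICKS_TO_MS(tick_remain(now_ms, next)) + 1;
                  if (wait_time > MAX_DELAY_MS)
                          wait_time = MAX_DELAY_MS;
          }
          return wait_time;
  }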
2018-10-17 19:59:43 +02:00