As mentioned in b40d804 ("MINOR: sink: add some comments about sft->appctx
usage in applet handlers"), there are a few places in the code where it
looks like we assumed that the applet callbacks such as
sink_forward_session_init() or sink_forward_io_handler() could be
executing an appctx whose sft is detached from the appctx
(appctx != sft->appctx).
In practice this should not be happening since an appctx sticks to the
same thread for its entire lifetime, and the only times sft->appctx is
effectively assigned is during the session/appctx creation (in
process_sink_forward()) or release.
Thus if sft->appctx didn't point to the appctx that the sft was bound
to after appctx creation, it would probably indicate a bug rather than
an expected condition. To further emphasize that and prevent confusion,
and since 3.1-dev4 was released, let's remove such checks and instead
add a BUG_ON() to ensure this never happens.
In _sink_forward_io_handler(), the "hard_close" label was removed since
there are no more uses for it (no hard errors can be raised by the
function for now).
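For reference, a minimal sketch of the kind of check that is now expected
in the handlers (the way sft is retrieved via svcctx is shown for
illustration, the surrounding context is simplified):

    struct sink_forward_target *sft = appctx->svcctx;

    /* the handler must always run on the appctx registered in the sft;
     * anything else is a bug, not a condition to be handled
     */
    BUG_ON(sft->appctx != appctx);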
In c454296f0 ("OPTIM: sink: balance applets accross threads"), we already
made sure to balance applets across threads by picking a random thread
to spawn the new applet.
Also, thanks to the previous commit, we now have the ability to destroy
the applet when a certain amount of messages has been processed, to help
distribute the load during runtime.
Let's improve that by trying up to 3 different threads in the hope of
picking a non-overloaded one in the best case, and the least overloaded
one in the worst case. This should help to better distribute the load
over multiple threads when high loads are expected.
The logic was greatly inspired by the thread migration logic used by
server health checks, but it was simplified for the sink's use case.
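For illustration, a minimal sketch of the selection logic (the per-thread
applet counter name is an assumption, not the actual variable):

    uint tid, best_tid, load, best_load;
    int i;

    best_tid  = statistical_prng_range(global.nbthread);
    best_load = HA_ATOMIC_LOAD(&sink_applets_per_thread[best_tid]); /* assumed counter */

    for (i = 1; i < 3; i++) {
            tid  = statistical_prng_range(global.nbthread);
            load = HA_ATOMIC_LOAD(&sink_applets_per_thread[tid]);
            if (load < best_load) {
                    best_tid  = tid;
                    best_load = load;
            }
    }
    /* start the applet on best_tid */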
Thanks to the previous commit, it is now possible to know how many events
were processed for a given sft/server sink pair. As mentioned in commit
c454296 ("OPTIM: sink: balance applets accross threads"), let's provide
the ability to restart a server connection when a certain amount of events
were processed to help better balance the load over multiple threads.
For this, we make use of the "max-reuse" server keyword which was only
relevant under "http" context so far. Under sink context, "max-reuse"
corresponds to the number of times the tcp connection can be reused
for sending messages, which in fact means that "max-reuse + 1" is the
number of events (ie: messages) that are allowed to be sent using the
same tcp server connection: when this threshold is met, the connection
will be destroyed and a new one will be created on a random thread.
The value is not strict: it is the minimum value above which the
connection may be destroyed, since the value is checked after
ring_dispatch_messages(), which may process multiple messages at once.
By default, no limit is enforced (the connection will be reused for as
long as it is available).
The documentation was updated accordingly.
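To illustrate the intent, a rough sketch of the check performed after
dispatching messages (the e_processed_at_connect field is hypothetical,
the real code may track this differently):

    uint64_t on_this_conn = sft->e_processed - sft->e_processed_at_connect;

    if (sft->srv->max_reuse >= 0 &&
        on_this_conn >= (uint64_t)sft->srv->max_reuse + 1) {
            /* threshold reached: release the session/appctx so that a
             * new connection gets established, possibly on another
             * (randomly chosen) thread
             */
    }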
Add a new member named e_processed to the sft structure in order to
track the total number of events processed by sft applets.
sink_forward_oc_io_handler() and sink_forward_io_handler() now make use
of the optional counter added to ring_dispatch_messages() in the previous
commit in order to increase the number of processed events.
ring_dispatch_messages() now takes an optional argument <processed>
which, when provided, must point to a size_t counter; the counter is then
updated with the number of messages processed by the function.
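A minimal sketch of the intended caller-side usage (the other arguments
are simplified and do not reflect the exact prototype):

    size_t processed = 0;

    ring_dispatch_messages(ring, appctx, &ofs, &last_ofs, flags,
                           msg_handler, &processed);

    /* <processed> now holds the number of messages handled by this call;
     * callers may accumulate it into their own counters
     */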
Given that sink applets are responsible for conveying messages from the
ring to the tcp server endpoint, no protocol timeouts or errors are
expected there: it is a unidirectional flow of data over TCP.
As such, the NOLINGER flag, which was inherited from the peers applet (see
dbd026792 ("BUG/MEDIUM: peers: set NOLINGER on the outgoing stream
interface")), is not desirable under sink context:
The reason why we have the NOLINGER flag set is to ensure the connection
is closed right away and to avoid the 60s TIME_WAIT delay on closed sockets.
The downside is that messages sent right before closing the socket are
not guaranteed to make it to the server because closing with NOLINGER
flag set will result in RST packet being emitted right away, which could
prevent in-flight messages from being properly delivered.
Unlike peers applets, the only cases where sink applets are expected to
close the connection are upon unexpected error or upon stopping, which are
relatively rare events. Thanks to the previous commit, the ERROR flag is
already set in case of error, so the use of NOLINGER is not mandatory for
the RST to be sent. As for the stopping case, it only happens once in the
process's lifetime, so it's acceptable to close the socket using EOS+EOI
flags without the NOLINGER option set.
So in our case, it is preferable to ensure messages get properly
delivered, knowing that closed sockets shouldn't be piling up in TIME_WAIT
given how rarely the connection is closed; this means removing the
NOLINGER flag on the outgoing stream interface for sink applets. It is a
prerequisite for upcoming patches in order to cleanly shut the applet
during runtime without the risk of sending the RST packet before all
pending messages were sent to the endpoint.
Aborting the socket on soft-stop is not the same as aborting it due to
unexpected error. As such, let's leverage the granularity offered by
sedesc flags to better reflect the situation: abort during soft-stop is
handled as a soft close thanks to EOI+EOS flags, while abort due to
unexpected error is handled as a hard error thanks to ERROR+EOS flags.
Thanks to this change, a hard error will always emit an RST packet even
if the NOLINGER option wasn't set on the socket.
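A minimal sketch of the distinction, using the sedesc flag helper (the
surrounding logic is simplified):

    /* abort because of soft-stop: handled as a soft close */
    se_fl_set(appctx->sedesc, SE_FL_EOI | SE_FL_EOS);

    /* abort because of an unexpected error: handled as a hard error,
     * which emits an RST even without NOLINGER
     */
    se_fl_set(appctx->sedesc, SE_FL_ERROR | SE_FL_EOS);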
There seems to be an ambiguity in the code where sft->appctx could differ
from the appctx that was assigned to it upon appctx creation.
In practice, it doesn't seem this could be happening. Add a few notes
to come back to this later and try to see if we can remove this
ambiguity.
Now that sink_forward_oc_io_handler() and sink_forward_io_handler() were
unified again thanks to the previous commit, let's take the opportunity
to merge the code that is common to both functions in order to ease code
maintenance.
Let's add an internal _sink_forward_io_handler() function which takes the
applet and a message handler as arguments: sink_forward_io_handler() and
sink_forward_oc_io_handler() leverage this internal function by passing
the correct message handler for the desired format.
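The resulting structure looks roughly like this (prototypes simplified
and assumed for illustration):

    static void _sink_forward_io_handler(struct appctx *appctx,
                                         ssize_t (*msg_handler)(void *ctx,
                                                                struct ist v1, struct ist v2,
                                                                size_t ofs, size_t len))
    {
            /* shared logic: find the sft and the ring, dispatch pending
             * messages through msg_handler, update offsets, handle
             * shutdowns...
             */
    }

    static void sink_forward_io_handler(struct appctx *appctx)
    {
            /* raw log lines, terminated by LF */
            _sink_forward_io_handler(appctx, applet_append_line);
    }

    static void sink_forward_oc_io_handler(struct appctx *appctx)
    {
            /* octet-counting format: payload prefixed with its length */
            _sink_forward_io_handler(appctx, syslog_applet_append_event);
    }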
Re-apply dcd917d972 ("MINOR: applet: Remove uselelss test on SE_FL_SHR/SHW
flags") for sink_forward_oc_io_handler() function as it was probably
overlooked given that sink_forward_oc_io_handler() and
sink_forward_io_handler() follow the same logic.
In a739dc2 ("MEDIUM: sink: Use the sedesc to report and detect end of
processing"), we added a drain after close in sink_forward_oc_io_handler()
by the use of "goto out".
However, since we perform a close, there is no reason to drain data from
the socket. Moreover, before the patch there was no drain, and nothing
mentioned the fact that the drain was added on purpose. Lastly, the
sink_forward_io_handler() and sink_forward_oc_io_handler() functions are
strictly identical when it comes to processing logic, and the drain was
only added in sink_forward_oc_io_handler() and not in
sink_forward_io_handler().
As such, it's pretty safe to assume that the drain is not needed here
and was added by accident. So in this patch we remove it in an attempt
to unify sink_forward_io_handler() and sink_forward_oc_io_handler()
functions like it was already the case before.
Since 09d69eacf8 ("MEDIUM: sink: start applets asynchronously") the applet
is no longer initialized under the sft lock, while it was the case before.
At first glance it doesn't seem to be an issue, but if we look closer at
sink_forward_session_init(), we can see that sft->appctx is assigned
while it can be accessed at the same time from sink_init_forward().
Let's restore the old guarantees by performing the .init under the sft
lock.
No backport needed unless 09d69eacf8 is.
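A minimal sketch of the intended ordering (the lock label and fields are
assumed for illustration):

    /* in sink_forward_session_init() (the applet's .init callback) */
    HA_SPIN_LOCK(SFT_LOCK, &sft->lock);
    sft->appctx = appctx;
    /* ... the rest of the initialization touching the sft ... */
    HA_SPIN_UNLOCK(SFT_LOCK, &sft->lock);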
Most of the time all sink applets (which are responsible for relaying
messages from the ring to the tcp servers endpoints) would end up being
assigned to the first available thread (tid:0), resulting in excessive CPU
usage on a single thread when multiple sink servers were defined (no
matter if they were defined over multiple "ring" sections) and significant
message load was pushed through them over the ring API.
This patch is similar to 34e4085f ("MEDIUM: peers: Balance applets across
threads") but for sinks. We use a slightly different approach, which is to
elect a random thread instead of picking the one with the least applets.
This already proves to be sufficient to alleviate the issue.
In case we want a better load distribution, we should consider breaking
existing connections to reestablish them on a new thread when we find out
that they start monopolizing a cpu thread (ie: after a certain amount of
messages for instance). Also check the tcpcheck migration model for
inspiration.
This patch depends on the previous one ("MEDIUM: sink: start applets
asynchronously").
Since d9c1d33fa1 ("MEDIUM: applet: Add support for async appctx startup
on a thread subset"), it is now possible to delay an appctx's init: for
that, it is required that the .init callback is defined on the applet.
When the applet is processed for the first time, the applet API will
automatically finish the applet initialization. Thus we explicitly
call appctx_wakeup() on the applet to schedule it for its initial run
instead of calling appctx_init() ourselves.
This is done in preparation for the next patch, in order to be able to
schedule the applet on a different thread from the one executing the
sink_forward_session_create() function.
Note: 'out_free_appctx' label was removed since it is no longer used.
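A minimal sketch of the change in sink_forward_session_create() (error
handling and surrounding details are simplified):

    appctx = appctx_new_here(&sink_forward_applet, NULL);
    if (!appctx)
            return NULL;
    appctx->svcctx = (void *)sft;

    /* don't initialize the appctx synchronously anymore: simply schedule
     * it, the applet API will call .init on the first run
     */
    appctx_wakeup(appctx);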
Thanks to ("MINOR: tools: add vma_set_name() helper"), set a name hint
for user created memory-backed sinks (ring sections without backing-file)
so that they can be easily indentified in /proc/<pid>/maps.
Depending on malloc() implementation, such memory areas will normally be
merged on the heap under MMAP_THRESHOLD (128 kB by default) and will
have a dedicated memory area once the threshold is exceeded. As such, when
large enough, they will appear like this in /proc/<pid>/maps:
7b8e8ac00000-7b8e8bf13000 rw-p 00000000 00:00 0 [anon:ring:myring]
Since 40d1c84bf0 ("BUG/MAJOR: ring: free the ring storage not the ring
itself when using maps"), munmap() call for startup_logs's ring and
file-backed rings fails to work (EINVAL) and causes memory leaks during
process cleanup.
munmap() fails because it is called with the ring's usable area pointer
which is an offset from the underlying original memory block allocated
using mmap(). Indeed, the ring_area() helper function was misused because
nothing made it explicit that the returned address corresponds to the
usable storage area, not the allocated one.
To fix the issue, we add an explicit ring_allocated_area() helper to
return the allocated area for the ring, just like we already have
ring_allocated_size() for the allocated size, and we properly use both
the allocated size and allocated area to manipulate them using munmap()
and msync().
No backport needed.
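A minimal sketch of the corrected cleanup path (simplified):

    /* sync and unmap the allocated area, not the usable area returned
     * by ring_area() which is offset by the header
     */
    msync(ring_allocated_area(ring), ring_allocated_size(ring), MS_SYNC);
    munmap(ring_allocated_area(ring), ring_allocated_size(ring));
    /* the ring struct itself is freed separately */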
last_change was a member present in both proxy and server struct. It is
used as an age statistics to report the last update of the object.
Move last_change into fe_counters/be_counters. This is necessary to be
able to manipulate it through generic stat column and report it into
stats-file.
Note that there is a change for the proxy structure, which now has 2
different last_change values, one on the frontend side and one on the
backend side. Special care was taken to ensure that the value is
initialized only on the proxy side. The other value is set to 0 unless a
listen proxy is instantiated. For the moment, only the backend counter is
reported in stats. However, with two distinct values, stats could be
extended to report it on both sides.
A recent issue was uncovered by the CI, which started to randomly report
segfaults on a few tests, and more systematically on FreeBSD. It turns
out that it was introduced by recent commit 03816ccfa9 ("MAJOR: ring:
insert an intermediary ring_storage level"), which overlooked the munmap()
path of the sink and startup logs: once the ring and its storage were
split, it was no longer correct to munmap() the ring, only its storage
area needs to be unmapped, and the ring must always be freed separately.
Thanks to Christopher and William for their help in trying to reproduce
it and figure out the circumstances that trigger it.
No backport is needed.
It was only used to protect the list which is now an mt_list so it
doesn't provide any required protection anymore. It obviously also
used to provide strict ordering between the writer and the reader
when the writer started to update the messages, but that's now
covered by the ordered tail updates and updates to the readers
count to protect the area.
The message rate on small thread counts (up to 12) saw a boost of
roughly 5%, while for large counts it lost about 2% due to some
contention now becoming visible elsewhere.
Typical measures are 6.13M -> 6.61M at 3C6T, and 1.88 -> 1.92M at
24C48T on the EPYC.
Rings are keeping a lock only for the list, which apparently doesn't
need anything more than an mt_list, so let's first turn it into that
before dropping the lock. There should be no visible effect.
We're now locking the tail while looking for some room in the ring. In
fact it's still while writing to it, but the goal definitely is to get
rid of the lock ASAP. For this we reserve the topmost bit of the tail
as a lock, which may have as a possible visible effect that buffers will
be limited to 2GB instead of 4GB on 32-bit machines (though in practice,
good luck allocating more than 2GB contiguous on 32-bit). But in
practice, since the size is read with atol() and some operating systems
limit it to LONG_MAX unless passing negative numbers, the limit is
already there.
For now the impact on x86_64 is significant (drop from 2.35 to 1.4M/s
on 48 threads on EPYC 24 cores) but this situation is only temporary
so that changes can be reviewable and bisectable.
Other approaches were attempted, such as using XCHG instead, which is
slightly faster on x86 with low thread counts (but causes more write
contention), and forces readers to stall under heavy traffic because
they can't access a valid value for the queue anymore. A CAS requires
preloading the value and is less good on ARMv8.1. XADD could also be
considered with 12-13 upper bits of the offset dedicated to locking,
but that looks overkill.
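To illustrate the principle, here is a rough equivalent in plain C11
atomics (this is not the actual HAProxy code, which uses its own atomic
macros):

    #include <stdatomic.h>
    #include <stddef.h>

    /* the tail's topmost bit serves as a lock bit, which is why the
     * usable offset range is halved on 32-bit platforms
     */
    #define TAIL_LOCK_BIT ((size_t)1 << (sizeof(size_t) * 8 - 1))

    static size_t lock_tail(_Atomic size_t *tail)
    {
            size_t prev;

            /* spin until we are the one that sets the lock bit */
            while ((prev = atomic_fetch_or(tail, TAIL_LOCK_BIT)) & TAIL_LOCK_BIT)
                    ;
            return prev; /* previous tail value, lock now held */
    }

    static void unlock_tail(_Atomic size_t *tail, size_t new_tail)
    {
            /* publish the new tail and release the lock in one store */
            atomic_store(tail, new_tail & ~TAIL_LOCK_BIT);
    }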
The purpose is to store a head and a tail that are independent so that
we can further improve the API to update them independently from each
other.
The struct was arranged like the original one so that as long as a ring
has its head set to zero (i.e. no recycling) it will continue to work.
The new format is already detectable thanks to the "rsvd" field which
indicates the number of reserved bytes at the beginning. It's located
where the buffer's area pointer previously was, so that older versions
of haring can continue to open the ring in repair mode, and newer ones
can use the fact that the upper bits of that variable are zero to guess
that it's working with the new format instead of the old one. Also let's
keep in mind that the layout will further change to place some alignment
constraints.
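For reference, the new mapped layout looks roughly like this (field
names and order are an approximation of the description above, not a
verbatim copy of the code):

    struct ring_v2_hdr {            /* illustrative name */
            size_t size;            /* storage size */
            size_t rsvd;            /* reserved header bytes; sits where the old
                                     * buffer's area pointer used to be, and its
                                     * zero upper bits reveal the new format */
            size_t head;            /* read offset */
            size_t tail;            /* write offset */
            char   area[0];         /* messages start here */
    };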
The haring tool is thus updated based on this: it detects that the
rsvd field is smaller than a page and that the sum of it with the size
equals the mapped size, in which case it uses the new dump_v2() function
instead of dump_v1(). The new function also creates a buffer from the
ring's area, size, head and tail and calls the generic one so that no
other code had to be adapted.
In ring_resize() we used to check if the new ring was at least as large
as the previous one before resizing it, but what counts is that it's as
large as the previous one's contents. Initially it was thought this
would not really matter, but given that rings are initially created as
BUFSIZE, it's currently not possible to shrink them for debugging
purposes. Now with this change it is.
We'll need to add more complex structures in the ring, such as wait
queues. That's far too much to be stored into the area in case of
file-backed contents, so let's split the ring definition and its
storage once for all.
This patch introduces a struct ring_storage, assigned to ring->storage,
which contains minimal information to represent the storage layout,
i.e. for now only the buffer, while all the rest remains in the ring
itself. The storage is appended immediately after
it and the buffer's pointer always points to that area. It has the
benefit of remaining 100% compatible with the existing file-backed
layout. In memory, the allocation loses the size of a struct buffer.
It's not even certain it's worth placing the size there, given that it's
constant and that a dump of a ring wouldn't really need it (the file size
is sufficient). But for now everything comes with the struct buffer, and
later this will change once split into head and tail. Also this area may
be completed with more information in the future (e.g. storage version,
format, endianness, word size etc).
Some open-coded constructs were updated to make use of the ring accessors
instead. This allows us to further remove some direct dependencies on
the buffers API.
This code becomes even simpler and almost does not need any knowledge
of the structure of the ring anymore. It even highlighted that an old
race had not been fixed due to code duplication, but that's now done.
The ring reader code was duplicated as-is in 2.2 for the ring forwarding
code in commits 494c505703 ("MEDIUM: ring: add server statement to forward
messages from a ring") and 975564784f ("MEDIUM: ring: add new srv statement
to support octet counting forward") (which only differs by using a prefix
instead of a suffix to delimit messages).
Unfortunately, that makes it almost impossible to rework the core ring
code because all these parts rely on it. This first commit aims at
restoring a common structure for the core loop by just calling a distinct
function based on the use case. The functions are either
applet_append_line() when a whole line is to be emitted followed by an LF
character, or syslog_applet_append_event() when trying to send a TCP
syslog line prepended with its size in decimal.
There is no functional change beyond this.
The sink lock was made to prevent event producers from passing while
there were other threads trying to print a "dropped" message, in order
to guarantee the absence of reordering. It has a serious impact however,
which is that all threads need to take the read lock when producing a
regular trace even when there's no reader.
This patch takes a different approach. The drop counter is shifted left
by one so that the lowest bit is used to indicate that one thread is
already taking care of trying to dump the counter. Threads only read
this value normally, and will only try to change it if it's non-null,
in which case they'll first check if they are the first ones trying to
dump it, otherwise they will simply count another drop and leave. This
has multiple benefits. First, it will avoid the locking that causes stalls
as soon as a slow reader is present. Second, it avoids any write on the
fast path as long as there's no drop. And it remains very lightweight
since we just need to add +2 or subtract 2*dropped in operations, while
offering the guarantee that the sink_write() has succeeded before
unlocking the counter.
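The principle roughly translates to the following, shown here with plain
C11 atomics (the real code uses HAProxy's atomic macros):

    #include <stdatomic.h>

    static _Atomic unsigned long dropped; /* (count << 1) | busy bit */

    static void count_one_drop(void)
    {
            atomic_fetch_add(&dropped, 2);  /* fast path: no lock, no stall */
    }

    static int try_grab_report(unsigned long *count)
    {
            unsigned long curr = atomic_load(&dropped);

            if (!curr || (curr & 1))
                    return 0;  /* nothing to report, or another thread is on it */
            if (!atomic_compare_exchange_strong(&dropped, &curr, curr | 1))
                    return 0;  /* lost the race, just keep counting */
            *count = curr >> 1; /* drops to report; once the "dropped" message
                                 * is written, subtract 2*count and clear the
                                 * busy bit */
            return 1;
    }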
While a reader was previously limiting the traffic to 11k RPS under
4C/8T, now we reach 36k RPS vs 14k with no reader, so readers will no
longer slow the traffic down and will instead even speed it up due to
avoiding the contention down the chain in the ring. The locking cost
dropped from ~75% to ~60% now (it's in ring_write now).
When a reader doesn't read fast enough and causes drops, subsequent
threads try to produce a "dropped" message. But it takes time to
produce and emit this message, in part due to the use of chunk_printf()
that relies on vfprintf() which has to parse the printf format, and
during this time other threads may continue to increment the counter.
This is the reason why this is currently performed in a loop. When
reading what is received, it's common to see a large count followed
by one or two single-digit counts, indicating that we could possibly
have improved that by writing faster.
Let's improve the situation a little bit. First we're now using a
static message prefixed with enough space to write the digits, and a
call to ultoa_r() fills these digits from right to left so that we
don't have to process a format string nor perform a copy of the message.
Second, we now re-check the counter immediately after having prepared
the message so that we still get an opportunity for updating it. In
order to avoid too long loops, this is limited to 10 iterations.
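Roughly, the idea is the following (not the exact code, buffer sizes are
illustrative and <dropped> is the counter to report):

    /* static template with room for up to 19 digits before the text */
    static THREAD_LOCAL char msg[] = "0000000000000000000 events dropped\n";
    char *start;

    /* fill the digits from right to left; ultoa_r() NUL-terminates at
     * msg[19], so the separating space is put back afterwards
     */
    start = ultoa_r(dropped, msg, 20);
    msg[19] = ' ';
    /* then emit from <start> up to the end of the template */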
Tests show that the number of single-digit "dropped" counters on output
now dropped roughly by 15-30%. Also, it was observed that with 8 threads,
there's almost never more than one retry.
That's exactly the same as commit 53bfab080c ("BUG/MINOR: sink: fix a race
condition between the writer and the reader") that went into 2.7 and was
backported as far as 2.4, except that since the code was duplicated, the
second instance was not noticed, leaving the race present. The race has
a limited impact, if a forwarder reaches the end of the logs and a new
message arrives before it leaves, the forwarder will only wake up after
yet another new message is sent. In practice it remains unnoticeable
because for the race to trigger, one needs to have a steady flow of logs,
which means the wakeup will happen anyway.
This should be backported, but no need to insist on it if it resists.
Both these flags are set after releasing the applet, in appctx_shut().
Concretely, it means the applet is shut down for reads and writes. Once
they are set, the applet's I/O handler is no longer called. Tests on
these flags are thus useless: there is no chance they will match.
Since 04276f3d ("MEDIUM: server: split the address and the port into two
different fields") we should not use srv->addr to store server's port
and rely on srv->svc_port instead.
For sink servers, we correctly set ->svc_port upon server creation but
we didn't use it when initializing the address for the connection.
As a result, FQDN resolution will not work properly with sink servers.
Fortunately, this used to work by accident because sink servers were
resolved using the PA_O_RESOLVE flag in str2sa_range(), which made the
srv->addr contain the port in addition to the address.
But this will fail to work when FQDN resolution is postponed because only
->svc_port will contain the proper server port upon resolution.
For instance, FQDN resolution with servers from log backends (which are
resolved as regular servers, that is, without the PA_O_RESOLVE) will fail
to work because of this.
This may be backported as far as 2.2, even though the bug didn't have
noticeable effects for versions below 2.9.
[In 2.2, sink_forward_session_init() didn't exist; the patch should be
applied in sink_forward_session_create() instead]
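A minimal sketch of the fix (simplified):

    /* build the connection address from the server's address plus its
     * dedicated port field, instead of expecting the port to already be
     * embedded in srv->addr
     */
    struct sockaddr_storage addr = sft->srv->addr;

    set_host_port(&addr, sft->srv->svc_port);
    /* ... use <addr> as the outgoing connection's target ... */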
"log-bufsize" may now be used for a log server (in a log backend) to
configure the bufsize of the implicit ring associated with the server (which
defaults to BUFSIZE).
This helper function can be used to create a new sink from an existing
server struct (and thus existing proxy as well), in order to spare some
resources when possible.
Implicit rings were automatically forced to the parent logger's format,
but this was done upon ring creation.
This is quite restrictive because we might want to choose the desired
format right before generating the log header (ie: when producing the
log message), depending on the logger (log directive) that is
responsible for the log message, and with the current logic this is not
possible. (To this day, we still have a dedicated implicit ring per log
directive, but this might change.)
In ring_write(), we check if the sink->fmt is specified:
- defined: we use it since it is the most precise format
(ie: for named rings)
- undefined: then we fall back to the format from the logger
With this change, implicit rings' format is now set to UNSPEC upon
creation. This is safe because the log header building function
automatically enforces the "raw" format when UNSPEC is set. And since
logger->format also defaults to "raw", no change of default behavior
should be expected.
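The resulting selection logic is roughly the following (field and enum
names are approximated):

    enum log_fmt fmt = sink->fmt;

    if (fmt == LOG_FORMAT_UNSPEC)
            fmt = logger->format;  /* fall back to the logger's format */
    /* if still unspecified, the header building code enforces "raw" */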
Introduce log_header struct to easily pass log header data between
functions and use that to simplify the logic around log header
handling.
While at it, some outdated comments were updated as well.
No change in behavior should be expected.
Since a5b325f92 ("MINOR: protocol: add a real family for existing FDs"),
we no longer rely on AF_UNSPEC for buffer rings in do_send_log.
But we kept it as a parsing hint to differentiate between implicit and
named rings during ring buffer postparsing.
However it is still a bit confusing and forces us to systematically rely
on target->addr, even for named buffer rings where it doesn't make much
sense anymore.
Now that target->addr was made a pointer in a recent commit, we can
choose not to initialize it when not needed (i.e.: named rings) and use
this as a hint to distinguish implicit rings during init since they rely
on the addr struct to temporarily store the ring's address until the ring
is actually created during the postparsing step.
Log targets were directly embedded in the logger struct (previously
named logsrv) and could not be used outside of this context.
In this patch, we're introducing log_target type with the associated
helper functions so that it becomes possible to declare and use log
targets outside of loggers scope.
When 'log' directive was implemented, the internal representation was
named 'struct logsrv', because the 'log' directive would directly point
to the log target, which used to be a (UDP) log server exclusively at
that time, hence the name.
But things have become more complex, since today 'log' directive can point
to ring targets (implicit, or named) for example.
Indeed, a 'log' directive does no longer reference the "final" server to
which the log will be sent, but instead it describes which log API and
parameters to use for transporting the log messages to the proper log
destination.
So now the term 'logsrv' is rather confusing and prevents us from
introducing a new level of abstraction, because new entities would be
mixed up with logsrv.
So in order to better designate this 'log' directive, and make it more
generic, we chose the word 'logger' which now replaces logsrv everywhere
it was used in the code (including related comments).
This is an internal rewording, so no functional change should be
expected on the user side.
Now forward_px only serves as a hint to know if a proxy was created
specifically for the sink, in which case the sink is responsible for it.
Everywhere forward_px was used in an appctx context, get the parent proxy
from sft->srv instead.
This finally permits getting rid of the double link dependency between
the sink and the proxy.
Remove the unnecessary dependency on the proxy->parent pointer in
sink appctx functions by directly using the sink's sft from the
applet->svcctx to get back to sink-related structs.
Thanks to this, the proxy used for a ringbuf does not have to be
exclusive to a single sink anymore.
It's useless to check if the sink has been created with the BUF type
after calling sink_new_buf(), since the goal of the function is to
create a new sink of BUF type.
Fix some typos that were overlooked during the recent log/sink API
improvements. Also use this patch to make sink_new_from_logsrv() static
since it is not used outside of sink.c.
The ring lock was initially mostly used for the logs and used to inherit
its name in lock stats. Now that it's exclusively used by rings, let's
rename it accordingly.