This mmaps a file which will serve as the backing-store for the ring's
contents. The idea is to provide a way to retrieve sensitive information
(last logs, debugging traces) even after the process stops and even after
a possible crash. Until now this was only possible by connecting to the CLI
and dumping the contents of the ring live, but this is not handy and
consumes quite a bit of resources before it is needed.
With a backing file, the ring is effectively a RAM-mapped file, so that
the contents stored there are the same as those found in the file (the OS
doesn't guarantee an immediate sync, but if the process dies the contents
will be OK).
Note that doing that on a filesystem backed by a physical device is a
bad idea, as it will induce slowdowns at high loads. It's really
important that the device is RAM-based.
Also, this may have security implications: if the file is corrupted by
another process, the storage area could be corrupted, causing haproxy
to crash or to overwrite its own memory. As such this should only be
used for debugging.
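For illustration only, here is a minimal sketch of how such a file-backed
storage area can be obtained; the helper name is made up for the example and
is not part of the patch:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* hypothetical helper: map a pre-sized file so that it can back a
     * ring's storage area. MAP_SHARED makes the kernel write the contents
     * back to the file, so they survive a crash of the process.
     */
    void *map_ring_backing_file(const char *path, size_t size)
    {
        void *area;
        int fd;

        fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return NULL;

        if (ftruncate(fd, size) < 0) {
            close(fd);
            return NULL;
        }

        area = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd); /* the mapping keeps the file referenced */
        return area == MAP_FAILED ? NULL : area;
    }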
Instead of allocating two parts, one for the ring struct itself and
one for the storage area, ring_make_from_area() will arrange the two
inside the same memory area, with the storage starting immediately
after the struct. This will allow a complete ring state to be stored in
shared memory areas, for example.
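A rough sketch of the layout described above, using a simplified stand-in
rather than the real struct ring:

    #include <stddef.h>
    #include <string.h>

    /* simplified stand-in for the real struct ring: the storage area
     * starts right after the struct, inside the same memory block
     */
    struct fake_ring {
        size_t size;    /* size of the storage area */
        size_t head;
        size_t tail;
        char storage[]; /* flexible area following the struct */
    };

    static struct fake_ring *fake_ring_make_from_area(void *area, size_t size)
    {
        struct fake_ring *ring = area;

        if (size < sizeof(*ring))
            return NULL;

        memset(ring, 0, sizeof(*ring));
        ring->size = size - sizeof(*ring);
        return ring; /* ring->storage points inside the same area */
    }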
This commit was not complete:
"BUG/MEDIUM: quic: Possible use of uninitialized <odcid>
variable in qc_lstnr_params_init()"
<token_odcid> should have been directly passed to qc_lstnr_params_init()
without being dereferenced, to prevent haproxy from getting new chances to
crash!
Must be backported to 2.6.
It took me a while to figure out why a ring declared with "size 1M" was
causing strange effects: it's just because it is parsed as "1", which is
smaller than the default 16384 size, and such errors are silently ignored.
This commit tries to address this in the best possible way without breaking
existing configs that happen to work by accident: it warns that the size is
ignored if it's smaller than the current one, and prints the parsed size
instead of the input string in warnings and errors. This way, if some users
have "size 10000" or "size 100k", it will continue to work as 16kB like
today, but they will now be aware of it.
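A hedged illustration of the intended behaviour; the helper names and the
message wording below are made up for the example and do not come from the
actual parser:

    #include <stdio.h>
    #include <stdlib.h>

    /* hypothetical helper: parse a size with an optional k/m/g suffix so
     * that "1M" is no longer silently truncated to 1
     */
    static long parse_ring_size(const char *text)
    {
        char *end;
        long size = strtol(text, &end, 10);

        switch (*end) {
        case 'g': case 'G': size <<= 10; /* fall through */
        case 'm': case 'M': size <<= 10; /* fall through */
        case 'k': case 'K': size <<= 10; break;
        case '\0': break;
        default: return -1; /* unknown suffix */
        }
        return size;
    }

    /* warn instead of silently ignoring a size smaller than the current one */
    static void check_ring_size(const char *arg, long current_size)
    {
        long size = parse_ring_size(arg);

        if (size >= 0 && size < current_size)
            fprintf(stderr, "ring: ignoring new size %ld which is smaller "
                    "than the current size %ld\n", size, current_size);
    }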
In addition, the error messages were a bit short on context in that they
only provided line numbers. The ring name was added to make it easier to
locate the problem.
As the issue was present since day one and was introduced in 2.2 with
commit 99c453df9d ("MEDIUM: ring: new section ring to declare custom ring
buffers."), it could make sense to backport this as far as 2.2, but with
2.2 being quite old now it doesn't seem very reasonable to start emitting
new config warnings for configs that apparently worked well.
Thus it looks more reasonable to backport this as far as 2.4.
When receiving a token in a client Initial packet without a cluster secret defined
by configuration, the <odcid> variable used to parse the ODCID from the token
could be used without having been initialized. Such a packet must be dropped. So
the sufficient part of this patch is this check:
+ }
+ else if (!global.cluster_secret && token_len) {
+ /* Impossible case: a token was received without configured
+ * cluster secret.
+ */
+ TRACE_PROTO("Packet dropped", QUIC_EV_CONN_LPKT,
+ NULL, NULL, NULL, qv);
+ goto drop;
}
Take the opportunity of this patch to rework this part of the code where such
a packet must be dropped, and make it more readable by removing the
<check_token> variable.
When an ODCID is parsed from a token, a new <token_odcid> pointer variable
is set to the address of the parsed ODCID. This way, if it is not set but
used, it will make haproxy crash. This was not always the case with an
uninitialized local variable.
Adapt the API to use such a pointer variable: the <token> boolean variable is
removed from the qc_lstnr_params_init() prototype.
This must be backported to 2.6.
Trace arguments were incorrectly used in qcs_free(): a qcs was specified
as the first argument instead of a connection. This will lead to a crash if
developer qmux traces are activated. This is now fixed.
This bug has been introduced with QUIC MUX traces rework. No need to
backport.
Add new traces to help debugging on the QUIC MUX. Most notably, the
following functions are now traced :
* qcc_emit_cc
* qcs_free
* qcs_consume
* qcc_decode_qcs
* qcc_emit_cc_app
* qcc_install_app_ops
* qcc_release_remote_stream
* qcc_streams_sent_done
* qc_init
Change default devel level for some traces in QUIC MUX:
* proto : used to notify about reception/emission of frames
* state : modification of internal state of connection or streams
* data : detailed information about transfer and flow-control
Replace devel traces with the error level in all error situations. A new
event QMUX_EV_PROTO_ERR is also used. This should help to detect invalid
situations quickly.
This loop is due to the fact that we do not select the next node before
the conditional "continue" statement. Furthermore, the condition and the
"continue" statement may be removed after replacing the eb64_first() call
by eb64_lookup_ge(): we are then sure this condition cannot be satisfied.
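As a generic illustration of the pattern change (the tree root and the
minimum key are placeholders, not the actual connection ID code):

    #include <import/eb64tree.h>

    /* Broken pattern: if a condition triggers "continue" without advancing
     * the node, the loop spins forever on the same element.
     *
     * Fixed pattern: start directly from the first node whose key is >= min
     * with eb64_lookup_ge(), so the condition and the "continue" disappear
     * and the node is always advanced at the end of each iteration.
     */
    static void visit_from(struct eb_root *root, unsigned long long min)
    {
        struct eb64_node *node = eb64_lookup_ge(root, min);

        while (node) {
            /* ... process node ... */
            node = eb64_next(node);
        }
    }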
Add some comments: this function initializes connection IDs with sequence
numbers from 1 up to <max> excluded.
Take the opportunity of this patch to remove a "return" which broke this
tracing rule: for any function, do not call TRACE_ENTER() without TRACE_LEAVE()!
Also add TRACE_ERROR() for any encountered errors.
Must be backported to 2.6.
This lock was there to be able to handle the RX packets for a connection
from several threads. This is no longer needed since a QUIC connection
is always handled by the same thread.
May be backported to 2.6.
gcc-6.x and 7.x emit build warnings about sc possibly being null upon
return from sc_detach_endp(). This actually is not the case and the
compiler is a little bit overzealous there, but there exist code
paths that can make this analysis non-trivial, so let's at least add
a similar BUG_ON() to let both the compiler and the developer know
this doesn't happen.
This should be backported to 2.6.
Add, as much as possible, TRACE_ENTER() and TRACE_LEAVE() calls to every
function. Note that some functions do not have any access to a quic_conn
argument when receiving or parsing a datagram at a very low level.
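The expected pattern roughly looks like this (a sketch only; at the lowest
datagram parsing levels qc is simply NULL):

    /* sketch of the expected pattern in functions that do have access to a
     * quic_conn
     */
    static int example_quic_fn(struct quic_conn *qc)
    {
        int ret = 0;

        TRACE_ENTER(QUIC_EV_CONN_LPKT, qc);
        /* ... function body, with a single exit point ... */
        TRACE_LEAVE(QUIC_EV_CONN_LPKT, qc);
        return ret;
    }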
While testing the fix for the previous issue related to reloads with
hard_stop_after, I've met another one which could spuriously produce:
FATAL: bug condition "t->tid >= 0 && t->tid != tid" matched at include/haproxy/task.h:266
In 2.3-dev2, we've added more consistency checks for a number of bug-
inducing programming errors related to the tasks, via commit e5d79bccc
("MINOR: tasks/debug: add a few BUG_ON() to detect use of wrong timer
queue"), and this check comes from there.
The problem that happens here is that when hard-stop-after is set, we
can abort the current thread even if there are still ongoing checks
(or connections in fact). In this case some tasks are present in a
thread's wait queue and are thus bound exclusively to this thread.
During deinit(), the collect and cleanup of all memory areas also
stops servers and kills their check tasks. And calling task_destroy()
does in turn call task_unlink_wq()... except that it's called from
thread 0 which doesn't match the initially planned thread number.
Several approaches are possible. One of them would consist in letting
threads perform their own cleanup (tasks, pools, FDs, etc). This would
possibly be even faster since done in parallel, but some corner cases
might be way more complicated (e.g. who will kill a check's task, or
what to do with a task found in a local wait queue or run queue, and
what about other consistency checks this could violate?).
Thus for now this patch takes an easier and more conservative
approach consisting in admitting that when the process is stopping,
this rule is not necessarily valid, and to let thread 0 collect all
other threads' garbage.
As such this patch can be backported to 2.4.
The poller pipes needed to communicate between multiple threads are
allocated in init_pollers_per_thread() and released in
deinit_pollers_per_thread(). The former adds them via fd_insert()
so that they are known, but the latter only closes them using a
regular close().
This asymmetry represents a problem, because we have in the fdtab[]
an entry for something that may disappear when one thread leaves, and
since these FD numbers are very low, there is a very high likelihood
that they are immediately reassigned to another thread trying to
connect() to a server or just sending a health check. In this case,
the other thread is going to fd_insert() the fd and the recently
added consistency checks will notice that ->owner is not NULL and
will crash. We just need to use fd_delete() here to match fd_insert().
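The principle, as a hedged sketch with a made-up helper name:

    /* an fd registered with fd_insert() must be released with fd_delete()
     * so that its fdtab[] entry is cleared as well; a bare close() would
     * leave a stale ->owner behind for an fd number another thread may
     * immediately reuse.
     */
    static void release_registered_fd(int fd)
    {
        if (fd > -1)
            fd_delete(fd); /* closes the fd and clears fdtab[fd] */
    }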
Note that this test was added in 2.7-dev2 by commit 36d9097cf
("MINOR: fd: Add BUG_ON checks on fd_insert()") which was backported
to 2.4 as a safety measure (since it allowed to catch particularly
serious issues). The patch in itself isn't wrong, it just revealed
a long-dormant bug (been there since 1.9-dev1, 4 years ago). As such
the current patch needs to be backported wherever the commit above
is backported.
Many thanks to Christian Ruppert for providing detailed traces in
github issue #1807 and Cedric Paillet for bringing his complementary
analysis that helped to understand the required conditions for this
issue to happen (fast health checks @100ms + randomly long connections
~7s + fast reloads every second + hard-stop-after 5s were necessary
on the dev's machine to trigger it from time to time).
Fred managed to reproduce a crash showing a corrupted accept_list when
firing thousands of concurrent picoquicdemo clients at a single instance.
It may happen if the connection was placed into the accept_list and
immediately closed before being processed (e.g. on error or timeout?).
In any case the quic_conn_release() function should always detach the
connection being deleted from any list it may still belong to, like it
does for other lists, so let's add an MT_LIST_DELETE() here.
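Schematically, with an illustrative field name for the list element:

    /* sketch: on release, unconditionally detach the connection from the
     * accept list it may still be queued in
     */
    static void quic_conn_release_sketch(struct quic_conn *qc)
    {
        MT_LIST_DELETE(&qc->accept_list); /* safe even if already detached */
        /* ... detach from the other lists and free the connection ... */
    }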
This should be backported to 2.6.
When arriving at handshake completion, the next encryption level will be
NULL in quic_conn_io_cb(). Thus it must be checked before being dereferenced
via qc_need_sending() to prevent a crash.
This was reproduced quickly when browsing over a local nextcloud
instance through QUIC with firefox.
This has been introduced in the current dev with quic-conn Tx
refactoring. No need to backport it.
Consider a stream as opened when receiving a STOP_SENDING frame as the
first frame on the stream.
This patch is tagged as BUG because a BUG_ON may occur if only a
STOP_SENDING frame has been received for a stream. This will reset the
stream in accordance with RFC 9000, but internally it is considered an
invalid transition to reset an idle stream.
To fix this, simply use qcs_idle_open() in the STOP_SENDING parsing
function. This will mark the stream as OPEN before resetting it.
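Schematically, the parsing path now looks like this (a sketch, not the
exact code):

    /* on STOP_SENDING, first force the IDLE -> OPEN transition so that
     * the subsequent reset is a valid state change per RFC 9000
     */
    static void handle_stop_sending_sketch(struct qcs *qcs)
    {
        qcs_idle_open(qcs); /* no-op if the stream is already open */
        /* ... then schedule the RESET_STREAM emission as before ... */
    }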
This was detected on haproxy.org with the following backtrace :
FATAL: bug condition "qcs->st == QC_SS_IDLE" matched at
src/mux_quic.c:383
call trace(12):
| 0x490dd3 [b8 01 00 00 00 c6 00 00]: main-0x1d0633
| 0x4975b8 [48 8b 85 58 ff ff ff 8b]: main-0x1c9e4e
| 0x497df4 [48 8b 45 c8 48 89 c7 e8]: main-0x1c9612
| 0x49934c [48 8b 45 c8 48 89 c7 e8]: main-0x1c80ba
| 0x6b3475 [48 8b 05 54 1b 3a 00 64]: run_tasks_from_lists+0x45d/0x8b2
| 0x6b4093 [29 c3 89 d8 89 45 d0 83]: process_runnable_tasks+0x7c9/0x824
| 0x660bde [8b 05 fc b3 4f 00 83 f8]: run_poll_loop+0x74/0x430
| 0x6611de [48 8b 05 7b a6 40 00 48]: main-0x228
| 0x7f66e4fb2ea5 [64 48 89 04 25 30 06 00]: libpthread:+0x7ea5
| 0x7f66e455ab0d [48 89 c7 e8 5b 72 fc ff]: libc:clone+0x6d/0x86
Stream states have been implemented in the current dev tree. Thus, this
patch does not need to be backported.
Check in quic_conn_io_cb() if sending is required. This allows the Tx
buffer allocation to be skipped if it is not needed.
To implement this, we check if the frame lists on the current and next
encryption levels are empty. We also need to check that there is no need to
send an ACK, a PROBE or a CONNECTION_CLOSE. This has been isolated in a new
function, qc_need_sending(), which may be reused in some other functions in
the future.
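A hedged sketch of that logic; the flag and field names are illustrative and
may differ from the source tree:

    /* send if a CONNECTION_CLOSE is pending, an ACK is required, a probe
     * must be emitted, or frames are waiting to be sent
     */
    static int qc_need_sending_sketch(struct quic_conn *qc,
                                      struct quic_enc_level *qel)
    {
        return (qc->flags & QUIC_FL_CONN_IMMEDIATE_CLOSE) ||
               (qel->pktns->flags & QUIC_FL_PKTNS_ACK_REQUIRED) ||
               qel->pktns->tx.pto_probe ||
               !LIST_ISEMPTY(&qel->pktns->tx.frms);
    }

As noted earlier, the caller also has to check that the next encryption level
is not NULL before calling it once the handshake completes.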
This is the final patch of the quic-conn Tx refactoring. Extend the function
used to write a datagram header so that it also saves the written buffer
data at the same time. This makes sense as the two operations happen on the
same occasion, when a pre-written datagram is committed.
Complete refactor of quic-conn Tx buffer. The buffer is now released
on every send operation completion. This should help to reduce memory
footprint as now Tx buffers are allocated and released on demand.
To simplify allocation/free of quic-conn Tx buffer, two static functions
are created named qc_txb_alloc() and qc_txb_release().
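A hedged sketch of what these two helpers boil down to (the field names are
illustrative):

    /* allocate the Tx buffer on demand, release it once sending completes */
    static struct buffer *qc_txb_alloc_sketch(struct quic_conn *qc)
    {
        return b_alloc(&qc->tx.buf); /* NULL if the pool is exhausted */
    }

    static void qc_txb_release_sketch(struct quic_conn *qc)
    {
        b_free(&qc->tx.buf); /* hand the area back to the buffer pool */
    }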
In the first prototype version of QUIC, emission was multithreaded. To
support this, a custom thread-safe ring buffer was implemented with
qring/cbuf.
Now the thread model has been adjusted: a quic-conn is always used on
the same thread and emission is no longer multithreaded. Thus, qring/cbuf
usage can be replaced by a standard struct buffer.
The code has been simplified even more as, for now, the buffer is always
drained after a prepare/send invocation. This is the case since a
datagram is always considered as sent, even on sendto() error. BUG_ON
statements are there as guards to ensure that this model always holds.
Thus, the code which handled data wrapping and consumed a too-small
contiguous space with a 0-length datagram is removed.
qc_send_app_pkts() now has a while loop implemented which allows all
possible frames to be sent even if the send buffer is full between packet
preparation and sending. This is present since commit :
dc07751ed7
MINOR: quic: Send packets as much as possible from qc_send_app_pkts()
This means we can remove the code from the MUX which implemented this at the
upper layer. This is useful to simplify the qc_send_frames() function.
As the mentioned commit is subject to backport, this commit should be
backported to 2.6 as well.
Right now the free() call is not intercepted since all this is done
using macros and that would break a lot of stuff. Instead a __free()
macro was provided but never used. In addition it used to only report
a zero size, which is not very convenient.
With this patch comes a better solution. Instead it provides a new
will_free() macro that can be prepended before a call to free(). It
only keeps the counters up to date, and also supports being passed a
size. The pool_free_area() command now uses it, which finally allows
the stats to look correct:
pool-os.h:38 MALLOC size: 5802127832 calls: 3868044 size/call: 1500
pool-os.h:47 FREE size: 5800041576 calls: 3867444 size/call: 1499
The few other places directly calling free() could now be instrumented to
use this and to pass the correct sizeof() when known.
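A hedged sketch of the principle only; the real macro also places its
counters into a dedicated "mem_stats" section and updates them atomically,
which is omitted here:

    #include <stddef.h>
    #include <stdlib.h>

    /* simplified per-call-site accounting, performed just before free() */
    #define will_free(ptr, size) do {                                 \
            static struct {                                           \
                    const char *file;                                 \
                    int line;                                         \
                    unsigned long long calls, bytes;                  \
            } _stats = { .file = __FILE__, .line = __LINE__ };        \
            (void)(ptr);                                              \
            _stats.calls++;                                           \
            _stats.bytes += (size);                                   \
    } while (0)

    /* typical usage, mirroring what pool_free_area() now does */
    static void pool_free_area_sketch(void *area, size_t size)
    {
        will_free(area, size); /* keep the counters up to date */
        free(area);            /* the real free() call is untouched */
    }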
The first column's width may vary a lot depending on outputs, and it's
annoying to have large empty columns on small names and mangled large
columns that are not yet large enough. In order to overcome this, this
patch adds a width field to the memstats applet's context, and this
width is calculated the first time the function is entered, by estimating
the width of all lines that will be dumped. This is simple enough and
does the job well. If in the future some filtering criteria are added,
it will still be possible to perform a single pass on everything
depending on the desired output format.
The calling function name is now stored in the structure, and it's
reported when the "all" argument is passed. The first column is
significantly enlarged because some names are really wide :-(
Not specifying the alignment will let the linker choose it, and it turns
out that it will not necessarily be the same size as the one chosen for
struct mem_stats, as can be seen if any new fields are added there. Let's
enforce an alignment to void* both for the section and for the structure.
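A hedged sketch of the kind of attributes involved (simplified structure and
a made-up declaration macro, not the real ones):

    #include <stddef.h>

    /* both the structure and each section entry are explicitly aligned on
     * void* so that the linker cannot choose a different padding for the
     * "mem_stats" section
     */
    struct mem_stats_sketch {
        size_t calls;
        size_t size;
    } __attribute__((aligned(sizeof(void *))));

    #define DECLARE_MEM_STATS_ENTRY(name)                             \
        static struct mem_stats_sketch name                           \
                __attribute__((used, __section__("mem_stats"),        \
                               __aligned__(sizeof(void *))))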
These fake datagrams are only used by the low level I/O handler. They
are not provided to the "by connection" datagram handlers. This
is why they are not MT_LIST_APPEND()ed to the listener RX buffer list
(see &quic_dghdlrs[cid_tid].dgrams in quic_lstnr_dgram_dispatch()).
Replace the call to pool_zalloc() by the lighter call to pool_malloc()
and initialize only the ->buf and ->length members. This is safe because
only these fields are inspected by the low level I/O handler.
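Schematically, following the names used in the description above (the exact
pool and member names in the tree may differ):

    /* schematic fragment from the fake datagram building path */
    static struct quic_dgram *new_fake_dgram(char *buf, size_t len)
    {
        struct quic_dgram *dgram = pool_malloc(pool_head_quic_dgram);

        if (dgram) {
            dgram->buf = buf;    /* area holding the fake datagram */
            dgram->length = len; /* number of bytes in it */
            /* the other members are never read by the low level
             * I/O handler, so no zeroing is needed
             */
        }
        return dgram;
    }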
After removing the packet header protection, we can check that the packet is
long enough to contain a 16-byte AEAD tag (at the end of the packet).
This test was missing.
Must be backported to 2.6.
These traces about the available room in the packet currently being built
and its payload length could be displayed for each STREAM frame, even for
those which have no chance to be embedded into a packet, leading to a huge
number of traces being displayed for a connection with a lot of streams.
This was revealed by traces provided by Tristan in GH #1808.
May be backported to 2.6.
When entering this function, we first check that the packet length is not too
short. But this was done against the datagram length instead of the packet
length. This could lead to the header protection being removed using data
past the end of the packet (without buffer overflow).
Use the packet length instead of the datagram length, whose end is at the
<end> address passed as a parameter to this function. As the packet length
is already stored in the ->len packet struct member, this <end> parameter is
no longer useful.
Must be backported to 2.6.
Released version 2.7-dev3 with the following main changes :
- BUILD: makefile: Fix install(1) handling for OpenBSD/NetBSD/Solaris/AIX
- BUG/MEDIUM: tools: avoid calling dlsym() in static builds (try 2)
- MINOR: resolvers: resolvers_destroy() deinit and free a resolver
- BUG/MINOR: resolvers: shut off the warning for the default resolvers
- BUG/MINOR: ssl: allow duplicate certificates in ca-file directories
- BUG/MINOR: tools: fix statistical_prng_range()'s output range
- BUG/MINOR: quic: do not send CONNECTION_CLOSE_APP in initial/handshake
- BUILD: debug: Add braces to if statement calling only CHECK_IF()
- BUG/MINOR: fd: Properly init the fd state in fd_insert()
- BUG/MEDIUM: fd/threads: fix incorrect thread selection in wakeup broadcast
- MINOR: init: load OpenSSL error strings
- MINOR: ssl: enhance ca-file error emitting
- BUG/MINOR: mworker/cli: relative pid prefix not validated anymore
- BUG/MAJOR: mux_quic: fix invalid PROTOCOL_VIOLATION on POST data overlap
- BUG/MEDIUM: mworker: proc_self incorrectly set crashes upon reload
- BUILD: add detection for unsupported compiler models
- BUG/MEDIUM: stconn: Only reset connect expiration when processing backend side
- BUG/MINOR: backend: Fallback on RR algo if balance on source is impossible
- BUG/MEDIUM: master: force the thread count earlier
- BUG/MAJOR: poller: drop FD's tgid when masks don't match
- DEBUG: fd: detect possibly invalid tgid in fd_insert()
- BUG/MINOR: sockpair: wrong return value for fd_send_uxst()
- MINOR: sockpair: move send_fd_uxst() error message in caller
- Revert "BUG/MINOR: peers: set the proxy's name to the peers section name"
- DEBUG: fd: split the fd check
- MEDIUM: resolvers: continue startup if network is unavailable
- BUG/MINOR: fd: always remove late updates when freeing fd_updt[]
- MINOR: cli: emit a warning when _getsocks was used more than once
- BUG/MINOR: mworker: PROC_O_LEAVING used but not updated
- Revert "MINOR: cli: emit a warning when _getsocks was used more than once"
- MINOR: cli: warning on _getsocks when socket were closed
- BUG/MEDIUM: mux-quic: fix missing EOI flag to prevent streams leaks
- MINOR: quic: Congestion control architecture refactoring
- MEDIUM: quic: Cubic congestion control algorithm implementation
- MINOR: quic: New "quic-cc-algo" bind keyword
- BUG/MINOR: quic: loss time limit variable computed but not used
- MINOR: quic: Stop looking for packet loss asap
- BUG/MAJOR: quic: Useless resource intensive loop qc_ackrng_pkts()
- MINOR: quic: Send packets as much as possible from qc_send_app_pkts()
- BUG/MEDIUM: queue/threads: limit the number of entries dequeued at once
- MAJOR: threads/plock: update the embedded library
- MINOR: thread: provide an alternative to pthread's rwlock
- DEBUG: tools: provide a tree dump function for ebmbtrees as well
- MINOR: ebtree: add ebmb_lookup_shorter() to pursue lookups
- BUG/MEDIUM: pattern: only visit equivalent nodes when skipping versions
- BUG/MINOR: mux-quic: prevent crash if conn released during IO callback
- CLEANUP: mux-quic: remove useless app_ops is_active callback
- BUG/MINOR: mux-quic: do not free conn if attached streams
- MINOR: mux-quic: save proxy instance into qcc
- MINOR: mux-quic: use timeout server for backend conns
- MEDIUM: mux-quic: adjust timeout refresh
- MINOR: mux-quic: count in-progress requests
- MEDIUM: mux-quic: implement http-keep-alive timeout
- MINOR: peers: Add a warning about incompatible SSL config for the local peer
- MINOR: peers: Use a dedicated reconnect timeout when stopping the local peer
- BUG/MEDIUM: peers: limit reconnect attempts of the old process on reload
- BUG/MINOR: peers: Use right channel flag to consider the peer as connected
- BUG/MEDIUM: dns: Properly initialize new DNS session
- BUG/MINOR: backend: Don't increment conn_retries counter too early
- MINOR: server: Constify source server to copy its settings
- REORG: server: Export srv_settings_cpy() function
- BUG/MEDIUM: proxy: Perform a custom copy for default server settings
- BUG/MINOR: quic: Missing in flight ack eliciting packet counter decrement
- BUG/MEDIUM: quic: Floating point exception in cubic_root()
- MINOR: h3: support HTTP request framing state
- MINOR: mux-quic: refresh timeout on frame decoding
- MINOR: mux-quic: refactor refresh timeout function
- MEDIUM: mux-quic: implement http-request timeout
- BUG/MINOR: quic: Avoid sending truncated datagrams
- BUG/MINOR: ring/cli: fix a race condition between the writer and the reader
- BUG/MEDIUM: sink: Set the sink ref for forwarders created during ring parsing
- BUG/MINOR: sink: fix a race condition between the writer and the reader
- BUG/MINOR: quic: do not reject datagrams matching minimum permitted size
- MINOR: quic: Add two new stats counters for sendto() errors
- BUG/MINOR: quic: Missing Initial packet dropping case
- MINOR: quic: explicitely ignore sendto error
- BUG/MINOR: quic: adjust errno handling on sendto
- BUG/MEDIUM: quic: break out of the loop in quic_lstnr_dghdlr
- MINOR: threads: report the number of thread groups in build options
- MINOR: config: automatically preset MAX_THREADS based on MAX_TGROUPS
- BUILD: SSL: allow to pass additional configure args to QUICTLS
- CI: enable weekly "m32" builds on x86_64
- CLEANUP: assorted typo fixes in the code and comments
- BUG/MEDIUM: fix DH length when EC key is used
- REGTESTS: ssl: adopt tests to OpenSSL-3.0.N
- REGTESTS: ssl: adopt tests to OpenSSL-3.0.N
- REGTESTS: ssl: fix grep invocation to use extended regex in ssl_generate_certificate.vtc
- BUILD: cfgparse: always defined _GNU_SOURCE for sched.h and crypt.h
_GNU_SOURCE used to be defined only when USE_LIBCRYPT was set. It's also
needed for sched_setaffinity() to be exported. As a side effect, when
USE_LIBCRYPT is not set, a warning is emitted, as Ilya found and reported
in issue #1815. Let's just define _GNU_SOURCE regardless of USE_LIBCRYPT,
and also explicitly add sched.h, as right now it appears to be inherited
from one of the other includes.
This should be backported to 2.4.
DH parameters of length 1024 were chosen for the EVP_PKEY_EC key type.
Let us pick "default_dh_param" instead.
The issue was found on Ubuntu 22.04, which is shipped with OpenSSL configured
with SECLEVEL=2 by default. Such a SECLEVEL value prohibits DH shorter than
2048 bits:
OpenSSL error[0xa00018a] SSL_CTX_set0_tmp_dh_pkey: dh key too small
A better strategy for choosing the DH parameters may still be considered though.
MAX_THREADS was not changed when setting MAX_TGROUPS, which still limits
some possibilities. Let's preset it to 4 * LONGBITS when MAX_TGROUPS is
larger than 1, or LONGBITS when it's set to 1. This means that the new
default value is 256 threads.
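A sketch of the resulting preprocessor logic (LONGBITS being the number of
bits in a long; the exact conditionals in the tree may differ):

    #ifndef MAX_THREADS
    # if MAX_TGROUPS > 1
    #  define MAX_THREADS (4 * LONGBITS)  /* i.e. 256 on 64-bit systems */
    # else
    #  define MAX_THREADS LONGBITS        /* i.e. 64 on 64-bit systems */
    # endif
    #endif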
The rationale behind this is that the main use of thread groups is
mostly to address NUMA issues and that we don't necessarily need large
thread counts when using many groups, and 256 threads is already plenty
even on quite large systems.
For now it's important not to go too far because some internal structs
are arrays of MAX_THREADS entries, for example accept_queue_ring, which
is around 8kB per thread. Such structures will need to become dynamic
before defaulting to large thread counts (at 4096 threads max the
accept queues would require 32 MB RAM alone).
The function processes packets sent by other threads in the current
thread's queue. But if, for any reason, other threads write faster
than the current one processes, this can lead to a situation where
the function never returns.
It seems that it might be what's happening in issue #1808, though
unfortunately, this function is one of the rare ones without traces. But
the amount of calls to functions like qc_lstnr_pkt_rcv() on a single
thread seems to indicate this possibility.
Thanks to Tristan for his efforts in collecting extremely precious
traces!
This likely needs to be backported to 2.6.
qc_snd_buf() returned a size_t, which means that it was never negative
despite its documentation. Thus the caller who checked for this was
never informed of a sendto error.
Clean this up by changing the return value of qc_snd_buf() to an integer.
0 is returned on success; every other value is considered an error.
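A hedged sketch of the new calling convention, with a simplified argument
list and a made-up caller name:

    /* propagate the error instead of letting it be swallowed by an
     * unsigned return type
     */
    static int send_one_datagram(struct quic_conn *qc, struct buffer *buf)
    {
        if (qc_snd_buf(qc, buf, b_data(buf)) != 0)
            return 1; /* sendto() failed; the caller can now see it */
        return 0;
    }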
This commit should be backported up to 2.6. Note that to not cause
malfunctions, it must be backported after the previous patch :
906b058954
MINOR: quic: explicitely ignore sendto error
This is to ensure that a sendto error does not cause sending to be
interrupted, which could cause a stalled transfer without a proper retry
mechanism.
The impact of this bug seems null as the caller explicitly ignores the sendto
error. However this part of the code seems to be subject to strange issues
and it may fix them in part. It may be of interest for github issue #1808.
qc_snd_buf() returns an error if sendto has failed. Under standard
conditions, we should check for an EAGAIN/EWOULDBLOCK errno and, if so,
register the file descriptor in the poller to retry the operation later.
However, quic_conn directly uses the listener fd, which is shared by all
QUIC connections of this listener across several threads. Thus, it's
complicated to implement fd supervision via the poller: there is no
mechanism to easily wake up a quic_conn or the MUX after a sendto failure.
A quick and simple solution for the moment is to consider a datagram
as properly emitted even on sendto error. In the end, this will trigger
the quic_conn retransmission timer as data will be considered lost on
the network and the send operation will be retried. This solution will
be replaced when fd management for quic_conn is reworked.
In fact, this quick hack was already in use in the current code, albeit
not voluntarily. This is due to a bug caused by an API mismatch on the
return type of qc_snd_buf() which never emits a negative error code
despite its documentation. Thus, all its invocations were considered a
success. If this bug were fixed, the sending would have been
interrupted by a break, which could cause the transfer to freeze.
The qc_snd_buf() invocation is cleaned up: the break statement is removed.
The send operation is now always explicitly conducted entirely, even on
error, and the buffer data is purged.
A simple optimization has been added to skip the sendto calls when looping
over several datagrams after the first sendto error. However, to function
properly, it requires a fix of the return type of qc_snd_buf(), which is
provided in another patch.
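A hedged sketch of the combined behaviour; the two helpers below are
hypothetical and only stand for the surrounding loop logic:

    static void send_prepared_datagrams(struct quic_conn *qc, struct buffer *buf)
    {
        int sendto_failed = 0;

        while (more_datagrams_to_send(qc)) {      /* hypothetical helper */
            if (!sendto_failed && qc_snd_buf(qc, buf, b_data(buf)) != 0)
                sendto_failed = 1; /* skip further sendto() calls */
            /* the datagram is accounted as emitted either way; the
             * retransmission timer will recover real network losses
             */
            purge_datagram(qc, buf);              /* hypothetical helper */
        }
    }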
As the behavior before and after this patch seems identical, it is not
labelled as a BUG. However, it should be backported for cleaning
purposes. It may also have an impact on github issue #1808.