Instead of reporting the inaccurate "malloc_trim() support" on -vv, let's
report the case where the memory allocator was actively replaced from the
one used at build time, as this is the corner case we want to be cautious
about. We also set a tainted bit when this happens so that it's possible
to detect it at run time (e.g. the process might have inherited it from an
environment variable during a reload operation).
The now unused is_trim_enabled() function was finally dropped.
The runtime detection of the default memory allocator was broken by
commit d8a97d8f6 ("BUG/MINOR: illegal use of the malloc_trim() function
if jemalloc is used") due to a misunderstanding of its role. The purpose
is not to detect whether we're on non-jemalloc but whether or not the
allocator was changed from the one we booted with, in which case we must
be extra cautious and absolutely refrain from calling malloc_trim() and
its friends.
This was done only to drop the message saying that malloc_trim() is
supported, which will be totally removed in another commit, and could
possibly be removed even in older versions if this patch were to be
backported, since in the end it provides limited value.
We already have some generic code in trim_all_pools() to implement the
equivalent of malloc_trim() on jemalloc and macos. Instead of keeping the
logic there, let's just move it to our own malloc_trim() implementation
so that we can unify the mechanism and the logic. Now any low-level code
calling malloc_trim() will either be disabled by haproxy's config if the
user decides to, or will be mapped to the equivalent mechanism if malloc()
was intercepted by a preloaded jemalloc.
trim_all_pools() preserves the benefit of serializing threads (something we
must not impose on other libs, which could come with their own threads).
It means that our own code should mostly use trim_all_pools() instead of
calling malloc_trim() directly.
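For illustration, a minimal sketch of the unified mechanism, assuming the
equivalent functions were resolved at boot (e.g. via dlsym()); the helper
names below are illustrative, not the actual haproxy symbols:
  #include <stddef.h>
  /* hedged sketch: forward a trim request to whatever mechanism the
   * allocator detected at boot provides. The pointers are assumed to be
   * resolved during startup and left NULL otherwise.
   */
  static int (*resolved_je_purge)(void);      /* e.g. jemalloc's mallctl("arena.<i>.purge") */
  static int (*resolved_zone_relief)(void);   /* e.g. macOS malloc_zone_pressure_relief() */
  static int (*resolved_malloc_trim)(size_t); /* the original libc malloc_trim() */
  static int trimming_enabled = 1;            /* cleared by "no-memory-trimming" */
  int malloc_trim(size_t pad)
  {
      if (!trimming_enabled)
          return 0;
      if (resolved_je_purge)
          return resolved_je_purge();
      if (resolved_zone_relief)
          return resolved_zone_relief();
      if (resolved_malloc_trim)
          return resolved_malloc_trim(pad);
      return 0;
  }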
As reported by Miroslav in commit d8a97d8f6 ("BUG/MINOR: illegal use of
the malloc_trim() function if jemalloc is used") there are still occasional
cases where it's discovered that malloc_trim() is being used without its
suitability being checked first. This is a problem when using another,
incompatible allocator. And there's a class of use cases we'll never be
able to cover: dynamic libraries loaded from Lua. In order to address
this more reliably, we now define our own malloc_trim() that calls the
previous one after checking that the feature is supported and that the
allocator is the expected one. This way child libraries that would call
it will also be safe.
The function is intentionally left defined all the time so that it will
be possible to clean up some code that uses it by removing ifdefs.
The global option 'no-memory-trimming' was added in 2.6 with commit
c4e56dc58 ("MINOR: pools: add a new global option "no-memory-trimming"")
but there were some cases left where it was not considered. Let's make
is_trim_enabled() also consider it.
In the event that HAProxy is linked with the jemalloc library, it is still
shown that malloc_trim() is enabled when executing "haproxy -vv":
..
Support for malloc_trim() is enabled.
..
This in itself is not such a problem; the real issue is that malloc_trim()
is called in the pat_ref_purge_range() function without any check.
This was solved by setting the using_default_allocator variable to the
correct value in the detect_allocator() function, and by checking whether
malloc_trim() should be called before actually calling it.
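The shape of the fix, as a hedged sketch (the exact condition used in
pat_ref_purge_range() may differ slightly):
  /* only trim when the allocator we booted with is still in place */
  if (using_default_allocator)
      malloc_trim(0);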
Using -dMfail alone does nothing unless tune.fail-alloc is set, which
renders it pretty useless as-is, and is not intuitive. Let's change
this so that the failure rate is preset to 1% when the option is set on
the command line. This makes it possible to inject failures without having
to edit the configuration.
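Example (paths are placeholders):
  # previously, -dMfail was a no-op without this in the configuration:
  global
      tune.fail-alloc 10          # fail roughly 10% of allocations
  # now a 1% failure rate is preset as soon as the option is given:
  $ haproxy -dMfail -f /etc/haproxy/haproxy.cfg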
The same change was already performed for the CLI. The stats applet and the
prometheus exporter are also concerned. Both use the stats API and rely on
pool functions to get total pool usage in bytes. pool_total_allocated() and
pool_total_used() must return a 64-bit unsigned integer to avoid any wrapping
around 4G.
This may be backported to all versions.
We don't need to know very accurately how much RAM is needed in a pool,
however we must not spend time competing with other threads trying to be
the one with the most accurate value. Let's use the "_opportunistic"
variants of swrate_add() which will simply cause some updates to be
dropped in case of thread contention. This should significantly improve
the situation when dealing with many threads and small per-thread caches.
Performance gains of up to 1-2% were observed on 48-thread systems thanks
to this alone.
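The principle, shown as a self-contained C11 sketch (the real
swrate_add_opportunistic() lives in haproxy's headers and uses its own
atomic macros; the update formula below only approximates the regular
swrate_add() behavior):
  #include <stdatomic.h>
  /* update an exponentially weighted sum over ~n samples, but make a
   * single CAS attempt: under contention the sample is simply dropped
   * instead of spinning for accuracy.
   */
  static inline unsigned int swrate_add_opportunistic_sketch(_Atomic unsigned int *sum,
                                                             unsigned int n,
                                                             unsigned int v)
  {
      unsigned int old_sum = atomic_load_explicit(sum, memory_order_relaxed);
      unsigned int new_sum = old_sum - (old_sum + n - 1) / n + v;
      /* one attempt only: losing the race merely loses this sample */
      atomic_compare_exchange_weak_explicit(sum, &old_sum, new_sum,
                                            memory_order_relaxed,
                                            memory_order_relaxed);
      return new_sum;
  }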
Till now it was only possible to change the thread-local hot cache size
at build time using CONFIG_HAP_POOL_CACHE_SIZE. But during benchmarks,
huge contention was sometimes noticed in the lower-level memory
allocators, indicating that larger caches could be beneficial, especially
on machines whose CPUs have large L2 caches.
Given that the checks against this value are no longer on a hot path,
there was no reason to keep forcing it to be tuned at build time. So
this patch allows it to be set via tune.memory-hot-size.
It's worth noting that during the boot phase the value remains zero so
that it's possible to know if the value was set or not, which opens the
possibility that we try to automatically adjust it based on the per-cpu
L2 cache size or the use of certain protocols (none of this is done yet).
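Example (the value is a number of bytes; the one below is arbitrary):
  global
      # enlarge the per-thread hot cache beyond the built-in default
      tune.memory-hot-size 1048576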
Since the massive pools cleanup that happened in 2.6, the pools
architecture was made quite more hierarchical and many alternate code
blocks could be moved to runtime flags set by -dM. One of them had not
been converted by then: DEBUG_UAF. It's not much more difficult actually,
since it only acts on a pair of function indirections on the slow path
(OS-level allocator) and a default setting for the cache activation.
This patch adds the "uaf" setting to the options permitted in -dM so
that it now becomes possible to set or unset UAF at boot time without
recompiling. This is particularly convenient, because every 3 months on
average, developers ask a user to recompile haproxy with DEBUG_UAF to
understand a bug. Now it will not be needed anymore, instead the user
will only have to disable pools and enable uaf using -dMuaf. Note that
-dMuaf only disables previously enabled pools, but it remains possible
to re-enable caching by specifying the cache after, like -dMuaf,cache.
A few tests with this mode show that it can be an interesting combination
which catches significantly fewer UAFs but does so with much less
overhead, so it might be compatible with some high-traffic deployments.
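Example invocations:
  # catch use-after-free at the OS level; this also disables the pool cache
  $ haproxy -dMuaf -f /etc/haproxy/haproxy.cfg
  # same, but keep the cache enabled (lighter, catches fewer UAFs)
  $ haproxy -dMuaf,cache -f /etc/haproxy/haproxy.cfg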
The change is very small and isolated. It could be helpful to backport
this at least to 2.7 once confirmed not to cause build issues on exotic
systems, and even to 2.6 a bit later as this has proven to be useful
over time, and could be even more if it did not require a rebuild. If
a backport is desired, the following patches are needed as well:
CLEANUP: pools: move the write before free to the uaf-only function
CLEANUP: pool: only include pool-os from pool.c not pool.h
REORG: pool: move all the OS specific code to pool-os.h
CLEANUP: pools: get rid of CONFIG_HAP_POOLS
DEBUG: pool: show a few examples in -dMhelp
It's not always easy to remember what certain options do together nor
which ones are only relevant when combined with others, so let's add a
few examples with the "help" command on -dM.
This one was set in defaults.h only when neither DEBUG_NO_POOLS nor
DEBUG_UAF were set. This was not the most convenient location to look
for it, and it was only used in pool.c to decide on the initial value
of POOL_DBG_NO_CACHE.
Let's just use DEBUG_NO_POOLS || DEBUG_UAF directly on this flag and
get rid of the intermediary condition. This also has the benefit of
removing a double inversion, which is always nice for understanding.
Till now pool-os used to contain a mapping from pool_{alloc,free}_area()
to pool_{alloc,free}_area_uaf() in case of DEBUG_UAF, or the regular
malloc-based function. And the *_uaf() functions were in pool.c. But
since 2.4 with the first cleanup of the pools, there has been no more
calls to pool_{alloc,free}_area() from anywhere but pool.c, from exactly
one place each. As such, there's no more need to keep the *_uaf() variants
apart in pool.c: we can inline them into pool-os.h and leave all the OS
stuff there, with pool.c calling either one depending on DEBUG_UAF. This
is cleaner, with fewer round trips between both files, and easier to find.
There's no need for the low-level pool functions to be known from all
callers anymore, they're only used by pool.c. Let's reduce the amount
of header files processed.
In UAF mode, pool_put_to_os() performs a write to the about-to-be-freed
memory area so as to make sure the page is properly mapped and catch a
possible double-free. However there's no point keeping that in an ifdef
in the generic function, because we now have a pool_free_area_uaf()
that is the UAF-specific version of pool_free_area() and the one that
is called immediately after this write. Let's move the code there, it
will be cleaner.
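For reference, a hedged sketch of the principle behind the two *_uaf()
functions (the real ones also deal with the exact area size and padding):
  #include <stddef.h>
  #include <sys/mman.h>
  /* each object gets its own mapping so that releasing it unmaps the
   * pages immediately and any later access faults instead of silently
   * corrupting memory.
   */
  static void *sketch_pool_alloc_area_uaf(size_t size)
  {
      void *area = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      return (area == MAP_FAILED) ? NULL : area;
  }
  static void sketch_pool_free_area_uaf(void *area, size_t size)
  {
      /* write before free: verifies the page is properly mapped and
       * catches a possible double-free right before the munmap() below.
       */
      *(volatile char *)area = 0;
      munmap(area, size);
  }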
The "show pools" command is used a lot for debugging but didn't get much
love over the years. This patch brings new capabilities:
- sorting the output by pool name to ease finding them ("byname").
- sorting the output by reverse item size to spot the biggest ones ("bysize").
- sorting the output by reverse number of allocated bytes ("byusage").
The last one (byusage) also omits displaying the ones with zero allocation.
In addition, an optional max number of output entries may be passed so as
to dump only the N most relevant ones.
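Example on the CLI (the socket path is a placeholder):
  # the 10 pools using the most memory, biggest consumers first
  $ echo "show pools byusage 10" | socat stdio /var/run/haproxy.sock
  # all pools, sorted by name
  $ echo "show pools byname" | socat stdio /var/run/haproxy.sock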
As noticed by Gabriel Tzagkarakis in issue #1903, the total pool size
in bytes is historically still in 32 bits, but at least we should report
the product of the number of objects and their size in 64 bits so that
the value doesn't wrap around 4G.
This may be backported to all versions.
Storing the pointer to the pool along with the stats is quite useful as
it allows reporting the name. That's what we're doing here. We could
store it in place of another field but that's not convenient as it would
require changing all functions that manipulate counters. Thus here we
store one extra field, as well as some padding, because the struct turns
56 bytes long, so better go to 64 bytes directly. Example of output from
"show profiling memory":
2 0 48 0| 0x4bfb2c ha_quic_set_encryption_secrets+0xcc/0xb5e p_alloc(24) [pool=quic_tls_iv]
0 55252 0 10608384| 0x4bed32 main+0x2beb2 free(-192)
15 0 2760 0| 0x4be855 main+0x2b9d5 p_alloc(184) [pool=quic_frame]
1 0 1048 0| 0x4be266 ha_quic_add_handshake_data+0x2b6/0x66d p_alloc(1048) [pool=quic_crypto]
3 0 552 0| 0x4be142 ha_quic_add_handshake_data+0x192/0x66d p_alloc(184) [pool=quic_frame]
31276 0 6755616 0| 0x4bb8f9 quic_sock_fd_iocb+0x689/0x69b p_alloc(216) [pool=quic_dgram]
0 31424 0 6787584| 0x4bb7f3 quic_sock_fd_iocb+0x583/0x69b p_free(-216) [pool=quic_dgram]
152 0 32832 0| 0x4bb4d9 quic_sock_fd_iocb+0x269/0x69b p_alloc(216) [pool=quic_dgram]
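For illustration only, such a per-call-place bin could look like the sketch
below; the field names are assumptions, not the actual struct definition:
  struct pool_head;
  struct memprof_bin_sketch {
      const void *caller;             /* return address of the calling code */
      int method;                     /* malloc/free/p_alloc/p_free/... */
      unsigned long long alloc_calls;
      unsigned long long free_calls;
      unsigned long long alloc_tot;   /* total bytes allocated */
      unsigned long long free_tot;    /* total bytes released */
      const struct pool_head *pool;   /* the extra field: NULL for non-pool calls */
      void *pad;                      /* 56 -> 64 bytes on 64-bit platforms */
  };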
Pools are being used so well that it becomes difficult to profile their
usage via the regular memory profiling. Let's add new entries for pools
there, named "p_alloc" and "p_free" that correspond to pool_alloc() and
pool_free(). Ideally it would be nice to only report those that fail
cache lookups but that's complicated, particularly on the free() path
since free lists are released in clusters to the shared pools.
It's worth noting that the alloc_tot/free_tot fields can easily be
determined by multiplying alloc_calls/free_calls by the pool's size, and
could be better used to store a pointer to the pool itself. However it
would require significant changes down the code that sorts output.
If this were to cause a measurable slowdown, an alternate approach could
consist in using a different value of USE_MEMORY_PROFILING to enable pools
profiling. Also, this profiler doesn't depend on intercepting the regular
malloc functions, so we could also imagine enabling either one alone or
both of them.
Tests show that the CPU overhead on QUIC (which is already an extremely
intensive user of pools) jumps from ~7% to ~10%. This is quite acceptable
in most deployments.
There's no more reason for keeping the code and definitions in conn_stream,
let's move all that to stconn. The alphabetical ordering of include files
was adjusted.
This file contains all the stream-connector functions that are specific
to application layers of type stream. So let's name it accordingly so
that it's easier to figure what's located there.
The alphabetical ordering of include files was preserved.
This applies the change so that the applet code stops using ci_putchk()
and friends everywhere possible, in favor of the much safer applet_put*()
functions. The change is mechanical but large. Two or three functions used
to take not the appctx but a cs derived from it, a reminiscence of the old
stream_interface days. These were simply changed to directly take the
appctx. No sensitive change was performed, and the old (more complex) API
remains usable when needed (e.g. when the channel is already known).
The change touched roughly a hundred locations, with no less than 124
lines removed.
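The typical shape of the change is (hedged; exact call sites vary):
  /* before: channel-centric, going through the stream-interface */
  ci_putchk(si_ic(si), &trash);
  /* after: appctx-centric, using the safer applet API */
  applet_putchk(appctx, &trash);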
It's worth noting that the stats applet, the oldest of the series, could
use a serious lifting, as it's still very channel-centric instead of
propagating the appctx along the chain. Given that this code doesn't
change often, there's no urgency to clean it up, but it would look
better.
This renames the "struct conn_stream" to "struct stconn" and updates
the descriptions in all comments (and the rare help descriptions) to
"stream connector" or "connector". This touches a lot of files but
the change is minimal. The local variables were not even renamed, so
there's still a lot of "cs" everywhere.
This one is the pointer to the conn_stream, which is always present in the
endpoint that the appctx always carries, so it's not needed.
This patch removes it and replaces it with appctx_cs() instead. A
few occurrences that were using __cs_strm(appctx->owner) were moved
directly to appctx_strm(), which does the equivalent.
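Typical substitution performed by this patch:
  /* before */
  struct stream *s = __cs_strm(appctx->owner);
  /* after */
  struct stream *s = appctx_strm(appctx);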
In issue #1677, Tim reported that we don't correctly free some shared
pools on exit. What happens in fact is that pool_destroy() is meant to
be called once per pool *pointer*, as it decrements the use count for
each pass and only releases the pool when it reaches zero. But since
pool_destroy_all() iterates over the pools list, it visits each pool
only once and will not eliminate some of them, which thus remain in the
list.
In an ideal case, the function should loop over all pools for as long
as the list is not empty, but that's pointless as we know we're exiting,
so let's just set the users count to 1 before the call so that
pool_destroy() knows it can delete and release the entry.
This could be backported to all versions (memory.c in 2.0 and older) but
it's not a real problem in practice.
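The resulting loop roughly looks like this hedged sketch (using haproxy's
list iterators):
  struct pool_head *entry, *back;
  list_for_each_entry_safe(entry, back, &pools, list) {
      entry->users = 1;      /* force the actual release on this single visit */
      pool_destroy(entry);
  }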
Remaining flags and associated functions are moved into the conn-stream
scope. These flags are added on the endpoint and not the conn-stream
itself. This way it will be possible to get them from the mux or the
applet. The functions to get or set these flags are renamed accordingly
with the "cs_" prefix and updated to manipulate a conn-stream instead of
a stream-interface.
At many places, we now use the new CS functions to get a stream or a channel
from a conn-stream instead of using the stream-interface API. It is the
first step to reduce the scope of the stream-interfaces. The main change
here is about the applet I/O callback functions. Before the refactoring, the
stream-interface was the appctx owner. Thus, it was heavily used. Now, as
far as possible, the conn-stream is used. Of course, many calls to the
stream-interface API remain.
Some users with very large numbers of connections have been facing
extremely long malloc_trim() calls on reload that managed to trigger
the watchdog! That's a bit counter-productive. It's even possible
that some implementations are not perfectly reliable or that their
trimming time grows quadratically with the memory used. Instead of
constantly trying to work around these issues, let's offer an option
to disable this mechanism, since nobody had been complaining in the
past, and this was only meant to be an improvement.
This should be backported to 2.4 where trimming on reload started to
appear.
Commit e81248c0c ("BUG/MINOR: pool: always align pool_heads to 64 bytes")
added a free of the allocated pool in pool_destroy() using ha_free(), but
it added a subtle bug by which once the pool is released, setting its
address to NULL inside the structure itself cannot work because the area
has just been freed.
This will need to be backported wherever the patch above is backported.
This is the pool equivalent of commit 97ea9c49f ("BUG/MEDIUM: fd: always
align fdtab[] to 64 bytes"). After a careful code review, it happens that
the pool heads are the other structures allocated with malloc/calloc that
claim an alignment larger than what the allocator can offer. While no
issue was reported on them (no memset() is performed and no type is
large), this is a problem waiting to happen, so better fix it. In
addition, it's relatively easy to do by storing the allocation address
inside the pool_head itself and use it at free() time. Finally, threads
might benefit from the fact that the caches will really be aligned and
that there will be no false sharing.
This should be backported to all versions where it applies easily.
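A minimal sketch of the technique (the real patch stores the allocation
address inside the pool_head itself):
  #include <stdint.h>
  #include <stdlib.h>
  /* over-allocate, align the usable area to 64 bytes, and keep the pointer
   * really returned by malloc() so it can be passed to free() later.
   */
  static void *alloc_aligned64_sketch(size_t size, void **alloc_addr)
  {
      void *base = malloc(size + 63);
      if (!base)
          return NULL;
      *alloc_addr = base;    /* what must be freed later */
      return (void *)(((uintptr_t)base + 63) & ~(uintptr_t)63);
  }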
Because appctx is now an endpoint of the conn-stream, there is no reason to
still have the stream-interface as appctx owner. Thus, the conn-stream is
now the appctx owner.
The 9 currently available debugging options may now be checked, set, or
cleared using -dM. The directive now takes a comma-delimited list of
options after the optional poisoning byte. With "help", the list of
available options is displayed with a short help and their current
status.
The management doc was updated.
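Example (the optional poisoning byte, when used, still comes first as
described above):
  # list the supported options with a short help and their current state
  $ haproxy -dMhelp
  # enable allocation failure injection and explicitly keep the cache
  $ haproxy -dMfail,cache -f /etc/haproxy/haproxy.cfg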
New function pool_parse_debugging() is now dedicated to parsing options
of -dM. For now it only handles the optional memory poisoning byte, but
the function may already return an informative message to be printed for
help, a warning or an error. This way we'll reuse it for the settings
that will be needed for configurable debugging options.
Now -dM will set POOL_DBG_POISON for consistency with the rest of the
pool debugging options. As such, we now only check for the new flag,
which allows the default value to be preset.
This option used to allow storing a marker at the end of the area, which
was used as a canary to detect wrong freeing while the object is in use,
and as a pointer to the last pool_free() caller when back in the cache.
Now that we can compute the offsets at runtime, let's check the option at
run time and continue the code simplification.
This option used to allow storing a pointer to the caller of the last
pool_alloc() or pool_free() at the end of the area. Now that we can
compute the offsets at runtime, let's check it at run time and continue
the code simplification. In __pool_alloc() we now always calculate the
return address (which is quite cheap), and the POOL_DEBUG_TRACE_CALLER()
calls are conditioned on the status of the debugging option.
This macro is build-time dependent and is almost unused, except in the few
places where it cannot easily be avoided. Now that we store the distinction between
pool->size and pool->alloc_sz, we don't need to maintain it and we
can instead compute it on the fly when creating a pool. This is what
this patch does. The variables are for now pretty static, but this is
sufficient to kill the macro and will allow setting them more dynamically.
The allocated size is the visible size plus the extra storage. Since
for now we can store up to two extra elements (mark and tracer), it's
convenient because now we know that the mark is always stored at
->size, and the tracer is always before ->alloc_sz.
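For illustration, the computation boils down to the following self-contained
sketch (plain booleans stand in for the relevant debugging options):
  #include <stddef.h>
  /* visible size + optional extras: the mark lives right at ->size and
   * the tracer right before ->alloc_sz when the options are enabled.
   */
  static size_t pool_alloc_sz_sketch(size_t size, int with_mark, int with_tracer)
  {
      size_t extra = 0;
      if (with_mark)
          extra += sizeof(void *);
      if (with_tracer)
          extra += sizeof(void *);
      return size + extra;
  }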
Like previous patches, this replaces the build-time code paths that were
conditioned by CONFIG_HAP_POOLS with runtime paths conditioned by
!POOL_DBG_NO_CACHE. One trivial test had to be added in the hot path in
__pool_alloc() to refrain from calling pool_get_from_cache(), and another
one in __pool_free() to avoid calling pool_put_to_cache().
All cache-specific functions were instrumented with a BUG_ON() to make
sure we never call them with cache disabled. Additionally the cache[]
array was not initialized (remains NULL) so that we can later drop it
if not needed. It's particularly huge and should be turned to dynamic
with a pointer to a per-thread area where all the objects are located.
This will solve the memory usage issue and will improve locality, or
even help better deal with NUMA machines once each thread uses its own
arena.
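The extra test in the hot path essentially takes this shape (fragment for
illustration; pool_alloc_nocache() is the assumed name of the existing
slow-path helper):
  void *p = NULL;
  if (likely(!(pool_debugging & POOL_DBG_NO_CACHE)))
      p = pool_get_from_cache(pool, caller);
  if (unlikely(!p)) {
      /* slower path: allocate straight from the shared pool or the OS */
      p = pool_alloc_nocache(pool);
  }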
There were very few functions left that were specific to global pools,
and even the checks they used to participate in are not directly on the
most critical path, so they can suffer an extra "if".
What's done now is that pool_releasable() always returns 0 when global
pools are disabled (just as before) so that pool_evict_last_items()
never tries to place evicted objects there. As such there will never be
any object in the free list. However pool_refill_local_from_shared() is
bypassed when global pools are disabled so that we even avoid the atomic
loads from this function.
The default global setting is still adjusted based on the original
CONFIG_NO_GLOBAL_POOLS that is set depending on threads and the allocator.
The global executable only grew by 1.1kB by keeping this code enabled,
and the code is simplified and will later support runtime options.
The test to decide whether or not to enforce integrity checks on cached
objects is now enabled at runtime and conditioned by this new debugging
flag. While previously it was not a concern to inflate the code size by
keeping the two functions static, they were moved to pool.c to limit the
impact. In pool_get_from_cache(), the fast code path remains fast by
having both flags tested at once to open a slower branch when either
POOL_DBG_COLD_FIRST or POOL_DBG_INTEGRITY are set.
When enabling pools integrity checks, we usually prefer to allocate cold
objects first in order to maximize the time the objects spend in the
cache. In order to make this configurable at runtime, let's introduce
a new debugging flag to control this allocation order. It is currently
preset by the DEBUG_POOL_INTEGRITY build-time setting.
This test used to appear at a single location in create_pool() to
enable a check on the pool name or unconditionally merge similarly
sized pools.
This patch introduces POOL_DBG_DONT_MERGE and conditions the test on
this new runtime flag, that is preset according to the aforementioned
debugging option.
The fail-alloc test used to be enabled/disabled at build time using
the DEBUG_FAIL_ALLOC macro, but it happens that the cost of the test
is quite cheap and that it can be enabled as one of the pool_debugging
options.
This patch thus introduces the first POOL_DBG_FAIL_ALLOC option, whose
default value depends on DEBUG_FAIL_ALLOC. The mem_should_fail() function
is now always built, but it was made static since it's never used outside.
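The resulting check in the allocation path is essentially (fragment for
illustration):
  if (unlikely(pool_debugging & POOL_DBG_FAIL_ALLOC) && mem_should_fail(pool))
      return NULL;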
This read-mostly variable will be used at runtime to enable/disable
certain pool-debugging features and will be set by the command-line
parser. A future option -dP will take a number of debugging features
as arguments to configure this variable's contents.