The function used to calculate the shard number currently requires a
stktable_key on input. Unfortunately, the peers code currently misses
this calculation and does not provide a stktable_key at all; instead it
open-codes all the low-level stick-table work (hence why the calculation
is missing). Thus we need to be able to calculate the shard number for
keys coming from peers as well, but the current API does not make this
possible.
This commit addresses this by inverting the order in which the length and
the shard number are used. Now the low-level function is independent of
stksess and stktable_key: it takes a table, a pointer and a length and does
all the job. The upper function takes care of the type and key to get
its length, and is for use only from stick-table code.
This doesn't change anything except that the low-level one will be usable
from outside (hence why it's exported now).
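For illustration, the resulting split could look like this (a sketch; the
exact names and prototypes are assumptions derived from the description
above, not the literal source):

    /* low-level: takes the table, a pointer to the raw key and its
     * length; exported so that the peers code can use it on the keys
     * it open-codes itself */
    unsigned int stktable_calc_shard_num(const struct stktable *t,
                                         const void *key, size_t len);

    /* upper-level: for use only from stick-table code; derives the
     * key's length from the table's type and calls the function above */
    void stksess_setkey_shard(struct stktable *t, struct stksess *ts,
                              struct stktable_key *key);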
We need to initialize the shard value in __stksess_init() because there is
not necessarily a key to make it happen later, resulting in an uninitialized
shard value appearing in the entry, typically when entries are learned from
peers. This fixes commit 36d156564 ("MINOR: peers: Support for peer shards"),
no backport is needed.
Note however that it is not sufficient to completely fix the peers code:
the shard value remains zero because the setting of the key was open-coded in
the peers code and these parts were not identified when adding support for
shards.
Stick-tables support sharding to multiple peers but there was no way to
know to which shard an entry was going to be sent. Let's display this in
the "show table" output to ease debugging.
Instead of using memcpy() to concatenate the table's name to the key when
allocating an stksess, let's compute a per-table seed once for all at boot
time and use it to calculate the key's hash. This saves two memcpy() calls
and the usage of a chunk, which is always nice in a fast path.
When tested under extreme conditions with an 80-byte long table name, it
showed a 1% performance increase.
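The principle looks like this (a sketch with illustrative field names; the
seed member is an assumption):

    /* boot time, once per table: derive a seed from the table's name */
    t->hash_seed = XXH64(t->id, t->idlen, 0);

    /* fast path: hash the key alone with the table's seed instead of
     * copying "name + key" into a chunk and hashing the result */
    hash = XXH64(key->key, keylen, t->hash_seed);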
Commit 36d156564 ("MINOR: peers: Support for peer shards") reintroduced
a direct dependency on import/xxhash.h which was previously dropped by
commit d5fc8fcb8 ("CLEANUP: Add haproxy/xxhash.h to avoid modifying
import/xxhash.h"). This results in xxhash.h being included twice, which
breaks the build on older compilers which do not like seeing XXH32_hash_t
being defined twice.
Let's just use include/haproxy/xxhash.h instead.
No backport is needed.
Pierre Cheynier reported a rare crash that can affect stick-tables. When
an entry is created, the stick-table's expiration date is updated. But if
at exactly the same time the expiration task runs, it finishes by updating
its expiration timer without any protection, which may collide with the
call to task_queue() in another thread. In this case, it sometimes happens
that the first test for TICK_ETERNITY in task_queue() passes, then the
"expire" field is reset, then the BUG_ON() triggers, like below:
FATAL: bug condition "task->expire == 0" matched at src/task.c:279
call trace(13):
| 0x649d86 [c6 04 25 01 00 00 00 00]: __task_queue+0xc6/0xce
| 0x596bef [eb 90 ba 03 00 00 00 be]: stktable_requeue_exp+0x1ef/0x258
| 0x596c87 [48 83 bb 90 00 00 00 00]: stktable_touch_with_exp+0x27/0x312
| 0x563698 [48 8b 4c 24 18 4c 8b 4c]: stream_process_counters+0x3a8/0x6a2
| 0x569344 [49 8b 87 f8 00 00 00 48]: process_stream+0x3964/0x3b4f
| 0x64a80b [49 89 c7 e9 23 ff ff ff]: run_tasks_from_lists+0x3ab/0x566
| 0x64ad66 [29 44 24 14 8b 7c 24 14]: process_runnable_tasks+0x396/0x71e
| 0x6184b2 [83 3d 47 b3 a6 00 01 0f]: run_poll_loop+0x92/0x4ff
| 0x618acf [48 8b 1d aa 20 7d 00 48]: main+0x1877ef
| 0x7fc7d6ec1e45 [64 48 89 04 25 30 06 00]: libpthread:+0x7e45
| 0x7fc7d6c9e4af [48 89 c7 b8 3c 00 00 00]: libc:clone+0x3f/0x5a
This one is extremely difficult to reproduce in practice, but adding a
printf() in process_table_expire() before assigning the value, while
running with an expire delay of 1ms helps a lot and may trigger the
crash in less than one minute on an 8-thread machine. Interestingly,
depending on the sequencing, this bug could also have made a table fail
to expire if the expire field got reset after the last update but before
the call to task_queue(). One would have to be quite unlucky for the
table never to be touched again after the race, though.
The solution taken by this patch is to take the table's lock when
updating its expire value in stktable_requeue_exp(), enclosing the call
to task_queue(), and to update the task->expire field while still under
the lock in process_table_expire(). Note that thanks to previous changes,
taking the table's lock for the update in stktable_requeue_exp() costs
almost nothing since we now have the guarantee that this is not done more
than 1000 times a second.
Since process_table_expire() sets the timeout after returning from
stktable_trash_expired() which just released the lock, the two functions
were merged so that the task's expire field is updated while still under
the lock. Note that this heavily depends on the two previous patches
below:
CLEANUP: stick-table: remove the unused table->exp_next
OPTIM: stick-table: avoid atomic ops in stktable_requeue_exp() when possible
This is a bit complicated due to the fact that in 2.7 some parts were
made lockless. In 2.6 and older, the second part (the merge of the
two functions) will be sufficient since the task_queue() call was
already performed under the table's lock, and the patches above are
not needed.
This needs to be backported as far as 1.8 scrupulously following
instructions above.
Since the task's time resolution is the millisecond we know there
will not be more than 1000 useful updates per second, so there's no
point in doing a CAS and a task_queue() for each call, better first
check if we're going to change the date. Now we're certain not to
perform such operations more than 1000 times a second for a given
table.
The loop was modified because this improvement will also be used to fix a
bug later.
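In practice this means comparing the new date to the current one first and
returning early when nothing would change; a sketch using HAProxy's tick
API (variable names are illustrative):

    new_exp = tick_add(now_ms, t->expire);
    old_exp = HA_ATOMIC_LOAD(&t->task->expire);

    /* same millisecond or not earlier: nothing to update, so we skip
     * both the CAS and the task_queue() call */
    if (tick_isset(old_exp) && !tick_is_lt(new_exp, old_exp))
        return;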
The ->exp_next field of the stick-table was probably useful in 1.5 but
it currently only carries a copy of what the table's task's expire value
is about to be, and is systematically copied over there immediately after
being assigned. As such it is nothing more than a local variable. Let's
remove it, as it costs atomic operations.
Add "shards" new keyword for "peers" section to configure the number
of peer shards attached to such secions. This impact all the stick-tables
attached to the section.
Add "shard" new "server" parameter to configure the peers which participate to
all the stick-tables contents distribution. Each peer receive the stick-tables updates
only for keys with this shard value as distribution hash. The "shard" value
is stored in ->shard new server struct member.
cfg_parse_peers() which is the function which is called to parse all
the lines of a "peers" section is modified to parse the "shards" parameter
stored in ->nb_shards new peers struct member.
Add srv_parse_shard() new callback into server.c to pare the "shard"
parameter.
Implement stksess_getkey_hash() to compute the distribution hash for a
stick-table key as the 64-bit xxhash of the key concatenated to the
stick-table name. This function is called by stksess_setkey_shard(), itself
called by the already existing function which creates a new stick-table
key (stksess_new()).
Add a new ->idlen stktable struct member to store the stick-table name
length, so that it does not have to be recomputed each time a stick-table
key hash is computed.
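A sketch of the hash computation (simplified; the real code builds the
concatenation in a chunk, and the shard derivation shown last is
illustrative):

    /* distribution hash: 64-bit xxhash of "<table name><key>";
     * ->idlen caches strlen(t->id) */
    memcpy(buf, t->id, t->idlen);
    memcpy(buf + t->idlen, key, keylen);
    hash = XXH64(buf, t->idlen + keylen, 0);

    /* the shard is derived from the hash and the number of shards
     * configured on the peers section */
    ts->shard = hash % peers->nb_shards;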
Compilation is broken with DEBUG_THREAD since the following patch
76642223f0
MEDIUM: stick-table: switch the table lock to rwlock
Fix this by updating a legacy HA_SPIN_INIT() to HA_RWLOCK_INIT().
No backport needed unless the mentioned patch is backported.
We don't need to call stktable_requeue_exp() with the table's lock
held anymore, so let's move it out. It should slightly reduce the
contention on the write lock, though it is now already quite low.
We're generalizing the change performed in previous commit "MEDIUM:
stick-table: requeue the expiration task out of the exclusive lock"
to stktable_requeue_exp() so that it can also be used by callers of
__stktable_store(). At the moment there's still no visible change
since it's still called under the write lock. However, the previous
code in stktable_touch_with_exp() was updated to use this function.
__stktable_store() performs two distinct things: one is to insert a key
and the other one is to requeue the task's expiration date. Since the
latter might be done without a lock, let's first split the function in
two halves. For now this has no impact.
With 48 threads, a heavily loaded table with plenty of trackers and
rules and a short expiration timer of 10ms saturates the CPU at 232k
rps. By carefully using atomic ops we can make sure that t->exp_next
and t->task->expire converge to the earliest next expiration date and
that all of this can be performed under atomic ops without any lock.
That's what this patch is doing in stktable_touch_with_exp(). This is
sufficient to double the performance and reach 470k rps.
It's worth noting that __stktable_store() uses a mix of eb32_insert()
and task_queue(), and that the second part of it could possibly benefit
from this, even though sometimes it's called under a lock that was
already held.
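The convergence relies on a classical CAS loop; a simplified sketch
(HA_ATOMIC_CAS() refreshes old_exp on failure; names are illustrative):

    old_exp = HA_ATOMIC_LOAD(&t->task->expire);
    do {
        if (tick_isset(old_exp) && !tick_is_lt(new_exp, old_exp))
            break; /* an earlier date was already queued */
    } while (!HA_ATOMIC_CAS(&t->task->expire, &old_exp, new_exp));
    task_queue(t->task);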
On a 24-core machine having some "stick-store response" rules, a lot of
time is spent in the write lock in stktable_set_entry(). Let's apply the
same mechanism as for stktable_get_entry(), consisting of looking up
the value under the read lock and upgrading it to a write lock only to
perform modifications. Here we even have the luxury of upgrading the
lock since there are no alloc/free in the path. All this increases the
performance by 40% (from 363k to 510k rps).
We don't need to be protected by the table's lock when touching t->current
if we do it using atomics, and that's great because it allows us to have
a cleaner stksess_new() that doesn't require a lock either, and to avoid
manipulating pools under a lock.
That's another 1% performance gain from 2.07 to 2.10M req/s under 48
threads.
On a 24-core machine doing lots of track-sc, it was found that the lock
in stktable_get_entry() was responsible for 25% of the CPU alone. It's
sad because most of its job is to protect the table during the lookup.
Here we're taking a slightly different approach: the lock is first taken
for reads during the lookup, and only in case of failure do we switch to
a write lock. We don't even perform an upgrade here since an allocation
is needed between the two; it would be wasteful to do it under the lock,
and is generally not a good idea anyway, so better release the read lock and
try again.
Here the performance under 48 threads with 3 trackers on the same table
jumped from 455k to 2.07M, or 4.55x! Note that the same approach should
be possible for stktable_set_entry().
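The resulting pattern looks roughly like this (a sketch following HAProxy's
rwlock conventions, not the literal code; freeing the new entry on conflict
is omitted):

    /* first pass: lookup under the read lock */
    HA_RWLOCK_RDLOCK(STK_TABLE_LOCK, &t->lock);
    ts = __stktable_lookup_key(t, key);
    if (ts)
        HA_ATOMIC_INC(&ts->ref_cnt);
    HA_RWLOCK_RDUNLOCK(STK_TABLE_LOCK, &t->lock);
    if (ts)
        return ts;

    /* miss: allocate outside of any lock, then insert under the write
     * lock; __stktable_store() may return an entry that another thread
     * inserted in between */
    ts = stksess_new(t, key);
    HA_RWLOCK_WRLOCK(STK_TABLE_LOCK, &t->lock);
    ts = __stktable_store(t, ts);
    ts->ref_cnt++;
    HA_RWLOCK_WRUNLOCK(STK_TABLE_LOCK, &t->lock);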
These functions do not modify anything in the table except the refcount
on success. Let's just lock the table for shared accesses and make use of
atomic ops to update the refcount. This brings a nice gain from 425k to
455k under 48 threads (7%), but some contention remains on the exclusive
locks in other parts.
Note that the refcount continues to be updated under the lock because it's
not yet certain whether there are races between it and some of the exclusive
lock on the table. The difference is marginal and we prefer to stay on the
safe side for now.
In __stktable_get_entry() now we're planning for the possibility that the
call to __stktable_store() doesn't add the newly allocated entry and instead
finds a previously inserted one. At the moment this doesn't exist because
the lookup + insert passes are made under the same lock. But it will soon
change.
This function is used to create an entry in the table. But it doesn't
consider the possibility that the entry already exists, because right
now it's only called in situations where it was verified under a lock
that it doesn't exist. Since we'll soon need to break that assumption
we need it to verify that the requested entry was added and to return
a pointer to the one in the tree so that the caller can detect any
possible conflict. At the moment this is not used.
stream_store_counters() calls stksess_kill_if_expired() for each active
counter. And this one takes an exclusive lock on the table before
checking if it has any work to do (hint: it almost never has since it
only wants to delete expired entries). However a lock is still needed for
now to protect the ref_cnt, but we can do it atomically under the read
lock.
Let's change the mechanism: we now check, without holding the lock,
whether the entry is expired. If it is, we take the write lock, expire it,
and decrement the refcount. Otherwise we just decrement the refcount
under a read lock. With this change alone, the config based on 3
trackers without the previous patches saw a 2.6x improvement, but here
it doesn't yet change anything because some heavy contention remains
on the lookup part.
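The new logic looks approximately like this (a sketch, not the literal
code):

    /* expiry check performed without holding any lock */
    if (t->expire != TICK_ETERNITY && tick_is_expired(ts->expire, now_ms)) {
        HA_RWLOCK_WRLOCK(STK_TABLE_LOCK, &t->lock);
        ts->ref_cnt--;                /* exclusive lock: plain op */
        __stksess_kill_if_expired(t, ts);
        HA_RWLOCK_WRUNLOCK(STK_TABLE_LOCK, &t->lock);
    }
    else {
        HA_RWLOCK_RDLOCK(STK_TABLE_LOCK, &t->lock);
        HA_ATOMIC_DEC(&ts->ref_cnt);  /* shared lock: atomic op */
        HA_RWLOCK_RDUNLOCK(STK_TABLE_LOCK, &t->lock);
    }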
As previously mentioned, this function currently holds an exclusive lock
on the table during all the time it takes to check if the entry needs to
be updated and synchronized with peers. The reality is that many setups
do not use peers and that on highly loaded setups, the same entries are
hammered all the time so the key's expiration doesn't change between a
number of consecutive accesses.
With this patch we take a different approach. The function starts
without taking the lock, and will take it only if needed, keeping track
of it. This way we can avoid it most of the time, or even entirely.
Finally if the decrefcnt argument requires that the refcount is
decremented, we either do it using a non-atomic op if the table was
locked (since no other entry may touch it) or via an atomic under the
read lock only.
With this change alone, a 48-thread test with 3 trackers increased
from 193k req/s to 425k req/s, which is a 2.2x factor.
Taking the write lock prior to entering that function is a problem
because this function is full of conditions that most of the time can
lead to eliminating the lock.
This commit first moves the write lock inside the function and passes
the extra argument required to implement stktable_touch_remote() and
stktable_touch_local(). It also renames the function to remove the
underscores since there's no other variant and it's exported under
this name (probably an old rename that was not propagated). The code
was stressed under 48 threads using 3 trackers on the same table. It
already shows a tiny 3% improvement from 187k to 193k rps.
At plenty of places we decrement ts->ref_cnt under the write lock
because it's held. We don't technically need it to be done that way
if there's contention and an atomic could suffice. However until all
places are turned to atomic, we at least need to do that under a
read lock for now, so that we don't mix atomic and non-atomic uses.
Regardless it already brings ~1.5% req rate improvement with 3 trackers
on the same table under 48 threads at 184k->187k rps.
Right now a spinlock is used, but most accesses are for reads, so let's
switch the lock to an rwlock and switch all accesses to exclusive locks
for now. There should be no visible difference at this point.
table_expire() returns the expiration delay for the stick-table entry
associated with an input sample. Its counterpart table_idle() returns the
time the entry has remained idle since the last time it was updated.
Both converters may take a default value as second argument which is returned
when the entry is not present.
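Usage example (illustrative, assuming a table named "foo"):

    # remaining time to live of the entry for this source address,
    # or -1 when the entry does not exist
    http-request set-var(txn.ttl) src,table_expire(foo,-1)

    # time since the entry was last updated, or -1 when absent
    http-request set-var(txn.idle) src,table_idle(foo,-1)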
There's no more reason for keeping the code and definitions in conn_stream,
let's move all that to stconn. The alphabetical ordering of include files
was adjusted.
This file contains all the stream-connector functions that are specific
to application layers of type stream. So let's name it accordingly so
that it's easier to figure what's located there.
The alphabetical ordering of include files was preserved.
These functions return the app-layer associated with an stconn, which
is a check, a stream or a stream's task. They're used a lot to access
channels, flags and for waking up tasks. Let's just name them
appropriately for the stream connector.
We're starting to propagate the stream connector's new name through the
API. Most call places of these functions that retrieve the channel or its
buffer are in applets. The local variable names are not changed in order
to keep the changes small and reviewable. There were ~92 uses of cs_ic(),
~96 of cs_oc() (due to co_get*() being less factorizable than ci_put*),
and ~5 accesses to the buffer itself.
This applies the change so that the applet code stops using ci_putchk()
and friends everywhere possible, using the much safer applet_put*() instead.
The change is mechanical but large. Two or three functions used to take
not the appctx but a cs derived from it, which was a reminiscence of old
times' stream_interface. These were simply changed to directly take
the appctx. No sensitive change was performed, and the old (more complex)
API is still usable when needed (e.g. the channel is already known).
The change touched roughly a hundred of locations, with no less than 124
lines removed.
It's worth noting that the stats applet, the oldest of the series, could
get a serious lifting, as it's still very channel-centric instead of
propagating the appctx along the chain. Given that this code doesn't
change often, there's no emergency to clean it up but it would look
better.
This renames the "struct conn_stream" to "struct stconn" and updates
the descriptions in all comments (and the rare help descriptions) to
"stream connector" or "connector". This touches a lot of files but
the change is minimal. The local variables were not even renamed, so
there's still a lot of "cs" everywhere.
This one is the pointer to the conn_stream which is always in the
endpoint that is always present in the appctx, thus it's not needed.
This patch removes it and replaces it with appctx_cs() instead. A
few occurrences that were using __cs_strm(appctx->owner) were moved
directly to appctx_strm() which does the equivalent.
This state is pointless now, it just serves to initialize the initial
table pointer while this can be done more easily in the parser, so let's
do that and drop that state.
Let's instead define a 4-state enum solely for this use case, and place
it into the command's context. Note that END and FIN were already
aliases, which is why they were merged.
This makes use of the generic command context allocation so that the
appctx doesn't have to declare a specific one anymore. The context is
created during parsing. The code also uses st2 which deserves being
addressed in a separate commit.
Remaining flags and associated functions are moved into the conn-stream
scope. These flags are added on the endpoint and not on the conn-stream
itself. This way it will be possible to get them from the mux or the
applet. The functions to get or set these flags are renamed accordingly
with the "cs_" prefix and updated to manipulate a conn-stream instead of
a stream-interface.
At many places, we now use the new CS functions to get a stream or a channel
from a conn-stream instead of using the stream-interface API. It is the
first step to reduce the scope of the stream-interfaces. The main change
here is about the applet I/O callback functions. Before the refactoring, the
stream-interface was the appctx owner. Thus, it was heavily used. Now, as
far as possible, the conn-stream is used. Of course, many calls to the
stream-interface API remain.
Because appctx is now an endpoint of the conn-stream, there is no reason to
still have the stream-interface as appctx owner. Thus, the conn-stream is
now the appctx owner.
When an ipv6 key is used to filter a CLI command on a stick table
(clear/set/show table ...), the return value of the inet_pton() call must be
checked to be sure the key is valid.
This patch should fix the issue #1163. It should be backported to all
supported versions.
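The usual pattern, as a sketch (the argument index is illustrative):

    struct in6_addr ip6;

    /* inet_pton() returns 1 on success and 0 when the string is not
     * a parsable address, so anything but 1 must be rejected */
    if (inet_pton(AF_INET6, args[2], &ip6) != 1)
        return cli_err(appctx, "Invalid IPv6 key\n");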
This change is required to support TCP/HTTP rules in defaults sections. The
'disabled' bitfield in the proxy structure, used to know if a proxy is
disabled or stopped, is replaced by a generic bitfield named 'flags'.
PR_DISABLED and PR_STOPPED flags are renamed to PR_FL_DISABLED and
PR_FL_STOPPED respectively. In addition, everywhere there is a test to know
if a proxy is disabled or stopped, there is now a bitwise AND operation on
PR_FL_DISABLED and/or PR_FL_STOPPED flags.
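For instance, a test which used to read the dedicated field now checks the
flags (sketch):

    /* before */
    if (px->disabled)
        return;

    /* after */
    if (px->flags & (PR_FL_DISABLED | PR_FL_STOPPED))
        return;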
There have been a large number of issues reported with conn_cur
synchronization because the concept is wrong. In an active-passive
setup, pushing the local connections count from the active node to
the passive one will result in the passive node having a higher
counter than the real number of connections. Due to this, after a
switchover, it will never be able to close enough connections to
go down to zero. The same commonly happens on reloads since the new
process preloads its values from the old process, and if no connection
happens for a key after the value is learned, it is impossible to reset
the previous ones. In active-active setups it's a bit different, as the
number of connections reflects the number on the peer that pushed last.
This patch solves this by marking the "conn_cur" local and preventing
it from being learned from peers. It is still pushed, however, so that
any monitoring system that collects values from the peers will still
see it.
The patch is tiny and trivially backportable. While a change of behavior
in stable branches is never welcome, it remains possible to fix issues
if reports become frequent.
We'll need to improve the API to pass other arguments in the future, so
let's start to adapt better to the current use cases. task_new() is used:
- 18 times as task_new(tid_bit)
- 18 times as task_new(MAX_THREADS_MASK)
- 2 times with a single bit (in a loop)
- 1 in the debug code that uses a mask
This patch provides 3 new functions to achieve this:
- task_new_here() to create a task on the calling thread
- task_new_anywhere() to create a task to be run anywhere
- task_new_on() to create a task to run on a specific thread
The change is trivial and will allow us to later concentrate the
required adaptations to these 3 functions only. It's still possible
to call task_new() if needed but a comment was added to encourage the
use of the new ones instead. The debug code was not changed and still
uses it.
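These most likely boil down to thin wrappers around task_new(); a sketch
(the exact prototypes are assumptions):

    static inline struct task *task_new_here(void)
    {
        return task_new(tid_bit);           /* calling thread only */
    }

    static inline struct task *task_new_anywhere(void)
    {
        return task_new(MAX_THREADS_MASK);  /* runnable anywhere */
    }

    static inline struct task *task_new_on(unsigned int thr)
    {
        return task_new(1UL << thr);        /* one specific thread */
    }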
The sc-set-gpt0() parser was extended in 2.1 by commit 0d7712dff ("MINOR:
stick-table: allow sc-set-gpt0 to set value from an expression") to support
sample expressions in addition to plain integers. However there is a
subtlety there, which is that while the arg position must be incremented
when parsing an integer, it must not be touched when calling an expression
since the expression parser already does it.
The effect is that rules making use of sc-set-gpt0() followed by an
expression always ignore one word after that expression, and will typically
fail to parse if followed by an "if" as the parser will restart after the
"if". With no condition it's different because an empty condition doesn't
result in trying to parse anything.
This patch moves the increment at the right place and adds a few
explanations for a code part that was far from being obvious.
This should be backported to branches having the commit above (2.1+).
gcc 8.3.0 spews a bunch of:
src/stick_table.c: In function 'action_inc_gpc0':
include/haproxy/freq_ctr.h:66:12: warning: 'period' may be used uninitialized in this function [-Wmaybe-uninitialized]
curr_tick += period;
^~
src/stick_table.c:2241:15: note: 'period' was declared here
unsigned int period;
^~~~~~
but they're incorrect because all accesses are guarded by the exact same
condition (ptr1 not being null), it's just the compiler being overzealous
about the uninitialized detection that seems to be stronger than its
ability to follow its own optimizations. This code path is not critical,
let's just pre-initialize the period to zero.
No backport is needed.
This patch makes the use of 'gpc' exclusive of the use of the legacy
types 'gpc0' and 'gpc1' on the same table.
It also makes the use of 'gpc_rate' exclusive of the use of the legacy
types 'gpc0_rate' and 'gpc1_rate' on the same table.
The 'gpc0' and 'gpc1' related fetches and actions will apply
to the first two elements of the 'gpc' array if stored in table.
The 'gpc0_rate' and 'gpc1_rate' related fetches and actions will apply
to the first two elements of the 'gpc_rate' array if stored in table.
This patch adds the definition of two new array data_types:
'gpc': This is an array of 32-bit General Purpose Counters.
'gpc_rate': This is an array of increment rates of General Purpose Counters.
Like for all arrays, they are limited to 100 elements.
This patch also adds actions and fetches to handle
elements of those arrays.
Note: As documented, those new actions and fetches won't
apply to the legacy 'gpc0', 'gpc1', 'gpc0_rate' nor 'gpc1_rate'.
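Example of declaring and using the arrays (illustrative configuration):

    backend st
        stick-table type ip size 1m expire 1h store gpc(3),gpc_rate(3,10s)

    frontend fe
        http-request track-sc0 src table st
        http-request sc-inc-gpc(2,0) if { path_beg /api }
        http-request deny if { sc_gpc_rate(2,0) gt 100 }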
This patch makes the use of 'gpt' exclusive of the use of the legacy
type 'gpt0' on the same table.
It also makes the 'gpt0' related fetches and actions apply
to the first element of the 'gpt' array if stored in the table.
This patch adds the definition of a new array data_type
'gpt'. This is an array of 32-bit General Purpose Tags.
Like for all arrays, it is limited to 100 elements.
This patch also adds actions and fetches to handle
elements of this array.
Note: As documented, those new actions and fetches won't
apply to the legacy 'gpt0' data type.
This patch provides the code to handle arrays of some standard types
(SINT, UINT, ULL and FRQP) in stick tables. This way we can define new
"array" data types.
Note: the number of elements of an array is limited to 100, to ensure
that an encoded update message will continue to fit into a buffer once
the peers protocol handles such data types.
This patch replaces all advanced data type aliases on
stktable_data_cast calls by standard types.
This way we can call the same stktable_data_cast
regardless of the advanced data type used, as long as they
share the same standard type.
It also removes all the advanced data type aliases.
This patch fixes several errors when printing integers of stick-table
entry values and arguments during dumps on the CLI.
This patch should be backported to all versions since the dump of entries
is supported. [wt: roughly 1.5-dev1 hence all stable branches]
A memory allocation failure happening during peers_register_table would
have resulted in a crash. This function is only called during init.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
Since the introduction of bc_src, smp_fetch_src() in tcp_sample.c
inspects the kw argument to choose between the frontend and the backend
source address. However, for the stick tables, the argument is left to
NULL. This causes a segfault.
Fix the crash by explicitly setting the kw argument to "src" to retrieve
the source address of the frontend side.
This bug was introduced by the following commit:
7d081f02a4
MINOR: tcp_samples: Add samples to get src/dst info of the backend connection
It does not need a backport as it is integrated in the current 2.4-dev
branch.
To reproduce the crash, I used the following config:
frontend fe
bind :20080
http-request track-sc0 src table foo
http-request reject if { src_conn_rate(foo) gt 10 }
use_backend h1
backend foo
stick-table type ip size 200k expire 30s store conn_rate(60s)
backend h1
server nginx 127.0.0.1:30080 check
This should fix the github issue #1247.
This function has no business being inlined in stick_table.h since it's
only used at boot time by the config parser. In addition it causes an
undesired dependency on tools.h because it uses parse_time_err(). Let's
move it to stick_table.c.
Define a new keyword flag KWF_MATCH_PREFIX. This is used to replace the
match_pfx field of the action struct.
This has the benefit of making action declarations more explicit, and now
it is possible to quickly implement experimental actions.
There were 102 CLI commands whose help messages were zig-zagging all along
the dump, making them unreadable. This patch realigns all these messages so
that each command now uses up to 40 characters before the delimiting colon. About a
third of the commands did not correctly list their arguments which were
added after the first version, so they were all updated. Some abuses of
the term "id" were fixed to use a more explanatory term. The
"set ssl ocsp-response" command was not listed because it lacked a help
message, this was fixed as well. The deprecated enable/disable commands
for agent/health/server were prominently written as deprecated. Whenever
possible, clearer explanations were provided.
Both structures are identical except the name of the field starting
the period and its description. Let's call them all freq_ctr and the
period's start "curr_tick" which is generic.
This is only a temporary change and fields are expected to remain
the same with no code change (verified).
It's been too short for quite a while now and is now full. It's still
time to extend it to 32-bits since we have room for this without
wasting any space, so we now gained 16 new bits for future flags.
The values were not reassigned just in case there would be a few
hidden u16 or short somewhere in which these flags are placed (as
it used to be the case with stream->pending_events).
The patch is tagged MEDIUM because this required to update the task's
process() prototype to use an int instead of a short, that's quite a
bunch of places.
Historically we've been counting lots of client-triggered events in stick
tables to help detect misbehaving ones, but we've been missing the same on
the server side, and there's been repeated requests for being able to count
the server errors per URL in order to precisely monitor the quality of
service or even to avoid routing requests to certain dead services, which
is also called "circuit breaking" nowadays.
This commit introduces http_fail_cnt and http_fail_rate, which work like
http_err_cnt and http_err_rate in that they respectively count events and
their frequency, but they only consider server-side issues such as network
errors, unparsable and truncated responses, and 5xx status codes other
than 501 and 505 (since these ones are usually triggered by the client).
Note that retryable errors are purposely not accounted for, so that only
what the client really sees is considered.
With this it becomes very simple to put some protective measures in place
to perform a redirect or return an excuse page when the error rate goes
beyond a certain threshold for a given URL, and give more chances to the
server to recover from this condition. Typically it could look like this
to bypass a URL causing more than 10 requests per second:
stick-table type string len 80 size 4k expire 1m store http_fail_rate(1m)
http-request track-sc0 base # track host+path, ignore query string
http-request return status 503 content-type text/html \
lf-file excuse.html if { sc0_http_fail_rate gt 10 }
A more advanced mechanism using gpt0 could even implement high/low rates
to disable/enable the service.
Reg-test converteers_ref_cnt_never_dec.vtc was updated to test it.
The sample fetch functions must always be called with a valid argument
list. When called by hand, if there is no argument to pass, empty_arg_list must
be used.
In the stick-table code, there are some calls to smp_fetch_src() with NULL as
argument list. It is changed to use empty_arg_list instead. It is not really a
bug because smp_fetch_src() does not use the argument list. But it is an API
bug.
This patch may be backported to all stable branches as a cleanup.
This handler can take quite some time as it deletes a large number of
entries under a lock, let's export it so that it's immediately visible
in "show profiling".
This allows using the address of the server rather than the name of the
server for keeping track of servers in a backend for stickiness.
The peers code was also extended to support feeding the dictionary using
this key instead of the name.
Fixes #814
An interesting case was reported with threads and moderately sized
stick-tables. Sometimes the watchdog would trigger during the purge.
It turns out that the stick tables were sized in the 10s of K entries
which is the order of magnitude of the possible number of connections,
and that threads were used over distinct NUMA nodes. While at first
glance nothing looks problematic there, actually there is a risk that
a thread trying to purge the table faces 100% of entries still in use
by a connection with (ts->ref_cnt > 0), and ends up scanning the whole
table, while other threads on the other NUMA node are causing the
cache lines to bounce back and forth and considerably slow down its
progress to the point of possibly spending hundreds of milliseconds
there, multiplied by the number of queued threads all failing on the
same point.
Interestingly, smaller tables would not trigger it because the scan
would be faster, and larger ones would not trigger it because plenty
of entries would be idle!
The most efficient solution is to increase the table size to be large
enough for this never to happen, but this is not reliable. We could
have a parallel list of idle entries but that would significantly
increase the storage and processing cost only to improve a few rare
corner cases.
This patch takes a more pragmatic approach, it considers that it will
not visit more than twice the number of nodes to be deleted, which
means that it accepts to fail up to 50% of the time. Given that very
small batches are programmed each time (1/256 of the table size), this
means the operation will finish quickly (128 times faster than now),
and will reduce the inter-thread contention. If this needs to be
reconsidered, it will probably mean that the batch size needs to be
fixed differently.
This needs to be backported to stable releases which extensively use
threads, typically 2.0.
Kudos to Nenad Merdanovic for figuring the root cause triggering this!
The remaining proxy states were only used to distinguish an enabled
proxy from a disabled one. Due to the initialization order, both
PR_STNEW and PR_STREADY were equivalent after startup, and they
would only differ from PR_STSTOPPED when the proxy is disabled or
shutdown (which is effectively another way to disable it).
Now we just have a "disabled" field which allows to distinguish them.
It's becoming obvious that start_proxies() is only used to print a
greeting message now, which we'd rather get rid of. Probably
zombify_proxy() and stop_proxy() should be merged once their
differences move to the right place.
This is totally ugly, smp_fetch_src() is exported only so that stick_table.c
can (ab)use it in the {sc,src}_* sample fetch functions. It could be argued
that the sample could have been reconstructed there in place, but we don't
even need to duplicate the code. We'd rather simply retrieve the "src"
fetch's function from where it's used at init time and be done with it.
This patch fixes all the leftovers from the include cleanup campaign. There
were not that many (~400 entries in ~150 files) but it was definitely worth
doing it as it revealed a few duplicates.
Most of the files dealing with error reports have to include log.h in order
to access ha_alert(), ha_warning() etc. But while these functions don't
depend on anything, log.h depends on a lot of stuff because it deals with
log-formats and samples. As a result it's impossible not to embark long
dependencies when using ha_warning() or qfprintf().
This patch moves these low-level functions to errors.h, which already
defines the error codes used at the same places. About half of the users
of log.h could be adjusted, sometimes revealing other issues such as
missing tools.h. Interestingly the total preprocessed size shrunk by
4%.
There's no point splitting the file in two since only cfgparse uses the
types defined there. A few call places were updated and cleaned up. All
of them were in C files which register keywords.
There is nothing left in common/ now so this directory must not be used
anymore.
This one was not easy because it was embarking many includes with it,
which other files would automatically find. At least global.h, arg.h
and tools.h were identified. 93 total locations were identified, 8
additional includes had to be added.
In the rare files where it was possible to finalize the sorting of
includes by adjusting only one or two extra lines, it was done. But
all files would need to be rechecked and cleaned up now.
It was the last set of files in types/ and proto/ and these directories
must not be reused anymore.
This one is particularly difficult to split because it provides all the
functions used to manipulate a proxy state and to retrieve names or IDs
for error reporting, and as such, it was included in 73 files (down to
68 after cleanup). It would deserve a small cleanup though the cut points
are not obvious at the moment given the number of structs involved in
the struct proxy itself.
The current state of the logging is a real mess. The main problem is
that almost all files include log.h just in order to have access to
the alert/warning functions like ha_alert() etc, and don't care about
logs. But log.h also deals with real logging as well as log-format and
depends on stream.h and various other things. As such it forces a few
heavy files like stream.h to be loaded early and to hide missing
dependencies depending where it's loaded. Among the missing ones is
syslog.h which was often automatically included resulting in no less
than 3 users missing it.
Among 76 users, only 5 could be removed, and probably 70 don't need the
full set of dependencies.
A good approach would consist in splitting that file in 3 parts:
- one for error output ("errors" ?).
- one for log_format processing
- and one for actual logging.
It was moved without any change, however many callers didn't need it at
all. This was a consequence of the split of proto_http.c into several
parts that resulted in many locations to still reference it.
Almost no change except moving the cli_kw struct definition after the
defines. Almost all users had both types&proto included, which is not
surprising since this code is old and it used to be the norm a decade
ago. These places were cleaned.
Just some minor reordering, and the usual cleanup of call places for
those which didn't need it. We don't include the whole tools.h into
stats-t anymore but just tools-t.h.
The stktable_types[] array declaration was moved to the main file as
it had nothing to do in the types. A few declarations were reordered
in the types file so that defines were before the structs. Thread-t
was added since there are a few __decl_thread(). The loss of peers.h
revealed that cfgparse-listen needed it.
The cfg_peers external declaration was moved to the main file instead
of the type one. A few types were still missing from the proto, causing
warnings in the functions prototypes (proxy, stick_table).
The TASK_IS_TASKLET() macro was moved to the proto file instead of the
type one. The proto part was a bit reordered to remove a number of ugly
forward declarations of static inline functions. About ten C and H
files had their dependency dropped since they were not using anything
from task.h.
global.h was one of the messiest files, it has accumulated tons of
implicit dependencies and declares many globals that make almost all
other file include it. It managed to silence a dependency loop between
server.h and proxy.h by being well placed to pre-define the required
structs, forcing struct proxy and struct server to be forward-declared
in a significant number of files.
It was split in two: one which is the global struct definition and the
few macros and flags, and the rest containing the functions prototypes.
The UNIX_MAX_PATH definition was moved to compat.h.
This one is particularly tricky to move because everyone uses it
and it depends on a lot of other types. For example it cannot include
arg-t.h and must absolutely only rely on forward declarations to avoid
dependency loops between vars -> sample_data -> arg. In order to address
this one, it would be nice to split the sample_data part out of sample.h.
And also rename standard.c to tools.c. The original split between
tools.h and standard.h dates from version 1.3-dev and was mostly an
accident. This patch moves the files back to what they were expected
to be, and takes care of not changing anything else. However this
time tools.h was split between functions and types, because it contains
a small number of commonly used macros and structures (e.g. name_desc)
which in turn cause the massive list of includes of tools.h to conflict
with the callers.
They remain the ugliest files of the whole project and definitely need
to be cleaned and split apart. A few types are defined there only for
functions provided there, and some parts are even OS-specific and should
move somewhere else, such as the symbol resolution code.
Now the file is ready to be stored into its final destination. A few
minor reorderings were performed to keep the file properly organized,
making the various sections more visible (cache & lockless).
In addition and to stay consistent, memory.c was renamed to pool.c.
This one is included almost everywhere and used to rely on a few other
.h that are not needed (unistd, stdlib, standard.h). It could possibly
make sense to split it into multiple parts to distinguish operations
performed on timers and the internal time accounting, but at this point
it does not appear much important.
Half of the users of this include only need the type definitions and
not the manipulation macros nor the inline functions. Moving the various
types into mini-clist-t.h makes the files cleaner. The other one had all
its includes grouped at the top. A few files continued to reference it
without using it and were cleaned.
In addition it was about time that we'd rename that file, it's not
"mini" anymore and contains a bit more than just circular lists.
All files that were including one of the following include files have
been updated to only include haproxy/api.h or haproxy/api-t.h once instead:
- common/config.h
- common/compat.h
- common/compiler.h
- common/defaults.h
- common/initcall.h
- common/tools.h
The choice is simple: if the file only requires type definitions, it includes
api-t.h, otherwise it includes the full api.h.
In addition, in these files, explicit includes for inttypes.h and limits.h
were dropped since these are now covered by api.h and api-t.h.
No other change was performed, given that this patch is large and
affects 201 files. At least one (tools.h) was already freestanding and
didn't get the new one added.
This is where other imported components are located. All files which
used to directly include ebtree were touched to update their include
path so that "import/" is now prefixed before the ebtree-related files.
The ebtree.h file was slightly adjusted to read compiler.h from the
common/ subdirectory (this is the only change).
A build issue was encountered when eb32sctree.h is loaded before
eb32tree.h because only the former checks for the latter before
defining type u32. This was addressed by adding the reverse ifdef
in eb32tree.h.
No further cleanup was done yet in order to keep changes minimal.