First, we don't want to measure wakeup times if the call date had not
been set before profiling was enabled at run time. And second, we must
only collect the value before clearing the TASK_IN_LIST bit, otherwise
another wakeup might happen on another thread and replace the call date
we're about to use, which would artificially lower the measured wakeup
times.
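For illustration, the ordering constraint can be sketched as follows,
with C11 atomics standing in for HAProxy's internal macros (the struct
layout and clock helper are simplified stand-ins, not the real ones):

#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

#define TASK_IN_LIST 0x01

struct tasklet_sketch {
    _Atomic unsigned int state;
    uint64_t wake_date;  /* written by the waker before TASK_IN_LIST is set */
};

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

static void run_one(struct tasklet_sketch *t)
{
    uint64_t lat = 0;

    /* collect the call date first: while TASK_IN_LIST is still set, no
     * other thread may queue the tasklet again, so the date is stable.
     * A zero date means profiling was off when the wakeup happened.
     */
    if (t->wake_date)
        lat = now_ns() - t->wake_date;

    /* only now clear the bit; a concurrent wakeup may store a fresh
     * date for the next run from this point on.
     */
    atomic_fetch_and(&t->state, ~TASK_IN_LIST);

    (void)lat; /* ... account into the lat_tot/lat_avg counters ... */
}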
It is extremely useful to be able to observe the wakeup latency of some
important I/O operations, so let's accept inflating the tasklet struct
by 8 extra bytes when DEBUG_TASK is set. With just this we have enough
to get live reports like this:
$ socat - /tmp/sock1 <<< "show profiling"
Per-task CPU profiling : on # set profiling tasks {on|auto|off}
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
si_cs_io_cb 8099492 4.833s 596.0ns 8.974m 66.48us
h1_io_cb 7460365 11.55s 1.548us 2.477m 19.92us
process_stream 7383828 22.79s 3.086us 18.39m 149.5us
h1_timeout_task 4157 - - 348.4ms 83.81us
srv_cleanup_toremove_connections 751 39.70ms 52.86us 10.54ms 14.04us
srv_cleanup_idle_connections 21 1.405ms 66.89us 30.82us 1.467us
task_run_applet 16 1.058ms 66.13us 446.2us 27.89us
accept_queue_process 7 34.53us 4.933us 333.1us 47.58us
Instead of decrementing grq_total once per task picked from the global
run queue, let's do it once after the loop, like we do for other
counters. This simplifies the code everywhere. It is not expected to
bring noticeable improvements however, since global tasks tend to be
less common nowadays.
Now we don't need to decrement rq_total when we pick a task in the tree
to immediately increment it again after installing it into the local
list. Instead, we simply add to the local queue count the number of
globally picked tasks. Avoiding this shows ~0.5% performance gains at
1Mreq/s (2M task switches/s).
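A minimal sketch of the batching idea behind these two changes, with
C11 atomics standing in for HAProxy's macros (grq_total and the task
list handling are simplified placeholders):

#include <stdatomic.h>

static _Atomic int grq_total;   /* shared: global run queue count */

static void pick_from_global(int budget, int *local_count)
{
    int picked = 0;

    while (budget-- > 0 /* && global tree not empty */) {
        /* ... detach one task and append it to the local list ... */
        picked++;
    }

    if (picked) {
        /* one shared write instead of one per task */
        atomic_fetch_sub(&grq_total, picked);
        /* and credit the thread-private local queue count once */
        *local_count += picked;
    }
}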
As indicated in previous commit, this function tries to guess which tree
the task is in to figure out which counters to update, while we already
have that info in the caller. Let's just move the relevant parts into
the caller.
In process_runnable_tasks() we're still calling __task_unlink_rq() to
pick a task, and this function tries to guess where to pick the task
from and which counter to update while the caller's context already
has everything. Worse, the number of local tasks is decremented then
recredited, doubling the operations. In order to avoid this we first
need to keep separate counters for local and global tasks that were
picked. This is what this patch does.
The function htx_reserve_max_data() should be used to get an HTX DATA block
with the max possible size. A current block may be extended or a new one
created, depending on the HTX message state. But the idea is to let the
caller copy a bunch of data without requesting many new blocks. It is
the caller's responsibility to resize the block at the end, to set the
final block size.
This function will be used to parse messages with small chunks. Indeed, we
can have more than 2700 1-byte chunks in 16kB of input data, so it is easy
to understand how this function may help improve the parsing of chunked
messages.
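The usage pattern could look like the sketch below. The exact
signatures are not shown in this message, so the reserve/resize/data
helpers declared here are assumptions used for illustration only:

#include <stddef.h>
#include <string.h>

struct htx;
struct htx_blk;

/* assumed prototypes, for illustration */
struct htx_blk *htx_reserve_max_data(struct htx *htx);
char *htx_blk_data(struct htx_blk *blk);
void htx_set_blk_value_len(struct htx_blk *blk, size_t len);

static size_t copy_small_chunks(struct htx *htx,
                                const char *payload, size_t len)
{
    struct htx_blk *blk;
    size_t copied = 0;

    /* one maximal DATA block instead of one block per 1-byte chunk */
    blk = htx_reserve_max_data(htx);
    if (!blk)
        return 0;

    /* ... loop over the parsed chunks, appending each payload into the
     * same reserved block; a single copy stands in for that loop here ...
     */
    memcpy(htx_blk_data(blk), payload, len);
    copied = len;

    /* the caller's responsibility: shrink the block to its final size */
    htx_set_blk_value_len(blk, copied);
    return copied;
}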
If the DNS resolution failed for a server, its IP address must be
removed. Otherwise, the server is stopped but keeps its IP. This may be
confusing when the server states are retrieved on the CLI, and it may lead
to undefined behavior if HAProxy is configured to load its server states
from a file.
This patch should be backported as far as 2.0.
When a SRV record expires, the IP/port assigned to the associated server are
now removed. Otherwise, the server is stopped but keeps its IP/port while
the server hostname is removed. It is confusing when the server states are
retrieved on the CLI, and it may be a problem if they are saved in a
server-state file, because the reload may then fail due to this
inconsistency.
Here is an example:
* Declare a server template in a backend, using the resolver <dns>
server-template test 2 _http._tcp.example.com resolvers dns check
* 2 SRV records are announced with the corresponding additional
records. Thus, 2 servers are filled. Here is the "show servers state"
output:
2 frt 1 test1 192.168.1.1 2 64 0 1 2 15 3 4 6 0 0 0 http1.example.com 8001 _http._tcp.example.com 0 0 - - 0
2 frt 2 test2 192.168.1.2 2 64 0 1 1 15 3 4 6 0 0 0 http2.example.com 8002 _http._tcp.example.com 0 0 - - 0
* Then, one additional record is removed (or a SRV record is removed, the
result is the same). Here is the new "show servers state" output:
2 frt 1 test1 192.168.1.1 2 64 0 1 38 15 3 4 6 0 0 0 http1.example.com 8001 _http._tcp.example.com 0 0 - - 0
2 frt 2 test2 192.168.1.2 0 96 0 1 19 15 3 0 14 0 0 0 - 8002 _http._tcp.example.com 0 0 - - 0
On reload, if a server-state file is used, this leads to undefined behaviors
depending on the configuration.
This patch should be backported as far as 2.0.
When a SRV record was created, it used to register the regular server name
resolution callbacks. That said, SRV records and regular server name
resolution don't work the same way, especially regarding error management.
This patch introduces a new callback to manage DNS errors related to
the SRV queries.
This fixes GitHub issue #50.
Backport status: 2.3, 2.2, 2.1, 2.0
If no additional record is associated to a SRV record, its TTL must not be
renewed. Otherwise the entry never expires. Thus once announced a first
time, the entry remains blocked on the same IP/port unless a new
announcement replaces the old one.
Now, the TTL is updated if a SRV record is received while a matching
existing one is found with an additional record, or when a new additional
record is assigned to an existing SRV record.
This patch should be backported as far as 2.2.
At the end of resolv_validate_dns_response(), if a received additional
record is not assigned to an existing server record, it is released. But the
condition to do so is buggy. If "answer_record" (the received AR) is not
assigned, "tmp_record" is not a valid record object. It is just a dummy
record "representing" the head of the record list.
Now, the condition is far cleaner. This patch must be backported as far as
2.2.
This function has become large with the multi-queue scheduler. We need
to keep the fast path and the debugging parts inlined, but the rest now
moves to task.c just as was done for task_wakeup(). This has reduced
the code size by 6kB due to less inlining of large parts that are always
context-dependent, and as a side effect, has increased the overall
performance by 1%.
The nb_tasks counter was still global, incremented and decremented on
each task_new()/task_free(), and read in process_runnable_tasks().
But it's only used for stats reporting, so doing this so often is
pointless and expensive. Let's move it to the task_per_thread struct and
have the stats sum it when needed.
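Schematically, with simplified stand-in types (not the real
task_per_thread layout):

#define MAX_THREADS 64

struct task_per_thread_sketch {
    unsigned int nb_tasks;   /* only written by its owning thread */
};

static struct task_per_thread_sketch task_per_thread[MAX_THREADS];

/* stats path only: one pass over the threads instead of a shared
 * counter hit on every task_new()/task_free()
 */
static unsigned int total_tasks(int nbthread)
{
    unsigned int sum = 0;
    int i;

    for (i = 0; i < nbthread; i++)
        sum += task_per_thread[i].nb_tasks;
    return sum;
}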
The test in __task_wakeup() to figure out whether remote threads are
sleeping doesn't make sense outside of the global runqueue test, since
there are only two possibilities here: local runqueue or global runqueue,
hence a sleeping thread is necessarily another one, and this can only
happen when sending to the global run queue. Let's move the test inside
the "if" block.
Historically we used to call __task_wakeup() with a known tree root, but
this is no longer the case, and the code has remained needlessly
complicated, with the root computed in task_wakeup() and passed as an
argument to __task_wakeup(), which compares it again.
Let's get rid of this and just move the detection code there. This
eliminates some ifdefs and makes it possible to simplify the test
conditions quite a bit.
This one is systematically misunderstood due to its unclear name. It
is in fact the number of tasks in the local tasklet list. Let's call
it "tasks_in_list" to remove some of the confusion.
This one is exclusively used as a boolean nowadays and is non-zero only
when the thread-local run queue is not empty. Better check the root tree's
pointer and avoid updating this counter all the time.
This counter is solely used for reporting in the stats and is the hottest
thread contention point to date. Moving it to the scheduler and having a
separate one for the global run queue dramatically improves the performance,
showing a 12% boost on the request rate on 16 threads!
In addition, the thread debugging output which used to rely on rqueue_size
was not totally accurate as it would only report task counts. Now we can
return the exact thread's run queue length.
It is also interesting to note that there are still a few other task/tasklet
counters in the scheduler that are not efficiently updated because some cover
a single area and others cover multiple areas. It looks like having a distinct
counter for each of the following entries would help and would keep the code
a bit cleaner:
- global run queue (tree)
- per-thread run queue (tree)
- per-thread shared tasklets list
- per-thread local lists
Maybe even splitting the shared tasklets lists between pure tasklets and
tasks, instead of mixing both in the same list, would simplify the code
because there remain a number of places where several counters have to
be updated.
dns_session_release() only uses its struct dns_stream_server to access
the lock, so a warning is emitted when threads are disabled. Let's mark
it __maybe_unused.
The lock was still used exclusively to deal with the concurrency between
the "show sess" release handler and a stream_new() or stream_free() on
another thread. All other accesses made by "show sess" are already done
under thread isolation. The release handler only needs to unlink its
node when stopping in the middle of a dump (error, timeout, etc). Let's
just isolate the thread to deal with this case so that it's compatible
with the dump conditions, and remove all remaining locking on the streams.
This effectively kills the streams lock. The measured gain here is around
1.6% with 4 threads (374krps -> 380k).
The global streams list is exclusively used for "show sess", to look up
a stream to shut down, and for the hard-stop. Having all of them in a
single list is extremely expensive in terms of locking when using threads,
with performance losses as high as 7% having been observed just due to
this.
This patch makes the list per-thread, since there's no need to have a
global one in this situation. All call places just iterate over all
threads. The most "invasive" change was in "show sess", where the end
of a list needs to jump to the beginning of the next thread's list until
the last thread is seen. For now the lock was maintained to keep the
code auditable but a next commit should get rid of it.
The observed performance gain here with only 4 threads is already 7%
(350krps -> 374krps).
Instead of placing the current stream at the end of the stream list when
issuing a "show sess" on the CLI as was done in 2.2 with commit c6e7a1b8e
("MINOR: cli: make "show sess" stop at the last known session"), now we
compare the listed stream's epoch with the dumping stream's and stop on
more recent ones.
This way we're certain to always only dump known streams at the moment we
issue the dump command without having to modify the list. In theory we
could miss some streams if more than 2^31 "show sess" requests are issued
while an old stream remains present, but that's 68 years at 1 "show sess"
per second and it's unlikely we'll keep a process, let alone a stream, that
long.
It could be verified that the count of dumped streams still matches the
one before this change.
The "show sess" CLI command currently lists all streams and needs to
stop at a given position to avoid dumping forever. Since 2.2 with
commit c6e7a1b8e ("MINOR: cli: make "show sess" stop at the last known
session"), a hack consists in unlinking the stream running the applet
and linking it again at the current end of the list, in order to serve
as a delimiter. But this forces the stream list to be global, which
affects scalability.
This patch introduces an epoch, which is a global 32-bit counter that
is incremented by the "show sess" command, and which is copied by newly
created streams. This way any stream can know whether any other one is
newer or older than itself.
For now it's only stored and not exploited.
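The comparison only needs a signed difference to stay correct across the
32-bit wrap, along the lines of this sketch (names are illustrative):

#include <stdint.h>

/* returns non-zero if <s_epoch> (a stream's epoch) was assigned after
 * <dump_epoch> (the dumping stream's); correct as long as both are less
 * than 2^31 increments apart, hence the 68-years remark above.
 */
static inline int stream_is_newer(uint32_t s_epoch, uint32_t dump_epoch)
{
    return (int32_t)(s_epoch - dump_epoch) > 0;
}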
The hard-stop event didn't wake threads up. In the past it wasn't an issue
as the poll timeout was limited to 1 second, but since commit 4f59d3861
("MINOR: time: increase the minimum wakeup interval to 60s") it has become
a problem because old processes can remain live for up to one minute after
the hard-stop-after delay. Let's just wake them up.
This may be backported to older releases, though before 2.4 the extra
delay was only one second.
There's no locking around the lookup of a stream nor its shutdown
when issuing "shutdown sessions" over the CLI so the risk of crashing
the process is particularly high.
Let's use a thread_isolate() there, which is suitable for this task, as
there are not that many alternatives.
This must be backported to 1.8.
When setting hard-stop-after, hard_stop() is called at the end to kill
last pending streams. Unfortunately there's no locking there while
walking over the streams list nor when shutting them down, so it's
very likely that some old processes have been crashing or gone wild
due to this. Let's use a thread_isolate() call for this as we don't
have much other choice (and it happens once in the process' life,
that's OK).
This must be backported to 1.8.
This patch adds a lock to functions vars_get_by_name() and
vars_get_by_desc() to protect accesses to the list of variables.
After the variable is fetched, its sample data is duplicated by using
smp_dup() because the variable may be modified by another thread.
This should be backported to all versions supporting vars along with
"BUG/MINOR: sample: secure convs that accept base64 string and var name
as args" which this patch depends on.
This patch adds a few improvements in order to secure the use of
converters that accept base64 string and variable name as arguments.
The first change is within related function sample_conv_var2smp_str()
which now flags the sample as SMP_F_CONST if the argument is of type
ARGT_STR. This makes the sample more safe for later use.
A new function sample_check_arg_base64() is added. It checks an argument
and fills it with a variable type if the argument string contains a
valid variable name. If that fails, it tries to perform a base64 decode
operation on a non-empty string, and fills the argument with the decoded
content which can be used later, without any additional base64dec()
function calls during runtime. This means that haproxy configuration
check may fail if variable lookup fails and an invalid base64 encoded
string is specified as an argument for such converters.
Both converters, "aes_gcm_dec" and "hmac", now use alloc_trash_chunk()
in order to allocate additional buffers for various conversions, and
avoid using pre-allocated trash chunks directly (usually returned
by get_trash_chunk()). The function sample_check_arg_base64() is used
for both converters in order to check their arguments specified within
the haproxy configuration.
This patch should be backported as far as 2.0. However, it is important
to keep in mind a few things. The "hmac" converter is only available
starting with 2.2. In versions prior to 2.2, the "aes_gcm_dec" converter
and sample_conv_var2smp_str() are implemented in src/ssl_sock.c. Thus
the patch will have to be adapted on these versions.
Note that this patch is required for a subsequent, more important fix.
A potential null pointer dereference was reported with an old gcc
version (6.5):
src/ssl_ckch.c: In function 'cli_parse_set_cert':
src/ssl_ckch.c:838:7: error: potential null pointer dereference [-Werror=null-dereference]
if (!ssl_sock_copy_cert_key_and_chain(src->ckch, dst->ckch))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/ssl_ckch.c:838:7: error: potential null pointer dereference [-Werror=null-dereference]
src/ssl_ckch.c: In function 'ckchs_dup':
src/ssl_ckch.c:838:7: error: potential null pointer dereference [-Werror=null-dereference]
if (!ssl_sock_copy_cert_key_and_chain(src->ckch, dst->ckch))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/ssl_ckch.c:838:7: error: potential null pointer dereference [-Werror=null-dereference]
cc1: all warnings being treated as errors
This case does not actually happen but it's better to fix the ckch API
with a NULL check.
Could be backported as far as 2.1.
RAND_keep_random_devices_open() is an OpenSSL-specific function, not
implemented in LibreSSL or BoringSSL. Let us define the guard
HAVE_SSL_RAND_KEEP_RANDOM_DEVICES_OPEN in include/haproxy/openssl-compat.h.
That guard no longer depends on HA_OPENSSL_VERSION.
The runqueue_ticks counts the number of task wakeups and is used to
position new tasks in the run queue, but since we've had per-thread
run queues, the values there are not very relevant anymore and the
nice value doesn't apply well if some threads are more loaded than
others. In addition, letting all threads compete over a shared counter
is not smart as this may cause some excessive contention.
Let's move this index close to the run queues themselves, i.e. one per
thread and a global one. In addition to improving fairness, this has
increased global performance by 2% on 16 threads thanks to the lower
contention on rqueue_ticks.
Fairness issues were not observed, but if any were to be, this patch
could be backported as far as 2.0 to address them.
Historically this function would try to wake the most accurate number of
process_stream() waiters. But since the introduction of filters which could
also require buffers (e.g. for compression), things started not to be as
accurate anymore. Nowadays muxes and transport layers also use buffers, so
the runqueue size has nothing to do anymore with the number of supposed
users to come.
In addition to this, the threshold was compared to the number of free
buffers, calculated as allocated minus used, but this didn't work anymore
with local pools since these counts are not updated upon alloc/free!
Let's clean this up and pass the number of released buffers instead, and
consider that each waiter successfully called counts as one buffer. This
is not rocket science and will not suddenly fix everything, but at least
it cannot be as wrong as it is today.
This could have been marked as a bug given that the current situation is
totally broken regarding this, but this probably doesn't completely fix
it, it only goes in a better direction. It is possible however that it
makes sense in the future to backport this as part of a larger series if
the situation significantly improves.
The buffer wait queue used to be global historically, but this does not
make any sense anymore given that the most common use case is to have
thread-local pools. Thus there's no point in waking up waiters on other
threads after releasing an entry, as they won't benefit from it.
Let's move the queue head to the thread_info structure and use
ti->buffer_wq from now on.
When a server-state line is parsed, a test is performed to make sure there
are enough fields, but not too many. However, the test is buggy. The bug
was introduced in commit ea2cdf55e ("MEDIUM: server: Don't introduce a new
server-state file version").
No backport needed.
This reverts commit 63e6cba12 ("MEDIUM: server: add server-states version
2"), but keeps all recent features added to the server-state file. Instead
of adding a 2nd version of the server-state file format to handle the 5 new
fields added during the 2.4 development, these fields are considered as
optional during the parsing. So it is possible to load a server-state file
from HAProxy 2.3. However, from 2.4, these new fields are always dumped in
the server-state file. But it should not be a problem to load it on 2.3.
This patch seems a bit huge, but the diff ignoring whitespace is much
smaller.
The version 2 of the server-state file format is reserved for a real
refactoring to address all issues of the current format.
If a line of a server-state file has too many fields, the last one is not
cut at the first following space, as all other fields are; it contains the
whole end of the line, which is not the expected behavior. So, now, we cut
it at the next following space, if any. The parsing loop was slightly
rewritten.
Note that for now there is no error reported if the line is too long.
This patch may be backported at least as far as 2.1. On 2.0 and prior the
code is not the same. The line parsing is inlined in apply_server_state()
function.
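The corrected tokenization can be pictured with this sketch, where the
trailing excess is cut at the next blank once the maximum number of
fields is reached (illustrative code, not the actual parser):

#include <ctype.h>

static int split_fields(char *line, char **params, int max)
{
    int n = 0;

    while (*line && n < max) {
        while (isspace((unsigned char)*line))
            *line++ = 0;
        if (!*line)
            break;
        params[n++] = line;
        while (*line && !isspace((unsigned char)*line))
            line++;
    }
    /* cut the last accepted field at the next space instead of leaving
     * the whole end of the line attached to it
     */
    if (*line)
        *line = 0;
    return n;
}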
The same static arrays of parameters are used to parse all server-state
lines. Thus it is important to reinitialize them to be sure not to get
params from the previous line, or possibly even from the previously loaded
file.
This patch should be backported to all stable branches. However, in 2.0 and
prior, the parsing of server-state lines is inlined in the
apply_server_state() function. Thus the patch will have to be adapted for
these versions.
When an HTTP return action is triggered, HAProxy is responsible for
returning the response, based on the configured status code. On the request
side, there is no problem because there is no server response to replace.
But on the response side, we must take care to override the server response
status code, if any, to be sure to use the right status code to get the
http reply message.
In short, we must always set the configured status code of the HTTP return
action before returning the http reply, to be sure to get the right reply,
the one based on the http return action status code and not a reply based
on the server response status code.
This patch should fix the issue #1139. It must be backported as far as 2.2.
If a SPOE filter is configured to send its logs to a ring buffer, the
corresponding sink must be resolved during the configuration post
parsing. Otherwise, the sink is undefined when a log message is emitted,
crashing HAProxy.
This patch must be backported as far as 2.2.
Remove the ebmb_node entry from struct connection and create a dedicated
struct conn_hash_node. struct connection now contains only a pointer to
a conn_hash_node, allocated only for connections whose target is of type
OBJ_TYPE_SERVER. This will reduce the memory footprint of every
connection that does not need http-reuse, such as frontend connections.
In h2_process() there were two places where the connection was removed
from the idle trees without first checking whether the connection is on
the backend side.
This should not produce a crash since the node is properly zeroed in
conn_init(). However, it is better to make the test explicit, as is done
in all other places. Besides, it will be mandatory if the node part is
dynamically allocated only for backend connections.
The maximum number of connections accepted at once by a thread for a single
listener used to default to 64 divided by the number of processes but the
tasklet-based model is much more scalable and benefits from smaller values.
Experimentation has shown that 4 gives the highest accept rate for all
thread values, and that 3 and 5 come very close, as shown below (HTTP/1
connections forwarded per second at multi-accept 4 and 64):
ac\thr| 1 2 4 8 16
------+------------------------------
4| 80k 106k 168k 270k 336k
64| 63k 89k 145k 230k 274k
Some tests were also conducted on SSL and absolutely no change was observed.
The value was placed into a define because it used to be spread all over the
code.
It might be useful at some point to backport this to 2.3 and 2.2 to help
those who observed some performance regressions from 1.6.
SCTL (signed certificate timestamp list), specified in RFC6962, was
implemented in c74ce24cd22e8c683ba0e5353c0762f8616e597d, so let us
introduce the macro HAVE_SSL_SCTL, which in turn is based on
SN_ct_cert_scts, introduced in the same commit.
we previously forgot to add `agent-*` commands.
Take this opportunity to rewrite the help string in a simpler way for
readability (mainly removing single quotes).
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Due to the two-phase server reservation, there are 3 calls to
fwlc_srv_reposition() per request, one during assign_server() to reserve
the slot, one in connect_server() to commit it, and one in process_stream()
to release it. However only one of the first two will change the key, so
it's needlessly costly to take the lock, remove a server and insert it
again at the same place when we can already figure we ought not to even
have taken the lock.
Furthermore, even when the server needs to move, there can be quite some
contention on the lbprm lock forcing the thread to wait. During this time
the served and nbpend server values might have changed, just like the
lb_node.key itself. Thus we measure the values again under the lock
before updating the tree. Measurements have shown that under contention
with 16 servers and 16 threads, 50% of the updates can be avoided there.
This patch makes the function compute the new key and compare it to
the current one before deciding to move the entry (and it does it again
under the lock for the second test).
This removes between 40 and 50% of the tree updates depending on the
thread contention and the number of servers. The performance gain due
to this (on 16 threads) was:
16 servers: 415 krps -> 440 krps (6%, contention on lbprm)
4 servers: 554 krps -> 714 krps (+29%, little contention)
One point worth thinking about is that it's not logical to update the
tree 2-3 times per request while it's only read once: half to two thirds
of these updates are not needed. An experiment consisting in subscribing
the server to a list and letting the readers reinsert them on the fly
showed further degradation instead of an improvement.
A better approach would probably consist in avoiding writes to shared
cache lines by having the leastconn nodes distinct from the servers,
with one node per value, and having them hold an mt-list with all the
servers having that number of connections. The connection count tree
would then be read-mostly instead of facing heavy writes, and most
write operations would be performed on 1-3 list heads which are way
cheaper to migrate than a tree node, and do not require updating the
last two updated neighbors' cache lines.
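A minimal sketch of the double-check pattern described above, with a
pthread mutex and a trivial key standing in for the lbprm lock and the
real leastconn key computation:

#include <pthread.h>

struct srv_sketch {
    unsigned int served, nbpend;  /* updated atomically elsewhere */
    unsigned int lb_key;          /* key currently used in the tree */
};

static pthread_mutex_t lbprm_lock = PTHREAD_MUTEX_INITIALIZER;

static unsigned int compute_key(const struct srv_sketch *s)
{
    return s->served + s->nbpend; /* placeholder key */
}

static void fwlc_srv_reposition_sketch(struct srv_sketch *s)
{
    if (compute_key(s) == s->lb_key)
        return;                   /* 40-50% of the calls stop here */

    pthread_mutex_lock(&lbprm_lock);
    /* measure again: served/nbpend may have moved while we waited */
    if (compute_key(s) != s->lb_key) {
        /* ... eb32_delete() + update the key + eb32_insert() ... */
        s->lb_key = compute_key(s);
    }
    pthread_mutex_unlock(&lbprm_lock);
}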
The operations are only an insert and a delete into the LB tree, which
doesn't require the server's lock at all as the lbprm lock is already
held. Let's drop it. Just for the sake of cleanness, given that the
served and nbpend values used to be atomically updated, we'll use an
atomic load to read them.
The operations are only an insert and a delete into the LB tree, which
doesn't require the server's lock at all as the lbprm lock is already
held. Let's drop it.
The two algos defining these functions (first and leastconn) do not need the
server's lock. However it's already present in pendconn_process_next_strm()
so the API must be updated so that the functions may take it if needed and
that the callers indicate whether they already own it.
As such, the call places (backend.c and stream.c) now do not take it
anymore, queue.c was unchanged since it's already held, and both "first"
and "leastconn" were updated to take it if not already held.
A quick test on the "first" algo showed a jump from 432k to 565k rps by
just dropping the lock in stream.c!
The remaining contention on the server lock solely comes from
sess_change_server() which takes the lock to add and remove a
stream from the server's actconn list. This is both expensive
and pointless since we have mt-lists, and this list is only
used by the CLI's "shutdown server sessions" command!
Let's migrate to an mt-list and remove the need for this costly
lock. By doing so, the request rate increased by ~1.8%.
The server lock was taken preventively for anything in health_adjust(),
including the static config checks needed to detect that the lock was not
needed, while the function is always called on the response path to update
a server's status. This was responsible for huge contention causing a
performance drop of about 17% on 16 threads. Let's move the lock only
where it should be, i.e. inside the function around the critical sections
only. By doing this, a 16-thread process jumped back from 575 to 675 krps.
This should be backported to 2.3 as the situation degraded there, and
maybe later to 2.2.
There's an issue when a server state changes: we use an integer comparison
to decide whether or not to reschedule a test instead of using a wrapping
timer comparison. This will cause some health-checks not to be immediately
triggered half of the time, and some unneeded calls to task_queue() to be
performed in other cases.
This bug has always been there as it was introduced with the commit that
added the feature, 97f07b832 ("[MEDIUM] Decrease server health based on
http responses / events, version 3"). This may be backported everywhere.
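The distinction is easy to picture: a plain integer comparison breaks
once the tick counter wraps, while a signed difference keeps working, as
in this sketch (mirroring the usual idiom, not HAProxy's exact macros):

#include <stdint.h>

/* wrapping-safe: correct even when <a> and <b> sit on opposite sides
 * of the 2^32 boundary
 */
static inline int tick_is_before(uint32_t a, uint32_t b)
{
    return (int32_t)(a - b) < 0;
}

/* buggy: "a < b" inverts its meaning around the wrap, so a check due
 * soon can look far in the past and vice versa, which delays some
 * health checks and needlessly requeues others
 */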
conn_hash_prehash() does not need a NUL-terminated string, thus it
suffices to check that the sni sample is not null before using it as
connection hash input.
Moreover, a bug could be introduced between the smp_make_safe() and
ssl_sock_set_servername() calls. Indeed, smp_make_safe() may call
smp_dup(), which duplicates the sample into the trash buffer. If another
function manipulates the trash buffer before the call to
ssl_sock_set_servername(),
the sni sample might be erased. Currently, no function seems to do that
except make_proxy_line in case proxy protocol is used simultaneously
with the sni on the server.
This does not need to be backported.
In session_count_new() the tracked counter was still incremented with
a "++" outside of any lock, resulting in occasional slightly off values
such as the following:
# table: foo, type: string, size:1000, used:1
0xb2a398: key=127.1.2.3 use=0 exp=86398318 sess_cnt=999959 http_req_cnt=1000004
Now with the correct atomic increment:
# table: foo, type: string, size:1000, used:1
0x7f82a4026d38: key=127.1.2.3 use=0 exp=86399294 sess_cnt=1000004 http_req_cnt=1000004
This can be backported to 1.8.
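The fix boils down to replacing the plain increment with an atomic one;
in C11 terms (HAProxy uses its HA_ATOMIC_* macros for this):

#include <stdatomic.h>

static _Atomic unsigned int sess_cnt;

void count_new_session(void)
{
    /* was: sess_cnt++; -- two threads can both read the same old value
     * and store old+1, losing increments (999959 vs 1000004 above)
     */
    atomic_fetch_add(&sess_cnt, 1);
}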
It seems that fd_delete() performs the close of the file descriptor,
so we must not close the fd once again after that.
This should fix issues #1128, #1130 and #1131.
This patch fixes a case which should never happen when writing to the
output channel, since we check the available room beforehand.
This patch should fix GitHub issue #1132.
This patch fixes the return code when dns_connect_server() is called on
an unsupported type (which should not happen). This way we have the
guarantee that after a return of 0 the fd is never -1.
This patch should fix GitHub issues #1127, #1128 and #1130.
This patch adds a missing test in dns_session_io_handler() when getting
the query id from the ring buffer. An error should never happen since
messages are added to the ring atomically.
This patch should fix GitHub issue #1133.
move listen status to a helper, defining both the status enum and its
string representation.
this will be helpful for reuse in the prometheus code. It also removes
a hard-to-read nested ternary.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
the prometheus approach requires outputting all values for a given metric
name, meaning we iterate through all metrics and then, in an inner loop,
over all objects for this metric.
In order to allow more code reuse, adapt the stats API to be able to
select one field, or fill them all otherwise.
From this patch it should be possible to add support for listen stats in
prometheus.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
The RMAINT admin state is dynamic and should be removed from the
srv_admin_state parameter when a server state is loaded from a server-state
file. Otherwise an error is reported, the server-state line is ignored and
the server state is not updated.
This patch should fix the issue #576. It must be backported as far as 1.8.
This patch introduces the new "server" line to set a TCP
nameserver in a "resolvers" section:
server <name> <address> [param*]
Used to configure a DNS TCP or stream server. This supports all
"server" parameters found in section 5.2, although some of them are
irrelevant for DNS resolving. Note: currently 4 queries are pipelined
on the same connection. A batch of idle connections is removed every
5 seconds. "maxconn" can be configured to limit the amount of those
concurrent connections, and TLS should also be usable if the server
supports it.
The name of this configuration line is open to discussion
and could be changed before the next release.
This patch introduces "dns_stream_nameserver" to use DNS over
TCP on strict nameservers. For the upper layer it is analogous to
the API used with UDP nameservers, except that the user can switch
the name server into "stream" mode at init time using "dns_stream_init".
The fallback from UDP to TCP is not handled, and this is not the
purpose of this feature: the transport layer is chosen once, at
initialization time.
Currently there is a hardcoded limit of 4 pipelined transactions
per TCP connection. A batch of idle connections is expired every 5s.
This code is designed to support a maximum DNS message size on TCP
of 64kB.
Note: this code won't perform retries on unanswered queries; this
should be handled by the upper layer.
This patch splits the current dns.c into two files:
The first, dns.c, contains code related to DNS message exchange over UDP
and, in the future, TCP. We try to remove dependencies on resolving
to make it usable by other things such as DNS load balancing.
The new resolvers.c inherits the code specific to the actual
resolvers.
Note: it was really difficult to obtain a clean diff due to the amount
of moved code.
Note2: counters and stuff related to stats are not cleanly separated
because currently counters for both layers are merged and hard to
separate for now.
This patch splits the recv and send functions into two layers. The
lowest is responsible for DNS message transactions over
the network. Doing this, we could use the DNS message layer
for something other than resolving, load balancing for instance.
This patch also reworks the way a nameserver is initialized and
introduces the new struct dns_dgram_server to prepare the arrival
of dns_stream_server and the support of DNS over TCP.
The way a send failure of a request due to EAGAIN is retried was
also reworked. Previously there was no control, and all "pending"
queries were replayed each time an EAGAIN was hit. This patch
introduces a ring to stack messages in case of send failure. This
ring is emptied when the poller shows that the socket is ready
again to push messages.
Counters are currently stored in the low-level nameserver structs, but
most of them are resolving-layer data and are increased in the upper
layer. So this patch renames the prototypes used to allocate/dump them
with the prefix 'resolv', waiting for a clean split.
Some types are specific to the resolver code and are renamed using
the 'resolv' prefix instead of 'dns':
-struct dns_query_item {
+struct resolv_query_item {
-struct dns_answer_item {
+struct resolv_answer_item {
-struct dns_response_packet {
+struct resolv_response {
Resolv callbacks are also updated to rely on counters and not on
nameservers.
"show stat domain dns" will now show the parent id (i.e. resolvers
section name).
FreeBSD has a kernel feature (accf) and a sockopt flag similar to
Linux's TCP_DEFER_ACCEPT to filter incoming data upon ACK. The main
difference is that the filter needs to be placed once the socket
actually listens.
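For illustration, arming the filter looks like this sketch; note the
setsockopt() call only makes sense after listen(), which is the
constraint mentioned above:

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

static int arm_accept_filter(int fd)
{
#ifdef SO_ACCEPTFILTER
    struct accept_filter_arg afa;

    memset(&afa, 0, sizeof(afa));
    strcpy(afa.af_name, "dataready");  /* or "httpready" for HTTP */

    /* must be done once the socket is listening */
    return setsockopt(fd, SOL_SOCKET, SO_ACCEPTFILTER, &afa, sizeof(afa));
#else
    (void)fd;
    return 0;  /* not supported on this platform */
#endif
}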
Very often, especially since reg-tests, it would be desirable to be able
to conditionally comment out a config block, such as removing an SSL
binding when SSL is disabled, or enabling HTX only for certain versions,
etc.
This patch introduces a very simple nested block management which takes
".if", ".elif", ".else" and ".endif" directives to take or ignore a block.
For now the conditions are limited to an empty string or "0" for false
versus a non-zero integer for true, which already suffices to test
environment variables. Still, it needs to become a bit more advanced,
with defines, versions, etc.
A set of ".notice", ".warning" and ".alert" statements are provided to
emit messages, often in order to provide advice about how to fix certain
conditions.
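As a hypothetical illustration (the directive names come from the list
above, but the variable reference and message syntax shown here are
assumptions and may evolve):

.if "$USE_SSL"
    frontend fe-ssl
        bind :8443 ssl crt /etc/haproxy/site.pem
.else
    .warning "building without SSL, binding in clear text"
    frontend fe-clear
        bind :8080
.endif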
The "show peers" output has become huge due to the dictionaries making it
less readable. Now this feature has reached a certain level of maturity
which doesn't warrant to dump it all the time, given that it was essentially
needed by developers. Let's make it optional, and disabled by default, only
when "show peers dict" is requested. The default output reminds about the
command. The output has been divided by 5 :
$ socat - /tmp/sock1 <<< "show peers dict" | wc -l
125
$ socat - /tmp/sock1 <<< "show peers" | wc -l
26
It could be useful to backport this to recent stable versions.
This variable is now only used to point to the local server-state file. When
the server-state is global, it is unused. So, we now use "localfilepath"
instead. Thus, the "filepath" variable can safely be removed.
When a local server-state file is loaded, if its name is too long, the error
is not properly handled, resulting in a call to fopen() with the "filepath"
variable set to NULL. To fix the bug, when this error occurs, we jump to the
next proxy via a "continue" statement. And we take care to set the
"filepath" variable only after the error handling, to be safe.
This patch should fix the issue #1111. It must be backported as far as 1.6.
When a connect rule is evaluated, a test is performed on the "port" variable
while it was set to 0 on the line just above. Just remove this useless
test to make cppcheck happy.
This patch fixes the issue #1113.
Now it becomes possible to specify "from foo" on a frontend/listen/backend
or even on a "defaults" line, to mention that defaults section "foo" needs
to be used to preset the proxy's settings.
When not set, the last section remains used. In case the designated name
is found at multiple places, it is rejected and an error indicates two
occurrences of the same name. Similarly, a named section may only use
valid characters in its name. This allows multiple named
defaults sections to continue to coexist without the risk that they will
cause trouble by accident.
When it comes to "defaults" relying on another defaults, what happens is
just that a new defaults section is created from the designated one. This
will make it possible for example to reuse some settings such as log-format
like below:
defaults tcp-clear
log stdout local0 info
log-format "%ci:%cp/%b/%si:%sp %ST %ts %U/%B %{+Q}r"
defaults tcp-ssl
log stdout local0 info
log-format "%ci:%cp/%b/%si:%sp %ST %ts %U/%B %{+Q}r ssl=%sslv"
defaults http-clear from tcp-clear
mode http
defaults http-ssl from tcp-ssl
mode http
frontend fe1 from http-clear
bind :8001
frontend fe2 from http-ssl
bind :8002
A small corner case remains in the error detection, if a second defaults
section appears with the same name after the point where it was used, and
nobody references it, the duplicate will not be detected. This could be
addressed by performing the syntactic checks in check_config_validity(),
and by postponing the freeing of the defaults, after tagging a defaults
section as explicitly looked up by another section. This doesn't seem
that important at the moment though.
Now default proxies are stored into a dedicated tree, sorted by name.
Only unnamed entries are not kept upon new section creation. The very
first call to cfg_parse_listen() will automatically allocate a dummy
defaults section which corresponds to the previous static one, since
the code requires to have one at a few places.
The first immediately visible benefit is that it makes it possible to reuse
alloc_new_proxy() to allocate a defaults section instead of doing it by
hand. And the secret goal is to allow keeping multiple named defaults
sections in memory, to reuse them from various proxies.
Now we'll have a tree of named defaults sections. The regular insertion
and lookup functions take care of the capability in order to select the
appropriate tree. A new function proxy_destroy_defaults() removes a
proxy from this tree and frees it entirely.
In order to make the default proxy configurable, we'll need to have a
pointer to it which might differ from &defproxy. cfg_parse_listen()
now gets curr_defproxy for this.
We don't want to expose this one anymore as we'll soon keep multiple
default proxies. Let's move it inside the parser which is the only
place which still uses it, and initialize it on the fly once needed
instead of doing it at boot time.
The default proxy was passed as a variable, which in addition to being
a PITA to deal with in the config parser, doesn't feel safe to use when
it ought to be const.
This will only affect new code so no backport is needed.
The default proxy was passed as a variable, which in addition to being
a PITA to deal with in the config parser, doesn't feel safe to use when
it ought to be const.
This will only affect new code so no backport is needed.