The following metrics are added for each event or group of messages processed by
the SPOE:
* processing time: the delay to process the event or the group. From the
  stream's point of view, it is the latency added by the SPOE processing.
* request time: the encoding time. It includes ACL processing, if any. For
  fragmented frames, it is the sum of all fragments.
* queue time: the delay before the request leaves the sending queue. For
  fragmented frames, it is the sum of all fragments.
* waiting time: the delay before the response is received. No fragmentation
  supported here.
* response time: the delay to process the response. No fragmentation supported
  here.
* total time: (unused for now). It is the sum of all the above delays for all
  events and groups processed by the SPOE for a specific thread.
Log messages have been updated. Before, only errors were logged (status_code !=
0). Now every processing is logged, following this format:
  SPOE: [AGENT] <TYPE:NAME> sid=STREAM-ID st=STATUS-CODE reqT/qT/wT/resT/pT
where:
  AGENT is the agent name
  TYPE is EVENT or GROUP
  NAME is the event or the group name
  STREAM-ID is an integer, the unique id of the stream
  STATUS-CODE is the processing's status code
  reqT/qT/wT/resT/pT are the delays described above
For all these delays, -1 means the processing was interrupted before the end. So
-1 for the queue time means the request was never dequeued. For fragmented
frames it is harder to know when the interruption happened.
For now, messages are logged using the same logger as the backend of the
stream which initiated the request.
In async or pipelining mode, we count the number of NOTIFY frames sent that are
still waiting for their corresponding ACK frames. This is a way to evaluate the
"load" of a SPOE applet. In pipelining mode, it is easy to make the link between
a NOTIFY frame and its ACK, because exchanges are done over the same TCP
connection. In async mode, it is harder because an ACK frame can be received on
another connection than the one that sent the NOTIFY frame. So to decrement the
fpa of the right applet, we need to keep it in the SPOE context. Most of the
time this works, except when the processing is interrupted by the stream
because of a timeout.
This patch fixes this issue. If a SPOE applet is still linked to a SPOE context
when the processing is interrupted by the stream, the applet's fpa is
decremented. This is only done for unfragmented frames.
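As a hedged sketch of that interruption path (the flag and field names here
are assumptions based on this description, not copied from the source):

    /* on interruption by the stream, release the link between the SPOE
     * context and its applet and credit back one in-flight frame */
    if (ctx->spoe_appctx && !(ctx->flags & SPOE_CTX_FL_FRAGMENTED)) {
        ctx->spoe_appctx->frames_in_flight--;   /* the "fpa" counter */
        ctx->spoe_appctx = NULL;
    }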
Variables referenced in HAProxy's configuration file are registered during the
configuration parsing (during parsing of "var", "set-var" or "unset-var"
keywords). For the SPOE, you can use the "register-var-names" directive to
explicitly register variable names. All unknown variables will be rejected
(unless you set the "force-set-var" option). But the variable set when an
error occurs (when the "set-on-error" option is defined) should also be
registered by default. This is done with this patch.
It is better to let spoe_stop_processing release this buffer because, in
.check_timeouts callback, we lack information to know if it should be released or
not. For instance, if the processing timeout is reached while the SPOE applet
receives the reply, it is preferable to ignore the timeout and process the
result.
This patch should be backported in 1.8.
Some initializations must be done at the beginning of parse_spoe_flt to avoid
segmentation fault when the first errors are caught, when the "filter spoe" line is
parsed.
This patch must be backported in 1.8.
[cf: the variable "curvars" doesn't exist in 1.8. So the patch must be adapted.]
Several segfaults were reported in the cache, each time in eb_delete()
called from cache_free_blocks() itself called from shctx_row_reserve_hot().
Each time the tree node was corrupted with random cached data (often JS or
HTML contents).
The problem comes from an incompatibility between the cache's expectations
and the recycling algorithm used in the shctx. The shctx allocates and
releases a chain of blocks at once. And when it needs to allocate N blocks
from the avail list while a chain of M>N is found, it picks the first N
from the list, moves them to the hot list, and marks all remaining M-N
blocks as isolated blocks (chains of 1).
For each such released block, the shctx->free_block() callback is used
and passed a pointer to the first and current block of the chain. For
the cache, it's cache_free_blocks(). What this function does is check
that the current block is the first one, and in this case delete the
object from the tree and mark it as not in tree by setting key to zero.
The problem this causes is that when M>N, the tail blocks become first
blocks for the next call to shctx_row_reserve_hot(); these will be
passed to cache_free_blocks() as list heads, and will be sent to
eb_delete() despite containing only cached data.
The simplest solution for now is to mark each block as holding no cache
object by setting key to zero all the time. It keeps the principle used
elsewhere in the code. The SSL code is not subject to this problem
because it relies on the block's len not being null, which happens
immediately after a block was released. It was uncertain however whether
this method is suitable for the cache. It is not critical though since
this code is going to change soon in 1.9 to dynamically allocate only
the number of required blocks.
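A minimal sketch of this approach, assuming the shctx callback shape
described above (structure and field names may differ from the source):

    /* release callback: always mark the block as holding no cache object */
    static void cache_free_blocks(struct shared_block *first,
                                  struct shared_block *block)
    {
        struct cache_entry *object = (struct cache_entry *)block->data;

        if (first == block && object->eb.key)
            eb32_delete(&object->eb);   /* drop the object from the tree */
        object->eb.key = 0;             /* done unconditionally, see above */
    }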
This fix must be backported to 1.8. Thanks to Thierry for providing
exploitable cores.
The "show cache" command used to dump the header for each entry into into
the handler loop, making it repeated every ~16kB of output data. Additionally
chunk_appendf() was used instead of chunk_printf(), causing the output to
repeat already emitted lines, and the output size to grow in O(n^2). It used
to take several minutes to report tens of millions of objects from a small
cache containing only a few thousand. There was no other impact though.
This fix must be backported to 1.8.
Since the commit 2f3a56b4f ("BUG/MINOR: tcp-check: use the server's service port
as a fallback"), email alerts stopped working because the mailer's port was
overridden by the server's port. Remember, email alerts are defined as checks
with specific tcp-check rules and triggered on demand to send alerts. So to send
an email, a check is executed. Because no specific port was defined, the
server's one was used.
To fix the bug, the ports used for checks attached to an email alert are explicitly
set using the mailer's port. So this port will be used instead of the server's
one.
In this patch, the assignment of a default port (587) when an email alert is
defined has been removed. Indeed, when a mailer is defined, the port must be
defined. So the default port was never used.
This patch must be backported in 1.8.
Clearing the update_mask bit in fd_insert may lead to duplicate insertion
of an fd in fd_updt, which could lead to a write past the end of the array.
Instead, make sure the update_mask bit is cleared by the pollers no matter
what.
This should be backported to 1.8.
[wt: warning: 1.8 doesn't have the lockless fdcache changes and will
require some careful changes in the pollers]
Since commit 9aaf778 ("MAJOR: connection : Split struct connection into
struct connection and struct conn_stream."), the checks use a conn_stream
and not directly the connection anymore. However wake_srv_chk() still used
to verify the connection's readiness instead of the conn_stream's. Due to
the existence of a mux, the connection is always waiting for receiving
something, and doesn't reflect the changes made in event_srv_chk_{r,w}(),
causing the connection to appear as not ready yet, and the check to be
validated only after its timeout. The difference is only visible when
sending pure TCP checks, and simply adding a "tcp-check connect" line
is enough to work around it.
This fix must be backported to 1.8.
When a stream blocks on a mux buffer full/unallocated or on connection
flow control, a flag among H2_SF_MUX_M* is set, but the stream is not
always added to the connection's list. It's properly done when the
operations are performed from the connection handler but not always when
done from the stream handler. For instance, a simple shutr or shutw may
fail by lack of room. If it's immediately followed by a call to h2_detach(),
the stream remains lying around in no list at all, and prevents the
connection from ending. This problem is actually quite difficult to
trigger and seems to require some large objects and low server-side
timeouts.
This patch covers all identified paths. Some are redundant but since the
code will change and will be simplified in 1.9, it's better to stay on
the safe side here for now. It must be backported to 1.8.
Commit e3f36cd ("MINOR: h2: implement a basic "show_fd" function")
accidentally brought one surrounding debugging part that was in the same
context. No backport needed.
The purpose here is to dump some information regarding an H2 connection,
and a few statistics about its streams. The output looks like this :
35 : st=0x55(R:PrA W:PrA) ev=0x00(heopi) [lc] cache=0 owner=0x7ff49ee15e80 iocb=0x588a61(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x00201366 fe=decrypt mux=H2 mux_ctx=0x7ff49ee16f30 st0=2 flg=0x00000002 fctl_cnt=0 send_cnt=33 tree_cnt=33 orph_cnt=0
- st0 is the connection's state (FRAME_H here)
- flg is the connection's flags (MUX_MFULL here)
- fctl_cnt is the number of streams in the fctl_list
- send_cnt is the number of streams in the send_list
- tree_cnt is the number of streams in the streams_by_id tree
- orph_cnt is the number of orphaned streams (cs==0) in the tree
This function will be called from the CLI's "show fd" command to append some
extra mux-specific information that only the mux handler can decode. This is
supposed to help collect various hints about what is happening when facing
certain anomalies.
Interrupting an h2load test shows that some connections remain active till
the client timeout. This is due to the fact that h2_detach() immediately
returns if the h2s flags indicate that the h2s is still waiting for some
buffer room in the output mux (possibly to emit a response or to send some
window updates). If the connection is broken, these data will never leave
and must not prevent the stream from being terminated nor the connection
from being released.
This fix must be backported to 1.8.
Currently, h2_release() will release all resources assigned to the h2
connection, including the timeout task if any. But since the multi-threaded
scheduler, the timeout task could very well be queued in the thread-local
list of running tasks without any way to remove it, so task_delete() will
have no effect and task_free() will cause this undefined object to be
dereferenced.
In order to prevent this from happening, we never release the task in
h2_release(), instead we wake it up after marking its context NULL so that
the task handler can release the task.
Future improvements could consist in modifying the scheduler so that a
task_wakeup() has to be done on any task having to be killed, letting
the scheduler take care of it.
This fix must be backported to 1.8. This bug was apparently not reported
so far.
Since these two functions are always used together, let's simplify
the code by having a single one for both operations. It also ensures
we don't leave wandering elements that risk leaking later.
The code is safer and more robust this way, as it avoids multiple paths.
This is possible due to the idempotence of LIST_DEL() and eb32_delete()
that are called in h2s_detach().
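As an illustration, such a combined helper can be as simple as this sketch
(names assumed from the description above):

    /* detach then free in one place; both removals are idempotent,
     * so calling this on an already-detached stream is safe */
    static void h2s_destroy(struct h2s *h2s)
    {
        h2s_detach(h2s);   /* LIST_DEL() + eb32_delete() */
        h2s_free(h2s);
    }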
Several people reported very strange occasional crashes when using H2.
Every time it appeared that either an h2s or a task was corrupted. The
outcome is that a missing LIST_DEL() when removing an orphaned stream
from the list in h2_wake_some_streams() can cause this stream to
remain present in the send list after it was freed. This may happen
when receiving a GOAWAY frame for example. In the meantime the send
list may be processed due to pending streams, and the just released
stream is still found. If due to a buffer full condition we left the
h2_process_demux() loop before being able to process the pending
stream, the pool entry may be reassigned somewhere else. Either another
h2 connection will get it, or a task, since they are the same size and
are shared. Then upon next pass in h2_process_mux(), the stream is
processed again. Either it crashes here due to modifications, or the
contents are harmless to it and its last changes affect the other object
reassigned to this area (typically a struct task). In the case of a
collision with struct task, the LIST_DEL operation performed on h2s
corrupts the task's wait queue's leaf_p pointer, thus all the wait
queue's structure.
The fix consists in always performing the LIST_DEL in h2s_detach().
It will also make h2s_stream_new() more robust against a possible
future situation where stream_create_from_cs() could have sent data
before failing.
Many thanks to all the reporters who provided extremely valuable
information, traces and/or cores, namely Thierry Fournier, Yves Lafon,
Holger Amann, Peter Lindegaard Hansen, and discourse user "slawekc".
This fix must be backported to 1.8. It is probably better to also
backport the following code cleanups with it as well to limit the
divergence between master and 1.8-stable :
00dd078 CLEANUP: h2: rename misleading h2c_stream_close() to h2s_close()
0a10de6 MINOR: h2: provide and use h2s_detach() and h2s_free()
Commit 35b1b48 ("MINOR: cli: make "show fd" report the mux and mux_ctx
pointers when available") introduced an accidental build warning due to
a missing const qualifier.
This is handy to quickly distinguish H2 connections as well as to easily
access the h2c context. It could be backported to 1.8 to help during
troubleshooting sessions.
A warning is reported here by valgrind on first pass in hpack_dht_insert().
The cause is that the not-yet-initialized dht->head is checked in
hpack_dht_get_tail(), though the result is not used, making it have
no impact. At the very least it confuses valgrind, and maybe it makes
it harder for gcc to optimize the code path. Let's move the variable
initialization around to shut it up. Thanks to Olivier for reporting
this one.
This fix may be backported to 1.8 at least to make valgrind usage less
painful.
Instead of hlua_socket_settimeout() accepting only integers, allow the user
to specify floats and doubles as well. Convert to milliseconds much like
cli_parse_set_timeout but also sanity check the value.
http://w3.impa.br/~diego/software/luasocket/tcp.html#settimeout
T. Fournier edit:
The main goal is to keep compatibility with the LuaSocket API. This
API only accepts seconds, so using a float to specify milliseconds is
an acceptable way.
Update doc.
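A minimal sketch of the conversion, under the assumptions above (the socket
plumbing itself is omitted):

    #include <lauxlib.h>

    static int hlua_socket_settimeout(lua_State *L)
    {
        double dtmout = luaL_checknumber(L, 2);   /* seconds, may be a float */
        int tmout;

        if (dtmout < 0.0)
            return luaL_error(L, "settimeout: invalid timeout");

        tmout = (int)(dtmout * 1000.0);           /* convert to milliseconds */
        if (!tmout && dtmout != 0.0)
            tmout = 1;                            /* round tiny values up to 1ms */

        /* ... apply tmout to the socket's read/write timeouts ... */
        lua_pushinteger(L, 1);                    /* LuaSocket returns 1 */
        return 1;
    }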
src/queue.o: In function `pendconn_redistribute':
/home/ilia/haproxy/src/queue.c:272: undefined reference to `thread_want_sync'
src/queue.o: In function `pendconn_grab_from_px':
/home/ilia/haproxy/src/queue.c:311: undefined reference to `thread_want_sync'
src/queue.o: In function `process_srv_queue':
/home/ilia/haproxy/src/queue.c:184: undefined reference to `thread_want_sync'
collect2: error: ld returned 1 exit status
make: *** [Makefile:900: haproxy] Error 1
To be backported to 1.8.
Negative timeouts don't make sense. A negative timeout doesn't cause
a crash, but the connection expires before the system tries to establish it.
This patch should be backported to all versions from 1.6.
The output of these functions indicates that one element is pushed onto
the stack, but no element is set on the stack. Actually, anyone reading
the value returned by these functions gets "something" already present
on the stack.
This patch is a complement to this one: 119a5f10e4
The LuaSocket documentation doesn't say anything about the returned value,
but the effective code sets an integer of value one.
316a9455b9/src/timeout.c (L172)
Thanks to Tim for the bug report.
This patch should be backported to all versions from 1.6.
The issue was identified by cppcheck:
[src/map.c:372] -> [src/map.c:376]: (warning) Variable 'appctx->st2' is reassigned a value before the old one has been used. 'break;' missing?
[src/map.c:433] -> [src/map.c:437]: (warning) Variable 'appctx->st2' is reassigned a value before the old one has been used. 'break;' missing?
[src/map.c:555] -> [src/map.c:559]: (warning) Variable 'appctx->st2' is reassigned a value before the old one has been used. 'break;' missing?
[src/stream.c:3264] -> [src/stream.c:3268]: (warning) Variable 'appctx->st2' is reassigned a value before the old one has been used. 'break;' missing?
Signed-off-by: Ilya Shipitsin <chipitsine@gmail.com>
When a freshly created session is rejected, for any reason, during the accept in
the function "session_accept_fd", the variable "actconn" is decreased twice. The
first time when the rejected session is released, then in the function
"listener_accept", because of the failure. So it is possible to have a
negative value for actconn. Note that, in this case, we will also have a
negative value for the current number of connections on the listener rejecting
the session (actconn and l->nbconn are increased/decreased at the same time).
It is easy to reproduce the bug with this small configuration:
global
stats socket /tmp/haproxy
listen test
bind *:12345
tcp-request connection reject if TRUE
A "show info" on the stat socket, after a connection attempt, will show a very
high value (the unsigned representation of -1).
To fix the bug, if the function "session_accept_fd" returns an error, it
decrements the right counters and "listener_accept" leaves them untouched.
This patch must be backported in 1.8.
There are some corner cases where this could happen by accident. Since
the spec explicitly forbids this (RFC7540#5.4.2), let's add a test in
the only two functions which emit the RST to avoid this. Thanks to user
klzgrad for reporting this problem. Usually it is expected to be harmless
but may result in browsers issuing a warning.
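A minimal sketch of such a guard (the state name follows the H2 mux's
conventions and is an assumption here):

    /* RFC7540#5.4.2 forbids RST_STREAM on a stream in idle state */
    if (h2s->st == H2_SS_IDLE)
        return 0;   /* never emit an RST for an idle stream */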
This fix must be backported to 1.8.
Recent fixes made to process partial frames broke the flow control on
DATA frames, as the padding is not considered anymore, only the actual
data is. Let's simply take the padding into account once the transfer
ends. The probability of hitting this bug is low because, when used, padding
is small and it can require a large number of padded transfers before the
window is completely depleted.
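A hedged sketch of the idea, with field names assumed from this description
rather than taken from the source:

    /* once the padded DATA frame is fully consumed, credit the padding
     * back to both flow-control windows at once */
    if (!h2c->dfl && h2c->dpl) {       /* dfl: payload left, dpl: padding */
        h2c->rcvd_c += h2c->dpl;       /* connection-level window */
        h2c->rcvd_s += h2c->dpl;       /* stream-level window */
        h2c->dpl = 0;
    }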
Thanks to user klzgrad for reporting this bug and confirming the fix.
This fix must be backported to 1.8.
This patch adds the option crc32c (PP2_TYPE_CRC32C) to proxy protocol v2.
It computes the checksum of the proxy protocol v2 header as described in
"doc/proxy-protocol.txt".
This is a recurring pain when using certain unix domain sockets or when
sending to temporarily unroutable addresses, if the process remains in
the foreground, the console is full of errors which it's impossible to
do anything about. It's even worse when the process is remote, or when
run from a serial console which will slow the whole process down. Let's
send them only once now to warn about a possible config issue, and not
pollute the system nor slow everything down.
The previous patch about queues (5cd4bbd7a "BUG/MAJOR: threads/queue: Fix
thread-safety issues on the queues management") revealed a performance drop when
multithreading is enabled (nbthread > 1). This happens when pending connections
handled by other threads are dequeued. If these other threads are blocked in the
poller, we have to wait for the poller's timeout (or any I/O event) to process the
dequeued connections.
To fix the problem, at least temporarily, we "wake up" the threads by requesting
a synchronization. This may seem a bit overkill to use the sync point to wake
threads up, but it fixes this performance issue. So we can now think calmly
about the right way to address this kind of issue.
This patch should be backported in 1.8 with the commit 5cd4bbd7a ("BUG/MAJOR:
threads/queue: Fix thread-safety issues on the queues management").
When running tcp-check scripts, one must ensure we can establish a tcp
connection first.
When doing this action, HAProxy needs a TCP port configured either on
the server or on the check itself or on the connect rule itself.
For some reason, the connect code did not evaluate the service port on
the server structure...
This patch fixes this error.
Backport status: 1.8
When tcpcheck is used to do TCP port monitoring only and the script is
composed of a single "tcp-check connect" rule (whatever port and ssl
options enabled), then the server can't be seen as DOWN.
Simple configuration to reproduce:
backend b
[...]
option tcp-check
tcp-check connect
server s1 127.0.0.1:22 check
The main reason for this issue is that the piece of code which validates
that we're not at the end of the chained list (of rules) prevents
executing the validation of the establishment of the TCP connection.
Since validation is not executed, the rule is terminated and the report
says no errors were encountered, hence the server is UP all the time.
The workaround is simple: move the connection validation outside the
CONNECT rule processing loop, into the main function.
That way, if the connection status is not CONNECTED, then HAProxy will
now allow more time to wait for it. If that time expires, an error is
now properly reported.
Backport status: 1.8
OpenSSL can be built without NEXTPROTONEG support by passing
-no-npn to the configure script. This sets the
OPENSSL_NO_NEXTPROTONEG flag in opensslconf.h.
Since NEXTPROTONEG is now considered deprecated and superseded
by ALPN (Application-Layer Protocol Negotiation), HAProxy should
allow building without NPN support.
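A hedged sketch of the kind of guard this implies around the NPN-specific
calls (the callback name here is an assumption):

    /* only compile the NPN setup when OpenSSL provides NEXTPROTONEG */
    #ifndef OPENSSL_NO_NEXTPROTONEG
        SSL_CTX_set_next_protos_advertised_cb(ctx, ssl_sock_advertise_npn, conf);
    #endif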
This bug was introduced in 48bcfdab2 ("MEDIUM: dumpstat: make the CLI
parser understand the backslash as an escape char").
This should be backported to 1.8.
Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
Since 200b0fac ("MEDIUM: Add support for updating TLS ticket keys via
socket"), 4147b2ef ("MEDIUM: ssl: basic OCSP stapling support."),
4df59e9 ("MINOR: cli: add socket commands and config to prepend
informational messages with severity") and 654694e1 ("MEDIUM: stats/cli:
add support for "set table key" to enter values"), commands
'set ssl tls-key', 'set ssl ocsp-response', 'set severity-output' and
'set table' do not always send an extra LF at the end of their outputs.
This is required as mentioned in doc/management.txt:
"Since multiple commands may be issued at once, haproxy uses the empty
line as a delimiter to mark an end of output for each command"
Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
When doing a seamless reload, while receiving the sockets from the old process,
the new process will die if the socket has been bound to a specific
interface.
This happens because the code that tries to parse the information bogusly
sets xfer_sock->namespace, while it should be setting xfer_sock->iface.
This should be backported to 1.8.
The issue was identified by cppcheck:
[src/dns.c:2037] -> [src/dns.c:2041]: (warning) Variable 'appctx->st2' is reassigned a value before the old one has been used. 'break;' missing?
Automatic downgrade of the DNS accepted payload size may have an undesired
side effect, which could make a backend with all servers DOWN.
After talking with Lukas on the ML, I realized this "feature" introduces
more issues than it fixes.
The "best" way to properly handle big responses will be to implement DNS
over TCP.
To be backported to 1.8.
The management of the servers and the proxies queues was not thread-safe at
all. First, the accesses to <strm>->pend_pos were not protected. So it was
possible to release it on one thread (for instance because the stream is
released) and to use it at the same time on another one (because we redispatch
pending connections for a server). Then, accessing the stream's information
(flags and target) from anywhere else is forbidden. To be safe, the stream's
state must always be updated in the context of process_stream.
So to fix these issues, the queue module has been refactored. A lock has been
added in the pendconn structure. And now, when we try to dequeue a pending
connection, we start by unlinking it from the server/proxy queue and we wake up
the stream. Then, it is the stream's responsibility to really dequeue it (or
release it). This way, we are sure that only the stream can create and release
its <pend_pos> field.
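A minimal sketch of that dequeue path (the lock label and fields are
assumptions based on this description, not the exact source):

    /* unlink under the pendconn's lock, then let the stream finish the job */
    HA_SPIN_LOCK(PENDCONN_LOCK, &p->lock);
    if (p->srv || p->px) {          /* still queued on a server or proxy */
        LIST_DEL(&p->list);         /* remove from the server/proxy queue */
        p->srv = NULL;
        p->px  = NULL;
    }
    HA_SPIN_UNLOCK(PENDCONN_LOCK, &p->lock);
    task_wakeup(p->strm->task, TASK_WOKEN_RES);   /* the stream completes it */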
However, be careful. This new implementation should be thread-safe
(hopefully...). But it is not optimal and in some situations, it could be really
slower in multi-threaded mode than in single-threaded mode. The problem is that,
when we try to dequeue pending connections, we process them from the oldest one
to the newest one, independently of the threads' affinity. So we need to wait
for the other threads to wake up to really process them. If threads are blocked
in the poller, this will add a significant latency. This problem happens when
maxconn values are very low.
This patch must be backported in 1.8.
When a listener is temporarily disabled, we start by locking it and then we call
the .pause callback of the underlying protocol (tcp/unix). For TCP listeners,
this is not a problem. But listeners bound on a unix socket are in fact closed
instead. So the .pause callback relies on the unbind_listener function to do its
job. Unfortunately, unbind_listener holds the listener's lock and then calls an
internal function to unbind it. So, there is a deadlock here. This happens
during a reload. To fix the problem, the function do_unbind_listener, which is
lockless, is now exported and is called when a listener bound on a unix socket
is temporarily disabled.
This patch must be backported in 1.8.
From the very first day of force-persist and ignore-persist features,
they only applied to backends, except that the documentation stated it
could also be applied to frontends.
In order to make it clear, the documentation is updated and the parser
will raise a warning if the keywords are used in a frontend section.
This patch should be backported up to the 1.5 branch.
Krishna Kumar reported 100% cpu usage with a configuration using
cpu-map and a high number of threads.
Indeed, this minimal configuration is enough to reproduce the issue :
global
nbthread 40
cpu-map auto:1/1-40 0-39
frontend test
bind :8000
This is due to a wrong type in a shift operator (int vs unsigned long int),
causing an endless loop while applying the cpu affinity on threads. The same
issue may also occur with nbproc under FreeBSD. This commit addresses both
cases.
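The faulty pattern looks like this (a sketch, not the exact code):

    /* on LP64 platforms, "1 << i" is a 32-bit int: shifting it by 32 or
     * more is undefined, which here produced the endless loop */
    unsigned long mask = 1UL << i;    /* instead of: 1 << i */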
This patch must be backported to 1.8.
The correct keyword is 'ssl-sessions' (vs. 'ssl-session').
The typo was introduced in 45c742be05 ('REORG: cli: move the "set
rate-limit" functions to their own parser').
Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
This printf() was added in f886e3478d ("MINOR: cli: Add a command to
send listening sockets.").
Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
openssl/x509.h is included twice since commit fc0421fde ("MEDIUM: ssl:
add support for SNI and wildcard certificates").
Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
This bug is present since 7a4a0ac71d ("MINOR: cli: add a new "show fd"
command").
This should be backported to 1.8.
Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
Right now the h2 idle timeout is only set when there is no stream. If we
fail to send because the socket buffers are full (generally indicating
the client has left), we also need to arm it so that we can properly
expire such connections, otherwise some failed transfers might leave
H2 connections pending forever.
Thanks to Thierry Fournier for the diag and the traces.
This patch needs to be backported to 1.8.
When removing the socket from the xfer_sock_list, we want to set
next->prev to prev, not to next->prev, which is useless.
This should be backported to 1.8.
Some sample fetches check if session is established using
the flag CO_FL_CONNECTED. But in some cases, when a handshake
is performed, this flag is set too late, after the processing
of the tcp-request session rules.
This fix moves the raising of the flag to the beginning of the
conn_complete_session function which processes the tcp-request
session rules.
This fix must be backported to 1.8 (and perhaps 1.7)
We used to have one buffer allocator per direction while we can never
block on two buffers at once. Let's have a single one and rely on the
connection's flags to know which one we're waiting for.
This function takes an h2c and an h2s but it never uses the h2c, which
is a bit confusing at some places in the code. Let's make it clear that
it only operates on the h2s instead by renaming it and removing the
unused h2c argument.
This patch implements proxy protocol v2 options related to crypto information:
ssl-cipher (PP2_SUBTYPE_SSL_CIPHER), cert-sig (PP2_SUBTYPE_SSL_SIG_ALG) and
cert-key (PP2_SUBTYPE_SSL_KEY_ALG).
ssl_sock_get_pkey_algo can be used to report the pkey algorithm to the log
and ppv2 (RSA2048, EC256,...).
Extracting pkey information is not free in the SSL API (lock/alloc/free):
haproxy can use the pkey information computed in load_certificate.
Store and use this information in SSL ex_data when available,
compute it if not (SSL multicert bundles and generated certs).
Private key information is used in switchctx to implement native multicert
selection (ecdsa/rsa/anonymous). This patch extracts and stores full pkey
information: dsa type and pkey size in bits. This can be used for switchctx
or to report pkey information in ppv2 and logs.
In the SPOE applet's handler, when an applet is switched from the state IDLE to
PROCESSING, it is removed from the list of idle applets. But when HAProxy is
stopping, this applet can be switched to DISCONNECT. In this case, we also need
to remove it from the list of idle applets. Else the applet is removed but still
present in the list. It could lead to a segmentation fault or an infinite loop,
depending on the code path.
In case a stream tries to emit more data than advertised by the chunks
or content-length headers, the extra data remains in the channel's output
buffer until the channel's timeout expires. It can easily happen when
sending malformed error files making use of a wrong content-length or
having extra CRLFs after the empty chunk. It may also be possible to
forge such a bad response using Lua.
The H1 to H2 encoder must protect itself against this by marking the data
presented to it as consumed if it decides to discard them, so that the
sending stream doesn't wait for the timeout to trigger.
The visible effect of this problem is a huge memory usage and a high
concurrent connection count during benchmarks when using such bad data
(a typical place where this easily happens).
This fix must be backported to 1.8.
In h2_get_dbuf, when the buffer allocation failed, dbuf_wait.target was
erroneously set to the connection (h2c->conn) instead of the h2 connection
descriptor (h2c).
This patch must be backported to 1.8.
Since commit cf975d4 ("MINOR: pools/threads: Implement lockless memory
pools."), we support lockless pools. However the parts dedicated to
detecting use-after-free are not present in this part, making DEBUG_UAF
useless in this situation.
The present patch sets a new define CONFIG_HAP_LOCKLESS_POOLS when such
a compatible architecture is detected, and when pool debugging is not
requested, then makes use of this everywhere in pools and buffers
functions. This way enabling DEBUG_UAF will automatically disable the
lockless version.
No backport is needed as this is purely 1.9-dev.
This removes the unused next_header_block and try_again labels
from mux_h2.c.
try_again is unused as of a76e4c2183,
which first appeared in haproxy 1.8.0.
next_header_block is unused as of 872855998b,
which was backported to haproxy 1.8.0 as
59fcb216085a7aa9744cffe39567c80de4ebd6bf.
This removes the retry labels from spoe_send_frame and spoe_recv_frame
which are unused since d5216d474d, which
is unreleased, but was backported to haproxy 1.8 as
f13f3a4babdb1ce23a7e982c765704bca728111a.
Returns true when the back connection was made over an SSL/TLS transport
layer and the newly created SSL session was resumed using a cached
session or a TLS ticket.
When the body length is undefined (no Content-Length or Transfer-Encoding
headers), the response remains in ending mode, waiting for the request to be
done. So, most of the time this is not a problem because the request is done
before the response. But when a client sends data to a server that replies
without waiting for all the data, it is really not desirable to wait for the
end of the request to finish the response.
This bug was introduced when the tunneling of the request and the response was
refactored, in commit 4be980391 ("MINOR: http: Switch requests/responses in
TUNNEL mode only by checking txn flag").
This patch should be backported in 1.8 and 1.7.
When SSL_read returns SSL_ERROR_SYSCALL and errno is unset or set to EAGAIN, the
connection must be shut down for reading. Else, the connection loops infinitely,
consuming all the CPU.
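A minimal sketch of the rule described above (conn_sock_read0() is HAProxy's
usual way to shut the read side; its use here is an assumption):

    /* SSL_ERROR_SYSCALL without a real errno: the peer is simply gone */
    if (ret == SSL_ERROR_SYSCALL && (!errno || errno == EAGAIN)) {
        conn_sock_read0(conn);   /* shut down reading, stop the loop */
        break;
    }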
The bug was introduced in the commit 7e2e50500 ("BUG/MEDIUM: ssl: Don't always
treat SSL_ERROR_SYSCALL as unrecoverable."). This patch must be backported to
1.8 too.
It's always a pain not to be able to combine variables. This commit
introduces the "concat" converter, which appends a delimiter, a variable's
contents and another delimiter to an existing string. The result is a string.
This makes it easier to build composite variables made of other variables.
A TLS ticket keys file can be updated on the CLI and used at the same time. So
we need to protect it to be sure all accesses are thread-safe. Because updates
are infrequent, a R/W lock has been used.
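A hedged sketch of the locking pattern (the lock label and field names are
assumptions):

    /* handshake path: concurrent readers take the shared lock */
    HA_RWLOCK_RDLOCK(TLSKEYS_REF_LOCK, &ref->lock);
    memcpy(keys, ref->tlskeys, sizeof(keys));
    HA_RWLOCK_RDUNLOCK(TLSKEYS_REF_LOCK, &ref->lock);

    /* CLI update path: the writer takes the exclusive lock */
    HA_RWLOCK_WRLOCK(TLSKEYS_REF_LOCK, &ref->lock);
    memcpy(ref->tlskeys, new_keys, sizeof(ref->tlskeys));
    HA_RWLOCK_WRUNLOCK(TLSKEYS_REF_LOCK, &ref->lock);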
This patch must be backported in 1.8
Bart Geesink reported some random errors appearing under the form of
termination flags SD in the logs for connections involving SSL traffic
to reach the servers.
Tomek Gacek and Mateusz Malek finally narrowed down the problem to commit
c2aae74 ("MEDIUM: ssl: Handle early data with OpenSSL 1.1.1"). It happens
that the special case of SSL_ERROR_SYSCALL isn't handled anymore since
this commit.
SSL_read() might return <= 0, and SSL_get_error() return SSL_ERROR_SYSCALL,
without meaning the connection is gone. Before flagging the connection
as in error, check the errno value.
This should be backported to 1.8.
It was believed that it was useless to lock the "prev" field when adding a
FD. However, if there's only one element in the FD cache, and that element
removes itself from the fd cache, and another FD is added before the first
add completed, there's a risk of losing elements. To prevent that, lock the
"prev" field, so that such a removal will wait until the add completed.
Martin Brauer reported an unexpected warning when some parts of the
global stats are defined but not the listening address, like below :
global
#stats socket run/admin.sock mode 660 level admin
stats timeout 30s
Then haproxy complains :
[WARNING] 334/150131 (23086) : config : frontend 'GLOBAL' has no
'bind' directive. Please declare it as a backend if this was intended.
This is because of the check for a bind-less frontend (the global section
creates a frontend for the stats). There's no clean fix for this one, so
here we're simply checking that the frontend is not the global stats one
before emitting the warning.
This patch should be backported to all stable versions.
The last fix for the volatile dereference made use of pl_deref_int()
which is unknown when building without threads. Let's simply open-code
it instead. No backport needed.
Previously, -sf and -sd command line parsing used atol, which cannot
detect errors. I had a problem where I was doing -sf "$pid1 $pid2 $pid"
and it was sending the gracefully-terminate signal only to the first pid.
The change uses strtol and checks endptr and errno to see if the parsing
worked. It will exit when the pid list cannot be parsed.
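A minimal standalone sketch of the stricter parsing (not the exact
haproxy.c code):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* parse one PID argument; exit unless the whole string is a valid pid */
    static long parse_pid(const char *arg)
    {
        char *endptr;
        long pid;

        errno = 0;
        pid = strtol(arg, &endptr, 10);
        if (errno != 0 || endptr == arg || *endptr != '\0' || pid <= 0) {
            fprintf(stderr, "failed to parse pid '%s'\n", arg);
            exit(1);
        }
        return pid;
    }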
[wt: this should be backported to 1.8]
An haproxy compiled with:
> make -j4 all TARGET=linux2628 USE_GETADDRINFO=1
And running with a configuration like this:
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
frontend fe
bind :::8080 v4v6
default_backend be
backend be
server s example.com:80 check
Will leak memory inside `str2ip2()`, because the list `result` is not
properly freed in success cases:
==18875== 140 (76 direct, 64 indirect) bytes in 1 blocks are definitely lost in loss record 87 of 111
==18875== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==18875== by 0x537A565: gaih_inet (getaddrinfo.c:1223)
==18875== by 0x537DD5D: getaddrinfo (getaddrinfo.c:2425)
==18875== by 0x4868E5: str2ip2 (standard.c:733)
==18875== by 0x43F28B: srv_set_addr_via_libc (server.c:3767)
==18875== by 0x43F50A: srv_iterate_initaddr (server.c:3879)
==18875== by 0x43F50A: srv_init_addr (server.c:3944)
==18875== by 0x475B30: init (haproxy.c:1595)
==18875== by 0x40406D: main (haproxy.c:2479)
The leak exists as long as the usage of getaddrinfo in that function exists;
it was introduced in commit:
d5f4328efd
v1.5-dev8 is the first tag containing this commit, the fix
should be backported to haproxy 1.5 and newer.
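A minimal sketch of the fix, assuming the getaddrinfo-based branch described
above:

    struct addrinfo hints, *result = NULL;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;

    if (getaddrinfo(str, NULL, &hints, &result) == 0 && result) {
        /* ... copy the first usable address out of <result> ... */
        freeaddrinfo(result);   /* previously missing on the success path */
    }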
In the time offset calculation loop, we ensure we only commit the new
date once it's further in the future than the current one. However there
is a small issue here on 32-bit platforms : if global_now is written in
two cycles by another thread, starting with the tv_sec part, and the
current thread reads it in the middle of a change, it may compute a
wrong "adjusted" value on the first round, with the new (larger) tv_sec
and the old (large) tv_usec. This will be detected as the CAS will fail,
and another attempt will be made, but this time possibly with too large
an adjusted value, pushing the date further than needed (at worst almost
one second).
This patch addresses this by using a temporary adjusted time in the loop
that always restarts from the last known one, and by assigning the result
to the final value only once the CAS succeeds.
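A hedged sketch of this pattern, using a simplified 64-bit representation of
the date (the real code works on a struct timeval):

    /* compute into a local adjusted copy and publish it only once the
     * compare-and-swap proves nobody changed global_now in between */
    uint64_t old_now, adjusted;

    do {
        old_now  = global_now;                 /* snapshot, possibly stale */
        adjusted = MAX(old_now, local_now);    /* never move backwards */
    } while (!HA_ATOMIC_CAS(&global_now, &old_now, adjusted));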
The impact is very limited, it may cause the time to advance in small
jumps on 32 bit platforms and in the worst case some timeouts might
expire 1 second too early.
This fix should be backported to 1.8.
The function was cleaned up a bit from duplicated parts inherited from
the initial attempt at getting it to work. It's a bit smaller and cleaner
this way.
In fd_rm_from_fd_list(), we have loops waiting for another change to
complete, in case we don't have support for a double CAS. But these
ones fail to place a compiler barrier or to dereference the fdcache
as a volatile, resulting in an endless loop on the first collision,
which is visible when run on MIPS32.
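A minimal sketch of the kind of fix this calls for (names assumed):

    /* force a fresh load on each iteration so the compiler cannot
     * hoist the read out of the spin loop */
    while (*(volatile int *)&fdcache[fd].next == -2)
        __ha_compiler_barrier();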
No backport needed.
An fd cache entry might be removed and re-added at the end of the list while
another thread is parsing it; if that happens, we may miss fd cache entries.
To avoid that, add a new field in the struct fdtab, "added_mask", which
contains a mask of potentially affected threads; if it is set, the
corresponding thread will set its bit in fd_cache_mask, to avoid waiting in
poll while it may have more work to do.
Create a local, per-thread fd cache for file descriptors that belong to only
one thread, and make the global fd cache mostly lockless, as we can get
a lot of contention on the fd cache lock.
It may be useful to keep the CO_FL_EARLY_DATA flag, so that we know early
data were used. So instead of doing this, only add the Early-data header,
and have the sample fetch ssl_fc_has_early return 1 if CO_FL_EARLY_DATA is
set and the handshake isn't done yet.
Instead of looking for CO_FL_EARLY_DATA to know if we have to try to wake
up a stream because it is waiting for an SSL handshake, add a new
conn_stream flag, CS_FL_WAIT_FOR_HS. This way we don't have to rely on
CO_FL_EARLY_DATA, and we will only wake streams that are actually waiting.
This is the maximum number of frames waiting for an acknowledgement on the same
connection. This value is only used when the pipelined or asynchronous
exchanges between HAProxy and the SPOA are enabled. By default, it is set
to 20.
Instead of using a list of applets with idle ones in front, we now use an
ebtree. Applets in the tree are idle by definition. And the key is the applet's
weight. When a new frame is queued, the first idle applet (with the lowest
weight) is woken up and its weight is increased by one. And when an applet sends
a frame to a SPOA, its weight is decremented by one.
This is empirical, but it should avoid overusing a small number of applets
and improve the balancing between idle applets.
So it is easier to respect the max_fpa value. This is no longer the maximum
number of frames processed by an applet on each loop, but the maximum number
of frames waiting for an ack for a specific applet.
The function spoe_handle_processing_appctx has been rewritten accordingly.
sending_rate was a counter used to evaluate the SPOE's capacity to process
frames. Because it was not really accurate, it has been replaced by a frequency
counter representing the number of frames handled by the SPOE per second. We
just check that this counter is higher than the number of streams waiting for a
reply. If not, a new applet is created.
The calculation of a minimal number of active applets was really empirical and
finally useless. Under heavy load, there are always many active applets (most
of the time, more than the minimum required) and when the load is low, there is
no reason to keep unused applets open.
Because of this change, the flag SPOE_APPCTX_FL_PERSIST is now unused. So it has
been removed.
This is mandatory to correctly set the right timeout on the stream. Else the
client timeout is never set, so only the SPOE processing timeout will be
evaluated. If it is not defined (i.e. infinite), the stream can be blocked for a
while, waiting for the SPOA reply. Of course, it is not a good idea to leave the
SPOE processing timeout undefined, but it can happen.
This patch must be backported in 1.8.
Before, we checked whether the buffer was allocated or not to avoid sending or
receiving a frame. This was done to avoid calling ci_putblk or co_getblk when
there is nothing to do. But the checks on the buffers are also done in these
functions, so this is not mandatory here. However, in these functions, the
channel state is also checked, so an error is returned if it is closed. By
skipping the call, we also skip the checks on the channel state, delaying the
detection of shutdowns.
Now, we always try to send or receive a frame. So if the corresponding channel
is closed, we can immediately handle the error.
This patch must be backported in 1.8
Proxy protocol v2 can transport much optional information. To avoid a
send-proxy-v2-* keyword explosion, this patch introduces the proxy-v2-options
parameter, allowing one to write: "send-proxy-v2 proxy-v2-options ssl,cert-cn".
Commit d9e7e36 ("BUG/MEDIUM: epoll/threads: use one epoll_fd per thread")
addressed an issue with the polling and required that cloned FDs are removed
from all polling threads on close. But in fact it does it for all bound
threads, some of which may not necessarily poll the FD. This is harmless,
but it may also make it harder later to deal with FD migration between
threads. Better use polled_mask which only reports threads still aware
of the FD instead of thread_mask.
This fix should be backported to 1.8.
fd_insert() is currently called just after setting the owner and iocb,
but proceeding like this prevents the operation from being atomic and
requires a lock to protect the maxfd computation in another thread from
meeting an incompletely initialized FD and computing a wrong maxfd.
Fortunately for now all fdtab[].owner are set before calling fd_insert(),
and the first lock in fd_insert() enforces a memory barrier so the code
is safe.
This patch moves the initialization of the owner and iocb to fd_insert()
so that the function will be able to properly arrange its operations and
remain safe even when modified to become lockless. There's no other change
beyond the internal API.
The polling updates are now performed exactly like the epoll/kqueue
ones : only the new polled state is considered, and the previous one
is checked using polled_mask. The only specific stuff here is that
the fd state is shared between all threads, so an FD removal has to
be done only once.
Given that FD_{CLR,SET} are not always guaranteed to be thread safe,
let's fall back to using the hap_fd_* functions as we used to till
1.5-dev18 and as poll() continues to use. This will make it easier
to remove the poll_lock.
These functions were created for poll() in 1.5-dev18 (commit 80da05a4) to
replace the previous FD_{CLR,SET,ISSET} that were shared with select()
because some libcs enforce a limit on FD_SET. But FD_SET doesn't seem
to be universally MT-safe, requiring locks in the select() code that
are not needed in the poll code. So let's move back to the initial
situation where we used to only use bit fields, since that has been in
use since day one without a problem, and let's use these hap_fd_*
functions instead of FD_*.
This patch only moves the functions to fd.h and revives hap_fd_isset()
that was recently removed to kill an "unused" warning.
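For illustration, such bit-field helpers typically look like this sketch
(assuming 32-bit words; the actual fd.h code may differ):

    /* one bit per fd in an array of words */
    static inline void hap_fd_set(int fd, unsigned int *evts)
    {
        evts[fd / 32] |= 1U << (fd & 31);
    }

    static inline void hap_fd_clr(int fd, unsigned int *evts)
    {
        evts[fd / 32] &= ~(1U << (fd & 31));
    }

    static inline int hap_fd_isset(int fd, const unsigned int *evts)
    {
        return !!(evts[fd / 32] & (1U << (fd & 31)));
    }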
Last commit 173d995 ("MEDIUM: polling: start to move maxfd computation
to the pollers") moved the maxfd computation to the polling loop, but
it still adds an entry when removing an fd, forcing the next loop to
seek from further away than necessary. Let's only update the max when
actually adding an entry.
Since only select() and poll() still make use of maxfd, let's move
its computation right there in the pollers themselves, and only
during each fd update pass. The computation doesn't need a lock
anymore, only a few atomic ops. It will be accurate, be done much
less often and will not be required anymore in the FD's fast path.
This provides a small performance increase of about 1% in connection
rate when using epoll since we get rid of this computation which was
performed under a lock.
The listeners and connectors may complain that process-wide or
system-wide FD limits have been reached and will in this case report
maxfd as the limit. This is wrong in fact since there's no reason for
the whole FD space to be contiguous when the total # of FDs is reached.
A better approach would consist in reporting the accurate number of
opened FDs, but this is pointless as what matters here is to give a
hint about what might be wrong. So let's simply report the configured
maxsock, which will generally explain why the process' limits were
reached, which is the most common reason. This removes another
dependency on maxfd.
Maxfd is really only useful to poll() and select(), yet epoll and
kqueue reference it almost by mistake :
- cloning of the initial FDs (maxsock should be used here)
- max polled events, it's maxpollevents which should be used here.
Let's fix these places.
The "show fd" command on the CLI doesn't list the last FD in use since
it doesn't include maxfd. We don't need to use maxfd here anyway as
global.maxsock will do the job pretty well and removes this dependency.
This patch may be backported to 1.8.
Add an optional second parameter to the ipmask converter that specifies
the number of bits to mask off IPv6 addresses.
If the second parameter is not given IPv6 addresses fail to mask (resulting
in an empty string), preserving backwards compatibility: Previously
a sample like `src,ipmask(24)` failed to give a result for IPv6 addresses.
This feature can be tested like this:
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
frontend fe
bind :::8080 v4v6
# Masked IPv4 for IPv4, empty for IPv6 (with and without this commit)
http-response set-header Test %[src,ipmask(24)]
# Correctly masked IP addresses for both IPv4 and IPv6
http-response set-header Test2 %[src,ipmask(24,ffff:ffff:ffff:ffff::)]
# Correctly masked IP addresses for both IPv4 and IPv6
http-response set-header Test3 %[src,ipmask(24,64)]
default_backend be
backend be
server s example.com:80
Tested-By: Jarno Huuskonen <jarno.huuskonen@uef.fi>
The len2mask4 function was introduced in commit:
70473a5f8c
which is about six years later than the commit that introduced the
str2mask function:
2937c0dd20
This is a clean up in preparation for a str2mask6 function which
will use len2mask6.
c_ipv62ip failed to set the output type of the cast to SMP_T_IPV4
even for a successful conversion.
This bug exists as of commit cc4d1716a2
which is the first commit adding this function.
v1.6-dev4 is the first tag containing this commit, the fix should
be backported to haproxy 1.6 and newer.
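A hedged sketch of the fixed cast (the sample union layout is assumed from
this description):

    static int c_ipv62ip(struct sample *smp)
    {
        if (!v6tov4(&smp->data.u.ipv4, &smp->data.u.ipv6))
            return 0;
        smp->data.type = SMP_T_IPV4;   /* was missing on success */
        return 1;
    }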
The cast functions modify their output type as of commit:
b805f71d1b
v1.5-dev20 is the first tag containing this commit, the fix
should be backported to haproxy 1.5 and newer.
The file contained an 'e' with a grave accent and thus was
not US-ASCII, but ISO-8859-1.
Also correct the spelling in the affected comment.
The incorrect character was introduced in commit:
4d9a1d1a5c
v1.6-dev1 is the first tag containing this commit, the fix
should be backported to haproxy 1.6 and newer.
This is the same patch as the previous one ("BUILD: epoll/threads: Add test on
MAX_THREADS to avoid warnings when compiled without threads").
It should be backported in 1.8 with the commit 7a2364d4 ("BUG/MEDIUM:
kqueue/threads: use one kqueue_fd per thread").
When HAProxy is compiled without threads, gcc throws the following warnings:
src/ev_epoll.c:222:3: warning: array subscript is outside array bounds [-Warray-bounds]
...
src/ev_epoll.c:199:11: warning: array subscript is outside array bounds [-Warray-bounds]
...
Of course, this is not a bug. In such a case, tid is always equal to 0. But to
avoid the noise, a check on MAX_THREADS in the "if (tid)" lines makes gcc happy.
This patch should be backported in 1.8 with the commit d9e7e36c ("BUG/MEDIUM:
epoll/threads: use one epoll_fd per thread").
A #ifdef/#endif on USE_THREAD was added in the commit 0048dd04 ("MINOR: threads:
Fix build when we're not compiling with threads.") to conditionally define the
start_lock variable, because HA_SPINLOCK_T is only defined when HAProxy is
compiled with threads.
In fact, to do that, we should use the macro __decl_hathreads instead.
If commit 0048dd04 is backported in 1.8, this one can also be backported.
In deinit_kqueue_per_thread, kqueue_fd[tid] must be closed, except for the main
thread (the first one, tid==0).
This patch must be backported in 1.8 with commit 7a2364d4.
When a health check is released, the attached conn_stream may be undefined. For
instance, this happens when the 'no-check' option is used on a server line. So
we must check that it is defined before trying to release it.
This patch must be backported in 1.8.
Because of a typo (HA_SPIN_LOCK instead of HA_SPIN_UNLOCK), there is a deadlock
in srv_set_stopping and srv_set_admin_flag when there is at least one tracker.
This patch must be backported in 1.8.
The owner of the fd used by the synchronization pipe was set to NULL,
making it ignored by maxfd computation. The risk would be that some
synchronization events get delayed between threads when using poll()
or select(). However this is only theorical since the pipe is created
before listeners are bound so normally its FD should be lower and
this should normally not happen. The only possible situation would
be if all listeners are bound to inherited FDs which are lower than
the pipe's.
This patch must be backported to 1.8.
Only declare the start_lock if threads are compiled in, otherwise
HA_SPINLOCK_T won't be defined.
This should be backported to 1.8 when/if
1605c7ae61 is backported.