This alone makes a typical HTML stats dump consume about 10% less CPU,
because we avoid performing complex printf calls whose output would be
dropped later.
Only a few common cases have been checked, those which are very
likely to run for nothing.
It is a bit expensive and complex to call buffer_feed()
directly from the request parser, and there is a risk that some
output messages are lost when the buffer is full. Since most of
these messages are static, let's have a state dedicated to printing
these messages and store them in a specific area shared with the
stats in the session. This reduces both code size and the risk of
losing output data.
Krzysztof reported that using names only for "get weight"/"set weight"
was not enough, because it is still possible to have multiple servers
with the same name (and my test config is one of those). He suggested
making it possible to designate them by their unique numeric IDs as
well, by prefixing the ID with a '#' sign.
That way we can have:
set weight #120/#2
as well as
get weight static/srv1
Capture & display more data from health checks, such as
strerror(errno) for failed L4 checks, or the first line
of the response for L7 successes/failures.
Non-ASCII and control characters are masked with
chunk_htmlencode() (HTML stats) or chunk_asciiencode() (logs).
Add two functions to encode an input chunk, replacing
non-printable, non-ASCII or special characters
with:
"&#%u;" - chunk_htmlencode
"<%02X>" - chunk_asciiencode
These functions should be used when adding strings received
from possibly unsafe sources to the HTML stats or to the logs.
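As an illustration, here is a minimal standalone sketch of the HTML
variant's logic. It is not HAProxy's actual implementation: the real
chunk_htmlencode() operates on struct chunk, and the buffer handling
below is simplified:

  #include <stdio.h>

  /* Sketch only: encode control, non-ASCII and HTML-special bytes as
   * "&#%u;" as described above; everything else is copied verbatim.
   * Returns the output length, or -1 if the output was truncated. */
  static int htmlencode(char *dst, size_t size, const char *src)
  {
      size_t len = 0;

      for (; *src; src++) {
          unsigned char c = *src;
          int n;

          if (c < 0x20 || c >= 0x7F || c == '&' || c == '<' || c == '>')
              n = snprintf(dst + len, size - len, "&#%u;", (unsigned)c);
          else
              n = snprintf(dst + len, size - len, "%c", c);
          if (n < 0 || len + n >= size)
              return -1;
          len += n;
      }
      return (int)len;
  }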
int get_backend_server(const char *bk_name, const char *sv_name,
struct proxy **bk, struct server **sv);
This function scans the list of backends and servers to retrieve the first
backend and the first server with the given names, and sets them in both
parameters. It returns zero if either is not found, setting the pointers
it could not resolve to NULL, and non-zero if both are found. If a NULL
pointer is passed for the backend, only the pointer to the server will be
updated.
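A hedged usage sketch, with illustrative backend/server names:

  struct proxy *bk;
  struct server *sv;

  /* resolve backend "static" and its server "srv1" */
  if (!get_backend_server("static", "srv1", &bk, &sv)) {
      /* either was not found; unresolved pointers are NULL */
  }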
The stats socket can now run at 3 different levels:
- user
- operator (the default)
- admin
These levels are used to restrict access to some information
and commands. Only the admin may clear all stats. A user can
neither clear anything nor access sensitive data such as sessions
or errors.
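For instance, assuming the usual global "stats socket" syntax and an
illustrative path, the level is chosen where the socket is declared:

  stats socket /var/run/haproxy.stat level admin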
The most common use of "clear counters" should be to only clear
max values without affecting cumulated values, for instance,
after an incident. So we change "clear counters" to only clear
max values, and add "clear counters all" to clear all counters.
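For example, from a shell (the socket path is illustrative, socat being
the usual client):

  echo "clear counters" | socat stdio unix-connect:/var/run/haproxy.stat
  echo "clear counters all" | socat stdio unix-connect:/var/run/haproxy.stat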
I noticed that in __eb32_insert, if the tree is empty
(root->b[EB_LEFT] == NULL), node.bit is left undefined.
However in __task_queue there are checks:
- if (last_timer->node.bit < 0)
- if (task->wq.node.bit < last_timer->node.bit)
which might rely upon an undefined value.
This is how I see it:
1. We insert an eb32_node into an empty wait queue tree for a task (called by
process_runnable_tasks()):
Inserting into empty wait queue &task->wq = 0x72a87c8, last_timer
pointer: (nil)
2. Then, we set the last timer to the same address:
Setting last_timer: (nil) to: 0x72a87c8
3. We get a new task to be inserted into the queue (again called by
process_runnable_tasks()), before __task_unlink_wq() is called for
the previous task.
4. At this point, we still have last_timer set to 0x72a87c8, but since
it was inserted into an empty tree, its node.bit was never set, so the
checks above read an undefined value.
The bug has no effect right now because the check for equality is still
made, so the next timer will still be queued at the right place anyway,
without any possible side-effect. But it's a pending bug waiting for a
small change somewhere to strike.
Iliya Polihronov
These ACLs are used to check the number of active connections on the
frontend, backend or in a backend's queue. The avg_queue returns the
average number of queued connections per server, and for this, divides
the total number of queued connections by the number of alive servers.
The dst_conn ACL has been slightly changed to better reflect its name and
original usage, which is to return the number of connections on the
destination address/port (the socket) and not on the whole frontend.
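For example, using the two criteria named above (the backend name and
the thresholds are illustrative):

  acl be_overloaded avg_queue(dynamic) gt 10
  acl sock_busy     dst_conn gt 900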
Consistent hashing provides some interesting advantages over common
hashing. It avoids full redistribution in case of a server failure,
or when expanding the farm. This has a cost however: the hashing is
far from perfect, as we associate a server with a request by
searching for the server with the closest key in a tree. Since servers
appear multiple times based on their weights, it is recommended to
use weights larger than approximately 10-20 in order to smooth
the distribution a bit.
In some cases, playing with weights will be the only solution to
make a server appear more often and increase chances of being picked,
so stats are very important with consistent hashing.
In order to indicate the type of hashing, use:
hash-type map-based (default, old one)
hash-type consistent (new one)
Consistent hashing can make sense in a cache farm, in order not
to redistribute everyone when a cache changes state. It could also
probably be used for long sessions such as terminal sessions, though
that has not been attempted yet.
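For example, a cache farm could be declared as follows (addresses
illustrative, weights sized per the recommendation above):

  backend cache
      balance uri
      hash-type consistent
      server c1 192.168.1.1:80 weight 16
      server c2 192.168.1.2:80 weight 16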
More details on this method of hashing here:
http://www.spiteful.com/2008/03/17/programmers-toolbox-part-3-consistent-hashing/
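As a side note, here is a minimal self-contained illustration of the
lookup described above. It uses a sorted array instead of a tree, a
made-up hash, and the common next-key-clockwise variant of the lookup,
so it only sketches the principle:

  #include <stdio.h>
  #include <stdlib.h>

  struct point { unsigned key; int server; };

  /* illustrative hash, not HAProxy's */
  static unsigned mix(unsigned v) { v *= 2654435761u; return v ^ (v >> 16); }

  static int cmp_point(const void *a, const void *b)
  {
      unsigned ka = ((const struct point *)a)->key;
      unsigned kb = ((const struct point *)b)->key;
      return (ka > kb) - (ka < kb);
  }

  int main(void)
  {
      struct point ring[2 * 16];   /* 2 servers, weight 16 each */
      int n = 0, i = 0;
      unsigned h;

      /* each server appears <weight> times on the ring */
      for (int srv = 0; srv < 2; srv++)
          for (int w = 0; w < 16; w++)
              ring[n++] = (struct point){ mix(srv * 1000 + w), srv };
      qsort(ring, n, sizeof(*ring), cmp_point);

      /* map a request to the first point at or after its hash, wrapping */
      h = mix(42);
      while (i < n && ring[i].key < h)
          i++;
      printf("request -> server %d\n", ring[i % n].server);
      return 0;
  }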
Commit 404e8ab461 introduced
smart checking for stupid ACL typos. However, haproxy now shows
the warning even for valid ACLs, like this one:
acl Cookie-X-NoAccel hdr_reg(cookie) (^|\ |;)X-NoAccel=1(;|$)
If a frontend does not set 'option socket-stats', a 'clear counters'
on the stats socket could segfault because li->counters is NULL. The
correct fix is to check for NULL beforehand, as this is a valid situation.
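A hedged sketch of the shape of the fix (only the guard matters; the
struct contents are not shown):

  /* listeners may legitimately have no counters: skip them */
  if (li->counters)
      memset(li->counters, 0, sizeof(*li->counters));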
Recent "struct chunk rework" introduced a NULL pointer dereference
and now haproxy segfaults if auth is required for stats but not found.
The reason is that size_t cannot store negative values, but the current
code assumes that "len < 0" means uninitialized.
This patch fixes it.
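The pitfall itself is easy to reproduce in isolation (illustrative
code, not HAProxy's):

  #include <stdio.h>

  int main(void)
  {
      size_t len = (size_t)-1;     /* intended "uninitialized" marker */

      if (len < 0)                 /* always false: size_t is unsigned */
          printf("uninitialized\n");
      else
          printf("len = %zu\n", len);  /* huge value, marker missed */
      return 0;
  }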
There are a few remaining max values that need to move to counters.
Also, the counters are more often used than some config information,
so get them closer to the other useful struct members for better cache
efficiency.
Until now it was required that every custom ID was above 1000 in order to
avoid conflicts. Now we have the list of all assigned IDs and can automatically
pick the first unused one. This means that it is perfectly possible to interleave
automatic IDs with persistent IDs and the parser will automatically allocate
unused values starting with 1.
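A minimal sketch of that allocation rule (HAProxy keeps the assigned
IDs in a tree; a sorted, duplicate-free array stands in for it here):

  /* return the first unused ID >= 1 given the IDs already assigned */
  static int first_free_id(const int *used, int n)
  {
      int id = 1;

      for (int i = 0; i < n; i++) {
          if (used[i] > id)
              break;        /* gap found before used[i] */
          if (used[i] == id)
              id++;         /* taken, try the next one */
      }
      return id;
  }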
When a name or ID conflict is detected, it is sometimes useful to know
where the other one was declared. Now that we have this information,
report it in error messages.
This patch allows collecting & providing separate statistics for each socket.
It can be very useful if you would like to distinguish between traffic
generated by local and remote users, or between different types of remote
clients (peerings, domestic, foreign).
Currently no "Session rate" is supported, but adding it should be possible
if we find it useful.
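A hedged sketch of what this enables, assuming each listening socket
can be given a name to identify it in the reports (addresses and names
are illustrative):

  frontend www
      bind 192.168.0.1:80 name internal
      bind 203.0.113.1:80 name external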
Doing this, we can remove the last BF_HIJACK user and remove
produce_content(). s->data_source could also be removed but
it is currently used to detect if the stats or a server was
used.
The stats handler used to store internal states in s->ana_state. Now
we only rely on si->st0 in which we can store as many states as we
have possible outputs. This cleans up the stats code a lot and makes
it more maintainable. It has also reduced code size by a few hundred
bytes.
We can simplify the code in the stats functions using buffer_feed_chunk()
instead of buffer_write_chunk(). Let's start with this function. This
patch also fixes an issue where we could dump past the end of the capture
buffer if it is shorter than the captured request.
In old versions, before 1.3.16, we had to refresh the timeouts after
each call to process_session() because the stream socket handler did
not do it. Now that the sockets can exchange data for a long period
without calling process_session(), we may detect old activity and
refresh a timeout long after the last activity, causing some timeouts
to be detected too late.
The fix simply consists in not checking for activity anymore in
stream_sock_data_finish(), but only setting a timeout if none was
previously set.
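A hedged sketch of the resulting logic (field and helper names follow
the spirit of the tick API but are illustrative, not the exact patch):

  /* arm the read timeout only if none is pending; do not refresh
   * it on mere activity anymore */
  if (!tick_isset(ib->rex))
      ib->rex = tick_add_ifset(now_ms, ib->rto);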
Calling buffer_shutw() marks the buffer as closed but if it was already
closed in the other direction, the stream interface is not marked as
closed, causing infinite loops.
We took this opportunity to completely remove buffer_shutw() and buffer_shutr()
which have no reason to be used at all and which will always cause trouble
when directly called. The stats occurrence was the last one.
By default, when data is sent over a socket, both the write timeout and the
read timeout for that socket are refreshed, because we consider that there is
activity on that socket, and we have no other means of guessing if we should
receive data or not.
While this default behaviour is desirable for almost all applications, there
exists a situation where it is desirable to disable it, and only refresh the
read timeout if there is incoming data. This happens on sessions with large
timeouts and low amounts of exchanged data, such as telnet sessions. If the
server suddenly disappears, the output data accumulates in the system's
socket buffers, both timeouts are correctly refreshed, and there is no way
to know that the server does not receive them, so we don't time out. However,
when the underlying protocol always echoes sent data, the read timeout would
be enough by itself to detect the issue. Note that this problem does not
happen with more verbose protocols because data won't accumulate long in the
socket buffers.
When this option is set on the frontend, it will disable read timeout updates
on data sent to the client. There is probably little use for this case. When
the option is set on the backend, it will disable read timeout updates on
data sent to the server. Doing so will typically break large HTTP POSTs from
slow lines, so use it with caution.
During troubleshooting, it's often useful to get the list of supported
pollers but until now it was required to have a working configuration
first. Since the pollers are known before main() is called, let's list
them with the build options.
The "static-rr" is just the old round-robin algorithm. It is still
in use when a hash algorithm is used and the data to hash is not
present, but it was impossible to configure it explicitly. This one
is cheaper in terms of CPU and supports unlimited numbers of servers,
so it makes sense to be able to use it.
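For example (server list illustrative):

  backend be_static
      balance static-rr
      server s1 10.0.0.1:80
      server s2 10.0.0.2:80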
LB algo macros were composed of the LB algo by itself, without any indication
of the method to use to look up a server (the LB function itself). This
method was implied by the LB algo, which made it inconvenient to add
more algorithms. Now we have several fields in the LB macros: some
describe what to look for in the requests, some describe how to transform
it (the kind of algo) and some describe which lookup function to use.
The next patch will make it possible to factor out some code for all algos
which rely on a map.
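An illustrative sketch of such a layout (the constant names and field
widths are made up, not HAProxy's actual ones):

  /* one word split into independent fields */
  #define LB_WHAT  0x000000ff   /* what to look for: source IP, URI, ... */
  #define LB_KIND  0x0000ff00   /* kind of algo: round-robin, hash, ...  */
  #define LB_LKUP  0x00ff0000   /* lookup function: map, tree, ...       */

  /* a complete algorithm ORs one value from each field, so a new
   * lookup method can be combined with existing algo kinds */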
The lbprm structure has moved to backend.h, where it should be, and
all algo-specific types and declarations have moved to their specific
files. The proxy struct is now much more readable.