Bryan Talbot reported that a config with only "stats bind-process 1" in
the global section would crash during parsing. This bug was introduced
with this new statement in 1.5-dev13. The stats frontend must be allocated
if it has not been already.
No backport is needed. The workaround consists of declaring any other stats
keyword first (generally "stats socket").
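For illustration, a global section along these lines avoids the crash, presumably
because the earlier "stats" keyword already allocates the stats frontend before
"stats bind-process" is parsed (the socket path is only an example) :

    global
        stats socket /var/run/haproxy.stat
        stats bind-process 1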
A "soft-stopped" server (weight 0) can be missed easily in the
webinterface. Fix this by using a specific class (and color).
Signed-off-by: Lukas Tribus <luky-37@hotmail.com>
When a server state change happens, a status bar appears with a link. Since
1.5-dev16 (commit e7dbfc66), it was not visible anymore because of
the CSS properties set on the whole "div" tag used by the tooltips. Fix this
by using a specific class for the tooltips.
The confirmation link on the web interface forgets to set the display
options, so that when one clicks on "[X] Action processed successfully",
the auto-refresh, up/down and scope search settings are lost. We need to
pass all this information in the new link.
Commit 88c278fadf provided a new input field on the statistics page which was
focused by default. The autofocus prevented scrolling down the page with the
keyboard immediately after loading it, unless that field was left first.
It also interfered with "stats refresh" by scrolling back up to this field on
each refresh.
This small patch removes the autofocus keyword and adds a tabindex="1"
attribute, as suggested by Vincent Bernat. It also cleans up the generated
HTML code.
This patch adds a "scope" box in the statistics page in order to
display only proxies with a name that contains the requested value.
The scope filter is preserved across all clicks on the page.
Will Glass-Husain reported this breakage, which was caused by commit
dec9814e merged in 1.5-dev12. That commit was incomplete, resulting in
both "clear table" and "show table" doing the same thing.
No backport is needed.
Break out set weight processing code.
This is in preparation for reusing the code.
Also, remove duplicate check in nested if clauses.
(px->lbprm.algo & BE_LB_PROP_DYN) is checked by
the immediate outer if clause, so there is no need
to check it a second time.
Signed-off-by: Simon Horman <horms@verge.net.au>
Currently s->listener is set for all sessions, but this may not remain
the case forever, so we already check s->listener for validity. One such
check was missed.
Reported-by: Dinko Korunic <dkorunic@reflected.net>
These macros (U2H, U2A, LIM2A, ...) have been used with an explicit
index for the local storage variable, making it difficult to change
log formats and causing a few issues from time to time. Let's have
a single macro with a rotating index so that up to 10 conversions
may be used in a single call.
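A minimal sketch of the rotating-storage idea (the names, the buffer count and
the plain decimal formatting are illustrative, not the exact implementation) :

    #include <stdio.h>

    #define NB_ITOA_STR 10
    static char itoa_str[NB_ITOA_STR][24];
    static int  itoa_idx;

    /* return a pointer to the next static buffer holding <n>; the result
     * remains valid across up to NB_ITOA_STR conversions within a single
     * printf()-style call */
    static const char *ultoa_next(unsigned long n)
    {
        itoa_idx = (itoa_idx + 1) % NB_ITOA_STR;
        snprintf(itoa_str[itoa_idx], sizeof(itoa_str[itoa_idx]), "%lu", n);
        return itoa_str[itoa_idx];
    }

    #define U2A(v) ultoa_next(v)   /* replaces the explicitly indexed variants */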
Frontend, listener, server and backend detailed counters are now spread
over several lines in a table when the pointer hovers over the <u> area.
The values are much more readable, and the extra space gained this way
made it possible to report some percentages.
Note: some inconsistencies still exist between some counters. For example,
the backend's cum_conn is increased when a session is assigned instead
of increasing cum_sess.
Using titles to report detailed information is not convenient. The
browser decides to wrap the line where it wants, generally the box
quickly fades away, and it's not possible to copy-paste the text
from the box.
By using two levels, we can make a block appear/disappear depending
on whether its parent is being hovered or not, for example :
    .tip { display: none; }
    u:hover .tip { display: block; }
or better:
    .tip { display: block; visibility: hidden; }
    u:hover .tip { visibility: visible; }
Toggling visibility ensures that the place required to display the block
is always reserved. This is important to display boxes that are close to
the edges.
Then using <span class="tip">this is a box</span> will make the text
appear only when the upper <u> is hovered.
But this still adds much text. So instead we use a generic <div> tag
which we don't use anywhere else. That way we don't have to specify a
class :
    div { display: block; visibility: hidden; }
    u:hover div { visibility: visible; }
This works pretty well, even in old browsers from 2005.
This commit does not change the display format, it only replaces the
title attribute with the div tag. Later commits will adjust the layout.
All the functions that were called "http" or "raw" have been cleaned up
so that they either support only HTML and use that word in their name, or
support both HTML and CSV and are usable from both the HTTP and CLI entry
points.
stats_http_dump() now explicitly checks for HTML mode before adding
HTML-specific headers/trailers, and has been renamed since it's now
used by both entry points.
Some more duplicated code could be removed. The patch looks big but it's
mostly due to re-indents resulting from the moves and comment updates.
The calling sequences now look like this :
cli_io_handler()
    -> stats_dump_sess_to_buffer()        // "show sess"
    -> stats_dump_errors_to_buffer()      // "show errors"
    -> stats_dump_info_to_buffer()        // "show info"
    -> stats_dump_stat_to_buffer()        // "show stat"
       -> stats_dump_csv_header()
       -> stats_dump_proxy_to_buffer()
          -> stats_dump_fe_stats()
          -> stats_dump_li_stats()
          -> stats_dump_sv_stats()
          -> stats_dump_be_stats()

http_stats_io_handler()
    -> stats_dump_stat_to_buffer()        // same as above, but used for CSV or HTML
       -> stats_dump_csv_header()         // emits the CSV headers (same as above)
       -> stats_dump_html_head()          // emits the HTML headers
       -> stats_dump_html_info()          // emits the equivalent of "show info" at the top
       -> stats_dump_proxy_to_buffer()    // same as above, valid for CSV and HTML
          -> stats_dump_html_px_hdr()
          -> stats_dump_fe_stats()
          -> stats_dump_li_stats()
          -> stats_dump_sv_stats()
          -> stats_dump_be_stats()
          -> stats_dump_html_px_end()
       -> stats_dump_html_end()           // emits HTML trailer
The HTTP header injection performed in dumpstats when responding or when
redirecting a POST request does not belong in dumpstats. It does not use
any state from the stats and is 100% HTTP. Let's build these headers in
the HTTP core, and have dumpstats only produce the stats.
The dumpstats code looks like a spaghetti plate. Several functions are
supposed to be able to do several things but rely on complex states to
dispatch the work to independent functions. Most of the HTML output is
performed within the switch/case statements of the whole state machine.
Let's clean this up by adding new functions to emit the data and have
a few more iterators to avoid relying on so complex states.
The new stats dump sequence looks like this for CLI and for HTTP :
cli_io_handler()
    -> stats_dump_sess_to_buffer()        // "show sess"
    -> stats_dump_errors_to_buffer()      // "show errors"
    -> stats_dump_raw_info_to_buffer()    // "show info"
       -> stats_dump_raw_info()
    -> stats_dump_raw_stat_to_buffer()    // "show stat"
       -> stats_dump_csv_header()
       -> stats_dump_proxy()
          -> stats_dump_px_hdr()
          -> stats_dump_fe_stats()
          -> stats_dump_li_stats()
          -> stats_dump_sv_stats()
          -> stats_dump_be_stats()
          -> stats_dump_px_end()

http_stats_io_handler()
    -> stats_http_redir()
    -> stats_dump_http()                  // also emits the HTTP headers
       -> stats_dump_html_head()          // emits the HTML headers
       -> stats_dump_csv_header()         // emits the CSV headers (same as above)
       -> stats_dump_http_info()          // note: ignores non-HTML output
       -> stats_dump_proxy()              // same as above
       -> stats_dump_http_end()           // emits HTML trailer
Benjamin Polidore reported a build issue on Solaris with gcc 4.2.4 where
stdbool.h is not usable without C99. It only appeared at one location in
dumpstats and is totally useless there, so let's use the more common and
portable int, as everywhere else.
Commit 5730c68b changed the stats to display compression ratios based on
2xx responses, but we should then check that there are such responses
instead of checking for requests. The risk is a divide error if there
are some requests but no 2xx yet (eg: redirect).
Since only responses with status 200 can be compressed, let's only count the
ratio of compressed responses on the basis of the 2xx responses and not all
of them. Note that 206 responses are still included in this count, but it
gives a better figure, especially for places where authentication is used
and 401 responses are common.
This change removes pointers for known types (stream_interface, ...),
adds buffer pointers and sizes, and moves buffer information to its own
line. The output is cleaner, with shorter lines and slightly more
lines.
show sess <id> puts a backref into the session it's dumping. If the output
is interrupted, the backref cannot always be removed because it's only done
in the I/O handler. This can randomly corrupt the backref list when the
session closes, because it passes the pointer to the next session which
itself might be watched.
The case is hard to reproduce (hundreds of attempts) but monitoring systems
might encounter it frequently.
Thus we have to add a release handler which does the cleanup even when the
I/O handler is not called.
This issue should also be present in 1.4 so the patch should be backported.
Sometimes when debugging haproxy, it is important to take a full
snapshot of all sessions and their respective states. Till now it
was complicated to do because we had to use scripts and sessions
would vanish between two runs.
Now with this command we have the same output as "show sess $id"
but for all sessions in the table. This is a debugging command only,
it should only be used by developers as it is never guaranteed to
work perfectly!
Depending on the content-types and accept-encoding fields, some responses
might or might not be compressed. Let's have a counter of the number of
compressed responses and report it in the stats to help improve compression
usage.
Some cosmetic issues were fixed in the CSV output too (missing commas at the
end).
It's interesting to know the average compression ratio obtained on
frontends and backends without having to compute it by hand, so let's
report it in the HTML stats.
The porting of checks to using connections was totally bogus. Some checks
were considered successful as soon as the connection was established,
regardless of any response. Some errors would be triggered upon recv
if polling was enabled for send or if the send channel was shut down.
Now the behaviour is much better. It would be cleaner to perform the
fd_delete() in wake_srv_chk() and to process failures and timeouts
separately, but this is already a good start.
It was a bit frustrating to have no idea about the bandwidth saved by
HTTP compression. Now we have per-frontend and per-backend stats. The
stats on the HTTP interface are shown in a hover title in the "bytes out"
column if at least something was fed to the compressor. Three new columns
appear in the CSV stats output.
Instead of storing a couple of (int, ptr) in the struct connection
and the struct session, we use a different method : we only store a
pointer to an integer which is stored inside the target object and
which contains a unique type identifier. That way, the pointer allows
us to retrieve the object type (by dereferencing it) and the object's
address (by computing the displacement in the target structure). The
NULL pointer always corresponds to OBJ_TYPE_NONE.
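A minimal self-contained sketch of the mechanism (the type and function names
are simplified for illustration and only cover one object type) :

    #include <stddef.h>

    enum obj_type {
        OBJ_TYPE_NONE = 0,
        OBJ_TYPE_SERVER,
        /* ... one value per type of pointable object ... */
    };

    struct server {
        enum obj_type obj_type;  /* placed first: &srv->obj_type == (void *)srv */
        /* ... rest of the server ... */
    };

    /* dereferencing the stored pointer gives the object's type */
    static inline enum obj_type obj_type(const enum obj_type *t)
    {
        return t ? *t : OBJ_TYPE_NONE;
    }

    /* computing the displacement gives back the enclosing object */
    static inline struct server *objt_server(enum obj_type *t)
    {
        if (!t || *t != OBJ_TYPE_SERVER)
            return NULL;
        return (struct server *)((char *)t - offsetof(struct server, obj_type));
    }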
This reduces the size of the connection and session structs. It also
simplifies target assignment and comparison.
In order to improve the generated code, we try to put the obj_type
element at the beginning of all the structs (listener, server, proxy,
si_applet), so that the original and target pointers are always equal.
A lot of code was touched by massive replaces, but the changes are not
that important.
si_fd() is not used a lot, and breaks builds on OpenBSD 5.2 which
defines this name for its own purpose. It's easy enough to remove
this one-liner function, so let's do it.
This patch adds input and output rate calculation for the HTTP compression
feature.
Compression can be limited with a maximum rate value in kilobytes per
second. The rate is set with the global 'maxcomprate' option. You can
change this value dynamically with 'set rate-limit http-compression
global' on the UNIX socket.
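For illustration (the numeric values are arbitrary), the limit could be set in
the configuration :

    global
        maxcomprate 512

and adjusted later at run time through the UNIX socket :

    set rate-limit http-compression global 1024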
Keys are copied from samples to stick_table_key. If a key is larger
than the stick_table_key, we have an overflow. In practice it does not
happen because it requires :
1) a configuration with tune.bufsize larger than BUFSIZE (common)
2) a stick-table configured with keys strictly larger than buffers
3) extraction of data larger than BUFSIZE (eg: using payload())
Points 2 and 3 don't make any sense for a real-world configuration. That
said, the issue needs to be fixed. The solution consists of allocating the
key with the same size as the global buffer, just like the samples. This
fixes the issue.
The trash is used everywhere to store the results of temporary strings
built out of s(n)printf, or as a storage for a chunk when chunks are
needed.
Using global.tune.bufsize is not the most convenient thing either.
So let's replace trash with a chunk and directly use it as such. We can
then use trash.size as the natural way to get its size, and get rid of
many intermediary chunks that were previously used.
The patch is huge because it touches many areas but it makes the code
a lot more clear and even outlines places where trash was used without
being that obvious.
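As a rough illustration (the format string and arguments are made up), code
that used to write into the bare buffer now simply does :

    /* before: snprintf(trash, global.tune.bufsize, "%d sessions", n); */
    snprintf(trash.str, trash.size, "%d sessions", n);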
This function's naming was misleading as it is used to append data
at the end of a string, causing some surprises when used for the
first time!
Add a chunk_printf() function which does what its name suggests.
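A schematic illustration of the intended semantics (the renamed appending
function is shown here as chunk_appendf(), and the format strings and
variables are only examples) :

    chunk_printf(&trash, "srv %s", srv_name);    /* (re)writes from the start of the chunk */
    chunk_appendf(&trash, " is %s", state_str);  /* appends after the existing content */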
We will need to be able to switch server connections on a session and
to keep idle connections. In order to achieve this, the preliminary
requirement is that the connections can survive the session and be
detached from it.
Right now they're still allocated at exactly the same place, so when
there is a session, there are always 2 connections. We could soon
improve on this by allocating the outgoing connection only during a
connect().
This current patch touches a lot of code and intentionally does not
change any functionality. Performance tests show no regression (even
a very minor improvement). The doc has not yet been updated.
Using "stats bind-process", it becomes possible to indicate to haproxy which
process will get the incoming connections to the stats socket. It also
silences the warning that is emitted when nbproc > 1.
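For example (purely illustrative values and paths) :

    global
        nbproc 4
        stats socket /var/run/haproxy.stat
        stats bind-process 1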
With this commit, we now separate the channel from the buffer. This will
allow us to replace buffers on the fly without touching the channel. Since
nobody is supposed to keep a reference to a buffer anymore, doing so is not
a problem and will also permit some copy-less data manipulation.
Interestingly, these changes have shown a 2% performance increase on some
workloads, probably due to a better cache placement of data.
The health checks in the servers are becoming a real mess, move them
into their own subsection. We'll soon need to have a struct buffer to
replace the char * as well as check-specific protocol and transport
layers.