This setting is used to limit memory usage without causing the allocation
failures that "-m" can cause. Unexpectedly, tests have shown a performance
boost of up to about 18% on HTTP traffic when limiting the number of
buffers to about 10% of the number of concurrent connections.
tune.buffers.limit <number>
Sets a hard limit on the number of buffers which may be allocated per process.
The default value is zero which means unlimited. The minimum non-zero value
will always be greater than "tune.buffers.reserve" and should ideally always
be about twice as large. Forcing this value can be particularly useful to
limit the amount of memory a process may take, while retaining a sane
behaviour. When this limit is reached, sessions which need a buffer wait for
another one to be released by another session. Since buffers are dynamically
allocated and released, the waiting time is very short and not perceptible
provided that limits remain reasonable. In fact sometimes reducing the limit
may even increase performance by increasing the CPU cache's efficiency. Tests
have shown good results on average HTTP traffic with a limit to 1/10 of the
expected global maxconn setting, which also significantly reduces memory
usage. The memory savings come from the fact that a number of connections
will not allocate 2*tune.bufsize. It is best not to touch this value unless
advised to do so by an haproxy core developer.
Used in conjunction with the dynamic buffer allocator.
tune.buffers.reserve <number>
Sets the number of buffers which are pre-allocated and reserved for use only
during memory shortage conditions resulting in failed memory allocations. The
minimum value is 2 and is also the default. There is no reason a user would
want to change this value; it's mostly aimed at haproxy core developers.
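For illustration only, such a limit could be set in the global section at
about 1/10 of the expected maxconn, as suggested above (the figures below
are purely illustrative, not a recommendation) :
    global
        maxconn 100000
        tune.buffers.limit 10000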
Previously, external checks required at least one listener in order to pass
the <proxy_address> and <proxy_port> arguments to the external script. This
prevented external checks from being declared in backend sections, and
haproxy rejected the configuration.
The listener is now optional and values "NOT_USED" are passed if no listener is
found. For instance, this is the case with a backend section.
This is specific to the 1.6 branch.
word(<index>,<delimiters>)
Extracts the nth word considering given delimiters from an input string.
Indexes start at 1 and delimiters are a string formatted list of chars.
field(<index>,<delimiters>)
Extracts the substring at the given index considering given delimiters from
an input string. Indexes start at 1 and delimiters are a string formatted
list of chars.
bytes(<offset>[,<length>])
Extracts some bytes from an input binary sample. The result is a
binary sample starting at an offset (in bytes) of the original sample
and optionally truncated at the given length.
Sometimes, either for debugging or for logging, we'd like to have a bit
of information about the running process. Here are 3 new fetches for this :
nbproc : integer
Returns an integer value corresponding to the number of processes that were
started (it equals the global "nbproc" setting). This is useful for logging
and debugging purposes.
proc : integer
Returns an integer value corresponding to the position of the process calling
the function, between 1 and global.nbproc. This is useful for logging and
debugging purposes.
stopping : boolean
Returns TRUE if the process calling the function is currently stopping. This
can be useful for logging, or for relaxing certain checks or helping close
certain connections upon graceful shutdown.
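A minimal sketch of these fetches in a log-format line (the format itself is
purely illustrative) :
    log-format proc=%[proc]/%[nbproc]\ stopping=%[stopping]\ %ci:%cp\ %r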
Adds global statements 'ssl-default-server-options' and
'ssl-default-bind-options' to force some ssl options on 'server' and
'bind' lines.
Currently available options are 'no-sslv3', 'no-tlsv10', 'no-tlsv11',
'no-tlsv12', 'force-sslv3', 'force-tlsv10', 'force-tlsv11',
'force-tlsv12', and 'no-tls-tickets'.
Example:
    global
        ssl-default-server-options no-sslv3
        ssl-default-bind-options no-sslv3
ssl_c_der : binary
Returns the DER formatted certificate presented by the client when the
incoming connection was made over an SSL/TLS transport layer. When used for
an ACL, the value(s) to match against can be passed in hexadecimal form.
ssl_f_der : binary
Returns the DER formatted certificate presented by the frontend when the
incoming connection was made over an SSL/TLS transport layer. When used for
an ACL, the value(s) to match against can be passed in hexadecimal form.
This converter escapes a string so that it can be used as a JSON/ASCII
escaped string. It can read UTF-8 with different behaviours on errors and
encode it as JSON/ASCII.
json([<input-code>])
Escapes the input string and produces an ASCII output string ready to use as a
JSON string. The converter tries to decode the input string according to the
<input-code> parameter. It can be "ascii", "utf8", "utf8s", "utf8p" or
"utf8ps". The "ascii" decoder never fails. The "utf8" decoder detects 3 types
of errors:
- bad UTF-8 sequence (lone continuation byte, bad number of continuation
bytes, ...)
- invalid range (the decoded value is within a UTF-8 prohibited range),
- code overlong (the value is encoded with more bytes than necessary).
The UTF-8 JSON encoding can produce a "too long value" error when the UTF-8
character is greater than 0xffff because the JSON string escape specification
only authorizes 4 hex digits for the value encoding. The UTF-8 decoder exists
in 4 variants designated by a combination of two suffix letters : "p" for
"permissive" and "s" for "silently ignore". The behaviors of the decoders
are :
- "ascii" : never fails ;
- "utf8" : fails on any detected errors ;
- "utf8s" : never fails, but removes characters corresponding to errors ;
- "utf8p" : accepts and fixes the overlong errors, but fails on any other
error ;
- "utf8ps" : never fails, accepts and fixes the overlong errors, but removes
characters corresponding to the other errors.
This converter is particularly useful for building properly escaped JSON for
logging to servers which consume JSON-formatted traffic logs.
Example:
    capture request header user-agent len 150
    capture request header Host len 15
    log-format {"ip":"%[src]","user-agent":"%[capture.req.hdr(1),json]"}
Input request from client 127.0.0.1:
    GET / HTTP/1.0
    User-Agent: Very "Ugly" UA 1/2
Output log:
    {"ip":"127.0.0.1","user-agent":"Very \"Ugly\" UA 1\/2"}
Since commit 1b71eb5 ("BUG/MEDIUM: counters: fix track-sc* to wait on
unstable contents"), we don't need the "if HTTP" anymore. But the doc
was not updated to reflect this.
Since this change was backported to 1.5, this doc update should be
backported as well.
When a frontend does not have any bind-process directive, make it
automatically bind to the union of all of its listeners' processes
instead of binding to all processes. That will make it possible to
have the expected behaviour without having to explicitly specify a
bind-process directive.
Note that if the listeners are not bound to a specific process, the
default is still to bind to all processes.
This change could be backported to 1.5 as it simplifies process
management, and was planned to be done during the 1.5 development phase.
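For illustration, assuming a 4-process setup, a frontend like the following
would now bind to processes 1 and 2 only, without any explicit bind-process
directive (a sketch, values are illustrative) :
    global
        nbproc 4
    frontend fe_web
        bind :8080 process 1
        bind :8081 process 2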
Sometimes it would be convenient to have a log counter so that from a log
server we know whether some logs were lost or not. The frontend's log counter
serves exactly this purpose. It's incremented each time a traffic log is
produced. If a log is disabled using "http-request set-log-level silent",
the counter will not be incremented. However, admin logs are not accounted
for. Also, if logs are filtered out before being sent to the server because
of a minimum level set on the log line, the counter will be increased anyway.
The counter is 32-bit, so it will wrap, but that's not an issue considering
that 4 billion logs are rarely in the same file, let alone close to each
other.
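A sketch of how this counter might be consumed, assuming it is exposed as
the %lc log-format variable :
    log-format %lc\ %ci:%cp\ %r\ %ST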
Add support for http-request track-sc, similar to what is done in
tcp-request for backends. A new act_prm field was added to HTTP
request rules to store the track params (table, counter). Just
like for TCP rules, the table is resolved while checking for
config validity. The code was mostly copied from the TCP code, with the
exception that here we also update the HTTP request count and rate by hand.
Probably something could be factored out in the future.
It seems like tracking flags should be improved to mark each hook
which tracks a key so that we can have some check points where to
increase counters of the past if not done yet, a bit like is done
for TRACK_BACKEND.
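For illustration, a sketch of the new action; the table, counter and
threshold are purely illustrative :
    backend be_app
        stick-table type ip size 200k expire 10m store http_req_rate(10s)
        http-request track-sc0 src
        http-request deny if { sc0_http_req_rate gt 100 }
        server s1 10.0.0.1:80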
From time to time it's useful to hash input data (scramble input, or
reduce the space needed in a stick table). This patch provides 3 simple
converters allowing use of the available hash functions to hash input
data. The output is an unsigned integer which can be passed into a header,
a log or used as an index for a stick table. One nice usage is to scramble
source IP addresses before logging when there are requirements to hide them.
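For instance, a source address could be scrambled with one of these
converters (djb2 here) before being logged or put into a header (a sketch,
the header name is illustrative) :
    http-request set-header X-Src-Hash %[src,djb2]
    log-format %[src,djb2]\ %r\ %ST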
Konstantin Romanenko reported a typo in the HTML documentation. The typo is
already present in the raw text version : the "shutdown sessions" command
should be "shutdown sessions server".
This one is not inherited from defaults into frontends nor backends
because it would create a confusing situation where it would be hard
to disable it (since both frontend and backend would enable it).
This patch adds two converters :
ltime(<format>[,<offset>])
utime(<format>[,<offset>])
Both use strftime() to emit the output string from an input date. ltime()
provides local time, while utime() provides the UTC time.
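A minimal sketch of ltime() applied to the "date" fetch in a log-format
line :
    # emits e.g. 20140710162350 127.0.0.1:57325
    log-format %[date,ltime(%Y%m%d%H%M%S)]\ %ci:%cp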
These new converters make it possible to look up any sample expression
in a table, and check whether an equivalent key exists or not, and if it
exists, to retrieve the associated data (eg: gpc0, request rate, etc...).
Till now it was only possible using tracking, but sometimes tracking is
not suited to only retrieving such counters, either because it's done too
early or because too many items need to be checked without necessarily
being tracked.
These converters all take a string on input, and then convert it again to
the table's type. This means that if an input sample is of type IPv4 and
the table is of type IP, it will first be converted to a string, then back
to an IP address. This is a limitation of the current design which does not
allow converters to declare that "any" type is supported on input. Since
strings are the only types which can be cast to any other one, this method
always works.
The following converters were added :
in_table, table_bytes_in_rate, table_bytes_out_rate, table_conn_cnt,
table_conn_cur, table_conn_rate, table_gpc0, table_gpc0_rate,
table_http_err_cnt, table_http_err_rate, table_http_req_cnt,
table_http_req_rate, table_kbytes_in, table_kbytes_out,
table_server_id, table_sess_cnt, table_sess_rate, table_trackers.
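A sketch of one of these converters used to consult a table without
tracking; the table name and threshold are purely illustrative :
    backend st_abuse
        stick-table type ip size 1m expire 10m store http_req_rate(10s)
    frontend fe_web
        bind :80
        acl abuser src,table_http_req_rate(st_abuse) ge 100
        http-request deny if abuser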
Listening to an abstract namespace socket is quite convenient but
comes with some drawbacks that must be clearly understood when the
socket is being listened to by multiple processes. The trouble is
that the socket cannot be rebound if a new process attempts a soft
restart and fails, so only one of the initially bound processes
will still be bound to it, the other ones will fail to rebind. For
most situations it's not an issue but it needs to be indicated.
With all the goodies supported by logformat, people find that the limit
of 1024 chars for log lines is too short. Some servers do not support
larger lines and can simply drop them, so changing the default value is
not always the best choice.
This patch takes a different approach. Log line length is specified per
log server on the "log" line, with a value between 80 and 65535. That
way it's possible to satisfy all needs, even with some fat local servers
and small remote ones.
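A sketch of the per-server length setting (addresses and sizes are
illustrative) :
    global
        log 192.168.0.10:514 len 8192 local0
        log 127.0.0.1:514 local1 notice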
This new branch is based on 1.5.0, to which 1.6-dev0 is 100% equivalent.
The README has been updated to mention that it is a development branch.
Released version 1.6-dev0 with the following main changes :
- exact copy of 1.5.0
Released version 1.5.0 with the following main changes :
- MEDIUM: ssl: ignored file names ending as '.issuer' or '.ocsp'.
- MEDIUM: ssl: basic OCSP stapling support.
- MINOR: ssl/cli: Fix unapropriate comment in code on 'set ssl ocsp-response'
- MEDIUM: ssl: add 300s supported time skew on OCSP response update.
- MINOR: checks: mysql-check: Add support for v4.1+ authentication
- MEDIUM: ssl: Add the option to use standardized DH parameters >= 1024 bits
- MEDIUM: ssl: fix detection of ephemeral diffie-hellman key exchange by using the cipher description.
- MEDIUM: http: add actions "replace-header" and "replace-values" in http-req/resp
- MEDIUM: Break out check establishment into connect_chk()
- MEDIUM: Add port_to_str helper
- BUG/MEDIUM: fix ignored values for half-closed timeouts (client-fin and server-fin) in defaults section.
- BUG/MEDIUM: Fix unhandled connections problem with systemd daemon mode and SO_REUSEPORT.
- MINOR: regex: fix a little configuration memory leak.
- MINOR: regex: Create JIT compatible function that return match strings
- MEDIUM: regex: replace all standard regex function by own functions
- MEDIUM: regex: Remove null terminated strings.
- MINOR: regex: Use native PCRE API.
- MINOR: missing regex.h include
- DOC: Add Exim as Proxy Protocol implementer.
- BUILD: don't use type "uint" which is not portable
- BUILD: stats: workaround stupid and bogus -Werror=format-security behaviour
- BUG/MEDIUM: http: clear CF_READ_NOEXP when preparing a new transaction
- CLEANUP: http: don't clear CF_READ_NOEXP twice
- DOC: fix proxy protocol v2 decoder example
- DOC: fix remaining occurrences of "pattern extraction"
- MINOR: log: allow the HTTP status code to be logged even in TCP frontends
- MINOR: logs: don't limit HTTP header captures to HTTP frontends
- MINOR: sample: improve sample_fetch_string() to report partial contents
- MINOR: capture: extend the captures to support non-header keys
- MINOR: tcp: prepare support for the "capture" action
- MEDIUM: tcp: add a new tcp-request capture directive
- MEDIUM: session: allow shorter retry delay if timeout connect is small
- MEDIUM: session: don't apply the retry delay when redispatching
- MEDIUM: session: redispatch earlier when possible
- MINOR: config: warn when tcp-check rules are used without option tcp-check
- BUG/MINOR: connection: make proxy protocol v1 support the UNKNOWN protocol
- DOC: proxy protocol example parser was still wrong
- DOC: minor updates to the proxy protocol doc
- CLEANUP: connection: merge proxy proto v2 header and address block
- MEDIUM: connection: add support for proxy protocol v2 in accept-proxy
- MINOR: tools: add new functions to quote-encode strings
- DOC: clarify the CSV format
- MEDIUM: stats: report the last check and last agent's output on the CSV status
- MINOR: freq_ctr: introduce a new averaging method
- MEDIUM: session: maintain per-backend and per-server time statistics
- MEDIUM: stats: report per-backend and per-server time stats in HTML and CSV outputs
- BUG/MINOR: http: fix typos in previous patch
- DOC: remove the ultra-obsolete TODO file
- DOC: update roadmap
- DOC: minor updates to the README
- DOC: mention the maxconn limitations with the select poller
- DOC: commit a few old design thoughts files
Select()'s safe area is limited to 1024 FDs, and anything higher
than this will report "select: FAILED" on startup in debug mode,
so better document it.
The support is all based on static responses. This doesn't add any
request / response logic to HAProxy, but provides a way to update the
information through the socket interface.
Currently certificates specified using "crt" or "crt-list" on "bind" lines
are loaded as PEM files.
For each PEM file, haproxy checks for the presence of file at the same path
suffixed by ".ocsp". If such file is found, support for the TLS Certificate
Status Request extension (also known as "OCSP stapling") is automatically
enabled. The content of this file is optional. If not empty, it must contain
a valid OCSP Response in DER format. In order to be valid an OCSP Response
must comply with the following rules: it has to indicate a good status,
it has to be a single response for the certificate of the PEM file, and it
has to be valid at the moment of addition. If these rules are not respected
the OCSP Response is ignored and a warning is emitted. In order to identify
which certificate an OCSP Response applies to, the issuer's certificate is
necessary. If the issuer's certificate is not found in the PEM file, it will
be loaded from a file at the same path as the PEM file suffixed by ".issuer"
if it exists; otherwise it will fail with an error.
It is possible to update an OCSP Response from the unix socket using:
set ssl ocsp-response <response>
This command is used to update an OCSP Response for a certificate (see "crt"
on "bind" lines). Same controls are performed as during the initial loading of
the response. The <response> must be passed as a base64 encoded string of the
DER encoded response from the OCSP server.
Example:
    openssl ocsp -issuer issuer.pem -cert server.pem \
        -host ocsp.issuer.com:80 -respout resp.der
    echo "set ssl ocsp-response $(base64 -w 10000 resp.der)" | \
        socat stdio /var/run/haproxy.stat
This feature is automatically enabled on openssl 0.9.8h and above.
This work was performed jointly by Dirkjan Bussink of GitHub and
Emeric Brun of HAProxy Technologies.
This patch adds two new actions to http-request and http-response rulesets :
- replace-header : replace a whole header line, suited for headers
which might contain commas
- replace-value : replace a single header value, suited for headers
defined as lists.
The match consists in a regex, and the replacement string takes a log-format
and supports back-references.
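For illustration, a sketch of both actions; the regexes and header contents
are illustrative :
    # rewrite a whole Cookie header, which may itself contain commas
    http-request replace-header Cookie foo=([^;]*);(.*) foo=\1;ip=%bi;\2
    # rewrite one value of a comma-delimited header
    http-request replace-value X-Forwarded-For ^192\.168\.(.*)$ 172.16.\1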
The time statistics computed by previous patches are now reported in the
HTML stats in the tips related to the total sessions for backend and servers,
and as separate columns for the CSV stats.
Now that we can quote unsafe strings, it becomes possible to dump the health
check responses on the CSV page as well. The two new fields are "last_chk"
and "last_agt".
Indicate that the text cells in the CSV format may contain quotes to
escape ambiguous texts. We don't have this case right now since we limit
the output, but it may happen in the future.
The "accept-proxy" statement of bind lines was still limited to version
1 of the protocol, while send-proxy-v2 is now available on the server
lines. This patch adds support for parsing v2 of the protocol on incoming
connections. The v2 header is automatically recognized so there is no
need for a new option.
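A sketch of a chain where one haproxy instance emits the version 2 header
and another one accepts it (addresses are illustrative) :
    # sending side (server line)
    server next1 10.0.0.2:10000 send-proxy-v2
    # receiving side (bind line)
    frontend fe_in
        bind 10.0.0.2:10000 accept-proxy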
This new directive captures the specified fetch expression, converts
it to text and puts it into the next capture slot. The capture slots
are shared with header captures so that it is possible to dump all
captures at once or selectively in logs and header processing.
The purpose is to permit logs to contain whatever payload is found in
a request, for example bytes at a fixed location or the SNI of forwarded
SSL traffic.
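A minimal sketch, assuming the action is used under "tcp-request content"
to capture the SNI of forwarded SSL traffic :
    frontend fe_ssl
        mode tcp
        bind :443
        tcp-request inspect-delay 5s
        tcp-request content capture req.ssl_sni len 25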
Similar to previous patches, HTTP header captures are performed when
a TCP frontend switches to an HTTP backend, but are not possible to
report. So let's relax the check to explicitly allow them to be present
in TCP frontends.
Log format is defined in the frontend, and some frontends may be chained to
an HTTP backend. Sometimes it's very convenient to be able to log the HTTP
status code of these HTTP backends. This status is definitely present in
the internal structures, it's just that we used to limit it to be used in
HTTP frontends. So let's simply relax the check to allow it to be used in
TCP frontends as well.
When no static DH parameters are specified, this patch makes haproxy
use standardized (rfc 2409 / rfc 3526) DH parameters with prime lengths
of 1024, 2048, 4096 or 8192 bits for DHE key exchange. The size of the
temporary/ephemeral DH key is computed as the minimum of the RSA/DSA server
key size and the value of a new option named tune.ssl.default-dh-param.
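A sketch of the new tuning option in the global section (the value is
illustrative) :
    global
        tune.ssl.default-dh-param 2048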
MySQL will stop supporting pre-4.1 authentication packets in the future and
is already giving us a hard time regarding non-silenceable warnings
which are logged on each health check. Warnings look like the following:
"[Warning] Client failed to provide its character set. 'latin1' will be used
as client character set."
This patch adds basic support for post-4.1 authentication by sending the proper
authentication packet with the character set, along with the QUIT command.
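A sketch of a check using the post-4.1 mode (the user name is
illustrative) :
    backend mysql
        mode tcp
        option mysql-check user haproxy post-41
        server db1 10.0.0.1:3306 check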
Released version 1.5-dev26 with the following main changes :
- BUG/MEDIUM: polling: fix possible CPU hogging of worker processes after receiving SIGUSR1.
- BUG/MINOR: stats: fix a typo on a closing tag for a server tracking another one
- OPTIM: stats: avoid the calculation of a useless link on tracking servers in maintenance
- MINOR: fix a few memory usage errors
- CONTRIB: halog: Filter input lines by date and time through timestamp
- MINOR: ssl: SSL_CTX_set_options() and SSL_CTX_set_mode() take a long, not an int
- BUG/MEDIUM: regex: fix risk of buffer overrun in exp_replace()
- MINOR: acl: set "str" as default match for strings
- DOC: Add some precisions about acl default matching method
- MEDIUM: acl: strenghten the option parser to report invalid options
- BUG/MEDIUM: config: a stats-less config crashes in 1.5-dev25
- BUG/MINOR: checks: tcp-check must not stop on '\0' for binary checks
- MINOR: stats: improve alignment of color codes to save one line of header
- MINOR: checks: simplify and improve reporting of state changes when using log-health-checks
- MINOR: server: remove the SRV_DRAIN flag which can always be deduced
- MINOR: server: use functions to detect state changes and to update them
- MINOR: server: create srv_was_usable() from srv_is_usable() and use a pointer
- BUG/MINOR: stats: do not report "100%" in the thottle column when server is draining
- BUG/MAJOR: config: don't free valid regex memory
- BUG/MEDIUM: session: don't clear CF_READ_NOEXP if analysers are not called
- BUG/MINOR: stats: tracking servers may incorrectly report an inherited DRAIN status
- MEDIUM: proxy: make timeout parser a bit stricter
- REORG/MEDIUM: server: split server state and flags in two different variables
- REORG/MEDIUM: server: move the maintenance bits out of the server state
- MAJOR: server: use states instead of flags to store the server state
- REORG: checks: put the functions in the appropriate files !
- MEDIUM: server: properly support and propagate the maintenance status
- MEDIUM: server: allow multi-level server tracking
- CLEANUP: checks: rename the server_status_printf function
- MEDIUM: checks: simplify server up/down/nolb transitions
- MAJOR: checks: move health checks changes to set_server_check_status()
- MINOR: server: make the status reporting function support a reason
- MINOR: checks: simplify health check reporting functions
- MINOR: server: implement srv_set_stopped()
- MINOR: server: implement srv_set_running()
- MINOR: server: implement srv_set_stopping()
- MEDIUM: checks: simplify failure notification using srv_set_stopped()
- MEDIUM: checks: simplify success notification using srv_set_running()
- MEDIUM: checks: simplify stopping mode notification using srv_set_stopping()
- MEDIUM: stats: report a server's own state instead of the tracked one's
- MINOR: server: make use of srv_is_usable() instead of checking eweight
- MAJOR: checks: add support for a new "drain" administrative mode
- MINOR: stats: use the admin flags for soft enable/disable/stop/start on the web page
- MEDIUM: stats: introduce new actions to simplify admin status management
- MINOR: cli: introduce a new "set server" command
- MINOR: stats: report a distinct output for DOWN caused by agent
- MINOR: checks: support specific check reporting for the agent
- MINOR: checks: support a neutral check result
- BUG/MINOR: cli: "agent" was missing from the "enable"/"disable" help message
- MEDIUM: cli: add support for enabling/disabling health checks.
- MEDIUM: stats: report down caused by agent prior to reporting up
- MAJOR: agent: rework the response processing and support additional actions
- MINOR: stats: improve the stats web page to support more actions
- CONTRIB: halog: avoid calling time/localtime/mktime for each line
- DOC: document the workarouds for Google Chrome's bogus pre-connect
- MINOR: stats: report SSL key computations per second
- MINOR: stats: add counters for SSL cache lookups and misses
More and more people are complaining about the bugs experienced by
Chrome users due to the pre-connect feature and the fact that Chrome
does not monitor its connections and happily displays the error page
instead of re-opening a new connection. Since we can work around this
bug, let's document how to do it.
We now retrieve a lot of information from a single line of response, which
can be made up of various words delimited by spaces/tabs/commas. We try to
arrange all this and report anything unusual we detect. The agent now supports :
- "up", "down", "stopped", "fail" for the operational states
- "ready", "drain", "maint" for the administrative states
- any "%" number for the weight
- an optional reason after a "#" that can be reported on the stats page
The line parser and processor should move to a function of their own so that
we can reuse the exact same code for http-based agent checks later.
This command supports "agent", "health", "state" and "weight" to adjust
various server attributes as well as changing server health check statuses
on the fly or setting the drain mode.
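A few illustrative invocations over the stats socket (backend, server and
socket path are illustrative) :
    echo "set server bk_app/srv1 state drain" | socat stdio /var/run/haproxy.stat
    echo "set server bk_app/srv1 weight 50%" | socat stdio /var/run/haproxy.stat
    echo "set server bk_app/srv1 health down" | socat stdio /var/run/haproxy.stat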