Commit Graph

6022 Commits

Author SHA1 Message Date
Emeric Brun
b5e42a817b BUG/MAJOR: ssl: buffer overflow using offloaded ciphering on async engine
OpenSSL's ASYNC API doesn't support moving buffers across SSL_read/SSL_write calls.
This patch disables the ASYNC mode dynamically once the handshake
is over and re-enables it on renegotiation.
2017-06-08 06:47:34 +02:00
Emeric Brun
ce9e01c674 BUG/MAJOR: ssl: fix segfault on connection close using async engines.
This patch ensures that the ASYNC fd handlers won't be woken up
too early, by disabling the event cache for this fd on connection close
and when a WANT_ASYNC is raised by OpenSSL.

Calls to SSL_read/SSL_write/SSL_do_handshake issued before a real read
event was raised on the ASYNC fd generated an EAGAIN followed by a context
switch for some engines, or a blocked read for the others.

On connection close this resulted in a premature call to SSL_free followed
by a segmentation fault.
2017-06-08 06:47:34 +02:00
Emmanuel Hocdet
bd695fe024 MEDIUM: ssl: disable SSLv3 per default for bind
For security reasons, disabling SSLv3 on bind lines must be the default
configuration. SSLv3 can be re-enabled with "ssl-min-ver SSLv3".
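
For illustration, a minimal bind line re-enabling SSLv3 this way (certificate path is an example only):

    bind :443 ssl crt /etc/haproxy/site.pem ssl-min-ver SSLv3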
2017-06-02 16:43:16 +02:00
Emmanuel Hocdet
df701a2adb MINOR: ssl: support ssl-min-ver and ssl-max-ver with crt-list
The SSL/TLS version can be changed per certificate if and only if the SSL
library supports an early callback on the handshake and, of course, haproxy
implements it. This is the case for BoringSSL. For OpenSSL, version 1.1.1
has such a callback and could support it.
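
As an illustration, a hypothetical crt-list could then carry per-certificate version bounds (file and domain names are examples only):

    # /etc/haproxy/crt-list.txt, loaded with "bind :443 ssl crt-list /etc/haproxy/crt-list.txt"
    modern.pem [ssl-min-ver TLSv1.2] modern.example.com
    legacy.pem [ssl-max-ver TLSv1.0] legacy.example.com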
2017-06-02 16:42:09 +02:00
Emmanuel Hocdet
4aa615ff6b MEDIUM: ssl: ctx_set_version/ssl_set_version func for methodVersions table
This patch cleans up the usage of the set_version function by giving it a more
suitable name: ctx_set_version. It introduces the ssl_set_version function
(unused for the moment).
2017-06-02 16:41:57 +02:00
Emmanuel Hocdet
ecb0e234b9 REORG: ssl: move defines and methodVersions table upper
It will be used in ssl_sock_switchctx_cbk.
2017-06-02 16:41:36 +02:00
Willy Tarreau
2686dcad1e CLEANUP: connection: remove unused CO_FL_WAIT_DATA
Very early in the connection rework process leading to v1.5-dev12, commit
56a77e5 ("MEDIUM: connection: complete the polling cleanups") marked the
end of use for this flag, which has never been set since but continued
to be tested. Let's kill it now.
2017-06-02 15:50:27 +02:00
Willy Tarreau
ed936c5d37 MINOR: tools: make debug_hexdump() take a string prefix
When dumping data at various places in the code, it's hard to figure out
what is present where. To make this easier, this patch slightly modifies
debug_hexdump() to take a prefix string which is prepended to each
output line.
2017-06-02 15:49:31 +02:00
Willy Tarreau
9faef1e391 MINOR: tools: make debug_hexdump() use a const char for the string
There's no reason for the string to be dumped to be a char *; it's
a const.
2017-06-02 15:49:31 +02:00
Jarno Huuskonen
577d5ac8ae CLEANUP: str2mask return code comment: non-zero -> zero. 2017-06-02 15:43:46 +02:00
Emmanuel Hocdet
9ac143b607 BUILD: ssl: fix build with OPENSSL_NO_ENGINE
The build is broken with an OpenSSL library built without engine support
(like BoringSSL). Add the OPENSSL_NO_ENGINE flag to fix that.
2017-06-02 12:07:48 +02:00
Baptiste Assmann
201c07f681 MAJOR/REORG: dns: DNS resolution task and requester queues
This patch is a major upgrade of the internal run-time DNS resolver in
HAProxy and it brings the following main changes:

1. DNS resolution task

Up to now, DNS resolution was triggered by the health check task.
From now on, the DNS resolution task is autonomous. It is started by HAProxy
right after the scheduler is available and is woken up either when
network I/O occurs for one of its nameservers or when a timeout expires.

This means we can now enable DNS resolution for a server without
enabling health checking.

2. Introduction of a dns_requester structure

Up to now, DNS resolution was purposely built for resolving server
hostnames.
The idea is to ensure that any HAProxy internal object is able to
trigger a DNS resolution. For this purpose, two things had to be done:
  - clean up the DNS code from the server structure (this was already
    quite clean actually) and prevent the server's callbacks from
    manipulating too much of the DNS resolution
  - create an agnostic structure which allows linking a DNS resolution
    and a requester of any type (using the obj_type enum)

3. Manage requesters through queues

Up to now, there was a one-to-one relationship between a resolution and its
owner (now called the requester). This is a shame, because in some cases
multiple objects may share the same hostname and may benefit from a
resolution being performed by a third party.
This patch introduces the notion of queues, which are basically lists of
either currently running resolutions or waiting ones.

The resolutions are now available as a pool, which belongs to the resolvers.
The pool has a default size of 64 resolutions per resolvers section and is
allocated at configuration parsing time.
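
As an illustration of the resulting model, a minimal configuration sketch (section, server and host names are examples only):

    resolvers mydns
        nameserver dns1 192.168.0.1:53
        resolve_retries 3
        timeout retry   1s
        hold valid      10s

    backend app
        # resolution can now run without enabling health checking
        server app1 app1.example.com:80 resolvers mydns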
2017-06-02 11:58:54 +02:00
Baptiste Assmann
8ea0bcc911 MINOR: dns: introduce roundrobin into the internal cache (WIP)
This patch introduces a bit of round-robin in the records stored in our
local cache.
The purpose is to allow some kind of distribution of the IPs found in a
response.
Note that the distribution properly applies only when an IP used by many
requesters disappears and is replaced by another one.
2017-06-02 11:40:39 +02:00
Baptiste Assmann
69fce67b56 MINOR: dns: make 'ancount' field to match the number of saved records
ancount is the number of answers available in a DNS response.
Before this patch, HAProxy used to store the ancount found in the buffer
(sent by the DNS server).
Unfortunately, this is now inaccurate and does not correspond to the
number of records effectively stored in our local version of the
response. For example, the CNAMEs are not stored.

This patch updates the ancount field to make it match what is effectively
stored in our version.
2017-06-02 11:40:16 +02:00
Baptiste Assmann
fa4a663095 MINOR: dns: implement a LRU cache for DNS resolutions
Introduction of a DNS response LRU cache in HAProxy.

When a positive response is received from a DNS server, HAProxy stores
it in the struct resolution and then also populates an LRU cache with the
response.
For now, the key in the cache is an XXHASH64 of the hostname in the
domain name format concatenated with the query type in string format.
2017-06-02 11:40:01 +02:00
Baptiste Assmann
729c901c3f MAJOR: dns: save a copy of the DNS response in struct resolution
Prior to this patch, the DNS responses were stored in a pre-allocated
memory area (allocated at HAProxy's startup).
The problem is that this memory was erased for each new DNS response
received and processed.

This patch removes the global memory allocation (which was not thread
safe, by the way) and stores the DNS response in the struct resolution
instead.
The memory in the struct resolution is also reserved at startup and is
thread safe, since each resolution structure has its own memory
area.

For now, we simply store the response and use it atomically per
response per server.
2017-06-02 11:30:21 +02:00
Baptiste Assmann
fb7091e213 MINOR: dns: new snr_check_ip_callback function
In the process of breaking the links between the dns_* functions and other
structures (mainly server and a bit of resolution), the function
dns_get_ip_from_response needs to be reworked: it can now call
"callback" functions based on the resolution's owner type to allow modifying
the way the response is processed.

For now, the main purpose of the callback function is to check that an IP
address is not already assigned to an element of the same type.

For now, only the server type has a callback.
2017-06-02 11:28:14 +02:00
Baptiste Assmann
42746373eb REORG: dns: dns_option structure, storage of hostname_dn
This patch introduces some re-organisation of the DNS code in
HAProxy.

1. make the dns_* functions less dependent on 'struct server' and 'struct resolution'.

With this in mind, the following changes were performed:
- 'struct dns_options' has been removed from 'struct resolution' (well,
  we might need it back at some point later, we'll see)
  ==> we'll use the 'struct dns_options' from the owner of the resolution
- dns_get_ip_from_response(): takes a 'struct dns_options' instead of
  'struct resolution'
  ==> so the caller can pass its own dns options to get the most
      appropriate IP from the response
- dns_process_resolve(): the struct dns_options is deduced from the new
  resolution->requester_type parameter

2. add hostname_dn and hostname_dn_len into struct server

In order to avoid recomputing a server's hostname into its domain name
format (and using a trash buffer to store the result), it is safer to
compute it once at configuration parsing time and to store it in the struct
server.
In the meantime, the struct resolution linked to the server no longer
needs to store the hostname in domain name format. A simple
pointer to the server's one will do the trick.

The function srv_alloc_dns_resolution() properly manages everything for
us: memory allocation, pointer updates, etc.

3. move resolvers pointer into struct server

This patch makes the pointer from struct dns_resolution to struct
dns_resolvers obsolete.
The purpose is to make the resolution as "neutral" as possible, and since the
requester is already linked to the resolvers, we don't need this
information in the resolution itself anymore.
2017-06-02 11:26:48 +02:00
Baptiste Assmann
4f91f7ea59 MINOR: dns: parse_server() now uses srv_alloc_dns_resolution()
In order to make the DNS code more consistent, the function parse_server()
now uses srv_alloc_dns_resolution() to set up a server and its
resolution.
2017-06-02 11:20:50 +02:00
Baptiste Assmann
81ed1a0516 MINOR: dns: functions to manage memory for a DNS resolution structure
A couple of new functions to allocate and free memory for a DNS
resolution structure. The main purpose is to make the code related to DNS
more consistent.
They allocate or free memory for the structure itself. Later, if needed,
they should also allocate/free the buffers, etc., used by this structure.
They don't set/unset any parameters; this is the role of the caller.

This patch also implements calls to these functions everywhere it is
required.
2017-06-02 11:20:29 +02:00
Baptiste Assmann
9d41fe7f98 CLEANUP: server.c: missing prototype of srv_free_dns_resolution
The prototype for the function srv_free_dns_resolution() was missing at the top
of the file.
2017-06-02 11:18:28 +02:00
Stéphane Cottin
23e9e93128 MINOR: log: Add logurilen tunable.
The default length of the request URI in log messages is 1024. In some use
cases, you need to keep the long trail of GET parameters. The only
way to increase this length was to recompile with DEFINE=-DREQURI_LEN=2048.

This commit introduces a tune.http.logurilen configuration directive,
allowing this to be tuned at runtime.
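
A minimal sketch of the new directive (the value is an example only):

    global
        tune.http.logurilen 4096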
2017-06-02 11:06:36 +02:00
William Lallemand
a6cfa9098e MAJOR: systemd-wrapper: get rid of the wrapper
The master worker mode obsoletes the systemd-wrapper; to ensure that
nobody uses it anymore, the code has been removed.
2017-06-02 10:56:32 +02:00
William Lallemand
e20b6a62f8 MEDIUM: mworker: workers exit when the master leaves
This patch ensures that the children exit when the master quits,
even if the master didn't send any signal.

The master and the workers are connected through a pipe; when the pipe
closes, the children leave.
2017-06-02 10:56:32 +02:00
William Lallemand
69f9b3bfa4 MEDIUM: mworker: exit-on-failure option
This option makes every worker exit when one of the current workers dies.

It allows you to monitor the master process in order to relaunch
everything on a failure.

For example, it can be used with systemd and Restart=on-failure in a unit
file.
2017-06-02 10:56:32 +02:00
William Lallemand
85b0bd9e54 MEDIUM: mworker: try to guess the next stats socket to use with -x
In master worker mode, you can't specify the stats socket where you get
your listeners' FDs on a reload, because the command line of the re-exec
is built by the master.

To solve the problem, when -x is found on the command line, its
argument is rewritten on re-exec with the first stats socket that has the
capability to send sockets. It tries to reuse the original argument if
it has this capability.
2017-06-02 10:56:32 +02:00
William Lallemand
cb11fd2c7a MEDIUM: mworker: wait mode on reload failure
In master worker mode, when reloading the configuration fails,
the process exits, leaving the children without their parent.

To handle this, we register an exit function with atexit(3), which
re-executes the binary in a special mode. This particular mode of
HAProxy doesn't reload the configuration; it only loops on wait().
2017-06-02 10:56:32 +02:00
William Lallemand
73b85e75b3 MEDIUM: mworker: handle reload and signals
The master-worker will reload itself on SIGUSR2/SIGHUP.

This is inherited from the systemd wrapper: when the SIGUSR2 signal is
received, the master process will re-execute itself with the -sf flag
followed by the PIDs of the children.

In the systemd wrapper, the children were using a pipe to notify when
the config had been parsed and when the new process was ready. The goal
was to ensure that the process couldn't reload during the parsing of the
configuration, before signals were sent to the old process.

With the new mworker model, the master parses the configuration and is
aware of all the children. We don't need a pipe, but we need to block
those signals before the end of a reload, to ensure that the process
won't be killed during a reload.

The SIGUSR1 signal is forwarded to the children to soft-stop HAProxy.

The SIGTERM and SIGINT signals are forwarded to the children in order to
terminate them.
2017-06-02 10:56:32 +02:00
William Lallemand
095ba4c242 MEDIUM: mworker: replace systemd mode by master worker mode
This commit removes the -Ds systemd mode in HAProxy in order to replace
it with a more generic master worker system. It aims to replace the
systemd wrapper entirely in the near future.

The master worker mode implements a new way of managing HAProxy
processes. The master is in charge of parsing the configuration
file and is responsible for spawning child processes.

The master worker mode can be invoked by using the -W flag. It can be
used either in background mode (-D) or foreground mode. When used in
background mode, the master will fork to daemonize.

In master worker background mode, chroot, setuid and setgid are done in
each child rather than in the master process, because the master process
still needs access to the filesystem to reload the configuration.
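
A hedged usage sketch (paths are examples only):

    # foreground master worker mode
    $ haproxy -W -f /etc/haproxy/haproxy.cfg

    # background mode, where the master forks to daemonize
    $ haproxy -W -D -f /etc/haproxy/haproxy.cfg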
2017-06-02 10:56:32 +02:00
Emmanuel Hocdet
2c32d8f379 MINOR: boringssl: basic support for OCSP Stapling
Use BoringSSL's SSL_CTX_set_ocsp_response to set the OCSP response from a file
with the '.ocsp' extension. CLI update is not supported.
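
For illustration, assuming a certificate at a hypothetical path, the OCSP response is simply picked up from the file next to it:

    bind :443 ssl crt /etc/haproxy/site.pem
    # with the DER-encoded OCSP response stored in /etc/haproxy/site.pem.ocsp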
2017-05-27 07:59:34 +02:00
Emeric Brun
3854e0102b MEDIUM: ssl: handle multiple async engines
This patch adds support for a maximum of 32 engines
in async mode.

Some tests have been done using 2 engines simultaneously.

This patch also removes the specific 'async' attribute from the connection
structure. All the code now relies only on OpenSSL functions.
2017-05-27 07:12:27 +02:00
Grant Zhang
fa6c7ee702 MAJOR: ssl: add openssl async mode support
ssl-mode-async is a global configuration parameter which enables
asynchronous processing in OpenSSL for all SSL connections haproxy
handles. With SSL_MODE_ASYNC set, TLS I/O operations may indicate a
retry with SSL_ERROR_WANT_ASYNC if an asynchronous-capable
engine is used to perform cryptographic operations. Currently
async mode only supports one async-capable engine.

This is the latest version of the patchset, which includes Emeric's
updates:
  - improved async fd cleaning when openssl reports an fd to delete
  - prevent conn_fd_handler from calling SSL_{read,write,handshake} until
    the async fd is ready, as these operations are very slow and waste CPU
  - postpone SSL_free to ensure the async operation can complete and
    does not dereference a released SSL
  - proper removal of the async fd from the fdtab and removal of the unused
    async flag.
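
A minimal global-section sketch enabling this mode:

    global
        ssl-mode-async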
2017-05-27 07:05:54 +02:00
Grant Zhang
872f9c2139 MEDIUM: ssl: add basic support for OpenSSL crypto engine
This patch adds the global 'ssl-engine' keyword. The first argument is an
engine identifier, followed by a list of default algorithms the engine will
operate on.

If the OpenSSL version is too old, an error is reported when the option
is used.
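
A hedged configuration sketch (the engine name and algorithm list are examples only):

    global
        ssl-engine myengine algo RSA,EC
        ssl-mode-async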
2017-05-27 07:05:00 +02:00
William Lallemand
7f80eb2383 MEDIUM: proxy: zombify proxies only when the expose-fd socket is bound
When HAProxy is running with multiple processes and some listeners
are bound to specific processes, the unused sockets were not closed in the
other processes. The aim was to be able to send those listening sockets using
the -x option.

However, to preserve the previous behavior, which was to close those
sockets, we provided the "no-unused-socket" global option.

This patch changes this behavior: it closes unused sockets which are
not in the same process as an expose-fd socket, making the
"no-unused-socket" option useless.

The "no-unused-socket" option is removed in this patch.
2017-05-27 07:02:25 +02:00
William Lallemand
f6975e9f76 MINOR: cli: add 'expose-fd listeners' to pass listeners FDs
This patch changes the stats socket rights to allow the sending of
listening sockets.

The previous behavior was to allow any unix stats socket with admin
level to send sockets. This is not possible anymore; you have to set this
option to enable socket sending.

Example:
   stats socket /var/run/haproxy4.sock mode 666 expose-fd listeners level user process 4
2017-05-27 07:02:17 +02:00
William Lallemand
07a62f7a7e MINOR: cli: add ACCESS_LVL_MASK to store the access level
The current level variable uses only 2 bits to store the 3 access
levels (user, oper and admin).

This patch adds a bitmask which allows the remaining bits to be used for
other purposes.
2017-05-27 07:02:06 +02:00
Thierry FOURNIER
fd80df11c3 BUG/MEDIUM: lua: segfault if a converter or a sample doesn't return anything
In the case where a Lua sample-fetch or converter doesn't return any
value, an access outside the Lua stack can be performed. This patch
checks the stack size before converting the top value to an HAProxy
internal sample.

A workaround consists in ensuring that a value is always returned
by sample fetches and converters.

This patch should be backported to versions 1.6 and 1.7.
2017-05-12 16:46:26 +02:00
Holger Just
1bfc24ba03 MINOR: sample: Add b64dec sample converter
Add "b64dec" as a new converter which can be used to decode a base64
encoded string into its binary representation. It performs the inverse
operation of the "base64" converter.
2017-05-12 15:56:52 +02:00
Frédéric Lécaille
64920538fc BUG/MAJOR: dns: Broken kqueue events handling (BSD systems).
Some DNS related network sockets were closed without unregistering their file
descriptors from their underlying kqueue event sets. This patch replaces calls to
close() with fd_delete() calls so that events attached to DNS
network sockets are deleted from the kqueue before closing the sockets.

The bug was introduced by commit 26c6eb8 ("BUG/MAJOR: dns: restart sockets
after fork()") which was backported in 1.7 so this fix has to be backported
there as well.

Thanks to Jim Pingle who reported it and indicated the faulty commit, and
to Lukas Tribus for the trace showing the bad file descriptor.
2017-05-12 15:49:05 +02:00
Emmanuel Hocdet
abd323395f MEDIUM: ssl: ssl-min-ver and ssl-max-ver compatibility.
In haproxy < 1.8, no-sslv3/no-tlsv1x are ignored when force-sslv3/force-tlsv1x
is used (without warning). With this patch, no-sslv3/no-tlsv1x are ignored when
ssl-min-ver or ssl-max-ver is used (with a warning).
When all SSL/TLS versions are disabled, generate an error, not a warning.
Example: ssl-min-ver TLSv1.3 (or force-tlsv13) with an OpenSSL <= 1.1.0.
2017-05-12 15:49:05 +02:00
Emmanuel Hocdet
e1c722b5e8 MEDIUM: ssl: add ssl-min-ver and ssl-max-ver parameters for bind and server
'ssl-min-ver' and 'ssl-max-ver' with argument SSLv3, TLSv1.0, TLSv1.1, TLSv1.2
or TLSv1.3 limit the SSL negotiation version to a continuous range. ssl-min-ver
and ssl-max-ver should be used as a replacement for no-tls* and no-sslv3.
Warnings and documentation are updated accordingly.
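
For instance (certificate path and address are examples only):

    bind :443 ssl crt /etc/haproxy/site.pem ssl-min-ver TLSv1.1 ssl-max-ver TLSv1.2
    server s1 192.168.0.10:443 ssl verify none ssl-min-ver TLSv1.2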
2017-05-12 15:49:05 +02:00
Emmanuel Hocdet
50e25e1dbc MINOR: ssl: show methods supported by openssl
With TLS v1.3 incoming and SSLv3 disappearing, it could be useful to list
all methods supported by haproxy/openssl (with -vvv).
2017-05-12 15:49:05 +02:00
Emmanuel Hocdet
42fb980e53 MINOR: ssl: support TLSv1.3 for bind and server
This patch adds the 'no-tlsv13' and 'force-tlsv13' configuration options. This
is only useful with openssl-dev and BoringSSL.
2017-05-12 15:49:05 +02:00
Emmanuel Hocdet
b4e9ba4b36 MEDIUM: ssl: calculate the real min/max TLS version and find holes
The plan is to add min-tlsxx/max-tlsxx configuration, which is more consistent
than no-tlsxx.
Find the real min/max versions (OpenSSL capabilities and haproxy configuration)
and generate a warning for a bad version range.
'no-tlsxx' can generate 'holes':
"The list of protocols available can be further limited using the SSL_OP_NO_X
options of the SSL_CTX_set_options or SSL_set_options functions. Clients should
avoid creating 'holes' in the set of protocols they support, when disabling a
protocol, make sure that you also disable either all previous or all subsequent
protocol versions. In clients, when a protocol version is disabled without
disabling all previous protocol versions, the effect is to also disable all
subsequent protocol versions."
To avoid breaking compatibility, "holes" are authorized with a warning, because
OpenSSL 1.1.0 and BoringSSL deal with them (keeping the upper or lower range
depending on the case and version).
2017-05-12 15:49:04 +02:00
Emmanuel Hocdet
5db33cbdc4 MEDIUM: ssl: ssl_methods implementation is reworked and factored for min/max tlsxx
The plan is to add min-tlsxx/max-tlsxx configuration, which is more consistent
than no-tlsxx.
This patch introduces internal min/max settings and replaces the force-tlsxx
implementation.
The SSL method configuration is stored in 'struct tls_version_filter'.
The mapping of the SSL method configuration to OpenSSL settings is abstracted
in the 'methodVersions' table.
With OpenSSL < 1.1.0, SSL_CTX_set_ssl_version is used for force (min == max).
With OpenSSL >= 1.1.0, SSL_CTX_set_min/max_proto_version is used.
2017-05-12 15:49:04 +02:00
Emmanuel Hocdet
6cb2d1e963 MEDIUM: ssl: revert ssl/tls version settings relative to default-server.
The plan is to add min-tlsxx/max-tlsxx configuration, which is more consistent
than no-tlsxx.
min-tlsxx and max-tlsxx can be overridden by a local definition. These directives
should be the only ones needed in default-server.
To simplify the next patches (rework of TLS version settings with min/max), all
SSL/TLS version settings relative to default-server are reverted first:
remove: 'sslv3', 'tls*', 'no-force-sslv3', 'no-force-tls*'.
remove from default-server: 'no-sslv3', 'no-tls*'.
Note:
. force-tlsxx == min-tlsxx + max-tlsxx: would be OK in default-server.
. no-tlsxx is kept for compatibility: it should not be propagated to default-server.
2017-05-12 15:49:04 +02:00
Lukas Tribus
53ae85c38e MINOR: ssl: add prefer-client-ciphers
Currently we unconditionally set SSL_OP_CIPHER_SERVER_PREFERENCE [1],
which may not always be a good thing.

The benefit of server side cipher prioritization may not apply to all
cases out there, and it appears that the various SSL libs are moving away
from this recommendation ([2], [3]), as insecure cipher suites are
properly blacklisted/removed and honoring the client's preference is
more likely to improve user experience (for example using SW-friendly
ciphers on devices without HW AES support).

This is especially true for TLSv1.3, which will restrict the cipher
suites to just AES-GCM and Chacha20/Poly1305.

Apache [4], nginx [5] and others give admins full flexibility, we should
as well.

The initial proposal to change the current default and add a
"prefer-server-ciphers" option (as implemented in e566ecb) has been
declined due to the possible security impact.

This patch implements prefer-client-ciphers without changing the defaults.

[1] https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_options.html
[2] https://github.com/openssl/openssl/issues/541
[3] https://github.com/libressl-portable/portable/issues/66
[4] https://httpd.apache.org/docs/2.0/en/mod/mod_ssl.html#sslhonorcipherorder
[5] https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_prefer_server_ciphers
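
A minimal bind-line sketch (certificate path is an example only):

    bind :443 ssl crt /etc/haproxy/site.pem prefer-client-ciphers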
2017-05-12 15:49:04 +02:00
Willy Tarreau
f494977bc1 BUG/MINOR: checks: don't send proxy protocol with agent checks
James Brown reported that agent-check mistakenly sends the proxy
protocol header when it's configured. This is obviously wrong as
the agent is an independent service and not a traffic port, so let's
disable this.

This fix must be backported to 1.7 and possibly 1.6.
2017-05-06 08:45:28 +02:00
Frédéric Lécaille
b418c1228c MINOR: server: cli: Add server FQDNs to server-state file and stats socket.
This patch adds a new stats socket command to modify server
FQDNs at runtime.
Its syntax:
  set server <backend>/<server> fqdn <FQDN>
This patch also adds FQDNs to the server state file at the end
of each line for backward compatibility ("-" if not present).
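
A hedged usage sketch over the stats socket (backend/server names, FQDN and socket path are hypothetical):

    $ echo "set server app/srv1 fqdn new-host.example.com" | socat stdio /var/run/haproxy.sock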
2017-05-03 06:58:53 +02:00
Lukas Tribus
23953686da DOC: update RFC references
A few doc and code comment updates bumping RFC references to the new
ones.
2017-04-28 18:58:11 +02:00
Emeric Brun
fa5c5c892d BUG/MINOR: ssl: fix warnings about methods for opensslv1.1.
This patch replaces the calls to the TLSvX_X_client/server_method functions
with the new TLS_client/server_method ones and uses the new functions
SSL_set_min_proto_version and SSL_set_max_proto_version, setting them
to the wanted protocol version when 'force-' statements are used.
2017-04-28 18:57:15 +02:00
Thierry FOURNIER
d7d8881543 MINOR: proto-http: Add sample fetch which returns all HTTP headers
The sample fetch returns all headers, including the final empty line,
which is used to determine whether the block of headers is
truncated or not.
2017-04-27 11:56:11 +02:00
Thierry FOURNIER
5617dce27d MINOR: Add binary encoding request header sample fetch
This sample fetch encodes the HTTP request headers in binary
format. This sample fetch is useful with SPOE.
2017-04-27 11:54:54 +02:00
Thierry FOURNIER
6ab2bae084 REORG: spoe: move spoe_encode_varint / spoe_decode_varint from spoe to common
These encoding functions do generic work and can be used in
contexts other than SPOE. This patch moves the functions spoe_encode_varint
and spoe_decode_varint from spoe to common. It also removes the spoe prefix.

These functions will be used for encoding values in the new binary sample fetch.
2017-04-27 11:50:41 +02:00
Andrew Rodland
18330ab17f BUG/MINOR: hash-balance-factor isn't effective in certain circumstances
in chash_get_server_hash, we find the nearest server entries both
before and after the request hash. If the next and prev entries both
point to the same server, the function would exit early and return that
server, to save work.

Before hash-balance-factor this was a valid optimization -- one of nsrv
and psrv would definitely be chosen, so if they are the same there's no
need to choose between them. But with hash-balance-factor it's possible
that adding another request to that server would overload it
(chash_server_is_eligible returns false) and we go further around the
ring. So it's not valid to return before checking for that.

This commit simply removes the early return, as it provides a minimal
savings even when it's correct.
2017-04-26 15:45:27 +02:00
Thierry FOURNIER
e068b60605 CLEANUP: lua: remove test
The man of "luaL_unref" says "If ref is LUA_NOREF or LUA_REFNIL,
luaL_unref does nothing.", so I remove the check.
2017-04-26 15:13:18 +02:00
Thierry FOURNIER
f326767711 BUG/MEDIUM: lua: memory leak
The priv context is not cleaned when we set a new priv context.
This is caused by a swap between two parameters of the
luaL_unref() function.

Workaround: use set_priv only once when processing a stream.

This patch should be backported to versions 1.7 and 1.6.
2017-04-26 15:13:18 +02:00
Frédéric Lécaille
72ed4758d6 MINOR: server: Add server_template_init() function to initialize servers from templates.
This patch adds the server_template_init() function used to initialize servers
from server templates. It is called just after having parsed a 'server-template'
line.
2017-04-21 15:42:10 +02:00
Frédéric Lécaille
b82f742b78 MINOR: server: Add 'server-template' new keyword supported in backend sections.
This patch makes backend sections support the new 'server-template' keyword.
Such 'server-template' objects are parsed similarly to a 'server' object
by the parse_server() function, but their first arguments are as follows:
    server-template <ID prefix> <nb | range> <ip | fqdn>:<port> ...

The remaining arguments are the same as for 'server' lines.

With such server template declarations, servers may be allocated with IDs
built from <ID prefix> and <nb | range> arguments.

For instance declaring:
    server-template foo 1-5 google.com:80 ...
or
    server-template foo 5 google.com:80 ...

would be equivalent to declaring:
    server foo1 google.com:80 ...
    server foo2 google.com:80 ...
    server foo3 google.com:80 ...
    server foo4 google.com:80 ...
    server foo5 google.com:80 ...
2017-04-21 15:42:10 +02:00
Frédéric Lécaille
759ea98db2 MINOR: server: Extract the code which finalizes server initializations after 'server' lines parsing.
This patch moves the code responsible for finalizing server initializations
after having fully parsed a 'server' line (health check, agent check and SNI
expression initializations) from parse_server() to new functions.
2017-04-21 15:42:10 +02:00
Frédéric Lécaille
58b207cdd5 MINOR: server: Extract the code responsible of copying default-server settings.
This patch moves the code responsible for copying default server settings
to a new server instance from the parse_server() function to new defsrv_*_cpy()
functions, which may be used both during server line parsing and during the
upcoming server template initializations.

These defsrv_*_cpy() functions do not reference anything other than default
server settings.
2017-04-21 15:42:10 +02:00
Frédéric Lécaille
daa2fe6621 BUG/MINOR: server: missing default server 'resolvers' setting duplication.
The 'resolvers' setting was not duplicated from the default server settings to
new server instances when parsing 'server' lines.
The fix is simple: strdup() the default resolvers <id> string argument after
having allocated a new server when parsing 'server' lines.

This patch must be backported to 1.7 and 1.6.
2017-04-21 15:42:09 +02:00
Christopher Faulet
9f724edbd8 BUG/MEDIUM: http: Drop the connection establishment when a redirect is performed
This bug occurs when a redirect rule is applied during the request analysis on a
persistent connection, on a proxy without any server. This means, in a frontend
section or in a listen/backend section with no "server" line.

Because the transaction processing is shortened, no server can be selected to
perform the connection. So if we try to establish it, this fails and a 503 error
is returned, while a 3XX was already sent. So, in this case, HAProxy generates 2
replies and only the first one is expected.

Here is the configuration snippet to easily reproduce the problem:

    listen www
        bind :8080
        mode http
        timeout connect 5s
        timeout client 3s
        timeout server 6s
        redirect location /

A simple HTTP/1.1 request without body will trigger the bug:

    $ telnet 0 8080
    Trying 0.0.0.0...
    Connected to 0.
    Escape character is '^]'.
    GET / HTTP/1.1

    HTTP/1.1 302 Found
    Cache-Control: no-cache
    Content-length: 0
    Location: /

    HTTP/1.0 503 Service Unavailable
    Cache-Control: no-cache
    Connection: close
    Content-Type: text/html

    <html><body><h1>503 Service Unavailable</h1>
    No server is available to handle this request.
    </body></html>
    Connection closed by foreign host.

[wt: only 1.8-dev is impacted though the bug is present in older ones]
2017-04-21 07:37:45 +02:00
Olivier Houchard
7d8e688953 BUG/MINOR: server: don't use "proxy" when px is really meant.
In server_parse_sni_expr(), we use the "proxy" global variable, when we
should probably be using "px" given as an argument.
It happens to work by accident right now, but may not in the future.

[wt: better backport it]
2017-04-20 19:51:10 +02:00
Willy Tarreau
b83dc3d2ef MEDIUM: config: don't check config validity when there are fatal errors
Overall we do have an issue with the severity of a number of errors. Most
fatal errors are reported with ERR_FATAL (which prevents startup) and not
ERR_ABORT (which stops parsing ASAP), but check_config_validity() is still
called on ERR_FATAL, and will most of the time report bogus errors. This
is what caused smp_resolve_args() to be called on a number of unparsable
ACLs, and it also is what reports incorrect ordering or unresolvable
section names when certain entries could not be properly parsed.

This patch stops this domino effect by simply aborting before trying to
further check and resolve the configuration when it's already known that
there are fatal errors.

A concrete example comes from this config :

  userlist users :
      user foo insecure-password bar

  listen foo
      bind :1234
      mode htttp
      timeout client 10S
      timeout server 10s
      timeout connect 10s
      stats uri /stats
      stats http-request auth unless { http_auth(users) }
      http-request redirect location /index.html if { path / }

It contains a colon after the userlist name, a typo in the client timeout value,
and another one in "mode http", which causes some other configuration elements
not to be properly handled.

Previously it would confusingly report :

  [ALERT] 108/114851 (20224) : parsing [err-report.cfg:1] : 'userlist' cannot handle unexpected argument ':'.
  [ALERT] 108/114851 (20224) : parsing [err-report.cfg:6] : unknown proxy mode 'htttp'.
  [ALERT] 108/114851 (20224) : parsing [err-report.cfg:7] : unexpected character 'S' in 'timeout client'
  [ALERT] 108/114851 (20224) : Error(s) found in configuration file : err-report.cfg
  [ALERT] 108/114851 (20224) : parsing [err-report.cfg:11] : unable to find userlist 'users' referenced in arg 1 of ACL keyword 'http_auth' in proxy 'foo'.
  [WARNING] 108/114851 (20224) : config : missing timeouts for proxy 'foo'.
     | While not properly invalid, you will certainly encounter various problems
     | with such a configuration. To fix this, please ensure that all following
     | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
  [WARNING] 108/114851 (20224) : config : 'stats' statement ignored for proxy 'foo' as it requires HTTP mode.
  [WARNING] 108/114851 (20224) : config : 'http-request' rules ignored for proxy 'foo' as they require HTTP mode.
  [ALERT] 108/114851 (20224) : Fatal errors found in configuration.

The "requires HTTP mode" errors are just pollution resulting from the
improper spelling of this mode earlier. The unresolved reference to the
userlist is caused by the extra colon on the declaration, and the warning
regarding the missing timeouts is caused by the wrong character.

Now it more accurately reports :

  [ALERT] 108/114900 (20225) : parsing [err-report.cfg:1] : 'userlist' cannot handle unexpected argument ':'.
  [ALERT] 108/114900 (20225) : parsing [err-report.cfg:6] : unknown proxy mode 'htttp'.
  [ALERT] 108/114900 (20225) : parsing [err-report.cfg:7] : unexpected character 'S' in 'timeout client'
  [ALERT] 108/114900 (20225) : Error(s) found in configuration file : err-report.cfg
  [ALERT] 108/114900 (20225) : Fatal errors found in configuration.

Despite not really a fix, this patch should be backported at least to 1.7,
possibly even 1.6, and 1.5 since it hardens the config parser against
certain bad situations like the recently reported use-after-free and the
last null dereference.
2017-04-19 11:49:11 +02:00
Willy Tarreau
bcfe23a7ec BUG/MEDIUM: acl: properly release unused args in prune_acl_expr()
Stephan Zeisberg reported another dirty abort case which can be triggered
with this simple config (where file "d" doesn't exist) :

    backend b1
        stats  auth a:b
        acl auth_ok http_auth(c) -f d

This issue was brought in 1.5-dev9 by commit 34db108 ("MAJOR: acl: make use
of the new argument parsing framework") when prune_acl_expr() started to
release arguments. The arg pointer is set to NULL but not its length.
Because of this, later in smp_resolve_args(), the argument is still seen
as valid (since only a test on the length is made as in all other places),
and the NULL pointer is dereferenced.

This patch properly clears the lengths to avoid such tests.

This fix needs to be backported to 1.7, 1.6, and 1.5.
2017-04-19 11:31:44 +02:00
Jim Freeman
a2278c8bbb CLEANUP: logs: typo: simgle => single
Typo in error message. Backport to 1.7.
2017-04-18 14:52:07 +02:00
Frédéric Lécaille
dfacd69b94 BUG/MAJOR: Broken parsing for valid keywords provided after 'source' setting.
No valid keyword could be parsed anymore if provided after the 'source' keyword.
This was due to the fact that 'source' takes a variable number of arguments.
So, as its parser srv_parse_source() is the only one which may know how many
arguments were provided after the 'source' keyword, it updates the 'cur_arg'
variable (the index in the line of the current argument to be parsed), and this
is a good thing.
This variable was also incremented by one (to skip the 'source' keyword).
This patch disables this behavior.

Should have come with dba9707 commit.
2017-04-16 18:13:06 +02:00
Frédéric Lécaille
8d083ed796 BUG/MINOR: server: Fix a wrong error message during 'usesrc' keyword parsing.
The 'usesrc' setting is not permitted on 'server' lines if not provided after
the 'source' setting. This is now also the case on 'default-server' lines.
Without this patch, the parse_server() parser reported 'usesrc' as an
unknown keyword.

Should have come with dba9707 commit.
2017-04-15 13:42:55 +02:00
Olivier Houchard
2c9744fe56 MINOR: systemd wrapper: add support for passing the -x option.
Make the systemd wrapper check if HAPROXY_STATS_SOCKET is set.
If set, it will use it as an argument to the "-x" option, which makes
haproxy ask for any listening sockets on the stats socket, in order
to achieve reloads with no new connections lost.
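
A hedged sketch of wiring this through a systemd drop-in (paths are examples only):

    # /etc/systemd/system/haproxy.service.d/stats-socket.conf
    [Service]
    Environment=HAPROXY_STATS_SOCKET=/var/run/haproxy.sock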
2017-04-13 19:15:17 +02:00
Olivier Houchard
547408787f MINOR: socket transfer: Set a timeout on the socket.
Make sure we're not stuck forever by setting a timeout on the socket.
2017-04-13 19:15:17 +02:00
Olivier Houchard
1fc0516516 MINOR: proxy: Don't close FDs if not our proxy.
When running with multiple processes, if some proxies are assigned only
to some processes, the other processes will just close the file descriptors
of the listening sockets. However, we may still have to provide those
sockets when reloading, so instead we just try hard to pretend those proxies
are dead, while keeping the sockets open.
A new global option, "no-reused-socket", has been added to restore the old
behavior of closing the sockets not bound to this process.
2017-04-13 19:15:17 +02:00
Olivier Houchard
153659f1ae MINOR: tcp: When binding socket, attempt to reuse one from the old proc.
Try to reuse any socket from the old process, provided by the "-x" flag,
before binding a new one, assuming it is compatible.
"Compatible" here means same address and port, same namespace if any,
same interface if any, and that the following flags are the same:
LI_O_FOREIGN, LI_O_V6ONLY and LI_O_V4V6.
Also change tcp_bind_listener() to always enable/disable socket options,
instead of just doing so if they are in the configuration file, as an option
may have been removed, e.g. TCP_FASTOPEN may have been set in the old process
and removed from the new configuration, so we have to disable it.
2017-04-13 19:15:17 +02:00
Olivier Houchard
f73629d23a MINOR: global: Add an option to get the old listening sockets.
Add the "-x" flag, that takes a path to a unix socket as an argument. If
used, haproxy will connect to the socket, and asks to get all the
listening sockets from the old process. Any failure is fatal.
This is needed to get seamless reloads on linux.
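
A hedged reload sketch, assuming the old process exposes a stats socket allowed to pass its listeners (paths are examples only):

    $ haproxy -f /etc/haproxy/haproxy.cfg -x /var/run/haproxy.sock -sf $(cat /var/run/haproxy.pid)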
2017-04-13 19:15:17 +02:00
Olivier Houchard
f886e3478d MINOR: cli: Add a command to send listening sockets.
Add a new command that will send all the listening sockets and their
properties via the stats socket.
This is a first step to work around the Linux problem when reloading
haproxy.
2017-04-13 19:15:17 +02:00
Willy Tarreau
42ef75fb84 MINOR: lua: ensure the memory allocator is used all the time
luaL_newstate() uses malloc() to initialize the first objects, and only
after this we replace the allocator. This creates trouble when replacing
the standard memory allocators during debugging sessions since the new
allocator is used to realloc() an area previously allocated using the
default malloc().

Lua provides lua_newstate() in addition to luaL_newstate(), which takes
an allocator for the initial malloc. This is exactly what we need, and
this patch does this and fixes the problem. The now useless call to
lua_setallocf() could be removed.

This has no impact outside of debugging sessions and there's no need to
backport this.
2017-04-13 17:10:15 +02:00
Willy Tarreau
04bf98149b BUG/MEDIUM: servers: unbreak server weight propagation
This reverts commit 266b1a8 ("MEDIUM: server: Inherit CLI weight changes and
agent-check weight responses") from Michal Idzikowski, which is still broken.
It stops propagating weights at the first error encountered, leaving servers
in a random state depending on what LB algorithms are used on other servers
tracking the one experiencing the weight change. It's unsure what the best
way to address this is, but we cannot leave the servers in an inconsistent
state between farms. For example :

  backend site1
      mode http
      balance uri
      hash-type consistent
      server s1 127.0.0.1:8001 weight 10 track servers/s1

  backend site2
      mode http
      balance uri
      server s1 127.0.0.1:8001 weight 10 track servers/s1

  backend site3
      mode http
      balance uri
      hash-type consistent
      server s1 127.0.0.1:8001 weight 10 track servers/s1

  backend servers
      server s1 127.0.0.1:8001 weight 10 check inter 1s

The weight change is applied on "servers/s1". It tries to propagate
to the servers tracking it, which are site1/s1, site2/s1 and site3/s1.
Let's say that "weight 50%" is requested. The servers are linked in
reverse-order, so the change is applied to "servers/s1", then to
"site3/s1", then to "site2/s1" and this one fails and rejects the
change. The change is aborted and never propagated to "site1/s1",
which keeps the server in a different state from "site3/s1". At the
very least, in case of error, the changes should probably be unrolled.

Also the error reported on the CLI (when changing from the CLI) simply says :

  Backend is using a static LB algorithm and only accepts weights '0%' and '100%'.

Without more indications what the faulty backend is.

Let's revert this change for now, as initially feared it will definitely
cause more harm than good and at least needs to be revisited. It was never
backported to any stable branch so no backport is needed.
2017-04-13 15:09:26 +02:00
Willy Tarreau
145325e59d BUG/MEDIUM: acl: don't free unresolved args in prune_acl_expr()
In case of error it's very difficult to properly unroll the list of
unresolved args because the error can appear on any argument, and all
of them share the same memory area, pointed to by one or multiple links
from the global args list. The problem is that till now the arguments
themselves were released and were not unlinked from the list, causing
all forms of corruption in deinit() when quitting on the error path if
an argument couldn't properly parse.

A few attempts at trying to selectively spot the appropriate list entries
to kill before releasing the shared area have only resulted in complicating
the code and pushing the issue further.

Here instead we use a simple conservative approach : prune_acl_expr()
only tries to free the argument array if none of the arguments were
unresolved, which means that none of them was added to the arg list.

It's unclear what a better approach would be. We could imagine that
args would point to their own location in the shared list but given
that this extra cost and complexity would be added exclusively in
order to cleanly release everything when we're exiting due to a config
parse error, this seems quite overkill.

This bug was noticed on 1.7 and likely affects 1.6 and 1.5, so the fix
should be backported. It's not easy to reproduce it, as the reproducers
randomly work depending on how memory is allocated. One way to do it is
to use parsable and non-parsable patterns on an ACL making use of args.

Big thanks to Stephan Zeisberg for reporting this problem with a working
reproducer.
2017-04-13 12:20:52 +02:00
Willy Tarreau
0622f02b5a BUG/MEDIUM: arg: ensure that we properly unlink unresolved arguments on error
If make_arg_list() fails to process an argument after having queued an
unresolvable one, it frees the allocated argument list but doesn't remove
the referenced args from the arg list. This causes a use after free or a
double free if the same location was reused, during the deinit phase upon
exit after reporting the error.

Since it's not easy to properly unlink all elements, we only release the
args block if none of them was queued in the list.
2017-04-13 12:20:52 +02:00
Michal Idzikowski
266b1a8336 MEDIUM: server: Inherit CLI weight changes and agent-check weight responses
When an agent-check or a CLI command executes a relative weight change, this
patch propagates it to the tracking servers, allowing many backends running on
the same server underneath to be grouped. Additionally, in the case of many
source IPs, many backends can share a state checker, so there won't be
unnecessary health checks.

[wt: Note: this will induce some behaviour change on some setups]
2017-04-13 11:31:38 +02:00
Willy Tarreau
a9e2e4b899 BUG/MINOR: arg: don't try to add an argument on failed memory allocation
Take care of arg_list_clone() returning NULL in arg_list_add() since
the former does it too. It's only used during parsing so the impact
is very low.

Can be backported to 1.7, 1.6 and 1.5.
2017-04-12 23:23:43 +02:00
Willy Tarreau
1822e8c356 BUG/MINOR: config: missing goto out after parsing an incorrect ACL character
The error doesn't prevent checking for other errors after an invalid
character was detected in an ACL name. Better quit ASAP to avoid risking
to emit garbled and confusing error messages if something else fails on
the same line.

This should be backported to 1.7, 1.6 and 1.5.
2017-04-12 18:57:04 +02:00
Frédéric Lécaille
5e5bc9fc23 BUG/MINOR: dns: Wrong address family used when creating IPv6 sockets.
AF_INET address family was always used to create sockets to connect
to name servers. This prevented any connection over IPv6 from working.

This fix must be backported to 1.7 and 1.6.
2017-04-11 20:02:21 +02:00
Willy Tarreau
73459797fd BUILD/MINOR: tools: fix build warning in debug_hexdump()
Commit 0ebb511 ("MINOR: tools: add a generic hexdump function for debugging")
introduced debug_hexdump() which is used to dump a memory area during
debugging sessions. This function can start at an unaligned offset and
uses a signed comparison to know where to start dumping from. But the
operation mixes signed and unsigned, making the test incorrect and causing
the following warnings to be emitted under Clang :

  src/standard.c:3775:14: warning: comparison of unsigned expression >= 0 is
        always true [-Wtautological-compare]
                          if (b + j >= 0 && b + j < len)
                              ~~~~~ ^  ~

Make "j" signed instead. At the moment this function is not used at all
so there's no impact. Thanks to Dmitry Sivachenko for reporting it. No
backport is needed.
2017-04-11 08:01:17 +02:00
Willy Tarreau
9d7fb63e33 BUILD/MINOR: stats: remove unexpected argument to stats_dump_json_header()
Commit 05ee213 ("MEDIUM: stats: Add JSON output option to show (info|stat)")
used to pass argument "uri" to the aforementionned function which doesn't
take any. It's probably a leftover from multiple iterations of the same
patchset. Spotted by Dmitry Sivachenko. No backport is needed.
2017-04-11 07:54:45 +02:00
David Carlier
3a471935e6 BUG/MINOR: server : no transparent proxy for DragonflyBSD
IP*_BINDANY is not defined on this system, thus access to those fields
must be made conditional, since CONFIG_HAP_TRANSPARENT is not defined.
[wt: problem introduced late in 1.8-dev. The same fix was also reported
  by Steven Davidovitz]
2017-04-10 15:27:46 +02:00
Olivier Houchard
b4a2d5e19a MINOR: server: Restrict dynamic cookie check to the same proxy.
Each time we generate a dynamic cookie, we try to make sure the same
cookie hasn't been generated for another server; it's very unlikely, but
it may happen.
We only have to check that for the servers in the same proxy, no need to
check in the others; plus the code was buggy and would always check in the
first proxy of the proxy list.
2017-04-10 15:20:11 +02:00
David Carlier
6f1820864b CLEANUP: server: moving netinet/tcp.h inclusion
netinet/tcp.h needs sys/types.h for u_int* types usage,
issue found while building on OpenBSD.
2017-04-04 08:22:48 +02:00
Willy Tarreau
7b677265fd [RELEASE] Released version 1.8-dev1
Released version 1.8-dev1 with the following main changes :
    - BUG/MEDIUM: proxy: return "none" and "unknown" for unknown LB algos
    - BUG/MINOR: stats: make field_str() return an empty string on NULL
    - DOC: Spelling fixes
    - BUG/MEDIUM: http: Fix tunnel mode when the CONNECT method is used
    - BUG/MINOR: http: Keep the same behavior between 1.6 and 1.7 for tunneled txn
    - BUG/MINOR: filters: Protect args in macros HAS_DATA_FILTERS and IS_DATA_FILTER
    - BUG/MINOR: filters: Invert evaluation order of HTTP_XFER_BODY and XFER_DATA analyzers
    - BUG/MINOR: http: Call XFER_DATA analyzer when HTTP txn is switched in tunnel mode
    - BUG/MAJOR: stream: fix session abort on resource shortage
    - OPTIM: stream-int: don't disable polling anymore on DONT_READ
    - BUG/MINOR: cli: allow the backslash to be escaped on the CLI
    - BUG/MEDIUM: cli: fix "show stat resolvers" and "show tls-keys"
    - DOC: Fix map table's format
    - DOC: Added 51Degrees conv and fetch functions to documentation.
    - BUG/MINOR: http: don't send an extra CRLF after a Set-Cookie in a redirect
    - DOC: mention that req_tot is for both frontends and backends
    - BUG/MEDIUM: variables: some variable name can hide another ones
    - MINOR: lua: Allow argument for actions
    - BUILD: rearrange target files by build time
    - CLEANUP: hlua: just indent functions
    - MINOR: lua: give HAProxy variable access to the applets
    - BUG/MINOR: stats: fix be/sessions/max output in html stats
    - MINOR: proxy: Add fe_name/be_name fetchers next to existing fe_id/be_id
    - DOC: lua: Documentation about some entry missing
    - DOC: lua: Add documentation about variable manipulation from applet
    - MINOR: Do not forward the header "Expect: 100-continue" when the option http-buffer-request is set
    - DOC: Add undocumented argument of the trace filter
    - DOC: Fix some typo in SPOE documentation
    - MINOR: cli: Remove useless call to bi_putchk
    - BUG/MINOR: cli: be sure to always warn the cli applet when input buffer is full
    - MINOR: applet: Count number of (active) applets
    - MINOR: task: Rename run_queue and run_queue_cur counters
    - BUG/MEDIUM: stream: Save unprocessed events for a stream
    - BUG/MAJOR: Fix how the list of entities waiting for a buffer is handled
    - BUILD/MEDIUM: Fixing the build using LibreSSL
    - BUG/MEDIUM: lua: In some case, the return of sample-fetches is ignored (2)
    - SCRIPTS: git-show-backports: fix a harmless typo
    - SCRIPTS: git-show-backports: add -H to use the hash of the commit message
    - BUG/MINOR: stream-int: automatically release SI_FL_WAIT_DATA on SHUTW_NOW
    - CLEANUP: applet/lua: create a dedicated ->fcn entry in hlua_cli context
    - CLEANUP: applet/table: add an "action" entry in ->table context
    - CLEANUP: applet: remove the now unused appctx->private field
    - DOC: lua: documentation about time parser functions
    - DOC: lua: improve links
    - DOC: lua: section declared twice
    - MEDIUM: cli: 'show cli sockets' list the CLI sockets
    - BUG/MINOR: cli: "show cli sockets" wouldn't list all processes
    - BUG/MINOR: cli: "show cli sockets" would always report process 64
    - CLEANUP: lua: rename one of the lua appctx union
    - BUG/MINOR: lua/cli: bad error message
    - MEDIUM: lua: use memory pool for hlua struct in applets
    - MINOR: lua/signals: Remove Lua part from signals.
    - DOC: cli: show cli sockets
    - MINOR: cli: automatically enable a CLI I/O handler when there's no parser
    - CLEANUP: memory: remove the now unused cli_parse_show_pools() function
    - CLEANUP: applet: group all CLI contexts together
    - CLEANUP: stats: move a misplaced stats context initialization
    - MINOR: cli: add two general purpose pointers and integers in the CLI struct
    - MINOR: appctx/cli: remove the cli_socket entry from the appctx union
    - MINOR: appctx/cli: remove the env entry from the appctx union
    - MINOR: appctx/cli: remove the "be" entry from the appctx union
    - MINOR: appctx/cli: remove the "dns" entry from the appctx union
    - MINOR: appctx/cli: remove the "server_state" entry from the appctx union
    - MINOR: appctx/cli: remove the "tlskeys" entry from the appctx union
    - CONTRIB: tcploop: add limits.h to fix build issue with some compilers
    - MINOR/DOC: lua: just precise one thing
    - DOC: fix small typo in fe_id (backend instead of frontend)
    - BUG/MINOR: Fix the sending function in Lua's cosocket
    - BUG/MINOR: lua: memory leak executing tasks
    - BUG/MINOR: lua: bad return code
    - BUG/MINOR: lua: memleak when Lua/cli fails
    - MEDIUM: lua: remove Lua struct from session, and allocate it with memory pools
    - CLEANUP: haproxy: statify unexported functions
    - MINOR: haproxy: add a registration for build options
    - CLEANUP: wurfl: use the build options list to report it
    - CLEANUP: 51d: use the build options list to report it
    - CLEANUP: da: use the build options list to report it
    - CLEANUP: namespaces: use the build options list to report it
    - CLEANUP: tcp: use the build options list to report transparent modes
    - CLEANUP: lua: use the build options list to report it
    - CLEANUP: regex: use the build options list to report the regex type
    - CLEANUP: ssl: use the build options list to report the SSL details
    - CLEANUP: compression: use the build options list to report the algos
    - CLEANUP: auth: use the build options list to report its support
    - MINOR: haproxy: add a registration for post-check functions
    - CLEANUP: checks: make use of the post-init registration to start checks
    - CLEANUP: filters: use the function registration to initialize all proxies
    - CLEANUP: wurfl: make use of the late init registration
    - CLEANUP: 51d: make use of the late init registration
    - CLEANUP: da: make use of the late init registration code
    - MINOR: haproxy: add a registration for post-deinit functions
    - CLEANUP: wurfl: register the deinit function via the dedicated list
    - CLEANUP: 51d: register the deinitialization function
    - CLEANUP: da: register the deinitialization function
    - CLEANUP: wurfl: move global settings out of the global section
    - CLEANUP: 51d: move global settings out of the global section
    - CLEANUP: da: move global settings out of the global section
    - MINOR: cfgparse: add two new functions to check arguments count
    - MINOR: cfgparse: move parsing of "ca-base" and "crt-base" to ssl_sock
    - MEDIUM: cfgparse: move all tune.ssl.* keywords to ssl_sock
    - MEDIUM: cfgparse: move maxsslconn parsing to ssl_sock
    - MINOR: cfgparse: move parsing of ssl-default-{bind,server}-ciphers to ssl_sock
    - MEDIUM: cfgparse: move ssl-dh-param-file parsing to ssl_sock
    - MEDIUM: compression: move the zlib-specific stuff from global.h to compression.c
    - BUG/MEDIUM: ssl: properly reset the reused_sess during a forced handshake
    - BUG/MEDIUM: ssl: avoid double free when releasing bind_confs
    - BUG/MINOR: stats: fix be/sessions/current out in typed stats
    - MINOR: tcp-rules: check that the listener exists before updating its counters
    - MEDIUM: spoe: don't create a dummy listener for outgoing connections
    - MINOR: listener: move the transport layer pointer to the bind_conf
    - MEDIUM: move listener->frontend to bind_conf->frontend
    - MEDIUM: ssl: remove the proxy argument from most functions
    - MINOR: connection: add a new prepare_bind_conf() entry to xprt_ops
    - MEDIUM: ssl_sock: implement ssl_sock_prepare_bind_conf()
    - MINOR: connection: add a new destroy_bind_conf() entry to xprt_ops
    - MINOR: ssl_sock: implement ssl_sock_destroy_bind_conf()
    - MINOR: server: move the use_ssl field out of the ifdef USE_OPENSSL
    - MINOR: connection: add a minimal transport layer registration system
    - CLEANUP: connection: remove all direct references to raw_sock and ssl_sock
    - CLEANUP: connection: unexport raw_sock and ssl_sock
    - MINOR: connection: add new prepare_srv()/destroy_srv() entries to xprt_ops
    - MINOR: ssl_sock: implement and use prepare_srv()/destroy_srv()
    - CLEANUP: ssl: move tlskeys_finalize_config() to a post_check callback
    - CLEANUP: ssl: move most ssl-specific global settings to ssl_sock.c
    - BUG/MINOR: backend: nbsrv() should return 0 if backend is disabled
    - BUG/MEDIUM: ssl: for a handshake when server-side SNI changes
    - BUG/MINOR: systemd: potential zombie processes
    - DOC: Add timings events schemas
    - BUILD: lua: build failed on FreeBSD.
    - MINOR: samples: add xx-hash functions
    - MEDIUM: regex: pcre2 support
    - BUG/MINOR: option prefer-last-server must be ignored in some case
    - MINOR: stats: Support "select all" for backend actions
    - BUG/MINOR: sample-fetches/stick-tables: bad type for the sample fetches sc*_get_gpt0
    - BUG/MAJOR: channel: Fix the definition order of channel analyzers
    - BUG/MINOR: http: report real parser state in error captures
    - BUILD: scripts: automatically update the branch in version.h when releasing
    - MINOR: tools: add a generic hexdump function for debugging
    - BUG/MAJOR: http: fix risk of getting invalid reports of bad requests
    - MINOR: http: custom status reason.
    - MINOR: connection: add sample fetch "fc_rcvd_proxy"
    - BUG/MINOR: config: emit a warning if http-reuse is enabled with incompatible options
    - BUG/MINOR: tools: fix off-by-one in port size check
    - BUG/MEDIUM: server: consider AF_UNSPEC as a valid address family
    - MEDIUM: server: split the address and the port into two different fields
    - MINOR: tools: make str2sa_range() return the port in a separate argument
    - MINOR: server: take the destination port from the port field, not the addr
    - MEDIUM: server: disable protocol validations when the server doesn't resolve
    - BUG/MEDIUM: tools: do not force an unresolved address to AF_INET:0.0.0.0
    - BUG/MINOR: ssl: EVP_PKEY must be freed after X509_get_pubkey usage
    - BUG/MINOR: ssl: assert on SSL_set_shutdown with BoringSSL
    - MINOR: Use "500 Internal Server Error" for 500 error/status code message.
    - MINOR: proto_http.c 502 error txt typo.
    - DOC: add deprecation notice to "block"
    - MINOR: compression: fix -vv output without zlib/slz
    - BUG/MINOR: Reset errno variable before calling strtol(3)
    - MINOR: ssl: don't show prefer-server-ciphers output
    - OPTIM/MINOR: config: Optimize fullconn automatic computation loading configuration
    - BUG/MINOR: stream: Fix how backend-specific analyzers are set on a stream
    - MAJOR: ssl: bind configuration per certificate
    - MINOR: ssl: add curve suite for ECDHE negotiation
    - MINOR: checks: Add agent-addr config directive
    - MINOR: cli: Add possibility to change agent config via CLI/socket
    - MINOR: doc: Add docs for agent-addr configuration variable
    - MINOR: doc: Add docs for agent-addr and agent-send CLI commands
    - BUILD: ssl: fix to build (again) with boringssl
    - BUILD: ssl: fix build on OpenSSL 1.0.0
    - BUILD: ssl: silence a warning reported for ERR_remove_state()
    - BUILD: ssl: eliminate warning with OpenSSL 1.1.0 regarding RAND_pseudo_bytes()
    - BUILD: ssl: kill a build warning introduced by BoringSSL compatibility
    - BUG/MEDIUM: tcp: don't poll for write when connect() succeeds
    - BUG/MINOR: unix: fix connect's polling in case no data are scheduled
    - MINOR: server: extend the flags to 32 bits
    - BUG/MINOR: lua: Map.end are not reliable because "end" is a reserved keyword
    - MINOR: dns: give ability to dns_init_resolvers() to close a socket when requested
    - BUG/MAJOR: dns: restart sockets after fork()
    - MINOR: chunks: implement a simple dynamic allocator for trash buffers
    - BUG/MEDIUM: http: prevent redirect from overwriting a buffer
    - BUG/MEDIUM: filters: Do not truncate HTTP response when body length is undefined
    - BUG/MEDIUM: http: Prevent replace-header from overwriting a buffer
    - BUG/MINOR: http: Return an error when a replace-header rule failed on the response
    - BUG/MINOR: sendmail: The return of vsnprintf is not cleanly tested
    - BUG/MAJOR: ssl: fix a regression in ssl_sock_shutw()
    - BUG/MAJOR: lua segmentation fault when the request is like 'GET ?arg=val HTTP/1.1'
    - BUG/MEDIUM: config: reject anything but "if" or "unless" after a use-backend rule
    - MINOR: http: don't close when redirect location doesn't start with "/"
    - MEDIUM: boringssl: support native multi-cert selection without bundling
    - BUG/MEDIUM: ssl: fix verify/ca-file per certificate
    - BUG/MEDIUM: ssl: switchctx should not return SSL_TLSEXT_ERR_ALERT_WARNING
    - MINOR: ssl: removes SSL_CTX_set_ssl_version call and cleanup CTX creation.
    - BUILD: ssl: fix build with -DOPENSSL_NO_DH
    - MEDIUM: ssl: add new sample-fetch which captures the cipherlist
    - MEDIUM: ssl: remove ssl-options from crt-list
    - BUG/MEDIUM: ssl: in bind line, ssl-options after 'crt' are ignored.
    - BUG/MINOR: ssl: fix cipherlist captures with sustainable SSL calls
    - MINOR: ssl: improved cipherlist captures
    - BUG/MINOR: spoe: Fix soft stop handler using a specific id for spoe filters
    - BUG/MINOR: spoe: Fix parsing of arguments in spoe-message section
    - MAJOR: spoe: Add support of pipelined and asynchronous exchanges with agents
    - MINOR: spoe: Add support for pipelining/async capabilities in the SPOA example
    - MINOR: spoe: Remove SPOE details from the appctx structure
    - MINOR: spoe: Add status code in error variable instead of hardcoded value
    - MINOR: spoe: Send a log message when an error occurred during event processing
    - MINOR: spoe: Check the scope of sample fetches used in SPOE messages
    - MEDIUM: spoe: Be sure to wakeup the good entity waiting for a buffer
    - MINOR: spoe: Use the min of all known max_frame_size to encode messages
    - MAJOR: spoe: Add support of payload fragmentation in NOTIFY frames
    - MINOR: spoe: Add support for fragmentation capability in the SPOA example
    - MAJOR: spoe: refactor the filter to clean up the code
    - MINOR: spoe: Handle NOTIFY frames cancellation using ABORT bit in ACK frames
    - REORG: spoe: Move struct and enum definitions in dedicated header file
    - REORG: spoe: Move low-level encoding/decoding functions in dedicated header file
    - MINOR: spoe: Improve implementation of the payload fragmentation
    - MINOR: spoe: Add support of negation for options in SPOE configuration file
    - MINOR: spoe: Add "pipelining" and "async" options in spoe-agent section
    - MINOR: spoe: Rely on alertif_too_many_arg during configuration parsing
    - MINOR: spoe: Add "send-frag-payload" option in spoe-agent section
    - MINOR: spoe: Add "max-frame-size" statement in spoe-agent section
    - DOC: spoe: Update SPOE documentation to reflect recent changes
    - MINOR: config: warn when some HTTP rules are used in a TCP proxy
    - BUG/MEDIUM: ssl: Clear OpenSSL error stack after trying to parse OCSP file
    - BUG/MEDIUM: cli: Prevent double free in CLI ACL lookup
    - BUG/MINOR: Fix "get map <map> <value>" CLI command
    - MINOR: Add nbsrv sample converter
    - CLEANUP: Replace repeated code to count usable servers with be_usable_srv()
    - MINOR: Add hostname sample fetch
    - CLEANUP: Remove comment that's no longer valid
    - MEDIUM: http_error_message: txn->status / http_get_status_idx.
    - MINOR: http-request tarpit deny_status.
    - CLEANUP: http: make http_server_error() not set the status anymore
    - MEDIUM: stats: Add JSON output option to show (info|stat)
    - MEDIUM: stats: Add show json schema
    - BUG/MAJOR: connection: update CO_FL_CONNECTED before calling the data layer
    - MINOR: server: Add dynamic session cookies.
    - MINOR: cli: Let configure the dynamic cookies from the cli.
    - BUG/MINOR: checks: attempt clean shutw for SSL check
    - CONTRIB: tcploop: make it build on FreeBSD
    - CONTRIB: tcploop: fix time format to silence build warnings
    - CONTRIB: tcploop: report action 'K' (kill) in usage message
    - CONTRIB: tcploop: fix connect's address length
    - CONTRIB: tcploop: use the trash instead of NULL for recv()
    - BUG/MEDIUM: listener: do not try to rebind another process' socket
    - BUG/MEDIUM: server: Fix crash when dynamic is defined, but no key is provided.
    - CLEANUP: config: Typo in comment.
    - BUG/MEDIUM: filters: Fix channels synchronization in flt_end_analyze
    - TESTS: add a test configuration to stress handshake combinations
    - BUG/MAJOR: stream-int: do not depend on connection flags to detect connection
    - BUG/MEDIUM: connection: ensure to always report the end of handshakes
    - MEDIUM: connection: don't test for CO_FL_WAKE_DATA
    - CLEANUP: connection: completely remove CO_FL_WAKE_DATA
    - BUG: payload: fix payload not retrieving arbitrary lengths
    - BUILD: ssl: simplify SSL_CTX_set_ecdh_auto compatibility
    - BUILD: ssl: fix OPENSSL_NO_SSL_TRACE for boringssl and libressl
    - BUG/MAJOR: http: fix typo in http_apply_redirect_rule
    - MINOR: doc: 2.4. Examples should be 2.5. Examples
    - BUG/MEDIUM: stream: fix client-fin/server-fin handling
    - MINOR: fd: add a new flag HAP_POLL_F_RDHUP to struct poller
    - BUG/MINOR: raw_sock: always perform the last recv if RDHUP is not available
    - OPTIM: poll: enable support for POLLRDHUP
    - MINOR: kqueue: exclusively rely on the kqueue returned status
    - MEDIUM: kqueue: take care of EV_EOF to improve polling status accuracy
    - MEDIUM: kqueue: only set FD_POLL_IN when there are pending data
    - DOC/MINOR: Fix typos in proxy protocol doc
    - DOC: Protocol doc: add checksum, TLV type ranges
    - DOC: Protocol doc: add SSL TLVs, rename CHECKSUM
    - DOC: Protocol doc: add noop TLV
    - MEDIUM: global: add a 'hard-stop-after' option to cap the soft-stop time
    - MINOR: dns: improve DNS response parsing to use as many available records as possible
    - BUG/MINOR: cfgparse: loop in tracked servers lists not detected by check_config_validity().
    - MINOR: server: irrelevant error message with 'default-server' config file keyword.
    - MINOR: server: Make 'default-server' support 'backup' keyword.
    - MINOR: server: Make 'default-server' support 'check-send-proxy' keyword.
    - CLEANUP: server: code alignment.
    - MINOR: server: Make 'default-server' support 'non-stick' keyword.
    - MINOR: server: Make 'default-server' support 'send-proxy' and 'send-proxy-v2' keywords.
    - MINOR: server: Make 'default-server' support 'check-ssl' keyword.
    - MINOR: server: Make 'default-server' support 'force-sslv3' and 'force-tlsv1[0-2]' keywords.
    - CLEANUP: server: code alignment.
    - MINOR: server: Make 'default-server' support 'no-ssl*' and 'no-tlsv*' keywords.
    - MINOR: server: Make 'default-server' support 'ssl' keyword.
    - MINOR: server: Make 'default-server' support 'send-proxy-v2-ssl*' keywords.
    - CLEANUP: server: code alignment.
    - MINOR: server: Make 'default-server' support 'verify' keyword.
    - MINOR: server: Make 'default-server' support 'verifyhost' setting.
    - MINOR: server: Make 'default-server' support 'check' keyword.
    - MINOR: server: Make 'default-server' support 'track' setting.
    - MINOR: server: Make 'default-server' support 'ca-file', 'crl-file' and 'crt' settings.
    - MINOR: server: Make 'default-server' support 'redir' keyword.
    - MINOR: server: Make 'default-server' support 'observe' keyword.
    - MINOR: server: Make 'default-server' support 'cookie' keyword.
    - MINOR: server: Make 'default-server' support 'ciphers' keyword.
    - MINOR: server: Make 'default-server' support 'tcp-ut' keyword.
    - MINOR: server: Make 'default-server' support 'namespace' keyword.
    - MINOR: server: Make 'default-server' support 'source' keyword.
    - MINOR: server: Make 'default-server' support 'sni' keyword.
    - MINOR: server: Make 'default-server' support 'addr' keyword.
    - MINOR: server: Make 'default-server' support 'disabled' keyword.
    - MINOR: server: Add 'no-agent-check' server keyword.
    - DOC: server: Add docs for "server" and "default-server" new "no-*" and other settings.
    - MINOR: doc: fix use-server example (imap vs mail)
    - BUG/MEDIUM: tcp: don't require privileges to bind to device
    - BUILD: make the release script use shortlog for the final changelog
    - BUILD: scripts: fix typo in announce-release error message
    - CLEANUP: time: curr_sec_ms doesn't need to be exported
    - BUG/MEDIUM: server: Wrong server default CRT filenames initialization.
    - BUG/MEDIUM: peers: fix buffer overflow control in intdecode.
    - BUG/MEDIUM: buffers: Fix how input/output data are injected into buffers
    - BUG/MINOR: http: Fix conditions to clean up a txn and to handle the next request
    - CLEANUP: http: Remove channel_congested function
    - CLEANUP: buffers: Remove buffer_bounce_realign function
    - CLEANUP: buffers: Remove buffer_contig_area and buffer_work_area functions
    - MINOR: http: remove useless check on HTTP_MSGF_XFER_LEN for the request
    - MINOR: http: Add debug messages when HTTP body analyzers are called
    - BUG/MEDIUM: http: Fix blocked HTTP/1.0 responses when compression is enabled
    - BUG/MINOR: filters: Don't force the stream's wakeup when we wait in flt_end_analyze
    - DOC: fix parenthesis and add missing "Example" tags
    - DOC: update the contributing file
    - DOC: log-format/tcplog/httplog update
    - MINOR: config parsing: add warning when log-format/tcplog/httplog is overridden in "defaults" sections
2017-04-03 09:27:49 +02:00
Guillaume de Lafond
ea5b0e6fb7 MINOR: config parsing: add warning when log-format/tcplog/httplog is overridden in "defaults" sections
Add a warning when "log-format", "tcplog" or "httplog" is overridden in "defaults" sections.
2017-03-31 21:10:18 +02:00
Christopher Faulet
2b553de595 BUG/MINOR: filters: Don't force the stream's wakeup when we wait in flt_end_analyze
In flt_end_analyze, we wait until the analysis is finished for both the request
and the response. In this case, because of a task_wakeup, some streams can
consume a lot of CPU doing nothing. It is now the filter's responsibility
to decide whether this wakeup is needed.

This fix should be backported in 1.7.
2017-03-31 14:40:45 +02:00
Christopher Faulet
69744d92a3 BUG/MEDIUM: http: Fix blocked HTTP/1.0 responses when compression is enabled
When the compression filter is enabled, if an HTTP/1.0 response is received (no
content-length and no transfer-encoding), no data are forwarded to the client
because of a bug and the transaction is blocked indefinitely.

The bug comes from the fact that we need to synchronize the end of the request
and the response because of the compression filter. This inhibits the infinite
forwarding of data. But for these responses, the compression is not
activated, so the response body is not analyzed. This leads to a deadlock.

The solution is to enable the analysis of the response body in all cases and
handle this one to enable the infinite forwarding. All other cases should
already be handled.

This fix should be backported to 1.7.
2017-03-31 14:40:42 +02:00
Christopher Faulet
814d270360 MINOR: http: Add debug messages when HTTP body analyzers are called
Only DPRINTF() for developers.
2017-03-31 14:38:33 +02:00
Christopher Faulet
be821b9f40 MINOR: http: remove useless check on HTTP_MSGF_XFER_LEN for the request
The flag HTTP_MSGF_XFER_LEN is always set for an HTTP request because we always
know the body length, so there is no need to check it.
2017-03-31 14:38:33 +02:00
Christopher Faulet
aaf4a325ca CLEANUP: buffers: Remove buffer_bounce_realign function
Not used anymore since last commit.
2017-03-31 14:38:22 +02:00
Christopher Faulet
c0c672a2ab BUG/MINOR: http: Fix conditions to clean up a txn and to handle the next request
To finish an HTTP transaction and to start the new one, we check, among other
things, that there is enough space in the response buffer to possibly inject a
message during the parsing of the next request. Because these messages can reach
the maximum buffer size, it is mandatory to have an empty response
buffer. Remaining input data are trimmed during the txn cleanup (in
http_reset_txn), so we just need to check that the output data were flushed.

The current implementation depends on channel_congested, which only checks that
the reserved area is available. That is of course not good enough. There are
other tests on the response buffer in http_wait_for_request, but the conditions
to move on are almost the same. So we can imagine some scenarios where output
data remaining in the response buffer during the request parsing prevent any
message injection.

To fix this bug, we just wait until the output data are flushed before cleaning
up the HTTP txn (ie. s->res.buf->o == 0). In addition, in http_reset_txn we
realign the response buffer (note the buffer is empty at this step).

Thanks to these changes, there is no more need to set CF_EXPECT_MORE on the
response channel in http_end_txn_clean_session. And more importantly, there is
no more need to check the response buffer state in http_wait_for_request. This
removes a workaround on response analysers to handle HTTP pipelining.
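
In short, the new gating condition boils down to something like this (a
simplified sketch, not the exact code):

    /* only finish the transaction once the response buffer holds no more
     * output data; the buffer is realigned later in http_reset_txn() */
    if (s->res.buf->o == 0)
        http_end_txn_clean_session(s);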

This patch can be backported in 1.7, 1.6 and 1.5.
2017-03-31 14:36:20 +02:00
Christopher Faulet
637f8f2ca7 BUG/MEDIUM: buffers: Fix how input/output data are injected into buffers
The function buffer_contig_space is buggy and could lead to pernicious bugs
(never hit until now, AFAIK). This function should return the number of bytes
that can be written into the buffer at once (without wrapping).

First, this function is used to inject input data (bi_putblk) and to inject
output data (bo_putblk and bo_inject). But there is no context, so it cannot
decide where the contiguous space should be placed. For input data, it should be
after bi_end(buf) (ie, buf->p + buf->i modulo wrapping calculation). For output
data, it should be after bo_end(buf) (ie, buf->p) and input data are assumed not
to exist (else there is no space at all).

Then, considering we need to inject input data, this function does not always
return the right value. And when we need to inject output data, we must be sure
to have no input data at all (buf->i == 0), else the result can also be wrong
(but this is the caller's responsibility, so everything should be fine here).

The buffer can be in 3 different states:

 1) no wrapping

              <---- o ----><----- i ----->
 +------------+------------+-------------+------------+
 |            |oooooooooooo|iiiiiiiiiiiii|xxxxxxxxxxxx|
 +------------+------------+-------------+------------+
                           ^             <contig_space>
                           p             ^            ^
			                 l            r

 2) input wrapping

 ...--->            <---- o ----><-------- i -------...
 +-----+------------+------------+--------------------+
 |iiiii|xxxxxxxxxxxx|oooooooooooo|iiiiiiiiiiiiiiiiiiii|
 +-----+------------+------------+--------------------+
       <contig_space>            ^
       ^            ^            p
       l            r

 3) output wrapping

 ...------ o ------><----- i ----->            <----...
 +------------------+-------------+------------+------+
 |oooooooooooooooooo|iiiiiiiiiiiii|xxxxxxxxxxxx|oooooo|
 +------------------+-------------+------------+------+
                    ^             <contig_space>
                    p             ^            ^
		                  l            r

buffer_contig_space returns (l - r). Cases 1 and 3 are correctly handled. But in
the second case, r is wrong: it points to the buffer's end
(buf->data + buf->size) while it should be bo_end(buf) (ie, buf->p - buf->o).

To fix the bug, the function has been split. Now, bi_contig_space and
bo_contig_space should be used to know the contiguous space available to insert,
respectively, input data and output data. For bo_contig_space, input data are
assumed not to exist. The right version is used depending on what we want to do.
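
A minimal sketch of the intended semantics, using a simplified buffer layout
(structure and field names are illustrative, not the exact haproxy buffer API):

    #include <stddef.h>

    struct buf {
        char  *data;  /* start of storage */
        char  *p;     /* start of input data */
        size_t size;  /* total storage size */
        size_t i;     /* input bytes, stored from p (not yet forwarded) */
        size_t o;     /* output bytes, stored right before p (to be forwarded) */
    };

    /* contiguous room available to append input data, without wrapping */
    static size_t bi_contig_space_sketch(const struct buf *b)
    {
        size_t      room = b->size - b->i - b->o;   /* total free room */
        const char *end  = b->data + b->size;
        const char *in   = b->p + b->i;             /* where new input would land */

        if (in >= end)
            in -= b->size;                          /* input data already wrap */
        if (room > (size_t)(end - in))
            room = end - in;                        /* capped by the physical end */
        return room;
    }

    /* bo_contig_space() would be symmetric: start from p and assume i == 0 */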

In addition, to clarify the buffer's API, buffer_realign does not return a value
anymore, so it has the same API as buffer_slow_realign.

This patch can be backported in 1.7, 1.6 and 1.5.
2017-03-31 14:36:04 +02:00
Emeric Brun
18928af3a3 BUG/MEDIUM: peers: fix buffer overflow control in intdecode.
A buffer overflow could happen if an integer is badly encoded in
the data part of a message received from a peer. It should not happen
with authenticated peers (the handshake does not use this function).

This patch makes the code of the 'intdecode' function more robust.
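
The spirit of the change can be sketched with a generic bounds-checked
variable-length decoder (this is not the actual peers encoding, it only
illustrates that every byte read is checked against the end of the message):

    #include <stdint.h>

    /* returns 1 on success, 0 if the message is truncated */
    static int safe_intdecode(const unsigned char **msg, const unsigned char *end,
                              uint64_t *val)
    {
        const unsigned char *p = *msg;
        uint64_t v = 0;
        int shift = 0;

        do {
            if (p >= end)
                return 0;                 /* refuse to read past the message */
            v |= (uint64_t)(*p & 0x7F) << shift;
            shift += 7;
        } while ((*p++ & 0x80) && shift < 64);

        *msg = p;
        *val = v;
        return 1;
    }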

It also adds some comments about the intencode function.

This bug affects versions >= 1.6.
2017-03-30 12:12:46 +02:00
Frédéric Lécaille
acd4827eca BUG/MEDIUM: server: Wrong server default CRT filenames initialization.
This patch fixes a bug introduced by commit 5e57643, where server
default CRT filenames were initialized to the same value as server
default CRL filenames.
2017-03-29 16:37:56 +02:00
Willy Tarreau
351b3a1780 CLEANUP: time: curr_sec_ms doesn't need to be exported
It's not used anywhere outside of tv_update_date().
2017-03-29 15:24:33 +02:00
Willy Tarreau
19060a3020 BUG/MEDIUM: tcp: don't require privileges to bind to device
Ankit Malp reported a bug that we've had since binding to devices was
implemented. Haproxy wrongly checks that the process stays privileged
after startup when a binding to a device is specified via the bind
keyword "interface". This is wrong, because after startup we're not
binding any socket anymore, and during startup if there's a permission
issue it will be immediately reported ("permission denied"). More
importantly there's no way around it as the process exits on startup
when facing such an option.

This fix should be backported to 1.7, 1.6 and 1.5.
2017-03-27 16:22:59 +02:00
Frédéric Lécaille
6e0843c0e0 MINOR: server: Add 'no-agent-check' server keyword.
This patch adds the 'no-agent-check' setting, supported by both 'default-server'
and 'server' directives, to disable the agent check for a specific server which
would otherwise have 'agent-check' set as its default value (inherited from the
'default-server' 'agent-check' setting), or, on 'default-server' lines, to
disable the 'agent-check' setting as a default value for any further 'server'
declarations.

For instance, provided this configuration:

    default-server agent-check
    server srv1
    server srv2 no-agent-check
    server srv3
    default-server no-agent-check
    server srv4

srv1 and srv3 would have an agent check enabled contrary to srv2 and srv4.

We no longer allocate anything when parsing the 'default-server' 'agent-check'
setting.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
2a0d061a60 MINOR: server: Make 'default-server' support 'disabled' keyword.
Before this patch, only 'server' directives could support the 'disabled' setting.
This patch makes 'default-server' directives also support this setting.
It is used to disable a list of servers declared after a 'default-server' directive.
A new 'enabled' keyword has been added, supported both as a 'default-server' and
as a 'server' setting, to enable again a list of servers (declared after a
'default-server enabled' directive) or to explicitly enable a specific server
declared after a 'default-server disabled' directive.

For instance provided this configuration:

    default-server disabled
    server srv1...
    server srv2...
    server srv3... enabled
    server srv4... enabled

srv1 and srv2 are disabled and srv3 and srv4 enabled.

This is equivalent to this configuration:

    default-server disabled
    server srv1...
    server srv2...
    default-server enabled
    server srv3...
    server srv4...

even if it would have been preferable/shorter to declare:

    server srv3...
    server srv4...
    default-server disabled
    server srv1...
    server srv2...

as 'enabled' is the default server state.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
6e5e0d8f9e MINOR: server: Make 'default-server' support 'addr' keyword.
This patch makes 'default-server' support the 'addr' setting.
The code which was responsible for parsing the 'server' 'addr' setting
has been moved out of parse_server() into a new parser
callable both as a 'default-server' and as a 'server' 'addr' setting parser.

Should not break anything.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
9a146de934 MINOR: server: Make 'default-server' support 'sni' keyword.
This patch makes 'default-server' directives support the 'sni' setting.
A field 'sni_expr' has been added to 'struct server' to temporarily
store SNI expressions as strings while parsing both 'default-server' and 'server'
lines. So, to duplicate SNI expressions from the 'default-server' 'sni' setting
for new 'server' instances, we only have to "strdup" these strings, as is
often done for most of the 'server' settings.
Then, sample expressions are computed by calling sample_parse_expr() (only for
'server' instances).
A new function has been added to produce the same error output as before in case
of any error during 'sni' settings parsing (display_parser_err()).
Should not break anything.
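
For illustration, the inheritance could be used as follows (hypothetical
configuration; addresses are made up, and 'sni ssl_fc_sni' simply forwards the
SNI received from the client):

    backend bk_tls
        default-server ssl verify none sni ssl_fc_sni
        server s1 10.0.0.1:443
        server s2 10.0.0.2:443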
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
dba9707713 MINOR: server: Make 'default-server' support 'source' keyword.
Before this patch, only 'server' directives could support the 'source' setting.
This patch makes 'default-server' directives also support this setting.

To do so, we had to extract the code responsible for parsing the 'source' setting
arguments from the parse_server() function and make it callable both
as a 'default-server' and as a 'server' 'source' setting parser. So, the code is
mostly the same as before, except that before allocating anything for
'struct conn_src' members, we must free the memory previously allocated.

Should not break anything.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
22f41a2d23 MINOR: server: Make 'default-server' support 'namespace' keyword.
Before this patch, 'namespace' setting was only supported by 'server' directive.
This patch makes 'default-server' directive support this setting.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
5c3cd97550 MINOR: server: Make 'default-server' support 'tcp-ut' keyword.
This patch makes 'default-server' directive support 'tcp-ut' keyword.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
bcaf1d7397 MINOR: server: Make 'default-server' support 'ciphers' keyword.
This patch makes 'default-server' directive support 'ciphers' setting.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
9d1b95b591 MINOR: server: Make 'default-server' support 'cookie' keyword.
Before this patch, 'cookie' setting was only supported by 'server' directives.
This patch makes 'default-server' directive also support 'cookie' setting.
Should not break anything.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
547356e484 MINOR: server: Make 'default-server' support 'observe' keyword.
Before this patch, the 'observe' setting was only supported by 'server' directives.
This patch makes 'default-server' directives also support 'observe' setting.
Should not break anything.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
16186236dd MINOR: server: Make 'default-server' support 'redir' keyword.
Before this patch only 'server' directives could support 'redir' setting.
This patch makes also 'default-server' directives support 'redir' setting.
Should not break anything.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
5e57643e09 MINOR: server: Make 'default-server' support 'ca-file', 'crl-file' and 'crt' settings.
This patch makes 'default-server' directives support 'ca-file', 'crl-file' and
'crt' settings.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
67e0e61316 MINOR: server: Make 'default-server' support 'track' setting.
Before this patch only 'server' directives could support 'track' setting.
This patch makes 'default-server' directives also support this setting.
Should not break anything.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
65aa356c0b MINOR: server: Make 'default-server' support 'check' keyword.
Before this patch 'check' setting was only supported by 'server' directives.
This patch makes also 'default-server' directives support this setting.
A new 'no-check' keyword parser has been implemented to disable this setting both
in 'default-server' and 'server' directives.
Should not break anything.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
273f321404 MINOR: server: Make 'default-server' support 'verifyhost' setting.
This patch makes 'default-server' directive support 'verifyhost' setting.
Note: there was a little memory leak when several 'verifyhost' arguments were
supplied on the same 'server' line.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
7c8cd587c2 MINOR: server: Make 'default-server' support 'verify' keyword.
This patch makes 'default-server' directive support 'verify' keyword.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
18388c910c CLEANUP: server: code alignment.
Code alignment again.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
e892c4c670 MINOR: server: Make 'default-server' support 'send-proxy-v2-ssl*' keywords.
This patch makes 'default-server' directive support 'send-proxy-v2-ssl'
(resp. 'send-proxy-v2-ssl-cn') setting.
A new keyword 'no-send-proxy-v2-ssl' (resp. 'no-send-proxy-v2-ssl-cn') has been
added to disable 'send-proxy-v2-ssl' (resp. 'send-proxy-v2-ssl-cn') setting both
in 'server' and 'default-server' directives.
2017-03-27 14:37:01 +02:00
Frédéric Lécaille
e381d7699e MINOR: server: Make 'default-server' support 'ssl' keyword.
This patch makes 'default-server' directive support 'ssl' setting.
A new keyword 'no-ssl' has been added to disable this setting both
in 'server' and 'default-server' directives.
2017-03-27 14:37:00 +02:00
Frédéric Lécaille
2cfcdbe58d MINOR: server: Make 'default-server' support 'no-ssl*' and 'no-tlsv*' keywords.
This patch makes 'default-server' directive support 'no-sslv3' (resp. 'no-ssl-reuse',
'no-tlsv10', 'no-tlsv11', 'no-tlsv12', and 'no-tls-tickets') setting.
New keywords 'sslv3' (resp. 'ssl-reuse', 'tlsv10', 'tlsv11', 'tlsv12', and
'tls-no-tickets') have been added to disable these settings both in 'server' and
'default-server' directives.
2017-03-27 14:36:58 +02:00
Frédéric Lécaille
ec16f0300a CLEANUP: server: code alignment.
Code alignment.
2017-03-27 14:36:12 +02:00
Frédéric Lécaille
9698092ab6 MINOR: server: Make 'default-server' support 'force-sslv3' and 'force-tlsv1[0-2]' keywords.
This patch makes 'default-server' directive support 'force-sslv3'
and 'force-tlsv1[0-2]' settings.
New keywords 'no-force-sslv3' (resp. 'no-tlsv1[0-2]') have been added
to disable 'force-sslv3' (resp. 'force-tlsv1[0-2]') setting both in 'server' and
'default-server' directives.
2017-03-27 14:36:12 +02:00
Frédéric Lécaille
340ae606af MINOR: server: Make 'default-server' support 'check-ssl' keyword.
This patch makes 'default-server' directive support 'check-ssl' setting
to enable SSL for health checks.
A new keyword 'no-check-ssl' has been added to disable this setting both in
'server' and 'default-server' directives.
2017-03-27 14:36:12 +02:00
Frédéric Lécaille
31045e4c10 MINOR: server: Make 'default-server' support 'send-proxy' and 'send-proxy-v2' keywords.
This patch makes 'default-server' directive support 'send-proxy'
(resp. 'send-proxy-v2') setting.
A new keyword 'no-send-proxy' (resp. 'no-send-proxy-v2') has been added
to disable 'send-proxy' (resp. 'send-proxy-v2') setting both in 'server' and
'default-server' directives.
2017-03-27 14:36:12 +02:00
Frédéric Lécaille
f9bc1d6a13 MINOR: server: Make 'default-server' support 'non-stick' keyword.
This patch makes 'default-server' directive support 'non-stick' setting.
A new keyword 'stick' has been added so as to disable the
'non-stick' setting both in 'server' and 'default-server' directives.
2017-03-27 14:36:11 +02:00
Frédéric Lécaille
1502cfd1a3 CLEANUP: server: code alignment.
Code alignment.
2017-03-27 14:36:11 +02:00
Frédéric Lécaille
25df89066b MINOR: server: Make 'default-server' support 'check-send-proxy' keyword.
This patch makes 'default-server' directive support 'check-send-proxy' setting.
A new keyword 'no-check-send-proxy' has been added so as to disable the
'check-send-proxy' setting both in 'default-server' and 'server' directives.
2017-03-27 14:36:11 +02:00
Frédéric Lécaille
f5bf903be6 MINOR: server: Make 'default-server' support 'backup' keyword.
Until now, only 'server' supported the 'backup' keyword.
This patch makes 'default-server' directives also support this keyword.
A new keyword 'no-backup' has been added so as to disable the 'backup' setting
both in 'server' and 'default-server' directives.

For instance, provided the following sequence of directives:

default-server backup
server srv1
server srv2 no-backup

default-server no-backup
server srv3
server srv4 backup

srv1 and srv4 are declared as backup servers,
srv2 and srv3 are declared as non-backup servers.
2017-03-27 14:36:11 +02:00
Frédéric Lécaille
8065b6d4f2 MINOR: server: irrelevant error message with 'default-server' config file keyword.
There is no reason to emit such an error message:
"'default-server' expects <name> and <addr>[:<port>] as arguments."
if fewer than two arguments are provided on 'default-server' lines.
This is a 'server'-specific error message.
2017-03-27 14:33:58 +02:00
Frédéric Lécaille
2efc649447 BUG/MINOR: cfgparse: loop in tracked servers lists not detected by check_config_validity().
There is a silly case where a loop is not detected in tracked servers lists:
when a server tracks itself.

Ex:
   server srv1 127.0.0.1:8000 track srv1

Well, this never happens and this does not prevent haproxy from working.

But with this next following configuration:

   server srv1 127.0.0.1:8000 track srv2
   server srv2 127.0.0.1:8000 track srv2
   server srv3 127.0.0.1:8000 track srv2

the code in charge of detecting such loops never returns (without any error message).
haproxy becomes stuck in an infinite loop because of this statement found
in check_config_validity():

for (loop = srv->track; loop && loop != newsrv; loop = loop->track);

Again, such a configuration is probably never used accidentally.
This latter example seems silly, but as several 'default-server' directives may be
used in the same proxy section, and as 'default-server' settings are not reset each
time a new 'default-server' line is created, it will match the following
configuration in the future, when the 'track' setting is supported by
'default-server':

   default-server track srv3
   server srv1 127.0.0.1:8000
   server srv2 127.0.0.1:8000
   .
   .
   .
   default-server check
   server srv3 127.0.0.1:8000
(cherry picked from commit 6528fc93d3c065fdac841f24e55cfe9674a67414)
2017-03-27 12:04:20 +02:00
Baptiste
fc72590544 MINOR: dns: improve DNS response parsing to use as many available records as possible
A "weakness" exist in the first implementation of the parsing of the DNS
responses: HAProxy always choses the first IP available matching family
preference, or as a failover, the first IP.

It should be good enough, since most DNS servers do round robin on the
records they send back to clients.
That said, some servers does not do proper round robin, or we may be
unlucky too and deliver the same IP to all the servers sharing the same
hostname.

Let's take the simple configuration below:

  backend bk
    srv s1 www:80 check resolvers R
    srv s2 www:80 check resolvers R

The DNS server is configured with 2 IPs for 'www'.
If you're unlucky, then HAProxy may apply the same IP to both servers.

The current patch improves this situation by weighting the decision
algorithm to ensure we first prefer an IP found in the response
which is not already assigned to any server.

The new algorithm does not guarantee that the chosen IP is healthy,
nor a fair distribution of IPs amongst the servers in the farm,
etc...
It only guarantees that if the DNS server returns many records for a
hostname and this hostname is being resolved by multiple servers in
the same backend, then we'll use as many records as possible.
If a server fails, HAProxy won't pick another record from the
response.
2017-03-24 11:54:51 +01:00
Cyril Bonté
203ec5a2b5 MEDIUM: global: add a 'hard-stop-after' option to cap the soft-stop time
When SIGUSR1 is received, haproxy enters in soft-stop and quits when no
connection remains.
It can happen that the instance remains alive for a long time, depending
on timeouts and traffic. This option ensures that soft-stop won't run
for too long.

Example:
  global
    hard-stop-after 30s  # Once in soft-stop, the instance will remain
                         # alive for at most 30 seconds.
2017-03-23 23:03:57 +01:00
Willy Tarreau
fa0617660a MEDIUM: kqueue: only set FD_POLL_IN when there are pending data
Let's avoid setting FD_POLL_IN when there's no pending data. It will
save a useless recv() syscall on pure closes.
2017-03-21 16:35:17 +01:00
Willy Tarreau
19c4ab97c1 MEDIUM: kqueue: take care of EV_EOF to improve polling status accuracy
kevent() always sets EV_EOF with EVFILT_READ to notify of a read shutdown
and EV_EOF with EVFILT_WRITE to notify of a write error. Let's check this
flag to properly update the FD's polled status (FD_POLL_HUP and FD_POLL_ERR
respectively).

It's worth noting that this one can be coupled with a regular read event
to notify about a pending read followed by a shutdown, but for now we only
use this to set the relevant flags (HUP and ERR).
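
A rough sketch of that logic (simplified, not the actual poller code; the flag
values and the report_fd_events() helper are illustrative):

    #include <sys/types.h>
    #include <sys/event.h>

    #define FD_POLL_IN   0x01   /* illustrative values */
    #define FD_POLL_OUT  0x02
    #define FD_POLL_ERR  0x04
    #define FD_POLL_HUP  0x08

    extern void report_fd_events(int fd, unsigned int ev);

    static void process_kqueue_events(const struct kevent *kev, int nbev)
    {
        int idx;

        for (idx = 0; idx < nbev; idx++) {
            int fd = kev[idx].ident;
            unsigned int ev = 0;

            if (kev[idx].filter == EVFILT_READ) {
                if (kev[idx].data > 0)
                    ev |= FD_POLL_IN;      /* only when bytes really are pending */
                if (kev[idx].flags & EV_EOF)
                    ev |= FD_POLL_HUP;     /* read shutdown reported by the kernel */
            }
            else if (kev[idx].filter == EVFILT_WRITE) {
                ev |= FD_POLL_OUT;
                if (kev[idx].flags & EV_EOF)
                    ev |= FD_POLL_ERR;     /* write error reported by the kernel */
            }
            report_fd_events(fd, ev);
        }
    }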

The poller now exhibits the flag HAP_POLL_F_RDHUP to indicate this new
capability.

An improvement may consist in not setting FD_POLL_IN when the "data"
field is null since it normally only reflects the amount of pending
data.
2017-03-21 16:35:17 +01:00
Willy Tarreau
dd437d9a4c MINOR: kqueue: exclusively rely on the kqueue returned status
After commit e852545 ("MEDIUM: polling: centralize polled events processing")
all pollers stopped explicitly checking the FD's polled status, except the
kqueue poller which is constructed a bit differently. It doesn't seem possible
to cause any issue such as missing an event however, but anyway it's better to
definitely get rid of this since the event filter already provides the event
direction.
2017-03-21 16:35:17 +01:00
Willy Tarreau
3c8a89642d OPTIM: poll: enable support for POLLRDHUP
On Linux since 2.6.17 poll() supports POLLRDHUP to notify of an upcoming
hangup after pending data. Making use of it allows us to avoid a useless
recv() after short responses on keep-alive connections. Note that we
automatically enable the feature once this flag has first been observed in a
poll() status. Till now this was only enabled on epoll.
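
A minimal illustration of the flag (generic code, not the haproxy poller):

    #define _GNU_SOURCE          /* POLLRDHUP is Linux-specific */
    #include <poll.h>

    /* returns 1 if the peer already announced a hangup following its data */
    static int peer_hung_up(int fd, int timeout_ms)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLIN | POLLRDHUP };

        if (poll(&pfd, 1, timeout_ms) > 0 && (pfd.revents & POLLRDHUP))
            return 1;    /* no extra recv() needed to learn about the shutdown */
        return 0;
    }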
2017-03-21 16:30:44 +01:00
Willy Tarreau
68128710d0 BUG/MINOR: raw_sock: always perform the last recv if RDHUP is not available
Curu Wong reported a case where haproxy used to send RST to a server
after receiving its FIN. The problem is caused by the fact that being
a server connection, its fd is marked with linger_risk=1, and that the
poller didn't report POLLRDHUP, making haproxy unaware of a pending
shutdown that came after the data, so it used to resort to nolinger
for closing.

However when pollers support RDHUP we're pretty certain whether or not
a shutdown comes after the data and we don't need to perform that extra
recv() call.

Similarly when we're dealing with an inbound connection we don't care
and don't want to perform this extra recv after a request for a very
unlikely case, as in any case we'll have to deal with the client-facing
TIME_WAIT socket.

So this patch ensures that we perform the fd_done_recv() only when it's known
that there's no risk with lingering data, as well as in cases where it's known
that the poller would have detected a pending RDHUP; otherwise we continue,
trying a subsequent recv() to detect a pending shutdown.

This effectively results in an extra recv() call for keep-alive sockets
connected to a server when POLLRDHUP isn't known to be supported, but
it's the only way to know whether they're still alive or closed.

This fix should be backported to 1.7, 1.6 and 1.5. It relies on the
previous patch bringing support for the HAP_POLL_F_RDHUP flag.
2017-03-21 16:30:44 +01:00
Willy Tarreau
5a767693b5 MINOR: fd: add a new flag HAP_POLL_F_RDHUP to struct poller
We'll need to differentiate pollers which can report hangup at
the same time as read (POLLRDHUP) from the other ones, because only
these ones may benefit from the fd_done_recv() optimization. Epoll has
had support for EPOLLRDHUP since Linux 2.6.17 and has always been used
this way in haproxy, so now we only set the flag once we've observed it
once in a response. It means that some initial requests may try to
perform a second recv() call, but after the first closed connection it
will be enough to know that the second call is not needed anymore.

Later we may extend these flags to designate event-triggered pollers.
2017-03-21 16:30:35 +01:00
Hongbo Long
e39683c4d4 BUG/MEDIUM: stream: fix client-fin/server-fin handling
A tcp half connection can cause 100% CPU on expiration.

First reproduced with this haproxy configuration :

  global
      tune.bufsize 10485760
  defaults
      timeout server-fin 90s
      timeout client-fin 90s
  backend node2
      mode tcp
      timeout server 900s
      timeout connect 10s
      server def 127.0.0.1:3333
  frontend fe_api
      mode  tcp
      timeout client 900s
      bind :1990
      use_backend node2

I.e. with timeout server-fin shorter than timeout server: the backend server
sends data, this packet is left in haproxy's buffer, then the backend
server keeps going and sends a FIN packet, which haproxy receives. At this
time the session information is as follows:

      0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
      srv=def ts=08 age=1s calls=3 rq[f=848000h,i=0,an=00h,rx=14m58s,wx=,ax=]
      rp[f=8004c020h,i=0,an=00h,rx=,wx=14m58s,ax=] s0=[7,0h,fd=6,ex=]
      s1=[7,18h,fd=7,ex=] exp=14m58s

  rp has the CF_SHUTR flag set; next, the client sends its FIN packet,

  session information is as follows:
      0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
      srv=def ts=08 age=38s calls=4 rq[f=84a020h,i=0,an=00h,rx=,wx=,ax=]
      rp[f=8004c020h,i=0,an=00h,rx=1m11s,wx=14m21s,ax=] s0=[7,0h,fd=6,ex=]
      s1=[9,10h,fd=7,ex=] exp=1m11s

  After waiting 90s, session information is as follows:
      0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
      srv=def ts=04 age=4m11s calls=718074391 rq[f=84a020h,i=0,an=00h,rx=,wx=,ax=]
      rp[f=8004c020h,i=0,an=00h,rx=?,wx=10m49s,ax=] s0=[7,0h,fd=6,ex=]
      s1=[9,10h,fd=7,ex=] exp=? run(nice=0)

  cpu information:
      6899 root      20   0  112224  21408   4260 R 100.0  0.7   3:04.96 haproxy

Buffering is set up to ensure that there is data in the haproxy buffer while
haproxy receives the FIN packet and sets the CF_SHUTR flag. Once the CF_SHUTR
flag has been set, the following code does not clear the timeout, causing 100% cpu:

stream.c:process_stream:

        if (unlikely((res->flags & (CF_SHUTR|CF_READ_TIMEOUT)) == CF_READ_TIMEOUT)) {
            if (si_b->flags & SI_FL_NOHALF)
            si_b->flags |= SI_FL_NOLINGER;
            si_shutr(si_b);
        }

Once the read side has been closed, setting the read timeout does not make sense.
Yet, with or without CF_SHUTR, the read timeout is set:

       if (tick_isset(s->be->timeout.serverfin)) {
           res->rto = s->be->timeout.serverfin;
           res->rex = tick_add(now_ms, res->rto);
       }

After discussion on the mailing list, setting half-closed timeouts the
hard way here doesn't make sense. They should be set only at the moment
the shutdown() is performed. It will also solve a special case which was
already reported of some half-closed timeouts not working when the shutw()
is performed directly at the stream-interface layer (no analyser involved).
Since the stream interface layer cannot know the timeout values, we'll have
to store them directly in the stream interface so that they are used upon
shutw(). This patch does this, fixing the problem.

An easier reproducer to validate the fix is to keep the huge buffer and
shorten all timeouts, then run it between a tcploop server and client, and
wait 3 seconds to see haproxy run at 100% CPU :

  global
      tune.bufsize 10485760

  listen px
      bind :1990
      timeout client 90s
      timeout server 90s
      timeout connect 1s
      timeout server-fin 3s
      timeout client-fin 3s
      server def 127.0.0.1:3333

  $ tcploop 3333 L W N20 A P100 F P10000 &
  $ tcploop 127.0.0.1:1990 C S10000000 F
2017-03-21 15:04:43 +01:00
Christopher Faulet
014e39c0b6 BUG/MAJOR: http: fix typo in http_apply_redirect_rule
Because of this typo, AN_RES_FLT_END was never called when a redirect rule is
applied on a keep-alive connection. In almost all cases, this bug has no
effect. But, it leads to a memory leak if a redirect is done on a http-response
rule when the HTTP compression is enabled.

This patch should be backported in 1.7.
2017-03-21 11:21:18 +01:00
Emmanuel Hocdet
a52bb15cc7 BUILD: ssl: simplify SSL_CTX_set_ecdh_auto compatibility
SSL_CTX_set_ecdh_auto is declared (when present) with #define. A simple #ifdef
avoids having to list all the SSL library cases. It's a placebo (no-op) in recent
SSL libraries. It's ok with openssl 1.0.1, 1.0.2, 1.1.0, libressl and boringssl.
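
In practice the compatibility test boils down to something like this (sketch,
where ctx is the SSL_CTX being configured):

    #ifdef SSL_CTX_set_ecdh_auto
        SSL_CTX_set_ecdh_auto(ctx, 1);   /* a macro where it exists, harmless no-op on recent libs */
    #endif
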
Thanks to Piotr Kubaj for postponing and testing with libressl.
2017-03-20 12:01:34 +01:00
Felipe Guerreiro Barbosa Ruiz
00f55524e0 BUG: payload: fix payload not retrieving arbitrary lengths
This fixes a regression introduced in d7bdcb874b, that removed the
ability to use req.payload(0,0) to read the whole buffer content. The
offending commit is present starting in version 1.6, so the patch
should be backported to versions 1.6 and 1.7.
2017-03-20 07:25:37 +01:00
Willy Tarreau
de40d798de CLEANUP: connection: completely remove CO_FL_WAKE_DATA
Since it's only set and never tested anymore, let's remove it.
2017-03-19 12:18:27 +01:00
Willy Tarreau
9fa1ee61cc MEDIUM: connection: don't test for CO_FL_WAKE_DATA
This flag is always set when we end up here, for each and every data
layer (idle, stream-interface, checks), and continuing to test it
leaves a big risk of forgetting to set it as happened once already
before 1.5-dev13.

It could make sense to backport this into stable branches as part of
the connection flag fixes, after some cool down period.
2017-03-19 12:17:35 +01:00
Willy Tarreau
3c0cc49d30 BUG/MEDIUM: connection: ensure to always report the end of handshakes
Despite the previous commit working fine on all tests, it's still not
sufficient to completely address the problem. If the connection handler
is called with an event validating an L4 connection but some handshakes
remain (eg: accept-proxy), it will still wake the function up, which
will not report the activity, and will not detect a change once the
handshake is complete, so it will not notify the ->wake() handler.

In fact the only reason why the ->wake() handler is still called here
is because after dropping the last handshake, we try to call ->recv()
and ->send() in turn and change the flags in order to detect a data
activity. But if for any reason the data layer is not interested in
reading nor writing, it will not get these events.

A cleaner way to address this is to call the ->wake() handler only
on definitive status changes (shut, error), on real data activity,
and on a complete connection setup, measured as CONNECTED with no
more handshake pending.

It could be argued that the handshake flags have to be made part of
the condition to set CO_FL_CONNECTED but that would currently break
a part of the health checks. Also a handshake could appear at any
moment even after a connection is established so we'd lose the
ability to detect a second end of handshake.

For now the situation around CO_FL_CONNECTED is not clean :
  - session_accept() only sets CO_FL_CONNECTED if there's no pending
    handshake ;

  - conn_fd_handler() will set it once L4 and L6 are complete, which
    will do what session_accept() above refrained from doing even if
    an accept_proxy handshake is still pending ;

  - ssl_sock_infocbk() and ssl_sock_handshake() consider that a
    handshake performed with CO_FL_CONNECTED set is a renegotiation ;
    => they should instead filter on CO_FL_WAIT_L6_CONN

  - all ssl_fc_* sample fetch functions wait for CO_FL_CONNECTED before
    accepting to fetch information
    => they should also get rid of any pending handshake

  - smp_fetch_fc_rcvd_proxy() uses !CO_FL_CONNECTED instead of
    CO_FL_ACCEPT_PROXY

  - health checks (standard and tcp-checks) don't check for HANDSHAKE
    and may report a successful check based on CO_FL_CONNECTED while
    not yet done (eg: send buffer full on send_proxy).

This patch aims at solving some of these side effects in a backportable
way before this is reworked in depth :
  - we need to call ->wake() to report connection success, measure
    connection time, notify that the data layer is ready and update
    the data layer after activity ; this has to be done either if
    we switch from pending {L4,L6}_CONN to nothing with no handshakes
    left, or if we notice some handshakes were pending and are now
    done.

  - we document that CO_FL_CONNECTED exactly means "L4 connection
    setup confirmed at least once, L6 connection setup confirmed
    at least once or not necessary, all this regardless of any
    possibly remaining handshakes or future L6 negotiations".

This patch also renames CO_FL_CONN_STATUS to the more explicit
CO_FL_NOTIFY_DATA, and works around the previous flags trick consisting
in setting an impossible combination of flags to notify the data layer,
by simply clearing the current flags.

This fix should be backported to 1.7, 1.6 and 1.5.
2017-03-19 12:06:18 +01:00
Willy Tarreau
52821e2737 BUG/MAJOR: stream-int: do not depend on connection flags to detect connection
Recent fix 7bf3fa3 ("BUG/MAJOR: connection: update CO_FL_CONNECTED before
calling the data layer") marked an end to a fragile situation where the
absence of CO_FL_{CONNECTED,L4,L6}* flags is used to mark the completion
of a connection setup. The problem is that by setting the CO_FL_CONNECTED
flag earlier, we can indeed call the ->wake() function from conn_fd_handler
but the stream-interface's wake function needs to see CO_FL_CONNECTED unset
to detect that a connection has just been established, so if there's no
pending data in the buffer, the connection times out. The other ->wake()
functions (health checks and idle connections) don't do this though.

So instead of trying to detect a subtle change in connection flags,
let's simply rely on the stream-interface's state and validate that the
connection is properly established and that handshakes are completed
before reporting the WRITE_NULL indicating that a pending connection was
just completed.

This patch passed all tests of handshake and non-handshake combinations,
with synchronous and asynchronous connect() and should be safe for backport
to 1.7, 1.6 and 1.5 when the fix above is already present.
2017-03-19 12:00:04 +01:00
Christopher Faulet
e6006245de BUG/MEDIUM: filters: Fix channels synchronization in flt_end_analyze
When a filter is used, there are 2 channel analyzers surrounding all the
others, flt_start_analyze and flt_end_analyze. This is the right place to
acquire and release resources used by filters, when needed. In addition, the
last one is used to synchronize both channels, especially for HTTP streams. We
must wait until the analysis is finished on both channels of an HTTP transaction
before restarting it for the next one.

But this part was buggy, leading to unexpected behaviours. First, depending on
which channel ends first, the request or the response can be switched into a
"forward forever" mode. Then, the HTTP transaction can be cleaned up too early,
while processing is still in progress on a channel.

To fix the bug, the flag CF_FLT_ANALYZE has been added. It is set on channels in
flt_start_analyze and is kept as long as at least one filter is still analyzing
the channel. So, we can trigger the channel synchronization once this flag has
been removed on both channels. In addition, the flag TX_WAIT_CLEANUP has been
added on the transaction to know whether the transaction must be cleaned up or
not during the channels' synchronization. This way, we are sure to reset
everything once all the processing is finished.

This patch should be backported in 1.7.
2017-03-15 19:09:06 +01:00
Olivier Houchard
a5938f71e4 CLEANUP: config: Typo in comment.
This is for the recently merged dynamic cookie patch set.
2017-03-15 16:01:51 +01:00
Olivier Houchard
2cb49ebbc4 BUG/MEDIUM: server: Fix crash when dynamic is defined, but no key is provided.
Wait until we're sure we have a key before trying to calculate its length.

[wt: no backport needed, was just merged]
2017-03-15 16:01:33 +01:00
Willy Tarreau
3569df3fcf BUG/MEDIUM: listener: do not try to rebind another process' socket
When the "process" setting of a bind line limits the processes a
listening socket is enabled on, a "disable frontend" operation followed
by an "enable frontend" triggers a bug because all declared listeners
are attempted to be bound again regardless of their assigned processes.
This can at minima create new sockets not receiving traffic, and at worst
prevent from re-enabling a frontend if it's bound to a privileged port.

This bug was introduced by commit 1c4b814 ("MEDIUM: listener: support
rebinding during resume()") merged in 1.6-dev1, trying to perform the
bind() before checking the process list instead of after.

Just move the process check before the bind() operation to fix this.
This fix must be backported to 1.7 and 1.6.

Thanks to Pavlos for reporting this one.
2017-03-15 12:47:46 +01:00
Steven Davidovitz
544d481516 BUG/MINOR: checks: attempt clean shutw for SSL check
Strict interpretation of TLS can cause SSL sessions to be thrown away
when the socket is shut down without sending a "close notify", resulting
in each check going through the complete handshake and eating more CPU on
the servers.
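
The idea, roughly (a hypothetical sketch using the plain OpenSSL API, not the
actual haproxy check code; ssl/fd are the check connection's handles):

    #include <openssl/ssl.h>
    #include <unistd.h>

    /* close a health-check connection with a best-effort close_notify */
    static void check_clean_close(SSL *ssl, int fd)
    {
        SSL_set_quiet_shutdown(ssl, 0);  /* make sure close_notify gets emitted */
        SSL_shutdown(ssl);               /* best effort, don't wait for the peer's notify */
        close(fd);
    }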

[wt: strictly speaking there's no guarantee that the close notify will
 be delivered, it's only best effort, but that may be enough to ensure
 that once at least one is received, next checks will be cheaper. This
 should be backported to 1.7 and possibly 1.6]
2017-03-15 11:41:25 +01:00
Olivier Houchard
614f8d7d56 MINOR: cli: Let configure the dynamic cookies from the cli.
This adds 3 new commands to the cli :
  - "enable dynamic-cookie backend <backend>": enables dynamic cookies for the
    specified backend
  - "disable dynamic-cookie backend <backend>": disables dynamic cookies for the
    specified backend
  - "set dynamic-cookie-key backend <backend>": lets one change the dynamic
    cookie secret key for the specified backend
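
For example, assuming a stats socket at /var/run/haproxy.stat, a backend named
bk_web, and that the new key is passed as the last argument:

    $ echo "enable dynamic-cookie backend bk_web" | socat /var/run/haproxy.stat stdio
    $ echo "set dynamic-cookie-key backend bk_web mynewsecret" | socat /var/run/haproxy.stat stdio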
2017-03-15 11:38:29 +01:00
Olivier Houchard
4e694049fa MINOR: server: Add dynamic session cookies.
This adds a new "dynamic" keyword for the cookie option. If set, a cookie
will be generated for each server (assuming one isn't already provided on
the "server" line), from the IP of the server, the TCP port, and a secret
key provided. To provide the secret key, a new keyword has been added,
"dynamic-cookie-key", for backends.

Example :
backend bk_web
  balance roundrobin
  dynamic-cookie-key "bla"
  cookie WEBSRV insert dynamic
  server s1 127.0.0.1:80 check
  server s2 192.168.56.1:80 check

This is a first step to be able to dynamically add and remove servers,
without modifying the configuration file, and still have all the load
balancers redirect the traffic to the right server.

Provide a way to generate session cookies, based on the IP address of the
server, the TCP port, and a secret key provided.
2017-03-15 11:37:30 +01:00
Willy Tarreau
7bf3fa3c23 BUG/MAJOR: connection: update CO_FL_CONNECTED before calling the data layer
Matthias Fechner reported a regression in 1.7.3 brought by the backport
of commit 819efbf ("BUG/MEDIUM: tcp: don't poll for write when connect()
succeeds"), causing some connections to fail to establish once in a while.
While this commit itself was a fix for a bad sequencing of connection
events, it in fact unveiled a much deeper bug going back to the connection
rework era in v1.5-dev12 : 8f8c92f ("MAJOR: connection: add a new
CO_FL_CONNECTED flag").

It's worth noting that in a lab reproducing an environment similar to
Matthias', only about 1 in every 19000 connections exhibits this behaviour,
making the issue not so easy to observe. A trick to make the problem
more observable consists in disabling non-blocking mode on the socket
before calling connect() and re-enabling it later, so that connect()
always succeeds. Then it becomes 100% reproducible.

The problem is that this CO_FL_CONNECTED flag is tested after deciding to
call the data layer (typically the stream interface but might be a health
check as well), and that the decision to call the data layer relies on a
change of one of the flags covered by the CO_FL_CONN_STATE set, which is
made of CO_FL_CONNECTED among others.

Before the fix above, this bug couldn't appear with TCP but it could
appear with Unix sockets. Indeed, connect() was always considered
blocking so the CO_FL_WAIT_L4_CONN connection flag was always set, and
polling for write events was always enabled. This used to guarantee that
the conn_fd_handler() could detect a change among the CO_FL_CONN_STATE
flags.

Now with the fix above, if a connect() immediately succeeds for non-ssl
connection with send-proxy enabled, and no data in the buffer (thus TCP
mode only), the CO_FL_WAIT_L4_CONN flag is not set, the lack of data in
the buffer doesn't enable polling flags for the data layer, the
CO_FL_CONNECTED flag is not set due to send-proxy still being pending,
and once send-proxy is done, its completion doesn't cause the data layer
to be woken up due to the fact that CO_FL_CONNECTED is still not present
and that the CO_FL_SEND_PROXY flag is not watched in CO_FL_CONN_STATE.

Then no progress is made when data are received from the client (and
attempted to be forwarded), because a CF_WRITE_NULL (or CF_WRITE_PARTIAL)
flag is needed for the stream-interface state to turn from SI_ST_CON to
SI_ST_EST, allowing ->chk_snd() to be called when new data arrive. And
the only way to set this flag is to call the data layer of course.

After the connect timeout, the connection gets killed and if in the mean
time some data have accumulated in the buffer, the retry will succeed.

This patch fixes this situation by simply placing the update of
CO_FL_CONNECTED where it should have been, before the check for a flag
change needed to wake up the data layer and not after.

This fix must be backported to 1.7, 1.6 and 1.5. Versions not having
the patch above are still affected for unix sockets.

Special thanks to Matthias Fechner who provided a very detailed bug
report with a bisection designating the faulty patch, and to Olivier
Houchard for providing full access to a pretty similar environment where
the issue could first be reproduced.
2017-03-14 22:04:06 +01:00
Simon Horman
6f6bb380ef MEDIUM: stats: Add show json schema
This may be used to output the JSON schema which describes the output of
show info json and show stats json.

The JSON output is without any extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful.

e.g.:
$ echo "show schema json" | socat /var/run/haproxy.stat stdio | \
     python -m json.tool

The implementation does not generate the schema. Some consideration could
be given to integrating the output of the schema with the output of
typed and json info and stats. In particular the types (u32, s64, etc...)
and tags.

A sample verification of show info json and show stats json using
the schema is as follows. It uses the jsonschema python module:

cat > jschema.py <<  __EOF__
import json

from jsonschema import validate
from jsonschema.validators import Draft3Validator

with open('schema.txt', 'r') as f:
    schema = json.load(f)
    Draft3Validator.check_schema(schema)

    with open('instance.txt', 'r') as f:
        instance = json.load(f)
        validate(instance, schema, Draft3Validator)
__EOF__

$ echo "show schema json" | socat /var/run/haproxy.stat stdio > schema.txt
$ echo "show info json" | socat /var/run/haproxy.stat stdio > instance.txt
$ python ./jschema.py
$ echo "show stats json" | socat /var/run/haproxy.stat stdio > instance.txt
$ python ./jschema.py

Signed-off-by: Simon Horman <horms@verge.net.au>
2017-03-14 11:14:03 +01:00
Simon Horman
05ee213f8b MEDIUM: stats: Add JSON output option to show (info|stat)
Add a json parameter to show (info|stat) which will output information
in JSON format. A follow-up patch will add a JSON schema which describes
the format of the JSON output of these commands.

The JSON output is without any extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful.

e.g.:
$ echo "show info json" | socat /var/run/haproxy.stat stdio | \
     python -m json.tool

STAT_STARTED has been added in order to track if show output has begun or
not. This is used in order to allow the JSON output routines to only insert
a "," between elements when needed. I would value any feedback on how this
might be done better.

Signed-off-by: Simon Horman <horms@verge.net.au>
2017-03-14 11:14:03 +01:00
Willy Tarreau
2019f95997 CLEANUP: http: make http_server_error() not set the status anymore
Given that all call places except one had to set txn->status prior to
calling http_server_error(), it's simpler to make this function rely
on txn->status than have it store it from an argument.
2017-03-14 11:09:04 +01:00
Jarno Huuskonen
800d1761d0 MINOR: http-request tarpit deny_status.
Implements deny_status for the http-request tarpit rule (allows setting a
custom HTTP status code). This commit depends on:
MEDIUM: http_error_message: txn->status / http_get_status_idx.
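
For illustration, a minimal configuration sketch (the ACL, file path and
status code are made up; only the "deny_status" keyword comes from this
commit):

  acl scanners src -f /etc/haproxy/scanners.lst
  http-request tarpit deny_status 429 if scanners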
2017-03-14 10:41:54 +01:00
Jarno Huuskonen
9e6906b9ec MEDIUM: http_error_message: txn->status / http_get_status_idx.
This commit removes the second argument (msgnum) from http_error_message and
changes http_error_message to use s->txn->status/http_get_status_idx for
mapping status codes from 200..504 to HTTP_ERR_200..HTTP_ERR_504 (enum).

This is needed for http-request tarpit deny_status commit.
2017-03-14 10:41:41 +01:00
Nenad Merdanovic
50c8044423 CLEANUP: Remove comment that's no longer valid
Code was deleted in ad63582eb, but the comment remained.

Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
2017-03-13 18:26:05 +01:00
Nenad Merdanovic
807a6e7856 MINOR: Add hostname sample fetch
It adds "hostname" as a new sample fetch. It does exactly the same as
"%H" in a log format except that it can be used outside of log formats.

Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
2017-03-13 18:26:05 +01:00
Nenad Merdanovic
2754fbcfd6 CLEANUP: Replace repeated code to count usable servers with be_usable_srv()
2 places were using an open-coded implementation of this function to count
available servers. Note that the avg_queue_size() fetch didn't check whether
the proxy was in the STOPPED state, so it could possibly return a wrong server
count here, but that wouldn't impact the returned value.

Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
2017-03-13 18:26:05 +01:00
Nenad Merdanovic
b7e7c4720a MINOR: Add nbsrv sample converter
This is like the nbsrv() sample fetch function except that it works as
a converter so it can count the number of available servers of a backend
name retrieved using a sample fetch or an environment variable.
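
For illustration, a minimal sketch (the backend names and the environment
variable are made up):

  use_backend bk_backup if { env(PRIMARY_BE),nbsrv eq 0 }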

Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
2017-03-13 18:26:05 +01:00
Nenad Merdanovic
96c1571943 BUG/MINOR: Fix "get map <map> <value>" CLI command
This form of the CLI command didn't return anything since commit
ad8be61c7.
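
For illustration, the fixed form might be exercised like this (the map file
path, the looked-up value and the socket path are made up):

  $ echo "get map /etc/haproxy/hosts.map example.org" | socat /var/run/haproxy.stat stdio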

This fix needs to be backported to 1.7.

Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
2017-03-13 18:25:53 +01:00
Nenad Merdanovic
24f45d8e34 BUG/MEDIUM: cli: Prevent double free in CLI ACL lookup
The memory is released by cli_release_mlook, which also properly sets the
pointer to NULL. This was introduced with a big code reorganization
involving moving to the new keyword registration form in commit ad8be61c7.

This fix needs to be backported to 1.7.

Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
2017-03-13 18:25:53 +01:00
Janusz Dziemidowicz
8d7104982e BUG/MEDIUM: ssl: Clear OpenSSL error stack after trying to parse OCSP file
An invalid OCSP file (for example an empty one that can be used to enable an
OCSP response to be set dynamically later) causes errors that are placed on
the OpenSSL error stack. Those errors are not cleared, so anything that
checks this stack later will fail.

Following configuration:
  bind :443 ssl crt crt1.pem crt crt2.pem

With following files:
  crt1.pem
  crt1.pem.ocsp - empty one
  crt2.pem.rsa
  crt2.pem.ecdsa

Will fail to load.

This patch should be backported to 1.7.
2017-03-10 18:53:22 +01:00
Willy Tarreau
de7dc88c51 MINOR: config: warn when some HTTP rules are used in a TCP proxy
Surprisingly, http-request, http-response, block, redirect, and capture
rules did not cause a warning to be emitted when used in a TCP proxy, so
let's fix this.
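
For illustration, a configuration sketch of the kind that now triggers the
warning (the proxy name, addresses and rule are made up):

  listen smtp_relay
      mode tcp
      bind :2525
      http-request deny if { src 192.168.0.0/16 }
      server mta1 10.0.0.25:25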

This patch may be backported to older versions as it helps spotting
configuration issues.
2017-03-10 11:49:21 +01:00
Christopher Faulet
2eca6b50a7 MINOR: spoe: Add "max-frame-size" statement in spoe-agent section
As its name says, this statement customizes the maximum allowed size for
frames exchanged between HAProxy and SPOAs. It should be greater than or
equal to 256 and less than or equal to (tune.bufsize - 4) (4 bytes are
reserved for the frame length).
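
For illustration, a minimal sketch (the agent name is made up, and 16380
assumes the default tune.bufsize of 16384):

  spoe-agent iprep-agent
      max-frame-size 16380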
2017-03-09 15:32:56 +01:00
Christopher Faulet
cecd8527b3 MINOR: spoe: Add "send-frag-payload" option in spoe-agent section
This option can be used to enable or to disable (by prefixing the option line
with the "no" keyword) the sending of fragmented payloads to agents. By
default, this option is enabled.
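
For illustration, disabling it in a spoe-agent section (the agent name is
made up):

  spoe-agent iprep-agent
      no option send-frag-payload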
2017-03-09 15:32:55 +01:00
Christopher Faulet
ecc537a8b9 MINOR: spoe: Rely on alertif_too_many_arg during configuration parsing 2017-03-09 15:32:55 +01:00
Christopher Faulet
305c6079d4 MINOR: spoe: Add "pipelining" and "async" options in spoe-agent section
These options can be used to enable or to disable (by prefixing the option
line with the "no" keyword), respectively, pipelined and asynchronous
exchanges between HAProxy and agents. By default, the pipelining and async
options are enabled.
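
For illustration, a sketch keeping pipelining but disabling async (the agent
name is made up):

  spoe-agent iprep-agent
      option pipelining
      no option async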
2017-03-09 15:32:55 +01:00
Christopher Faulet
6a2940c5f5 MINOR: spoe: Add support of negation for options in SPOE configuration file
For now, no option supports negation (using the "no" keyword), so it always
returns an error.
2017-03-09 15:32:55 +01:00
Christopher Faulet
f032c3ec09 MINOR: spoe: Improve implementation of the payload fragmentation
Now, when a payload is fragmented, the first frame must define the frame type
and the following ones must use the special type SPOE_FRM_T_UNSET. This way, it is
easy to know if a fragment is the first one or not. Of course, all frames must
still share the same stream-id and frame-id.

Update SPOA example accordingly.
2017-03-09 15:32:55 +01:00
Christopher Faulet
4ff3e574ac REORG: spoe: Move low-level encoding/decoding functions in dedicated header file
So, it will be easier for anyone to develop external services using these
functions.

SPOA example has been updated accordingly.
2017-03-09 15:32:55 +01:00
Christopher Faulet
1f40b91a83 REORG: spoe: Move struct and enum definitions in dedicated header file
The SPOA example has been updated accordingly.
2017-03-09 15:32:55 +01:00
Christopher Faulet
8eda93f30f MINOR: spoe: Handle NOTIFY frames cancellation using ABORT bit in ACK frames
If an agent wants to abort the processing of a fragmented NOTIFY frame before
receiving all fragments, it can send an ACK frame at any time with the ABORT
bit set (and of course, the FIN bit too).

Besides this change, the SPOE_FRM_ERR_FRAMEID_NOTFOUND error flag has been
added. It is set when an unknown ACK frame is received.
2017-03-09 15:32:55 +01:00
Christopher Faulet
8ef75251e3 MAJOR: spoe: refactor the filter to clean up the code
The SPOE code is now pretty big and it was a good time to clean it up. It is
not perfect, some parts remain a bit ugly, but it is far better now.
2017-03-09 15:32:55 +01:00
Christopher Faulet
f51f5fa56c MAJOR: spoe: Add support of payload fragmentation in NOTIFY frames
Now, agents can announce the support for the "fragmentation" capability during
the HELLO handshake. HAProxy will never announce it because fragmented frame
decoding is not implemented yet. But it can send such kind of frames. So, if an
agent supports this capability, payloads exceeding the frame size will be
split. A fragemented payload consists of several frames with the FIN bit clear
and terminated by a single frame with the FIN bit set. All these frames must
share the same STREAM-ID and FRAME-ID.

Note that an unfragmented payload consists of a single frame with the FIN bit
set. And HELLO and DISCONNECT frames cannot be fragmented. This means that only
NOTIFY frames can transport fragmented payload for now.
2017-03-09 15:32:55 +01:00
Christopher Faulet
7aa0b2b0dd MINOR: spoe: Use the min of all known max_frame_size to encode messages
The max_frame_size value is negotiated between HAProxy and SPOE agents during
the HELLO handshake. It is a per-connection value. Different SPOE agents can
choose to use different max_frame_size values. So, now, we keep the minimum of
all known max_frame_size. This minimum is updated when a new connection to a
SPOE agent is opened and when a connection is closed. We use this value as a
limit to encode messages in NOTIFY frames.
2017-03-09 15:32:55 +01:00
Christopher Faulet
4596fb7056 MEDIUM: spoe: Be sure to wakeup the good entity waiting for a buffer
This happens when buffer allocation fails. In the SPOE context, buffers are
allocated by streams and SPOE applets at different times. First, by streams,
when messages need to be encoded before sending them in a NOTIFY frame. Then,
by SPOE applets, when an ACK frame is received.

The first case works as expected, we wake up the stream. But for the second one,
we must wake up the waiting SPOE applet.
2017-03-09 15:32:55 +01:00
Christopher Faulet
a21b064f81 MINOR: spoe: Check the scope of sample fetches used in SPOE messages
If an error is triggered, the corresponding message is ignored and a warning is
emitted.
2017-03-09 15:32:55 +01:00
Christopher Faulet
72bcc4724f MINOR: spoe: Send a log message when an error occurred during event processing 2017-03-09 15:32:55 +01:00
Christopher Faulet
b067b06fc7 MINOR: spoe: Add status code in error variable instead of hardcoded value
Now, when the option "set-on-error" is enabled, we set a status code
representing the error that occurred instead of "true". For values under 256,
it represents an error coming from the engine. For values of 256 and above,
it reports a SPOP error. In this case, to retrieve the right SPOP status code,
you must subtract 256 from this value. Here are the possible values:

  * 1:     a timeout occurred during the event processing.
  * 2:     an error was triggered during the resources allocation.
  * 255:   an unknown error occurred during the event processing.
  * 256+N: a SPOP error occurred during the event processing.
2017-03-09 15:32:55 +01:00
Christopher Faulet
42bfa46234 MINOR: spoe: Remove SPOE details from the appctx structure
Now, as for peers, we use an opaque pointer to store information related to
the SPOE filter in the appctx structure. This information is now stored in a
dedicated structure (spoe_appctx) and allocated, using a pool, when the
applet is created.

This removes the dependency between applets and the SPOE filter and avoids
needlessly inflating the appctx structure.
2017-03-09 15:32:55 +01:00
Christopher Faulet
a1cda02995 MAJOR: spoe: Add support of pipelined and asynchronous exchanges with agents
Now, HAProxy and agents can announce the support for "pipelining" and/or
"async" capabilities during the HELLO handshake. For now, HAProxy always
announces the support of both. In addition, in its HELLO frames, HAProxy adds
the "engine-id" key. It is a unique string that identifies a SPOE engine.

The "pipelining" capability is the ability for a peer to decouple NOTIFY and
ACK frames. This is a symmetrical capability. To be used, it must be
supported by HAProxy and agents. Unlike HTTP pipelining, the ACK frames can
be sent in any order, but always on the same TCP connection used for the
corresponding NOTIFY frame.

The "async" capability is similar to pipelining, but here any TCP connection
established between HAProxy and the agent can be used to send ACK frames. If
an agent accepts connections from multiple HAProxy instances, it can use the
"engine-id" value to group TCP connections.
2017-03-09 15:32:55 +01:00
Christopher Faulet
b0b4238825 BUG/MINOR: spoe: Fix parsing of arguments in spoe-message section
The array of pointers passed to sample_parse_expr was not really an array
but a pointer to a pointer, so it could easily lead to a segfault during
configuration parsing.
2017-03-09 15:32:55 +01:00
Christopher Faulet
3b386a318f BUG/MINOR: spoe: Fix soft stop handler using a specific id for spoe filters
During a soft stop, we need to wake up all SPOE applets to stop them. So we
loop on all proxies, and for each proxy, on all filters. But we must be sure
to only handle SPOE filters here. To do so, we use a specific id.
2017-03-09 15:32:55 +01:00
Emmanuel Hocdet
e38047423d MINOR: ssl: improved cipherlist captures
Allocate the capture buffer later (when filling it), parse the client-hello
after the heartbeat check and remove capture->conn (unused).
2017-03-08 15:04:43 +01:00
Emmanuel Hocdet
aaee75088a BUG/MINOR: ssl: fix cipherlist captures with sustainable SSL calls
Use the standard SSL_set_ex_data/SSL_get_ex_data API calls to store the
capture. We need to avoid using internal structures/undocumented calls to try
to control the beast and limit painful compatibility issues.
2017-03-08 15:04:25 +01:00
Emmanuel Hocdet
f6b37c67be BUG/MEDIUM: ssl: in bind line, ssl-options after 'crt' are ignored.
Bug introduced with "removes SSL_CTX_set_ssl_version call and cleanup CTX
creation": ssl_sock_new_ctx is called before the whole bind line is parsed.
The fix consists in separating out the use of default_ctx as the
initialization context of the SSL connection via bind_conf->initial_ctx.
initial_ctx contains all the necessary parameters before performing the
selection of the CTX; default_ctx is processed like the other ctx, without
unnecessary parameters.
2017-03-07 10:42:43 +01:00
Emmanuel Hocdet
4608ed9511 MEDIUM: ssl: remove ssl-options from crt-list
ssl-options are linked to the initial negotiation environment carried by
default_ctx.
Remove them from crt-list to avoid any confusion.
2017-03-07 10:33:16 +01:00
Thierry FOURNIER
5bf77329b6 MEDIUM: ssl: add new sample-fetch which captures the cipherlist
This new sample fetch captures the cipher list offered by the client SSL
connection during the client-hello phase. This is useful for fingerprinting
the SSL connection.
2017-03-06 21:40:23 +01:00
Emmanuel Hocdet
cc6c2a2cb7 BUILD: ssl: fix build with -DOPENSSL_NO_DH 2017-03-06 10:20:28 +01:00
Emmanuel Hocdet
4de1ff1fd6 MINOR: ssl: removes SSL_CTX_set_ssl_version call and cleanup CTX creation.
BoringSSL doesn't support SSL_CTX_set_ssl_version. To remove this call, the
CTX creation is cleaned up to clarify what is happening. SSL_CTX_new is used
to match the original behavior, in order: force-<method> according to the
method version, then the default method with no-<method> options.
The OPENSSL_NO_SSL3 error message is now in force-sslv3 parsing (as force-tls*).
For CTX creation in the bind environment, all CTX settings related to the
initial ctx are aggregated into the ssl_sock_new_ctx function for clarity.

Tests with crt-list have shown that server_method, options and mode are
linked to the initial CTX (default_ctx): all ssl-options are linked to each
bind line and must be removed from crt-list.
2017-03-06 10:20:20 +01:00
Emmanuel Hocdet
d385060393 BUG/MEDIUM: ssl: switchctx should not return SSL_TLSEXT_ERR_ALERT_WARNING
Extract from RFC 6066:
"If the server understood the ClientHello extension but does not recognize
the server name, the server SHOULD take one of two actions: either abort the
handshake by sending a fatal-level unrecognized_name(112) alert or continue the
handshake. It is NOT RECOMMENDED to send a warning-level unrecognized_name(112)
alert, because the client's behavior in response to warning-level alerts is
unpredictable. If there is a mismatch between the server name used by the
client application and the server name of the credential chosen by the server,
this mismatch will become apparent when the client application performs the
server endpoint identification, at which point the client application will have
to decide whether to proceed with the communication."

Thanks to Roberto Guimaraes for the bug report, spotted with openssl-1.1.0.
This fix must be backported.
2017-03-06 10:10:49 +01:00
Emmanuel Hocdet
530141f747 BUG/MEDIUM: ssl: fix verify/ca-file per certificate
SSL verify and client_CA inherit from the initial ctx (default_ctx).
When a certificate is found, the SSL connection environment must be replaced
by the certificate configuration (via SSL_set_verify and
SSL_set_client_CA_list).
2017-03-02 18:31:51 +01:00
Emmanuel Hocdet
0594211987 MEDIUM: boringssl: support native multi-cert selection without bundling
This patch uses BoringSSL's callback to analyse the ClientHello before any
handshake to extract key signature capabilities.
The certificate with the better signature (ECDSA before RSA) is chosen
transparently, if the client can support it. RSA and ECDSA certificates can
be declared in a row (without order). This makes it possible to set different
ssl and filter parameters with crt-list.
2017-03-02 18:31:05 +01:00
Willy Tarreau
19b1412e02 MINOR: http: don't close when redirect location doesn't start with "/"
In 1.4-dev5 when we started to implement keep-alive, commit a9679ac
("[MINOR] http: make the conditional redirect support keep-alive") added a
specific check to support keep-alive on redirect rules, but only when the
location would start with a "/", indicating the client would come back to
the same server.

But nowadays most applications put http:// or https:// in front of
each and every location, and continuing to perform a close there is
counter-efficient, especially when multiple objects are fetched at
once from a same origin which redirects them to the correct origin
(eg: after an http to https forced upgrade).

It's about time to get rid of this old trick as it causes more harm
than good at an era where persistent connections are omnipresent.

Special thanks to Ciprian Dorin Craciun for providing convincing
arguments with a pretty valid use case and proposing this draft
patch which addresses the issue he was facing.

This change although not exactly a bug fix should be backported
to 1.7 to adapt better to existing infrastructure.
2017-02-28 09:48:11 +01:00
Willy Tarreau
4f86264bae BUG/MEDIUM: config: reject anything but "if" or "unless" after a use-backend rule
Adrian Fitzpatrick reported that since commit f51658d ("MEDIUM: config:
relax use_backend check to make the condition optional"), typos like "of"
instead of "if" on use_backend rules are not properly detected. The reason
is that the parser only checks for "if" or "unless"; otherwise it considers
there's no keyword, making the rule unconditional.
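
For illustration, a sketch of the kind of typo that is now rejected at
parsing time (the backend name and condition are made up):

  use_backend bk_static if { path_beg /static }   # accepted
  use_backend bk_static of { path_beg /static }   # typo, now reported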

This patch fixes it. It may reveal some rare config bugs for some people,
but will not affect valid configurations.

This fix must be backported to 1.7, 1.6 and 1.5.
2017-02-28 09:34:39 +01:00
Thierry FOURNIER
7d38863552 BUG/MAJOR: lua segmentation fault when the request is like 'GET ?arg=val HTTP/1.1'
Error in the HTTP parser. The function http_get_path() can
return NULL and this case is not caught in the code. So, we
try to dereference a NULL pointer, and a segfault occurs.

These two lines are useful to prevent the bug.

   acl prevent_bug path_beg /
   http-request deny if !prevent_bug

This bug fix should be backported in 1.6 and 1.7
2017-02-23 21:52:18 +01:00
Willy Tarreau
e3cc3a3026 BUG/MAJOR: ssl: fix a regression in ssl_sock_shutw()
Commit 405ff31 ("BUG/MINOR: ssl: assert on SSL_set_shutdown with BoringSSL")
introduced a regression causing some random crashes apparently due to
memory corruption. The issue is the use of SSL_CTX_set_quiet_shutdown()
instead of SSL_set_quiet_shutdown(), making it use a different structure
and causing the flag to be put who-knows-where.

Many thanks to Jarno Huuskonen who reported this bug early and who
bisected the issue to spot this patch. No backport is needed, this
is 1.8-specific.
2017-02-13 11:15:13 +01:00
Thierry FOURNIER
62c8a21c10 BUG/MINOR: sendmail: The return of vsnprintf is not cleanly tested
The string formatted by vsnprintf may be bigger than the size of
the buffer "buf". This case is not tested.

This should be backported to 1.6 and 1.7
2017-02-10 06:18:17 +01:00
Christopher Faulet
cdade94cf5 BUG/MINOR: http: Return an error when a replace-header rule failed on the response
Historically, http-response rules couldn't produce errors generating HTTP
responses during their evaluation. This possibility was "implicitly" added with
http-response redirect rules (51d861a4). But, at the time, replace-header rules
were kept untouched. When such a rule failed, the rules processing was just
stopped (like for an accept rule).

Conversely, when a replace-header rule fails on the request, it generates an
HTTP response (400 Bad Request).

With this patch, errors on replace-header rules are now handled in the same
way for HTTP requests and HTTP responses.
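
For illustration, a response-side rule of the kind affected by this change
(the header and replacement format are made up):

  http-response replace-header Set-Cookie (.*) \1;\ Secure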

This patch should be backported in 1.7 and 1.6.
2017-02-08 19:08:38 +01:00
Christopher Faulet
07a0fecced BUG/MEDIUM: http: Prevent replace-header from overwriting a buffer
This is the same fix as the one concerning the redirect rules (0d94576c).

The buffer used to expand the <replace-fmt> argument must be protected to
prevent it from being overwritten during build_logline() execution (the
function used to expand the format string).

This patch should be backported in 1.7, 1.6 and 1.5. It relies on commit b686afd
("MINOR: chunks: implement a simple dynamic allocator for trash buffers") for
the trash allocator, which has to be backported as well.
2017-02-08 19:08:38 +01:00
Christopher Faulet
f1cc5d0eaf BUG/MEDIUM: filters: Do not truncate HTTP response when body length is undefined
Some users have experienced some troubles using the compression filter when the
HTTP response body length is undefined. They complained about receiving
truncated responses.

In fact, the bug can be triggered if there is at least one filter attached to
the stream but none registered to analyze the HTTP response body. In this case,
when the body length is undefined, data should be forwarded without any
parsing. But, because of a wrong check, we were starting to parse them. Because
it was not expected, the end of the response was not correctly detected and
the response could be truncated. So now, we rely on the HAS_DATA_FILTER macro
instead of the HAS_FILTER one to choose whether to parse the HTTP response
body or not.

Furthermore, in http_response_forward_body, the test to not forward the server
closure to the client has been updated to reflect conditions listed in the
associated comment.

And finally, in http_msg_forward_body, when the body length is undefined, we
continue parsing it until the server closes the connection, without relying
on the filters. So filters can safely stop filtering data during the parsing.

This fix should be backported in 1.7
2017-02-08 19:08:38 +01:00
Thierry FOURNIER
0d94576c74 BUG/MEDIUM: http: prevent redirect from overwriting a buffer
See 4b788f7d34

If we use the action "http-request redirect" with a Lua sample-fetch or
converter, and the Lua function calls one of the Lua log functions, the
header name is corrupted: it contains an extract of the last logged data.

This is due to an overwrite of the trash buffer, because its scope is not
respected in the "add-header" function. The scope of the trash buffer must
be limited to the function using it. The build_logline() function can
execute a lot of other functions which can use the trash buffer.

This patch fixes the usage of the trash buffer. It limits the scope of this
global buffer to the local function: we first build the header value using
build_logline, and then we store the header name.

Thanks to Jesse Schulman for the bug report.

This patch must be backported in 1.7, 1.6 and 1.5 version, and it relies
on commit b686afd ("MINOR: chunks: implement a simple dynamic allocator for
trash buffers") for the trash allocator, which has to be backported as well.
2017-02-08 11:16:46 +01:00
Willy Tarreau
b686afd568 MINOR: chunks: implement a simple dynamic allocator for trash buffers
The trash buffers are becoming increasingly complex to deal with due to
the code's modularity allowing some functions to be chained and causing
the same chunk buffers to be used multiple times along the chain, possibly
corrupting each other. In fact the trash buffers were designed from scratch
explicitly not to survive a function call, but string manipulation makes this
impossible most of the time while not fulfilling the need for reliable
temporary chunks.

Here we introduce the ability to allocate a temporary trash chunk which
is reserved, so that it will not conflict with the trash chunks other
functions use, and will even support reentrant calls (eg: build_logline).

For this, we create a new pool which is exactly the size of a usual chunk
buffer plus the size of the chunk struct so that these chunks when allocated
are exactly the same size as the ones returned by get_trash_buffer(). These
chunks may fail so the caller must check them, and the caller is also
responsible for freeing them.

The code focuses on minimal changes and ease of reliable backporting
because it will be needed in stable versions in order to support next
patch.
2017-02-08 11:16:29 +01:00
Baptiste Assmann
26c6eb8383 BUG/MAJOR: dns: restart sockets after fork()
UDP sockets used to send DNS queries are created before the fork happens and
this is a big problem because all the processes (in case of a configuration
starting multiple processes) share the same socket. Some processes may
consume responses dedicated to another one, some servers may be disabled,
some IPs changed, etc...

As a workaround, this patch closes the existing socket and creates a new
one after the fork() has happened.

[wt: backport this to 1.7]
2017-02-03 07:22:06 +01:00
Baptiste Assmann
5cd1b9222e MINOR: dns: give ability to dns_init_resolvers() to close a socket when requested
The function dns_init_resolvers() is used to initialize the sockets used to
send DNS queries.
This patch gives the function the ability to close a socket before
re-opening it.

[wt: this needs to be backported to 1.7 for next fix]
2017-02-03 07:21:32 +01:00
Thierry FOURNIER
4dc7197338 BUG/MINOR: lua: Map.end are not reliable because "end" is a reserved keyword
This patch changes the names, prefixing them with a "_". So "end" becomes
"_end". Backward compatibility with names without the "_" prefix is
preserved. Alternatively, the keyword "end" can still be used like this:
Map['end'].

Thanks to Robin H. Johnson for the bug report.

This should be backported in version 1.6 and 1.7
2017-01-30 20:29:10 +01:00
Willy Tarreau
9484179f32 BUG/MINOR: unix: fix connect's polling in case no data are scheduled
There's a test after a successful synchronous connect() consisting
in waking the data layer up asap if there's no more handshake.
Unfortunately this test is run before setting the CO_FL_SEND_PROXY
flag and before the transport layer adds its own flags, so it can
indicate a willingness to send data while it's not the case and it
will have to be handled later.

This has no visible effect except a useless call to a function in
case of health checks making use of the proxy protocol for example.

Additionally a corner case where EALREADY was returned and considered
equivalent to EISCONN was fixed so that it's considered equivalent to
EINPROGRESS given that the connection is not complete yet. But this
code should never return on the first call anyway so it's mostly a
cleanup.

This fix should be backported to 1.7 and 1.6 at least to avoid
headaches during some debugging.
2017-01-25 18:48:14 +01:00
Willy Tarreau
819efbf4b5 BUG/MEDIUM: tcp: don't poll for write when connect() succeeds
While testing a tcp_fastopen related change, it appeared that in the rare
case where connect() can immediately succeed, we still subscribe to write
notifications on the socket, causing the conn_fd_handler() to immediately
be called and a second call to connect() to be attempted to double-check
the connection.

In fact this issue had already been met with unix sockets (which often
respond immediately) and partially addressed, but incorrectly, so another
patch will follow. But for TCP nothing was done.

The fix consists in removing the WAIT_L4_CONN flag if connect() succeeds
and to subscribe for writes only if some handshakes or L4_CONN are still
needed. In addition in order not to fail raw TCP health checks, we have
to continue to enable polling for data when nothing is scheduled for
leaving and the connection is already established, otherwise the caller
will never be notified.

This fix should be backported to 1.7 and 1.6.
2017-01-25 18:46:01 +01:00
Willy Tarreau
e3e326d9f0 BUILD: ssl: kill a build warning introduced by BoringSSL compatibility
A recent patch to support BoringSSL caused this warning to appear on
OpenSSL 1.1.0 :
   src/ssl_sock.c:3062:4: warning: statement with no effect [-Wunused-value]

It's caused by SSL_CTX_set_ecdh_auto() which is now only a macro testing
that the last argument is zero, and the result is not used here. Let's
just kill it for both versions.

Tested with 0.9.8, 1.0.0, 1.0.1, 1.0.2, 1.1.0. This fix may be backported
to 1.7 if the boringssl fix is as well.
2017-01-19 17:56:20 +01:00
Emmanuel Hocdet
fdec7897fd BUILD: ssl: fix to build (again) with boringssl
Limitations:
  - force-ssl/tls is disabled (needs more work); it should be set earlier
    with SSL_CTX_new (SSL_CTX_set_ssl_version is removed)
  - generate-certificates is disabled (needs more work); the
    SSL_NO_GENERATE_CERTIFICATES define is introduced to disable
    generate-certificates.

Clean up some #ifdefs and types related to the boringssl environment.
2017-01-16 12:40:35 +01:00
Misiek
2da082d732 MINOR: cli: Add possibility to change agent config via CLI/socket
This change adds the possibility to change the agent-addr and agent-send
directives via the CLI/socket. Now you can replace servers and their
configuration without reloading/restarting the whole haproxy, so it's a step
in the no-reload/no-restart direction.
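
For illustration, a hypothetical sketch of using it over the stats socket
(the exact "set server ... agent-addr/agent-send" syntax is assumed here
rather than taken from this message; names, values and the socket path are
made up):

  $ echo "set server bk_web/web1 agent-addr 10.1.0.11" | socat /var/run/haproxy.stat stdio
  $ echo "set server bk_web/web1 agent-send ready" | socat /var/run/haproxy.stat stdio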

Depends on #e9602af - agent-addr is implemented there.

Can be backported to 1.7.
2017-01-16 11:38:59 +01:00
Misiek
ea849333ca MINOR: checks: Add agent-addr config directive
This directive adds the possibility to set a different address for agent
checks. With this you can manage server status and weight from a central
place.
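
For illustration, a server line sketch combining it with the existing agent
keywords (addresses, ports and names are made up):

  server web1 10.0.0.11:80 check agent-check agent-port 12345 agent-addr 10.1.0.11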

Can be backported to 1.7.
2017-01-16 11:38:02 +01:00
Emmanuel Hocdet
e7f2b7301c MINOR: ssl: add curve suite for ECDHE negotiation
Add a 'curves' parameter on 'bind' and for 'crt-list' to set the curve suite
(ex: curves X25519:P-256).
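
For illustration, a bind line sketch (the certificate path is made up):

  bind :443 ssl crt /etc/haproxy/site.pem curves X25519:P-256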
2017-01-13 11:41:01 +01:00
Emmanuel Hocdet
98263291cc MAJOR: ssl: bind configuration per certificate
crt-list is extended to support ssl configuration. You can now have
such a line in a crt-list <file>:
mycert.pem [npn h2,http/1.1]

Supported settings include "npn", "alpn", "verify", "ca_file", "crl_file",
"ecdhe", "ciphers" configuration and ssl options.

"crt-base" is also supported to fetch certificates.
2017-01-13 11:40:34 +01:00
Christopher Faulet
70e2f27212 BUG/MINOR: stream: Fix how backend-specific analyzers are set on a stream
When the stream's backend was defined, the request's analyzers flag was always
set to 0 if the stream had no listener. This bug was introduced with the filter
API but never triggered (I think so).

Because of the commit 5820a366, it is now possible to encounter it. For
example, this happens when the trace filter is enabled on a SPOE backend. The
fix is pretty trivial.

This fix must be backported to 1.7.
2017-01-13 11:38:27 +01:00
Emeric Brun
3f78357066 OPTIM/MINOR: config: Optimize fullconn automatic computation loading configuration
The previous version used an O(number of proxies)^2 algo to get the sum of
the number of maxconns of frontends which reference a backend at least once.

This new version adds the frontend's maxconn number to the backend's
struct proxy member 'tot_fe_maxconn' when the backend name is resolved
for switching rules or default_backend statements. At the end, the final
backend's fullconn is computed by looping only once over all proxies, O(n).

The load of a configuration using a large amount of backends (10 thousand)
without a configured fullconn was reduced from several minutes to a few
seconds.
2017-01-12 17:36:09 +01:00
Lukas Tribus
3eb5b3fdd3 MINOR: ssl: don't show prefer-server-ciphers output
The output showing whether prefer-server-ciphers is supported by OpenSSL
actually always shows yes in 1.8, because SSL_OP_CIPHER_SERVER_PREFERENCE
is redefined before the actual check in src/ssl_sock.c, since it was
moved there from src/haproxy.c.

Since this is not really relevant anymore as we don't support OpenSSL
< 0.9.7 anyway, this change just removes this output.
2017-01-12 07:45:45 +01:00
Ryabin Sergey
77ee7526de BUG/MINOR: Reset errno variable before calling strtol(3)
Sometimes errno != 0 before calling strtol(3)

[wt: this needs to be backported to 1.7]
2017-01-11 21:30:07 +01:00
Lukas Tribus
b732321d03 MINOR: compression: fix -vv output without zlib/slz
When haproxy is compiled without zlib or slz, the output of
haproxy -vv shows (null).

Make haproxy -vv output great again by providing the proper
information (which is what we did before).

This is for 1.8 only.
2017-01-11 16:11:11 +01:00
Jarno Huuskonen
59af2df102 MINOR: proto_http.c 502 error txt typo.
[wt: should be backported to 1.7 and 1.6 as it was introduced in 1.6-dev4]
2017-01-11 12:44:41 +01:00
Jarno Huuskonen
16ad94adf6 MINOR: Use "500 Internal Server Error" for 500 error/status code message.
Internal Server Error is what is in RFC 2616/7231.
2017-01-11 12:44:40 +01:00
Emmanuel Hocdet
405ff31e31 BUG/MINOR: ssl: assert on SSL_set_shutdown with BoringSSL
With BoringSSL:
SSL_set_shutdown: Assertion `(SSL_get_shutdown(ssl) & mode) == SSL_get_shutdown(ssl)' failed.

"SSL_set_shutdown causes ssl to behave as if the shutdown bitmask (see SSL_get_shutdown)
were mode. This may be used to skip sending or receiving close_notify in SSL_shutdown by
causing the implementation to believe the events already happened.
It is an error to use SSL_set_shutdown to unset a bit that has already been set.
Doing so will trigger an assert in debug builds and otherwise be ignored.
Use SSL_CTX_set_quiet_shutdown instead."

Change logic to not notify on SSL_shutdown when connection is not clean.
2017-01-11 12:44:40 +01:00
Emmanuel Hocdet
b7a4c34aac BUG/MINOR: ssl: EVP_PKEY must be freed after X509_get_pubkey usage
"X509_get_pubkey() attempts to decode the public key for certificate x.
If successful it returns the public key as an EVP_PKEY pointer with its
reference count incremented: this means the returned key must be freed
up after use."
2017-01-11 12:44:40 +01:00
Willy Tarreau
7b760c9c80 BUG/MEDIUM: tools: do not force an unresolved address to AF_INET:0.0.0.0
This prevents DNS from resolving IPv6-only servers in 1.7. Note, this
patch depends on the previous series :

  1. BUG/MINOR: tools: fix off-by-one in port size check
  2. BUG/MEDIUM: server: consider AF_UNSPEC as a valid address family
  3. MEDIUM: server: split the address and the port into two different fields
  4. MINOR: tools: make str2sa_range() return the port in a separate argument
  5. MINOR: server: take the destination port from the port field, not the addr
  6. MEDIUM: server: disable protocol validations when the server doesn't resolve

This fix (hence the whole series) must be backported to 1.7.
2017-01-11 12:44:33 +01:00
Willy Tarreau
9698f4b295 MEDIUM: server: disable protocol validations when the server doesn't resolve
When a server doesn't resolve we don't know the address family so we
can't perform the basic protocol validations. However we know that we'll
ultimately resolve to AF_INET or AF_INET6 so the controls are OK. It is
important to proceed like this otherwise it will not be possible to start
with unresolved addresses.
2017-01-06 19:29:34 +01:00
Willy Tarreau
6ecb10aec7 MINOR: server: take the destination port from the port field, not the addr
Next patch will cause the port to disappear from the address field when servers
do not resolve so we need to take it from the separate field provided by
str2sa_range().
2017-01-06 19:29:34 +01:00
Willy Tarreau
48ef4c95b6 MINOR: tools: make str2sa_range() return the port in a separate argument
This will be needed so that we don't have to extract it from the returned
address, where it will not always be present anymore (eg: for unresolved
servers).
2017-01-06 19:29:34 +01:00
Willy Tarreau
04276f3d6e MEDIUM: server: split the address and the port into two different fields
Keeping the address and the port in the same field causes a lot of problems,
specifically on the DNS part where we're forced to cheat on the family to be
able to keep the port. This causes some issues such as some families not being
resolvable anymore.

This patch first moves the service port to a new field "svc_port" so that the
port field is never used anymore in the "addr" field (struct sockaddr_storage).
All call places were adapted (there aren't that many).
2017-01-06 19:29:33 +01:00
Willy Tarreau
3acfcd1aa1 BUG/MEDIUM: server: consider AF_UNSPEC as a valid address family
The DNS code is written so as to support AF_UNSPEC to decide on the
server family based on responses, but unfortunately snr_resolution_cb()
considers it as invalid causing a DNS storm to happen when a server
arrives with this family.

This situation is not supposed to happen as long as unresolved addresses
are forced to AF_INET, but this will change with the upcoming fixes and
it's possible that it's not granted already when changing an address on
the CLI.

This fix must be backported to 1.7 and 1.6.
2017-01-06 19:21:37 +01:00
Willy Tarreau
d7dad1bc49 BUG/MINOR: tools: fix off-by-one in port size check
port_to_str() checks that the port size is at least 5 characters instead
of at least 6. While in theory it could permit a buffer overflow, it's
harmless because all callers have at least 6 characters here.

This fix needs to be backported to 1.7, 1.6 and 1.5.
2017-01-06 16:46:22 +01:00
Willy Tarreau
4c18346c0f BUG/MINOR: config: emit a warning if http-reuse is enabled with incompatible options
http-reuse should normally not be used in conjunction with the proxy
protocol or with "usesrc clientip". While there's nothing fundamentally
wrong with this, whenever these options are used, the server expects the
client's IP address to be the source address for all requests, which doesn't make
sense with http-reuse.
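
For illustration, a backend sketch showing the kind of combination that now
triggers a warning (names and addresses are made up):

  backend bk_app
      http-reuse safe
      source 0.0.0.0 usesrc clientip
      server app1 10.0.0.21:80 check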
2017-01-06 12:21:38 +01:00
Emeric Brun
4f60301235 MINOR: connection: add sample fetch "fc_rcvd_proxy"
fc_rcvd_proxy : boolean
  Returns true if the client initiated the connection with a PROXY protocol
  header.

A flag is added on the struct connection if a PROXY header is successfully
parsed.
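
For illustration, a minimal acl sketch using the new fetch (the acl name and
the rule are made up):

  acl via_proxy fc_rcvd_proxy
  tcp-request content reject if !via_proxy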
2017-01-06 11:59:17 +01:00
Robin H. Johnson
52f5db2a44 MINOR: http: custom status reason.
The older 'rsprep' directive allows modification of the status reason.

Extend 'http-response set-status' to take an optional string of the new
status reason.

  http-response set-status 418 reason "I'm a coffeepot"

Matching updates in Lua code:
- AppletHTTP.set_status
- HTTP.res_set_status

Signed-off-by: Robin H. Johnson <robbat2@gentoo.org>
2017-01-06 11:57:44 +01:00
Willy Tarreau
2afff9c2d6 BUG/MAJOR: http: fix risk of getting invalid reports of bad requests
Commits 5f10ea3 ("OPTIM: http: improve parsing performance of long
URIs") and 0431f9d ("OPTIM: http: improve parsing performance of long
header lines") introduced a bug in the HTTP parser : when a partial
request is read, the first part ends up on a 8-bytes boundary (or 4-byte
on 32-bit machines), the end lies in the header field value part, and
the buffer used to contain a CR character exactly after the last block,
then the parser could be confused and read this CR character as being
part of the current request, then switch to a new state waiting for an
LF character. Then when the next part of the request appeared, it would
read the character following what was erroneously mistaken for a CR,
see that it is not an LF and fail on a bad request. In some cases, it
can even be worse and the header following the hole can be improperly
indexed causing all sort of unexpected behaviours like a content-length
being ignored or a header appended at the wrong position. The reason is
that there's no control of the end of parsing just after breaking out of the
loop.

One way to reproduce it is with this config :

  global
      stats socket /tmp/sock1 mode 666 level admin
      stats timeout 1d

  frontend  px
      bind :8001
      mode http
      timeout client 10s
      redirect location /

And sending requests this way :

  $ tcploop 8001 C P S:"$(dd if=/dev/zero bs=16384 count=1 2>/dev/null | tr '\000' '\r')"
  $ tcploop 8001 C P S:"$(dd if=/dev/zero bs=16384 count=1 2>/dev/null | tr '\000' '\r')"
  $ tcploop 8001 C P \
    S:"GET / HTTP/1.0\r\nX-padding: 0123456789.123456789.123456789.123456789.123456789.123456789.1234567" P \
    S:"89.123456789\r\n\r\n" P

Then a "show errors" on the socket will report :

  $ echo "show errors" | socat - /tmp/sock1
  Total events captured on [04/Jan/2017:15:09:15.755] : 32

  [04/Jan/2017:15:09:13.050] frontend px (#2): invalid request
    backend <NONE> (#-1), server <NONE> (#-1), event #31
    src 127.0.0.1:59716, session #91, session flags 0x00000080
    HTTP msg state 17, msg flags 0x00000000, tx flags 0x00000000
    HTTP chunk len 0 bytes, HTTP body len 0 bytes
    buffer flags 0x00808002, out 0 bytes, total 111 bytes
    pending 111 bytes, wrapping at 16384, error at position 107:

    00000  GET / HTTP/1.0\r\n
    00016  X-padding: 0123456789.123456789.123456789.123456789.123456789.12345678
    00086+ 9.123456789.123456789\r\n
    00109  \r\n

This fix must be backported to 1.7.

Many thanks to Aleksey Gordeev and Axel Reinhold for providing detailed
network captures and configurations exhibiting the issue.
2017-01-05 20:26:55 +01:00
Willy Tarreau
0ebb511b3e MINOR: tools: add a generic hexdump function for debugging
debug_hexdump() prints to the requested output stream (typically stdout
or stderr) a hex dump of the blob passed as argument. This is useful
to help debug binary protocols.
2017-01-05 20:12:20 +01:00
Willy Tarreau
10e61cbf41 BUG/MINOR: http: report real parser state in error captures
Error captures almost always report a state 26 (MSG_ERROR) making it
very hard to know what the parser was expecting. The reason is that
we have to switch to MSG_ERROR to trigger the dump, and then during
the dump we capture the current state which is already MSG_ERROR. With
this change we now copy the current state into an err_state field that
will be reported as the faulty state.

This patch looks a bit large because the parser doesn't update the
current state until it runs out of data so the current state is never
known when jumping to the error label! Thus the code had to be updated
to take copies of the current state before switching to MSG_ERROR based
on the switch/case values.

As a bonus, it now shows the current state in human-readable form and
not only in numeric form ; in the past it was not an issue since it was
always 26 (MSG_ERROR).

At least now we can get exploitable invalid request/response reports :

  [05/Jan/2017:19:28:57.095] frontend f (#2): invalid request
    backend <NONE> (#-1), server <NONE> (#-1), event #1
    src 127.0.0.1:39894, session #4, session flags 0x00000080
    HTTP msg state MSG_RQURI(4), msg flags 0x00000000, tx flags 0x00000000
    HTTP chunk len 0 bytes, HTTP body len 0 bytes
    buffer flags 0x00908002, out 0 bytes, total 20 bytes
    pending 20 bytes, wrapping at 16384, error at position 5:

    00000  GET /\e HTTP/1.0\r\n
    00017  \r\n
    00019  \n

  [05/Jan/2017:19:28:33.827] backend b (#3): invalid response
    frontend f (#2), server s1 (#1), event #0
    src 127.0.0.1:39718, session #0, session flags 0x000004ce
    HTTP msg state MSG_HDR_NAME(17), msg flags 0x00000000, tx flags 0x08300000
    HTTP chunk len 0 bytes, HTTP body len 0 bytes
    buffer flags 0x80008002, out 0 bytes, total 59 bytes
    pending 59 bytes, wrapping at 16384, error at position 31:

    00000  HTTP/1.1 200 OK\r\n
    00017  Content-length : 10\r\n
    00038  \r\n
    00040  0a\r\n
    00044  0123456789\r\n
    00056  0\r\n

This should be backported to 1.7 and 1.6 at least to help with bug
reports.
2017-01-05 19:48:50 +01:00
Christopher Faulet
0184ea71a6 BUG/MAJOR: channel: Fix the definition order of channel analyzers
It is important to define analyzers (AN_REQ_* and AN_RES_*) in the same order
they are evaluated in process_stream. This order is really important because
during analyzers evaluation, we run them in the order of the lower bit to the
higher one. This way, when an analyzer adds/removes another one during its
evaluation, we know if it is located before or after it. So, when it adds an
analyzer which is located before it, we can switch to it immediately, even if it
has already been called once but removed since.

Over time, with the introduction of new analyzers, this order was broken. The
main problems come from the filter analyzers. We used values not related to
their evaluation order. Furthermore, we used the same values for request and
response analyzers.

So, to fix the bug, filter analyzers have been split into 2 distinct lists to
have different analyzers for the request channel than those for the response
channel. And of course, we have moved them to the right place.

Some other analyzers have been reordered to respect the evaluation order:

  * AN_REQ_HTTP_TARPIT has been moved just before AN_REQ_SRV_RULES
  * AN_REQ_PRST_RDP_COOKIE has been moved just before AN_REQ_STICKING_RULES
  * AN_RES_STORE_RULES has been moved just after AN_RES_WAIT_HTTP

Note today we have 29 analyzers, all stored into a 32 bits bitfield. So we can
still add 4 more analyzers before having a problem. A good way to fend off the
problem for a while could be to have a different bitfield for request and
response analyzers.

[wt: all of this must be backported to 1.7, and part of it must be backported
 to 1.6 and 1.5]
2017-01-05 17:58:22 +01:00
Thierry FOURNIER
401c64bfe4 BUG/MINOR: sample-fetches/stick-tables: bad type for the sample fetches sc*_get_gpt0
The registered output type for the sample fetches sc*_get_gpt0
is a boolean, but the value returned is an integer.

This patch fixes the default type to SINT in place of BOOL.

This patch should be backported in 1.6 and 1.7
2017-01-05 16:04:05 +01:00
David Harrigan
d3db35a1d1 MINOR: stats: Support "select all" for backend actions
Allow the user to quickly select all servers within a group before invoking an
action.
2017-01-02 16:56:33 +01:00
Olivier Doucet
1ca1b6fe3c BUG/MINOR: option prefer-last-server must be ignored in some case
when using "option prefer-last-server", we may not always stay on
the same backend if option balance told us otherwise.
For example, backend may change in the following cases:
balance hdr()
balance rdp-cookie
balance source
balance uri
balance url_param

[wt: backport this to 1.7 and 1.6]
2017-01-02 14:26:22 +01:00
David Carlier
f2592b29f1 MEDIUM: regex: pcre2 support
This adds support for the newest pcre2 library, more secure than its older
sibling at the cost of a more complex API.
It works pretty similarly to the pcre part to keep the overall change smooth,
except:

- we define the supported string class at compile time.
- after matching, the ovec data is properly sized, although we do not take
  advantage of it here.
- the lack of JIT support is treated less 'dramatically' as pcre2_jit_compile
  is a 'no-op' in this case.
2016-12-28 12:51:51 +01:00
Thierry FOURNIER
01e0974b5a MINOR: samples: add xx-hash functions
This patch adds support for the xxHash 32- and 64-bit functions.
2016-12-26 12:45:04 +01:00
Thierry FOURNIER
de6925eccf BUILD: lua: build failed on FreeBSD.
s6_addr* fields are not available in the userland on
BSD systems in general.

bug reported by David Carlier

needs backport to 1.7.x
2016-12-23 18:03:43 +01:00
William Lallemand
1e4fc43630 BUG/MINOR: systemd: potential zombie processes
In systemd mode (-Ds), the master haproxy process is waiting for each
child to exit in a specific order. If a process dies when it's not its
turn, it will become a zombie process until every process exits.

The master is now waiting for any process to exit in any order.

This patch should be backported to 1.7, 1.6 and 1.5.
2016-12-23 16:21:48 +01:00
Willy Tarreau
119a4084bf BUG/MEDIUM: ssl: for a handshake when server-side SNI changes
Calling SSL_set_tlsext_host_name() on the current SSL ctx has no effect
if the session is being resumed because the hostname is already stored
in the session and is not advertised again in subsequent connections.
It's visible when enabling SNI and health checks at the same time because
checks do not send an SNI and regular traffic reuses the same connection,
resulting in no SNI being sent.

The only short-term solution is to reset the reused session when the
SNI changes compared to the previous one. It can make the server-side
performance suffer when SNIs are interleaved but it will work. A better
long-term solution would be to keep a small cache of a few contexts for
a few SNIs.

Now with SSL_set_session(ctx, NULL) it works. This needs to be double-
checked though. The man says that SSL_set_session() frees any previously
existing context. Some people report a bit of breakage when calling
SSL_set_session(NULL) on openssl 1.1.0a (freed session not reusable at
all though it's not an issue for now).

This needs to be backported to 1.7 and 1.6.
2016-12-23 16:21:40 +01:00
Marcin Deranek
57b877147d BUG/MINOR: backend: nbsrv() should return 0 if backend is disabled
According to nbsrv() documentation this fetcher should return "an
integer value corresponding to the number of usable servers".
In case the backend is disabled none of the servers is usable, so I believe
the fetcher should return 0.

This patch should be backported to 1.7, 1.6, 1.5.
2016-12-23 00:09:12 +01:00
Willy Tarreau
ef934603c0 CLEANUP: ssl: move most ssl-specific global settings to ssl_sock.c
Historically a lot of SSL global settings were stored into the global
struct, but we've reached a point where there are 3 ifdefs in it just
for this, and others in haproxy.c to initialize it.

This patch moves all the private fields to a new struct "global_ssl"
stored in ssl_sock.c. This includes :

       char *crt_base;
       char *ca_base;
       char *listen_default_ciphers;
       char *connect_default_ciphers;
       int listen_default_ssloptions;
       int connect_default_ssloptions;
       int tune.sslprivatecache; /* Force to use a private session cache even if nbproc > 1 */
       unsigned int tune.ssllifetime;   /* SSL session lifetime in seconds */
       unsigned int tune.ssl_max_record; /* SSL max record size */
       unsigned int tune.ssl_default_dh_param; /* SSL maximum DH parameter size */
       int tune.ssl_ctx_cache; /* max number of entries in the ssl_ctx cache. */

The "tune" part was removed (useless here) and the occasional "ssl"
prefixes were removed as well. Thus for example instead of

       global.tune.ssl_default_dh_param

we now have :

       global_ssl.default_dh_param

A few initializers were present in the constructor, they could be brought
back to the structure declaration.

A few other entries had to stay in global for now. They concern memory
calculation (used in haproxy.c) and stats (used in stats.c).

The code is already much cleaner now, especially for global.h and haproxy.c
which become readable.
2016-12-22 23:26:38 +01:00
Willy Tarreau
d1c5750370 CLEANUP: ssl: move tlskeys_finalize_config() to a post_check callback
tlskeys_finalize_config() was the only reason for haproxy.c to still
require ifdef and includes for ssl_sock. This one fits perfectly well
in the late initializers so it was changed to be registered with
hap_register_post_check().
2016-12-22 23:26:38 +01:00
Willy Tarreau
17d4538044 MINOR: ssl_sock: implement and use prepare_srv()/destroy_srv()
Now we can simply check the transport layer at run time and decide
whether or not to initialize or destroy these entries. This removes
other ifdefs and includes from cfgparse.c, haproxy.c and hlua.c.
2016-12-22 23:26:38 +01:00
Willy Tarreau
d9f5cca3d5 CLEANUP: connection: unexport raw_sock and ssl_sock
This way we're sure not to reuse them by accident.
2016-12-22 23:26:38 +01:00
Willy Tarreau
a261e9b094 CLEANUP: connection: remove all direct references to raw_sock and ssl_sock
Now we exclusively use xprt_get(XPRT_RAW) instead of &raw_sock or
xprt_get(XPRT_SSL) for &ssl_sock. This removes a bunch of #ifdef and
include spread over a number of location including backend, cfgparse,
checks, cli, hlua, log, server and session.
2016-12-22 23:26:38 +01:00
Willy Tarreau
13e1410f8a MINOR: connection: add a minimal transport layer registration system
There are still a lot of #ifdef USE_OPENSSL in the code (still 43
occurrences) because we never know if we can directly access ssl_sock
or not. This patch attacks the problem differently by providing a
way for transport layers to register themselves and for users to
retrieve the pointer. Unregistered transport layers will point to NULL
so it will be easy to check if SSL is registered or not. The mechanism
is very inexpensive as it relies on a two-entries array of pointers,
so the performance will not be affected.
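
A minimal sketch of such a two-entry registration array, reusing the
XPRT_RAW/XPRT_SSL names mentioned above; the exact prototypes in HAProxy
may differ:

```
/* Hedged sketch of a minimal transport layer registry. struct xprt_ops is
 * kept opaque here; in reality it holds the transport's I/O callbacks. */
enum { XPRT_RAW = 0, XPRT_SSL = 1, XPRT_ENTRIES /* must be last */ };

struct xprt_ops;

static struct xprt_ops *registered_xprt[XPRT_ENTRIES];

/* called by each transport layer at startup (eg: from a constructor) */
static inline void xprt_register(int id, struct xprt_ops *xprt)
{
	if (id >= 0 && id < XPRT_ENTRIES)
		registered_xprt[id] = xprt;
}

/* returns NULL when the transport (eg: SSL) was not built in */
static inline struct xprt_ops *xprt_get(int id)
{
	return (id >= 0 && id < XPRT_ENTRIES) ? registered_xprt[id] : NULL;
}
```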
2016-12-22 23:26:38 +01:00
Willy Tarreau
141ad85d10 MINOR: server: move the use_ssl field out of the ifdef USE_OPENSSL
Having it in the ifdef complicates certain operations which require
additional ifdefs just to access a member which could remain zero in
non-ssl cases. Let's move it out, it will not even increase the
struct size on 64-bit machines due to alignment.
2016-12-22 23:26:38 +01:00
Willy Tarreau
795cdabb57 MINOR: ssl_sock: implement ssl_sock_destroy_bind_conf()
Instead of hard-coding all SSL destruction in cfgparse.c and haproxy.c,
we now register this new function as the transport layer's destroy_bind_conf()
and call it only when defined. This removes some non-obvious SSL-specific
code and #ifdefs from cfgparse.c and haproxy.c
2016-12-22 23:26:38 +01:00
Willy Tarreau
55d3791b46 MEDIUM: ssl_sock: implement ssl_sock_prepare_bind_conf()
Instead of hard-coding all SSL preparation in cfgparse.c, we now register
this new function as the transport layer's prepare_bind_conf() and call it
only when defined. This removes some non-obvious SSL-specific code from
cfgparse.c as well as a #ifdef.
2016-12-22 23:26:38 +01:00
Willy Tarreau
0320934f7e MEDIUM: ssl: remove the proxy argument from most functions
Most of the SSL functions used to have a proxy argument which was mostly
used to be able to emit clean errors using Alert(). First, many of them
were converted to memprintf() and don't require this pointer anymore.
Second, the rare ones which still need it also have either a bind_conf argument
or a server argument, both of which carry a pointer to the relevant proxy.

So let's now get rid of it, it needlessly complicates the API and certain
functions already have many arguments.
2016-12-22 23:26:38 +01:00
Willy Tarreau
c95bad5013 MEDIUM: move listener->frontend to bind_conf->frontend
Historically, all listeners have a pointer to the frontend. But since
the introduction of SSL, we now have an intermediary layer called
bind_conf corresponding to a "bind" line. It makes no sense to have
the frontend on each listener given that it's the same for all
listeners belonging to a same bind_conf. Also certain parts like
SSL can only operate on bind_conf and need the frontend.

This patch fixes this by moving the frontend pointer from the listener
to the bind_conf. The extra indirection is quite cheap given that the
places where this is used are very scarce.
2016-12-22 23:26:38 +01:00
Willy Tarreau
71a8c7c49e MINOR: listener: move the transport layer pointer to the bind_conf
A mistake was made when the socket layer was cut into proto and
transport: the transport was attached to the listener while all
listeners in a single "bind" line always have exactly the same
transport. It doesn't seem obvious but this is the reason why there
are so many #ifdefs USE_OPENSSL in cfgparse : a lot of operations
have to be open-coded because cfgparse only manipulates bind_conf
and we don't have the information of the transport layer here.

Very little code makes use of the transport layer, mainly session
setup and log. These places can afford an extra pointer indirection
(the listener points to the bind_conf). This change is thus very small,
it saves a little bit of memory (8B per listener) and makes the code
more flexible.
2016-12-22 23:26:37 +01:00
Willy Tarreau
5820a36690 MEDIUM: spoe: don't create a dummy listener for outgoing connections
The code currently creates a listener only to ensure that sess->li is
properly populated, and to retrieve the frontend (which is also available
directly from the session).

It turns out that the current infrastructure (for a large part) already
supports not having any listener on a session (since Lua does the same),
except for the following places which were not yet converted :

  - session_count_new() : used by session_accept_fd, ie never for spoe
  - session_accept_fd() : never used here, an applet initiates the session
  - session_prepare_log_prefix() : embryonic sessions only, thus unused
  - session_kill_embryonic() : same
  - conn_complete_session() : same
  - build_log_line() for fields %cp, %fp and %ft : unused here but may change
  - http_wait_for_request() and subsequent functions : unused here

Thus for now it's as safe to run SPOE without a listener as it is for Lua,
and this was an obstacle against some cleanups of the listener code. The
places above should be plugged so that it becomes safe over the long term
as well.

An alternative in the future might be to create a dummy listener that
outgoing connections could use just to avoid keeping a null here.
2016-12-22 23:26:37 +01:00
Willy Tarreau
a12dde04e0 MINOR: tcp-rules: check that the listener exists before updating its counters
The tcp rules may be applied to a TCP stream initiated by applets (spoe,
lua, peers, later H2). These ones do not necessarily have a valid listener
so we must verify the field is not null before updating the stats. For now
there's no way to trigger this bug because lua and peers don't have analysers,
h2 is not implemented and spoe has a dummy listener. But this threatens to
break at any instant.
2016-12-22 23:26:37 +01:00
Thierry FOURNIER
0ff98a4758 BUG/MINOR: stats: fix be/sessions/current out in typed stats
"scur" was typed as "limit" (FO_CONFIG) and "config value" (FN_LIMIT).
The real types of "scur" are "metric" (FO_METRIC) and "gauge"
(FN_GAUGE). FO_METRIC and FN_GAUGE are the value 0.
2016-12-22 23:20:00 +01:00
Willy Tarreau
94ff03af84 BUG/MEDIUM: ssl: avoid double free when releasing bind_confs
ssl_sock functions don't mark pointers as NULL after freeing them. So
if a "bind" line specifies some SSL settings without the "ssl" keyword,
they will get freed at the end of check_config_validity(), then freed
a second time on exit. Simply mark the pointers as NULL to fix this.
This fix needs to be backported to 1.7 and 1.6.
2016-12-22 22:07:36 +01:00
Willy Tarreau
30fd4bd844 BUG/MEDIUM: ssl: properly reset the reused_sess during a forced handshake
We have a bug when SSL reuse is disabled on the server side : we reset
the context but do not set it to NULL, causing a multiple free of the
same entry. It seems like this bug cannot appear as-is with the current
code (or the conditions to get it are not obvious) but it did definitely
strike when trying to fix another bug with the SNI which forced a new
handshake.

This fix should be backported to 1.7, 1.6 and 1.5.
2016-12-22 21:54:21 +01:00
Willy Tarreau
368780334c MEDIUM: compression: move the zlib-specific stuff from global.h to compression.c
This finishes to clean up the zlib-specific parts. It also unbreaks recent
commit b97c6fb ("CLEANUP: compression: use the build options list to report
the algos") which broke USE_ZLIB due to MAXWBITS not being defined anymore
in haproxy.c.
2016-12-22 20:00:46 +01:00
Willy Tarreau
14e36a101c MEDIUM: cfgparse: move ssl-dh-param-file parsing to ssl_sock
This one was missing an arg count check which was added in the operation.
2016-12-21 23:39:26 +01:00
Willy Tarreau
f22e9683e9 MINOR: cfgparse: move parsing of ssl-default-{bind,server}-ciphers to ssl_sock
These ones are pretty similar, just an strdup. Contrary to ca-base
and crt-base they support being changed.
2016-12-21 23:39:26 +01:00
Willy Tarreau
0bea58d641 MEDIUM: cfgparse: move maxsslconn parsing to ssl_sock
This one simply reuses the existing integer parser. It implicitly
adds a control against negative numbers.
2016-12-21 23:39:26 +01:00
Willy Tarreau
9ceda384e9 MEDIUM: cfgparse: move all tune.ssl.* keywords to ssl_sock
The following keywords were still parsed in cfgparse and were moved
to ssl_sock to remove some #ifdefs :

"tune.ssl.cachesize", "tune.ssl.default-dh-param", "tune.ssl.force-private-cache",
"tune.ssl.lifetime", "tune.ssl.maxrecord", "tune.ssl.ssl-ctx-cache-size".

It's worth mentioning that some of them used to have incorrect sign
checks possibly resulting in some negative values being used. All of
them are now checked for being positive.
2016-12-21 23:39:26 +01:00
Willy Tarreau
8c3b0fd273 MINOR: cfgparse: move parsing of "ca-base" and "crt-base" to ssl_sock
This removes 2 #ifdefs and makes the code much cleaner. The controls
are still there and the two parsers have been merged into a single
function ssl_parse_global_ca_crt_base().

It's worth noting that there's still a check to prevent a change when
the value was already specified. This test seems useless and possibly
counter-productive, it may have to be revisited later, but for now it
was implemented identically.
2016-12-21 23:39:26 +01:00
Willy Tarreau
ece9b07c71 MINOR: cfgparse: add two new functions to check arguments count
We already had alertif_too_many_args{,_idx}(), but these ones are
specifically designed for use in cfgparse. Outside of it we're
trying to avoid calling Alert() all the time so we need an
equivalent using a pointer to an error message.

These new functions, called too_many_args{,_idx}(), do exactly this.
They don't take the file name nor the line number which they have
no use for but instead they take an optional pointer to an error
message and the pointer to the error code is optional as well.
With (NULL, NULL) they'll simply check the validity and return a
verdict. They are quite convenient for use in isolated keyword
parsers.

These two new functions as well as the previous ones have all been
exported.
2016-12-21 23:39:26 +01:00
Willy Tarreau
bee9dde31f CLEANUP: da: move global settings out of the global section
We replaced global.deviceatlas with global_deviceatlas since there's no need
to store all this into the global section. This removes the last #ifdefs,
and now the code is 100% self-contained in da.c. The file da.h was now
removed because it was only used to load dac.h, which is more easily
loaded directly from da.c. It provides another good example of how to
integrate code in the future without touching the core parts.
2016-12-21 21:30:54 +01:00
Willy Tarreau
b7a671477f CLEANUP: 51d: move global settings out of the global section
We replaced global._51degrees with global_51degrees since there's no need
to store all this into the global section. This removes the last #ifdefs,
and now the code is 100% self-contained in 51d.c. The file 51d.h was now
removed because it was only used to load 51Degrees.h, which is more easily
loaded from 51d.c. It provides a good example of how to integrate code in
the future without touching the core parts.
2016-12-21 21:30:54 +01:00
Willy Tarreau
350c1c6886 CLEANUP: wurfl: move global settings out of the global section
We replaced global.wurfl with global_wurfl since there's no need to store
all this into the global section. This removes the last #ifdefs, and now
the code is 100% self-contained in wurfl.c. It provides a good example of
how to integrate code in the future without touching the core parts.
2016-12-21 21:30:54 +01:00
Willy Tarreau
b149eedd5a CLEANUP: da: register the deinitialization function
deinit_deviceatlas() is not called anymore from haproxy.c, removing 2
#ifdefs and one include. The include file still includes other parts of the
DeviceAtlas library so it was not touched.
2016-12-21 21:30:54 +01:00
Willy Tarreau
7ac4c20509 CLEANUP: 51d: register the deinitialization function
deinit_51degrees() is not called anymore from haproxy.c, removing
2 #ifdefs and one include. The function was made static. The include
file still includes 51Degrees.h which is needed by global.h and 51d.c
so it was not touched beyond this last function removal.
2016-12-21 21:30:54 +01:00
Willy Tarreau
800f93f375 CLEANUP: wurfl: register the deinit function via the dedicated list
By registering the deinit function we avoid another #ifdef in haproxy.c.
The ha_wurfl_deinit() function has been made static and unexported. Now
proto/wurfl.h is totally empty, the code being self-contained in wurfl.c,
so the useless .h has been removed.
2016-12-21 21:30:54 +01:00
Willy Tarreau
05554e6bf1 MINOR: haproxy: add a registration for post-deinit functions
The 3 device detection engines stop at the same place in deinit()
with the usual #ifdefs. Similar to the other functions we can have
some late deinitialization functions. These functions do not return
anything however so we have to use a different type.
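
For illustration, a minimal sketch of such a post-deinit registration list;
list handling and names are simplified compared to the real code:

```
#include <stdlib.h>

/* Hedged sketch of a post-deinit registration list as described above. */
struct post_deinit_fct {
	void (*fct)(void);               /* returns nothing, unlike init hooks */
	struct post_deinit_fct *next;
};

static struct post_deinit_fct *post_deinit_list;

void hap_register_post_deinit(void (*fct)(void))
{
	struct post_deinit_fct *node = calloc(1, sizeof(*node));

	if (!node)
		return;                  /* illustrative: silently skip on OOM */
	node->fct = fct;
	node->next = post_deinit_list;
	post_deinit_list = node;
}

/* called at the very end of deinit() */
static void run_post_deinit(void)
{
	struct post_deinit_fct *node;

	for (node = post_deinit_list; node; node = node->next)
		node->fct();
}
```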
2016-12-21 21:30:54 +01:00
Willy Tarreau
876054df96 CLEANUP: da: make use of the late init registration code
Instead of having a #ifdef in the main init code we now use the registered
init functions. Doing so also enables error checking as errors were previously
reported as alerts but ignored. Also they were incorrect as the 'status'
variable was hidden by a second one and was always reporting DA_SYS (which
is apparently an error) in every case including the case where no file was
loaded. The init_deviceatlas() function was unexported since it's not used
outside of this place anymore.
2016-12-21 21:30:54 +01:00
Willy Tarreau
9f3f2549fb CLEANUP: 51d: make use of the late init registration
This removes some #ifdefs from the main haproxy code path. Function
init_51degrees() now returns ERR_* instead of exit(1) on error, and
this function was made static and is not exported anymore.
2016-12-21 21:30:54 +01:00
Willy Tarreau
dc2ed47163 CLEANUP: wurfl: make use of the late init registration
This removes some #ifdefs from the main haproxy code path and enables
error checking. The current code only makes use of warnings even for
some errors that look serious. While this choice is questionable, it
has been kept as-is, and only the return codes were adapted to ERR_WARN
to at least report that some warnings were emitted. ha_wurfl_init() was
unexported as it's not needed anymore.
2016-12-21 21:30:54 +01:00
Willy Tarreau
64bca599d9 CLEANUP: filters: use the function registration to initialize all proxies
Function flt_init() was called in the main init code path, now we move
it to the list of initializers and we can unexport flt_init().
2016-12-21 21:30:54 +01:00
Willy Tarreau
865c5148e6 CLEANUP: checks: make use of the post-init registration to start checks
Instead of calling the checks directly from the init code, we now
register the start_checks() function to be run at this point. This
also allows to unexport the check init function and to remove one
include from haproxy.c.
2016-12-21 21:30:54 +01:00
Willy Tarreau
e694573fa0 MINOR: haproxy: add a registration for post-check functions
There's a significant amount of late initialization calls which are
performed after the point where we exit in check mode. These calls
are used to allocate resource and perform certain slow operations.
Let's have a way to register some functions which need to be called
there instead of having this multitude of #ifdef in the init path.
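
A hedged usage sketch of this mechanism: a module registers a late
initializer to be run after the configuration checks; hap_register_post_check()
is the name used in the surrounding commits, its prototype is assumed here
and the my_engine_* module is purely hypothetical:

```
extern void hap_register_post_check(int (*fct)(void));  /* assumed prototype */

/* hypothetical module: allocate resources once the config has been checked */
static int my_engine_post_check(void)
{
	/* ... allocate memory, open files, etc ... */
	return 0;   /* ERR_NONE; a non-zero ERR_* code would report a failure */
}

__attribute__((constructor))
static void my_engine_setup(void)
{
	hap_register_post_check(my_engine_post_check);
}
```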
2016-12-21 21:30:54 +01:00
Willy Tarreau
e8692b41e5 CLEANUP: auth: use the build options list to report its support
This removes 1 #ifdef from haproxy.c.
2016-12-21 21:30:54 +01:00
Willy Tarreau
b97c6fb59e CLEANUP: compression: use the build options list to report the algos
This removes 2 #ifdef, an include, an ugly construct and a wild "extern"
declaration from haproxy.c. The message indicating that compression is
*not* enabled is not there anymore.
2016-12-21 21:30:54 +01:00
Willy Tarreau
c2c0b61274 CLEANUP: ssl: use the build options list to report the SSL details
This removes 7 #ifdef from haproxy.c. The message indicating that
openssl is *not* enabled is not there anymore.
2016-12-21 21:30:54 +01:00
Willy Tarreau
7a9ac6dac6 CLEANUP: regex: use the build options list to report the regex type
This removes 3 #ifdef from haproxy.c.
2016-12-21 21:30:54 +01:00
Willy Tarreau
bb57d94a96 CLEANUP: lua: use the build options list to report it
This removes 1 #ifdef from haproxy.c. The "build without" version
is not reported anymore now.
2016-12-21 21:30:54 +01:00
Willy Tarreau
ba96291600 CLEANUP: tcp: use the build options list to report transparent modes
This removes 6 #ifdef from haproxy.c.
2016-12-21 21:30:54 +01:00
Willy Tarreau
dba5002c4c CLEANUP: namespaces: use the build options list to report it
This removes one #ifdef from haproxy.c.
2016-12-21 21:30:54 +01:00
Willy Tarreau
3dd483e727 CLEANUP: da: use the build options list to report it
This removes one #ifdef from haproxy.c.
2016-12-21 21:30:54 +01:00
Willy Tarreau
b5e58d6ba1 CLEANUP: 51d: use the build options list to report it
This removes one #ifdef from haproxy.c.
2016-12-21 21:30:54 +01:00
Willy Tarreau
770042d3c6 CLEANUP: wurfl: use the build options list to report it
This removes one #ifdef from haproxy.c.
2016-12-21 21:30:54 +01:00
Willy Tarreau
cdb737e5a2 MINOR: haproxy: add a registration for build options
Many extensions now report some build options to ease debugging, but
this is now being done at the expense of code maintainability. Let's
provide a registration function to do this so that we can start to
remove most of the #ifdefs from haproxy.c (18 currently just for a
single function).
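
A hedged usage sketch: a feature registers the line it wants reported among
the build options; the exact prototype of hap_register_build_opts()
(including the must_free argument) is an assumption and the feature string
is purely illustrative:

```
extern void hap_register_build_opts(const char *str, int must_free);  /* assumed */

__attribute__((constructor))
static void report_my_feature_build_opts(void)
{
	/* the string is static, so nothing needs to be freed on exit */
	hap_register_build_opts("Built with MY_FEATURE support.", 0);
}
```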
2016-12-21 21:30:54 +01:00
Willy Tarreau
1b5af7cd42 CLEANUP: haproxy: statify unexported functions
haproxy.c is a real mess. Let's start to clean it up by declaring static
all functions which are not exported (ie almost all of them).
2016-12-21 18:19:57 +01:00
Thierry FOURNIER
2c8b54e7be MEDIUM: lua: remove Lua struct from session, and allocate it with memory pools
This patch uses memory pools for allocating the Lua struct. This
saves 128B of memory in the session if Lua is unused.
2016-12-21 15:24:56 +01:00
Thierry FOURNIER
1be34152da BUG/MINOR: lua: memleak when Lua/cli fails
If the memory allocator fails, it returns an error code but the execution
continues. If the Lua/cli initializer fails, the allocated struct is not
released.
2016-12-21 15:24:35 +01:00
Thierry FOURNIER
33558c4a3f BUG/MINOR: lua: bad return code
If the lua/cli initialization fails, it returns an OK
status and the execution continues. This will probably cause a
segfault.

This patch should be backported to 1.7
2016-12-21 15:24:24 +01:00
Thierry FOURNIER
4e7c708612 BUG/MINOR: lua: memory leak executing tasks
The struct hlua isn't freed when the task is complete.

This patch should be backported to 1.6 and 1.7
2016-12-21 15:24:09 +01:00
Christopher Faulet
33834b15dc BUG/MINOR: Fix the sending function in Lua's cosocket
This is a regression from the commit a73e59b690.

When data are sent from a cosocket, the action is done in the context of the
applet running a lua stack and not in the context of the applet owning the
cosocket. So we must take care, explicitly, that this latter applet has a
buffer (the req buffer of the cosocket).

This patch must be backported to 1.7
2016-12-21 15:22:08 +01:00
Thierry FOURNIER
3b0a6d480b MINOR/DOC: lua: just precise one thing
In the case of an applet, the Lua context is taken from the session
when we get the private values. This patch just updates the comments
associated with this action because it is not obvious.
2016-12-17 14:27:30 +01:00
Willy Tarreau
f5f26e824a MINOR: appctx/cli: remove the "tlskeys" entry from the appctx union
This one now migrates to the general purpose cli.p0 for the ref pointer,
cli.i0 for the dump_all flag and cli.i1 for the dump_keys_index. A few
comments were added.

The applet.h file doesn't depend on openssl anymore. It's worth noting
that the previous dependency was accidental and only used to work because
all files including this one used to have openssl included prior to
loading this file.
2016-12-16 19:40:14 +01:00
Willy Tarreau
3c92f2aca4 MINOR: appctx/cli: remove the "server_state" entry from the appctx union
This one now migrates to the general purpose cli.p0 for the proxy pointer,
cli.p1 for the server pointer, and cli.i0 for the proxy's instance if only
one has to be dumped.
2016-12-16 19:40:14 +01:00
Willy Tarreau
777b560d04 MINOR: appctx/cli: remove the "dns" entry from the appctx union
This one now migrates to the general purpose cli.p0.
2016-12-16 19:40:14 +01:00
Willy Tarreau
608ea5921a MINOR: appctx/cli: remove the "be" entry from the appctx union
This one now migrates to the general purpose cli.p0. The parsing
function was removed since it was only used to set the pointer to
NULL.
2016-12-16 19:40:14 +01:00
Willy Tarreau
f6710f8811 MINOR: appctx/cli: remove the env entry from the appctx union
This one now migrates to the general purpose cli.p0.
2016-12-16 19:40:14 +01:00
Willy Tarreau
3af9d832e8 MINOR: appctx/cli: remove the cli_socket entry from the appctx union
This one now migrates to the general purpose cli.p0.
2016-12-16 19:40:13 +01:00
Willy Tarreau
a2d5872297 MINOR: cli: add two general purpose pointers and integers in the CLI struct
Most of the keywords don't need to have their own entry in the appctx
union, they just need to reuse some generic pointers like we've been
used to do in the appctx with st{0,1,2}. This patch adds p0, p1, i0, i1
and initializes them to zero before calling the parser. This way some
of the simplest existing keywords will be able to disappear from the
union.

It's worth noting that this is an extension to what was initially
attempted via the "private" member that I removed a few patches ago by
not understanding how it was supposed to be used. Here the fact that
we share the same union will force us to be stricter: the code either
uses the general purpose variables or it uses its own fields but not
both.
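
For illustration, a minimal sketch of the idea; the layout and the keyword
parser are simplified and do not reflect the exact HAProxy definitions:

```
/* Hedged sketch: generic scratch fields shared by all CLI keywords so the
 * simplest ones no longer need a dedicated entry in the appctx union. */
struct cli_ctx_sketch {
	void *p0, *p1;   /* general purpose pointers, zeroed before the parser runs */
	int i0, i1;      /* general purpose integers, zeroed before the parser runs */
};

/* example keyword parser reusing them instead of a private structure */
static int cli_parse_show_something(struct cli_ctx_sketch *cli, int id)
{
	cli->i0 = id;    /* which instance to dump, 0 = all */
	cli->p0 = NULL;  /* dump cursor, advanced by the I/O handler across calls */
	return 0;        /* 0: keep calling the I/O handler */
}
```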
2016-12-16 19:40:13 +01:00
Willy Tarreau
d25fc79d72 CLEANUP: stats: move a misplaced stats context initialization
This is a leftover from the cleanup campaign, the stats scope was still
initialized by the CLI instead of being initialized by the stats keyword
parsers. This should probably be backported to 1.7 to make the code more
consistent.
2016-12-16 19:40:13 +01:00
Willy Tarreau
e9ecec8935 CLEANUP: memory: remove the now unused cli_parse_show_pools() function
We don't need this empty parser anymore since previous commit.
2016-12-16 19:40:13 +01:00
Willy Tarreau
eaffde38c8 MINOR: cli: automatically enable a CLI I/O handler when there's no parser
Sometimes a registered keyword will not need any specific parsing nor
initialization, so it's annoying to have to write an empty parsing
function returning zero just for this.

This patch makes it possible to automatically call a keyword's I/O
handler when the parsing function is not defined, while still allowing
a parser to set the I/O handler itself.
2016-12-16 19:40:13 +01:00
Thierry FOURNIER
847ca66815 MINOR: lua/signals: Remove Lua part from signals.
The signals system embedded in Lua can be transformed into general purpose
signals code. To reach this goal, this patch removes the Lua part of the
signals.

This is an easy job because Lua makes little use of the signals. I just
change two prototypes.
2016-12-16 16:46:57 +01:00
Thierry FOURNIER
ebed6e908a MEDIUM: lua: use memory pool for hlua struct in applets
The struct hlua size is 128 bytes, which is the biggest of all the elements
of the union embedded in the appctx struct. With HTTP/2, it is possible that this
appctx struct will be used many times for each connection, so the 128 bytes are
a little bit heavy for the global memory consumption.

This patch replaces the embedded hlua struct with a pointer and an associated memory
pool. Now, the memory for Lua is allocated only if it is required.

[wt: the appctx is now down to 160 bytes]
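
For illustration, a minimal sketch of the change under stated assumptions:
the pool is modeled with calloc()/free() only so the example stays
self-contained, while the real patch uses HAProxy's memory pool API:

```
#include <stdlib.h>

struct hlua_sketch { char opaque[128]; };   /* stands for struct hlua */

struct appctx_sketch {
	/* ... other members ... */
	struct hlua_sketch *hlua;   /* NULL until Lua is actually needed */
};

/* allocate the Lua context on demand; returns 0 if the allocation failed */
static int appctx_get_hlua(struct appctx_sketch *appctx)
{
	if (!appctx->hlua)
		appctx->hlua = calloc(1, sizeof(*appctx->hlua));
	return appctx->hlua != NULL;
}

static void appctx_release_hlua(struct appctx_sketch *appctx)
{
	free(appctx->hlua);   /* the real code returns the entry to its pool */
	appctx->hlua = NULL;
}
```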
2016-12-16 16:31:45 +01:00
Thierry FOURNIER
ffbf569edb BUG/MINOR: lua/cli: bad error message
Error message inherited from a lua_applet_tcp copy/paste.

Should be backported to 1.7
2016-12-16 16:25:51 +01:00
Thierry FOURNIER
18d0990a5d CLEANUP: lua: rename one of the lua appctx union
It is named hlua, which does not reflect the usage of this variable.
This patch renames it to "hlua_cosocket".
2016-12-16 12:59:00 +01:00
Willy Tarreau
4305ac7f1d BUG/MINOR: cli: "show cli sockets" would always report process 64
Another small bug in "show cli sockets" made the last fix always report
process 64 due to a signedness issue in the shift operation when building
the mask.
2016-12-16 12:59:00 +01:00
Willy Tarreau
20c5e52ac7 BUG/MINOR: cli: "show cli sockets" wouldn't list all processes
A small bug in "show cli sockets" made it limit the output to the first 8
processes only.
2016-12-16 12:59:00 +01:00
William Lallemand
eceddf7225 MEDIUM: cli: 'show cli sockets' list the CLI sockets
'show cli sockets' from the CLI socket displays the list of CLI sockets
available, with their level and process number.
2016-12-15 23:00:51 +01:00
Willy Tarreau
a24bc78ad4 CLEANUP: applet/table: add an "action" entry in ->table context
Just like previous patch, this was the only other user of the "private"
field of the applet. It used to store a copy of the keyword's action.
Let's just put it into ->table->action and use it from there. It also
slightly simplifies the code by removing a few pointer to integer casts.
2016-12-14 16:48:16 +01:00
Willy Tarreau
8ae4f7533d CLEANUP: applet/lua: create a dedicated ->fcn entry in hlua_cli context
We have very few users of the appctx's private field which was introduced
prior to the split of the CLI. Unfortunately it was not removed
afterwards. This commit simply introduces hlua_cli->fcn which is the pointer to
the Lua function that the Lua code used to store in this private pointer.
2016-12-14 16:48:16 +01:00
Willy Tarreau
8cf9c8e663 BUG/MINOR: stream-int: automatically release SI_FL_WAIT_DATA on SHUTW_NOW
While developing an experimental applet performing only one read per full
line, it appeared that it would be woken up for the client's close, not
read all data (missing LF), then wait for a subsequent call, and would only
be woken up on client timeout to finish the read. The reason is that we
preset SI_FL_WAIT_DATA in the stream-interface's flags to avoid a fast loop,
but there's nothing which can remove this flag until there's a read operation.

We must definitely remove it in stream_int_notify() each time we're called
with CF_SHUTW_NOW because we know there will be no more subsequent read
and we don't want an applet which keeps the WANT_GET flag to block on this.

This fix should be backported to 1.7 and 1.6 though it's uncertain whether
cli, peers, lua or spoe really are affected there.
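
For illustration, a minimal sketch of the fix under stated assumptions: the
flag and function names come from the message above, the structures and bit
values are simplified stand-ins:

```
struct channel_sketch    { unsigned int flags; };
struct stream_int_sketch { unsigned int flags; };

#define CF_SHUTW_NOW    0x00000100u   /* illustrative bit values only */
#define SI_FL_WAIT_DATA 0x00000001u

/* inside stream_int_notify(): once a shutdown is pending, no more read will
 * come, so stop waiting for data to avoid blocking applets keeping WANT_GET */
static void stream_int_notify_sketch(struct stream_int_sketch *si,
                                     struct channel_sketch *oc)
{
	if (oc->flags & CF_SHUTW_NOW)
		si->flags &= ~SI_FL_WAIT_DATA;
}
```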
2016-12-14 16:48:16 +01:00
Thierry FOURNIER
11cfb3daec BUG/MEDIUM: lua: In some case, the return of sample-fetches is ignored (2)
This problem is already detected here:

   8dc7316a6f

Another case arises. Now HAProxy sends a final message (typically
with "http-request deny"). Once the message is sent, the response
channel flags are not modified.

HAProxy executes a Lua sample-fetch for building logs, and the
result is ignored because the response flag remains set to the value
HTTP_MSG_RPBEFORE. So the Lua function hlua_check_proto() wants to
guarantee the valid state of the buffer and asks for aborting the
request.

The function check_proto() is not the right way to ensure request
consistency. The real question is not "Is the message valid?" but
"Is the validity of the message unchanged?"

This patch memorizes the parser state before entering the Lua
code, and performs a check when leaving it. If the parser
state has regressed, the request is aborted because the HTTP message
has been degraded.

This patch should be backported to versions 1.6 and 1.7
2016-12-14 12:52:47 +01:00
Luca Pizzamiglio
578b169dcb BUILD/MEDIUM: Fixing the build using LibreSSL
Fixing the build using LibreSSL as the OpenSSL implementation.
Currently, LibreSSL 2.4.4 provides the same API as OpenSSL 1.0.1x,
but it redefines the OpenSSL version number as 2.0.x, breaking all
checks against OpenSSL 1.1.x.
The patch solves the issue by checking the definition of the symbol
LIBRESSL_VERSION_NUMBER when OpenSSL 1.1.x features are requested.
2016-12-12 22:57:04 +01:00
Christopher Faulet
a73e59b690 BUG/MAJOR: Fix how the list of entities waiting for a buffer is handled
When an entity tries to get a buffer, if it cannot be allocated, for example
because the number of buffers which may be allocated per process is limited,
this entity is added to a list (called <buffer_wq>) and waits for an available
buffer.

Historically, the <buffer_wq> list was logically attached to streams because
they were the only entities likely to be added to it. Now, applets can also be
waiting for a free buffer. And with filters, we could imagine having even more
entities waiting for a buffer. So it makes sense to have a generic list.

Anyway, with the current design there is a bug. When an applet fails to get a
buffer, it will wait. But we add the stream attached to the applet to
<buffer_wq>, instead of the applet itself. So when a buffer is available, we
wake up the stream and not the waiting applet. So it is possible to have
waiting applets which are never awakened.

So, now, <buffer_wq> is independent from streams. And we really add the waiting
entity to <buffer_wq>. To be generic, the entity is responsible for defining the
callback used to awaken it.

In addition, applets will still request an input buffer when they become
active. But they will no longer be put to sleep if no buffer is available. So it
is the responsibility of the applet I/O handler to check whether this buffer is
allocated or not. This way, an applet can decide whether this buffer is required
or not and can do additional processing if not.

[wt: backport to 1.7 and 1.6]
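
For illustration, a minimal sketch of the generic wait list described above;
structure and function names are illustrative, not HAProxy's exact ones:

```
#include <stddef.h>

struct buffer_wait {
	void *target;                     /* the waiting entity (stream, applet...) */
	int (*wakeup_cb)(void *target);   /* how to awaken it once a buffer is free */
	struct buffer_wait *next;
};

static struct buffer_wait *buffer_wq;     /* entities waiting for a free buffer */

static void buffer_wait_enqueue(struct buffer_wait *bw)
{
	bw->next = buffer_wq;
	buffer_wq = bw;
}

/* called when at least one buffer becomes available again */
static void offer_buffers(void)
{
	while (buffer_wq) {
		struct buffer_wait *bw = buffer_wq;

		buffer_wq = bw->next;
		bw->next = NULL;
		bw->wakeup_cb(bw->target);    /* wake the waiting entity itself */
	}
}
```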
2016-12-12 19:11:04 +01:00
Christopher Faulet
9d810cae11 BUG/MEDIUM: stream: Save unprocessed events for a stream
A stream can be awakened for different reasons. During its processing, it can be
stopped early if no buffer is available. In this situation, the reason why the
stream was awakened is lost, because we rely on the task state, which is reset
after each processing loop.

In many cases, that's not a big deal. But it can be useful to accumulate the
task states if the stream processing is interrupted, especially if some filters
need to be called.

To be clearer, here is a simple example:

  1) A stream is awakened with the reason TASK_WOKEN_MSG.

  2) Because no buffer is available, the processing is interrupted, the stream
  is back to sleep. And the task state is reset.

  3) Some buffers become available, so the stream is awakened with the reason
  TASK_WOKEN_RES. At this step, the previous reason (TASK_WOKEN_MSG) is lost.

Now, the task states are saved for a stream and reset only when the stream
processing is not interrupted. The corresponding bitfield represents the pending
events for a stream. And we use this one instead of the task state during the
stream processing.

Note that TASK_WOKEN_TIMER and TASK_WOKEN_RES are always removed because these
events are always handled during the stream processing.

[wt: backport to 1.7 and 1.6]
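
For illustration, a minimal sketch of the accumulation described above; the
bit values and structures are simplified stand-ins:

```
#define TASK_WOKEN_TIMER 0x01   /* illustrative bit values only */
#define TASK_WOKEN_MSG   0x02
#define TASK_WOKEN_RES   0x04

struct stream_sketch { unsigned int pending_events; };

static void process_stream_sketch(struct stream_sketch *s, unsigned int task_state,
                                  int interrupted)
{
	/* merge the new wake-up reasons with those left from previous wake-ups */
	s->pending_events |= task_state;

	/* ... the processing now looks at s->pending_events, not task_state ... */

	if (!interrupted)
		s->pending_events = 0;   /* fully handled, start clean next time */
	else
		/* these two are always handled during the processing */
		s->pending_events &= ~(TASK_WOKEN_TIMER | TASK_WOKEN_RES);
}
```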
2016-12-12 19:10:58 +01:00
Christopher Faulet
34c5cc98da MINOR: task: Rename run_queue and run_queue_cur counters
<run_queue> is used to track the number of tasks in the run queue and
<run_queue_cur> is a copy used for reporting purposes. These counters have
been renamed, respectively, <tasks_run_queue> and <tasks_run_queue_cur>, so
the naming is consistent between tasks and applets.

[wt: needed for next fixes, backport to 1.7 and 1.6]
2016-12-12 19:10:54 +01:00
Christopher Faulet
1cbe74cd83 MINOR: applet: Count number of (active) applets
As for tasks, 2 counters have been added to track :
  * the total number of applets : nb_applets
  * the number of active applets : applets_active_queue

[wt: needed for next fixes, to backport to 1.7 and 1.6]
2016-12-12 19:10:46 +01:00
Christopher Faulet
90b5abe46e BUG/MINOR: cli: be sure to always warn the cli applet when input buffer is full
[wt: may only strike if CLI commands are pipelined. Must be backported to 1.7
 and 1.6, where it's a bit different and in dumpstats.c]
2016-12-12 17:58:11 +01:00
Christopher Faulet
1821d3c25e MINOR: cli: Remove useless call to bi_putchk
[wt: while it could seem suspicious, the preceding call to
 dump_servers_state() indeed flushes the trash in case anything is
emitted. No backport needed though.]
2016-12-12 17:55:13 +01:00
Thierry FOURNIER / OZON.IO
43ad11dc75 MINOR: Do not forward the header "Expect: 100-continue" when the option http-buffer-request is set
When the option "http-buffer-request" is set, HAProxy itself sends the
"HTTP/1.1 100 Continue" response in order to retrieve the POST content.
When HAProxy forwards the request, it sends the body directly after the
headers, and the "Expect: 100-continue" header was sent along with them.
This header is useless because the body will be sent in all cases, and
the server response is not removed by HAProxy.

This patch removes the "Expect: 100-continue" header if HAProxy sent the
100 Continue itself.
2016-12-12 17:33:42 +01:00
Marcin Deranek
d2471c2bdc MINOR: proxy: Add fe_name/be_name fetchers next to existing fe_id/be_id
These 2 patches add the ability to fetch the frontend/backend name in your
logic, so they can be used later to make routing decisions (fe_name) or
to take some actions based on the backend which responded to the request
(be_name). In our case we needed a fetcher to be able to extract the
information we needed from the frontend name.
2016-12-12 15:10:43 +01:00
Willy Tarreau
8e0f17543e BUG/MINOR: stats: fix be/sessions/max output in html stats
"Tadas / XtGem" reported that the max value was wrong and would report
the current value instead. This needs to be backported to 1.7.
2016-12-12 15:07:29 +01:00
Thierry FOURNIER / OZON.IO
4394a2cc87 MINOR: lua: give HAProxy variable access to the applets
This patch gives functions for manipulating variables inside the
applet HTTP and applet TCP functions.
2016-12-12 14:34:56 +01:00
Thierry FOURNIER / OZON.IO
3e1d791a4a CLEANUP: hlua: just indent functions
Function indentation. The code is not modified. This is done with
the goal of better integrating the next patch.
2016-12-12 14:34:56 +01:00
Thierry FOURNIER / OZON.IO
4b123bebe4 MINOR: lua: Allow argument for actions
(http|tcp)-(request|response) actions cannot take arguments from the
configuration file. Arguments are useful for executing the action with
a special context.

This patch adds the possibility of passing arguments to an action. It
runs exactly like sample fetches and other Lua wrappers.

Note that this patch implements a 'TODO'.
2016-12-12 14:34:56 +01:00
Thierry FOURNIER / OZON.IO
d2f6f47597 BUG/MEDIUM: variables: some variable name can hide another ones
Variables are compared only as text; the final '\0' (or the
string length) is not checked. So the variable name "txn.internal"
matches another one called "txn.int".

This patch fixes this behavior.

It must be backported to 1.6 and 1.7
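
For illustration, a minimal sketch of the kind of comparison the fix
requires: a prefix match is not enough, the length (or the trailing '\0')
must match too so that "txn.int" cannot match "txn.internal". The function
name is illustrative:

```
#include <string.h>

static int var_name_equals(const char *stored, size_t stored_len,
                           const char *lookup, size_t lookup_len)
{
	/* same length first, then same bytes: "txn.int" != "txn.internal" */
	return stored_len == lookup_len &&
	       memcmp(stored, lookup, lookup_len) == 0;
}
```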
2016-12-12 14:34:56 +01:00
Matthieu Guegan
35088f960d BUG/MINOR: http: don't send an extra CRLF after a Set-Cookie in a redirect
While investigating a keep-alive issue with CloudFlare, we[1] found that
when using the 'set-cookie' option in a redirect (302) HAProxy adds
an extra `\r\n`.

Triggering rule :

`http-request redirect location / set-cookie Cookie=value if [...]`

Expected result :

```
HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: /
Set-Cookie: Cookie=value; path=/;
Connection: close
```

Actual result :

```
HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: /
Set-Cookie: Cookie=value; path=/;

Connection: close
```

This extra `\r\n` seems to be harmless with another HAProxy instance in
front of it (sanitizing) or when using a browser. But we confirm that
the CloudFlare NGINX implementation is not able to handle this. It
seems that both 'Content-length: 0' and the extra carriage return break
the RFC (to be confirmed).

When looking into the code, this carriage-return was already present in
1.3.X versions but just before closing the connection which was ok I
think. Then, with 1.4.X the keep-alive feature was added and this piece
of code remains unchanged.

[1] all credit for the bug finding goes to CloudFlare Support Team

[wt: the bug was indeed present since the Set-Cookie was introduced
 in 1.3.16, by commit 0140f25 ("[MINOR] redirect: add support for
 "set-cookie" and "clear-cookie"") so backporting to all supported
 versions is desired]
2016-12-05 19:27:48 +01:00
Willy Tarreau
3067bfa815 BUG/MEDIUM: cli: fix "show stat resolvers" and "show tls-keys"
The recent CLI reorganization managed to break these two commands
by having their parser return 1 (indicating an end of processing)
instead of 0 to indicate new calls to the io handler were needed.

Namely the faulty commits are :
  69e9644 ("REORG: cli: move show stat resolvers to dns.c")
  32af203 ("REORG: cli: move ssl CLI functions to ssl_sock.c")

The fix is trivial and there is no other loss of functionality. Thanks
to Dragan Dosen for reporting the issue and the faulty commits. The
backport is needed in 1.7.
2016-12-05 14:53:37 +01:00
Dragan Dosen
a1c35ab68d BUG/MINOR: cli: allow the backslash to be escaped on the CLI
In 1.5-dev20, commit 48bcfda ("MEDIUM: dumpstat: make the CLI parser
understand the backslash as an escape char") introduced support for
backslash on the CLI, but it strips all backslashes in all arguments
instead of only unescaping them, making it impossible to pass a
backslash in an argument.

This will allow us to use a backslash in a command over the socket, eg.
"add acl #0 ABC\\XYZ".

[wt: this should be backported to 1.7 and 1.6]
2016-12-05 14:23:41 +01:00
Willy Tarreau
796c5b7997 OPTIM: stream-int: don't disable polling anymore on DONT_READ
Commit 5fddab0 ("OPTIM: stream_interface: disable reading when
CF_READ_DONTWAIT is set") improved the connection layer's efficiency
back in 1.5-dev13 by avoiding successive read attempts on an active
FD. But by disabling this on a polled FD, it causes an unpleasant
side effect which is that the FD that was subscribed to polling is
suddenly stopped and may need to be re-enabled once the kernel
starts to slow down on data eviction (eg: saturated server at the
other end, bursty traffic caused by too large maxpollevents).

This behaviour is observable with persistent connections when there
is a large enough connection count so that there's no data in the
early connection and polling is required, because there are then
up to 4 epoll_ctl() calls per request. It's important that the
server is slower than haproxy to cause some delays when reading
response.

The current connection layer as designed in 1.6 with the FD cache
doesn't require this trick anymore, though it still benefits from
it when it saves an FD from being uselessly polled. But compared
to the increased cost of enabling and disabling poll all the time,
it's still better to disable it. In some cases it's possible to
observe a performance increase as high as 30% by avoiding this
epoll_ctl() dance.

In the end we only want to disable it when the FD is speculatively
read and not when it's polled. For this we introduce a new function
__conn_data_done_recv() which is used to indicate that we're done
with recv() and not interested in new attempts. If/when we later
support event-triggered epoll, this function will have to change
a bit to do the same even in the polled case.

A quick test with keep-alive requests run on a dual-core / dual-
thread Atom shows a significant improvement :

single process, 0 bytes :
before: Requests per second:    12243.20 [#/sec] (mean)
after:  Requests per second:    13354.54 [#/sec] (mean)

single process, 4k :
before: Requests per second:    9639.81 [#/sec] (mean)
after:  Requests per second:    10991.89 [#/sec] (mean)

dual process, 0 bytes (unstable) :
before: Requests per second:    16900-19800 ~ 17600 [#/sec] (mean)
after:  Requests per second:    18600-21400 ~ 20500 [#/sec] (mean)
2016-12-05 13:49:57 +01:00
Willy Tarreau
92b10c954d BUG/MAJOR: stream: fix session abort on resource shortage
In 1.6-dev2, commit 32990b5 ("MEDIUM: session: remove the task pointer
from the session") introduced a bug which can sometimes crash the process
on resource shortage. When stream_complete() returns -1, it has already
reattached the connection to the stream, then kill_mini_session() is
called and still expects to find the task in conn->owner. Note that
since this commit, the code has moved a bit and is now in stream_new()
but the problem remains the same.

Given that we already know the task around these places, let's simply
pass the task to kill_mini_session().

The conditions currently at risk are :
  - failure to initialize filters for the new stream (lack of memory or
    any filter returning < 0 on attach())
  - failure to attach filters (any filter returning < 0 on stream_start())
  - frontend's accept() returning < 0 (allocation failure)

This fix is needed in 1.7 and 1.6.
2016-12-04 20:16:52 +01:00
Christopher Faulet
6962f4e0d6 BUG/MINOR: http: Call XFER_DATA analyzer when HTTP txn is switched in tunnel mode
This allows a filter to start analyzing data in HTTP and to fall back to TCP
when data are tunneled.

[wt: backport desired in 1.7 - no impact right now but may impact the ability
 to backport future fixes]
2016-11-29 17:03:04 +01:00
Christopher Faulet
d47a1bd1d7 BUG/MINOR: filters: Invert evaluation order of HTTP_XFER_BODY and XFER_DATA analyzers
These 2 analyzers are responsible for the data forwarding in, respectively, HTTP
mode and TCP mode. Now, the analyzer responsible for the HTTP data forwarding is
called before the one responsible for the TCP data forwarding. This will allow
the filtering of tunneled data in HTTP.

[wt: backport desired in 1.7 - no impact right now but may impact the ability
 to backport future fixes]
2016-11-29 17:03:04 +01:00
Christopher Faulet
3235957685 BUG/MINOR: http: Keep the same behavior between 1.6 and 1.7 for tunneled txn
In HAProxy 1.6, when the "http-tunnel" option is enabled, HTTP transactions are
tunneled as soon as possible after the headers parsing/forwarding. When the
transfer length of the response can be determined, this happens when all data
are forwarded. But for responses with an undetermined transfer length this
happens when headers are forwarded. This behavior is questionable, but this is
not the purpose of this fix...

In HAProxy 1.7, the first use-case works like in 1.6. But the second one does
not, because of the data filtering. HAProxy was always trying to forward data until
the server closes the connection. So the transaction was never switched in
tunnel mode. This is the expected behavior when there is a data filter. But in
the default case (no data filter), it should work like in 1.6.

This patch fixes the bug. We analyze response data until the server closes the
connection only when there is a data filter.

[wt: backport needed in 1.7]
2016-11-29 17:03:01 +01:00
Christopher Faulet
d1cd209b21 BUG/MEDIUM: http: Fix tunnel mode when the CONNECT method is used
When a 2xx response to a CONNECT request is returned, the connection must be
switched to tunnel mode immediately after the headers, and the Transfer-Encoding
and Content-Length headers must be ignored. So from the HTTP parser's point of
view, there is no body.

The bug comes from the fact that the flag HTTP_MSGF_XFER_LEN was not set on the
response (this flag means that the body size can be determined; in our case it
can, it is 0). So, during data forwarding, the connection was never switched to
tunnel mode and we were blocked in a state where we were waiting for the server
to close the connection to end the response.

Setting the flag HTTP_MSGF_XFER_LEN on the response fixed the bug.

The code of http_wait_for_response has been slightly updated to be more
readable.

[wt: 1.7-only, this is not needed in 1.6]
2016-11-29 17:00:14 +01:00
Tim Düsterhus
4896c440b3 DOC: Spelling fixes
[wt: this contains spelling fixes for both doc and code comments,
 should be backported, ignoring the parts which don't apply]
2016-11-29 07:29:57 +01:00
Willy Tarreau
b3e111b4fd BUG/MEDIUM: proxy: return "none" and "unknown" for unknown LB algos
When a backend doesn't use any known LB algorithm, backend_lb_algo_str()
returns NULL. It used to cause "nil" to be printed in the stats dump
since version 1.4 but causes 1.7 to try to parse this NULL to encode
it as a CSV string, causing a crash on "show stat" in this case.

The only situation where this can happen is when "transparent" or
"dispatch" are used in a proxy, in which case the LB algorithm is
BE_LB_ALGO_NONE. Thus now we explicitly report "none" when this
situation is detected, and we preventively report "unknown" if any
unknown algorithm is detected, which may happen if such an algo is
added in the future and the function is not updated.

This fix must be backported to 1.7 and may be backported as far as
1.4, though it has less impact there.
2016-11-26 15:58:27 +01:00
Willy Tarreau
7d56221d57 REORG: stkctr: move all the stick counters processing to stick-tables.c
Historically we used to have the stick counters processing in
session.c, which became stream.c. A big part of it is now in
stick-table.c (eg: converters), but despite this we still have all
the sample fetch functions in stream.c.

These parts do not depend on the stream anymore, so let's move the
remaining chunks to stick-table.c and have cleaner files.

What remains in stream.c is everything needed to attach/detach
trackers to the stream and to update the counters while the stream
is being processed.
2016-11-25 16:10:05 +01:00
Willy Tarreau
397131093f REORG: tcp-rules: move tcp rules processing to their own file
There's no more reason to keep tcp rules processing inside proto_tcp.c
given that there is nothing in common there except these 3 letters : tcp.
The tcp rules are in fact connection, session and content processing rules.
Let's move them to "tcp-rules" and let them live their life there.
2016-11-25 15:57:38 +01:00
Willy Tarreau
d39ad449b9 CLEANUP: cfgparse: cascade the warnif_misplaced_* rules
There are 8 functions each repeating what another does and adding one
extra test. We used to have some copy-paste issues in the past due to
this. Instead we now make them simply rely on the previous one and add
the final test. It's much better and much safer. The functions could
be moved to inlines but they're used at a few other locations only,
it didn't make much sense in the end.
2016-11-25 15:16:12 +01:00
Willy Tarreau
ae9bea0591 CLEANUP: counters: move from 3 types to 2 types
We used to have 3 types of counters with a huge overlap :
  - listener counters : stats collected for each bind line
  - proxy counters : union of the frontend and backend counters
  - server counters : stats collected per server

It happens that quite a good part was common between listeners and
proxies due to the frontend counters being updated at the two locations,
and that similarly the server and proxy counters were overlapping and
being updated together.

This patch cleans this up to propose only two types of counters :
  - fe_counters: used by frontends and listeners, related to
    incoming connections activity
  - be_counters: used by backends and servers, related to outgoing
    connections activity

This allowed to remove some non-sensical counters from both parts. For
frontends, the following entries were removed :

  cum_lbconn, last_sess, nbpend_max, failed_conns, failed_resp,
  retries, redispatches, q_time, c_time, d_time, t_time

For backends, this one was removed : intercepted_req.

While doing this it was discovered that we used to incorrectly report
intercepted_req for backends in the HTML stats, which was always zero
since it's never updated.

Also it revealed a few inconsistencies (which were not fixed as they
are harmless). For example, backends count connections (cum_conn)
instead of sessions while servers count sessions and not connections.

Over the long term, some extra cleanups may be performed by having
some counters update functions touching both the server and backend
at the same time, as well as both the frontend and listener, to
ensure that all sides have all their stats properly filled. The stats
dump will also be able to factor the dump functions by counter types.
2016-11-25 15:03:12 +01:00
Willy Tarreau
35069f84af MINOR: cli: make "show errors" capable of dumping only request or response
When dealing with many proxies, it's hard to spot response errors because
all internet-facing frontends constantly receive attacks. This patch now
makes it possible to demand that only request or response errors are dumped
by appending "request" or "response" to the show errors command.
2016-11-25 09:16:37 +01:00
Willy Tarreau
234ba2d8eb MINOR: cli: make "show errors" support a proxy name
Till now it was needed to know the proxy's ID while we do have the
ability to look up a proxy by its name now.
2016-11-25 08:56:55 +01:00
Willy Tarreau
a1b1ed53e7 MINOR: cli: make "show stat" support a proxy name
Till now it was needed to know the proxy's ID while we do have the
ability to look up a proxy by its name now.
2016-11-25 08:55:25 +01:00
Christopher Faulet
b5cff60ef5 BUG: spoe: Fix parsing of SPOE actions in ACK frames
For "SET-VAR" actions, data was not correctly parsed: the 'idx' variable was not
correctly updated when the 3rd argument was parsed.
2016-11-25 08:09:10 +01:00
Willy Tarreau
97108e08ce CLEANUP: sample: report "converter" instead of "conv method" in error messages
This was inherited from the very early stick-tables code but it's about
time to produce understandable error messages :-)
2016-11-25 07:36:22 +01:00
Thierry FOURNIER / OZON.IO
8a4e4420fb MEDIUM: log-format: Use standard HAProxy log system to report errors
The log-format functions emit their own error messages using Alert(). This
patch replaces this behavior and uses the standard HAProxy error system
(with memprintf), as sketched below.

The benefits are:
 - cleaning the log system

 - the logformat code can ignore the caller (currently the caller must set
   a flag designating the calling function).

 - making the usage of the logformat functions easy for future components.
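
A hedged usage sketch of the memprintf() convention mentioned above: the
parser fills a caller-provided error string instead of calling Alert()
itself; memprintf()'s prototype is assumed here and the parser is reduced
to a stub:

```
extern char *memprintf(char **out, const char *format, ...);  /* assumed prototype */

static int parse_logformat_sketch(const char *fmt, char **err)
{
	if (!fmt || !*fmt) {
		memprintf(err, "missing log-format string");
		return 0;   /* failure: the caller decides how to report <err> */
	}
	/* ... actual parsing would happen here ... */
	return 1;           /* success */
}
```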
2016-11-25 07:32:58 +01:00
Thierry FOURNIER / OZON.IO
4ed1c9585d MINOR: http/conf: store the use_backend configuration file and line for logs
The error log of the use_backend directive doesn't provide the
file and line containing the declaration. This patch stores
this information.
2016-11-25 07:15:09 +01:00
Thierry FOURNIER / OZON.IO
5948b01149 BUG/MINOR: conf: calloc untested
A calloc() is executed without checking its return code.
2016-11-25 07:15:06 +01:00
Thierry FOURNIER / OZON.IO
8a1027aa45 MINOR: lua: Add tokenize function.
For tokenizing a string, standard Lua recommends using regexes.
The following example splits words:

   for i in string.gmatch(example, "%S+") do
      print(i)
   end

This is a little bit overkill for simply splitting words. This patch
adds a tokenize function which is quick and does not use regexes.
2016-11-24 21:35:34 +01:00
Thierry FOURNIER / OZON.IO
7f3aa8b62f MINOR: lua: add utility function for check boolean argument
Strangely, the Lua API doesn't provide a function like
luaL_checkboolean(). This little function adds one.
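
For illustration, a minimal sketch of such a helper built on the standard
Lua C API (luaL_checktype()/lua_toboolean()); the hlua_ prefix is
illustrative:

```
#include <lua.h>
#include <lauxlib.h>

static int hlua_checkboolean(lua_State *L, int index)
{
	luaL_checktype(L, index, LUA_TBOOLEAN);  /* raises a Lua error otherwise */
	return lua_toboolean(L, index);
}
```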
2016-11-24 21:35:10 +01:00
Willy Tarreau
e365815007 BUILD: vars: remove a build warning on vars.c
gcc 3.4.6 noticed a possibly uninitialized variable in vars.c, and while it
cannot happen the way the function is used, it's surprising that newer
versions did not report it.

This fix may be backported to 1.6.
2016-11-24 21:25:43 +01:00
Thierry FOURNIER / OZON.IO
59fd511555 MEDIUM: log-format/conf: take into account the parse_logformat_string() return code
This patch takes into account the return code of the parse_logformat_string()
function. Now the configuration parser will fail if the log-format
string is not strictly valid.
2016-11-24 18:54:26 +01:00
Thierry FOURNIER / OZON.IO
a2c38d7904 MEDIUM: log-format: strict parsing and enable fail
Until now, the function parse_logformat_string() never fails. It
sends warnings when it parses a bad format, and returns an expression
on a best-effort basis.

This patch replaces the warnings with alerts and returns a failure code.

Maybe the warning mode was designed for compatibility with old
configuration versions. If that is the case, this compatibility is
now broken.

[wt: no, the reason is that an alert must cause a startup failure,
 but this will be OK with the next patch]
2016-11-24 18:54:26 +01:00
Thierry FOURNIER / OZON.IO
6fe0e1b977 CLEANUP: log-format: remove unused arguments
The log-format function parse_logformat_string() takes file and line
for building parsing logs. These two parameters are embedded in the
struct proxy curproxy, which is the current parsing context.

This patch removes these two unused arguments.
2016-11-24 18:54:26 +01:00
Thierry FOURNIER / OZON.IO
bca46f0d9d CLEANUP: log-format: fix return code of function parse_logformat_var_args()
This patch changes the successful return code from 0 to 1 and the
error code from 1 to 0.

The return code of this function is actually unused, so this
patch cannot modify the behaviour.
2016-11-24 18:54:26 +01:00
Thierry FOURNIER / OZON.IO
eca4d95317 CLEANUP: log-format: fix return code of the function parse_logformat_var()
This patch changes the successful return code from 0 to 1 and the
error code from -1 to 0.

The return code of this function is actually unused, so this
patch cannot modify the behaviour.
2016-11-24 18:54:25 +01:00
Thierry FOURNIER / OZON.IO
a69c912187 CLEANUP: log-format: useless file and line in json converter
The caller must log location information, so this information is
provided two times in the log line. The error log is like this:

   [ALERT] 327/011513 (14291) : parsing [o3.conf:38]: 'http-response
   set-header': Sample fetch <method,json(rrr)> failed with : invalid
   args in conv method 'json' : Unexpected input code type at file
   'o3.conf', line 38. Allowed value are 'ascii', 'utf8', 'utf8s',
   'utf8p' and 'utf8ps'.

This patch removes the second location indication, so the same error
becomes:

   [ALERT] 327/011637 (14367) : parsing [o3.conf:38]: 'http-response
   set-header': Sample fetch <method,json(rrr)> failed with : invalid
   args in conv method 'json' : Unexpected input code type. Allowed
   value are 'ascii', 'utf8', 'utf8s', 'utf8p' and 'utf8ps'.
2016-11-24 18:54:25 +01:00
Thierry FOURNIER / OZON.IO
9cbfef2455 BUG/MINOR: log-format: uncatched memory allocation functions
Some return codes of memory allocation functions are not tested.
This patch adds these checks.
2016-11-24 18:54:25 +01:00
Willy Tarreau
30e5e18bbb CLEANUP: cli: remove assignments to st0 and st2 in keyword parsers
Now it's not needed anymore to set STAT_ST_INIT nor CLI_ST_CALLBACK
in the parsers, remove it in the various places.
2016-11-24 16:59:28 +01:00
Willy Tarreau
419085656b CLEANUP: cli: simplify the request parser a little bit
stats_sock_parse_request() was renamed cli_parse_request(). It now takes
an appctx instead of a stream interface, and presets ->st2 to 0 so that
most handlers will not have to set it anymore. The io_handler is set by
default to the keyword's IO handler so that the parser can simply change
it without having to rewrite the new state.
2016-11-24 16:59:28 +01:00
Willy Tarreau
3b6e547be8 CLEANUP: cli: rename STAT_CLI_* to CLI_ST_*
These are in CLI states, not stats states anymore. STAT_CLI_O_CUSTOM
was more appropriately renamed CLI_ST_CALLBACK.
2016-11-24 16:59:28 +01:00
Willy Tarreau
45c742be05 REORG: cli: move the "set rate-limit" functions to their own parser
All 4 rate-limit settings were handled at once given that exactly the
same checks are performed on them. In case of missing or incorrect
argument, the detailed supported options are printed with their use
case.

This was the last specific entry in the CLI parser, some additional
cleanup may still be done.
2016-11-24 16:59:28 +01:00
Willy Tarreau
58d9cb7d22 REORG: cli: move "{enable|disable} agent" to server.c
Also mention that "set server" is preferred now. Note that these
were the last enable/disable commands in cli.c. Also remove the
now unused expect_server_admin() function.
2016-11-24 16:59:28 +01:00
Willy Tarreau
2c04eda8b5 REORG: cli: move "{enable|disable} health" to server.c
Also mention that "set server" is preferred now.
2016-11-24 16:59:28 +01:00
Willy Tarreau
ffb4d58e1b REORG: cli: move "{enable|disable} server" to server.c
Also mention that "set server" is preferred now.
2016-11-24 16:59:28 +01:00
Willy Tarreau
15b9e68a78 REORG: cli: move "{enable|disable} frontend" to proxy.c
These are the last frontend-specific actions on the CLI. The function
expect_frontend_admin() which is not used anymore was removed.
2016-11-24 16:59:28 +01:00
Willy Tarreau
5212d7f24c REORG: cli: move "shutdown frontend" to proxy.c
Now we don't have any "shutdown" commands left in cli.c.
2016-11-24 16:59:28 +01:00
Willy Tarreau
61b6521cbf REORG: cli: move "shutdown session" to stream.c
It really kills streams in fact, but we can't change the name now.
2016-11-24 16:59:28 +01:00
Willy Tarreau
4e46b62ab1 REORG: cli: move "shutdown sessions server" to stream.c
It could be argued that it belongs somewhere between server, stream and
session, but since it operates on streams, its best place
is in stream.c.
2016-11-24 16:59:28 +01:00
Willy Tarreau
c429a1fc2d REORG: cli: move "set maxconn frontend" to proxy.c
And get rid of the last specific "set maxconn" case.
2016-11-24 16:59:28 +01:00
Willy Tarreau
b802627eb3 REORG: cli: move "set maxconn server" to server.c
It's used to manipulate the server's maxconn setting.
2016-11-24 16:59:28 +01:00
Willy Tarreau
2af9941bcd REORG: cli: move "set maxconn global" to its own handler
The code remained in the same file, it just simplifies the parser
and makes use of cli_has_level().
2016-11-24 16:59:28 +01:00
Willy Tarreau
89d467c105 REORG: cli: move "clear counters" to stats.c
This command is only used to clear stats. It now relies on cli_has_level()
to validate the permissions.
2016-11-24 16:59:28 +01:00
Willy Tarreau
599852eade REORG: cli: move "set timeout" to its own handler
The code remained in the same file, it just simplifies the parser.
2016-11-24 16:59:28 +01:00
Willy Tarreau
0a73929dc8 REORG: cli: make "show env" also use the generic keyword registration
This way we don't have any more state specific to a given yieldable
command. The other commands should be easier to move as they only
involve a parser.
2016-11-24 16:59:28 +01:00
Willy Tarreau
12207b360a REORG: cli: move "show errors" out of cli.c
It really belongs to proto_http.c since it's a dump for HTTP request
and response errors. Note that it's possible that some parts do not
need to be exported anymore since it really is the only place where
errors are manipulated.
2016-11-24 16:59:28 +01:00
Willy Tarreau
f13ebdf286 REORG: cli: move table dump/clear/set to stick_table.c
The table dump code was a horrible mess, with common parts interleaved
all the way to deal with the various actions (set/clear/show). A few
error messages were still incorrect, as the "set" operation did not
update them so they would still report "unknown action" (now fixed).

The action is now passed as a private argument to the CLI keyword,
which itself is copied into the appctx private field. It's just an
int cast to a pointer.

Some minor issues were noticed while doing this, for example when dumping
an entry by key, if the key doesn't exist, nothing is printed, not even
the table's header. It's unclear whether this was intentional but it
doesn't really match what is done for data-based dumps. It was left
unchanged for now so that a later fix can be backported if needed.

Enum entries STAT_CLI_O_TAB, STAT_CLI_O_CLR and STAT_CLI_O_SET were
removed.
2016-11-24 16:59:28 +01:00
Willy Tarreau
97c2ae13bc REORG: cli: move dump_text(), dump_text_line(), and dump_binary() to standard.c
These are general purpose functions, move them away.
2016-11-24 16:59:27 +01:00
Willy Tarreau
0baac8cf1f REORG: cli: move "show info" to stats.c
Move the "show info" command to stats.c using the CLI keyword API
to register it on the CLI. The stats_dump_info_to_buffer() function
is now static again. Note, we don't need proto_ssl anymore in cli.c.
2016-11-24 16:59:27 +01:00
Willy Tarreau
2b812e29f6 REORG: cli: move "show stat" to stats.c
Move the "show stat" command to stats.c using the CLI keyword API
to register it on the CLI. The stats_dump_stat_to_buffer() function
is now static again.
2016-11-24 16:59:27 +01:00
William Lallemand
6b16094355 REORG: cli: move get/set weight to server.c
Move get/set weight CLI functions to server.c and use the cli keyword API
to register it on the CLI.
2016-11-24 16:59:27 +01:00
William Lallemand
933efcd01a REORG: cli: move 'show backend' to proxy.c
Move 'show backend' CLI functions to proxy.c and use the cli keyword API
to register it on the CLI.
2016-11-24 16:59:27 +01:00
William Lallemand
4c5b4d531c REORG: cli: move 'show sess' to stream.c
Move 'show sess' CLI functions to stream.c and use the cli keyword API
to register it on the CLI.

[wt: the choice of stream vs session makes sense because since 1.6 these
 really are streams that we're dumping and not sessions anymore]
2016-11-24 16:59:27 +01:00
William Lallemand
a6c5f3372d REORG: cli: move 'show servers' to proxy.c
Move 'show servers' CLI functions to proxy.c and use the cli keyword
API to register it on the CLI.
2016-11-24 16:59:27 +01:00
William Lallemand
e7ed8855de REORG: cli: move 'show pools' to memory.c
Move 'show pools' CLI functions to memory.c and use the cli keyword
API to register it on the CLI.
2016-11-24 16:59:27 +01:00
William Lallemand
222baf20da REORG: cli: move 'set server' to server.c
Move 'set server' CLI functions to server.c and use the cli keyword API
to register it on the CLI.
2016-11-24 16:59:27 +01:00
Willy Tarreau
960f2cb056 MINOR: proxy: create new function cli_find_frontend() to find a frontend
Several CLI commands require a frontend, so let's have a function to
look this one up and prepare the appropriate error message and the
appctx's state in case of failure.
2016-11-24 16:59:27 +01:00
Willy Tarreau
21b069dca8 MINOR: server: create new function cli_find_server() to find a server
Several CLI commands require a server, so let's have a function to
look this one up and prepare the appropriate error message and the
appctx's state in case of failure.
2016-11-24 16:59:27 +01:00
Willy Tarreau
de57a578ba MINOR: cli: create new function cli_has_level() to validate permissions
This function is used to check that the CLI features the appropriate
level of permissions or to prepare the adequate error message.
2016-11-24 16:59:27 +01:00
William Lallemand
69e9644e35 REORG: cli: move show stat resolvers to dns.c
Move dns CLI functions to dns.c and use the cli keyword API to register
actions on the CLI.
2016-11-24 16:59:27 +01:00
William Lallemand
ad8be61c7e REORG: cli: move map and acl code to map.c
Move map and acl CLI functions to map.c and use the cli keyword API to
register actions on the CLI. Then remove the now unused individual
"add" and "del" keywords.
2016-11-24 16:59:27 +01:00
William Lallemand
32af203b75 REORG: cli: move ssl CLI functions to ssl_sock.c
Move ssl CLI functions to ssl_sock.c and use the cli keyword API to
register ssl actions on the CLI.
2016-11-24 16:59:27 +01:00
William Lallemand
9ed6203aef REORG: cli: split dumpstats.h in stats.h and cli.h
proto/dumpstats.h has been split in 4 files:

  * proto/cli.h   contains prototypes for the CLI
  * proto/stats.h contains prototypes for the stats
  * types/cli.h   contains definitions for the CLI
  * types/stats.h contains definitions for the stats
2016-11-24 16:59:27 +01:00
William Lallemand
74c24fb071 REORG: cli: split dumpstats.c in src/cli.c and src/stats.c
dumpstats.c contained both the stats code and the CLI code.
The CLI code has been moved to cli.c and the stats code to stats.c.
2016-11-24 16:59:27 +01:00
Willy Tarreau
8e0bb0ae16 MINOR: connection: add names for transport and data layers
This makes debugging easier and avoids having to put ugly checks
against certain well-known internal struct pointers.
2016-11-24 16:58:12 +01:00
Willy Tarreau
2b5e6315a3 BUG/MINOR: cli: wake up the CLI's task after a timeout update
When the CLI's timeout is reduced, nothing was done to wake the task
up so it could be updated. In the past it used to run inside process_stream()
so it used to be refreshed. This is not the case anymore since we have
the appctx so the task needs to be woken up in order to recompute the
new expiration date.

This fix needs to be backported to 1.6.
2016-11-24 15:35:16 +01:00
Willy Tarreau
74a5a9828b BUG/MINOR: cli: dequeue from the proxy when changing a maxconn
The "set maxconn frontend" statement on the CLI tries to dequeue possibly
pending requests, but due to a copy-paste error, they're dequeued on the
CLI's frontend instead of the one being changed.

The impact is very minor as it only means that possibly pending connections
will still have to wait for a previous one to complete before being accepted
when a limit is raised.

This fix has to be backported to 1.6 and 1.5.
2016-11-24 15:34:34 +01:00
Willy Tarreau
578fa0259f BUG/MINOR: cli: fix pointer size when reporting data/transport layer name
In dumpstats.c we have get_conn_xprt_name() and get_conn_data_name() to
report the name of the data and transport layers used on a connection.
When the name is not known, its pointer is reported instead, but the
static char array used to report the pointer is too small as it doesn't
leave room for the '0x' prefix. Fortunately all subsystems are known so we never trigger
this case.

This fix needs to be backported to 1.6 and 1.5.
2016-11-24 15:21:26 +01:00
David Carlier
327298c215 BUILD: fix build on Solaris 10/11
Use uint16_t instead of u_int16_t.
Non-ISO fields of struct tm are not present there, but
by zeroing the structure, the tm_gmtoff field will be set
on GNU and BSD systems.

[wt: moved the memset into each of the date functions]
2016-11-22 12:04:19 +01:00
Christopher Faulet
985532d1d8 MINOR: spoe: Add "option set-on-error" statement
It defines the variable to set when an error occurs during event
processing. It will only be set when the error occurs in the scope of the
transaction. As for all other variables defined by the SPOE, it will be
prefixed. So, if your variable name is "error" and your prefix is "my_spoe_pfx",
the variable will be "txn.my_spoe_pfx.error".

When set, the variable is the boolean "true". Note that if "option
continue-on-error" is set, the variable is not automatically removed between
events processing.
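As an illustration only (not part of this patch), such an option would sit in the spoe-agent section of the SPOE configuration file; the agent name and prefix below are made up:

  spoe-agent my-spoe-agent
      option var-prefix   my_spoe_pfx
      option set-on-error error
      # on a failure within the transaction, txn.my_spoe_pfx.error is set to true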
2016-11-21 15:29:59 +01:00
Christopher Faulet
4802672274 MINOR: spoe: Add "maxconnrate" and "maxerrrate" statements
"maxconnrate" is the maximum number of connections per second. The SPOE will
stop to open new connections if the maximum is reached and will wait to acquire
an existing one.

"maxerrrate" is the maximum number of errors per second. The SPOE will stop its
processing if the maximum is reached.

These options replace hardcoded macros MAX_NEW_SPOE_APPLETS and
MAX_NEW_SPOE_APPLET_ERRS. We use them to limit SPOE activity, especially when
servers are down.
2016-11-21 15:29:59 +01:00
Christopher Faulet
ea62c2a345 MINOR: spoe: Add 'option continue-on-error' statement in spoe-agent section
By default, for a specific stream, when an abnormal/unexpected error occurs, the
SPOE is disabled for the whole transaction. So if you have several events
configured, such an error on one event will disable all the following ones. For TCP
streams, this will disable the SPOE for the whole session. For HTTP streams,
this will disable it for the transaction (request and response).

To bypass this behaviour, you can set the 'continue-on-error' option in the 'spoe-agent'
section. With this option, only the current event will be ignored.
2016-11-21 15:29:59 +01:00
Christopher Faulet
03a3449e1a MINOR: spoe: Remove useless 'timeout ack' option
To limit the time to process an event, you should set the 'timeout processing'
option. This makes the 'timeout ack' option redundant and useless.
2016-11-21 15:29:59 +01:00
Christopher Faulet
f7a3092512 MINOR: spoe: Add 'timeout processing' option to limit time to process an event
It is a way to set the maximum time to wait for a stream to process an event,
i.e. to acquire a stream to talk with an agent, to encode all messages, to send
the NOTIFY frame, to receive the corresponding acknowledgement and to process all
actions. It is applied to the stream that handles the client and the server
sessions.
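For illustration, a hedged sketch combining this timeout with the agent-level settings from the preceding SPOE entries (names, backend and values are made up):

  spoe-agent my-spoe-agent
      messages    check-request
      use-backend spoe-agents
      timeout processing 500ms
      maxconnrate 100
      maxerrrate  10
      option continue-on-error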
2016-11-21 15:29:59 +01:00
Christopher Faulet
a00d817aba MINOR: filters: Add check_timeouts callback to handle timers expiration on streams
A filter can now be notified when a stream is woken up because of an expired
timer.

The documentation and the TRACE filter have been updated.
2016-11-21 15:29:58 +01:00
Thierry FOURNIER / OZON.IO
8dc7316a6f BUG/MEDIUM: lua: In some case, the return of sample-fetche is ignored
When:

 - A Lua action returns data and closes the channel. The request status
   is set to HTTP_MSG_CLOSED for the request and HTTP_MSG_DONE for the
   response.

 - HAProxy sets the state HTTP_MSG_ERROR. I don't know why, because
   there are many lines which set this state.

 - A Lua sample-fetch is executed, typically for building the log
   line.

 - When the Lua sample fetch exits, a check of the data is
   executed. If HAProxy is currently parsing the request, the request
   is aborted in order to prevent a segfault or sending corrupted
   data.

This last check is done by comparing the state with HTTP_MSG_BODY. When
this state is reached, the request is parsed and no error is
possible. When the state is below HTTP_MSG_BODY, the parser is still
running.

Unfortunately, the code HTTP_MSG_ERROR is just below HTTP_MSG_BODY. When
we are in error, we want to terminate the execution of Lua without
raising an error.

This patch changes the comparison level.

This patch must be backported in 1.6
2016-11-19 00:29:19 +01:00
Willy Tarreau
2fe1b92163 BUG/MINOR: cli: properly decrement ref count on tables during failed dumps
Gernot Pörner reported a constant leak of ref counts for stick table
entries. It happens that this leak was not at all in the regular traffic
path but on the "show table" path. An extra ref count was taken during
the dump if the output had to be paused, and it was released upon clean
termination or an error detected in the I/O handler. But the release
handler didn't do it, while it used to properly do it for the sessions
dump.

This fix needs to be backported to 1.6.
2016-11-18 19:20:09 +01:00
Willy Tarreau
5179146fa3 BUG/MEDIUM: stick-table: fix regression caused by recent fix for out-of-memory
Commit ef8f4fe ("BUG/MINOR: stick-table: handle out-of-memory condition
gracefully") unfortunately got trapped by a pointer operation. Replacing

    ts = pool_alloc() + size;

with :

    ts = pool_alloc();
    ts += size;

doesn't give the same result, because pool_alloc() returns a void pointer
while ts is a struct stksess *. So we no longer access the same places,
which is visible in certain stick-table scenarios and causes a crash.

This must be backported to 1.6 and 1.5.
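As a standalone illustration of the pointer pitfall described above (malloc() stands in for pool_alloc(); the explicit char * cast reproduces the byte-wise arithmetic that GCC's void * extension performs):

  #include <stdio.h>
  #include <stdlib.h>

  struct stksess { char data[32]; };

  int main(void)
  {
      size_t size = 8;
      void *base = malloc(1024);                    /* stand-in for pool_alloc() */
      struct stksess *a, *b;

      a = (struct stksess *)((char *)base + size);  /* advances by <size> bytes */

      b = base;                                     /* same base pointer... */
      b += size;                                    /* ...advances by size * sizeof(struct stksess) bytes */

      printf("a=%p b=%p\n", (void *)a, (void *)b);  /* two different addresses */
      free(base);
      return 0;
  }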
2016-11-18 18:21:39 +01:00
Willy Tarreau
733b1327a6 DEBUG: connection: mark the closed FDs with a value that is easier to detect
Setting an FD to -1 when closed isn't the most easily noticeable thing
when we're chasing accidental reuse of a stale file descriptor.
Instead, set it to a negative value large enough that it will overflow the
fdtab and provide an analysable core at the moment the issue happens.
Care was taken to ensure it doesn't overflow nor change sign on 32-bit
machines when multiplied by fdtab, and that it also remains negative for
the various checks that exist. The value equals 0xFDDEADFD which happens
to be easily spotted in a debugger.
2016-11-18 15:00:42 +01:00
Willy Tarreau
350135cf49 BUG/MEDIUM: connection: check the control layer before stopping polling
The bug described in commit 568743a ("BUG/MEDIUM: stream-int: completely
detach connection on connect error") was not a stream-interface layer bug
but a connection layer bug. There was exactly one place in the code where
we could change a file descriptor's status without first checking whether
it is valid or not, it was in conn_stop_polling(). This one is called when
the polling status is changed after an update, and calls fd_stop_both even
if we had already closed the file descriptor :

1479388298.484240 ->->->->->   conn_fd_handler > conn_cond_update_polling
1479388298.484240 ->->->->->->   conn_cond_update_polling > conn_stop_polling
1479388298.484241 ->->->->->->->   conn_stop_polling > conn_ctrl_ready
1479388298.484241                  conn_stop_polling < conn_ctrl_ready
1479388298.484241 ->->->->->->->   conn_stop_polling > fd_stop_both
1479388298.484242 ->->->->->->->->   fd_stop_both > fd_update_cache
1479388298.484242 ->->->->->->->->->   fd_update_cache > fd_release_cache_entry
1479388298.484242                      fd_update_cache < fd_release_cache_entry
1479388298.484243                    fd_stop_both < fd_update_cache
1479388298.484243                  conn_stop_polling < fd_stop_both
1479388298.484243                conn_cond_update_polling < conn_stop_polling
1479388298.484243              conn_fd_handler < conn_cond_update_polling

The problem with the previous fix above is that it breaks the http_proxy mode
and possibly even some Lua parts and peers to a certain extent; all outgoing
connections where the target address is initially copied into the outgoing
connection and which experience a retry would use a random outgoing address after
the retry, because closing and detaching the connection causes the target
address to be lost. An attempt to address this was made in commit 0857d7a
("BUG/MAJOR: stream: properly mark the server address as unset on connect
retry") but it only solved the most visible effect and not the root
cause.

Prior to this fix, it was possible to cause this config to keep CLOSE_WAIT
for as long as it takes to expire a client or server timeout (note the
missing client timeout) :

   listen test
        mode http
        bind :8002
        server s1 127.0.0.1:8001

   $ tcploop 8001 L0 W N20 A R P100 S:"HTTP/1.1 200 OK\r\nContent-length: 0\r\n\r\n" &
   $ tcploop 8002 N200 C T W S:"GET / HTTP/1.0\r\n\r\n" O P10000 K

With this patch, these CLOSE_WAIT properly vanish when both processes leave.

This commit reverts the two fixes above and replaces them with the proper
fix in connection.h. It must be backported to 1.6 and 1.5. Thanks to
Robson Roberto Souza Peixoto for providing very detailed traces showing
some obvious inconsistencies leading to finding this bug.
2016-11-18 14:48:52 +01:00
Thierry FOURNIER / OZON.IO
a44fdd95f9 MEDIUM: lua: Add cli handler for Lua
HAProxy now allows registering some keywords on the CLI. This patch allows
these keywords to be handled with Lua code.
2016-11-18 14:32:03 +01:00
Thierry FOURNIER / OZON.IO
6a22dcbe27 MINOR: cli: add private pointer and release function
This pointer will be used for storing a private context. With this,
the same executed function can handle more than one keyword. This
will be very useful for creating Lua CLI bindings.

The release function is called when the command is terminated (the hand
is given back to the prompt) or when the session is broken (timeout
or client close).
2016-11-18 14:32:03 +01:00
Vincent Bernat
ef8f4fe12d BUG/MINOR: stick-table: handle out-of-memory condition gracefully
In case `pool_alloc2()` returns NULL, propagate the condition to the
caller. This could happen when limiting the amount of memory available
for HAProxy with `-m`.

[wt: backport to 1.6 and 1.5 needed]
2016-11-17 16:00:16 +01:00
Willy Tarreau
a71f642b62 CLEANUP: lua: avoid directly calling getsockname/getpeername()
We already have per-protocol functions for this, and they already
take care of properly setting the CO_FL_ADDR_*_SET flags.
2016-11-16 17:32:57 +01:00
Bertrand Jacquin
ff13c06a17 CLEANUP: ssl: Fix bind keywords name in comments
Along with a whitespace cleanup and a grammar typo
2016-11-14 18:15:20 +01:00
Bertrand Jacquin
5a8fc2d45f CLEANUP: ssl: Remove goto after return dead code
This code can never be reached.
2016-11-14 18:15:20 +01:00
Bertrand Jacquin
5424ee08de BUG/MINOR: ssl: Print correct filename when error occurs reading OCSP
When a Multi-Cert bundle is used, the error thrown regarding the certificate
filename doesn't include the certificate type extension.
2016-11-14 18:15:20 +01:00
Bertrand Jacquin
3342309572 BUG/MEDIUM: ssl: Store certificate filename in a variable
Before this change, the trash buffer was used to build the certificate
filename to read when a Multi-Cert bundle is in use. But ssl_sock_load_ocsp()
then modifies trash, leading to potentially wrong information in a later
error message.

This also blocks any further use of the certificate filename for other
purposes, like the ongoing patch to support Certificate Transparency handling
in Multi-Cert bundles.
2016-11-14 18:15:20 +01:00
Thierry FOURNIER / OZON.IO
b41f22f59c CLEANUP: lua: control executed twice
The available size in the stack is checked twice. This patch removes
the double check.

Must be backported in 1.6
2016-11-14 15:23:17 +01:00
Thierry FOURNIER / OZON.IO
02564fd153 CLEANUP: lua: move comment
The old comment is misplaced, certainly due to a bad copy/paste.

Must be backported in 1.6
2016-11-14 15:23:17 +01:00
Thierry FOURNIER / OZON.IO
500d11e65d BUG/MEDIUM: channel: bad unlikely macro
The unlikely macro doesn't take the whole condition into account, but only
one variable.

Must be backported in 1.6

[wt: with gcc 3.x, unlikely(x) is defined as __builtin_expect((x) != 0, 0)
 so the condition is wrong for negative numbers, which correspond to the
 case where bi_getblk_nc() has reached the end of the buffer and the
 channel is already closed. With gcc 4.x, the output is cast to unsigned long
 so the <=0 will not match negative values either. This is only used in Lua
 for now so that may explain why it hasn't hit yet]
2016-11-14 15:23:17 +01:00
Thierry FOURNIER / OZON.IO
62fec75183 MINOR: lua: add ip addresses and network manipulation function
Add two functions, core.parse_addr() and core.match_addr(), which are used
for matching networks.
2016-11-12 10:42:30 +01:00
Thierry FOURNIER / OZON.IO
65192f35d2 MINOR: lua: add function which return true if the channel is full.
Add a function which returns true if the channel is full. It is
useful for triggering some processing when the buffer is full.
2016-11-12 10:42:25 +01:00
Christopher Faulet
ba7bc164f7 MINOR: spoe/checks: Add support for SPOP health checks
A new "option spop-check" statement has been added to enable server health
checks based on SPOP HELLO handshake. SPOP is the protocol used by SPOE filters
to talk to servers.
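For example, a health-checked SPOE backend might look like this (the backend name and address are made up):

  backend spoe-agents
      mode tcp
      option spop-check
      server agent1 192.0.2.10:12345 check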
2016-11-09 22:57:02 +01:00
Christopher Faulet
f7e4e7e096 MAJOR: spoe: Add an experimental Stream Processing Offload Engine
SPOE makes possible the communication with external components to retrieve some
info using an in-house binary protocol, the Stream Processing Offload Protocol
(SPOP). In the long term, its aim is to allow any kind of offloading on the
streams. This first version, besides being experimental, won't do lot of
things. The most important today is to validate the protocol design and lay the
foundations of what will, one day, be a full offload engine for the stream
processing.

So, for now, the SPOE can offload the stream processing before "tcp-request
content", "tcp-response content", "http-request" and "http-response" rules. And
it only supports variables creation/suppression. But, in spite of these limited
features, we can easily imagine to implement a SSO solution, an ip reputation
service or an ip geolocation service.

Internally, the SPOE is implemented as a filter. So, to use it, you must add
the following line in a proxy section:

  frontend my-front
      ...
      filter spoe [engine <name>] config <file>
      ...

It uses its own configuration file to keep the HAProxy configuration clean. It
is also an easy way to disable it by commenting out the filter line.

See "doc/SPOE.txt" for all details about the SPOE configuration.
2016-11-09 22:57:01 +01:00
Christopher Faulet
85d79c94a9 MINOR: vars: Add 'unset-var' action/converter
It does the opposite of the 'set-var' action/converter. It is really useful for
per-process variables, but it can be used for any scope.

The lua function 'unset_var' has also been added.
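A small illustrative example of the new action form (the variable name and expression are arbitrary):

  http-request  set-var(txn.client_net)  src,ipmask(24)
  http-response unset-var(txn.client_net)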
2016-11-09 22:57:01 +01:00
Christopher Faulet
ff2613ed7a MEDIUM: vars: Add a per-process scope for variables
Now it is possible to use variables attached to a process. The scope name is
'proc'. These variables are released only when HAProxy is stopped.

The 'tune.vars.proc-max-size' directive has been added to configure the maximum
amount of memory used by "proc" variables. And because memory accounting is
hierarchical for variables, memory for "proc" vars includes memory for "sess"
vars.
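For illustration, a minimal sketch of the new scope and its tuning directive (variable name and size are arbitrary):

  global
      tune.vars.proc-max-size 1048576

  frontend www
      mode http
      bind :8080
      http-request set-var(proc.last_src) src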
2016-11-09 22:57:00 +01:00
Christopher Faulet
09c9df286b MINOR: vars: Add vars_set_by_name_ifexist function
This function, unsurprisingly, sets a variable value only if it already
exists. In other words, this function will succeed only if the variable was
found somewhere in the configuration during HAProxy startup.

It will be used by the SPOE filter, so an agent will be able to set a value only for
existing variables. This prevents an agent from creating a very large number of
unused variables to flood HAProxy and exhaust the memory reserved for variables.
2016-11-09 22:57:00 +01:00
Christopher Faulet
b71557a98b MINOR: vars: Allow '.' in variable names
This is required to have an implicit prefix or scope. The SPOE filter will use it to
keep variables set by an agent in its own namespace.
2016-11-09 22:57:00 +01:00
Christopher Faulet
476e5d0e03 REORG: sample: move code to release a sample expression in sample.c
This code has been moved from haproxy.c to sample.c and the function
release_sample_expr can now be called from anywhere to release a sample
expression. This function will be used by the stream processing offload engine
(SPOE).
2016-11-09 22:57:00 +01:00
Christopher Faulet
79bdef3cad MINOR: cfgparse: Parse scope lines and save the last one parsed
A scope is a section name between square brackets, alone on its line, i.e.:

  [scope-name]
  ...

The spaces at the beginning and at the end of the line are skipped. Comments at
the end of the line are also skipped.

When a scope is parsed, its name is saved in the global variable
cfg_scope. Initially, cfg_scope is NULL and it remains NULL until a valid scope
line is parsed.

This feature remains unused in the HAProxy configuration file and
undocumented. However, it will be used during SPOE configuration parsing.
2016-11-09 22:56:59 +01:00
Christopher Faulet
7110b40d06 MINOR: cfgparse: Add functions to backup and restore registered sections
This feature will be used by the stream processing offload engine (SPOE) to
parse dedicated configuration files without mixing HAProxy sections with SPOE
sections.

So, here we can back up all sections known by HAProxy, unregister all of them
and add new ones, dedicated to the SPOE. Once the SPOE configuration file is parsed,
we can roll back all changes by restoring the HAProxy sections.
2016-11-09 22:56:59 +01:00
Christopher Faulet
fcd99f8eec MINOR: flt_trace: Add hexdump option to dump forwarded data
This is pretty verbose, but it can be handy to have it in HAProxy.
2016-11-09 22:56:59 +01:00
Christopher Faulet
c6062be1e1 MINOR: filters: Remove backend filters attached to a stream only for HTTP streams
Now, for TCP streams, backend filters are released when the stream is
destroyed. But, for HTTP streams, these filters are released when the
transaction analysis ends, in the flt_end_analyze callback.
2016-11-09 22:50:55 +01:00
Christopher Faulet
4117904ffd MINOR: filters: Call stream_set_backend callbacks before updating backend stats
So if an internal error is returned, the number of cumulated connections on the
backend is not incremented.
2016-11-09 22:50:55 +01:00
Christopher Faulet
31ed32dce4 MEDIUM: filters: Add attch/detach and stream_set_backend callbacks
New callbacks have been added to handle creation and destruction of filter
instances:

* 'attach' callback is called after a filter instance creation, when it is
  attached to a stream. This happens when the stream is started for filters
  defined on the stream's frontend and when the backend is set for filters
  declared on the stream's backend. It is possible to ignore the filter, if
  needed, by returning 0. This could be useful to have conditional filtering.

* 'detach' callback is called when a filter instance is detached from a stream,
  before its destruction. This happens when the stream is stopped for filters
  defined on the stream's frontend and when the analyze ends for filters defined
  on the stream's backend.

In addition, the callback 'stream_set_backend' has been added to know when a
backend is set for a stream. It is only called when the frontend and the backend
are not the same. And it is called for all filters attached to a stream
(frontend and backend).

Finally, the TRACE filter has been updated.
2016-11-09 22:50:54 +01:00
Christopher Faulet
898566e7e6 CLEANUP: remove last references to 'ruleset' section 2016-11-09 22:50:54 +01:00
Christopher Faulet
0099a8ca9d BUG: vars: Fix 'set-var' converter because of a typo
The 'set-var' converter uses function smp_conv_store (vars.c). In this function,
we should use the first argument (index 0) to retrieve the variable name and its
scope. But because of a typo, we get the scope of the second argument (index
1). In this case, there is no second argument. So the scope used was always 0
(SCOPE_SESS), always setting the variable in the session scope.

So, due to this bug, this rule

  tcp-request content accept if { src,set-var(txn.foo) -m found }

always sets the variable 'sess.foo' instead of 'txn.foo'.
2016-11-09 22:50:54 +01:00
Willy Tarreau
e5a60688a4 MEDIUM: server: do not restrict anymore usage of IP address from the state file
Now that it is possible to decide whether we prefer to use libc or the
state file to resolve the server's IP address and it is possible to change
a server's IP address at run time on the CLI, let's not restrict the reuse
of the address from the state file anymore to the DNS only.

The impact is that by default the state file will be considered first
(which matches its purpose) and only then the libc. This way any address
change performed at run time over the CLI will be preserved regardless
of DNS usage or not.
2016-11-09 15:33:52 +01:00
Willy Tarreau
3eed10e54b MINOR: init: add -dr to ignore server address resolution failures
It is very common when validating a configuration out of production not to
have access to the same resolvers and to fail on server address resolution,
making it difficult to test a configuration. This option simply appends the
"none" method to the list of address resolution methods for all servers,
ensuring that even if the libc fails to resolve an address, the startup
sequence is not interrupted.
2016-11-09 15:33:52 +01:00
Willy Tarreau
4310d36a7e MINOR: server: add support for explicit numeric address in init-addr
This will allow a server to automatically fall back to an explicit numeric
IP address when all other methods fail. The address is simply specified in
the address list.
2016-11-09 15:30:47 +01:00
Willy Tarreau
465b6e5463 MEDIUM: server: make libc resolution failure non-fatal
Now that we have "init-addr none", it becomes possible to recover on
libc resolver's failures. Thus it's preferable not to alert nor fail
at the moment the libc is called, and instead process the failure at
the end of the list. This allows "none" to be set after libc to
provide a smooth fallback in case of resolver issues.
2016-11-09 15:30:47 +01:00
Willy Tarreau
37ebe1212b MINOR: server: implement init-addr none
The server is put into the "no address" maintenance state in this case.
2016-11-09 15:30:47 +01:00
Willy Tarreau
25e515235a MEDIUM: server: make use of init-addr
It is now supported. If not set, we default to the legacy methods list
which is "last,libc".
2016-11-09 15:30:47 +01:00
Baptiste Assmann
25938278b7 MEDIUM: server: add a new init-addr server line setting
This new setting supports a comma-delimited list of methods used to
resolve the server's FQDN to an IP address. Currently supported methods
are "libc" (use the regular libc's resolver) and "last" (use the last
known valid address found in the state file).

The list is implemented in a 32-bit integer, because each init-addr
method only requires 3 bits. The last one must always be SRV_IADDR_END
(0), allowing up to 10 methods to be stored in a single 32-bit integer.

Note: the doc is provided at the end of this series.
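A hedged example of how such a list might look on server lines, combining the methods introduced in this series (host names and the numeric fallback are made up):

  default-server init-addr last,libc,none
  server app1 app1.example.com:8080 check
  server app2 app2.example.com:8080 check init-addr last,libc,192.0.2.10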
2016-11-09 15:30:47 +01:00
Willy Tarreau
a33ec24683 MEDIUM: cli: leave the RMAINT state when setting an IP address on the CLI
The RMAINT state happens when a server doesn't get a valid DNS response
past the hold time. If the address is forced on the CLI, we must use it
and leave the RMAINT state.
2016-11-09 15:30:47 +01:00
Baptiste Assmann
3b9fe9f8f4 MAJOR: dns: runtime resolution can change server admin state
WARNING: this is a MAJOR (and disruptive) change from previous HAProxy
behavior: before, HAProxy never ever changed a server's administrative
status when DNS resolution failed at run time.

This patch gives HAProxy the ability to change the administrative status
of a server to MAINT (RMAINT actually) when an error is encountered for
a period longer than the one allowed by the corresponding 'hold'
parameter.

I.e., if the configuration sets "hold nx 10s" and a server's hostname
points to an NX domain for more than 10s, then the server will be set to RMAINT,
hence put in MAINTENANCE mode.
2016-11-09 15:30:47 +01:00
Baptiste Assmann
987e16d6f4 MINOR: dns: implement extra 'hold' timers.
This adds new "hold" timers : nx, refused, timeout, other. This timers
will be used to tell HAProxy to keep an erroneous response as valid for
the corresponding period. For now they're only configured, not enforced.
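For illustration, these timers would be configured in a resolvers section such as the following (names, address and values are made up):

  resolvers mydns
      nameserver ns1 192.0.2.53:53
      hold nx      30s
      hold refused 30s
      hold timeout 30s
      hold other   30s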
2016-11-09 15:30:47 +01:00
Willy Tarreau
8b42848a44 MINOR: server: make srv_set_admin_state() capable of telling why this happens
To help debug some DNS resolution issues, it will be important to
know why a server was marked down, so let's make the function support
a 3rd argument with an indication of the reason. Passing NULL will keep
the message as-is.
2016-11-09 15:30:47 +01:00
Willy Tarreau
b96dd28477 MINOR: stats: indicate it when a server is down due to resolution
The server's state is now "MAINT (resolution)" just like we also have
"MAINT (via x/y)" when servers are tracked. The HTML stats page reports
"resolution" in the checks field similarly to what is done for the "via"
entry.
2016-11-09 15:30:47 +01:00
Willy Tarreau
e659973bfe MINOR: server: indicate in the logs when RMAINT is cleared
It's important to report in the server state change logs that RMAINT was
cleared, as it's not the regular maintenance mode, it's specific to name
resolution, and it's important to report the new state (which can be DRAIN
or READY).
2016-11-09 15:23:37 +01:00
Baptiste Assmann
83cbaa531f MAJOR: server: postpone address resolution
Server addresses are not resolved anymore upon the first pass so that we
don't fail if an address cannot be resolved by the libc. Instead they are
processed all at once after the configuration is fully loaded, by the new
function srv_init_addr(). This function only acts on the server's address
if this address uses an FQDN, which appears in server->hostname.

For now the function does two things, to follow HAProxy's historical
default behavior:

  1. apply server IP address found in server-state file if runtime DNS
     resolution is enabled for this server

  2. use the DNS resolver provided by the libc

If none of the 2 options above can find an IP address, then an error is
returned.

All of this will be needed to support the new server parameter "init-addr".
For now, the biggest user-visible change is that all server resolution errors
are dumped at once instead of causing a startup failure one by one.
2016-11-09 14:24:20 +01:00
Baptiste Assmann
4215d7d033 MINOR: init: move apply_server_state in haproxy.c before MODE_CHECK
Currently, the function which applies server states provided by the
"old" process is applied after configuration sanity check. This results
in the impossibility to check the validity of the state file during a
regular config check, implying a full start is required, which can be
a problem sometimes.

This patch moves the loading of server_state file before MODE_CHECK.
2016-11-09 14:24:20 +01:00
Willy Tarreau
ceccdd78a7 MEDIUM: tools: make str2sa_range() return the FQDN even when not resolving
This will be needed to later postpone server address resolution. We need the
FQDN even when it doesn't resolve. The caller then needs to check if fqdn was
set when resolve is null to detect that the address couldn't be parsed and
needs later resolution.
2016-11-09 14:24:20 +01:00
Willy Tarreau
def0d22cc5 MINOR: stream: make option contstats usable again
Quite a lot of people have been complaining about option contstats not
working correctly anymore since about 1.4. The reason is that a significant
part of the performance boost between 1.3 and 1.4 came from the ability
to forward data between a server and a client without waking up the stream
manager. And we couldn't afford to force sessions to constantly wake it
up, given that most of the people interested in contstats are also those
interested in high performance transmission.

An idea was experimented with in the past, consisting in limiting the
amount of transmissible data before waking it up, but it was not usable
on slow connections (eg: FTP over modem lines, RDP, SSH) as stats would
be updated too rarely if at all, so that idea was dropped.

During a discussion today another idea came up : ensure that stats are
updated once in a while, since it's the only thing that matters. It
happens that we have the request channel's analyse_exp timeout that is
used to wake the stream up after a configured delay, and that by
definition this timeout is not used when there's no more analyser
(otherwise the stream would wake up and the stats would be updated).

Thus here the idea is to reuse this timeout when there's no analyser
and set it to now+5 seconds so that a stream wakes up at least once
every 5 seconds to update its stats. It should be short enough to
provide smooth traffic graphs and to allow to debug outputs of "show
sess" more easily without inflicting too much load even for very large
number of concurrent connections.

This patch is simple enough and safe enough to be backportable to 1.6
if there is some demand.
2016-11-08 22:03:00 +01:00
Dirkjan Bussink
1866d6d8f1 MEDIUM: ssl: Add support for OpenSSL 1.1.0
In the latest release, a lot of the structures have become opaque to the
end user. This means the code using these needs to be changed to use the
proper functions to interact with these structures instead of trying to
manipulate them directly.

This does not fix any deprecations yet that are part of 1.1.0, it only
ensures that it can be compiled against that version and is still
compatible with older ones.

[wt: openssl-0.9.8 doesn't build with it, there are conflicts on certain
     function prototypes which we declare as inline here and which are
     defined differently there. But openssl-0.9.8 is not supported anymore
     so probably it's OK to go without it for now and we'll see later if
     some users still need it. Emeric has reviewed this change and didn't
     spot anything obvious which requires special care. Let's try it for
     real now]
2016-11-08 20:54:41 +01:00
Willy Tarreau
e5d3169e1c CLEANUP: wurfl: reduce exposure in the rest of the code
The only reason wurfl/wurfl.h was needed outside of wurfl.c was to expose
wurfl_handle which is a pointer to a structure, referenced by global.h.
By just storing a void* there instead, we can confine all wurfl code to
wurfl.c, which is really nice.
2016-11-08 18:47:25 +01:00
scientiamobile
d0027ed5b1 MEDIUM: wurfl: add Scientiamobile WURFL device detection module
WURFL is a high-performance and low-memory footprint mobile device
detection software component that can quickly and accurately detect
over 500 capabilities of visiting devices. It can differentiate between
portable mobile devices, desktop devices, SmartTVs and any other types
of devices on which a web browser can be installed.

In order to add WURFL device detection support, you would need to
download Scientiamobile InFuze C API and install it on your system.
Refer to www.scientiamobile.com to obtain a valid InFuze license.

Any useful information on how to configure HAProxy working with WURFL
may be found in:

  doc/WURFL-device-detection.txt
  doc/configuration.txt
  examples/wurfl-example.cfg

Please find more information about WURFL device detection API detection
at https://docs.scientiamobile.com/documentation/infuze/infuze-c-api-user-guide
2016-11-08 14:21:43 +01:00
Willy Tarreau
757478e900 BUG/MEDIUM: servers: properly propagate the maintenance states during startup
Right now there is an issue with the way the maintenance flags are
propagated upon startup. They are not propagated, just copied from the
tracked server. This implies that depending on the servers' order, some
tracking servers may not be marked down. For example this configuration
does not work as expected :

        server s1 1.1.1.1:8000 track s2
        server s2 1.1.1.1:8000 track s3
        server s3 1.1.1.1:8000 track s4
        server s4 wtap:8000 check inter 1s disabled

It results in s1/s2 being up, and s3/s4 being down, while all of them
should be down.

The only clean way to process this is to run through all "root" servers
(those not tracking any other server), and to propagate their state down
to all their trackers. This is the same algorithm used to propagate the
state changes. It has to be done both to compute the IDRAIN flag and the
IMAINT flag. However, doing so requires that tracking servers are not
marked as inherited maintenance anymore while parsing the configuration
(and given that it is wrong, better drop it).

This fix also addresses another side effect of the bug above which is
that the IDRAIN/IMAINT flags are stored in the state files, and if
restored while the tracked server doesn't have the equivalent flag,
the servers may end up in a situation where it's impossible to remove
these flags. For example in the configuration above, after removing
"disabled" on server s4, the other servers would have remained down;
this is no longer the case with this fix. Similarly, the combination of IMAINT
or IDRAIN with their respective forced modes was not accepted on
reload, which is wrong as well.

This bug has been present at least since 1.5, maybe even 1.4 (it came
with tracking support). The fix needs to be backported there, though
the srv-state parts are irrelevant.

This commit relies on previous patch to silence warnings on startup.
2016-11-07 14:31:52 +01:00
Willy Tarreau
6fb8dc1a5a MINOR: server: do not emit warnings/logs/alerts on server state changes at boot
We'll have to use srv_set_admin_flag() to propagate some server flags
during the startup, and we don't want the resulting actions to cause
warnings, logs nor e-mail alerts to be generated since we're just applying
the config or a state file. So let's condition these notifications to the
fact that we're starting.
2016-11-07 14:31:45 +01:00
Willy Tarreau
e1bde1492a BUG/MINOR: srv-state: allow to have both CMAINT and FDRAIN flags
CMAINT indicates that the server was *initially* disabled in the
configuration via the "disabled" keyword. FDRAIN indicates that the
server was switched to the DRAIN state from the CLI or the agent.
Thus it's perfectly valid to have both of them in the state file,
so the parser must not reject this combination.

This fix must be backported to 1.6.
2016-11-07 14:30:19 +01:00
Willy Tarreau
22cace2f4c BUG/MEDIUM: srv-state: properly restore the DRAIN state
There were several reports about the DRAIN state not being properly
restored upon reload.

It happens that the condition in the code does exactly the opposite
of what the comment says, and the comment is right so the code is
wrong.

It's worth noting that the conditions are complex here due to the 2
available methods to set the drain state (CLI/agent, and config's
weight). To paraphrase the updated comment in the code, there are
two possible reasons for FDRAIN to have been present :
  - previous config weight was zero
  - "set server b/s drain" was sent to the CLI

In the first case, we simply want to drop this drain state if the new
weight is not zero anymore, meaning the administrator has intentionally
turned the weight back to a positive value to enable the server again
after an operation. In the second case, the drain state was forced on
the CLI regardless of the config's weight so we don't want a change to
the config weight to lose this status. What this means is :
  - if previous weight was 0 and new one is >0, drop the DRAIN state.
  - if the previous weight was >0, keep it.

This fix must be backported to 1.6.
2016-11-07 14:30:19 +01:00
Willy Tarreau
e6d9c21059 OPTIM: http: optimize lookup of comma and quote in header values
http_find_header2() relies on find_hdr_value_end() to find the comma
delimiting a header field value, which also properly handles double
quotes and backslashes within quotes. In fact double quotes are very
rare, and commas happen once every multiple characters, especially
with cookies where a full block can be found at once. So it makes
sense to optimize this function to speed up the lookup of the first
block before the quote.

This change increases the performance from 212k to 217k req/s when
requests contain a 1kB cookie (+2.5%). We don't care about going
back into the fast parser after the first quote, as it may
needlessly make the parser more complex for very marginal gains.
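As a hedged, standalone sketch of the multi-byte scan described above (macro and function names are made up; the real find_hdr_value_end() differs in its details):

  #include <stdint.h>
  #include <string.h>

  /* non-zero if any byte of <x> is zero (classic bit-twiddling identity) */
  #define HAS_ZERO(x)    (((x) - 0x01010101U) & ~(x) & 0x80808080U)
  /* non-zero if any byte of <x> equals byte <b> */
  #define HAS_BYTE(x, b) HAS_ZERO((x) ^ (0x01010101U * (uint8_t)(b)))

  /* skip 4 characters at a time until a comma or a double quote is seen */
  static const char *skip_to_comma_or_quote(const char *p, const char *end)
  {
      while (p + 4 <= end) {
          uint32_t w;

          memcpy(&w, p, 4);              /* unaligned-safe 32-bit load */
          if (HAS_BYTE(w, ',') || HAS_BYTE(w, '"'))
              break;                     /* finish with the byte-wise scan below */
          p += 4;
      }
      while (p < end && *p != ',' && *p != '"')
          p++;
      return p;
  }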
2016-11-05 18:23:38 +01:00
Willy Tarreau
5f10ea30f4 OPTIM: http: improve parsing performance of long URIs
Searching the trailing space in long URIs takes some time. This can
happen especially on static files and some blogs. By skipping valid
character ranges by 32-bit blocks, it's possible to increase the
HTTP performance from 212k to 216k req/s on requests featuring a
100-character URI, which is an increase of 2%. This is done for
architectures supporting unaligned accesses (x86_64, x86, armv7a).
There's only a 32-bit version because URIs are rarely long and very
often short, so it's more efficient to limit the systematic overhead
than to try to optimize for the rarest requests.
2016-11-05 18:00:35 +01:00
Willy Tarreau
0431f9d476 OPTIM: http: improve parsing performance of long header lines
A performance test with 1kB cookies was capping at 194k req/s. After
implementing multi-byte skipping, the performance increased to 212k req/s,
or 9.2% faster. This patch implements this for architectures supporting
unaligned accesses (x86_64, x86, armv7a). Maybe other architectures can
benefit from this but they were not tested yet.
2016-11-05 18:00:17 +01:00
Willy Tarreau
2235b261b6 OPTIM: http: move all http character class tables into a single one
We used to have 7 different character classes, each 256 bytes long,
resulting in almost 2kB being used in the L1 cache. It's as cheap to
test a bit as to check that a byte is not null, so let's store a 7-bit
composite value and check for the respective bits there instead.

The executable is now 4 kB smaller and the performance on small
objects increased by about 1% to 222k requests/second with a config
involving 4 http-request rules including 1 header lookup, one header
replacement, and 2 variable assignments.
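A reduced sketch of the idea (the flag names and classes below are illustrative, not HAProxy's actual ones):

  /* one 256-byte table, one bit per character class */
  #define CLS_DIGIT 0x01
  #define CLS_ALPHA 0x02
  #define CLS_TOKEN 0x04

  static unsigned char http_class[256];

  static void init_http_classes(void)
  {
      for (int c = 0; c < 256; c++) {
          if (c >= '0' && c <= '9')
              http_class[c] |= CLS_DIGIT | CLS_TOKEN;
          else if ((c | 0x20) >= 'a' && (c | 0x20) <= 'z')
              http_class[c] |= CLS_ALPHA | CLS_TOKEN;
          else if (c == '-' || c == '.' || c == '_' || c == '~')
              http_class[c] |= CLS_TOKEN;
      }
  }

  /* testing one bit replaces seven separate 256-byte lookup tables */
  #define IS_TOKEN(c) (http_class[(unsigned char)(c)] & CLS_TOKEN)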
2016-11-05 15:58:08 +01:00
Willy Tarreau
dc3a9e830c CLEANUP: tools: make ipcpy() preserve the original port
ipcpy() is used to replace an IP address with another one, but it
doesn't preserve the original port so all callers have to do it
manually while it's trivial to do there. Better do it inside the
function.
2016-11-05 13:56:04 +01:00
Willy Tarreau
ecde7df11b MEDIUM: tools: make str2ip2() preserve existing ports
Often we need to call str2ip2() on an address which already contains a
port without replacing it, so let's ensure we preserve it even if the
family changes.
2016-11-05 13:56:04 +01:00
Willy Tarreau
f7659cb10c BUG/MEDIUM: systemd-wrapper: return correct exit codes
Gabriele Cerami reported that the exit codes of the systemd-wrapper are
wrong. In short, it directly returns the output of the wait syscall's
status, which is a composite value made of the error code and signal number.
In general it contains the signal number on the lower bits and the error
code on the higher bits, but exit() truncates it to the lowest 8 bits,
causing config validations to incorrectly report a success. Example :

  $ ./haproxy-systemd-wrapper -c -f /dev/null
  <7>haproxy-systemd-wrapper: executing /tmp/haproxy -c -f /dev/null -Ds
  Configuration file has no error but will not start (no listener) => exit(2).
  <5>haproxy-systemd-wrapper: exit, haproxy RC=512
  $ echo $?
  0

If the process is killed however, the signal number is directly reported
in the exit code.

Let's fix all this to ensure that the exit code matches what the shell does,
which means that codes 0..127 are for exit codes, codes 128..254 for signals,
and code 255 for unknown exit code. Now the return code is correct :

  $ ./haproxy-systemd-wrapper -c -f /dev/null
  <7>haproxy-systemd-wrapper: executing /tmp/haproxy -c -f /dev/null -Ds
  Configuration file has no error but will not start (no listener) => exit(2).
  <5>haproxy-systemd-wrapper: exit, haproxy RC=2
  $ echo $?
  2

  $ ./haproxy-systemd-wrapper -f /tmp/cfg.conf
  <7>haproxy-systemd-wrapper: executing /tmp/haproxy -f /dev/null -Ds
  ^C
  <5>haproxy-systemd-wrapper: exit, haproxy RC=130
  $ echo $?
  130

This fix must be backported to 1.6 and 1.5.
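A minimal sketch of this mapping (the helper name is made up):

  #include <sys/wait.h>

  /* turn a wait() status into a shell-like exit code:
   * 0..127 for normal exits, 128..254 for signals, 255 otherwise */
  static int status_to_exit_code(int status)
  {
      if (WIFEXITED(status))
          return WEXITSTATUS(status) < 128 ? WEXITSTATUS(status) : 255;
      if (WIFSIGNALED(status))
          return 128 + WTERMSIG(status);
      return 255;
  }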
2016-11-03 20:34:20 +01:00
Willy Tarreau
9df94c2b25 MINOR: peers: remove the pointer to the stream
There's no reason to use the stream anymore, only the appctx should be
used by a peer. This was a leftover from the migration to appctx and it
caused some confusion, so let's totally drop it now. Note that half of
the patch is just comment updates.
2016-10-31 20:07:01 +01:00
Willy Tarreau
81bc3b062b MINOR: peers: make peer_session_forceshutdown() use the appctx and not the stream
It was inherited from initial code but we must only manipulate the appctx
and never the stream, otherwise we always risk shooting ourselves in the
foot.
2016-10-31 20:07:01 +01:00
Willy Tarreau
b21d08e249 BUG/MEDIUM: peers: fix use after free in peer_session_create()
In case of resource allocation error, peer_session_create() frees
everything allocated and returns a pointer to the stream/session that
was put back into the free pool. This stream/session is then assigned
to ps->{stream,session} with no error control. This means that it is
perfectly possible to have a new stream or session being both used for
a regular communication and for a peer at the same time.

In fact it is the only way (for now) to explain a CLOSE_WAIT on peers
connections that was caught in this dump with the stream interface in
SI_ST_CON state while the error field proves the state ought to have
been SI_ST_DIS, very likely indicating two concurrent accesses on the
same area :

  0x7dbd50: [31/Oct/2016:17:53:41.267510] id=0 proto=tcpv4
    flags=0x23006, conn_retries=0, srv_conn=(nil), pend_pos=(nil)
    frontend=myhost2 (id=4294967295 mode=tcp), listener=? (id=0)
    backend=<NONE> (id=-1 mode=-) addr=127.0.0.1:41432
    server=<NONE> (id=-1) addr=127.0.0.1:8521
    task=0x7dbcd8 (state=0x08 nice=0 calls=2 exp=<NEVER> age=1m5s)
    si[0]=0x7dbf48 (state=CLO flags=0x4040 endp0=APPCTX:0x7d99c8 exp=<NEVER>, et=0x000)
    si[1]=0x7dbf68 (state=CON flags=0x50 endp1=CONN:0x7dc0b8 exp=<NEVER>, et=0x020)
    app0=0x7d99c8 st0=11 st1=0 st2=0 applet=<PEER>
    co1=0x7dc0b8 ctrl=tcpv4 xprt=RAW data=STRM target=PROXY:0x7fe62028a010
        flags=0x0020b310 fd=7 fd.state=22 fd.cache=0 updt=0
    req=0x7dbd60 (f=0x80a020 an=0x0 pipe=0 tofwd=0 total=0)
        an_exp=<NEVER> rex=<NEVER> wex=<NEVER>
        buf=0x78a3c0 data=0x78a3d4 o=0 p=0 req.next=0 i=0 size=0
    res=0x7dbda0 (f=0x80402020 an=0x0 pipe=0 tofwd=0 total=0)
        an_exp=<NEVER> rex=<NEVER> wex=<NEVER>
        buf=0x78a3c0 data=0x78a3d4 o=0 p=0 rsp.next=0 i=0 size=0

Special thanks to Arnaud Gavara who provided lots of valuable input and
ran some validation testing on this patch.

This fix must be backported to 1.6 and 1.5. Note that in 1.5 the
session is not assigned from within the function so some extra checks
may be needed in the callers.
2016-10-31 20:02:05 +01:00
Willy Tarreau
78c0c50705 BUG/MEDIUM: peers: on shutdown, wake up the appctx, not the stream
This part was missed when peers were ported to the new applet
infrastructure in 1.6, the main stream is woken up instead of the
appctx. This creates a race condition by which it is possible to
wake the stream at the wrong moment and miss an event. This bug
might be at least partially responsible for some of the CLOSE_WAIT states
that were reported on peers sessions upon reload in version 1.6.

This fix must be backported to 1.6.
2016-10-31 20:01:42 +01:00
Ian Miell
71c432e937 CLEANUP: cfgparse: Very minor spelling correction
'optionnally' changed to 'optionally'
2016-10-26 18:46:01 +02:00
Chad Lavoie
1666930f03 MINOR: stats: Escape equals sign on socket dump
Greetings,

Was recently working with a stick table storing URLs and one had an
equals sign in it (e.g. 127.0.0.1/f=ab), which made it difficult to
easily split the key and value without a regex.

This patch will change it so that the key looks like
"key=127.0.0.1/f\=ab" instead of "key=127.0.0.1/f=ab".

Not very important given that there are ways to work around it.

Thanks,

- Chad
2016-10-25 22:15:22 +02:00
Andrew Rodland
4f88c63609 MEDIUM: server: Implement bounded-load hash algorithm
The consistent hash lookup is done as normal, then if balancing is
enabled, we progress through the hash ring until we find a server that
doesn't have "too much" load. In the case of equal weights for all
servers, the allowed number of requests for a server is either the
floor or the ceil of (num_requests * hash-balance-factor / num_servers);
with unequal weights things are somewhat more complicated, but the
spirit is the same -- a server should not be able to go too far above
(its relative weight times) the average load. Using the hash ring to
make the second/third/etc. choice maintains as much locality as
possible given the load limit.

Signed-off-by: Andrew Rodland <andrewr@vimeo.com>
2016-10-25 20:21:32 +02:00
Andrew Rodland
13d5ebb913 MINOR: server: compute a "cumulative weight" to allow chash balancing to hit its target
For active servers, this is the sum of the eweights of all active
servers before this one in the backend, and
[srv->cumulative_weight .. srv_cumulative_weight + srv_eweight) is a
space occupied by this server in the range [0 .. lbprm.tot_wact), and
likewise for backup servers with tot_wbck. This allows choosing a
server or a range of servers proportional to their weight, by simple
integer comparison.

Signed-off-by: Andrew Rodland <andrewr@vimeo.com>
2016-10-25 20:21:32 +02:00
Andrew Rodland
b1f48e3161 MINOR: backend: add hash-balance-factor option for hash-type consistent
0 will mean no balancing occurs; otherwise it represents the ratio
between the highest-loaded server and the average load, times 100 (i.e.
a value of 150 means a 1.5x ratio), assuming equal weights.

Signed-off-by: Andrew Rodland <andrewr@vimeo.com>
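For illustration, a backend using this option could look like the following (names, addresses and the factor are arbitrary):

  backend static
      balance uri
      hash-type consistent
      hash-balance-factor 150
      server s1 192.0.2.11:80 check
      server s2 192.0.2.12:80 check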
2016-10-25 20:21:32 +02:00
Andrew Rodland
e168feb4a8 MINOR: proxy: add 'served' field to proxy, equal to total of all servers'
This will allow lb_chash to determine the total active sessions for a
proxy without any computation.

Signed-off-by: Andrew Rodland <andrewr@vimeo.com>
2016-10-25 20:21:32 +02:00
Willy Tarreau
b957109727 BUG/MEDIUM: systemd: let the wrapper know that haproxy has completed or failed
Pierre Cheynier found that there's a persistent issue with the systemd
wrapper. Too fast reloads can lead to certain old processes not being
signaled at all and continuing to run. The problem was tracked down as
a race between the startup and the signal processing : nothing prevents
the wrapper from starting new processes while others are still starting,
and the resulting pid file will only contain the latest pids in this
case. This can happen with large configs and/or when a lot of SSL
certificates are involved.

In order to solve this we want the wrapper to wait for the new processes
to complete their startup. But we also want to ensure it doesn't wait for
nothing in case of error.

The solution found here is to create a pipe between the wrapper and the
sub-processes. The wrapper waits on the pipe and the sub-processes are
expected to close this pipe once they completed their startup. That way
we don't queue up new processes until the previous ones have registered
their pids to the pid file. And if anything goes wrong, the wrapper is
immediately released. The only thing is that we need the sub-processes
to know the pipe's file descriptor. We pass it in an environment variable
called HAPROXY_WRAPPER_FD.

It was confirmed both by Pierre and myself that this completely solves
the "zombie" process issue so that only the new processes continue to
listen on the sockets.

It seems that in the future this stuff could be moved to the haproxy
master process, also getting rid of an environment variable.

This fix needs to be backported to 1.6 and 1.5.
2016-10-25 17:43:45 +02:00
Willy Tarreau
a785269b4e MINOR: systemd: report it when execve() fails
It's important to know that a signal sent to the wrapper had no effect
because something failed during execve(). Ideally more info (strerror)
should be reported. It would be nice to backport this to 1.6 and 1.5.
2016-10-25 17:36:40 +02:00
Willy Tarreau
3747ea07ce BUG/MINOR: systemd: check return value of calloc()
The wrapper is not the most reliable thing in the universe, so start
by adding at least the minimum expected controls :-/

To be backported to 1.5 and 1.6.
2016-10-25 17:36:40 +02:00
Willy Tarreau
4351ea61fb BUG/MINOR: systemd: always restore signals before execve()
Since signals are inherited, we must restore them before calling execve()
and intercept them again after a failed execve(). In order to cleanly deal
with the SIGUSR2/SIGHUP loops where we re-exec the wrapper, we ignore these
two signals during a re-exec, and restore them to defaults when spawning
haproxy.

This should be backported to 1.6 and 1.5.
2016-10-25 17:36:20 +02:00
Willy Tarreau
7643d09dca BUG/MINOR: systemd: make the wrapper return a non-null status code on error
When execv() fails to execute the haproxy executable, it's important to
return an error instead of pretending everything is cool. This fix should
be backported to 1.6 and 1.5 in order to improve the overall reliability
under systemd.
2016-10-25 17:36:20 +02:00
Thierry FOURNIER / OZON.IO
07c3d78c2c BUG/MINOR: ssl: prevent multiple entries for the same certificate
Today, the certificates are indexed in the SNI tree using their CN and the
list of their AltNames. So some certificates have the same name in the
CN and in one of the AltNames entries.

Typically, Let's Encrypt duplicates the DNS name in the CN and the
AltName.

This patch prevents the creation of identical entries in the trees. It
checks for the same DNS name and the same SSL context.

If the same certificate is registered two times, it will be duplicated.

This patch should be backported in the 1.6 and 1.5 version.
2016-10-24 19:13:12 +02:00
Thierry FOURNIER / OZON.IO
7a3bd3b9dc BUG/MINOR: ssl: Check malloc return code
If malloc() can't allocate memory and returns NULL, a segfault will be raised.

This patch should be backported in the 1.6 and 1.5 version.
2016-10-24 19:13:12 +02:00
Thierry FOURNIER / OZON.IO
d44ea3f77c BUILD/CLEANUP: ssl: Check BIO_reset() return code
The BIO_reset function can fail, and the error is not processed.
This patch just takes into account the return code of the BIO_reset()
function.
2016-10-24 19:13:12 +02:00
Thierry FOURNIER / OZON.IO
8b068c2993 MINOR: ssl: add debug traces
Add some debug traces when haproxy is configured in debug & verbose mode.
This is useful for openssl tests. Typically, the error "SSL handshake
failure" can be caused by a lot of protocol errors. This patch details
the encountered error. For example:

   OpenSSL error 0x1408a0c1: ssl3_get_client_hello: no shared cipher

Note that my compiler (gcc-4.7) refuses to consider the function
ssl_sock_dump_errors() as inline. The "if" condition ensures that the
content of the function is not executed in the normal case. It would be
a pity to call a function just for testing its execution condition, so
I use the macro "forceinline".
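
A hedged sketch of such a dump using the standard OpenSSL error queue
(illustrative only, not the exact ssl_sock_dump_errors() code):

    #include <stdio.h>
    #include <openssl/err.h>

    static void dump_ssl_errors(void)
    {
        unsigned long err;
        char buf[256];

        /* drain the thread's error queue and print each entry */
        while ((err = ERR_get_error()) != 0) {
            ERR_error_string_n(err, buf, sizeof(buf));
            fprintf(stderr, "OpenSSL error 0x%08lx: %s\n", err, buf);
        }
    }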
2016-10-24 19:13:12 +02:00
Willy Tarreau
a5bc36b31c MINOR: stats: emit dses
This is the number of denied sessions, blocked by "tcp-request session reject".
2016-10-21 18:19:48 +02:00
Willy Tarreau
620408f406 MEDIUM: tcp: add registration and processing of TCP L5 rules
This commit introduces "tcp-request session" rules. These are very
much like "tcp-request connection" rules except that they're processed
after the handshake, so it is possible to consider SSL information and
addresses rewritten by the proxy protocol header in actions. This is
particularly useful to track proxied sources as this was not possible
before, given that tcp-request content rules are processed after each
HTTP request. Similarly it is possible to assign the proxied source
address or the client's cert to a variable.
2016-10-21 18:19:24 +02:00
Willy Tarreau
7d9736fb5d CLEANUP: tcp rules: mention everywhere that tcp-conn rules are L4
This is in order to make integration of tcp-request-session cleaner :
- tcp_exec_req_rules() was renamed tcp_exec_l4_rules()
- LI_O_TCP_RULES was renamed LI_O_TCP_L4_RULES
  (LI_O_*'s horrible indent was also fixed and a provision was left
   for L5 rules).
2016-10-21 18:19:24 +02:00
Willy Tarreau
8a90b8ea19 MINOR: stats: output dcon
These are denied conns. Strangely this wasn't emitted while it used to be
available for a while. It corresponds to the number of connections blocked
by "tcp-request connection reject".
2016-10-21 18:17:56 +02:00
Willy Tarreau
87846e42a4 BUG/MINOR: vars: smp_fetch_var() doesn't depend on HTTP but on the session
Thus the SMP_USE_HTTP_ANY dependency is incorrect, we have to depend on
SMP_USE_L5_CLI (the session). It's particularly important for session-wide
variables which are kept across HTTP requests. For now there is no impact
but it will make a difference with tcp-request session rules.
2016-10-21 17:53:46 +02:00
Willy Tarreau
7513d001c8 BUG/MINOR: vars: make smp_fetch_var() more robust against misuses
smp_fetch_var() may be called from everywhere since it just reads a
variable. It must ensure that the stream exists before trying to return
a stream-dependent variable. For now there is no impact but it will
cause trouble with tcp-request session rules.
2016-10-21 17:53:46 +02:00
Willy Tarreau
108a8fd8be BUG/MINOR: vars: use sess and not s->sess in action_store()
This causes the stream to be dereferenced when not needed. It will
cause trouble when variables are used outside of a stream.
2016-10-21 17:53:46 +02:00
Willy Tarreau
00005ce5a1 MINOR: tcp: make set-src/set-src-port and set-dst/set-dst-port commutative
When the tcp/http actions above were introduced in 1.7-dev4, we used to
proceed like this :

  - set-src/set-dst would force the port to zero
  - set-src-port/set-dst-port would not do anything if the address family is
    neither AF_INET nor AF_INET6.

It was a stupid idea of mine to request this behaviour because it ensures
that these functions cannot be used in a wide number of situations. Because
of the first rule, it is necessary to save the source port one way or
another if only the address has to be changed (so you have to use a
variable). Due to the second rule, there's no way to set the source port
on a unix socket without first overwriting the address. And sometimes it's
really not convenient, especially when there's no way to guarantee that all
fields will properly be set.

In order to fix all this, this small change does the following :
  - set-src/set-dst always preserve the original port even if the address
    family changes. If the previous address family didn't have a port (eg:
    AF_UNIX), then the port is set to zero ;

  - set-src-port/set-dst-port always preserve the original address. If the
    address doesn't have a port, then the family is forced to IPv4 and the
    address to "0.0.0.0".

Thanks to this it now becomes possible to perform one action, the other or
both in any order.
2016-10-21 15:15:20 +02:00
William Lallemand
1e08cd819a MEDIUM: cli: register CLI keywords with cli_register_kw()
To register a new cli keyword, you need to declare a cli_kw_list
structure in your source file:

	static struct cli_kw_list cli_kws = {{ },{
		{ { "test", "list", NULL }, "test list : do some tests on the cli", test_parsing, NULL },
		{ { NULL }, NULL, NULL, NULL, NULL }
	}};

And then register it:

	cli_register_kw(&cli_kws);

The first field is an array of 5 elements, where you declare the
keyword combination which will match; it must be ended by a NULL
element.

The second field is used as a usage message; it will appear in the help
of the cli. You can set it to NULL if you don't want to show it, which is
a good idea if you want to overwrite some existing keywords.

The two last fields are callbacks.

The first one is used at parsing time, you can use it to parse the
arguments of your keywords and print small messages. The function must
return 1 in case of a failure, otherwise 0:

	#include <proto/dumpstats.h>

	static int test_parsing(char **args, struct appctx *appctx)
	{
		struct chunk out;

		if (!*args[2]) {
			appctx->ctx.cli.msg = "Error: the 3rd argument is mandatory !";
			appctx->st0 = STAT_CLI_PRINT;
			return 1;
		}
		chunk_reset(&trash);
		chunk_printf(&trash, "arg[3]: %s\n", args[2]);
		chunk_init(&out, NULL, 0);
		chunk_dup(&out, &trash);
		appctx->ctx.cli.err = out.str;
		appctx->st0 = STAT_CLI_PRINT_FREE; /* print and free in the default cli_io_handler */
		return 0;
	}

The last field is the IO handler callback. It can be set to NULL if you
want to use the default cli_io_handler(), otherwise you can write your
own. You can use the private pointer in the appctx if you need to store
a context or some data. stats_dump_sess_to_buffer() is a good example of
an IO handler; IO handlers often use the appctx->st2 variable for the state
machine. The handler must return 0 if it has to be called again later,
otherwise 1.
2016-10-19 19:03:40 +02:00
Frédéric Lécaille
523cc9e858 MEDIUM: peers: Fix a peer stick-tables synchronization issue.
During the stick-table teaching process which occurs at reloading/restart time,
expiration dates of stick-tables entries were not synchronized between peers.

This patch adds two new stick-table messages to provide such a synchronization feature.

As these new messages are not supported by older haproxy peers protocol versions,
this patch increments the peers protocol version from 2.0 to 2.1, to help in detecting
such older peers protocol implementations so that newer versions can still
transparently communicate with older ones.

[wt: technically speaking it would be nice to have this backported into 1.6
 as some people who reload often are affected by this design limitation, but
 it's not a totally transparent change that may make certain users feel
 reluctant to upgrade older versions. Let's let it cook in 1.7 first and
 decide later]
2016-10-17 19:44:35 +02:00
Nenad Merdanovic
ad9a7e9770 MINOR: Add fe_req_rate sample fetch
The fe_req_rate is similar to fe_sess_rate, but fetches the number
of HTTP requests per second instead of connections/sessions per second.

Signed-off-by: Nenad Merdanovic <nmerdan@anine.io>
2016-10-03 16:08:09 +02:00
Willy Tarreau
c3d8cd47e0 BUG/MEDIUM: dns: don't randomly crash on out-of-memory
dns_init_resolvers() tries to emit the current resolver's name in the
error message in case of out-of-memory condition. But it must not do
it when initializing the trash before even having such a resolver
otherwise the user is certain to get a dirty crash instead of the
error message. No backport is needed.
2016-10-01 09:23:04 +02:00
Willy Tarreau
1f6367fa08 BUG/MINOR: stats: report the correct conn_time in backend's html output
An apparent copy-paste error resulted in backend's avg connection time
to report the average queue time instead in the HTML dump. Backport to
1.6 is desired.
2016-10-01 09:12:08 +02:00
Christopher Faulet
06ecf3ab72 BUG/MEDIUM: http/compression: Fix how chunked data are copied during the HTTP body parsing
When the compression is enabled on HTTP responses, the chunked data are copied in
a temporary buffer during the HTTP body parsing and then compressed when
everything is forwarded to the client. But the amount of data that could be copied
was not correctly calculated. In most cases it worked, except on the edge case when
the channel buffer was almost full.

[wt: bug introduced by b77c5c26 in 1.7-dev, no backport needed]
2016-09-23 16:01:14 +02:00
Lukas Tribus
7d56c6d347 MINOR: enable IP_BIND_ADDRESS_NO_PORT on backend connections
Enable IP_BIND_ADDRESS_NO_PORT on backend connections when the source
address is specified without port or port ranges. This is supported
since Linux 4.2/libc 2.23.

If the kernel supports it but the libc doesn't, we can define it at
build time:
make [...] DEFINE=-DIP_BIND_ADDRESS_NO_PORT=24
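
A hedged sketch of how the option is set on an outgoing socket (not the
HAProxy source itself):

    #include <sys/socket.h>
    #include <netinet/in.h>

    #ifndef IP_BIND_ADDRESS_NO_PORT
    #define IP_BIND_ADDRESS_NO_PORT 24   /* same fallback as the DEFINE above */
    #endif

    static int set_bind_no_port(int fd)
    {
        int one = 1;

        /* lets the kernel defer source port allocation until connect() */
        return setsockopt(fd, IPPROTO_IP, IP_BIND_ADDRESS_NO_PORT,
                          &one, sizeof(one));
    }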

For more information about this feature, see Linux commit 90c337da
2016-09-13 15:22:54 +02:00
Lukas Tribus
a0bcbdcb04 MEDIUM: make SO_REUSEPORT configurable
With Linux officially introducing SO_REUSEPORT support in 3.9 and
its mainstream adoption we have seen more people running into strange
SO_REUSEPORT related issues (a process management issue turning into
hard to diagnose problems because the kernel load-balances between the
new and an obsolete haproxy instance).

Also some people simply want the guarantee that the bind fails when
the old process is still bound.

This change makes SO_REUSEPORT configurable, introducing the command
line argument "-dR" and the noreuseport configuration directive.

A backport to 1.6 should be considered.
2016-09-13 07:56:03 +02:00
Lukas Tribus
255cc5184d MINOR: show Running on zlib version 2016-09-13 07:55:59 +02:00
Lukas Tribus
dcbc5c5ecf MINOR: show Built with PCRE version
Inspired by PCRE's pcre_version.c and improved with Willy's
suggestions. Reusable parts have been added to
include/common/standard.h.
2016-09-13 07:55:51 +02:00
Lukas Tribus
d64788d9c6 BUG/MINOR: displayed PCRE version is running release
pcre_version() returns the running PCRE release, not the release
haproxy was built with.

This simple string fix should be backported to supported releases,
as the output may be confusing.
2016-09-13 07:55:46 +02:00
Baptiste Assmann
3cf7f98782 MINOR: dns: proper domain name validation when receiving DNS response
The analysis of the CNAME resolution and of the request's domain name was performed
twice:
- when validating the response buffer
- when loading the right IP address from the response

Now that DNS responses are properly loaded into a DNS response structure, we
do the domain name validation when loading/validating the response in
the DNS structure, and the later processing of this task is now useless.

backport: no
2016-09-12 20:01:59 +02:00
Baptiste Assmann
65ce3f5ee4 MINOR: dns: query type change when last record is a CNAME
DNS servers don't return A or AAAA records if the query points to a CNAME
not resolving to the right type.
We know it because the last record of the response is a CNAME. We can
trigger a new query, switching to a new query type, handled by the layer
above.
2016-09-12 20:01:40 +02:00
Baptiste Assmann
c1ce5f358e MEDIUM: dns: new DNS response parser
New DNS response parser function which turns the DNS response from a
network buffer into a DNS structure, much easier for later analysis
by the upper layer.

Memory is pre-allocated at start-up in a chunk dedicated to DNS
response store.

New error code to report a wrong number of queries in a DNS response.
2016-09-12 19:54:23 +02:00
Baptiste Assmann
bcbd491e9c CLEANUP/MINOR: dns: comment does not follow the code update
The loop comment is not appropriate anymore and needed to be updated
according to the code.

backport: no
2016-09-12 19:51:49 +02:00
Baptiste Assmann
3749ebf6fc MINOR: cli: ability to change a server's port
Enrichment of the 'set server <b>/<s> addr' cli directive to now also allow
changing a server's port.
The new syntax looks like:
  set server <b>/<s> addr [port <port>]
2016-09-11 08:13:31 +02:00
Baptiste Assmann
d458adcc52 MINOR: new update_server_addr_port() function to change both server's ADDR and service PORT
This function can replace update_server_addr() where changing the
server's port as well as the IP address is required.
It performs some validation before performing each type of change.
2016-09-11 08:13:11 +02:00
Baptiste Assmann
6b453f166f MINOR: server: introduction of 3 new server flags
Introduction of 3 new server flags to remember if some parameters were set
during configuration parsing.

* SRV_F_CHECKADDR: this server has a check addr configured
* SRV_F_CHECKPORT: this server has a check port configured
* SRV_F_AGENTADDR: this server has an agent addr configured
2016-09-11 08:12:42 +02:00
Baptiste Assmann
95db2bcfee MAJOR: check: find out which port to use for health check at run time
HAProxy used to deduce the port used for health checks when parsing the
configuration at startup time.
This way of working makes it complicated to change the port at
run time.

The current patch changes this behavior and makes HAProxy choose the
port used for health checking when preparing the check task itself.

A new type of error is introduced and reported when no port can be found.

There won't be any impact on performance, since the process to find out the
port value is made of a few 'if' statements.

This patch also introduces a new check state CHK_ST_PORT_MISS: this flag is
used to report an error in the case when HAProxy needs to establish a TCP
connection to a server, to perform a health check but no TCP ports can be
found for it.

And last, it also introduces a new stream termination condition:
SF_ERR_CHK_PORT. Purpose of this flag is to report an error in the event when
HAProxy has to run a health check but no port can be found to perform it.
2016-09-11 08:12:13 +02:00
Dinko Korunic
7276f3aa3d BUG/MINOR: Fix OSX compilation errors
SOL_IPV6 is not defined on OSX, breaking the compile. Also libcrypt is
not available for installation either in MacPorts or as a Brew recipe,
so we're disabling the implicit dependency.

Signed-off-by: Dinko Korunic <dinko.korunic@gmail.com>
2016-09-11 08:04:37 +02:00
Baptiste Assmann
5094656a67 MINOR: cli: change a server health check port through the stats socket
Introduction of a new CLI command 'set server <srv> check-port <port>' to
allow admins to change a server's health check port at run time.

This changes the equivalent of the configuration server parameter
called 'port'.
2016-09-06 07:39:16 +02:00
Chad Lavoie
e3f5031b51 MINOR: cli: allow the semi-colon to be escaped on the CLI
Today I was working on an auto-update script for some ACLs, and found
that I couldn't load ACL entries with a semi-colon in them no matter
how I tried to escape it.

As such, I wrote this patch (this one is for 1.7dev, but it applies to
1.5 the same with just line numbers changed), which seems to allow me
to execute a command such as "add acl /etc/foo.lst foo\;bar" over the
socket. It's worth noting that stats_sock_parse_request() already uses
the backslash to escape spaces in words so it makes sense to use it as
well to escape the semi-colon.
2016-08-30 22:32:23 +02:00
Willy Tarreau
74967f60ec BUG/MINOR: payload: fix SSLv2 version parser
A typo resulting from a copy-paste in the original req.ssl_ver code will
make certain SSLv2 hello messages not properly detected. The bug has been
present since the code was added in 1.3.16. In 1.3 and 1.4, this code was
in proto_tcp.c. In 1.5-dev0, it moved to acl.c, then later to payload.c.

This bug was tagged "minor" because SSLv2 is outdated and this encoding
was rarely (if at all) used, the shorter form starting with 0x80 being
more common.

This fix needs to be backported to all currently maintained branches.
2016-08-30 14:47:57 +02:00
Erwan Velu
5457eb49b4 CLEANUP: dns: Removing useless variable & assignment
In dns_send_query(), ret was set to 0 but always reassigned before being
used, so this initialisation was useless.

The send_error variable was created, assigned to 0 but never used. So
this variable is just useless by itself. Removing it.
2016-08-30 14:24:48 +02:00
Erwan Velu
f714fb533e CLEANUP: dumpstats: Removing useless variables allocation
In dump_servers_state(), the srv_time_since_last_change, bk_f_forced_id and srv_f_forced_id variables
were first set to zero and immediately reassigned to another value, without ever being accessed in between.

This sounds like a useless initialization, so let's keep only the useful assignment.
2016-08-30 14:24:48 +02:00
Erwan Velu
b12ff9a201 CLEANUP: proto_http: Removing useless variable assignation
delta is set to 0 just before being assigned to a buffer.
This patch just removes this useless line; shorter is better.
2016-08-30 14:24:48 +02:00
Willy Tarreau
0857d7aa8f BUG/MAJOR: stream: properly mark the server address as unset on connect retry
Bartosz Koninski reported that recent commit 568743a ("BUG/MEDIUM: stream-int:
completely detach connection on connect error") introduced a nasty side effect
during the connection retry, causing a reconnect attempt to be performed on a
random address, possibly another server from another backend as shown in his
reproducer.

The real reason is in fact that by calling si_release_endpoint() after a
failed connect attempt and not clearing the SN_ADDR_SET flag on the
stream, we indicate that we continue to trust the connection's current
address. So upon next connect attempt, a connection is picked from the
pool and the last target address is reused. It is not very easy to
reproduce it 100% reliably out of production traffic, but it's easier to
see when haproxy is started with -dM to force the pools to be filled with
junk, and where subsequent connections are not even attempted due to their
family being incorrect.

The correct fix consists in clearing the SN_ADDR_SET flag after calling
si_release_endpoint() during a retry. But it outlines a deeper issue which
is in fact that the target address is *stored* in the connection and its
validity is stored in the stream. Until we have a better solution to store
target addresses, it would be better to rely on the connection flags
(CO_FL_ADDR_TO_SET) for this purpose. But it also outlines the fact that
the same design flaw still exists in Lua sockets and in idle sockets, which
fortunately are not affected by this bug in practice.

Thanks to Bartosz for providing all the elements needed to understand the
problem.

This fix needs to be backported to 1.6 and 1.5.
2016-08-29 15:33:56 +02:00
ben51degrees
1f077ebff2 BUILD/MAJOR: updated 51d Trie implementation to incorporate latest update to 51Degrees.c
Trie now uses a dataset structure just like Pattern, so this has been
defined in includes/types/global.h for both Pattern and Trie where it
was just Pattern.
In src/51d.c all functions used by the Trie implementation which need a
dataset as an argument now use the global dataset. The
fiftyoneDegreesDestroy method has now been replaced with
fiftyoneDegreesDataSetFree which is common to Pattern and Trie. In
addition, two extra dataset init statuses have been added to the switch
statement in init_51degrees().
2016-08-24 20:29:31 +02:00
Thierry FOURNIER / OZON.IO
4cac359a39 MEDIUM: log: Decompose %Tq in %Th %Ti %TR
Tq is the time between the instant the connection is accepted and a
complete valid request is received. This time includes the handshake
(SSL / Proxy-Protocol), the idle when the browser does preconnect and
the request reception.

This patch decomposes %Tq in 3 measurements names %Th, %Ti, and %TR
which returns respectively the handshake time, the idle time and the
duration of valid request reception. It also adds %Ta which reports
the request's active time, which is the total time without %Th nor %Ti.
It replaces %Tt as the total time, reporting accurate measurements for
HTTP persistent connections.

%Th is available for TCP and HTTP sessions; %Ti, %TR and %Ta are only
available for HTTP connections.

In addition to this, we have new timestamps %tr, %trg and %trl, which
log the date of start of receipt of the request, respectively in the
default format, in GMT time and in local time (by analogy with %t, %T
and %Tl). All of them are obviously only available for HTTP. These values
are more relevant as they more accurately represent the request date
without being skewed by a browser's preconnect nor a keep-alive idle
time.

The HTTP log format and the CLF log format have been modified to
use %tr, %TR, and %Ta respectively instead of %t, %Tq and %Tt. This
way the default log formats now produce the expected output for users
who don't want to manually fiddle with the log-format directive.

Example with the following log-format :

   log-format "%ci:%cp [%tr] %ft %b/%s h=%Th/i=%Ti/R=%TR/w=%Tw/c=%Tc/r=%Tr/a=%Ta/t=%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

The request was sent by hand using "openssl s_client -connect" :

   Aug 23 14:43:20 haproxy[25446]: 127.0.0.1:45636 [23/Aug/2016:14:43:20.221] test~ test/test h=6/i=2375/R=261/w=0/c=1/r=0/a=262/t=2643 200 145 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"

=> 6 ms of SSL handshake, 2375 ms waiting before sending the first char (in
fact the time to type the first line), 261 ms before the end of the request,
no time spent in queue, 1 ms spent connecting to the server, immediate
response, total active time for this request = 262ms. Total time from accept
to close : 2643 ms.

The timing now decomposes like this :

                 first request               2nd request
      |<-------------------------------->|<-------------- ...
      t         tr                       t    tr ...
   ---|----|----|----|----|----|----|----|----|--
      : Th   Ti   TR   Tw   Tc   Tr   Td : Ti   ...
      :<---- Tq ---->:                   :
      :<-------------- Tt -------------->:
                :<--------- Ta --------->:
2016-08-23 15:18:08 +02:00
David Carlier
70d604593d MINOR: cfgparse: few memory leaks fixes.
Fix some minor memory leaks during config parsing.
2016-08-23 12:16:34 +02:00
Baptiste Assmann
d260e1dea6 MAJOR: listen section: don't use first bind port anymore when no server ports are provided
Up to HAProxy 1.7-dev3, HAProxy used to use the first bind port from its
local 'listen' section when no port was configured on the server.

I.e., in the configuration below, the server port would be 25:

  listen smtp
   bind :25
   server s1 1.0.0.1 check

This way of working is now obsolete and can be removed; furthermore, it is not
even documented!

This will make the possibility to change the server's port much easier.
2016-08-14 12:18:14 +02:00
Baptiste Assmann
08396c87d0 MINOR: standard.c: ipcpy() function to copy an IP address from a struct sockaddr_storage into an other one
The function ipcpy() simply duplicates the IP address found in one
struct sockaddr_storage into another struct sockaddr_storage.
It also updates the family on the destination structure.

Memory of destination structure must be allocated and cleared by the
caller.
2016-08-14 12:16:43 +02:00
Baptiste Assmann
08b24cfdb2 MINOR: standard.c: ipcmp() function to compare 2 IP addresses stored in 2 struct sockaddr_storage
New ipcmp() function to compare 2 IP addresses stored in struct
sockaddr_storage.
It returns 0 if the addresses don't match and 1 if they do.
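
A hedged sketch of what such a comparison looks like (illustrative only;
the real ipcmp()/ipcpy() in src/standard.c may differ in details):

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    static int ip_equal(const struct sockaddr_storage *a,
                        const struct sockaddr_storage *b)
    {
        if (a->ss_family != b->ss_family)
            return 0;
        if (a->ss_family == AF_INET)
            return !memcmp(&((const struct sockaddr_in *)a)->sin_addr,
                           &((const struct sockaddr_in *)b)->sin_addr,
                           sizeof(struct in_addr));
        if (a->ss_family == AF_INET6)
            return !memcmp(&((const struct sockaddr_in6 *)a)->sin6_addr,
                           &((const struct sockaddr_in6 *)b)->sin6_addr,
                           sizeof(struct in6_addr));
        return 0;
    }

The copy counterpart works the same way, except that it also sets the
destination's ss_family.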
2016-08-14 12:16:27 +02:00
Willy Tarreau
4d03ef7f03 BUG/MAJOR: stick-counters: possible crash when using sc_trackers with wrong table
Bryan Talbot reported a very interesting bug. The sc_trackers() sample
fetch seems to have escaped the sanitization that was performed during 1.5
to ensure all dereferences of stkctr_entry() were safe. Here, if a tracker
is set on a backend and is then checked against a different backend where
the entry doesn't exist, stkctr_entry() returns NULL and this is dereferenced
to retrieve the ref count. Thanks to Bryan for his detailed bug report featuring
a working config and reproducer.

This fix must be backported to 1.6 and 1.5.
2016-08-14 12:02:55 +02:00
Emeric Brun
597b26e432 BUG/MINOR: peers: empty chunks after a resync.
After pushing the last update related to a resync process, the teacher resets
the re-connection's origin to the saved one (the pointer position when it received
the resync request). But the last acknowledgement overwrites this pointer with
an inconsistent value. In peersv2, this results in empty table chunks being
regularly pushed.

The fix consists in moving the confirm code to ensure that the confirm message is
always sent after the last acknowledgement related to the resync, and in resetting
the re-connection's origin to the saved one when a confirm message is received.

This bug affects versions 1.6 and above.

For older versions (peersv1), this inconsistent state should not generate side
effects because chunks are not used and the next acknowledgement message resets
the pointer to a valid state.
2016-08-14 11:47:05 +02:00
Joe Williams
30fcd39f35 MINOR: tcp: add further tcp info fetchers
Adding on to Thierry's work (http://git.haproxy.org/?p=haproxy.git;h=6310bef5)
I have added a few more fetchers for counters based on the tcp_info struct
maintained by the kernel :

  fc_unacked, fc_sacked, fc_retrans, fc_fackets, fc_lost,
  fc_reordering

Two fields were not added because they're version-dependant :
  fc_rcv_rtt, fc_total_retrans

The field names depend on the operating system. FreeBSD and NetBSD prefix
all the field names with "__" so we have to rely on a few #ifdef for
portability.
2016-08-10 23:02:46 +02:00
Willy Tarreau
091e86e1ee BUILD: poll: remove unused hap_fd_isset() which causes a warning with clang
Clang reports that this function is not used :

src/ev_poll.c:34:28: warning: unused function 'hap_fd_isset' [-Wunused-function]
static inline unsigned int hap_fd_isset(int fd, unsigned int *evts)

It's been true since the rework of the pollers in 1.5 and it's unlikely we'll
ever need it anymore, so better remove it now to provide clean builds.

This fix can be backported to 1.6 and 1.5 now.
2016-08-10 21:23:48 +02:00
Willy Tarreau
e1cc4b5231 BUILD: compression: remove a warning when no compression lib is used
This warning appears when building without any compression library :

src/compression.c:143:19: warning: unused function 'init_comp_ctx' [-Wunused-function]
static inline int init_comp_ctx(struct comp_ctx **comp_ctx)
                  ^
src/compression.c:176:19: warning: unused function 'deinit_comp_ctx' [-Wunused-function]
static inline int deinit_comp_ctx(struct comp_ctx **comp_ctx)

No backport is needed.
2016-08-10 21:17:06 +02:00
Willy Tarreau
64345aaaf0 BUILD: checks: remove the last strcat and eliminate a warning on OpenBSD
OpenBSD emits warnings on usages of strcpy, strcat and sprintf (and
probably a few others). Here we have a single such warning in all the code
reintroduced by commit 0ba0e4a ("MEDIUM: Support sending email alerts") in
1.6-dev1. Let's get rid of it, the open-coding of strcat is as small as its
usage and the the result is even more efficient.

This fix needs to be backported to 1.6.
2016-08-10 19:32:39 +02:00
Willy Tarreau
7f3e3c0159 BUILD: tcp: do not include netinet/ip.h for IP_TTL
On OpenBSD, netinet/ip.h fails unless in_systm.h is included. This
include was added by the silent-drop feature introduced with commit
2d392c2 ("MEDIUM: tcp: add new tcp action "silent-drop"") in 1.6-dev6,
but we don't need it, IP_TTL is defined in netinet/in.h, so let's drop
this useless include.

This fix needs to be backported to 1.6.
2016-08-10 19:32:15 +02:00
Willy Tarreau
077edcba2e BUILD: log: iovec requires to include sys/uio.h on OpenBSD
The following commit merged into 1.6-dev6 broke the build on OpenBSD :

  609ac2a ("MEDIUM: log: replace sendto() with sendmsg() in __send_log()")

Including sys/uio.h is enough to fix this. This fix needs to be backported
to 1.6.
2016-08-10 19:32:06 +02:00
Willy Tarreau
a6e3be7ae9 BUILD: protocol: fix some build errors on OpenBSD
Building 1.6 and above on OpenBSD 5.2 fails due to protocol.c not
including sys/types.h before sys/socket.h :

  In file included from src/protocol.c:14:
  /usr/include/sys/socket.h:162: error: expected specifier-qualifier-list before 'u_int8_t'

This fix needs to be backported to 1.6.
2016-08-10 19:31:58 +02:00
Emeric Brun
cc52274496 BUG/MINOR: peers: some updates are pushed twice after a resync.
This bug is due to a copy/paste error and was introduced
with peers-protocol v2:

The last_pushed pointer was not correctly reset to the teaching_origin at
the end of the teaching state but to the first update present in the tree.

The result: some updates were re-pushed after leaving the teaching state.

This fix needs to be backported to 1.6.
2016-08-10 17:42:13 +02:00
Willy Tarreau
16e015635c MINOR: tcp: add dst_is_local and src_is_local
It is sometimes needed in application server environments to easily tell
if a source is local to the machine or a remote one, without necessarily
knowing all the local addresses (dhcp, vrrp, etc). Similarly in transparent
proxy configurations it is sometimes desired to tell the difference between
local and remote destination addresses.

This patch adds two new sample fetch functions for this :

dst_is_local : boolean
  Returns true if the destination address of the incoming connection is local
  to the system, or false if the address doesn't exist on the system, meaning
  that it was intercepted in transparent mode. It can be useful to apply
  certain rules by default to forwarded traffic and other rules to the traffic
  targeting the real address of the machine. For example, the stats page could
  be delivered only on this address, or SSH access could be locally redirected.
  Please note that the check involves a few system calls, so it's better to do
  it only once per connection.

src_is_local : boolean
  Returns true if the source address of the incoming connection is local to the
  system, or false if the address doesn't exist on the system, meaning that it
  comes from a remote machine. Note that UNIX addresses are considered local.
  It can be useful to apply certain access restrictions based on where the
  client comes from (eg: require auth or https for remote machines). Please
  note that the check involves a few system calls, so it's better to do it only
  once per connection.
2016-08-09 16:50:08 +02:00
Willy Tarreau
f0645dce4f MINOR: sample: use smp_make_rw() in upper/lower converters
There's no point in always duplicating the sample, just ensure it's
writable, as was done prior to the smp_dup() change. This should be
backported to 1.6 to avoid a performance regression caused by this
change (about 30% more time for upper/lower due to the copy).
2016-08-09 14:31:25 +02:00
Willy Tarreau
f65c6c0456 BUG/MEDIUM: stick-table: properly convert binary samples to keys
The binary sample to stick-table key conversion is wrong. It doesn't
check that the binary sample is writable before padding it. After a
quick audit, it doesn't look like any existing sample fetch function
can trigger this bug. The correct fix consists in calling smp_make_rw()
prior to padding the sample.

This fix should be backported to 1.6.
2016-08-09 14:30:57 +02:00
Willy Tarreau
ce6955e632 BUG/MEDIUM: stick-tables: do not fail on string keys with no allocated size
When a stick-table key is derived from a string-based sample, it checks
if it's properly zero-terminated otherwise tries to do so. But the test
doesn't work for two reasons :
  - the reported allocated size may be zero while the sample is marked as
    not CONST (eg: certain sample fetch functions like smp_fetch_base()
    do this), so smp_dup() prior to the recent changes will fail on this.

  - the string might have been converted from a binary sample, where the
    trailing zero is not appended. If the sample was writable, smp_dup()
    would not modify it either and we would fail again here. This may
    happen with req.payload or req.body_param for example.

The correct solution consists in calling smp_make_safe() to ensure the
sample is usable as a valid string.

This fix must be backported to 1.6.
2016-08-09 14:30:57 +02:00
Willy Tarreau
2e0565cc09 BUG/MAJOR: server: the "sni" directive could randomly cause trouble
The "sni" server directive does some bad stuff on many occasions because
it works on a sample of type string and limits len to size-1 by hand. The
problem is that size used to be zero on many occasions before the recent
changes to smp_dup() and that it effectively results in setting len to -1
and writing the zero byte *before* the string (and not terminating the
string).

This patch makes use of the recently introduced smp_make_safe() to address
this issue.

This fix must be backported to 1.6.
2016-08-09 14:30:57 +02:00
Willy Tarreau
ad63582eb9 BUG/MEDIUM: samples: make smp_dup() always duplicate the sample
Vedran Furac reported a strange problem where the "base" sample fetch
would not always work for tracking purposes.

In fact, it happens that commit bc8c404 ("MAJOR: stick-tables: use sample
types in place of dedicated types") merged in 1.6 exposed a fundamental
bug related to the way samples use chunks as strings. The problem is that
chunks convey a base pointer, a length and an optional size, which may be
zero when unknown or when the chunk is allocated from a read-only location.
The sole purpose of this size is to know whether or not the chunk may be
appended new data. This size causes some semantic issues in the sample,
which has its own SMP_F_CONST flag to indicate read-only contents.

The problem was emphasized by the commit above because it made use of new
calls to smp_dup() to convert a sample to a table key. And since smp_dup()
would only check the SMP_F_CONST flag, it would happily return read-write
samples indicating size=0.

So some tests were added upon smp_dup() return to ensure that the actual
length is smaller than size, but this in fact made things even worse. For
example, the "sni" server directive does some bad stuff on many occasions
because it limits len to size-1 and effectively sets it to -1 and writes
the zero byte before the beginning of the string!

It is therefore obvious that smp_dup() needs to be modified to take this
nature of the chunks into account. It's not enough but is needed. The core
of the problem comes from the fact that smp_dup() is called for 5 distinct
needs which are not always fulfilled :

  1) duplicate a sample to keep a copy of it during some operations
  2) ensure that the sample is rewritable for a converter like upper()
  3) ensure that the sample is terminated with a \0
  4) set a correct size on the sample
  5) grow the sample in case it was extracted from a partial chunk

Case 1 is not used for now, so we can ignore it. Case 2 indicates the wish
to modify the sample, so its R/O status must be removed if any, but there's
no implied requirement that the chunk becomes larger. Case 3 is used when
the sample has to be made compatible with libc's str* functions. There's no
need to make it R/W nor to duplicate it if it is already correct. Case 4
can happen when the sample's size is required (eg: before performing some
changes that must fit in the buffer). Case 5 is more or less similar but
will happen when the sample by be grown but we want to ensure we're not
bound by the current small size.

So the proposal is to have different functions for various operations. One
will ensure a sample is safe for use with str* functions. Another one will
ensure it may be rewritten in place. And smp_dup() will have to perform an
unconditional duplication to guarantee at least #5 above, and implicitly
all other ones.

This patch only modifies smp_dup() to make the duplication unconditional. It
is enough to fix both the "base" sample fetch and the "sni" server directive,
and all use cases in general though not always optimally. More patches will
follow to address them more optimally and even better than the current
situation (eg: avoid a dup just to add a \0 when possible).

The bug comes from an ambiguous design, so its roots are old. 1.6 is affected
and a backport is needed. In 1.5, the function already existed but was only
used by two converters modifying the data in place, so the bug has no effect
there.
2016-08-09 14:03:23 +02:00
Willy Tarreau
d8b8b5329e BUG/MAJOR: compression: initialize avail_in/next_in even during flush
For quite some time, a few users have been experiencing random crashes
when compressing with zlib, from versions 1.2.3 to 1.2.8 included.

Upon thorough investigation in zlib's deflate.c, it appeared obvious
that avail_in and next_in are used during the flush operation and need
to be initialized, while admittedly it's not obvious in the documentation.

By simply forcing both values to -1 it's possible to immediately reproduce
the exact crash that these users have been experiencing :

  (gdb) bt
  #0  0x00007fa73ce10c00 in __memcpy_sse2 () from /lib64/libc.so.6
  #1  0x00007fa73e0c5d49 in ?? () from /lib64/libz.so.1
  #2  0x00007fa73e0c68e0 in ?? () from /lib64/libz.so.1
  #3  0x00007fa73e0c73c7 in deflate () from /lib64/libz.so.1
  #4  0x00000000004dca1c in deflate_flush_or_finish (comp_ctx=0x7b6580, out=0x7fa73e5bd010, flag=2) at src/compression.c:808
  #5  0x00000000004dcb60 in deflate_flush (comp_ctx=0x7b6580, out=0x7fa73e5bd010) at src/compression.c:835
  #6  0x00000000004dbc50 in http_compression_buffer_end (s=0x7c0050, in=0x7c00a8, out=0x78adf0 <tmpbuf.24662>, end=0) at src/compression.c:249
  #7  0x000000000048bb5f in http_response_forward_body (s=0x7c0050, res=0x7c00a0, an_bit=1048576) at src/proto_http.c:7173
  #8  0x00000000004cce54 in process_stream (t=0x7bffd8) at src/stream.c:1939
  #9  0x0000000000427ddf in process_runnable_tasks () at src/task.c:238
  #10 0x0000000000419892 in run_poll_loop () at src/haproxy.c:1573
  #11 0x000000000041a4a5 in main (argc=4, argv=0x7fffcda38348) at src/haproxy.c:1933

Note that for all reports deflate_flush_or_finish() was always involved.

The crash is very hard to reproduce when using regular traffic because it
requires that the combination of avail_in and next_in are inadequate so
that the memcpy() call reads out of bounds. But this can very likely
happen when the input buffer points to an area reused by another stream
when the flush has been interrupted due to a full output buffer. This
also explains why this report is recent, as dynamic buffer allocation
was introduced in 1.6.

Anyway it's not acceptable to call a function with a randomly set input
buffer. The deflate() function explicitly checks for the case where both
avail_in and next_in are null and doesn't use it in this case during a
flush, so this is the best solution.
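
A hedged sketch of the idea using the plain zlib API (not the actual
deflate_flush_or_finish() code):

    #include <stddef.h>
    #include <zlib.h>

    static int flush_compressed(z_stream *strm, unsigned char *out,
                                unsigned int out_len)
    {
        /* no input is consumed during a pure flush: make that explicit so
         * deflate() never reads from a stale next_in pointer.
         */
        strm->next_in   = NULL;
        strm->avail_in  = 0;
        strm->next_out  = out;
        strm->avail_out = out_len;
        return deflate(strm, Z_SYNC_FLUSH);
    }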

Special thanks to Sasha Litvak, James Hartshorn and Paul Bauer for
reporting very useful stack traces which were critical to finding the
root cause of this bug.

This fix must be backported into 1.6 and 1.5, though 1.5 is less likely to
trigger this case given that it keeps its own buffers allocated all along
the session's life.
2016-08-08 16:57:48 +02:00
Baptiste Assmann
39a5f22c36 BUILD: make proto_tcp.c compatible with musl library
The musl library exposes the tcp_info structure only when _GNU_SOURCE is defined.
This is required to build HAProxy on OSes relying on musl, such as Alpine
Linux.
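
A hedged illustration of the constraint (not a HAProxy file): without the
feature-test macro defined first, this would not compile against musl.

    #define _GNU_SOURCE
    #include <netinet/tcp.h>
    #include <stddef.h>

    size_t tcp_info_size(void)
    {
        /* only compiles when struct tcp_info is exposed by the libc */
        return sizeof(struct tcp_info);
    }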
2016-08-08 14:20:23 +02:00
Willy Tarreau
568743a21f BUG/MEDIUM: stream-int: completely detach connection on connect error
Tim Butler reported a troubling issue affecting all versions since 1.5.
When a connection error occurs and a retry is performed on the same server,
the server connection first goes into the turn-around state (SI_ST_TAR) for
one second. During this time, the client may speak and try to push some
data. The tests in place confirm that the stream interface is in a state
<= SI_ST_EST and that a connection exists, so all ingredients are present
to try to perform a send() to forward data. The send() cannot be performed
since the connection's control layer is marked as not ready, but the
polling flags are changed, and due to the remaining error flag present
on the connection, the polling on the FD is disabled in both directions.
But if this FD was reassigned to another connection in the mean time, it
is this FD which is disabled, and it causes a timeout on another connection.

A configuration allowing to reproduce the issue looks like this :

   listen test
        bind :8003
        server s1 127.0.0.1:8001    # this one should be closed

   listen victim
        bind :8002
        server s1 127.0.0.1:8000    # this one should respond slowly (~50ms)

Two parallel injections should be run with short time-outs (100ms). After
some time, some dead connections will appear in listener "victim" due to
their I/Os being disabled by some of the failed transfers on "test"
instance. These ones will only be flushed on time out. A dead connection
looks like this :

  > show sess 0x7dcb70
  0x7dcb70: [07/Aug/2016:08:58:40.120151] id=3771 proto=tcpv4 source=127.0.0.1:34682
    flags=0xce, conn_retries=3, srv_conn=0x7da020, pend_pos=(nil)
    frontend=victim (id=3 mode=tcp), listener=? (id=1) addr=127.0.0.1:8002
    backend=victim (id=3 mode=tcp) addr=127.0.0.1:37736
    server=s1 (id=1) addr=127.0.0.1:8000
    task=0x7dcaf8 (state=0x08 nice=0 calls=2 exp=<NEVER> age=30s)
    si[0]=0x7dcd68 (state=EST flags=0x08 endp0=CONN:0x7e2410 exp=<NEVER>, et=0x000)
    si[1]=0x7dcd88 (state=EST flags=0x18 endp1=CONN:0x7e0cd0 exp=<NEVER>, et=0x000)
    co0=0x7e2410 ctrl=tcpv4 xprt=RAW data=STRM target=LISTENER:0x7d9ea8
        flags=0x0020b306 fd=122 fd.state=25 fd.cache=0 updt=0
    co1=0x7e0cd0 ctrl=tcpv4 xprt=RAW data=STRM target=SERVER:0x7da020
        flags=0x0020b306 fd=93 fd.state=20 fd.cache=0 updt=0
    req=0x7dcb80 (f=0x848000 an=0x0 pipe=0 tofwd=-1 total=129)
        an_exp=<NEVER> rex=<NEVER> wex=<NEVER>
        buf=0x7893c0 data=0x7893d4 o=0 p=0 req.next=0 i=0 size=0
    res=0x7dcbc0 (f=0x80008000 an=0x0 pipe=0 tofwd=-1 total=0)
        an_exp=<NEVER> rex=<NEVER> wex=<NEVER>
        buf=0x7893c0 data=0x7893d4 o=0 p=0 rsp.next=0 i=0 size=0

The solution against this issue is to completely detach the connection upon
error instead of only performing a forced close.

This fix should be backported to 1.6 and 1.5.

Special thanks to Tim who did all the troubleshooting work and provided a
lot of traces allowing to find the root cause of this problem.
2016-08-07 09:21:04 +02:00
Herve COMMOWICK
8dfe863fbf DOC: fix json converter example and error message 2016-08-07 08:08:18 +02:00
Thierry FOURNIER / OZON.IO
b84ae92615 BUG/MEDIUM: lua: some HTTP manipulation functions are called without valid requests
Some HTTP manipulation functions are executed without valid and parsed
requests or responses. This causes a segmentation fault when the executed
code tries to change an empty buffer.

This patch must be backported in the 1.6 version
2016-08-03 00:06:13 +02:00
Thierry Fournier / OZON.IO
6310bef511 MINOR: tcp: Return TCP statistics like RTT and RTT variance
This patch adds 4 new sample fetches which return the RTT of the
established connection and the RTT variance. The established connection
can be between the client and HAProxy, and between HAProxy and the
server. This is very useful for statistics. A great use case is the
estimation of the TCP connection time of the client. Note that the
RTT of the server side is not so interesting because we already have
the connect() time.
2016-07-27 13:47:09 +02:00
Dragan Dosen
db1b6f9ecb BUG/MEDIUM: log: use function "escape_string" instead of "escape_chunk"
In function lf_text_len(), we used escape_chunk() to escape special
characters. There could be a problem if len is greater than the real src
string length (zero-terminated), eg. when calling lf_text_len() from
lf_text().
2016-07-26 15:25:32 +02:00
Dragan Dosen
1a5d06032b MINOR: standard: add function "escape_string"
Similar to "escape_chunk", this function tries to prefix all characters
tagged in the <map> with the <escape> character. The specified <string>
contains the input to be escaped.
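
A hedged sketch of the idea (illustrative prototype only, not the real
escape_string() signature):

    #include <stddef.h>

    static size_t escape_str_sketch(char *dst, size_t dstlen, const char *src,
                                    char escape, const unsigned char map[256])
    {
        size_t n = 0;

        if (!dstlen)
            return 0;
        /* stop on the trailing zero of <src>, unlike the _len variant */
        for (; *src && n + 2 < dstlen; src++) {
            if (map[(unsigned char)*src])
                dst[n++] = escape;       /* prefix flagged bytes */
            dst[n++] = *src;
        }
        dst[n] = '\0';
        return n;
    }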
2016-07-26 15:25:32 +02:00
Willy Tarreau
3146a4cde2 BUG/MINOR: peers: don't count track-sc multiple times on errors
Ruoshan Huang found that the call to session_inc_http_err_ctr() in the
recent http-response patch was not a good idea because it also increments
counters that are already tracked (eg: http-request track-sc or previous
http-response track-sc). Better open-code the update, it's simple.
2016-07-26 15:25:32 +02:00
Frédéric Lécaille
22fc3203db BUG/MINOR: peers: Fix peers data decoding issue
This error led to truncated data after decoding upon receipt.
It's specific to peers v2 and needs to be backported to 1.6.
2016-07-26 14:37:38 +02:00
Ruoshan Huang
e4edc6b628 MEDIUM: http: implement http-response track-sc* directive
This enables tracking of sticky counters from current response. The only
difference from "http-request track-sc" is the <key> sample expression
can only make use of samples in response (eg. res.*, status etc.) and
samples below Layer 6.
2016-07-26 14:31:14 +02:00
Thierry FOURNIER
9bd52d478b BUG/MEDIUM: lua: the function txn_done() from action wrapper can crash
If an action wrapper stops the processing of the transaction
with a txn_done() function, the return code of the action is
"continue". So the "continue" can imply the processing of other actions,
like adding headers. However, the HTTP content is flushed and
a segfault occurs.

This patch adds a flag indicating that the Lua code wants to
stop the processing; this flag is forwarded to the haproxy core,
and other actions are ignored.

Must be backported in 1.6
2016-07-14 16:14:32 +02:00
Thierry FOURNIER
ab00df6cf6 BUG/MEDIUM: lua: the function txn_done() from sample fetches can crash
The function txn_done() ends a transaction. It does not make
sense to call this function from a lua sample-fetch wrapper,
because the role of a sample-fetch is not to terminate a
transaction.

This patch modifies the role of the function txn_done() when it
is called from a sample-fetch wrapper: now it just ends the
execution of the Lua code, like the done() function.

Must be backported in 1.6
2016-07-14 16:14:24 +02:00
Nenad Merdanovic
8ab79420ba BUG/MINOR: Fix endianness issue in DNS header creation code
Alexander Lebedev reported that the response bit is set on SPARC when
DNS queries are sent. This has been tracked down to an endianness issue, so
this patch makes the code portable.
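
A hedged sketch of the portable way to build the header flags (the layout
follows RFC 1035; this is not the exact HAProxy structure):

    #include <stdint.h>
    #include <arpa/inet.h>

    struct dns_hdr_sketch {
        uint16_t id;
        uint16_t flags;      /* QR, opcode, AA, TC, RD, RA, rcode */
        uint16_t qdcount, ancount, nscount, arcount;
    };

    static void set_query_flags(struct dns_hdr_sketch *h)
    {
        /* RD=1, QR=0 encoded in network byte order; poking bits into one
         * byte of a host-order short is what can flip QR on big-endian hosts.
         */
        h->flags = htons(0x0100);
    }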

Signed-off-by: Nenad Merdanovic <nmerdan@anine.io>
2016-07-13 14:47:58 +02:00
Willy Tarreau
eec1d3869d BUG/MEDIUM: dns: fix alignment issues in the DNS response parser
Alexander Lebedev reported that the DNS parser crashes in 1.6 with a bus
error on Sparc when it receives a response. This is obviously caused by
some alignment issues. The issue can also be reproduced on ARMv5 when
setting /proc/cpu/alignment to 4 (which helps debugging).

Two places cause this crash in turn, the first one is when the IP address
from the packet is compared to the current one, and the second place is
when the address is assigned because an unaligned address is passed to
update_server_addr().

This patch modifies these places to properly use memcpy() and memcmp()
to manipulate the unaligned data.
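
The pattern boils down to something like this hedged sketch (not the
actual HAProxy code):

    #include <string.h>
    #include <netinet/in.h>

    static int same_ipv4(const void *unaligned_rec, const struct in_addr *cur)
    {
        struct in_addr tmp;

        memcpy(&tmp, unaligned_rec, sizeof(tmp));   /* never dereference the
                                                       unaligned pointer */
        return memcmp(&tmp, cur, sizeof(tmp)) == 0;
    }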

Nenad Merdanovic found another set of places specific to 1.7 in functions
in_net_ipv4() and in_net_ipv6(), which are used to compare networks. 1.6
has the functions but does not use them. There we perform a temporary copy
to a local variable to fix the problem. The type of the function's argument
is wrong since it's not necessarily aligned, so we change it for a const
void * instead.

This fix must be backported to 1.6. Note that in 1.6 the code is slightly
different, there's no rec[] array, the pointer is used directly from the
buffer.
2016-07-13 12:13:24 +02:00
Remi Gacogne
c7e12637df BUG/MINOR: ssl: fix potential memory leak in ssl_sock_load_dh_params()
Roberto Guimaraes reported that Valgrind complains about a leak
in ssl_get_dh_1024().
This is caused by an oversight in ssl_sock_load_dh_params(),
where local_dh_1024 is always replaced by a new DH object even if
it already holds one. This patch simply checks whether local_dh_1024
is NULL before calling ssl_get_dh_1024().
2016-07-12 11:48:06 +02:00
David Carlier
3015a2eebd CLEANUP: connection: using internal struct to hold source and dest port.
Originally, tcphdr's source and dest fields from Linux were used to get the
source and destination ports, which led to a build issue on BSD OSes.
To avoid network-related portability problems, we just use an internal
struct since we only need those two fields.
2016-07-05 14:43:05 +02:00
Willy Tarreau
90fd35c3a7 Revert "BUG/MINOR: ssl: fix potential memory leak in ssl_sock_load_dh_params()"
This reverts commit 0ea4c23ca7.

Certain very simple confs randomly segfault upon startup with openssl 1.0.2
with this patch, which seems to indicate a use after free. Better drop it
and let valgrind complain about the potential leak.

Also it's worth noting that the man page for SSL_CTX_set_tmp_dh() makes no
mention about whether or not the element should be freed, and the example
provided does not use it either.

This fix should be backported to 1.6 and 1.5 where the patch was just
included.
2016-06-30 20:00:19 +02:00
Hubert Verstraete
831962e3b3 CLEANUP: fixed some usages of realloc leading to memory leak
Changed all the cases where the pointer passed to realloc is overwritten
by the pointer returned by realloc. The new function my_realloc2 has
been used except in function register_name. If register_name fails to
add a new variable because of an "out of memory" error, all the existing
variables remain valid. If we had used my_realloc2, the array of variables
would have been freed.
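
A hedged sketch of the pattern (my_realloc2() follows the same idea, though
the real helper may differ in details):

    #include <stdlib.h>

    static void *realloc_or_free(void *ptr, size_t size)
    {
        void *ret = realloc(ptr, size);

        if (!ret)
            free(ptr);    /* don't leak the old area on failure */
        return ret;
    }

With this, "p = realloc_or_free(p, n);" cannot leak. In register_name(),
however, the old array must survive an allocation failure, which is why a
temporary pointer and plain realloc() are kept there instead.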
2016-06-29 10:45:18 +02:00
Christopher Faulet
a9300a3d5a BUG/MINOR: Rework slightly commit 9962f8fc to clean code and avoid mistakes
In commit 9962f8fc (BUG/MEDIUM: http: unbreak uri/header/url_param hashing), we
take care to update 'msg->sov' value when the parser changes to state
HTTP_MSG_DONE. This works when no filter is used. But, if a filter is used and
if it loops on 'http_end' callback, the following block is evaluated two times
consecutively:

    if (unlikely(!(chn->flags & CF_WROTE_DATA) || msg->sov > 0))
            msg->sov -= ret;

Today, in practice, because this happens when all data are parsed and forwarded,
the second test always fails (after the first update, msg->sov is always lower
than or equal to 0). But it is useless and error prone. So, to avoid misunderstanding,
the code has been slightly changed. Now, in all cases, we try to update msg->sov
only once per iteration.

No backport is needed.
2016-06-28 16:34:50 +02:00
Willy Tarreau
9962f8fc44 BUG/MEDIUM: http: unbreak uri/header/url_param hashing
Vedran Furac reported that "balance uri" doesn't work anymore in recent
1.7-dev versions. Dragan Dosen found that the first faulty commit was
dbe34eb ("MEDIUM: filters/http: Move body parsing of HTTP messages in
dedicated functions"), merged in 1.7-dev2.

After this patch, the hashing is performed on uninitialized data,
indicating that the buffer is not correctly rewound. In fact, all forms
of content-based hashing are broken since the commit above. Upon code
inspection, it appears that the new functions http_msg_forward_chunked_body()
and http_msg_forward_body() forget to rewind the buffer in the success
case, when the parser changes to state HTTP_MSG_DONE. The rewinding code
was reinserted in both functions and the fix was confirmed by two test,
with and without chunking.

No backport is needed.
2016-06-28 11:57:06 +02:00
David Carlier
08d35deada CLEANUP: dumpstats: u64 field is an unsigned type.
The check will always be true, and is thus unnecessary.
2016-06-27 15:27:05 +02:00
Willy Tarreau
29bdb1c7ff BUG/MINOR: http: fix misleading error message for response captures
Kay Fuchs reported that the error message is misleading in response
captures because it suggests that "len" is accepted while it's not.

This needs to be backported to 1.6.
2016-06-24 15:36:34 +02:00
mildis
16aa0153b5 BUG/MINOR: ssl: close ssl key file on error
Explicitly close the FILE opened to read the ssl key file when parsing
fails to find a valid key.

This fix needs to be backported to 1.6.
2016-06-24 15:27:30 +02:00
Willy Tarreau
a58c4359bb BUG/MINOR: srv-state: fix incorrect output of state file
Eric Webster reported that the state file wouldn't reload in 1.6.5
while it used to work in 1.6.4. The issue is that headers are now
missing from the output when a specific backend is dumped since
commit 4c1544d ("BUG/MEDIUM: stats: show servers state may show an
empty or incomplete result"). This patch fixes this by introducing
a dump state.

It must be backported to 1.6.
2016-06-22 14:51:40 +02:00
Christopher Faulet
1eea6d7ba8 BUG/MINOR: filters: Fix HTTP parsing when a filter loops on data forwarding
A filter can choose to loop on data forwarding. When this loop occurs in
the HTTP_MSG_ENDING state, http_forward_data callbacks are called twice because of a
goto on the wrong label.

A filter can also choose to loop at the end of an HTTP message, in the http_end
callback. Here the goto is good but the label is not at the right place. We must
be sure to update the msg->sov value.
2016-06-21 18:53:09 +02:00
Christopher Faulet
55048a498a BUG/MEDIUM: filters: Fix data filtering when data are modified
Filters can alter data during the parsing, i.e. when http_data or tcp_data
callbacks are called. For now, the update must be done by hand. So we must
handle changes in the channel buffers, especially on the number of input bytes
pending (buf->i).
In addition, a filter can choose to switch channel buffers to do its
updates. So, during data filtering, we must always use the right buffer and not
use variable to reference them.

Without this patch, filters cannot safely alter data during the data parsing.
2016-06-21 18:53:08 +02:00
Willy Tarreau
78f8dcb7f0 CLEANUP: external-check: don't block/unblock SIGCHLD when manipulating the list
There's no point in blocking/unblocking sigchld when removing entries
from the list since the code is called asynchronously.

Similarly the blocking/unblocking could be removed from the connect_proc_chk()
function but it happens that at high signal rates, fork() takes twice as much
time to execute as it is regularly interrupted by a signal, so in the end this
signal blocking is beneficial there for performance reasons.
2016-06-21 18:10:51 +02:00
Willy Tarreau
ebc9244059 BUG/MINOR: external-checks: do not unblock undesired signals
The external checks code makes use of block_sigchld() and unblock_sigchld()
to ensure nobody modifies the signals list while they're being manipulated.
It happens that these functions clear the list of blocked signals, so they
can possibly have a side effect if other signals are blocked. For now no
other signal is blocked but it may very well change in the future so rather
correctly use SIG_BLOCK/SIG_UNBLOCK instead of touching unrelated signals.
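
For illustration, blocking only SIGCHLD around the critical section with
sigprocmask() looks roughly like this (a simplified sketch, not the exact
haproxy helpers):

    #include <signal.h>

    static void with_sigchld_blocked(void (*critical)(void))
    {
        sigset_t set;

        sigemptyset(&set);
        sigaddset(&set, SIGCHLD);

        /* add SIGCHLD to the blocked set without clearing other signals */
        sigprocmask(SIG_BLOCK, &set, NULL);

        critical();  /* manipulate the signals list safely here */

        /* remove only SIGCHLD from the blocked set */
        sigprocmask(SIG_UNBLOCK, &set, NULL);
    }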

This fix should be backported to 1.6 for correctness.
2016-06-21 18:10:50 +02:00
Willy Tarreau
48d6bf2e82 BUG/MAJOR: external-checks: use asynchronous signal delivery
There are random segfaults occurring when using external checks. The
reason is that when receiving a SIGCHLD, a call to task_wakeup() is
performed. There are two situations where this causes trouble :
  - the scheduler is in process_running_tasks(), since task_wakeup()
    sets rq_next to NULL, when the former dereferences it to get the
    next pointer, the program crashes ;

  - when another task_wakeup() is being called and during eb_next()
    in process_running_tasks(), because the structure of the run queue
    tree changes while it is being processed.

The solution against this is to use asynchronous signal processing
thanks to the internal signal API. The signal handler is registered,
and upon delivery, the signal is added to the queue and processed
out of any other processing.

This code was stressed at 2500 forks/s and their respective signals
for quite some time and the segfault is now gone.
2016-06-21 18:10:50 +02:00
Willy Tarreau
b7b2478733 BUG/MEDIUM: external-checks: close all FDs right after the fork()
Lukas Erlacher reported an interesting problem : since we don't close
FDs after the fork() upon external checks, any script executed that
writes data on stdout/stderr will possibly send its data to wrong
places, very likely an existing connection.

After some analysis, the problem is even wider. It's not enough to
just close stdin/stdout/stderr, as all sockets are passed to the
sub-process, and as long as they're not closed, they are usable for
whatever mistake can be done. Furthermore with epoll, such FDs will
continue to be reported after a close() as the underlying file is
not closed yet.

CLOEXEC would be an acceptable workaround except that 1) it adds an
extra syscall on the fast path, and 2) we have no control over FDs
created by external libs (eg: openssl using /dev/crypto, libc using
/dev/random, lua using anything else), so in the end we still need
to close them all.

On some BSD systems there's a closefrom() syscall which could be
very useful for this.
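
Where closefrom() is not available, a portable (if brute-force) sketch is to
walk all possible FDs up to the hard limit:

    #include <sys/resource.h>
    #include <unistd.h>

    /* sketch: in the child, after fork(), close every inherited fd above stderr */
    static void close_inherited_fds(void)
    {
        struct rlimit lim;
        int fd;

        if (getrlimit(RLIMIT_NOFILE, &lim) != 0)
            lim.rlim_max = 1024;  /* arbitrary fallback */

        for (fd = 3; fd < (int)lim.rlim_max; fd++)
            close(fd);
    }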

Based on an insightful idea from Simon Horman, we don't close 0/1/2
when we're in verbose mode since they're properly connected to
stdin/stdout/stderr and can become quite useful during debugging
sessions to detect some script output errors or execve() failures.

This fix must be backported to 1.6.
2016-06-21 18:10:50 +02:00
Willy Tarreau
164dd0b6e4 BUG/MINOR: init: ensure that FD limit is raised to the max allowed
When the requested amount of FDs cannot be allocated, setrlimit() fails.
That's bad because if the limit is set to 1024 and we need 10000, we
stay on 1024 while we could possibly raise it to 4096 thanks to rlim_max.
This patch takes care of trying to assign rlim_cur to rlim_max on failure
so that we get as much as possible if we can't get all we need. The case
is particularly visible when starting haproxy as a non-privileged user
and a large maxconn is specified in the configuration.
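
The logic is roughly the following (a simplified sketch, not the exact haproxy
code):

    #include <sys/resource.h>

    /* try to get <wanted> file descriptors, falling back to the hard limit */
    static void raise_fd_limit(rlim_t wanted)
    {
        struct rlimit lim;

        getrlimit(RLIMIT_NOFILE, &lim);
        lim.rlim_cur = wanted;
        if (setrlimit(RLIMIT_NOFILE, &lim) == -1) {
            lim.rlim_cur = lim.rlim_max;  /* take as much as rlim_max allows */
            setrlimit(RLIMIT_NOFILE, &lim);
        }
    }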

Another point of doing this is that it is the only way to allow us to
close inherited FDs upon fork(), ie those between rlim_cur and rlim_max.

This patch may be backported to 1.6 and 1.5.
2016-06-21 18:10:50 +02:00
Willy Tarreau
ef6354719b BUG/MINOR: init: always ensure that global.rlimit_nofile matches actual limits
global.rlimit_nofile contains the max number of file descriptors that
can be allocated, except if the user is not allowed to reach this limit,
where it still contains the initially requested value. It is important
that this value always matches what is really configured so that it is
properly reported in the stats and that we can use it later to close
all FDs without wasting time closing impossible FDs.

This fix may be backported to 1.6 and 1.5.
2016-06-21 18:10:50 +02:00
Bertrand Jacquin
9075968356 MINOR: tcp: add "tcp-request connection expect-netscaler-cip layer4"
This configures the client-facing connection to receive a NetScaler
Client IP insertion protocol header before any byte is read from the
socket. This is equivalent to having the "accept-netscaler-cip" keyword
on the "bind" line, except that using the TCP rule allows the NetScaler CIP
protocol to be accepted only for certain IP address ranges using an ACL.
This is convenient when multiple layers of load balancers are passed
through by traffic coming from public hosts.
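
A hypothetical configuration snippet (the address range is arbitrary):

  frontend fe_cip
    bind :10001
    # only expect the NetScaler CIP header from the internal load balancers
    tcp-request connection expect-netscaler-cip layer4 if { src 10.0.0.0/8 }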
2016-06-20 23:02:47 +02:00
Bertrand Jacquin
93b227db95 MINOR: listener: add the "accept-netscaler-cip" option to the "bind" keyword
When a NetScaler application switch is used as an L3+ switch, information
regarding the original IP and TCP headers is lost as a new TCP
connection is created between the NetScaler and the backend server.

NetScaler provides a feature to insert the original connection data into the
TCP stream so that it can then be consumed by the backend server.

Specifications and documentation from NetScaler:
  https://support.citrix.com/article/CTX205670
  https://www.citrix.com/blogs/2016/04/25/how-to-enable-client-ip-in-tcpip-option-of-netscaler/

When CIP is enabled on the NetScaler, a TCP packet is inserted just after
the TCP handshake. It is composed as follows:

  - CIP magic number : 4 bytes
    Both sender and receiver have to agree on a magic number so that
    they both handle the incoming data as a NetScaler Client IP insertion
    packet.

  - Header length : 4 bytes
    Defines the length of the remaining data.

  - IP header : >= 20 bytes if IPv4, 40 bytes if IPv6
    Contains the header of the last IP packet sent by the client during TCP
    handshake.

  - TCP header : >= 20 bytes
    Contains the header of the last TCP packet sent by the client during TCP
    handshake.
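
For illustration only, the fixed part of this prologue could be sketched as
the following structure (field names are made up here):

    #include <stdint.h>

    /* NetScaler CIP prologue, followed by the client's original IP and TCP
     * headers captured during the handshake */
    struct netscaler_cip {
        uint32_t magic;    /* agreed-upon magic number */
        uint32_t hdr_len;  /* length of the remaining data */
        /* then: original IP header (>= 20 bytes IPv4, 40 bytes IPv6),
         *       original TCP header (>= 20 bytes) */
    };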
2016-06-20 23:02:47 +02:00
Willy Tarreau
24b892f324 BUILD: ssl: fix typo causing a build failure in the multicert patch
I just noticed that SSL wouldn't build anymore since this afternoon's patch :

src/ssl_sock.c: In function 'ssl_sock_load_multi_cert':
src/ssl_sock.c:1982:26: warning: left-hand operand of comma expression has no effect [-Wunused-value]
    for (i = 0; i < fcount, i++)
                          ^
src/ssl_sock.c:1982:31: error: expected ';' before ')' token
    for (i = 0; i < fcount, i++)
                               ^
Makefile:791: recipe for target 'src/ssl_sock.o' failed
make: *** [src/ssl_sock.o] Error 1
2016-06-20 23:02:46 +02:00
Emmanuel Hocdet
5e0e6e409b MINOR: ssl: crt-list parsing factor
LINESIZE and MAX_LINE_ARGS are too low for parsing crt-list.
2016-06-20 17:29:56 +02:00
Emmanuel Hocdet
d294aea605 MEDIUM: ssl: support SNI filters with multicerts
SNI filters used to be ignored with multicerts (eg: those providing
ECDSA and RSA at the same time). This patch makes them work like
other certs.

Note: most of the changes in this patch are due to an extra level of
      indent, read it with "git show -b".
2016-06-20 17:15:17 +02:00
Ruoshan Huang
dd01678a79 BUG/MINOR: fix http-response set-log-level parsing error
`http-response set-log-level` doesn't work, as the config parsing always sets the log level to -1.

http-response set-log-level can't parse the log level correctly,
as the level argument ptr is one byte ahead when passed to get_log_level
2016-06-17 17:57:58 +02:00
Dragan Dosen
db5af61f3c BUG/MINOR: http: url32+src should check cli_conn before using it
In function smp_fetch_url32_src(), it's better to check the value of
cli_conn before we go any further.

This patch needs to be backported to 1.6 and 1.5.
2016-06-16 12:53:25 +02:00
Dragan Dosen
e5f4133b19 BUG/MINOR: http: url32+src should use the big endian version of url32
This is similar to the commit 5ad6e1dc ("BUG/MINOR: http: base32+src
should use the big endian version of base32"). Now we convert url32 to big
endian when building the binary block.

This patch needs to be backported to 1.6 and 1.5.
2016-06-16 12:53:25 +02:00
William Lallemand
72a8a18e89 MEDIUM: dumpstats: make stats_tlskeys_list() yield-aware during tls-keys dump
The previous dump algorithm did not try to yield when the buffer is
full. This is not a problem with the default TLS_TICKETS_NO of 3, but it
can become one if the buffer size is lowered and TLS_TICKETS_NO is
increased.

The index of the latest ticket dumped is now stored to ensure we can
resume the dump after a yield.
2016-06-14 19:42:08 +02:00
William Lallemand
cf9e788790 BUG/MEDIUM: dumpstats: undefined behavior in stats_tlskeys_list()
The function stats_tlskeys_list() can trigger undefined behavior when
called with appctx->st2 == STAT_ST_LIST, since the ref pointer is used
uninitialized.

However this function was using NULL in appctx->ctx.tlskeys.ref as a
flag to dump every ticket from every reference. A real flag,
appctx->ctx.tlskeys.dump_all, is now used for this behavior.

This patch deletes the 'ref' variable and uses appctx->ctx.tlskeys.ref
directly.
2016-06-14 19:41:58 +02:00
Roberto Guimaraes
0ea4c23ca7 BUG/MINOR: ssl: fix potential memory leak in ssl_sock_load_dh_params()
Valgrind reports that the memory allocated in ssl_get_dh_1024() was leaking.
Upon further inspection of openssl code, it seems that SSL_CTX_set_tmp_dh
makes a copy of the data, so calling DH_free afterwards makes sense.
2016-06-12 13:12:32 +02:00
Thierry Fournier
4b788f7d34 BUG/MEDIUM: http: add-header: buffer overwritten
If we use the action "http-request add-header" with a Lua sample-fetch or
converter, and the Lua function calls one of the Lua log functions, the
header name is corrupted: it contains an extract of the last logged data.

This is due to an overwrite of the trash buffer, because its scope is not
respected in the "add-header" function. The scope of the trash buffer must
be limited to the function using it. The build_logline() function can
execute a lot of other functions which can use the trash buffer.

This patch fixes the usage of the trash buffer. It limits the scope of this
global buffer to the local function: we first build the header value using
build_logline(), and then we store the header name.

Thanks to Michael Ezzell for reporting this.

This patch must be backported to the 1.6 version.
2016-06-08 10:34:22 +02:00
Thierry Fournier
53c1a9b7cb BUG/MINOR: http: add-header: header name copied twice
The header name is copied twice into the buffer. The first copy is done by a
printf-like function writing the name and the HTTP separators into the buffer,
and the second is a memcpy. This seems to be inherited from earlier changes.
This patch removes the printf-like copy.

This patch must be backported to the 1.6 and 1.5 versions.
2016-06-08 10:34:07 +02:00
Thierry Fournier
4a53bfdc1d BUG/MEDIUM: lua: converters doesn't work
The number of arguments pushed on the stack is wrong, so we try to execute a
function outside of the stack. This function is always a nil pointer, so the
following message is displayed:

   Lua converter 'testconv': runtime error: attempt to call a nil value.

Thanks to Michael Ezzell for reporting this.

This patch must be backported to the 1.6 version.
2016-06-08 10:33:27 +02:00
Thierry Fournier
6fc340ff07 BUG/MEDIUM: sticktables: segfault in some configuration error cases
When a stick table is tracked, and another one is used later in the
configuration, a segfault occurs.

The function "smp_create_src_stkctr" can return a NULL value, and
its return value is not tested, so another function tries to dereference
a NULL pointer. This patch simply adds a check for the NULL
pointer.

The problem is reproduced with this configuration:

   listen www
       mode http
       bind :12345
       tcp-request content track-sc0 src table IPv4
       http-request allow if { sc0_inc_gpc0(IPv6) gt 0 }
       server dummy 127.0.0.1:80
   backend IPv4
       stick-table type ip size 10 expire 60s store gpc0
   backend IPv6
       stick-table type ipv6 size 10 expire 60s store gpc0

Thanks to kabefuna@gmail.com for the bug report.

This patch must be backported to the 1.6 and 1.5 versions.
2016-06-07 11:05:23 +02:00
William Lallemand
13e9b0c9ed MEDIUM: tcp/http: new set-dst/set-dst-port actions
Like 'set-src' and 'set-src-port' but for destination address and port.
It's available in 'tcp-request connection' and 'http-request' actions.
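
A hedged example (the header name and port are arbitrary):

  frontend fe
    bind :8000
    # send the connection to the address advertised by a trusted header
    http-request set-dst hdr(x-dst-addr)
    http-request set-dst-port int(4000)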
2016-06-01 11:44:11 +02:00
William Lallemand
44be6405a1 MEDIUM: tcp/http: add 'set-src-port' action
set-src-port works the same way as 'set-src' but for the source port.
It's available in 'tcp-request connection' and 'http-request' actions.
2016-06-01 11:44:11 +02:00
William Lallemand
01252ed53c MINOR: set the CO_FL_ADDR_FROM_SET flags with 'set-src'
When the 'set-src' action is used, the CO_FL_ADDR_FROM_SET flag wasn't set,
which could lead to the address being rewritten.
2016-06-01 11:44:11 +02:00
William Lallemand
2e785f23cb MEDIUM: tcp: add 'set-src' to 'tcp-request connection'
The 'set-src' action was not available for tcp actions. The action code
has been converted into a function in proto_tcp.c to be used for both
'http-request' and 'tcp-request connection' actions.

Both http and tcp keywords are registered in proto_tcp.c
2016-06-01 11:44:11 +02:00
William Lallemand
1d0b36aa66 MEDIUM: dumpstats: 'show tls-keys' is now able to show secrets
You can now dump the tls-ticket-keys from the unix socket using the
file ID prefixed by a '#', or using a '*' to dump everything.
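
A possible session on the admin socket (the socket path is an example):

  $ echo "show tls-keys"    | socat stdio /var/run/haproxy.sock   # list references
  $ echo "show tls-keys #1" | socat stdio /var/run/haproxy.sock   # dump secrets of file ID 1
  $ echo "show tls-keys *"  | socat stdio /var/run/haproxy.sock   # dump everything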
2016-05-31 20:30:01 +02:00
William Lallemand
7bba4ccfb6 BUG/MEDIUM: fix risk of segfault with "show tls-keys"
The reference to the tls_keys_ref was not deleted from the
tlskeys_reference linked list.

When SSL is misconfigured, it can lead to an access to freed memory
during a "show tls-keys" on the admin socket.
2016-05-31 20:30:01 +02:00
Cyril Bonté
d55bd7a6a9 BUG/MEDIUM: stats: show servers state may show a server from another backend
Olivier Doucet reported that "show servers state" was producing an invalid
output with some configurations where nbproc > 1.

Indeed, commit 76a99784f4 fixed some issues but unfortunately introduced a
regression when a backend is bound to the same process as the stats socket
while a previous backend is bound to another one.

For example :
  global
    daemon
    nbproc 2
    stats socket /var/run/haproxy-1.sock process 1
    stats socket /var/run/haproxy-2.sock process 2

  listen proc1
    bind 127.0.0.1:9001
    bind-process 1
    server WRONG 127.0.0.1:80

  listen proc2
    bind 127.0.0.1:9002
    bind-process 2
    server RIGHT 127.0.0.1:80

Requesting "show servers state" on /var/run/haproxy-2.sock was producing a line
like :
3 proc2 1 WRONG 127.0.0.1 2 0 1 1 4 1 0 2 0 0 0 0

whereas the line below was expected:
3 proc2 1 RIGHT 127.0.0.1 2 0 1 1 5 1 0 2 0 0 0 0

This was caused by initializing the server loop too early, before the
bind_proc filtering, whereas it should be done after.

This fix should be backported to 1.6, where the regression has unfortunately
been backported.
2016-05-27 00:47:10 +02:00
Willy Tarreau
659fbf0230 BUG/MEDIUM: config: fix multiple declaration of section parsers
Ben Cabot reported that after commit 5e4261b ("CLEANUP: config:
detect double registration of a config section") recently introduced
in 1.7-dev, it's not possible anymore to load multiple configuration
files. Bryan Talbot provided a simple reproducer to exhibit the issue.

It turns out that function readcfgfile() registers new parsers for
section keywords for each new file. In addition to being useless, this
has the negative effect of wasting memory and slowing down the config
parser as the number of configuration files increases.

This fix only needs to be backported if/where the commit above is
backported.
2016-05-26 17:59:28 +02:00
Willy Tarreau
5f6e9054b9 BUILD: fix build on Solaris 11
htonll()/ntohll() already exist on Solaris 11 with a different declaration,
causing a build error as reported by Jonathan Fisher. They used to exist on
OSX with a #define which allowed us to detect them. It was a bad idea to give
these functions a name subject to conflicts like this. Simply rename them
my_htonll()/my_ntohll() to definitely get rid of the conflict.

This patch must be backported to 1.6.
2016-05-26 07:15:57 +02:00
Willy Tarreau
2d17db589b MINOR: stick-table: change all stick-table converters' inputs to SMP_T_ANY
The stick-table converters used to take a string on input because it
was the only type that could be casted to from any other type. This is
inefficient and possibly inaccurate sometimes. For example in order to
look up an IP address, it must first be converted to a string then
converted back to an IP address.

We've had SMP_T_ANY introduced long ago in 1.6, but unfortunately it
was not propagated to these converters, so let's do it now.
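
For instance, with this change an IP sample can feed a table converter
directly (the table and threshold below are only an example):

  backend st_src
    stick-table type ip size 100k expire 10m store conn_rate(10s)

  frontend fe
    bind :8080
    tcp-request connection track-sc0 src table st_src
    # 'src' is looked up as an IP key, without a string round-trip
    tcp-request connection reject if { src,table_conn_rate(st_src) gt 50 }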

It's important to note that a few direct type conversions which already
would not make any sense are not possible (for example, converting a
boolean to an IP address or an HTTP method to an integer). While this
would have caused the lookup to be performed on the wrong key, now the
lookup will fail and the converter will return no data. While there
should not be any case where this happens, it's probably best to avoid
backporting this change before a longer observation period.
2016-05-25 17:20:59 +02:00
Willy Tarreau
f0c730a0ac BUG/MEDIUM: stick-tables: fix breakage in table converters
Baptiste reported that the table_conn_rate() converter would always
return zero in 1.6.5. In fact, commit bc8c404 ("MAJOR: stick-tables:
use sample types in place of dedicated types") broke all stick-table
converters because smp_to_stkey() now returns a pointer to the sample
instead of holding a copy of the key, and the converters used to
reinitialize the sample prior to performing the lookup. Only
"in_table()" continued to work.

The construct is still fragile, so some comments were added to a few
functions to clarify their impacts. It's also worth noting that there
is no point anymore in forcing these converters to take a string on
input, but that will be changed in another commit.

The bug was introduced in 1.6-dev4, this fix must be backported to 1.6.
2016-05-25 17:13:48 +02:00
Willy Tarreau
58727ec088 BUG/MAJOR: http: fix breakage of "reqdeny" causing random crashes
Commit 108b1dd ("MEDIUM: http: configurable http result codes for
http-request deny") introduced in 1.6-dev2 was incomplete. It introduced
a new field "rule_deny_status" into struct http_txn, which is filled only
by actions "http-request deny" and "http-request tarpit". It's then used
in the deny code path to emit the proper error message, but is used
uninitialized when the deny comes from a "reqdeny" rule, causing random
behaviours ranging from returning a 200 or an empty response to crashing
the process. Often upon startup only a 200 was returned, but once the fields
are used the crash happens. This can be sped up using -dM.

There's no need at all for storing this status in the http_txn struct
anyway since it's used immediately after being set. Let's store it in
a temporary variable instead which is passed as an argument to function
http_req_get_intercept_rule().

As an extra benefit, removing it from struct http_txn reduced the size
of this struct by 8 bytes.

This fix must be backported to 1.6 where the bug was detected. Special
thanks to Falco Schmutz for his detailed report including an exploitable
core and a reproducer.
2016-05-25 16:23:59 +02:00
Vincent Bernat
6e46ff11e9 BUG/MINOR: fix listening IP address storage for frontends (cont)
Commit 6e6158 was incomplete. There was an additional aggregate copy
that may trigger a similar case in the future.
2016-05-19 21:53:10 +02:00
Vincent Bernat
6e61589573 BUG/MAJOR: fix listening IP address storage for frontends
When compiled with GCC 6, the IP address specified for a frontend was
ignored and HAProxy was listening on all addresses instead. This is
caused by an incomplete copy of a "struct sockaddr_storage".

With the GNU Libc, "struct sockaddr_storage" is defined as this:

    struct sockaddr_storage
      {
        sa_family_t ss_family;
        unsigned long int __ss_align;
        char __ss_padding[(128 - (2 * sizeof (unsigned long int)))];
      };

Doing an aggregate copy (ss1 = ss2) is different from using memcpy():
only members of the aggregate have to be copied. Notably, padding may or
may not be copied. In GCC 6, some optimizations use this fact and if a
"struct sockaddr_storage" contains a "struct sockaddr_in", the port and
the address are part of the padding (between sa_family and __ss_align)
and may not be copied over.

Therefore, we replace any aggregate copy by a memcpy(). There is another
place using the same pattern. We also fix a function receiving a "struct
sockaddr_storage" by copy instead of by reference. Since it only needs a
read-only copy, the function is converted to request a reference.
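
In short, the pattern changes from an aggregate copy to an explicit memcpy(),
as in this sketch:

    #include <string.h>
    #include <sys/socket.h>

    static void copy_addr(struct sockaddr_storage *to,
                          const struct sockaddr_storage *from)
    {
        /* an aggregate copy (*to = *from) may skip padding bytes, which on
         * the GNU libc is where the port and address of an embedded
         * sockaddr_in live; memcpy() copies every byte */
        memcpy(to, from, sizeof(*to));
    }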
2016-05-19 10:43:24 +02:00
Maxime de Roucy
e3841395ad BUG/MEDIUM: init: don't use environment locale
This patch removes setlocale from the main function. It was introduced
by commit 379d9c7 ("MEDIUM: init: allow directory as argument of -f")
in 1.7-dev a few commits ago after a discussion on the mailing list.

Some regex may have different behaviours depending on the
locale. Some LUA scripts may change their behaviour too
(http://lua-users.org/wiki/LuaLocales).

Without this patch (haproxy is using setlocale) :

	$ cat locale.cfg
	defaults
	  mode http

	frontend test
	  bind :9000
	  mode http
	  use_backend testbk if { hdr_reg(X-Test) ^\w+$ }

	backend testbk
	  mode http
	  server s 127.0.0.1:80

	$ LANG=fr_FR.UTF-8 ./haproxy -f locale.cfg
	$ curl -i -H "X-Test: échec" localhost:9000
	HTTP/1.1 200 OK
	...

	$ LANG=C ./haproxy -f locale.cfg
	$ curl -i -H "X-Test: échec" localhost:9000
	HTTP/1.0 503 Service Unavailable
	...
2016-05-19 07:19:19 +02:00
Christopher Faulet
3a394fa7cd MEDIUM: filters: Add pre and post analyzer callbacks
'channel_analyze' callback has been removed. Now, there are 2 callbacks to
surround calls to analyzers:

  * channel_pre_analyze: Called BEFORE all filterable analyzers. It can be
    called many times for the same analyzer, once at each loop until the
    analyzer finishes its processing. This callback is resumable, it returns a
    negative value if an error occurs, 0 if it needs to wait, any other value
    otherwise.

  * channel_post_analyze: Called AFTER all filterable analyzers. Here, AFTER
    means when an analyzer finishes its processing. This callback is NOT
    resumable, it returns a negative value if an error occurs, any other value
    otherwise.

Pre and post analyzer callbacks are not automatically called. 'pre_analyzers'
and 'post_analyzers' bit fields in the filter structure must be set to the right
value using AN_* flags (see include/types/channel.h).

The flag AN_RES_ALL has been added (AN_REQ_ALL already exists) to ease the life
of filter developers. AN_REQ_ALL and AN_RES_ALL include all filterable
analyzers.
2016-05-18 15:11:54 +02:00
Christopher Faulet
a9215b7206 MINOR: filters: Simplify calls to analyzers using 2 new macros
Now, to call an analyzer in the 'process_stream' function, we should use the
FLT_ANALYZE or ANALYZE macros, depending on whether this is a filterable
analyzer or not.
2016-05-18 15:11:54 +02:00
Christopher Faulet
1339d744d5 MEDIUM: filters: Move HTTP headers filtering in its own callback
Instead of calling 'channel_analyze' callback with the flag AN_FLT_HTTP_HDRS,
now we use the new callback 'http_headers'. This change is done because
'channel_analyze' callback will be removed in a next commit.
2016-05-18 15:11:54 +02:00
Willy Tarreau
27b639d37f MINOR: log: add the %Td log-format specifier
As suggested by Pavlos, it's too bad that we didn't have a %Td log
format tag given that there are a few mentions of Td corresponding
to the data transmission time already in the doc, so this is now done.
Just like the other specifiers, we report -1 if the connection failed
before reaching the data transmission state.
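
A possible frontend usage (the surrounding variables are the usual ones and
only shown as an example):

  frontend fe
    bind :8080
    mode http
    # %Td: data transmission time, or -1 if that state was never reached
    log-format "%ci:%cp [%t] %ft %b/%s %Tq/%Tr/%Td/%Tt %ST %B"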
2016-05-17 18:04:30 +02:00
Willy Tarreau
5e4261b0e4 CLEANUP: config: detect double registration of a config section
In an effort to make the config parser more robust, we should validate
that everything we register is not already registered. Most cfg_register_*
functions unfortunately return void and just perform a LIST_ADDQ(), so they
will have to change for this. At least cfg_register_section() does perform
a bit of checks and is easy to check for such errors, so let's start with
this one. Future patches will definitely have to focus on the remaining
functions and ensure unicity of all config parsers.
2016-05-17 16:18:31 +02:00
Maxime de Roucy
379d9c7c14 MEDIUM: init: allow directory as argument of -f
If the -f argument is a directory, add all the files (and only files) it
contains to the config files list.
These files are added in lexical order (respecting LC_COLLATE).
Only files with the ".cfg" extension are added.
Only non-hidden files (not prefixed with ".") are added.
Symlinks are followed.
The -f order is still respected:

        $ tree -a rootdir
        rootdir
        |-- dir1
        |   |-- .6.cfg
        |   |-- 1.cfg
        |   |-- 2
        |   |-- 3.cfg
        |   |-- 4.cfg -> 1.cfg
        |   |-- 5 -> 1.cfg
        |   |-- 7.cfg -> .
        |   `-- dir4
        |       `-- 8.cfg
        |-- dir2
        |   |-- 10.cfg
        |   `-- 9.cfg
        |-- dir3
        |   `-- 11.cfg
        |-- link -> dir3/
        |-- root1
        |-- root2
        `-- root3

        $ ./haproxy -C rootdir -f root2 -f dir2 -f root3 -f dir1 \
                               -f link -f root1
        root2
        dir2/10.cfg
        dir2/9.cfg
        root3
        dir1/1.cfg
        dir1/3.cfg
        dir1/4.cfg
        link/11.cfg
        root1

This can be useful on systemd where you can't change the haproxy
command line options on service reload.
2016-05-14 07:09:33 +02:00
Maxime de Roucy
0f503925f0 MEDIUM: init: use list_append_word in haproxy.c
replace LIST_ADDQ with list_append_word
2016-05-14 00:00:54 +02:00
Maxime de Roucy
dc88785f9c MINOR: add list_append_word function
int list_append_word(struct list *li, const char *str, char **err)

Append a copy of string <str> (inside a wordlist) at the end of
the list <li>.
The caller is responsible for freeing the <err> and <str> copy memory
area using free().

On failure : return 0 and <err> filled with an error message.
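
A minimal usage sketch, assuming haproxy's internal headers (mini-clist.h and
the declaration of list_append_word) are included:

    struct list cfg_files = LIST_HEAD_INIT(cfg_files);
    char *err = NULL;

    if (!list_append_word(&cfg_files, "/etc/haproxy/haproxy.cfg", &err)) {
        fprintf(stderr, "config list error: %s\n", err);
        free(err);
    }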
2016-05-14 00:00:54 +02:00
Willy Tarreau
7d1b48fae0 [RELEASE] Released version 1.7-dev3
Released version 1.7-dev3 with the following main changes :
    - MINOR: sample: Moves ARGS underlying type from 32 to 64 bits.
    - BUG/MINOR: log: Don't use strftime() which can clobber timezone if chrooted
    - BUILD: namespaces: fix a potential build warning in namespaces.c
    - MINOR: da: Using ARG12 macro for the sample fetch and the convertor.
    - DOC: add encoding to json converter example
    - BUG/MINOR: conf: "listener id" expects integer, but its not checked
    - DOC: Clarify tunes.vars.xxx-max-size settings
    - CLEANUP: chunk: adding NULL check to chunk_dup allocation.
    - CLEANUP: connection: fix double negation on memcmp()
    - BUG/MEDIUM: peers: fix incorrect age in frequency counters
    - BUG/MEDIUM: Fix RFC5077 resumption when more than TLS_TICKETS_NO are present
    - BUG/MAJOR: Fix crash in http_get_fhdr with exactly MAX_HDR_HISTORY headers
    - BUG/MINOR: lua: can't load external libraries
    - BUG/MINOR: prevent the dump of uninitialized vars
    - CLEANUP: map: it seems that the map were planed to be chained
    - MINOR: lua: move class registration facilities
    - MINOR: lua: remove some useless checks
    - CLEANUP: lua: Remove two same functions
    - MINOR: lua: refactor the Lua object registration
    - MINOR: lua: precise message when a critical error is catched
    - MINOR: lua: post initialization
    - MINOR: lua: Add internal function which strip spaces
    - MINOR: lua: convert field to lua type
    - DOC: "addr" parameter applies to both health and agent checks
    - DOC: timeout client: pointers to timeout http-request
    - DOC: typo on stick-store response
    - DOC: stick-table: amend paragraph blaming the loss of table upon reload
    - DOC: typo: ACL subdir match
    - DOC: typo: maxconn paragraph is wrong due to a wrong buffer size
    - DOC: regsub: parser limitation about the inability to use closing square brackets
    - DOC: typo: req.uri is now replaced by capture.req.uri
    - DOC: name set-gpt0 mismatch with the expected keyword
    - MINOR: http: sample fetch which returns unique-id
    - MINOR: dumpstats: extract stats fields enum and names
    - MINOR: dumpstats: split stats_dump_info_to_buffer() in two parts
    - MINOR: dumpstats: split stats_dump_fe_stats() in two parts
    - MINOR: dumpstats: split stats_dump_li_stats() in two parts
    - MINOR: dumpstats: split stats_dump_sv_stats() in two parts
    - MINOR: dumpstats: split stats_dump_be_stats() in two parts
    - MINOR: lua: dump general info
    - MINOR: lua: add class proxy
    - MINOR: lua: add class server
    - MINOR: lua: add class listener
    - BUG/MEDIUM: stick-tables: some sample-fetch doesn't work in the connection state.
    - MEDIUM: proxy: use dynamic allocation for error dumps
    - CLEANUP: remove unneeded casts
    - CLEANUP: uniformize last argument of malloc/calloc
    - DOC: fix "needed" typo
    - BUG/MINOR: dumpstats: fix write to global chunk
    - BUG/MINOR: dns: inapropriate way out after a resolution timeout
    - BUG/MINOR: dns: trigger a DNS query type change on resolution timeout
    - CLEANUP: proto_http: few corrections for gcc warnings.
    - BUG/MINOR: DNS: resolution structure change
    - BUG/MINOR : allow to log cookie for tarpit and denied request
    - BUG/MEDIUM: ssl: rewind the BIO when reading certificates
    - OPTIM/MINOR: session: abort if possible before connecting to the backend
    - DOC: http: rename the unique-id sample and add the documentation
    - BUG/MEDIUM: trace.c: rdtsc() is defined in two files
    - BUG/MEDIUM: channel: fix miscalculation of available buffer space (2nd try)
    - BUG/MINOR: server: risk of over reading the pref_net array.
    - BUG/MINOR: cfgparse: couple of small memory leaks.
    - BUG/MEDIUM: sample: initialize the pointer before parse_binary call.
    - DOC: fix discrepancy in the example for http-request redirect
    - MINOR: acl: Add predefined METH_DELETE, METH_PUT
    - CLEANUP: .gitignore cleanup
    - DOC: Clarify IPv4 address / mask notation rules
    - CLEANUP: fix inconsistency between fd->iocb, proto->accept and accept()
    - BUG/MEDIUM: fix maxaccept computation on per-process listeners
    - BUG/MINOR: listener: stop unbound listeners on startup
    - BUG/MINOR: fix maxaccept computation according to the frontend process range
    - TESTS: add blocksig.c to run tests with all signals blocked
    - MEDIUM: unblock signals on startup.
    - MINOR: filters: Print the list of existing filters during HA startup
    - MINOR: filters: Typo in an error message
    - MINOR: filters: Filters must define the callbacks struct during config parsing
    - DOC: filters: Add filters documentation
    - BUG/MEDIUM: channel: don't allow to overwrite the reserve until connected
    - BUG/MEDIUM: channel: incorrect polling condition may delay event delivery
    - BUG/MEDIUM: channel: fix miscalculation of available buffer space (3rd try)
    - BUG/MEDIUM: log: fix risk of segfault when logging HTTP fields in TCP mode
    - MINOR: Add ability for agent-check to set server maxconn
    - CLEANUP: Use server_parse_maxconn_change_request for maxconn CLI updates
    - MINOR: filters: add opaque data
    - BUG/MEDIUM: lua: protects the upper boundary of the argument list for converters/fetches.
    - MINOR: lua: migrate the argument mask to 64 bits type.
    - BUG/MINOR: dumpstats: Fix the "Total bytes saved" counter in backends stats
    - BUG/MINOR: log: fix a typo that would cause %HP to log <BADREQ>
    - BUG/MEDIUM: http: fix incorrect reporting of server errors
    - MINOR: channel: add new function channel_congested()
    - BUG/MEDIUM: http: fix risk of CPU spikes with pipelined requests from dead client
    - BUG/MAJOR: channel: fix miscalculation of available buffer space (4th try)
    - BUG/MEDIUM: stream: ensure the SI_FL_DONT_WAKE flag is properly cleared
    - BUG/MEDIUM: channel: fix inconsistent handling of 4GB-1 transfers
    - BUG/MEDIUM: stats: show servers state may show an empty or incomplete result
    - BUG/MEDIUM: stats: show backend may show an empty or incomplete result
    - MINOR: stats: fix typo in help messages
    - MINOR: stats: show stat resolvers missing in the help message
    - BUG/MINOR: dns: fix DNS header definition
    - BUG/MEDIUM: dns: fix alignment issue when building DNS queries
    - CLEANUP: don't ignore scripts in .gitignore
    - BUILD: add a few release and backport scripts in scripts/
2016-05-10 15:36:58 +02:00
Vincent Bernat
9b7125cde9 BUG/MEDIUM: dns: fix alignment issue when building DNS queries
On some architectures, unaligned access is not authorized. On most
architectures, it is just slower. Therefore, we have to use memcpy()
when an unaligned access is needed, specifically when writing the qinfo.

Also remove the unaligned access when reading answer count when reading
the answer. It's likely that this instruction was optimized away by the
compiler since it is unneeded. Add a comment to explain why we use 7 as
an offset instead of 6. Not an unaligned offset since "resp" is
"unsigned char", then promoted to int.
2016-05-09 11:01:08 +02:00
Cyril Bonté
a8322f96f1 MINOR: stats: show stat resolvers missing in the help message
The help message provided by the "help" command on the unix socket didn't
provide "show stat resolvers" in the list of supported commands.

This patch should be backported to 1.6
2016-05-06 12:28:43 +02:00
Cyril Bonté
db98eb3b5c MINOR: stats: fix typo in help messages
The word "available" was mistakenly written "avalaible" in help messages and
comments.

This patch should be backported to 1.5 and 1.6
2016-05-06 12:28:43 +02:00
Cyril Bonté
6ca9e01ab2 BUG/MEDIUM: stats: show backend may show an empty or incomplete result
This is the same issue as "show servers state", where the result is incorrect
if the data can't fit in one buffer. A similar fix is applied, to restart
the data processing where it stopped as buffers are sent to the client.

This fix should be backported to haproxy 1.6
2016-05-06 12:28:43 +02:00
Cyril Bonté
76a99784f4 BUG/MEDIUM: stats: show servers state may show an empty or incomplete result
It was reported that the unix socket command "show servers state" returned an
empty response while "show servers state <backend>" worked.
In fact, both cases can reproduce the issue. It happens when the response can't
fit in one buffer.

The fix consists in processing the response in several steps, as it is done in
some others commands, by restarting where it was stopped after the buffer is
sent to the client.

This fix should be backported to haproxy 1.6
2016-05-06 12:28:43 +02:00
Willy Tarreau
8bf242b764 BUG/MEDIUM: channel: fix inconsistent handling of 4GB-1 transfers
In 1.4-dev3, commit 31971e5 ("[MEDIUM] add support for infinite forwarding")
made it possible to configure the lower layer to forward data indefinitely
by setting the forward size to CHN_INFINITE_FORWARD (4GB-1). By then larger
chunk sizes were not supported so there was no confusion in the usage of the
function.

Since 1.5 we support 64-bit content-lengths and chunk sizes and the function
has grown to support 64-bit arguments, though it still limits a single pass
to 32-bit quantities (what fit in the channel's to_forward field). The issue
now becomes that a 4GB-1 content-length can be confused with infinite
forwarding (in fact it's 4GB-1+what was already in the buffer). It causes a
visible effect when transferring this exact size because the transfer rate
is lower than with other sizes due in part to the disabling of the Nagle
algorithm on the sendto() call.

In theory with keep-alive it should prevent a second request from being
processed after such a transfer, but since the analysers are still present,
the forwarding analyser properly counts down the remaining size to transfer
and ultimately the transaction gets correctly reset so there is no visible
effect.

Since the root cause of the issue is an API problem (lack of distinction
between a real valid length and a magic value), this patch modifies the API
to have a new dedicated function called channel_forward_forever() to program
a permanent forwarding. The existing function __channel_forward() was modified
to properly take care of the requested sizes and ensure it 1) never overflows
and 2) never reaches CHN_INFINITE_FORWARD by accident.

It is worth noting that the function used to have a bug causing a 2GB
forward to be scheduled if it was called with less data than what is present
in buf->i. Fortunately this bug couldn't be triggered with existing code.

This fix should be backported to 1.6 and 1.5. While it also theoretically
affects 1.4, it's better not to backport it there, as the risk of breaking
large object transfers due to significant API differences is high, compared
to the fact that the largest supported objects (4GB-1) are just slower to
transfer.
2016-05-04 15:26:37 +02:00
Willy Tarreau
5fb04711f0 BUG/MEDIUM: stream: ensure the SI_FL_DONT_WAKE flag is properly cleared
The previous buffer space bug has revealed an issue causing some stalled
connections to remain orphaned forever, preventing an old process from
dying. The issue is that once in a while a task may be woken up because
a disabled expiration timer has been reached despite no timeout being
reached. In this case we exit very early but the SI_FL_DONT_WAKE flag
wasn't cleared, resulting in new events not waking the task up. It may
be one of the reasons why a few people have already observed some peers
connections stuck in CLOSE_WAIT state.

This bug was introduced in 1.5-dev13 by commit 798f432 ("OPTIM: session:
don't process the whole session when only timers need a refresh"), so
the fix must be backported to 1.6 and 1.5.
2016-05-04 10:18:37 +02:00
Willy Tarreau
3de5bd603c BUG/MEDIUM: http: fix risk of CPU spikes with pipelined requests from dead client
Since client-side HTTP keep-alive was introduced in 1.4-dev, a good
number of corner cases had to be dealt with. One of them remained and
caused some occasional CPU spikes that Cyril Bonté had the opportunity
to observe from time to time and even recently to capture for deeper
analysis.

What happens is the following scenario :
  1) a client sends a first request which causes the server to respond
     using chunked encoding

  2) the server starts to respond with a large response that doesn't
     fit into a single buffer

  3) the response buffer fills up with the response

  4) the client reads the response while it is being sent by the server.

  5) haproxy eventually receives the end of the response from the server,
     which occupies the whole response buffer (must at least override the
     reserve), parses it and prepares to receive a second request while
     sending the last data blocks to the client. It then reinitializes the
     whole http_txn and calls http_wait_for_request(), which arms the
     http-request timeout.

  6) at this exact moment the client emits a second request, probably
     asking for an object referenced in the first page while parsing it
     on the fly, and silently disappears from the internet (internet
     access cut or software having trouble with pipelined request).

  7) the second request arrives into the request buffer and the response
     data stall in the response buffer, preventing the reserve from being
     used for anything else.

  8) haproxy calls http_wait_for_request() to parse the next request which
     has just arrived, but since it sees the response buffer is full, it
     refrains from doing so and waits for some data to leave or a timeout
     to strike.

  9) the http-request timeout strikes, causing http_wait_for_request() to
     be called again, and the function immediately returns since it cannot
     even produce an error message, and the loop is maintained here.

 10) the client timeout strikes, aborting the loop.

At first glance a check for timeout would be needed *before* considering
the buffer in http_wait_for_request(), but the issue is not there in fact,
because when doing so we see in the logs a client timeout error while
waiting for a request, which is wrong. The real issue is that we must not
consider the first transaction terminated until at least the reserve is
released so that http_wait_for_request() has no problem starting to process
a new request. This allows the first response to be reported in timeout and
not the second request. A more restrictive control could consist in not
considering the request complete until the response buffer has no more
outgoing data but this brings no added value and needlessly increases the
number of wake-ups when dealing with pipelining.

Note that the same issue exists with the request, we must ensure that any
POST data are finished being forwarded to the server before accepting a new
request. This case is much harder to trigger however as servers rarely
disappear and if they do so, they impact all their sessions at once.

No specific reproducer is usable because the bug is very hard to reproduce
and depends on the system as well with settings varying across reboots. On
a test machine, socket buffers were reduced to 4096 (tcp_rmem and tcp_wmem),
haproxy's buffers were set to 16384 and tune.maxrewrite to 12288. The proxy
must work in http-server-close mode, with a request timeout smaller than the
client timeout. The test is run like this :

  $ (printf "GET /15000 HTTP/1.1\r\n\r\n"; usleep 100000; \
     printf "GET / HTTP/1.0\r\n\r\n"; sleep 15) | nc6 --send-only 0 8002

The server returns chunks of the requested size (15000 bytes here, but
78000 in a previous test was the only working value). Strace must show the
last recvfrom() succeed and the last sendto() being shorter than desired or
better, not being called.

This fix must be backported to 1.6, 1.5 and 1.4. It was made in a way that
should make it possible to backport it. It depends on channel_congested()
which also needs to be backported. Further cleanup of http_wait_for_request()
could be made given that some of the tests are now useless.
2016-05-02 16:39:22 +02:00
Willy Tarreau
f51d03cf14 BUG/MEDIUM: http: fix incorrect reporting of server errors
Commit dbe34eb ("MEDIUM: filters/http: Move body parsing of HTTP
messages in dedicated functions") introduced a bug in function
http_response_forward_body() by getting rid of the while(1) loop.
The code immediately following the loop was only reachable on missing
data but now it's also reachable under normal conditions, which used
to be dealt with by the skip_resync_state label returning zero. The
side effect is that in http_server_close situations, the channel's
SHUTR flag is seen and considered as a server error which is reported
if any other error happens (eg: client timeout).

This bug is specific to 1.7, no backport is needed.
2016-05-02 16:39:22 +02:00
Nenad Merdanovic
54e439f0b4 BUG/MINOR: log: fix a typo that would cause %HP to log <BADREQ>
Typo was introduced in 57bc891 ("BUG/MEDIUM: log: fix risk of
segfault when logging HTTP fields in TCP mode") which inverted the
condition in the test and caused <BADREQ> to be logged when using
%HP.

Signed-off-by: Nenad Merdanovic <nmerdan@anine.io>
2016-04-29 07:28:44 +02:00
Christopher Faulet
0b64f62a69 BUG/MINOR: dumpstats: Fix the "Total bytes saved" counter in backends stats
Instead of subtracting ST_F_COMP_OUT (Compression out) from ST_F_COMP_IN
(Compression in) in backend stats, ST_F_COMP_BYP (Compression bypass) was used.
2016-04-29 07:18:57 +02:00
David Carlier
0c437f4d35 MINOR: lua: migrate the argument mask to 64 bits type.
Recently the maximum number of arguments was raised to 12
due to the migration to 64 bits, so we allow Lua to use the extended
limit for both converters and fetches.
2016-04-29 07:17:42 +02:00
David Carlier
abdb00fbc0 BUG/MEDIUM: lua: protects the upper boundary of the argument list for converters/fetches.
When a converter or sample fetch is called from within Lua code, there is a
risk of using invalid argument string data when the upper boundary is reached.
It would be nice to backport this to 1.6 if possible.
2016-04-29 07:17:42 +02:00
Thierry Fournier
3610c39c8c MINOR: filters: add opaque data
Add opaque data between the filter keyword registration and the parsing
function. This opaque data allows the same parser to be used with different
registered keywords; it mainly serves to distinguish between the two
keywords.

It will be used with Lua keywords registering.
2016-04-27 10:48:15 +02:00
Nenad Merdanovic
5c3ed34529 CLEANUP: Use server_parse_maxconn_change_request for maxconn CLI updates 2016-04-25 17:23:50 +02:00
Nenad Merdanovic
174dd37d88 MINOR: Add ability for agent-check to set server maxconn
This is very useful in complex architectures where HAProxy
is balancing DB connections for example. We want to keep the maxconn
high in order to avoid issues with queueing on the LB level when
there is slowness on another part of the system. An example is an
architecture where each thread opens multiple DB connections which,
if they get stuck in the queue, cause a snowball effect (old connections
aren't closed, new ones cannot be established). These connections are
mostly idle and the DB server has no problem handling thousands of them.

Allowing us to dynamically set maxconn depending on the backend usage
(LA, CPU, memory, etc.) enables us to have high maxconn for situations
like above, but lowering it in case there are real issues where the
backend servers become overloaded (cache issues, DB gets hit hard).
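
A sketch of such a setup (ports, interval and the "maxconn:" reply format are
shown as assumptions; see the agent-check documentation for the exact
protocol):

  backend mysql
    mode tcp
    balance leastconn
    # the agent on port 9999 may reply e.g. "maxconn:300" to adjust the
    # server's maxconn at run time
    server db1 10.0.0.1:3306 maxconn 500 check agent-check agent-port 9999 agent-inter 5s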
2016-04-25 17:23:50 +02:00
Willy Tarreau
57bc8917c3 BUG/MEDIUM: log: fix risk of segfault when logging HTTP fields in TCP mode
David Torgerson faced an issue when using HTTP fields in log-format in TCP
sections. The txn is dereferenced while it's null, resulting in a crash of
the process. Such configurations are invalid and a warning is emitted, but
nevertheless the process must not crash. As found by Lukas Tribus, this is
a side effect of the split between the stream and the HTTP transaction that
happened in 1.6, making it possible to have txn==NULL there.

The fix consists in checking that txn is valid before using it. Fortunately
it's easy since almost all places already used to check for the existence
of a field (eg: txn->uri).

This patch should be backported to 1.6.
2016-04-25 17:15:58 +02:00
Christopher Faulet
00e818aa58 MINOR: filters: Filters must define the callbacks struct during config parsing 2016-04-21 06:59:18 +02:00
Christopher Faulet
cc7317d11e MINOR: filters: Typo in an error message 2016-04-21 06:59:05 +02:00
Christopher Faulet
b3f4e14932 MINOR: filters: Print the list of existing filters during HA startup
This is done in verbose/debug mode and when build options are reported.
2016-04-21 06:58:08 +02:00
Willy Tarreau
d50b4ac0d4 MEDIUM: unblock signals on startup.
A problem was reported recently by some users of programs compiled
with Go 1.5 which by default blocks all signals before executing
processes, resulting in haproxy not receiving SIGUSR1 or even SIGTERM,
causing lots of zombie processes.

This problem was apparently observed by users of consul and kubernetes
(at least).

This patch is a workaround for this issue. It consists in unblocking
all signals on startup. Since they're normally not blocked in a regular
shell, it ensures haproxy always starts under the same conditions.

Quite useful information reported by both Matti Savolainen and REN
Xiaolei actually helped find the root cause of this problem and this
workaround. Thanks to them for this.

This patch must be backported to 1.6 and 1.5 where the problem is
observed.
2016-04-20 10:53:12 +02:00
Cyril Bonté
4920d70fa0 BUG/MINOR: fix maxaccept computation according to the frontend process range
Commit 7c0ffd23 only considers the explicit use of the "process" keyword
on the listeners. But at this step, if it's not defined in the configuration,
the listener bind_proc mask is set to 0. As a result, the code will compute
the maxaccept value based on only 1 process, which is not always true.

For example :
  global
    nbproc 4

  frontend test
    bind-process 1-2
    bind :80

Here, the maxaccept value for the "test" frontend was set to the global
tune.maxaccept value (default to 64), whereas it should consider 2 processes
will accept connections. As per the documentation, the value should be divided
by twice the number of processes the listener is bound to.

To fix this, we can consider that if no mask is set to the listener, we take
the frontend mask.

This is not critical but it can introduce an unfair distribution of the
incoming connections across the processes.

It should be backported to the same branches as commit 7c0ffd23 (1.6 and 1.5
were in the scope).
2016-04-15 08:22:52 +02:00
Willy Tarreau
d6c06d0f65 BUG/MINOR: listener: stop unbound listeners on startup
When a listener is not bound to a process its frontend belongs to, it
is only paused and not stopped. This creates confusion from the outside
as "netstat -ltnp" for example will report only the parent process as
the listener instead of the effective one. "ss -lnp" will report that
all processes are listening to all sockets.

This is confusing enough to suggest a fix. Now we simply stop the unused
listeners. Example with this simple config :

  global
      nbproc 4

  frontend haproxy_test
      bind-process 1-40
      bind :12345 process 1
      bind :12345 process 2
      bind :12345 process 3
      bind :12345 process 4

Before the patch :
  $ netstat -ltnp
  Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30457/./haproxy
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30457/./haproxy
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30457/./haproxy
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30457/./haproxy

After the patch :
  $ netstat -ltnp
  Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30504/./haproxy
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30503/./haproxy
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30502/./haproxy
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30501/./haproxy

This patch may be backported to 1.6 and 1.5, but it relies on commit
7a798e5 ("CLEANUP: fix inconsistency between fd->iocb, proto->accept
and accept()") since it will expose an API inconsistency by including
listener.h in the .c.
2016-04-14 12:05:02 +02:00
Willy Tarreau
7c0ffd23d2 BUG/MEDIUM: fix maxaccept computation on per-process listeners
Christian Ruppert reported a performance degradation when binding a
single frontend to many processes while only one bind line was being
used, bound to a single process.

The reason comes from the fact that whenever a listener is bound to
multiple processes, it is assigned a maxaccept value which equals
half the global maxaccept value divided by the number of processes the
frontend is bound to. The purpose is to ensure that no single process
will drain all the incoming requests at once and ensure a fair share
between all listeners. Usually this works pretty well, when a listener
is bound to all the processes of its frontend. But here we're in a
situation where the maxaccept of a listener which is bound to a single
process is still divided by a large value.

The fix consists in taking into account the number of processes the
listener is bound to and not only those of the frontend. This way it
is perfectly possible to benefit from nbproc and SO_REUSEPORT without
performance degradation.

1.6 and 1.5 normally suffer from the same issue.
2016-04-14 11:53:50 +02:00
Willy Tarreau
7a798e5d6b CLEANUP: fix inconsistency between fd->iocb, proto->accept and accept()
There's quite some inconsistency in the internal API. listener_accept()
which is the main accept() function returns void but is declared as int
in the include file. It's assigned to proto->accept() for all stream
protocols where an int is expected but the result is never checked (nor
is it documented by the way). This proto->accept() is in turn assigned
to fd->iocb() which is supposed to return an int composed of FD_WAIT_*
flags, but which is never checked either.

So let's fix all this mess :
  - nobody checks accept()'s return
  - nobody checks iocb()'s return
  - nobody sets a return value

=> let's mark all these functions void and keep the current ones intact.

Additionally we now include listener.h from listener.c to ensure we won't
silently hide this incoherency in the future.

Note that this patch could/should be backported to 1.6 and even 1.5 to
simplify debugging sessions.
2016-04-14 11:18:22 +02:00
Daniel Schneller
9ff96c7a62 MINOR: acl: Add predefined METH_DELETE, METH_PUT
Adds the missing HTTP verbs DELETE and PUT as predefined ACLs, similar
to GET, POST etc.
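
They can then be used directly in rules, for instance:

  frontend api
    bind :8080
    # reject destructive verbs from outside the admin network (example ACL)
    http-request deny if METH_DELETE !{ src 192.168.0.0/16 }
    http-request deny if METH_PUT    !{ src 192.168.0.0/16 }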
2016-04-12 11:44:09 +02:00
David Carlier
64a16ab19c BUG/MEDIUM: sample: initialize the pointer before parse_binary call.
parse_binary line 2025 checks the nullity of binstr parameter.
Other calls of parse_binary properly zeroify this parameter.
[wt: this could result in random failures of the const parser]
2016-04-12 11:08:24 +02:00
David Carlier
97880bb46d BUG/MINOR: cfgparse: couple of small memory leaks.
During config parsing, some code paths forget to free pointers,
most often in error handling paths.
2016-04-12 11:01:41 +02:00
David Carlier
d10025c671 BUG/MINOR: server: risk of over reading the pref_net array.
The dns_option struct's pref_net field is an array of 5 entries. The issue
here is that pref_net_nb can go up to 5 as well, which might lead
to a read outside of this array.
2016-04-12 11:00:39 +02:00
William Lallemand
0567fa3af5 BUG/MEDIUM: trace.c: rdtsc() is defined in two files
The rdtsc() function provided in standard.h prevents trace.c from compiling
because it's already defined there.
2016-04-09 22:27:01 +02:00
Thierry Fournier
0e00dca58b DOC: http: rename the unique-id sample and add the documentation
This patch renames the sample fetch from "uniqueid" to "unique-id".
It also adds the documentation associated with this sample fetch.
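
A usage sketch (the unique-id-format string and header name below are only an
example):

  frontend fe
    bind :8080
    unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
    # expose the generated ID to the backend via a request header
    http-request set-header X-Unique-ID %[unique-id]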
2016-04-07 19:14:58 +02:00
Frederik Deweerdt
6cd8d13c05 OPTIM/MINOR: session: abort if possible before connecting to the backend
Depending on the path that led to sess_update_stream_int(), it's
possible that we had a read error on the frontend, but that we haven't
checked if we may abort the connection.

This was seen in particular with the following setup: tcp mode, with
abortonclose set, frontend using ssl. If the ssl connection had a first
successful read, but the second read failed, we would still try to open a
connection to the backend, although we had enough information to close
the connection early.

sess_update_stream_int() had some logic to handle that case in the
SI_ST_QUE and SI_ST_TAR, but that was missing in the SI_ST_ASS case.

This patch addresses the issue by verifying the state of the req
channel (and the abortonclose option) right before opening the
connection to the backend, so we have the opportunity to close the
connection there, and factorizes the shared SI_ST_{QUE,TAR,ASS} code.
2016-04-07 19:12:02 +02:00
Willy Tarreau
bb137a8af7 BUG/MEDIUM: ssl: rewind the BIO when reading certificates
Emeric found that some certificate files that were valid with the old method
(the one with the explicit name involving SSL_CTX_use_PrivateKey_file()) do
not work anymore with the new one (the one trying to load multiple cert types
using PEM_read_bio_PrivateKey()). With the last one, the private key couldn't
be loaded.

The difference came from the ordering of entries in the PEM file. The
old method would always work. The new method only works if the private key is
at the top, or if it appears as an "EC" private key. The cause in fact is that
we never rewind the BIO between the various calls. So this patch moves the
loading of the private key as the first step, then it rewinds the BIO, and
then it loads the cert and the chain. With this everything works.

No backport is needed, this issue came with the recent addition of the
multi-cert support.
2016-04-06 19:02:38 +02:00
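A minimal sketch of the key-first-then-rewind ordering described above, using
the standard OpenSSL calls (this is not HAProxy's actual ssl_sock loading code,
only an illustration of the pattern, with error handling reduced to early exits):

  #include <openssl/bio.h>
  #include <openssl/pem.h>
  #include <openssl/ssl.h>

  /* Load a PEM file containing a private key and a certificate in any order.
   * The key is read first, then the BIO is rewound before reading the cert. */
  static int load_pem(SSL_CTX *ctx, const char *path)
  {
      BIO *in = BIO_new_file(path, "r");
      EVP_PKEY *key = NULL;
      X509 *cert = NULL;
      int ret = -1;

      if (!in)
          return -1;

      key = PEM_read_bio_PrivateKey(in, NULL, NULL, NULL);
      if (!key)
          goto out;

      /* rewind, otherwise the cert lookup starts after the key and misses
       * anything placed before it (file BIOs return 0 on success, -1 on error) */
      if (BIO_reset(in) == -1)
          goto out;

      cert = PEM_read_bio_X509_AUX(in, NULL, NULL, NULL);
      if (!cert)
          goto out;

      if (SSL_CTX_use_certificate(ctx, cert) != 1 ||
          SSL_CTX_use_PrivateKey(ctx, key) != 1 ||
          SSL_CTX_check_private_key(ctx) != 1)
          goto out;

      ret = 0;
   out:
      X509_free(cert);
      EVP_PKEY_free(key);
      BIO_free(in);
      return ret;
  }

  int main(int argc, char **argv)
  {
      SSL_CTX *ctx;

      SSL_library_init();
      OpenSSL_add_all_algorithms();
      ctx = SSL_CTX_new(SSLv23_method());
      if (!ctx || argc < 2)
          return 1;
      return load_pem(ctx, argv[1]) == 0 ? 0 : 1;
  }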
Bertrand Paquet
83a2c3d4d7 BUG/MINOR : allow to log cookie for tarpit and denied request
The following patch allows logging the cookie for tarpitted and denied requests.
This minor bug affects at least the 1.5, 1.6 and 1.7 branches.

The solution is not perfect: maybe the cookie processing
(manage_client_side_cookies) could be moved into http_process_req_common.
2016-04-06 14:58:41 +02:00
Baptiste Assmann
6f79aca339 BUG/MINOR: DNS: resolution structure change
060e57301d introduced a bug related to a
dns option structure change and an improper rebase.

Thanks to Lukas Tribus for reporting it.

backport: 1.7 and above
2016-04-05 21:35:42 +02:00
David Carlier
7365f7d41b CLEANUP: proto_http: few corrections for gcc warnings.
First, we modify the signatures of http_msg_forward_body and
http_msg_forward_chunked_body as they are declared as inline
below. Secondly, we verify the return value of the chunk initialization
which holds the Authorization Method (although it is unlikely to fail...).
Both changes come from gcc warnings.
2016-04-05 18:05:24 +02:00
Baptiste Assmann
060e57301d BUG/MINOR: dns: trigger a DNS query type change on resolution timeout
After Cedric Jeanneret reported an issue with HAProxy and DNS resolution
when multiple servers are in use, I saw that the DNS query type update
on resolution timeout was not implemented, even though it is documented.

backport: 1.6 and above
2016-04-05 05:56:11 +02:00
Baptiste Assmann
382824c475 BUG/MINOR: dns: inapropriate way out after a resolution timeout
A bug led HAProxy to stop DNS resolution when multiple servers are
configured and one of them times out: the request was not resent.
The current code fixes this issue.

backport status: 1.6 and above
2016-04-05 05:56:11 +02:00
Conrad Hoffmann
692c9386db BUG/MINOR: dumpstats: fix write to global chunk
This just happens to work because it is the correct chunk, but it should be
whatever gets passed in as an argument.

Signed-off-by: Conrad Hoffmann <conrad@soundcloud.com>
2016-04-05 05:56:10 +02:00
Vincent Bernat
02779b6263 CLEANUP: uniformize last argument of malloc/calloc
Instead of repeating the type of the LHS argument (sizeof(struct ...))
in calls to malloc/calloc, we directly use the pointer
name (sizeof(*...)). The following Coccinelle patch was used:

@@
type T;
T *x;
@@

  x = malloc(
- sizeof(T)
+ sizeof(*x)
  )

@@
type T;
T *x;
@@

  x = calloc(1,
- sizeof(T)
+ sizeof(*x)
  )

When the LHS is not just a variable name, no change is made. Moreover,
the following patch was used to ensure that "1" is consistently used as
a first argument of calloc, not the last one:

@@
@@

  calloc(
+ 1,
  ...
- ,1
  )
2016-04-03 14:17:42 +02:00
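To illustrate the resulting style with a generic example (not a specific
HAProxy structure):

  #include <stdlib.h>

  struct item {
      int id;
      char *name;
  };

  int main(void)
  {
      /* before: x = malloc(sizeof(struct item));   repeats the type       */
      /* after:  the allocation stays correct even if x's type changes     */
      struct item *x = malloc(sizeof(*x));
      struct item *tab = calloc(1, sizeof(*tab));   /* "1" consistently first */

      free(x);
      free(tab);
      return 0;
  }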
Vincent Bernat
3c2f2f207f CLEANUP: remove unneeded casts
In C89, "void *" is implicitly converted to any object pointer type. Casting
the result of malloc/calloc to the type of the LHS variable is therefore
unneeded.

Most of this patch was built using this Coccinelle patch:

@@
type T;
@@

- (T *)
  (\(lua_touserdata\|malloc\|calloc\|SSL_get_app_data\|hlua_checkudata\|lua_newuserdata\)(...))

@@
type T;
T *x;
void *data;
@@

  x =
- (T *)
  data

@@
type T;
T *x;
T *data;
@@

  x =
- (T *)
  data

Unfortunately, either Coccinelle or I am too limited to detect situations
where a complex RHS expression is of type "void *" and therefore casting
is not needed. Those cases were manually examined and corrected.
2016-04-03 14:17:42 +02:00
Willy Tarreau
f3764b7993 MEDIUM: proxy: use dynamic allocation for error dumps
There are two issues with error captures. The first one is that the
capture size is still hard-coded to BUFSIZE regardless of any possible
tune.bufsize setting and of the fact that frontends only capture request
errors and that backends only capture response errors. The second is that
captures are allocated in both directions for all proxies, which starts to
add up in configs using thousands of proxies.

This patch changes this so that error captures are allocated only when
needed, and of the proper size. It also refrains from dumping a buffer
that was not allocated, which still allows emitting all relevant info
such as flags and HTTP states. This way it is possible to save up to
32 kB of RAM per proxy in the default configuration.
2016-03-31 13:49:23 +02:00
Thierry Fournier
40e1d51068 BUG/MEDIUM: stick-tables: some sample-fetch doesn't work in the connection state.
The sc_* sample fetches can work without the struct strm, because the
tracked counters are also stored in the session. So, this patch
removes the check for the strm existence.

This bug is recent and was introduced in 1.7-dev2 by commit 6204cd9
("BUG/MAJOR: vars: always retrieve the stream and session from the sample")

This bugfix must be backported in 1.6.
2016-03-30 19:51:33 +02:00
Thierry Fournier
ff480424ab MINOR: lua: add class listener
This class provides access to the listener struct; it allows
some manipulations and retrieving information.
2016-03-30 18:43:47 +02:00
Thierry Fournier
f2fdc9dc39 MINOR: lua: add class server
This class provides access to the server struct; it allows
some manipulations and retrieving information.
2016-03-30 18:43:47 +02:00
Thierry Fournier
f61aa6356e MINOR: lua: add class proxy
This class provides access to the proxy struct; it allows
some manipulations and retrieving information.
2016-03-30 18:43:42 +02:00
Thierry Fournier
eea77c0e17 MINOR: lua: dump general info
This patch adds a function able to dump general HAProxy information.
2016-03-30 17:27:40 +02:00
Thierry Fournier
d0a56c2953 MINOR: dumpstats: split stats_dump_be_stats() in two parts
This patch splits the function stats_dump_be_stats() in two parts. The
extracted part is called stats_fill_be_stats(), and it just fills the stats buffer.
This split allows the use of preformatted stats in other parts of HAProxy,
such as the Lua code.
2016-03-30 17:26:19 +02:00
Thierry Fournier
61fe6c0adb MINOR: dumpstats: split stats_dump_sv_stats() in two parts
This patch splits the function stats_dump_sv_stats() in two parts. The
extracted part is called stats_fill_sv_stats(), and it just fills the stats buffer.
This split allows the use of preformatted stats in other parts of HAProxy,
such as the Lua code.
2016-03-30 17:26:09 +02:00
Thierry Fournier
c4456856b0 MINOR: dumpstats: split stats_dump_li_stats() in two parts
This patch splits the function stats_dump_li_stats() in two parts. The
extracted part is called stats_fill_li_stats(), and it just fills the stats buffer.
This split allows the use of preformatted stats in other parts of HAProxy,
such as the Lua code.
2016-03-30 17:26:02 +02:00
Thierry Fournier
23d2d64185 MINOR: dumpstats: split stats_dump_fe_stats() in two parts
This patch splits the function stats_dump_fe_stats() in two parts. The
extracted part is called stats_fill_fe_stats(), and it just fills the stats buffer.
This split allows the use of preformatted stats in other parts of HAProxy,
such as the Lua code.
2016-03-30 17:21:59 +02:00
Thierry Fournier
cb2c767681 MINOR: dumpstats: split stats_dump_info_to_buffer() in two parts
This patch splits the function stats_dump_info_to_buffer() in two parts. The
extracted part is called stats_fill_info(), and it just fills the stats buffer.
This split allows the use of preformatted stats in other parts of HAProxy,
such as the Lua code.
2016-03-30 17:21:37 +02:00
Thierry Fournier
31e64ca301 MINOR: dumpstats: extract stats fields enum and names
These field names can be used outside of the dumpstats file.
This will be useful for exporting stats in Lua.
2016-03-30 17:21:09 +02:00
Thierry Fournier
f4011ddcf5 MINOR: http: sample fetch which returns unique-id
This patch adds a sample fetch which returns the unique-id if it is
configured. If the unique-id is not yet generated, it builds it. If
the unique-id is not configured, it returns none.
2016-03-30 17:19:45 +02:00
Thierry Fournier
8b0d6e1d04 MINOR: lua: convert field to lua type
This function converts a field used by the stats into a Lua type.
2016-03-30 15:46:18 +02:00
Thierry Fournier
94ed1c127e MINOR: lua: Add internal function which strip spaces
Some internal HAProxy error messages are provided with a final '\n'.
It's typically for the integration in the CLI. Sometimes, these messages
are returned as Lua strings. These strings must be without "\n" or final
spaces.

This patch adds a function which removes these unwanted trailing characters.
2016-03-30 15:45:45 +02:00
Thierry Fournier
3d4a675f24 MINOR: lua: post initialization
This patch adds a Lua post-initialisation wrapper. It already exists for
pure Lua functions; now it also executes C functions. It is useful for doing
things when the configuration is ready to use. For example we can browse and
register all the proxies.
2016-03-30 15:44:58 +02:00
Thierry Fournier
fd107a2b1c MINOR: lua: precise message when a critical error is catched
This patch tries to find an error message when the safe execution wrapper
function catches a critical error.
2016-03-30 15:44:44 +02:00
Thierry Fournier
45e78d7aa9 MINOR: lua: refactor the Lua object registration
All the HAProxy Lua objects are declared with the same pattern:
 - Add the function __tostring which dumps the object name
 - Register the name in the Lua REGISTRY
 - Register the reference ID

These actions are refactored into one function. This removes some
lines of code.
2016-03-30 15:43:52 +02:00
Thierry Fournier
4f99b27c34 CLEANUP: lua: Remove two same functions
The function hlua_array_add_fcn() is exactly the same as the function
hlua_class_function(), so this patch removes the first one.
2016-03-30 15:43:25 +02:00
Thierry Fournier
991188d297 MINOR: lua: remove some useless checks
The modified functions are declared in the safe environment, so
they must be called from a safe environment. As the environment is
safe, it's useless to check the stack size.
2016-03-30 15:42:50 +02:00
Thierry Fournier
ddd8988fe5 MINOR: lua: move class registration facilities
The functions
 - hlua_class_const_int()
 - hlua_class_const_str()
 - hlua_class_function()
are used for common class registration actions.

The function 'hlua_dump_object()' is a generic name-dumping function.

These functions can be used by all the HAProxy objects, so they are moved
into the safe functions file.
2016-03-30 15:42:20 +02:00
Thierry Fournier
ac9d467c5e BUG/MINOR: prevent the dump of uninitialized vars
Some vars are not initialized when the dumps of variables
are called. This patch prevents the dereferencing of
uninitialized pointers.
2016-03-30 15:38:10 +02:00
Nenad Merdanovic
69ad4b9977 BUG/MAJOR: Fix crash in http_get_fhdr with exactly MAX_HDR_HISTORY headers
A similar issue was fixed in 67dad27, but the fix was incomplete. The crash still
happened when using req.fhdr() and sending exactly MAX_HDR_HISTORY
headers.

This fix needs to be backported to 1.5 and 1.6.

Signed-off-by: Nenad Merdanovic <nmerdan@anine.io>
2016-03-29 16:03:41 +02:00
Nenad Merdanovic
1789115a52 BUG/MEDIUM: Fix RFC5077 resumption when more than TLS_TICKETS_NO are present
Olivier Doucet reported the issue on the ML and tested that when using
more than TLS_TICKETS_NO keys in the file, the CPU usage is much higher
than expected.

Lukas Tribus then provided a test case which showed that resumption doesn't
work at all in that case.

This fix needs to be backported to 1.6.

Signed-off-by: Nenad Merdanovic <nmerdan@anine.io>
2016-03-29 16:03:37 +02:00
Willy Tarreau
3bb46177ac BUG/MEDIUM: peers: fix incorrect age in frequency counters
The frequency counter's window start is sent as "now - freq.date",
which is a positive age compared to the current date. But on receipt,
this age was added to the current date instead of subtracted. So
since the date was always in the future, they were always expired if
the activity changed side in less than the counter's measuring period
(eg: 10s).

This bug was reported by Christian Ruppert who also provided an easy
reproducer.

It needs to be backported to 1.6.
2016-03-25 18:17:47 +01:00
David CARLIER
42ff05e2d3 CLEANUP: connection: fix double negation on memcmp()
Nothing harmful in here, just clarify that it applies to the whole
expression.
2016-03-24 11:25:46 +01:00
Thierry Fournier
e7fe8eb889 BUG/MINOR: conf: "listener id" expects integer, but its not checked
The listener id was converted with a simple atol, so conversion
errors were unchecked. This patch uses strtol and checks the
conversion status.
2016-03-19 07:39:51 +01:00
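A minimal sketch of the strtol-based validation described above (generic code,
not the actual cfgparse hunk):

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* parse a listener-id-like value; returns 0 and fills *id on success, -1 on error */
  static int parse_id(const char *arg, long *id)
  {
      char *end;

      errno = 0;
      *id = strtol(arg, &end, 10);
      if (end == arg || *end != '\0')      /* no digits at all, or trailing garbage */
          return -1;
      if (errno == ERANGE)                 /* overflow or underflow */
          return -1;
      return 0;
  }

  int main(void)
  {
      long id;
      printf("%d\n", parse_id("123", &id));   /*  0 */
      printf("%d\n", parse_id("12x", &id));   /* -1: atol would silently return 12 */
      return 0;
  }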
David Carlier
840b0240bc MINOR: da: Using ARG12 macro for the sample fetch and the convertor.
Regarding the minor update introduced in the
cd6c3c7cb4 commit, the DeviceAtlas
module is now able to use up to 12 device properties via the
new ARG12 macro.
2016-03-17 05:44:33 +01:00
Willy Tarreau
3fa0e2a745 BUILD: namespaces: fix a potential build warning in namespaces.c
I just hit this warning today, making me realize that haproxy's
headers were included before the system ones, so all #ifndefs
are evaluated first and then the system redefines the macros. Simply move
haproxy includes after the system's. This should be backported
to 1.6 as well.

In file included from /usr/include/bits/fcntl.h:61:0,
                 from /usr/include/fcntl.h:35,
                 from src/namespace.c:13:
/usr/include/bits/fcntl-linux.h:203:0: warning: "F_SETPIPE_SZ" redefined [enabled by default]
In file included from include/common/config.h:26:0,
                 from include/proto/log.h:29,
                 from src/namespace.c:7:
include/common/compat.h:81:0: note: this is the location of the previous definition
2016-03-17 05:39:53 +01:00
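The underlying pattern is a compat header providing a fallback definition
guarded by #ifndef, which only behaves as intended when the system headers
come first. A stripped-down illustration (the fallback value shown is the
Linux one and is given here only as an example, this is not compat.h itself):

  #define _GNU_SOURCE           /* needed for the system's F_SETPIPE_SZ on glibc */
  #include <fcntl.h>            /* system header first: may already define F_SETPIPE_SZ */
  #include <unistd.h>

  /* compat-style fallback: only takes effect when the system headers did not
   * define the macro, which is exactly why project headers must come after
   * the system ones */
  #ifndef F_SETPIPE_SZ
  #define F_SETPIPE_SZ (1024 + 7)
  #endif

  int main(void)
  {
      int p[2];

      if (pipe(p) != 0)
          return 1;
      /* fcntl() returns the new pipe capacity on success, -1 on failure */
      return fcntl(p[1], F_SETPIPE_SZ, 65536) > 0 ? 0 : 1;
  }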
Benoit GARNIER
e2e5bde3f2 BUG/MINOR: log: Don't use strftime() which can clobber timezone if chrooted
The strftime() function can call tzset() internally on some platforms.
When haproxy is chrooted, the /etc/localtime file is not found, and some
implementations will clobber the content of the current timezone.

The GMT offset is computed by diffing the times returned by gmtime_r() and
localtime_r(). These variants are guaranteed to not call tzset() and were
already used in haproxy while chrooted, so they should be safe.

This patch must be backported to 1.6 and 1.5.
2016-03-17 05:30:03 +01:00
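A minimal sketch of the offset computation described above, diffing the
broken-down times instead of relying on strftime()/tzset() (this shows the
general idea only, not the exact code in haproxy's log subsystem):

  #include <stdio.h>
  #include <time.h>

  /* return the local GMT offset in seconds without calling tzset()-dependent functions */
  static long gmt_offset(time_t t)
  {
      struct tm gmt, loc;
      int days;

      gmtime_r(&t, &gmt);
      localtime_r(&t, &loc);

      /* around midnight the day may differ by one; handle the year wrap too */
      days = loc.tm_yday - gmt.tm_yday;
      if (days > 1)
          days = -1;
      else if (days < -1)
          days = 1;

      return days * 86400L
           + (loc.tm_hour - gmt.tm_hour) * 3600L
           + (loc.tm_min  - gmt.tm_min)  * 60L
           + (loc.tm_sec  - gmt.tm_sec);
  }

  int main(void)
  {
      printf("offset: %+ld seconds\n", gmt_offset(time(NULL)));
      return 0;
  }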
David Carlier
15073a3393 MINOR: sample: Moves ARGS underlying type from 32 to 64 bits.
ARG# macros allow creating a list of up to 7 arguments in theory but 5 in
practice. The change to a guaranteed 64-bit type increases this to
up to 12.
2016-03-15 22:11:52 +01:00
Willy Tarreau
8234f6dae8 [RELEASE] Released version 1.7-dev2
Released version 1.7-dev2 with the following main changes :
    - DOC: lua: fix lua API
    - DOC: mailers: typo in 'hostname' description
    - DOC: compression: missing mention of libslz for compression algorithm
    - BUILD/MINOR: regex: missing header
    - BUG/MINOR: stream: bad return code
    - DOC: lua: fix somme errors and add implicit types
    - MINOR: lua: add set/get priv for applets
    - BUG/MINOR: http: fix several off-by-one errors in the url_param parser
    - BUG/MINOR: http: Be sure to process all the data received from a server
    - MINOR: filters/http: Use a wrapper function instead of stream_int_retnclose
    - BUG/MINOR: chunk: make chunk_dup() always check and set dst->size
    - DOC: ssl: fixed some formatting errors in crt tag
    - MINOR: chunks: ensure that chunk_strcpy() adds a trailing zero
    - MINOR: chunks: add chunk_strcat() and chunk_newstr()
    - MINOR: chunk: make chunk_initstr() take a const string
    - MEDIUM: tools: add csv_enc_append() to preserve the original chunk
    - MINOR: tools: make csv_enc_append() always start at the first byte of the chunk
    - MINOR: lru: new function to delete <nb> least recently used keys
    - DOC: add Ben Shillito as the maintainer of 51d
    - BUG/MINOR: 51d: Ensures a unique domain for each configuration
    - BUG/MINOR: 51d: Aligns Pattern cache implementation with HAProxy best practices.
    - BUG/MINOR: 51d: Releases workset back to pool.
    - BUG/MINOR: 51d: Aligned const pointers to changes in 51Degrees.
    - CLEANUP: 51d: Aligned if statements with HAProxy best practices and removed casts from malloc.
    - MINOR: rename master process name in -Ds (systemd mode)
    - DOC: fix a few spelling mistakes
    - DOC: fix "workaround" spelling
    - BUG/MINOR: examples: Fixing haproxy.spec to remove references to .cfg files
    - MINOR: fix the return type for dns_response_get_query_id() function
    - MINOR: server state: missing LF (\n) on error message printed when parsing server state file
    - BUG/MEDIUM: dns: no DNS resolution happens if no ports provided to the nameserver
    - BUG/MAJOR: servers state: server port is erased when dns resolution is enabled on a server
    - BUG/MEDIUM: servers state: server port is used uninitialized
    - BUG/MEDIUM: config: Adding validation to stick-table expire value.
    - BUG/MEDIUM: sample: http_date() doesn't provide the right day of the week
    - BUG/MEDIUM: channel: fix miscalculation of available buffer space.
    - MEDIUM: pools: add a new flag to avoid rounding pool size up
    - BUG/MEDIUM: buffers: do not round up buffer size during allocation
    - BUG/MINOR: stream: don't force retries if the server is DOWN
    - BUG/MINOR: counters: make the sc-inc-gpc0 and sc-set-gpt0 touch the table
    - MINOR: unix: don't mention free ports on EAGAIN
    - BUG/CLEANUP: CLI: report the proper field states in "show sess"
    - MINOR: stats: send content-length with the redirect to allow keep-alive
    - BUG: stream_interface: Reuse connection even if the output channel is empty
    - DOC: remove old tunnel mode assumptions
    - BUG/MAJOR: http-reuse: fix risk of orphaned connections
    - BUG/MEDIUM: http-reuse: do not share private connections across backends
    - BUG/MINOR: ssl: Be sure to use unique serial for regenerated certificates
    - BUG/MINOR: stats: fix missing comma in stats on agent drain
    - MAJOR: filters: Add filters support
    - MINOR: filters: Do not reset stream analyzers if the client is gone
    - REORG: filters: Prepare creation of the HTTP compression filter
    - MAJOR: filters/http: Rewrite the HTTP compression as a filter
    - MEDIUM: filters: Use macros to call filters callbacks to speed-up processing
    - MEDIUM: filters: remove http_start_chunk, http_last_chunk and http_chunk_end
    - MEDIUM: filters: Replace filter_http_headers callback by an analyzer
    - MEDIUM: filters/http: Move body parsing of HTTP messages in dedicated functions
    - MINOR: filters: Add stream_filters structure to hide filters info
    - MAJOR: filters: Require explicit registration to filter HTTP body and TCP data
    - MINOR: filters: Remove unused or useless stuff and do small optimizations
    - MEDIUM: filters: Optimize the HTTP compression for chunk encoded response
    - MINOR: filters/http: Slightly update the parsing of chunks
    - MINOR: filters/http: Forward remaining data when a channel has no "data" filters
    - MINOR: filters: Add an filter example
    - MINOR: filters: Extract proxy stuff from the struct filter
    - MINOR: map: Add regex matching replacement
    - BUG/MINOR: lua: unsafe initialization
    - DOC: lua: fix somme errors
    - MINOR: lua: file dedicated to unsafe functions
    - MINOR: lua: add "now" time function
    - MINOR: standard: add RFC HTTP date parser
    - MINOR: lua: Add date functions
    - MINOR: lua: move common function
    - MINOR: lua: merge function
    - MINOR: lua: Add concat class
    - MINOR: standard: add function "escape_chunk"
    - MEDIUM: log: add a new log format flag "E"
    - DOC: add server name at rate-limit sessions example
    - BUG/MEDIUM: ssl: fix off-by-one in ALPN list allocation
    - BUG/MEDIUM: ssl: fix off-by-one in NPN list allocation
    - DOC: LUA: fix some typos and syntax errors
    - MINOR: cli: add a new "show env" command
    - MEDIUM: config: allow to manipulate environment variables in the global section
    - MEDIUM: cfgparse: reject incorrect 'timeout retry' keyword spelling in resolvers
    - MINOR: mailers: increase default timeout to 10 seconds
    - MINOR: mailers: use <CRLF> for all line endings
    - BUG/MAJOR: lua: segfault using Concat object
    - DOC: lua: copyrights
    - MINOR: common: mask conversion
    - MEDIUM: dns: extract options
    - MEDIUM: dns: add a "resolve-net" option which allow to prefer an ip in a network
    - MINOR: mailers: make it possible to configure the connection timeout
    - BUG/MAJOR: lua: applets can't sleep.
    - BUG/MINOR: server: some prototypes are renamed
    - BUG/MINOR: lua: Useless copy
    - BUG/MEDIUM: stats: stats bind-process doesn't propagate the process mask correctly
    - BUG/MINOR: server: fix the format of the warning on address change
    - CLEANUP: server: add "const" to some message strings
    - MINOR: server: generalize the "updater" source
    - BUG/MEDIUM: chunks: always reject negative-length chunks
    - BUG/MINOR: systemd: ensure we don't miss signals
    - BUG/MINOR: systemd: report the correct signal in debug message output
    - BUG/MINOR: systemd: propagate the correct signal to haproxy
    - MINOR: systemd: ensure a reload doesn't mask a stop
    - BUG/MEDIUM: cfgparse: wrong argument offset after parsing server "sni" keyword
    - CLEANUP: stats: Avoid computation with uninitialized bits.
    - CLEANUP: pattern: Ignore unknown samples in pat_match_ip().
    - CLEANUP: map: Avoid memory leak in out-of-memory condition.
    - BUG/MINOR: tcpcheck: fix incorrect list usage resulting in failure to load certain configs
    - BUG/MAJOR: samples: check smp->strm before using it
    - MINOR: sample: add a new helper to initialize the owner of a sample
    - MINOR: sample: always set a new sample's owner before evaluating it
    - BUG/MAJOR: vars: always retrieve the stream and session from the sample
    - CLEANUP: payload: remove useless and confusing nullity checks for channel buffer
    - BUG/MINOR: ssl: fix usage of the various sample fetch functions
    - MINOR: stats: create fields types suitable for all CSV output data
    - MINOR: stats: add all the "show info" fields in a table
    - MEDIUM: stats: fill all the show info elements prior to displaying them
    - MINOR: stats: add a function to emit fields into a chunk
    - MINOR: stats: add stats_dump_info_fields() to dump one field per line
    - MEDIUM: stats: make use of stats_dump_info_fields() for "show info"
    - MINOR: stats: add a declaration of all stats fields
    - MINOR: stats: don't hard-code the CSV fields list anymore
    - MINOR: stats: create stats fields storage and CSV dump function
    - MEDIUM: stats: convert stats_dump_fe_stats() to use stats_dump_fields_csv()
    - MEDIUM: stats: make stats_dump_fe_stats() use stats fields for HTML dump
    - MEDIUM: stats: convert stats_dump_li_stats() to use stats_dump_fields_csv()
    - MEDIUM: stats: make stats_dump_li_stats() use stats fields for HTML dump
    - MEDIUM: stats: convert stats_dump_be_stats() to use stats_dump_fields_csv()
    - MEDIUM: stats: make stats_dump_be_stats() use stats fields for HTML dump
    - MEDIUM: stats: convert stats_dump_sv_stats() to use stats_dump_fields_csv()
    - MEDIUM: stats: make stats_dump_sv_stats() use the stats field for HTML
    - MEDIUM: stats: move the server state coloring logic to the server dump function
    - MINOR: stats: do not use srv->admin & STATS_ADMF_MAINT in HTML dumps
    - MINOR: stats: do not check srv->state for SRV_ST_STOPPED in HTML dumps
    - MINOR: stats: make CSV report server check status only when enabled
    - MINOR: stats: only report backend's down time if it has servers
    - MINOR: stats: prepend '*' in front of the check status when in progress
    - MINOR: stats: make HTML stats dump rely on the table for the check status
    - MINOR: stats: add agent_status, agent_code, agent_duration to output
    - MINOR: stats: add check_desc and agent_desc to the output fields
    - MINOR: stats: add check and agent's health values in the output
    - MEDIUM: stats: make the HTML server state dump use the CSV states
    - MEDIUM: stats: only report observe errors when observe is set
    - MEDIUM: stats: expose the same flags for CLI and HTTP accesses
    - MEDIUM: stats: report server's address in the CSV output
    - MEDIUM: stats: report the cookie value in the server & backend CSV dumps
    - MEDIUM: stats: compute the color code only in the HTML form
    - MEDIUM: stats: report the listeners' address in the CSV output
    - MEDIUM: stats: make it possible to report the WAITING state for listeners
    - REORG: stats: dump the frontend's HTML stats via a generic function
    - REORG: stats: dump the socket stats via the generic function
    - REORG: stats: dump the server stats via the generic function
    - REORG: stats: dump the backend stats via the generic function
    - MEDIUM: stats: add a new "mode" column to report the proxy mode
    - MINOR: stats: report the load balancing algorithm in CSV output
    - MINOR: stats: add 3 fields to report the frontend-specific connection stats
    - MINOR: stats: report number of intercepted requests for frontend and backends
    - MINOR: stats: introduce stats_dump_one_line() to dump one stats line
    - CLEANUP: stats: make stats_dump_fields_html() not rely on proxy anymore
    - MINOR: stats: add ST_SHOWADMIN to pass the admin info in the regular flags
    - MINOR: stats: make stats_dump_fields_html() not use &trash by default
    - MINOR: stats: add functions to emit typed fields into a chunk
    - MEDIUM: stats: support "show info typed" on the CLI
    - MEDIUM: stats: implement a typed output format for stats
    - DOC: document the "show info typed" and "show stat typed" output formats
    - MINOR: cfgparse: warn when uid parameter is not a number
    - MINOR: cfgparse: warn when gid parameter is not a number
    - BUG/MINOR: standard: Avoid free of non-allocated pointer
    - BUG/MINOR: pattern: Avoid memory leak on out-of-memory condition
    - CLEANUP: http: fix a build warning introduced by a recent fix
    - BUG/MINOR: log: GMT offset not updated when entering/leaving DST
2016-03-14 00:10:05 +01:00
Benoit GARNIER
b413c2a759 BUG/MINOR: log: GMT offset not updated when entering/leaving DST
GMT offset used in local time formats was computed at startup, but was not updated when DST status changed while running.

For example these two RFC5424 syslog traces were emitted 5 seconds apart, just before and after DST changed:
  <14>1 2016-03-27T01:59:58+01:00 bunch-VirtualBox haproxy 2098 - - Connect ...
  <14>1 2016-03-27T03:00:03+01:00 bunch-VirtualBox haproxy 2098 - - Connect ...

It looked like they were emitted more than 1 hour apart, unlike with the fix:
  <14>1 2016-03-27T01:59:58+01:00 bunch-VirtualBox haproxy 3381 - - Connect ...
  <14>1 2016-03-27T03:00:03+02:00 bunch-VirtualBox haproxy 3381 - - Connect ...

This patch should be backported to 1.6 and partially to 1.5 (no fix needed in log.c).
2016-03-13 23:48:05 +01:00
Willy Tarreau
5c557d14d5 CLEANUP: http: fix a build warning introduced by a recent fix
Cyril reported that recent commit 320ec2a ("BUG/MEDIUM: chunks: always
reject negative-length chunks") introduced a build warning because gcc
cannot guess that we can't fall into the case where the auth_method
chunk is not initialized.

This patch addresses it, though for the long term it would be best
if chunk_initlen() would always initialize the result.

This fix must be backported to 1.6 and 1.5 where the aforementioned
fix was already backported.
2016-03-13 08:17:02 +01:00
Andreas Seltenreich
e6e22e8e90 BUG/MINOR: pattern: Avoid memory leak on out-of-memory condition
pattern_new_expr() failed to free the allocated list element when an
out-of-memory error occurs during initialization of the element.  As
this only happens when loading the configuration file or evaluating
commands via the CLI, it is unlikely for this leak to be relevant
unless the user makes automated, heavy use of the CLI.

Found in HAProxy 1.5.14.
2016-03-13 07:47:25 +01:00
Andreas Seltenreich
93f91c3082 BUG/MINOR: standard: Avoid free of non-allocated pointer
The original author forgot to dereference the argument to free in
parse_binary.  This may result in a crash on reading bad input from
the configuration file instead of a proper error message.

Found in HAProxy 1.5.14.
2016-03-13 07:46:54 +01:00
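The bug pattern is easy to reproduce with any function that allocates through a
pointer-to-pointer output argument; a small illustrative sketch (hypothetical
names, not the real parse_binary() code):

  #include <stdlib.h>
  #include <string.h>

  /* allocates *out; on a later error the cleanup must free *out, not out */
  static int parse_blob(const char *src, char **out)
  {
      *out = malloc(strlen(src) + 1);
      if (!*out)
          return -1;
      strcpy(*out, src);

      if (src[0] == '?') {          /* some invalid-input condition */
          free(*out);               /* correct: frees the allocated buffer  */
          /* free(out);                wrong: "frees" the caller's variable */
          *out = NULL;
          return -1;
      }
      return 0;
  }

  int main(void)
  {
      char *buf = NULL;
      int ret = parse_blob("?bad", &buf);
      return (ret == -1 && buf == NULL) ? 0 : 1;
  }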
Baptiste Assmann
776e518caf MINOR: cfgparse: warn when gid parameter is not a number
Currently, no warning is emitted when the gid is not a number.
The purpose of this warning is to let admins know that their configuration
won't be applied as expected.
2016-03-13 07:46:31 +01:00
Baptiste Assmann
79fee6aa7a MINOR: cfgparse: warn when uid parameter is not a number
Currently, no warning is emitted when the uid is not a number.
The purpose of this warning is to let admins know that their configuration
won't be applied as expected.
2016-03-13 07:45:41 +01:00
Willy Tarreau
1e62df92e3 MEDIUM: stats: implement a typed output format for stats
The output for each field is :
  field:<origin><nature><scope>:type:value

where "field" indicates the type of the object being dumped as well as its
position (pid, iid, sid), field number and field name. This way a
monitoring utility may very well report all available information without
knowing new fields in advance.

This format is also supported in the HTTP version of the stats by adding
";typed" after the URI, instead of ";csv" for the CSV format.

The doc was not updated yet.
2016-03-11 17:24:15 +01:00
Willy Tarreau
cb80912001 MEDIUM: stats: support "show info typed" on the CLI
This emits the field positions, names and types. It is more convenient
than the default output for a parser that doesn't know all the fields. It
simply relies on stats_emit_typed_data_field() and stats_emit_field_tags()
added by previous patch for the output. A new stats format flag was added,
STAT_FMT_TYPED, which is set when the "typed" keyword is specified on the
CLI.
2016-03-11 17:08:06 +01:00
Willy Tarreau
b47785f862 MINOR: stats: add functions to emit typed fields into a chunk
New function stats_emit_typed_data_field() does exactly like
stats_emit_raw_data_field() except that it also prints the data
type after a colon. This will be used to print using the typed
format.

And function stats_emit_field_tags() appends a 3-letter code
describing the origin, nature, and scope, followed by an optional
delimiter. This will be particularly convenient to dump typed
data.
2016-03-11 17:08:05 +01:00
Willy Tarreau
6060074a57 MINOR: stats: make stats_dump_fields_html() not use &trash by default
This function must dump into the buffer it gets in argument, and should
not assume it's always trash. This was the last part of the rework, now
the CSV and HTML functions are compatible and the output format may easily
be extended.
2016-03-11 17:08:05 +01:00
Willy Tarreau
508a63fb96 MINOR: stats: add ST_SHOWADMIN to pass the admin info in the regular flags
It's easier to have a new flag in <flags> to indicate whether or not we
want to display the admin column in HTML dumps. We already have similar
flags to show the version or the legends.
2016-03-11 17:08:05 +01:00
Willy Tarreau
36090d2236 CLEANUP: stats: make stats_dump_fields_html() not rely on proxy anymore
Now this function doesn't need to use the proxy anymore, remove it from
its arguments.
2016-03-11 17:08:05 +01:00
Willy Tarreau
501f60244f MINOR: stats: introduce stats_dump_one_line() to dump one stats line
This new function dumps the current stats line according to the
specified format (CSV or HTML for now), and returns these functions'
output code, which will serve later to indicate a failure (eg: buffer
full).

This further simplifies the code since all dumpers now just call this
function.
2016-03-11 17:08:05 +01:00
Willy Tarreau
5b9bdff007 MINOR: stats: report number of intercepted requests for frontend and backends
This was reported in HTML dumps already but not CSV. It reports the
number of monitor and stats requests. Ideally use-service and redirs
should be accounted for as well.
2016-03-11 17:08:05 +01:00
Willy Tarreau
c73810f94f MINOR: stats: add 3 fields to report the frontend-specific connection stats
Frontends have extra information compared to other entities, they can
report some statistics at the connection level while the other ones
are limited to the session level. This patch adds 3 more fields for
this :
 - conn_rate
 - conn_rate_max
 - conn_tot

It's worth noting that listeners theoretically have such statistics, except
that the distinction between connections and sessions is not clearly made
in the code, so that will have to be improved later.
2016-03-11 17:08:05 +01:00
Willy Tarreau
f1516d9840 MINOR: stats: report the load balancing algorithm in CSV output
It was already present in the HTML output, let's add it to CSV now,
but only when SHLGNDS is set.
2016-03-11 17:08:05 +01:00
Willy Tarreau
f8211dff89 MEDIUM: stats: add a new "mode" column to report the proxy mode
Now even CSV stats will see the proxy mode, and we can save the
HTML stats dumps from accessing px->mode directly.
2016-03-11 17:08:05 +01:00
Willy Tarreau
bbf845043f REORG: stats: dump the backend stats via the generic function
The code was simply moved as-is to the new function. There's no
functional change.
2016-03-11 17:08:05 +01:00
Willy Tarreau
362eaeb9bc REORG: stats: dump the server stats via the generic function
The code was simply moved as-is to the new function. There's no
functional change.
2016-03-11 17:08:05 +01:00
Willy Tarreau
fa9512f2b7 REORG: stats: dump the socket stats via the generic function
The code was simply moved as-is to the new function. There's no
functional change.
2016-03-11 17:08:05 +01:00
Willy Tarreau
b5f66b8138 REORG: stats: dump the frontend's HTML stats via a generic function
This new function stats_dump_fields_html() checks the type of the object
being dumped from the stats table, and emits it in HTML format. It uses
an argument indicating if the HTML page is also used as an admin page,
and for now still takes the proxy in argument as a few entries still
need it.

The code was simply moved as-is to the new function. There's no
functional change.
2016-03-11 17:08:05 +01:00
Willy Tarreau
7101b64cfd MEDIUM: stats: make it possible to report the WAITING state for listeners
HTML output used to have it but not the CSV output. It indicates that the
listener is not full but was forced to wait because the max connection
rate was reached.
2016-03-11 17:08:05 +01:00
Willy Tarreau
a6f5a73202 MEDIUM: stats: report the listeners' address in the CSV output
It's the same principle as for the server dump, and we use this field
for the HTML dump of course.
2016-03-11 17:08:05 +01:00
Willy Tarreau
0c378efe81 MEDIUM: stats: compute the color code only in the HTML form
The color code requires a complex logic, and we use it only in the
HTML part. So let's compute it there based on the server state, its
health and its weight. The thing is tricky but OK. There's a 1-to-1
mapping of down servers, but not of up servers, hence the need for
the weight and health.
2016-03-11 17:08:05 +01:00
Willy Tarreau
e4847c6405 MEDIUM: stats: report the cookie value in the server & backend CSV dumps
The server's cookie value is now reported in the "cookie" column and
used as-is from the HTML dump. It was the last reference to the sv
pointer from this place.

The same was done for the backend's dump.
2016-03-11 17:08:05 +01:00
Willy Tarreau
3a4ec3a04b MEDIUM: stats: report server's address in the CSV output
This new field "addr" presents the server's address:port if the client
is either enabled via "stats show legends" in case of HTTP dumps, or
has at least level operator on the CLI. The address formats might be :
 - ipv4:port
 - [ipv6]:port
 - unix
 - (error message)
2016-03-11 17:08:05 +01:00
Willy Tarreau
0deb85acd1 MEDIUM: stats: expose the same flags for CLI and HTTP accesses
The HTML dump over HTTP request may have several flags including
ST_SHLGNDS (to show legends), ST_SHNODE (to show node name),
ST_SHDESC (to show some descriptions).

There's no such thing over the CLI so we need to have an equivalent.
Let's compute the flags earlier so that we can make use of these flags
regardless of the call point.
2016-03-11 17:08:05 +01:00
Willy Tarreau
89fa6918c4 MEDIUM: stats: only report observe errors when observe is set
This doesn't produce this field when not relevant anymore.
2016-03-11 17:08:05 +01:00
Willy Tarreau
30d33730f4 MEDIUM: stats: make the HTML server state dump use the CSV states
Now instead of recomputing the state based on the health, rise etc,
we reuse the same state as in the CSV file, and optionally complete
it with a down or an up arrow if a change is occurring. We could
have parsed the strings to detect a '/' indicating a state change,
but it was easier to check the health against rise and fall.
2016-03-11 17:08:05 +01:00
Willy Tarreau
3141f5975e MINOR: stats: add check and agent's health values in the output
This adds the following fields :
- check_rise [...S]: server's "rise" parameter used by checks
- check_fall [...S]: server's "fall" parameter used by checks
- check_health [...S]: server's health check value between 0 and rise+fall-1
- agent_rise [...S]: agent's "rise" parameter, normally 1
- agent_fall [...S]: agent's "fall" parameter, normally 1
- agent_health [...S]: agent's health parameter, between 0 and rise+fall-1
2016-03-11 17:08:05 +01:00
Willy Tarreau
dd7354b772 MINOR: stats: add check_desc and agent_desc to the output fields
Added these two new fields to the CSV output :
- check_desc : short human-readable description of check_status
- agent_desc : short human-readable description of agent_status

Also factor two tests for enabled checks.
2016-03-11 17:08:05 +01:00
Willy Tarreau
7f61884620 MINOR: stats: add agent_status, agent_code, agent_duration to output
The agent check status is now reported :
- agent_status : status of last agent check
- agent_code : numeric code reported by agent if any (unused for now)
- agent_duration : time in ms taken to finish last check
2016-03-11 17:08:05 +01:00
Willy Tarreau
ce1dd44055 MINOR: stats: make HTML stats dump rely on the table for the check status
We can now check that the check status field is present to know that a
check is enabled.
2016-03-11 17:08:05 +01:00
Willy Tarreau
03fb1c57b5 MINOR: stats: prepend '*' in front of the check status when in progress
The HTML version already does this, but the CSV doesn't provide this
status, so make both of them report the same status.
2016-03-11 17:08:05 +01:00
Willy Tarreau
7344f47893 MINOR: stats: only report backend's down time if it has servers
There's no point in reporting a backend's up/down time if it has no
servers. The CSV output used to report "0" for a serverless backend
while the HTML version already removed the field. For servers, this
field is already omitted if checks are disabled. Let's uniformize
all of this and remove the field in CSV as well when irrelevant.
2016-03-11 17:08:04 +01:00
Willy Tarreau
8a8cce1f87 MINOR: stats: make CSV report server check status only when enabled
The HTML version doesn't report a check status when the server is in
maintenance since it can be quite old and irrelevant. The CSV forgot
to care about that, so let's do it here as well.
2016-03-11 17:08:04 +01:00
Willy Tarreau
8485f4e43b MINOR: stats: do not check srv->state for SRV_ST_STOPPED in HTML dumps
We don't want the HTML dump to rely on the server state. We
already have this piece of information in the status field by
checking that it starts with "DOWN".
2016-03-11 17:08:04 +01:00
Willy Tarreau
02bc6c2244 MINOR: stats: do not use srv->admin & STATS_ADMF_MAINT in HTML dumps
We don't want the HTML dump to rely on the server admin bits. We
already have this piece of information in the status field.
2016-03-11 17:08:04 +01:00
Willy Tarreau
ba2f2649c2 MEDIUM: stats: move the server state coloring logic to the server dump function
It currently is really not convenient to have a state and a color detection
outside of the function and to use these ones inside. It makes it harder to
adjust the stats output based on the server state exactly. Let's move the
logic into the dump function itself.
2016-03-11 17:08:04 +01:00
Willy Tarreau
164d4a9e73 MEDIUM: stats: make stats_dump_sv_stats() use the stats field for HTML
Here we still have a huge amount of stuff to extract from the HTML code
and even from the caller. Indeed, the calling function computes the
server state and prepares a color code that will be used to determine
what style to use. The operations needed to decide what field to present
or not depend a lot on the server's state, admin state, health value,
rise and fall etc... all of which are not easily present in the table.

We also have to check the reference's values for all of the above.

There are also a number of differences between the CSV and HTML outputs :
  - CSV always reports check duration, HTML only if not zero
  - regarding last_change, CSV always reports the server's while the HTML
    considers either the server's or the reference based on the admin state.
  - agent and health are separate in the CSV but mixed in the HTML.
  - too little info on the agent anyway.

After careful code inspection it happens that both sv->last_change and
ref->last_change are identical and can both derive from [LASTCHG].

Also, the following info are missing from the array to complete the HTML
code :
  - cookie, address, status description, check-in-progress, srv->admin

At least for now it still works but a lot of info now need to be added.
2016-03-11 17:08:04 +01:00
Willy Tarreau
2b96cf1292 MEDIUM: stats: convert stats_dump_sv_stats() to use stats_dump_fields_csv()
This function now only fills the relevant fields with raw values and
calls stats_dump_fields_csv() for the CSV part. The output remains
exactly the same for now.

Some limits are only emitted if set, so the HTML part will have to
check for these being set.

A number of fields had to be built using printf-format strings, so
instead of allocating strings that risk not being freed, we use
chunk_newstr() and chunk_appendf().

Text strings are now copied verbatim in the stats fields so that only
the CSV dump encodes them as CSV. A single "out" chunk is used and cut
into multiple substrings using chunk_newstr() so that we don't need
distinct chunks for each stats field holding a string. The total amount
of data being emitted at once will never overflow the "out" chunk since
only a small part of it goes into the trash buffer which is the same size
and will thus overflow first.

One point of care is that failed_checks and down_trans were logged as
64-bit quantities on servers but they're 32-bit on proxies. This may
have to be changed later to unify them.
2016-03-11 17:08:04 +01:00
Willy Tarreau
770c217bb1 MEDIUM: stats: make stats_dump_be_stats() use stats fields for HTML dump
Some fields are still needed to complete the conversion :
  - px->srv : used to take decisions when backend has no server (eg: print down or not)
  - algo string (useful for CSV as well) // only if SHLGNDS
  - cookie_name (useful for CSV as well) // only if SHLGNDS
  - px->mode == HTTP (or px->mode as a string) // same for frontend
  - px->be_counters.intercepted_req (stats and redirects ?)

The following field already has a place but was not presented in the
CSV output, so it should simply be added afterwards :
  - px->be_counters.http.cum_req (was in HTML and missing from CSV)
2016-03-11 17:08:04 +01:00
Willy Tarreau
f6eecbece1 MEDIUM: stats: convert stats_dump_be_stats() to use stats_dump_fields_csv()
This function now only fills the relevant fields with raw values and
calls stats_dump_fields_csv() for the CSV part. The output remains
exactly the same for now. It's worth noting that there are some
ambiguities between connections and sessions, for example cum_conn
is dumped into cum_sess. Additionally, there is a naming ambiguity
in that the internal "d_time" (time where the beginning of data
appeared) is called "rtime" in the output (response time) and they
actually are indeed the same.
2016-03-11 17:08:04 +01:00
Willy Tarreau
f7e27230bd MEDIUM: stats: make stats_dump_li_stats() use stats fields for HTML dump
The conversion still requires some elements which are not present in the
current fields :
  - the HTML status may emit "WAITING"/"OPEN"/"FULL" while the CSV format
    doesn't propose "WAITING", so this last one will have to be added.

  - the HTML output emits the listening addresses when the ST_SHLGNDS flag
    is set but this address field doesn't exist in the CSV format

  - it's interesting to note that when the ST_SHLGNDS flag is not set, the
    HTML output doesn't provide the listener's ID while it's present in the
    CSV output accessible from the same interface.
2016-03-11 17:08:04 +01:00
Willy Tarreau
4607fadb99 MEDIUM: stats: convert stats_dump_li_stats() to use stats_dump_fields_csv()
This function now only fills the relevant fields with raw values and
calls stats_dump_fields_csv() for the CSV part. The output remains
exactly the same for now.

It is worth mentioning that l->cum_conn is being dumped into a cum_sess
field and that once we introduce an official cum_conn field we may have
to dump the same value at both places to maintain compatibility with the
existing stats.
2016-03-11 17:08:04 +01:00
Willy Tarreau
d72c917683 MEDIUM: stats: make stats_dump_fe_stats() use stats fields for HTML dump
Now we avoid directly accessing the proxy and instead we pick the values
from the stats fields. This unveils that only a few fields are missing to
complete the job :
  - know whether or not the checkbox column needs to be displayed. This
    is not directly relevant to the stats but rather to the fact that the
    HTML dump is also a control interface. This doesn't need a field, just
    a function argument.
  - px->mode == HTTP (or px->mode as a string)
  - px->fe_counters.intercepted_req (stats and redirects ?)
  - px->fe_counters.cum_conn
  - px->fe_counters.cps_max
  - px->fe_conn_per_sec

All the last ones make sense in the CSV, so they'll have to be added as well.
2016-03-11 17:08:04 +01:00
Willy Tarreau
9e561077e6 MEDIUM: stats: convert stats_dump_fe_stats() to use stats_dump_fields_csv()
This function now only fills the relevant fields with raw values and
calls stats_dump_fields_csv() for the CSV part. The output remains
exactly the same for now.
2016-03-11 17:08:04 +01:00
Willy Tarreau
82a8602da7 MINOR: stats: create stats fields storage and CSV dump function
The new function stats_dump_fields_csv() currently walks over all CSV
fields and prints all non-empty ones as one line. Strings are csv-encoded
on the fly.
2016-03-11 17:08:04 +01:00
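For reference, on-the-fly CSV encoding usually boils down to quoting a field
and doubling embedded quotes only when needed; a generic RFC 4180-style sketch
(this is not haproxy's csv_enc_append(), just the technique):

  #include <stdio.h>
  #include <string.h>

  /* write one field, quoting it only when it contains a delimiter or quote */
  static void csv_enc(FILE *out, const char *s)
  {
      if (strpbrk(s, ",\"\r\n")) {
          fputc('"', out);
          for (; *s; s++) {
              if (*s == '"')
                  fputc('"', out);   /* escape by doubling */
              fputc(*s, out);
          }
          fputc('"', out);
      }
      else {
          fputs(s, out);
      }
  }

  int main(void)
  {
      const char *fields[] = { "px1", "L7OK", "a,b", "say \"hi\"", "" };
      size_t i;

      for (i = 0; i < sizeof(fields) / sizeof(fields[0]); i++) {
          if (i)
              fputc(',', stdout);
          csv_enc(stdout, fields[i]);   /* empty fields simply produce empty columns */
      }
      fputc('\n', stdout);
      return 0;
  }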
Willy Tarreau
f614229b31 MINOR: stats: don't hard-code the CSV fields list anymore
Now the list of CSV fields is directly taken from the table.
2016-03-11 17:08:04 +01:00
Willy Tarreau
8e20514283 MINOR: stats: add a declaration of all stats fields
This is in preparation for a unifying of the stats output between the
multiple formats. The long-term goal will be that HTML stats are built
from the array used to produce the CSV output in order to ensure that
no information is missing in any format.
2016-03-11 17:08:04 +01:00
Willy Tarreau
1b4ba1e905 MEDIUM: stats: make use of stats_dump_info_fields() for "show info"
Now we can get rid of the huge chunk_printf() and use this new function
instead.
2016-03-11 17:08:04 +01:00
Willy Tarreau
bf95cba0e4 MINOR: stats: add stats_dump_info_fields() to dump one field per line
This function dumps non-empty fields, one per line with their name and
values, in the same format as is currently used by "show info". It relies
on previously added stats_emit_raw_data_field().
2016-03-11 17:08:04 +01:00
Willy Tarreau
638d40a193 MINOR: stats: add a function to emit fields into a chunk
New function stats_emit_raw_data_field() may be used to write a field
into a chunk. It takes care of the field format to know how to print
it.
2016-03-11 17:08:04 +01:00
Willy Tarreau
4529c07c0e MEDIUM: stats: fill all the show info elements prior to displaying them
The table is completely filled with all relevant information. Only the
fields that should appear are presented. The description field is now
properly omitted if not set, instead of being reported as empty.
2016-03-11 17:08:04 +01:00
Willy Tarreau
c4832de7a0 MINOR: stats: add all the "show info" fields in a table
This will be used to dump them in the most appropriate format.
2016-03-11 17:08:04 +01:00
Willy Tarreau
e237fe1172 BUG/MINOR: ssl: fix usage of the various sample fetch functions
Technically speaking, many SSL sample fetch functions act on the
connection and depend on USE_L5CLI on the client side, which means
they're usable as soon as a handshake is completed on a connection.
This means that the test consisting in refusing to call them when
the stream is NULL will prevent them from working when we implement
the tcp-request session ruleset. Better fix this now. The fix consists
in using smp->sess->origin when they're called for the front connection,
and smp->strm->si[1].end when called for the back connection.

There is currently no known side effect for this issue, though it had
better be backported into 1.6 so that the code base remains consistent.
2016-03-10 17:28:05 +01:00
Willy Tarreau
4cefbc0752 CLEANUP: payload: remove useless and confusing nullity checks for channel buffer
The buffer could not be null by definition since we moved the stream out
of the session. It's the stream which gained the ability to be NULL (hence
the recent fix for this case). The checks in place are useless and make one
think this situation can happen.
2016-03-10 17:28:04 +01:00
Willy Tarreau
6204cd9f27 BUG/MAJOR: vars: always retrieve the stream and session from the sample
This is the continuation of previous patch called "BUG/MAJOR: samples:
check smp->strm before using it".

It happens that variables may have a session-wide scope, and that their
session is retrieved by dereferencing the stream. But nothing prevents them
from being used from a streamless context such as tcp-request connection,
thus crashing the process. Example :

    tcp-request connection accept if { src,set-var(sess.foo) -m found }

In order to fix this, we have to always ensure that variable manipulation
only happens via the sample, which contains the correct owner and context,
and that we never use one from a different source. This results in quite a
large change since a lot of functions are indirectly involved in the call
chain, but the change is easy to follow.

This fix must be backported to 1.6, and requires the last two patches.
2016-03-10 17:28:04 +01:00
Willy Tarreau
7560dd4b6a MINOR: sample: always set a new sample's owner before evaluating it
Some functions like sample_conv_var2smp(), var_get_byname(), and
var_set_byname() directly or indirectly need to access the current
stream and/or session and must find it in the sample itself and not
as a distinct argument. Thus we first need to call smp_set_owner()
prior to each such calls.
2016-03-10 16:42:58 +01:00
Willy Tarreau
1777ea63e0 MINOR: sample: add a new helper to initialize the owner of a sample
Since commit 6879ad3 ("MEDIUM: sample: fill the struct sample with the
session, proxy and stream pointers") merged in 1.6-dev2, the sample
contains the pointer to the stream and sample fetch functions as well
as converters use it heavily. This requires a lot of call places
to initialize 4 fields, and it was even forgotten at a few places.

This patch provides a convenient helper to initialize all these fields
at once, making it easy to prepare a new sample from a previous one for
example.

A few call places were cleaned up to make use of it. It will be needed
by further fixes.

At one place in the Lua code, it was moved earlier because we used to
call sample casts with a non completely initialized sample, which is
not clean even though at the moment there are no consequences.
2016-03-10 16:42:58 +01:00
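A sketch of what such an initialization helper looks like, assuming the four
fields are the proxy, session and stream pointers plus the fetch options (the
structure below is a reduced stand-in, not the real struct sample; refer to the
sample header for the actual definition):

  /* simplified stand-ins for the real haproxy types, for illustration only */
  struct proxy;
  struct session;
  struct stream;

  struct sample_ctx {              /* reduced view of the sample's owner fields */
      struct proxy   *px;
      struct session *sess;
      struct stream  *strm;
      unsigned int    opt;
  };

  /* set the four owner fields in one place instead of at every call site */
  static inline void smp_set_owner_sketch(struct sample_ctx *smp, struct proxy *px,
                                          struct session *sess, struct stream *strm,
                                          unsigned int opt)
  {
      smp->px   = px;
      smp->sess = sess;
      smp->strm = strm;
      smp->opt  = opt;
  }

  int main(void)
  {
      struct sample_ctx smp;
      smp_set_owner_sketch(&smp, 0, 0, 0, 0);   /* done before each sample evaluation */
      return 0;
  }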
Willy Tarreau
be508f1580 BUG/MAJOR: samples: check smp->strm before using it
Since commit 6879ad3 ("MEDIUM: sample: fill the struct sample with the
session, proxy and stream pointers") merged in 1.6-dev2, the sample
contains the pointer to the stream and sample fetch functions as well
as converters use it heavily.

The problem is that earlier commit 87b0966 ("REORG/MAJOR: session:
rename the "session" entity to "stream"") had split the session and
stream resulting in the possibility for smp->strm to be NULL before
the stream was initialized. This is what happens in tcp-request
connection rulesets, as discovered by Baptiste.

The sample fetch functions must now check that smp->strm is valid
before using it. An alternative could consist in using a dummy stream
with nothing in it to avoid some checks but it would only result in
deferring them to the next step anyway, and making it harder to detect
that a stream is valid or the dummy one.

There is still an issue with variables which requires a complete
independent fix. They use strm->sess to find the session with strm
possibly NULL and passed as an argument. All call places indirectly
use smp->strm to build strm. So the problem is there but the API needs
to be changed to remove this duplicate argument that makes it much
harder to know what pointer to use.

This fix must be backported to 1.6, as well as the next one fixing
variables.
2016-03-10 16:42:58 +01:00
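In practice the guard is a one-line check at the top of each fetch function;
a schematic example with a simplified sample structure (the real fetch
prototype takes more arguments than this sketch):

  #include <stddef.h>

  struct stream;                        /* opaque here */

  struct sample_min {                   /* reduced view: only the fields used below */
      struct stream *strm;
      long data;
  };

  /* a sample fetch that needs the stream must bail out cleanly when called
   * from a streamless context such as "tcp-request connection" rules */
  static int smp_fetch_needs_stream(struct sample_min *smp)
  {
      if (!smp->strm)                   /* no stream yet: report "no sample" */
          return 0;
      smp->data = 1;                    /* ...normally derived from the stream */
      return 1;
  }

  int main(void)
  {
      struct sample_min smp = { .strm = NULL };
      return smp_fetch_needs_stream(&smp);   /* returns 0: the fetch refuses to run */
  }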
Willy Tarreau
1a786d7f33 BUG/MINOR: tcpcheck: fix incorrect list usage resulting in failure to load certain configs
Commit baf9794 ("BUG/MINOR: tcpcheck: conf parsing error when no port
configured on server and first rule(s) is (are) COMMENT") was wrong, it
incorrectly implemented a list access by dereferencing a pointer of an
incorrect type resulting in checking the next element in the list. The
consequence is that it stops before the last comment instead of at the
last one and skips the first rule. In the end, rules starting with
comments are not affected, but if a sequence of checks directly starts
with connect, it is then skipped and this is visible when no port is
configured on the server line as the config refuses to load.

There was another occurrence of the same bug a few lines below, both
of them were fixed. Tests were made on different configs and confirm
the new fix is OK.

This fix must be backported to 1.6.
2016-03-08 15:20:25 +01:00
Andreas Seltenreich
78f3595f4d CLEANUP: map: Avoid memory leak in out-of-memory condition.
This memory leak of about 100 bytes occurs only if there is an error
condition during evaluation of a "map" directive in the configuration
file.  This evaluation only happens once on startup because haproxy
does not have a mechanism for re-loading the configuration file during
run-time.  The startup will be aborted anyway due to error conditions
raised.

Nevertheless fix it to silence warnings of static code analysis tools
and be safe against future revisions of the code.

Found in haproxy 1.5.14.
2016-03-08 12:55:06 +01:00
Andreas Seltenreich
f0653192e3 CLEANUP: pattern: Ignore unknown samples in pat_match_ip().
Ignore samples that are neither SMP_T_IPV4 nor SMP_T_IPV6 instead of
matching with an uninitialized value in this case.

This situation should not occur in the current codebase but triggers
warnings in static code analysis tools.

Found in haproxy 1.5.
2016-03-08 12:55:06 +01:00
Andreas Seltenreich
9727cf482c CLEANUP: stats: Avoid computation with uninitialized bits.
stats_map_lookup() sets bit SMP_F_CONST in the uninitialized member
flags of a stack-allocated sample, leaving the other bits
uninitialized.  All code paths that can access the struct only ever
check for this specific flag, so there is no risk of unintended
behavior.

Nevertheless fix it as it triggers warnings in static code analysis
tools and might become a problem in future revisions of the code.

Problem found in version 1.5.
2016-03-08 12:55:06 +01:00
Cyril Bonté
23d19d669b BUG/MEDIUM: cfgparse: wrong argument offset after parsing server "sni" keyword
Owen Marshall reported an issue depending on the server keywords order in the
configuration.

Working line :
  server dev1 <ip>:<port> check inter 5000 ssl verify none sni req.hdr(Host)

Non working line :
  server dev1 <ip>:<port> check inter 5000 ssl sni req.hdr(Host) verify none

Indeed, both parse_server() and srv_parse_sni() modified the current argument
offset at the same time. To fix the issue, srv_parse_sni() can work on a local
copy of the offset, leaving parse_server() responsible for the actual value.

This fix must be backported to 1.6.
2016-03-07 23:47:05 +01:00
Willy Tarreau
a3aa9e6840 MINOR: systemd: ensure a reload doesn't mask a stop
If a SIGHUP/SIGUSR2 is sent immediately after a SIGTERM/SIGINT and
before wait() is notified, it will mask it since there's no queue,
only a copy of the last received signal. Let's add a special check
before overwriting the signal so that SIGTERM/SIGINT are not masked.
2016-02-27 08:28:43 +01:00
Willy Tarreau
6c2f7955e7 BUG/MINOR: systemd: propagate the correct signal to haproxy
Some people report that sometimes there's a collection of old processes
after a restart of the systemd wrapper. It's not surprising when reading
the code: the SIGTERM is propagated as a SIGINT, which asks for a graceful
stop instead. If people ask for termination, we should terminate.
2016-02-27 08:28:43 +01:00
Willy Tarreau
2fadbe5b0a BUG/MINOR: systemd: report the correct signal in debug message output
Commit c54bdd2 ("MINOR: Also accept SIGHUP/SIGTERM in systemd-wrapper")
added support for extra signals but did not adapt the debug message,
which continues to report incorrect signal types. Let's pass the signal
number to the do_restart() and do_shutdown() functions so that the output
matches the signal that was received.
2016-02-27 08:28:42 +01:00
Willy Tarreau
1ab5e8642a BUG/MINOR: systemd: ensure we don't miss signals
There's a race condition in the systemd wrapper. People who restart it
too fast report old processes remaining there. Here if the signal is
caught before entering the "while" loop we block indefinitely on the
wait() call. Let's check the caught signal before calling wait(). It
doesn't completely close the race, a window still exists between the
test of the flag and the call to wait(), but it's much smaller. A
safer solution could involve pselect() to wait for the signal
delivery.
2016-02-27 08:28:42 +01:00
Willy Tarreau
320ec2a745 BUG/MEDIUM: chunks: always reject negative-length chunks
The recent addition of "show env" on the CLI has revealed an interesting
design bug. Chunks are supposed to support a negative length to indicate
that they carry no data. chunk_printf() sets this size to -1 if the string
is too large for the buffer. At a few places in the http engine we may end
up with trash.len = -1. But bi_putchk(), chunk_appendf() and a few other
chunk consumers don't consider this case as possible and will use such a
chunk, possibly restoring an invalid string or trying to copy -1 bytes.

This fix takes care of clarifying the situation in a backportable way
where such sizes are used, so that a negative length indicating an error
remains present until the chunk is reinitialized or overwritten. But a
cleaner design adjustment needs to be done so that there's a clear contract
on how to use these chunks. At first glance it doesn't seem *that* useful
to support negative sizes, so probably this is what should change.

This fix must be backported to 1.6 and 1.5.
2016-02-25 16:24:14 +01:00
Thierry Fournier
09a9178311 MINOR: server: generalize the "updater" source
The function server_parse_addr_change_request() contains a hardcoded
updater source, "stats command". This function can be called from sources
other than the "stats command", so this patch makes this argument
generic.
2016-02-24 23:37:39 +01:00
Thierry Fournier
d35b7a6d93 CLEANUP: server: add "const" to some message strings
"updater" is used in "read only" mode, so I add a const qualifier
to the variable declaration.
2016-02-24 23:37:39 +01:00
Thierry Fournier
c62df8463b BUG/MINOR: server: fix the format of the warning on address change
When the server address is changed, a message with an unrequired '\n' or
'.' is displayed, like this:

   [WARNING] 054/101137 (3229) : zzzz/s3 changed its IP from 127.0.0.1 to ::55 by stats command
   .

This patch removes the '\n' which is sent before the '.'.

This patch must be backported to 1.6
2016-02-24 23:37:39 +01:00
Cyril Bonté
0618195a11 BUG/MEDIUM: stats: stats bind-process doesn't propagate the process mask correctly
With nbproc > 1, it is possible to specify on which process the stats socket
will be bound using "stats bind-process", but the behaviour was not correct,
ignoring the value in some configurations.

Example :
global
  nbproc 4
  stats bind-process 1
  stats socket /var/run/haproxy.sock

With such a configuration, all the processes will listen on the stats socket.
As a workaround, it is also possible to declare a "process" keyword on
the "stats stocket" line.

The patch must be applied to 1.7, 1.6 and 1.5
2016-02-24 07:38:37 +01:00
Thierry Fournier
9d5fb6d6a0 BUG/MINOR: lua: Useless copy
A value is copied twice onto the stack, but only one copy is useful. The
second copy remains unused on the stack and takes up room for nothing.

This patch removes the second copy.

Must be backported to 1.6
2016-02-23 22:42:47 +01:00
Thierry Fournier
0164f200ab BUG/MAJOR: lua: applets can't sleep.
This patch must be backported to 1.6

The hlua_yield() function returns the required sleep time. The Lua core must
resume the execution after the required time. The core dedicated to
the HTTP and TCP applets doesn't implement the wake-up function. This is an
oversight.

This patch fixes this.
2016-02-20 18:55:33 +01:00
Pieter Baauw
235fcfcf14 MINOR: mailers: make it possible to configure the connection timeout
This patch introduces a configurable connection timeout for mailers
with a new "timeout mail <time>" directive.

Acked-by: Simon Horman <horms@verge.net.au>
2016-02-20 15:33:06 +01:00
Thierry Fournier
ac88cfe452 MEDIUM: dns: add a "resolve-net" option which allow to prefer an ip in a network
This option prioritizes the choice of an IP address matching a network. This is
useful with clouds to prefer a local IP. In some cases, a cloud high
availability service can be announced with many IP addresses across many
different datacenters. The latency between datacenters is not negligible, so
this patch permits preferring a local datacenter. If no address matches the
configured network, another address is selected.
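
A hedged sketch of how the option may appear on a server line (backend and
resolvers names, hostname and the comma-separated network list are
illustrative assumptions):

  backend app
    # prefer addresses from the listed networks when several are returned
    server srv1 app.example.com:443 check resolvers mydns resolve-net 10.0.0.0/8,172.16.0.0/12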
2016-02-19 14:37:49 +01:00
Thierry Fournier
ada348459f MEDIUM: dns: extract options
DNS selection preferences are currently declared inline in the
struct server. They are copied from the server struct to the
dns_resolution struct for each resolution.

The next patches add new preference options, and it is not a good
approach to copy all the configuration information before each DNS
resolution.

This patch extracts the configuration preferences from the struct
server and declares a new dedicated struct. Only a pointer to this
new struct will be copied before each DNS resolution.
2016-02-19 14:37:46 +01:00
Thierry Fournier
70473a5f8c MINOR: common: mask conversion
Add a function which converts a network mask from bit-length form
to struct in*_addr form.
2016-02-19 14:37:41 +01:00
Thierry Fournier
e726b14d23 DOC: lua: copyrights 2016-02-19 13:24:21 +01:00
Thierry Fournier
49d4842e98 BUG/MAJOR: lua: segfault using Concat object
Concat object is based on "luaL_Buffer". The luaL_Buffer documentation says:

   During its normal operation, a string buffer uses a variable number of stack
   slots. So, while using a buffer, you cannot assume that you know where the
   top of the stack is. You can use the stack between successive calls to buffer
   operations as long as that use is balanced; that is, when you call a buffer
   operation, the stack is at the same level it was immediately after the
   previous buffer operation. (The only exception to this rule is
   luaL_addvalue.) After calling luaL_pushresult the stack is back to its level
   when the buffer was initialized, plus the final string on its top.

So, the stack cannot be manipulated between the first call to the function
"luaL_buffinit()" and the last call to the function "luaL_pushresult()" because
we cannot know the stack status.

Moreover, the memory used by these functions seems to be collected by the GC, so
if the GC is triggered while the Concat object is in use, some released memory
may be accessed.

This patch rewrites the Concat class without the "luaL_Buffer" system. It uses
"userdata()" for the memory allocation of the buffer strings.
2016-02-19 13:24:09 +01:00
Pieter Baauw
5e0964ed01 MINOR: mailers: use <CRLF> for all line endings
Not doing so causes issues with Exchange2013 not processing the message
body from the email. The specification seems to require this as correct
behavior: https://www.ietf.org/rfc/rfc2821.txt # 2.3.7 Lines

 > SMTP client implementations MUST NOT transmit "bare" "CR" or "LF" characters.

This patch should be backported to 1.6.

Acked-by: Simon Horman <horms@verge.net.au>
2016-02-17 10:19:09 +01:00
Pieter Baauw
46af170e41 MINOR: mailers: increase default timeout to 10 seconds
This allows the TCP connection to send multiple SYN packets, so one lost
packet does not cause the mail to be lost. It changes the socket timeout
from 2 to 10 seconds, which allows 3 SYN packets to be sent and leaves
a little time to wait for their reply.

This patch should be backported to 1.6.

Acked-by: Simon Horman <horms@verge.net.au>
2016-02-17 10:19:08 +01:00
Pieter Baauw
7a91a0e1e5 MEDIUM: cfgparse: reject incorrect 'timeout retry' keyword spelling in resolvers
If, for example, it was written as 'timeout retri 1s' or 'timeout wrong 1s',
this would be used for the retry timeout value. The only timeout setting
currently supported in the resolvers section is 'retry'; other settings are
still parsed as before this patch so as not to break existing configurations.
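
For reference, a correctly spelled setting in a resolvers section looks like
this (names, address and value are illustrative):

  resolvers mydns
    nameserver dns1 192.168.0.53:53
    timeout retry 1s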

A less strict version will be backported to 1.6.
2016-02-17 10:10:06 +01:00
Willy Tarreau
1d54972789 MEDIUM: config: allow to manipulate environment variables in the global section
With new init systems such as systemd, environment variables became a
real mess because they're only considered on startup but not on reload
since the init script's variables cannot be passed to the process that
is signaled to reload.

This commit introduces an alternative method consisting in making it
possible to modify the environment from the global section with directives
like "setenv", "unsetenv", "presetenv" and "resetenv".

Since haproxy supports loading multiple config files, it now becomes
possible to put the host-dependent variables in one file and to
distribute the rest of the configuration to all nodes, without having
to deal with the init system's deficiencies.

Environment changes take effect immediately when the directives are
processed, so it's possible to perform the same operations as are
usually performed in regular service config files.
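
A minimal sketch of the new directives in the global section (variable names
and values are illustrative, and the exact argument forms are an assumption
based on the directive names above):

  global
    # set or overwrite a variable
    setenv    NODE_NAME lb1
    # set a variable only if it is not already defined
    presetenv STATS_PORT 8404
    # remove a variable from the environment
    unsetenv  http_proxy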
2016-02-16 12:44:54 +01:00
Willy Tarreau
ae79572f89 MINOR: cli: add a new "show env" command
Using environment variables in configuration files can make troubleshooting
complicated because there's no easy way to verify that the variables are
correct. This patch introduces a new "show env" command which displays the
whole environment on the CLI, one variable per line.

The socket must at least have level operator to display the environment.
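
For example, assuming a stats socket configured with at least operator level
(socket path illustrative), "show env" can then be issued over the CLI, e.g.
with socat:

  global
    stats socket /var/run/haproxy.sock level operator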
2016-02-16 11:43:03 +01:00
Willy Tarreau
3724da1261 BUG/MEDIUM: ssl: fix off-by-one in NPN list allocation
After seeing previous ALPN fix, I suspected that NPN code was wrong
as well, and indeed it was since ALPN was copied from it. This fix
must be backported into 1.6 and 1.5.
2016-02-12 17:11:12 +01:00
Marcoen Hirschberg
bef6091cff BUG/MEDIUM: ssl: fix off-by-one in ALPN list allocation
The first time I tried it (1.6.3) I got a segmentation fault :(

After some investigation with gdb and valgrind I found the
problem. memcpy() copies past an allocated buffer in
"bind_parse_alpn". This patch fixes it.

[wt: this fix must be backported into 1.6 and 1.5]
2016-02-12 17:10:52 +01:00
Dragan Dosen
835b9212f6 MEDIUM: log: add a new log format flag "E"
The +E mode escapes characters '"', '\' and ']' with '\' as prefix. It
mostly makes sense to use it in the RFC5424 structured-data log formats.

Example:

log-format-sd %{+Q,+E}o\ [exampleSDID@1234\ header=%[capture.req.hdr(0)]]
2016-02-12 13:36:47 +01:00
Dragan Dosen
0edd10925d MINOR: standard: add function "escape_chunk"
This function tries to prefix all characters tagged in the <map> with the
<escape> character. The specified <chunk> contains the input to be
escaped.
2016-02-12 13:36:47 +01:00
Thierry Fournier
1de1659923 MINOR: lua: Add concat class
This patch adds the Concat class. This class provides a fast
way to concatenate strings.
2016-02-12 11:08:53 +01:00
Thierry Fournier
5351827560 MINOR: lua: merge function
This patch merges the last imported functions into one, because
the function hlua_metatype is only used by hlua_checkudata.
This patch also fixes the compilation error caused by the
copy of the code.
2016-02-12 11:08:53 +01:00
Thierry Fournier
9e7e3ea991 MINOR: lua: move common function
This patch moves the function hlua_checkudata which checks that
an object contains the expected class_reference as metatable.
This function is commonly used by all the Lua functions.
The function hlua_metatype is also moved.
2016-02-12 11:08:53 +01:00
Thierry Fournier
1550d5d035 MINOR: lua: Add date functions
This patch adds date parsing functions to the Lua API. These functions
are useful for parsing standard HTTP dates provided in headers.
2016-02-12 11:08:53 +01:00
Thierry Fournier
9312794ed7 MINOR: standard: add RFC HTTP date parser
This parser takes a string containing an HTTP date. It returns
a broken-down time struct. We must consider this
time as GMT. Maybe later the timezone will be taken into account.
2016-02-12 11:08:53 +01:00
Thierry Fournier
b1f46561a0 MINOR: lua: add "now" time function
This function returns the current time to Lua.
2016-02-12 11:08:53 +01:00
Thierry Fournier
fb0b5467ca MINOR: lua: file dedicated to unsafe functions
When Lua executes functions from its API, these can throw an error.
These functions must be executed in a special environment which catches
these errors, otherwise a critical error (like a segfault) can be raised.

This patch adds a C file called "hlua_fcn.c" which collects all the
Lua/C functions needing a safe environment for their execution.
2016-02-12 11:08:53 +01:00
Thierry Fournier
75933d48fe BUG/MINOR: lua: unsafe initialization
During the Lua HAProxy initialisation, C functions using Lua calls
can throw an error. This error was not caught. This patch fixes
this behaviour.
2016-02-11 19:30:19 +01:00
Thierry Fournier
8feaa661b6 MINOR: map: Add regex matching replacement
This patch declares a new map type which provides a string based on
an input string with back references replaced by the content matched
by the regex.
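
A hedged illustration of the idea, assuming the new matching is exposed as the
map_regm converter (converter name, map file path and header name are
assumptions, not taken from this message):

  frontend fe
    # rewrites the Host value through regex map entries that use back references
    http-request set-header X-Rewritten %[req.hdr(host),map_regm(/etc/haproxy/rewrite.map)]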
2016-02-10 23:38:34 +01:00
Christopher Faulet
443ea1a242 MINOR: filters: Extract proxy stuff from the struct filter
Now, filter's configuration (.id, .conf and .ops fields) is stored in the
structure 'flt_conf'. So proxies own a flt_conf list instead of a filter
list. When a filter is attached to a stream, it gets a pointer to its
configuration. This avoids mixing the filter's context (owned by a stream) and
its configuration (owned by a proxy). It also saves 2 pointers per filter
instance.
2016-02-09 14:53:15 +01:00
Christopher Faulet
e6c3b69be0 MINOR: filters: Add a filter example
The "trace" filter has been added. It defines all available callbacks and for
each one it prints a trace message. To enable it:

  listener test
      ...
      filter trace
      ...
2016-02-09 14:53:15 +01:00
Christopher Faulet
75e2eb66e5 MINOR: filters/http: Forward remaining data when a channel has no "data" filters
This is an improvement, especially when the message body is big. Before this
patch, remaining data were forwarded only when there was no filter on the stream.
Now, the forwarding is triggered when there is no "data" filter on the channel.
When no filter is used, there is no difference, but when at least one filter is
used, the gain can be really significant.
2016-02-09 14:53:15 +01:00
Christopher Faulet
113f7decfc MINOR: filters/http: Slightly update the parsing of chunks
Now, http_parse_chunk_size and http_skip_chunk_crlf return the number of bytes
parsed on success. http_skip_chunk_crlf does not use msg->sol anymore.

On the other hand, http_forward_trailers is unchanged. It returns >0 if the end
of trailers is reached and 0 if not. In all cases (except if an error is
encountered), msg->sol contains the length of the last parsed part of the
trailer headers.

Internal doc and comments about msg->sol have been updated accordingly.
2016-02-09 14:53:15 +01:00
Christopher Faulet
b77c5c2693 MEDIUM: filters: Optimize the HTTP compression for chunk encoded response
Instead of compressing all chunks as they come, we store them in a temporary
buffer. The compression happens during the forwarding phase. This change speeds
up the compression of chunked responses.
2016-02-09 14:53:15 +01:00
Christopher Faulet
3e7bc67722 MINOR: filters: Remove unused or useless stuff and do small optimizations 2016-02-09 14:53:15 +01:00
Christopher Faulet
da02e17d42 MAJOR: filters: Require explicit registration to filter HTTP body and TCP data
Before, functions to filter the HTTP body (and TCP data) were called from the
moment at least one filter was attached to the stream. If no filter is interested
in these data, this uselessly slows down data parsing.
A good example is the HTTP compression filter. Depending on the request and
response headers, the response compression can be enabled or not. So it is really
preferable to call it only when enabled.

So, now, to filter HTTP/TCP data, a filter must use the function
register_data_filter. For TCP streams, this function can be called only
once. But for HTTP streams, when needed, it must be called for each HTTP request
or HTTP response.
Only registered filters will be called during data parsing. At any time, a
filter can be unregistered by calling the function unregister_data_filter.
2016-02-09 14:53:15 +01:00
Christopher Faulet
fcf035cb5a MINOR: filters: Add stream_filters structure to hide filters info
From the stream point of view, this new structure is opaque. It hides the
filters' implementation details. So, the impact of future optimizations will be
reduced (well, we hope so...).

Some small improvements have been made in filters.c to avoid useless checks.
2016-02-09 14:53:15 +01:00
Christopher Faulet
dbe34eb8cb MEDIUM: filters/http: Move body parsing of HTTP messages in dedicated functions
Now body parsing is done in http_msg_forward_body and
http_msg_forward_chunked_body functions, regardless of whether we parse a
request or a response.
Parsing result is still handled in http_request_forward_body and
http_response_forward_body functions.

This patch will ease future optimizations, mainly on filters.
2016-02-09 14:53:15 +01:00
Christopher Faulet
309c6418b0 MEDIUM: filters: Replace filter_http_headers callback by an analyzer
This new analyzer will be called for each HTTP request/response, before the
parsing of the body. It is identified by AN_FLT_HTTP_HDRS.

Special care was taken about the following condition :

  * the frontend is a TCP proxy
  * filters are defined in the frontend section
  * the selected backend is an HTTP proxy

So, this patch explicitly adds the AN_FLT_HTTP_HDRS analyzer on the request and
the response channels when the backend is an HTTP proxy and when there are
filters attached to the stream.
This patch also simplifies the http_request_forward_body and
http_response_forward_body functions.
2016-02-09 14:53:15 +01:00
Christopher Faulet
2fb2880caf MEDIUM: filters: remove http_start_chunk, http_last_chunk and http_chunk_end
For chunked HTTP requests/responses, the body filtering can be really
expensive. In the worst case (many chunks of 1 byte), the filter overhead is
3 calls per chunk. While the http_data callback is useful, the others are just
informative.

So these callbacks have been removed. Of course, existing filters (trace and
compression) have been updated accordingly. For the HTTP compression filter, the
update is quite large. Its implementation is now closer to the old one.
2016-02-09 14:53:15 +01:00
Christopher Faulet
3e34429515 MEDIUM: filters: Use macros to call filters callbacks to speed-up processing
When no filter is attached to the stream, the CPU footprint due to the calls to
filters_* functions is huge, especially for chunk-encoded messages. Using macros
to check if we have some filters or not is a great improvement.

Furthermore, instead of checking whether the filter list is empty, we introduce a flag
to know if filters are attached or not to a stream.
2016-02-09 14:53:15 +01:00
Christopher Faulet
92d3638d2d MAJOR: filters/http: Rewrite the HTTP compression as a filter
HTTP compression has been rewritten to use the filter API. This is more a PoC
than anything else for now. It allocates memory to work. So, if only for that, it
should be rewritten.

In the meantime, the implementation has been refactored to allow its use with
other filters. However, there are limitations that should be respected:

  - No filter placed after the compression one is allowed to change input data
    (in 'http_data' callback).
  - No filter placed before the compression one is allowed to change forwarded
    data (in 'http_forward_data' callback).

For now, these limitations are informal, so you should be careful when you use
several filters.

About the configuration, 'compression' keywords are still supported and must be
used to configure the HTTP compression behavior. In the absence of a 'filter'
line for the compression filter, it is added to the filter chain when the first
'compression' line is parsed. This is an easy way to proceed when you do not use
other filters. But if another filter exists, an error is reported so that the
user must explicitly declare the filter.

For example:

  listen tst
      ...
      compression algo gzip
      compression offload
      ...
      filter flt_1
      filter compression
      filter flt_2
      ...
2016-02-09 14:53:15 +01:00
Christopher Faulet
3d97c90974 REORG: filters: Prepare creation of the HTTP compression filter
HTTP compression will be moved into a true filter. To prepare the ground, some
functions have been moved into a dedicated file. The idea is to keep everything about
compression algos in compression.c and everything related to the filtering in
flt_http_comp.c.

For now, a header has been added to help during the transition. It will be
removed later.

The unused empty ACL keyword list was removed. The "compression" keyword
parser was moved from cfgparse.c to flt_http_comp.c.
2016-02-09 14:53:15 +01:00
Christopher Faulet
02c7b22267 MINOR: filters: Do not reset stream analyzers if the client is gone
When all callbacks have been called for all filters registered on a stream, if
we are waiting for the next HTTP request, we must reset stream analyzers. But it
is useless to do so if the client has already closed the connection.
2016-02-09 14:53:15 +01:00
Christopher Faulet
d7c9196ae5 MAJOR: filters: Add filters support
This patch adds the support of filters in HAProxy. The main idea is to have a
way to "easily" extend HAProxy by adding some "modules", called filters, that
will be able to change HAProxy behavior in a programmatic way.

To do so, many entry points have been added in the code to let filters hook up to
different steps of the processing. A filter must define a flt_ops structure
(see include/types/filters.h for details). This structure contains all available
callbacks that a filter can define:

struct flt_ops {
       /*
        * Callbacks to manage the filter lifecycle
        */
       int  (*init)  (struct proxy *p);
       void (*deinit)(struct proxy *p);
       int  (*check) (struct proxy *p);

        /*
         * Stream callbacks
         */
        void (*stream_start)     (struct stream *s);
        void (*stream_accept)    (struct stream *s);
        void (*session_establish)(struct stream *s);
        void (*stream_stop)      (struct stream *s);

       /*
        * HTTP callbacks
        */
       int  (*http_start)         (struct stream *s, struct http_msg *msg);
       int  (*http_start_body)    (struct stream *s, struct http_msg *msg);
       int  (*http_start_chunk)   (struct stream *s, struct http_msg *msg);
       int  (*http_data)          (struct stream *s, struct http_msg *msg);
       int  (*http_last_chunk)    (struct stream *s, struct http_msg *msg);
       int  (*http_end_chunk)     (struct stream *s, struct http_msg *msg);
       int  (*http_chunk_trailers)(struct stream *s, struct http_msg *msg);
       int  (*http_end_body)      (struct stream *s, struct http_msg *msg);
       void (*http_end)           (struct stream *s, struct http_msg *msg);
       void (*http_reset)         (struct stream *s, struct http_msg *msg);
       int  (*http_pre_process)   (struct stream *s, struct http_msg *msg);
       int  (*http_post_process)  (struct stream *s, struct http_msg *msg);
       void (*http_reply)         (struct stream *s, short status,
                                   const struct chunk *msg);
};

To declare and use a filter, in the configuration, the "filter" keyword must be
used in a listener/frontend section:

  frontend test
    ...
    filter <FILTER-NAME> [OPTIONS...]

The filter referenced by the <FILTER-NAME> must declare a configuration parser
on its own name to fill the flt_ops and filter_conf fields in the proxy's
structure. An example will be provided later to make it perfectly clear.

For now, filters cannot be used in a backend section. But this is only a matter of
time. Documentation will also be added later. This is the first commit of a long
list about filters.

It is possible to have several filters on the same listener/frontend. These
filters are stored in an array of at most MAX_FILTERS elements (defined in
include/types/filters.h). Again, this will be replaced later by a list of
filters.

The filter API has been highly refactored. Main changes are:

* Now, HA supports an infinite number of filters per proxy. To do so, filters
  are stored in a list.

* Because filters are stored in list, filters state has been moved from the
  channel structure to the filter structure. This is cleaner because there is no
  more info about filters in channel structure.

* It is possible to define filters on backends only. For such filters,
  stream_start/stream_stop callbacks are not called. Of course, it is possible
  to mix frontend and backend filters.

* Now, TCP streams are also filtered. All callbacks without the 'http_' prefix
  are called for all kinds of streams. In addition, 2 new callbacks were added to
  filter data exchanged through a TCP stream:

    - tcp_data: it is called when new data are available or when old unprocessed
      data are still waiting.

    - tcp_forward_data: it is called when some data can be consumed.

* New callbacks attached to channel were added:

    - channel_start_analyze: it is called when a filter is ready to process data
      exchanged through a channel. 2 new analyzers (a frontend and a backend)
      are attached to channels to call this callback. For a frontend filter, it
      is called before any other analyzer. For a backend filter, it is called
      when a backend is attached to a stream. So some processing cannot be
      filtered in that case.

    - channel_analyze: it is called before each analyzer attached to a channel,
      except analyzers responsible for data sending.

    - channel_end_analyze: it is called when all other analyzers have finished
      their processing. A new analyzer is attached to channels to call this
      callback. For a TCP stream, this is always the last one called. For an HTTP
      one, the callback is called when a request/response ends, so it is called
      one time for each request/response.

* 'session_established' callback has been removed. Everything that is done in
  this callback can be handled by 'channel_start_analyze' on the response
  channel.

* 'http_pre_process' and 'http_post_process' callbacks have been replaced by
  'channel_analyze'.

* 'http_start' callback has been replaced by 'http_headers'. This new one is
  called just before headers sending and parsing of the body.

* 'http_end' callback has been replaced by 'channel_end_analyze'.

* It is possible to set a forwarder for TCP channels. It was already possible to
  do it for HTTP ones.

* Forwarders can partially consumed forwardable data. For this reason a new
  HTTP message state was added before HTTP_MSG_DONE : HTTP_MSG_ENDING.

Now all filters can define corresponding callbacks (http_forward_data
and tcp_forward_data). Each filter owns 2 offsets relative to buf->p, next and
forward, to track, respectively, input data already parsed but not forwarded yet
by the filter and parsed data considered as forwarded by the filter. At any time,
we have the guarantee that a filter cannot parse or forward more input than
previous ones. And, of course, it cannot forward more input than it has
parsed. 2 macros have been added to retrieve these offsets: FLT_NXT and FLT_FWD.

In addition, 2 functions have been added to change the 'next size' and the
'forward size' of a filter. When a filter parses input data, it can alter these
data, so the size of these data can vary. This action has an effect on all
previous filters that must be handled. To do so, the function
'filter_change_next_size' must be called, passing the size variation. In the
same spirit, if a filter alter forwarded data, it must call the function
'filter_change_forward_size'. 'filter_change_next_size' can be called in
'http_data' and 'tcp_data' callbacks and only these ones. And
'filter_change_forward_size' can be called in 'http_forward_data' and
'tcp_forward_data' callbacks and only these ones. The data changes are the
filter's responsibility, but with some limitations. It must not change already
parsed/forwarded data or data that previous filters have not parsed/forwarded
yet.

Because filters can be used on backends, when the backend is set for a
stream, we add filters defined for this backend to the filter list of the
stream. But we must only do that when the backend and the frontend of the stream
are not the same. Otherwise the same filters are added a second time, leading to
undefined behavior.

The HTTP compression code had to be moved.

This also simplifies the http_response_forward_body function. To do so, the way
the data are forwarded has changed. Now, a filter (and only one) can forward data.
In a commit to come, this limitation will be removed to let all filters take part
in data forwarding. There are 2 new functions that filters should use to deal with
this feature:

 * flt_set_http_data_forwarder: This function sets the filter (using its id)
   that will forward data for the specified HTTP message. It is possible if it
   was not already set by another filter _AND_ if no data was yet forwarded
   (msg->msg_state <= HTTP_MSG_BODY). It returns -1 if an error occurs.

 * flt_http_data_forwarder: This function returns the filter id that will
   forward data for the specified HTTP message. If there is no forwarder set, it
   returns -1.

When an HTTP data forwarder is set for the response, the HTTP compression is
disabled. Of course, this is not definitive.
2016-02-09 14:53:15 +01:00
Raghu Udiyar
0d6b7a490e BUG/MINOR: stats: fix missing comma in stats on agent drain
The CSV stats format breaks when the agent changes the server state to drain.
Tools like hatop, metric or check agents will fail due to this. This
should be backported to 1.6.
2016-02-09 09:06:13 +01:00
Christopher Faulet
635c0adec2 BUG/MINOR: ssl: Be sure to use unique serial for regenerated certificates
The serial number for a generated certificate was computed using the requested
servername, without any variable/random part. It is not a problem from the
moment it is not regenerated.

But if the cache is disabled or when the certificate is evicted from the cache,
we may need to regenerate it. It is important to not reuse the same serial
number for the new certificate. Else clients (especially browsers) trigger a
warning because 2 certificates issued by the same CA have the same serial
number.

So now, the serial is a static variable initialized with now_ms (internal date
in milliseconds) and incremented at each new certificate generation.

(Ref MPS-2031)
2016-02-09 09:04:53 +01:00
Willy Tarreau
53f9685b72 BUG/MEDIUM: http-reuse: do not share private connections across backends
When working on the previous bug, it appeared that the case that was
triggering the bug would also occur between two backends, one of which
doesn't support http-reuse. The reason is that while the idle connection
is moved to the private pool, upon reuse we only check if it holds the
CO_FL_PRIVATE flag. And we don't set this flag when there's no reuse.

So let's always set it in this case, it will guarantee that no undesired
connection sharing may happen.

This fix must be backported to 1.6.
2016-02-03 21:23:08 +01:00
Willy Tarreau
0aae4806a3 BUG/MAJOR: http-reuse: fix risk of orphaned connections
There is a bug in connect_server() : we use si_attach_conn() to offer
the current session's connection to the session we're stealing the
connection from. Unfortunately, si_attach_conn() uses the standard data
connection operations while here we need to use the idle connection
operations.

This results in a situation where when the server's idle timeout strikes,
the read0 is silently ignored, causes the response channel to be shut down
for reads, and the connection remains attached. Next attempt to send a
request when using this connection simply results in nothing being done
because we try to send over an already closed connection. Worse, if the
client aborts, then no timeout remains at all and the session waits
forever and remains assigned to the server.

A more-or-less easy way to reproduce this bug is to have two concurrent
streams each connecting to a different server with "http-reuse aggressive",
typically a cache farm using a URL hash :

   stream1: GET /1 HTTP/1.1
   stream2: GET /2 HTTP/1.1
   stream1: GET /2 HTTP/1.1
   wait for the server 1's connection to timeout
   stream2: GET /1 HTTP/1.1

The connection hangs here, and "show sess all" shows a closed connection
with a SHUTR on the response channel.

The fix is very simple though not optimal. It consists in calling
si_idle_conn() again after attaching the connection. But in practice
it should not be done like this. The real issue is that there's no way
to cleanly attach a connection to a stream interface without changing
the connection's operations. So the API clearly needs to be revisited
to make such operations easier.

Many thanks to Yves Lafon from W3C for providing lots of useful dumps
and testing patches to help figure the root cause!

This fix must be backported to 1.6.
2016-02-03 21:23:08 +01:00
Willy Tarreau
fdfcc9d2b7 MINOR: stats: send content-length with the redirect to allow keep-alive
After a POST on the stats admin page, a 303 is emitted. Unfortunately
this 303 doesn't contain a content-length, which forces the connection
to be closed and reopened. Let's simply add a content-length: 0 to solve
this.
2016-01-27 20:22:36 +01:00
Willy Tarreau
ceeafb5efc BUG/CLEANUP: CLI: report the proper field states in "show sess"
Commit 16f649c ("REORG: polling: rename "fd_spec" to "fd_cache"")
missed the server-facing connection during the rename, so the old
names are still in use and add a bit of confusion during the
debugging.

This should be backported to 1.6 and 1.5.
2016-01-27 20:22:36 +01:00
Lukas Tribus
9f256d4d85 MINOR: unix: don't mention free ports on EAGAIN
When a connect() to a unix socket returns EAGAIN we talk about
"no free ports" in the error/debug message, which only makes
sense when using TCP.

Explain connect() failure and suggest troubleshooting server
backlog size.
2016-01-26 21:11:51 +01:00
Willy Tarreau
79c1e912bb BUG/MINOR: counters: make the sc-inc-gpc0 and sc-set-gpt0 touch the table
These two actions don't touch the table so the entry will expire and
the values will not be pushed to other peers. Also in the case of gpc0,
the gpc0_rate counter must be updated. The issue was reported by
Ruoshan Huang.

This fix needs to be backported to 1.6.
2016-01-25 14:56:33 +01:00
Willy Tarreau
49008c157e BUG/MINOR: stream: don't force retries if the server is DOWN
Arkadiy Kulev noticed that if a server is marked down while a connection
is trying to establish, we still insist on performing retries on
the same server, which is absurd. Better perform the redispatch if we
already know the server is down. Because of this, it's likely that the
observe-l4 and sudden-death mechanisms are not optimal and cannot help
much the connection which was used to detect the problem.

The fix should be backported to 1.6 and 1.5 at least.
2016-01-25 14:36:20 +01:00
Willy Tarreau
484b53da52 BUG/MEDIUM: buffers: do not round up buffer size during allocation
When users request 16384 bytes for a buffer, they get 16392 after
rounding up. This is problematic for SSL as it systematically
causes a small 8-bytes message to be appended after the first 16kB
message and costs about 15% of performance.

Let's add MEM_F_EXACT to use exactly the size we need. This requires
previous patch (MEDIUM: pools: add a new flag to avoid rounding pool
size up).

This issue was introduced in 1.6 and causes trouble there, so this
fix must be backported.

This issue was reported by Gary Barrueto and diagnosed by Cyril Bonté.
2016-01-25 02:31:18 +01:00
Willy Tarreau
581bf81d34 MEDIUM: pools: add a new flag to avoid rounding pool size up
Usually it's desirable to merge similarly sized pools, which is the
reason why their size is rounded up to the next multiple of 16. But
for the buffers this is problematic because we add the size of
struct buffer to the user-requested size, and the rounding results
in 8 extra bytes that are usable in the end. So the user gets more
bytes than asked for, and in case of SSL it results in short writes
for the extra bytes that are sent above multiples of 16 kB.

So we add a new flag MEM_F_EXACT to request that the size is not
rounded up when creating the entry. Thus it doesn't disable merging.
2016-01-25 02:31:18 +01:00
Cyril Bonté
f78d8967d7 BUG/MEDIUM: sample: http_date() doesn't provide the right day of the week
Gregor Kovač reported that http_date() did not return the right day of the
week. For example "Sat, 22 Jan 2016 17:43:38 GMT" instead of "Fri, 22 Jan
2016 17:43:38 GMT". Indeed, gmtime() returns a 'struct tm' result, where
tm_wday begins on Sunday, whereas the code assumed it began on Monday.

This patch must be backported to haproxy 1.5 and 1.6.
2016-01-22 19:52:31 +01:00
Ben Cabot
3b90f0a267 BUG/MEDIUM: config: Adding validation to stick-table expire value.
If the expire value exceeds the maximum value, clients are not added
to the stick table.
2016-01-21 19:46:47 +01:00
Willy Tarreau
f3c7a83acc BUG/MEDIUM: servers state: server port is used uninitialized
Nenad spotted that the last fix was unfortunately wrong. Needs to be
backported to 1.6 as well.
2016-01-21 13:51:56 +01:00
Baptiste Assmann
a875b1f92e BUG/MAJOR: servers state: server port is erased when dns resolution is enabled on a server
The server state functions save and apply the server IP when DNS resolution is
enabled on a server.
The purpose is to prevent switching traffic from one server to another one
when multiple IPs are returned by the DNS server for the A or AAAA
record.

That said, a bug in the current code led to erasing the service port while
copying the IP found in the file into the server structure in HAProxy's
memory.
This patch fixes this bug.

The bug was reported on the ML by Robert Samuel Newson and a fix was
proposed by Nenad Merdanovic.
Thank you both!!!

backport: can be backported to 1.6
2016-01-21 10:47:12 +01:00
Baptiste Assmann
7f43fa9b2c BUG/MEDIUM: dns: no DNS resolution happens if no ports provided to the nameserver
Erez reported a bug on discourse.haproxy.org about DNS resolution not
occurring when no port is specified on the nameserver directive.

This patch prevents this behavior by returning an error explaining this
issue when parsing the configuration file.
That said, later, we may want to force port 53 when the client did not
provide any.
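
A hedged sketch of what is now expected in a resolvers section (names and
address are illustrative):

  resolvers mydns
    # the port must be given explicitly on the nameserver line
    nameserver dns1 10.0.0.1:53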

backport: 1.6
2016-01-21 07:41:59 +01:00
Baptiste Assmann
0821bb9ec0 MINOR: server state: missing LF (\n) on error message printed when parsing server state file
There is no LF character printed at the end of the error message
returned by the function when applying the server state found in a file.
2016-01-21 07:40:51 +01:00
Thiago Farina
b1af23ebea MINOR: fix the return type for dns_response_get_query_id() function
This function should return a 16-bit type as that is the type of the
DNS header id.
It is also doing a big-endian uint16 unpack operation.

Backport: can be backported to 1.6

Signed-off-by: Thiago Farina <tfarina@chromium.org>
Signed-off-by: Baptiste Assmann <bedis9@gmail.com>
2016-01-20 23:51:24 +01:00
William Lallemand
6ad6bde9e1 MINOR: rename master process name in -Ds (systemd mode)
To avoid confusion between the master process and child processes,
the master process is renamed after the forks.
2016-01-14 18:29:15 +01:00
ben@51degrees.com
d3842523ff CLEANUP: 51d: Aligned if statements with HAProxy best practices and removed casts from malloc.
Changes to if statements do not affect code operation, just layout of
the code. Type casts from malloc returns have been removed as this cast
happens automatically from the void* type.

This may be backported to 1.6.
2016-01-13 12:10:44 +01:00
ben@51degrees.com
496299a4d5 BUG/MINOR: 51d: Aligned const pointers to changes in 51Degrees.
Parameters provided to 51Degrees methods that have changed to require
const pointers are now cast to avoid compiler warnings.

This should be backported to 1.6.
2016-01-13 12:10:42 +01:00
ben@51degrees.com
6baceb9f64 BUG/MINOR: 51d: Releases workset back to pool.
The workset is now released correctly when a cache hit occurs.

This should be backported to 1.6.
2016-01-13 12:10:40 +01:00
ben@51degrees.com
c9dfa24808 BUG/MINOR: 51d: Aligns Pattern cache implementation with HAProxy best practices.
Malloc continues to be used for the creation of cache entries. The
implementation has been enhanced ready for production deployment. A new
method to free cache entries created in 51d.c has been added to ensure
memory is released correctly.

This should be backported to 1.6.
2016-01-13 12:10:37 +01:00
ben@51degrees.com
82a9d76f15 BUG/MINOR: 51d: Ensures a unique domain for each configuration
Args pointer is now used as the LRU cache domain to ensure the cache
distinguishes between multiple fetch and conv configurations.

This should be backported to 1.6.
2016-01-13 12:10:35 +01:00
Baptiste Assmann
22c4ed6937 MINOR: lru: new function to delete <nb> least recently used keys
Introduction of a new function in the LRU cache source file.
The purpose of this function is to delete a number of entries in
the cache. The number is defined by the caller and the keys removed are
taken from the tail of the tree.
2016-01-11 07:31:35 +01:00
Willy Tarreau
b631c291c9 MINOR: tools: make csv_enc_append() always start at the first byte of the chunk
csv_enc_append() returns a pointer to the beginning of the encoded
string, which makes it convenient to use in printf(). However it's not
convenient for use in chunks as it may leave an unused byte at the
beginning depending on the automatic quoting. Let's modify it to work
in two passes. First it looks for a character that requires escaping
using strpbrk(), and second it encodes the string. This way it
guarantees to always start at the first available byte of the chunk.
Additionally it made the code quite simpler.
2016-01-08 10:08:15 +01:00
Willy Tarreau
898529b4a8 MEDIUM: tools: add csv_enc_append() to preserve the original chunk
We have csv_enc() but there's no way to append some CSV-encoded data
to an existing chunk, so here we modify the existing function for this
and create an inlined version of csv_enc() which first resets the output
chunk. It will be handy to append data to an existing chunk without
having to use an extra temporary chunk, or to encode multiple strings
into a single chunk with chunk_newstr().

The patch is quite small, in fact most changes are typo fixes in the
comments.
2016-01-06 20:58:55 +01:00
Christopher Faulet
a94e5a548c MINOR: filters/http: Use a wrapper function instead of stream_int_retnclose
The function http_reply_and_close has been added in proto_http.c to wrap calls
to stream_int_retnclose. This function will be modified when the filters are
added.
2015-12-28 16:49:36 +01:00
Christopher Faulet
a46bbd893a BUG/MINOR: http: Be sure to process all the data received from a server
When the response body is forwarded, if the server closes the input before the
end, an error is thrown. But if the data processing is too slow, all data could
already be received and pending in the input buffer. So it is a bug to stop
processing in this context. The server didn't really close the input before
the end.

As an example, this could happen when HAProxy is configured to do compression
offloading. If the server closes the connection explicitly after the response
(keep-alive disabled by the server) and if HAProxy receives the data faster than
they are compressed, then the response could be truncated.

This patch fixes the bug by checking if some pending data remain in the input
buffer before returning an error. If yes, the processing continues.
2015-12-28 16:49:36 +01:00
Willy Tarreau
f66258237c BUG/MINOR: http: fix several off-by-one errors in the url_param parser
Several cases of "<=" instead of "<" were found in the url_param parser,
mostly affecting the case where the parameter is wrapping. They shouldn't
affect header operations, just body parsing in a wrapped pipelined request.

The code is a bit complicated with certain operations done multiple times
in multiple functions, so it's not certain that others are not left. This code
must be re-audited.

It should only be backported to 1.6 once carefully tested, because it is
possible that other bugs relied on these ones.
2015-12-27 14:51:01 +01:00
Thierry FOURNIER
8db004cbf4 MINOR: lua: add set/get priv for applets
The applet can't have access to the session private data. This patch
fixes this problem. Now an applet can use private data stored by actions
and fetches.
2015-12-25 10:34:28 +01:00
Thierry FOURNIER
337eae1882 BUG/MINOR: stream: bad return code
In the error case, we expect the enum ACT_RET_PRS_ERR, but the
function "stream_parse_use_service()" returns "-1".

This patch must be backported to 1.6
2015-12-22 13:36:20 +01:00
Thierry FOURNIER
ed0bdaa623 CLEANUP: lua: bad error messages
An error message references "register_service" in place of
"register_action".

This one should be backported to 1.6.
2015-12-20 23:13:01 +01:00
Thierry FOURNIER
52e2606188 BUG/MAJOR: lua: Do not force the HTTP analysers in use-services
The INNER and XFERBODY analyzers were set in order to support HTTP applets
from TCP rulesets, but this does not work (cf. previous patch).

Other cases already provide these analyzers, so their addition is
not needed. Furthermore, if INNER was set it could cause some headers
to be rewritten (e.g. connection) after headers were already forwarded,
resulting in a crash in buffer_insert_line2().

Special thanks to Bernd Helm for providing very detailed information,
captures and stack traces making it possible to spot the root cause
here.

This fix must be backported to 1.6.
2015-12-20 23:13:01 +01:00
Thierry FOURNIER
718e2a73a2 BUG/MEDIUM: lua: Forbid HTTP applets from being called from tcp rulesets
HTTP applet requests require everything initialized by
"http_process_request" (analyzer flag AN_REQ_HTTP_INNER).
The applet will be initialized immediately, but this happens before
the call of this analyzer.

Due to this problem, HTTP applets could be called with an incompletely
initialized http_txn.

This fix must be backported to 1.6.
2015-12-20 23:13:01 +01:00
Thierry FOURNIER
d93ea2b603 BUG/MINOR: lua: Lua applets must not use http_txn
In certain circumstances (eg: Lua HTTP applet called from a
TCP ruleset before http_process_request()), the HTTP TXN is not
yet fully initialized so some information it contains cannot be
relied on. Such information include the HTTP version, the state
of the expect: 100-continue header, the connection header and
the transfer-encoding header.

Here the bug only turns something which already doesn't work
into something wrong, but better avoid any references to the
http_txn from the Lua code to avoid future mistakes.

This patch should be backported into 1.6 for code consistency.
2015-12-20 23:13:00 +01:00
Thierry FOURNIER
ca98866bcf BUG/MEDIUM: lua: Lua applets must not fetch samples using http_txn
If a sample fetch needing http_txn is called from an HTTP Lua applet,
the result will be invalid and may even cause a crash because some HTTP
data can be forwarded and the HTTP txn is no longer valid.

Here the solution is to ensure that a fetch called from Lua never
needs http_txn. This is done thanks to a new flag HLUA_F_MAY_USE_HTTP
which indicates whether or not it is safe to call a fetch which needs
HTTP.

This fix needs to be backported to 1.6.
2015-12-20 23:13:00 +01:00
Thierry FOURNIER
7fa0549a2b REORG/MINOR: lua: convert boolean "int" to bitfield
This patch converts a boolean "int" to a bitfield. The main
reason is to save space in the struct if another flag is required
later.

Note that this patch is required for the next fix and will need to be
backported to 1.6.
2015-12-20 23:13:00 +01:00
Thierry FOURNIER
841475e304 MINOR: lua: service/applet can have access to the HTTP headers when a POST is received
When a POST is processed by a Lua service, the HTTP headers are
potentially gone. So, we cannot retrieve their content using
the standard "hdr" sample fetches (which will soon become invalid
anyway) from an applet.

This patch adds an entry "headers" to the object applet_http. This
entry is an array containing all the headers. It permits using the
HTTP headers during the processing of the service.

Many thanks to Jan Bruder for reporting this issue with enough
details to reproduce it.

This patch will have to be backported to 1.6 since it will be the
only way to access headers from Lua applets.
2015-12-20 23:12:12 +01:00
Emeric Brun
1c6235dbba BUG/MEDIUM: peers: old stick table updates could be repushed.
Because the stick table updates tree was not properly initialized to EB_ROOT_UNIQUE.
2015-12-16 15:50:53 +01:00
Emeric Brun
234fc3c31e BUG/MEDIUM: peers: table entries learned from a remote are pushed to others after a random delay.
New sticktable entries learned from a remote peer can be pushed to others after
a random delay because they are not inserted at the right position in the updates
tree.
2015-12-16 15:50:22 +01:00
Willy Tarreau
7006045e48 BUG/MEDIUM: config: properly adjust maxconn with nbproc when memmax is forced
When memmax is forced using "-m", the per-process memory limit is enforced
using setrlimit(), but this value is not used to compute the automatic
maxconn limit. In addition, the per-process memory limit didn't consider
the fact that the shared SSL cache only needs to be accounted once.

The doc was also fixed to clearly state that "-m" is global and not per
process. It makes sense because people who use -m want to protect the
system's resources regardless of whatever appears in the configuration.
2015-12-14 13:03:09 +01:00
Willy Tarreau
b22fc30aaa MINOR: config: make tune.recv_enough configurable
This setting used to be assigned to a variable tunable from a constant
and for an unknown reason never made its way into the config parser.

tune.recv_enough <number>
  Haproxy uses some hints to detect that a short read indicates the end of the
  socket buffers. One of them is that a read returns more than <recv_enough>
  bytes, which defaults to 10136 (7 segments of 1448 each). This default value
  may be changed by this setting to better deal with workloads involving lots
  of short messages such as telnet or SSH sessions.
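
A minimal sketch of the setting in the global section (the value is only
illustrative):

  global
    tune.recv_enough 4096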
2015-12-14 12:05:45 +01:00
Willy Tarreau
30da7ad809 BUILD: ssl: set SSL_SOCK_NUM_KEYTYPES with openssl < 1.0.2
Last patch unfortunately broke build with openssl older than 1.0.2.
Let's just define a single key type in this case.
2015-12-14 11:28:33 +01:00
yanbzhu
be2774dd35 MEDIUM: ssl: Added support for Multi-Cert OCSP Stapling
Enabled loading of OCSP staple responses (.ocsp files) when processing a
bundled certificate with multiple keytypes.
2015-12-14 11:22:29 +01:00
yanbzhu
63ea846399 MEDIUM: ssl: Added multi cert support for loading crt directories
Loading of multiple certs into shared contexts is now supported if a user
specifies a directory instead of a cert file.
2015-12-14 11:22:29 +01:00
yanbzhu
1b04e5b0e0 MINOR: ssl: Added multi cert support for crt-list config keyword
Added support for loading multiple certs into shared contexts when they
are specified in a crt-list.

Note that it's not practical to support SNI filters with multicerts, so
any SNI filter that's provided to the crt-list is ignored if a
multi-cert operation is used.
2015-12-14 11:22:29 +01:00
yanbzhu
08ce6ab0c9 MEDIUM: ssl: Added support for creating SSL_CTX with multiple certs
Added ability for users to specify multiple certificates that all relate
to a single server. Users do this by specifying certificate "cert_name.pem"
but having "cert_name.pem.rsa", "cert_name.pem.dsa" and/or
"cert_name.pem.ecdsa" in the directory.

HAProxy will now intelligently search for those 3 files and try to combine
them into as few SSL_CTXs as possible based on CN/SAN. This will allow
them into as few SSL_CTX's as possible based on CN/SAN. This will allow
HAProxy to support multiple ciphersuite key algorithms off a single
SSL_CTX.

This change integrates into the existing architecture of SNI lookup and
multiple SNI's can point to the same SSL_CTX, which can support multiple
key_types.
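
A hedged sketch of the layout this describes (paths are illustrative): the bind
line keeps referencing the base name while the keytype-specific files sit next
to it.

  # on disk (illustrative):
  #   /etc/haproxy/certs/cert_name.pem.rsa
  #   /etc/haproxy/certs/cert_name.pem.ecdsa
  frontend fe
    bind :443 ssl crt /etc/haproxy/certs/cert_name.pem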
2015-12-14 11:22:29 +01:00
yanbzhu
488a4d2e75 MINOR: ssl: Added cert_key_and_chain struct
Added cert_key_and_chain struct to ssl. This struct will store the
contents of a crt path (from the config file) into memory. This will
allow us to use the data stored in memory instead of reading the file
multiple times.

This will be used to support a later commit to load multiple pkeys/certs
into a single SSL_CTX
2015-12-14 11:22:29 +01:00
David Carlier
7ece096767 CLEANUP: haproxy: using _GNU_SOURCE instead of __USE_GNU macro.
In order to properly enable sched_setaffinity, in some versions of Linux
it is _GNU_SOURCE rather than __USE_GNU (spotted on Alpine Linux for instance);
also for the sake of consistency, as __USE_GNU does not seem to be used across
the code; and lastly, on Linux it seems to be the best way to enable non-portable
code. On glibc-based Linux versions, _GNU_SOURCE seems to define __USE_GNU,
so it should be safe enough.
2015-12-09 10:38:29 +01:00
Willy Tarreau
858b103631 BUG/MEDIUM: http: fix http-reuse when frontend and backend differ
Krishna Kumar reported that the following configuration doesn't permit
HTTP reuse between two clients :

    frontend private-frontend
        mode http
        bind :8001
        default_backend private-backend

    backend private-backend
        mode http
        http-reuse always
        server bck 127.0.0.1:8888

The reason for this is that in http_end_txn_clean_session() we check the
stream's backend's http-reuse option before deciding whether the
backend connection should be moved back to the server's pool or not. But
since we're doing this after the call to http_reset_txn(), the backend is
reset to match the frontend, which doesn't have the option. However it
will work fine in a setup involving a "listen" section.

We just need to keep a pointer to the current backend before calling
http_reset_txn(). The code does that and replaces the few remaining
references to s->be inside the same function so that if any part of
code were to be moved later, this trap doesn't happen again.

This fix must be backported to 1.6.
2015-12-07 17:04:59 +01:00
Baptiste Assmann
baf9794b4d BUG/MINOR: tcpcheck: conf parsing error when no port configured on server and first rule(s) is (are) COMMENT
A small configuration parsing error exists when no port is set up on the
server IP:port statement, the server's parameter 'port' is not set,
and the first tcp-check rule is a comment, like in the example below:

  backend b
   option tcp-check
   tcp-check comment blah
   tcp-check connect 8444
   server s 127.0.0.1 check

In such a case, an ALERT is improperly returned, even though this
configuration is valid and works.

The new code moves the pointer to the first tcp-check rule which isn't a
comment before checking for the presence of the port.

backport status: 1.6 and above
2015-12-04 07:48:44 +01:00
Baptiste Assmann
3dd73bea64 BUG/MINOR: tcpcheck: conf parsing error when no port configured on server and last rule is a CONNECT with no port
The current configuration parsing is permissive in the following situation:
a server in a backend with no port configured on the IP address
statement, no 'port' parameter configured, and the last tcp-check rule
is a CONNECT with no port.

The current code parses all the rules to validate that a port is
available, but it misses the last one, which means such a
configuration is accepted as valid:

  backend b
   option tcp-check
   tcp-check connect port 8444
   tcp-check connect
   server s 127.0.0.1 check

the second connect attempt is sent to port '0'...

The current patch fixes this by parsing the whole list, including
the last rule.

backport status: 1.6 and above
2015-12-04 07:48:35 +01:00
Cyril Bonté
b65e0335d9 BUG/MINOR: checks: typo in an email-alert error message
When the email alert message couldn't be formatted, the logged error message
said the contrary.

This fix must be backported to 1.6.
2015-12-04 06:09:30 +01:00
Cyril Bonté
e22bfd61b1 BUG/MINOR: checks: email-alert causes a segfault when an unknown mailers section is configured
A segfault can occur during the initialization phase, when an unknown
"mailers" name is configured. This happens when "email-alert myhostname" is not
set, where a direct pointer to an array is used instead of copying the string,
causing the segfault when haproxy tries to free the memory.

This is a minor issue because the configuration is invalid and a fatal error
will remain, but it should be fixed to prevent reload issues.

Example of minimal configuration to reproduce the bug :
    backend example
        email-alert mailers NOT_FOUND
        email-alert from foo@localhost
        email-alert to bar@localhost

This fix must be backported to 1.6.
2015-12-04 06:09:30 +01:00
Cyril Bonté
7e0847045a BUG/MEDIUM: checks: email-alert not working when declared in defaults
Tommy Atkinson and Sylvain Faivre reported that email alerts didn't work when
they were declared in the defaults section. This is due to the use of an
internal attribute which is set once an email-alert is at least partially
configured. But this attribute was not propagated to the current proxy during
the configuration parsing.

Note that the issue doesn't occur if "email-alert myhostname" is configured in
the defaults section.

This fix must be backported to 1.6.
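
For reference, a hypothetical configuration exercising this (names and
addresses are made up):

  mailers mymailers
   mailer smtp1 127.0.0.1:25

  defaults
   mode http
   email-alert mailers mymailers
   email-alert from haproxy@localhost
   email-alert to admin@localhost

  backend example
   server s 127.0.0.1:8080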
2015-12-04 06:09:30 +01:00
David Carlier
3b7113836d BUG/MEDIUM: da: stop DeviceAtlas processing in the convertor if there is no input.
In case an HTTP header modifier, like req*del, is used, the User-Agent may have been
removed and would cause a segfault, hence the processing is now stopped in due time.
2015-12-03 11:37:01 +01:00
David Carlier
df3785fe2a MINOR: da: silent logging by default and displaying DeviceAtlas support if built. 2015-12-03 11:37:01 +01:00
David CARLIER
087ca283e4 CLEANUP: proxy: calloc call inverted arguments
Nothing major but a human typo mistake.
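
For the record, calloc() takes the element count first and the size of one
element second (variable names are illustrative):

  /* correct argument order: number of elements, then element size */
  struct server *tab = calloc(count, sizeof(*tab));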
2015-12-03 11:37:01 +01:00
David Carlier
081b336f7d BUILD: dumpstats: silencing warning for printf format specifier / time_t
time_t is not necessarily a long int (spotted on OpenBSD), so just add an explicit cast to
avoid the compiler warning. It should be safe enough.
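
One portable way to silence such a warning (the variable name is
illustrative):

  /* time_t may be wider than long on some systems, so cast explicitly */
  printf("%lld\n", (long long)last_change);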
2015-12-03 11:37:01 +01:00
Cyril Bonté
ce1ef4df01 BUG/MEDIUM: sample: urlp can't match an empty value
Until now, urlp fetching samples were able to find parameters with an empty
value, but the return code depended on the value length. The final result was
that acls using urlp couldn't match empty values.

Example of acl which always returned "false":
  acl MATCH_EMPTY urlp(foo) -m len 0

The fix consists in unconditionally returning 1 when the parameter is found.

This fix must be backported to 1.6 and 1.5.
2015-11-26 23:51:42 +01:00
Willy Tarreau
a1c2b2c4f3 BUG/MEDIUM: cli: changing compression rate-limiting must require admin level
Right now it's possible to change the global compression rate limiting
without the CLI being at the admin level.

This fix must be backported to 1.6 and 1.5.
2015-11-26 18:32:39 +01:00
Willy Tarreau
ed9dddd237 CLEANUP: compression: don't allocate DEFAULT_MAXZLIBMEM without USE_ZLIB
It's pointless to reserve this amount of memory when zlib is not used.
Adding the condition will make build scripts easier to manage. This may
be backported to 1.6.
2015-11-26 16:35:53 +01:00
Willy Tarreau
f25b3573d6 BUG/MEDIUM: stream: fix half-closed timeout handling
client-fin and server-fin are bogus. They are applied on the write
side after a SHUTR was seen. The immediate effect is that sometimes
if a SHUTR was seen after a SHUTW on the same side, the timeout is
enabled again regardless of the fact that the output is already
closed. This results in the timeout event not to be processed and
a busy poll loop to happen until another timeout on the stream gets
rid of it. Note that haproxy continues its job during this, it's just
that it eats all the CPU trying to handle an event that it ignores.

A reproducible case consists in having a client stop reading data from
a server to ensure data remain in the response buffer, then the client
sends a shutdown(write). If abortonclose is enabled on haproxy, the
shutdown is passed to the server side and the server responds with a
SHUTR that cannot immediately be forwarded to the client since the
buffer is full. During this time the event is ignored and the task is
woken again in loops.

It is worth noting that the timeout handling since 1.5 is a bit fragile
and that it might be possible that other similar conditions still exist,
so the timeout handling should be audited regarding this issue.

Many thanks to BaiYang for providing detailed information showing the
problem in action.

This bug also affects 1.5 thus the fix must be backported.
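
For reference, these timeouts are configured like this (values are
arbitrary):

  defaults
   timeout client      30s
   timeout client-fin  10s
   timeout server      30s
   timeout server-fin  10s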
2015-11-26 10:33:47 +01:00
Willy Tarreau
714ea78c9a BUG/MEDIUM: http: don't enable auto-close on the response side
There is a bug where "option http-keep-alive" doesn't force a response
to stay in keep-alive if the server sends the FIN along with the response
on the second or subsequent response. The reason is that the auto-close
was forced enabled when recycling the HTTP transaction and it's never
disabled along the response processing chain before the SHUTR gets a
chance to be forwarded to the client side. The MSG_DONE state of the
HTTP response properly disables it but too late.

There's no more reason for enabling auto-close here, because either it
doesn't matter in non-keep-alive modes because the connection is closed,
or it is automatically enabled by process_stream() when it sees there's
no analyser on the stream.

This bug also affects 1.5 so a backport is desired.
2015-11-26 10:25:11 +01:00
Lukas Tribus
d334a2c843 BUG/MINOR: lua: don't force-sslv3 LUA's SSL socket
Sander Klein reported an error messages about SSLv3 not
being supported on Debian 8, although he didn't force-sslv3.

Vincent Bernat tracked this down to the LUA initialization, which
actually does force-sslv3.

This patch removes force-sslv3 from the LUA initialization, so
the LUA SSL socket can actually use TLS and doesn't trigger
warnings when SSLv3 is not supported by libssl (such as in
Debian 8).

This should be backported to 1.6.
2015-11-26 07:30:22 +01:00
Willy Tarreau
7f876a1eeb BUG/MEDIUM: http: switch the request channel to no-delay once done.
There's an issue when sending POST data that came in a second packet,
the CF_NEVER_WAIT flag is not always set on the request channel, while
the server is waiting for the request. We must always set this flag in
this case since we're not going to shut down after sending, contrary
to the response side.

Note that option http-no-delay works around this issue.

Reproducer :

listen  px
        mode http
        timeout client 10s
        timeout server 5s
        timeout connect 3s
        option http-server-close
        #option http-no-delay
        bind :8001
        server s1 127.0.0.1:8003

$ (printf "POST / HTTP/1.1\r\nTransfer-encoding: chunked\r\n\r\n"; sleep 0.01; printf "10\r\nAZERTYUIOPQSDFGH\r\n0\r\n\r\n") | nc6 0 8001

Before this fix :

12:03:31.946763 epoll_wait(3, {{EPOLLIN, {u32=5, u64=5}}}, 200, 1000) = 1
12:03:32.634175 accept4(5, {sa_family=AF_INET, sin_port=htons(53849), sin_addr=inet_addr("127.0.0.1")}, [16], SOCK_NONBLOCK) = 6
12:03:32.634318 setsockopt(6, SOL_TCP, TCP_NODELAY, [1], 4) = 0
12:03:32.634434 accept4(5, 0x7ffccfbb2cf0, [128], SOCK_NONBLOCK) = -1 EAGAIN (Resource temporarily unavailable)
12:03:32.634574 recvfrom(6, "POST / HTTP/1.1\r\nTransfer-encodi"..., 8192, 0, NULL, NULL) = 47
12:03:32.634809 setsockopt(6, SOL_TCP, TCP_QUICKACK, [1], 4) = 0
12:03:32.634952 socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 7
12:03:32.635031 fcntl(7, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
12:03:32.635089 setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
12:03:32.635153 connect(7, {sa_family=AF_INET, sin_port=htons(8003), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
12:03:32.635315 epoll_wait(3, {}, 200, 0) = 0
12:03:32.635394 sendto(7, "POST / HTTP/1.1\r\nTransfer-encodi"..., 66, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 66
12:03:32.635527 recvfrom(6, 0x7f0224e66024, 8192, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
12:03:32.635651 epoll_ctl(3, EPOLL_CTL_ADD, 6, {EPOLLIN|0x2000, {u32=6, u64=6}}) = 0
12:03:32.635782 epoll_wait(3, {}, 200, 0) = 0
12:03:32.635842 recvfrom(7, 0x7f0224e66024, 8192, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
12:03:32.635924 epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|0x2000, {u32=7, u64=7}}) = 0
12:03:32.636027 epoll_wait(3, {{EPOLLIN, {u32=6, u64=6}}}, 200, 1000) = 1
12:03:32.644892 recvfrom(6, "10\r\nAZERTYUIOPQSDFGH\r\n0\r\n\r\n", 8192, 0, NULL, NULL) = 27
12:03:32.645016 epoll_wait(3, {}, 200, 0) = 0
12:03:32.645105 sendto(7, "10\r\nAZERTYUIOPQSDFGH\r\n0\r\n\r\n", 27, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_MORE, NULL, 0) = 27

After the fix :

11:59:12.538617 connect(7, {sa_family=AF_INET, sin_port=htons(8003), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
11:59:12.538787 epoll_wait(3, {}, 200, 0) = 0
11:59:12.538867 sendto(7, "POST / HTTP/1.1\r\nTransfer-encodi"..., 66, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 66
11:59:12.539031 recvfrom(6, 0x7f832ce45024, 8192, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
11:59:12.539161 epoll_ctl(3, EPOLL_CTL_ADD, 6, {EPOLLIN|0x2000, {u32=6, u64=6}}) = 0
11:59:12.539259 epoll_wait(3, {}, 200, 0) = 0
11:59:12.539337 recvfrom(7, 0x7f832ce45024, 8192, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
11:59:12.539421 epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|0x2000, {u32=7, u64=7}}) = 0
11:59:12.539499 epoll_wait(3, {{EPOLLIN, {u32=6, u64=6}}}, 200, 1000) = 1
11:59:12.548519 recvfrom(6, "10\r\nAZERTYUIOPQSDFGH\r\n0\r\n\r\n", 8192, 0, NULL, NULL) = 27
11:59:12.548844 epoll_wait(3, {}, 200, 0) = 0
11:59:12.549012 sendto(7, "10\r\nAZERTYUIOPQSDFGH\r\n0\r\n\r\n", 27, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 27
11:59:12.549454 epoll_wait(3, {}, 200, 1000) = 0

This fix must be backported to 1.6, 1.5 and 1.4.
2015-11-18 12:50:38 +01:00
lsenta
1e1f41d0f3 BUG: http: do not abort keep-alive connections on server timeout
When a server timeout is detected on the second or nth request of a keep-alive
connection, HAProxy closes the connection without writing a response.
Some clients would fail with a remote disconnected exception and some
others would retry potentially unsafe requests.

This patch removes the special case and makes sure a 504 timeout is
written back whenever a server timeout is handled.

Signed-off-by: lsenta <laurent.senta@gmail.com>
2015-11-13 14:41:51 +01:00
Daniel Jakots
54ffb918cb BUILD: check for libressl to be able to build against it
[wt: might be worth backporting it to 1.6]
2015-11-08 07:28:02 +01:00
Thierry FOURNIER
a3308fd8c1 BUG/MEDIUM: lua: clean output buffer
When the txn.done() function is called, the output buffer is cleaned,
but the associated relative pointers on the HTTP request elements
are not reset.

This patch removes this cleanup, because the output buffer may contain
data to forward.
2015-11-06 01:15:34 +01:00
Lukas Tribus
c93242cab9 BUG/MINOR: acl: don't use record layer in req_ssl_ver
The initial record layer version in a SSL handshake may be set to TLSv1.0
or similar for compatibility reasons, this is allowed as per RFC5246
Appendix E.1 [1]. Some implementations are Openssl [2] and NSS [3].

A related issue has been fixed some time ago in commit 57d229747
("BUG/MINOR: acl: req_ssl_sni fails with SSLv3 record version").

Fix this by using the real client hello version instead of the record
layer version.

This was reported by Julien Vehent and analyzed by Cyril Bonté.
The initial patch is from Julien Vehent as well.

This should be backported to stable series, the req_ssl_ver keyword was
first introduced in 1.3.16.

[1] https://tools.ietf.org/html/rfc5246#appendix-E.1
[2] 4a1cf50187
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=774547
2015-11-05 14:10:06 +01:00
Dragan Dosen
cf4fb036a4 BUG/MINOR: server: check return value of fgets() in apply_server_state()
fgets() can return NULL on error or when EOF occurs. This patch adds a
check of fgets() return value and displays a warning if the first line of
the server state file can not be read. Additionally, we make sure to close
the previously opened file descriptor.
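
A minimal sketch of the pattern (not the exact code):

  if (fgets(mybuf, sizeof(mybuf), f) == NULL) {
          /* NULL means a read error or EOF: warn, close the file, give up */
          fclose(f);
          return;
  }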
2015-11-05 10:39:09 +01:00
Baptiste Assmann
e9544935e8 BUG/MINOR: http rule: http capture 'id' rule points to a non existing id
It is possible to create an http capture rule which points to a capture slot
id which does not exist.

The current patch prevents this when parsing the configuration and refuses to run
a configuration which contains such rules.

This configuration is now invalid:

  frontend f
   bind :8080
   http-request capture req.hdr(User-Agent) id 0
   default_backend b

this one as well:

  frontend f
   bind :8080
   declare capture request len 32 # implicit id is 0 here
   http-request capture req.hdr(User-Agent) id 1
   default_backend b

It applies of course to both http-request and http-response rules.
2015-11-04 08:47:55 +01:00
James Brown
55f9ff11b5 MINOR: check: add agent-send server parameter
Causes HAProxy to emit a static string to the agent on every check,
so that you can independently control multiple services running
behind a single agent port.
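
A hypothetical example (names, addresses and the string are made up):

  backend app
   server web1 10.0.0.1:80 check agent-check agent-port 9700 agent-send "web1\n"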
2015-11-04 07:26:51 +01:00
Thierry FOURNIER
c4eebc8157 BUG/MEDIUM: lua: sample fetches based on response doesn't work
The direction (request or response) is not propagated to the
sample fetches called through Lua. This patch adds the direction
status to some structs (hlua_txn and hlua_smp) to make sure that
the sample fetches will be called with all the information.

The converters cannot access a TXN object, so they are not
impacted by the direction. However, the samples used as input of the
Lua converter wrapper are initialized with the direction. Thereby,
the struct smp stays consistent.
[wt: needs to be backported to 1.6]
2015-11-03 10:50:14 +01:00
Thierry FOURNIER
6e01f38e73 CLEANUP: use direction names in place of numeric values
This patch cleans up the direction names. It replaces numeric values
with the associated defines. It ensures consistency with the values found
elsewhere in HAProxy.

It is required by the bugfix patch which is following.
[wt: needs to be backported to 1.6]
2015-11-03 10:48:00 +01:00
Baptiste Assmann
a315c5534e BUG/MINOR: dns: check for duplicate nameserver id in a resolvers section was missing
The current resolvers section parsing function is permissive on the nameserver
id, so two nameservers may end up with the same id.
It's a shame, since we then don't know, for example, which statistics belong
to which nameserver...

From now on, a configuration with duplicated nameserver ids in a resolvers
section is considered broken and triggers a fatal error when parsing.
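
For reference, each nameserver in a resolvers section now needs its own id
(addresses are made up):

  resolvers mydns
   nameserver ns1 192.168.0.1:53
   nameserver ns2 192.168.0.2:53   # reusing "ns1" here is now a fatal error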
2015-11-03 09:56:29 +01:00
Willy Tarreau
1c59bd5abc BUG/MAJOR: http: don't requeue an idle connection that is already queued
Cyril Bonté reported a reproducible sequence which can lead to a crash
when using backend connection reuse. The problem comes from the fact that
we systematically add the server connection to an idle pool at the end of
the HTTP transaction regardless of the fact that it might already be there.

This is possible for example when processing a request which doesn't use
a server connection (typically a redirect) after a request which used a
connection. Then after the first request, the connection was already in
the idle queue and we're putting it a second time at the end of the second
request, causing a corruption of the idle pool.

Interestingly, the memory debugger in 1.7 immediately detected a suspicious
double free on the connection, leading to a very early detection of the
cause instead of its consequences.

Thanks to Cyril for quickly providing a working reproducer.

This fix must be backported to 1.6 since connection reuse was introduced
there.
2015-11-02 22:28:25 +01:00
mildis
ff5d510294 MINOR: config: allow IPv6 bracketed literals 2015-11-01 21:30:41 +01:00
Baptiste Assmann
e4c4b7dda6 BUG/MINOR: dns: unable to parse CNAMEs response
A bug in the parsing of DNS CNAME responses led HAProxy to
think the CNAME was improperly resolved in the response.

This should be backported into 1.6 branch
2015-10-30 12:39:08 +01:00
Baptiste Assmann
fad0318c74 BUG/MAJOR: dns: first DNS response packet not matching queried hostname may lead to a loop
The status DNS_UPD_NAME_ERROR returned by dns_get_ip_from_response,
which means the queried name can't be found in the response, was
improperly processed (it fell into the default case).
This led to a loop where HAProxy simply resent a new query as soon as
it got a response with this status, in the only case where such a
response is the very first one received by the process.

This should be backported into 1.6 branch
2015-10-30 12:38:14 +01:00
Willy Tarreau
f2dd5e4159 BUG/MEDIUM: config: count memory limits on 64 bits, not 32
It was accidentally discovered that limiting haproxy to 5000 MB leads to
an effective limit of 904 MB. This is because the computation for the
size limit is performed by multiplying rlimit_memmax by 1048576, and
doing so causes the operation to be performed on an int instead of a
long or long long. Just switch to 1048576ULL as is done at other places
to fix this.

This bug affects all supported versions, the backport is desired, though
it rarely affects users since few people apply memory limits.
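
A minimal sketch of the arithmetic involved (variable names are
illustrative):

  struct rlimit limit;
  /* 5000 * 1048576 overflows a 32-bit int; the ULL constant forces the
   * multiplication to be performed on 64 bits before the assignment */
  limit.rlim_cur = limit.rlim_max = (rlim_t)rlimit_memmax * 1048576ULL;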
2015-10-29 10:42:55 +01:00
Willy Tarreau
58102cf30b MEDIUM: memory: add accounting for failed allocations
We now keep a per-pool counter of failed memory allocations and
we report that, as well as the amount of memory allocated and used
on the CLI.
2015-10-28 16:24:21 +01:00
Willy Tarreau
de30a684ca DEBUG/MEDIUM: memory: add optional control pool memory operations
When DEBUG_MEMORY_POOLS is used, we now use the link pointer at the end
of the pool to store a pointer to the pool, and to control it during
pool_free2() in order to serve four purposes :
  - at any instant we can know what pool an object was allocated from
    when examining memory, hence how we should possibly decode it ;

  - it serves to detect double free when they happen, as the pointer
    cannot be valid after the element is linked into the pool ;

  - it serves to detect if an element is released in the wrong pool ;

  - it serves as a canary, to detect if some buffers experienced an
    overflow before being released.

All these elements will definitely help better troubleshoot strange
situations, or at least confirm that certain conditions did not happen.
2015-10-28 15:28:05 +01:00
Willy Tarreau
ac421118db DEBUG/MEDIUM: memory: optionally protect free data in pools
When debugging a core file, it's sometimes convenient to be able to
visit the released entries in the pools (typically last released
session). Unfortunately the first bytes of these entries are destroyed
by the link elements of the pool. And of course, most structures have
their most accessed elements at the beginning of the structure (typically
flags). Let's add a build-time option DEBUG_MEMORY_POOLS which allocates
an extra pointer in each pool to put the link at the end of each pool
item instead of the beginning.
2015-10-28 15:27:59 +01:00
Andrew Hayworth
edb93a7c28 MINOR: cli: ability to set per-server maxconn
This commit adds support for setting a per-server maxconn from the stats
socket. The only really notable part of this commit is that we need to
check if maxconn == minconn before changing things, as this indicates
that we are NOT using dynamic maxconn. When we are not using dynamic
maxconn, we should update maxconn/minconn in lockstep.
2015-10-28 08:01:56 +01:00
Christopher Faulet
e7db21693f BUILD: ssl: fix build error introduced in commit 7969a3 with OpenSSL < 1.0.0
The function 'EVP_PKEY_get_default_digest_nid()' was introduced in OpenSSL
1.0.0. So for older version of OpenSSL, compiled with the SNI support, the
HAProxy compilation fails with the following error:

src/ssl_sock.c: In function 'ssl_sock_do_create_cert':
src/ssl_sock.c:1096:7: warning: implicit declaration of function 'EVP_PKEY_get_default_digest_nid'
   if (EVP_PKEY_get_default_digest_nid(capkey, &nid) <= 0)
[...]
src/ssl_sock.c:1096: undefined reference to `EVP_PKEY_get_default_digest_nid'
collect2: error: ld returned 1 exit status
Makefile:760: recipe for target 'haproxy' failed
make: *** [haproxy] Error 1

So we must add a #ifdef to check the OpenSSL version (>= 1.0.0) to use this
function. It is used to get default signature digest associated to the private
key used to sign generated X509 certificates. It is called when the private key
differs from EVP_PKEY_RSA, EVP_PKEY_DSA and EVP_PKEY_EC. It should be enough for
most cases.
2015-10-22 13:32:34 +02:00
Andrew Hayworth
e6a4a329b8 MEDIUM: dns: Don't use the ANY query type
Basically, it's ill-defined and shouldn't really be used going forward.
We can't guarantee that resolvers will do the 'legwork' for us and
actually resolve CNAMES when we request the ANY query-type. Case in point
(obfuscated, clearly):

  PRODUCTION! ahayworth@secret-hostname.com:~$
  dig @10.11.12.53 ANY api.somestartup.io

  ; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> @10.11.12.53 ANY api.somestartup.io
  ; (1 server found)
  ;; global options: +cmd
  ;; Got answer:
  ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62454
  ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 0

  ;; QUESTION SECTION:
  ;api.somestartup.io.                        IN      ANY

  ;; ANSWER SECTION:
  api.somestartup.io.         20      IN      CNAME api-somestartup-production.ap-southeast-2.elb.amazonaws.com.

  ;; AUTHORITY SECTION:
  somestartup.io.               166687  IN      NS      ns-1254.awsdns-28.org.
  somestartup.io.               166687  IN      NS      ns-1884.awsdns-43.co.uk.
  somestartup.io.               166687  IN      NS      ns-440.awsdns-55.com.
  somestartup.io.               166687  IN      NS      ns-577.awsdns-08.net.

  ;; Query time: 1 msec
  ;; SERVER: 10.11.12.53#53(10.11.12.53)
  ;; WHEN: Mon Oct 19 22:02:29 2015
  ;; MSG SIZE  rcvd: 242

HAProxy can't handle that response correctly.

Rather than try to build in support for resolving CNAMEs presented
without an A record in an answer section (which may be a valid
improvement further on), this change just skips ANY record types
altogether. A and AAAA are much more well-defined and predictable.

Notably, this commit preserves the implicit "prefer IPv6" behavior.

Furthermore, ANY query type by default is a bad idea: (from Robin on
HAProxy's ML):
  Using ANY queries for this kind of stuff is considered by most people
  to be a bad practice since besides all the things you named it can
  lead to incomplete responses. Basically a resolver is allowed to just
  return whatever it has in cache when it receives an ANY query instead
  of actually doing an ANY query at the authoritative nameserver. Thus
  if it only received queries for an A record before you do an ANY query
  you will not get an AAAA record even if it is actually available since
  the resolver doesn't have it in its cache. Even worse if before it
  only got MX queries, you won't get either A or AAAA
2015-10-20 22:31:01 +02:00
Willy Tarreau
2f63ef4d1c BUG/MAJOR: ssl: free the generated SSL_CTX if the LRU cache is disabled
Kim Seri reported that haproxy 1.6.0 crashes after a few requests
when a bind line has SSL enabled with more than one certificate. This
was caused by an insufficient condition to free generated certs during
ssl_sock_close() which can also catch other certs.

Christopher Faulet analysed the situation like this :

-------
First the LRU tree is only initialized when the SSL certs generation is
configured on a bind line. So, in most cases, it is NULL (which is
not the same thing as empty).
When the SSL certs generation is used, if the cache is not NULL, such a
certificate is pushed into the cache and there is no need to release it
when the connection is closed.
But it can be disabled in the configuration. So in that case, we must
free the generated certificate when the connection is closed.

Then here, we have really a bug. Here is the buggy part:

3125)      if (conn->xprt_ctx) {
3126) #ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
3127)              if (!ssl_ctx_lru_tree && objt_listener(conn->target)) {
3128)                      SSL_CTX *ctx = SSL_get_SSL_CTX(conn->xprt_ctx);
3129)                      if (ctx != objt_listener(conn->target)->bind_conf->default_ctx)
3130)                              SSL_CTX_free(ctx);
3131)              }
3132) #endif
3133)              SSL_free(conn->xprt_ctx);
3134)              conn->xprt_ctx = NULL;
3135)              sslconns--;
3136)      }

The check on the line 3127 is not enough to determine if this is a
generated certificate or not. Because ssl_ctx_lru_tree is NULL,
generated certificates, if any, must be freed. But here ctx should also
be compared to all SNI certificates and not only to default_ctx. Because
of this bug, when a SNI certificate is used for a connection, it is
erroneously freed when this connection is closed.
-------

Christopher provided this reliable reproducer :

----------
global
    tune.ssl.default-dh-param   2048
    daemon

listen ssl_server
    mode tcp
    bind 127.0.0.1:4443 ssl crt srv1.test.com.pem crt srv2.test.com.pem

    timeout connect 5000
    timeout client  30000
    timeout server  30000

    server srv A.B.C.D:80

You just need to generate 2 SSL certificates with 2 CN (here
srv1.test.com and srv2.test.com).

Then, by doing SSL requests with the first CN, there is no problem. But
with the second CN, it should segfault on the 2nd request.

openssl s_client -connect 127.0.0.1:4443 -servername srv1.test.com // OK
openssl s_client -connect 127.0.0.1:4443 -servername srv1.test.com // OK

But,

openssl s_client -connect 127.0.0.1:4443 -servername srv2.test.com // OK
openssl s_client -connect 127.0.0.1:4443 -servername srv2.test.com // KO
-----------

A long discussion led to the following proposal which this patch implements :

- the cert is generated. It gets a refcount = 1.
- we assign it to the SSL. Its refcount becomes two.
- we try to insert it into the tree. The tree will handle its freeing
  using SSL_CTX_free() during eviction.
- if we can't insert into the tree because the tree is disabled, then
  we have to call SSL_CTX_free() ourselves, then we'd rather do it
  immediately. It will more closely mimic the case where the cert
  is added to the tree and immediately evicted by concurrent activity
  on the cache.
- we never have to call SSL_CTX_free() during ssl_sock_close() because
  the SSL session only relies on openssl doing the right thing based on
  the refcount only.
- thus we never need to know how the cert was created since the
  SSL_CTX_free() is either guaranteed or already done for generated
  certs, and this protects other ones against any accidental call to
  SSL_CTX_free() without having to track where the cert comes from.

This patch also reduces the inter-dependence between the LRU tree and
the SSL stack, so it should cause less sweating to migrate to threads
later.

This bug is specific to 1.6.0, as it was introduced after dev7 by
this fix :

   d2cab92 ("BUG/MINOR: ssl: fix management of the cache where forged certificates are stored")

Thus a backport to 1.6 is required, but not to 1.5.
2015-10-20 15:29:01 +02:00
Willy Tarreau
70f289cf8d BUG/MEDIUM: namespaces: don't fail if no namespace is used
Susheel Jalali reported a confusing bug in the namespaces implementation.
If namespaces are enabled at build time (USE_NS=1) and *no* namespace
is used at all in the whole config file, my_socketat() returns -1 and
all socket bindings fail. This is because of a wrong condition in this
function. A possible workaround consists in creating some namespaces.
2015-10-20 15:29:00 +02:00
Baptiste Assmann
5d681ba976 BUG/MINOR: dns: parsing error of some DNS response
The function which parses a DNS response buffer did not properly move a
pointer when reading a packet whose records do not use DNS "message
compression" techniques.

Thanks to Øyvind Johnsen for the help provided during the troubleshooting
session.
2015-10-15 22:05:59 +02:00
peter cai
aede6ddd1f BUG/MEDIUM: pattern: fixup use_after_free in the pat_ref_delete_by_id
I found a use-after-free bug in pat_ref_delete_by_id.

[wt: it seems this fix must be backported to 1.5 as well]
2015-10-13 18:31:49 +02:00
Willy Tarreau
86ac176e03 MINOR: init: report use of libslz instead of "no compression"
It's confusing to see "no zlib support" followed by supported
compression algorithms. Fix this.
2015-10-13 16:47:16 +02:00
Willy Tarreau
163d4620c6 MEDIUM: server: implement TCP_USER_TIMEOUT on the server
This is equivalent to commit 2af207a ("MEDIUM: tcp: implement tcp-ut
bind option to set TCP_USER_TIMEOUT") except that this time it works
on the server side. The purpose is to detect dead server connections
even when checks are rare, disabled, or after a soft reload (since
checks are disabled there as well), and to ensure client connections
will get killed faster.
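
A hypothetical example, assuming the server-level keyword mirrors the
existing bind-level "tcp-ut" (the value is arbitrary):

  backend app
   server web1 10.0.0.1:80 tcp-ut 20s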
2015-10-13 16:18:27 +02:00
Willy Tarreau
061b5ded28 BUG/MINOR: config: make the stats socket pass the correct proxy to the parsers
Baptiste reported a segfault when the "id" keyword was passed on the
"stats socket" line. The problem is related to the fact that the stats
parser stats_parse_global() passes curpx instead of global.stats_fe to
the keyword parser. Indeed, curpx being a pointer to the proxy in the
current section, it is not correct here since the global section does
not describe a proxy. It's just by pure luck that only bind_parse_id()
uses the proxy since any other keyword parser could use it as well.

The bug has no impact since the id specified here is not usable at all
and can be discarded from a faulty configuration.

This fix must be backported to 1.5.
2015-10-13 15:49:31 +02:00
Thierry FOURNIER
26a7aacaff BUG/MEDIUM: lua: direction test failed
Lua needs to known the direction of the http data processed (request or
response). It checks the flag SMP_OPT_DIR_REQ, but this flag is 0. This patch
correctly checks the flags after applying the SMP_OPT_DIR mask.
2015-10-13 15:49:31 +02:00
Andrew Hayworth
e32d1867f6 BUG/MINOR: Handle interactive mode in cli handler
A previous commit broke the interactive stats cli prompt. Specifically,
it was not clear that we could be in STAT_CLI_PROMPT when we get to
the output functions for the cli handler, and the switch statement did
not handle this case. We would then fall through to the default
statement, which was recently changed to set error flags on the socket.
This in turn causes the socket to be closed, which is not what we wanted
in this specific case.

To fix, we add a case for STAT_CLI_PROMPT, and simply break out of the
switch statement.

Testing:
 - Connected to unix stats socket, issued 'prompt', observed that I
   could issue multiple consecutive commands.
 - Connected to unix stats socket, issued 'prompt', observed that socket
   timed out after inactivity expired.
 - Connected to unix stats socket, issued 'prompt' then 'set timeout cli
   5', observed that socket timed out after 5 seconds expired.
 - Connected to unix stats socket, issued invalid commands, received
   usage output.
 - Connected to unix stats socket, issued 'show info', received info
   output and socket disconnected.
 - Connected to unix stats socket, issued 'show stat', received stats
   output and socket disconnected.
 - Repeated above tests with TCP stats socket.

[wt: no backport needed, this was introduced during the applet rework in 1.6]
2015-10-12 20:54:50 +02:00
Vincent Bernat
a72db18243 MINOR: lua: fix a spelling error in some error messages
"unknown" was spelled "unkown".
2015-10-10 08:13:37 +02:00
Dragan Dosen
17def46e10 BUG/MEDIUM: logs: fix time zone offset format in RFC5424
The time zone offset format used in function update_log_hdr_rfc5424() was
missing ":" as a separator.
2015-10-10 00:07:03 +02:00
Christopher Faulet
85b5a1a781 MINOR: ssl: Add callbacks to set DH/ECDH params for generated certificates
Now, a callback is defined for generated certificates to set DH parameters for
ephemeral key exchange when required.
In the same way, when possible, we also define Elliptic Curve DH (ECDH) parameters.
2015-10-09 12:13:17 +02:00
Christopher Faulet
7969a33a01 MINOR: ssl: Add support for EC for the CA used to sign generated certificates
This is done by adding EVP_PKEY_EC type in supported types for the CA private
key when we get the message digest used to sign a generated X509 certificate.
So now, we support DSA, RSA and EC private keys.

And to be sure, when the type of the private key is not directly supported, we
get its default message digest using the function
'EVP_PKEY_get_default_digest_nid'.

We also use the key of the default certificate instead of generating one. So we
are sure to use the same key type instead of always using an RSA key.
2015-10-09 12:13:12 +02:00
Christopher Faulet
c6f02fb929 MINOR: ssl: Read the file used to generate certificates in any order
The file specified by the SSL option 'ca-sign-file' can now contain the CA
certificate used to dynamically generate certificates and its private key in any
order.
2015-10-09 12:13:08 +02:00
Willy Tarreau
a84c267522 BUILD: ssl: fix build error introduced by recent commit
Commit d2cab92 ("BUG/MINOR: ssl: fix management of the cache where forged
certificates are stored") removed some needed #ifdefs resulting in ssl not
building on older openssl versions where SSL_CTRL_SET_TLSEXT_HOSTNAME is
not defined :

src/ssl_sock.c: In function 'ssl_sock_load_ca':
src/ssl_sock.c:2504: error: 'ssl_ctx_lru_tree' undeclared (first use in this function)
src/ssl_sock.c:2504: error: (Each undeclared identifier is reported only once
src/ssl_sock.c:2504: error: for each function it appears in.)
src/ssl_sock.c:2505: error: 'ssl_ctx_lru_seed' undeclared (first use in this function)
src/ssl_sock.c: In function 'ssl_sock_close':
src/ssl_sock.c:3095: error: 'ssl_ctx_lru_tree' undeclared (first use in this function)
src/ssl_sock.c: In function '__ssl_sock_deinit':
src/ssl_sock.c:5367: error: 'ssl_ctx_lru_tree' undeclared (first use in this function)
make: *** [src/ssl_sock.o] Error 1

Reintroduce the ifdefs around the faulty areas.
2015-10-09 12:13:07 +02:00
Christopher Faulet
77fe80c0b4 MINOR: ssl: Release Servers SSL context when HAProxy is shut down
[wt: could be backported to 1.5 as well]
2015-10-09 10:33:00 +02:00
Christopher Faulet
d2cab92e75 BUG/MINOR: ssl: fix management of the cache where forged certificates are stored
First, the LRU cache must be initialized after the configuration parsing to
correctly set its size.
Next, the function 'ssl_sock_set_generated_cert' returns -1 when an error occurs
(0 if success). In that case, the caller is responsible to free the memory
allocated for the certificate.
Finally, when a SSL certificate is generated by HAProxy but cannot be inserted
in the cache, it must be freed when the SSL connection is closed. This happens
when 'tune.ssl.ssl-ctx-cache-size' is set to 0.
2015-10-09 10:20:53 +02:00
Christopher Faulet
d57ad64873 BUG/MINOR: http: Add OPTIONS in supported http methods (found by find_http_meth)
The 'OPTIONS' method was not in the list of supported HTTP methods, so
find_http_meth returned HTTP_METH_OTHER instead of HTTP_METH_OPTIONS.

[wt: this fix needs to be backported at least to 1.5, 1.4 and 1.3]
2015-10-09 10:18:09 +02:00
Christopher Faulet
3c3a035be0 MINOR: lru: do not allocate useless memory in lru64_lookup
The lru64_lookup function was added in a previous patch of mine. This one
just removes a useless memory allocation.
2015-10-09 10:13:18 +02:00
Willy Tarreau
067ac9f4b6 MINOR: debug: enable memory poisonning to use byte 0
When debugging an issue, sometimes it can be useful to be able to use
byte 0 to poison memory areas, resulting in the same effect as a calloc().
This patch changes the default mem_poison_byte to -1 to disable it so that
all positive values are usable.
2015-10-08 14:12:13 +02:00
Willy Tarreau
a088d316b7 MEDIUM: init: support a list of files on the command line
HAProxy already supported being passed a file list on the command
line, by passing "-f" followed by a file name multiple times. People
have been complaining that it made it hard to pass file lists from init
scripts.

This patch introduces an end of arguments using the common "--" tag,
after which only file names may appear. These files are then added to
the existing list of other files specified using -f and are loaded in
their declaration order. Thus it becomes possible to do something like
this :

    haproxy -sf $(pidof haproxy) -- /etc/haproxy/global.cfg /etc/haproxy/customers/*.cfg
2015-10-08 11:58:48 +02:00
Willy Tarreau
c6ca1aa34d MEDIUM: init: support more command line arguments after pid list
Given that all command line arguments start with a '-' and that
no pid number can start with this character, there's no constraint
to make the pid list the last argument. Let's relax this rule.
2015-10-08 11:32:32 +02:00
Willy Tarreau
0078bfcb43 BUG/MEDIUM: lua: force server-close mode on Lua services
Thierry reported that keep-alive still didn't cope well with Lua
services. The reason is that for now applets have to be closed at
the end of a transaction so we want to work in server-close mode,
which isn't noticeable by the client since it still sees keep-alive.
Additionally we want to enable the request body transfer analyser
which will be needed to synchronize with the response analyser to
indicate the end of the transfer.
2015-10-07 20:24:05 +02:00
Willy Tarreau
6457d0fac3 CLEANUP: cli: ensure we can never double-free error messages
The release handler used to be called twice for some time and just by
pure luck we never ended up double-freeing the data there. Add a NULL
to ensure this can never happen should a future change permit this
situation again.
2015-10-07 20:00:24 +02:00
Andrew Hayworth
68d0534885 MINOR: cli: Dump all resolvers stats if no resolver section is given
This commit adds support for dumping all resolver stats. Specifically
if a command 'show stats resolvers' is issued withOUT a resolver section
id, we dump all known resolver sections. If none are configured, a
message is displayed indicating that.
2015-10-06 07:08:09 +02:00
Ben Cabot
49795eb00c BUG: config: external-check command validation is checking for incorrect arguments.
When using the external-check command option HAProxy was failing to
start with a fatal error "'external-check' cannot handle unexpected
argument". When looking at the code it was looking for an incorrect
argument. Also correcting an Alert message text as spotted by
PiBa-NL.
2015-10-02 23:11:49 +02:00
Thierry FOURNIER
ab95e656ea MINOR: http/tcp: fill the avalaible actions
This patch adds a function that generates the list of available actions
for the error message.
2015-10-02 22:56:11 +02:00
Thierry FOURNIER
56da1012d2 MINOR: lua: rename the tune.lua.applet-timeout
The name of the applet is "service", so this patch renames
tune.lua.applet-timeout to tune.lua.service-timeout.
2015-10-02 22:56:10 +02:00
Dmitry Sivachenko
eab7f3996f BUG/MEDIUM: str2ip: make getaddrinfo() consider local address selection policy
When the first parameter to getaddrinfo() is not NULL (it is always non-NULL
in str2ip()), on Linux AI_PASSIVE value for ai_flags is ignored. On
FreeBSD, when AI_PASSIVE is specified and hostname parameter is not NULL,
getaddrinfo() ignores local address selection policy, always returning
AAAA record. Pass zero ai_flags to behave correctly on FreeBSD, this
change should be no-op for Linux.

This fix should be backported to 1.5 as well, after some observation
period.
2015-10-02 01:01:58 +02:00
Dragan Dosen
43885c728e BUG/MEDIUM: logs: segfault writing to log from Lua
Michael Ezzell reported a bug causing haproxy to segfault during startup
when trying to send syslog message from Lua. The function __send_log() can
be called with *p that is NULL and/or when the configuration is not fully
parsed, as is the case with Lua.

This patch fixes this problem by using individual vectors instead of the
pre-generated strings log_htp and log_htp_rfc5424.

Also, this patch fixes a problem causing haproxy to write the wrong pid in
the logs -- the log_htp(_rfc5424) strings were generated at the haproxy
start, but "pid" value would be changed after haproxy is started in
daemon/systemd mode.
2015-10-02 00:57:45 +02:00
Thierry FOURNIER
10770faf8e MEDIUM: lua: change the timeout execution
Now, the Lua timeout is relative to the effective run time.
When Lua is waiting for I/O, this time is not counted in the
Lua run time.
2015-09-29 19:13:49 +02:00
Thierry FOURNIER
bee90aeda1 MINOR: lua: remove the run flag
Only the main execution function can set the run flag, because it is
the last function before the execution time.

This patch removes the flag set by another function. It will be used
by the new lua timeout counter.
2015-09-29 18:57:03 +02:00
Willy Tarreau
31138fae9f BUG/MEDIUM: server: fix misuse of format string in load-server-state's warnings
Commit e11cfcd ("MINOR: config: new backend directives:
load-server-state-from-file and server-state-file-name") introduced a bug
which can cause haproxy to crash upon startup by sending user-controlled
data in a format string when emitting a warning. Fix the way the warning
message is built to avoid this.

No backport is needed, this was introduced in 1.6-dev6 only.
2015-09-29 18:51:40 +02:00
Willy Tarreau
e1aebb2994 BUILD: server: fix build warnings introduced by load-server-state
Commit e11cfcd ("MINOR: config: new backend directives:
load-server-state-from-file and server-state-file-name") caused these
warnings when building with Clang :

src/server.c:1972:21: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
                            (srv_uweight < 0) || (srv_uweight > SRV_UWGHT_MAX))
                             ~~~~~~~~~~~ ^ ~
src/server.c:1980:21: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
                            (srv_iweight < 0) || (srv_iweight > SRV_UWGHT_MAX))
                             ~~~~~~~~~~~ ^ ~

Indeed, srv_iweight and srv_uweight are unsigned. Just drop the offending test.
2015-09-29 18:32:57 +02:00
Willy Tarreau
fc2a2d97d6 CLEANUP: tcp: silent-drop: only drain the connection when quick-ack is disabled
The conn_sock_drain() call is only there to force the system to ACK
pending data in case of TCP_QUICKACK so that the client doesn't retransmit,
otherwise it leads to a real RST making the feature useless. There's no
point in draining the connection when quick ack cannot be disabled, so
let's move the call inside the ifdef part.
2015-09-29 18:15:01 +02:00
Willy Tarreau
f50ec0fdbc BUG/MINOR: tcp: make silent-drop always force a TCP reset
The silent-drop action is supposed to close with a TCP reset that is
either not sent or not too far. But since it's on the client-facing
side, the socket's lingering is enabled by default and the RST only
occurs if some pending unread data remain in the queue when closing.
This causes some clean shutdowns to occur with retransmits, which is
not good at all. Force linger_risk on the socket to flush all data
and destroy the socket.

No backport is needed, this was introduced in 1.6-dev6.
2015-09-29 18:11:32 +02:00
Pradeep Jindal
bb2acf589f MINOR: payload: add support for tls session ticket ext
req.ssl_st_ext : integer
  Returns 0 if the client didn't send a SessionTicket TLS Extension (RFC5077)
  Returns 1 if the client sent SessionTicket TLS Extension
  Returns 2 if the client also sent non-zero length TLS SessionTicket
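
A hypothetical use on a TCP frontend inspecting the client hello:

  frontend fe_ssl
   mode tcp
   bind :443
   tcp-request inspect-delay 5s
   tcp-request content accept if { req.ssl_st_ext gt 0 }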
2015-09-29 14:07:32 +02:00
Willy Tarreau
2d392c2c2f MEDIUM: tcp: add new tcp action "silent-drop"
This stops the evaluation of the rules and makes the client-facing
connection suddenly disappear using a system-dependent way that tries
to prevent the client from being notified. The effect is then that the
client still sees an established connection while there's none on
HAProxy. The purpose is to achieve a comparable effect to "tarpit"
except that it doesn't use any local resource at all on the machine
running HAProxy. It can resist much higher loads than "tarpit", and
slow down stronger attackers. It is important to understand the impact
of using this mechanism. All stateful equipment placed between the
client and HAProxy (firewalls, proxies, load balancers) will also keep
the established connection for a long time and may suffer from this
action. On modern Linux systems running with enough privileges, the
TCP_REPAIR socket option is used to block the emission of a TCP
reset. On other systems, the socket's TTL is reduced to 1 so that the
TCP reset doesn't pass the first router, though it's still delivered to
local networks.
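
A hypothetical example (the file path is made up):

  frontend fe
   bind :80
   tcp-request connection silent-drop if { src -f /etc/haproxy/abusers.lst }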
2015-09-28 22:14:57 +02:00
Willy Tarreau
45059e9b98 BUG/MEDIUM: tcp: fix inverted condition to call custom actions
tcp-request connection had an inverted condition on action_ptr, resulting
in no registered actions to be usable since commit 4214873 ("MEDIUM: actions:
remove ACTION_STOP") merged in 1.6-dev5. Very few new actions were impacted.
No backport is needed.
2015-09-28 18:32:41 +02:00
Dragan Dosen
5b78d9b437 MEDIUM: logs: pass the trailing "\n" as an iovec
This patch passes the trailing "\n" as an iovec in the function
__send_log(), so that we don't need to modify the original log message.
2015-09-28 18:31:09 +02:00
Dragan Dosen
c8cfa7b4f3 MEDIUM: logs: have global.log_send_hostname not contain the trailing space
This patch unifies global.log_send_hostname addition in the log header
processing.
2015-09-28 18:27:45 +02:00
Willy Tarreau
47c8c029db MEDIUM: init: completely deallocate unused peers
When peers are stopped due to not being running on the appropriate
process, we want to completely release them and unregister their signals
and task in order to ensure there's no way they may be called in the
future.

Note: ideally we should have a list of all tables attached to a peers
section being disabled in order to unregister them and void their
sync_task. It doesn't appear to be *that* easy for now.
2015-09-28 16:43:48 +02:00
Willy Tarreau
20b7afbc14 BUG/MEDIUM: proxy: do not wake stopped proxies' tasks during soft_stop()
When performing a soft stop, we used to wake up every proxy's task and
each of their table's task. The problem is that since we're able to stop
proxies and peers not bound to a specific process, we may end up calling
random junk by doing so if the proxy we're waking up is already stopped.
This causes a segfault to appear during soft reloads for old processes
not bound to a peers section if such a section exists in other processes.

Let's only consider proxies that are not stopped when doing this.

This fix must be backported to 1.5 which also has the same issue.
2015-09-28 16:35:04 +02:00
Willy Tarreau
337a666572 BUG/MEDIUM: proxy: ignore stopped peers
Since commit f83d3fe ("MEDIUM: init: stop any peers section not bound
to the correct process"), it is possible to stop unused peers on certain
processes. The problem is that the pause/resume/stop functions are not
aware of this and will pass a NULL proxy pointer to the respective
functions, resulting in segfaults in unbound processes during soft
restarts.

Properly check that the peers' frontend is still valid before calling
them.

This bug also affects 1.5 so the fix must be backported. Note that this
fix is not enough to completely get rid of the segfault, the next one
is needed as well.
2015-09-28 16:27:44 +02:00
David Carlier
608c65ac6a MAJOR: da: Update of the DeviceAtlas API module
Introduction of the new keyword about the client's cookie name
via a new config entry.

Splits the work between the original convertor and the new fetch type.
The latter iterates through the whole list of HTTP headers but first,
make sure the request data are ready to process and set the input
to the string type rightly.

The API is then able to filter itself which headers are gueninely useful,
with a special care of the Client's cookie.

[WT: the immediate impact for the end user is that configs making use of
    "da-csv" won't load anymore and will need to be modified to use either
    "da-csv-conv" or "da-csv-fetch"]
2015-09-28 14:01:27 +02:00
David Carlier
5801a8247a MINOR: global: Few new struct fields for da module
The name and length of the client cookie, useful for the cookie value
extraction function, and a simple bitfield to indicate whether it is
set or not.
2015-09-28 14:01:27 +02:00
David Carlier
4686f792b4 MINOR: proto_http: Externalisation of previously internal functions
Needs to expose the HTTP headers 'iterator' and the client's cookie
value extraction functions.
2015-09-28 14:01:27 +02:00
Dragan Dosen
0b85ecee53 MEDIUM: logs: add a new RFC5424 log-format for the structured-data
This patch adds a new RFC5424-specific log-format for the structured-data
that is automatically sent by __send_log() when the sender is in RFC5424
mode.

A new statement "log-format-sd" should be used in order to set log-format
for the structured-data part in RFC5424 formatted syslog messages.
Example:

    log-format-sd [exampleSDID@1234\ bytes=\"%B\"\ status=\"%ST\"]
2015-09-28 14:01:27 +02:00
Dragan Dosen
1322d09a6f MEDIUM: logs: add support for RFC5424 header format per logger
The function __send_log() iterates over senders and passes the header as
the first vector to sendmsg(), thus it can send a logger-specific header
in each message.

A new logger argument "format rfc5424" should be used in order to enable
RFC5424 header format. For example:

    log 10.2.3.4:1234 len 2048 format rfc5424 local2 info
2015-09-28 14:01:27 +02:00
Dragan Dosen
68d2e3a742 MEDIUM: logs: remove the hostname, tag and pid part from the logheader
At the moment we have to call snprintf() for every log line just to
rebuild a constant. Thanks to sendmsg(), we send the message in 3 parts:
time-based header, proxy-specific hostname+log-tag+pid, session-specific
message.
2015-09-28 14:01:27 +02:00
Dragan Dosen
59cee973cd MEDIUM: log: use a separate buffer for the header and for the message
Make sendmsg() use two vectors, one for the message header that is updated
by update_log_hdr() and one for the message buffer.
2015-09-28 14:01:27 +02:00
Dragan Dosen
609ac2ab6c MEDIUM: log: replace sendto() with sendmsg() in __send_log()
This patch replaces sendto() with sendmsg() in __send_log() and makes use
of an iovec to send the log message.
2015-09-28 14:01:27 +02:00
David Carlier
834cb2e445 BUG/MEDIUM: main: Freeing a bunch of static pointers
The static_table_key, get_http_auth_buff and swap_buffer static variables
are now freed during deinit, and the two newly introduced functions are
called as well. In addition, the 'trash' string buffer is cleared.
2015-09-28 14:00:00 +02:00
David Carlier
60deeba090 MINOR: chunk: New function free_trash_buffers()
This new function is meant to be called in the general deinit phase,
to free those two internal chunks.
2015-09-28 14:00:00 +02:00
David Carlier
845efb53c7 MINOR: cfgparse: New function cfg_unregister_sections()
A new function meant to be called during the general deinit phase.
During configuration parsing, the section entries are all allocated.
This new function frees them.
2015-09-28 14:00:00 +02:00
Willy Tarreau
270978492c MEDIUM: config: set tune.maxrewrite to 1024 by default
The tune.maxrewrite parameter used to be pre-initialized to half of
the buffer size since the very early days when buffers were very small.
It has grown to absurdly large values over the years to reach 8kB for a
16kB buffer. This prevents large requests from being accepted, which is
the opposite of the initial goal.

Many users fix it to 1024 which is already quite large for header
addition.

So let's change the default setting policy :
  - pre-initialize it to 1024
  - let the user tweak it
  - in any case, limit it to tune.bufsize / 2

This results in 15kB usable to buffer HTTP messages instead of 8kB, and
doesn't affect existing configurations which already force it.
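
For reference, both values can still be set explicitly (numbers are only
examples):

  global
   tune.bufsize    16384
   tune.maxrewrite 1024   # in any case capped at tune.bufsize / 2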
2015-09-28 13:59:41 +02:00
Willy Tarreau
9b69454570 BUG/MINOR: config: check that tune.bufsize is always positive
We must not accept negative values for tune.bufsize as they will only
result in crashing the process during the parsing.
2015-09-28 13:59:38 +02:00
Thierry FOURNIER
a30b5dbf85 MINOR: lua: add AppletHTTP class and service
This class is used by Lua code running as an applet called in HTTP mode.
It also defines the associated Lua service.
2015-09-28 01:03:48 +02:00
Thierry FOURNIER
f0a64b676f MINOR: lua: add AppletTCP class and service
This class is used by Lua code running as an applet called in TCP mode.
It also defines the Lua service.
2015-09-28 01:03:48 +02:00
Thierry FOURNIER
5a363e71b2 MINOR: stream/applet: add use-service action
This new target can be called from the frontend or the backend. It
is evaluated just before the backend choice and just before the server
choice. So, the input stream or HTTP request can be forwarded to a
server or to an internal service.
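
A hypothetical example calling a Lua service by name:

  frontend fe
   bind :8080
   http-request use-service lua.hello-world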
2015-09-28 01:03:48 +02:00
Thierry FOURNIER
2f3867f429 BUG/MEDIUM: lua: don't reset undesired flags in hlua_ctx_resume
Some flags like HLUA_MUST_GC must not be cleared otherwise sessions are
not properly cleaned.
2015-09-28 01:03:43 +02:00
Willy Tarreau
acc980036f MEDIUM: action: add a new flag ACT_FLAG_FIRST
This flag is used by custom actions to know that they're called for the
first time. The only case where it's not set is when they're resuming
from a yield. It will be needed to let them know when they have to
allocate some resources.
2015-09-27 23:34:39 +02:00
Thierry FOURNIER
7c39ab4ac2 OPTIM/MEDIUM: lua: executes the garbage collector only when using cosocket
The garbage collector is a little bit heavy to run, and it was added
only for cosockets. This patch prevents useless executions when no
cosockets are used.
2015-09-27 22:56:40 +02:00
Thierry FOURNIER
6ab4d8ec0e MEDIUM: lua: change the GC policy
The GC is run each time a Lua function exits, whether with an error or with a success.
2015-09-27 22:22:37 +02:00
Thierry FOURNIER
6d695e6314 BUG/MEDIUM: lua: socket destroy before reading pending data
When the channel is down, the applet is woken up. Before this patch,
the applet closed the stream and the unread data were discarded.

After this patch, the stream is not closed unless the buffers are
empty.

It will be closed later by the close function, or by the garbage collector.
2015-09-27 21:50:26 +02:00
Thierry FOURNIER
8c8fbbe103 MINOR: lua/applet: the cosocket applet should use appctx_wakeup in place of task_wakeup
It seems that in the new applet system, the applet must be woken up with the
function appctx_wakeup() instead of the function task_wakeup().
2015-09-27 19:30:26 +02:00
Willy Tarreau
bb3c09ab2b CLEANUP: config: make the errorloc/errorfile messages less confusing
Some users believe that "status code XXX not handled" means "not handled
by haproxy". Let's be clear that's only about the option.
2015-09-27 15:13:30 +02:00
Thierry FOURNIER
c2f5653452 MINOR: lua: extend socket address to support non-IP families
The HAProxy Lua socket respects the Lua Socket TCP specs. These specs
are a little bit limited; they do not permit connecting to a UNIX socket.

This patch extends the specs a little. It accepts a
syntax that begins with "unix@", "ipv4@", "ipv6@", "fd@" or "abns@".
2015-09-27 15:04:32 +02:00
Thierry FOURNIER
7fe3be7281 MINOR: standard: avoid DNS resolution from the function str2sa_range()
This patch blocks DNS resolution in the function str2sa_range(),
which is useful if the function is used at HAProxy runtime.
2015-09-27 15:04:32 +02:00
Willy Tarreau
528192d310 MEDIUM: lua: only allow actions to yield if not in a final call
Actions may yield but must not do it during the final call from a ruleset
because it indicates there will be no more opportunity to complete or
clean up. This is indicated by ACT_FLAG_FINAL in the action's flags,
which must be passed to hlua_resume().

Thanks to this, an action called from a TCP ruleset is properly woken
up and possibly finished when the client disconnects.
2015-09-27 11:04:19 +02:00
Willy Tarreau
394586836f MEDIUM: http: pass ACT_FLAG_FINAL to custom actions
In HTTP it's more difficult to know when to pass the flag or not
because all actions are supposed to be final and there's no inspection
delay. Also, the input channel may very well be closed without this
being an error. So we only set the flag when option abortonclose is
set and the input channel is closed, which is the only case where the
user explicitly wants to forward a close down the chain.
2015-09-27 11:04:06 +02:00
Willy Tarreau
c1b10d38d7 MEDIUM: actions: add new flag ACT_FLAG_FINAL to notify about last call
This new flag indicates to a custom action that it must not yield because
it will not be called anymore. This addresses an issue introduced by commit
bc4c1ac ("MEDIUM: http/tcp: permit to resume http and tcp custom actions"),
which made it possible to yield even after the last call and causes Lua
actions not to be stopped when the session closes. Note that the Lua issue
is not fixed yet at this point. Also only TCP rules were handled, for now
HTTP rules continue to let the action yield since we don't know whether or
not it is a final call.
2015-09-27 11:04:06 +02:00
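For illustration, a custom action honoring this flag could look like the following sketch; the flag and return values are simplified stand-ins rather than HAProxy's exact enums.

    /* stand-in definitions, not HAProxy's real ones */
    #define ACT_FLAG_FINAL 0x01

    enum act_ret { ACT_RET_CONT, ACT_RET_YIELD };

    /* If the work is not finished but this is the last call from the rule
     * set, the action must complete or clean up now instead of yielding,
     * because it will never be resumed. */
    enum act_ret my_action(int work_done, int flags)
    {
        if (!work_done) {
            if (flags & ACT_FLAG_FINAL)
                return ACT_RET_CONT;   /* no more calls: finish now */
            return ACT_RET_YIELD;      /* safe to yield, we'll be called back */
        }
        return ACT_RET_CONT;
    }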
Willy Tarreau
658b85b68d MEDIUM: actions: pass a new "flags" argument to custom actions
Since commit bc4c1ac ("MEDIUM: http/tcp: permit to resume http and tcp
custom actions"), some actions may yield and be called back when new
information is available. Unfortunately some of them may continue to
yield because they simply don't know that it's the last call from the
rule set. For this reason we'll need to pass a flag to the custom
action to convey such information, and possibly others at the same time.
2015-09-27 11:04:06 +02:00
Thierry FOURNIER
eba6f64a5f BUG/MEDIUM: lua: wakeup task on bad conditions
The condition was:
 * wake up for reads if the output channel contains data
 * wake up for writes if the input channel has some room.
2015-09-27 11:03:19 +02:00
Thierry FOURNIER
5a50a85f4f BUG/MEDIUM: lua: forces a garbage collection
When a thread stops, this patch forces a garbage collection which
cleans up and frees the memory.

The HAProxy Lua Socket class contains an HAProxy session, so it
uses a large amount of memory; in addition, this Socket class uses a
file descriptor and keeps a connection open.

If a Socket object is stored in a global variable, it stays
alive for the life of the process (unless it is closed by the
other side, or a timeout is reached). If it is stored
in a local variable, it must die with this variable.

The socket is closed by a call from the garbage collector, and the
scope and use of a variable are known by the same garbage collector.

So, running the GC just after the end of all Lua code ensures that
heavy resources like the Socket class are freed quickly.
2015-09-27 11:03:08 +02:00
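Forcing a full collection with the standard Lua C API looks like the sketch below; this is a generic Lua 5.x illustration of lua_gc(), not the exact code from the patch.

    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    int main(void)
    {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);

        /* ... run Lua code that may hold heavy resources (sockets, fds) ... */

        /* force a full garbage collection cycle so that objects whose __gc
         * metamethod closes sockets or file descriptors are released promptly */
        lua_gc(L, LUA_GCCOLLECT, 0);

        lua_close(L);
        return 0;
    }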
Willy Tarreau
3adac08849 BUG/MEDIUM: lua: properly set the target on the connection
Not having the target set on the connection causes it to be released
at the last moment, and the destination address to randomly be valid
depending on the data found in the memory at this moment. In practice
it works as long as memory poisoning is disabled. The deep reason is
that connect_server() doesn't expect to be called with SF_ADDR_SET and
an existing connection with !reuse. This causes the release of the
connection, its reallocation (!reuse), and taking the address from the
newly allocated connection. This should certainly be improved.
2015-09-26 17:56:23 +02:00
Willy Tarreau
9af89f7905 BUG/MEDIUM: lua: better fix for the protocol check
Commit d75cb0f ("BUG/MAJOR: lua: segfault after the channel data is
modified by some Lua action.") introduced a regression causing an
action run from a TCP rule in an HTTP proxy to end in HTTP error
if it terminated cleanly, because it didn't parse the HTTP request!

Relax the test so that it takes into account the opportunity for the
analysers to parse the message.
2015-09-26 11:50:08 +02:00
Willy Tarreau
de70fa17a9 MINOR: lua: use the proper applet wakeup mechanism
The Lua code must use the appropriate wakeup mechanism for cosockets.
It wakes them up from outside the stream and outside the applet, so it
shouldn't use the applet's wakeup callback which is designed only for
use from within the applet itself. For now it didn't cause any trouble
(yet).
2015-09-26 11:25:05 +02:00
Thierry FOURNIER
10e5bc76c7 BUG/MEDIUM: lua: longjmp function must be unregistered
The longjmp function must be unregistered when we leave the function
which installed it.
2015-09-26 01:34:29 +02:00
Willy Tarreau
7a4501757f CLEANUP: lua: remove unneeded memset(0) after calloc()
No need for these memset() anymore, since calloc() was put there
to avoid them.
2015-09-26 01:33:24 +02:00
Thierry FOURNIER
3c7a77c0e7 CLEANUP: lua: use calloc in place of malloc
calloc is safer because it fills the allocated memory zone with zeros.
2015-09-26 01:32:06 +02:00
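A minimal before/after sketch of the change (the struct name is illustrative):

    #include <stdlib.h>
    #include <string.h>

    struct foo { int a; void *p; };

    struct foo *alloc_foo(void)
    {
        /* before: malloc() followed by an explicit memset()
         *   struct foo *f = malloc(sizeof(*f));
         *   if (f) memset(f, 0, sizeof(*f));
         */

        /* after: calloc() returns already-zeroed memory */
        return calloc(1, sizeof(struct foo));
    }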
Thierry FOURNIER
d75cb0fce6 BUG/MAJOR: lua: segfault after the channel data is modified by some Lua action.
When an action or a fetch modifies the channel data, the HTTP request parser
pointer becomes inconsistent. This patch detects the modification and calls
the parser again.
2015-09-26 00:37:55 +02:00
Thierry FOURNIER
84e73c8882 MEDIUM: lua: use the function lua_rawset in place of lua_settable
The function lua_settable uses the metatable for accessing data,
so in the C part we want to index values in the real table and
not through the meta-methods. It's much faster this way.
2015-09-26 00:37:35 +02:00
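A generic Lua C API illustration of the difference (not HAProxy's hlua code): lua_settable() may trigger a __newindex metamethod, while lua_rawset() writes into the table directly.

    #include <lua.h>

    /* Set t[k] = v from C, bypassing metamethods; lua_rawset() pops both
     * the key and the value.  Requires Lua >= 5.2 for lua_absindex(). */
    void set_field_raw(lua_State *L, int t_idx, const char *k, int v)
    {
        int t = lua_absindex(L, t_idx);  /* stable index for the table */
        lua_pushstring(L, k);            /* key */
        lua_pushinteger(L, v);           /* value */
        lua_rawset(L, t);                /* raw assignment, no __newindex */
    }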
Thierry FOURNIER
d2a3dcc8bd MINOR: lua: identify userdata objects
This patch adds Lua primitives for identifying the Lua class
associated with userdata.
2015-09-25 23:40:20 +02:00
Thierry FOURNIER
a7b536b53b MINOR: lua: reset pointer after use
After releasing the Lua environment, this patch sets the only pointer to
the Lua stack to NULL. This prevents accidental re-use.
2015-09-25 23:40:02 +02:00
Thierry FOURNIER
fd50f0bcc8 MINOR: http: split initialization
The goal is to export the HTTP txn initialisation functions so that
they can be used in the Lua code.
2015-09-25 23:39:48 +02:00
Thierry FOURNIER
3c3317849f MINOR: http: export http_get_path() function
This patch simply exports the http_get_path() function from the proto_http.c file.
2015-09-25 23:39:27 +02:00
Thierry FOURNIER
27929fbfd7 MINOR: channel: rename function chn_sess to chn_strm
The name of the function chn_sess is no longer appropriate.
This patch renames it to chn_strm.
2015-09-25 23:27:33 +02:00
Thierry FOURNIER
ed08d6a9be MEDIUM: proto_http: smp_prefetch_http initialize txn
When we call the function smp_prefetch_http(), if the txn is not initialized,
it doesn't work. This patch fixes this. Now, smp_prefetch_http() permits using
HTTP with any proxy mode.
2015-09-25 23:27:23 +02:00
Willy Tarreau
958f0742a2 BUG/MEDIUM: stream-int: avoid double-call to applet->release
While the SI_ST_DIS state is set *after* doing the close on a connection,
it was set *before* calling release on an applet. Applets have no internal
flags contrary to connections, so they have no way to detect they were
already released. Because of this it happened that applets were closed
twice, once via si_applet_release() and once via si_release_endpoint() at
the end of a transaction. The CLI applet could perform a double free in
this case, though the situation to cause it is quite hard because it
requires that the applet is stuck on output in states that produce very
few data.

In order to solve this, we now assign the SI_ST_DIS state *after* calling
->release, and we refrain from doing so if the state is already assigned.
This makes applets work much more like connections and definitely avoids
this double release.

In the future it might be worth making applets have their own flags like
connections to carry their own state regardless of the stream interface's
state, especially when dealing with connection reuse.

No backport is needed since this issue was caused by the rearchitecture
in 1.6.
2015-09-25 21:16:03 +02:00
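The ordering described above can be summarized by the sketch below; the types and function names are simplified stand-ins for the stream-interface code.

    /* stand-in types, not HAProxy's real ones */
    enum si_state { SI_ST_EST, SI_ST_DIS };

    struct si_applet { void (*release)(void *appctx); };

    struct stream_iface {
        enum si_state state;
        struct si_applet *applet;
        void *appctx;
    };

    /* Release the applet only once: if the state is already SI_ST_DIS the
     * release handler ran earlier, so do nothing.  The state is assigned
     * *after* ->release(), mirroring what is done for connections. */
    void si_applet_shut(struct stream_iface *si)
    {
        if (si->state == SI_ST_DIS)
            return;
        if (si->applet && si->applet->release)
            si->applet->release(si->appctx);
        si->state = SI_ST_DIS;
    }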
Willy Tarreau
5cfa3bcc22 MINOR: cli: do not call the release handler on internal error.
It's dangerous to call this on internal state error, because it risks
to perform a double-free. This can only happen when a state is not
handled. Note that the switch/case currently doesn't offer any option
for missed states since they're all declared. Better fix this anyway.
The fix was tested by commenting out some entries in the switch/case.
2015-09-25 21:16:03 +02:00
Willy Tarreau
c9e930accc BUG/MEDIUM: cli: properly handle closed output
Due to the code inherited from the early CLI mode with the non-interactive
code, the SHUTR status was only considered while waiting for a request,
which prevents the connection from properly being closed during a dump,
and the connection used to remain established. This issue didn't happen
in 1.5 because while this part was missed, the resynchronization performed
in process_session() would detect the situation and handle the cleanup.

No backport is needed.
2015-09-25 21:16:03 +02:00
Willy Tarreau
68c00c7185 BUG/MINOR: stats: do not call cli_release_handler 3 times
If an error happens during a dump on the CLI, an explicit call to
cli_release_handler() is performed. This is not needed anymore since
we introduced ->release() in the applet which is called upon error.
Let's remove this confusing call which can even be risky in some
situations.
2015-09-25 21:16:02 +02:00
Willy Tarreau
eca572fe7f BUG/MEDIUM: applet: fix reporting of broken write situation
If an applet tries to write to a closed connection, it hangs forever.
This results in some "get map" commands on the CLI to leave orphaned
connections alive.

Now the applet wakeup function detects that the applet still wants to
write while the channel is closed for reads, which is the equivalent
to the common "broken pipe" situation. In this case, an error is
reported on the stream interface, just as it happens with connections
trying to perform a send() in a similar situation.

With this fix the stats socket is properly released.
2015-09-25 21:16:02 +02:00
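The check amounts to the following sketch; the flag names and values are simplified stand-ins for the real channel and stream-interface flags.

    /* stand-in flags, not HAProxy's exact names or values */
    #define CF_SHUTR        0x01   /* channel closed for reads by the peer */
    #define SI_FL_WANT_PUT  0x02   /* applet still has data to push */
    #define SI_FL_ERR       0x04   /* error reported on the stream interface */

    /* Applet equivalent of send() on a broken pipe: the applet wants to
     * write but the other side will never read again, so report an error
     * instead of leaving the applet and its connection stuck forever. */
    void applet_check_broken_write(unsigned int chn_flags, unsigned int *si_flags)
    {
        if ((*si_flags & SI_FL_WANT_PUT) && (chn_flags & CF_SHUTR))
            *si_flags |= SI_FL_ERR;
    }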
Willy Tarreau
aa977ba205 MINOR: stream-int: rename si_applet_done() to si_applet_wake_cb()
This function is a callback made only for calls from the applet handler.
Rename it to remove confusion. It's currently called from the Lua code
but that's not correct, we should call the notify and update functions
instead otherwise it will not enable the applet again.
2015-09-25 21:16:02 +02:00
Willy Tarreau
335520305c MEDIUM: stream-int: completely remove stream_int_update_embedded()
This one is not needed anymore as what it used to do is either
completely covered by the new stream_int_notify() function, or undesired
and inherited from the past as a side effect of introducing the
connections.

This update is theoretically never called since it's assigned only when
nothing is connected to the stream interface. However a test has been
added to si_update() to stay safe if some foreign code decides to call
si_update() in unsafe situations.
2015-09-25 21:16:02 +02:00
Willy Tarreau
651e18292d MEDIUM: stream-int: use the same stream notification function for applets and conns
The code to report completion after a connection update or an applet update
was almost the same since applets stole it from the connection. But the
differences made them hard to maintain and prevented the creation of new
functions doing only one part of the work.

This patch replaces the common code from the si_conn_wake_cb() and
si_applet_wake_cb() with a single call to stream_int_notify() which only
notifies the stream (si+channels+task) from the outside.

No functional change was made beyond this.
2015-09-25 21:16:02 +02:00
Willy Tarreau
615f28bec1 MINOR: stream-int: implement the stream_int_notify() function
stream_int_notify() was taken from the common part between si_conn_wake_cb()
and si_applet_done(). It is designed to report activity to a stream from
outside its handler. It'll generally be used by lower layers to report I/O
completion but may also be used by remote streams if the buffer processing
is shared.
2015-09-25 21:16:02 +02:00
Willy Tarreau
ea3cc48d64 MEDIUM: stream-int: clean up the conditions to enable reading in si_conn_wake_cb
The condition to release the SI_FL_WAIT_ROOM flag was abnormally
complicated because it was inherited from 6 years ago before we used
to check for the buffer's emptiness. The CF_READ_PARTIAL flag had to be
removed, and the complex test was replaced with a simpler one checking
if *some* data were moved out or not.

The reason behind this change is to have a condition compatible with
both connections and applets, as applets currently don't work very
well in this area. Specifically, some optimizations on the applet
side cause them not to release the flag above until the buffer is
empty, which may prevent applets from talking together (eg: peers
over large haproxy buffers and small kernel buffers).
2015-09-25 18:07:16 +02:00
Willy Tarreau
388a2385a5 MINOR: stream-int: move the applet_pause call out of the stream updates
It's just to split the part dealing with the stream update and the part
dealing with the applet update in si_applet_done().
2015-09-25 18:07:16 +02:00
Willy Tarreau
cbc32601a6 MINOR: stream-int: export stream_int_update_*
Not only were these functions not static, but we'll also want to export
them.
2015-09-25 18:07:16 +02:00
Willy Tarreau
5d5b2fecac MEDIUM: stream-int: call stream_int_update() from si_update()
Now the call to stream_int_update() is moved to si_update(), which
is exclusively called from the stream, so that the socket layer may
be updated without updating the stream layer. This will later make it
possible to call it individually from other places (other tasks or applets,
for example).
2015-09-25 18:07:16 +02:00
Willy Tarreau
452c7d5d93 MEDIUM: stream-int: factor out the stream update functions
Now that we have a generic stream_int_update() function, we can
replace the equivalent part in stream_int_update_conn() and
stream_int_update_applet() to avoid code duplication.

There is no functional change, as the code is the same but split
in two functions for each call.
2015-09-25 18:07:16 +02:00
Willy Tarreau
25f1310f33 MINOR: stream-int: implement a new stream_int_update() function
This function is designed to be called from within the stream handler to
update the channels' expiration timers and the stream interface's flags
based on the channels' flags. It needs to be called only once after the
channels' flags have settled down, and before they are cleared, though it
doesn't harm to call it as often as desired (it just slightly hurts
performance). It must not be called from outside of the stream handler,
as what it does will be used to compute the stream task's expiration.

The code was taken directly from stream_int_update_applet() and
stream_int_update_conn() which had exactly the same one except for
applet-specific or connection-specific status update.
2015-09-25 18:07:16 +02:00
Willy Tarreau
2f4e702031 MEDIUM: stream-int: split stream_int_update_conn() into si- and conn-specific parts
The purpose is to separate the connection-specific parts so that the
stream-int specific one can be factored out. There's no functional
change here, only code displacement.
2015-09-25 18:07:16 +02:00
Willy Tarreau
9994238adc BUG/MAJOR: applet: use a separate run queue to maintain list integrity
If an applet wakes up and causes the next one to sleep, the active list
is corrupted and cannot be scanned anymore, as the process then loops
over the next element.

In order to avoid this problem, we move the active applet list to a run
queue and reinit the active list. Only the first element of this queue
is checked, and if the element is not removed, it is removed and requeued
into the active list.

Since we're using a distinct list, if an applet wants to requeue another
applet into the active list, it properly gets added to the active list
and not to the run queue.

This stops the infinite loop issue that could be caused with Lua applets,
and in any future configuration where two applets could be attached
together.
2015-09-25 18:07:16 +02:00
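The general idea, stealing the pending list into a private run queue so that handlers may freely requeue or remove applets without corrupting the iteration, is sketched below with a plain singly-linked list; HAProxy's actual list handling differs.

    #include <stddef.h>

    struct appctx {
        struct appctx *next;
        void (*io_handler)(struct appctx *a);
    };

    static struct appctx *active_list;  /* applets wanting to run */

    void process_runnable_applets(void)
    {
        /* move the whole active list into a private run queue and reinit
         * the active list; handlers that wake up other applets will add
         * them to the (now empty) active list, never to our run queue */
        struct appctx *run_queue = active_list;
        active_list = NULL;

        while (run_queue) {
            struct appctx *curr = run_queue;
            run_queue = curr->next;     /* detach before running */
            curr->next = NULL;
            curr->io_handler(curr);     /* may requeue curr or others safely */
        }
    }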
Willy Tarreau
64bca9d36a MINOR: applet: rename applet_runq to applet_active_queue
This is not a real run queue and we're facing ugly bugs because
of this: if an applet removes another applet from the queue,
typically the next one after itself, the list iterator loops
forever because the list's backup pointer is not valid anymore.
Before creating a run queue, let's rename this list.
2015-09-25 18:02:44 +02:00
Willy Tarreau
9bb49f6906 BUG/MEDIUM: acl: always accept match "found"
The pattern match "found" fails to parse on binary type samples. The
reason is that it presents itself as an integer type which bin cannot
be cast into. We must always accept this match since it validates
anything on input regardless of the type. Let's just relax the parser
to accept it.

This problem might also exist in 1.5.
(cherry picked from commit 91cc2368a73198bddc3e140d267cce4ee08cf20e)
2015-09-24 16:38:48 +02:00
Willy Tarreau
d7bdcb874b BUG/MEDIUM: payload: make req.payload and payload_lv aware of dynamic buffers
Due to a check between offset+len and buf->size, an empty buffer returns
"will never match". Check against tune.bufsize instead.
(cherry picked from commit 43e4039fd5d208fd9d32157d20de93d3ddf9bc0d)
2015-09-24 16:38:48 +02:00
Willy Tarreau
c4b56e4470 MINOR: stream-int: use si_release_endpoint() to close idle conns
We don't want to open-code the connection close code in
si_idle_conn_wake_cb() because we need to centralize some controls.
2015-09-24 11:57:34 +02:00
Thierry FOURNIER
8255a75e08 MINOR: lua: change actions registration
The current Lua actions are not registered. The executed function is
selected according to a function name written in the HAProxy configuration.

This patch adds an action registration function. The configuration mode
described above disappears.

This change introduces some incompatibilities with existing configuration files for
HAProxy 1.6-dev.
2015-09-23 21:44:23 +02:00
Thierry FOURNIER
85c6c97830 MINOR: action: add reference to the original keyword matched for the called parser.
This is useful because the keyword can contain some configuration
data set during keyword registration.
2015-09-23 21:44:23 +02:00
Willy Tarreau
2c1068cb67 BUG/MEDIUM: stream: do not dereference strm_li(stream)
Some streams do not have a listener (eg: Lua's cosockets) so
let's check for this. For now this problem cannot happen but
it's definitely unsafe.
2015-09-23 13:42:08 +02:00
Willy Tarreau
b746329dc3 BUG/MEDIUM: proxy: do not dereference strm_li(stream)
Some streams do not have a listener (eg: Lua's cosockets) so
let's check for this. For now this problem cannot happen but
it's definitely unsafe.
2015-09-23 13:42:08 +02:00
Willy Tarreau
c29d0cda4b BUG/MEDIUM: http: do not dereference strm_li(stream)
Some streams do not have a listener (eg: Lua's cosockets) so
let's check for this. For now this problem cannot happen but
it's definitely unsafe.
2015-09-23 13:42:08 +02:00
Willy Tarreau
f1e0212d54 BUG/MAJOR: cli: do not dereference strm_li()->proto->name
Or it will crash when dumping streams instantiated by an applet,
like Lua's cosockets.
2015-09-23 13:42:08 +02:00
Emeric Brun
b058f1c548 BUG/MINOR: fct peer_prepare_ackmsg should not use trash.
function 'peer_prepare_ackmsg' is designed to use the argument 'msg'
instead of 'trash.str'.

There is currently no bug because the caller passes 'trash.str' in
the 'msg' argument.
2015-09-22 16:07:34 +02:00
Emeric Brun
a6a0998529 BUG/MEDIUM: peers: same table updates re-pushed after a re-connect
Some updates are pushed using an incremental update message after a
re-connection whereas the origin is forgotten by the peer.
These updates are never correctly acknowledged, so they are regularly
re-pushed after an idle timeout and a re-connect.

The fix consists in using an absolute update message in some cases.
2015-09-22 16:07:32 +02:00
Emeric Brun
c703a9d296 BUG/MEDIUM: peers: some table updates are randomly not pushed.
If an entry is not yet present in the update tree, we could fail to schedule
it for a push, depending on an uninitialized value (upd.key remains uninitialized
for new sessions and isn't re-initialized for reused ones).

In the same way, if an entry is present in the tree but its update tick
is far in the past (> 2^31), we could consider it still scheduled even though
it is not the case.

The fix consists in forcing the re-scheduling of an update if it was not present
in the update tree or if the update is not in the scheduling window of every peer.
2015-09-22 16:07:27 +02:00
Baptiste Assmann
8c62c47cb2 BUG: dns: can't connect UDP socket on FreeBSD
PiBANL reported that HAProxy's DNS resolver can't "connect" its socket
on FreeBSD.
Remi Gacogne reported that we should use the function 'get_addr_len' to
get the address structure size instead of sizeof.
2015-09-22 16:06:41 +02:00
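The idea behind using get_addr_len() can be sketched as follows; addr_len() here is a simplified re-implementation for illustration, not HAProxy's function.

    #include <sys/socket.h>
    #include <netinet/in.h>

    /* return the length matching the family actually stored in the
     * sockaddr_storage, instead of sizeof(struct sockaddr_storage) */
    static socklen_t addr_len(const struct sockaddr_storage *ss)
    {
        switch (ss->ss_family) {
        case AF_INET:  return sizeof(struct sockaddr_in);
        case AF_INET6: return sizeof(struct sockaddr_in6);
        default:       return sizeof(*ss);
        }
    }

    int connect_udp(int fd, const struct sockaddr_storage *ss)
    {
        /* passing sizeof(*ss) is rejected on some systems (e.g. FreeBSD);
         * the length must match the real address family */
        return connect(fd, (const struct sockaddr *)ss, addr_len(ss));
    }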
Willy Tarreau
f7ead61388 BUG/MINOR: args: add name for ARGT_VAR
Commit 4834bc7 ("MEDIUM: vars: adds support of variables") introduced
ARGT_VAR but forgot to put it in the names array. No backport needed.
2015-09-21 20:57:12 +02:00
Willy Tarreau
a68f7629dd BUG/MEDIUM: stick-tables: fix double-decrement of tracked entries
Mailing list participant "mlist" reported negative conn_cur values in
stick tables as the result of "tcp-request connection track-sc". The
reason is that after the stick entry is copied from the session to the
stream, both the session and the stream grab a reference to the entry
and when the stream ends, it decrements one reference and one connection,
then the same is done for the session.

In fact this problem was already encountered slightly differently in the
past and addressed by Thierry using the patch below, as it was believed at
the time to be only a refcount issue since that was the observable symptom :

   827752e "BUG/MEDIUM: stick-tables: refcount error after copying SC..."

In reality the problem is that the stream must touch neither the refcount
nor the connection count for entries it inherits from the session. While
we have no way to tell whether a track entry was inherited from the session
(since they're simply memcpy'd), it is possible to prevent the stream from
touching an entry that already exists in the session because that's a
guarantee that it was inherited from it.

Note that it may be a temporary fix. Maybe in the future when a session
gives birth to multiple streams we'll face a situation where a session may
be updated to add more tracked entries after instantiating some streams.
The correct long-term fix is to mark some tracked entries as shared or
private (or RO/RW). That will allow the session to track more entries
even after the same trackers are being used by early streams.

No backport is needed, this is only caused by the session/stream split in 1.6.
2015-09-21 17:48:24 +02:00
Willy Tarreau
37bb7be09c BUG/MAJOR: peers: fix a crash when stopping peers on unbound processes
Pradeep Jindal reported and troubleshooted a bug causing haproxy to die
during startup on all processes not making use of a peers section. It
only happens with nbproc > 1 when peers are declared. Technically it's
when the peers task is stopped on processes that don't use it that the
crash occurred (a task_free() called on a NULL task pointer).

This only affects peers v2 in the dev branch, no backport is needed.
2015-09-21 15:24:58 +02:00
James Rosewell
63426cb6a6 MINOR: 51d: Improved string handling for LRU cache
Removed use of strlen with the data added to and retrieved from the cache
by using chunk structures instead of string pointers.
2015-09-21 12:55:24 +02:00
James Rosewell
a28bbd53c4 MAJOR: 51d: Upgraded to support 51Degrees V3.2 and new features
Trie device detection doesn't benefit from caching compared to Pattern.
As such the LRU cache has been removed from the Trie method.

A new fetch method named 51d.all has been added which uses all the
available HTTP headers for device detection. The previous 51d
conv method has been changed to 51d.single where one HTTP header,
typically User-Agent, is used for detection. This method is marginally
faster but less accurate.

Three new properties are available with the Pattern method called
Method, Difference and Rank which provide insight into the validity of
the results returned.

A pool of worksets is used to avoid needing to create a new workset for
every request. The workset pool is thread safe, ready to support a future
multi-threaded version of HAProxy.
2015-09-21 12:44:59 +02:00
James Rosewell
91a41cb32d MINOR: http: made CHECK_HTTP_MESSAGE_FIRST accessible to other functions
Added the definition of CHECK_HTTP_MESSAGE_FIRST and the declaration of
smp_prefetch_http to the header.

Changed smp_prefetch_http implementation to remove the static qualifier.
2015-09-21 12:05:26 +02:00
Baptiste Assmann
9b6857e9b5 MINOR: cli: new stats socket command: show backend
New stats socket command which displays only the list of backends
available in the current process.
For now only the backend name is displayed.
2015-09-19 17:05:29 +02:00
Baptiste Assmann
6076d1c02d MINOR: server: startup slowstart task when using seamless reload of HAProxy
This patch uses the start up of the health check task to also start
the warmup task when required.

This is executed only once: when HAProxy has just started up and can
be started only if the load-server-state-from-file feature is enabled
and the server was in the warmup state before a reload occurs.
2015-09-19 17:05:28 +02:00
Baptiste Assmann
fecd2b53af MINOR: init: server state loaded from file
With this patch, HAProxy reads the content of the server state file and
updates the state of servers accordingly.
2015-09-19 17:05:28 +02:00
Baptiste Assmann
e11cfcd2c9 MINOR: config: new backend directives: load-server-state-from-file and server-state-file-name
This directive gives HAProxy the ability to use either the global
server-state-file directive or a local one using server-state-file-name to
load server states.
The state can be saved right before the reload by the init script, using
the "show servers state" command on the stats socket redirecting output into
a file.
2015-09-19 17:05:28 +02:00
Baptiste Assmann
e0882263e0 MINOR: config: new global section directive: server-state-file
This new global section directive is used to store the path to the file
where HAProxy will be able to retrieve server states across reloads.

The file pointed to by this path is used to store the
state of all servers from all backends.
2015-09-19 17:05:27 +02:00
Baptiste Assmann
6bc89366bb MINOR: config: new global directive server-state-base
This new global directive can be used to provide a base directory from which
all the server state files can be loaded.
If a server state file name starts with a slash '/', then this directive
is not applied.
2015-09-19 17:05:26 +02:00
Baptiste Assmann
2828946cb5 MINOR: cli: new stats socket command: show servers state
New command 'show servers state' which dumps all variable parameters
of a server during an HAProxy process's life.
The purpose is to dump the current server state at run time in order to
read it back right after a reload.

The output format is versioned and we support version 1 for now.
2015-09-19 16:52:46 +02:00
Baptiste Assmann
54a4730c65 BUG/MAJOR: can't enable a server through the stat socket
When a server is disabled in the configuration using the "disabled"
keyword, a single flag is set: SRV_ADMF_CMAINT (it used to be
SRV_ADMF_FMAINT).
That said, when providing the first version of this code, we also
changed the SRV_ADMF_MAINT mask to match any of the possible MAINT
cases: SRV_ADMF_FMAINT, SRV_ADMF_IMAINT, SRV_ADMF_CMAINT

Since SRV_ADMF_CMAINT is never (and is not supposed to be) altered at
run time, once a server has this flag set up, it can never ever be
enabled again using the stats socket.

In order to fix this, we should:
- consider SRV_ADMF_CMAINT as a simple flag to report the state in the
  old configuration file (will be used after a reload to deduce the
  state of the server in a new running process)
- enabling both SRV_ADMF_CMAINT and SRV_ADMF_FMAINT when the keyword
  "disabled" is in use in the configuration
- update the mask SRV_ADMF_MAINT as it was before, to only match
  SRV_ADMF_FMAINT and SRV_ADMF_IMAINT.

The following patch performs the changes above.
It allows fixing the regression without breaking the way the upcoming
feature (seamless server state across reloads) is going to work.

Note: this is 1.6-only, no backport needed.
2015-09-18 12:38:23 +02:00
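The relationships between the flags can be sketched as follows; the bit values are illustrative, only the combinations reflect the description above.

    /* illustrative bit values; only the relationships matter here */
    #define SRV_ADMF_FMAINT  0x01  /* forced maintenance (stats socket, "disabled") */
    #define SRV_ADMF_IMAINT  0x02  /* inherited maintenance (e.g. via tracking) */
    #define SRV_ADMF_CMAINT  0x04  /* "disabled" was present in the configuration */

    /* the MAINT mask must not include CMAINT, otherwise a server disabled
     * in the configuration could never be re-enabled at run time */
    #define SRV_ADMF_MAINT   (SRV_ADMF_FMAINT | SRV_ADMF_IMAINT)

    /* parsing the "disabled" keyword sets both the configuration marker
     * and the forced-maintenance flag */
    void srv_parse_disabled(unsigned int *admin)
    {
        *admin |= SRV_ADMF_CMAINT | SRV_ADMF_FMAINT;
    }

    /* "enable server" on the stats socket only clears FMAINT; CMAINT stays
     * as a record of the state found in the old configuration file */
    void srv_cli_enable(unsigned int *admin)
    {
        *admin &= ~SRV_ADMF_FMAINT;
    }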
Pieter Baauw
caa6a1bb46 MINOR: support cpu-map feature through the compile option USE_CPU_AFFINITY on FreeBSD 2015-09-17 22:11:09 +02:00
Thierry FOURNIER
ccf0063896 BUG/MINOR: lua: break the log message if its size exceeds one buffer
Previously, the log was ignored if the log message exceeded one buffer.
This patch doesn't ignore the log, but truncates the message.
2015-09-17 17:51:51 +02:00
Thierry FOURNIER
babae28c87 BUG/MAJOR: lua: potential unexpected aborts()
This pair of functions securely executes some Lua calls outside of
the Lua runtime environment. Each Lua call can trigger a longjmp
if it encounters a memory error.

Lua documentation extract:

   If an error happens outside any protected environment, Lua calls
   a panic function (see lua_atpanic) and then calls abort, thus
   exiting the host application. Your panic function can avoid this
   exit by never returning (e.g., doing a long jump to your own
   recovery point outside Lua).

   The panic function runs as if it were a message handler (see
   §2.3); in particular, the error message is at the top of the
   stack. However, there is no guarantee about stack space. To push
   anything on the stack, the panic function must first check the
   available space (see §4.2).

We must check all the Lua entry points. This includes:
 - The include/proto/hlua.h exported functions
 - the task wrapper function
 - The action wrapper function
 - The converters wrapper function
 - The sample-fetch wrapper functions

It is tolerated that the initialisation function returns an abort.
Before each Lua abort, an error message is written on stderr.

The macro SET_SAFE_LJMP initialises the longjmp. The macro
RESET_SAFE_LJMP resets the longjmp. These functions must be macros
because they must exist in the program stack when the longjmp
is called.
2015-09-17 17:51:29 +02:00
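The mechanism the SET/RESET_SAFE_LJMP macros wrap can be illustrated with the generic sketch below, which combines lua_atpanic() with setjmp()/longjmp(); it is not HAProxy's exact implementation.

    #include <setjmp.h>
    #include <stdio.h>
    #include <lua.h>

    /* a single jump buffer for this simplified sketch; it must stay valid
     * for as long as the unprotected Lua calls may longjmp back to it */
    static jmp_buf safe_ljmp_env;

    static int panic_ljmp(lua_State *L)
    {
        (void)L;
        longjmp(safe_ljmp_env, 1);   /* never returns, so Lua won't abort() */
        return 0;                    /* not reached */
    }

    int run_protected(lua_State *L)
    {
        lua_CFunction old = lua_atpanic(L, panic_ljmp);   /* "SET_SAFE_LJMP" */

        if (setjmp(safe_ljmp_env)) {
            fprintf(stderr, "Lua panic caught, aborting the Lua call\n");
            lua_atpanic(L, old);
            return -1;
        }

        /* ... Lua API calls that may panic on memory errors go here ... */

        lua_atpanic(L, old);                              /* "RESET_SAFE_LJMP" */
        return 0;
    }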
Thierry FOURNIER
23bc375c59 CLEANUP: lua: Merge log functions
All the code which emits error logs has the same pattern:
send the log via the syslog system and, if it is allowed, display the error
log on screen.

This patch replaces this pattern with a macro. This reduces the number
of lines.
2015-09-11 20:58:04 +02:00
Thierry FOURNIER
5bc2cbf8f4 CLEANUP: typo: bad indent
A space alignment remains in the stream_interface.c file
2015-09-10 21:16:55 +02:00