990 Commits

Author SHA1 Message Date
Willy Tarreau
ece9b07c71 MINOR: cfgparse: add two new functions to check arguments count
We already had alertif_too_many_args{,_idx}(), but these ones are
specifically designed for use in cfgparse. Outside of it we're
trying to avoid calling Alert() all the time so we need an
equivalent using a pointer to an error message.

These new functions, called too_many_args{,_idx}(), do exactly this.
They don't take the file name nor the line number, which they have
no use for; instead they take an optional pointer to an error
message, and the pointer to the error code is optional as well.
With (NULL, NULL) they'll simply check the validity and return a
verdict. They are quite convenient for use in isolated keyword
parsers.

These two new functions as well as the previous ones have all been
exported.
2016-12-21 23:39:26 +01:00
Willy Tarreau
397131093f REORG: tcp-rules: move tcp rules processing to their own file
There's no more reason to keep tcp rules processing inside proto_tcp.c
given that there is nothing in common there except these 3 letters: tcp.
The tcp rules are in fact connection, session and content processing rules.
Let's move them to "tcp-rules" and let them live their life there.
2016-11-25 15:57:38 +01:00
Willy Tarreau
d39ad449b9 CLEANUP: cfgparse: cascade the warnif_misplaced_* rules
There are 8 functions each repeating what another does and adding one
extra test. We used to have some copy-paste issues in the past due to
this. Instead we now make them simply rely on the previous one and add
the final test. It's much better and much safer. The functions could
have been turned into inlines, but since they're only used at a few
other locations, it didn't make much sense in the end.
2016-11-25 15:16:12 +01:00
Willy Tarreau
ae9bea0591 CLEANUP: counters: move from 3 types to 2 types
We used to have 3 types of counters with a huge overlap:
  - listener counters : stats collected for each bind line
  - proxy counters : union of the frontend and backend counters
  - server counters : stats collected per server

It happens that quite a good part was common between listeners and
proxies due to the frontend counters being updated at the two locations,
and that similarly the server and proxy counters were overlapping and
being updated together.

This patch cleans this up to propose only two types of counters:
  - fe_counters: used by frontends and listeners, related to
    incoming connections activity
  - be_counters: used by backends and servers, related to outgoing
    connections activity

This allowed us to remove some nonsensical counters from both parts. For
frontends, the following entries were removed:

  cum_lbconn, last_sess, nbpend_max, failed_conns, failed_resp,
  retries, redispatches, q_time, c_time, d_time, t_time

For backends, this one was removed: intercepted_req.

While doing this it was discovered that we used to incorrectly report
intercepted_req for backends in the HTML stats, which was always zero
since it's never updated.

Also it revealed a few inconsistencies (which were not fixed as they
are harmless). For example, backends count connections (cum_conn)
instead of sessions while servers count sessions and not connections.

Over the long term, some extra cleanups may be performed by having
some counter update functions touch both the server and the backend
at the same time, as well as both the frontend and listener, to
ensure that all sides have all their stats properly filled. The stats
dump will also be able to factor the dump functions by counter types.
2016-11-25 15:03:12 +01:00
Thierry FOURNIER / OZON.IO
8a4e4420fb MEDIUM: log-format: Use standard HAProxy log system to report errors
The log-format functions emit their own error messages using Alert(). This
patch replaces this behavior and uses the standard HAProxy error system
(with memprintf).

The benefits are:
 - a cleaner log system

 - the logformat code can ignore the caller (currently the caller must
   set a flag designating the calling function).

 - easier use of the logformat functions by future components.
2016-11-25 07:32:58 +01:00
Thierry FOURNIER / OZON.IO
4ed1c9585d MINOR: http/conf: store the use_backend configuration file and line for logs
The error log of the directive use_backend doesn't provide the
file and line containing the declaration. This patch stores
this information.
2016-11-25 07:15:09 +01:00
Thierry FOURNIER / OZON.IO
5948b01149 BUG/MINOR: conf: calloc untested
A calloc() is executed without checking its return code.
2016-11-25 07:15:06 +01:00
Thierry FOURNIER / OZON.IO
59fd511555 MEDIUM: log-format/conf: take into account the parse_logformat_string() return code
This patch takes into account the return code of the parse_logformat_string()
function. Now the configuration parser will fail if the log_format string
is not valid.
2016-11-24 18:54:26 +01:00
Thierry FOURNIER / OZON.IO
6fe0e1b977 CLEANUP: log-format: remove unused arguments
The log-format function parse_logformat_string() takes file and line
for building parsing logs. These two parameters are embedded in the
struct proxy curproxy, which is the current parsing context.

This patch removes these two unused arguments.
2016-11-24 18:54:26 +01:00
William Lallemand
9ed6203aef REORG: cli: split dumpstats.h in stats.h and cli.h
proto/dumpstats.h has been split into 4 files:

  * proto/cli.h  contains prototypes for the CLI
  * proto/stats.h contains prototypes for the stats
  * types/cli.h contains definitions for the CLI
  * types/stats.h contains definitions for the stats
2016-11-24 16:59:27 +01:00
Christopher Faulet
ba7bc164f7 MINOR: spoe/checks: Add support for SPOP health checks
A new "option spop-check" statement has been added to enable server health
checks based on SPOP HELLO handshake. SPOP is the protocol used by SPOE filters
to talk to servers.
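
A minimal configuration sketch of how such a check could be enabled (the
backend name, server name and address are placeholders):

  backend spoe-agents
    mode tcp
    option spop-check
    server agent1 192.168.0.10:12345 check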
2016-11-09 22:57:02 +01:00
Christopher Faulet
79bdef3cad MINOR: cfgparse: Parse scope lines and save the last one parsed
A scope is a section name between square brackets, alone on its line, i.e.:

  [scope-name]
  ...

The spaces at the beginning and at the end of the line are skipped. Comments at
the end of the line are also skipped.

When a scope is parsed, its name is saved in the global variable
cfg_scope. Initially, cfg_scope is NULL and it remains NULL until a valid scope
line is parsed.

This feature remains unused in the HAProxy configuration file and
undocumented. However, it will be used during SPOE configuration parsing.
2016-11-09 22:56:59 +01:00
Christopher Faulet
7110b40d06 MINOR: cfgparse: Add functions to backup and restore registered sections
This feature will be used by the stream processing offload engine (SPOE) to
parse dedicated configuration files without mixing HAProxy sections with SPOE
sections.

So, here we can back up all sections known by HAProxy, unregister all of them
and add new ones, dedicated to the SPOE. Once the SPOE configuration file is
parsed, we can roll back all changes by restoring the HAProxy sections.
2016-11-09 22:56:59 +01:00
Christopher Faulet
898566e7e6 CLEANUP: remove last references to 'ruleset' section 2016-11-09 22:50:54 +01:00
Baptiste Assmann
987e16d6f4 MINOR: dns: implement extra 'hold' timers.
This adds new "hold" timers : nx, refused, timeout, other. This timers
will be used to tell HAProxy to keep an erroneous response as valid for
the corresponding period. For now they're only configured, not enforced.
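
A hedged sketch of how these timers could be set once enforced (resolver
and nameserver names/addresses are placeholders):

  resolvers mydns
    nameserver dns1 192.168.0.2:53
    hold nx      30s
    hold refused 30s
    hold timeout 30s
    hold other   10s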
2016-11-09 15:30:47 +01:00
Willy Tarreau
757478e900 BUG/MEDIUM: servers: properly propagate the maintenance states during startup
Right now there is an issue with the way the maintenance flags are
propagated upon startup. They are not propagated, just copied from the
tracked server. This implies that depending on the servers' order, some
tracking servers may not be marked down. For example this configuration
does not work as expected:

        server s1 1.1.1.1:8000 track s2
        server s2 1.1.1.1:8000 track s3
        server s3 1.1.1.1:8000 track s4
        server s4 wtap:8000 check inter 1s disabled

It results in s1/s2 being up, and s3/s4 being down, while all of them
should be down.

The only clean way to process this is to run through all "root" servers
(those not tracking any other server), and to propagate their state down
to all their trackers. This is the same algorithm used to propagate the
state changes. It has to be done both to compute the IDRAIN flag and the
IMAINT flag. However, doing so requires that tracking servers are not
marked as inherited maintenance anymore while parsing the configuration
(and given that it is wrong, better drop it).

This fix also addresses another side effect of the bug above which is
that the IDRAIN/IMAINT flags are stored in the state files, and if
restored while the tracked server doesn't have the equivalent flag,
the servers may end up in a situation where it's impossible to remove
these flags. For example in the configuration above, after removing
"disabled" on server s4, the other servers would have remained down;
they no longer do with this fix. Similarly, the combination of IMAINT
or IDRAIN with their respective forced modes was not accepted on
reload, which is wrong as well.

This bug has been present at least since 1.5, maybe even 1.4 (it came
with tracking support). The fix needs to be backported there, though
the srv-state parts are irrelevant.

This commit relies on previous patch to silence warnings on startup.
2016-11-07 14:31:52 +01:00
Ian Miell
71c432e937 CLEANUP: cfgparse: Very minor spelling correction
'optionnally' changed to 'optionally'
2016-10-26 18:46:01 +02:00
Andrew Rodland
b1f48e3161 MINOR: backend: add hash-balance-factor option for hash-type consistent
0 will mean no balancing occurs; otherwise it represents the ratio
between the highest-loaded server and the average load, times 100 (i.e.
a value of 150 means a 1.5x ratio), assuming equal weights.
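
For illustration, a hedged configuration sketch (backend/server names and
addresses are placeholders); with a factor of 150, no server should receive
more than roughly 1.5 times the average load:

  backend app
    balance uri
    hash-type consistent
    hash-balance-factor 150
    server s1 192.168.0.11:8080 check
    server s2 192.168.0.12:8080 check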

Signed-off-by: Andrew Rodland <andrewr@vimeo.com>
2016-10-25 20:21:32 +02:00
Willy Tarreau
620408f406 MEDIUM: tcp: add registration and processing of TCP L5 rules
This commit introduces "tcp-request session" rules. These are very
much like "tcp-request connection" rules except that they're processed
after the handshake, so it is possible to consider SSL information and
addresses rewritten by the proxy protocol header in actions. This is
particularly useful to track proxied sources as this was not possible
before, given that tcp-request content rules are processed after each
HTTP request. Similarly it is possible to assign the proxied source
address or the client's cert to a variable.
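
A minimal sketch of such a rule (frontend name, ports and table parameters
are placeholders), tracking the proxied source address once per session:

  frontend fe
    bind :8443 accept-proxy
    stick-table type ip size 100k expire 10m store conn_cur
    tcp-request session track-sc0 src
    default_backend app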
2016-10-21 18:19:24 +02:00
Willy Tarreau
7d9736fb5d CLEANUP: tcp rules: mention everywhere that tcp-conn rules are L4
This is in order to make the integration of tcp-request-session cleaner:
- tcp_exec_req_rules() was renamed tcp_exec_l4_rules()
- LI_O_TCP_RULES was renamed LI_O_TCP_L4_RULES
  (LI_O_*'s horrible indent was also fixed and a provision was left
   for L5 rules).
2016-10-21 18:19:24 +02:00
Lukas Tribus
a0bcbdcb04 MEDIUM: make SO_REUSEPORT configurable
With Linux officially introducing SO_REUSEPORT support in 3.9 and
its mainstream adoption, we have seen more people running into strange
SO_REUSEPORT-related issues (a process management issue turning into
hard-to-diagnose problems because the kernel load-balances between the
new and an obsolete haproxy instance).

Also some people simply want the guarantee that the bind fails when
the old process is still bound.

This change makes SO_REUSEPORT configurable, introducing the command
line argument "-dR" and the noreuseport configuration directive.

A backport to 1.6 should be considered.
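
A hedged sketch of the configuration form (the command-line equivalent
being -dR, as mentioned above):

  global
    noreuseport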
2016-09-13 07:56:03 +02:00
David Carlier
70d604593d MINOR: cfgparse: few memory leaks fixes.
This fixes some minor memory leaks during config parsing.
2016-08-23 12:16:34 +02:00
Ruoshan Huang
e4edc6b628 MEDIUM: http: implement http-response track-sc* directive
This enables tracking of sticky counters from the current response. The only
difference from "http-request track-sc" is that the <key> sample expression
can only make use of samples available in the response (e.g. res.*, status,
etc.) and samples below Layer 6.
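
A hedged sketch of a possible use, tracking a response header per
application (names, sizes and the header are placeholders):

  backend app
    stick-table type string len 64 size 100k expire 30m store http_err_cnt
    http-response track-sc0 res.hdr(X-App-Id)
    server s1 192.168.0.11:8080 check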
2016-07-26 14:31:14 +02:00
Hubert Verstraete
831962e3b3 CLEANUP: fixed some usages of realloc leading to memory leak
Changed all the cases where the pointer passed to realloc is overwritten
by the pointer returned by realloc. The new function my_realloc2 has
been used except in function register_name. If register_name fails to
add a new variable because of an "out of memory" error, all the existing
variables remain valid. If we had used my_realloc2, the array of variables
would have been freed.
2016-06-29 10:45:18 +02:00
William Lallemand
7bba4ccfb6 BUG/MEDIUM: fix risk of segfault with "show tls-keys"
The reference to the tls_keys_ref was not deleted from the
tlskeys_reference linked list.

When SSL is misconfigured, this can lead to an access to freed memory
during a "show tls-keys" on the admin socket.
2016-05-31 20:30:01 +02:00
Willy Tarreau
659fbf0230 BUG/MEDIUM: config: fix multiple declaration of section parsers
Ben Cabot reported that after commit 5e4261b ("CLEANUP: config:
detect double registration of a config section") recently introduced
in 1.7-dev, it's not possible anymore to load multiple configuration
files. Bryan Talbot provided a simple reproducer to exhibit the issue.

It turns out that function readcfgfile() registers new parsers for
section keywords for each new file. In addition to being useless, this
has the negative effect of wasting memory and slowing down the config
parser as the number of configuration files increases.

This fix only needs to be backported if/where the commit above is
backported.
2016-05-26 17:59:28 +02:00
Vincent Bernat
6e46ff11e9 BUG/MINOR: fix listening IP address storage for frontends (cont)
Commit 6e6158 was incomplete. There was an additional aggregate copy
that may trigger a similar case in the future.
2016-05-19 21:53:10 +02:00
Vincent Bernat
6e61589573 BUG/MAJOR: fix listening IP address storage for frontends
When compiled with GCC 6, the IP address specified for a frontend was
ignored and HAProxy was listening on all addresses instead. This is
caused by an incomplete copy of a "struct sockaddr_storage".

With the GNU Libc, "struct sockaddr_storage" is defined as this:

    struct sockaddr_storage
      {
        sa_family_t ss_family;
        unsigned long int __ss_align;
        char __ss_padding[(128 - (2 * sizeof (unsigned long int)))];
      };

Doing an aggregate copy (ss1 = ss2) is different from using memcpy():
only members of the aggregate have to be copied. Notably, padding may or
may not be copied. In GCC 6, some optimizations use this fact and if a
"struct sockaddr_storage" contains a "struct sockaddr_in", the port and
the address are part of the padding (between sa_family and __ss_align)
and may not be copied over.

Therefore, we replace any aggregate copy by a memcpy(). There is another
place using the same pattern. We also fix a function receiving a "struct
sockaddr_storage" by copy instead of by reference. Since it only needs a
read-only copy, the function is converted to request a reference.
2016-05-19 10:43:24 +02:00
Willy Tarreau
5e4261b0e4 CLEANUP: config: detect double registration of a config section
In an effort to make the config parser more robust, we should validate
that everything we register is not already registered. Most cfg_register_*
functions unfortunately return void and just perform a LIST_ADDQ(), so they
will have to change for this. At least cfg_register_section() does perform
a few checks and is easy to verify for such errors, so let's start with
this one. Future patches will definitely have to focus on the remaining
functions and ensure the uniqueness of all config parsers.
2016-05-17 16:18:31 +02:00
Cyril Bonté
4920d70fa0 BUG/MINOR: fix maxaccept computation according to the frontend process range
commit 7c0ffd23 only considers the explicit use of the "process" keyword
on the listeners. But at this step, if it's not defined in the configuration,
the listener bind_proc mask is set to 0. As a result, the code will compute
the maxaccept value based on only 1 process, which is not always true.

For example :
  global
    nbproc 4

  frontend test
    bind-process 1-2
    bind :80

Here, the maxaccept value for the "test" frontend was set to the global
tune.maxaccept value (which defaults to 64), whereas it should consider that
2 processes will accept connections. According to the documentation, the value
should be divided by twice the number of processes the listener is bound to.

To fix this, we can consider that if no mask is set on the listener, we take
the frontend mask.

This is not critical but it can introduce an unfair distribution of the
incoming connections across the processes.

It should be backported to the same branches as commit 7c0ffd23 (1.6 and 1.5
were in the scope).
2016-04-15 08:22:52 +02:00
Willy Tarreau
7c0ffd23d2 BUG/MEDIUM: fix maxaccept computation on per-process listeners
Christian Ruppert reported a performance degradation when binding a
single frontend to many processes while only one bind line was being
used, bound to a single process.

The reason comes from the fact that whenever a listener is bound to
multiple processes, it is assigned a maxaccept value which equals
half the global maxaccept value divided by the number of processes the
frontend is bound to. The purpose is to ensure that no single process
will drain all the incoming requests at once and ensure a fair share
between all listeners. Usually this works pretty well, when a listener
is bound to all the processes of its frontend. But here we're in a
situation where the maxaccept of a listener which is bound to a single
process is still divided by a large value.

The fix consists in taking into account the number of processes the
listener is bound to and not only those of the frontend. This way it
is perfectly possible to benefit from nbproc and SO_REUSEPORT without
performance degradation.

1.6 and 1.5 normally suffer from the same issue.
2016-04-14 11:53:50 +02:00
David Carlier
97880bb46d BUG/MINOR: cfgparse: couple of small memory leaks.
During config parsing, some code paths forget to free pointers,
most often in error handling.
2016-04-12 11:01:41 +02:00
Vincent Bernat
02779b6263 CLEANUP: uniformize last argument of malloc/calloc
Instead of repeating the type of the LHS argument (sizeof(struct ...))
in calls to malloc/calloc, we directly use the pointer
name (sizeof(*...)). The following Coccinelle patch was used:

@@
type T;
T *x;
@@

  x = malloc(
- sizeof(T)
+ sizeof(*x)
  )

@@
type T;
T *x;
@@

  x = calloc(1,
- sizeof(T)
+ sizeof(*x)
  )

When the LHS is not just a variable name, no change is made. Moreover,
the following patch was used to ensure that "1" is consistently used as
a first argument of calloc, not the last one:

@@
@@

  calloc(
+ 1,
  ...
- ,1
  )
2016-04-03 14:17:42 +02:00
Vincent Bernat
3c2f2f207f CLEANUP: remove unneeded casts
In C89, "void *" is automatically promoted to any pointer type. Casting
the result of malloc/calloc to the type of the LHS variable is therefore
unneeded.

Most of this patch was built using this Coccinelle patch:

@@
type T;
@@

- (T *)
  (\(lua_touserdata\|malloc\|calloc\|SSL_get_app_data\|hlua_checkudata\|lua_newuserdata\)(...))

@@
type T;
T *x;
void *data;
@@

  x =
- (T *)
  data

@@
type T;
T *x;
T *data;
@@

  x =
- (T *)
  data

Unfortunately, either Coccinelle or I is too limited to detect situations
where a complex RHS expression is of type "void *" and casting is therefore
not needed. Those cases were manually examined and corrected.
2016-04-03 14:17:42 +02:00
Baptiste Assmann
776e518caf MINOR: cfgparse: warn when gid parameter is not a number
Currently, no warning is emitted when the gid is not a number.
The purpose of this warning is to let admins know that their configuration
won't be applied as expected.
2016-03-13 07:46:31 +01:00
Baptiste Assmann
79fee6aa7a MINOR: cfgparse: warn when uid parameter is not a number
Currently, no warning is emitted when the uid is not a number.
The purpose of this warning is to let admins know that their configuration
won't be applied as expected.
2016-03-13 07:45:41 +01:00
Cyril Bonté
0618195a11 BUG/MEDIUM: stats: stats bind-process doesn't propagate the process mask correctly
With nbproc > 1, it is possible to specify on which process the stats socket
will be bound using "stats bind-process", but the behaviour was not correct,
ignoring the value in some configurations.

Example :
global
  nbproc 4
  stats bind-process 1
  stats socket /var/run/haproxy.sock

With such a configuration, all the processes will listen on the stats socket.
As a workaround, it is also possible to declare a "process" keyword on
the "stats socket" line.

The patch must be applied to 1.7, 1.6 and 1.5
2016-02-24 07:38:37 +01:00
Pieter Baauw
235fcfcf14 MINOR: mailers: make it possible to configure the connection timeout
This patch introduces a configurable connection timeout for mailers
with a new "timeout mail <time>" directive.

Acked-by: Simon Horman <horms@verge.net.au>
2016-02-20 15:33:06 +01:00
Pieter Baauw
7a91a0e1e5 MEDIUM: cfgparse: reject incorrect 'timeout retry' keyword spelling in resolvers
If for example it was written as 'timeout retri 1s' or 'timeout wrong 1s',
this would be used for the retry timeout value. 'retry' is currently the only
timeout setting in the resolvers section; others are still parsed as before
this patch so as not to break existing configurations.

A less strict version will be backported to 1.6.
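
For reference, a hedged sketch of the accepted spelling (resolver and
nameserver names/addresses are placeholders):

  resolvers mydns
    nameserver dns1 192.168.0.2:53
    timeout retry 1s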
2016-02-17 10:10:06 +01:00
Willy Tarreau
1d54972789 MEDIUM: config: allow to manipulate environment variables in the global section
With new init systems such as systemd, environment variables became a
real mess because they're only considered on startup but not on reload
since the init script's variables cannot be passed to the process that
is signaled to reload.

This commit introduces an alternative method consisting in making it
possible to modify the environment from the global section with directives
like "setenv", "unsetenv", "presetenv" and "resetenv".

Since haproxy supports loading multiple config files, it now becomes
possible to put the host-dependent variables in one file and to
distribute the rest of the configuration to all nodes, without having
to deal with the init system's deficiencies.

Environment changes take effect immediately when the directives are
processed, so it's possible to perform the same operations as are
usually performed in regular service config files.
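
A hedged sketch of what this could look like (the variable names and values
are placeholders); such variables can then be referenced elsewhere in the
configuration with the usual "${NAME}" syntax:

  global
    presetenv NODE_ID node1
    setenv    CONFDIR /etc/haproxy
    unsetenv  http_proxy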
2016-02-16 12:44:54 +01:00
Christopher Faulet
443ea1a242 MINOR: filters: Extract proxy stuff from the struct filter
Now, filter's configuration (.id, .conf and .ops fields) is stored in the
structure 'flt_conf'. So proxies own a flt_conf list instead of a filter
list. When a filter is attached to a stream, it gets a pointer on its
configuration. This avoids mixing the filter's context (owned by a stream) and
its configuration (owned by a proxy). It also saves 2 pointers per filter
instance.
2016-02-09 14:53:15 +01:00
Christopher Faulet
309c6418b0 MEDIUM: filters: Replace filter_http_headers callback by an analyzer
This new analyzer will be called for each HTTP request/response, before the
parsing of the body. It is identified by AN_FLT_HTTP_HDRS.

Special care was taken about the following condition:

  * the frontend is a TCP proxy
  * filters are defined in the frontend section
  * the selected backend is a HTTP proxy

So, this patch explicitly adds the AN_FLT_HTTP_HDRS analyzer on the request and
the response channels when the backend is an HTTP proxy and when there are
filters attached to the stream.
This patch simplifies http_request_forward_body and http_response_forward_body
functions.
2016-02-09 14:53:15 +01:00
Christopher Faulet
3d97c90974 REORG: filters: Prepare creation of the HTTP compression filter
HTTP compression will be moved into a true filter. To prepare the ground, some
functions have been moved into a dedicated file. The idea is to keep everything
about compression algos in compression.c and everything related to the filtering
in flt_http_comp.c.

For now, a header has been added to help during the transition. It will be
removed later.

Unused empty ACL keyword list was removed. The "compression" keyword
parser was moved from cfgparse.c to flt_http_comp.c.
2016-02-09 14:53:15 +01:00
Christopher Faulet
d7c9196ae5 MAJOR: filters: Add filters support
This patch adds support for filters in HAProxy. The main idea is to have a
way to "easily" extend HAProxy by adding some "modules", called filters, that
will be able to change HAProxy behavior in a programmatic way.

To do so, many entry points have been added in the code to let filters hook into
different steps of the processing. A filter must define a flt_ops structure
(see include/types/filters.h for details). This structure contains all available
callbacks that a filter can define:

struct flt_ops {
       /*
        * Callbacks to manage the filter lifecycle
        */
       int  (*init)  (struct proxy *p);
       void (*deinit)(struct proxy *p);
       int  (*check) (struct proxy *p);

        /*
         * Stream callbacks
         */
        void (*stream_start)     (struct stream *s);
        void (*stream_accept)    (struct stream *s);
        void (*session_establish)(struct stream *s);
        void (*stream_stop)      (struct stream *s);

       /*
        * HTTP callbacks
        */
       int  (*http_start)         (struct stream *s, struct http_msg *msg);
       int  (*http_start_body)    (struct stream *s, struct http_msg *msg);
       int  (*http_start_chunk)   (struct stream *s, struct http_msg *msg);
       int  (*http_data)          (struct stream *s, struct http_msg *msg);
       int  (*http_last_chunk)    (struct stream *s, struct http_msg *msg);
       int  (*http_end_chunk)     (struct stream *s, struct http_msg *msg);
       int  (*http_chunk_trailers)(struct stream *s, struct http_msg *msg);
       int  (*http_end_body)      (struct stream *s, struct http_msg *msg);
       void (*http_end)           (struct stream *s, struct http_msg *msg);
       void (*http_reset)         (struct stream *s, struct http_msg *msg);
       int  (*http_pre_process)   (struct stream *s, struct http_msg *msg);
       int  (*http_post_process)  (struct stream *s, struct http_msg *msg);
       void (*http_reply)         (struct stream *s, short status,
                                   const struct chunk *msg);
};

To declare and use a filter, in the configuration, the "filter" keyword must be
used in a listener/frontend section:

  frontend test
    ...
    filter <FILTER-NAME> [OPTIONS...]

The filter referenced by <FILTER-NAME> must declare a configuration parser
on its own name to fill the flt_ops and filter_conf fields in the proxy's
structure. An example will be provided later to make it perfectly clear.

For now, filters cannot be used in a backend section. But this is only a matter
of time. Documentation will also be added later. This is the first commit of a
long list about filters.

It is possible to have several filters on the same listener/frontend. These
filters are stored in an array of at most MAX_FILTERS elements (defined in
include/types/filters.h). Again, this will be replaced later by a list of
filters.

The filter API has been highly refactored. Main changes are:

* Now, HA supports an infinite number of filters per proxy. To do so, filters
  are stored in a list.

* Because filters are stored in a list, the filter state has been moved from the
  channel structure to the filter structure. This is cleaner because there is no
  more info about filters in the channel structure.

* It is possible to define filters on backends only. For such filters,
  stream_start/stream_stop callbacks are not called. Of course, it is possible
  to mix frontend and backend filters.

* Now, TCP streams are also filtered. All callbacks without the 'http_' prefix
  are called for all kinds of streams. In addition, 2 new callbacks were added to
  filter data exchanged through a TCP stream:

    - tcp_data: it is called when new data are available or when old unprocessed
      data are still waiting.

    - tcp_forward_data: it is called when some data can be consumed.

* New callbacks attached to channel were added:

    - channel_start_analyze: it is called when a filter is ready to process data
      exchanged through a channel. 2 new analyzers (a frontend and a backend)
      are attached to channels to call this callback. For a frontend filter, it
      is called before any other analyzer. For a backend filter, it is called
      when a backend is attached to a stream. So some processing cannot be
      filtered in that case.

    - channel_analyze: it is called before each analyzer attached to a channel,
      except analyzers responsible for data sending.

    - channel_end_analyze: it is called when all other analyzers have finished
      their processing. A new analyzer is attached to channels to call this
      callback. For a TCP stream, this is always the last one called. For an HTTP
      one, the callback is called when a request/response ends, so it is called
      once for each request/response.

* 'session_established' callback has been removed. Everything that is done in
  this callback can be handled by 'channel_start_analyze' on the response
  channel.

* 'http_pre_process' and 'http_post_process' callbacks have been replaced by
  'channel_analyze'.

* 'http_start' callback has been replaced by 'http_headers'. This new one is
  called just before headers sending and parsing of the body.

* 'http_end' callback has been replaced by 'channel_end_analyze'.

* It is possible to set a forwarder for TCP channels. It was already possible to
  do it for HTTP ones.

* Forwarders can partially consume forwardable data. For this reason a new
  HTTP message state was added before HTTP_MSG_DONE: HTTP_MSG_ENDING.

Now all filters can define corresponding callbacks (http_forward_data
and tcp_forward_data). Each filter owns 2 offsets relative to buf->p, next and
forward, to track, respectively, input data already parsed but not yet forwarded
by the filter and parsed data considered as forwarded by the filter. At any time,
we have the guarantee that a filter cannot parse or forward more input than
previous ones. And, of course, it cannot forward more input than it has
parsed. 2 macros have been added to retrieve these offsets: FLT_NXT and FLT_FWD.

In addition, 2 functions have been added to change the 'next size' and the
'forward size' of a filter. When a filter parses input data, it can alter these
data, so the size of these data can vary. This action has an effect on all
previous filters that must be handled. To do so, the function
'filter_change_next_size' must be called, passing the size variation. In the
same spirit, if a filter alters forwarded data, it must call the function
'filter_change_forward_size'. 'filter_change_next_size' can be called in
'http_data' and 'tcp_data' callbacks and only these ones. And
'filter_change_forward_size' can be called in 'http_forward_data' and
'tcp_forward_data' callbacks and only these ones. The data changes are the
filter's responsibility, but with some limitations. It must not change already
parsed/forwarded data or data that previous filters have not parsed/forwarded
yet.

Because filters can be used on backends, when the backend is set for a
stream, we add the filters defined for this backend to the filter list of the
stream. But we must only do that when the backend and the frontend of the stream
are not the same. Otherwise the same filters are added a second time, leading to
undefined behavior.

The HTTP compression code had to be moved.

This simplifies the http_response_forward_body function. To do so, the way the
data are forwarded has changed. Now, a filter (and only one) can forward data.
In a commit to come, this limitation will be removed to let all filters take
part in data forwarding. There are 2 new functions that filters should use to
deal with
this feature:

 * flt_set_http_data_forwarder: This function sets the filter (using its id)
   that will forward data for the specified HTTP message. It is possible if it
   was not already set by another filter _AND_ if no data was yet forwarded
   (msg->msg_state <= HTTP_MSG_BODY). It returns -1 if an error occurs.

 * flt_http_data_forwarder: This function returns the filter id that will
   forward data for the specified HTTP message. If there is no forwarder set, it
   returns -1.

When an HTTP data forwarder is set for the response, the HTTP compression is
disabled. Of course, this is not definitive.
2016-02-09 14:53:15 +01:00
Ben Cabot
3b90f0a267 BUG/MEDIUM: config: Adding validation to stick-table expire value.
If the expire value exceeds the maximum value, clients are not added
to the stick table.
2016-01-21 19:46:47 +01:00
Baptiste Assmann
7f43fa9b2c BUG/MEDIUM: dns: no DNS resolution happens if no ports provided to the nameserver
Erez reported a bug on discourse.haproxy.org about DNS resolution not
occurring when no port is specified on the nameserver directive.

This patch prevents this behavior by returning an error explaining this
issue when parsing the configuration file.
That said, later, we may want to force port 53 when the client did not
provide any.

backport: 1.6
2016-01-21 07:41:59 +01:00
Willy Tarreau
b22fc30aaa MINOR: config: make tune.recv_enough configurable
This setting used to be assigned to a variable tunable from a constant
and for an unknown reason never made its way into the config parser.

tune.recv_enough <number>
  Haproxy uses some hints to detect that a short read indicates the end of the
  socket buffers. One of them is that a read returns more than <recv_enough>
  bytes, which defaults to 10136 (7 segments of 1448 each). This default value
  may be changed by this setting to better deal with workloads involving lots
  of short messages such as telnet or SSH sessions.
2015-12-14 12:05:45 +01:00
Cyril Bonté
e22bfd61b1 BUG/MINOR: checks: email-alert causes a segfault when an unknown mailers section is configured
A segfault can occur during the initialization phase, when an unknown
"mailers" name is configured. This happens when "email-alert myhostname" is not
set, where a direct pointer to an array is used instead of copying the string,
causing the segfault when haproxy tries to free the memory.

This is a minor issue because the configuration is invalid and a fatal error
will remain, but it should be fixed to prevent reload issues.

Example of minimal configuration to reproduce the bug :
    backend example
        email-alert mailers NOT_FOUND
        email-alert from foo@localhost
        email-alert to bar@localhost

This fix must be backported to 1.6.
2015-12-04 06:09:30 +01:00
Cyril Bonté
7e0847045a BUG/MEDIUM: checks: email-alert not working when declared in defaults
Tommy Atkinson and Sylvain Faivre reported that email alerts didn't work when
they were declared in the defaults section. This is due to the use of an
internal attribute which is set once an email-alert is at least partially
configured. But this attribute was not propagated to the current proxy during
the configuration parsing.

Note that the issue doesn't occur if "email-alert myhostname" is configured in
the defaults section.

This fix must be backported to 1.6.
2015-12-04 06:09:30 +01:00
Baptiste Assmann
e9544935e8 BUG/MINOR: http rule: http capture 'id' rule points to a non existing id
It is possible to create an http capture rule which points to a capture slot
id which does not exist.

The current patch prevents this when parsing the configuration and prevents
running a configuration which contains such rules.

This configuration is now invalid:

  frontend f
   bind :8080
   http-request capture req.hdr(User-Agent) id 0
   default_backend b

this one as well:

  frontend f
   bind :8080
   declare capture request len 32 # implicit id is 0 here
   http-request capture req.hdr(User-Agent) id 1
   default_backend b

It applies of course to both http-request and http-response rules.
2015-11-04 08:47:55 +01:00