Commit Graph

985 Commits

Author SHA1 Message Date
Willy Tarreau
ef907fee12 BUG/MAJOR: channel: fix miscalculation of available buffer space (4th try)
Unfortunately, commit 169c470 ("BUG/MEDIUM: channel: fix miscalculation of
available buffer space (3rd try)") was still not enough to completely
address the issue. It fell into an integer comparison trap. Contrary to
expectations, chn->to_forward may also have the sign bit set when
forwarding regular data having a large content-length, resulting in an
incomplete check of the result and of the reserve: with to_forward very
large, to_forward+o could wrap and become very small, the reserve could
become positive again, and channel_recv_limit() could return a negative
value.

One way to reproduce this situation is to transfer a large file (> 2GB)
with http-keep-alive or http-server-close, without splicing, and ensure
that the server uses content-length instead of chunks. The transfer
should stall very early after the first buffer has been transferred
to the client.

This fix now properly checks 1) for an overflow caused by summing o and
to_forward, and 2) whether o+to_forward is smaller or larger than maxrw
before performing the subtraction, so that all sensitive operations are
effectively performed with 33-bit arithmetic.
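
For illustration only, a hedged sketch of those two checks (the function
name, argument layout and field accesses are assumptions, not the actual
HAProxy code) :

  static int recv_limit_sketch(const struct channel *chn, int size, int maxrw)
  {
      /* o and to_forward are both unsigned; their sum may wrap in 32 bits */
      unsigned int transit = chn->buf->o + chn->to_forward;

      if (transit < chn->to_forward)        /* 1) the addition wrapped around */
          return size;                      /*    more than enough will leave */
      if (transit >= (unsigned int)maxrw)   /* 2) o+to_forward covers the reserve */
          return size;
      return size - (maxrw - (int)transit); /* safe: cannot go negative here */
  }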

The code was subjected again to a series of tests using inject+httpterm
scanning a wide range of object sizes (+10MB after each new request) :

  $ printf "new page 1\nget 127.0.0.1:8002 / s=%%s0m\n" | \
      inject64 -o 1 -u 1 -f /dev/stdin

With the previous fix, the transfer would suddenly stop when reaching 2GB :

   hits ^hits hits/s  ^h/s     bytes  kB/s  last  errs  tout htime  sdht ptime
    203     1      2     1 216816173354 2710202 3144892     0     0 685.0 0.0 685.0
    205     2      2     2 219257283186 2706880 2441109     0     0 679.5 6.5 679.5
    205     0      2     0 219257283186 2673836     0     0     0 0.0 0.0 0.0
    205     0      2     0 219257283186 2641622     0     0     0 0.0 0.0 0.0
    205     0      2     0 219257283186 2610174     0     0     0 0.0 0.0 0.0

Now it's fine even past 4 GB.

Many thanks to Vedran Furac for reporting this issue early, along with a
common access pattern that helped troubleshoot it.

This fix must be backported to 1.6 and 1.5 where the commit above was
already backported.
2016-05-03 17:58:03 +02:00
Willy Tarreau
55e58f2334 MINOR: channel: add new function channel_congested()
This function returns non-zero if the channel is congested with data in
transit waiting to leave, indicating to the caller that it should wait
for the reserve to be released before starting to process new data if it
needs the ability to append data. This is meant to be used while waiting
for a clean response buffer before processing a request.
2016-05-02 16:39:22 +02:00
Nenad Merdanovic
174dd37d88 MINOR: Add ability for agent-check to set server maxconn
This is very useful in complex architecture systems where HAProxy
is balancing DB connections for example. We want to keep the maxconn
high in order to avoid issues with queueing on the LB level when
there is slowness on another part of the system. An example is an
architecture where each thread opens multiple DB connections which,
if they get stuck in the queue, cause a snowball effect (old connections
aren't closed, new ones cannot be established). These connections are
mostly idle and the DB server has no problem handling thousands of them.

Allowing us to dynamically set maxconn depending on the backend usage
(LA, CPU, memory, etc.) enables us to keep maxconn high for situations
like the above, while lowering it when there are real issues and the
backend servers become overloaded (cache issues, the DB gets hit hard).
2016-04-25 17:23:50 +02:00
Willy Tarreau
169c47028a BUG/MEDIUM: channel: fix miscalculation of available buffer space (3rd try)
The latest fix, 8a32106 ("BUG/MEDIUM: channel: fix miscalculation of available
buffer space (2nd try)"), did fix some observable issues but not all of them;
some corner cases still remained, and at least one user reported a busy loop
that appeared possible, though not easily reproducible under experimental
conditions.

The remaining issue is that we still consider min(i, to_fwd) as the number
of bytes in transit, but in fact <i> is not relevant here. Indeed, what
matters is that we can read everything we want at once provided that at
the end, <i> cannot be larger than <size-maxrw> (if it was not already).

This is visible in two cases :
  - let's have i=o=max/2 and to_fwd=0. Then i+o >= max indicates that the
    buffer is already full, while it is not since once <o> is forwarded,
    some space remains.

  - when to_fwd is much larger than i, it's obvious that we can fill the
    buffer.

The only relevant part in fact is o + to_fwd. to_fwd will ensure that at
least this many bytes will be moved from <i> to <o> hence will leave the
buffer, whatever the number of rounds it takes.

Interestingly, the fix applied here ensures that channel_recv_max() will
now equal (size - maxrw - i + to_fwd), which is indeed what remains
available below maxrw after to_fwd bytes are forwarded from i to o and
leave the buffer.
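
A worked numeric check of that formula, with assumed values :

  size = 16384, maxrw = 1024, i = 2048, o = 0, to_fwd = 512
  size - maxrw - i + to_fwd = 16384 - 1024 - 2048 + 512 = 13824
  i.e. once the 512 scheduled bytes have left, 1536 input bytes remain, and
  15360 - 1536 = 13824 bytes still fit below the reserve.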

Additionally, the latest fix made it possible to meet an integer overflow
that was not caught by the range test when forwarding in TCP or tunnel
mode due to to_forward being added to an existing value, causing the
buffer size to be limited when it should not have been, resulting in 2
to 3 recv() calls when a single one was enough. The first one was limited
to the unreserved buffer size, the second one to the size of the reserve
minus 1, and the last one to the last byte. Eg with a 2kB buffer :

recvfrom(22, "HTTP/1.1 200\r\nConnection: close\r"..., 1024, 0, NULL, NULL) = 1024
recvfrom(22, "23456789.123456789.123456789.123"..., 1023, 0, NULL, NULL) = 1023
recvfrom(22, "5", 1, 0, NULL, NULL)     = 1

This bug is still present in 1.6 and 1.5 so the fix should be backported
there.
2016-04-21 18:06:08 +02:00
Willy Tarreau
93dc478a04 BUG/MEDIUM: channel: incorrect polling condition may delay event delivery
The condition to poll for receive as implemented in channel_may_recv()
is still incorrect. If buf->o is null and buf->i is slightly larger than
chn->to_forward and at least as large as buf->size - maxrewrite, then
reading will be disabled. It may slightly delay some data delivery by
having first to forward pending bytes, but may also cause some random
issues with analysers that wait for some data before starting to forward
what they correctly parsed. For instance, a body analyser may be prevented
from seeing the data that only fits in the reserve.

This bug may also prevent an applet's chk_rcv() function from being called
when part of a buffer is released. It is possible (though not verified)
that this contributed to some of the frozen peers session issues some
people have been facing.

This fix should be backported to 1.6 and 1.5 to ensure better coherency
with channel_recv_limit().
2016-04-21 17:03:46 +02:00
Willy Tarreau
4b46a3e8cc BUG/MEDIUM: channel: don't allow to overwrite the reserve until connected
Commit 9c06ee4 ("BUG/MEDIUM: channel: don't schedule data in transit for
leaving until connected") took care of an issue involving POST in conjunction
with http-send-name-header, where we absolutely never want to touch the
reserve until we're sure not to touch the buffer contents anymore, which
is indicated by the output stream-interface being connected.

But channel_may_recv() was not equipped with such a test, so in some
situations it might decide that it is possible to poll for reads, and
later channel_recv_limit() will decide it's not possible to read,
causing a loop. So we must add a similar test there.

Since the fix above was backported to 1.6 and 1.5, this fix must as well.
2016-04-21 15:31:22 +02:00
Christopher Faulet
b3f4e14932 MINOR: filters: Print the list of existing filters during HA startup
This is done in verbose/debug mode and when build options are reported.
2016-04-21 06:58:08 +02:00
Willy Tarreau
7a798e5d6b CLEANUP: fix inconsistency between fd->iocb, proto->accept and accept()
There's quite some inconsistency in the internal API. listener_accept()
which is the main accept() function returns void but is declared as int
in the include file. It's assigned to proto->accept() for all stream
protocols where an int is expected but the result is never checked (nor
is it documented by the way). This proto->accept() is in turn assigned
to fd->iocb() which is supposed to return an int composed of FD_WAIT_*
flags, but which is never checked either.

So let's fix all this mess :
  - nobody checks accept()'s return
  - nobody checks iocb()'s return
  - nobody sets a return value

=> let's mark all these functions void and keep the current ones intact.

Additionally we now include listener.h from listener.c to ensure we won't
silently hide this incoherency in the future.

Note that this patch could/should be backported to 1.6 and even 1.5 to
simplify debugging sessions.
2016-04-14 11:18:22 +02:00
Willy Tarreau
8a32106fff BUG/MEDIUM: channel: fix miscalculation of available buffer space (2nd try)
Commit 999f643 ("BUG/MEDIUM: channel: fix miscalculation of available buffer
space.") introduced a bug which caused output data to be ignored when computing
the remaining room in a buffer. The problem is that channel_may_recv()
properly considers them and may declare that the FD may be polled for read
events, but once the event strikes, channel_recv_limit() called before recv()
says the opposite. In 1.6 and later this case is automatically caught by
polling loop detection at the connection level and is harmless. But the
backport in 1.5 ends up with a busy polling loop as soon as it becomes
possible to have a buffer with this conflict. In order to reproduce it, it
is necessary to have less than [maxrewrite] bytes available in a buffer, no
forwarding enabled (end of transfer) and [buf->o >= maxrewrite - free space].

Since this heavily depends on socket buffers, it will randomly strike users.
On 1.5 with 8kB buffers it was possible to reproduce it with httpterm using
the following command line :

   $ (printf "GET /?s=675000 HTTP/1.0\r\n\r\n"; sleep 60) | \
       nc6 --rcvbuf-size 1 --send-only 127.0.0.1 8002

This bug is only medium in 1.6 and later but is major in the 1.5 backport,
so it must be backported there.

Thanks to Nenad Merdanovic and Janusz Dziemidowicz for reporting this issue
with enough elements to help understand it.
2016-04-11 17:13:35 +02:00
Thierry Fournier
d0a56c2953 MINOR: dumpstats: split stats_dump_be_stats() in two parts
This patch splits the function stats_dump_be_stats() in two parts. The
extracted part is called stats_fill_be_stats(), and just fills the stats buffer.
This split allows the use of preformatted stats in other parts of HAProxy,
like the Lua code.
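
A hedged sketch of the split pattern used here and in the following
dumpstats commits; the signatures and helpers shown are assumptions :

  /* fill-only part: writes the backend stats for <px> into <out> and returns
   * non-zero on success; reusable from the CLI, the HTML dump or the Lua code */
  static int stats_fill_be_stats(struct proxy *px, struct chunk *out);

  /* dump part: keeps only the I/O around the fill function */
  static int stats_dump_be_stats(struct stream_interface *si, struct proxy *px)
  {
      struct chunk *out = get_trash_chunk();   /* assumed scratch-buffer helper */

      if (!stats_fill_be_stats(px, out))
          return 0;
      bi_putchk(si_ic(si), out);               /* assumed: push the chunk to the channel */
      return 1;
  }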
2016-03-30 17:26:19 +02:00
Thierry Fournier
61fe6c0adb MINOR: dumpstats: split stats_dump_sv_stats() in two parts
This patch splits the function stats_dump_sv_stats() in two parts. The
extracted part is called stats_fill_sv_stats(), and just fills the stats buffer.
This split allows the use of preformatted stats in other parts of HAProxy,
like the Lua code.
2016-03-30 17:26:09 +02:00
Thierry Fournier
c4456856b0 MINOR: dumpstats: split stats_dump_li_stats() in two parts
This patch splits the function stats_dump_li_stats() in two parts. The
extracted part is called stats_fill_li_stats(), and just fills the stats buffer.
This split allows the use of preformatted stats in other parts of HAProxy,
like the Lua code.
2016-03-30 17:26:02 +02:00
Thierry Fournier
23d2d64185 MINOR: dumpstats: split stats_dump_fe_stats() in two parts
This patch splits the function stats_dump_fe_stats() in two parts. The
extracted part is called stats_fill_fe_stats(), and just fills the stats buffer.
This split allows the use of preformatted stats in other parts of HAProxy,
like the Lua code.
2016-03-30 17:21:59 +02:00
Thierry Fournier
cb2c767681 MINOR: dumpstats: split stats_dump_info_to_buffer() in two parts
This patch splits the function stats_dump_info_to_buffer() in two parts. The
extracted part is called stats_fill_info(), and just fills the stats buffer.
This split allows the use of preformatted stats in other parts of HAProxy,
like the Lua code.
2016-03-30 17:21:37 +02:00
Thierry Fournier
31e64ca301 MINOR: dumpstats: extract stats fields enum and names
These field names can be used outside of the dumpstats file.
This will be useful for exporting stats in Lua.
2016-03-30 17:21:09 +02:00
Thierry Fournier
3d4a675f24 MINOR: lua: post initialization
This patch adds a Lua post-initialisation wrapper. It already existed for
pure Lua functions; now it also executes C functions. It is useful for doing
things when the configuration is ready to use. For example, we can browse
and register all the proxies.
2016-03-30 15:44:58 +02:00
Thierry Fournier
45e78d7aa9 MINOR: lua: refactor the Lua object registration
All the HAProxy Lua objects are declared with the same pattern:
 - Add the function __tostring which dumps the object name
 - Register the name in the Lua REGISTRY
 - Register the reference ID

These actions are refactored into one function. This removes some
lines of code.
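
A hedged sketch of what such a common registration helper can look like
with the Lua C API; the helper name is hypothetical :

  #include <lua.h>
  #include <lauxlib.h>

  /* builds a class metatable, hooks __tostring, stores it in the registry
   * under <name> and returns its reference ID */
  static int hlua_register_class_sketch(lua_State *L, const char *name,
                                        int (*tostring)(lua_State *))
  {
      lua_newtable(L);                            /* the class metatable */
      lua_pushcfunction(L, tostring);
      lua_setfield(L, -2, "__tostring");          /* dumps the object name */
      lua_pushvalue(L, -1);
      lua_setfield(L, LUA_REGISTRYINDEX, name);   /* register the name in the REGISTRY */
      return luaL_ref(L, LUA_REGISTRYINDEX);      /* register and return the reference ID */
  }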
2016-03-30 15:43:52 +02:00
Thierry Fournier
ddd8988fe5 MINOR: lua: move class registration facilities
The functions
 - hlua_class_const_int()
 - hlua_class_const_str()
 - hlua_class_function()
are used for common class registration actions.

The function 'hlua_dump_object()' is a generic name dump function.

These functions can be used by all the HAProxy objects, so I moved them
into the safe functions file.
2016-03-30 15:42:20 +02:00
David Carlier
15073a3393 MINOR: sample: Moves ARGS underlying type from 32 to 64 bits.
The ARG# macros allow creating a list of up to 7 arguments in theory, but
5 in practice. The change to a guaranteed 64-bit type increases this
limit to 12.
2016-03-15 22:11:52 +01:00
Willy Tarreau
cb80912001 MEDIUM: stats: support "show info typed" on the CLI
This emits the field positions, names and types. It is more convenient
than the default output for a parser that doesn't know all the fields. It
simply relies on stats_emit_typed_data_field() and stats_emit_field_tags()
added by previous patch for the output. A new stats format flag was added,
STAT_FMT_TYPED, which is set when the "typed" keyword is specified on the
CLI.
2016-03-11 17:08:06 +01:00
Willy Tarreau
b47785f862 MINOR: stats: add functions to emit typed fields into a chunk
The new function stats_emit_typed_data_field() does exactly the same as
stats_emit_raw_data_field() except that it also prints the data
type after a colon. This will be used to print using the typed
format.

And function stats_emit_field_tags() appends a 3-letter code
describing the origin, nature, and scope, followed by an optional
delimiter. This will be particularly convenient to dump typed
data.
2016-03-11 17:08:05 +01:00
Willy Tarreau
8e62c05af2 MINOR: stats: create fields types suitable for all CSV output data
We're preparing for various data types for each stats field as they
appear in the CSV output. For now we only cover the regular types handled
by printf, so we have 32 and 64 bit ints and counters, strings, and of
course "empty" to indicate that there's nothing in the field and which
guarantees that any accessed entry will return 0.
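
A hedged sketch of what such a typed field can look like; the enum and
structure names are assumptions :

  #include <stdint.h>

  enum field_format_sketch {
      FF_EMPTY = 0,        /* nothing in the field, always reads as 0 */
      FF_S32, FF_U32,      /* 32-bit values and counters */
      FF_S64, FF_U64,      /* 64-bit values and counters */
      FF_STR,              /* string */
  };

  struct field_sketch {
      enum field_format_sketch type;
      union {
          int32_t     s32;
          uint32_t    u32;
          int64_t     s64;
          uint64_t    u64;
          const char *str;
      } u;
  };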

More types will surely come later so that some fields are properly
represented. For example, we could see limits where only the value 0
doesn't show up, or human time, etc.
2016-03-11 17:08:04 +01:00
Willy Tarreau
6204cd9f27 BUG/MAJOR: vars: always retrieve the stream and session from the sample
This is the continuation of the previous patch called "BUG/MAJOR: samples:
check smp->strm before using it".

It happens that variables may have a session-wide scope, and that their
session is retrieved by dereferencing the stream. But nothing prevents them
from being used from a streamless context such as tcp-request connection,
thus crashing the process. Example :

    tcp-request connection accept if { src,set-var(sess.foo) -m found }

In order to fix this, we have to always ensure that variable manipulation
only happens via the sample, which contains the correct owner and context,
and that we never use one from a different source. This results in quite a
large change since a lot of functions are indirectly involved in the call
chain, but the change is easy to follow.

This fix must be backported to 1.6, and requires the last two patches.
2016-03-10 17:28:04 +01:00
Willy Tarreau
1777ea63e0 MINOR: sample: add a new helper to initialize the owner of a sample
Since commit 6879ad3 ("MEDIUM: sample: fill the struct sample with the
session, proxy and stream pointers") merged in 1.6-dev2, the sample
contains the pointer to the stream and sample fetch functions as well
as converters use it heavily. This requires a lot of call places to
initialize 4 fields, and it was even forgotten in a few places.

This patch provides a convenient helper to initialize all these fields
at once, making it easy to prepare a new sample from a previous one for
example.
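
A hedged sketch of such a helper; its name and the exact field names
(px, sess, strm, opt) are assumptions based on the description above :

  static inline void smp_set_owner(struct sample *smp, struct proxy *px,
                                   struct session *sess, struct stream *strm,
                                   int opt)
  {
      /* one place to fill the four ownership fields of a sample */
      smp->px   = px;
      smp->sess = sess;
      smp->strm = strm;
      smp->opt  = opt;
  }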

A few call places were cleaned up to make use of it. It will be needed
by further fixes.

At one place in the Lua code, it was moved earlier because we used to
call sample casts with a not completely initialized sample, which is
not clean even though at the moment there are no consequences.
2016-03-10 16:42:58 +01:00
Thierry Fournier
09a9178311 MINOR: server: generalize the "updater" source
The function server_parse_addr_change_request() contains a hardcoded
updater source, "stats command". This function can be called from sources
other than the "stats command", so this patch makes this argument
generic.
2016-02-24 23:37:39 +01:00
Thierry Fournier
d35b7a6d93 CLEANUP: server: add "const" to some message strings
"updater" is used in "read only" mode, so I add a const qualifier
to the variable declaration.
2016-02-24 23:37:39 +01:00
Thierry Fournier
9f72555b65 BUG/MINOR: server: some prototypes are renamed
Commit 87b096 renamed the functions srv_shutdown_backup_sessions()
and srv_shutdown_sessions() to srv_shutdown_backup_streams() and
srv_shutdown_streams().

The header file <proto/servers.h> does not reflect these changes.

This fix should be backported to the 1.6 branch, even if it is useless
there because new development is frozen.
2016-02-23 22:42:47 +01:00
Thierry Fournier
ada348459f MEDIUM: dns: extract options
DNS selection preferences are currently declared inline in the
struct server. They are copied from the server struct to the
dns_resolution struct for each resolution.

The next patches add new preference options, and copying all the
configuration information before each DNS resolution is not a good
approach.

This patch extracts the configuration preferences from the struct
server and declares a new dedicated struct. Only a pointer to this
new struct will be copied before each DNS resolution.
2016-02-19 14:37:46 +01:00
Dragan Dosen
835b9212f6 MEDIUM: log: add a new log format flag "E"
The +E mode escapes characters '"', '\' and ']' with '\' as prefix. It
mostly makes sense to use it in the RFC5424 structured-data log formats.

Example:

log-format-sd %{+Q,+E}o\ [exampleSDID@1234\ header=%[capture.req.hdr(0)]]
2016-02-12 13:36:47 +01:00
Thierry Fournier
9e7e3ea991 MINOR: lua: move common function
This patch moves the function hlua_checkudata, which checks that
an object contains the expected class_reference as metatable.
This function is commonly used by all the Lua functions.
The function hlua_metatype is also moved.
2016-02-12 11:08:53 +01:00
Thierry Fournier
fb0b5467ca MINOR: lua: file dedicated to unsafe functions
When Lua executes functions from its API, these can throw an error.
These functions must be executed in a special environment which catches
these errors, otherwise a critical error (like a segfault) can be raised.

This patch adds a C file called "hlua_fcn.c" which collects all the
Lua/C functions needing a safe environment for their execution.
2016-02-12 11:08:53 +01:00
Thierry Fournier
8feaa661b6 MINOR: map: Add regex matching replacement
This patch declares a new map which provides a string based on
a string with back references replaced by the content matched
by the regex.
2016-02-10 23:38:34 +01:00
Christopher Faulet
443ea1a242 MINOR: filters: Extract proxy stuff from the struct filter
Now, a filter's configuration (.id, .conf and .ops fields) is stored in the
structure 'flt_conf'. So proxies own a flt_conf list instead of a filter
list. When a filter is attached to a stream, it gets a pointer to its
configuration. This avoids mixing the filter's context (owned by a stream)
and its configuration (owned by a proxy). It also saves 2 pointers per
filter instance.
2016-02-09 14:53:15 +01:00
Christopher Faulet
3e7bc67722 MINOR: filters: Remove unused or useless stuff and do small optimizations 2016-02-09 14:53:15 +01:00
Christopher Faulet
da02e17d42 MAJOR: filters: Require explicit registration to filter HTTP body and TCP data
Before, functions to filter the HTTP body (and TCP data) were called from the
moment at least one filter was attached to the stream. If no filter is
interested in these data, this uselessly slows down data parsing.
A good example is the HTTP compression filter. Depending on the request and
response headers, the response compression can be enabled or not. So it is
much nicer to call it only when enabled.

So, now, to filter HTTP/TCP data, a filter must use the function
register_data_filter. For TCP streams, this function can be called only
once. But for HTTP streams, when needed, it must be called for each HTTP request
or HTTP response.
Only registered filters will be called during data parsing. At any time, a
filter can be unregistered by calling the function unregister_data_filter.
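
A hedged usage sketch: a filter registering for body data from its headers
callback only when it is actually interested; the callback signature, the
register_data_filter() arguments and the msg->chn field are assumptions :

  static int my_flt_http_headers(struct stream *s, struct filter *filter,
                                 struct http_msg *msg)
  {
      /* hypothetical predicate: only filter the bodies we care about */
      if (msg_is_interesting(msg))
          register_data_filter(s, msg->chn, filter);
      return 1;  /* let the analyser continue */
  }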
2016-02-09 14:53:15 +01:00
Christopher Faulet
fcf035cb5a MINOR: filters: Add stream_filters structure to hide filters info
From the stream's point of view, this new structure is opaque. It hides the
filters' implementation details. So the impact of future optimizations will
be reduced (well, we hope so...).

Some small improvements have been made in filters.c to avoid useless checks.
2016-02-09 14:53:15 +01:00
Christopher Faulet
309c6418b0 MEDIUM: filters: Replace filter_http_headers callback by an analyzer
This new analyzer will be called for each HTTP request/response, before the
parsing of the body. It is identified by AN_FLT_HTTP_HDRS.

Special care was taken about the following condition :

  * the frontend is a TCP proxy
  * filters are defined in the frontend section
  * the selected backend is an HTTP proxy

So, this patch explicitly adds the AN_FLT_HTTP_HDRS analyzer on the request
and the response channels when the backend is an HTTP proxy and when there
are filters attached to the stream.
This patch simplifies http_request_forward_body and http_response_forward_body
functions.
2016-02-09 14:53:15 +01:00
Christopher Faulet
2fb2880caf MEDIUM: filters: remove http_start_chunk, http_last_chunk and http_chunk_end
For chunked HTTP requests/responses, the body filtering can be really
expensive. In the worst case (many chunks of 1 byte), the filter overhead
is 3 calls per chunk. While the http_data callback is useful, the others are
just informative.

So these callbacks have been removed. Of course, existing filters (trace and
compression) have been updated accordingly. For the HTTP compression filter,
the update is quite large. Its implementation is now closer to the old one.
2016-02-09 14:53:15 +01:00
Christopher Faulet
3e34429515 MEDIUM: filters: Use macros to call filters callbacks to speed-up processing
When no filter is attached to the stream, the CPU footprint due to the calls
to the filters_* functions is huge, especially for chunk-encoded messages.
Using macros to check whether we have some filters or not is a great
improvement.

Furthermore, instead of checking the filter list for emptiness, we introduce
a flag to know whether filters are attached to a stream or not.
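
A hedged sketch of the idea; the flag and macro names are assumptions :

  /* set once when at least one filter is attached to the stream */
  #define STRM_FLT_HAS_FILTERS  0x0001

  #define HAS_FILTERS(strm)  ((strm)->strm_flt.flags & STRM_FLT_HAS_FILTERS)

  /* call a filters_* function only when the stream really has filters */
  #define FLT_STRM_CB(strm, call)   \
      do {                          \
          if (HAS_FILTERS(strm))    \
              (call);               \
      } while (0)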
2016-02-09 14:53:15 +01:00
Christopher Faulet
92d3638d2d MAJOR: filters/http: Rewrite the HTTP compression as a filter
HTTP compression has been rewritten to use the filter API. This is more a PoC
than anything else for now. It allocates memory to work. So, if only for that,
it should be rewritten.

In the meantime, the implementation has been refactored to allow its use with
other filters. However, there are limitations that should be respected:

  - No filter placed after the compression one is allowed to change input data
    (in 'http_data' callback).
  - No filter placed before the compression one is allowed to change forwarded
    data (in 'http_forward_data' callback).

For now, these limitations are informal, so you should be careful when you use
several filters.

About the configuration, 'compression' keywords are still supported and must be
used to configure the HTTP compression behavior. In the absence of a 'filter'
line for the compression filter, it is added to the filter chain when the first
'compression' line is parsed. This is an easy way to proceed when you do not use
other filters. But if another filter exists, an error is reported so that the
user must explicitly declare the filter.

For example:

  listen tst
      ...
      compression algo gzip
      compression offload
      ...
      filter flt_1
      filter compression
      filter flt_2
      ...
2016-02-09 14:53:15 +01:00
Christopher Faulet
3d97c90974 REORG: filters: Prepare creation of the HTTP compression filter
HTTP compression will be moved into a true filter. To prepare the ground, some
functions have been moved to a dedicated file. The idea is to keep everything
about compression algos in compression.c and everything related to the
filtering in flt_http_comp.c.

For now, a header has been added to help during the transition. It will be
removed later.

Unused empty ACL keyword list was removed. The "compression" keyword
parser was moved from cfgparse.c to flt_http_comp.c.
2016-02-09 14:53:15 +01:00
Christopher Faulet
d7c9196ae5 MAJOR: filters: Add filters support
This patch adds the support of filters in HAProxy. The main idea is to have a
way to "easily" extend HAProxy by adding some "modules", called filters, that
will be able to change HAProxy behavior in a programmatic way.

To do so, many entry points have been added in the code to let filters hook
into different steps of the processing. A filter must define a flt_ops
structure (see include/types/filters.h for details). This structure contains
all the available callbacks that a filter can define:

struct flt_ops {
       /*
        * Callbacks to manage the filter lifecycle
        */
       int  (*init)  (struct proxy *p);
       void (*deinit)(struct proxy *p);
       int  (*check) (struct proxy *p);

        /*
         * Stream callbacks
         */
        void (*stream_start)     (struct stream *s);
        void (*stream_accept)    (struct stream *s);
        void (*session_establish)(struct stream *s);
        void (*stream_stop)      (struct stream *s);

       /*
        * HTTP callbacks
        */
       int  (*http_start)         (struct stream *s, struct http_msg *msg);
       int  (*http_start_body)    (struct stream *s, struct http_msg *msg);
       int  (*http_start_chunk)   (struct stream *s, struct http_msg *msg);
       int  (*http_data)          (struct stream *s, struct http_msg *msg);
       int  (*http_last_chunk)    (struct stream *s, struct http_msg *msg);
       int  (*http_end_chunk)     (struct stream *s, struct http_msg *msg);
       int  (*http_chunk_trailers)(struct stream *s, struct http_msg *msg);
       int  (*http_end_body)      (struct stream *s, struct http_msg *msg);
       void (*http_end)           (struct stream *s, struct http_msg *msg);
       void (*http_reset)         (struct stream *s, struct http_msg *msg);
       int  (*http_pre_process)   (struct stream *s, struct http_msg *msg);
       int  (*http_post_process)  (struct stream *s, struct http_msg *msg);
       void (*http_reply)         (struct stream *s, short status,
                                   const struct chunk *msg);
};
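
For illustration, a minimal sketch based on the layout above: a filter only
fills the callbacks it needs and leaves the rest NULL (the return convention
of init is assumed) :

static int  my_flt_init(struct proxy *p)          { return 0; /* assumed: 0 = success */ }
static void my_flt_stream_start(struct stream *s) { /* per-stream setup */ }
static void my_flt_http_end(struct stream *s, struct http_msg *msg) { /* per-message cleanup */ }

static struct flt_ops my_filter_ops = {
        .init         = my_flt_init,
        .stream_start = my_flt_stream_start,
        .http_end     = my_flt_http_end,
};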

To declare and use a filter, in the configuration, the "filter" keyword must be
used in a listener/frontend section:

  frontend test
    ...
    filter <FILTER-NAME> [OPTIONS...]

The filter referenced by <FILTER-NAME> must declare a configuration parser
on its own name to fill the flt_ops and filter_conf fields in the proxy's
structure. An example will be provided later to make it perfectly clear.

For now, filters cannot be used in a backend section. But this is only a matter of
time. Documentation will also be added later. This is the first commit of a long
list about filters.

It is possible to have several filters on the same listener/frontend. These
filters are stored in an array of at most MAX_FILTERS elements (defined in
include/types/filters.h). Again, this will be replaced later by a list of
filters.

The filter API has been highly refactored. Main changes are:

* Now, HA supports an infinite number of filters per proxy. To do so, filters
  are stored in a list.

* Because filters are stored in a list, the filter state has been moved from
  the channel structure to the filter structure. This is cleaner because there
  is no more info about filters in the channel structure.

* It is possible to define filters on backends only. For such filters,
  stream_start/stream_stop callbacks are not called. Of course, it is possible
  to mix frontend and backend filters.

* Now, TCP streams are also filtered. All callbacks without the 'http_' prefix
  are called for all kinds of streams. In addition, 2 new callbacks were added to
  filter data exchanged through a TCP stream:

    - tcp_data: it is called when new data are available or when old unprocessed
      data are still waiting.

    - tcp_forward_data: it is called when some data can be consumed.

* New callbacks attached to channels were added:

    - channel_start_analyze: it is called when a filter is ready to process data
      exchanged through a channel. 2 new analyzers (a frontend and a backend)
      are attached to channels to call this callback. For a frontend filter, it
      is called before any other analyzer. For a backend filter, it is called
      when a backend is attached to a stream. So some processing cannot be
      filtered in that case.

    - channel_analyze: it is called before each analyzer attached to a channel,
      except analyzers responsible for data sending.

    - channel_end_analyze: it is called when all other analyzers have finished
      their processing. A new analyzer is attached to channels to call this
      callback. For a TCP stream, this is always the last one called. For an
      HTTP one, the callback is called when a request/response ends, so it is
      called once for each request/response.

* 'session_established' callback has been removed. Everything that is done in
  this callback can be handled by 'channel_start_analyze' on the response
  channel.

* 'http_pre_process' and 'http_post_process' callbacks have been replaced by
  'channel_analyze'.

* 'http_start' callback has been replaced by 'http_headers'. This new one is
  called just before headers sending and parsing of the body.

* 'http_end' callback has been replaced by 'channel_end_analyze'.

* It is possible to set a forwarder for TCP channels. It was already possible to
  do it for HTTP ones.

* Forwarders can partially consume forwardable data. For this reason a new
  HTTP message state was added before HTTP_MSG_DONE : HTTP_MSG_ENDING.

Now all filters can define the corresponding callbacks (http_forward_data
and tcp_forward_data). Each filter owns 2 offsets relative to buf->p, next and
forward, to track, respectively, input data already parsed but not yet forwarded
by the filter and parsed data considered as forwarded by the filter. At any time,
we have the guarantee that a filter cannot parse or forward more input than the
previous ones. And, of course, it cannot forward more input than it has
parsed. 2 macros have been added to retrieve these offsets: FLT_NXT and FLT_FWD.

In addition, 2 functions have been added to change the 'next size' and the
'forward size' of a filter. When a filter parses input data, it can alter these
data, so the size of these data can vary. This action has an effect on all the
previous filters, which must be handled. To do so, the function
'filter_change_next_size' must be called, passing the size variation. In the
same spirit, if a filter alters forwarded data, it must call the function
'filter_change_forward_size'. 'filter_change_next_size' can be called in the
'http_data' and 'tcp_data' callbacks and only these ones. And
'filter_change_forward_size' can be called in the 'http_forward_data' and
'tcp_forward_data' callbacks and only these ones. The data changes are the
filter's responsibility, but with some limitations. It must not change already
parsed/forwarded data or data that previous filters have not parsed/forwarded
yet.

Because filters can be used on backends, when the backend is set for a
stream, we add the filters defined for this backend to the filter list of the
stream. But we must only do that when the backend and the frontend of the
stream are not the same. Otherwise the same filters are added a second time,
leading to undefined behavior.

The HTTP compression code had to be moved.

This simplifies the http_response_forward_body function. To do so, the way the
data are forwarded has changed. Now, a filter (and only one) can forward data.
In a commit to come, this limitation will be removed to let all filters take
part in data forwarding. There are 2 new functions that filters should use to
deal with this feature:

 * flt_set_http_data_forwarder: This function sets the filter (using its id)
   that will forward data for the specified HTTP message. It is possible if it
   was not already set by another filter _AND_ if no data was yet forwarded
   (msg->msg_state <= HTTP_MSG_BODY). It returns -1 if an error occurs.

 * flt_http_data_forwarder: This function returns the filter id that will
   forward data for the specified HTTP message. If there is no forwarder set, it
   returns -1.

When an HTTP data forwarder is set for the response, the HTTP compression is
disabled. Of course, this is not definitive.
2016-02-09 14:53:15 +01:00
Christopher Faulet
635c0adec2 BUG/MINOR: ssl: Be sure to use unique serial for regenerated certificates
The serial number for a generated certificate was computed using the requested
servername, without any variable/random part. This is not a problem as long as
the certificate is not regenerated.

But if the cache is disabled or when the certificate is evicted from the cache,
we may need to regenerate it. It is important not to reuse the same serial
number for the new certificate. Otherwise clients (especially browsers) trigger
a warning because 2 certificates issued by the same CA have the same serial
number.

So now, the serial is a static variable initialized with now_ms (internal date
in milliseconds) and incremented at each new certificate generation.
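
A hedged sketch of that scheme using the OpenSSL API; the variable and
function names are assumptions, and locking is ignored :

  #include <openssl/x509.h>

  static unsigned int gencert_serial;   /* seeded once with now_ms */

  static void gencert_set_serial_sketch(X509 *newcrt, unsigned int now_ms)
  {
      if (!gencert_serial)
          gencert_serial = now_ms;      /* internal date in milliseconds */
      ASN1_INTEGER_set(X509_get_serialNumber(newcrt), gencert_serial++);
  }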

(Ref MPS-2031)
2016-02-09 09:04:53 +01:00
Christopher Faulet
c34d19fc3c BUG: stream_interface: Reuse connection even if the output channel is empty
In the function 'si_connect', an existing connection is reused (and considered
as established) only when there are some pending data in the output channel.

This can be a problem when filters are used, because a filter can choose not to
forward data immediately. So when we try to initiate a connection to a server,
the output channel can be empty. In this situation, if the connection already
exists, it is not considered as established and nothing happens. If the stream
interface is in the state SI_ST_ASS, this leads to an infinite loop in
process_stream because it remains in this state.

This patch fixes this problem. Now, in 'si_connect', we always reuse an existing
connection, whether or not there are pending data in the output channel.
2016-02-03 14:22:55 +01:00
Willy Tarreau
999f643ed2 BUG/MEDIUM: channel: fix miscalculation of available buffer space.
The function channel_recv_limit() relies on channel_reserved() which
itself relies on channel_in_transit(). Individually they're OK but
combined they're doing the wrong thing.

The problem is that we refrain from filling buffers even while to_forward
is much larger than the buffer, because of a semantic issue along
the call chain. This is particularly visible when offloading SSL on
moderately large files (1 MB), though it is also visible on clear text.
Twice the number of recv() calls are made compared to what is needed,
and the typical performance drops by 15-20% in SSL in 1.6 and later,
and no directly measurable drop in 1.5 except when using strace.

There's no need for all these intermediate functions, so let's get
rid of them and reimplement channel_recv_limit() from scratch in a
safer way.

This fix needs to be backported to 1.6 and 1.5 (at least). Note that in
1.5 the function is called buffer_recv_limit() and it may differ a bit.
2016-01-25 02:31:18 +01:00
Thiago Farina
b1af23ebea MINOR: fix the return type for dns_response_get_query_id() function
This function should return a 16-bit type, as that is the type of the
DNS header id, and because it performs a big-endian uint16 unpack
operation.
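
A sketch of the intent (not the actual function body) :

  #include <stdint.h>

  /* the query id is the first 16 bits of the DNS header, sent big-endian */
  static uint16_t dns_response_get_query_id_sketch(const unsigned char *resp)
  {
      return (uint16_t)((resp[0] << 8) | resp[1]);
  }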

Backport: can be backported to 1.6

Signed-off-by: Thiago Farina <tfarina@chromium.org>
Signed-off-by: Baptiste Assmann <bedis9@gmail.com>
2016-01-20 23:51:24 +01:00
Christopher Faulet
a94e5a548c MINOR: filters/http: Use a wrapper function instead of stream_int_retnclose
The function http_reply_and_close has been added in proto_http.c to wrap calls
to stream_int_retnclose. This function will be modified when the filters are
added.
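
A hedged sketch of what such a wrapper can look like; the txn->status
assignment and the field names are assumptions :

  static void http_reply_and_close_sketch(struct stream *s, short status,
                                          struct chunk *msg)
  {
      s->txn->status = status;                 /* record the status for logging */
      stream_int_retnclose(&s->si[0], msg);    /* reply on the client side and close */
  }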
2015-12-28 16:49:36 +01:00
Baptiste Assmann
e9544935e8 BUG/MINOR: http rule: http capture 'id' rule points to a non existing id
It is possible to create an http capture rule which points to a capture slot
id which does not exist.

The current patch prevents this when parsing the configuration and refuses to
run a configuration which contains such rules.

This configuration is now invalid:

  frontend f
   bind :8080
   http-request capture req.hdr(User-Agent) id 0
   default_backend b

this one as well:

  frontend f
   bind :8080
   declare capture request len 32 # implicit id is 0 here
   http-request capture req.hdr(User-Agent) id 1
   default_backend b

It applies of course to both http-request and http-response rules.
2015-11-04 08:47:55 +01:00
Christopher Faulet
7969a33a01 MINOR: ssl: Add support for EC for the CA used to sign generated certificates
This is done by adding the EVP_PKEY_EC type to the supported types for the CA
private key when we get the message digest used to sign a generated X509
certificate. So now, we support DSA, RSA and EC private keys.

And to be sure, when the type of the private key is not directly supported, we
get its default message digest using the function
'EVP_PKEY_get_default_digest_nid'.
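
A hedged sketch of that selection logic with the OpenSSL API; the use of
SHA-256 for the directly supported key types is an assumption :

  #include <openssl/evp.h>

  static const EVP_MD *gencert_digest_sketch(EVP_PKEY *capkey)
  {
      int nid;

      switch (EVP_PKEY_base_id(capkey)) {
      case EVP_PKEY_RSA:
      case EVP_PKEY_DSA:
      case EVP_PKEY_EC:
          return EVP_sha256();   /* directly supported key types */
      default:
          /* otherwise ask OpenSSL for the key's default message digest */
          if (EVP_PKEY_get_default_digest_nid(capkey, &nid) <= 0)
              return NULL;
          return EVP_get_digestbynid(nid);
      }
  }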

We also use the key of the default certificate instead of generating it. So
we are sure to use the same key type instead of always using an RSA key.
2015-10-09 12:13:12 +02:00
Christopher Faulet
77fe80c0b4 MINOR: ssl: Release Servers SSL context when HAProxy is shut down
[wt: could be backported to 1.5 as well]
2015-10-09 10:33:00 +02:00