Commit Graph

6022 Commits

Author SHA1 Message Date
PiBa-NL
c55b88ece6 BUG/MEDIUM: pollers/kqueue: use incremented position in event list
When composing the event list to subscribe to kqueue events, the index
where a new event is added must come after the previous events, so the
changes counter must keep counting across them.

This caused haproxy to accept connections but not try to read and process
the incoming data.
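
A minimal sketch of the intended pattern (hypothetical variable names, not
the exact haproxy poller code):

  #include <sys/types.h>
  #include <sys/event.h>

  /* the change-list index must keep counting across fds, otherwise a
   * later EV_SET() overwrites a change that is still pending */
  void queue_changes(struct kevent *kev, int *changes, int fd)
  {
      EV_SET(&kev[(*changes)++], fd, EVFILT_READ,  EV_ADD, 0, 0, NULL);
      EV_SET(&kev[(*changes)++], fd, EVFILT_WRITE, EV_ADD, 0, 0, NULL);
  }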

This patch is for 1.9 only
2018-05-11 14:08:56 +02:00
Willy Tarreau
29d698040d BUG/MINOR: lua: ensure large proxy IDs can be represented
In function hlua_fcn_new_proxy() too small a buffer was passed to
snprintf(), causing snprintf() to fail for large proxy or listener IDs.
This case is unlikely to be met, but let's fix it anyway.
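
A sketch of the sizing principle (hypothetical names, not the exact
haproxy code):

  #include <stdio.h>

  /* a proxy/listener id is an int, so the buffer must hold the longest
   * possible decimal representation, including sign and trailing NUL */
  void dump_id(int id)
  {
      char buf[sizeof("-2147483648")];   /* 12 bytes, enough for any int */
      snprintf(buf, sizeof(buf), "%d", id);
      puts(buf);
  }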

This fix must be backported to all stable branches where it applies.
2018-05-06 14:50:09 +02:00
PiBa-NL
706d5ee0c3 BUG/MINOR: lua: schedule socket task upon lua connect()
Parameters like the server address, port and timeout should be set before
the process_stream task is called, to avoid the stream being 'closed' before
it is properly initialized. This is most clearly visible when running with
tune.lua.forced-yield=1. So the task should not be scheduled when the Lua
socket is created, but when connect is called. Otherwise the error
"socket: not yet initialised, you can't set timeouts." would appear.

The code below, for example, also shows this issue, as the sleep will
yield the Lua code:
  local con = core.tcp()
  core.sleep(1)
  con:settimeout(10)
2018-05-06 14:36:41 +02:00
Olivier Houchard
cb92f5cae4 MINOR: pollers: move polled_mask outside of struct fdtab.
The polled_mask is only used in the pollers, and removing it from
struct fdtab makes the struct fit in one 64-byte cache line again on a
64-bit machine, so make it a separate array.
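
A minimal sketch of the layout change (hypothetical, simplified types; not
the exact haproxy structures):

  /* before: the poller-private mask lived inside the per-fd struct,
   * pushing it past a 64-byte cache line on 64-bit machines */
  struct fdtab_before {
      unsigned long thread_mask;   /* threads allowed to use this fd */
      unsigned long polled_mask;   /* threads currently polling it   */
      /* ... other hot fields ... */
  };

  /* after: the hot per-fd struct stays small, and the pollers use a
   * parallel array indexed by fd, allocated separately */
  struct fdtab_after {
      unsigned long thread_mask;
      /* ... other hot fields ... */
  };
  extern unsigned long *polled_mask;   /* polled_mask[fd] */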
2018-05-06 06:27:34 +02:00
Olivier Houchard
6b96f7289c BUG/MEDIUM: pollers: Use a global list for fd shared between threads.
With the old model, any fd shared by multiple threads, such as listeners
or dns sockets, would only be updated on one thread, which could lead
to missed events or spurious wakeups.
To avoid this, add a global list for fds that are shared, using the same
implementation as the fd cache, and only remove entries from this list
when every thread has updated its poller.

[wt: this will need to be backported to 1.8 but differently so this patch
 must not be backported as-is]
2018-05-06 06:27:09 +02:00
Olivier Houchard
6a2cf8752c MINOR: fd: Make the lockless fd list work with multiple lists.
Modify fd_add_to_fd_list() and fd_rm_from_fd_list() so that they take an
offset in the fdtab to the list entry, instead of hardcoding the fd cache,
so that they can be used with other lists.
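
A sketch of the idea using offsetof() (hypothetical, simplified types; the
real functions and structures differ):

  #include <stddef.h>

  struct fdlist_entry { int next, prev; };

  /* simplified per-fd entry holding two chainable list cells */
  struct fdtab_sketch {
      struct fdlist_entry cache;    /* fd cache list      */
      struct fdlist_entry update;   /* shared update list */
  };

  /* the same list code serves either list: callers pass the offset of the
   * cell they want, e.g. offsetof(struct fdtab_sketch, update) */
  static struct fdlist_entry *fd_cell(struct fdtab_sketch *tab, int fd, size_t off)
  {
      return (struct fdlist_entry *)((char *)&tab[fd] + off);
  }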
2018-05-06 06:25:49 +02:00
Olivier Houchard
9b36cb4a41 BUG/MEDIUM: task: Don't free a task that is about to be run.
While running a task, we may try to delete and free a task that is about to
be run, because it's part of the local tasks list, or because rq_next points
to it.
So flag any such task in the local tasks list to be deleted instead of run,
by setting t->process to NULL, and make rq_next a global, thread-local
variable again, which is modified if we attempt to delete that task.
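
A very simplified model of the approach (hypothetical names, not haproxy's
real task code):

  #include <stdlib.h>

  struct task_sketch {
      void (*process)(struct task_sketch *t); /* NULL: discard, don't run */
  };

  /* deletion side: a task still reachable by the runner (local run list,
   * or pointed to by rq_next) is only neutralized, never freed here */
  static void safe_task_delete(struct task_sketch *t, int reachable_by_runner)
  {
      if (reachable_by_runner)
          t->process = NULL;
      else
          free(t);
  }

  /* runner side: a task whose handler was cleared is freed instead of run */
  static void run_one(struct task_sketch *t)
  {
      if (!t->process)
          free(t);
      else
          t->process(t);
  }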

Many thanks to PiBa-NL for reporting this and analysing the problem.

This should be backported to 1.8.
2018-05-04 20:11:04 +02:00
Dragan Dosen
336a11f755 BUG/MINOR: map: correctly track reference to the last ref_elt being dumped
The bug was introduced in the commit 8d85aa4 ("BUG/MAJOR: map: fix
segfault during 'show map/acl' on cli").

This patch should be backported to 1.8, 1.7 and 1.6.
2018-05-04 17:14:39 +02:00
Patrick Hemmer
32d539fa88 MINOR: lua: add get_maxconn and set_maxconn to LUA Server class. 2018-05-03 18:53:42 +02:00
Patrick Hemmer
a62ae7ed9a MINOR: lua: Add server name & puid to LUA Server class. 2018-05-03 18:44:44 +02:00
Willy Tarreau
760e81d356 MINOR: backend: implement random-based load balancing
For large farms where servers are regularly added or removed, picking
a random server from the pool can ensure faster load transitions than
when using round-robin, and fewer traffic surges on the newly added
servers than when using leastconn.

This commit introduces "balance random". It internally uses a random number as
the key to the consistent hashing mechanism, so all features available with
consistent hashing, such as weights and bounded load via hash-balance-factor,
are usable. It is extremely convenient because one common concern when using
random is what happens when a server is hammered a bit too much. Here that can
trivially be avoided, like in the configuration below :

    backend bk0
        balance random
        hash-balance-factor 110
        server-template s 1-100 127.0.0.1:8000 check inter 1s

Note that while "balance random" internally relies on a hash algorithm,
it holds the same properties as round-robin and as such is compatible with
reusing an existing server connection with "option prefer-last-server".
2018-05-03 07:20:40 +02:00
PiBa-NL
fe971b35ae BUG/MINOR: lua: Put tasks to sleep when waiting for data
If a Lua socket is waiting for data, it currently spins at 100% cpu usage.
This is because the TICK_ETERNITY returned by the socket is ignored when
setting the 'expire' time of the task.

Fixed by removing the check for yields that return TICK_ETERNITY.

This should be backported to at least 1.8.
2018-05-03 05:00:25 +02:00
Christopher Faulet
148b16e1ce BUG/MEDIUM: threads: Fix the sync point for more than 32 threads
In the sync point, to know if a thread has requested a synchronization, we call
the function thread_need_sync(). It should return 1 if yes, otherwise it should
return 0. It is intended to return a signed integer.

But internally, instead of returning 0 or 1, it returns 0 or tid_bit
(threads_want_sync & tid_bit). So tid_bit is cast to an integer. For the first
32 threads, it's ok, because we always check whether thread_need_sync() returns
something other than 0. But this is a problem if HAProxy is started with more
than 32 threads, because for threads 33 to 64 (so for tid 32 to 63), their
tid_bit cast to an integer evaluates to 0. So the sync point does not work for
more than 32 threads.

Now the function thread_need_sync() respects its contract, returning 0 or 1.
The function thread_no_sync() has also been updated to avoid any ambiguity.
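
A standalone illustration of the truncation, assuming a typical 64-bit
platform (not haproxy code):

  #include <stdio.h>

  int main(void)
  {
      unsigned long threads_want_sync = ~0UL; /* every thread wants a sync */
      unsigned long tid_bit = 1UL << 40;      /* bit of thread 41 (tid 40) */

      /* narrowing to int drops the upper bits: 0 in practice */
      int old_ret = threads_want_sync & tid_bit;
      /* normalizing to 0/1 before narrowing keeps the information: 1 */
      int new_ret = !!(threads_want_sync & tid_bit);

      printf("old=%d new=%d\n", old_ret, new_ret);
      return 0;
  }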

This patch must be backported to HAProxy 1.8.
2018-05-02 17:58:36 +02:00
Christopher Faulet
b119a79fc3 BUG/MINOR: checks: Fix check->health computation for flapping servers
This patch fixes an old bug introduced in the commit 7b1d47ce ("MAJOR: checks:
move health checks changes to set_server_check_status()"). When a DOWN server is
flapping, every time a check succeeds, check->health is incremented. But when a
check fails, it is decremented only when it is higher than the rise value. So if
only one check succeeds for a DOWN server, check->health will remain set to 1 for
all subsequent failing checks.

At first glance, this does not seem that terrible because the server remains
DOWN. But it is reported in the transitional state "DOWN server, going up". And
it will remain in this state until it is UP again. There is also an insidious
side effect. If a DOWN server is flapping from time to time, it will end up
being considered UP after a single successful check, regardless of the rise
threshold, because check->health will be increased slowly and never decreased.

To fix the bug, we just need to reset check->health to 0 when a check fails for
a DOWN server. To do so, we just need to relax the condition that handles a
failure in the function set_server_check_status.

This patch must be backported to haproxy 1.5 and newer.
2018-05-02 14:57:58 +02:00
Patrick Hemmer
e027547f8d MINOR: ssl: add fetch 'ssl_fc_session_key' and 'ssl_bc_session_key'
These fetches return the SSL master key of the front/back connection.
This is useful to decrypt traffic encrypted with ephemeral ciphers.
2018-04-30 14:56:19 +02:00
Patrick Hemmer
419667746b MINOR: ssl: disable SSL sample fetches when unsupported
Previously these fetches would return empty results when HAProxy was
compiled without the requisite SSL support. This results in confusion and
problem reports from people who unexpectedly encounter the behavior.
2018-04-30 14:56:19 +02:00
Willy Tarreau
46deab6e64 BUG/MINOR: config: disable http-reuse on TCP proxies
Louis Chanouha reported an inappropriate warning when http-reuse is
present in a defaults section while a TCP proxy accidentally inherits
it and finds a conflict with other options like the use of the PROXY
protocol. To fix this, the patch removes the http-reuse option for TCP
proxies.

This fix needs to be backported to 1.8, 1.7 and possibly 1.6.
2018-04-28 07:18:15 +02:00
Tim Duesterhus
e2b10bf491 MINOR: http: Add support for 421 Misdirected Request
This makes haproxy aware of HTTP 421 Misdirected Request, which
is defined in RFC 7540, section 9.1.2.
2018-04-28 07:03:39 +02:00
Tim Duesterhus
ca097c16a8 MINOR: sample: Add strcmp sample converter
This converter supplements the existing string matching by allowing a
string to be compared against the contents of a variable.

Example usage:

  http-request set-var(txn.host) hdr(host)
  # Check whether the client is attempting domain fronting.
  acl ssl_sni_http_host_match ssl_fc_sni,strcmp(txn.host) eq 0
2018-04-28 07:03:39 +02:00
Christopher Faulet
5bc9972ed8 BUG/MINOR: lua/threads: Make lua's tasks sticky to the current thread
PiBa-NL reported a bug with tasks registered in Lua when HAProxy is started
with several threads. These tasks have no specific affinity with threads, so
they can be woken up on any thread. So it is impossible for these tasks to
handle cosockets or applets, because cosockets and applets are sticky to the
thread which created them. It is forbidden to manipulate a cosocket from
another thread.

So to fix the bug, tasks registered in Lua are now sticky to the current
thread. Because these tasks can be registered before thread creation, the
affinity is set the first time a Lua task is processed.

This patch must be backported to HAProxy 1.8.
2018-04-26 22:58:16 +02:00
Aurélien Nephtali
1e0867cfbc MINOR: ssl: Add payload support to "set ssl ocsp-response"
It is now possible to use a payload with the "set ssl ocsp-response"
command.  These syntaxes will work the same way:

 # echo "set ssl ocsp-response $(base64 -w 10000 ocsp.der)" | \
     socat /tmp/sock1 -

 # echo -e "set ssl ocsp-response <<\n$(base64 ocsp.der)\n" | \
     socat /tmp/sock1 -

Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
2018-04-26 14:20:09 +02:00
Aurélien Nephtali
25650ce513 MINOR: map: Add payload support to "add map"
It is now possible to use a payload with the "add map" command.
These syntaxes will work the same way:

 # echo "add map #-1 key value" | socat /tmp/sock1 -

 # echo -e "add map #-1 <<\n$(cat data)\n" | socat /tmp/sock1 -

with

 # cat data
 key1 value1 with spaces
 key2 value2
 key3 value3 also with spaces

Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
2018-04-26 14:20:01 +02:00
Aurélien Nephtali
abbf607105 MEDIUM: cli: Add payload support
In order to use arbitrary data in the CLI (multiple lines or groups of words
that must be considered as a whole, for example), it is now possible to add a
payload to the commands. To do so, the first line needs to end with a special
pattern: <<\n. Everything that follows will be left untouched by the CLI parser
and will be passed to the command parsers.

Per-command support will need to be added to take advantage of this
feature.

Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
2018-04-26 14:19:33 +02:00
Christopher Faulet
799f51801a BUG/MINOR: spoe: Fix parsing of dontlog-normal option
A missing goto led to a parsing error when the line "option dontlog-normal"
was parsed.
2018-04-26 11:50:30 +02:00
Christopher Faulet
ebe1399efe BUG/MINOR: spoe: Fix counters update when processing is interrupted
Because of a typo, <nb_sending> was incremented instead of decremented when
the processing is interrupted.
2018-04-26 11:50:18 +02:00
Willy Tarreau
eba10f24b7 BUG/MEDIUM: h2: implement missing support for chunked encoded uploads
Upload requests not carrying a content-length nor tunnelling data must
be sent chunked-encoded over HTTP/1. The code was planned but for some
reason forgotten during the implementation, leading to such payloads being
sent as tunnelled data.

Browsers always emit a content length in uploads so this problem doesn't
happen for most sites. However some applications may send data frames
after a request without indicating it earlier.

The only way to detect that a client will need to send data is that the
HEADERS frame doesn't hold the ES bit. In this case it's wise to look
for the content-length header. If it's not there, either we're in tunnel
(CONNECT method) or chunked-encoding (other methods).

This patch implements this.

The following request is sent using content-length :

    curl --http2 -sk https://127.0.0.1:4443/s2 -XPOST -T /large/file

and these ones using chunked-encoding :

    curl --http2 -sk https://127.0.0.1:4443/s2 -XPUT -T /large/file
    curl --http2 -sk https://127.0.0.1:4443/s2 -XPUT -T - < /dev/urandom

Thanks to Robert Samuel Newson for raising this issue with details.
This fix must be backported to 1.8.
2018-04-26 10:20:44 +02:00
Willy Tarreau
174b06a572 MINOR: h2: detect presence of CONNECT and/or content-length
We'll need this in order to support uploading chunks. The h2 to h1
converter checks for the presence of the content-length header field
as well as the CONNECT method and returns this information to the
caller. The caller indicates whether or not a body is detected for
the message (presence of END_STREAM or not). No transfer-encoding
header is emitted yet.
2018-04-26 10:15:14 +02:00
Tim Duesterhus
cd235c6042 BUG/MEDIUM: lua: Fix segmentation fault if a Lua task exits
PiBa-NL reported that haproxy crashes with a segmentation fault
if a function registered using `core.register_task` returns.

An example Lua script that reproduces the bug is:

  mytask = function()
  	core.Info("Stopping task")
  end
  core.register_task(mytask)

The Valgrind output is as follows:

  ==6759== Process terminating with default action of signal 11 (SIGSEGV)
  ==6759==  Access not within mapped region at address 0x20
  ==6759==    at 0x5B60AA9: lua_sethook (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==6759==    by 0x430264: hlua_ctx_resume (hlua.c:1009)
  ==6759==    by 0x43BB68: hlua_process_task (hlua.c:5525)
  ==6759==    by 0x4FED0A: process_runnable_tasks (task.c:231)
  ==6759==    by 0x4B2256: run_poll_loop (haproxy.c:2397)
  ==6759==    by 0x4B2256: run_thread_poll_loop (haproxy.c:2459)
  ==6759==    by 0x41A7E4: main (haproxy.c:3049)

Add the missing `task = NULL` for the `HLUA_E_OK` case. The error cases
have been fixed as of 253e53e661 which
was first included in haproxy v1.8-dev3. This bugfix should be backported
to haproxy 1.8.
2018-04-25 11:30:56 +02:00
Rian McGuire
89fcb7d929 BUG/MINOR: log: t_idle (%Ti) is not set for some requests
If TCP content inspection is used, msg_state can be >= HTTP_MSG_ERROR
the first time http_wait_for_request is called. t_idle was being left
unset in that case.

In the example below :
     stick-table type string len 64 size 100k expire 60s
     tcp-request inspect-delay 1s
     tcp-request content track-sc1 hdr(X-Session)

%Ti will always be -1, because the msg_state is already at HTTP_MSG_BODY
when http_wait_for_request is called for the first time.

This patch should be backported to 1.8 and 1.7.
2018-04-25 08:59:23 +02:00
Tim Duesterhus
45be38c9c7 BUG/MAJOR: channel: Fix crash when trying to read from a closed socket
When haproxy is compiled using GCC <= 3.x or >= 5.x the `unlikely`
macro performs a comparison with zero: `(x) != 0`, thus returning
either 0 or 1.

In `int co_getline_nc()` this macro was accidentally applied to
the variable `retcode` itself, instead of the result of the
comparison `retcode <= 0`. As a result any negative `retcode`
is converted to `1` for the purposes of the comparison, so the
branch (which exits the function) is never taken for negative
values.

This in turn leads to reads of uninitialized memory in the for-loop
below:

  ==12141== Conditional jump or move depends on uninitialised value(s)
  ==12141==    at 0x4EB6B4: co_getline_nc (channel.c:346)
  ==12141==    by 0x421CA4: hlua_socket_receive_yield (hlua.c:1713)
  ==12141==    by 0x421F6F: hlua_socket_receive (hlua.c:1896)
  ==12141==    by 0x529B08F: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x52A7EFC: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529B497: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529711A: lua_pcallk (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x52ABDF0: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529B08F: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x52A7EFC: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529A9F1: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529B523: lua_resume (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==
  ==12141== Use of uninitialised value of size 8
  ==12141==    at 0x4EB6B9: co_getline_nc (channel.c:346)
  ==12141==    by 0x421CA4: hlua_socket_receive_yield (hlua.c:1713)
  ==12141==    by 0x421F6F: hlua_socket_receive (hlua.c:1896)
  ==12141==    by 0x529B08F: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x52A7EFC: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529B497: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529711A: lua_pcallk (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x52ABDF0: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529B08F: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x52A7EFC: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529A9F1: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529B523: lua_resume (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==
  ==12141== Invalid read of size 1
  ==12141==    at 0x4EB6B9: co_getline_nc (channel.c:346)
  ==12141==    by 0x421CA4: hlua_socket_receive_yield (hlua.c:1713)
  ==12141==    by 0x421F6F: hlua_socket_receive (hlua.c:1896)
  ==12141==    by 0x529B08F: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x52A7EFC: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529B497: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529711A: lua_pcallk (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x52ABDF0: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529B08F: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x52A7EFC: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529A9F1: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==    by 0x529B523: lua_resume (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
  ==12141==  Address 0x8637171e928bb500 is not stack'd, malloc'd or (recently) free'd

Fix this bug by correctly applying the `unlikely` macro to the result of the comparison.
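
A standalone demonstration of the difference, using a simplified stand-in
for the macro described above (not haproxy's exact definition):

  #include <stdio.h>

  /* compares its argument with zero, so it yields only 0 or 1 */
  #define unlikely(x) (__builtin_expect((x) != 0, 0))

  int main(void)
  {
      int retcode = -1;

      /* buggy form: unlikely(-1) evaluates to 1, and 1 <= 0 is false,
       * so the early-exit branch is never taken for negative values */
      if (unlikely(retcode) <= 0)
          printf("buggy form exits\n");

      /* fixed form: the whole comparison goes through the macro */
      if (unlikely(retcode <= 0))
          printf("fixed form exits\n");

      return 0;
  }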

This bug exists as of commit ca16b03813
which is the first commit adding this function.

v1.6-dev1 is the first tag containing this commit, the fix should
be backported to haproxy 1.6 and newer.
2018-04-25 05:39:49 +02:00
Aurélien Nephtali
564d15a71e BUG/MINOR: pattern: Add a missing HA_SPIN_INIT() in pat_ref_newid()
pat_ref_newid() is lacking a spinlock init. It was probably forgotten
in b5997f740b ("MAJOR: threads/map: Make acls/maps thread safe").

Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
2018-04-19 17:49:48 +02:00
Willy Tarreau
3f0e1ec701 BUG/CRITICAL: h2: fix incorrect frame length check
The incoming H2 frame length was checked against the max_frame_size
setting instead of being checked against the bufsize. The max_frame_size
only applies to outgoing traffic and not to incoming one, so if a large
enough frame size is advertised in the SETTINGS frame, a wrapped frame
will be defragmented into a temporarily allocated buffer where the second
fragment may overflow the heap by up to 16 kB.

It is very unlikely that this can be exploited for code execution given
that buffers are very short lived and their address not realistically
predictable in production, but an immediate crash, on the other hand, is
absolutely certain.

This fix must be backported to 1.8.

Many thanks to Jordan Zebor from F5 Networks for reporting this issue
in a responsible way.
2018-04-19 10:35:30 +02:00
Willy Tarreau
9eb2a4addf BUILD: sample: avoid build warning in sample.c
Recent commit 9631a28 ("MEDIUM: sample: Extend functionality for field/word
converters") introduced this minor build warning that this patch addresses :

 src/sample.c: In function 'sample_conv_word':
 src/sample.c:2108:8: warning: suggest explicit braces to avoid ambiguous 'else' [-Wparentheses]
 src/sample.c:2137:8: warning: suggest explicit braces to avoid ambiguous 'else' [-Wparentheses]

No backport is needed.
2018-04-19 10:33:28 +02:00
Olivier Houchard
ebaba75429 BUG/MEDIUM: kqueue: When adding new events, provide an output to get errors.
When adding new events using kevent(), if there's an error (because we're
trying to delete an event that wasn't there, or because the fd has already
been closed), kevent() will either add an event to the eventlist array, if
there's enough room for it, and keep on handling other events, or stop and
return -1.
We want it to process all the events, so give it a large enough array to
store any error.
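
A minimal standalone illustration of the error-reporting behaviour of
kevent() with an output eventlist (BSD/macOS only, not haproxy code):

  #include <sys/types.h>
  #include <sys/event.h>
  #include <sys/time.h>
  #include <unistd.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      int kq = kqueue();
      int fds[2];
      pipe(fds);

      /* two changes: one valid registration, and one deletion of a
       * filter that was never added (this one will fail with ENOENT) */
      struct kevent changes[2], errors[2];
      EV_SET(&changes[0], fds[0], EVFILT_READ, EV_ADD, 0, 0, NULL);
      EV_SET(&changes[1], fds[1], EVFILT_READ, EV_DELETE, 0, 0, NULL);

      /* with no room in the eventlist, a failing change makes kevent()
       * return -1; with an output array at least as large as the
       * changelist, failures come back as EV_ERROR events instead and
       * all changes get processed */
      struct timespec ts = { 0, 0 };
      int n = kevent(kq, changes, 2, errors, 2, &ts);

      for (int i = 0; i < n; i++)
          if (errors[i].flags & EV_ERROR)
              printf("change on fd %d failed: %s\n",
                     (int)errors[i].ident, strerror((int)errors[i].data));
      return 0;
  }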

Special thanks to PiBa-NL for diagnosing the root cause of this bug.

This should be backported to 1.8.
2018-04-17 17:46:56 +02:00
William Lallemand
daf4cd209a MINOR: export localpeer as an environment variable
Export localpeer as the environment variable $HAPROXY_LOCALPEER,
allowing this variable to be used in the configuration file.

This variable is useful when the configuration is synchronized between peers.
2018-04-17 17:17:58 +02:00
Marcin Deranek
9631a28275 MEDIUM: sample: Extend functionality for field/word converters
Extend the functionality of the field/word converters, so it's possible
to extract field(s)/word(s) counting from the beginning/end and/or
extract multiple fields/words (including separators), e.g.:

str(f1_f2_f3__f5),field(2,_,2)  # f2_f3
str(f1_f2_f3__f5),field(2,_,0)  # f2_f3__f5
str(f1_f2_f3__f5),field(-2,_,3) # f2_f3_
str(f1_f2_f3__f5),field(-3,_,0) # f1_f2_f3

str(w1_w2_w3___w4),word(3,_,2)  # w3___w4
str(w1_w2_w3___w4),word(2,_,0)  # w2_w3___w4
str(w1_w2_w3___w4),word(-2,_,3) # w1_w2_w3
str(w1_w2_w3___w4),word(-3,_,0) # w1_w2

Change is backward compatible.
2018-04-17 11:27:48 +02:00
Aurélien Nephtali
9a4da683a6 MINOR: cli: Ensure the CLI always outputs an error when it should
When using the CLI_ST_PRINT_FREE state, always output something back
if the faulty function did not fill the 'err' variable.
The map/acl code could lead to a crash whereas the SSL code was silently
failing.

Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
2018-04-16 19:23:16 +02:00
Aurélien Nephtali
c511b7cc97 BUG/MINOR: cli: Guard against NULL messages when using CLI_ST_PRINT_FREE
Some error paths (especially those followed when running out of memory)
can set the error message to NULL. In order to avoid a crash, use a
generic message ("Out of memory") when this case arises.

It should be backported to 1.8.

Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
2018-04-16 19:22:42 +02:00
Ben Draut
054fbee67a MINOR: config: Warn if resolvers has no nameservers
Today, a `resolvers` section may be configured without any `nameserver`
directives, which is useless. This implements a warning when such
sections are detected.

[List thread][1].

[1]: https://www.mail-archive.com/haproxy@formilux.org/msg29600.html
2018-04-16 15:58:23 +02:00
Marcin Deranek
9a66dfbd6c MINOR: proxy: Add fe_defbe fetcher
This patch adds the ability to fetch the frontend's default backend name in
your logic, so it can be used later to derive other backend names and make
routing decisions.
2018-04-16 15:51:57 +02:00
Christopher Faulet
11ebb2080e BUG/MINOR: http: Return an error in proxy mode when url2sa fails
In proxy mode, the result of url2sa is never checked. So when the function fails
to resolve the destination server from the URL, we continue. Depending on the
internal state of the connection, we get different behaviours. With a newly
allocated connection, the field <addr.to> is not set. So we will get an HTTP
error. The status code is 503 instead of 400, but it's not really critical. But
if it's a recycled connection, we will reuse the previous value of <addr.to>,
opening a connection to an unexpected server.

To fix the bug, we return an error when url2sa fails.

This patch should be backported to all versions from 1.5.
2018-04-16 15:31:18 +02:00
Thierry Fournier
f7b7c3e2f2 MINOR: servers: Support alphanumeric characters for the server templates names
The 'server-template' directive doesn't support the same name alphabet as
the 'server' directive. This patch allows the usage of chars [0-9].

[wt: let's backport this to 1.8 to apply the principle of least surprise
 to people migrating to server templates]
2018-04-06 19:16:18 +02:00
Willy Tarreau
1093a4586c BUG/MAJOR: cache: always initialize newly created objects
Recent commit 5bd37fa ("BUG/MAJOR: cache: fix random crashes caused by
incorrect delete() on non-first blocks") addressed an issue where
dangling objects could be deleted in the cache, but even after this fix
some similar segfaults were reported at the same place (cache_free_blocks()).
The tree was always corrupted as well. Placing some traces revealed
that this time it's caused by a missing initialization in
http_action_store_cache() : while object->eb.key is used to note that
the object is not in the tree, the first retrieved block may contain
random data and is not initialized. Further, this entry can be updated
later without the object being inserted into the tree. Thus, if at the
end the object is not stored and the blocks are put back to the avail
list, the next attempt to use them will find eb.key != 0 and will try
to delete the uninitialized block, will see that eb.node.leaf_p is not
NULL (random data), and will dereference it as well as a few other
uninitialized pointers. It was harder to trigger than the previous one,
despite being very closely related. This time the following config
was used :

  listen l1
    mode http
    bind :8888
    http-request cache-use c1
    http-response cache-store c1
    server s1 127.0.0.1:8000

  cache c1
    total-max-size 4
    max-age 10

Httpterm was running on port 8000.

And it was stressed this way :

  $ inject -o 1 -u 500 -P 1 -G '127.0.0.1:8888/?s=4097&p=1&x=%s'
  ... wait 5 seconds then Ctrl-C ...
  # wait 3 seconds doing nothing
  $ inject -o 1 -u 500 -P 1 -G '127.0.0.1:8888/?s=4097&p=1&x=%s'
  => segfault

Other values don't work well. The size and the small pieces in the
responses (p=1) are critical to make it work.

Here the fix consists in pre-zeroing object->eb.key AND object->eb.leaf_p
just after the object is allocated so as to stay consistent with other
locations. Ideally this could be simplified later by only relying on
eb->node.leaf_p everywhere since in the end the key alone is not a
reliable indicator, so that we use only one indicator of being part of
the tree or not.

This fix needs to be backported to 1.8.
2018-04-06 19:02:25 +02:00
Christopher Faulet
caf2feca62 MINOR: spoe: Add counters to log info about SPOE agents
In addition to metrics about time spent in the SPOE, the following counters
have been added:

  * applets : number of SPOE applets.
  * idles : number of idle applets.
  * nb_sending : number of streams waiting to send data.
  * nb_waiting : number of streams waiting for an ack.
  * nb_processed : number of events/groups processed by the SPOE (from the
                   stream point of view).
  * nb_errors : number of errors during the processing (from the stream point of
                view).

Log messages have been updated to report these counters. The following pattern
has been added at the end of the log message:

    ... <idles>/<applets> <nb_sending>/<nb_waiting> <nb_error>/<nb_processed>
2018-04-05 15:13:54 +02:00
Christopher Faulet
3b8e34902b MINOR: spoe: use agent's logger to log SPOE messages
Instead of using the logger of the stream, we now use the dedicated logger of
the SPOE. This means a logger should be defined.
2018-04-05 15:13:54 +02:00
Christopher Faulet
0e0f085a73 MINOR: spoe: Add support for option dontlog-normal in the SPOE agent section
It does the same as for proxies.
2018-04-05 15:13:54 +02:00
Christopher Faulet
7250b8fb5c MINOR: spoe: Add loggers dedicated to the SPOE agent
Now it is possible to configure a logger in a spoe-agent section using a "log"
line, as for a proxy. "no log", "log global" and "log <address> ..." syntaxes
are supported.
2018-04-05 15:13:54 +02:00
Christopher Faulet
28ac099907 MINOR: log: Keep the ref when a log server is copied to avoid duplicate entries
With "log global" line, the global list of loggers are copied into the proxy's
struct. The list coming from the default section is also copied when a frontend
or a backend section is parsed. So it is possible to have duplicate entries in
the proxy's list. For instance, with this following config, all messages will be
logged twice:

    global
        log 127.0.0.1 local0 debug
        daemon

    defaults
        mode   http
        log    global
        option httplog

    frontend front-http
        log global
        bind *:8888
        default_backend back-http

    backend back-http
        server www 127.0.0.1:8000
2018-04-05 15:13:54 +02:00
Christopher Faulet
4b0b79dd56 MINOR: log: move 'log' keyword parsing in dedicated function
Now, the function parse_logsrv should be used to parse a "log" line. This
function will update the list of loggers passed as an argument. It can release
all log servers when a "no log" line was parsed (by the caller) or it can parse
"log global" or "log <address> ..." lines. It takes care of checking the caller
context (global or not) to prohibit "log global" usage in the global section.
2018-04-05 15:13:54 +02:00
Christopher Faulet
36bda1cd4a MINOR: spoe: Add options to store processing times in variables
"set-process-time" and "set-total-time" options have been added to store
processing times in the transaction scope, at each event and group processing,
the current one and the total one. So it is possible to get them.

TODO: documentation
2018-04-05 15:13:54 +02:00
Christopher Faulet
b2dd1e034c MINOR: spoe: Add metrics in to know time spent in the SPOE
The following metrics are added for each event or group of messages processed
in the SPOE:

  * processing time: the delay to process the event or the group. From the
                     stream point of view, it is the latency added by the SPOE
                     processing.
  * request time : It is the encoding time. It includes ACLs processing, if
                   any. For fragmented frames, it is the sum of all fragments.
  * queue time : the delay before the request gets out the sending queue. For
                 fragmented frames, it is the sum of all fragments.
  * waiting time: the delay before the response is received. No fragmentation
                  supported here.
  * response time: the delay to process the response. No fragmentation supported
                   here.
  * total time: (unused for now). It is the sum of all events or groups
                processed by the SPOE for a specific thread.

Log messages have been updated. Before, only errors were logged (status_code !=
0). Now every processing is logged, following this format:

  SPOE: [AGENT] <TYPE:NAME> sid=STREAM-ID st=STATUS-CODE reqT/qT/wT/resT/pT

where:

  AGENT              is the agent name
  TYPE               is EVENT or GROUP
  NAME               is the event or the group name
  STREAM-ID          is an integer, the unique id of the stream
  STATUS-CODE        is the processing's status code
  reqT/qT/wT/resT/pT are the delays described above

For all these delays, -1 means the processing was interrupted before the end. So
-1 for the queue time means the request was never dequeued. For fragmented
frames it is harder to know when the interruption happened.

For now, messages are logged using the same logger as the backend of the
stream which initiated the request.
2018-04-05 15:13:53 +02:00
Christopher Faulet
879dca9a76 BUG/MINOR: spoe: Don't forget to decrement fpa when a processing is interrupted
In async or pipelining mode, we count the number of NOTIFY frames sent that are
waiting for their corresponding ACK frames. This is a way to evaluate the "load"
of a SPOE applet. For pipelining mode, it is easy to make the link between a
NOTIFY frame and its ACK one, because exchanges are done using the same TCP
connection.

For async mode, it is harder because an ACK frame can be received on a different
connection than the one that sent the NOTIFY frame. So to decrement the fpa of
the right applet, we need to keep it in the SPOE context. Most of the time, it
works, except when the processing is interrupted by the stream, because of a
timeout.

This patch fixes this issue. If a SPOE applet is still linked to a SPOE context
when the processing is interrupted by the stream, the applet's fpa is
decremented. This is only done for unfragmented frames.
2018-04-05 15:13:53 +02:00
Christopher Faulet
b7426d1562 BUG/MINOR: spoe: Register the variable to set when an error occurred
Variables referenced in HAProxy's configuration file are registered during the
configuration parsing (during parsing of "var", "set-var" or "unset-var"
keywords). For the SPOE, you can use the "register-var-names" directive to
explicitly register variable names. All unknown variables will be rejected
(unless you set the "force-set-var" option). But the variable set when an error
occurs (when the "set-on-error" option is defined) should also be registered by
default. This is done with this patch.
2018-04-05 15:13:53 +02:00
Christopher Faulet
ac580608d7 BUG/MINOR: spoe: Don't release the context buffer in .check_timeouts callback
It is better to let spoe_stop_processing release this buffer because, in the
.check_timeouts callback, we lack the information to know whether it should be
released or not. For instance, if the processing timeout is reached while the
SPOE applet receives the reply, it is preferable to ignore the timeout and
process the result.

This patch should be backported to 1.8.
2018-04-05 15:13:53 +02:00
Christopher Faulet
84c844eb12 BUG/MINOR: spoe: Initialize variables used during conf parsing before any check
Some initializations must be done at the beginning of parse_spoe_flt to avoid
a segmentation fault when the first errors are caught, when the "filter spoe"
line is parsed.

This patch must be backported to 1.8.
[cf: the variable "curvars" doesn't exist in 1.8. So the patch must be adapted.]
2018-04-05 15:13:53 +02:00
Willy Tarreau
5bd37fa625 BUG/MAJOR: cache: fix random crashes caused by incorrect delete() on non-first blocks
Several segfaults were reported in the cache, each time in eb_delete()
called from cache_free_blocks() itself called from shctx_row_reserve_hot().
Each time the tree node was corrupted with random cached data (often JS or
HTML contents).

The problem comes from an incompatibility between the cache's expectations
and the recycling algorithm used in the shctx. The shctx allocates and
releases a chain of blocks at once. And when it needs to allocate N blocks
from the avail list while a chain of M>N is found, it picks the first N
from the list, moves them to the hot list, and marks all remaining M-N
blocks as isolated blocks (chains of 1).

For each such released block, the shctx->free_block() callback is used
and passed a pointer to the first and current block of the chain. For
the cache, it's cache_free_blocks(). What this function does is check
that the current block is the first one, and in this case delete the
object from the tree and mark it as not in tree by setting key to zero.

The problem this causes is that the tail blocks when M>N become first
blocks for the next call to shctx_row_reserve_hot(), these ones will
be passed to cache_free_blocks() as list heads, and will be sent to
eb_delete() despite containing only cached data.

The simplest solution for now is to mark each block as holding no cache
object by setting key to zero all the time. It keeps the principle used
elsewhere in the code. The SSL code is not subject to this problem
because it relies on the block's len not being null, which happens
immediately after a block was released. It was uncertain however whether
this method is suitable for the cache. It is not critical though since
this code is going to change soon in 1.9 to dynamically allocate only
the number of required blocks.

This fix must be backported to 1.8. Thanks to Thierry for providing
exploitable cores.
2018-04-04 20:17:03 +02:00
Willy Tarreau
afe1de5d98 BUG/MINOR: cache: fix "show cache" output
The "show cache" command used to dump the header for each entry into into
the handler loop, making it repeated every ~16kB of output data. Additionally
chunk_appendf() was used instead of chunk_printf(), causing the output to
repeat already emitted lines, and the output size to grow in O(n^2). It used
to take several minutes to report tens of millions of objects from a small
cache containing only a few thousands. There was no more impact though.

This fix must be backported to 1.8.
2018-04-04 11:56:43 +02:00
Christopher Faulet
b797ae1f15 BUG/MINOR: email-alert: Set the mailer port during alert initialization
Since the commit 2f3a56b4f ("BUG/MINOR: tcp-check: use the server's service port
as a fallback"), email alerts stopped working because the mailer's port was
overridden by the server's port. Remember, email alerts are defined as checks
with specific tcp-check rules and triggered on demand to send alerts. So to send
an email, a check is executed. Because no specific port was defined, the
server's one was used.

To fix the bug, the ports used for checks attached to an email alert are
explicitly set using the mailer's port. So this port will be used instead of the
server's one.

In this patch, the assignment of a default port (587) when an email alert is
defined has been removed. Indeed, when a mailer is defined, the port must be
defined. So the default port was never used.

This patch must be backported to 1.8.
2018-04-04 10:36:50 +02:00
Olivier Houchard
8ef1a6b0d8 BUG/MINOR: fd: Don't clear the update_mask in fd_insert.
Clearing the update_mask bit in fd_insert may lead to duplicate insertion
of fd in fd_updt, that could lead to a write past the end of the array.
Instead, make sure the update_mask bit is cleared by the pollers no matter
what.

This should be backported to 1.8.
[wt: warning: 1.8 doesn't have the lockless fdcache changes and will
 require some careful changes in the pollers]
2018-04-03 19:38:15 +02:00
Willy Tarreau
2500fc2c34 BUG/MINOR: checks: check the conn_stream's readiness and not the connection
Since commit 9aaf778 ("MAJOR: connection : Split struct connection into
struct connection and struct conn_stream."), the checks use a conn_stream
and not directly the connection anymore. However wake_srv_chk() still used
to verify the connection's readiness instead of the conn_stream's. Due to
the existence of a mux, the connection is always waiting for receiving
something, and doesn't reflect the changes made in event_srv_chk_{r,w}(),
causing the connection appear as not ready yet, and the check to be
validated only after its timeout. The difference is only visible when
sending pure TCP checks, and simply adding a "tcp-check connect" line
is enough to work around it.

This fix must be backported to 1.8.
2018-04-03 19:31:38 +02:00
Willy Tarreau
b2e290acb6 BUG/MEDIUM: h2: always add a stream to the send or fctl list when blocked
When a stream blocks on a mux buffer full/unallocated or on connection
flow control, a flag among H2_SF_MUX_M* is set, but the stream is not
always added to the connection's list. It's properly done when the
operations are performed from the connection handler but not always when
done from the stream handler. For instance, a simple shutr or shutw may
fail by lack of room. If it's immediately followed by a call to h2_detach(),
the stream remains lying around in no list at all, and prevents the
connection from ending. This problem is actually quite difficult to
trigger and seems to require some large objects and low server-side
timeouts.

This patch covers all identified paths. Some are redundant but since the
code will change and will be simplified in 1.9, it's better to stay on
the safe side here for now. It must be backported to 1.8.
2018-03-30 17:43:49 +02:00
Willy Tarreau
1a1dd6066f BUG/MINOR: h2: remove accidental debug code introduced with show_fd function
Commit e3f36cd ("MINOR: h2: implement a basic "show_fd" function")
accidentally brought in a surrounding debugging part that was in the same
context. No backport needed.
2018-03-30 17:41:19 +02:00
Willy Tarreau
c754b343a2 MINOR: cli: report cache indexes in "show fd"
Instead of just indicating "cache={0,1}" we now report cache.next and
cache.prev since they are the ones used with the lockless fd cache.
2018-03-30 15:00:15 +02:00
Willy Tarreau
e3f36cd479 MINOR: h2: implement a basic "show_fd" function
The purpose here is to dump some information regarding an H2 connection,
and a few statistics about its streams. The output looks like this :

     35 : st=0x55(R:PrA W:PrA) ev=0x00(heopi) [lc] cache=0 owner=0x7ff49ee15e80 iocb=0x588a61(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x00201366 fe=decrypt mux=H2 mux_ctx=0x7ff49ee16f30 st0=2 flg=0x00000002 fctl_cnt=0 send_cnt=33 tree_cnt=33 orph_cnt=0

- st0 is the connection's state (FRAME_H here)
- flg is the connection's flags (MUX_MFULL here)
- fctl_cnt is the number of streams in the fctl_list
- send_cnt is the number of streams in the send_list
- tree_cnt is the number of streams in the streams_by_id tree
- orph_cnt is the number of orphaned streams (cs==0) in the tree
2018-03-30 14:43:13 +02:00
Willy Tarreau
b011d8f4c4 MINOR: mux: add a "show_fd" function to dump debugging information for "show fd"
This function will be called from the CLI's "show fd" command to append some
extra mux-specific information that only the mux handler can decode. This is
supposed to help collect various hints about what is happening when facing
certain anomalies.
2018-03-30 14:41:19 +02:00
Willy Tarreau
e96e61cadc BUILD/MINOR: threads: always export thread_sync_io_handler()
Otherwise it doesn't build again without threads.
2018-03-29 18:54:33 +02:00
Willy Tarreau
3041fcc2fd BUG/MEDIUM: h2: don't consider pending data on detach if connection is in error
Interrupting an h2load test shows that some connections remain active till
the client timeout. This is due to the fact that h2_detach() immediately
returns if the h2s flags indicate that the h2s is still waiting for some
buffer room in the output mux (possibly to emit a response or to send some
window updates). If the connection is broken, these data will never leave
and must not prevent the stream from being terminated nor the connection
from being released.

This fix must be backported to 1.8.
2018-03-29 15:41:32 +02:00
Willy Tarreau
0975f11d55 BUG/MEDIUM: h2/threads: never release the task outside of the task handler
Currently, h2_release() will release all resources assigned to the h2
connection, including the timeout task if any. But since the multi-threaded
scheduler, the timeout task could very well be queued in the thread-local
list of running tasks without any way to remove it, so task_delete() will
have no effect and task_free() will cause this undefined object to be
dereferenced.

In order to prevent this from happening, we never release the task in
h2_release(), instead we wake it up after marking its context NULL so that
the task handler can release the task.

Future improvements could consist in modifying the scheduler so that a
task_wakeup() has to be done on any task having to be killed, letting
the scheduler take care of it.

This fix must be backported to 1.8. This bug was apparently not reported
so far.
2018-03-29 15:22:59 +02:00
Willy Tarreau
71049cce3f MINOR: h2: fuse h2s_detach() and h2s_free() into h2s_destroy()
Since these two functions are always used together, let's simplify
the code by having a single one for both operations. It also ensures
we don't leave wandering elements that risk leaking later.
2018-03-29 13:22:15 +02:00
Willy Tarreau
e323f3458c MINOR: h2: always call h2s_detach() in h2_detach()
The code is safer and more robust this way, it avoids multiple paths.
This is possible due to the idempotence of LIST_DEL() and eb32_delete()
that are called in h2s_detach().
2018-03-29 13:22:15 +02:00
Willy Tarreau
4a333d3d53 BUG/MAJOR: h2: remove orphaned streams from the send list before closing
Several people reported very strange occasional crashes when using H2.
Every time it appeared that either an h2s or a task was corrupted. The
outcome is that a missing LIST_DEL() when removing an orphaned stream
from the list in h2_wake_some_streams() can cause this stream to
remain present in the send list after it was freed. This may happen
when receiving a GOAWAY frame for example. In the mean time the send
list may be processed due to pending streams, and the just released
stream is still found. If due to a buffer full condition we left the
h2_process_demux() loop before being able to process the pending
stream, the pool entry may be reassigned somewhere else. Either another
h2 connection will get it, or a task, since they are the same size and
are shared. Then upon next pass in h2_process_mux(), the stream is
processed again. Either it crashes here due to modifications, or the
contents are harmless to it and its last changes affect the other object
reassigned to this area (typically a struct task). In the case of a
collision with struct task, the LIST_DEL operation performed on h2s
corrupts the task's wait queue's leaf_p pointer, thus all the wait
queue's structure.

The fix consists in always performing the LIST_DEL in h2s_detach().
It will also make h2s_stream_new() more robust against a possible
future situation where stream_create_from_cs() could have sent data
before failing.

Many thanks to all the reporters who provided extremely valuable
information, traces and/or cores, namely Thierry Fournier, Yves Lafon,
Holger Amann, Peter Lindegaard Hansen, and discourse user "slawekc".

This fix must be backported to 1.8. It is probably better to also
backport the following code cleanups with it as well to limit the
divergence between master and 1.8-stable :

  00dd078 CLEANUP: h2: rename misleading h2c_stream_close() to h2s_close()
  0a10de6 MINOR: h2: provide and use h2s_detach() and h2s_free()
2018-03-29 13:22:15 +02:00
Willy Tarreau
a833cd90b2 BUILD/MINOR: cli: fix a build warning introduced by last commit
Commit 35b1b48 ("MINOR: cli: make "show fd" report the mux and mux_ctx
pointers when available") introduced an accidental build warning due to
a missing const qualifier.
2018-03-29 13:19:37 +02:00
Willy Tarreau
35b1b48c75 MINOR: cli: make "show fd" report the mux and mux_ctx pointers when available
This is handy to quickly distinguish H2 connections as well as to easily
access the h2c context. It could be backported to 1.8 to help during
troubleshooting sessions.
2018-03-28 18:41:30 +02:00
Willy Tarreau
4037a3f904 MINOR: cli/threads: make "show fd" report thread_sync_io_handler instead of "unknown"
The output was confusing when the sync point's dummy handler was shown.

This patch should be backported to 1.8 to help with troubleshooting.
2018-03-28 18:06:47 +02:00
Willy Tarreau
a7394e1b72 BUG/MINOR: hpack: fix harmless use of uninitialized value in hpack_dht_insert
A warning is reported here by valgrind on the first pass in hpack_dht_insert().
The cause is that the not-yet-initialized dht->head is checked in
hpack_dht_get_tail(), though the result is not used, making it have
no impact. At the very least it confuses valgrind, and maybe it makes
it harder for gcc to optimize the code path. Let's move the variable
initialization around to shut it up. Thanks to Olivier for reporting
this one.

This fix may be backported to 1.8 at least to make valgrind usage less
painful.
2018-03-27 20:05:13 +02:00
Mark Lakes
56cc12509c MINOR: lua: allow socket api settimeout to accept integers, float, and doubles
Instead of hlua_socket_settimeout() accepting only integers, allow the user
to specify floats and doubles as well. Convert to milliseconds much like
cli_parse_set_timeout, but also sanity check the value.
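
A sketch of the seconds-to-milliseconds conversion with basic sanity checks
(hypothetical helper, not haproxy's exact code):

  #include <limits.h>
  #include <math.h>
  #include <stdio.h>

  static int seconds_to_ms(double seconds, int *ms)
  {
      if (isnan(seconds) || seconds < 0.0) /* reject NaN and negative timeouts */
          return -1;
      if (seconds > INT_MAX / 1000.0)      /* would overflow an int of milliseconds */
          return -1;
      *ms = (int)(seconds * 1000.0);
      return 0;
  }

  int main(void)
  {
      int ms;
      if (seconds_to_ms(0.5, &ms) == 0)
          printf("0.5s -> %d ms\n", ms);   /* 500 */
      return 0;
  }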

http://w3.impa.br/~diego/software/luasocket/tcp.html#settimeout

T. Fournier edit:

The main goal is to keep compatibility with the LuaSocket API. This
API only accepts seconds, so using a float to specify milliseconds is
an acceptable way.

Update doc.
2018-03-27 14:17:02 +02:00
Ilya Shipitsin
7741c854cd BUILD/MINOR: fix build when USE_THREAD is not defined
src/queue.o: In function `pendconn_redistribute':
/home/ilia/haproxy/src/queue.c:272: undefined reference to `thread_want_sync'
src/queue.o: In function `pendconn_grab_from_px':
/home/ilia/haproxy/src/queue.c:311: undefined reference to `thread_want_sync'
src/queue.o: In function `process_srv_queue':
/home/ilia/haproxy/src/queue.c:184: undefined reference to `thread_want_sync'
collect2: error: ld returned 1 exit status
make: *** [Makefile:900: haproxy] Error 1

To be backported to 1.8.
2018-03-26 17:17:59 +02:00
Mark Lakes
22154b437d CLEANUP: lua: typo fix in comments
Some typo fixes in comments.
2018-03-26 11:12:41 +02:00
Thierry Fournier
17a921b799 BUG/MINOR: lua: function hlua_socket_settimeout doesn't check negative values
Negative timeouts don't make sense. A negative timeout doesn't cause
a crash, but the connection expires before the system tries to establish it.

This patch should be backported to all versions from 1.6.
2018-03-26 11:11:49 +02:00
Thierry Fournier
e9636f192a BUG/MINOR: lua: the function returns anything
The output of this function indicates that one element is pushed onto
the stack, but no element is set on the stack. Actually, if anyone
reads the value returned by this function, they get "something"
present in the stack.

This patch is a complement to this one: 119a5f10e4

The LuaSocket documentation doesn't say anything about the returned value,
but the actual code sets an integer of value one.

   316a9455b9/src/timeout.c (L172)

Thanks to Tim for the bug report.

This patch should be backported to all versions from 1.6.
2018-03-26 11:11:23 +02:00
Ilya Shipitsin
f93f0935c9 CLEANUP: map, stream: remove duplicate code in src/map.c, src/stream.c
issue was identified by cppcheck

[src/map.c:372] -> [src/map.c:376]: (warning) Variable 'appctx->st2' is reassigned a value before the old one has been used. 'break;' missing?
[src/map.c:433] -> [src/map.c:437]: (warning) Variable 'appctx->st2' is reassigned a value before the old one has been used. 'break;' missing?
[src/map.c:555] -> [src/map.c:559]: (warning) Variable 'appctx->st2' is reassigned a value before the old one has been used. 'break;' missing?
[src/stream.c:3264] -> [src/stream.c:3268]: (warning) Variable 'appctx->st2' is reassigned a value before the old one has been used. 'break;' missing?

Signed-off-by: Ilya Shipitsin <chipitsine@gmail.com>
2018-03-23 18:00:09 +01:00
Christopher Faulet
fe234281d6 BUG/MINOR: listener: Don't decrease actconn twice when a new session is rejected
When a freshly created session is rejected, for any reason, during the accept in
the function "session_accept_fd", the variable "actconn" is decreased twice: the
first time when the rejected session is released, then in the function
"listener_accept", because of the failure. So it is possible to have a negative
value for actconn. Note that, in this case, we will also have a negative value
for the current number of connections on the listener rejecting the session
(actconn and l->nbconn are in/decreased at the same time).

It is easy to reproduce the bug with this small configuration:

  global
      stats socket /tmp/haproxy

  listen test
      bind *:12345
      tcp-request connection reject if TRUE

A "show info" on the stat socket, after a connection attempt, will show a very
high value (the unsigned representation of -1).

To fix the bug, if the function "session_accept_fd" returns an error, it
decrements the right counters and "listener_accept" leaves them untouched.

This patch must be backported in 1.8.
2018-03-23 16:21:50 +01:00
Willy Tarreau
8adae7c15f BUG/MINOR: h2: ensure we can never send an RST_STREAM in response to an RST_STREAM
There are some corner cases where this could happen by accident. Since
the spec explicitly forbids this (RFC7540#5.4.2), let's add a test in
the only two functions which emit the RST, to avoid this. Thanks to user
klzgrad for reporting this problem. Usually it is expected to be harmless
but it may result in browsers issuing a warning.

This fix must be backported to 1.8.
2018-03-22 17:37:05 +01:00
Willy Tarreau
d1023bbab3 BUG/MEDIUM: h2: properly account for DATA padding in flow control
Recent fixes made to process partial frames broke the flow control on
DATA frames, as the padding is not considered anymore, only the actual
data is. Let's simply take the padding into account once the transfer
ends. The probability of meeting this bug is low because, when used, padding
is small and it can require a large number of padded transfers before the
window is completely depleted.

Thanks to user klzgrad for reporting this bug and confirming the fix.

This fix must be backported to 1.8.
2018-03-22 16:53:12 +01:00
Emmanuel Hocdet
50791a7df3 MINOR: samples: add crc32c converter
This patch adds support for CRC32c (RFC 4960).
2018-03-21 16:17:00 +01:00
Emmanuel Hocdet
115df3e38e MINOR: accept-proxy: support proxy protocol v2 CRC32c checksum
When a proxy protocol v2 CRC32c TLV is received, check it before accepting
the connection (as described in "doc/proxy-protocol.txt").
2018-03-21 05:04:01 +01:00
Emmanuel Hocdet
4399c75f6c MINOR: proxy-v2-options: add crc32c
This patch adds the option crc32c (PP2_TYPE_CRC32C) to proxy protocol v2.
It computes the checksum of the proxy protocol v2 header as described in
"doc/proxy-protocol.txt".
2018-03-21 05:04:01 +01:00
Emmanuel Hocdet
6afd898988 MINOR: hash: add new function hash_crc32c
This function will be used to perform CRC32c computations. This is
required to compute proxy protocol v2 CRC32C tlv (PP2_TYPE_CRC32C).
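
A bit-by-bit reference sketch of CRC32c (Castagnoli, reflected polynomial
0x82F63B78); haproxy's actual implementation may differ (e.g. table driven):

  #include <stdint.h>
  #include <stddef.h>
  #include <stdio.h>

  static uint32_t crc32c(const void *buf, size_t len)
  {
      const uint8_t *p = buf;
      uint32_t crc = 0xFFFFFFFFu;

      while (len--) {
          crc ^= *p++;
          for (int i = 0; i < 8; i++)
              crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1));
      }
      return crc ^ 0xFFFFFFFFu;
  }

  int main(void)
  {
      /* well-known check value: CRC32c("123456789") == 0xE3069283 */
      printf("%08X\n", crc32c("123456789", 9));
      return 0;
  }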
2018-03-21 05:04:01 +01:00
Willy Tarreau
c98aebcdb8 MINOR: log: stop emitting alerts when it's not possible to write on the socket
This is a recurring pain when using certain unix domain sockets or when
sending to temporarily unroutable addresses: if the process remains in
the foreground, the console is full of errors which it's impossible to
do anything about. It's even worse when the process is remote, or when
run from a serial console which will slow the whole process down. Let's
send them only once now to warn about a possible config issue, and not
pollute the system nor slow everything down.
2018-03-20 16:44:25 +01:00
Christopher Faulet
fd83f0bfa4 BUG/MEDIUM: threads/queue: wake up other threads upon dequeue
The previous patch about queues (5cd4bbd7a "BUG/MAJOR: threads/queue: Fix
thread-safety issues on the queues management") revealed a performance drop when
multithreading is enabled (nbthread > 1). This happens when pending connections
handled by other threads are dequeued. If these other threads are blocked in the
poller, we have to wait for the poller's timeout (or any I/O event) to process
the dequeued connections.

To fix the problem, at least temporarily, we "wake up" the threads by requesting
a synchronization. It may seem a bit overkill to use the sync point to do a
wakeup on threads, but it fixes this performance issue. So we can now think
calmly about the right way to address this kind of issue.

This patch should be backported to 1.8 with the commit 5cd4bbd7a ("BUG/MAJOR:
threads/queue: Fix thread-safety issues on the queues management").
2018-03-19 22:16:58 +01:00
Baptiste Assmann
2f3a56b4ff BUG/MINOR: tcp-check: use the server's service port as a fallback
When running tcp-check scripts, one must ensure we can establish a tcp
connection first.
When doing this action, HAProxy needs a TCP port configured either on
the server or on the check itself or on the connect rule itself.
For some reason, the connect code did not evaluate the service port on
the server structure...

This patch fixes this error.

Backport status: 1.8
2018-03-19 13:55:55 +01:00
Baptiste Assmann
248f1173f2 BUG/MEDIUM: tcp-check: single connect rule can't detect DOWN servers
When tcpcheck is used to do TCP port monitoring only and the script is
composed of a single "tcp-check connect" rule (whatever port and ssl
options enabled), then the server can't be seen as DOWN.
Simple configuration to reproduce:

  backend b
    [...]
    option tcp-check
    tcp-check connect
    server s1 127.0.0.1:22 check

The main reason for this issue is that the piece of code which validates
that we're not at the end of the chained list (of rules) prevents
executing the validation of the establishment of the TCP connection.
Since validation is not executed, the rule is terminated and the report
says no errors were encountered, hence the server is UP all the time.

The workaround is simple: move the connection validation outside the
CONNECT rule processing loop, into the main function.
That way, if the connection status is not CONNECTED, then HAProxy will
now allow more time to wait for it. If the time expires, an error is
now properly reported.

Backport status: 1.8
2018-03-19 13:53:59 +01:00
Thierry FOURNIER
2986c0db88 CLEANUP: lua/syntax: lua is a name and not an acronym
This patch fixes the capitalization of the first letter in some Lua messages.
2018-03-19 12:59:26 +01:00
Thierry FOURNIER
fd1e955a56 BUG/MINOR: lua: return bad error messages
The returned type is the type of the top of stack value and
not the type of the checked argument.

[wt: this can be backported to 1.8, 1.7 and 1.6]
2018-03-19 12:59:19 +01:00
Bernard Spil
13c53f8cc2 BUILD: ssl: Fix build with OpenSSL without NPN capability
OpenSSL can be built without NEXTPROTONEG support by passing
-no-npn to the configure script. This sets the
OPENSSL_NO_NEXTPROTONEG flag in opensslconf.h.

Since NEXTPROTONEG is now considered deprecated and is superseded
by ALPN (Application-Layer Protocol Negotiation), HAProxy should
allow building without NPN support.
2018-03-19 12:43:15 +01:00
Aurélien Nephtali
6a61e968ac BUG/MINOR: cli: Fix a crash when sending a command with too many arguments
This bug was introduced in 48bcfdab2 ("MEDIUM: dumpstat: make the CLI
parser understand the backslash as an escape char").

This should be backported to 1.8.

Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
2018-03-19 12:15:38 +01:00
Aurélien Nephtali
6e8a41d8fc BUG/MINOR: cli: Ensure all command outputs end with a LF
Since 200b0fac ("MEDIUM: Add support for updating TLS ticket keys via
socket"), 4147b2ef ("MEDIUM: ssl: basic OCSP stapling support."),
4df59e9 ("MINOR: cli: add socket commands and config to prepend
informational messages with severity") and 654694e1 ("MEDIUM: stats/cli:
add support for "set table key" to enter values"), commands
'set ssl tls-key', 'set ssl ocsp-response', 'set severity-output' and
'set table' do not always send an extra LF at the end of their outputs.

This is required as mentioned in doc/management.txt:

"Since multiple commands may be issued at once, haproxy uses the empty
line as a delimiter to mark an end of output for each command"

Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
2018-03-19 12:13:02 +01:00
Olivier Houchard
33e083c92e BUG/MINOR: seamless reload: Fix crash when an interface is specified.
When doing a seamless reload, while receiving the sockets from the old process,
the new process will die if the socket has been bound to a specific
interface.
This happens because the code that tries to parse the information bogusly
tries to set xfer_sock->namespace, while it should be setting xfer_sock->iface.

This should be backported to 1.8.
2018-03-19 12:10:53 +01:00
Ilya Shipitsin
210eb259bf CLEANUP: dns: remove duplicate code in src/dns.c
issue was identified by cppcheck

[src/dns.c:2037] -> [src/dns.c:2041]: (warning) Variable 'appctx->st2' is reassigned a value before the old one has been used. 'break;' missing?
2018-03-19 12:09:16 +01:00
Baptiste Assmann
1fa7d2acce BUG/MINOR: dns: don't downgrade DNS accepted payload size automatically
Automatic downgrade of the DNS accepted payload size may have undesired side
effects, which could leave a backend with all servers DOWN.

After talking with Lukas on the ML, I realized this "feature" introduces
more issues than it fixes.
The "best" way to properly handle big responses will be to implement DNS
over TCP.

To be backported to 1.8.
2018-03-19 11:41:52 +01:00