117 Commits

Author SHA1 Message Date
Christopher Faulet
102854cbba BUG/MEDIUM: listener: Fix how unlimited number of consecutive accepts is handled
There is a bug when global.tune.maxaccept is set to -1 (no limit). It is pretty
visible with one process (nbproc set to 1). The functions listener_accept() and
accept_queue_process() don't expect to handle negative maxaccept values. So
instead of accepting incoming connections without any limit, none are ever
accepted and HAProxy loops infinitely in the scheduler.

When there are 2 or more processes, the bug is a bit more subtle. The limit for
a listener is set to 1, so only one connection is accepted at a time by a given
listener. This happens because the listener's maxaccept value is an unsigned
integer. In check_config_validity(), it is first set to UINT_MAX (-1 cast to
an unsigned integer), and then some calculations on it lead to an integer
overflow.

To fix the bug, the listener's maxaccept value is now a signed integer. So, if a
negative value is set for global.tune.maxaccept, we keep it untouched for the
listener and no calculation is made on it. Then, in the listener code, this
signed value is cast to an unsigned one, which simplifies all the tests by
avoiding special handling of negative values. This limits the number of
connections accepted at a time to UINT_MAX at most, which is honestly not an issue.

This patch must be backported to 1.9 and 1.8.
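As a hedged illustration of the fix's idea (hypothetical names, not the actual
HAProxy code): the configured value stays signed, and the cast to unsigned
happens only at the point of use, so -1 naturally becomes a practically
unlimited bound:

  /* sketch: a negative configured limit becomes a huge unsigned bound */
  static int cfg_maxaccept = -1;           /* -1 means "no limit" */

  static void accept_loop_sketch(void)
  {
      /* cast once at the point of use: (unsigned int)-1 == UINT_MAX */
      unsigned int max = (unsigned int)cfg_maxaccept;

      for (unsigned int i = 0; i < max; i++) {
          /* accept one incoming connection here; break on EAGAIN */
      }
  }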
2019-04-30 15:28:29 +02:00
Willy Tarreau
85d0424b20 BUG/MINOR: listener/mq: correctly scan all bound threads under low load
When iterating on the CLI using "show activity" with no other load, it
was visible that the last thread was always skipped. This was caused by
the way the thread bits were walked : t1 was updated after t2 to make
sure it never equals t2 (thus it skips t2), and in case of a tie we
choose t1. As a result the chosen thread can never equal t2 unless
the other ones already have one connection. In addition to this, t2 was
recalculated upon each pass because only the 31st bit was looked at
instead of the t2'th bit.

This patch fixes this by updating t2 after t1 so that t1 is free to
walk over all positions under equal load. No measurable performance
gains are expected from this though, but it at least removes one
strange indicator which could lead to some suspicion.

No backport is needed.
2019-04-16 18:09:13 +02:00
Willy Tarreau
64a9c05f37 MINOR: cli/listener: report the number of accepts on "show activity"
The "show activity" command reports the number of incoming connections
dispatched per thread but doesn't report the number of connections
received by each thread. It is important to be able to monitor this
value as it can show that for whatever reason a smaller set of threads
is receiving the connections and dispatching them to all other ones.
2019-04-12 15:54:15 +02:00
Willy Tarreau
0d858446b6 BUG/MINOR: listener: renice the accept ring processing task
It is not acceptable that the accept queues are handled with a normal
priority, since they are supposed to quickly dispatch the incoming
traffic; the resulting tasks will then have their respective nice
values and places in the queue. Let's renice the accept ring tasks
to -1024.

No backport is needed, this is strictly 2.0.
2019-04-12 15:54:03 +02:00
David Carlier
5671662f08 BUILD/MINOR: listener: Silence a few signedness warnings.
Silencing a couple of warnings related to signedness, due to a mismatch of
signed and unsigned ints with l->nbconn, actconn and p->feconn.
2019-03-27 17:37:44 +01:00
Willy Tarreau
57cb506df8 BUILD: listener: shut up a build warning when threads are disabled
We get this with __decl_hathreads due to the lone semi-colon; let's move
it to the end of the innermost declaration :

  src/listener.c: In function 'listener_accept':
  src/listener.c:601:2: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
2019-03-15 17:17:33 +01:00
Willy Tarreau
897e2c58e6 BUG/MEDIUM: listener: make sure we don't pick stopped threads
Dragan Dosen reported that after the multi-queue changes, appending
"process 1/even" on a bind line can make the process immediately crash
when delivering a first connection. This is due to the fact that I
believed that thread_mask(mask) applied the all_threads_mask value,
but it doesn't. And in case of even/odd the bits cover more than the
available threads, resulting in too high a thread number being selected
and a non-existent task being woken up.

No backport is needed.
2019-03-13 15:03:53 +01:00
Olivier Houchard
64213e910d MEDIUM: listeners: Use the new _HA_ATOMIC_* macros.
Use the new _HA_ATOMIC_* macros and add barriers where needed.
2019-03-11 17:02:38 +01:00
Olivier Houchard
a51885621d BUG/MEDIUM: listeners: Don't call fd_stop_recv() if fd_updt is NULL.
In do_unbind_listener, don't bother calling fd_stop_recv() if fd_updt is
NULL. It means it has already been free'd, and it would crash.
2019-03-08 16:05:31 +01:00
Willy Tarreau
0cf33176bd MINOR: listener: move thr_idx from the bind_conf to the listener
Tests show that it's slightly faster to have this field in the listener.
The cache walk patterns are under heavy stress and having only this field
written to in the bind_conf was wasting a cache line that was heavily
read. Let's move this close to the other entries already written to in
the listener. Warning, the position does have an impact on peak performance.
2019-03-07 14:08:26 +01:00
Willy Tarreau
9f1d4e7f7f CLEANUP: listener: remove old thread bit mapping
Now that the P2C algorithm for the accept queue is removed, we don't
need to map a number to a thread bit anymore, so let's remove all
these fields which are taking quite some space for no reason.
2019-03-07 13:59:04 +01:00
Willy Tarreau
0fe703bd50 MEDIUM: listener: change the LB algorithm again to use two round robins instead
At this point, the random used in the hybrid queue distribution algorithm
provides little benefit over a periodic scan, can even have a slightly
worse worst case, and it requires establishing a mapping between a
discrete number and a thread ID among a mask.

This patch introduces a different approach using two indexes. One scans
the thread mask from the left, the other one from the right. The related
threads' loads are compared, and the least loaded one receives the new
connection. Then one index is adjusted depending on the load resulting
from this election, so that we start the next election from two known
lightly loaded threads.

This approach provides an extra 1% peak performance boost over the previous
one, which likely corresponds to the removal of the extra work on the
random and the previously required two mappings of index to thread.

A test was attempted with two indexes going in the same direction but it
was much less interesting because the same thread pairs were compared most
of the time with the load climbing in a ladder-like model. With the reverse
directions this cannot happen.
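A very rough sketch of the two-cursor idea (illustrative only; the real code
operates on thread masks and moves one index according to the load observed
after each election):

  /* sketch: two cursors walking the candidate threads from opposite ends */
  static unsigned int idx_left, idx_right;

  static int pick_two_rr_sketch(const unsigned int *load, unsigned int nbthr)
  {
      unsigned int t1 = idx_left++ % nbthr;                  /* from the left  */
      unsigned int t2 = (nbthr - 1) - (idx_right++ % nbthr); /* from the right */

      /* elect the less loaded of the two candidates */
      return load[t1] <= load[t2] ? (int)t1 : (int)t2;
  }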
2019-03-07 13:57:33 +01:00
Willy Tarreau
fc630bd373 MINOR: listener: improve incoming traffic distribution
By picking two randoms following the P2C algorithm, we seldom observe
asymmetric loads on bursts of small session counts. This is typically
what makes h2load take a bit of time to complete the last 100% because
if a thread gets two connections while the other ones only have one,
it takes twice the time to complete its work.

This patch proposes a modification of the p2c algorithm which seems
more suitable to this case : it mixes a rotating index with a random.
This way, we're certain that all threads are consulted in turn and, at
the same time, we're not forced to use the ones we're giving a chance to.

This significantly increases the traffic rate. Now h2load shows faster
completion, and the average request rate on H2 and the TLS resume rate
increase by a bit more than 5% compared to pure p2c.

The index was placed into the struct bind_conf because it's faster
there and it's the best place to optimally distribute traffic among a
group of listeners. It's the only runtime-modified element there and
it will be quite cache-hot.
2019-03-07 13:48:04 +01:00
Willy Tarreau
a8cf66bcab MINOR: listener: do not needlessly set l->maxconn
It's pointless to always set and maintain l->maxconn because the accept
loop already enforces the frontend's limit anyway. Thus let's stop setting
this value by default and keep it to zero meaning "no limit". This way the
frontend's maxconn will be used by default. Of course if a value is set,
it will be enforced.
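In spirit, the resulting behaviour is the following (a sketch with simplified
names; the real fields live in struct listener and struct proxy):

  /* sketch: a zero listener maxconn means "use the frontend's limit" */
  static unsigned int effective_maxconn(unsigned int li_maxconn,
                                        unsigned int fe_maxconn)
  {
      return li_maxconn ? li_maxconn : fe_maxconn;
  }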
2019-02-28 17:05:32 +01:00
Willy Tarreau
e2711c7bd6 MINOR: listener: introduce listener_backlog() to report the backlog value
In an attempt to provide automatic maxconn settings, we need to
decorrelate a listener's backlog and maxconn so that these values can be
independent. This introduces a listener_backlog() function which retrieves
the backlog value from the listener's backlog, then the frontend's backlog,
then the listener's maxconn, then the frontend's maxconn, and finally falls
back to 1024. This corresponds to what was done in cfgparse.c to force a
value there, except for the last fallback, which was not needed there since
the frontend's maxconn is always known.
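The fallback chain can be sketched as follows (parameter names are simplified
stand-ins for the listener and frontend fields):

  /* sketch: pick the first non-zero value in the documented order */
  static int backlog_sketch(int li_backlog, int fe_backlog,
                            int li_maxconn, int fe_maxconn)
  {
      if (li_backlog)
          return li_backlog;
      if (fe_backlog)
          return fe_backlog;
      if (li_maxconn)
          return li_maxconn;
      if (fe_maxconn)
          return fe_maxconn;
      return 1024;
  }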
2019-02-28 17:05:29 +01:00
Willy Tarreau
82c9789ac4 BUG/MEDIUM: listener: make sure the listener never accepts too many conns
We were not checking p->feconn nor the global actconn soon enough. In
older versions this could result in a frontend accepting more connections
than allowed by its maxconn or the global maxconn, exactly N-1 extra
connections where N is the number of threads, provided each of these
threads was running a different listener. But with the lock removal,
it became worse : the excess could be the listener's maxconn multiplied
by the number of threads. Among the nasty side effects was that LI_FULL
could be removed while the limit was still exceeded and in some cases the
polling on the socket was not re-enabled.

This commit takes care of updating and checking p->feconn and the global
actconn *before* processing the connection, so that the listener can be
turned off before accepting the socket if needed. This requires moving
some of the bookkeeping operations from the session to the listener, which
totally makes sense in this context.

Now the limits are properly respected, even if a listener's maxconn is
over a frontend's. This only applies on top of the listener lock removal
series and doesn't have to be backported.
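The "account before accepting" idea roughly corresponds to this kind of
pattern (a sketch using C11 atomics; the real code uses HAProxy's own atomic
macros and also handles the global actconn and the listener states):

  /* sketch: reserve a connection slot before accept(), roll back on refusal */
  #include <stdatomic.h>
  #include <stdbool.h>

  static _Atomic unsigned int feconn;          /* frontend connection count */

  static bool try_reserve_conn(unsigned int fe_maxconn)
  {
      unsigned int count = atomic_fetch_add(&feconn, 1) + 1;

      if (count > fe_maxconn) {
          /* over the limit: undo the reservation and stop accepting */
          atomic_fetch_sub(&feconn, 1);
          return false;
      }
      return true;                             /* safe to accept() now */
  }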
2019-02-28 16:08:54 +01:00
Willy Tarreau
01abd02508 BUG/MEDIUM: listener: use a self-locked list for the dequeue lists
There is a very difficult to reproduce race in the listener's accept
code, which is much easier to reproduce once connection limits are
properly enforced. It's an ABBA lock issue :

  - the following functions take l->lock then lq_lock :
      disable_listener, pause_listener, listener_full, limit_listener,
      do_unbind_listener

  - the following ones take lq_lock then l->lock :
      resume_listener, dequeue_all_listener

This is because __resume_listener() only takes the listener's lock
and expects to be called with lq_lock held. The problem can easily
happen when listener_full() and limit_listener() are called a lot
while in parallel another thread releases sessions for the same
listener using listener_release() which in turn calls resume_listener().

This scenario is more prevalent in 2.0-dev since the removal of the
accept lock in listener_accept(). However in 1.9 and before, a different
but extremely unlikely scenario can happen :

      thread1                                  thread2
         ............................  enter listener_accept()
  limit_listener()
         ............................  long pause before taking the lock
  session_free()
    dequeue_all_listeners()
      lock(lq_lock) [1]
         ............................  try_lock(l->lock) [2]
      __resume_listener()
        spin_lock(l->lock) =>WAIT[2]
         ............................  accept()
                                       l->accept()
                                       nbconn==maxconn =>
                                         listener_full()
                                           state==LI_LIMITED =>
                                             lock(lq_lock) =>DEADLOCK[1]!

In practice it is almost impossible to trigger it because it requires
hitting both the listener's maxconn and the frontend's rate limit at the
same time, and releasing the listener when the connection rate goes below
the limit between the moment poll() returns the FD and the moment the lock
is taken (a few nanoseconds). But maybe with threads competing on the
same core it has more chances to appear.

This patch removes the lq_lock and replaces it with a lockless queue
for the listener's wait queue (well, technically speaking a self-locked
queue) brought by commit a8434ec14 ("MINOR: lists: Implement locked
variations.") and its few subsequent fixes. This relieves us from the
need of the lq_lock and removes the deadlock. It also gets rid of the
distinction between __resume_listener() and resume_listener() since the
only difference was the lq_lock. All listener removals from the list
are now unconditional to avoid races on the state. It's worth noting
that the list used to never be initialized and that it used to work
only thanks to the state tests, so the initialization has now been
added.

This patch must carefully be backported to 1.9 and very likely 1.8.
It is mandatory to be careful about replacing all manipulations of
l->wait_queue, global.listener_queue and p->listener_queue.
2019-02-28 16:08:54 +01:00
Willy Tarreau
7ac908bf8c MINOR: config: add global tune.listener.multi-queue setting
tune.listener.multi-queue { on | off }
  Enables ('on') or disables ('off') the listener's multi-queue accept which
  spreads the incoming traffic to all threads a "bind" line is allowed to run
  on instead of taking them for itself. This provides a smoother traffic
  distribution and scales much better, especially in environments where threads
  may be unevenly loaded due to external activity (network interrupts colliding
  with one thread for example). This option is enabled by default, but it may
  be forcefully disabled for troubleshooting or for situations where it is
  estimated that the operating system already provides a good enough
  distribution and connections are extremely short-lived.
2019-02-27 14:27:07 +01:00
Willy Tarreau
8a03408d81 MINOR: activity: add accept queue counters for pushed and overflows
It's important to monitor the accept queues to know if some incoming
connections had to be handled by their originating thread due to an
overflow. It's also important to be able to confirm thread fairness.
This patch adds "accq_pushed" to activity reporting, which reports
the number of connections that were successfully pushed into each
thread's queue, and "accq_full", which indicates the number of
connections that couldn't be pushed because the thread's queue was
full.
2019-02-27 14:27:07 +01:00
Willy Tarreau
e0e9c48ab2 MAJOR: listener: use the multi-queue for multi-thread listeners
The idea is to redistribute an incoming connection to one of the
threads a bind_conf is bound to when there is more than one. We do this
using a random improved by the p2c algorithm : a random() call returns
two different thread numbers. We then compare their respective connection
count and the length of their accept queues, and pick the least loaded
one. We even use this deferred accept mechanism if the target thread
ends up being the local thread, because this maintains fairness between
all connections and tests show that it's about 1% faster this way,
likely due to cache locality. If the target thread's accept queue is
full, the connection is accepted synchronously by the current thread.
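The election step can be pictured like this (purely illustrative; the real
implementation also weighs the accept queue lengths and falls back to a
synchronous accept when the target queue is full):

  /* sketch: pick the less loaded of two randomly chosen threads (p2c) */
  #include <stdlib.h>

  static int pick_thread_sketch(const unsigned int *conn_per_thread, int nbthr)
  {
      int t1 = rand() % nbthr;
      int t2 = rand() % nbthr;

      return conn_per_thread[t1] <= conn_per_thread[t2] ? t1 : t2;
  }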
2019-02-27 14:27:07 +01:00
Willy Tarreau
1efafce61f MINOR: listener: implement multi-queue accept for threads
There is one point where we can migrate a connection to another thread
without taking any risk : when we accept it, the new FD is not yet in
the fd cache and no task has been created yet. It's still possible to assign
it a different thread than the one which accepted the connection. The
only requirement for this is to have one accept queue per thread and
their respective processing tasks that have to be woken up each time
an entry is added to the queue.

This is a multiple-producer, single-consumer model. Entries are added
at the queue's tail and the processing task is woken up. The consumer
picks entries at the head and processes them in order. The accept queue
contains the fd, the source address, and the listener. Each entry of
the accept queue was rounded up to 64 bytes (one cache line) to avoid
cache aliasing because tests have shown that otherwise performance
suffers a lot (5%). A test has shown that it's important to have at
least 256 entries for the rings, as at 128 it's still possible to fill
them often at high loads on small thread counts.

The processing task does almost nothing except calling the listener's
accept() function and updating the global session and SSL rate counters
just like listener_accept() does on synchronous calls.

At this point the accept queue is implemented but not used.
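A sketch of what a cache-line-sized ring entry could look like (layout and
field names are assumptions made for illustration, not the actual structure):

  /* sketch: one accept-queue entry padded/aligned to a 64-byte cache line */
  #define ACCEPT_RING_SIZE 256               /* per-thread ring size */

  struct accept_entry_sketch {
      int fd;                                /* freshly accepted socket */
      void *listener;                        /* originating listener */
      unsigned char src_addr[48];            /* packed source address */
  } __attribute__((aligned(64)));

  static struct accept_entry_sketch ring_sketch[ACCEPT_RING_SIZE];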
2019-02-27 14:27:07 +01:00
Willy Tarreau
b2b50a7784 MINOR: listener: pre-compute some thread counts per bind_conf
In order to quickly pick a thread ID when accepting a connection, we'll
need to know certain pre-computed values derived from the thread mask,
which are counts of bits per position multiples of 1, 2, 4, 8, 16 and
32. In practice it is sufficient to compute only the 4 first ones and
store them in the bind_conf. We update the count every time the
bind_thread value is adjusted.

The fields in the bind_conf struct have been moved around a little bit
to make it easier to group all thread bit values into the same cache
line.

The function used to return a thread number is bind_map_thread_id(),
and it maps a number between 0 and 31/63 to a thread ID between 0 and
31/63, starting from the left.
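A naive linear version of such a mapping is shown below (scanning from bit 0
for simplicity; the actual function walks from the left and uses the
pre-computed counts to avoid the scan):

  /* sketch: return the position of the n-th set bit in 'mask' */
  static int nth_set_bit_sketch(unsigned long mask, unsigned int n)
  {
      for (unsigned int pos = 0; pos < 8 * sizeof(mask); pos++) {
          if (mask & (1UL << pos)) {
              if (n == 0)
                  return (int)pos;
              n--;
          }
      }
      return -1; /* fewer than n+1 bits are set */
  }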
2019-02-27 14:27:07 +01:00
Willy Tarreau
9e85318417 MINOR: listener: maintain a per-thread count of the number of connections on a listener
Having this information will help us improve thread-level distribution
of incoming traffic.
2019-02-27 14:27:07 +01:00
Willy Tarreau
3f0d02bbc2 MAJOR: listener: do not hold the listener lock in listener_accept()
This function used to hold the listener's lock as a way to stay safe
against concurrent manipulations, but it turns out this is wrong. First,
the lock is held during l->accept(), which itself might indirectly call
listener_release(), which, if the listener is marked full, could result
in __resume_listener() being called and the lock being taken twice. In
practice it doesn't happen right now because the listener's FULL state
cannot change while we're doing this.

Second, everything the code does is now protected against concurrent
accesses, which was not the case in the early days of threads : the
frequency counters are thread-safe, the rate limiting doesn't require
extreme precision, and only the nbconn check is not thread-safe.

Third, the parts called here will have to be called from different
threads without holding this lock, and this becomes a bigger issue
if we need to keep this one.

This patch does 3 things which need to be addressed at once :
  1) it moves the lock to the only 2 functions that were not protected
     since they are called from listener_accept() :
     - limit_listener()
     - listener_full()

  2) it makes sure delete_listener() properly checks its state within
     the lock.

  3) it updates the l->nbconn tracking to make sure that it is always
     properly reported and accounted for. There is a point of particular
     care around the situation where the listener's maxconn is reached
     because the listener has to be marked full before accepting the
     connection, then resumed if the connection finally gets dropped.
     It is not possible to perform this change without removing the
     lock due to the deadlock issue explained above.

This patch almost doubles the accept rate in multi-thread on a shared
port between 8 threads, and multiplies by 4 the connection rate on a
tcp-request connection reject rule.
2019-02-27 14:27:07 +01:00
Willy Tarreau
a36b324777 MEDIUM: listener: keep a single thread-mask and warn on "process" misuse
Now that nbproc and nbthread are exclusive, we can still provide more
detailed explanations about what we've found in the config when a bind
line appears on multiple threads and processes at the same time, then
ignore the setting.

This patch reduces the listener's thread mask to a single mask instead
of an array of masks per process. Now we have only one thread mask and
one process mask per bind-conf. This removes ~504 bytes of RAM per
bind-conf and will simplify handling of thread masks.

If a "bind" line only refers to process numbers not found by its parent
frontend or not covered by the global nbproc directive, or to a thread
not covered by the global nbthread directive, a warning is emitted saying
what will be used instead.
2019-02-27 14:27:07 +01:00
Willy Tarreau
741b4d6b7a BUG/MINOR: listener: keep accept rate counters accurate under saturation
The test on l->nbconn forces an exit from the loop before the freq
counters are updated, so the last session which reaches a listener's
limit will not be accounted for in the session rate measurement.

Let's move the test at the beginning of the loop and mark the listener
as saturated on exit.

This may be backported to 1.9 and 1.8.
2019-02-27 08:03:41 +01:00
Olivier Houchard
d16a9dfed8 BUG/MAJOR: listener: Make sure the listener exist before using it.
In listener_accept(), make sure we have a listener before attempting to
use it.
Another thread may have closed the FD meanwhile, and set fdtab[fd].owner
to NULL.
As the listener is not free'd, it is ok to attempt to accept() a new
connection even if the listener was closed. At worst the fd has been
reassigned to another connection, and accept() will fail anyway.

Many thanks to Richard Russo for reporting the problem, and suggesting the
fix.

This should be backported to 1.9 and 1.8.
2019-02-25 16:30:13 +01:00
Willy Tarreau
ff9c9140f4 MINOR: config: make MAX_PROCS configurable at build time
For some embedded systems, it's pointless to have 32- or even 64-entry
arrays of processes when it's known that far fewer processes will be
used in the worst case. Let's introduce this MAX_PROCS define which
contains the highest number of processes allowed to run at once. It
still defaults to LONGBITS but may be lowered.
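The usual shape of such a build-time override is (a sketch; LONGBITS is the
default mentioned above):

  /* sketch: allow overriding on the compiler command line, e.g. -DMAX_PROCS=4 */
  #ifndef MAX_PROCS
  #define MAX_PROCS LONGBITS
  #endif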
2019-02-07 15:10:19 +01:00
Willy Tarreau
6daac19b3f MINOR: config: simplify bind_proc processing using proc_mask()
At a number of places we used to have null tests on bind_proc for
listeners and proxies. Let's simplify all these tests by always
having the proper bits reported via proc_mask().
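In spirit, the helper collapses the null tests into something like this
(assumed names, not the exact definition):

  /* sketch: an unset (zero) bind_proc means "all processes" */
  static unsigned long proc_mask_sketch(unsigned long bind_proc,
                                        unsigned long all_proc_mask)
  {
      return bind_proc ? bind_proc : all_proc_mask;
  }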
2019-02-04 05:09:16 +01:00
Willy Tarreau
bbcf2b9e0d BUG/MINOR: threads: fix the process range of thread masks
Commit 421f02e ("MINOR: threads: add a MAX_THREADS define instead of
LONGBITS") used a MAX_THREADS macros to fix threads limits. However,
one change was wrong as it affected the upper bound of the process
loop when setting threads masks. No backport is needed.
2019-02-02 13:18:01 +01:00
Willy Tarreau
888d5678f7 BUG/MINOR: listener: always fill the source address for accepted socketpairs
The source address was not set but passed down the chain to the upper
layer's accept() calls. Let's initialize it like other UNIX sockets in
this case. At the moment it should not have any impact since socketpairs
are only usable for the master CLI.

This should be backported to 1.9.
2019-01-27 21:48:29 +01:00
Willy Tarreau
c9a82e48bf MINOR: cfgparse: make the process/thread parser support a maximum value
It was hard-wired to LONGBITS, let's make it configurable depending on the
context (threads, processes).
2019-01-26 13:25:14 +01:00
Willy Tarreau
76a551de2e MINOR: config: make sure to associate the proper mux to bind and servers
Currently a mux may be forced on a bind or server line by specifying the
"proto" keyword. The problem is that the mux may depend on the proxy's
mode, which is not known when parsing this keyword, so a wrong mux could
be picked.

Let's simply update the mux entry while checking its validity. We do have
the name and the side, we only need to see if a better mux fits based on
the proxy's mode. It also requires removing the side check while parsing
the "proto" keyword since a wrong mux could be picked.

This way it becomes possible to declare multiple muxes with the same
protocol names and different sides or modes.
2018-12-02 13:29:35 +01:00
William Lallemand
d913800a7d BUG/MEDIUM: listeners: CLOEXEC flag is not correctly set
The CLOEXEC flag was set using F_SETFL, which can't work.
To set the CLOEXEC flag, F_SETFD should be used; the problem is that this
needs an extra call to fcntl() on the path of every accept.

This flag was only needed in the case of the master, so the patch was
reverted and the flag set only in this case.

The bug was introduced by 0b3e849 ("MEDIUM: listeners: set O_CLOEXEC on
the accepted FDs").

No backport needed.
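For reference, close-on-exec is a file descriptor flag and must go through
F_SETFD, for example:

  #include <fcntl.h>

  /* set FD_CLOEXEC with F_SETFD; F_SETFL only touches file status flags */
  static int set_cloexec(int fd)
  {
      int flags = fcntl(fd, F_GETFD);

      if (flags == -1)
          return -1;
      return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
  }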
2018-11-27 19:34:00 +01:00
William Lallemand
4b58c80ee2 REORG: mworker: declare master variable in global.h
This variable is used in several places; better declare it in global.h.
2018-11-27 19:34:00 +01:00
Willy Tarreau
86abe44e42 MEDIUM: init: use self-initializing spinlocks and rwlocks
This patch replaces a number of __decl_hathread() followed by HA_SPIN_INIT
or HA_RWLOCK_INIT by the new __decl_spinlock() or __decl_rwlock() which
automatically registers the lock for initialization during the STG_LOCK
init stage. A few static modifiers were lost in the process, but since they
were not essential at all it was not worth extending the API to provide such
a variant.
2018-11-26 19:50:32 +01:00
Willy Tarreau
0108d90c6c MEDIUM: init: convert all trivial registration calls to initcalls
This switches explicit calls to various trivial registration methods for
keywords, muxes or protocols from constructors to INITCALL1 at stage
STG_REGISTER. All these calls have in common to consume a single pointer
and return void. Doing this removes 26 constructors. The following calls
were addressed :

- acl_register_keywords
- bind_register_keywords
- cfg_register_keywords
- cli_register_kw
- flt_register_keywords
- http_req_keywords_register
- http_res_keywords_register
- protocol_register
- register_mux_proto
- sample_register_convs
- sample_register_fetches
- srv_register_keywords
- tcp_req_conn_keywords_register
- tcp_req_cont_keywords_register
- tcp_req_sess_keywords_register
- tcp_res_cont_keywords_register
- flt_register_keywords
2018-11-26 19:50:32 +01:00
William Lallemand
0b3e849a48 MEDIUM: listeners: set O_CLOEXEC on the accepted FDs
Set the O_CLOEXEC flag on the accepted FDs; this is useful to avoid an FD
leak in the master process, since it re-executes itself during a reload.
2018-10-28 14:03:31 +01:00
William Lallemand
2fe7dd0b2e MEDIUM: protocol: sockpair protocol
This protocol is based on the uxst one, but it uses socketpair and FD
passing instead of a connect()/accept().

The "sockpair@" prefix has been implemented for both bind and server
keywords.

When HAProxy wants to connect through a sockpair@, it creates 2 new
sockets using the socketpair() syscall and passes one of them through
the FD specified on the server line.

On the bind side, haproxy will receive the FD and use it as if it were
the FD returned by an accept() syscall.

This protocol was designed for internal communication within HAProxy
between the master and the workers, but it's possible to use it
externally with a wrapper and pass the FD through environment variables.
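A condensed, generic sketch of the underlying mechanism, socketpair() plus
SCM_RIGHTS FD passing (standard POSIX calls, not HAProxy's actual
implementation):

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>
  #include <unistd.h>

  /* send 'fd_to_pass' to the peer connected on 'chan' */
  static int send_fd_sketch(int chan, int fd_to_pass)
  {
      char dummy = 0;
      struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
      union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
      struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                            .msg_control = u.buf,
                            .msg_controllen = sizeof(u.buf) };
      struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

      memset(u.buf, 0, sizeof(u.buf));
      cmsg->cmsg_level = SOL_SOCKET;
      cmsg->cmsg_type  = SCM_RIGHTS;             /* pass a file descriptor */
      cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

      return sendmsg(chan, &msg, 0) < 0 ? -1 : 0;
  }

  /* create a pair, keep one end locally and ship the other over 'chan' */
  static int make_pair_and_send_sketch(int chan, int *local_end)
  {
      int sv[2];

      if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
          return -1;
      *local_end = sv[0];
      if (send_fd_sketch(chan, sv[1]) < 0)
          return -1;
      close(sv[1]);                              /* the peer now owns a copy */
      return 0;
  }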
2018-09-12 07:20:17 +02:00
William Lallemand
e22f11ff47 MINOR: mworker: keep and clean the listeners
Keep the listeners that should be used in the master process and clean
them in the workers.
2018-09-11 10:23:24 +02:00
Christopher Faulet
a717b99284 MINOR: mux/frontend: Add 'proto' keyword to force the mux protocol
For now, it is parsed but not used. Tests are done on it to check if the side
and the mode are compatible with the proxy's definition.
2018-08-08 10:41:11 +02:00
Christopher Faulet
fe234281d6 BUG/MINOR: listener: Don't decrease actconn twice when a new session is rejected
When a freshly created session is rejected, for any reason, during the accept in
the function "session_accept_fd", the variable "actconn" is decreased twice: the
first time when the rejected session is released, then in the function
"listener_accept", because of the failure. So it is possible to have a
negative value for actconn. Note that, in this case, we will also have a negative
value for the current number of connections on the listener rejecting the
session (actconn and l->nbconn are increased and decreased at the same time).

It is easy to reproduce the bug with this small configuration:

  global
      stats socket /tmp/haproxy

  listen test
      bind *:12345
      tcp-request connection reject if TRUE

A "show info" on the stat socket, after a connection attempt, will show a very
high value (the unsigned representation of -1).

To fix the bug, if the function "session_accept_fd" returns an error, it
decrements the right counters and "listener_accept" leaves them untouched.

This patch must be backported in 1.8.
2018-03-23 16:21:50 +01:00
Christopher Faulet
510c0d67ef BUG/MEDIUM: threads/unix: Fix a deadlock when a listener is temporarily disabled
When a listener is temporarily disabled, we start by locking it and then we call
the .pause callback of the underlying protocol (tcp/unix). For TCP listeners, this
is not a problem. But listeners bound on a unix socket are in fact closed
instead, so the .pause callback relies on the unbind_listener function to do its job.

Unfortunately, unbind_listener holds the listener's lock and then calls an internal
function to unbind it. So there is a deadlock here, which happens during a
reload. To fix the problem, the function do_unbind_listener, which is lockless,
is now exported and is called when a listener bound on a unix socket is
temporarily disabled.

This patch must be backported in 1.8.
2018-03-16 11:19:07 +01:00
Willy Tarreau
c5532acb4d MINOR: fd: don't report maxfd in alert messages
The listeners and connectors may complain that process-wide or
system-wide FD limits have been reached and will in this case report
maxfd as the limit. This is wrong in fact since there's no reason for
the whole FD space to be contiguous when the total number of FDs is reached.
A better approach would consist in reporting the accurate number of
opened FDs, but this is pointless as what matters here is to give a
hint about what might be wrong. So let's simply report the configured
maxsock, which will generally explain why the process' limits were
reached, which is the most common reason. This removes another
dependency on maxfd.
2018-01-29 15:18:54 +01:00
Willy Tarreau
421f02e738 MINOR: threads: add a MAX_THREADS define instead of LONGBITS
This one allows not to inflate some structures when threads are
disabled. Now struct global is 1.4 kB instead of 33 kB.

Should be backported to 1.8 for ease of backporting of upcoming
patches.
2018-01-23 15:28:20 +01:00
Christopher Faulet
767a84bcc0 CLEANUP: log: Rename Alert/Warning to ha_alert/ha_warning 2017-11-24 17:19:12 +01:00
Christopher Faulet
c644fa9bf5 MINOR: config: Add threads support for "process" option on "bind" lines
It is now possible on a "bind" line (or a "stats socket" line) to specify the
thread set allowed to process the listener's connections. For instance:

    # HTTPS connections will be processed by all threads but the first, and HTTP
    # connections will be processed on the first thread.
    bind *:80 process 1/1
    bind *:443 ssl crt mycert.pem process 1/2-
2017-11-24 15:38:50 +01:00
Christopher Faulet
26028f6209 MINOR: config: Add auto-increment feature for cpu-map
The prefix "auto:" can be added before the process set to let HAProxy
automatically bind a process to a CPU by incrementing process and CPU sets. To
be valid, both sets must have the same size. No matter the declaration order of
the CPU sets, it will be bound from the lower to the higher bound.

  Examples:
      # all these lines bind the process 1 to the cpu 0, the process 2 to cpu 1
      #  and so on.
      cpu-map auto:1-4   0-3
      cpu-map auto:1-4   0-1 2-3
      cpu-map auto:1-4   3 2 1 0

      # bind each process to exactly one CPU using the all/odd/even keyword
      cpu-map auto:all   0-63
      cpu-map auto:even  0-31
      cpu-map auto:odd   32-63

      # invalid cpu-map because process and CPU sets have different sizes.
      cpu-map auto:1-4   0    # invalid
      cpu-map auto:1     0-3  # invalid
2017-11-24 15:38:49 +01:00
Christopher Faulet
f1f0c5f591 MINOR: config: Export parse_process_number and use it wherever it's applicable
This function is used when "bind-process" directive is parsed and when "process"
parameter on a "bind" or a "stats socket" line is parsed.
2017-11-24 15:38:49 +01:00
Christopher Faulet
15eb3a9a08 BUG/MINOR: listener: Allow multiple "process" options on "bind" lines
The documentation specifies that you can have several "process" options to
define several ranges on "bind" lines (or "stats socket" lines). It is uncommon,
but it should be possible. So the bind_proc bitmask in bind_conf structure must
not be overwritten at each new "process" option parsed.

This bug also exists in 1.7, 1.6 and 1.5, so it may be backported. But no one
seems to have noticed it, so it was probably never hit.
2017-11-24 15:38:49 +01:00