In tcpcheck_eval_connect(), if we're targeting a server, increase its
curr_used_conns when creating a new connection, as the counter will be
decreased later when the connection is destroyed and conn_free() is called.
In connect_server(), we want to increase curr_used_conns only if the
connection is new, or if it comes from an idle_pool, otherwise it means
the connection is already used by at least one other stream, and it is
already accounted for.
We used to have 3 thread-based arrays for toremove_lock, idle_cleanup,
and toremove_connections. The problem is that these items are small,
and that this creates false sharing between threads since it's possible
to pack up to 8-16 of these values into a single cache line. This can
cause real damage where there is contention on the lock.
This patch creates a new array of struct "idle_conns" that is aligned
on a cache line and which contains all three members above. This way
each thread has access to its variables without hindering the other
ones. Just doing this increased the HTTP/1 request rate by 5% on a
16-thread machine.
The definition was moved to connection.{c,h} since it appeared a more
natural evolution of the ongoing changes given that there was already
one of them declared in connection.h previously.
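A minimal sketch of the grouping, assuming a 64-byte cache line; member
names come from the description above while the types are simplified, the
real definition being the one in connection.{c,h}:

    /* One per-thread structure, padded to a cache line so that two threads
     * never share a line for these hot members.
     */
    #define MAX_THREADS 64                      /* illustrative */

    struct list { struct list *n, *p; };        /* haproxy-style intrusive list */
    struct task;

    struct idle_conns {
        void        *toremove_lock;             /* per-thread lock */
        struct list  toremove_connections;      /* connections to be killed */
        struct task *idle_cleanup;              /* per-thread cleanup task */
    } __attribute__((aligned(64)));             /* one cache line per thread */

    static struct idle_conns idle_conns[MAX_THREADS];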
"show sess" and particularly "show sess all" can be very slow when dumping
lots of information, and while dumping, new sessions might appear, making
the output really endless. When threads are used, this causes a double
problem:
- all threads are paused during the dump, so an overly long dump degrades
the quality of service ;
- since all threads are paused, more events get postponed, possibly
resulting in more streams to be dumped on next invocation of the dump
function.
This patch addresses this long-lasting issue by doing something simple:
the CLI's stream is moved to the end of the streams list, serving as an
identifiable marker to end the dump, because all entries past it were
added after the command was entered. As a result, the CLI's stream always
appears as the last one.
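The principle can be sketched like this (the intrusive list macros are the
ones haproxy already uses; the stream pointers and the dump helper are
illustrative names, not the actual code):

    /* Move the CLI's own stream to the tail of the global streams list,
     * then stop the walk when it is reached again: every entry located
     * after it was created after the command was entered.
     */
    LIST_DEL(&cli_strm->list);
    LIST_ADDQ(&streams, &cli_strm->list);

    list_for_each_entry(curr_strm, &streams, list) {
        if (curr_strm == cli_strm)
            break;                       /* end-of-dump marker reached */
        dump_one_stream(curr_strm);      /* hypothetical dump helper */
    }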
It may make sense to backport this to stable branches where dumping live
streams is difficult as well.
Commit cd4159f ("MEDIUM: mux_h2: Implement the takeover() method.")
added a return in the middle of the function, and as usual with such
stray return statements, some unrolling was lost. Here it's only the
TRACE_LEAVE() call, so it's mostly harmless. That's 2.2 only, no
backport is needed.
The IPv4 code did not take into account that the header value might not
contain the trailing NUL byte, possibly reading stray data after the header
value, causing the parse to fail and the IPv6 branch to be tested instead.
That branch adds the missing NUL, but fails to parse IPv4 addresses.
Fix this issue by always adding the trailing NUL.
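The approach boils down to the following sketch (hdr_ptr/hdr_len are
illustrative names for the header value and its length; the real code is
integrated into the sample fetch, not a standalone helper):

    #include <arpa/inet.h>
    #include <string.h>

    /* Parse the header value as IPv4 from a bounded, NUL-terminated copy,
     * so the parser never reads stray bytes past the value.
     */
    static int parse_hdr_ipv4(const char *hdr_ptr, size_t hdr_len,
                              struct in_addr *addr4)
    {
        char tmp[INET6_ADDRSTRLEN + 1];
        size_t len = hdr_len < sizeof(tmp) - 1 ? hdr_len : sizeof(tmp) - 1;

        memcpy(tmp, hdr_ptr, len);
        tmp[len] = '\0';                 /* the previously missing NUL */
        return inet_pton(AF_INET, tmp, addr4) == 1;
    }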
The bug was reported on GitHub as issue #715.
It's not entirely clear when this bug started appearing; possibly earlier
versions of smp_fetch_hdr guaranteed the NUL termination. However the
addition of the NUL in the IPv6 case was added together with IPv6 support,
hinting that at that point in time the NUL was not guaranteed.
The commit that added IPv6 support was 69fa99292e689e355080d83ab19db4698b7c502b
which first appeared in HAProxy 1.5. This patch should be backported to
1.5+, taking into account the various buffer / chunk changes and the movement
across different files.
Issue 23653 in oss-fuzz reports a heap overflow bug which is in fact a
bug introduced by commit 9e1758efb ("BUG/MEDIUM: cfgparse: use
parse_line() to expand/unquote/unescape config lines") to address
oss-fuzz issue 22689, which was only partially fixed by commit 70f58997f
("BUG/MINOR: cfgparse: Support configurations without newline at EOF").
Actually on an empty line, end == line so we cannot dereference end-1
to check for a trailing LF without first being sure that end is greater
than line.
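The guard amounts to this sketch:

    /* only look at end[-1] when the line is non-empty */
    if (end > line && end[-1] == '\n') {
        /* ... handle the trailing LF as before ... */
    }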
No backport is needed, this is 2.2 only.
When an event must be processed, we decide to create a new SPOE applet if there
is no idle applet at all or if the processing rate is lower than the number of
waiting events. But when the processing rate is very low (< 1 event/second), a
new applet is created independently of the number of idle applets.
Now, when there is at least one idle applet and there is only one event to
process, no new applet is created.
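Schematically the decision becomes something like this (variable and helper
names are illustrative, not the actual SPOE code):

    /* Spawn a new applet only when no idle applet can absorb the load:
     * a single pending event is always left to an existing idle applet.
     */
    if (idle_applets == 0 ||
        (waiting_events > 1 && processing_rate < waiting_events))
        spoe_create_applet();            /* hypothetical spawn helper */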
This patch is related to the issue #690.
When an informational response (1xx) is returned by HAProxy, we must be sure to
send it ASAP. To do so, the CF_SEND_DONTWAIT flag must be set on the response
channel to instruct the stream-interface to not set the CO_SFL_MSG_MORE flag on
the transport layer. Otherwise the response delivery may be delayed, because of
the commit 8945bb6c0 ("BUG/MEDIUM: stream-int: fix loss of CO_SFL_MSG_MORE flag
in forwarding").
This patch may be backported as far as 1.9, for the HTX part only. But this
part has changed in 2.2, so it may be a bit tricky. Note it does not fix any known
bug on previous versions because the CO_SFL_MSG_MORE flag is ignored by the h1
mux.
To be consistent with other processing on the channels, when HAProxy generates
a final response, the CF_EOI flag must be set on the response channel. This flag
is used to know that a full message was pushed into the channel (HTX messages
with an EOM block). It is used in conjunction with other channel flags in
stream-interface functions, especially when si_cs_send() is called, to know
whether or not the CO_SFL_MSG_MORE flag must be set. Without CF_EOI, the CO_SFL_MSG_MORE
flag is always set and the message forwarding is delayed.
This patch may be backported as far as 1.9, for the HTX part only. But this
part has changed in 2.2, so it may be a bit tricky. Note it does not fix any known
bug on previous versions because the CO_SFL_MSG_MORE flag is ignored by the h1
mux.
In HTX, since the commit 8945bb6c0 ("BUG/MEDIUM: stream-int: fix loss of
CO_SFL_MSG_MORE flag in forwarding"), the CO_SFL_MSG_MORE flag is set on the
transport layer if the end of the HTTP message is not reached, to delay the data
forwarding. To do so, the CF_EOI flag is tested and must not be set on the
output channel.
But the CO_SFL_MSG_MORE flag is also added if the message was truncated. Only
CF_SHUTR is set in this case. So the forwarding may be delayed, waiting for more
data that will never come. So, in HTX, the CO_SFL_MSG_MORE flag must not be set if
the message is finished (full or truncated).
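The corrected condition is essentially the following sketch (HTX path only,
the channel pointer name is illustrative):

    /* Keep MSG_MORE only when the message is neither complete (CF_EOI)
     * nor truncated (CF_SHUTR): in both cases no more data will come.
     */
    if (!(oc->flags & (CF_EOI | CF_SHUTR)))
        send_flag |= CO_SFL_MSG_MORE;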
No backport is needed.
When HAProxy generates a 500 response, if the formatting failed, for instance
because the message is larger than a buffer, it retries to format it in a loop. To
fix the bug, we must stop trying to send a response if it is a non-rewritable
response (TX_CONST_REPLY flag is set on the HTTP transaction).
Because this part is not trivial, some comments have been added.
No backport is needed.
This commit adds some sample fetches that were lacking on the server
side:
ssl_s_key_alg, ssl_s_notafter, ssl_s_notbefore, ssl_s_sig_alg,
ssl_s_i_dn, ssl_s_s_dn, ssl_s_serial, ssl_s_sha1, ssl_s_der,
ssl_s_version
Trailing slashes were not handled in the crt-list commands on the CLI;
supporting them is useful when the commands are used with a directory.
Strip the slashes before looking for the crtlist in the tree.
With the rework of the config line parser, we've started to emit a dump
of the initial line underlined by a caret character indicating the error
location. But with extremely large lines it starts to take time and can
even cause trouble to slow terminals (e.g. over ssh), and this becomes
useless. In addition, control characters could be dumped as-is, which is
bad, especially when the input file is accidentally wrong (an executable).
This patch adds a string sanitization function which isolates an area
around the error position in order to report only that area if the string
is too large. The limit was set to 80 characters, which results in only
roughly 40 characters around the error being reported, prefixed and suffixed
with "..." as needed. In addition, non-printable characters in the line are
now replaced with '?' so as not to corrupt the terminal. This way invalid
variable names, unmatched quotes etc will be easier to spot.
A typical output is now:
[ALERT] 176/092336 (23852) : parsing [bad.cfg:8]: forbidden first char in environment variable name at position 811957:
...c$PATH$PATH$d(xlc`%?$PATH$PATH$dgc?T$%$P?AH?$PATH$PATH$d(?$PATH$PATH$dgc?%...
^
The config parser change in commit 9e1758efb ("BUG/MEDIUM: cfgparse: use
parse_line() to expand/unquote/unescape config lines") is wrong when
displaying the last parsed word, because it doesn't verify that the output
string was properly allocated. This may fail in two cases:
- very first line (outline is NULL, as in oss-fuzz issue 23657)
- much longer line than previous ones, requiring a realloc(), in which
case the final 0 is out of the allocated space.
This patch moves the reporting after the allocation check to fix this.
No backport is needed, this is 2.2 only.
parse_line() as added in commit c8d167bcf ("MINOR: tools: add a new
configurable line parse, parse_line()") is difficult to use because
it's up to the caller to determine the last written argument
based on what was passed to it. In practice the only way to safely
use it is for the caller to always pass nbarg-1 and make that last
entry point to the last arg + its strlen. This is annoying because
it makes it as painful to use as the infamous strncpy() while it has
all the information the caller needs.
This patch changes its behavior so that it guarantees that at least
one argument will point to the trailing zero at the end of the output
string, as long as there is at least one argument. The caller just
has to pass +1 to the arg count to make sure at least the last one is
empty.
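A usage sketch under the new guarantee (the parse_line() prototype is left
out on purpose, only the argument array handling is shown):

    #include <stdio.h>

    #define MAX_LINE_ARGS 64                 /* illustrative value */

    /* After parse_line() was given MAX_LINE_ARGS + 1 slots, the first empty
     * argument marks the end of the list (assuming no legitimately empty
     * argument appears before it), much like argv[]'s trailing NULL.
     */
    static void dump_args(char * const *args)
    {
        int i;

        for (i = 0; i < MAX_LINE_ARGS && *args[i]; i++)
            printf("arg %d: <%s>\n", i, args[i]);
    }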
When fgets() returns an incomplete line we must not increment linenum
otherwise line numbers become incorrect. This may happen when parsing
files with extremely long lines which require a realloc().
The bug has been present since unbounded line length was supported, so
the fix should be backported to older branches.
A crash was reported in issue #707 because the private key was not
uploaded correctly with "set ssl cert".
The bug is provoked by X509_check_private_key() being called when there
is no private key, which can lead to a segfault.
This patch adds a check and returns an error if the private key is not
present.
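The added guard looks roughly like this (member names and error handling
are simplified):

    /* Refuse the payload when no private key was provided, instead of
     * letting X509_check_private_key() dereference a NULL pointer.
     */
    if (!new_ckch->key) {
        memprintf(err, "missing private key in the uploaded certificate");
        goto end;
    }
    if (!X509_check_private_key(new_ckch->cert, new_ckch->key)) {
        memprintf(err, "inserted private key does not match the certificate");
        goto end;
    }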
This must be backported in 2.1.
Now that all tasklet queues are scanned at once by run_tasks_from_lists(),
it becomes possible to always check for lower priority classes and jump
back to them when they exist.
This patch adds tune.sched.low-latency global setting to enable this
behavior. What it does is stick to the lowest ranked priority list in
which tasks are still present with an available budget, and leave the
loop to refill the tasklet lists if the trees got new tasks or if new
work arrived into the shared urgent queue.
Doing so makes it possible to cut the latency in half when running with
extremely deep run queues (10k-100k), thus allowing forwarding of small and
large objects to coexist better. It remains off by default since it does have
a small impact on large traffic (shorter batches).
Now process_runnable_tasks is responsible for calculating the budgets
for each queue, dequeuing from the tree, and calling run_tasks_from_lists().
This latter one scans the queues, picking tasks there and respecting budgets.
Note that its name was updated with a plural "s" for this reason.
It is neither convenient nor scalable to check each and every tasklet
queue to figure whether it's empty or not while we often need to check
them all at once. This patch introduces a tasklet class mask which gets
one bit set per queue, each queue representing one class of service. A
single test on the mask is enough to know whether there's still some work
to be done. It will later be usable to better factor the runqueue code.
Bits are set when tasklets are queued. They're cleared when queues are
emptied. It is possible for a queue to be empty while its bit is still set
(when a tasklet was added then removed), but this is not a problem as this
is properly checked for in run_tasks_from_list().
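A condensed sketch of the mechanism (reduced to plain globals here; the
real members live in the per-thread task context):

    #define TL_CLASSES 3                     /* urgent, normal, bulk */

    static unsigned int tl_class_mask;       /* bit i set => queue i may have work */

    static inline void mark_queued(int queue)
    {
        tl_class_mask |= 1U << queue;        /* set when a tasklet is queued */
    }

    static inline void mark_emptied(int queue)
    {
        tl_class_mask &= ~(1U << queue);     /* cleared when the queue is emptied */
    }

    /* one single test instead of checking each queue for emptiness */
    static inline int work_pending(void)
    {
        return tl_class_mask != 0;
    }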
It will be convenient to have the tasklet queue number soon, so better make
current_queue an index rather than a pointer to the queue. When not currently
running (e.g. from I/O), the index is -1.
Till now in process_runnable_tasks() we used to reserve a fixed portion
of max_processed to urgent tasks, then a portion of what remains for
normal tasks, then what remains for bulk tasks. This causes two issues:
- the current budget for processed tasks could be drained all at once by
higher level tasks so that they couldn't have enough left
for the next run. For example, if bulk tasklets cause task wakeups,
the required share to run them could be eaten by other bulk tasklets.
- it forces the urgent tasks to be run before scanning the tree so that
we know how many tasks to pick from the tree, and this isn't very
efficient cache-wise.
This patch changes this so that we compute upfront how max_processed will
be shared between classes that require so. We can then decide in advance
to pick a certain number of tasks from the tree, then execute all tasklets
in turn. When reaching the end, if there's still some budget, we can go
back and do the same thing again, improving chances to pick new work
before the global budget is depleted.
The default weights have been set to 50% for urgent tasklets, 37% for
normal ones and 13% for the bulk ones. In practice, there are not that
many urgent tasklets but when they appear they are cheap and must be
processed in as large batches as possible. Every time there is nothing
to pick there, the unused budget is shared between normal and bulk and
this allows bulk tasklets to still have quite some CPU to run on.
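The split can be pictured with this sketch (the exact arithmetic in the
scheduler may differ; only the intent of the 50/37/13 weights and the
fall-through of unused shares is shown, with illustrative variable names):

    unsigned int budget_urgent = 0, budget_normal = 0, budget_bulk;

    if (!urgent_queue_empty)
        budget_urgent = max_processed * 50 / 100;

    if (!normal_queue_empty)
        /* normal and bulk share what urgent did not reserve, 37:13 */
        budget_normal = (max_processed - budget_urgent) * 37 / (37 + 13);

    budget_bulk = max_processed - budget_urgent - budget_normal;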
Move the ckch_deinit() and crtlist_deinit() calls to ssl_sock.c, and
also unlink the SNIs from the ckch_inst because they are freed earlier in
ssl_sock_free_all_ctx().
In ticket #706 it was reported that a certificate which was added from
the CLI can't be removed with 'del ssl cert' and is marked as 'Used'.
The problem is that the certificate instances are not added to the
created crtlist_entry, so they can't be deleted upon a 'del ssl
crt-list', and the store can never be marked 'Unused' because of this.
This patch fixes the issue by adding the instances to the crtlist_entry,
which is enough to solve the problem.
Add some functions to deinit the whole crtlist and ckch architecture.
It will free all crtlist, crtlist_entry, ckch_store, ckch_inst and their
associated SNI, ssl_conf and SSL_CTX.
The SSL_CTX in the default_ctx and initial_ctx still needs to be free'd
separately.
Since commit 2954c47 ("MEDIUM: ssl: allow crt-list caching"), the
ssl_bind_conf is allocated directly in the crt-list, and the crt-list
can be shared between several bind_conf. The deinit() code wasn't
changed to handle that.
This patch fixes the issue by removing the free of the ssl_conf in
ssl_sock_free_all_ctx().
It should be completed with a patch that frees the ssl_conf and the
crt-list.
Fix issue #700.
The arguments are relative to the outline, not relative to the input line.
This patch fixes up commit 9e1758efbd68c8b1d27e17e2abe4444e110f3ebe which
is 2.2 only. No backport needed.
The returned `arg` value is the number of arguments found, but in case
of the error message it's not a valid argument index.
Because we know how many arguments we allowed (MAX_LINE_ARGS) we know
what to print in the error message, so do just that.
Consider a configuration like this:
listen foo
1 2 3 [...] 64 65
Then running a configuration check within valgrind reports the following:
==18265== Conditional jump or move depends on uninitialised value(s)
==18265== at 0x56E8B83: vfprintf (vfprintf.c:1631)
==18265== by 0x57B1895: __vsnprintf_chk (vsnprintf_chk.c:63)
==18265== by 0x4A8642: vsnprintf (stdio2.h:77)
==18265== by 0x4A8642: memvprintf (tools.c:3647)
==18265== by 0x4CB8A4: print_message (log.c:1085)
==18265== by 0x4CE0AC: ha_alert (log.c:1128)
==18265== by 0x459E41: readcfgfile (cfgparse.c:1978)
==18265== by 0x507CB5: init (haproxy.c:2029)
==18265== by 0x4182A2: main (haproxy.c:3137)
==18265==
==18265== Use of uninitialised value of size 8
==18265== at 0x56E576B: _itoa_word (_itoa.c:179)
==18265== by 0x56E912C: vfprintf (vfprintf.c:1631)
==18265== by 0x57B1895: __vsnprintf_chk (vsnprintf_chk.c:63)
==18265== by 0x4A8642: vsnprintf (stdio2.h:77)
==18265== by 0x4A8642: memvprintf (tools.c:3647)
==18265== by 0x4CB8A4: print_message (log.c:1085)
==18265== by 0x4CE0AC: ha_alert (log.c:1128)
==18265== by 0x459E41: readcfgfile (cfgparse.c:1978)
==18265== by 0x507CB5: init (haproxy.c:2029)
==18265== by 0x4182A2: main (haproxy.c:3137)
==18265==
==18265== Conditional jump or move depends on uninitialised value(s)
==18265== at 0x56E5775: _itoa_word (_itoa.c:179)
==18265== by 0x56E912C: vfprintf (vfprintf.c:1631)
==18265== by 0x57B1895: __vsnprintf_chk (vsnprintf_chk.c:63)
==18265== by 0x4A8642: vsnprintf (stdio2.h:77)
==18265== by 0x4A8642: memvprintf (tools.c:3647)
==18265== by 0x4CB8A4: print_message (log.c:1085)
==18265== by 0x4CE0AC: ha_alert (log.c:1128)
==18265== by 0x459E41: readcfgfile (cfgparse.c:1978)
==18265== by 0x507CB5: init (haproxy.c:2029)
==18265== by 0x4182A2: main (haproxy.c:3137)
==18265==
==18265== Conditional jump or move depends on uninitialised value(s)
==18265== at 0x56E91AF: vfprintf (vfprintf.c:1631)
==18265== by 0x57B1895: __vsnprintf_chk (vsnprintf_chk.c:63)
==18265== by 0x4A8642: vsnprintf (stdio2.h:77)
==18265== by 0x4A8642: memvprintf (tools.c:3647)
==18265== by 0x4CB8A4: print_message (log.c:1085)
==18265== by 0x4CE0AC: ha_alert (log.c:1128)
==18265== by 0x459E41: readcfgfile (cfgparse.c:1978)
==18265== by 0x507CB5: init (haproxy.c:2029)
==18265== by 0x4182A2: main (haproxy.c:3137)
==18265==
==18265== Conditional jump or move depends on uninitialised value(s)
==18265== at 0x56E8C59: vfprintf (vfprintf.c:1631)
==18265== by 0x57B1895: __vsnprintf_chk (vsnprintf_chk.c:63)
==18265== by 0x4A8642: vsnprintf (stdio2.h:77)
==18265== by 0x4A8642: memvprintf (tools.c:3647)
==18265== by 0x4CB8A4: print_message (log.c:1085)
==18265== by 0x4CE0AC: ha_alert (log.c:1128)
==18265== by 0x459E41: readcfgfile (cfgparse.c:1978)
==18265== by 0x507CB5: init (haproxy.c:2029)
==18265== by 0x4182A2: main (haproxy.c:3137)
==18265==
==18265== Conditional jump or move depends on uninitialised value(s)
==18265== at 0x56E941A: vfprintf (vfprintf.c:1631)
==18265== by 0x57B1895: __vsnprintf_chk (vsnprintf_chk.c:63)
==18265== by 0x4A8642: vsnprintf (stdio2.h:77)
==18265== by 0x4A8642: memvprintf (tools.c:3647)
==18265== by 0x4CB8A4: print_message (log.c:1085)
==18265== by 0x4CE0AC: ha_alert (log.c:1128)
==18265== by 0x459E41: readcfgfile (cfgparse.c:1978)
==18265== by 0x507CB5: init (haproxy.c:2029)
==18265== by 0x4182A2: main (haproxy.c:3137)
==18265==
==18265== Conditional jump or move depends on uninitialised value(s)
==18265== at 0x56E8CAB: vfprintf (vfprintf.c:1631)
==18265== by 0x57B1895: __vsnprintf_chk (vsnprintf_chk.c:63)
==18265== by 0x4A8642: vsnprintf (stdio2.h:77)
==18265== by 0x4A8642: memvprintf (tools.c:3647)
==18265== by 0x4CB8A4: print_message (log.c:1085)
==18265== by 0x4CE0AC: ha_alert (log.c:1128)
==18265== by 0x459E41: readcfgfile (cfgparse.c:1978)
==18265== by 0x507CB5: init (haproxy.c:2029)
==18265== by 0x4182A2: main (haproxy.c:3137)
==18265==
==18265== Conditional jump or move depends on uninitialised value(s)
==18265== at 0x56E8CE2: vfprintf (vfprintf.c:1631)
==18265== by 0x57B1895: __vsnprintf_chk (vsnprintf_chk.c:63)
==18265== by 0x4A8642: vsnprintf (stdio2.h:77)
==18265== by 0x4A8642: memvprintf (tools.c:3647)
==18265== by 0x4CB8A4: print_message (log.c:1085)
==18265== by 0x4CE0AC: ha_alert (log.c:1128)
==18265== by 0x459E41: readcfgfile (cfgparse.c:1978)
==18265== by 0x507CB5: init (haproxy.c:2029)
==18265== by 0x4182A2: main (haproxy.c:3137)
==18265==
==18265== Conditional jump or move depends on uninitialised value(s)
==18265== at 0x56EA2DB: vfprintf (vfprintf.c:1632)
==18265== by 0x57B1895: __vsnprintf_chk (vsnprintf_chk.c:63)
==18265== by 0x4A8642: vsnprintf (stdio2.h:77)
==18265== by 0x4A8642: memvprintf (tools.c:3647)
==18265== by 0x4CB8A4: print_message (log.c:1085)
==18265== by 0x4CE0AC: ha_alert (log.c:1128)
==18265== by 0x459E41: readcfgfile (cfgparse.c:1978)
==18265== by 0x507CB5: init (haproxy.c:2029)
==18265== by 0x4182A2: main (haproxy.c:3137)
==18265==
[ALERT] 174/165720 (18265) : parsing [./config.cfg:2]: too many words, truncating at word 65, position -95900735: <(null)>.
[ALERT] 174/165720 (18265) : Error(s) found in configuration file : ./config.cfg
[ALERT] 174/165720 (18265) : Fatal errors found in configuration.
Valgrind reports conditional jumps depending on undefined values, and the
error message clearly contains garbage.
After this patch is applied, the reads of undefined values are gone and
the <(null)> actually shows the argument. However the position value
is still incorrect. This will be fixed in a follow-up patch.
This patch fixes up commit 9e1758efbd68c8b1d27e17e2abe4444e110f3ebe which
is 2.2 only. No backport needed.
In task_per_thread[] we now have current_queue which is a pointer to
the current tasklet_list entry being evaluated. This will be used to
know the class under which the current task/tasklet is currently
running.
We want to be sure not to exceed max_processed. It can actually go
slightly negative due to the rounding applied to ratios, but we must
refrain from processing too many tasks if it's already low.
This became particularly relevant since recent commit 5c8be272c ("MEDIUM:
tasks: also process late wakeups in process_runnable_tasks()") which was
merged into 2.2-dev10. No backport is needed.
When DEBUG_FD is set at build time, we'll keep a counter of per-FD events
in the fdtab. This counter is reported in "show fd" even for closed FDs if
not zero. The purpose is to help spot situations where an apparently closed
FD continues to be reported in loops, or where some events are dismissed.
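Roughly, in the event reporting path (the field name is an assumption):

    #ifdef DEBUG_FD
        /* count every event reported for this FD; "show fd" prints the
         * counter even for closed FDs when it is non-zero
         */
        _HA_ATOMIC_ADD(&fdtab[fd].event_count, 1);
    #endif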
Coverity reports a possible null deref in issue #703. It seems this
cannot happen as in order to have a CF_READ_ERROR we'd need to have
attempted a recv() which implies a conn_stream, thus conn cannot be
NULL anymore. But at least one line tests for conn while the other one
doesn't, which is confusing. So let's add a check for conn before
dereferencing it.
This needs to be backported to 2.1 and 2.0. Note that in 2.0 it's
in proto_htx.c.
As discussed on the list: https://www.mail-archive.com/haproxy@formilux.org/msg37698.html
This patch adds warnings to the configuration parser that detect the
following situations:
- A line being truncated by a null byte in the middle.
- A file not ending in a new line (and possibly being truncated).
Fix parsing of configurations if the configuration file does not end with
an LF.
This patch fixes GitHub issue #704. It's a regression in
9e1758efbd68c8b1d27e17e2abe4444e110f3ebe which is 2.2 specific. No backport
needed.
When a SPOE filter starts the response analyze, the wrong flag is tested on the
pre_analyzers bit field. AN_RES_INSPECT must be tested instead of
SPOE_EV_ON_TCP_RSP.
This patch must be backported to all versions with the SPOE support, i.e as far
as 1.7.
If a fcgi application is configured to send its logs to a ring buffer, the
corresponding sink must be resolved during the configuration post
parsing. Otherwise, the sink is undefined when a log message is emitted,
crashing HAProxy.
No need to backport.
In h1_snd_buf(), also set H1_F_CO_MSG_MORE if we know we still have more to
send, not just if the stream-interface told us to do so. This may happen if
the last block of a transfer doesn't fit in the buffer; it remains useful
for the transport layer to know that more data follows what's already in
the buffer.
In 2.2-dev1, a change was made by commit 46230363a ("MINOR: mux-h1: Inherit
send flags from the upper layer"). The purpose was to accurately set the
CO_SFL_MSG_MORE flag on the transport layer because previously it was only
set based on the buffer full condition, which does not accurately indicate
that there are more data to follow.
The problem is that the stream-interface never sets this flag anymore in
HTX mode due to the channel's to_forward always being set to infinity.
Because of this, HTX transfers are always performed without the MSG_MORE
flag and experience a severe performance degradation on large transfers.
This patch addresses this by making the stream-interface aware of HTX and
having it check CF_EOI to know whether or not more content is expected.
With this change, the single-threaded forwarding performance on 10 MB
objects jumped from 29 to 40 Gbps.
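A simplified sketch of the new test in the stream-interface (the real code
combines this with other existing conditions):

    /* In HTX mode to_forward is always "infinite", so rely on CF_EOI to
     * know whether more data is expected after what is currently buffered.
     */
    if (IS_HTX_STRM(si_strm(si)) && !(oc->flags & CF_EOI))
        send_flag |= CO_SFL_MSG_MORE;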
No backport is needed.
As reported in issue #419, a "clear map" operation on a very large map
can take a lot of time and freeze the entire process for several seconds.
This patch makes sure that pat_ref_prune() can regularly yield after
clearing some entries so that the rest of the process continues to work.
The first part, the removal of the patterns, can take quite some time
by itself in one run but it's still relatively fast. It may block for
up to 100ms for 16M IP addresses in a tree typically. This change needed
to declare an I/O handler for the clear operation so that we can get
back to it after yielding.
The second part can be much slower because it deconstructs the elements
and their users, but it iterates progressively so we can yield less often
here.
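The yielding pattern looks roughly like the following sketch (the batch
size, the helpers and the context member are illustrative, not the actual
functions):

    static int cli_io_handler_clear_map(struct appctx *appctx)
    {
        int budget = 1000;          /* arbitrary number of entries per call */

        /* remove at most <budget> entries, then give the scheduler a chance */
        while (budget-- > 0 && prune_one_entry(appctx->ctx.map.ref))
            ;

        if (!ref_is_empty(appctx->ctx.map.ref))
            return 0;               /* not finished: yield, we'll be called again */
        return 1;                   /* finished, the applet can complete */
    }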
The patch was tested with traffic running in parallel, soliciting the map
being released, and showed no problem. Some traffic will definitely notice
an incomplete map, but the filling is already not atomic anyway, so this is
no different.
It may be backported to stable versions once sufficiently tested for side
effects, at least as far as 2.0 in order to avoid the watchdog triggering
when the process is frozen there. For a better behaviour, all these
prune_* functions should support yielding so that the callers also get a
chance to yield in turn.
Initial default settings for maxconn/maxsock/maxpipes were rearranged
in commit a409f30d0 ("MINOR: init: move the maxsock calculation code
to compute_ideal_maxsock()") but as a side effect, the calculated
maxpipes value was not stored anymore into global.maxpipes. This
resulted in splicing being disabled unless there is an explicit
maxpipes setting in the global section.
This patch just stores the calculated ideal value as planned in the
computation and as was done before the patch above.
This is strictly 2.2, no backport is needed.
Fix the semicolon escaping, which must be handled in the master CLI: the
commands were wrongly split and could be partially forwarded to the
target CLI.