10974 Commits

Dragan Dosen
e99af978c8 BUG/MEDIUM: pattern: fix memory leak in regex pattern functions
The allocated regex is not freed properly and can cause a memory leak,
e.g. when patterns are updated via the CLI socket.

This patch should be backported to all supported versions.
2019-05-02 10:05:11 +02:00
Dragan Dosen
026ef570e1 BUG/MINOR: checks: free memory allocated for tasklets
The check->wait_list.task and agent->wait_list.task were not
freed properly on deinit().

This patch should be backported to 1.9.
2019-05-02 10:05:09 +02:00
Dragan Dosen
61302da0e7 BUG/MINOR: log: properly free memory on logformat parse error and deinit()
This patch may be backported to all supported versions.
2019-05-02 10:05:07 +02:00
Dragan Dosen
2a7c20f602 BUG/MINOR: haproxy: fix rule->file memory leak
When using the "use_backend" configuration directive, the configuration
file name stored as rule->file was not freed in some situations. This
was introduced in commit 4ed1c95 ("MINOR: http/conf: store the
use_backend configuration file and line for logs").

This patch should be backported to 1.9, 1.8 and 1.7.
2019-05-02 10:05:06 +02:00
Olivier Houchard
b51937ebaa BUG/MEDIUM: ssl: Don't pretend we can retry a recv/send if we got a shutr/w.
In ha_ssl_write() and ha_ssl_read(), don't pretend we can retry a read/write
if we got a shutr/shutw, or we will never properly shut down the connection.
2019-05-01 17:37:33 +02:00
Ilya Shipitsin
0c50b1ecbb BUG/MEDIUM: servers: fix typo "src" instead of "srv"
Fix a typo when copying the settings for all servers when using server
templates: because of it, the length of the ALPN to be used for checks was
never copied.

This should be backported to 1.9.
2019-04-30 23:04:47 +02:00
Christopher Faulet
02f3cf19ed CLEANUP: config: Don't alter listener->maxaccept when nbproc is set to 1
This patch only removes a useless calculation on listener->maxaccept when nbproc
is set to 1. Indeed, the following formula has no effect in that case:

  listener->maxaccept = (listener->maxaccept + nbproc - 1) / nbproc;

This patch may be backported as far as 1.5.
2019-04-30 15:28:29 +02:00
Christopher Faulet
6b02ab8734 MINOR: config: Test validity of tune.maxaccept during the config parsing
Only -1 and positive integers from 0 to INT_MAX are accepted. An error is
triggered during the config parsing for any other values.

This patch may be backported to all supported versions.
2019-04-30 15:28:29 +02:00
Christopher Faulet
102854cbba BUG/MEDIUM: listener: Fix how unlimited number of consecutive accepts is handled
There is a bug when global.tune.maxaccept is set to -1 (no limit). It is pretty
visible with one process (nbproc set to 1). The functions listener_accept() and
accept_queue_process() don't expect to handle negative maxaccept values. So
instead of accepting incoming connections without any limit, none are ever
accepted and HAProxy loops infinitely in the scheduler.

When there are 2 or more processes, the bug is a bit more subtle. The limit for
a listener is set to 1. So only one connection is accepted at a time by a given
listener. This happens because the listener's maxaccept value is an unsigned
integer. In check_config_validity(), it is first set to UINT_MAX (-1 cast to
an unsigned integer), and then some calculations on it lead to an integer
overflow.
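
A minimal standalone illustration (not the actual check_config_validity()
code) of how -1 becomes UINT_MAX once stored in an unsigned field, and how
a rounded-up division on it can then wrap around:

   #include <stdio.h>

   int main(void)
   {
       int tune_maxaccept = -1;                  /* "no limit" */
       unsigned int maxaccept = tune_maxaccept;  /* becomes UINT_MAX */
       unsigned int nbproc = 2;

       /* rounded-up division per process: wraps instead of
        * staying "unlimited" */
       unsigned int per_proc = (maxaccept + nbproc - 1) / nbproc;

       printf("%u %u\n", maxaccept, per_proc);   /* prints 4294967295 0 */
       return 0;
   }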

To fix the bug, the listener's maxaccept value is now a signed integer. So, if a
negative value is set for global.tune.maxaccept, we keep it untouched for the
listener and no calculation is made on it. Then, in the listener code, this
signed value is cast to an unsigned one. This simplifies all the tests by
avoiding having to deal with negative values. It limits the number of
connections accepted at a time to UINT_MAX at most, but honestly that is not
an issue.

This patch must be backported to 1.9 and 1.8.
2019-04-30 15:28:29 +02:00
Willy Tarreau
bc13bec548 MINOR: activity: report context switch counts instead of rates
It's not logical to report context switch rates per thread in show activity
because everything else is a counter and it's not even possible to compare
values. Let's only report counts. Further, this simplifies the scheduler's
code.
2019-04-30 14:55:18 +02:00
Willy Tarreau
49ee3b2f9a BUG/MAJOR: map/acl: real fix segfault during show map/acl on CLI
A previous commit 8d85aa44d ("BUG/MAJOR: map: fix segfault during
'show map/acl' on cli.") was provided to address a concurrency issue
between "show acl" and "clear acl" on the CLI. Sadly the code placed
there was copy-pasted without changing the element type (which was
struct stream in the original code) and not tested since the crash
is still present.

The reproducer is simple: load a large ACL file (e.g. geolocation
addresses), issue "show acl #0" in loops in one window and issue a
"clear acl #0" in the other one; haproxy crashes.

This fix was also tested with threads enabled and looks good since
the locking seems to work correctly in these areas. It will
have to be backported as far as 1.6 since the commit above went
that far as well...
2019-04-30 11:50:59 +02:00
Frédéric Lécaille
d803e475e5 MINOR: log: Enable the log sampling and load-balancing feature.
This patch implements the sampling and load-balancing of log servers configured
with the new "sample" keyword implemented by this commit:
    'MINOR: log: Add "sample" new keyword to "log" lines'.
As the list of ranges used to sample the logs is ordered, we only have to
maintain the ->curr_idx member of the smp_info struct, which is the index of
the current sample, and check whether it belongs to the current range to decide
if we must send it to the log server or not.
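
A minimal sketch of that idea, with hypothetical type and field names (the
real smp_info layout differs): since the ranges are ordered and do not
overlap, tracking a single range index is enough to decide whether the
current sample must be sent.

   struct rng {
       unsigned int low, high;        /* 1-based, inclusive bounds */
   };

   struct smp_state {
       const struct rng *ranges;      /* ordered, non-overlapping ranges */
       unsigned int nb_ranges;
       unsigned int curr_rg;          /* index of the current range */
       unsigned int curr_idx;         /* current sample, 1..size */
       unsigned int size;             /* log sampling size */
   };

   /* returns non-zero if the current log must be sent to this server */
   static int smp_must_send(struct smp_state *s)
   {
       int send;

       s->curr_idx = (s->curr_idx % s->size) + 1;
       if (s->curr_idx == 1)
           s->curr_rg = 0;            /* new period: restart at first range */

       send = s->curr_idx >= s->ranges[s->curr_rg].low &&
              s->curr_idx <= s->ranges[s->curr_rg].high;

       /* move to the next range once the current one is exhausted */
       if (s->curr_idx == s->ranges[s->curr_rg].high &&
           s->curr_rg + 1 < s->nb_ranges)
           s->curr_rg++;
       return send;
   }
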
2019-04-30 09:25:09 +02:00
Frédéric Lécaille
d95ea2897e MINOR: log: Add "sample" new keyword to "log" lines.
This patch implements the parsing of the new optional "sample" keyword for "log"
lines, to be able to sample and balance the load of log messages between several
log destinations declared by "log" lines. This keyword must be followed by a
list of comma-separated ranges of indexes numbered from 1, to define the samples
to be used to balance the load of logs to send. This "sample" keyword must
obviously be used on "log" lines before the remaining optional keyword-less
arguments. The list of ranges must be followed by a colon character to separate
it from the log sampling size.

With such following configuration declarations:

   log stderr local0
   log 127.0.0.1:10001 sample 2-3,8-11:11 local0
   log 127.0.0.2:10002 sample 5:5 local0

in addition to being sent to stderr, for the second "log" line, out of every 11
logs, logs #2 up to #3 would be sent to 127.0.0.1:10001, then logs #8 up to #11
(four logs) would be sent to the same log server, and so on periodically. Logs
would be sent to 127.0.0.2:10002 every 5 logs.

It is also possible to define the size of the sample with a value different
from the maximum of the high limits of the ranges, for instance as follows:

   log 127.0.0.1:10001 sample 2-3,8-11:15 local0

as before, the two logs #2 and #3 would be sent to 127.0.0.1:10001, then logs
#8 up to #11, but in this case this would be done periodically every 15
messages.

Also note that the ranges must not overlap each other. This is to ease the
way the logs are periodically sent.
2019-04-30 09:25:09 +02:00
Christopher Faulet
85db3212b8 MINOR: spoe: Use the sample context to pass frag_ctx info during encoding
This simplifies the API and hides the details in the sample. This way, only
string and binary are aware of this info, because other types cannot be
partially encoded.

This patch may be backported to 1.9 and 1.8.
2019-04-29 16:02:05 +02:00
Kevin Zhu
f7f54280c8 BUG/MEDIUM: spoe: arg len encoded in previous frag frame but len changed
A fragmented arg triggers a fetch at every encode time, and each fetch may get
a different result if SMP_F_MAY_CHANGE is set, for example with res.payload,
but the length was already encoded in the first fragment of the frame. That
makes the SPOA decoding fail and wastes resources.

This patch must be backported to 1.9 and 1.8.
2019-04-29 16:02:05 +02:00
Christopher Faulet
1907ccc2f7 BUG/MINOR: http: Call stream_inc_be_http_req_ctr() only one time per request
The function stream_inc_be_http_req_ctr() is called at the beginning of the
analysers AN_REQ_HTTP_PROCESS_FE/BE. It has an effect only on the backend. But we
must be careful to call it only once. If the processing of HTTP rules is
interrupted in the middle, when the analyser is resumed, we must not call it
again. Otherwise, the tracked counters of the backend are incremented several
times.

This bug was reported on GitHub. See issue #74.

This fix should be backported as far as 1.6.
2019-04-29 16:01:47 +02:00
Willy Tarreau
97215ca284 BUG/MEDIUM: mux-h2: properly deal with too large headers frames
In h2c_decode_headers(), now that we support CONTINUATION frames, we
try to defragment all pending frames at once before processing them.
However if the first is exactly full and the second cannot be parsed,
we don't detect the problem and we wait for the next part forever due
to an incorrect check on exit; we must abort the processing as soon as
the current frame remains full after defragmentation as in this case
there is no way to make forward progress.

Thanks to Yves Lafon for providing traces exhibiting the problem.

This must be backported to 1.9.
2019-04-29 10:20:21 +02:00
David CARLIER
4de0eba848 MEDIUM: da: HTX mode support.
The DeviceAtlas module can now support both the legacy
mode and the new HTX mode, with the known set of supported
headers for the latter.
2019-04-26 17:06:32 +02:00
David Carlier
0470d704a7 BUILD/MEDIUM: contrib: Dummy DeviceAtlas API.
Creating a "mocked" version mainly for testing purposes.
2019-04-26 17:06:32 +02:00
Willy Tarreau
4ad574fbe2 MEDIUM: streams: measure processing time and abort when detecting bugs
On some occasions we've had loops happening when processing actions
(e.g. a yield not being well understood), resulting in analysers being
called in loops until the analysis timeout without incrementing the
stream's call count; thus this type of bug cannot be caught by the
current protection system.

What this patch proposes is to start measuring the time spent in analysers
when profiling is enabled on the thread, in order to detect if a stream is
really misbehaving. Here we measure the consumed CPU time, not the
wall clock time, so as not to be affected by possible noisy neighbours
sharing the same CPU. When more than 100ms are spent in an analyser, we
trigger the stream_dump_and_crash() function to report the anomaly.
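
For reference, here is a minimal sketch of the measurement principle
(per-thread CPU time rather than wall-clock time); it is a standalone
program, not HAProxy's actual implementation:

   #include <stdint.h>
   #include <stdio.h>
   #include <time.h>

   /* thread-local CPU time in nanoseconds, unaffected by noisy neighbours */
   static uint64_t thread_cpu_ns(void)
   {
       struct timespec ts;

       clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
       return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
   }

   int main(void)
   {
       uint64_t before = thread_cpu_ns();

       /* stand-in for one analyser invocation */
       for (volatile unsigned long i = 0; i < 100000000UL; i++)
           ;

       uint64_t spent = thread_cpu_ns() - before;
       if (spent > 100000000ULL)   /* the 100ms budget described above */
           printf("analyser took too long: %llu ns\n",
                  (unsigned long long)spent);
       return 0;
   }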

The choice of 100ms comes from the fact that regular calls only take around
1 microsecond and it seems reasonable to accept a degradation factor of
100000, which covers very slow machines such as home gateways running on
sub-GHz processors with extremely heavy configurations. Some complete
tests show that even this common bogus map_regm() entry, supposedly designed
to extract a port from an IP:port entry, does not trigger the timeout (25 ms
evaluation time for a 4kB header; exercise left to the reader to spot the
mistake):

   ([0-9]{0,3}).([0-9]{0,3}).([0-9]{0,3}).([0-9]{0,3}):([0-9]{0,5}) \5

However this one, purposely designed to kill haproxy, definitely dies as it
manages to completely freeze the whole process for more than one second
on a 4 GHz CPU for an input of only 120 bytes:

   (.{0,20})(.{0,20})(.{0,20})(.{0,20})(.{0,20})b \1

This protection will definitely help during the code stabilization period
and may possibly be left enabled later depending on reported issues or not.

If you've noticed that your workload is affected by this patch, please
report it, as you have very likely found a bug. And in the meantime you
can turn profiling off to disable it.
2019-04-26 14:30:59 +02:00
Willy Tarreau
3d07a16f14 MEDIUM: stream/debug: force a crash if a stream spins over itself forever
If a stream is caught spinning over itself at more than 100000 loops per
second and for more than one second, the process will be aborted and the
offender reported on the console and logs. Typical figures usually are just
a few tens to hundreds per second over a very short time so there is a huge
margin here. Using even higher values could also work but there is the risk
of not being able to catch offenders if multiple ones start to bug at the
same time and share the load. This code should ideally be disabled for
stable releases, though in theory nothing should ever trigger it.
2019-04-26 13:16:14 +02:00
Willy Tarreau
dcb0e1d37d MEDIUM: appctx/debug: force a crash if an appctx spins over itself forever
If an appctx is caught spinning over itself at more than 100000 loops per
second and for more than one second, the process will be aborted and the
offender reported on the console and logs. Typical figures usually are just
a few tens to hundreds per second over a very short time so there is a huge
margin here. Using even higher values could also work but there is the risk
of not being able to catch offenders if multiple ones start to bug at the
same time and share the load. This code should ideally be disabled for
stable releases, though in theory nothing should ever trigger it.
2019-04-26 13:15:56 +02:00
Willy Tarreau
71c07ac65a MINOR: stream/debug: make a stream dump and crash function
During 1.9 development (and even a bit after) we started to face a
significant number of situations where streams were abusively spinning
due to an uncaught error flag or complex conditions that couldn't be
correctly identified. Sometimes streams wake appctxs up and vice versa.
More importantly, when this happens the only fix is to restart.

This patch adds a new function to report a serious error, some relevant
info and to crash the process using abort() so that a core dump is
available. The purpose will be for this function to be called in various
situations where the process is unfixable. It will help detect these
issues much earlier during development and may even help fixing test
platforms which are able to automatically restart when such a condition
happens, though this is not the primary purpose.

This patch only provides the function and doesn't use it yet.
2019-04-26 13:15:56 +02:00
Willy Tarreau
5e370daa52 BUG/MINOR: proto_http: properly reset the stream's call rate on keep-alive
The stream's call rate measurement was added by commit 2e9c1d296 ("MINOR:
stream: measure and report a stream's call rate in "show sess"") but it
forgot to reset it in case of HTTP keep-alive (legacy mode), resulting
in incorrect measurements.

No backport is needed, unless the patch above is backported.
2019-04-25 18:33:37 +02:00
Willy Tarreau
d5ec4bfe85 CLEANUP: standard: use proper const to addr_to_str() and port_to_str()
The input parameter was not marked const, making it painful for some calls.
2019-04-25 17:48:16 +02:00
Willy Tarreau
d2d3348acb MINOR: activity: enable automatic profiling turn on/off
Instead of having to manually turn task profiling on/off in the
configuration, by default it will work in "auto" mode, which
automatically turns it on for any thread experiencing sustained loop
latencies over one millisecond averaged over the last 1024 samples.

This may happen with configs using lots of regex (think map_reg for
example, which is the lazy way to convert Apache's rewrite rules but
must not be abused), and such high latencies affect the whole process;
the problem is most often intermittent (e.g. hitting a map which
is only used for certain host names).

Thus now by default, with profiling set to "auto", it remains off all
the time until something bad happens. This also helps better focus on
the issues when looking at the logs as well as in the "show sess" output.
It automatically turns off when the average loop latency over the last
1024 calls goes below 990 microseconds (which typically takes a while
when idle).
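
A minimal sketch of this per-thread hysteresis, using a simple weighted
rolling average instead of HAProxy's real sliding-window helpers (names
and layout are illustrative only):

   struct th_prof {
       unsigned int avg_lat_us;   /* averaged loop latency, in microseconds */
       int profiling;             /* 0 = off, 1 = on */
   };

   static void update_profiling(struct th_prof *th, unsigned int lat_us)
   {
       /* rolling average weighted over roughly the last 1024 samples */
       th->avg_lat_us = (th->avg_lat_us * 1023 + lat_us) / 1024;

       if (!th->profiling && th->avg_lat_us > 1000)      /* > 1 ms: turn on */
           th->profiling = 1;
       else if (th->profiling && th->avg_lat_us < 990)   /* < 990 us: off */
           th->profiling = 0;
   }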

This patch could be backported to stable versions after a bit more
exposure, as it definitely improves observability and the ability to
quickly spot the culprit. In this case, previous patch ("MINOR:
activity: make the profiling status per thread and not global") must
also be taken.
2019-04-25 17:26:46 +02:00
Willy Tarreau
d9add3acc8 MINOR: activity: make the profiling status per thread and not global
In order to later support automatic profiling turn on/off, we need to
have it per-thread. We're keeping the global option to know whether to
turn it on or off, but the profiling status is now set per thread. We're
updating the status in activity_count_runtime() which is called before
entering poll(). The reason is that we'll extend this with run time
measurement when deciding to automatically turn it on or off.
2019-04-25 17:26:19 +02:00
Willy Tarreau
d636675137 BUG/MINOR: activity: always initialize the profiling variable
It happens that it was only set if present in the configuration. It's
harmless anyway but can still cause doubts when comparing logs and
configurations, so better to initialize it correctly.

This should be backported to 1.9.
2019-04-25 17:26:19 +02:00
Willy Tarreau
22d63a24d9 MINOR: applet: measure and report an appctx's call rate in "show sess"
Very similarly to the previous commit doing the same for streams, we now
measure and report an appctx's call rate. This will help catch applets
which do not consume all their data and/or which do not properly report
that they're waiting for something else. Some of them, like peers, might
theoretically be able to exhibit some occasional peaks when teaching a
full table to a nearby peer (e.g. the new replacement process), but
nothing close to what a bogus service can do, so there is no risk of
confusion.
2019-04-24 16:04:23 +02:00
Willy Tarreau
2e9c1d2960 MINOR: stream: measure and report a stream's call rate in "show sess"
Quite a few times some bugs have made a stream task incorrectly
handle a complex combination of events, which was often reported as
"100% CPU", and was usually caused by the event not being properly
identified and flushed, and the stream's handler being called in loops.

This patch adds a call rate counter to the stream struct. It's not
huge, it's really inexpensive (especially compared to the rest of the
processing function) and will easily help spot such tasks in the "show sess"
output, possibly even allowing them to be killed.
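
A minimal sketch of such a per-second call counter, in the spirit of
HAProxy's freq_ctr but not the actual structure used by the stream:

   struct call_rate {
       unsigned int curr_sec;   /* start of the current one-second period */
       unsigned int curr_ctr;   /* calls seen in the current period */
       unsigned int prev_ctr;   /* calls seen in the previous period */
   };

   /* to be called once per invocation of the stream handler */
   static void call_rate_tick(struct call_rate *cr, unsigned int now_sec)
   {
       if (now_sec != cr->curr_sec) {
           /* keep the previous period only if it is contiguous */
           cr->prev_ctr = (now_sec == cr->curr_sec + 1) ? cr->curr_ctr : 0;
           cr->curr_ctr = 0;
           cr->curr_sec = now_sec;
       }
       cr->curr_ctr++;
   }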

A future patch should probably consist in alerting when they're above a
certain threshold, possibly sending a dump and killing them. Some options
could also consist in aborting in order to get an analyzable core dump
and let a service manager restart a fresh new process.
2019-04-24 16:04:23 +02:00
Willy Tarreau
0212fadd65 MINOR: tasks/activity: report the context switch and task wakeup rates
Seeing this is particularly useful to spot runaway tasks. The context
switch rate covers all tasklet calls (tasks and I/O handlers) while the
task wakeups only cover tasks picked from the run queue to be executed.
High values there will indicate either intense traffic or a bug that
makes a task go wild.
2019-04-24 16:04:23 +02:00
Willy Tarreau
69b5a7f1a3 CLEANUP: task: report calls as unsigned in show sess
The "show sess" output used signed ints to report the number of calls,
which is confusing for runaway tasks where the call count can turn
negative.
2019-04-24 16:04:23 +02:00
Christopher Faulet
4904058661 BUG/MINOR: htx: Exclude TCP proxies when the HTX mode is handled during startup
When tests are performed on the HTX mode during HAProxy startup, only HTTP
proxies are considered. This is important because, since commit 1d2b586cd
("MAJOR: htx: Enable the HTX mode by default for all proxies"), the HTX is
enabled on all proxies by default. But for TCP proxies, it is "deactivated".

This patch must be backported to 1.9.
2019-04-24 15:40:02 +02:00
Willy Tarreau
274ba67862 BUG/MAJOR: lb/threads: fix AB/BA locking issue in round-robin LB
An occasional divide by zero in the round-robin scheduler was addressed
in commit 9df86f997 ("BUG/MAJOR: lb/threads: fix insufficient locking on
round-robin LB") by grabbing the server's lock in fwrr_get_server_from_group().

But it happens that this is not the correct approach as it introduces a
case of AB/BA deadlock reported by Maksim Kupriianov. This happens when
a server weight changes from/to zero while another thread extracts this
server from the tree. The reason is that the functions used to manipulate
the state work under the server's lock and grab the LB lock while the ones
used in LB work under the LB lock and grab the server's lock when needed.
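
The pattern boils down to the classic AB/BA ordering problem, illustrated
below with plain pthread mutexes (this is not HAProxy code; function names
only mirror the two paths described above):

   #include <pthread.h>

   static pthread_mutex_t srv_lock = PTHREAD_MUTEX_INITIALIZER;
   static pthread_mutex_t lb_lock  = PTHREAD_MUTEX_INITIALIZER;

   static void *weight_change(void *arg)    /* takes A then B */
   {
       pthread_mutex_lock(&srv_lock);
       pthread_mutex_lock(&lb_lock);        /* may wait on lb_pick_server() */
       pthread_mutex_unlock(&lb_lock);
       pthread_mutex_unlock(&srv_lock);
       return arg;
   }

   static void *lb_pick_server(void *arg)   /* takes B then A: reversed */
   {
       pthread_mutex_lock(&lb_lock);
       pthread_mutex_lock(&srv_lock);       /* may wait on weight_change() */
       pthread_mutex_unlock(&srv_lock);
       pthread_mutex_unlock(&lb_lock);
       return arg;
   }

If two threads enter these two paths concurrently, each one can end up
waiting forever for the lock the other already holds.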

This commit mostly reverts the changes above and instead further completes
the locking analysis performed on this code to identify areas that really
need to be protected by the server's lock, since this is the only algorithm
which happens to have this requirement. This audit showed that in fact all
locations which require the server's lock are already protected by the LB
lock. This was not noticed the first time due to the server's lock being
taken instead and due to some functions misleadingly using atomic ops to
modify server fields which are under the LB lock protection (these have
now been removed).

The change consists in not taking the server's lock anymore here, and
instead making sure that the aforementioned function which used to
suffer from the server's weight becoming zero only uses a copy of the
weight which was verified beforehand to be non-null (when the weight
is null, the server will be removed from the tree anyway so there is
no need to recalculate its position).

With this change, the code survived an injection at 200k req/s split
on two servers with weights changing 50 times a second.

This commit must be backported to 1.9 only.
2019-04-24 14:23:40 +02:00
Olivier Houchard
a28454ee21 BUG/MEDIUM: ssl: Return -1 on recv/send if we got EAGAIN.
In ha_ssl_read()/ha_ssl_write(), if we couldn't send/receive data because
we got EAGAIN, return -1 and not 0, as older SSL versions expect that.
This should fix the problems with OpenSSL < 1.1.0.
2019-04-24 12:06:08 +02:00
Christopher Faulet
371723b0c2 BUG/MINOR: spoe: Don't systematically wakeup SPOE stream in the applet handler
This can lead to wakeups in a loop between the SPOE stream and the SPOE applets
waiting to receive agent messages (mainly AGENT-HELLO and AGENT-DISCONNECT).

This patch must be backported to 1.9 and 1.8.
2019-04-23 21:20:47 +02:00
Christopher Faulet
5e1a9d715e BUG/MEDIUM: stream: Fix the way early aborts on the client side are handled
A regression was introduced with the commit c9aecc8ff ("BUG/MEDIUM: stream:
Don't request a server connection if a shutw was scheduled"). Among other things,
it breaks the CLI when the shutr on the client side is handled with the client
data. Depending on the flag CF_SHUTW_NOW to not establish the server connection
when an error on the client side is detected is not the right way to fix the
bug, because this flag may be set without any error on the client side.

So instead, we abort the request where the error is handled and only when the
backend stream-interface is in the state SI_ST_INI. This way, there is no
ambiguity on the reason why the abort occurred. The stream-interface is also
switched to the state SI_ST_CLO.

This patch must be backported to 1.9. If the commit c9aecc8ff is backported to
previous versions, this one MUST also be backported. Otherwise, it MAY be
backported to versions older than 1.9 with caution.
2019-04-23 21:20:47 +02:00
Frédéric Lécaille
bed883abe8 BUG/MAJOR: stream: Missing DNS context initializations.
Fix some missing initializations which came with commit 333939c (MINOR: action:
new '(http-request|tcp-request content) do-resolve' action). The DNS contexts of
streams which were allocated were not initialized by stream_new(). This led to
accesses to non-allocated memory when freeing these contexts with stream_free().
2019-04-23 20:24:11 +02:00
Frédéric Lécaille
0bad840b4d MINOR: log: Extract some code to send syslog messages.
This patch extracts the code of __send_log() responsible for sending a syslog
message to a syslog destination represented as a logsrv struct, to define the
__do_send_log() function. __send_log() calls __do_send_log() for each syslog
destination of a proxy after having prepared some of its parameters.
2019-04-23 14:16:51 +02:00
Baptiste Assmann
333939c2ee MINOR: action: new '(http-request|tcp-request content) do-resolve' action
The 'do-resolve' action is an http-request or tcp-request content action
which allows running DNS resolution at run time in HAProxy.
The name to be resolved can be picked up in the request sent by the
client and the result of the resolution is stored in a variable.
While the resolution is being performed, the request is paused.
If the resolution can't provide a suitable result, then the variable
will be empty. It's up to the admin to take decisions based on this
situation (e.g. return a 503 to prevent loops).

Read carefully the documentation concerning this feature, to ensure your
setup is secure and safe to be used in production.

This patch creates a global counter to track various errors reported by
the action 'do-resolve'.
2019-04-23 11:41:52 +02:00
Baptiste Assmann
db4c8521ca MINOR: dns: move callback affection in dns_link_resolution()
In dns.c, dns_link_resolution(), each type of DNS requester is managed
separately; that said, the callback function is assigned globally (and
points to server type callbacks only).
This design prevents the addition of new DNS requester types and this
patch aims at fixing this limitation: now, the callback setting is done
directly in the portion of code dedicated to each requester type.
2019-04-23 11:34:11 +02:00
Baptiste Assmann
dfd35fd71a MINOR: dns: dns_requester structures are now in a memory pool
The dns_requester structure can be allocated at run time when servers get
associated with DNS resolution (this happens when SRV records are used in
conjunction with service discovery).
This memory allocation is safer if managed in an HAProxy pool, all the
more so with the upcoming HTTP action which can perform DNS resolution
at runtime.

This patch moves the memory management of the dns_requester structure
into its own pool.
2019-04-23 11:33:48 +02:00
paulborile
7714b12604 MINOR: wurfl: enabled multithreading mode
The initially excluded multithreaded mode is now completely supported (libwurfl
is fully MT-safe). Internal tests are now also run with multithreading enabled.
2019-04-23 11:00:23 +02:00
paulborile
bad132c384 CLEANUP: wurfl: removed deprecated methods
The last 2 major releases of libwurfl included a complete review of the engine
options, with the result of deprecating many features. The patch removes
unnecessary code and fixes the documentation.
Can be backported to any version of haproxy.

[wt: must not be backported since it removes config keywords and would
 thus break existing configurations]
Signed-off-by: Willy Tarreau <w@1wt.eu>
2019-04-23 11:00:23 +02:00
paulborile
59d50145dc BUILD: wurfl: build fix for 1.9/2.0 code base
This applies the required changes for the new buffer API that came in 1.9.
This patch must be backported to 1.9.
2019-04-23 11:00:23 +02:00
Willy Tarreau
b518823f1b MINOR: wurfl: indicate in haproxy -vv the wurfl version in use
It also explicitly mentions that the library is the dummy one when it
is detected.

We have this output now :

$ ./haproxy  -vv |grep -i wurfl
Built with WURFL support (dummy library version 1.11.2.100)
2019-04-23 11:00:23 +02:00
Willy Tarreau
b3cc9f2887 Revert "CLEANUP: wurfl: remove dead, broken and unmaintained code"
This reverts commit 8e5e1e7bf000f2603a085c76be118214d22c55b4.

The following patches will fix this code and may be backported.
2019-04-23 10:34:43 +02:00
Emeric Brun
d0e095c2aa MINOR: ssl/cli: async fd io-handlers printable on show fd
This patch exports the async fd io-handlers and makes them printable
when doing a 'show fd' on the CLI.
2019-04-19 17:27:01 +02:00
Christopher Faulet
46451d6e04 MINOR: gcc: Fix a silly gcc warning in connect_server()
Don't know why it happens now, but gcc seems to think srv_conn may be NULL when
a reused connection is removed from the orphan list. It happens when HAProxy is
compiled with -O2 with my gcc (8.3.1) on fedora 29... Changing a little how
the reuse parameter is tested removes the warnings. So...

This patch may be backported to 1.9.
2019-04-19 15:53:23 +02:00
Christopher Faulet
f48552f2c1 BUG/MINOR: da: Get the request channel to call CHECK_HTTP_MESSAGE_FIRST()
Since the commit 89dc49935 ("BUG/MAJOR: http_fetch: Get the channel depending on
the keyword used"), the right channel must be passed as argument when the macro
CHECK_HTTP_MESSAGE_FIRST is called.

This patch must be backported to 1.9.
2019-04-19 15:53:23 +02:00