After a POST on the stats admin page, a 303 is emitted. Unfortunately
this 303 doesn't contain a content-length, which forces the connection
to be closed and reopened. Let's simply add a content-length: 0 to solve
this.
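For illustration, the response now looks like this (the Location value is just an example):

    HTTP/1.1 303 See Other
    Content-Length: 0
    Location: /stats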
Commit 16f649c ("REORG: polling: rename "fd_spec" to "fd_cache"")
missed the server-facing connection during the rename, so the old
names are still in use and add a bit of confusion during
debugging.
This should be backported to 1.6 and 1.5.
When a connect() to a unix socket returns EAGAIN we talk about
"no free ports" in the error/debug message, which only makes
sense when using TCP.
Make the message explain the connect() failure instead, and suggest
checking the server's backlog size.
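As an illustrative sketch only (the exact strings and variables differ in the code):

    if (errno == EAGAIN) {
        if (addr->ss_family == AF_UNIX)
            msg = "connect() failed: check the server's backlog size";
        else
            msg = "no free ports";
    }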
These two actions don't touch the table so the entry will expire and
the values will not be pushed to other peers. Also in the case of gpc0,
the gpc0_rate counter must be updated. The issue was reported by
Ruoshan Huang.
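For reference, a minimal configuration exercising these actions could look like this (names and values purely illustrative):

    backend app
        stick-table type ip size 100k expire 30s store gpc0,gpc0_rate(10s)
        tcp-request content track-sc0 src
        tcp-request content sc-inc-gpc0(0)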
This fix needs to be backported to 1.6.
Arkadiy Kulev noticed that if a server is marked down while a connection
to it is still trying to establish, we still insist on performing retries
on the same server, which is absurd. Better perform the redispatch if we
already know the server is down. Because of this, it's likely that the
observe-l4 and sudden-death mechanisms are not optimal and cannot do much
for the connection which was used to detect the problem.
The fix should be backported to 1.6 and 1.5 at least.
When users request 16384 bytes for a buffer, they get 16392 after
rounding up. This is problematic for SSL as it systematically
causes a small 8-bytes message to be appended after the first 16kB
message and costs about 15% of performance.
Let's add MEM_F_EXACT to use exactly the size we need. This requires
previous patch (MEDIUM: pools: add a new flag to avoid rounding pool
size up).
This issue was introduced in 1.6 and causes trouble there, so this
fix must be backported.
This issue was reported by Gary Barrueto and diagnosed by Cyril Bonté.
Usually it's desirable to merge similarly sized pools, which is the
reason why their size is rounded up to the next multiple of 16. But
for the buffers this is problematic because we add the size of
struct buffer to the user-requested size, and the rounding results
in 8 extra bytes that are usable in the end. So the user gets more
bytes than asked for, and in case of SSL it results in short writes
for the extra bytes that are sent above multiples of 16 kB.
So we add a new flag MEM_F_EXACT to request that the size is not
rounded up when creating the entry. Note that it doesn't disable merging.
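As a sketch of the arithmetic, assuming sizeof(struct buffer) == 40 (the exact value depends on the build; only the remainder modulo 16 matters):

    size = sizeof(struct buffer) + 16384;   /* 40 + 16384 = 16424 */
    size = (size + 15) & ~15;               /* rounded up to 16432 */
    /* usable size: 16432 - 40 = 16392, 8 bytes more than requested */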
Gregor Kovač reported that http_date() did not return the right day of the
week. For example "Sat, 22 Jan 2016 17:43:38 GMT" instead of "Fri, 22 Jan
2016 17:43:38 GMT". Indeed, gmtime() returns a 'struct tm' result, where
tm_wday begins on Sunday, whereas the code assumed it began on Monday.
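Since the C standard defines tm_wday as days since Sunday (0-6), the lookup table must start on Sunday, e.g.:

    static const char wdays[7][4] =
        { "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat" };
    /* the day name is then simply wdays[tm->tm_wday] */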
This patch must be backported to haproxy 1.5 and 1.6.
The server state functions save and apply the server IP when DNS
resolution is enabled on a server.
The purpose is to prevent switching traffic from one server to another
when multiple IPs are returned by the DNS server for the A or AAAA
record.
That said, a bug in the current code led to erasing the service port
while copying the IP found in the file into the server structure in
HAProxy's memory.
This patch fixes this bug.
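A rough sketch of the idea, with hypothetical variable names (the real code works on the server's address storage):

    struct sockaddr_in *sin = (struct sockaddr_in *)&srv->addr;
    in_port_t port = sin->sin_port;        /* save the service port */
    memcpy(&srv->addr, &addr_from_file, sizeof(srv->addr));
    sin->sin_port = port;                  /* restore it after the copy */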
The bug was reported on the ML by Robert Samuel Newson and the fix was
proposed by Nenad Merdanovic.
Thank you both!!!
backport: can be backported to 1.6
Erez reported a bug on discourse.haproxy.org about DNS resolution not
occurring when no port is specified on the nameserver directive.
This patch prevents this behavior by returning an error explaining the
issue when parsing the configuration file.
That said, later, we may want to force port 53 when the client did not
provide one.
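Until then, the port must be given explicitly, for example:

    resolvers mydns
        nameserver dns1 10.0.0.1:53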
backport: 1.6
This function should return a 16-bit type as that is the type of the
DNS header id.
Also because it performs a big-endian uint16 unpack operation.
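As a sketch, such an unpack typically looks like this (function name illustrative):

    #include <stdint.h>

    static inline uint16_t unpack_u16be(const unsigned char *p)
    {
        return (uint16_t)((p[0] << 8) | p[1]);   /* big-endian uint16 */
    }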
Backport: can be backported to 1.6
Signed-off-by: Thiago Farina <tfarina@chromium.org>
Signed-off-by: Baptiste Assmann <bedis9@gmail.com>
Changes to if statements do not affect code operation, just the layout
of the code. Type casts from malloc returns have been removed as the
conversion from the void * type happens automatically in C.
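In C the void * returned by malloc() converts implicitly to any object pointer type, so for example:

    struct foo *f = malloc(sizeof(*f));   /* no (struct foo *) cast needed */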
This may be backported to 1.6.
Parameters provided to 51Degrees methods that have changed to require
const pointers are now cast to avoid compiler warnings.
This should be backported to 1.6.
Malloc continues to be used for the creation of cache entries. The
implementation has been enhanced and is now ready for production. A new
method to free cache entries created in 51d.c has been added to ensure
memory is released correctly.
This should be backported to 1.6.
Args pointer is now used as the LRU cache domain to ensure the cache
distinguishes between multiple fetch and conv configurations.
This should be backported to 1.6.
This patch introduces a new function in the LRU cache source file.
Its purpose is to delete a number of entries in the cache. 'number'
is defined by the caller and the keys removed are taken from the tail
of the tree.
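The helper plausibly has a shape like this (signature is an assumption, not verified against the source):

    /* remove the <nb> least recently used entries, from the tail */
    void lru64_kill_oldest(struct lru64_head *lru, int nb);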
csv_enc_append() returns a pointer to the beginning of the encoded
string, which makes it convenient to use in printf(). However it's not
convenient for use in chunks as it may leave an unused byte at the
beginning depending on the automatic quoting. Let's modify it to work
in two passes. First it looks for a character that requires escaping
using strpbrk(), and second it encodes the string. This way it
guarantees to always start at the first available byte of the chunk.
Additionally it makes the code quite a bit simpler.
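A sketch of the first pass (assuming these are the characters that trigger quoting):

    /* pass 1: quoting is needed only if a delimiter, a quote or CR/LF
     * appears in the string */
    int must_quote = (strpbrk(str, "\n\r,\"") != NULL);
    /* pass 2: copy the string, doubling '"' and adding surrounding
     * quotes only when must_quote is set */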
We have csv_enc() but there's no way to append some CSV-encoded data
to an existing chunk, so here we modify the existing function for this
and create an inlined version of csv_enc() which first resets the output
chunk. It will be handy to append data to an existing chunk without
having to use an extra temporary chunk, or to encode multiple strings
into a single chunk with chunk_newstr().
The patch is quite small, in fact most changes are typo fixes in the
comments.
The function http_reply_and_close has been added in proto_http.c to wrap
calls to stream_int_retnclose. This function will be modified when the
filters are added.
When the response body is forwarded, if the server closes the input before the
end, an error is thrown. But if the data processing is too slow, all data could
already be received and pending in the input buffer. So it is a bug to
stop processing in this context: the server didn't really close the input
before the end.
As an example, this could happen when HAProxy is configured to do compression
offloading. If the server closes the connection explicitly after the response
(keep-alive disabled by the server) and if HAProxy receives the data faster
than it can compress them, then the response could be truncated.
This patch fixes the bug by checking if some pending data remain in the input
buffer before returning an error. If yes, the processing continues.
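A hedged sketch of the check, using 1.6-era buffer fields (res->buf->i counts pending input bytes; names shown as assumptions):

    /* only report a server abort when nothing is left to process */
    if (server_closed_input && res->buf->i == 0)
        goto aborted_xfer;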
Several cases of "<=" instead of "<" were found in the url_param parser,
mostly affecting the case where the parameter is wrapping. They shouldn't
affect header operations, just body parsing in a wrapped pipelined request.
The code is a bit complicated with certain operations done multiple times
in multiple functions, so it is not certain that no other such bugs remain.
This code must be re-audited.
It should only be backported to 1.6 once carefully tested, because it is
possible that other bugs relied on these ones.
The applet can't have access to the session private data. This patch
fixes this problem. Now an applet can use private data stored by actions
and fetches.
INNER and XFERBODY analyzer were set in order to support HTTP applets
from TCP rulesets, but this does not work (cf previous patch).
Other cases already provide these analyzers, so their addition is
not needed. Furthermore if INNER was set it could cause some headers
to be rewritten (ex: connection) after headers were already forwarded,
resulting in a crash in buffer_insert_line2().
Special thanks to Bernd Helm for providing very detailed information,
captures and stack traces making it possible to spot the root cause
here.
This fix must be backported to 1.6.
HTTP applet requests require everything initialized by
"http_process_request" (analyzer flag AN_REQ_HTTP_INNER).
The applet will be initialized immediately, which happens before
this analyzer is called.
Due to this problem HTTP applets could be called with an incompletely
initialized http_txn.
This fix must be backported to 1.6.
In certain circumstances (eg: Lua HTTP applet called from a
TCP ruleset before http_process_request()), the HTTP TXN is not
yet fully initialized so some information it contains cannot be
relied on. Such information includes the HTTP version, the state
of the expect: 100-continue header, the connection header and
the transfer-encoding header.
Here the bug only turns something which already doesn't work
into something wrong, but better avoid any references to the
http_txn from the Lua code to avoid future mistakes.
This patch should be backported into 1.6 for code consistency.
If a sample fetch needing http_txn is called from an HTTP Lua applet,
the result will be invalid and may even cause a crash because some HTTP
data can be forwarded and the HTTP txn is no longer valid.
Here the solution is to ensure that a fetch called from Lua never
needs http_txn. This is done thanks to a new flag HLUA_F_MAY_USE_HTTP
which indicates whether or not it is safe to call a fetch which needs
HTTP.
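A sketch of the guard (SMP_USE_HTTP_ANY and the surrounding names are assumptions about the internals):

    if ((f->use & SMP_USE_HTTP_ANY) && !(hlua->flags & HLUA_F_MAY_USE_HTTP))
        return luaL_error(L, "sample fetch needs an HTTP context");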
This fix needs to be backported to 1.6.
This patch converts a boolean "int" to a bitfield. The main
reason is to save space in the struct in case another flag is
required later.
Note that this patch is required for next fix and will need to be
backported to 1.6.
When a POST is processed by a Lua service, the HTTP headers are
potentially gone. So, we cannot retrieve their content using the
standard "hdr" sample fetches (which will soon become invalid anyway)
from an applet.
This patch adds an entry "headers" to the object applet_http. This
entry is an array containing all the headers. It permits using the
HTTP headers during the processing of the service.
Many thanks to Jan Bruder for reporting this issue with enough
details to reproduce it.
This patch will have to be backported to 1.6 since it will be the
only way to access headers from Lua applets.
New sticktable entries learned from a remote peer can be pushed to others after
a random delay because they are not inserted at the right position in the updates
tree.
When memmax is forced using "-m", the per-process memory limit is enforced
using setrlimit(), but this value is not used to compute the automatic
maxconn limit. In addition, the per-process memory limit didn't consider
the fact that the shared SSL cache only needs to be accounted once.
The doc was also fixed to clearly state that "-m" is global and not per
process. It makes sense because people who use -m want to protect the
system's resources regardless of whatever appears in the configuration.
This setting used to be assigned to a tunable variable initialized from
a constant, but for an unknown reason it never made its way into the
config parser.
tune.recv_enough <number>
Haproxy uses some hints to detect that a short read indicates the end of the
socket buffers. One of them is that a read returns more than <recv_enough>
bytes, which defaults to 10136 (7 segments of 1448 each). This default value
may be changed by this setting to better deal with workloads involving lots
of short messages such as telnet or SSH sessions.
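Example usage in the global section (value purely illustrative):

    global
        tune.recv_enough 4096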
Added support for loading multiple certs into shared contexts when they
are specified in a crt-list.
Note that it's not practical to support SNI filters with multicerts, so
any SNI filter provided to the crt-list is ignored if a multi-cert
operation is used.
Added the ability for users to specify multiple certificates that all
relate to a single server. Users do this by specifying certificate "cert_name.pem"
but having "cert_name.pem.rsa", "cert_name.pem.dsa" and/or
"cert_name.pem.ecdsa" in the directory.
HAProxy will now intelligently search for those 3 files and try to
combine them into as few SSL_CTXs as possible based on CN/SAN. This will allow
HAProxy to support multiple ciphersuite key algorithms off a single
SSL_CTX.
This change integrates into the existing architecture of SNI lookup and
multiple SNIs can point to the same SSL_CTX, which can support multiple
key_types.
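For example, with "site.pem.rsa" and "site.pem.ecdsa" present next to it, a bind line only references the base name (paths illustrative):

    bind :443 ssl crt /etc/haproxy/certs/site.pem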
Added cert_key_and_chain struct to ssl. This struct will store the
contents of a crt path (from the config file) into memory. This will
allow us to use the data stored in memory instead of reading the file
multiple times.
This will be used to support a later commit to load multiple pkeys/certs
into a single SSL_CTX.
In order to properly enable sched_setaffinity(), on some versions of
Linux it is _GNU_SOURCE rather than __USE_GNU that must be defined
(spotted on Alpine Linux for instance). This is also more consistent,
as __USE_GNU is not used anywhere else in the code, and it seems to be
the preferred way to enable non-portable code on Linux.
On glibc-based Linux versions, _GNU_SOURCE defines __USE_GNU, so it
should be safe enough.
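For reference, the define must come before any libc header:

    #define _GNU_SOURCE      /* exposes sched_setaffinity() */
    #include <sched.h>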
Krishna Kumar reported that the following configuration doesn't permit
HTTP reuse between two clients :
    frontend private-frontend
        mode http
        bind :8001
        default_backend private-backend

    backend private-backend
        mode http
        http-reuse always
        server bck 127.0.0.1:8888
The reason for this is that in http_end_txn_clean_session() we check the
stream's backend's http-reuse option before deciding whether the
backend connection should be moved back to the server's pool or not. But
since we're doing this after the call to http_reset_txn(), the backend is
reset to match the frontend, which doesn't have the option. However it
will work fine in a setup involving a "listen" section.
We just need to keep a pointer to the current backend before calling
http_reset_txn(). The code does that and replaces the few remaining
references to s->be inside the same function so that if any part of
code were to be moved later, this trap doesn't happen again.
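A sketch of the pattern (flag and helper names are assumptions about the 1.6 internals):

    struct proxy *be = s->be;   /* save before the reset */
    http_reset_txn(s);          /* s->be now mirrors the frontend */
    if ((be->options & PR_O_REUSE_MASK) != PR_O_REUSE_NEVR)
        srv_keep_idle_conn(srv_conn);   /* hypothetical helper */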
This fix must be backported to 1.6.
A small configuration parsing error exists when no port is set up on the
server IP:port statement, the server's 'port' parameter is not set, and
the first tcp-check rule is a comment, like in the example below:
    backend b
        option tcp-check
        tcp-check comment blah
        tcp-check connect port 8444
        server s 127.0.0.1 check
In such a case, an ALERT is improperly emitted, even though this
configuration is valid and works.
The new code moves the pointer to the first tcp-check rule which isn't a
comment before checking for the presence of the port.
backport status: 1.6 and above
Current configuration parsing is permissive in the following situation:
a server in a backend with no port configured on the IP address
statement, no 'port' parameter configured, and the last rule of a
tcp-check being a CONNECT with no port.
The current code parses all the rules to validate that a port is indeed
available, but it misses the last one, which means the following
configuration is considered valid:
    backend b
        option tcp-check
        tcp-check connect port 8444
        tcp-check connect
        server s 127.0.0.1 check
The second connect attempt is sent to port '0'...
The current patch fixes this by parsing the list the right way, including
the last rule.
backport status: 1.6 and above