Released version 1.3.14.10 with the following main changes :
- [MINOR] cfgparse: fix off-by-2 in error message size
- [BUG] cookie capture is declared in the frontend but checked on the backend
Cookie capture only worked by pure luck on requests and never worked
on responses, since only the backend was checked. The fix consists in
always checking the frontend for cookie captures.
(cherry picked from commit bfca9e51b77b856593a3c4a3215a8e0397e7cdba)
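As a rough illustration of the idea (field and helper names are approximate,
not the literal patch), the capture has to be looked up through the session's
frontend on both the request and the response paths:

  /* sketch: "capture cookie" is declared on the frontend, so the
   * frontend proxy must be consulted when capturing on the response
   * path as well (previously the backend, s->be, was checked there) */
  struct proxy *fe = s->fe;
  if (fe->capture_name != NULL)
          capture_cookie(s, fe, rsp);   /* hypothetical helper */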
Released version 1.3.14.9 with the following main changes :
- [BUG] do not try to pause backends during reload
- [BUG] ensure that listeners from disabled proxies are correctly unbound.
- [BUG] acl-related keywords are not allowed in defaults sections
Using an ACL-related keyword in the defaults section causes a
segfault during parsing because the list headers are not initialized.
We must initialize the list headers for the default instance and
reject keywords relying on ACLs.
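A minimal sketch of the approach (list member names are approximate), using
haproxy's LIST_INIT() macro and the usual cfgparse error reporting:

  /* make walking the default instance's lists always safe */
  LIST_INIT(&defproxy.acl);
  LIST_INIT(&defproxy.block_cond);

  /* and refuse ACL-related keywords while parsing a defaults section */
  if (curproxy == &defproxy) {
          Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n",
                file, linenum, args[0]);
          return -1;
  }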
There is a problem when an instance is marked "disabled". Its ports are
still bound but will not be unbound upon termination. This causes processes
to accumulate during soft restarts, and might even cause failures to restart
new ones due to the inability to bind to the same port.
The ideal solution would be to bind all ports at the end of the configuration
parsing. An acceptable workaround is to unbind all listeners of disabled
proxies. This is what the current patch does.
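Conceptually, the workaround looks like the following sketch (structure and
field names are approximate):

  /* after configuration parsing, release the sockets of any proxy
   * marked "disabled" so that they cannot survive soft restarts */
  for (p = proxy; p; p = p->next) {
          if (p->state != PR_STSTOPPED)
                  continue;
          for (l = p->listen; l; l = l->next) {
                  if (l->fd >= 0)
                          close(l->fd);   /* unbind the port right away */
                  l->fd = -1;
          }
  }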
During a configuration reload, haproxy tried to pause all proxies.
Unfortunately, it also tried to pause backends, which would fail
and cause trouble for the new process since the port was still bound.
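The fix essentially amounts to a test like the following sketch (the exact
condition used in the patch may differ):

  /* pure backends have no bound listeners, so there is nothing to
   * pause for them; only frontends must be paused during a reload */
  for (p = proxy; p; p = p->next) {
          if (!(p->cap & PR_CAP_FE))
                  continue;               /* skip backends */
          pause_proxy(p);
  }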
Released version 1.3.14.8 with the following main changes :
- [BUG] do not release the connection slot during a retry
- [BUG] dynamic connection throttling could return a max of zero conns
srv_dynamic_maxconn() is clearly documented as returning at least 1
possible connection under throttling. But the computation was wrong:
the minimum of 1 was divided and got lost in case of very low maxconn
values.
Apply the MAX(1, max) before returning the result in order to ensure
that a newly appeared server will get some traffic.
(cherry picked from commit 819970098f134453c0934047b3bd3440b0996b55)
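A simplified sketch of the intent (the real computation also accounts for
minconn and the warm-up state; names and formula are approximate):

  /* dynamic limit scaled by the backend's current load, clamped so
   * that a freshly started server always gets at least one connection;
   * fullconn is assumed to be non-zero here */
  static unsigned int dyn_maxconn(unsigned int maxconn,
                                  unsigned int beconn, unsigned int fullconn)
  {
          unsigned int max = maxconn * beconn / fullconn;
          return max < 1 ? 1 : max;       /* MAX(1, max) applied last */
  }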
A bug was introduced during the last queue management fix. If a server
connection fails, the allocated connection slot is released, but it
will be needed again after the turn-around. This also causes more
connections than expected to go to the server because it appears to
have fewer connections than it really has.
Many thanks to Rupert Fiasco, Mark Imbriaco, Cody Fauser, Brian
Gupta and Alexander Staubo for promptly providing configuration
and diagnosis elements to help reproduce this problem easily.
(cherry picked from commit 8262d8bd7fdb262c980bd70cb2931e51df07513f)
Released version 1.3.14.7 with the following main changes :
- [BUG] use_backend would not correctly consider "unless"
- [BUG] disable buffer read timeout when reading stats
- [BUILD] change declaration of base64tab to fix build with Intel C++
- [CLEANUP] remove dependency on obsolete INTBITS macro
- [BUG] server timeout was not considered in some circumstances
- [BUG] ev_sepoll: closed file descriptors could persist in the spec list
- [BUG] maintain_proxies must not disable backends
- [BUG] regparm is broken on gcc < 3
- [OPTIM] force inlining of large functions with gcc >= 3
GCC 3 and above do not inline large functions, which is a problem
for ebtree, where most core functions are meant to be inlined.
This simple patch has both reduced code size and increased speed.
It should be back-ported to ebtree.
(cherry picked from commit 707d3da01f8475d5c172d347a73bd9e947076df6)
(cherry picked from commit 21cca2e81a3d9ceaafad17e9cdd19dffe4c61776)
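The usual way to achieve this with gcc >= 3 is the always_inline attribute,
as in this sketch (the macro name is illustrative):

  /* ask gcc >= 3 to inline even the functions it considers too large */
  #if defined(__GNUC__) && (__GNUC__ >= 3)
  #define forceinline inline __attribute__((always_inline))
  #else
  #define forceinline inline
  #endif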
gcc < 3 does not honour regparm declarations on function pointers.
This causes big trouble, at least with the pollers (and with any
function pointer, for that matter). Disable CONFIG_HAP_USE_REGPARM
for gcc < 3.
(cherry picked from commit 61eadc028fb8774ea05d893cd3eca6c671fb511e)
(cherry picked from commit ee113f5345c49a1e8ea9c8ea6b047f3c0f43db1f)
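In practice this comes down to guarding the regparm macros on the compiler
version, along these lines (sketch):

  /* gcc < 3 silently ignores regparm on function pointers, so direct
   * and indirect calls would disagree on the calling convention;
   * only enable it on gcc >= 3 */
  #if defined(CONFIG_HAP_USE_REGPARM) && defined(__GNUC__) && (__GNUC__ >= 3)
  #define REGPRM2 __attribute__((regparm(2)))
  #else
  #define REGPRM2
  #endif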
maintain_proxies could disable backends (p->maxconn == 0), which is
wrong (though apparently harmless). Add a check for p->maxconn == 0.
(cherry picked from commit d5382b4aaa099ce5ce2af5828bd4d6dc38e9e8ea)
(cherry picked from commit 2f9127b4b91de1ac685498e145f29342115bcb71)
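In other words, the added check is essentially (sketch):

  /* maxconn == 0 means "no limit" (typical for backends) and must not
   * be confused with a reached limit, so leave such proxies alone */
  if (p->maxconn == 0)
          continue;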
If __fd_clo() was called on a file descriptor which was previously
disabled, it was not removed from the spec list. This apparently
could not happen on previous code because the TCP states prevented
this, but now it happens regularly. The effect is that spec entries
remain populated, leading to busy loops.
(cherry picked from commit 7a52a5c4680477272b2f34eaf5896b85746e6fd6)
(cherry picked from commit 116f4105d4fc6fbd8f2d0a139f691973332176de)
Due to a copy-paste typo, the client timeout was refreshed instead
of the server's when waiting for the server's response. This means
that the server's timeout was effectively infinite.
(cherry picked from commit 9f1f24bb7fb8ebd6b43b5fee1bda0afbdbcb768e)
(cherry picked from commit df82605d3e73573ae842a1ddaf418997bef33274)
The INTBITS macro was found to be already defined on some platforms,
and to equal 32 (while INTBITS was 5 here). Due to pure luck, there
was no declaration conflict, but it's nonetheless a problem to fix.
Looking at the code showed that this macro was only used for left
shifts and nothing else anymore. So the replacement is obvious. The
new macro, BITS_PER_INT, is more obviously correct.
(cherry picked from commit 177e2b012723ef65c6c7f850df3e6e0cd2cca2b4)
(cherry picked from commit 0e3e59b11f7926a570cfc98d8967b61098c91602)
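For illustration, the new macro simply expresses the bit count directly
(sketch; the helper functions are only an example of its use):

  /* old: #define INTBITS 5  (log2 of the bits in an int, used as 1 << INTBITS)
   * new: the number of bits itself, under an unambiguous name */
  #define BITS_PER_INT (8 * sizeof(int))

  /* example: locate file descriptor <fd> inside a bitmap of ints */
  static inline unsigned int fd_word(int fd) { return fd / BITS_PER_INT; }
  static inline unsigned int fd_bit(int fd)  { return fd % BITS_PER_INT; }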
I got a report that Intel C++ complains about the size of the
base64tab in base64.c. Setting it to 65 chars to allow for the
trailing zero fixes the problem.
(cherry picked from commit 69e989ccbcc1d5cbb623493d6c9cca169fb36ff6)
(cherry picked from commit 66c9f287c2c2d016eb12cb3ab12cd80b5c225f5e)
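The declaration issue is simply that a 64-character string literal needs
room for its trailing '\0' (sketch):

  /* 64 symbols plus the trailing '\0' require 65 bytes; a size of 64
   * makes strict compilers such as Intel C++ reject the initializer */
  static const char base64tab[65] =
          "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";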
The buffer read timeouts were not reset when stats were produced. This
caused unneeded wakeups.
(cherry picked from commit 284c7b319566a66d5b742c905072175aac6445e1)
(cherry picked from commit 80f35306e97e8ae762f81a178c8c225b5bbac91e)
A copy-paste typo made use_backend not correctly consider the "unless"
case, depending on the previous "block" rule.
(cherry picked from commit a8cfa34a9c011cecfaedfaf7d91de3e5f7f004a0)
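The usual evaluation pattern inverts the ACL result for "unless", roughly as
in this simplified fragment (names follow haproxy's ACL code, the switching
rule fields are approximate):

  /* evaluate the rule's own condition and apply the rule's own
   * polarity, instead of accidentally reusing the preceding "block"
   * rule's polarity */
  ret = acl_exec_cond(rule->cond, px, s, txn);
  if (rule->cond->pol == ACL_COND_UNLESS)
          ret = !ret;
  if (ret)
          s->be = rule->be.backend;       /* switch to that backend */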
Released version 1.3.14.6 with the following main changes :
- [BUILD] make install should depend on haproxy not "all"
- [BUG] event pollers must not wait if a task exists in the run queue
- [BUG] queue management: wake oldest request in queues
- [BUG] log: reported queue position was off-by-one
- [BUG] fix the dequeuing logic to ensure that all requests get served
- [DOC] documentation for the "retries" parameter was missing.
The dequeuing logic was completely wrong. First, a task was assigned
to all servers to process the queue, but this task was never scheduled
and was only woken up on session free. Second, there was no reservation
of server entries when a task was assigned a server. This means that
as long as the task was not connected to the server, its presence was
not accounted for. This was causing trouble when detecting whether or
not a server had reached maxconn. Third, during a redispatch, a session
could lose its place at the server's and get blocked because another
session at the same moment would have stolen the entry. Fourth, the
redispatch option did not work when maxqueue was reached for a server,
and it was not possible to do so without indefinitely hanging a session.
The root cause of all those problems was the lack of pre-reservation of
connections at the server's, and the lack of tracking of servers during
a redispatch. Everything relied on combinations of flags which could
appear similarly in quite distinct situations.
This patch is a major rework but there was no other solution, as the
internal logic was deeply flawed. The resulting code is cleaner, more
understandable, relies on less magic and is overall more robust.
As an added bonus, "option redispatch" now works when maxqueue has
been reached on a server.
The reported queue position in the logs was 0 for the first pending request
in the queue, which is wrong because it means that one request will have to
be completed before the queued one may execute. It caused the undesired side
effect that 0/0 was reported when either 0 or 1 request was pending in the
queue. Thus, we have to increment the queue size before reporting the value.
When a server terminated a connection, the next session in its
own queue was immediately processed. Because of this, if all
server queues were always filled, no new anonymous request would
ever be processed. Now the oldest request between the global and
server queues is considered when choosing which one to pick.
An improvement over this will consist in adding a configurable
offset when comparing expiration dates, so that cookie-less
requests can get either less or more priority.
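The selection described above can be sketched as follows (pendconn_first()
and the queue_time member are illustrative, not the actual structures):

  /* pick whichever pending connection has waited the longest, whether
   * it sits in the server's own queue or in the proxy's global queue,
   * so that anonymous (cookie-less) requests are not starved */
  struct pendconn *ps = pendconn_first(&srv->pendconns);
  struct pendconn *pp = pendconn_first(&px->pendconns);
  struct pendconn *oldest;

  if (ps && pp)
          oldest = tv_isle(&ps->queue_time, &pp->queue_time) ? ps : pp;
  else
          oldest = ps ? ps : pp;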
Under some circumstances, a task may already lie in the run queue
(eg: inter-task wakeup). It is disastrous to wait for an event in
this case because some processing gets delayed.
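On the poller side, the rule can be sketched as follows (illustrative
variable names):

  /* never sleep in the poller when a task is already runnable,
   * otherwise its processing is delayed by up to the poll timeout */
  int wait_time;                          /* milliseconds given to poll */

  if (run_queue_not_empty)                /* illustrative condition */
          wait_time = 0;
  else
          wait_time = next_expiration_ms;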
Reported by Cherife Li : just doing a "make install" fails because it
depends on "all" which is equivalent to "help" if no TARGET was specified.
Make it depend on "haproxy" instead.
Released version 1.3.14.5 with the following main changes :
- [BUILD] fix build with gcc 4.3
- [TESTS] add a debug patch to help trigger the stats bug
- [BUG] Flush buffers also when there are exactly 0 bytes left
- [DOC] fix unescaped space in httpchk example.
- [DOC] update the README file with new build options
- [MEDIUM] reduce risk of event starvation in ev_sepoll
If too many events are set for spec I/O, they can starve the
polled events. Experiments show that when polled events starve, they
quickly turn into spec I/O, making the situation even worse. While
we can reduce the number of polled events processed at once, we
cannot do this on speculative events because most of them are new
ones (avg 2/3 new - 1/3 old from experiments).
The solution to this problem relies on these two factors :
1) one FD registered as a spec event cannot be polled at the same time
2) even during very high loads, we will almost never be interested in
simultaneous read and write streaming on the same FD.
The first point implies that during starvation, we will not have more than
half of our FDs in the poll list, otherwise it means there is less than that
in the spec list, implying there is no starvation.
The second point implies that we're statistically only interested in half
of the maximum number of file descriptors at once, because we are unlikely
to have simultaneous reads and writes on the same buffer for long periods.
So, if we make it possible to drain maxsock/2/2 during peak loads, then we
can ensure that there will be no starvation effect. This means that we must
always allocate maxsock/4 events for the poller.
Last, sepoll uses an optimization consisting in reducing the number of calls
to epoll_wait() to once every two polls. However, when dealing with many
spec events, we can wait very long and skipping epoll_wait() every second
time increases latency. For this reason, we try to detect if we are beyond
a reasonable limit and stop doing so at this stage.
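The sizing argument can be summarized in one line (sketch; names follow the
poller code but the exact expression may differ):

  /* at most half of the FDs can be polled while the spec list starves,
   * and only one direction (read or write) per FD really matters, so
   * maxsock / 2 / 2 event slots are enough to avoid starvation */
  absmaxevents = MAX(global.tune.maxpollevents, global.maxsock / 4);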
Fedora 9 will ship with gcc 4.3, and right now haproxy does not compile
with gcc 4.3.
It appears that there is a reordering of headers or something along those
lines. This is the patch that gets haproxy to compile with gcc 4.3. I'm not
sure if this is the correct approach you would want to use, so please
correct me.
If this works for you, I'll go ahead and put this patch in the src rpm until
a version of haproxy which compiles with gcc 4.3 is released.
About: [BUG] Flush buffers also when there are exactly 0 bytes left
I'm also attaching a debug patch that helps to trigger this bug.
Without the fix:
# echo -ne "GET /haproxy?stats;csv;norefresh HTTP/1.0\r\n\r\n"|nc 127.0.0.1 801|wc -c
16384
With the fix:
# echo -ne "GET /haproxy?stats;csv;norefresh HTTP/1.0\r\n\r\n"|nc 127.0.0.1 801|wc -c
33089
Best regards,
Krzysztof Oledzki
I noticed it was possible to get truncated http/csv stats. Sometimes.
Usually the problem disappeared as fast as it appeared, but once it
happened that my http-stats page was truncated for about one hour.
It was quite weird as it happened independently for csv and http
output and it took me some time to track & fix this bug.
Both buffer_write & buffer_write_chunk used to return 0 in two
situations: in case of success or when there were exactly 0 bytes
left. The first one is intentional, but I believe the second one
is not, as it was not possible to distinguish between a successful
write and an unsuccessful one, which means that if the buffer was
100% filled, it was never flushed and it was not possible to write
more data.
This patch fixes this problem.
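A self-contained sketch of the ambiguity and of one way to resolve it (this
illustrates the principle only, it is not haproxy's exact buffer API):

  #include <stddef.h>
  #include <string.h>

  /* Returning 0 both on success and when no room is left makes the two
   * cases indistinguishable.  Returning the number of bytes that could
   * not be written keeps 0 as "success" while a full buffer now yields
   * a non-zero value, so the caller knows it must flush and retry. */
  static int buf_write(char *buf, size_t size, size_t *used,
                       const char *data, size_t len)
  {
          size_t room = size - *used;

          if (len > room)
                  return (int)(len - room); /* bytes that did not fit */
          memcpy(buf + *used, data, len);
          *used += len;
          return 0;                         /* everything was written */
  }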
Released version 1.3.14.4 with the following main changes :
- [BUILD] Replace hardcoded 'LD = gcc' with 'LD = $(CC)'
- [BUILD] Added support for 'make install'
- [BUILD] Added 'install-man' make target for installing the man page
- [BUILD] Added 'install-bin' make target
- [BUILD] Added 'install-doc' make target
- [BUILD] Removed "/" after '$(DESTDIR)' in install targets
- [BUILD] Changed 'install' target to install the binaries first
- [MEDIUM] fix stats socket limitation to 16 kB
To be flexible while installing haproxy, the following variables have
been added to the Makefile:
- DESTDIR useful e.g. while installing in a sandbox (not set by default)
- PREFIX defines the default install prefix (default: /usr/local)
- SBINDIR defines the directory where the haproxy binary gets installed
(default: $PREFIX/sbin)
haproxy relies on linking the binary using gcc, so there is no real need to
hardcode both (CC and LD). Setting 'LD = $(CC)' will make the build system
a bit more cross-compile friendly because only the right cross-compiler has
to be passed via make.
Due to the way the stats socket works, it was not possible to
maintain the information related to the command entered, so
after filling a whole buffer, the request was lost and it was
considered that there was nothing to write anymore.
The major reason was that some flags were passed directly
during the first call to stats_dump_raw() instead of being
stored persistently in the session.
To definitely fix this problem, flags were added to the stats
member of the session structure.
A second problem appeared. When the stats were produced, a first
call to client_retnclose() was performed, then one or multiple
subsequent calls to buffer_write_chunk() were done. But once the
stats buffer was full and a reschedule operated, the buffer was
flushed, the write flag cleared from the buffer and nothing was
done to re-arm it.
For this reason, a check was added in the proto_uxst_stats()
function in order to re-call the client FSM when data were added
by stats_dump_raw(). Finally, the whole unix stats dump FSM was
rewritten to avoid all the magic it depended on. It is now
simpler and looks more like the HTTP one.
Released version 1.3.14.3 with the following main changes :
- [BUG]: Restore clearing t->logs.bytes
- [DOC] Update a "contrib" file with a hint about a scheme used for formatting subjects
- [BUG] Don't increment server connections too much + fix retries
- [BUG] appsession lookup in URL does not work
- [MINOR] report correct section type for unknown keywords.
- [BUILD] update MacOS Makefile to build on newer versions
- [DOC] fix erroneous "useallbackups" option in the doc
- [DOC] applied small fixes from early readers
- [BUG] failed conns were sometimes incremented in the frontend!
- [TESTS] add test-pollers.cfg to easily report pollers in use
- [BUILD] ensure that makefile understands USE_DLMALLOC=1
- [CLEANUP] update .gitignore to ignore more temporary files
- [CLEANUP] report dlmalloc's source path only if explicitly specified
- [BUG] str2sun could leak a small buffer in case of error during parsing
- [BUG] option allbackups was not working anymore in roundrobin mode
Commit 3168223a7b33a1d5aad1e11b8f2ad917645d7f27 broke option
"allbackups" in roundrobin mode due to an erroneous structure
member replacement in backend.c. The PR_O_USE_ALL_BK flag was
not tested in the right member anymore.
This bug uncovered another one, by which all backup servers would
be used whatever the option's value, if all of them had been seen
as simultaneously failed at one moment.
This patch fixes the two stupid errors. Correctness has been tested
using the test-fwrr.cfg config example.
(cherry picked from commit f4cca45b5e6c6ed88a0062cf92ae57e01405ab12)
Matt Farnsworth reported a memory leak in str2sun() when an overly long
socket path is passed. The bug is very minor because it only happens
once during config parsing, but has to be fixed nevertheless. The patch
Matt provided could even be improved by completely removing the useless
strdup() in this function.
(cherry picked from commit caf720d3ff7758273278aecab26bb7624ec2f555)
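A minimal sketch of the idea (not the literal patch): check the length
against the target structure before copying, with no temporary strdup() at
all:

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>

  /* convert a unix socket path into a sockaddr_un, rejecting paths that
   * would not fit; nothing is allocated, so nothing can leak on error */
  static int path_to_sun(struct sockaddr_un *su, const char *path)
  {
          size_t len = strlen(path);

          if (len >= sizeof(su->sun_path))
                  return -1;                    /* too long: report error */
          memset(su, 0, sizeof(*su));
          su->sun_family = AF_UNIX;
          memcpy(su->sun_path, path, len + 1);  /* include trailing '\0' */
          return 0;
  }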
Commit 98937b875798e10fac671d109355cde29d2a411a while fixing
one bug introduced another one. With "retries 4" and
"option redispatch" haproxy tries to connect 4 times to
one server and 1 time to a second one. However
logs showed 5 connections to the first server (the
last one was counted twice) and 2 to the second.
This patch also fixes srv->retries and be->retries increments.
Now I get: 3 retries and 1 error in a first server (4 cum_sess)
and 1 error in a second server (1 cum_sess) with:
retries 4
option redispatch
and: 4 retries and 1 error (5 cum_sess) with:
retries 4
So, the number of connections effectively served by a server is:
srv->cum_sess - srv->failed_conns - srv->retries
(cherry picked from commit 626a19b66f769a87e7c995267ccedf14149e03b3)
USE_DLMALLOC=1 has been ignored since the last makefile update. It's
better to keep it working for existing setups.
(cherry picked from commit f14358bd1ad4f7c9fd32c3900ac3a2848bed1b9a)