smp_fetch_res_comp_algo() returns the name of the compression algorithm
in use. The output type is set to SMP_T_STR instead of SMP_T_CSTR, which
causes any transformation to be performed without a cast. Fortunately,
the current converters do not overwrite a zero-sized area, so the result
is an empty string. Fix this to have SMP_T_CSTR instead so that the cast
is always performed using a copy before any transformation is done.
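A minimal sketch of the corrected fetch is shown below; it follows the shape of
the 1.5-dev sample fetch API, and the session's comp_algo field and its name
members are assumptions made for illustration only:

    /* sketch only: return the name of the compression algorithm in use as a
     * constant string so that converters copy it before any rewrite */
    static int
    smp_fetch_res_comp_algo(struct proxy *px, struct session *l4, void *l7,
                            unsigned int opt, const struct arg *args,
                            struct sample *smp, const char *kw)
    {
            if (!l4->comp_algo)                  /* no compression in progress */
                    return 0;

            smp->type = SMP_T_CSTR;              /* was SMP_T_STR before the fix */
            smp->data.str.str = l4->comp_algo->name;
            smp->data.str.len = l4->comp_algo->name_len;
            return 1;
    }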
We have a lot of duplicate code just because of minor variants between
fetch functions that could be dealt with if the functions had a pointer to
the original keyword, so let's pass it as the last argument. An earlier
version used to pass a pointer to the sample_fetch element, but this is not
the best solution for two reasons:
  - fetch functions will solely rely on the keyword string
  - some other smp_fetch_* users do not have a pointer to the original
    keyword and were forced to pass NULL.
So finally we're passing a pointer to the keyword as a const char *, which
perfectly fits the original purpose.
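With this change, a fetch prototype looks roughly like the following (the name
smp_fetch_something is a placeholder and the exact argument list of that era
may differ slightly); two keywords can then map onto the same function, which
inspects kw to adjust its behaviour:

    int smp_fetch_something(struct proxy *px, struct session *l4, void *l7,
                            unsigned int opt, const struct arg *args,
                            struct sample *smp, const char *kw);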
Benoit Dolez reported a failure to start haproxy 1.5-dev19. The
process would immediately report an internal error complaining about missing
fetches, with garbage displayed instead of the ACL names.
The cause is that some versions of gcc seem to trim static structs
containing a variable array when moving them to BSS, and only keep
the fixed size, which is just a list head for all ACL and sample
fetch keywords. This was confirmed at least with gcc 3.4.6. And we
can't move these structs to const because they contain a list element
which is needed to link all of them together during the parsing.
The bug indeed appeared with 1.5-dev19 because it's the first one
to have some empty ACL keyword lists.
One solution is to impose -fno-zero-initialized-in-bss to everyone
but this is not really nice. Another solution consists in ensuring
the struct is never empty so that it does not move there. The easy
solution consists in having a non-null list head since it's not yet
initialized.
A new "ILH" list head type was thus created for this purpose : create
an Initialized List Head so that gcc cannot move the struct to BSS.
This fixes the issue for this version of gcc and does not create any
burden for the declarations.
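A sketch of the idea, assuming the classical struct list with n/p pointers
(the dummy values only need to be non-zero and are overwritten when the list
is actually initialized; the declaration below is only illustrative):

    /* Initialized List Head: non-zero dummies keep the enclosing struct out
     * of BSS so that gcc cannot trim the variable-sized keyword array */
    #define ILH { .n = (struct list *)1, .p = (struct list *)2 }

    /* illustrative declaration of an empty keyword list */
    static struct sample_fetch_kw_list smp_kws = { ILH, {
            { /* END */ },
    }};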
Global compression settings (windowsize and memlevel) were only considered
for the gzip algorithm but not the deflate algorithm. Since a single allocator
is used for both algos, if gzip was initialized first with parameters smaller
than the defaults, then initializing deflate afterwards with default settings
would result in overrunning the small allocated areas.
To fix this, we make use of deflateInit2() for deflate_init() as well.
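A sketch of what the shared initialization may look like with zlib's
deflateInit2(); the wbits/memlevel parameters stand for the global settings,
and adding 16 to windowBits is zlib's documented way of requesting a gzip
wrapper instead of the plain zlib one:

    #include <string.h>
    #include <zlib.h>

    /* sketch: both algorithms use the same windowBits/memLevel so that the
     * memory areas sized from the global settings fit either of them */
    static int init_deflate_stream(z_stream *strm, int level, int wbits,
                                   int memlevel, int gzip_wrapper)
    {
            memset(strm, 0, sizeof(*strm));  /* default allocator in this sketch */
            return deflateInit2(strm, level, Z_DEFLATED,
                                gzip_wrapper ? wbits + 16 : wbits,
                                memlevel, Z_DEFAULT_STRATEGY);
    }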
Thanks to Godbach for reporting this bug, introduced in 1.5-dev13 by commit
8b52bb38. No backport is needed.
Implements the "res.comp" ACL which is a boolean returning 1 when a
response has been compressed by HAProxy or 0 otherwise.
Implements the "res.comp_algo" fetch which contains the name of the
algorithm HAProxy used to compress the response.
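For example, something along these lines could expose the information in a
response header (the header name is arbitrary and 'http-response set-header'
with a sample expression must be supported by the version in use):

    # hypothetical example: tag compressed responses with the algorithm used
    http-response set-header X-Comp-Algo %[res.comp_algo] if { res.comp }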
It was a bit frustrating to have no idea about the bandwidth saved by
HTTP compression. Now we have per-frontend and per-backend stats. The
stats on the HTTP interface are shown in a hover title in the "bytes out"
column if at least something was fed to the compressor. Three new columns
appear in the CSV stats output.
New option 'maxcompcpuusage' in global section.
Sets the maximum CPU usage HAProxy can reach before stopping the
compression for new requests or decreasing the compression level of
current requests. It works like 'maxcomprate' but is based on the measured
idle time (i.e. CPU usage) instead of the compression rate.
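For instance (the figures below are arbitrary examples):

    global
        maxcomprate     500   # compression input rate limit, in KB/s
        maxcompcpuusage 85    # stop or degrade compression above ~85% CPU usage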
When the compression rate limit was in use, the compression level did not
respect the maximum compression level during a session because the test was
done on the wrong variable.
This patch changes the http_response_forward_body state machine. It checks
whether the compression algorithm has consumed data before swapping the
temporary and the input buffers, which prevents zero-sized zlib chunks.
Zlib (at least 1.2 and 1.3) aborts when it fails to allocate the state, so we
must not count a round on this event. If the state allocation succeeds, then it
allocates all 4 remaining counters at once.
Compression algorithms are not always supported depending on build options.
"haproxy -vv" now reports if zlib is supported and lists compression algorithms
also supported.
gcc emits this warning while building free_zlib() :
src/compression.c: In function `free_zlib':
src/compression.c:403: warning: 'pool' might be used uninitialized in this function
This is not a bug as the pool cannot take other values, but let's
pre-initialize it to NULL to fix the warning.
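The fix is as simple as it sounds, something along these lines (assuming the
variable is the pool pointer mentioned in the warning):

    struct pool_head *pool = NULL;  /* pre-initialized only to silence gcc */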
This patch adds input and output rate calculation to the HTTP compression
feature.
Compression can be limited with a maximum rate value in kilobytes per
second. The rate is set with the global 'maxcomprate' option. You can
change this value dynamically with 'set rate-limit http-compression
global' on the UNIX socket.
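For example (the values are arbitrary):

    global
        maxcomprate 1024   # limit compression input to about 1 MB/s

    # later, without reloading, on the stats socket:
    #   set rate-limit http-compression global 2048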
With the global maxzlibmem option, you are able to control the maximum
amount of RAM usable for HTTP compression.
A test is performed before each zlib allocation: if there is not enough
memory available, the test fails, and so does the zlib initialization, so the
data will not be compressed.
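For example (the value is arbitrary; the option takes a number of megabytes):

    global
        maxzlibmem 100   # cap the memory usable by zlib for compression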
Don't use the zlib allocator anymore; 5 pools are now used for zlib
compression. Their sizes depend on the window size and the memLevel passed to
deflateInit2().
The zlib window size and memLevel are now configurable using the global
options tune.zlib.memlevel and tune.zlib.windowsize.
They affect zlib's memory consumption.
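For example (the values are arbitrary and must stay within the documented
ranges):

    global
        tune.zlib.memlevel   8      # zlib memLevel, between 1 and 9
        tune.zlib.windowsize 8192   # zlib history buffer size, smaller saves memory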
The build depended on the zlib.h header regardless of the USE_ZLIB
option. The fix consists of several #ifdefs in the source code.
It also removes the overhead of the z_stream structure in the session when
the option is not used.
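The shape of the fix is roughly the following (struct and field names are only
illustrative):

    #ifdef USE_ZLIB
    #include <zlib.h>
    #endif

    struct comp_ctx {
    #ifdef USE_ZLIB
            z_stream strm;   /* only carried when built with USE_ZLIB=1 */
    #endif
            /* ... algorithm-independent fields ... */
    };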
The crappy zlib and openssl libs both define free_func as a different typedef.
That's a very clever idea to use such a generic name in general purpose libraries,
really... The zlib one is easier to redefine than openssl's, so let's only fix this
one.
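One way to do that, and roughly the shape of the workaround, is to rename the
symbol around the zlib include:

    /* zlib's free_func would clash with openssl's typedef of the same name,
     * so give it another name for the duration of the include */
    #undef  free_func
    #define free_func zlib_free_func
    #include <zlib.h>
    #undef  free_func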
Decreasing the deflateInit2's memLevel parameter from 9 to 8 does not
affect the compression ratio and increases the compression speed by 12%.
Lower values do not increase transfer speed but decrease the compression
ratio so it looks like 8 is optimal.
This commit introduces HTTP compression using the zlib library.
http_response_forward_body has been modified to call the compression
functions.
This feature includes 3 algorithms: identity, gzip and deflate:
* identity: this is mostly for debugging, and it was useful while
developing the compression feature. With Content-Length in input, it
makes one chunk with the data available in the current buffer.
With chunked input, it re-chunks: the output chunks will be
bigger or smaller depending on the size of the input chunks and the
size of the buffer. Identity does not apply any change to the data.
* gzip: same as identity, but applying a gzip compression. The data
are deflated using the Z_NO_FLUSH flag in zlib. When there is no more
data in the input buffer, it flushes the data in the output buffer
(Z_SYNC_FLUSH). At the end of data, when it receives the last chunk in
input, or when there is no more data to read, it writes the end of
data with Z_FINISH and the ending chunk.
* deflate: same as gzip, but with deflate algorithm and zlib format.
Note that this algorithm has ambiguous support on many browsers and
no support at all from recent ones. It is strongly recommended not
to use it for anything else than experimentation.
You can't choose the compression level at the moment; it will be set to
Z_BEST_SPEED (1), as tests have shown very little benefit in terms of
compression ratio when going above that for HTML contents, at the cost of
a massive CPU impact.
Compression will be activated depending on the Accept-Encoding request
header. With identity, this header is not taken into account.
To build HAProxy with zlib support, use USE_ZLIB=1 in the make
parameters.
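A minimal way to try it (the build target and content types below are only
examples):

    $ make TARGET=linux2628 USE_ZLIB=1

    backend app
        compression algo gzip
        compression type text/html text/plain text/css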
This work was initially started by David Du Colombier at Exceliance.