Released version 1.8-dev1 with the following main changes :
- BUG/MEDIUM: proxy: return "none" and "unknown" for unknown LB algos
- BUG/MINOR: stats: make field_str() return an empty string on NULL
- DOC: Spelling fixes
- BUG/MEDIUM: http: Fix tunnel mode when the CONNECT method is used
- BUG/MINOR: http: Keep the same behavior between 1.6 and 1.7 for tunneled txn
- BUG/MINOR: filters: Protect args in macros HAS_DATA_FILTERS and IS_DATA_FILTER
- BUG/MINOR: filters: Invert evaluation order of HTTP_XFER_BODY and XFER_DATA analyzers
- BUG/MINOR: http: Call XFER_DATA analyzer when HTTP txn is switched in tunnel mode
- BUG/MAJOR: stream: fix session abort on resource shortage
- OPTIM: stream-int: don't disable polling anymore on DONT_READ
- BUG/MINOR: cli: allow the backslash to be escaped on the CLI
- BUG/MEDIUM: cli: fix "show stat resolvers" and "show tls-keys"
- DOC: Fix map table's format
- DOC: Added 51Degrees conv and fetch functions to documentation.
- BUG/MINOR: http: don't send an extra CRLF after a Set-Cookie in a redirect
- DOC: mention that req_tot is for both frontends and backends
- BUG/MEDIUM: variables: some variable names can hide other ones
- MINOR: lua: Allow argument for actions
- BUILD: rearrange target files by build time
- CLEANUP: hlua: just indent functions
- MINOR: lua: give HAProxy variable access to the applets
- BUG/MINOR: stats: fix be/sessions/max output in html stats
- MINOR: proxy: Add fe_name/be_name fetchers next to existing fe_id/be_id
- DOC: lua: Documentation about some entry missing
- DOC: lua: Add documentation about variable manipulation from applet
- MINOR: Do not forward the header "Expect: 100-continue" when the option http-buffer-request is set
- DOC: Add undocumented argument of the trace filter
- DOC: Fix some typos in SPOE documentation
- MINOR: cli: Remove useless call to bi_putchk
- BUG/MINOR: cli: be sure to always warn the cli applet when input buffer is full
- MINOR: applet: Count number of (active) applets
- MINOR: task: Rename run_queue and run_queue_cur counters
- BUG/MEDIUM: stream: Save unprocessed events for a stream
- BUG/MAJOR: Fix how the list of entities waiting for a buffer is handled
- BUILD/MEDIUM: Fixing the build using LibreSSL
- BUG/MEDIUM: lua: In some case, the return of sample-fetches is ignored (2)
- SCRIPTS: git-show-backports: fix a harmless typo
- SCRIPTS: git-show-backports: add -H to use the hash of the commit message
- BUG/MINOR: stream-int: automatically release SI_FL_WAIT_DATA on SHUTW_NOW
- CLEANUP: applet/lua: create a dedicated ->fcn entry in hlua_cli context
- CLEANUP: applet/table: add an "action" entry in ->table context
- CLEANUP: applet: remove the now unused appctx->private field
- DOC: lua: documentation about time parser functions
- DOC: lua: improve links
- DOC: lua: section declared twice
- MEDIUM: cli: 'show cli sockets' lists the CLI sockets
- BUG/MINOR: cli: "show cli sockets" wouldn't list all processes
- BUG/MINOR: cli: "show cli sockets" would always report process 64
- CLEANUP: lua: rename one of the lua appctx union
- BUG/MINOR: lua/cli: bad error message
- MEDIUM: lua: use memory pool for hlua struct in applets
- MINOR: lua/signals: Remove Lua part from signals.
- DOC: cli: show cli sockets
- MINOR: cli: automatically enable a CLI I/O handler when there's no parser
- CLEANUP: memory: remove the now unused cli_parse_show_pools() function
- CLEANUP: applet: group all CLI contexts together
- CLEANUP: stats: move a misplaced stats context initialization
- MINOR: cli: add two general purpose pointers and integers in the CLI struct
- MINOR: appctx/cli: remove the cli_socket entry from the appctx union
- MINOR: appctx/cli: remove the env entry from the appctx union
- MINOR: appctx/cli: remove the "be" entry from the appctx union
- MINOR: appctx/cli: remove the "dns" entry from the appctx union
- MINOR: appctx/cli: remove the "server_state" entry from the appctx union
- MINOR: appctx/cli: remove the "tlskeys" entry from the appctx union
- CONTRIB: tcploop: add limits.h to fix build issue with some compilers
- MINOR/DOC: lua: just precise one thing
- DOC: fix small typo in fe_id (backend instead of frontend)
- BUG/MINOR: Fix the sending function in Lua's cosocket
- BUG/MINOR: lua: memory leak executing tasks
- BUG/MINOR: lua: bad return code
- BUG/MINOR: lua: memleak when Lua/cli fails
- MEDIUM: lua: remove Lua struct from session, and allocate it with memory pools
- CLEANUP: haproxy: statify unexported functions
- MINOR: haproxy: add a registration for build options
- CLEANUP: wurfl: use the build options list to report it
- CLEANUP: 51d: use the build options list to report it
- CLEANUP: da: use the build options list to report it
- CLEANUP: namespaces: use the build options list to report it
- CLEANUP: tcp: use the build options list to report transparent modes
- CLEANUP: lua: use the build options list to report it
- CLEANUP: regex: use the build options list to report the regex type
- CLEANUP: ssl: use the build options list to report the SSL details
- CLEANUP: compression: use the build options list to report the algos
- CLEANUP: auth: use the build options list to report its support
- MINOR: haproxy: add a registration for post-check functions
- CLEANUP: checks: make use of the post-init registration to start checks
- CLEANUP: filters: use the function registration to initialize all proxies
- CLEANUP: wurfl: make use of the late init registration
- CLEANUP: 51d: make use of the late init registration
- CLEANUP: da: make use of the late init registration code
- MINOR: haproxy: add a registration for post-deinit functions
- CLEANUP: wurfl: register the deinit function via the dedicated list
- CLEANUP: 51d: register the deinitialization function
- CLEANUP: da: register the deinitialization function
- CLEANUP: wurfl: move global settings out of the global section
- CLEANUP: 51d: move global settings out of the global section
- CLEANUP: da: move global settings out of the global section
- MINOR: cfgparse: add two new functions to check arguments count
- MINOR: cfgparse: move parsing of "ca-base" and "crt-base" to ssl_sock
- MEDIUM: cfgparse: move all tune.ssl.* keywords to ssl_sock
- MEDIUM: cfgparse: move maxsslconn parsing to ssl_sock
- MINOR: cfgparse: move parsing of ssl-default-{bind,server}-ciphers to ssl_sock
- MEDIUM: cfgparse: move ssl-dh-param-file parsing to ssl_sock
- MEDIUM: compression: move the zlib-specific stuff from global.h to compression.c
- BUG/MEDIUM: ssl: properly reset the reused_sess during a forced handshake
- BUG/MEDIUM: ssl: avoid double free when releasing bind_confs
- BUG/MINOR: stats: fix be/sessions/current output in typed stats
- MINOR: tcp-rules: check that the listener exists before updating its counters
- MEDIUM: spoe: don't create a dummy listener for outgoing connections
- MINOR: listener: move the transport layer pointer to the bind_conf
- MEDIUM: move listener->frontend to bind_conf->frontend
- MEDIUM: ssl: remove the proxy argument from most functions
- MINOR: connection: add a new prepare_bind_conf() entry to xprt_ops
- MEDIUM: ssl_sock: implement ssl_sock_prepare_bind_conf()
- MINOR: connection: add a new destroy_bind_conf() entry to xprt_ops
- MINOR: ssl_sock: implement ssl_sock_destroy_bind_conf()
- MINOR: server: move the use_ssl field out of the ifdef USE_OPENSSL
- MINOR: connection: add a minimal transport layer registration system
- CLEANUP: connection: remove all direct references to raw_sock and ssl_sock
- CLEANUP: connection: unexport raw_sock and ssl_sock
- MINOR: connection: add new prepare_srv()/destroy_srv() entries to xprt_ops
- MINOR: ssl_sock: implement and use prepare_srv()/destroy_srv()
- CLEANUP: ssl: move tlskeys_finalize_config() to a post_check callback
- CLEANUP: ssl: move most ssl-specific global settings to ssl_sock.c
- BUG/MINOR: backend: nbsrv() should return 0 if backend is disabled
- BUG/MEDIUM: ssl: fix a handshake when server-side SNI changes
- BUG/MINOR: systemd: potential zombie processes
- DOC: Add timings events schemas
- BUILD: lua: build failed on FreeBSD.
- MINOR: samples: add xx-hash functions
- MEDIUM: regex: pcre2 support
- BUG/MINOR: option prefer-last-server must be ignored in some cases
- MINOR: stats: Support "select all" for backend actions
- BUG/MINOR: sample-fetches/stick-tables: bad type for the sample fetches sc*_get_gpt0
- BUG/MAJOR: channel: Fix the definition order of channel analyzers
- BUG/MINOR: http: report real parser state in error captures
- BUILD: scripts: automatically update the branch in version.h when releasing
- MINOR: tools: add a generic hexdump function for debugging
- BUG/MAJOR: http: fix risk of getting invalid reports of bad requests
- MINOR: http: custom status reason.
- MINOR: connection: add sample fetch "fc_rcvd_proxy"
- BUG/MINOR: config: emit a warning if http-reuse is enabled with incompatible options
- BUG/MINOR: tools: fix off-by-one in port size check
- BUG/MEDIUM: server: consider AF_UNSPEC as a valid address family
- MEDIUM: server: split the address and the port into two different fields
- MINOR: tools: make str2sa_range() return the port in a separate argument
- MINOR: server: take the destination port from the port field, not the addr
- MEDIUM: server: disable protocol validations when the server doesn't resolve
- BUG/MEDIUM: tools: do not force an unresolved address to AF_INET:0.0.0.0
- BUG/MINOR: ssl: EVP_PKEY must be freed after X509_get_pubkey usage
- BUG/MINOR: ssl: assert on SSL_set_shutdown with BoringSSL
- MINOR: Use "500 Internal Server Error" for 500 error/status code message.
- MINOR: proto_http.c 502 error txt typo.
- DOC: add deprecation notice to "block"
- MINOR: compression: fix -vv output without zlib/slz
- BUG/MINOR: Reset errno variable before calling strtol(3)
- MINOR: ssl: don't show prefer-server-ciphers output
- OPTIM/MINOR: config: Optimize fullconn automatic computation loading configuration
- BUG/MINOR: stream: Fix how backend-specific analyzers are set on a stream
- MAJOR: ssl: bind configuration per certificate
- MINOR: ssl: add curve suite for ECDHE negotiation
- MINOR: checks: Add agent-addr config directive
- MINOR: cli: Add possibility to change agent config via CLI/socket
- MINOR: doc: Add docs for agent-addr configuration variable
- MINOR: doc: Add docs for agent-addr and agent-send CLI commands
- BUILD: ssl: fix to build (again) with boringssl
- BUILD: ssl: fix build on OpenSSL 1.0.0
- BUILD: ssl: silence a warning reported for ERR_remove_state()
- BUILD: ssl: eliminate warning with OpenSSL 1.1.0 regarding RAND_pseudo_bytes()
- BUILD: ssl: kill a build warning introduced by BoringSSL compatibility
- BUG/MEDIUM: tcp: don't poll for write when connect() succeeds
- BUG/MINOR: unix: fix connect's polling in case no data are scheduled
- MINOR: server: extend the flags to 32 bits
- BUG/MINOR: lua: Map.end are not reliable because "end" is a reserved keyword
- MINOR: dns: give ability to dns_init_resolvers() to close a socket when requested
- BUG/MAJOR: dns: restart sockets after fork()
- MINOR: chunks: implement a simple dynamic allocator for trash buffers
- BUG/MEDIUM: http: prevent redirect from overwriting a buffer
- BUG/MEDIUM: filters: Do not truncate HTTP response when body length is undefined
- BUG/MEDIUM: http: Prevent replace-header from overwriting a buffer
- BUG/MINOR: http: Return an error when a replace-header rule failed on the response
- BUG/MINOR: sendmail: The return of vsnprintf is not cleanly tested
- BUG/MAJOR: ssl: fix a regression in ssl_sock_shutw()
- BUG/MAJOR: lua segmentation fault when the request is like 'GET ?arg=val HTTP/1.1'
- BUG/MEDIUM: config: reject anything but "if" or "unless" after a use-backend rule
- MINOR: http: don't close when redirect location doesn't start with "/"
- MEDIUM: boringssl: support native multi-cert selection without bundling
- BUG/MEDIUM: ssl: fix verify/ca-file per certificate
- BUG/MEDIUM: ssl: switchctx should not return SSL_TLSEXT_ERR_ALERT_WARNING
- MINOR: ssl: removes SSL_CTX_set_ssl_version call and cleanup CTX creation.
- BUILD: ssl: fix build with -DOPENSSL_NO_DH
- MEDIUM: ssl: add new sample-fetch which captures the cipherlist
- MEDIUM: ssl: remove ssl-options from crt-list
- BUG/MEDIUM: ssl: in bind line, ssl-options after 'crt' are ignored.
- BUG/MINOR: ssl: fix cipherlist captures with sustainable SSL calls
- MINOR: ssl: improved cipherlist captures
- BUG/MINOR: spoe: Fix soft stop handler using a specific id for spoe filters
- BUG/MINOR: spoe: Fix parsing of arguments in spoe-message section
- MAJOR: spoe: Add support of pipelined and asynchronous exchanges with agents
- MINOR: spoe: Add support for pipelining/async capabilities in the SPOA example
- MINOR: spoe: Remove SPOE details from the appctx structure
- MINOR: spoe: Add status code in error variable instead of hardcoded value
- MINOR: spoe: Send a log message when an error occurred during event processing
- MINOR: spoe: Check the scope of sample fetches used in SPOE messages
- MEDIUM: spoe: Be sure to wakeup the good entity waiting for a buffer
- MINOR: spoe: Use the min of all known max_frame_size to encode messages
- MAJOR: spoe: Add support of payload fragmentation in NOTIFY frames
- MINOR: spoe: Add support for fragmentation capability in the SPOA example
- MAJOR: spoe: refactor the filter to clean up the code
- MINOR: spoe: Handle NOTIFY frames cancellation using ABORT bit in ACK frames
- REORG: spoe: Move struct and enum definitions in dedicated header file
- REORG: spoe: Move low-level encoding/decoding functions in dedicated header file
- MINOR: spoe: Improve implementation of the payload fragmentation
- MINOR: spoe: Add support of negation for options in SPOE configuration file
- MINOR: spoe: Add "pipelining" and "async" options in spoe-agent section
- MINOR: spoe: Rely on alertif_too_many_arg during configuration parsing
- MINOR: spoe: Add "send-frag-payload" option in spoe-agent section
- MINOR: spoe: Add "max-frame-size" statement in spoe-agent section
- DOC: spoe: Update SPOE documentation to reflect recent changes
- MINOR: config: warn when some HTTP rules are used in a TCP proxy
- BUG/MEDIUM: ssl: Clear OpenSSL error stack after trying to parse OCSP file
- BUG/MEDIUM: cli: Prevent double free in CLI ACL lookup
- BUG/MINOR: Fix "get map <map> <value>" CLI command
- MINOR: Add nbsrv sample converter
- CLEANUP: Replace repeated code to count usable servers with be_usable_srv()
- MINOR: Add hostname sample fetch
- CLEANUP: Remove comment that's no longer valid
- MEDIUM: http_error_message: txn->status / http_get_status_idx.
- MINOR: http-request tarpit deny_status.
- CLEANUP: http: make http_server_error() not set the status anymore
- MEDIUM: stats: Add JSON output option to show (info|stat)
- MEDIUM: stats: Add show json schema
- BUG/MAJOR: connection: update CO_FL_CONNECTED before calling the data layer
- MINOR: server: Add dynamic session cookies.
- MINOR: cli: Let configure the dynamic cookies from the cli.
- BUG/MINOR: checks: attempt clean shutw for SSL check
- CONTRIB: tcploop: make it build on FreeBSD
- CONTRIB: tcploop: fix time format to silence build warnings
- CONTRIB: tcploop: report action 'K' (kill) in usage message
- CONTRIB: tcploop: fix connect's address length
- CONTRIB: tcploop: use the trash instead of NULL for recv()
- BUG/MEDIUM: listener: do not try to rebind another process' socket
- BUG/MEDIUM: server: Fix crash when dynamic is defined, but no key is provided.
- CLEANUP: config: Typo in comment.
- BUG/MEDIUM: filters: Fix channels synchronization in flt_end_analyze
- TESTS: add a test configuration to stress handshake combinations
- BUG/MAJOR: stream-int: do not depend on connection flags to detect connection
- BUG/MEDIUM: connection: ensure to always report the end of handshakes
- MEDIUM: connection: don't test for CO_FL_WAKE_DATA
- CLEANUP: connection: completely remove CO_FL_WAKE_DATA
- BUG: payload: fix payload not retrieving arbitrary lengths
- BUILD: ssl: simplify SSL_CTX_set_ecdh_auto compatibility
- BUILD: ssl: fix OPENSSL_NO_SSL_TRACE for boringssl and libressl
- BUG/MAJOR: http: fix typo in http_apply_redirect_rule
- MINOR: doc: 2.4. Examples should be 2.5. Examples
- BUG/MEDIUM: stream: fix client-fin/server-fin handling
- MINOR: fd: add a new flag HAP_POLL_F_RDHUP to struct poller
- BUG/MINOR: raw_sock: always perfom the last recv if RDHUP is not available
- OPTIM: poll: enable support for POLLRDHUP
- MINOR: kqueue: exclusively rely on the kqueue returned status
- MEDIUM: kqueue: take care of EV_EOF to improve polling status accuracy
- MEDIUM: kqueue: only set FD_POLL_IN when there are pending data
- DOC/MINOR: Fix typos in proxy protocol doc
- DOC: Protocol doc: add checksum, TLV type ranges
- DOC: Protocol doc: add SSL TLVs, rename CHECKSUM
- DOC: Protocol doc: add noop TLV
- MEDIUM: global: add a 'hard-stop-after' option to cap the soft-stop time
- MINOR: dns: improve DNS response parsing to use as many available records as possible
- BUG/MINOR: cfgparse: loop in tracked servers lists not detected by check_config_validity().
- MINOR: server: irrelevant error message with 'default-server' config file keyword.
- MINOR: server: Make 'default-server' support 'backup' keyword.
- MINOR: server: Make 'default-server' support 'check-send-proxy' keyword.
- CLEANUP: server: code alignment.
- MINOR: server: Make 'default-server' support 'non-stick' keyword.
- MINOR: server: Make 'default-server' support 'send-proxy' and 'send-proxy-v2' keywords.
- MINOR: server: Make 'default-server' support 'check-ssl' keyword.
- MINOR: server: Make 'default-server' support 'force-sslv3' and 'force-tlsv1[0-2]' keywords.
- CLEANUP: server: code alignment.
- MINOR: server: Make 'default-server' support 'no-ssl*' and 'no-tlsv*' keywords.
- MINOR: server: Make 'default-server' support 'ssl' keyword.
- MINOR: server: Make 'default-server' support 'send-proxy-v2-ssl*' keywords.
- CLEANUP: server: code alignment.
- MINOR: server: Make 'default-server' support 'verify' keyword.
- MINOR: server: Make 'default-server' support 'verifyhost' setting.
- MINOR: server: Make 'default-server' support 'check' keyword.
- MINOR: server: Make 'default-server' support 'track' setting.
- MINOR: server: Make 'default-server' support 'ca-file', 'crl-file' and 'crt' settings.
- MINOR: server: Make 'default-server' support 'redir' keyword.
- MINOR: server: Make 'default-server' support 'observe' keyword.
- MINOR: server: Make 'default-server' support 'cookie' keyword.
- MINOR: server: Make 'default-server' support 'ciphers' keyword.
- MINOR: server: Make 'default-server' support 'tcp-ut' keyword.
- MINOR: server: Make 'default-server' support 'namespace' keyword.
- MINOR: server: Make 'default-server' support 'source' keyword.
- MINOR: server: Make 'default-server' support 'sni' keyword.
- MINOR: server: Make 'default-server' support 'addr' keyword.
- MINOR: server: Make 'default-server' support 'disabled' keyword.
- MINOR: server: Add 'no-agent-check' server keyword.
- DOC: server: Add docs for "server" and "default-server" new "no-*" and other settings.
- MINOR: doc: fix use-server example (imap vs mail)
- BUG/MEDIUM: tcp: don't require privileges to bind to device
- BUILD: make the release script use shortlog for the final changelog
- BUILD: scripts: fix typo in announce-release error message
- CLEANUP: time: curr_sec_ms doesn't need to be exported
- BUG/MEDIUM: server: Wrong server default CRT filenames initialization.
- BUG/MEDIUM: peers: fix buffer overflow control in intdecode.
- BUG/MEDIUM: buffers: Fix how input/output data are injected into buffers
- BUG/MINOR: http: Fix conditions to clean up a txn and to handle the next request
- CLEANUP: http: Remove channel_congested function
- CLEANUP: buffers: Remove buffer_bounce_realign function
- CLEANUP: buffers: Remove buffer_contig_area and buffer_work_area functions
- MINOR: http: remove useless check on HTTP_MSGF_XFER_LEN for the request
- MINOR: http: Add debug messages when HTTP body analyzers are called
- BUG/MEDIUM: http: Fix blocked HTTP/1.0 responses when compression is enabled
- BUG/MINOR: filters: Don't force the stream's wakeup when we wait in flt_end_analyze
- DOC: fix parenthesis and add missing "Example" tags
- DOC: update the contributing file
- DOC: log-format/tcplog/httplog update
- MINOR: config parsing: add warning when log-format/tcplog/httplog is overridden in "defaults" sections
When SIGUSR1 is received, haproxy enters soft-stop and quits when no
connection remains.
It can happen that the instance remains alive for a long time, depending
on timeouts and traffic. This option ensures that soft-stop won't run
for too long.
Example:
    global
      hard-stop-after 30s    # Once in soft-stop, the instance will remain
                             # alive for at most 30 seconds.
The trash buffers are becoming increasingly complex to deal with due to
the code's modularity allowing some functions to be chained and causing
the same chunk buffers to be used multiple times along the chain, possibly
corrupting each other. In fact the trash chunks were explicitly designed
from scratch not to survive a function call, but string manipulation makes
this impossible most of the time, while the need for reliable temporary
chunks remains unfulfilled.
Here we introduce the ability to allocate a temporary trash chunk which
is reserved, so that it will not conflict with the trash chunks other
functions use, and will even support reentrant calls (eg: build_logline).
For this, we create a new pool whose entries are exactly the size of a usual
chunk buffer plus the size of the chunk struct, so that these chunks, when
allocated, are exactly the same size as the ones returned by
get_trash_buffer(). Allocation of these chunks may fail, so the caller must
check the result, and the caller is also responsible for freeing them.
The code focuses on minimal changes and ease of reliable backporting
because it will be needed in stable versions in order to support the next
patch.
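To illustrate the intended usage pattern, here is a standalone sketch only;
the structure and helper names below are illustrative, not HAProxy's exact
API. A caller allocates a chunk, checks the result, uses it, then frees it:

    #include <stdio.h>
    #include <stdlib.h>

    #define TMP_CHUNK_SIZE 16384

    /* Illustrative model of a reserved temporary chunk: allocation may fail
     * (the caller must check), and the caller is responsible for freeing it.
     */
    struct tmp_chunk {
        size_t size;              /* total usable size */
        size_t len;               /* current string length */
        char str[TMP_CHUNK_SIZE];
    };

    static struct tmp_chunk *alloc_tmp_chunk(void)
    {
        struct tmp_chunk *c = malloc(sizeof(*c));

        if (c) {
            c->size = TMP_CHUNK_SIZE;
            c->len = 0;
            c->str[0] = '\0';
        }
        return c;
    }

    static void free_tmp_chunk(struct tmp_chunk *c)
    {
        free(c);
    }

    int main(void)
    {
        struct tmp_chunk *c = alloc_tmp_chunk();

        if (!c)                   /* allocation may fail: always check */
            return 1;
        c->len = snprintf(c->str, c->size, "temporary data safe from other trash users");
        puts(c->str);
        free_tmp_chunk(c);        /* caller frees what it allocated */
        return 0;
    }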
UDP sockets used to send DNS queries are created before the fork happens,
which is a big problem because all the processes (in case of a
configuration starting multiple processes) share the same socket. Some
processes may consume responses dedicated to another one, some servers
may be disabled, some IPs changed, etc...
As a workaround, this patch closes the existing socket and creates a new
one after the fork() has happened.
[wt: backport this to 1.7]
The function dns_init_resolvers() is used to initialize the sockets used to
send DNS queries.
This patch gives the function the ability to close a socket before
re-opening it.
[wt: this needs to be backported to 1.7 for next fix]
In systemd mode (-Ds), the master haproxy process waits for each
child to exit in a specific order. If a process dies when it is not its
turn, it becomes a zombie process until every process exits.
The master is now waiting for any process to exit in any order.
This patch should be backported to 1.7, 1.6 and 1.5.
Historically a lot of SSL global settings were stored into the global
struct, but we've reached a point where there are 3 ifdefs in it just
for this, and others in haproxy.c to initialize it.
This patch moves all the private fields to a new struct "global_ssl"
stored in ssl_sock.c. This includes :
    char *crt_base;
    char *ca_base;
    char *listen_default_ciphers;
    char *connect_default_ciphers;
    int listen_default_ssloptions;
    int connect_default_ssloptions;
    int tune.sslprivatecache; /* Force to use a private session cache even if nbproc > 1 */
    unsigned int tune.ssllifetime; /* SSL session lifetime in seconds */
    unsigned int tune.ssl_max_record; /* SSL max record size */
    unsigned int tune.ssl_default_dh_param; /* SSL maximum DH parameter size */
    int tune.ssl_ctx_cache; /* max number of entries in the ssl_ctx cache. */
The "tune" part was removed (useless here) and the occasional "ssl"
prefixes were removed as well. Thus for example instead of
global.tune.ssl_default_dh_param
we now have :
global_ssl.default_dh_param
A few initializers were present in the constructor, they could be brought
back to the structure declaration.
A few other entries had to stay in global for now. They concern memory
calculationn (used in haproxy.c) and stats (used in stats.c).
The code is already much cleaner now, especially for global.h and haproxy.c
which become readable.
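Purely as an illustration of the rename described above (a sketch derived
from the list of moved fields; the exact layout and field names in
ssl_sock.c may differ), the private structure looks roughly like:

    /* Sketch only: the private structure now local to ssl_sock.c, with the
     * "tune." and "ssl" prefixes dropped as described above.
     */
    struct global_ssl {
        char *crt_base;                /* base directory path for certificates */
        char *ca_base;                 /* base directory path for CAs and CRLs */
        char *listen_default_ciphers;
        char *connect_default_ciphers;
        int listen_default_ssloptions;
        int connect_default_ssloptions;
        int private_cache;             /* was tune.sslprivatecache */
        unsigned int life_time;        /* was tune.ssllifetime */
        unsigned int max_record;       /* was tune.ssl_max_record */
        unsigned int default_dh_param; /* was tune.ssl_default_dh_param */
        int ctx_cache;                 /* was tune.ssl_ctx_cache */
    } global_ssl;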
tlskeys_finalize_config() was the only reason for haproxy.c to still
require ifdef and includes for ssl_sock. This one fits perfectly well
in the late initializers so it was changed to be registered with
hap_register_post_check().
Now we can simply check the transport layer at run time and decide
whether or not to initialize or destroy these entries. This removes
other ifdefs and includes from cfgparse.c, haproxy.c and hlua.c.
Instead of hard-coding all SSL destruction in cfgparse.c and haproxy.c,
we now register this new function as the transport layer's destroy_bind_conf()
and call it only when defined. This removes some non-obvious SSL-specific
code and #ifdefs from cfgparse.c and haproxy.c
This finishes cleaning up the zlib-specific parts. It also unbreaks recent
commit b97c6fb ("CLEANUP: compression: use the build options list to report
the algos") which broke USE_ZLIB due to MAXWBITS not being defined anymore
in haproxy.c.
We replaced global.deviceatlas with global_deviceatlas since there's no need
to store all this into the global section. This removes the last #ifdefs,
and now the code is 100% self-contained in da.c. The file da.h has now been
removed because it was only used to load dac.h, which is more easily
loaded directly from da.c. It provides another good example of how to
integrate code in the future without touching the core parts.
We replaced global._51degrees with global_51degrees since there's no need
to store all this into the global section. This removes the last #ifdefs,
and now the code is 100% self-contained in 51d.c. The file 51d.h has now been
removed because it was only used to load 51Degrees.h, which is more easily
loaded from 51d.c. It provides a good example of how to integrate code in
the future without touching the core parts.
We replaced global.wurfl with global_wurfl since there's no need to store
all this into the global section. This removes the last #ifdefs, and now
the code is 100% self-contained in wurfl.c. It provides a good example of
how to integrate code in the future without touching the core parts.
deinit_51degrees() is not called anymore from haproxy.c, removing
2 #ifdefs and one include. The function was made static. The include
file still includes 51Degrees.h which is needed by global.h and 51d.c
so it was not touched beyond this last function removal.
By registering the deinit function we avoid another #ifdef in haproxy.c.
The ha_wurfl_deinit() function has been made static and unexported. Now
proto/wurfl.h is totally empty, the code being self-contained in wurfl.c,
so the useless .h has been removed.
The 3 device detection engines stop at the same place in deinit()
with the usual #ifdefs. Similarly to the other functions, we can have
some late deinitialization functions. These functions do not return
anything, however, so we have to use a different type.
Instead of having a #ifdef in the main init code we now use the registered
init functions. Doing so also enables error checking as errors were previously
reported as alerts but ignored. Also they were incorrect as the 'status'
variable was hidden by a second one and was always reporting DA_SYS (which
is apparently an error) in every case including the case where no file was
loaded. The init_deviceatlas() function was unexported since it's not used
outside of this place anymore.
This removes some #ifdefs from the main haproxy code path. Function
init_51degrees() now returns ERR_* instead of exit(1) on error, and
this function was made static and is not exported anymore.
This removes some #ifdefs from the main haproxy code path and enables
error checking. The current code only makes use of warnings even for
some errors that look serious. While this choice is questionable, it
has been kept as-is, and only the return codes were adapted to ERR_WARN
to at least report that some warnings were emitted. ha_wurfl_init() was
unexported as it's not needed anymore.
Instead of calling the checks directly from the init code, we now
register the start_checks() function to be run at this point. This
also allows us to unexport the check init function and to remove one
include from haproxy.c.
There is a significant number of late initialization calls which are
performed after the point where we exit in check mode. These calls
are used to allocate resources and perform certain slow operations.
Let's have a way to register some functions which need to be called
there instead of having this multitude of #ifdef in the init path.
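The mechanism boils down to a list of callbacks which the init code walks
once the configuration has been checked. A minimal standalone model of the
idea follows (names are illustrative, not the exact HAProxy API):

    #include <stdio.h>
    #include <stdlib.h>

    /* Minimal model of the registration list: modules register a callback at
     * load time and the core runs them all once after the configuration
     * check, collecting error codes.
     */
    struct post_check_fct {
        int (*fct)(void);
        struct post_check_fct *next;
    };

    static struct post_check_fct *post_check_list;

    static void register_post_check(int (*fct)(void))
    {
        struct post_check_fct *item = malloc(sizeof(*item));

        if (!item)
            exit(1);
        item->fct = fct;
        item->next = post_check_list;
        post_check_list = item;
    }

    static int start_my_module(void)
    {
        puts("late initialization done after the config check");
        return 0;    /* 0 = OK, non-zero = error reported by the core */
    }

    int main(void)
    {
        int err = 0;

        register_post_check(start_my_module);
        /* ... configuration parsing and checking would happen here ... */
        for (struct post_check_fct *p = post_check_list; p; p = p->next)
            err |= p->fct();
        return err;
    }

Running the callbacks through such a list also gives the core a single place
to check their return codes instead of ignoring errors.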
This removes 2 #ifdef, an include, an ugly construct and a wild "extern"
declaration from haproxy.c. The message indicating that compression is
*not* enabled is not there anymore.
Many extensions now report some build options to ease debugging, but
this is now being done at the expense of code maintainability. Let's
provide a registration function to do this so that we can start to
remove most of the #ifdefs from haproxy.c (18 currently just for a
single function).
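A tiny standalone model of that registration (illustrative names only): each
extension contributes its descriptive string once, and the core simply dumps
the list when reporting build options:

    #include <stdio.h>

    /* Minimal model: each extension registers one descriptive string instead
     * of haproxy.c carrying one #ifdef per extension.
     */
    #define MAX_BUILD_OPTS 32

    static const char *build_opts[MAX_BUILD_OPTS];
    static int nb_build_opts;

    static void register_build_opt(const char *str)
    {
        if (nb_build_opts < MAX_BUILD_OPTS)
            build_opts[nb_build_opts++] = str;
    }

    int main(void)
    {
        register_build_opt("Built with zlib support");
        register_build_opt("Built with PCRE2 support");

        for (int i = 0; i < nb_build_opts; i++)
            puts(build_opts[i]);    /* what the version output would report */
        return 0;
    }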
<run_queue> is used to track the number of tasks in the run queue and
<run_queue_cur> is a copy used for reporting purposes. These counters have
been renamed, respectively, <tasks_run_queue> and <tasks_run_queue_cur>, so the
naming is consistent between tasks and applets.
[wt: needed for next fixes, backport to 1.7 and 1.6]
As for tasks, 2 counters have been added to track :
* the total number of applets : nb_applets
* the number of active applets : applets_active_queue
[wt: needed for next fixes, to backport to 1.7 and 1.6]
Now it is possible to use variables attached to a process. The scope name is
'proc'. These variables are released only when HAProxy is stopped.
'tune.vars.proc-max-size' directive has been added to configure the maximum
amount of memory used by "proc" variables. And because memory accounting is
hierarchical for variables, memory for "proc" vars includes memory for "sess"
vars.
This code has been moved from haproxy.c to sample.c and the function
release_sample_expr can now be called from anywhere to release a sample
expression. This function will be used by the stream processing offload engine
(SPOE).
It is very common when validating a configuration out of production not to
have access to the same resolvers and to fail on server address resolution,
making it difficult to test a configuration. This option simply appends the
"none" method to the list of address resolution methods for all servers,
ensuring that even if the libc fails to resolve an address, the startup
sequence is not interrupted.
Server addresses are not resolved anymore upon the first pass so that we
don't fail if an address cannot be resolved by the libc. Instead they are
processed all at once after the configuration is fully loaded, by the new
function srv_init_addr(). This function only acts on the server's address
if this address uses an FQDN, which appears in server->hostname.
For now the function does two things, in keeping with HAProxy's historical
default behavior:
1. apply server IP address found in server-state file if runtime DNS
resolution is enabled for this server
2. use the DNS resolver provided by the libc
If none of the 2 options above can find an IP address, then an error is
returned.
All of this will be needed to support the new server parameter "init-addr".
For now, the biggest user-visible change is that all server resolution errors
are dumped at once instead of causing a startup failure one by one.
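The resolution order described above can be summed up by the following
standalone sketch (illustrative only, not srv_init_addr() itself): try the
saved state first, then the libc, and only then report an error:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>
    #include <arpa/inet.h>

    /* Sketch of the resolution order: a saved server-state address is
     * preferred when available, then the libc resolver, otherwise an error
     * is reported.
     */
    static int resolve_from_state_file(const char *name, struct in_addr *out)
    {
        (void)name;
        (void)out;
        return -1;    /* pretend no usable address was saved for this server */
    }

    static int resolve_with_libc(const char *name, struct in_addr *out)
    {
        struct addrinfo hints, *res;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET;
        if (getaddrinfo(name, NULL, &hints, &res) != 0)
            return -1;
        *out = ((struct sockaddr_in *)res->ai_addr)->sin_addr;
        freeaddrinfo(res);
        return 0;
    }

    int main(void)
    {
        struct in_addr addr;
        const char *hostname = "localhost";    /* stands for server->hostname */

        if (resolve_from_state_file(hostname, &addr) != 0 &&
            resolve_with_libc(hostname, &addr) != 0) {
            fprintf(stderr, "could not resolve %s\n", hostname);
            return 1;
        }
        printf("%s -> %s\n", hostname, inet_ntoa(addr));
        return 0;
    }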
Currently, the function which applies server states provided by the
"old" process is applied after configuration sanity check. This results
in the impossibility to check the validity of the state file during a
regular config check, implying a full start is required, which can be
a problem sometimes.
This patch moves the loading of server_state file before MODE_CHECK.
The only reason wurfl/wurfl.h was needed outside of wurfl.c was to expose
wurfl_handle which is a pointer to a structure, referenced by global.h.
By just storing a void* there instead, we can confine all wurfl code to
wurfl.c, which is really nice.
WURFL is a high-performance and low-memory footprint mobile device
detection software component that can quickly and accurately detect
over 500 capabilities of visiting devices. It can differentiate between
portable mobile devices, desktop devices, SmartTVs and any other types
of devices on which a web browser can be installed.
In order to add WURFL device detection support, you would need to
download Scientiamobile InFuze C API and install it on your system.
Refer to www.scientiamobile.com to obtain a valid InFuze license.
Any useful information on how to configure HAProxy working with WURFL
may be found in:
doc/WURFL-device-detection.txt
doc/configuration.txt
examples/wurfl-example.cfg
Please find more information about the WURFL device detection API
at https://docs.scientiamobile.com/documentation/infuze/infuze-c-api-user-guide
Right now there is an issue with the way the maintenance flags are
propagated upon startup. They are not propagated, just copied from the
tracked server. This implies that depending on the servers' order, some
tracking servers may not be marked down. For example this configuration
does not work as expected :
     server s1 1.1.1.1:8000 track s2
     server s2 1.1.1.1:8000 track s3
     server s3 1.1.1.1:8000 track s4
     server s4 wtap:8000 check inter 1s disabled
It results in s1/s2 being up, and s3/s4 being down, while all of them
should be down.
The only clean way to process this is to run through all "root" servers
(those not tracking any other server), and to propagate their state down
to all their trackers. This is the same algorithm used to propagate the
state changes. It has to be done both to compute the IDRAIN flag and the
IMAINT flag. However, doing so requires that tracking servers are not
marked as inherited maintenance anymore while parsing the configuration
(and given that it is wrong, better drop it).
This fix also addresses another side effect of the bug above which is
that the IDRAIN/IMAINT flags are stored in the state files, and if
restored while the tracked server doesn't have the equivalent flag,
the servers may end up in a situation where it's impossible to remove
these flags. For example in the configuration above, after removing
"disabled" on server s4, the other servers would have remained down,
and not anymore with this fix. Similarly, the combination of IMAINT
or IDRAIN with their respective forced modes was not accepted on
reload, which is wrong as well.
This bug has been present at least since 1.5, maybe even 1.4 (it came
with tracking support). The fix needs to be backported there, though
the srv-state parts are irrelevant.
This commit relies on previous patch to silence warnings on startup.
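The propagation algorithm can be modelled with a small standalone example
(structures and names are illustrative only): starting from the root servers
and walking down the tracking chain, every tracker inherits the maintenance
state, so the four servers of the configuration above all end up down:

    #include <stdio.h>

    /* Toy model: the inherited-maintenance flag is pushed from the root
     * servers down the whole chain of trackers instead of being copied from
     * the tracked server at parse time.
     */
    struct server {
        const char *name;
        int disabled;              /* forced maintenance ("disabled" keyword) */
        int inherited_maint;       /* maintenance inherited through "track" */
        struct server *track;      /* server this one tracks, or NULL */
        struct server *trackers;   /* first server tracking this one */
        struct server *tracknext;  /* next tracker of the same server */
    };

    static void propagate_maint(struct server *srv, int maint)
    {
        for (struct server *t = srv->trackers; t; t = t->tracknext) {
            t->inherited_maint = maint;
            propagate_maint(t, maint || t->disabled);
        }
    }

    int main(void)
    {
        struct server s4 = { "s4", 1, 0, NULL, NULL, NULL };
        struct server s3 = { "s3", 0, 0, &s4, NULL, NULL };
        struct server s2 = { "s2", 0, 0, &s3, NULL, NULL };
        struct server s1 = { "s1", 0, 0, &s2, NULL, NULL };
        struct server *all[] = { &s1, &s2, &s3, &s4 };

        s4.trackers = &s3;
        s3.trackers = &s2;
        s2.trackers = &s1;

        /* only run through the "root" servers (here s4, which tracks nobody) */
        propagate_maint(&s4, s4.disabled);

        for (int i = 0; i < 4; i++)
            printf("%s: %s\n", all[i]->name,
                   (all[i]->disabled || all[i]->inherited_maint) ? "MAINT" : "UP");
        return 0;
    }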
Pierre Cheynier found that there's a persistent issue with the systemd
wrapper. Too fast reloads can lead to certain old processes not being
signaled at all and continuing to run. The problem was tracked down as
a race between the startup and the signal processing : nothing prevents
the wrapper from starting new processes while others are still starting,
and the resulting pid file will only contain the latest pids in this
case. This can happen with large configs and/or when a lot of SSL
certificates are involved.
In order to solve this we want the wrapper to wait for the new processes
to complete their startup. But we also want to ensure it doesn't wait
forever if an error occurs.
The solution found here is to create a pipe between the wrapper and the
sub-processes. The wrapper waits on the pipe and the sub-processes are
expected to close this pipe once they completed their startup. That way
we don't queue up new processes until the previous ones have registered
their pids to the pid file. And if anything goes wrong, the wrapper is
immediately released. The only thing is that we need the sub-processes
to know the pipe's file descriptor. We pass it in an environment variable
called HAPROXY_WRAPPER_FD.
It was confirmed both by Pierre and myself that this completely solves
the "zombie" process issue so that only the new processes continue to
listen on the sockets.
It seems that in the future this stuff could be moved to the haproxy
master process, also getting rid of an environment variable.
This fix needs to be backported to 1.6 and 1.5.
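The synchronization idea can be reduced to the following standalone sketch
(error handling trimmed, names illustrative): the wrapper holds the read side
of a pipe and read() only returns 0 once every child has closed the write
side, i.e. finished starting up:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        int pfd[2];
        char buf[1];
        pid_t pid;

        if (pipe(pfd) < 0)
            return 1;

        pid = fork();
        if (pid < 0)
            return 1;
        if (pid == 0) {
            /* child: simulate a slow startup (parsing, loading certs...),
             * then signal readiness by closing the write side of the pipe.
             */
            close(pfd[0]);
            sleep(1);
            close(pfd[1]);
            /* ... the real process would keep running and serve traffic ... */
            exit(0);
        }

        /* wrapper: drop its own write side, then block until every child has
         * closed the write side; read() returns 0 at that point.
         */
        close(pfd[1]);
        while (read(pfd[0], buf, sizeof(buf)) > 0)
            ;
        puts("children finished starting up, safe to handle the next reload");
        wait(NULL);
        return 0;
    }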
With Linux officially introducing SO_REUSEPORT support in 3.9 and
its mainstream adoption we have seen more people running into strange
SO_REUSEPORT related issues (a process management issue turning into
hard to diagnose problems because the kernel load-balances between the
new and an obsolete haproxy instance).
Also some people simply want the guarantee that the bind fails when
the old process is still bound.
This change makes SO_REUSEPORT configurable, introducing the command
line argument "-dR" and the noreuseport configuration directive.
A backport to 1.6 should be considered.
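For reference, the bind-time logic essentially amounts to the following
sketch (illustrative only): SO_REUSEPORT is set by default and simply
skipped when the new option is used, so that bind() fails if an obsolete
process still owns the port:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #ifndef SO_REUSEPORT
    #define SO_REUSEPORT 15    /* Linux value; assumption for older headers */
    #endif

    int main(int argc, char **argv)
    {
        int use_reuseport = (argc < 2);    /* pass any argument to disable it */
        int one = 1;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sin;

        if (fd < 0)
            return 1;

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(8080);
        sin.sin_addr.s_addr = htonl(INADDR_ANY);

        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        if (use_reuseport)
            setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
            /* without SO_REUSEPORT this fails if an old process is still bound */
            perror("bind");
            return 1;
        }
        puts("bound");
        return 0;
    }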
pcre_version() returns the running PCRE release, not the release
haproxy was built with.
This simple string fix should be backported to supported releases,
as the output may be confusing.
When the requested amount of FDs cannot be allocated, setrlimit() fails.
That's bad because if the limit is set to 1024 and we need 10000, we
stay on 1024 while we could possibly raise it to 4096 thanks to rlim_max.
This patch takes care of trying to assign rlim_cur to rlim_max on failure
so that we get as much as possible if we can't get all we need. The case
is particularly visible when starting haproxy as a non-privileged user
and a large maxconn is specified in the configuration.
Another point of doing this is that it is the only way to allow us to
close inherited FDs upon fork(), ie those between rlim_cur and rlim_max.
This patch may be backported to 1.6 and 1.5.
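The fallback can be illustrated with this small standalone sketch (values
and error handling are illustrative): first try the wanted limit, and if
that fails retry with rlim_cur raised only up to rlim_max:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit limit;
        rlim_t wanted = 100000;    /* e.g. derived from the configured maxconn */

        getrlimit(RLIMIT_NOFILE, &limit);

        /* first try to get everything we need */
        limit.rlim_cur = limit.rlim_max = wanted;
        if (setrlimit(RLIMIT_NOFILE, &limit) == -1) {
            /* could not: fall back to whatever the hard limit allows */
            getrlimit(RLIMIT_NOFILE, &limit);
            limit.rlim_cur = limit.rlim_max;
            if (setrlimit(RLIMIT_NOFILE, &limit) == -1)
                perror("setrlimit");
        }

        getrlimit(RLIMIT_NOFILE, &limit);
        printf("fd limit now %llu (hard limit %llu)\n",
               (unsigned long long)limit.rlim_cur,
               (unsigned long long)limit.rlim_max);
        return 0;
    }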
global.rlimit_nofile contains the max number of file descriptors that
can be allocated, except if the user is not allowed to reach this limit,
where it still contains the initially requested value. It is important
that this value always matches what is really configured so that it is
properly reported in the stats and that we can use it later to close
all FDs without wasting time closing impossible FDs.
This fix may be backported to 1.6 and 1.5.
This patch removes setlocale from the main function. It was introduced
by commit 379d9c7 ("MEDIUM: init: allow directory as argument of -f")
in 1.7-dev a few commits ago after a discussion on the mailing list.
Some regexes may have different behaviours depending on the
locale. Some Lua scripts may change their behaviour too
(http://lua-users.org/wiki/LuaLocales).
Without this patch (haproxy is using setlocale) :
  $ cat locale.cfg
  defaults
      mode http
  frontend test
      bind :9000
      mode http
      use_backend testbk if { hdr_reg(X-Test) ^\w+$ }
  backend testbk
      mode http
      server s 127.0.0.1:80
  $ LANG=fr_FR.UTF-8 ./haproxy -f locale.cfg
  $ curl -i -H "X-Test: échec" localhost:9000
  HTTP/1.1 200 OK
  ...
  $ LANG=C ./haproxy -f locale.cfg
  $ curl -i -H "X-Test: échec" localhost:9000
  HTTP/1.0 503 Service Unavailable
  ...
If the -f argument is a directory, all the files (and only files) it
contains are added to the config files list.
These files are added in lexical order (respecting LC_COLLATE).
Only files with the ".cfg" extension are added.
Only non-hidden files (not prefixed with ".") are added.
Symlinks are followed.
The -f order is still respected:
$ tree -a rootdir
rootdir
|-- dir1
| |-- .6.cfg
| |-- 1.cfg
| |-- 2
| |-- 3.cfg
| |-- 4.cfg -> 1.cfg
| |-- 5 -> 1.cfg
| |-- 7.cfg -> .
| `-- dir4
| `-- 8.cfg
|-- dir2
| |-- 10.cfg
| `-- 9.cfg
|-- dir3
| `-- 11.cfg
|-- link -> dir3/
|-- root1
|-- root2
`-- root3
$ ./haproxy -C rootdir -f root2 -f dir2 -f root3 -f dir1 \
-f link -f root1
root2
dir2/10.cfg
dir2/9.cfg
root3
dir1/1.cfg
dir1/3.cfg
dir1/4.cfg
link/11.cfg
root1
This can be useful on systemd where you can't change the haproxy
command line options on service reload.
Released version 1.7-dev3 with the following main changes :
- MINOR: sample: Moves ARGS underlying type from 32 to 64 bits.
- BUG/MINOR: log: Don't use strftime() which can clobber timezone if chrooted
- BUILD: namespaces: fix a potential build warning in namespaces.c
- MINOR: da: Using ARG12 macro for the sample fetch and the convertor.
- DOC: add encoding to json converter example
- BUG/MINOR: conf: "listener id" expects integer, but its not checked
- DOC: Clarify tune.vars.xxx-max-size settings
- CLEANUP: chunk: adding NULL check to chunk_dup allocation.
- CLEANUP: connection: fix double negation on memcmp()
- BUG/MEDIUM: peers: fix incorrect age in frequency counters
- BUG/MEDIUM: Fix RFC5077 resumption when more than TLS_TICKETS_NO are present
- BUG/MAJOR: Fix crash in http_get_fhdr with exactly MAX_HDR_HISTORY headers
- BUG/MINOR: lua: can't load external libraries
- BUG/MINOR: prevent the dump of uninitialized vars
- CLEANUP: map: it seems that the maps were planned to be chained
- MINOR: lua: move class registration facilities
- MINOR: lua: remove some useless checks
- CLEANUP: lua: Remove two same functions
- MINOR: lua: refactor the Lua object registration
- MINOR: lua: precise message when a critical error is caught
- MINOR: lua: post initialization
- MINOR: lua: Add internal function which strip spaces
- MINOR: lua: convert field to lua type
- DOC: "addr" parameter applies to both health and agent checks
- DOC: timeout client: pointers to timeout http-request
- DOC: typo on stick-store response
- DOC: stick-table: amend paragraph blaming the loss of table upon reload
- DOC: typo: ACL subdir match
- DOC: typo: maxconn paragraph is wrong due to a wrong buffer size
- DOC: regsub: parser limitation about the inability to use closing square brackets
- DOC: typo: req.uri is now replaced by capture.req.uri
- DOC: name set-gpt0 mismatch with the expected keyword
- MINOR: http: sample fetch which returns unique-id
- MINOR: dumpstats: extract stats fields enum and names
- MINOR: dumpstats: split stats_dump_info_to_buffer() in two parts
- MINOR: dumpstats: split stats_dump_fe_stats() in two parts
- MINOR: dumpstats: split stats_dump_li_stats() in two parts
- MINOR: dumpstats: split stats_dump_sv_stats() in two parts
- MINOR: dumpstats: split stats_dump_be_stats() in two parts
- MINOR: lua: dump general info
- MINOR: lua: add class proxy
- MINOR: lua: add class server
- MINOR: lua: add class listener
- BUG/MEDIUM: stick-tables: some sample fetches don't work in the connection state.
- MEDIUM: proxy: use dynamic allocation for error dumps
- CLEANUP: remove unneeded casts
- CLEANUP: uniformize last argument of malloc/calloc
- DOC: fix "needed" typo
- BUG/MINOR: dumpstats: fix write to global chunk
- BUG/MINOR: dns: inappropriate way out after a resolution timeout
- BUG/MINOR: dns: trigger a DNS query type change on resolution timeout
- CLEANUP: proto_http: few corrections for gcc warnings.
- BUG/MINOR: DNS: resolution structure change
- BUG/MINOR: allow to log cookie for tarpit and denied requests
- BUG/MEDIUM: ssl: rewind the BIO when reading certificates
- OPTIM/MINOR: session: abort if possible before connecting to the backend
- DOC: http: rename the unique-id sample and add the documentation
- BUG/MEDIUM: trace.c: rdtsc() is defined in two files
- BUG/MEDIUM: channel: fix miscalculation of available buffer space (2nd try)
- BUG/MINOR: server: risk of over reading the pref_net array.
- BUG/MINOR: cfgparse: couple of small memory leaks.
- BUG/MEDIUM: sample: initialize the pointer before parse_binary call.
- DOC: fix discrepancy in the example for http-request redirect
- MINOR: acl: Add predefined METH_DELETE, METH_PUT
- CLEANUP: .gitignore cleanup
- DOC: Clarify IPv4 address / mask notation rules
- CLEANUP: fix inconsistency between fd->iocb, proto->accept and accept()
- BUG/MEDIUM: fix maxaccept computation on per-process listeners
- BUG/MINOR: listener: stop unbound listeners on startup
- BUG/MINOR: fix maxaccept computation according to the frontend process range
- TESTS: add blocksig.c to run tests with all signals blocked
- MEDIUM: unblock signals on startup.
- MINOR: filters: Print the list of existing filters during HA startup
- MINOR: filters: Typo in an error message
- MINOR: filters: Filters must define the callbacks struct during config parsing
- DOC: filters: Add filters documentation
- BUG/MEDIUM: channel: don't allow to overwrite the reserve until connected
- BUG/MEDIUM: channel: incorrect polling condition may delay event delivery
- BUG/MEDIUM: channel: fix miscalculation of available buffer space (3rd try)
- BUG/MEDIUM: log: fix risk of segfault when logging HTTP fields in TCP mode
- MINOR: Add ability for agent-check to set server maxconn
- CLEANUP: Use server_parse_maxconn_change_request for maxconn CLI updates
- MINOR: filters: add opaque data
- BUG/MEDIUM: lua: protects the upper boundary of the argument list for converters/fetches.
- MINOR: lua: migrate the argument mask to 64 bits type.
- BUG/MINOR: dumpstats: Fix the "Total bytes saved" counter in backends stats
- BUG/MINOR: log: fix a typo that would cause %HP to log <BADREQ>
- BUG/MEDIUM: http: fix incorrect reporting of server errors
- MINOR: channel: add new function channel_congested()
- BUG/MEDIUM: http: fix risk of CPU spikes with pipelined requests from dead client
- BUG/MAJOR: channel: fix miscalculation of available buffer space (4th try)
- BUG/MEDIUM: stream: ensure the SI_FL_DONT_WAKE flag is properly cleared
- BUG/MEDIUM: channel: fix inconsistent handling of 4GB-1 transfers
- BUG/MEDIUM: stats: show servers state may show an empty or incomplete result
- BUG/MEDIUM: stats: show backend may show an empty or incomplete result
- MINOR: stats: fix typo in help messages
- MINOR: stats: show stat resolvers missing in the help message
- BUG/MINOR: dns: fix DNS header definition
- BUG/MEDIUM: dns: fix alignment issue when building DNS queries
- CLEANUP: don't ignore scripts in .gitignore
- BUILD: add a few release and backport scripts in scripts/
In C89, "void *" is automatically promoted to any pointer type. Casting
the result of malloc/calloc to the type of the LHS variable is therefore
unneeded.
Most of this patch was built using this Coccinelle patch:
@@
type T;
@@
- (T *)
(\(lua_touserdata\|malloc\|calloc\|SSL_get_app_data\|hlua_checkudata\|lua_newuserdata\)(...))
@@
type T;
T *x;
void *data;
@@
x =
- (T *)
data
@@
type T;
T *x;
T *data;
@@
x =
- (T *)
data
Unfortunately, either Coccinelle or I am too limited to detect situations
where a complex RHS expression is of type "void *" and therefore casting
is not needed. Those cases were manually examined and corrected.
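For illustration, this is the kind of change the patch produces (made-up
struct name):

    #include <stdlib.h>

    struct task {
        int id;
    };

    int main(void)
    {
        /* before: the cast is redundant since "void *" converts implicitly */
        struct task *t1 = (struct task *)malloc(sizeof(struct task));

        /* after: same allocation, without the cast */
        struct task *t2 = malloc(sizeof(*t2));

        free(t1);
        free(t2);
        return 0;
    }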
Released version 1.7-dev2 with the following main changes :
- DOC: lua: fix lua API
- DOC: mailers: typo in 'hostname' description
- DOC: compression: missing mention of libslz for compression algorithm
- BUILD/MINOR: regex: missing header
- BUG/MINOR: stream: bad return code
- DOC: lua: fix some errors and add implicit types
- MINOR: lua: add set/get priv for applets
- BUG/MINOR: http: fix several off-by-one errors in the url_param parser
- BUG/MINOR: http: Be sure to process all the data received from a server
- MINOR: filters/http: Use a wrapper function instead of stream_int_retnclose
- BUG/MINOR: chunk: make chunk_dup() always check and set dst->size
- DOC: ssl: fixed some formatting errors in crt tag
- MINOR: chunks: ensure that chunk_strcpy() adds a trailing zero
- MINOR: chunks: add chunk_strcat() and chunk_newstr()
- MINOR: chunk: make chunk_initstr() take a const string
- MEDIUM: tools: add csv_enc_append() to preserve the original chunk
- MINOR: tools: make csv_enc_append() always start at the first byte of the chunk
- MINOR: lru: new function to delete <nb> least recently used keys
- DOC: add Ben Shillito as the maintainer of 51d
- BUG/MINOR: 51d: Ensures a unique domain for each configuration
- BUG/MINOR: 51d: Aligns Pattern cache implementation with HAProxy best practices.
- BUG/MINOR: 51d: Releases workset back to pool.
- BUG/MINOR: 51d: Aligned const pointers to changes in 51Degrees.
- CLEANUP: 51d: Aligned if statements with HAProxy best practices and removed casts from malloc.
- MINOR: rename master process name in -Ds (systemd mode)
- DOC: fix a few spelling mistakes
- DOC: fix "workaround" spelling
- BUG/MINOR: examples: Fixing haproxy.spec to remove references to .cfg files
- MINOR: fix the return type for dns_response_get_query_id() function
- MINOR: server state: missing LF (\n) on error message printed when parsing server state file
- BUG/MEDIUM: dns: no DNS resolution happens if no ports provided to the nameserver
- BUG/MAJOR: servers state: server port is erased when dns resolution is enabled on a server
- BUG/MEDIUM: servers state: server port is used uninitialized
- BUG/MEDIUM: config: Adding validation to stick-table expire value.
- BUG/MEDIUM: sample: http_date() doesn't provide the right day of the week
- BUG/MEDIUM: channel: fix miscalculation of available buffer space.
- MEDIUM: pools: add a new flag to avoid rounding pool size up
- BUG/MEDIUM: buffers: do not round up buffer size during allocation
- BUG/MINOR: stream: don't force retries if the server is DOWN
- BUG/MINOR: counters: make the sc-inc-gpc0 and sc-set-gpt0 touch the table
- MINOR: unix: don't mention free ports on EAGAIN
- BUG/CLEANUP: CLI: report the proper field states in "show sess"
- MINOR: stats: send content-length with the redirect to allow keep-alive
- BUG: stream_interface: Reuse connection even if the output channel is empty
- DOC: remove old tunnel mode assumptions
- BUG/MAJOR: http-reuse: fix risk of orphaned connections
- BUG/MEDIUM: http-reuse: do not share private connections across backends
- BUG/MINOR: ssl: Be sure to use unique serial for regenerated certificates
- BUG/MINOR: stats: fix missing comma in stats on agent drain
- MAJOR: filters: Add filters support
- MINOR: filters: Do not reset stream analyzers if the client is gone
- REORG: filters: Prepare creation of the HTTP compression filter
- MAJOR: filters/http: Rewrite the HTTP compression as a filter
- MEDIUM: filters: Use macros to call filters callbacks to speed-up processing
- MEDIUM: filters: remove http_start_chunk, http_last_chunk and http_chunk_end
- MEDIUM: filters: Replace filter_http_headers callback by an analyzer
- MEDIUM: filters/http: Move body parsing of HTTP messages in dedicated functions
- MINOR: filters: Add stream_filters structure to hide filters info
- MAJOR: filters: Require explicit registration to filter HTTP body and TCP data
- MINOR: filters: Remove unused or useless stuff and do small optimizations
- MEDIUM: filters: Optimize the HTTP compression for chunk encoded response
- MINOR: filters/http: Slightly update the parsing of chunks
- MINOR: filters/http: Forward remaining data when a channel has no "data" filters
- MINOR: filters: Add an filter example
- MINOR: filters: Extract proxy stuff from the struct filter
- MINOR: map: Add regex matching replacement
- BUG/MINOR: lua: unsafe initialization
- DOC: lua: fix some errors
- MINOR: lua: file dedicated to unsafe functions
- MINOR: lua: add "now" time function
- MINOR: standard: add RFC HTTP date parser
- MINOR: lua: Add date functions
- MINOR: lua: move common function
- MINOR: lua: merge function
- MINOR: lua: Add concat class
- MINOR: standard: add function "escape_chunk"
- MEDIUM: log: add a new log format flag "E"
- DOC: add server name at rate-limit sessions example
- BUG/MEDIUM: ssl: fix off-by-one in ALPN list allocation
- BUG/MEDIUM: ssl: fix off-by-one in NPN list allocation
- DOC: LUA: fix some typos and syntax errors
- MINOR: cli: add a new "show env" command
- MEDIUM: config: allow to manipulate environment variables in the global section
- MEDIUM: cfgparse: reject incorrect 'timeout retry' keyword spelling in resolvers
- MINOR: mailers: increase default timeout to 10 seconds
- MINOR: mailers: use <CRLF> for all line endings
- BUG/MAJOR: lua: segfault using Concat object
- DOC: lua: copyrights
- MINOR: common: mask conversion
- MEDIUM: dns: extract options
- MEDIUM: dns: add a "resolve-net" option which allow to prefer an ip in a network
- MINOR: mailers: make it possible to configure the connection timeout
- BUG/MAJOR: lua: applets can't sleep.
- BUG/MINOR: server: some prototypes are renamed
- BUG/MINOR: lua: Useless copy
- BUG/MEDIUM: stats: stats bind-process doesn't propagate the process mask correctly
- BUG/MINOR: server: fix the format of the warning on address change
- CLEANUP: server: add "const" to some message strings
- MINOR: server: generalize the "updater" source
- BUG/MEDIUM: chunks: always reject negative-length chunks
- BUG/MINOR: systemd: ensure we don't miss signals
- BUG/MINOR: systemd: report the correct signal in debug message output
- BUG/MINOR: systemd: propagate the correct signal to haproxy
- MINOR: systemd: ensure a reload doesn't mask a stop
- BUG/MEDIUM: cfgparse: wrong argument offset after parsing server "sni" keyword
- CLEANUP: stats: Avoid computation with uninitialized bits.
- CLEANUP: pattern: Ignore unknown samples in pat_match_ip().
- CLEANUP: map: Avoid memory leak in out-of-memory condition.
- BUG/MINOR: tcpcheck: fix incorrect list usage resulting in failure to load certain configs
- BUG/MAJOR: samples: check smp->strm before using it
- MINOR: sample: add a new helper to initialize the owner of a sample
- MINOR: sample: always set a new sample's owner before evaluating it
- BUG/MAJOR: vars: always retrieve the stream and session from the sample
- CLEANUP: payload: remove useless and confusing nullity checks for channel buffer
- BUG/MINOR: ssl: fix usage of the various sample fetch functions
- MINOR: stats: create fields types suitable for all CSV output data
- MINOR: stats: add all the "show info" fields in a table
- MEDIUM: stats: fill all the show info elements prior to displaying them
- MINOR: stats: add a function to emit fields into a chunk
- MINOR: stats: add stats_dump_info_fields() to dump one field per line
- MEDIUM: stats: make use of stats_dump_info_fields() for "show info"
- MINOR: stats: add a declaration of all stats fields
- MINOR: stats: don't hard-code the CSV fields list anymore
- MINOR: stats: create stats fields storage and CSV dump function
- MEDIUM: stats: convert stats_dump_fe_stats() to use stats_dump_fields_csv()
- MEDIUM: stats: make stats_dump_fe_stats() use stats fields for HTML dump
- MEDIUM: stats: convert stats_dump_li_stats() to use stats_dump_fields_csv()
- MEDIUM: stats: make stats_dump_li_stats() use stats fields for HTML dump
- MEDIUM: stats: convert stats_dump_be_stats() to use stats_dump_fields_csv()
- MEDIUM: stats: make stats_dump_be_stats() use stats fields for HTML dump
- MEDIUM: stats: convert stats_dump_sv_stats() to use stats_dump_fields_csv()
- MEDIUM: stats: make stats_dump_sv_stats() use the stats field for HTML
- MEDIUM: stats: move the server state coloring logic to the server dump function
- MINOR: stats: do not use srv->admin & STATS_ADMF_MAINT in HTML dumps
- MINOR: stats: do not check srv->state for SRV_ST_STOPPED in HTML dumps
- MINOR: stats: make CSV report server check status only when enabled
- MINOR: stats: only report backend's down time if it has servers
- MINOR: stats: prepend '*' in front of the check status when in progress
- MINOR: stats: make HTML stats dump rely on the table for the check status
- MINOR: stats: add agent_status, agent_code, agent_duration to output
- MINOR: stats: add check_desc and agent_desc to the output fields
- MINOR: stats: add check and agent's health values in the output
- MEDIUM: stats: make the HTML server state dump use the CSV states
- MEDIUM: stats: only report observe errors when observe is set
- MEDIUM: stats: expose the same flags for CLI and HTTP accesses
- MEDIUM: stats: report server's address in the CSV output
- MEDIUM: stats: report the cookie value in the server & backend CSV dumps
- MEDIUM: stats: compute the color code only in the HTML form
- MEDIUM: stats: report the listeners' address in the CSV output
- MEDIUM: stats: make it possible to report the WAITING state for listeners
- REORG: stats: dump the frontend's HTML stats via a generic function
- REORG: stats: dump the socket stats via the generic function
- REORG: stats: dump the server stats via the generic function
- REORG: stats: dump the backend stats via the generic function
- MEDIUM: stats: add a new "mode" column to report the proxy mode
- MINOR: stats: report the load balancing algorithm in CSV output
- MINOR: stats: add 3 fields to report the frontend-specific connection stats
- MINOR: stats: report number of intercepted requests for frontend and backends
- MINOR: stats: introduce stats_dump_one_line() to dump one stats line
- CLEANUP: stats: make stats_dump_fields_html() not rely on proxy anymore
- MINOR: stats: add ST_SHOWADMIN to pass the admin info in the regular flags
- MINOR: stats: make stats_dump_fields_html() not use &trash by default
- MINOR: stats: add functions to emit typed fields into a chunk
- MEDIUM: stats: support "show info typed" on the CLI
- MEDIUM: stats: implement a typed output format for stats
- DOC: document the "show info typed" and "show stat typed" output formats
- MINOR: cfgparse: warn when uid parameter is not a number
- MINOR: cfgparse: warn when gid parameter is not a number
- BUG/MINOR: standard: Avoid free of non-allocated pointer
- BUG/MINOR: pattern: Avoid memory leak on out-of-memory condition
- CLEANUP: http: fix a build warning introduced by a recent fix
- BUG/MINOR: log: GMT offset not updated when entering/leaving DST
The GMT offset used in local time formats was computed at startup but was not updated when the DST status changed while running.
For example, these two RFC5424 syslog traces were emitted 5 seconds apart, just before and after the DST change:
<14>1 2016-03-27T01:59:58+01:00 bunch-VirtualBox haproxy 2098 - - Connect ...
<14>1 2016-03-27T03:00:03+01:00 bunch-VirtualBox haproxy 2098 - - Connect ...
It looked like they were emitted more than 1 hour apart, unlike with the fix:
<14>1 2016-03-27T01:59:58+01:00 bunch-VirtualBox haproxy 3381 - - Connect ...
<14>1 2016-03-27T03:00:03+02:00 bunch-VirtualBox haproxy 3381 - - Connect ...
This patch should be backported to 1.6 and partially to 1.5 (no fix needed in log.c).
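As an illustration only (a sketch of the idea, not the actual patch), the offset can be recomputed for every message using localtime_r() and the tm_gmtoff extension available on glibc/BSD:
    #define _GNU_SOURCE             /* for tm.tm_gmtoff on glibc */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Format "+HH:MM"/"-HH:MM" for the local offset of <t>, recomputed for
     * each message so that a DST switch is reflected immediately. */
    static void gmt_offset_str(time_t t, char *buf, size_t len)
    {
        struct tm tm;
        long off;

        localtime_r(&t, &tm);
        off = tm.tm_gmtoff;         /* seconds east of UTC */
        snprintf(buf, len, "%c%02ld:%02ld",
                 off < 0 ? '-' : '+', labs(off) / 3600, (labs(off) % 3600) / 60);
    }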
The +E mode escapes characters '"', '\' and ']' with '\' as prefix. It
mostly makes sense to use it in the RFC5424 structured-data log formats.
Example:
log-format-sd %{+Q,+E}o\ [exampleSDID@1234\ header=%[capture.req.hdr(0)]]
HTTP compression has been rewritten to use the filter API. For now this is more
a proof of concept than anything else: it allocates memory to do its work, so
if only for that reason it should be rewritten.
In the mean time, the implementation has been refactored to allow its use with
other filters. However, there are limitations that should be respected:
- No filter placed after the compression one is allowed to change input data
(in 'http_data' callback).
- No filter placed before the compression one is allowed to change forwarded
data (in 'http_forward_data' callback).
For now, these limitations are informal, so you should be careful when you use
several filters.
Regarding the configuration, the 'compression' keywords are still supported and
must be used to configure the HTTP compression behavior. In the absence of a
'filter' line for the compression filter, it is added to the filter chain when
the first 'compression' line is parsed. This is convenient when no other filter
is used. But if another filter is declared, an error is reported so that the
user explicitly declares the compression filter.
For example:
listen tst
...
compression algo gzip
compression offload
...
filter flt_1
filter compression
filter flt_2
...
This patch adds support for filters in HAProxy. The main idea is to have a
way to "easily" extend HAProxy by adding some "modules", called filters, that
are able to change HAProxy's behavior in a programmatic way.
To do so, many entry points have been added in the code to let filters hook into
different steps of the processing. A filter must define a flt_ops structure
(see include/types/filters.h for details). This structure contains all the
available callbacks that a filter can define:
struct flt_ops {
        /*
         * Callbacks to manage the filter lifecycle
         */
        int  (*init)  (struct proxy *p);
        void (*deinit)(struct proxy *p);
        int  (*check) (struct proxy *p);
        /*
         * Stream callbacks
         */
        void (*stream_start)     (struct stream *s);
        void (*stream_accept)    (struct stream *s);
        void (*session_establish)(struct stream *s);
        void (*stream_stop)      (struct stream *s);
        /*
         * HTTP callbacks
         */
        int  (*http_start)         (struct stream *s, struct http_msg *msg);
        int  (*http_start_body)    (struct stream *s, struct http_msg *msg);
        int  (*http_start_chunk)   (struct stream *s, struct http_msg *msg);
        int  (*http_data)          (struct stream *s, struct http_msg *msg);
        int  (*http_last_chunk)    (struct stream *s, struct http_msg *msg);
        int  (*http_end_chunk)     (struct stream *s, struct http_msg *msg);
        int  (*http_chunk_trailers)(struct stream *s, struct http_msg *msg);
        int  (*http_end_body)      (struct stream *s, struct http_msg *msg);
        void (*http_end)           (struct stream *s, struct http_msg *msg);
        void (*http_reset)         (struct stream *s, struct http_msg *msg);
        int  (*http_pre_process)   (struct stream *s, struct http_msg *msg);
        int  (*http_post_process)  (struct stream *s, struct http_msg *msg);
        void (*http_reply)         (struct stream *s, short status,
                                    const struct chunk *msg);
};
To declare and use a filter, in the configuration, the "filter" keyword must be
used in a listener/frontend section:
frontend test
...
filter <FILTER-NAME> [OPTIONS...]
The filter referenced by <FILTER-NAME> must declare a configuration parser on
its own name to fill the flt_ops and filter_conf fields in the proxy's
structure. An example will be provided later to make it perfectly clear.
For now, filters cannot be used in a backend section, but this is only a matter
of time. Documentation will also be added later. This is the first commit of a
long series about filters.
It is possible to have several filters on the same listener/frontend. These
filters are stored in an array of at most MAX_FILTERS elements (defined in
include/types/filters.h). Again, this will later be replaced by a list of
filters.
The filter API has been heavily refactored. The main changes are:
* HAProxy now supports an unlimited number of filters per proxy. To do so,
filters are stored in a list.
* Because filters are stored in a list, the filter state has been moved from
the channel structure to the filter structure. This is cleaner because the
channel structure no longer carries any filter information.
* It is possible to define filters on backends only. For such filters, the
stream_start/stream_stop callbacks are not called. Of course, it is possible
to mix frontend and backend filters.
* TCP streams are now also filtered. All callbacks without the 'http_' prefix
are called for all kinds of streams. In addition, 2 new callbacks were added to
filter data exchanged through a TCP stream:
- tcp_data: it is called when new data are available or when old unprocessed
data are still waiting.
- tcp_forward_data: it is called when some data can be consumed.
* New callbacks attached to channels were added:
- channel_start_analyze: it is called when a filter is ready to process data
exchanged through a channel. 2 new analyzers (a frontend and a backend one)
are attached to channels to call this callback. For a frontend filter, it
is called before any other analyzer. For a backend filter, it is called
when a backend is attached to a stream, so some processing cannot be
filtered in that case.
- channel_analyze: it is called before each analyzer attached to a channel,
except analyzers responsible for data sending.
- channel_end_analyze: it is called when all other analyzers have finished
their processing. A new analyzer is attached to channels to call this
callback. For a TCP stream, it is always the last one called. For an HTTP
stream, the callback is called when a request/response ends, so it is called
once for each request/response.
* The 'session_established' callback has been removed. Everything that was done
in this callback can be handled by 'channel_start_analyze' on the response
channel.
* The 'http_pre_process' and 'http_post_process' callbacks have been replaced
by 'channel_analyze'.
* The 'http_start' callback has been replaced by 'http_headers'. This new one is
called just before the headers are sent and the body is parsed.
* The 'http_end' callback has been replaced by 'channel_end_analyze'.
* It is possible to set a forwarder for TCP channels. It was already possible to
do it for HTTP ones.
* Forwarders can partially consume forwardable data. For this reason a new
HTTP message state was added before HTTP_MSG_DONE : HTTP_MSG_ENDING.
Now all filters can define the corresponding callbacks (http_forward_data
and tcp_forward_data). Each filter owns 2 offsets relative to buf->p, next and
forward, to track, respectively, input data already parsed but not yet forwarded
by the filter, and parsed data considered as forwarded by the filter. At any
time, we have the guarantee that a filter cannot parse or forward more input
than the previous ones. And, of course, it cannot forward more input than it
has parsed. Two macros have been added to retrieve these offsets: FLT_NXT and
FLT_FWD.
In addition, 2 functions have been added to change the 'next size' and the
'forward size' of a filter. When a filter parses input data, it can alter them,
so their size can vary. This action has an effect on all previous filters,
which must be handled. To do so, the function 'filter_change_next_size' must be
called, passing the size variation. In the same spirit, if a filter alters
forwarded data, it must call the function 'filter_change_forward_size'.
'filter_change_next_size' can be called in the 'http_data' and 'tcp_data'
callbacks and only these ones. And 'filter_change_forward_size' can be called
in the 'http_forward_data' and 'tcp_forward_data' callbacks and only these
ones. The data changes are the filter's responsibility, but with some
limitations: it must not change already parsed/forwarded data, or data that
previous filters have not parsed/forwarded yet.
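As an illustration only, an 'http_data' callback altering its input might look
like the sketch below. FLT_NXT() and filter_change_next_size() are the names
described above, but their exact parameters, as well as my_filter_instance(),
are assumptions made for the example:
    /* Sketch of an http_data callback rewriting part of its input in place. */
    static int my_flt_http_data(struct stream *s, struct http_msg *msg)
    {
        struct filter *filter = my_filter_instance(s);    /* hypothetical lookup */
        /* new input bytes not yet parsed by this filter */
        int avail = msg->chn->buf->i - FLT_NXT(filter, msg->chn);
        int delta = 0;

        /* ... parse up to <avail> bytes, possibly rewriting them; <delta>
         * receives the resulting size variation ... */

        if (delta)
            filter_change_next_size(filter, msg->chn, delta); /* assumed signature */

        /* report how many bytes this filter has parsed */
        return avail + delta;
    }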
Because filters can be used on backends, when the backend is set for a stream,
we add the filters defined for this backend to the stream's filter list. But we
must only do that when the backend and the frontend of the stream are not the
same; otherwise the same filters would be added a second time, leading to
undefined behavior.
The HTTP compression code had to be moved.
This simplifies the http_response_forward_body() function. To do so, the way
data are forwarded has changed. Now, a filter (and only one) can forward data.
In a commit to come, this limitation will be removed to let all filters take
part in data forwarding. There are 2 new functions that filters should use to
deal with this feature:
* flt_set_http_data_forwarder: This function sets the filter (using its id)
that will forward data for the specified HTTP message. This is only possible if
it was not already set by another filter _AND_ if no data have been forwarded
yet (msg->msg_state <= HTTP_MSG_BODY). It returns -1 if an error occurs.
* flt_http_data_forwarder: This function returns the filter id that will
forward data for the specified HTTP message. If there is no forwarder set, it
returns -1.
When an HTTP data forwarder is set for the response, the HTTP compression is
disabled. Of course, this is not definitive.
When memmax is forced using "-m", the per-process memory limit is enforced
using setrlimit(), but this value is not used to compute the automatic
maxconn limit. In addition, the per-process memory limit didn't consider
the fact that the shared SSL cache only needs to be accounted for once.
The doc was also fixed to clearly state that "-m" is global and not per
process. It makes sense because people who use -m want to protect the
system's resources regardless of whatever appears in the configuration.
In order to properly enable sched_setaffinity(), some versions of Linux require
_GNU_SOURCE rather than __USE_GNU (spotted on Alpine Linux for instance). This
is also more consistent, since __USE_GNU is not used anywhere else in the code,
and it seems to be the preferred way to enable non-portable code on Linux.
On glibc-based Linux systems, _GNU_SOURCE defines __USE_GNU anyway, so it
should be safe enough.
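For reference, getting the prototype on glibc simply requires defining
_GNU_SOURCE before the first include, for example:
    #define _GNU_SOURCE              /* must come before any #include */
    #include <sched.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);            /* e.g. pin the process to CPU 0 */
        return sched_setaffinity(0, sizeof(set), &set);
    }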
It's pointless to reserve this amount of memory when zlib is not used.
Adding the condition will make build scripts easier to manage. This may
be backported to 1.6.
Causes HAProxy to emit a static string to the agent on every check,
so that you can independently control multiple services running
behind a single agent port.
It was accidentally discovered that limiting haproxy to 5000 MB leads to
an effective limit of 904 MB. This is because the computation of the size
limit is performed by multiplying rlimit_memmax by 1048576, and doing so
causes the operation to be performed on an int instead of a long or long
long. Just switch to 1048576ULL as is done at other places to fix this.
This bug affects all supported versions, the backport is desired, though
it rarely affects users since few people apply memory limits.
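For illustration, the arithmetic behind the truncation (a standalone sketch,
reusing the variable name mentioned above):
    #include <stdio.h>

    int main(void)
    {
        int rlimit_memmax = 5000;             /* "-m 5000", in megabytes */

        /* int * int: 5000 * 1048576 = 5242880000 does not fit in a 32-bit
         * int and wraps (in practice) to 947912704, i.e. about 904 MB. */
        long long bad  = rlimit_memmax * 1048576;

        /* With a ULL constant the multiplication is promoted to unsigned
         * long long and the full 5242880000 bytes are preserved. */
        long long good = rlimit_memmax * 1048576ULL;

        printf("bad=%lld good=%lld\n", bad, good);
        return 0;
    }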
HAProxy could already support being passed a file list on the command
line, by passing multiple times "-f" followed by a file name. People
have been complaining that it made it hard to pass file lists from init
scripts.
This patch introduces an end of arguments using the common "--" tag,
after which only file names may appear. These files are then added to
the existing list of other files specified using -f and are loaded in
their declaration order. Thus it becomes possible to do something like
this :
haproxy -sf $(pidof haproxy) -- /etc/haproxy/global.cfg /etc/haproxy/customers/*.cfg
Given that all command line arguments start with a '-' and that
no pid number can start with this character, there's no constraint
to make the pid list the last argument. Let's relax this rule.
Michael Ezzell reported a bug causing haproxy to segfault during startup
when trying to send syslog message from Lua. The function __send_log() can
be called with *p that is NULL and/or when the configuration is not fully
parsed, as is the case with Lua.
This patch fixes this problem by using individual vectors instead of the
pre-generated strings log_htp and log_htp_rfc5424.
Also, this patch fixes a problem causing haproxy to write the wrong pid in
the logs -- the log_htp(_rfc5424) strings were generated at the haproxy
start, but "pid" value would be changed after haproxy is started in
daemon/systemd mode.
When peers are stopped because they are not running on the appropriate
process, we want to completely release them and unregister their signals
and task in order to ensure there's no way they may be called in the
future.
Note: ideally we should have a list of all tables attached to a peers
section being disabled in order to unregister them and void their
sync_task. It doesn't appear to be *that* easy for now.
This patch adds a new RFC5424-specific log-format for the structured-data
that is automatically sent by __send_log() when the sender is in RFC5424
mode.
A new statement "log-format-sd" should be used in order to set log-format
for the structured-data part in RFC5424 formatted syslog messages.
Example:
log-format-sd [exampleSDID@1234\ bytes=\"%B\"\ status=\"%ST\"]
The function __send_log() iterates over senders and passes the header as
the first vector to sendmsg(), thus it can send a logger-specific header
in each message.
A new logger argument "format rfc5424" should be used in order to enable
the RFC5424 header format. For example:
log 10.2.3.4:1234 len 2048 format rfc5424 local2 info
At the moment we have to call snprintf() for every log line just to
rebuild a constant. Thanks to sendmsg(), we send the message in 3 parts:
time-based header, proxy-specific hostname+log-tag+pid, session-specific
message.
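A minimal sketch of the idea using only the standard sendmsg()/iovec API (the
three parts are simplified placeholders, not the real internal variables):
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send one syslog line as three vectors so that the constant parts never
     * have to be rebuilt with snprintf() for each message. */
    static ssize_t send_log_parts(int fd, const char *hdr, /* time-based header */
                                  const char *tag,         /* hostname + log-tag + pid */
                                  const char *msg)         /* session-specific message */
    {
        struct iovec iov[3] = {
            { .iov_base = (void *)hdr, .iov_len = strlen(hdr) },
            { .iov_base = (void *)tag, .iov_len = strlen(tag) },
            { .iov_base = (void *)msg, .iov_len = strlen(msg) },
        };
        struct msghdr mh = { .msg_iov = iov, .msg_iovlen = 3 };

        return sendmsg(fd, &mh, 0);
    }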
The static_table_key, get_http_auth_buff and swap_buffer static variables
are now freed during deinit, and the two previously introduced functions are
called as well. In addition, the 'trash' string buffer is cleared.
The tune.maxrewrite parameter used to be pre-initialized to half of
the buffer size since the very early days when buffers were very small.
It has grown to absurdly large values over the years to reach 8kB for a
16kB buffer. This prevents large requests from being accepted, which is
the opposite of the initial goal.
Many users fix it to 1024 which is already quite large for header
addition.
So let's change the default setting policy :
- pre-initialize it to 1024
- let the user tweak it
- in any case, limit it to tune.bufsize / 2
This results in 15kB usable to buffer HTTP messages instead of 8kB, and
doesn't affect existing configurations which already force it.
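Expressed as a small sketch (the -1 "unset" sentinel and the helper name are
assumptions; tune.bufsize and tune.maxrewrite are the existing settings):
    /* Illustration of the new default/limit policy for tune.maxrewrite. */
    static int resolve_maxrewrite(int maxrewrite, int bufsize)
    {
        if (maxrewrite < 0)                /* not set by the user */
            maxrewrite = 1024;             /* new default */
        if (maxrewrite >= bufsize / 2)     /* never more than half a buffer */
            maxrewrite = bufsize / 2;
        return maxrewrite;
    }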
This is not a real run queue and we're facing ugly bugs because of this: if an
applet removes another applet from the queue, typically the next one after
itself, the list iterator loops forever because the list's backup pointer is
not valid anymore. Before creating a run queue, let's rename this list.
This was the first transparent proxy technology supported by haproxy
circa 2005 but it was obsoleted in 2007 by Tproxy 4.0 which removed a
lot of the earlier versions' shortcomings and was finally merged into
the kernel. Since nobody has been using cttproxy for many years now
and nobody has even just tried to compile the files, it's time to
remove it. The doc was updated as well.
This patch is the first of a series which merges all the action structs. The
"tcp-request content", "tcp-response content", "http-request" and
"http-response" rules have the same values and the same processing for some
defined actions, but the struct and the prototype of the declared function are
different.
This patch tries to unify all of these entries.
This patch adds a few checks on "global._51degrees.data_file_path" and allows
haproxy to start even when the pattern or trie data file is not specified.
If the "51d" converter is used, a new function "_51d_conv_check" will check
"global._51degrees.data_file_path" and displays a warning if necessary.
In src/haproxy.c, the global 51Degrees "cache_size" has moved outside of the
FIFTYONEDEGREES_H_PATTERN_INCLUDED ifdef block.
This cache is used by the 51d converter. The input User-Agent string, the
converter args and a random seed are used as the hashing key. The cached
entries contain a pointer to the resulting string for a specific
User-Agent string detection.
The cache size can be tuned using the 51degrees-cache-size parameter.
Moved the 51Degrees code from src/haproxy.c, src/sample.c and src/cfgparse.c
into separate files src/51d.c and include/import/51d.h.
Added two new functions init_51degrees() and deinit_51degrees(), updated the
Makefile and performed other code reorganizations related to 51Degrees.
Released version 1.6-dev2 with the following main changes :
- BUG/MINOR: ssl: Display correct filename in error message
- MEDIUM: logs: Add HTTP request-line log format directives
- BUG/MEDIUM: check: tcpcheck regression introduced by e16c1b3f
- BUG/MINOR: check: fix tcpcheck error message
- MINOR: use an int instead of calling tcpcheck_get_step_id
- MINOR: tcpcheck_rule structure update
- MINOR: include comment in tcpcheck error log
- DOC: tcpcheck comment documentation
- MEDIUM: server: add support for changing a server's address
- MEDIUM: server: change server ip address from stats socket
- MEDIUM: protocol: add minimalist UDP protocol client
- MEDIUM: dns: implement a DNS resolver
- MAJOR: server: add DNS-based server name resolution
- DOC: server name resolution + proto DNS
- MINOR: dns: add DNS statistics
- MEDIUM: http: configurable http result codes for http-request deny
- BUILD: Compile clean when debug options defined
- MINOR: lru: Add the possibility to free data when an item is removed
- MINOR: lru: Add lru64_lookup function
- MEDIUM: ssl: Add options to forge SSL certificates
- MINOR: ssl: Export functions to manipulate generated certificates
- MEDIUM: config: add DeviceAtlas global keywords
- MEDIUM: global: add the DeviceAtlas required elements to struct global
- MEDIUM: sample: add the da-csv converter
- MEDIUM: init: DeviceAtlas initialization
- BUILD: Makefile: add options to build with DeviceAtlas
- DOC: README: explain how to build with DeviceAtlas
- BUG/MEDIUM: http: fix the url_param fetch
- BUG/MEDIUM: init: segfault if global._51d_property_names is not initialized
- MAJOR: peers: peers protocol version 2.0
- MINOR: peers: avoid re-scheduling of pending stick-table's updates still not pushed.
- MEDIUM: peers: re-schedule stick-table's entry for sync when data is modified.
- MEDIUM: peers: support of any stick-table data-types for sync
- BUG/MAJOR: sample: regression on sample cast to stick table types.
- CLEANUP: deinit: remove codes for cleaning p->block_rules
- DOC: Fix L4TOUT typo in documentation
- DOC: set-log-level in Logging section preamble
- BUG/MEDIUM: compat: fix segfault on FreeBSD
- MEDIUM: check: include server address and port in the send-state header
- MEDIUM: backend: Allow redispatch on retry intervals
- MINOR: Add TLS ticket keys reference and use it in the listener struct
- MEDIUM: Add support for updating TLS ticket keys via socket
- DOC: Document new socket commands "show tls-keys" and "set ssl tls-key"
- MINOR: Add sample fetch which identifies if the SSL session has been resumed
- DOC: Update doc about weight, act and bck fields in the statistics
- BUG/MEDIUM: ssl: fix tune.ssl.default-dh-param value being overwritten
- MINOR: ssl: add a destructor to free allocated SSL ressources
- MEDIUM: ssl: add the possibility to use a global DH parameters file
- MEDIUM: ssl: replace standards DH groups with custom ones
- MEDIUM: stats: Add enum srv_stats_state
- MEDIUM: stats: Separate server state and colour in stats
- MEDIUM: stats: Only report drain state in stats if server has SRV_ADMF_DRAIN set
- MEDIUM: stats: Differentiate between DRAIN and DRAIN (agent)
- MEDIUM: Lower priority of email alerts for log-health-checks messages
- MEDIUM: Send email alerts when servers are marked as UP or enter the drain state
- MEDIUM: Document when email-alerts are sent
- BUG/MEDIUM: lua: bad argument number in analyser and in error message
- MEDIUM: lua: automatically converts strings in proxy, tables, server and ip
- BUG/MINOR: utf8: remove compilator warning
- MEDIUM: map: uses HAProxy facilities to store default value
- BUG/MINOR: lua: error in detection of mandatory arguments
- BUG/MINOR: lua: set current proxy as default value if it is possible
- BUG/MEDIUM: http: the action set-{method|path|query|uri} doesn't run.
- BUG/MEDIUM: lua: undetected infinite loop
- BUG/MAJOR: http: don't read past buffer's end in http_replace_value
- BUG/MEDIUM: http: the function "(req|res)-replace-value" doesn't respect the HTTP syntax
- MEDIUM/CLEANUP: http: rewrite and lighten http_transform_header() prototype
- BUILD: lua: it miss the '-ldl' directive
- MEDIUM: http: allows 'R' and 'S' in the protocol alphabet
- MINOR: http: split the function http_action_set_req_line() in two parts
- MINOR: http: split http_transform_header() function in two parts.
- MINOR: http: export function inet_set_tos()
- MINOR: lua: txn: add function set_(loglevel|tos|mark)
- MINOR: lua: create and register HTTP class
- DOC: lua: fix some typos
- MINOR: lua: add log functions
- BUG/MINOR: lua: Fix SSL initialisation
- DOC: lua: some fixes
- MINOR: lua: (req|res)_get_headers return more than one header value
- MINOR: lua: map system integration in Lua
- BUG/MEDIUM: http: functions set-{path,query,method,uri} breaks the HTTP parser
- MINOR: sample: add url_dec converter
- MEDIUM: sample: fill the struct sample with the session, proxy and stream pointers
- MEDIUM: sample change the prototype of sample-fetches and converters functions
- MINOR: sample: fill the struct sample with the options.
- MEDIUM: sample: change the prototype of sample-fetches functions
- MINOR: http: split the url_param in two parts
- CLEANUP: http: bad indentation
- MINOR: http: add body_param fetch
- MEDIUM: http: url-encoded parsing function can run throught wrapped buffer
- DOC: http: req.body_param documentation
- MINOR: proxy: custom capture declaration
- MINOR: capture: add two "capture" converters
- MEDIUM: capture: Allow capture with slot identifier
- MINOR: http: add array of generic pointers in http_res_rules
- MEDIUM: capture: adds http-response capture
- MINOR: common: escape CSV strings
- MEDIUM: stats: escape some strings in the CSV dump
- MINOR: tcp: add custom actions that can continue tcp-(request|response) processing
- MINOR: lua: Lua tcp action are not final action
- DOC: lua: schematics about lua socket organization
- BUG/MINOR: debug: display (null) in place of "meth"
- DOC: mention the "lua action" in documentation
- MINOR: standard: add function that converts signed int to a string
- BUG/MINOR: sample: wrong conversion of signed values
- MEDIUM: sample: Add type any
- MINOR: debug: add a special converter which display its input sample content.
- MINOR: tcp: increase the opaque data array
- MINOR: tcp/http/conf: extends the keyword registration options
- MINOR: build: fix build dependency
- MEDIUM: vars: adds support of variables
- MINOR: vars: adds get and set functions
- MINOR: lua: Variable access
- MINOR: samples: add samples which returns constants
- BUG/MINOR: vars/compil: fix some warnings
- BUILD: add 51degrees options to makefile.
- MINOR: global: add several 51Degrees members to global
- MINOR: config: add 51Degrees config parsing.
- MINOR: init: add 51Degrees initialisation code
- MEDIUM: sample: add fiftyone_degrees converter.
- MEDIUM: deinit: add cleanup for 51Degrees to deinit
- MEDIUM: sample: add trie support to 51Degrees
- DOC: add 51Degrees notes to configuration.txt.
- DOC: add build indications for 51Degrees to README.
- MEDIUM: cfgparse: introduce weak and strong quoting
- BUG/MEDIUM: cfgparse: incorrect memmove in quotes management
- MINOR: cfgparse: remove line size limitation
- MEDIUM: cfgparse: expand environment variables
- BUG/MINOR: cfgparse: fix typo in 'option httplog' error message
- BUG/MEDIUM: cfgparse: segfault when userlist is misused
- CLEANUP: cfgparse: remove reference to 'ruleset' section
- MEDIUM: cfgparse: check section maximum number of arguments
- MEDIUM: cfgparse: max arguments check in the global section
- MEDIUM: cfgparse: check max arguments in the proxies sections
- CLEANUP: stream-int: remove a redundant clearing of the linger_risk flag
- MINOR: connection: make conn_sock_shutw() actually perform the shutdown() call
- MINOR: stream-int: use conn_sock_shutw() to shutdown a connection
- MINOR: connection: perform the call to xprt->shutw() in conn_data_shutw()
- MEDIUM: stream-int: replace xprt->shutw calls with conn_data_shutw()
- MINOR: checks: use conn_data_shutw_hard() instead of call via xprt
- MINOR: connection: implement conn_sock_send()
- MEDIUM: stream-int: make conn_si_send_proxy() use conn_sock_send()
- MEDIUM: connection: make conn_drain() perform more controls
- REORG: connection: move conn_drain() to connection.c and rename it
- CLEANUP: stream-int: remove inclusion of fd.h that is not used anymore
- MEDIUM: channel: don't always set CF_WAKE_WRITE on bi_put*
- CLEANUP: lua: don't use si_ic/si_oc on known stream-ints
- BUG/MEDIUM: peers: correctly configure the client timeout
- MINOR: peers: centralize configuration of the peers frontend
- MINOR: proxy: store the default target into the frontend's configuration
- MEDIUM: stats: use frontend_accept() as the accept function
- MEDIUM: peers: use frontend_accept() instead of peer_accept()
- CLEANUP: listeners: remove unused timeout
- MEDIUM: listener: store the default target per listener
- BUILD: fix automatic inclusion of libdl.
- MEDIUM: lua: implement a simple memory allocator
- MEDIUM: compression: postpone buffer adjustments after compression
- MEDIUM: compression: don't send leading zeroes with chunk size
- BUG/MINOR: compression: consider the expansion factor in init
- MINOR: http: check the algo name "identity" instead of the function pointer
- CLEANUP: compression: statify all algo-specific functions
- MEDIUM: compression: add a distinction between UA- and config- algorithms
- MEDIUM: compression: add new "raw-deflate" compression algorithm
- MEDIUM: compression: split deflate_flush() into flush and finish
- CLEANUP: compression: remove unused reset functions
- MAJOR: compression: integrate support for libslz
- BUG/MEDIUM: http: hdr_cnt would not count any header when called without name
- BUG/MAJOR: http: null-terminate the http actions keywords list
- CLEANUP: lua: remove the unused hlua_sleep memory pool
- BUG/MAJOR: lua: use correct object size when initializing a new converter
- CLEANUP: lua: remove hard-coded sizeof() in object creations and mallocs
- CLEANUP: lua: fix confusing local variable naming in hlua_txn_new()
- CLEANUP: hlua: stop using variable name "s" alternately for hlua_txn and hlua_smp
- CLEANUP: lua: get rid of the last "*ht" for struct hlua_txn.
- CLEANUP: lua: rename last occurrences of "*s" to "*htxn" for hlua_txn
- CLEANUP: lua: rename variable "sc" for struct hlua_smp
- CLEANUP: lua: get rid of the last two "*hs" for hlua_smp
- REORG/MAJOR: session: rename the "session" entity to "stream"
- REORG/MEDIUM: stream: rename stream flags from SN_* to SF_*
- MINOR: session: start to reintroduce struct session
- MEDIUM: stream: allocate the session when a stream is created
- MEDIUM: stream: move the listener's pointer to the session
- MEDIUM: stream: move the frontend's pointer to the session
- MINOR: session: add a pointer to the session's origin
- MEDIUM: session: use the pointer to the origin instead of s->si[0].end
- CLEANUP: sample: remove useless tests in fetch functions for l4 != NULL
- MEDIUM: http: move header captures from http_txn to struct stream
- MINOR: http: create a dedicated pool for http_txn
- MAJOR: http: move http_txn out of struct stream
- MAJOR: sample: don't pass l7 anymore to sample fetch functions
- CLEANUP: lua: remove unused hlua_smp->l7 and hlua_txn->l7
- MEDIUM: http: remove the now useless http_txn from {req/res} rules
- CLEANUP: lua: don't pass http_txn anymore to hlua_request_act_wrapper()
- MAJOR: sample: pass a pointer to the session to each sample fetch function
- MINOR: stream: provide a few helpers to retrieve frontend, listener and origin
- CLEANUP: stream: don't set ->target to the incoming connection anymore
- MINOR: stream: move session initialization before the stream's
- MINOR: session: store the session's accept date
- MINOR: session: don't rely on s->logs.logwait in embryonic sessions
- MINOR: session: implement session_free() and use it everywhere
- MINOR: session: add stick counters to the struct session
- REORG: stktable: move the stkctr_* functions from stream to sticktable
- MEDIUM: streams: support looking up stkctr in the session
- MEDIUM: session: update the session's stick counters upon session_free()
- MEDIUM: proto_tcp: track the session's counters in the connection ruleset
- MAJOR: tcp: make tcp_exec_req_rules() only rely on the session
- MEDIUM: stream: don't call stream_store_counters() in kill_mini_session() nor session_accept()
- MEDIUM: stream: move all the session-specific stuff of stream_accept() earlier
- MAJOR: stream: don't initialize the stream anymore in stream_accept
- MEDIUM: session: remove the task pointer from the session
- REORG: session: move the session parts out of stream.c
- MINOR: stream-int: make appctx_new() take the applet in argument
- MEDIUM: peers: move the appctx initialization earlier
- MINOR: session: introduce session_new()
- MINOR: session: make use of session_new() when creating a new session
- MINOR: peers: make use of session_new() when creating a new session
- MEDIUM: peers: initialize the task before the stream
- MINOR: session: set the CO_FL_CONNECTED flag on the connection once ready
- CLEANUP: stream.c: do not re-attach the connection to the stream
- MEDIUM: stream: isolate connection-specific initialization code
- MEDIUM: stream: also accept appctx as origin in stream_accept_session()
- MEDIUM: peers: make use of stream_accept_session()
- MEDIUM: frontend: make ->accept only return +/-1
- MEDIUM: stream: return the stream upon accept()
- MEDIUM: frontend: move some stream initialisation to stream_new()
- MEDIUM: frontend: move the fd-specific settings to session_accept_fd()
- MEDIUM: frontend: don't restrict frontend_accept() to connections anymore
- MEDIUM: frontend: move some remaining stream settings to stream_new()
- CLEANUP: frontend: remove one useless local variable
- MEDIUM: stream: don't rely on the session's listener anymore in stream_new()
- MEDIUM: lua: make use of stream_new() to create an outgoing connection
- MINOR: lua: minor cleanup in hlua_socket_new()
- MINOR: lua: no need for setting timeouts / conn_retries in hlua_socket_new()
- MINOR: peers: no need for setting timeouts / conn_retries in peer_session_create()
- CLEANUP: stream-int: swap stream-int and appctx declarations
- CLEANUP: namespaces: fix protection against multiple inclusions
- MINOR: session: maintain the session count stats in the session, not the stream
- MEDIUM: session: adjust the connection flags before stream_new()
- MINOR: stream: pass the pointer to the origin explicitly to stream_new()
- CLEANUP: poll: move the conditions for waiting out of the poll functions
- BUG/MEDIUM: listener: don't report an error when resuming unbound listeners
- BUG/MEDIUM: init: don't limit cpu-map to the first 32 processes only
- BUG/MAJOR: tcp/http: fix current_rule assignment when restarting over a ruleset
- BUG/MEDIUM: stream-int: always reset si->ops when si->end is nullified
- DOC: update the entities diagrams
- BUG/MEDIUM: http: properly retrieve the front connection
- MINOR: applet: add a new "owner" pointer in the appctx
- MEDIUM: applet: make the applet not depend on a stream interface anymore
- REORG: applet: move the applet definitions out of stream_interface
- CLEANUP: applet: rename struct si_applet to applet
- REORG: stream-int: create si_applet_ops dedicated to applets
- MEDIUM: applet: add basic support for an applet run queue
- MEDIUM: applet: implement a run queue for active appctx
- MEDIUM: stream-int: add a new function si_applet_done()
- MAJOR: applet: now call si_applet_done() instead of si_update() in I/O handlers
- MAJOR: stream: use a regular ->update for all stream interfaces
- MEDIUM: dumpstats: don't unregister the applet anymore
- MEDIUM: applet: centralize the call to si_applet_done() in the I/O handler
- MAJOR: stream: do not allocate request buffers anymore when the left side is an applet
- MINOR: stream-int: add two flags to indicate an applet's wishes regarding I/O
- MEDIUM: applet: make the applets only use si_applet_{cant|want|stop}_{get|put}
- MEDIUM: stream-int: pause the appctx if the task is woken up
- BUG/MAJOR: tcp: only call registered actions when they're registered
- BUG/MEDIUM: peers: fix applet scheduling
- BUG/MEDIUM: peers: recent applet changes broke peers updates scheduling
- MINOR: tools: provide an rdtsc() function for time comparisons
- IMPORT: lru: import simple ebtree-based LRU functions
- IMPORT: hash: import xxhash-r39
- MEDIUM: pattern: add a revision to all pattern expressions
- MAJOR: pattern: add LRU-based cache on pattern matching
- BUG/MEDIUM: http: remove content-length from chunked messages
- DOC: http: update the comments about the rules for determining transfer-length
- BUG/MEDIUM: http: do not restrict parsing of transfer-encoding to HTTP/1.1
- BUG/MEDIUM: http: incorrect transfer-coding in the request is a bad request
- BUG/MEDIUM: http: remove content-length form responses with bad transfer-encoding
- MEDIUM: http: restrict the HTTP version token to 1 digit as per RFC7230
- MEDIUM: http: disable support for HTTP/0.9 by default
- MEDIUM: http: add option-ignore-probes to get rid of the floods of 408
- BUG/MINOR: config: clear proxy->table.peers.p for disabled proxies
- MEDIUM: init: don't stop proxies in parent process when exiting
- MINOR: stick-table: don't attach to peers in stopped state
- MEDIUM: config: initialize stick-tables after peers, not before
- MEDIUM: peers: add the ability to disable a peers section
- MINOR: peers: store the pointer to the signal handler
- MEDIUM: peers: unregister peers that were never started
- MEDIUM: config: propagate the table's process list to the peers sections
- MEDIUM: init: stop any peers section not bound to the correct process
- MEDIUM: config: validate that peers sections are bound to exactly one process
- MAJOR: peers: allow peers section to be used with nbproc > 1
- DOC: relax the peers restriction to single-process
- DOC: document option http-ignore-probes
- DOC: fix the comments about the meaning of msg->sol in HTTP
- BUG/MEDIUM: http: wait for the exact amount of body bytes in wait_for_request_body
- BUG/MAJOR: http: prevent risk of reading past end with balance url_param
- MEDIUM: stream: move HTTP request body analyser before process_common
- MEDIUM: http: add a new option http-buffer-request
- MEDIUM: http: provide 3 fetches for the body
- DOC: update the doc on the proxy protocol
- BUILD: pattern: fix build warnings introduced in the LRU cache
- BUG/MEDIUM: stats: properly initialize the scope before dumping stats
- CLEANUP: config: fix misleading information in error message.
- MINOR: config: report the number of processes using a peers section in the error case
- BUG/MEDIUM: config: properly compute the default number of processes for a proxy
- MEDIUM: http: add new "capture" action for http-request
- BUG/MEDIUM: http: fix the http-request capture parser
- BUG/MEDIUM: http: don't forward client shutdown without NOLINGER except for tunnels
- BUILD/MINOR: ssl: fix build failure introduced by recent patch
- BUG/MAJOR: check: fix breakage of inverted tcp-check rules
- CLEANUP: checks: fix double usage of cur / current_step in tcp-checks
- BUG/MEDIUM: checks: do not dereference head of a tcp-check at the end
- CLEANUP: checks: simplify the loop processing of tcp-checks
- BUG/MAJOR: checks: always check for end of list before proceeding
- BUG/MEDIUM: checks: do not dereference a list as a tcpcheck struct
- BUG/MAJOR: checks: break infinite loops when tcp-checks starts with comment
- MEDIUM: http: make url_param iterate over multiple occurrences
- BUG/MEDIUM: peers: apply a random reconnection timeout
- MEDIUM: config: reject invalid config with name duplicates
- MEDIUM: config: reject conflicts in table names
- CLEANUP: proxy: make the proxy lookup functions more user-friendly
- MINOR: proxy: simply ignore duplicates in proxy name lookups
- MINOR: config: don't open-code proxy name lookups
- MEDIUM: config: clarify the conflicting modes detection for backend rules
- CLEANUP: proxy: remove now unused function findproxy_mode()
- MEDIUM: stick-table: remove the now duplicate find_stktable() function
- MAJOR: config: remove the deprecated reqsetbe / reqisetbe actions
- MINOR: proxy: add a new function proxy_find_by_id()
- MINOR: proxy: add a flag to memorize that the proxy's ID was forced
- MEDIUM: proxy: add a new proxy_find_best_match() function
- CLEANUP: http: explicitly reference request in http_apply_redirect_rules()
- MINOR: http: prepare support for parsing redirect actions on responses
- MEDIUM: http: implement http-response redirect rules
- MEDIUM: http: no need to close the request on redirect if data was parsed
- BUG/MEDIUM: http: fix body processing for the stats applet
- BUG/MINOR: da: fix log-level comparison to emove annoying warning
- CLEANUP: global: remove one ifdef USE_DEVICEATLAS
- CLEANUP: da: move the converter registration to da.c
- CLEANUP: da: register the config keywords in da.c
- CLEANUP: adjust the envelope name in da.h to reflect the file name
- CLEANUP: da: remove ifdef USE_DEVICEATLAS from da.c
- BUILD: make 51D easier to build by defaulting to 51DEGREES_SRC
- BUILD: fix build warning when not using 51degrees
- BUILD: make DeviceAtlas easier to build by defaulting to DEVICEATLAS_SRC
- BUILD: ssl: fix recent build breakage on older SSL libs
Implementation of a DNS client in HAProxy to perform name resolution to
IP addresses.
It relies on the freshly created UDP client to perform the DNS
resolution. For now, all UDP socket calls are performed in the
DNS layer, but this might change later when the protocols are
extended to be more suited to datagram mode.
A new section called 'resolvers' is introduced by this patch. It is used to
describe the DNS servers' IP addresses as well as many parameters.
With this patch, it is possible to configure HAProxy to forge the SSL
certificate sent to a client using the SNI servername. We do it in the SNI
callback.
To enable this feature, you must pass the following bind options:
* ca-sign-file <FILE> : This is the PEM file containing the CA certificate and
the CA private key used to create and sign the server's certificates.
* (optionally) ca-sign-pass <PASS>: This is the CA private key passphrase, if
any.
* generate-certificates: Enable the dynamic generation of certificates for a
listener.
Because generating certificates is expensive, there is an LRU cache to store
them. Its size can be customized by setting the global parameter
'tune.ssl.ssl-ctx-cache-size'.
When using the "51d" converter without specifying the list of 51Degrees
properties to detect (see parameter "51degrees-property-name-list"), the
"global._51d_property_names" could be left uninitialized which will lead to
segfault during init.
Since all rules listed in p->block_rules have been moved to the beginning of
the http-request rules in check_config_validity(), there is no need to clean
p->block_rules in deinit().
Signed-off-by: Godbach <nylzhaowei@gmail.com>
An ifdef was missing to avoid declaring these variables :
src/haproxy.c: In function 'deinit':
src/haproxy.c:1253:47: warning: unused variable '_51d_prop_nameb' [-Wunused-variable]
src/haproxy.c:1253:30: warning: unused variable '_51d_prop_name' [-Wunused-variable]
This diff initialises a few DeviceAtlas struct members with their default
values.
Furthermore, the specific DeviceAtlas configuration keywords are registered,
the module is initialised, and all necessary resources are freed during the
deinit phase.
These ones were already obsoleted in 1.4, marked for removal in 1.5,
and not documented anymore. They used to emit warnings, and do still
require quite some code to stay in place. Let's remove them now.
Until now, HAProxy needed to be restarted to change the TLS ticket
keys. With this patch, the TLS keys can be updated on a per-file
basis using the admin socket. Two new socket commands have been
introduced: "show tls-keys" and "set ssl tls-key".
Signed-off-by: Nenad Merdanovic <nmerdan@anine.io>
This will prevent the peers section from remaining in listen state on
the incorrect process. The peers_fe pointer is set to NULL, which will
tell the peers task to commit suicide if it was already scheduled.
The principle of this cache is to have a global cache for all pattern
matching operations which rely on lists (reg, sub, dir, dom, ...). The
input data, the expression and a random seed are used as a hashing key.
The cached entries contain a pointer to the expression and a revision
number for that expression, so that we don't accidentally use obsolete
data after a pattern update or a very unlikely hash collision.
Regarding the risk of collisions, 10k entries at 10k req/s mean a 1% risk
of a collision after 60 years; that's already much less than the memory's
reliability in most machines and more durable than most admins' life
expectancy. A collision will result in a valid result being returned
for a different entry from the same list. If this is not acceptable,
the cache can be disabled using tune.pattern.cache-size.
A test on a file containing 10k small regex showed that the regex
matching was limited to 6k/s instead of 70k with regular strings.
When enabling the LRU cache, the performance was back to 70k/s.
The new function is called for each round of polling in order to call any
active appctx. For now we pick the stream interface from the appctx's
owner. At the moment there's no appctx queued yet, but we have everything
needed to queue them and remove them.
We have to allow 32 or 64 processes depending on the machine's word
size, and on 64-bit machines only the first 32 processes were properly
bound.
This fix should be backported to 1.5.
The poll() functions have become a bit dirty because they now check the
size of the signal queue, the FD cache and the number of tasks. It's not
their job, this must be moved to the caller. In the end it simplifies the
code because the expiration date is now set to now_ms if we must not wait,
and this achieves exactly the same result and is cleaner. The change
looks large due to the change of indent for blocks which were inside an
"if" block.
This one will not necessarily be allocated for each stream, and we want
to use the fact that it equals null to know it's not present so that we
can always deduce its presence from the stream pointer.
This commit only creates the new pool.
There is now a pointer to the session in the stream, which is NULL
for now. The session pool is created as well. Some parts will move
from the stream to the session now.
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved over there. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
Thanks to MSIE/IIS, the "deflate" name is ambiguous. According to the RFC
it's a zlib-wrapped deflate stream, but IIS used to send only a raw deflate
stream, which is the only format MSIE understands for "deflate". The other
widely used browsers do support both formats. For this reason some people
prefer to emit a raw deflate stream on "deflate" to serve more users, even
if that means violating the standards. HAProxy only follows the standard,
so they cannot do this.
This patch makes it possible to have one algorithm name in the configuration
and another one in the protocol. This will make it possible to have a new
configuration token to add a different algorithm so that users can decide if
they want a raw deflate or the standard one.
Released version 1.6-dev1 with the following main changes :
- CLEANUP: extract temporary $CFG to eliminate duplication
- CLEANUP: extract temporary $BIN to eliminate duplication
- CLEANUP: extract temporary $PIDFILE to eliminate duplication
- CLEANUP: extract temporary $LOCKFILE to eliminate duplication
- CLEANUP: extract quiet_check() to avoid duplication
- BUG/MINOR: don't start haproxy on reload
- DOC: Address issue where documentation is excluded due to a gitignore rule.
- BUG/MEDIUM: systemd: set KillMode to 'mixed'
- BUILD: fix "make install" to support spaces in the install dirs
- BUG/MINOR: config: http-request replace-header arg typo
- BUG: config: error in http-response replace-header number of arguments
- DOC: missing track-sc* in http-request rules
- BUILD: lua: missing ifdef related to SSL when enabling LUA
- BUG/MEDIUM: regex: fix pcre_study error handling
- MEDIUM: regex: Use pcre_study always when PCRE is used, regardless of JIT
- BUG/MINOR: Fix search for -p argument in systemd wrapper.
- MEDIUM: Improve signal handling in systemd wrapper.
- DOC: fix typo in Unix Socket commands
- BUG/MEDIUM: checks: external checks can't change server status to UP
- BUG/MEDIUM: checks: segfault with external checks in a backend section
- BUG/MINOR: checks: external checks shouldn't wait for timeout to return the result
- BUG/MEDIUM: auth: fix segfault with http-auth and a configuration with an unknown encryption algorithm
- BUG/MEDIUM: config: userlists should ensure that encrypted passwords are supported
- BUG/MINOR: config: don't propagate process binding for dynamic use_backend
- BUG/MINOR: log: fix request flags when keep-alive is enabled
- BUG/MEDIUM: checks: fix conflicts between agent checks and ssl healthchecks
- MINOR: checks: allow external checks in backend sections
- MEDIUM: checks: provide environment variables to the external checks
- MINOR: checks: update dynamic environment variables in external checks
- DOC: checks: environment variables used by "external-check command"
- BUG/MEDIUM: backend: correctly detect the domain when use_domain_only is used
- MINOR: ssl: load certificates in alphabetical order
- BUG/MINOR: checks: prevent http keep-alive with http-check expect
- MINOR: lua: typo in an error message
- MINOR: report the Lua version in -vv
- MINOR: lua: add a compilation error message when compiled with an incompatible version
- BUG/MEDIUM: lua: segfault when calling haproxy sample fetches from lua
- BUILD: try to automatically detect the Lua library name
- BUILD/CLEANUP: systemd: avoid a warning due to mixed code and declaration
- BUG/MEDIUM: backend: Update hash to use unsigned int throughout
- BUG/MEDIUM: connection: fix memory corruption when building a proxy v2 header
- MEDIUM: connection: add new bit in Proxy Protocol V2
- BUG/MINOR: ssl: rejects OCSP response without nextupdate.
- BUG/MEDIUM: ssl: Fix to not serve expired OCSP responses.
- BUG/MINOR: ssl: Fix OCSP resp update fails with the same certificate configured twice.
- BUG/MINOR: ssl: Fix external function in order not to return a pointer on an internal trash buffer.
- MINOR: add fetchs 'ssl_c_der' and 'ssl_f_der' to return DER formatted certs
- MINOR: ssl: add statement to force some ssl options in global.
- BUG/MINOR: ssl: correctly initialize ssl ctx for invalid certificates
- BUG/MEDIUM: ssl: fix bad ssl context init can cause segfault in case of OOM.
- BUG/MINOR: samples: fix unnecessary memcopy converting binary to string.
- MINOR: samples: adds the bytes converter.
- MINOR: samples: adds the field converter.
- MINOR: samples: add the word converter.
- BUG/MINOR: server: move the directive #endif to the end of file
- BUG/MAJOR: buffer: check the space left is enough or not when input data in a buffer is wrapped
- DOC: fix a few typos
- CLEANUP: epoll: epoll_events should be allocated according to global.tune.maxpollevents
- BUG/MINOR: http: fix typo: "401 Unauthorized" => "407 Unauthorized"
- BUG/MINOR: parse: refer curproxy instead of proxy
- BUG/MINOR: parse: check the validity of size string in a more strict way
- BUILD: add new target 'make uninstall' to support uninstalling haproxy from OS
- DOC: expand the docs for the provided stats.
- BUG/MEDIUM: unix: do not unlink() abstract namespace sockets upon failure.
- MEDIUM: ssl: Certificate Transparency support
- MEDIUM: stats: proxied stats admin forms fix
- MEDIUM: http: Compress HTTP responses with status codes 201,202,203 in addition to 200
- BUG/MEDIUM: connection: sanitize PPv2 header length before parsing address information
- MAJOR: namespace: add Linux network namespace support
- MINOR: systemd: Check configuration before start
- BUILD: ssl: handle boringssl in openssl version detection
- BUILD: ssl: disable OCSP when using boringssl
- BUILD: ssl: don't call get_rfc2409_prime when using boringssl
- MINOR: ssl: don't use boringssl's cipher_list
- BUILD: ssl: use OPENSSL_NO_OCSP to detect OCSP support
- MINOR: stats: fix minor typo in HTML page
- MINOR: Also accept SIGHUP/SIGTERM in systemd-wrapper
- MEDIUM: Add support for configurable TLS ticket keys
- DOC: Document the new tls-ticket-keys bind keyword
- DOC: clearly state that the "show sess" output format is not fixed
- MINOR: stats: fix minor typo fix in stats_dump_errors_to_buffer()
- DOC: httplog does not support 'no'
- BUG/MEDIUM: ssl: Fix a memory leak in DHE key exchange
- MINOR: ssl: use SSL_get_ciphers() instead of directly accessing the cipher list.
- BUG/MEDIUM: Consistently use 'check' in process_chk
- MEDIUM: Add external check
- BUG/MEDIUM: Do not set agent health to zero if server is disabled in config
- MEDIUM/BUG: Only explicitly report "DOWN (agent)" if the agent health is zero
- MEDIUM: Remove connect_chk
- MEDIUM: Refactor init_check and move to checks.c
- MEDIUM: Add free_check() helper
- MEDIUM: Move proto and addr fields struct check
- MEDIUM: Attach tcpcheck_rules to check
- MEDIUM: Add parsing of mailers section
- MEDIUM: Allow configuration of email alerts
- MEDIUM: Support sending email alerts
- DOC: Document email alerts
- MINOR: Remove trailing '.' from email alert messages
- MEDIUM: Allow suppression of email alerts by log level
- BUG/MEDIUM: Do not consider an agent check as failed on L7 error
- MINOR: deinit: fix memory leak
- MINOR: http: export the function 'smp_fetch_base32'
- BUG/MEDIUM: http: tarpit timeout is reset
- MINOR: sample: add "json" converter
- BUG/MEDIUM: pattern: don't load more than once a pattern list.
- MINOR: map/acl/dumpstats: remove the "Done." message
- BUG/MAJOR: ns: HAProxy segfault if the cli_conn is not from a network connection
- BUG/MINOR: pattern: error message missing
- BUG/MEDIUM: pattern: some entries are not deleted with case insensitive match
- BUG/MINOR: ARG6 and ARG7 don't fit in a 32 bits word
- MAJOR: poll: only rely on wake_expired_tasks() to compute the wait delay
- MEDIUM: task: call session analyzers if the task is woken by a message.
- MEDIUM: protocol: automatically pick the proto associated to the connection.
- MEDIUM: channel: wake up any request analyzer on response activity
- MINOR: converters: add a "void *private" argument to converters
- MINOR: converters: give the session pointer as converter argument
- MINOR: sample: add private argument to the struct sample_fetch
- MINOR: global: export function and permits to not resolve DNS names
- MINOR: sample: add function for browsing samples.
- MINOR: global: export many symbols.
- MINOR: includes: fix a lot of missing or useless includes
- MEDIUM: tcp: add register keyword system.
- MEDIUM: buffer: make bo_putblk/bo_putstr/bo_putchk return the number of bytes copied.
- MEDIUM: http: change the code returned by the response processing rule functions
- MEDIUM: http/tcp: permit to resume http and tcp custom actions
- MINOR: channel: functions to get data from a buffer without copy
- MEDIUM: lua: lua integration in the build and init system.
- MINOR: lua: add ease functions
- MINOR: lua: add runtime execution context
- MEDIUM: lua: "com" signals
- MINOR: lua: add the configuration directive "lua-load"
- MINOR: lua: core: create "core" class and object
- MINOR: lua: post initialisation bindings
- MEDIUM: lua: add coroutine as tasks.
- MINOR: lua: add sample and args type converters
- MINOR: lua: txn: create class TXN associated with the transaction.
- MINOR: lua: add shared context in the lua stack
- MINOR: lua: txn: import existing sample-fetches in the class TXN
- MINOR: lua: txn: add lua function in TXN that returns an array of http headers
- MINOR: lua: register and execute sample-fetches in LUA
- MINOR: lua: register and execute converters in LUA
- MINOR: lua: add bindings for tcp and http actions
- MINOR: lua: core: add sleep functions
- MEDIUM: lua: socket: add "socket" class for TCP I/O
- MINOR: lua: core: pattern and acl manipulation
- MINOR: lua: channel: add "channel" class
- MINOR: lua: txn: object "txn" provides two objects "channel"
- MINOR: lua: core: can set the nice of the current task
- MINOR: lua: core: can yield an execution stack
- MINOR: lua: txn: add binding for closing the client connection.
- MEDIUM: lua: Lua initialisation "on demand"
- BUG/MAJOR: lua: send function fails and return bad bytes
- MINOR: remove unused declaration.
- MINOR: lua: remove some #define
- MINOR: lua: use bitfield and macro in place of integer and enum
- MINOR: lua: set skeleton for Lua execution expiration
- MEDIUM: lua: each yielding function returns a wake up time.
- MINOR: lua: adds "forced yield" flag
- MEDIUM: lua: interrupt the Lua execution for running other process
- MEDIUM: lua: change the sleep function core
- BUG/MEDIUM: lua: the execution timeout is ignored in yield case
- DOC: lua: Lua configuration documentation
- MINOR: lua: add the struct session in the lua channel struct
- BUG/MINOR: lua: set buffer if it is nnot avalaible.
- BUG/MEDIUM: lua: reset flags before resuming execution
- BUG/MEDIUM: lua: fix infinite loop about channel
- BUG/MEDIUM: lua: the Lua process is not waked up after sending data on requests side
- BUG/MEDIUM: lua: many errors when we try to send data with the channel API
- MEDIUM: lua: use the Lua-5.3 version of the library
- BUG/MAJOR: lua: some function are not yieldable, the forced yield causes errors
- BUG/MEDIUM: lua: can't handle the response bytes
- BUG/MEDIUM: lua: segfault with buffer_replace2
- BUG/MINOR: lua: check buffers before initializing socket
- BUG/MINOR: log: segfault if there are no proxy reference
- BUG/MEDIUM: lua: sockets don't have buffer to write data
- BUG/MEDIUM: lua: cannot connect socket
- BUG/MINOR: lua: sockets receive behavior doesn't follows the specs
- BUG/BUILD: lua: The strict Lua 5.3 version check is not done.
- BUG/MEDIUM: buffer: one byte miss in buffer free space check
- MEDIUM: lua: make the functions hlua_gethlua() and hlua_sethlua() faster
- MINOR: replace the Core object by a simple model.
- MEDIUM: lua: change the objects configuration
- MEDIUM: lua: create a namespace for the fetches
- MINOR: converters: add function to browse converters
- MINOR: lua: wrapper for converters
- MINOR: lua: replace function (req|get)_channel by a variable
- MINOR: lua: fetches and converters can return an empty string in place of nil
- DOC: lua api
- BUG/MEDIUM: sample: fix random number upper-bound
- BUG/MINOR: stats:Fix incorrect printf type.
- BUG/MAJOR: session: revert all the crappy client-side timeout changes
- BUG/MINOR: logs: properly initialize and count log sockets
- BUG/MEDIUM: http: fetch "base" is not compatible with set-header
- BUG/MINOR: counters: do not untrack counters before logging
- BUG/MAJOR: sample: correctly reinitialize sample fetch context before calling sample_process()
- MINOR: stick-table: make stktable_fetch_key() indicate why it failed
- BUG/MEDIUM: counters: fix track-sc* to wait on unstable contents
- BUILD: remove TODO from the spec file and add README
- MINOR: log: make MAX_SYSLOG_LEN overridable at build time
- MEDIUM: log: support a user-configurable max log line length
- DOC: provide an example of how to use ssl_c_sha1
- BUILD: checks: external checker needs signal.h
- BUILD: checks: kill a minor warning on Solaris in external checks
- BUILD: http: fix isdigit & isspace warnings on Solaris
- BUG/MINOR: listener: set the listener's fd to -1 after deletion
- BUG/MEDIUM: unix: failed abstract socket binding is retryable
- MEDIUM: listener: implement a per-protocol pause() function
- MEDIUM: listener: support rebinding during resume()
- BUG/MEDIUM: unix: completely unbind abstract sockets during a pause()
- DOC: explicitly mention the limits of abstract namespace sockets
- DOC: minor fix on {sc,src}_kbytes_{in,out}
- DOC: fix alphabetical sort of converters
- MEDIUM: stick-table: implement lookup from a sample fetch
- MEDIUM: stick-table: add new converters to fetch table data
- MINOR: samples: add two converters for the date format
- BUG/MAJOR: http: correctly rewind the request body after start of forwarding
- DOC: remove references to CPU=native in the README
- DOC: mention that "compression offload" is ignored in defaults section
- DOC: mention that Squid correctly responds 400 to PPv2 header
- BUILD: fix dependencies between config and compat.h
- MINOR: session: export the function 'smp_fetch_sc_stkctr'
- MEDIUM: stick-table: make it easier to register extra data types
- BUG/MINOR: http: base32+src should use the big endian version of base32
- MINOR: sample: allow IP address to cast to binary
- MINOR: sample: add new converters to hash input
- MINOR: sample: allow integers to cast to binary
- BUILD: report commit ID in git versions as well
- CLEANUP: session: move the stick counters declarations to stick_table.h
- MEDIUM: http: add the track-sc* actions to http-request rules
- BUG/MEDIUM: connection: fix proxy v2 header again!
- BUG/MAJOR: tcp: fix a possible busy spinning loop in content track-sc*
- OPTIM/MINOR: proxy: reduce struct proxy by 48 bytes on 64-bit archs
- MINOR: log: add a new field "%lc" to implement a per-frontend log counter
- BUG/MEDIUM: http: fix inverted condition in pat_match_meth()
- BUG/MEDIUM: http: fix improper parsing of HTTP methods for use with ACLs
- BUG/MINOR: pattern: remove useless allocation of unused trash in pat_parse_reg()
- BUG/MEDIUM: acl: correctly compute the output type when a converter is used
- CLEANUP: acl: cleanup some of the redundancy and spaghetti after last fix
- BUG/CRITICAL: http: don't update msg->sov once data start to leave the buffer
- MEDIUM: http: enable header manipulation for 101 responses
- BUG/MEDIUM: config: propagate frontend to backend process binding again.
- MEDIUM: config: properly propagate process binding between proxies
- MEDIUM: config: make the frontends automatically bind to the listeners' processes
- MEDIUM: config: compute the exact bind-process before listener's maxaccept
- MEDIUM: config: only warn if stats are attached to multi-process bind directives
- MEDIUM: config: report it when tcp-request rules are misplaced
- DOC: indicate in the doc that track-sc* can wait if data are missing
- MINOR: config: detect the case where a tcp-request content rule has no inspect-delay
- MEDIUM: systemd-wrapper: support multiple executable versions and names
- BUG/MEDIUM: remove debugging code from systemd-wrapper
- BUG/MEDIUM: http: adjust close mode when switching to backend
- BUG/MINOR: config: don't propagate process binding on fatal errors.
- BUG/MEDIUM: check: rule-less tcp-check must detect connect failures
- BUG/MINOR: tcp-check: report the correct failed step in the status
- DOC: indicate that weight zero is reported as DRAIN
- BUG/MEDIUM: config: avoid skipping disabled proxies
- BUG/MINOR: config: do not accept more track-sc than configured
- BUG/MEDIUM: backend: fix URI hash when a query string is present
- BUG/MEDIUM: http: don't dump debug headers on MSG_ERROR
- BUG/MAJOR: cli: explicitly call cli_release_handler() upon error
- BUG/MEDIUM: tcp: fix outgoing polling based on proxy protocol
- BUILD/MINOR: ssl: de-constify "ciphers" to avoid a warning on openssl-0.9.8
- BUG/MEDIUM: tcp: don't use SO_ORIGINAL_DST on non-AF_INET sockets
- BUG/BUILD: revert accidental change in the makefile from latest SSL fix
- BUG/MEDIUM: ssl: force a full GC in case of memory shortage
- MEDIUM: ssl: add support for smaller SSL records
- MINOR: session: release a few other pools when stopping
- MINOR: task: release the task pool when stopping
- BUG/MINOR: config: don't inherit the default balance algorithm in frontends
- BUG/MAJOR: frontend: initialize capture pointers earlier
- BUG/MINOR: stats: correctly set the request/response analysers
- MAJOR: polling: centralize calls to I/O callbacks
- DOC: fix typo in the body parser documentation for msg.sov
- BUG/MINOR: peers: the buffer size is global.tune.bufsize, not trash.size
- MINOR: sample: add a few basic internal fetches (nbproc, proc, stopping)
- DEBUG: pools: apply poisonning on every allocated pool
- BUG/MAJOR: sessions: unlink session from list on out of memory
- BUG/MEDIUM: patterns: previous fix was incomplete
- BUG/MEDIUM: payload: ensure that a request channel is available
- BUG/MINOR: tcp-check: don't condition data polling on check type
- BUG/MEDIUM: tcp-check: don't rely on random memory contents
- BUG/MEDIUM: tcp-checks: disable quick-ack unless next rule is an expect
- BUG/MINOR: config: fix typo in condition when propagating process binding
- BUG/MEDIUM: config: do not propagate processes between stopped processes
- BUG/MAJOR: stream-int: properly check the memory allocation return
- BUG/MEDIUM: memory: fix freeing logic in pool_gc2()
- BUG/MAJOR: namespaces: conn->target is not necessarily a server
- BUG/MEDIUM: compression: correctly report zlib_mem
- CLEANUP: lists: remove dead code
- CLEANUP: memory: remove dead code
- CLEANUP: memory: replace macros pool_alloc2/pool_free2 with functions
- MINOR: memory: cut pool allocator in 3 layers
- MEDIUM: memory: improve pool_refill_alloc() to pass a refill count
- MINOR: stream-int: retrieve session pointer from stream-int
- MINOR: buffer: reset a buffer in b_reset() and not channel_init()
- MEDIUM: buffer: use b_alloc() to allocate and initialize a buffer
- MINOR: buffer: move buffer initialization after channel initialization
- MINOR: buffer: only use b_free to release buffers
- MEDIUM: buffer: always assign a dummy empty buffer to channels
- MEDIUM: buffer: add a new buf_wanted dummy buffer to report failed allocations
- MEDIUM: channel: do not report full when buf_empty is present on a channel
- MINOR: session: group buffer allocations together
- MINOR: buffer: implement b_alloc_fast()
- MEDIUM: buffer: implement b_alloc_margin()
- MEDIUM: session: implement a basic atomic buffer allocator
- MAJOR: session: implement a wait-queue for sessions who need a buffer
- MAJOR: session: only allocate buffers when needed
- MINOR: stats: report a "waiting" flags for sessions
- MAJOR: session: only wake up as many sessions as available buffers permit
- MINOR: config: implement global setting tune.buffers.reserve
- MINOR: config: implement global setting tune.buffers.limit
- MEDIUM: channel: implement a zero-copy buffer transfer
- MEDIUM: stream-int: support splicing from applets
- OPTIM: stream-int: try to send pending spliced data
- CLEANUP: session: remove session_from_task()
- DOC: add missing entry for log-format and clarify the text
- MINOR: logs: add a new per-proxy "log-tag" directive
- BUG/MEDIUM: http: fix header removal when previous header ends with pure LF
- MINOR: config: extend the default max hostname length to 64 and beyond
- BUG/MEDIUM: channel: fix possible integer overflow on reserved size computation
- BUG/MINOR: channel: compare to_forward with buf->i, not buf->size
- MINOR: channel: add channel_in_transit()
- MEDIUM: channel: make buffer_reserved() use channel_in_transit()
- MEDIUM: channel: make bi_avail() use channel_in_transit()
- BUG/MEDIUM: channel: don't schedule data in transit for leaving until connected
- CLEANUP: channel: rename channel_reserved -> channel_is_rewritable
- MINOR: channel: rename channel_full() to !channel_may_recv()
- MINOR: channel: rename buffer_reserved() to channel_reserved()
- MINOR: channel: rename buffer_max_len() to channel_recv_limit()
- MINOR: channel: rename bi_avail() to channel_recv_max()
- MINOR: channel: rename bi_erase() to channel_truncate()
- BUG/MAJOR: log: don't try to emit a log if no logger is set
- MINOR: tools: add new round_2dig() function to round integers
- MINOR: global: always export some SSL-specific metrics
- MINOR: global: report information about the cost of SSL connections
- MAJOR: init: automatically set maxconn and/or maxsslconn when possible
- MINOR: http: add a new fetch "query" to extract the request's query string
- MINOR: hash: add new function hash_crc32
- MINOR: samples: provide a "crc32" converter
- MEDIUM: backend: add the crc32 hash algorithm for load balancing
- BUG/MINOR: args: add missing entry for ARGT_MAP in arg_type_names
- BUG/MEDIUM: http: make http-request set-header compute the string before removal
- MEDIUM: args: use #define to specify the number of bits used by arg types and counts
- MEDIUM: args: increase arg type to 5 bits and limit arg count to 5
- MINOR: args: add type-specific flags for each arg in a list
- MINOR: args: implement a new arg type for regex : ARGT_REG
- MEDIUM: regex: add support for passing regex flags to regex_exec_match()
- MEDIUM: samples: add a regsub converter to perform regex-based transformations
- BUG/MINOR: sample: fix case sensitivity for the regsub converter
- MEDIUM: http: implement http-request set-{method,path,query,uri}
- DOC: fix missing closing brackend on regsub
- MEDIUM: samples: provide basic arithmetic and bitwise operators
- MEDIUM: init: continue to enforce SYSTEM_MAXCONN with auto settings if set
- BUG/MINOR: http: fix incorrect header value offset in replace-hdr/replace-value
- BUG/MINOR: http: abort request processing on filter failure
- MEDIUM: tcp: implement tcp-ut bind option to set TCP_USER_TIMEOUT
- MINOR: ssl/server: add the "no-ssl-reuse" server option
- BUG/MAJOR: peers: initialize s->buffer_wait when creating the session
- MINOR: http: add a new function to iterate over each header line
- MINOR: http: add the new sample fetches req.hdr_names and res.hdr_names
- MEDIUM: task: always ensure that the run queue is consistent
- BUILD: Makefile: add -Wdeclaration-after-statement
- BUILD/CLEANUP: ssl: avoid a warning due to mixed code and declaration
- BUILD/CLEANUP: config: silent 3 warnings about mixed declarations with code
- MEDIUM: protocol: use a family array to index the protocol handlers
- BUILD: lua: cleanup many mixed occurrences declarations & code
- BUG/MEDIUM: task: fix recently introduced scheduler skew
- BUG/MINOR: lua: report the correct function name in an error message
- BUG/MAJOR: http: fix stats regression consecutive to HTTP_RULE_RES_YIELD
- Revert "BUG/MEDIUM: lua: can't handle the response bytes"
- MINOR: lua: convert IP addresses to type string
- CLEANUP: lua: use the same function names in C and Lua
- REORG/MAJOR: move session's req and resp channels back into the session
- CLEANUP: remove now unused channel pool
- REORG/MEDIUM: stream-int: introduce si_ic/si_oc to access channels
- MEDIUM: stream-int: add a flag indicating which side the SI is on
- MAJOR: stream-int: only rely on SI_FL_ISBACK to find the requested channel
- MEDIUM: stream-interface: remove now unused pointers to channels
- MEDIUM: stream-int: make si_sess() use the stream int's side
- MEDIUM: stream-int: use si_task() to retrieve the task from the stream int
- MEDIUM: stream-int: remove any reference to the owner
- CLEANUP: stream-int: add si_ib/si_ob to dereference the buffers
- CLEANUP: stream-int: add si_opposite() to find the other stream interface
- REORG/MEDIUM: channel: only use chn_prod / chn_cons to find stream-interfaces
- MEDIUM: channel: add a new flag "CF_ISRESP" for the response channel
- MAJOR: channel: only rely on the new CF_ISRESP flag to find the SI
- MEDIUM: channel: remove now unused ->prod and ->cons pointers
- CLEANUP: session: simplify references to chn_{prod,cons}(&s->{req,res})
- CLEANUP: session: use local variables to access channels / stream ints
- CLEANUP: session: don't needlessly pass a pointer to the stream-int
- CLEANUP: session: don't use si_{ic,oc} when we know the session.
- CLEANUP: stream-int: limit usage of si_ic/si_oc
- CLEANUP: lua: limit usage of si_ic/si_oc
- MINOR: channel: add chn_sess() helper to retrieve session from channel
- MEDIUM: session: simplify receive buffer allocator to only use the channel
- MEDIUM: lua: use CF_ISRESP to detect the channel's side
- CLEANUP: lua: remove the session pointer from hlua_channel
- CLEANUP: lua: hlua_channel_new() doesn't need the pointer to the session anymore
- MEDIUM: lua: remove struct hlua_channel
- MEDIUM: lua: remove hlua_sample_fetch
As with the other libraries used by haproxy, it can be useful to display the Lua
version used at compilation time.
A new line is added to "haproxy -vv", which shows whether Lua is supported by the
binary, and with which version it was compiled.
This system permits the execution of Lua functions after HAProxy has completed
its initialisation. These functions are executed between the end of the
configuration parsing and checking and the start of the scheduler.
This is the first step of the Lua integration. We add the required
files to the HAProxy project. These files contain the main
includes, the Makefile options and an empty initialisation function.
It is the Lua skeleton.
Currently, HAProxy uses the functions "process_runnable_tasks" and
"wake_expired_tasks" to get the next task which can expire.
If a task is added with "task_schedule" or another method during
the execution of another task, the expiration of this new task
is not taken into account, and its execution can happen too late.
In practice, HAProxy does not seem to be sensitive to this bug.
This fix moves the call to process_runnable_tasks() before the timeout
calculation and ensures that all wakeups are processed together. Only
wake_expired_tasks() needs to return a timeout now.
Commit d025648 ("MAJOR: init: automatically set maxconn and/or maxsslconn
when possible") resulted in a case where if enough memory is available,
a maxconn value larger than SYSTEM_MAXCONN could be computed, resulting
in possibly overflowing other system resources (eg: kernel socket buffers,
conntrack entries, etc). Let's bound any automatic maxconn to SYSTEM_MAXCONN
if it is defined. Note that the value is set to DEFAULT_MAXCONN since
SYSTEM_MAXCONN forces DEFAULT_MAXCONN, thus it is not an error.
This one will be used when a regex is expected. It is automatically
resolved after the parsing and compiled into a regex. Some optional
flags are supported in the type-specific flags that should be set by
the optional arg checker. One is used during the regex compilation :
ARGF_REG_ICASE to ignore case.
If a memory size limit is enforced using "-m" on the command line and
one or both of maxconn / maxsslconn are not set, instead of using the
build-time values, haproxy now computes the number of sessions that can
be allocated depending on a number of parameters, among which :
- global.maxconn (if set)
- global.maxsslconn (if set)
- maxzlibmem
- tune.ssl.cachesize
- presence of SSL in at least one frontend (bind lines)
- presence of SSL in at least one backend (server lines)
- tune.bufsize
- tune.cookie_len
The purpose is to ensure that haproxy will not run out of memory
when maxing out all parameters. If neither maxconn nor maxsslconn are
used, it will consider that 100% of the sessions involve SSL on the sides
where it's supported. That means that it will typically optimize maxconn
for SSL offloading or SSL bridging on all connections. This generally
means that the simple act of enabling SSL in a frontend or in a backend
will significantly reduce the global maxconn, but in exchange it
guarantees that it will not fail.
All metrics may be enforced using #defines to accommodate variations in
SSL libraries or various allocation sizes.
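As a purely illustrative sketch (the values are hypothetical and not part of
this patch), the automatic computation is simply bypassed by setting the
limits explicitly in the global section :
    global
        maxconn    20000
        maxsslconn 5000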
We've already experimented with three wake up algorithms when releasing
buffers : the first naive one used to wake up far too many sessions,
causing many of them not to get any buffer. The second approach which
was still in use prior to this patch consisted in waking up either 1
or 2 sessions depending on the number of FDs we had released. And this
was still inaccurate. The third one tried to cover the accuracy issues
of the second and took into consideration the number of FDs the sessions
would be willing to use, but most of the time we ended up waking up too
many of them for nothing, or deadlocking by lack of buffers.
This patch completely removes the need to allocate two buffers at once.
Instead it splits allocations into critical and non-critical ones and
implements a reserve in the pool for this. The deadlock situation happens
when all buffers are allocated for requests pending in a maxconn-limited
server queue, because then there's no more way to allocate buffers for
responses, and these responses are critical to releasing the server's
connection in order to release the pending requests. In fact maxconn on
a server creates a dependence between sessions, and particularly between
the oldest session's responses and the latest session's requests. Thus, it is
mandatory to get a free buffer for a response in order to release a
server connection, which in turn permits releasing a request buffer.
Since we definitely have non-symmetrical buffers, we need to implement
this logic in the buffer allocation mechanism. What this commit does is
implement a reserve of buffers which can only be allocated for responses
and that will never be allocated for requests. This is made possible by
the requester indicating how much margin it wants to leave after the
allocation succeeds. Thus it is a cooperative allocation mechanism : the
requester (process_session() in general) prefers not to get a buffer in
order to respect other's need for response buffers. The session management
code always knows if a buffer will be used for requests or responses, so
that is not difficult :
- either there's an applet on the initiator side and we really need
the request buffer (since currently the applet is called in the
context of the session)
- or we have a connection and we really need the response buffer (in
order to support building and sending an error message back)
This reserve ensures that we don't take all allocatable buffers for
requests waiting in a queue. The downside is that all the extra buffers
are really allocated to ensure they can be allocated. But with small
values it is not an issue.
With this change, we don't observe any more deadlocks even when running
with maxconn 1 on a server under severely constrained memory conditions.
The code becomes a bit tricky, it relies on the scheduler's run queue to
estimate how many sessions are already expected to run so that it doesn't
wake up everyone with too few resources. A better solution would probably
consist in having two queues, one for urgent requests and one for normal
requests. A failed allocation for a session dealing with an error, a
connection event, or the need for a response (or request when there's an
applet on the left) would go to the urgent request queue, while other
requests would go to the other queue. Urgent requests would be served
from 1 entry in the pool, while the regular ones would be served only
according to the reserve. Despite not yet having this, it works
remarkably well.
This mechanism is quite efficient, we don't perform too many wake up calls
anymore. For 1 million sessions elapsed during massive memory contention,
we observe about 4.5M calls to process_session() compared to 4.0M without
memory constraints. Previously we used to observe up to 16M calls, which
roughly means 12M failures.
During a test run under high memory constraints (limit enforced to 27 MB
instead of the 58 MB normally needed), performance used to drop by 53% prior
to this patch. Now with this patch instead it *increases* by about 1.5%.
The best effect of this change is that by limiting the memory usage to about
2/3 to 3/4 of what is needed by default, it's possible to increase performance
by up to about 18% mainly due to the fact that pools are reused more often
and remain hot in the CPU cache (observed on regular HTTP traffic with 20k
objects, buffers.limit = maxconn/10, buffers.reserve = limit/2).
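For illustration only, the ratios used in that test would translate to a
global section like the following (a hypothetical maxconn of 20000, not part
of the patch) :
    global
        maxconn              20000
        tune.buffers.limit   2000     # maxconn/10
        tune.buffers.reserve 1000     # limit/2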
Below is an example of scenario which used to cause a deadlock previously :
- connection is received
- two buffers are allocated in process_session() then released
- one is allocated when receiving an HTTP request
- the second buffer is allocated then released in process_session()
for request parsing then connection establishment.
- poll() says we can send, so the request buffer is sent and released
- process session gets notified that the connection is now established
and allocates two buffers then releases them
- all other sessions do the same till one cannot get the request buffer
without hitting the margin
- and now the server responds. stream_interface allocates the response
buffer and manages to get it since it's higher priority being for a
response.
- but process_session() cannot allocate the request buffer anymore
=> We could end up with all buffers used by responses so that none may
be allocated for a request in process_session().
When the applet processing leaves the session context, the test will have
to be changed so that we always allocate a response buffer regardless of
the left side (eg: H2->H1 gateway). A final improvement would consist in
being able to only retry the failed I/O operation without waking up a
task, but to date all experiments to achieve this have proven not to be
reliable enough.
This patch makes it possible to create binds and servers in separate
namespaces. This can be used to proxy between multiple completely independent
virtual networks (with possibly overlapping IP addresses) and a
non-namespace-aware proxy implementation that supports the proxy protocol (v2).
The setup is something like this:
net1 on VLAN 1 (namespace 1) -\
net2 on VLAN 2 (namespace 2) -- haproxy ==== proxy (namespace 0)
net3 on VLAN 3 (namespace 3) -/
The proxy is configured to make server connections through haproxy and to send
the expected source/target addresses to haproxy using the proxy protocol.
The network namespace setup on the haproxy node is something like this:
= 8< =
$ cat setup.sh
ip netns add 1
ip link add link eth1 type vlan id 1
ip link set eth1.1 netns 1
ip netns exec 1 ip addr add 192.168.91.2/24 dev eth1.1
ip netns exec 1 ip link set eth1.$id up
...
= 8< =
= 8< =
$ cat haproxy.cfg
frontend clients
bind 127.0.0.1:50022 namespace 1 transparent
default_backend scb
backend server
mode tcp
server server1 192.168.122.4:2222 namespace 2 send-proxy-v2
= 8< =
A bind line creates the listener in the specified namespace, and connections
originating from that listener also have their network namespace set to
that of the listener.
A server line either forces the connection to be made in a specified
namespace or may use the namespace from the client-side connection if that
was set.
For more documentation please read the documentation included in the patch
itself.
Signed-off-by: KOVACS Tamas <ktamas@balabit.com>
Signed-off-by: Sarkozi Laszlo <laszlo.sarkozi@balabit.com>
Signed-off-by: KOVACS Krisztian <hidden@balabit.com>
Google's boringssl doesn't have OPENSSL_VERSION_TEXT, SSLeay_version()
or SSLEAY_VERSION; in fact, it doesn't have any real versioning, it's
just git-based.
So in case we build against boringssl, we can't access those values.
Instead, we just inform the user that HAProxy was built against
boringssl.
Signed-off-by: Lukas Tribus <luky-37@hotmail.com>
This patch removes all references to standard regex in haproxy. The last
remaining references are only in the regex.[ch] files.
In the file src/checks.c, the original function uses a "pmatch" array.
In fact this array is unused. This patch removes it.
This patch adds two new actions to http-request and http-response rulesets :
- replace-header : replace a whole header line, suited for headers
which might contain commas
- replace-value : replace a single header value, suited for headers
defined as lists.
The match consists in a regex, and the replacement string takes a log-format
and supports back-references.
When no static DH parameters are specified, this patch makes haproxy
use standardized (rfc 2409 / rfc 3526) DH parameters with prime lenghts
of 1024, 2048, 4096 or 8192 bits for DHE key exchange. The size of the
temporary/ephemeral DH key is computed as the minimum of the RSA/DSA server
key size and the value of a new option named tune.ssl.default-dh-param.
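For example (the value is chosen only for illustration), the ephemeral DH key
size can be capped in the global section :
    global
        tune.ssl.default-dh-param 2048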
Using the systemd daemon mode, the parent doesn't exit but waits for
its children without closing its listening sockets.
As Linux 3.9 introduced the SO_REUSEPORT option (always enabled in
haproxy if available), this causes problems with unhandled connections
after an haproxy reload while connections are still open.
The problem is that when, on reload, a new parent is started (-Ds
$oldchildspids), in haproxy.c main there's a call to start_proxies
that, without SO_REUSEPORT, should fail (as the old processes are
already listening) and so a SIGTTOU is sent to the old processes. On this
signal the old children will call (in pause_listener) a shutdown() on
the listening fd. From my tests (if I understand it correctly) this
affects the in-kernel file (so listening is really disabled for all
the processes, including the parent).
Instead, with SO_REUSEPORT, the call to start_proxies doesn't fail and
so SIGTTOU is never sent. Only SIGUSR1 is sent, and listening isn't
disabled for the parent; only the children stop listening (with
a call to close()).
So, with SO_REUSEPORT, the old children will close their listening
sockets but will wait for the current connections to finish or
time out, and, as their parent has its listening socket open, the
kernel will schedule some connections on it. These connections will
never be accepted by the parent as it's in the waitpid loop.
This fix will close all the listeners on the parent before entering the
waitpid loop.
Signed-off-by: Simone Gotti <simone.gotti@gmail.com>
Servers used to have 3 flags to store a state, now they have 4 states
instead. This avoids lots of confusion for the 4 remaining undefined
states.
The encoding from the previous to the new states can be represented
this way :
  SRV_STF_RUNNING
   |  SRV_STF_GOINGDOWN
   |  |  SRV_STF_WARMINGUP
   |  |  |
   0  x  x   SRV_ST_STOPPED
   1  0  0   SRV_ST_RUNNING
   1  0  1   SRV_ST_STARTING
   1  1  x   SRV_ST_STOPPING
Note that the case where all bits were set used to exist and was randomly
dealt with. For example, the task was not stopped, the throttle value was
still updated and reported in the stats and in the http_server_state header.
It was the same if the server was stopped by the agent or for maintenance.
It's worth noting that the internal function names are still quite confusing.
Till now, the server's state and flags were all saved as a single bit
field. It causes some difficulties because we'd like to have an enum
for the state and separate flags.
This commit starts by splitting them in two distinct fields. The first
one is srv->state (with its counter-part srv->prev_state) which are now
enums, but which still contain bits (SRV_STF_*).
The flags now lie in their own field (srv->flags).
The function srv_is_usable() was updated to use the enum as input, since
it already used to deal only with the state.
Note that currently, the maintenance mode is still in the state for
simplicity, but it must move as well.
Some consistency checks cannot be performed between frontends, backends
and peers at the moment because there is no way to check for intersections
between the sets of processes they are bound to when the number of processes
is higher than the number of bits in a word.
So first, let's limit the number of processes to the machine's word size.
This means nbproc will be limited to 32 on 32-bit machines and 64 on 64-bit
machines. This is far more than enough considering that configs rarely go
above 16 processes due to scalability and management issues, so 32 or 64
should be fine.
This way we'll ensure we can always build a mask of all the processes a
section is bound to.
Since it became possible to use log-format expressions in use_backend,
having a mandatory condition becomes annoying because configurations
are full of "if TRUE". Let's relax the check to accept no condition
like many other keywords (eg: redirect).
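A minimal sketch of what this enables, assuming hypothetical backend names
derived from the Host header and no condition at all :
    frontend fe_main
        bind :80
        use_backend bk_%[req.hdr(host),lower]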
When compiled with USE_GETADDRINFO, make sure we use getaddrinfo(3) to
perform name lookups. On default dual-stack setups this will change the
behavior of using IPv6 first. Global configuration option
'nogetaddrinfo' can be used to revert to deprecated gethostbyname(3).
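For illustration, reverting to gethostbyname(3) at run time is done with :
    global
        nogetaddrinfo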
The pattern references are stored with two identifiers: the unique_id and
the reference.
The reference identifies a file. Each file with the same name points to the
same reference. We can register one file many times. If the file is
modified, all its dependencies are also modified. The reference can be
used with map or acl.
The unique_id identifies an inline acl. The unique id is unique for each acl.
You cannot force the same id in the configuration file, because this
reports an error.
The format of the acl and map listing through the "socket" has changed
to display these new ids.
Sometimes it can be useful to generate a random value, at least
for debugging purposes, but also to take routing decisions or to
pass such a value to a backend server.
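As an illustration (the header and backend names are hypothetical), the
rand() sample fetch can be used this way :
    http-request set-header X-Random %[rand(100)]
    use_backend canary if { rand(100) lt 10 }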
The ability to globally override the default client and server cipher
suites has been requested multiple times since the introduction of SSL.
This commit adds two new keywords to the global section for this :
- ssl-default-bind-ciphers
- ssl-default-server-ciphers
It is still possible to preset them at build time by setting the macros
LISTEN_DEFAULT_CIPHERS and CONNECT_DEFAULT_CIPHERS.
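A hedged example of how these keywords may be used (the cipher string is only
an illustration, not a recommendation) :
    global
        ssl-default-bind-ciphers   ECDHE+AESGCM:ECDHE+AES:!aNULL
        ssl-default-server-ciphers ECDHE+AESGCM:ECDHE+AES:!aNULL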
The new tune.idletimer value allows one to set a different value for
idle stream detection. The default value remains set to one second.
It is possible to disable it using zero, and to change the default
value at build time using DEFAULT_IDLE_TIMER.
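For illustration, disabling the detection entirely would look like this :
    global
        tune.idletimer 0   # 0 disables idle stream detection; default is one second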
Released version 1.5-dev22 with the following main changes :
- MEDIUM: tcp-check new feature: connect
- MEDIUM: ssl: Set verify 'required' as global default for servers side.
- MINOR: ssl: handshake optim for long certificate chains.
- BUG/MINOR: pattern: pattern comparison executed twice
- BUG/MEDIUM: map: segmentation fault with the stats's socket command "set map ..."
- BUG/MEDIUM: pattern: Segfault in binary parser
- MINOR: pattern: move functions for grouping pat_match_* and pat_parse_* and add documentation.
- MINOR: standard: The parse_binary() returns the length consumed and his documentation is updated
- BUG/MINOR: payload: the patterns of the acl "req.ssl_ver" are no parsed with the good function.
- BUG/MEDIUM: pattern: "pat_parse_dotted_ver()" set bad expect_type.
- BUG/MINOR: sample: The c_str2int converter does not fail if the entry is not an integer
- BUG/MEDIUM: http/auth: Sometimes the authentication credentials can be mix between two requests
- MINOR: doc: Bad cli function name.
- MINOR: http: smp_fetch_capture_header_* fetch captured headers
- BUILD: last release inadvertently prepended a "+" in front of the date
- BUG/MEDIUM: stream-int: fix the keep-alive idle connection handler
- BUG/MEDIUM: backend: do not re-initialize the connection's context upon reuse
- BUG: Revert "OPTIM/MEDIUM: epoll: fuse active events into polled ones during polling changes"
- BUG/MINOR: checks: successful check completion must not re-enable MAINT servers
- MINOR: http: try to stick to same server after status 401/407
- BUG/MINOR: http: always disable compression on HTTP/1.0
- OPTIM: poll: restore polling after a poll/stop/want sequence
- OPTIM: http: don't stop polling for read on the client side after a request
- BUG/MEDIUM: checks: unchecked servers could not be enabled anymore
- BUG/MEDIUM: stats: the web interface must check the tracked servers before enabling
- BUG/MINOR: channel: CHN_INFINITE_FORWARD must be unsigned
- BUG/MINOR: stream-int: do not clear the owner upon unregister
- MEDIUM: stats: add support for HTTP keep-alive on the stats page
- BUG/MEDIUM: stats: fix HTTP/1.0 breakage introduced in previous patch
- Revert "MEDIUM: stats: add support for HTTP keep-alive on the stats page"
- MAJOR: channel: add a new flag CF_WAKE_WRITE to notify the task of writes
- OPTIM: session: set the READ_DONTWAIT flag when connecting
- BUG/MINOR: http: don't clear the SI_FL_DONT_WAKE flag between requests
- MINOR: session: factor out the connect time measurement
- MEDIUM: session: prepare to support earlier transitions to the established state
- MEDIUM: stream-int: make si_connect() return an established state when possible
- MINOR: checks: use an inline function for health_adjust()
- OPTIM: session: put unlikely() around the freewheeling code
- MEDIUM: config: report a warning when multiple servers have the same name
- BUG: Revert "OPTIM: poll: restore polling after a poll/stop/want sequence"
- BUILD/MINOR: listener: remove a glibc warning on accept4()
- BUG/MAJOR: connection: fix mismatch between rcv_buf's API and usage
- BUILD: listener: fix recent accept4() again
- BUG/MAJOR: ssl: fix breakage caused by recent fix abf08d9
- BUG/MEDIUM: polling: ensure we update FD status when there's no more activity
- MEDIUM: listener: fix polling management in the accept loop
- MINOR: protocol: improve the proto->drain() API
- MINOR: connection: add a new conn_drain() function
- MEDIUM: tcp: report in tcp_drain() that lingering is already disabled on close
- MEDIUM: connection: update callers of ctrl->drain() to use conn_drain()
- MINOR: connection: add more error codes to report connection errors
- MEDIUM: tcp: report connection error at the connection level
- MEDIUM: checks: make use of chk_report_conn_err() for connection errors
- BUG/MEDIUM: unique_id: HTTP request counter is not stable
- DOC: fix misleading information about SIGQUIT
- BUG/MAJOR: fix freezes during compression
- BUG/MEDIUM: stream-interface: don't wake the task up before end of transfer
- BUILD: fix VERDATE exclusion regex
- CLEANUP: polling: rename "spec_e" to "state"
- DOC: add a diagram showing polling state transitions
- REORG: polling: rename "spec_e" to "state" and "spec_p" to "cache"
- REORG: polling: rename "fd_spec" to "fd_cache"
- REORG: polling: rename the cache allocation functions
- REORG: polling: rename "fd_process_spec_events()" to "fd_process_cached_events()"
- MAJOR: polling: rework the whole polling system
- MAJOR: connection: remove the CO_FL_WAIT_{RD,WR} flags
- MEDIUM: connection: remove conn_{data,sock}_poll_{recv,send}
- MEDIUM: connection: add check for readiness in I/O handlers
- MEDIUM: stream-interface: the polling flags must always be updated in chk_snd_conn
- MINOR: stream-interface: no need to call fd_stop_both() on error
- MEDIUM: connection: no need to recheck FD state
- CLEANUP: connection: use conn_ctrl_ready() instead of checking the flag
- CLEANUP: connection: use conn_xprt_ready() instead of checking the flag
- CLEANUP: connection: fix comments in connection.h to reflect new behaviour.
- OPTIM: raw-sock: don't speculate after a short read if polling is enabled
- MEDIUM: polling: centralize polled events processing
- MINOR: polling: create function fd_compute_new_polled_status()
- MINOR: cli: add more information to the "show info" output
- MEDIUM: listener: add support for limiting the session rate in addition to the connection rate
- MEDIUM: listener: apply a limit on the session rate submitted to SSL
- REORG: stats: move the stats socket states to dumpstats.c
- MINOR: cli: add the new "show pools" command
- BUG/MEDIUM: counters: flush content counters after each request
- BUG/MEDIUM: counters: fix stick-table entry leak when using track-sc2 in connection
- MINOR: tools: add very basic support for composite pointers
- MEDIUM: counters: stop relying on session flags at all
- BUG/MINOR: cli: fix missing break in command line parser
- BUG/MINOR: config: correctly report when log-format headers require HTTP mode
- MAJOR: http: update connection mode configuration
- MEDIUM: http: make keep-alive + httpclose be passive mode
- MAJOR: http: switch to keep-alive mode by default
- BUG/MEDIUM: http: fix regression caused by recent switch to keep-alive by default
- BUG/MEDIUM: listener: improve detection of non-working accept4()
- BUILD: listener: add fcntl.h and unistd.h
- BUG/MINOR: raw_sock: correctly set the MSG_MORE flag
If no CA file is specified on a server line, the config parser will show an error.
Adds a cmdline option '-dV' to re-set verify 'none' as the global default on the
server side (previous behavior).
Also adds an 'ssl-server-verify' global statement to set the global default to
'none' or 'required'.
WARNING: this changes the default verify mode from "none" to "required" on
the server side, and it *will* break insecure setups.
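For illustration, reverting to the previous behaviour can be done either with
'-dV' on the command line or globally :
    global
        ssl-server-verify none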
It's becoming increasingly difficult to ignore unwanted function returns in
debug code with gcc. Now even when you try to work around it, it suggests a
way to write your code differently. For example :
src/frontend.c:187:65: warning: if statement has empty body [-Wempty-body]
if (write(1, trash.str, trash.len) < 0) /* shut gcc warning */;
^
src/frontend.c:187:65: note: put the semicolon on a separate line to silence this warning
1 warning generated.
This is totally unacceptable, this code already had to be written this way
to shut it up in earlier versions. And now it comments on the form? What's the
purpose of the C language if you can't write the code that does what you want
anymore?
Emeric proposed to just keep a global variable to drain such useless results
so that gcc stops complaining all the time it believes people who write code
are monkeys. The solution is acceptable because the useless assignment is done
only in debug code so it will not impact performance. This patch implements
this, until gcc becomes even "smarter" to detect that we tried to cheat.
We handle "http-request redirect" with a log-format string now, but we
leave "redirect" unaffected.
Note that the control of the special "/" case is moved from the runtime
execution to the configuration parsing. If the format rule list is
empty, the build_logline() function does nothing.
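A hedged sketch of the kind of rule this makes possible (the exact rule is
illustrative, not from the patch) :
    http-request redirect location https://%[hdr(host)]%[path] if !{ ssl_fc }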
Allow an auxiliary agent check to be run independently of the
regular health check. This is enabled by the agent-check
server setting.
The agent-port, which specifies the TCP port to use for the agent's
connections, is required.
The agent-inter, which specifies the interval between agent checks and
timeout of agent checks, is optional. If not set the value for regular
checks is used.
e.g.
server web1_1 127.0.0.1:80 check agent-port 10000
If either the health or agent check determines that a server is down
then it is marked as being down, otherwise it is marked as being up.
An agent health check is performed by opening a TCP socket and reading an
ASCII string. The string should have one of the following forms:
* An ASCII representation of a positive integer percentage.
e.g. "75%"
Values in this format will set the weight proportional to the initial
weight of a server as configured when haproxy starts.
* The string "drain".
This will cause the weight of a server to be set to 0, and thus it
will not accept any new connections other than those that are
accepted via persistence.
* The string "down", optionally followed by a description string.
Mark the server as down and log the description string as the reason.
* The string "stopped", optionally followed by a description string.
This currently has the same behaviour as "down".
* The string "fail", optionally followed by a description string.
This currently has the same behaviour as "down".
Signed-off-by: Simon Horman <horms@verge.net.au>
Both static-rr and hash with map-based type call init_server_map() to allocate
the server map, so the server map should be freed during cleanup if one of
the above load balancing algorithms is used.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
[wt: removed the unneeded "if" before the free]
Both fdinfo and fdtab have memory allocated in init() while haproxy is starting,
but only fdtab is freed in deinit(); fdinfo should also be freed.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
FreeBSD uses (IPPROTO_IP, IP_BINDANY) and (IPPROTO_IPV6, IPV6_BINDANY)
to enable transparent proxy on a socket.
This patch adds support for the relevant setsockopt() calls.
This patch does not change the logic of the code, it only changes the
way OS-specific defines are tested.
At the moment the transparent proxy code heavily depends on Linux-specific
defines. This first patch introduces a new define "CONFIG_HAP_TRANSPARENT"
which is set every time the defines used by transparent proxy are present.
This also means that with an up-to-date libc, it should not be necessary
anymore to force CONFIG_HAP_LINUX_TPROXY during the build, as the flags
will automatically be detected.
The CTTPROXY flags still remain separate because this older API doesn't
work the same way.
A new line has been added in the version output for haproxy -vv to indicate
what transparent proxy support is available.
It happens that openssl's API can differ between versions, causing some
serious trouble if the version used at runtime is not the same as used
for building.
Now we report the two versions separately along with a warning if the
version differs (except the patch version).
Since ea68d36 we show whether JIT is enabled, based on the USE-flag
(USE_PCRE_JIT). This is too naive; libpcre may be built without JIT
support (which is the default).
Fix this by calling pcre_config(), which has the accurate information
we are looking for.
Example of a libpcre without JIT support after this patch:
> ./haproxy -vv | grep PCRE
> OPTIONS = USE_STATIC_PCRE=1 USE_PCRE_JIT=1
> Built with PCRE version : 8.32 2012-11-30
> PCRE library supports JIT : no (libpcre build without JIT?)
Commit a4312fa2 merged into dev18 improved log-format management by
processing "log-format" and "unique-id-format" where they were declared,
so that the faulty args could be reported with their correct line numbers.
Unfortunately, the log-format parser considers the proxy mode (TCP/HTTP)
and now if the directive is set before the "mode" statement, it can be
rejected and report warnings.
So we really need to parse these directives at the end of a section at
least. Right now we do not have an "end of section" event, so we need
to store the file name and line number for each of these directives,
and take care of them at the end.
One of the benefits is that now the line numbers can be inherited from
the line passing "option httplog" even if it's in a defaults section.
Future improvements should be performed to report line numbers in every
log-format processed by the parser.
haproxy -vv shows build information about USE flags and lib versions.
This patch introduces information about PCRE and the new JIT feature.
It also makes USE_PCRE_JIT=1 appear in the haproxy -vv "OPTIONS".
This is useful since, with the introduction of JIT, we will likely see
libpcre-related issues.
Released version 1.5-dev18 with the following main changes :
- DOCS: Add explanation of intermediate certs to crt paramater
- DOC: typo and minor fixes in compression paragraph
- MINOR: config: http-request configuration error message misses new keywords
- DOC: minor typo fix in documentation
- BUG/MEDIUM: ssl: ECDHE ciphers not usable without named curve configured.
- MEDIUM: ssl: add bind-option "strict-sni"
- MEDIUM: ssl: add mapping from SNI to cert file using "crt-list"
- MEDIUM: regex: Use PCRE JIT in acl
- DOC: simplify bind option "interface" explanation
- DOC: tfo: bump required kernel to linux-3.7
- BUILD: add explicit support for TFO with USE_TFO
- MEDIUM: New cli option -Ds for systemd compatibility
- MEDIUM: add haproxy-systemd-wrapper
- MEDIUM: add systemd service
- BUG/MEDIUM: systemd-wrapper: don't leak zombie processes
- BUG/MEDIUM: remove supplementary groups when changing gid
- BUG/MEDIUM: config: fix parser crash with bad bind or server address
- BUG/MINOR: Correct logic in cut_crlf()
- CLEANUP: checks: Make desc argument to set_server_check_status const
- CLEANUP: dumpstats: Make cli_release_handler() static
- MEDIUM: server: Break out set weight processing code
- MEDIUM: server: Allow relative weights greater than 100%
- MEDIUM: server: Tighten up parsing of weight string
- MEDIUM: checks: Add agent health check
- BUG/MEDIUM: ssl: openssl 0.9.8 doesn't open /dev/random before chroot
- BUG/MINOR: time: frequency counters are not totally accurate
- BUG/MINOR: http: don't process abortonclose when request was sent
- BUG/MEDIUM: stream_interface: don't close outgoing connections on shutw()
- BUG/MEDIUM: checks: ignore late resets after valid responses
- DOC: fix bogus recommendation on usage of gpc0 counter
- BUG/MINOR: http-compression: lookup Cache-Control in the response, not the request
- MINOR: signal: don't block SIGPROF by default
- OPTIM: epoll: make use of EPOLLRDHUP
- OPTIM: splice: detect shutdowns and avoid splice() == 0
- OPTIM: splice: assume by default that splice is working correctly
- BUG/MINOR: log: temporary fix for lost SSL info in some situations
- BUG/MEDIUM: peers: only the last peers section was used by tables
- BUG/MEDIUM: config: verbosely reject peers sections with multiple local peers
- BUG/MINOR: epoll: use a fix maxevents argument in epoll_wait()
- BUG/MINOR: config: fix improper check for failed memory alloc in ACL parser
- BUG/MINOR: config: free peer's address when exiting upon parsing error
- BUG/MINOR: config: check the proper variable when parsing log minlvl
- BUG/MEDIUM: checks: ensure the health_status is always within bounds
- BUG/MINOR: cli: show sess should always validate s->listener
- BUG/MINOR: log: improper NULL return check on utoa_pad()
- CLEANUP: http: remove a useless null check
- CLEANUP: tcp/unix: remove useless NULL check in {tcp,unix}_bind_listener()
- BUG/MEDIUM: signal: signal handler does not properly check for signal bounds
- BUG/MEDIUM: tools: off-by-one in quote_arg()
- BUG/MEDIUM: uri_auth: missing NULL check and memory leak on memory shortage
- BUG/MINOR: unix: remove the 'level' field from the ux struct
- CLEANUP: http: don't try to deinitialize http compression if it fails before init
- CLEANUP: config: slowstart is never negative
- CLEANUP: config: maxcompcpuusage is never negative
- BUG/MEDIUM: log: emit '-' for empty fields again
- BUG/MEDIUM: checks: fix a race condition between checks and observe layer7
- BUILD: fix a warning emitted by isblank() on non-c99 compilers
- BUILD: improve the makefile's support for libpcre
- MEDIUM: halog: add support for counting per source address (-ic)
- MEDIUM: tools: make str2sa_range support all address syntaxes
- MEDIUM: config: make use of str2sa_range() instead of str2sa()
- MEDIUM: config: use str2sa_range() to parse server addresses
- MEDIUM: config: use str2sa_range() to parse peers addresses
- MINOR: tests: add a config file to ease address parsing tests.
- MINOR: ssl: add a global tunable for the max SSL/TLS record size
- BUG/MINOR: syscall: fix NR_accept4 system call on sparc/linux
- BUILD/MINOR: syscall: add definition of NR_accept4 for ARM
- MINOR: config: report missing peers section name
- BUG/MEDIUM: tools: fix bad character handling in str2sa_range()
- BUG/MEDIUM: stats: never apply "unix-bind prefix" to the global stats socket
- MINOR: tools: prepare str2sa_range() to return an error message
- BUG/MEDIUM: checks: don't call connect() on unsupported address families
- MINOR: tools: prepare str2sa_range() to accept a prefix
- MEDIUM: tools: make str2sa_range() parse unix addresses too
- MEDIUM: config: make str2listener() use str2sa_range() to parse unix addresses
- MEDIUM: config: use a single str2sa_range() call to parse bind addresses
- MEDIUM: config: use str2sa_range() to parse log addresses
- CLEANUP: tools: remove str2sun() which is not used anymore.
- MEDIUM: config: add complete support for str2sa_range() in dispatch
- MEDIUM: config: add complete support for str2sa_range() in server addr
- MEDIUM: config: add complete support for str2sa_range() in 'server'
- MEDIUM: config: add complete support for str2sa_range() in 'peer'
- MEDIUM: config: add complete support for str2sa_range() in 'source' and 'usesrc'
- CLEANUP: minor cleanup in str2sa_range() and str2ip()
- CLEANUP: config: do not use multiple errmsg at once
- MEDIUM: tools: support specifying explicit address families in str2sa_range()
- MAJOR: listener: support inheriting a listening fd from the parent
- MAJOR: tools: support environment variables in addresses
- BUG/MEDIUM: http: add-header should not emit "-" for empty fields
- BUG/MEDIUM: config: ACL compatibility check on "redirect" was wrong
- BUG/MEDIUM: http: fix another issue caused by http-send-name-header
- DOC: mention the new HTTP 307 and 308 redirect statues
- MEDIUM: poll: do not use FD_* macros anymore
- BUG/MAJOR: ev_select: disable the select() poller if maxsock > FD_SETSIZE
- BUG/MINOR: acl: ssl_fc_{alg,use}_keysize must parse integers, not strings
- BUG/MINOR: acl: ssl_c_used, ssl_fc{,_has_crt,_has_sni} take no pattern
- BUILD: fix usual isdigit() warning on solaris
- BUG/MEDIUM: tools: vsnprintf() is not always reliable on Solaris
- OPTIM: buffer: remove one jump in buffer_count()
- OPTIM: http: improve branching in chunk size parser
- OPTIM: http: optimize the response forward state machine
- BUILD: enable poll() by default in the makefile
- BUILD: add explicit support for Mac OS/X
- BUG/MAJOR: http: use a static storage for sample fetch context
- BUG/MEDIUM: ssl: improve error processing and reporting in ssl_sock_load_cert_list_file()
- BUG/MAJOR: http: fix regression introduced by commit a890d072
- BUG/MAJOR: http: fix regression introduced by commit d655ffe
- BUG/CRITICAL: using HTTP information in tcp-request content may crash the process
- MEDIUM: acl: remove flag ACL_MAY_LOOKUP which is improperly used
- MEDIUM: samples: use new flags to describe compatibility between fetches and their usages
- MINOR: log: indicate it when some unreliable sample fetches are logged
- MEDIUM: samples: move payload-based fetches and ACLs to their own file
- MINOR: backend: rename sample fetch functions and declare the sample keywords
- MINOR: frontend: rename sample fetch functions and declare the sample keywords
- MINOR: listener: rename sample fetch functions and declare the sample keywords
- MEDIUM: http: unify acl and sample fetch functions
- MINOR: session: rename sample fetch functions and declare the sample keywords
- MAJOR: acl: make all ACLs reference the fetch function via a sample.
- MAJOR: acl: remove the arg_mask from the ACL definition and use the sample fetch's
- MAJOR: acl: remove fetch argument validation from the ACL struct
- MINOR: http: add new direction-explicit sample fetches for headers and cookies
- MINOR: payload: add new direction-explicit sample fetches
- CLEANUP: acl: remove ACL hooks which were never used
- MEDIUM: proxy: remove acl_requires and just keep a flag "http_needed"
- MINOR: sample: provide a function to report the name of a sample check point
- MAJOR: acl: convert all ACL requires to SMP use+val instead of ->requires
- CLEANUP: acl: remove unused references to ACL_USE_*
- MINOR: http: replace acl_parse_ver with acl_parse_str
- MEDIUM: acl: move the ->parse, ->match and ->smp fields to acl_expr
- MAJOR: acl: add option -m to change the pattern matching method
- MINOR: acl: remove the use_count in acl keywords
- MEDIUM: acl: have a pointer to the keyword name in acl_expr
- MEDIUM: acl: support using sample fetches directly in ACLs
- MEDIUM: http: remove val_usr() to validate user_lists
- MAJOR: sample: maintain a per-proxy list of the fetch args to resolve
- MINOR: ssl: add support for the "alpn" bind keyword
- MINOR: http: status code 303 is HTTP/1.1 only
- MEDIUM: http: implement redirect 307 and 308
- MINOR: http: status 301 should not be marked non-cacheable
ACL fetch functions used to directly reference a fetch function. Now
that all ACL fetches have their sample fetches equivalent, we can make
ACLs reference a sample fetch keyword instead.
In order to simplify the code, a sample keyword name may be NULL if it
is the same as the ACL's, which is the most common case.
A minor change appeared: http_auth always expects one argument though
the ACL allowed it to be missing and reported it as such afterwards, so
fix the ACL to match this. This is not really a bug.
Some recent glibc updates have added controls on FD_SET/FD_CLR/FD_ISSET
that crash the program if it tries to use a file descriptor larger than
FD_SETSIZE.
For this reason, we now control the compatibility between global.maxsock
and FD_SETSIZE, and refuse to use select() if too many FDs are
expected to be used. Note that on Solaris, FD_SETSIZE is already forced
to 65536, and that FreeBSD and OpenBSD allow it to be redefined, though
this is not needed thanks to kqueue which is much more efficient.
In practice, since poll() is enabled on all targets, it should not cause
any problem, unless it is explicitly disabled.
This change must be backported to 1.4 because the crashes caused by glibc
have already been reported on this version.
This patch adds a new option "-Ds" which is exactly like "-D", but instead of
forking n times to get n jobs running and then exiting, prefers to wait for all the
children it just created. With this done, haproxy becomes more systemd-compliant,
without changing anything for other systems.
Signed-off-by: Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
Without it, haproxy will retain the group membership of root, which may
give more access than intended to the process. For example, haproxy would
still be in the wheel group on Fedora 18, as seen with :
# haproxy -f /etc/haproxy/haproxy.cfg
# ps a -o pid,user,group,command | grep hapr
3545 haproxy haproxy haproxy -f /etc/haproxy/haproxy.cfg
4356 root root grep --color=auto hapr
# grep Group /proc/3545/status
Groups: 0 1 2 3 4 6 10
# getent group wheel
wheel:x:10:root,misc
[WT: The issue has been investigated by an independent security research team
and was found not to allow security exploitation by itself.
Additionally, dropping groups is not allowed for unprivileged users,
though this mode of deployment is quite common. Thus a warning is
emitted in this case to inform the user. The fix could be backported
into all supported versions as the issue has always been there. ]
At the moment, we need trash chunks almost everywhere and the only
correctly implemented one is in the sample code. Let's move this to
the chunks so that all other places can use this allocator.
Additionally, the get_trash_chunk() function now really returns two
different chunks. Previously it used to always overwrite the same
chunk and point it to a different buffer, which was a bit tricky
because it's not obvious that two consecutive results do alias each
other.
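A minimal sketch of the idea (simplified; the real allocation and sizing code
differs) :
  static struct chunk trash_chunk1, trash_chunk2;

  struct chunk *get_trash_chunk(void)
  {
      static int flip;
      /* alternate between two chunks so that two consecutive calls
       * never return aliased storage */
      struct chunk *c = (flip++ & 1) ? &trash_chunk2 : &trash_chunk1;
      c->len = 0;
      return c;
  }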
global.tune.maxaccept was used for all listeners. This becomes really not
convenient when some listeners are bound to a single process and other ones
are bound to many processes.
Now we change the principle : we count the number of processes a listener
is bound to, and apply the maxaccept either entirely if there is a single
process, or divided by twice the number of processes in order to maintain
fairness.
The default limit has also been increased from 32 to 64 as it appeared that
on small machines, 32 was too low to achieve high connection rates.
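To illustrate with the new default of 64 : a listener bound to a single process
still accepts up to 64 connections per wakeup, while a listener shared by 4
processes accepts at most 64 / (2 * 4) = 8 per wakeup on each of them, which
keeps the total accept pressure roughly fair between processes.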
The new "cpu-map" directive allows one to assign the CPU sets that
a process is allowed to bind to. This is useful in combination with
the "nbproc" and "bind-process" directives.
The support is implicit on Linux 2.6.28 and above.
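For illustration, a configuration along these lines (CPU numbers are arbitrary
examples) pins each of four processes to its own CPU :
  global
      nbproc 4
      cpu-map 1 0
      cpu-map 2 1
      cpu-map 3 2
      cpu-map 4 3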
Having nbproc preinitialized to zero is really annoying as it prevents
some checks from being correctly performed. Also the check to prevent
nbproc from being redefined is totally useless, so let's preset it to
1 and remove the test.
This is done by passing the default value to SSLCACHESIZE in sessions.
Users can change this value with tune.sslcachesize.
By default, it is set to 20000 sessions, matching OpenSSL's internal cache size.
Currently, a session entry size is between 592 and 616 bytes depending on the arch.
Now that all pollers make use of speculative I/O, there is no point
having two epoll implementations, so replace epoll with the sepoll code
and remove sepoll which has just become the standard epoll method.
Compression algorithms are not always supported depending on build options.
"haproxy -vv" now reports if zlib is supported and lists compression algorithms
also supported.
This patch adds input and output rate calculation to the HTTP compression
feature.
Compression can be limited with a maximum rate value in kilobytes per
second. The rate is set with the global 'maxcomprate' option. You can
change this value dynamically with 'set rate-limit http-compression
global' on the UNIX socket.
With the global maxzlibmem option, you are able to control the maximum
amount of RAM usable for HTTP compression.
A test is done before each zlib allocation; if there is not enough memory
available, the test fails and so does the zlib initialization, so data
won't be compressed.
The window size and the memlevel of the zlib are now configurable using
global options tune.zlib.memlevel and tune.zlib.windowsize.
It affects the memory consumption of the zlib.
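A hypothetical global section combining the tunables described above (all
values are placeholders, not recommendations) :
  global
      maxcomprate 1024         # limit compression input rate to 1024 KB/s
      maxzlibmem 64            # cap the memory zlib may use
      tune.zlib.memlevel 8
      tune.zlib.windowsize 15
The rate can then be adjusted at run time with 'set rate-limit
http-compression global 2048' on the UNIX socket.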
Keys are copied from samples to stick_table_key. If a key is larger
than the stick_table_key, we have an overflow. In practice it does not
happen because it requires :
1) a configuration with tune.bufsize larger than BUFSIZE (common)
2) a stick-table configured with keys strictly larger than buffers
3) extraction of data larger than BUFSIZE (eg: using payload())
Points 2 and 3 don't make any sense for a real world configuration. That
said, the issue needs to be fixed. The solution consists in allocating the
key with the same size as the global buffer, just like the samples. This
fixes the issue.
Sample conversions rely on two alternative buffers which were previously
allocated as static bufs of size BUFSIZE. Now they're initialized to the
global buffer size. It was the same for HTTP authentication. Note that it
seems that none of them was prone to any mistake when dealing with the
buffer size, but better stay on the safe side by maintaining the old
assumption that a trash buffer is always "large enough".
The trash is used everywhere to store the results of temporary strings
built out of s(n)printf, or as a storage for a chunk when chunks are
needed.
Using global.tune.bufsize is not the most convenient thing either.
So let's replace trash with a chunk and directly use it as such. We can
then use trash.size as the natural way to get its size, and get rid of
many intermediary chunks that were previously used.
The patch is huge because it touches many areas but it makes the code
a lot more clear and even outlines places where trash was used without
being that obvious.
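In practice this means code which used to pair the trash with
global.tune.bufsize can now use the chunk directly, along the lines of this
simplified sketch (the format string is just an example) :
  /* before: len = snprintf(trash, global.tune.bufsize, "%s", msg); */
  /* after:  the trash chunk knows its own size */
  trash.len = snprintf(trash.str, trash.size, "%s", msg);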
We will need to be able to switch server connections on a session and
to keep idle connections. In order to achieve this, the preliminary
requirement is that the connections can survive the session and be
detached from them.
Right now they're still allocated at exactly the same place, so when
there is a session, there are always 2 connections. We could soon
improve on this by allocating the outgoing connection only during a
connect().
This current patch touches a lot of code and intentionally does not
change any functionality. Performance tests show no regression (even
a very minor improvement). The doc has not yet been updated.
From the beginning it has been said that -D must always be used on the
command line from startup scripts so that haproxy does not accidentally
stay in foreground when loaded from init script... Except that this has
not been true for a long time now.
The fix is easy and must be backported to 1.4 too which is affected.
ACL and sample fetches use args list and it is really not convenient to
check for null args everywhere. Now for empty args we pass a constant
list of end of lists. It will allow us to remove many useless checks.
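Conceptually this boils down to a shared terminator-only list, something like
the following illustration (names shown only to convey the idea) :
  /* a constant list containing only the end-of-list marker, passed
   * instead of NULL so callers never need a special case */
  struct arg empty_arg_list[] = { { .type = ARGT_STOP } };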
With this commit, we now separate the channel from the buffer. This will
allow us to replace buffers on the fly without touching the channel. Since
nobody is supposed to keep a reference to a buffer anymore, doing so is not
a problem and will also permit some copy-less data manipulation.
Interestingly, these changes have shown a 2% performance increase on some
workloads, probably due to a better cache placement of data.
These ones are used to set the default cipher suites on "bind" lines and
"server" lines respectively, instead of using OpenSSL's defaults. These
are probably mainly useful for distro packagers.
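Assuming these correspond to the ssl-default-bind-ciphers and
ssl-default-server-ciphers global keywords, a distribution could ship
something like the following (cipher string purely illustrative) :
  global
      ssl-default-bind-ciphers   ECDHE+AESGCM:!aNULL:!MD5
      ssl-default-server-ciphers ECDHE+AESGCM:!aNULL:!MD5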
Till now the request was made in the trash and sent to the network at
once, and the response was read into a preallocated char[]. Now we
allocate a full buffer for both the request and the response, and make
use of it.
Some of the operations will probably be replaced later with buffer macros
but the point was to ensure we could migrate to use the data layers soon.
One nice improvement caused by this change is that requests are now formed
at the beginning of the check and may safely be sent in multiple chunks if
needed.
The health checks in the servers are becoming a real mess, move them
into their own subsection. We'll soon need to have a struct buffer to
replace the char * as well as check-specific protocol and transport
layers.
Each proxy contains a reference to the original config file and line
number where it was declared. The pointer used is just a reference to
the one passed to the function instead of being duplicated. The effect
is that it is not valid anymore at the end of the parsing and that all
proxies will be enumerated as coming from the same file on some late
configuration errors. This may happen for example when reporting SSL
certificate issues.
By copying using strdup(), we avoid this issue.
1.4 has the same issue, though no report of the proxy file name is done
out of the config section. Anyway a backport is recommended to ease
post-mortem analysis.
Add keyword 'verify' on bind:
'verify none': authentication disabled (default)
'verify optional': accept connection without certificate
and process a verify if the client sent a certificate
'verify required': reject connection without certificate
and process a verify if the client sends a certificate
Add keyword 'cafile' on bind:
'cafile <path>' path to a client CA file used to verify.
'crlfile <path>' path to a client CRL file used to verify.
Unix permissions are per-bind configuration line and not per listener,
so let's concretize this in the way the config is stored. This avoids
some unneeded loops to set permissions on all listeners.
The access level is not part of the unix perms so it has been moved
away. Once we can use str2listener() to set all listener addresses,
we'll have a bind keyword parser for this one.
Navigating through listeners was very inconvenient and error-prone. Not to
mention that listeners were linked in reverse order and reverted afterwards.
In order to definitely get rid of these issues, we now do the following :
- frontends have a dual-linked list of bind_conf
- frontends have a dual-linked list of listeners
- bind_conf have a dual-linked list of listeners
- listeners have a pointer to their bind_conf
This way we can now navigate from anywhere to anywhere and always find the
proper bind_conf for a given listener, as well as find the list of listeners
for a current bind_conf.
Some settings need to be merged per-bind config line and are not necessarily
SSL-specific. It becomes quite inconvenient to have this ssl_conf SSL-specific,
so let's replace it with something more generic.
Since it's common enough to discover that some config options are not
supported due to some openssl version or build options, we report the
relevant ones in "haproxy -vv".
A side effect of this change is that the "ssl" keyword on "bind" lines is now
just a boolean and that "crt" is needed to designate certificate files or
directories.
Note that much refcounting was needed to have the free() work correctly due to
the number of cert aliases which can make a context be shared by multiple names.
SSL config holds many parameters which are per bind line and not per
listener. Let's use a per-bind line config instead of having it
replicated for each listener.
At the moment we only do this for the SSL part but this should probably
evolve to handle more of the configuration and maybe even the state per
bind line.
SSL connections take a huge amount of memory, and unfortunately openssl
does not check malloc() returns and easily segfaults when too many
connections are used.
The only solution against this is to provide a global maxsslconn setting
to reject SSL connections above the limit in order to avoid reaching
unsafe limits.
Thomas Heil reported that when using nbproc > 1, his pidfiles were
regularly truncated. The issue could be tracked down to the presence
of a call to lseek(pidfile, 0, SEEK_SET) just before the close() call
in the children, resulting in the file being truncated by the children
while the parent was feeding it. This unexpected lseek() is transparently
performed by fclose().
Since there is no way to have the file automatically closed during the
fork, the only solution is to bypass the libc and use open/write/close
instead of fprintf() and fclose().
The issue was observed on eglibc 2.15.
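The resulting pattern is roughly the following (simplified sketch; error
handling and the multi-pid case are omitted, and pidfile_name stands for the
configured pid file path) :
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  /* write the pid file with raw syscalls so that no hidden lseek()
   * is performed on the shared descriptor by the children */
  char buf[32];
  int len = snprintf(buf, sizeof(buf), "%d\n", (int)getpid());
  int fd  = open(pidfile_name, O_CREAT | O_WRONLY | O_TRUNC, 0644);
  if (fd >= 0) {
      write(fd, buf, len);
      close(fd);
  }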
This is a massive rename of most functions which should make use of the
word "channel" instead of the word "buffer" in their names.
It concerns the following ones (new names) :
unsigned long long channel_forward(struct channel *buf, unsigned long long bytes);
static inline void channel_init(struct channel *buf)
static inline int channel_input_closed(struct channel *buf)
static inline int channel_output_closed(struct channel *buf)
static inline void channel_check_timeouts(struct channel *b)
static inline void channel_erase(struct channel *buf)
static inline void channel_shutr_now(struct channel *buf)
static inline void channel_shutw_now(struct channel *buf)
static inline void channel_abort(struct channel *buf)
static inline void channel_stop_hijacker(struct channel *buf)
static inline void channel_auto_connect(struct channel *buf)
static inline void channel_dont_connect(struct channel *buf)
static inline void channel_auto_close(struct channel *buf)
static inline void channel_dont_close(struct channel *buf)
static inline void channel_auto_read(struct channel *buf)
static inline void channel_dont_read(struct channel *buf)
Some functions provided by channel.[ch] have kept their "buffer" name because
they are really designed to act on the buffer according to some information
gathered from the channel. They have been moved together to the same place in
the file for better readability but they were not changed at all.
The "buffer" memory pool was also renamed "channel".
The "raw_sock" prefix will be more convenient for naming functions as
it will be prefixed with the data layer and suffixed with the data
direction. So let's rename the files now to avoid any further confusion.
The #include directive was also removed from a number of files which do
not need it anymore.
In an attempt to get rid of fdtab[].state, and to move the relevant
parts to the connection struct, we remove the FD_STCLOSE state which
can easily be deduced from the <owner> pointer as there is a 1:1 match.
When passing arguments to ACLs and samples, some types are stored as
strings then resolved later after config parsing is done. Upon exit,
the arguments need to be freed only if the string was not resolved
yet. At the moment we can encounter double free during deinit()
because some arguments (eg: userlists) are freed once as their own
type and once as a string.
The solution consists in adding an "unresolved" flag to the args to
say whether the value is still held in the <str> part or is final.
This could be debugged thanks to a useful bug report from Sander Klein.
Option httplog needs to be checked only once the proxy has been validated,
so that its final mode (tcp/http) can be used. Also we need to check for
httplog before checking the log format, so that we can report a warning
about this specific option and not about the format it implies.
Before it was possible to resize the buffers using global.tune.bufsize,
the trash has always been the size of a buffer by design. Unfortunately,
the recent buffer sizing at runtime forgot to adjust the trash, resulting
in it being too short for content rewriting if buffers were enlarged from
the default value.
The bug was encountered in 1.4 so the fix must be backported there.
We'll soon have an SSL socket layer, and in order to ease the difference
between the two, we use the name "sock_raw" to designate the one which
directly talks to the sockets without any conversion.
From time to time, some bugs are discovered that are caused by non-initialized
memory areas. It happens that most platforms return a zero-filled area upon
first malloc() thus hiding potential bugs. This patch also replaces malloc()
in pools with calloc() to ensure that all platforms exhibit the same behaviour
upon startup. In order to catch these bugs more easily, add a -dM command line
flag to enable memory poisoning. Optionally, passing -dM<byte> forces the
poisoning byte to <byte>.
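A rough sketch of what the allocation path looks like with this change
(hand-written illustration, not the actual haproxy code; mem_poison_byte is
an assumed name for the value set by -dM) :
  #include <stdlib.h>
  #include <string.h>

  static int mem_poison_byte = -1;  /* -1: disabled, otherwise the -dM<byte> value */

  static void *pool_alloc_area(size_t size)
  {
      if (mem_poison_byte >= 0) {
          void *ptr = malloc(size);
          if (ptr)
              memset(ptr, mem_poison_byte, size); /* make uninitialized reads visible */
          return ptr;
      }
      return calloc(1, size);  /* same zero-filled behaviour on every platform */
  }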
This is mainly a massive renaming in the code to get it in line with the
calling convention. Next patch will rename a few files to complete this
operation.
arg_i was almost unused, and since we migrated to use struct arg everywhere,
the rare cases where arg_i was needed could be replaced by switching to
arg->type = ARGT_STOP.
There were a few unchecked write() calls in the debug code that cause
gcc 4.x to emit warnings on recent libc. We don't want to check them
as we can't make anything from the result, let's simply surround them
with an empty if statement.
Note that one of the warnings was for chdir("/") which normally cannot
fail since it follows a successful chroot (which means the perms are
necessarily there). Anyway let's move the call up to protect it too.
%Fi: Frontend IP
%Fp: Frontend Port
%Si: Server IP
%Sp: Server Port
%Ts: Timestamp
%rt: HTTP request counter
%H: hostname
%pid: PID
+X: Hexadecimal representation
The +X mode in logformat displays hexadecimal for the following flags
%Ci %Cp %Fi %Fp %Bi %Bp %Si %Sp %Ts %ct %pid
rename logformat_write_string() to lf_text()
Optimize size computation
Sometimes it is desirable to forward a particular request to a specific
server without having to declare a dedicated backend for this server. This
can be achieved using the "use-server" rules. These rules are evaluated after
the "redirect" rules and before evaluating cookies, and they have precedence
on them. There may be as many "use-server" rules as desired. All of these
rules are evaluated in their declaration order, and the first one which
matches will assign the server.
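For example (addresses, ACL and server names are illustrative), one specific
Host can be pinned to a dedicated server while other requests are balanced
normally :
  backend app
      acl is_special hdr(host) -i special.example.com
      use-server srv_special if is_special
      server srv_special 192.168.0.10:80
      server srv1 192.168.0.11:80
      server srv2 192.168.0.12:80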
Released version 1.5-dev8 with the following main changes :
- MINOR: patch for minor typo (ressources/resources)
- MEDIUM: http: add support for sending the server's name in the outgoing request
- DOC: mention that default checks are TCP connections
- BUG/MINOR: fix options forwardfor if-none when an alternative header name is specified
- CLEANUP: Make check_statuses, analyze_statuses and process_chk static
- CLEANUP: Fix HCHK spelling errors
- BUG/MINOR: fix typo in processing of http-send-name-header
- MEDIUM: log: Use linked lists for loggers
- BUILD: fix declaration inside a scope block
- REORG: log: split send_log function
- MINOR: config: Parse the string of the log-format config keyword
- MINOR: add ultoa, ulltoa, ltoa, lltoa implementations
- MINOR: Date and time functions that don't use snprintf
- MEDIUM: log: make http_sess_log use log_format
- DOC: log-format documentation
- MEDIUM: log: use log_format for mode tcplog
- MEDIUM: log-format: backend source address %Bi %Bp
- BUG/MINOR: log-format: fix %o flag
- BUG/MEDIUM: bad length in log_format and __send_log
- MINOR: logformat %st is signed
- BUILD/MINOR: fix the source URL in the spec file
- DOC: acl is http_first_req, not http_req_first
- BUG/MEDIUM: don't trim last spaces from headers consisting only of spaces
- MINOR: acl: add new matches for header/path/url length
- BUILD: halog: make halog build on solaris
- BUG/MINOR: don't use a wrong port when connecting to a server with mapped ports
- MINOR: remove the client/server side distinction in SI addresses
- MINOR: halog: add support for matching queued requests
- DOC: indicate that cookie "prefix" and "indirect" should not be mixed
- OPTIM/MINOR: move struct sockaddr_storage to the tail of structs
- OPTIM/MINOR: make it possible to change pipe size (tune.pipesize)
- BUILD/MINOR: silent a build warning in src/pipe.c (fcntl)
- OPTIM/MINOR: move the hdr_idx pools out of the proxy struct
- MEDIUM: tune.http.maxhdr makes it possible to configure the maximum number of HTTP headers
- BUG/MINOR: fix a segfault when parsing a config with undeclared peers
- CLEANUP: rename possibly confusing struct field "tracked"
- BUG/MEDIUM: checks: fix slowstart behaviour when server tracking is in use
- MINOR: config: tolerate server "cookie" setting in non-HTTP mode
- MEDIUM: buffers: add some new primitives and rework existing ones
- BUG: buffers: don't return a negative value on buffer_total_space_res()
- MINOR: buffers: make buffer_pointer() support negative pointers too
- CLEANUP: kill buffer_replace() and use an inline instead
- BUG: tcp: option nolinger does not work on backends
- CLEANUP: ebtree: remove a few annoying signedness warnings
- CLEANUP: ebtree: clarify licence and update to 6.0.6
- CLEANUP: ebtree: remove 4-year old harmless typo in duplicates insertion code
- CLEANUP: ebtree: remove another typo, a wrong initialization in insertion code
- BUG: ebtree: ebst_lookup() could return the wrong entry
- OPTIM: stream_sock: reduce the amount of in-flight spliced data
- OPTIM: stream_sock: save a failed recv syscall when splice returns EAGAIN
- MINOR: acl: add support for TLS server name matching using SNI
- BUG: http: re-enable TCP quick-ack upon incomplete HTTP requests
- BUG: proto_tcp: don't try to bind to a foreign address if sin_family is unknown
- MINOR: pattern: export the global temporary pattern
- CLEANUP: patterns: get rid of pattern_data_setstring()
- MEDIUM: acl: use temp_pattern to store fetched information in the "method" match
- MINOR: acl: include pattern.h to make pattern migration more transparent
- MEDIUM: pattern: change the pattern data integer from unsigned to signed
- MEDIUM: acl: use temp_pattern to store any integer-type information
- MEDIUM: acl: use temp_pattern to store any address-type information
- CLEANUP: acl: integer part of acl_test is not used anymore
- MEDIUM: acl: use temp_pattern to store any string-type information
- CLEANUP: acl: remove last data fields from the acl_test struct
- MEDIUM: http: replace get_ip_from_hdr2() with http_get_hdr()
- MEDIUM: patterns: the hdr() pattern is now of type string
- DOC: add minimal documentation on how ACLs work internally
- DOC: add a coding-style file
- OPTIM: halog: keep a fast path for the lines-count only
- CLEANUP: silence a warning when building on sparc
- BUG: http: tighten the list of allowed characters in a URI
- MEDIUM: http: block non-ASCII characters in URIs by default
- DOC: add some documentation from RFC3986 about URI format
- BUG/MINOR: cli: correctly remove the whole table on "clear table"
- BUG/MEDIUM: correctly disable servers tracking another disabled servers.
- BUG/MEDIUM: zero-weight servers must not dequeue requests from the backend
- MINOR: halog: add some help on the command line
- BUILD: fix build error on FreeBSD
- BUG: fix double free in peers config error path
- MEDIUM: improve config check return codes
- BUILD: make it possible to look for pcre in the default system paths
- MINOR: config: emit a warning when 'default_backend' masks servers
- MINOR: backend: rework the LC definition to support other connection-based algos
- MEDIUM: backend: add the 'first' balancing algorithm
- BUG: fix httplog trailing LF
- MEDIUM: increase chunk-size limit to 2GB-1
- BUG: queue: fix dequeueing sequence on HTTP keep-alive sessions
- BUG: http: disable TCP delayed ACKs when forwarding content-length data
- BUG: checks: fix server maintenance exit sequence
- BUG/MINOR: stream_sock: don't remove BF_EXPECT_MORE and BF_SEND_DONTWAIT on partial writes
- DOC: enumerate valid status codes for "observe layer7"
- MINOR: buffer: switch a number of buffer args to const
- CLEANUP: silence signedness warning in acl.c
- BUG: stream_sock: si->release was not called upon shutw()
- MINOR: log: use "%ts" to log term status only and "%tsc" to log with cookie
- BUG/CRITICAL: log: fix risk of crash in development snapshot
- BUG/MAJOR: possible crash when using capture headers on TCP frontends
- MINOR: config: disable header captures in TCP mode and complain
parse_logformat_string: parse the string, detect the type: text,
separator or variable
parse_logformat_var: detect variable name
parse_logformat_var_args: parse arguments and flags
add_to_logformat_list: add to the logformat linked list
When checking a configuration file using "-c -f xxx", sometimes it is
reported that a config is valid while it will later fail (eg: no enabled
listener). Instead, let's improve the return values :
- return 0 if config is 100% OK
- return 1 if config has errors
- return 2 if config is OK but no listener nor peer is enabled
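This makes it easy for init scripts to distinguish the cases, for instance :
  $ haproxy -c -f /etc/haproxy/haproxy.cfg; echo $?
  0           # 1 = errors found, 2 = valid but no listener/peer enabled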
This patch removes the two-logger limitation.
Loggers are now stored in linked lists.
Using "global log", the global loggers list content is added at the end
of the current proxy list. Each "log" entry is added at the end of
the proxy list.
"no log" flushes the logger list.
Ludovic Levesque reported and diagnosed an annoying bug. When a server is
configured to track another one and has a slowstart interval set, it's
assigned a minimal weight when the tracked server goes back up but keeps
this weight forever.
This is because the throttling during the warmup phase is only computed
in the health checking function.
After several attempts to resolve the issue, the only real solution is to
split the check processing task in two tasks, one for the checks and one
for the warmup. Each server with a slowstart setting has a warmup task
which is responsible for updating the server's weight after a down to up
transition. The task does not run in other situations.
In the end, the fix is neither complex nor long and should be backported
to 1.4 since the issue was detected there first.
It makes no sense to have one pointer to the hdr_idx pool in each proxy
struct since these pools do not depend on the proxy. Let's have a common
pool instead as it is already the case for other types.
Passing -C <dir> causes haproxy to chdir to <dir> before loading
any file. The argument may be passed anywhere on the command line.
A typical use case is :
$ haproxy -C /etc/haproxy -f global.cfg -f haproxy.cfg
The way the unix socket is initialized is awkward. Some of the settings are put
in the sockets itself, other ones in the backend. And more importantly the
global.maxsock value is adjusted so that the stats socket evades the global
maxconn value. This complexifies maxsock computations for nothing, since the
stats socket is not supposed to receive hundreds of concurrent connections when
the global maxconn is very low. What is needed however is to ensure that there
are always connections left for the stats socket even when traffic sockets are
saturated, but this guarantee is not offered anymore by current code.
So as of now, the stats socket is subject to the global maxconn limitation just
as any other socket until a reservation mechanism is implemented.
This global task is used to periodically check for end of resource shortage
and to try to enable queued listeners again. This is important in case some
temporary system-wide shortage is encountered, so that we don't have to wait
for an existing connection to be released before checking the queue again.
For situations where listeners are queued due to the global maxconn being
reached, the task is woken up at least every second. For situations where
a system resource shortage is detected (memory, sockets, ...) the task is
woken up at least every 100 ms. That way, recovery from severe events can
still be achieved under acceptable conditions.
This function is finally not needed anymore, as it has been replaced with
a per-proxy task that is scheduled when some limits are encountered on
incoming connections or when the process is stopping. The savings should
be noticeable on configs with a large number of proxies. The most important
point is that the rate limiting is now enforced in a clean and solid way.
When an accept() fails because of a connection limit or a memory shortage,
we now disable it and queue it so that it's dequeued only when a connection
is released. This has improved the behaviour of the process near the fd limit
as now a listener with no connection (eg: stats) will not loop forever
trying to get its connection accepted.
The solution is still not 100% perfect, as we'd like to have this used when
proxy limits are reached (use a per-proxy list) and for safety, we'd need
to have dedicated tasks to periodically re-enable them (eg: to overcome
temporary system-wide resource limitations when no connection is released).
Managing listeners state is difficult because they have their own state
and can at the same time have theirs dictated by their proxy. The pause
is not done properly, as the proxy code is fiddling with sockets. By
introducing new functions such as pause_listener()/resume_listener(), we
make it a bit more obvious how/when they're supposed to be used. The
listen_proxies() function was also renamed to resume_proxies() since
it's only used for pause/resume.
This patch is the first in a series aiming at getting rid of the maintain_proxies
mess. In the end, proxies should not call enable_listener()/disable_listener()
anymore.
By default on a single process, we accept 100 connections at once. This is too
much on recent CPUs where the cache is constantly thrashing, because we visit
all those connections several times. We should batch the processing slightly
less so that all the accepted sessions may remain in cache during their initial
processing.
Lowering the batch size from 100 to 32 has changed the connection rate for
concurrencies between 5-10k from 67 kcps to 94 kcps on a Core i5 660 (4M L3),
and forward rates from 30k to 39.5k.
Tests on this hardware show that values between 10 and 30 seem to do the job fine.
The motivation for this is that when soft-restart is merged
it will become more important to free all relevant memory in deinit().
Discovered using valgrind.
And also rename "req_acl_rule" "http_req_rule". At the beginning that
was a bit confusing to me, especially the "req_acl" list which in fact
holds what we call rules. After some digging, it appeared that some
part of the code is 100% HTTP and not just related to authentication
anymore, so let's move that part to HTTP and keep the auth-only code
in auth.c.
It's very annoying that frontend and backend stats are merged because we
don't know what we're observing. For instance, if a "listen" instance
makes use of a distinct backend, it's impossible to know what the bytes_out
means.
Some points take care of not updating counters twice if the backend points
to the frontend, indicating a "listen" instance. The thing becomes more
complex when we try to add support for server side keep-alive, because we
have to maintain a pointer to the backend used for last request, and to
update its stats. But we can't perform such comparisons anymore because
the counters will not match anymore.
So in order to get rid of this situation, let's have both frontend AND
backend stats in the "struct proxy". We simply update the relevant ones
during activity. Some of them are only accounted for in the backend,
while others are just for frontend. Maybe we can improve a bit on that
later, but the essential part is that those counters now reflect what
they really mean.
As reported by the Loadbalancer.org team, it was not possible to bind
more than 1024 ports. This is because the process' limits were set after
trying to bind the sockets, which defeats their purpose.
This fix must be backported to 1.4 and 1.3.
One of the requirements we have is to run multiple instances of haproxy on a
single host; this is so that we can split the responsibilities (and change
permissions) between product teams. An issue we ran up against is how we
would distinguish between the logs generated by each instance. The solution
we came up with (please let me know if there is a better way) is to override
the application tag written to syslog. We can then configure syslog to write
these to different files.
I have attached a patch adding a global option 'log-tag' to override the
default syslog tag 'haproxy' (actually defaults to argv[0]).
Haproxy includes the IP of the machine rather than its hostname in
the syslog headers it sends. Unfortunately this means that for each log
line rsyslog does a reverse dns on the client IP and in the case of
non-routable IPs one gets the public hostname not the internal one.
While this is valid according to RFC3164, as one might imagine this is
troublesome if you have some machines with public IPs, internal IPs, no
reverse DNS entries, etc and you want a standardized hostname based log
directory structure. The RFC says the preferred value is the hostname.
This patch adds a global "log-send-hostname" statement which accepts an
optional string to force the host name. If unset, the local host name
is used.
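Combining this with the 'log-tag' statement introduced above, a global
section could look like this (values are examples only) :
  global
      log-tag            lb-customer-a     # syslog tag, instead of argv[0]
      log-send-hostname  lb01.internal     # hostname field in the syslog header
      log 192.168.0.1 local0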
HTTP content-based health checks will be involved in searching text in pages.
Some pages may not fit in the default buffer (16kB) and sometimes it might be
desired to have larger buffers in order to find patterns. Running checks on
smaller URIs is always preferred of course.
(cherry picked from commit 043f44aeb835f3d0b57626c4276581a73600b6b1)
In deinit(), it is possible that we first free the listeners, then
unbind them all. Right now this situation can't happen because the
only way to call deinit() is to pass via a soft-stop which will
already unbind all protocols. But later this might become a problem.
This counter is incremented for each incoming connection and each active
listener, and is used to prevent haproxy from stopping upon SIGUSR1. It
will thus be possible for some tasks to increment this counter in order
to prevent haproxy from dying until they have completed their job.
Signal zero is never delivered by the system. However having a signal to
which functions and tasks can subscribe to be notified of a stopping event
is useful. So this patch does two things :
1) allow signal zero to be delivered from any function or signal handler
2) make soft_stop() deliver this signal so that tasks can be notified of
a stopping condition.
The two new functions below make it possible to register any number
of functions or tasks to a system signal. They will be called in the
registration order when the signal is received.
struct sig_handler *signal_register_fct(int sig, void (*fct)(struct sig_handler *), int arg);
struct sig_handler *signal_register_task(int sig, struct task *task, int reason);
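Using the prototypes above, a function can be attached to a signal roughly
like this (the callback name and the argument value are made up for the
example) :
  static void dump_state(struct sig_handler *sh)
  {
      /* react to the signal here */
      (void)sh;
  }

  /* during initialization: have dump_state() called when SIGUSR2 is
   * received; 42 is an arbitrary registration argument */
  signal_register_fct(SIGUSR2, dump_state, 42);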
In case of binding failure during startup, we wait for some time sending
signals to old pids so that they release the ports we need. But if there
aren't any old pids anymore, it's useless to wait, we prefer to fail fast.
Along with this change, we now have the number of old pids really found
in the nb_oldpids variable.
The 'client.c' file now only contained frontend-specific functions,
so it has naturally been renamed 'frontend.c'. Same for client.h. This
has also been an opportunity to remove some cross references from
files that should not have depended on it.
In the end, this file should contain a protocol-agnostic accept()
code, which would initialize a session, task, etc... based on an
accept() from a lower layer. Right now there are still references
to TCP.
Apparently some systems define MSG_NOSIGNAL but do not necessarily
check it (or maybe binaries are built somewhere and used on older
versions). There were reports of very recent FreeBSD setups causing
SIGPIPEs, while older ones catch the signal. Recent FreeBSD manpages
indeed define MSG_NOSIGNAL.
So let's now unconditionally catch the signal. It's useless not to do
it for the rare cases where it's not needed (linux 2.4 and below).
We are seeing both real servers repeatedly going on- and off-line with
a period of tens of seconds. Packet tracing, stracing, and adding
debug code to HAProxy itself has revealed that the real servers are
always responding correctly, but HAProxy is sometimes receiving only
part of the response.
It appears that the real servers are sending the test page as three
separate packets. HAProxy receives the contents of one, two, or three
packets, apparently randomly. Naturally, the health check only
succeeds when all three packets' data are seen by HAProxy. If HAProxy
and the real servers are modified to use a plain HTML page for the
health check, the response is in the form of a single packet and the
checks do not fail.
(...)
I've added buffer and length variables to struct server, and allocated
space with the rest of the server initialisation.
(...)
It seems to be working fine in my tests, and handles check responses
that are bigger than the buffer.
Marcello Gorlani reported that at least on FreeBSD, a long hostname
was reported with garbage on the stats page. POSIX does not make it
mandatory for gethostname() to NULL-terminate the string in case of
truncation, and at least FreeBSD appears not to do it. So let's
force null-termination to keep safe.
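The defensive pattern is simply the following (sketch; the buffer size is
arbitrary) :
  char hostname[64];

  gethostname(hostname, sizeof(hostname));
  hostname[sizeof(hostname) - 1] = '\0'; /* POSIX does not guarantee NUL-termination on truncation */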
The bounce realign function was algorithmically good but as expected
it was not cache-friendly. Using it with large requests caused so many
cache thrashing that the function itself could drain 70% of the total
CPU time for only 0.5% of the calls !
Revert back to a standard memcpy() using a specially allocated swap
buffer. We're now back to 2M req/s on pipelined requests.
This patch fixes cfgparser not to leak memory on each
default server statement and adds several missing free
calls in deinit():
- free(l->name)
- free(l->counters)
- free(p->desc);
- free(p->fwdfor_hdr_name);
None of them are critical, hopefully.
Released version 1.4-rc1 with the following main changes :
- [MEDIUM] add a maintenance mode to servers
- [MINOR] http-auth: last fix was wrong
- [CONTRIB] add base64rev-gen.c that was used to generate the base64rev table.
- [MINOR] Base64 decode
- [MINOR] generic auth support with groups and encrypted passwords
- [MINOR] add ACL_TEST_F_NULL_MATCH
- [MINOR] http-request: allow/deny/auth support for frontend/backend/listen
- [MINOR] acl: add http_auth and http_auth_group
- [MAJOR] use the new auth framework for http stats
- [DOC] add info about userlists, http-request and http_auth/http_auth_group acls
- [STATS] make it possible to change a CLI connection timeout
- [BUG] patterns: copy-paste typo in type conversion arguments
- [MINOR] pattern: make the converter more flexible by supporting void* and int args
- [MINOR] standard: str2mask: string to netmask converter
- [MINOR] pattern: add support for argument parsers for converters
- [MINOR] pattern: add the "ipmask()" converting function
- [MINOR] config: off-by-one in "stick-table" after list of converters
- [CLEANUP] acl, patterns: make use of my_strndup() instead of malloc+memcpy
- [BUG] restore accidentally removed line in last patch !
- [MINOR] checks: make the HTTP check code add the CRLF itself
- [MINOR] checks: add the server's status in the checks
- [BUILD] halog: make without arch-specific optimizations
- [BUG] halog: fix segfault in case of empty log in PCT mode (cherry picked from commit fe362fe476)
- [MINOR] http: disable keep-alive when process is going down
- [MINOR] acl: add build_acl_cond() to make it easier to add ACLs in config
- [CLEANUP] config: use build_acl_cond() instead of parse_acl_cond()
- [CLEANUP] config: use warnif_cond_requires_resp() to check for bad ACLs
- [MINOR] prepare req_*/rsp_* to receive a condition
- [CLEANUP] config: specify correct const char types to warnif_* functions
- [MEDIUM] config: factor out the parsing of 20 req*/rsp* keywords
- [MEDIUM] http: make the request filter loop check for optional conditions
- [MEDIUM] http: add support for conditional request filter execution
- [DOC] add some build info about the AIX platform (cherry picked from commit e41914c77e)
- [MEDIUM] http: add support for conditional request header addition
- [MEDIUM] http: add support for conditional response header rewriting
- [DOC] add some missing ACLs about response header matching
- [MEDIUM] http: add support for proxy authentication
- [MINOR] http-auth: make the 'unless' keyword work as expected
- [CLEANUP] config: use build_acl_cond() to simplify http-request ACL parsing
- [MEDIUM] add support for anonymous ACLs
- [MEDIUM] http: switch to tunnel mode after status 101 responses
- [MEDIUM] http: stricter processing of the CONNECT method
- [BUG] config: reset check request to avoid double free when switching to ssl/sql
- [MINOR] config: fix too large ssl-hello-check message.
- [BUG] fix error response in case of server error
Support the new syntax (http-request allow/deny/auth) in
http stats.
Now it is possible to use the same syntax as in the frontend/backend
http-request access control:
acl src_nagios src 192.168.66.66
acl stats_auth_ok http_auth(L1)
stats http-request allow if src_nagios
stats http-request allow if stats_auth_ok
stats http-request auth realm LB
The old syntax is still supported, but now it is emulated
via private acls and an additional userlist.
Add generic authentication & authorization support.
Groups are implemented as bitmaps so the count is limited to
sizeof(int)*8 == 32.
Encrypted passwords are supported with libcrypt and crypt(3), so it is
possible to use any method supported by your system. For example modern
Linux/glibc installations support MD5/SHA-256/SHA-512 and of course classic,
DES-based encryption.
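For reference, such a userlist could look like this (names are placeholders
and the hash is truncated) :
  userlist L1
      group G1 users tiger,scott
      user tiger password $6$k6y3o.eP$...        # crypt(3) SHA-512 hash
      user scott insecure-password mypassword    # clear text, for tests only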
Cyril Bonté found that when an error is detected in one config file, it
is also reported in all other ones, which is wrong. The fix obviously
consists in checking the return code from readcfgfile() and not the
accumulator.
Cameron Simpson reported an annoying case where haproxy simply reports
"Error(s) found in configuration file" when the file is not found or
not readable.
Fortunately the parsing function still returns -1 in case of open
error, so we're able to detect the issue from the caller and report
the corresponding errno message.
Some rarely used information is stored in fdtab, making it larger for no
reason (source port ranges, remote address, ...). Such information
lies there because the checks can't find it anywhere else. The goal
will be to move this information to the stream interface once the
checks make use of it.
For now, we move it to an fdinfo array. This simple change might
have improved the cache hit ratio a little bit, because a 0.5%
performance increase has been measured.
During troubleshooting, it's often useful to get the list of supported
pollers but until now it was required to have a working configuration
first. Since the pollers are known before main() is called, let's list
them with the build options.
This patch implements "description" (proxy and global) and "node" (global)
options, removes "node-name" and adds "show-node" & "show-desc" options
for "stats". It also changes the way the header lines (with proxy name) and
the statistics are displayed, so stats no longer look so clumsy with very
long names.
Instead of "node-name" it is possible to use show-node/show-desc with
an optional parameter that overrides a default node/description.
backend cust-0045
# report specific values for this customer
stats show-node Europe
stats show-desc Master node for Europe, Asia, Africa
Commit 27a674efb8 introduced the ability
to configure buffer sizes. Unfortunately, the pool was created before
the conf was read, so that it was always set to the default size.
In order to fix that, we delay the call to init_buffer(), which is not
a problem since nothing uses it during the initialization.
The new tune.bufsize and tune.maxrewrite global directives allow one to
change the buffer size and the maxrewrite size. Right now, setting bufsize
too low will block stats sockets which will not be able to write at all.
Error checking must be added to buffer_write_chunk() so that if it
cannot write its message to an empty buffer, it causes the caller to abort.
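For example (values are purely illustrative) :
  global
      tune.bufsize    32768   # larger buffers, e.g. to inspect bigger responses
      tune.maxrewrite 2048    # space kept free for header rewriting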
Creating a frontend for the global stats socket will help merge
unix sockets management with the other socket management. Since
frontends are huge structs, we only allocate it if required.
This Linux-specific option was never really used in production and
has since been superseded by new splicing options brought by recent
Linux kernels.
It caused several particular cases in the code because the kernel
would take care of the session without haproxy being able to do
anything on it, which became hard to handle in the new architecture.
Let's simply get rid of it now that there is a replacement available.
Do not exit early at the first error found while checking configuration
validity. This particularly helps spotting multiple wrong tracked server
names at once.