CIDs are provided by haproxy so that the peer can use them as DCID of
its packets. Their value is set via a random generator. This happens on
several occasions during the connection lifetime:
* via ODCID derivation if haproxy is the server
* on quic_conn init if haproxy is the client
* during post-handshake if haproxy is the server
* on RETIRE_CONNECTION_ID frame parsing
CIDs are stored in a global tree. On ODCID derivation, a check is
performed to ensure the CID is not a duplicate value. This is mandatory
to properly handle multiple INITIAL packets from the same client on
different threads.
However, for the other cases, no check is performed for CID collision.
As _quic_cid_insert() is silent, the issue is not detected at all. This
results in a CID advertised to the peer but not stored in the global
tree. In the end, this may cause two issues. The first one is that
packets from the client which use the new CID will be rejected by
haproxy, most probably with a STATELESS_RESET. The second issue is that
it can cause a crash during quic_conn release. Indeed, the CID is stored
in the quic_conn local tree and thus eb_delete() for the global tree
will be performed. As the <leaf_p> member is uninitialized, this results
in a segfault.
Note that this issue is pretty rare. It can only be observed if running
with a high number of concurrent connections in parallel, so that the
random generator will provide duplicate values. Patch is still labelled
as MEDIUM as this modifies code paths used frequently.
To fix this, _quic_cid_insert() unsafe function is completely removed.
Instead, quic_cid_insert() can be used, which reports an error code if a
collision happens. CIDs are then stored in the quic_conn tree only after
a successful global tree insert. Here is the solution for each step if a
collision occurs:
* on init as client: the connection is completely released
* post-handshake: the CID is immediately released. The connection is
kept, but it will miss an extra CID.
* on RETIRE_CONNECTION_ID parsing: a loop is implemented to retry random
generation, as sketched below. If it fails several times, the connection
is closed in error.
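As an illustration of this retry approach, here is a minimal standalone
sketch of the pattern. It is not the actual haproxy code: rand() and a toy
array stand in for the real random generator and the global CID tree, and
the function names are made up.

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  #define CID_LEN        8
  #define MAX_CID_RETRY  3
  #define MAX_CIDS       1024

  static uint8_t known_cids[MAX_CIDS][CID_LEN];
  static size_t nb_cids;

  /* toy stand-in for quic_cid_insert(): 0 on success, -1 on collision */
  static int toy_global_insert(const uint8_t *cid)
  {
      size_t i;

      for (i = 0; i < nb_cids; i++)
          if (memcmp(known_cids[i], cid, CID_LEN) == 0)
              return -1;
      if (nb_cids >= MAX_CIDS)
          return -1;
      memcpy(known_cids[nb_cids++], cid, CID_LEN);
      return 0;
  }

  /* returns 0 on success, -1 if no unique CID could be generated, in which
   * case the caller is expected to close the connection in error.
   */
  static int gen_unique_cid(uint8_t *cid)
  {
      int retry, i;

      for (retry = 0; retry < MAX_CID_RETRY; retry++) {
          for (i = 0; i < CID_LEN; i++)
              cid[i] = rand() & 0xff;
          if (toy_global_insert(cid) == 0)
              return 0; /* only now is it safe to store it in the conn tree */
      }
      return -1;
  }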
A small convenience change is made to quic_cid_insert(). Output
parameter <new_tid> can now be NULL, which is useful as most of the
time callers do not care about it.
This must be backported up to 2.6.
Split new_quic_cid() function into multiple ones. This patch should not
introduce any visible change. The objective is to render CID allocation
and generation more modular.
The first advantage of this patch is to bring code simplification. In
particular, conn CID sequence number increment and insertion into the
connection tree is simpler than before. Another improvement is that
errors can now be handled more easily at each step of the CID init.
This patch is a prerequisite for the fix on CID collision, thus it must
be backported prior to it to every affected version.
Change qc_new_conn() so that the connection CID tree is allocated
earlier in the function. This patch does not introduce a behavior
change. Its objective is to facilitate future evolutions on CIDs
handling.
This patch is a prerequisite for the fix on CID collision, thus it must
be backported prior to it to every affected version.
During RETIRE_CONNECTION_ID frame parsing, a new connection ID is
immediately reallocated after the release of the previous one. This is
done to ensure that the peer will never run out of DCID.
Prior to this patch, a CID allocation failure was silently ignored.
This prevented the emission of a new CID, which could prevent the peer from
emitting packets if it had no other CID available for use. Now, such an error
is considered fatal to the connection. This is the safest solution as
it's better to close connections when memory is running low.
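A very rough sketch of the new behavior (illustrative only, with made-up
names; the real code operates on the quic_conn and its CID tree):

  #include <stdlib.h>

  struct conn_sketch { int in_error; };

  static int replace_retired_cid(struct conn_sketch *conn)
  {
      void *new_cid = malloc(8); /* toy allocation standing in for the CID */

      if (!new_cid) {
          conn->in_error = 1; /* now fatal: the connection will be closed */
          return -1;
      }
      /* ... advertise the new CID to the peer via NEW_CONNECTION_ID ... */
      free(new_cid);
      return 0;
  }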
It must be backported up to 2.8.
Amaury reported a case where "${FOO[*]}" still produces an empty field.
It happens if the variable is defined but does not contain any non-space
characters. The reason is that we special-case word expansion only on
non-existing vars. Let's change the ordering of operations so that word-
expanded vars always pretend the current arg is not an empty quote, so
that we don't make any difference between a non-existing var and an
empty one.
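For example, with FOO set to a single space character in the environment,
a line such as:

  acl foo path "/a" "${FOO[*]}" "/b"

used to produce an empty argument between "/a" and "/b"; it now expands to
nothing, exactly as if FOO did not exist.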
No backport is needed unless commit 1968731765 ("BUG/MEDIUM: config:
solve the empty argument problem again") is.
Released version 3.3-dev12 with the following main changes :
- MINOR: quic: enable SSL on QUIC servers automatically
- MINOR: quic: reject conf with QUIC servers if not compiled
- OPTIM: quic: adjust automatic ALPN setting for QUIC servers
- MINOR: sample: optional AAD parameter support to aes_gcm_enc/dec
- REGTESTS: converters: check USE_OPENSSL in aes_gcm.vtc
- BUG/MINOR: resolvers: ensure fair round robin iteration
- BUG/MAJOR: stats-file: fix crash on non-x86 platform caused by unaligned cast
- OPTIM: backend: skip conn reuse for incompatible proxies
- SCRIPTS: build-ssl: allow to build a FIPS version without FIPS
- OPTIM: proxy: move atomically access fields out of the read-only ones
- SCRIPTS: build-ssl: fix rpath in AWS-LC install for openssl and bssl bin
- CI: github: update to macos-26
- BUG/MINOR: quic: fix crash on client handshake abort
- MINOR: quic: do not set conn member if ssl_sock_ctx
- MINOR: quic: remove connection arg from qc_new_conn()
- BUG/MEDIUM: server: Add a rwlock to path parameter
- BUG/MEDIUM: server: Also call srv_reset_path_parameters() on srv up
- BUG/MEDIUM: mux-h1: fix 414 / 431 status code reporting
- BUG/MEDIUM: mux-h2: make sure not to move a dead connection to idle
- BUG/MEDIUM: connections: permit to permanently remove an idle conn
- MEDIUM: cfgparse: deprecate 'master-worker' keyword alone
- MEDIUM: cfgparse: 'daemon' not compatible with -Ws
- DOC: configuration: deprecate the master-worker keyword
- MINOR: quic: remove <mux_state> field
- BUG/MEDIUM: stick-tables: Make sure we handle expiration on all tables
- MEDIUM: stick-tables: Optimize the expiration process a bit.
- MEDIUM: ssl/ckch: use ckch_store instead of ckch_data for ckch_conf_kws
- MINOR: acme: generate a temporary key pair
- MEDIUM: acme: generate a key pair when no file are available
- BUILD: ssl/ckch: wrong function name in ckch_conf_kws
- BUILD: acme: acme_gen_tmp_x509() signedness and unused variables
- BUG/MINOR: acme: fix initialization issue in acme_gen_tmp_x509()
- BUILD: ssl/ckch: fix ckch_conf_kws parsing without ACME
- MINOR: server: move the lock inside srv_add_idle()
- DOC: acme: crt-store allows you to start without a certificate
- BUG/MINOR: acme: allow 'key' when generating cert
- MINOR: stconn: Add counters to SC to know number of bytes received and sent
- MINOR: stream: Add samples to get number of bytes received or sent on each side
- MINOR: counters: Add req_in/req_out/res_in/res_out counters for fe/be/srv/li
- MINOR: stream: Remove bytes_in and bytes_out counters from stream
- MINOR: counters: Remove bytes_in and bytes_out counter from fe/be/srv/li
- MINOR: stats: Add stats about request and response bytes received and sent
- MINOR: applet: Add function to get amount of data in the output buffer
- MINOR: channel: Remove total field from channels
- DEBUG: stream: Add bytes_in/bytes_out value for both SC in session dump
- MEDIUM: stktables: Limit the number of stick counters to 100
- BUG/MINOR: config: Limit "tune.maxpollevents" parameter to 1000000
- BUG/MEDIUM: server: close a race around ready_srv when deleting a server
- BUG/MINOR: config: emit warning for empty args when *not* in discovery mode
- BUG/MEDIUM: config: solve the empty argument problem again
- MEDIUM: config: now reject configs with empty arguments
- MINOR: tools: add support for ist to the word fingerprinting functions
- MINOR: tools: add env_suggest() to suggest alternate variable names
- MINOR: tools: have parse_line's error pointer point to unknown variable names
- MINOR: cfgparse: try to suggest correct variable names on errors
- IMPORT: cebtree: Replace offset calculation with offsetof to avoid UB
- BUG/MINOR: acme: wrong dns-01 challenge in the log
- MEDIUM: backend: Defer conn_xprt_start() after mux creation
- MINOR: peers: Improve traces for peers
- MEDIUM: peers: No longer ack updates during a full resync
- MEDIUM: peers: Remove commitupdate field on stick-tables
- BUG/MEDIUM: peers: Fix update message parsing during a full resync
- MINOR: sample/stats: Add "bytes" in req_{in,out} and res_{in,out} names
- BUG/MEDIUM: stick-tables: Make sure updates are seen as local
- BUG/MEDIUM: proxy: use aligned allocations for struct proxy
- BUG/MEDIUM: proxy: use aligned allocations for struct proxy_per_tgroup
- BUG/MINOR: acme: avoid a possible crash on error paths
In acme_EVP_PKEY_gen(), an error message is printed if *errmsg is set,
however, since commit 546c67d13 ("MINOR: acme: generate a temporary key
pair"), errmsg is passed as NULL in at least one occurrence, leading
the compiler to issue a NULL deref warning at -O3. And indeed, if these
error paths are hit, a crash will occur. No backport is needed.
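The pattern at stake is the classic optional output-message parameter; a
minimal sketch of the guarded form (hypothetical names, not the actual acme
code):

  #include <stdio.h>

  /* errmsg may be NULL when the caller does not care about the message,
   * so it must be checked before being dereferenced.
   */
  static void report_error(char **errmsg)
  {
      if (errmsg && *errmsg)
          fprintf(stderr, "%s\n", *errmsg);
  }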
In 3.2, commit f879b9a18 ("MINOR: proxies: Add a per-thread group field
to struct proxy") introduced struct proxy_per_tgroup that is declared as
thread_aligned, but is allocated using calloc(). Thus it is at risk of
crashing on machines using instructions requiring 64-byte alignment such
as AVX512. Let's use ha_aligned_zalloc_typed() instead of calloc().
For 3.2, we don't have aligned allocations, so instead the THREAD_ALIGNED()
will have to be removed from the struct definition. Alternatively, we could
manually align it as is done for fdtab.
Commit fd012b6c5 ("OPTIM: proxy: move atomically access fields out of
the read-only ones") caused the proxy struct to be 64-byte aligned,
which allows the compiler to use optimizations such as AVX512 to zero
certain fields. However the struct was allocated using calloc() so it
was not necessarily aligned, causing segv on startup on compatible
machines. Let's just use ha_aligned_zalloc_typed() to allocate the
struct.
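The underlying issue can be reproduced outside of haproxy: calloc() only
guarantees the default malloc alignment (often 16 bytes), while a 64-byte
aligned type needs an aligned allocator. A standalone illustration using
C11 aligned_alloc(), which plays the role ha_aligned_zalloc_typed() plays
in haproxy:

  #include <stdlib.h>
  #include <string.h>

  struct big_aligned {
      char data[256];
  } __attribute__((aligned(64))); /* same constraint as a 64-byte aligned struct */

  int main(void)
  {
      /* aligned_alloc() honors the 64-byte requirement; note that the size
       * must be a multiple of the alignment.
       */
      struct big_aligned *p = aligned_alloc(64, sizeof(*p));

      if (!p)
          return 1;
      memset(p, 0, sizeof(*p)); /* zeroing, as calloc() or a zalloc would do */
      free(p);
      return 0;
  }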
No backport is needed.
In stktable_touch_with_exp, if it is a local update, add it to the
pending update list even if it's already in the tree as a remote update,
otherwise it will never be communicated to other peers.
It used to work before 3.2 because of the ordering of operations, but
it's been broken by adding an extra step with the pending update list,
so we now have to explicitly check for that.
This should be backported to 3.2.
The number of bytes received or sent by a client or a server is now
saved. Sample fetches and stats fields to retrieve this information are
renamed to add "bytes" in their names to avoid any ambiguity with the
number of requests and responses.
The commit 590c5ff2e ("MEDIUM: peers: No longer ack updates during a full
resync") introduced a regression. During a full resync, the ID of an update
message is not parsed at all. Thus, the parsing of the whole message is
desynchronized.
On a full resync, the update ID itself is ignored so that it is not acked,
but it must still be parsed. This is now fixed.
It is a 3.3-specific bug, no backport needed.
This stick-table field was atomically updated with the last update id pushed
and dumped on the CLI but never used otherwise. And all peer sessions share
the same id because it is a stick-table info. So the info in the peers dump
is pretty limited.
So, let's remove it.
ACK messages received by a peer sending updates during a full resync are
ignored. So, on the other side, there is no reason to still send these ACK
messages. Let's skip them.
In addition, the updates received during this stage are not considered as
needing to be acked. It is important to be sure to properly emit ACK
messages once the full resync is finished.
Trace messages for peers were only protocol oriented and the information
provided was quite light. With this patch, the traces are improved.
Information about the peer, its applet and the section is dumped. Several
verbosities are now available and messages are dumped at different levels
depending on the context. It should be easier to track issues in the peers.
In connect_server(), defer the call to conn_xprt_start() until after we
have had a chance to create the mux. The xprt can behave differently
depending on whether a mux is available at this point: if it is, it may
want to wait until some data comes from the mux.
This does not need to be backported.
Since 861fe532046 ("MINOR: acme: add the dns-01-record field to the
sink"), the dns-01 challenge is output in the dns_record trash, instead
of the global trash.
The send_log string was never updated with this change, and dumps some
data from the global trash instead. Since the last data emitted in the
trash seems to be the dns-01 token from the authorization object, it
looks like the response to the challenge.
This must be backported to 3.2.
This is the same as the equivalent fix in ebtree:
The C standard specifies that it's undefined behavior to dereference
NULL (even if you use & right after). The hand-rolled offsetof idiom
&(((s*)NULL)->f) is thus technically undefined. This clutters the
output of UBSan and is simple to fix: just use the real offsetof when
it's available.
This is cebtree commit 2d08958858c2b8a1da880061aed941324e20e748.
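For reference, the two forms look like this (standalone example; the actual
change is inside cebtree's offset macros):

  #include <stddef.h>

  struct node {
      int key;
      char name[16];
  };

  /* hand-rolled idiom: dereferences NULL before taking the address, which
   * is technically undefined behavior and is flagged by UBSan
   */
  #define OFFSET_OF_UB(s, f)  ((size_t)&(((s *)NULL)->f))

  /* preferred form: the standard offsetof() macro from <stddef.h> */
  #define OFFSET_OF_OK(s, f)  offsetof(s, f)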
When an empty argument comes from the use of a non-existing variable,
we'll now detect the difference with an empty variable (error pointer
points to the variable's name instead), and submit it to env_suggest()
to see if another variable looks likely to be the right one or not.
This can be quite useful to quickly figure out how to fix misspelled
variable names. Currently only series of letters, digits and underscores
are attempted to be resolved as a name. A typical example is:
peer "${HAPROXY_LOCAL_PEER}" 127.0.0.1:10000
which produces:
[ALERT] (24231) : config : parsing [bug-argv4.cfg:2]: argument number 1 at position 13 is empty and marks the end of the argument list:
peer "${HAPROXY_LOCAL_PEER}" 127.0.0.1:10000
^
[NOTICE] (24231) : config : Hint: maybe you meant HAPROXY_LOCALPEER instead ?
When an argument is empty, parse_line() currently returns a pointer to
the empty string itself. This is convenient, but it's only actionable by
the user who will see for example "${HAPROXY_LOCALPEER}" and figure out
what is wrong. Here we slightly change the reported pointer so that if an empty
argument results from the evaluation of an empty variable (meaning that
all variables in string are empty and no other char is present), then
instead of pointing to the opening quote, we'll return a pointer to the
first character of the variable's name. This will make it possible to tell
an empty variable from an unknown variable, and for the caller to take
action based on this.
I.e. before we would get:
log "${LOG_SERVER_IP}" local0
^
if LOG_SERVER_IP is not set, and now instead we'll get this:
log "${LOG_SERVER_IP}" local0
^
The purpose here is to look in the environment for a variable whose
name looks like the provided one. This will be used to try to auto-
correct misspelled environment variables that would silently be turned
to an empty string.
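A rough standalone sketch of the idea follows. The real implementation
relies on the word fingerprinting helpers; here a plain Levenshtein
distance and an arbitrary threshold stand in for them, and the function
name is made up:

  #include <stdio.h>
  #include <string.h>

  extern char **environ;

  /* plain Levenshtein distance, used here as a stand-in for the word
   * fingerprint distance (names longer than 255 chars are ignored)
   */
  static int lev(const char *a, const char *b)
  {
      int prev[256], cur[256];
      int la = (int)strlen(a), lb = (int)strlen(b), i, j;

      if (la >= 256 || lb >= 256)
          return 256;
      for (j = 0; j <= lb; j++)
          prev[j] = j;
      for (i = 1; i <= la; i++) {
          cur[0] = i;
          for (j = 1; j <= lb; j++) {
              int cost = (a[i - 1] != b[j - 1]);
              int best = prev[j] + 1;                /* deletion */

              if (cur[j - 1] + 1 < best)
                  best = cur[j - 1] + 1;             /* insertion */
              if (prev[j - 1] + cost < best)
                  best = prev[j - 1] + cost;         /* substitution */
              cur[j] = best;
          }
          memcpy(prev, cur, (lb + 1) * sizeof(int));
      }
      return prev[lb];
  }

  /* print environment variable names looking close to <name> */
  static void env_suggest_sketch(const char *name)
  {
      char candidate[256];
      char **e;

      for (e = environ; *e; e++) {
          const char *eq = strchr(*e, '=');
          size_t len = eq ? (size_t)(eq - *e) : strlen(*e);

          if (len >= sizeof(candidate))
              continue;
          memcpy(candidate, *e, len);
          candidate[len] = 0;
          if (lev(name, candidate) <= 2) /* arbitrary closeness threshold */
              printf("maybe you meant %s instead ?\n", candidate);
      }
  }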
The word fingerprinting functions are used to compare similar words to
suggest a correctly spelled one that looks like what the user proposed.
Currently the functions only support const char*, but there's no reason
for this, and it would be convenient to support substrings extracted
from random pieces of configurations. Here we're adding new variants
"_with_len" that take these ISTs and which are in fact a slight change
of the original ones that the old ones now rely on.
As prepared during 3.2, we must error on empty arguments because they
mark the end of the line and cause subsequent arguments to be silently
ignored. It was too late in 3.2 to turn that into an error so it's a
warning, but for 3.3 it needed to be an alert.
This patch does that. It doesn't instantly break, instead it counts
one fatal error per violating line. This allows to emit several errors
at once, which can often be caused by the same variable being missed,
or a group of variables sharing a same misspelled prefix for example.
Tests show that it helps locate them better. It also explains what to
look for in the config manual for help with variables expansion.
This mostly reverts commit ff8db5a85 ("BUG/MINOR: config: Stopped parsing
upon unmatched environment variables").
As explained in issue #2367, the fix above finally turned out to be incorrect because
it causes other trouble such as this:
log "192.168.100.${NODE}" "local0"
being resolved to this:
log 192.168.100.local0
when NODE does not exist due to the loss of the spaces. In fact, while none
of us was well aware of this, when the user had:
server app 127.0.0.1:80 "${NO_CHECK}" weight 123
in fact they should have written it this way:
server app 127.0.0.1:80 "${NO_CHECK[*]}" weight 123
so that the variable is expanded to zero, one or multiple words, leaving
no empty arg (like in shell). This is supported since 2.3 with commit
fa41cb6 so the right fix is in the config, let's revert the fix and
properly address the issue.
Some changes are necessary however, since after that patch, the in_arg
checks were added and are now inserting an empty argument even for
proper error reporting. For example, the following statement:
acl foo path "/a" "${FOO[*]}" "/b"
would complain about an empty arg at FOO due to in_arg=1, while dropping
this in_arg=1 with the following config:
acl foo path "/a" "${FOO}" "/b"
would silently stop after "/a" instead of complaining about an empty
field. So the approach here consists in noting whether or not something
was written since the quotes were emitted, in order to decide whether
or not to produce an argument. This way, "" continues to be an explicitly
empty arg, just like the same with an unknown variable, while "${FOO[*]}"
is allowed to prevent the creation of an argument if empty.
This should be backported to *some* versions, but the risk that some
configs were altered to rely on the broken fix is not null. At least
recent LTS should be reverted. Note that this requires previous commit:
BUG/MINOR: config: emit warning for empty args when *not* in discovery mode
otherwise this will break again configs relying on HAPROXY_LOCALPEER and
maybe a few other variables set at the end of discovery.
This actually reverses the condition of commit 5f1fad1690 ("BUG/MINOR:
config: emit warning for empty args only in discovery mode"). Indeed,
some variables are not known in discovery mode (e.g. HAPROXY_LOCALPEER),
and statements like:
peer "${HAPROXY_LOCALPEER}" 127.0.0.1:10000
are broken during discovery mode. It turns out that the warning is
currently hidden by commit ff8db5a85d ("BUG/MINOR: config: Stopped
parsing upon unmatched environment variables") since it silently drops
empty args which is sufficient to hide the warning, but it also breaks
other configs and needs to be reverted, which will break configs like
above again.
In issue #2995 we were not fully decided about discovery mode or not,
and already suspected some possible issues without being able to guess
which ones. The only downside of not displaying them in discovery mode
is that certain empty fields on the rare keywords specific to master
mode might remain silent until used. Let's just flip the condition to
check for empty args in normal mode only.
This should be backported to 3.2 after some time of observation.
When a server is being disabled or deleted, in case it matches the
backend's ready_srv, this one is reset. However it's currently done in
a non-atomic way when the server goes down, and that could occasionally
reset the entry matching another server, but more importantly if in
parallel some requests are dequeued for that server, it may re-appear
there after having been removed, leading to a possible crash once it
is fully removed, as shown in issue #3177.
Let's make sure we reset the pointer when detaching the server from
the proxy, and use a CAS in both cases to only reset this server.
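A standalone illustration of that CAS pattern with C11 atomics (the real
code uses haproxy's atomic helpers on the proxy's <ready_srv> pointer):

  #include <stdatomic.h>
  #include <stddef.h>

  struct server;

  struct proxy_sketch {
      _Atomic(struct server *) ready_srv; /* last known ready server */
  };

  /* reset ready_srv only if it still points to the server being removed,
   * so that a concurrent update for another server is never wiped out
   */
  static void reset_ready_srv(struct proxy_sketch *px, struct server *srv)
  {
      struct server *expected = srv;

      atomic_compare_exchange_strong(&px->ready_srv, &expected,
                                     (struct server *)NULL);
  }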
This fix needs to be backported to 3.2. There, srv_detach() is in
server.c instead of server.h. Thanks to Basha Mougamadou for the
detailed report and the useful backtraces.
"tune.maxpollevents" global parameter was not limited. It was possible to
set any integer value. But this value is used to allocate the array of
events used by epoll. With a huge value, it seems the allocation silently
fails, making haproxy totally unresponsive.
So let's limit its value to 1 million. It is pretty high and it should
not be an issue to forbid greater values. The documentation was updated
accordingly.
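For reference, a quick configuration illustration of the new bound (the
exact error wording at parsing time may differ):

  global
      tune.maxpollevents 1000000    # maximum value now accepted
      # any greater value is rejected when the configuration is parsed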
This patch could be backported to all stable branches.
"tune.stick-counters" global parameter was accepting any positive integer
value. But the maximum value is incredibly high. Setting a huge value has a
significant impact on memory and CPU usage. To avoid any issue, this value
is now limited to 100. It should be more than enough for all usages.
It can be seen as a breaking change.
The <total> field in the channel structure is now useless, so it can be
removed. The <bytes_in> field from the SC is used instead.
This patch is related to issue #1617.
The helper function applet_output_data() returns the amount of data in the
output buffer of an applet. For applets using the new API, it is based on
data present in the outbuf buffer. For legacy applets, it is based on input
data present in the input channel's buffer. The HTX version,
applet_htx_output_data(), is also available.
This patch is related to issue #1617.
In previous patches, these counters were added per frontend, backend, server
and listener. With this patch, these counters are reported on stats,
including promex.
Note that the stats file minor version was incremented by one because the
shm_stats_file_object struct size has changed.
This patch is related to issue #1617.
bytes_in and bytes_out counters per frontend, backend, listener and server
were removed and we now rely, respectively, on the req_in and res_in
counters.
This patch is related to issue #1617.
The per-stream bytes_in and bytes_out counters were removed and replaced by
req.in and res.in. The corresponding samples still exist but rely on the new
counters.
This patch is related to issue #1617.
Thanks to the previous patch, and based on info available on the stream, it
is now possible to have counters for frontends, backends, servers and
listeners to report number of bytes received and sent on both sides.
This patch is related to issue #1617.
req.in and req.out samples can now be used to get the number of bytes
received from a client and sent to the server. And res.in and res.out
samples can be used to get the number of bytes received from a server and
sent to the client. This info is stored in the logs structure inside a
stream.
This patch is related to issue #1617.
<bytes_in> and <bytes_out> counters were added to SC to count, respectively,
the number of bytes received from an endpoint or sent to an endpoint. These
counters are updated for connections and applets.
This patch is related to issue #1617.
If your acme certificate is declared in a crt-store, and the certificate
file does not exist on the disk, HAProxy will start with a temporary key
pair.
Almost all callers of _srv_add_idle() lock the list then call the
function. It's not the most efficient and it requires some care from
the caller to deal with that lock. Let's change this a little bit by
having srv_add_idle() that takes the lock and calls _srv_add_idle() that
is now inlined. This way callers don't have to handle the lock themselves
anymore, and the lock is only taken around the sensitive parts, not the
function call+return.
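The pattern is the usual locked wrapper around an unlocked inline helper.
A generic standalone illustration, with a pthread mutex and a doubly-linked
list standing in for the idle-conns lock and list (names are made up):

  #include <pthread.h>

  struct list_head { struct list_head *next, *prev; };

  static pthread_mutex_t idle_lock = PTHREAD_MUTEX_INITIALIZER;

  /* unlocked version: the caller must already hold the lock */
  static inline void _add_idle(struct list_head *list, struct list_head *item)
  {
      item->next = list->next;
      item->prev = list;
      list->next->prev = item;
      list->next = item;
  }

  /* locked wrapper: only the sensitive part runs under the lock, not the
   * whole call+return path of the caller
   */
  static void add_idle(struct list_head *list, struct list_head *item)
  {
      pthread_mutex_lock(&idle_lock);
      _add_idle(list, item);
      pthread_mutex_unlock(&idle_lock);
  }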
Interestingly, perf tests show a small perf increase from 2.28-2.32M RPS
to 2.32-2.37M RPS on a 128-thread system.
src/acme.c: In function ‘acme_gen_tmp_x509’:
src/acme.c:2685:15: error: ‘digest’ may be used uninitialized [-Werror=maybe-uninitialized]
2685 | if (!(X509_sign(newcrt, pkey, digest)))
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/acme.c:2628:23: note: ‘digest’ was declared here
2628 | const EVP_MD *digest;
| ^~~~~~
When an acme keyword is associated to a crt and key, and the corresponding
files do not exist, HAProxy would not start.
This patch allows configuring acme without pre-generating a keypair before
starting HAProxy. If the files do not exist, it tries to generate a unique
keypair in memory, that will be used for every ACME certificate that doesn't
have a file on the disk yet.
This patch provides two functions acme_gen_tmp_pkey() and
acme_gen_tmp_x509().
These functions generate a unique keypair and X509 certificate that
will be stored in tmp_x509 and tmp_pkey. If the key pair or certificate
was already generated, they return the existing one.
The key is an RSA-2048 one and the X509 is generated with an expiration in
the past. The CN is "expired".
These are just placeholders to be used if we don't have files.
This is an API change, instead of passing a ckch_data alone, the
ckch_conf_kws.func() is called with a ckch_store.
This allows the callback to access the whole ckch_store, with the
ckch_conf and the ckch_data. But it requires the ckch_conf to be
actually put in the ckch_store before.
In process_tables_expire(), if the table we're analyzing still has
entries, and thus should be put back into the tree, do not put it in the
mt_list (which would only have it put back into the tree the next time the
task runs). There is no problem with putting it in the tree right away, as either
the next expiration is in the future, or we handled the maximum number
of expirations per task call and we're about to stop, anyway.
This does not need to be backported.
In process_tables_expire(), when parsing all the tables with expiration
set, to check if any entry expired, make sure we start from the oldest
one; we can't just rely on eb32_first() because of sign issues on the
timestamp.
Not doing that may mean some tables are not considered for expiration.
This does not need to be backported.