Low footprint client machines may not have enough memory to download a
complete 16KB TLS record at once. With the new option the maximum
record size can be defined on the server side.
Note: Before limiting the record size on the server side, a client should
consider using the TLS Maximum Fragment Length Negotiation Extension defined
in RFC6066.
This patch fixes GitHub issue #1679.
There were plenty of leftovers from old code that were never removed
and that are not needed at all since these files do not use any
definition depending on fcntl.h, so let's drop them.
Some older systems may routinely return EWOULDBLOCK for some syscalls
while we tend to check only for EAGAIN nowadays. Modern systems define
EWOULDBLOCK as EAGAIN so that solves it, but on a few older ones (AIX,
VMS etc) both are different, and for portability we'd need to test for
both, otherwise we risk confusing some status codes with plain errors.
There were only a few entries; the most annoying ones are the switch/case
statements, because the EWOULDBLOCK label must only be added when it
differs from EAGAIN, but the other ones are really trivial.
Transport layers (raw_sock, ssl_sock, xprt_handshake and xprt_quic)
were using 4 constructors and 2 destructors. The 4 constructors were
replaced with INITCALL and the destructors with REGISTER_POST_DEINIT()
so that we do not depend on this anymore.
Starting from OpenSSLv3, providers are at the core of cryptography
functions. Depending on the provider used, the way the SSL
functionality works could change. This new 'show ssl providers' CLI
command shows which providers were loaded by the SSL library.
This is required because the provider configuration is exclusively done
in the OpenSSL configuration file (/usr/local/ssl/openssl.cnf for
instance).
A new line is also added to the 'haproxy -vv' output containing the same
information.
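As an illustration of what such a command has to do (a hedged sketch, not
the actual implementation), the loaded providers can be enumerated with
the OpenSSL 3.0 provider API:

#include <stdio.h>
#include <openssl/provider.h>

/* callback invoked once per loaded provider; returning 1 continues
 * the enumeration */
static int dump_provider(OSSL_PROVIDER *prov, void *cbdata)
{
    (void)cbdata;
    printf("    - %s\n", OSSL_PROVIDER_get0_name(prov));
    return 1;
}

int main(void)
{
    printf("Loaded providers :\n");
    OSSL_PROVIDER_do_all(NULL, dump_provider, NULL);
    return 0;
}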
The DH parameters used for OpenSSL versions 1.1.1 and earlier were
changed. For OpenSSL 1.0.2 and LibreSSL the newly introduced
ssl_get_dh_by_nid function is not used since we keep the original
parameters.
DHE ciphers do not present a security risk if the key is big enough but
they are slow and mostly obsoleted by ECDHE. This patch removes any
default DH parameters. This will effectively disable all DHE ciphers
unless a global ssl-dh-param-file is defined, or
tune.ssl.default-dh-param is set, or a frontend has DH parameters
included in its PEM certificate. In this latter case, only the frontends
that have DH parameters will have DHE ciphers enabled.
Explicitly adding DHE ciphers on a "bind" line will not be enough to
actually enable DHE. We would still need to know which DH parameters to
use, so one of the three conditions described above must be met.
This request was described in GitHub issue #1604.
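For illustration, either of the first two conditions could be met with a
global section like this (the file path and size are examples only):

global
    # point to a file containing DH parameters...
    ssl-dh-param-file /etc/haproxy/dhparams.pem
    # ...or fall back to built-in parameters of a given size
    # tune.ssl.default-dh-param 2048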
RFC7919 defined sets of DH parameters supposedly strong enough to be
used safely. We will then use them (namely the ffdhe2048 and ffdhe4096
named groups) when we can, instead of our hard-coded ones.
The ffdhe2048 and ffdhe4096 named groups were integrated in OpenSSL
starting with version 1.1.1. Instead of duplicating those parameters in
haproxy for older versions of OpenSSL, we will keep using our own
parameters when they are not provided by the SSL library.
We will also need to keep our 1024-bit DH parameters: they are considered
not safe enough to have a dedicated named group in RFC7919, but we must
still keep them for backward compatibility with old Java clients.
This request was described in GitHub issue #1604.
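A hedged sketch, in the spirit of the ssl_get_dh_by_nid helper mentioned
above, of obtaining an RFC7919 group by NID on OpenSSL >= 1.1.1 (the
key-size threshold is an example):

#include <openssl/dh.h>
#include <openssl/obj_mac.h>

static DH *get_rfc7919_dh(int keylen_bits)
{
    /* pick the named group matching the private key size */
    if (keylen_bits > 2048)
        return DH_new_by_nid(NID_ffdhe4096);
    return DH_new_by_nid(NID_ffdhe2048);
}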
Remaining flags and associated functions are moved into the conn-stream
scope. These flags are added on the endpoint and not on the conn-stream
itself. This way it will be possible to get them from the mux or the
applet. The functions to get or set these flags are renamed accordingly with
the "cs_" prefix and updated to manipulate a conn-stream instead of a
stream-interface.
At many places, we now use the new CS functions to get a stream or a channel
from a conn-stream instead of using the stream-interface API. It is the
first step to reduce the scope of the stream-interfaces. The main change
here is about the applet I/O callback functions. Before the refactoring, the
stream-interface was the appctx owner. Thus, it was heavily used. Now, as
far as possible, the conn-stream is used. Of course, many calls to the
stream-interface API remain.
Some CS flags, only related to the endpoint, are moved into the endpoint
struct. More will probably be moved later. Those ones are not critical, so it
is pretty safe to move them now and this will ease next changes.
First gcc, and now coverity, report possible null derefs in situations
where we know these cannot happen since we call the functions in
contexts that guarantee the existence of the connection and the method
used. Let's introduce an unchecked version of the function for such
cases, just like we had to do with objt_*. This allows us to remove the
ALREADY_CHECKED() statements (which coverity doesn't see), and addresses
github issues #1643, #1644, #1647.
Some compilers see a possible null deref after conn_get_ssl_sock_ctx()
in ssl_sock_parse_heartbeat, which cannot happen there, so let's mark
it as safe. No backport needed.
The SSL functions must not use conn->xprt_ctx anymore but find the context
by calling conn_get_ssl_sock_ctx(), which will properly pass through the
transport layers to retrieve the desired information. Otherwise when the
functions are called on a QUIC connection, they refuse to work for not
being called on the proper transport.
Historically there was a single way to have an SSL transport on a
connection, so detecting if the transport layer was SSL and a context
was present was sufficient to detect SSL. With QUIC, things have changed
because QUIC also relies on SSL, but the context is embedded inside the
quic_conn and the transport layer doesn't match expectations outside,
making it difficult to detect that SSL is in use over the connection.
The approach taken here to improve this consists in adding a new method
at the transport layer, get_ssl_sock_ctx(), to retrieve this often needed
ssl_sock_ctx, and to use this to detect the presence of SSL. This will
even allow some simplifications and cleanups to be made in the SSL code
itself, and QUIC will be able to provide one to export its ssl_sock_ctx.
Certain functions cannot be called on an FD-less conn because they are
normally called as part of the protocol-specific setup/teardown sequence.
Better place a few BUG_ON() to make sure none of them is called in other
situations. If any of them would trigger in ambiguous conditions, it would
always be possible to replace it with an error.
The OpenSSL engine API is deprecated starting with OpenSSL 3.0.
In order to have a clean build this feature is now disabled by default.
It can be reactivated with USE_ENGINE=1 on the build line.
ASAN complains about the SNI expression not being freed upon a haproxy
-c. Indeed the httpclient is now initialized with an SNI expression and
this one is never freed in the server release code.
Must be backported to 2.5 and could be backported to every stable
version.
frontend and backend conn-streams are now directly accessible from the
stream. This way, and with some other changes, it will be possible to remove
the stream-interfaces from the stream structure.
Because appctx is now an endpoint of the conn-stream, there is no reason to
still have the stream-interface as appctx owner. Thus, the conn-stream is
now the appctx owner.
Thanks to previous changes, it is now possible to set an appctx as endpoint
for a conn-stream. This means the appctx is no longer linked to the
stream-interface but to the conn-stream. Thus, a pointer to the conn-stream
is explicitly stored in the stream-interface. The endpoint (connection or
appctx) can be retrieved via the conn-stream.
When calling ssl_ocsp_response_print which is used to display an OCSP
response's details when calling the "show ssl ocsp-response" on the CLI,
we use the BIO_read function that copies an OpenSSL BIO into a trash buffer.
The return value was not checked though, which could lead to some
crashes since BIO_read can return a negative value in case of error.
This patch should be backported to 2.5.
When calling the "show ssl ocsp-response" CLI command some OpenSSL
objects need to be created in order to get some information related to
the OCSP response and some of them were not freed.
It should be backported to 2.5.
The b_istput function called to append the last data block to the end of
an OCSP response's detailed output was not checked in
ssl_ocsp_response_print. The ssl_ocsp_response_print return value checks
were added as well since some of them were missing.
This error was raised by Coverity (CID 1469513).
This patch fixes GitHub issue #1541.
It can be backported to 2.5.
The SSL_CTX_set_tmp_dh_callback function was marked as deprecated in
OpenSSLv3, so this patch replaces this callback mechanism by directly
setting the DH parameters during init.
DH structure is a low-level one that should not be used anymore with
OpenSSLv3. All functions working on DH were marked as deprecated and
this patch replaces the ones we used with new APIs recommended in
OpenSSLv3, be it in the migration guide or the multiple new manpages
they created.
This patch replaces all mentions of the DH type by the HASSL_DH one,
which will be replaced by EVP_PKEY with OpenSSLv3 and will remain DH on
older versions. It also uses all the newly created helper functions that
enable for instance to load DH parameters from a file into an EVP_PKEY,
or to set DH parameters into an SSL_CTX for use in a DHE negotiation.
The following deprecated functions will effectively disappear when
building with OpenSSLv3 : DH_set0_pqg, PEM_read_bio_DHparams, DH_new,
DH_free, DH_up_ref, SSL_CTX_set_tmp_dh.
Starting from OpenSSLv3, we won't rely on the
SSL_CTX_set_tmp_dh_callback mechanism so we will need to know the DH
size we want to use during init. In order for the default DH param size
to be used when no RSA or DSA private key can be found for a given bind
line, we will need to know the default size we want to use (which was
not possible the way the code was built, since the global default DH
size was set too late).
The current way the local DH structures are built relies on the fact
that the ssl_get_tmp_dh function would only be called as a callback
during a DHE negotiation, so after all the SSL contexts are built and
the init is over. With OpenSSLv3, this function will now be called
during init, so before those objects are actually built.
This patch ensures that when calling ssl_get_tmp_dh and trying to use
one of our hard-coded DH parameters, it will be created if it did not
exist yet.
The current DH parameter creation is also kept so that with versions
before OpenSSLv3 we don't end up creating this DH object during a
handshake.
Starting from OpenSSLv3, the DH_set0_pqg function is deprecated and the
use of DH objects directly is advised against so this new helper
function will be used to convert our hard-coded DH parameters into an
EVP_PKEY. It relies on the new OSSL_PARAM mechanism, as described in the
EVP_PKEY-DH manpage.
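As an illustration of what such a conversion looks like through the
OSSL_PARAM mechanism (a hedged sketch, not the actual helper; most error
handling is trimmed):

#include <openssl/bn.h>
#include <openssl/evp.h>
#include <openssl/core_names.h>
#include <openssl/param_build.h>
#include <openssl/params.h>

static EVP_PKEY *dh_params_to_pkey(const BIGNUM *p, const BIGNUM *g)
{
    EVP_PKEY *pkey = NULL;
    EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_from_name(NULL, "DH", NULL);
    OSSL_PARAM_BLD *bld = OSSL_PARAM_BLD_new();
    OSSL_PARAM *params = NULL;

    if (!ctx || !bld)
        goto end;

    /* describe the hard-coded prime and generator as OSSL_PARAMs */
    OSSL_PARAM_BLD_push_BN(bld, OSSL_PKEY_PARAM_FFC_P, p);
    OSSL_PARAM_BLD_push_BN(bld, OSSL_PKEY_PARAM_FFC_G, g);
    params = OSSL_PARAM_BLD_to_param(bld);

    /* build an EVP_PKEY holding only domain parameters */
    if (params && EVP_PKEY_fromdata_init(ctx) == 1)
        EVP_PKEY_fromdata(ctx, &pkey, EVP_PKEY_KEY_PARAMETERS, params);

end:
    OSSL_PARAM_free(params);
    OSSL_PARAM_BLD_free(bld);
    EVP_PKEY_CTX_free(ctx);
    return pkey;
}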
This helper function will only be used with OpenSSLv3. It simply sets in
an SSL_CTX a set of DH parameters of the same size as a certificate's
private key. This logic is the same as the one used with older versions,
it simply relies on new APIs.
If no pkey can be found, the SSL_CTX_set_dh_auto function will be called,
making the SSL_CTX rely on DH parameters provided by OpenSSL in case of
DHE negotiation.
Starting from OpenSSLv3, the SSL_CTX_set_tmp_dh function is deprecated
and it should be replaced by SSL_CTX_set0_tmp_dh_pkey, which takes an
EVP_PKEY instead of a DH parameter. Since this function is new to
OpenSSLv3 and its use requires an extra EVP_PKEY_up_ref call, we will
keep the two versions side by side, otherwise it would require to get
rid of all DH references in older OpenSSL versions as well.
This helper function is not used yet so this commit should be strictly
iso-functional, regardless of the OpenSSL version.
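A hedged sketch of what the dual-path setter described above can look
like, reusing the HASSL_DH alias mentioned earlier (EVP_PKEY with
OpenSSLv3, DH before); the function name is illustrative:

#include <openssl/ssl.h>
#include <openssl/evp.h>
#include <openssl/dh.h>

#if OPENSSL_VERSION_NUMBER >= 0x3000000fL
#define HASSL_DH EVP_PKEY
#else
#define HASSL_DH DH
#endif

static int ctx_set_tmp_dh(SSL_CTX *ctx, HASSL_DH *dh)
{
#if OPENSSL_VERSION_NUMBER >= 0x3000000fL
    /* SSL_CTX_set0_tmp_dh_pkey transfers ownership, so take an
     * extra reference first */
    if (!EVP_PKEY_up_ref(dh))
        return 0;
    if (!SSL_CTX_set0_tmp_dh_pkey(ctx, dh)) {
        EVP_PKEY_free(dh);
        return 0;
    }
    return 1;
#else
    return SSL_CTX_set_tmp_dh(ctx, dh) == 1;
#endif
}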
In the upcoming OpenSSLv3 specific patches, we will make use of the
newly created ssl_get_tmp_dh that returns an EVP_PKEY containing DH
parameters of the same size as a bind line's RSA or DSA private key.
The previously named ssl_get_tmp_dh function was renamed
ssl_get_tmp_dh_cbk because it is only used as a callback passed to
OpenSSL through SSL_CTX_set_tmp_dh_callback calls.
This new function makes use of the new OpenSSLv3 APIs that should be
used to load DH parameters from a file (or a BIO in this case) and that
should replace the deprecated PEM_read_bio_DHparams function.
Note that this function returns an EVP_PKEY when using OpenSSLv3 since
they now advise against using low level structures such as DH ones.
This helper function is not used yet so this commit should be strictly
iso-functional, regardless of the OpenSSL version.
ERR_func_error_string does not return anything anymore with OpenSSLv3,
it can be replaced by ERR_peek_error_func which did not exist on
previous versions.
HMAC_Init_ex being a function that acts on a low-level HMAC_CTX
structure was marked as deprecated in OpenSSLv3.
This patch replaces this call by EVP_MAC_CTX_set_params, as advised in
the migration_guide, and uses the new OSSL_PARAM mechanism to configure
the MAC context, as described in the EVP_MAC and EVP_MAC-HMAC manpages.
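For illustration, a hedged sketch (OpenSSL >= 3.0) of an HMAC context
configured through EVP_MAC_CTX_set_params and OSSL_PARAM; the digest name
is an example and error handling is reduced:

#include <stddef.h>
#include <openssl/evp.h>
#include <openssl/core_names.h>
#include <openssl/params.h>

static EVP_MAC_CTX *new_hmac_ctx(const unsigned char *key, size_t keylen)
{
    EVP_MAC *mac = EVP_MAC_fetch(NULL, "HMAC", NULL);
    EVP_MAC_CTX *ctx = mac ? EVP_MAC_CTX_new(mac) : NULL;
    OSSL_PARAM params[2];

    if (!ctx)
        goto err;

    /* select the underlying digest through an OSSL_PARAM list */
    params[0] = OSSL_PARAM_construct_utf8_string(OSSL_MAC_PARAM_DIGEST,
                                                 "SHA256", 0);
    params[1] = OSSL_PARAM_construct_end();

    if (!EVP_MAC_CTX_set_params(ctx, params) ||
        !EVP_MAC_init(ctx, key, keylen, NULL))
        goto err;

    EVP_MAC_free(mac); /* the context holds its own reference */
    return ctx;

err:
    EVP_MAC_CTX_free(ctx);
    EVP_MAC_free(mac);
    return NULL;
}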
SSL_CTX_set_tlsext_ticket_key_cb was deprecated in OpenSSLv3 because it
uses an HMAC_CTX pointer, which is deprecated as well. According to the
v3 manpage it should be replaced by SSL_CTX_set_tlsext_ticket_key_evp_cb,
which uses an EVP_MAC_CTX pointer.
This new callback was introduced in OpenSSLv3 so we need to keep the two
calls in the source base and to split the usage depending on the OpenSSL
version.
In the context of the 'generate-certificates' bind line option, if an
'ecdhe' option is present on the bind line as well, we use the
SSL_CTX_set_tmp_ecdh function which was marked as deprecated in
OpenSSLv3. As advised in the SSL_CTX_set_tmp_ecdh manpage, this function
should be replaced by the SSL_CTX_set1_groups one (or the
SSL_CTX_set1_curves one in our case which does the same but existed on
older OpenSSL versions as well).
The ECDHE behaviour when using the SSL_CTX_set1_curves function with
OpenSSL 1.0.2 is not the same as the one we have on newer versions.
Instead of looking for a code that would work exactly the same
regardless of the OpenSSL version, we will keep the original code on
1.0.2 and use newer APIs for other versions.
This patch should be strictly isofunctional.
The ecdhe option relies on the SSL_CTX_set_tmp_ecdh function which has
been marked as deprecated in OpenSSLv3. As advised in the
SSL_CTX_set_tmp_ecdh manpage, this function should be replaced by the
SSL_CTX_set1_groups one (or the SSL_CTX_set1_curves one in our case
which does the same but existed on older OpenSSL versions as well).
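A minimal illustration of the replacement (the function name is ours and
the curve name is an example): the deprecated SSL_CTX_set_tmp_ecdh(ctx,
ecdh) call becomes a curves list assignment:

#include <openssl/ssl.h>

static int ctx_set_ecdhe(SSL_CTX *ctx, const char *named_curve)
{
    /* e.g. named_curve = "P-256"; a colon-separated list also works */
    return SSL_CTX_set1_curves_list(ctx, named_curve) == 1;
}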
When using the "curves" option we have a different behaviour with
OpenSSL1.0.2 compared to later versions. On this early version an SSL
backend using a P-256 ECDSA certificate manages to connect to an SSL
frontend having a "curves P-384" option (when it fails with later
versions).
Even though the API used for versions later than OpenSSL 1.0.2 already
existed then, for some reason the behaviour is not the same on the older
version, which explains why the original code with the deprecated API is
kept for this version (otherwise we would risk breaking everything on a
version that might still be used by some people despite being pretty old).
This patch should be strictly isofunctional.
There were empty lines in the output of the CLI's "show ssl
ocsp-response <id>" command. The plain "show ssl ocsp-response" command
(without parameter) was already managed in commit
cc750efbc5. This patch adds an extra space
to those lines so that the only existing empty lines actually mark the
end of the output. This requires post-processing the buffer filled by
OpenSSL's OCSP_RESPONSE_print function (which produces the output of the
"openssl ocsp -respin <ocsp.pem>" command). This way the output of our
command still looks the same as openssl's one.
Must be backported in 2.5.
Do not proceed to a direct accept when creating a new quic_conn. Wait for
the QUIC handshake to succeed before inserting the quic_conn into the
accept queue. A tasklet is then woken up to call listener_accept to accept
the quic_conn.
The most important effect is that the connection/mux layers are not
instantiated at the same time as the quic_conn. This forces some
processing to be delayed to be sure that the mux is allocated:
* initialization of mux transport parameters
* installation of the app-ops
Also, the mux instance is no longer checked before waking up the quic_conn
tasklet. This is safe because the xprt-quic code is now ready to handle
the absence of the connection/mux layers.
Note that this commit has a deep impact as it changes significantly the
lower QUIC architecture. Most notably, it breaks the 0-RTT feature.
Fix a potential null pointer dereference. In fact, this case is not
possible; only a mistake in SSL ex-data initialization could cause it:
either the connection or the quic_conn is set, which allows the bind_conf
to be retrieved.
A BUG_ON was already present but it does not cover release builds.
Allow to register quic_conn as ex-data in SSL callbacks. A new index is
used to identify it as ssl_qc_app_data_index.
Replace the connection by the quic_conn as SSL ex-data when initializing
the QUIC SSL session. When using SSL callbacks in a QUIC context, the
connection is now NULL, so use the quic_conn instead to retrieve the
required parameters.
The same clean-up is conducted inside the QUIC SSL methods of xprt-quic:
connection instance usage is replaced by quic_conn.
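For illustration, a minimal sketch of this ex-data pattern (helper names
are ours, only the index name matches the one mentioned above): a
dedicated index is allocated once, the quic_conn is attached to the SSL
object, and the callbacks fetch it back:

#include <stddef.h>
#include <openssl/ssl.h>

static int ssl_qc_app_data_index = -1;

static void init_qc_index(void)
{
    /* allocate the ex-data slot once at startup */
    ssl_qc_app_data_index = SSL_get_ex_new_index(0, NULL, NULL, NULL, NULL);
}

static void attach_qc(SSL *ssl, void *qc)
{
    SSL_set_ex_data(ssl, ssl_qc_app_data_index, qc);
}

static void *get_qc(const SSL *ssl)
{
    /* typically called from an SSL callback */
    return SSL_get_ex_data(ssl, ssl_qc_app_data_index);
}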
If an error is raised during the ClientHello callback on the server side
(ssl_sock_switchctx_cbk), the servername callback won't be called and
the client's SNI will not be saved in the SSL context. But since we use
the SSL_get_servername function to return this SNI in the ssl_fc_sni
sample fetch, that means that in case of error, such as an SNI mismatch
with a frontend having the strict-sni option enabled, the sample fetch
would not work (making strict-sni related errors hard to debug).
This patch fixes that by storing the SNI as an ex_data in the SSL
context in case the ClientHello callback returns an error. This way the
sample fetch can fall back to getting the SNI this way. It will still
call the SSL_get_servername function first since it is the proper
way of getting a client's SNI when the handshake succeeded.
In order to avoid memory allocations at runtime in this heavily used
function, a new memory pool was created to store those client
SNIs. Its entry size is set to 256 bytes since SNIs can't be longer than
255 characters.
This fixes GitHub #1484.
It can be backported in 2.5.
This patch unlinks and frees the ckch instance linked to a server during
the free of this server.
This could have locked certificates in a "Used" state when removing
servers dynamically from the CLI, and could provoke a segfault once we
try to dynamically update the certificate after that.
This must be backported as far as 2.4.
A lot of frees are missing in ssl_sock_free_srv_ctx(), which could result
in memory leaks when dynamically removing a server via the CLI.
This must be backported to every branch, removing the fields that do not
exist in the previous branches.
This bug was introduced by d817dc73 ("MEDIUM: ssl: Load client
certificates in a ckch for backend servers") in which the creation of
the SSL_CTX for a server was moved to the configuration parser when
using a "crt" keyword instead of being done in ssl_sock_prepare_srv_ctx().
The patch 0498fa40 ("BUG/MINOR: ssl: Default-server configuration ignored by
server") made it worse by setting the same SSL_CTX for every servers
using a default-server. Resulting in any SSL option on a server applied
to every server in its backend.
This patch fixes the issue by reintroducing a string which stores the
path of the certificate inside the server structure, and loading the
certificate in ssl_sock_prepare_srv_ctx() again.
This is a quick fix to backport; a cleaner way can be achieved by always
creating the SSL_CTX in ssl_sock_prepare_srv_ctx() and properly splitting
the ssl_sock_load_srv_cert() function.
This patch fixes issue #1488.
Must be backported as far as 2.4.
Since this case was already met previously with commit 655dec81b
("BUG/MINOR: backend: do not set sni on connection reuse"), let's make
sure that we don't change reused connection settings. This could be
generalized to most settings that are only in effect before the handshake
in fact (like set_alpn and a few other ones).
There were empty lines in the output of the CLI's "show ssl
ocsp-response" command (after the certificate ID and between two
certificates). This patch removes them since an empty line should mark
the end of the output.
Must be backported in 2.5.
If the ClientHello callback does not manage to find a correct QUIC transport
parameters extension, we immediately close the connection with
missing_extension(109) as the TLS alert, which is turned into a 0x16d QUIC
connection error.
We set this TLS error when no application protocol could be negotiated
via the TLS callback concerned. It is converted to a QUIC CRYPTO_ERROR
(0x178).
SSL counters were added with commit d0447a7c3 ("MINOR: ssl: add counters
for ssl sessions") in 2.4, but their updates were not atomic, so it's
likely that under significant loads they are not correct.
This needs to be backported to 2.4.
If we want to run quic-tracker against haproxy, we must at least
support the draft version of the TLS extension for the QUIC transport
parameters (0xffa5). quic-tracker QUIC version is draft-29 at this time.
We select this depending on the QUIC version. If draft, we select the
draft TLS extension.
Since commit c2aae74 ("MEDIUM: ssl: Handle early data with OpenSSL
1.1.1"), the codepath of the clientHello callback changed, letting an
unknown SNI escape with a 'return 1' instead of passing through the
abort label.
An error was still emitted because the frontend continued the handshake
with the initial_ctx, which can't be used to achieve a handshake.
However, it had the ugly side effect of letting the request pass in the
case of a TLS resume, which could be surprising when combining strict-sni
with the removal of a crt-list entry over the CLI, for example (like it's
done in the ssl/new_del_ssl_crlfile.vtc reg-test).
This patch switches the code path of the allow_early and abort label, so
the default code path is the abort one, letting the clientHello return
the correct SSL_AD_UNRECOGNIZED_NAME in case of errors.
Which means the client will now receive:
OpenSSL error[0x14094458] ssl3_read_bytes: tlsv1 unrecognized name
Instead of:
OpenSSL error[0x14094410] ssl3_read_bytes: sslv3 alert handshake failure
Which was the error emitted before HAProxy 1.8.
This patch must be carefully backported as far as 1.8 once we have
validated its impact.
When establishing an outbound connection, haproxy checks if the cached
TLS session has the same SNI as the connection we are trying to
resume.
This test was done by calling SSL_get_servername() which in TLSv1.2
returned the SNI. With TLSv1.3 this is not the case anymore and this
function returns NULL, which invalidates any outbound connection we are
trying to resume if it uses the sni keyword on its server line.
This patch fixes the problem by storing the SNI in the "reused_sess"
structure beside the session itself.
The ssl_sock_set_servername() now has a RWLOCK because this session
cache entry could be accessed by the CLI when trying to update a
certificate on the backend.
This fix must be backported in every maintained version, however the
RWLOCK only exists since version 2.4.
The else is not for boringSSL but for the lack of Client Hello callback.
Should have been changed in 1fc44d4 ("BUILD: ssl: guard Client Hello
callbacks with HAVE_SSL_CLIENT_HELLO_CB macro instead of openssl
version").
Could be backported in 2.4.
Remove the hardcoded initialization of the h3 layer on mux init. Now the
ALPN is looked up just after the SSL handshake. The app layer is then
installed if the ALPN negotiation returned a supported protocol.
This required adding a get_alpn callback on the ssl_quic layer, which is
just a call to ssl_sock_get_alpn() from ssl_sock. This is mandatory to be able
to use conn_get_alpn().
These two counters were the only ones not in the global struct, while
the SSL freq counters or the req counts already are; this forced stats.c
to include ssl_sock just to know about them. Let's move them
over there with their friends. This reduces from 408 to 384 the number
of includes of opensslconf.h.
This one has nothing to do with ssl_sock as it manipulates the struct
server only. Let's move it to server.c and remove unneeded dependencies
on ssl_sock.h. This further reduces by 10% the number of includes of
opensslconf.h and by 0.5% the number of compiled lines.
This one doesn't use anything from an SSL context, it only checks the
type of the transport layer of a connection, thus it belongs to
connection.h. This is particularly visible due to all the ifdefs
around it in various call places.
In case of an error while calling SSL_read or SSL_write, the
SSL_get_error function is called in order to know more about the error
that happened. If the error code is SSL_ERROR_SSL or SSL_ERROR_SYSCALL,
the error queue might contain more information on the error. This error
code was not used until now. But we now need to store it in order for
backend error fetches to catch all handshake related errors.
The change was required because the previous backend fetch would not
have raised anything if the client's certificate was rejected by the
server (and the connection interrupted). This happens because starting
from TLS1.3, the 'Finished' state on the client is reached before its
certificate is sent to the server (see the "Protocol Overview" part of
RFC 8446). The only place where we can detect that the server rejected the
certificate is after the first SSL_read call after the SSL_do_handshake
function.
This patch then adds an extra ERR_peek_error after the SSL_read and
SSL_write calls in ssl_sock_to_buf and ssl_sock_from_buf. This means
that it could set an error code in the SSL context a long time after the
handshake is over, hence the change in the error fetches.
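A hedged sketch of this inspection (not the actual ssl_sock code): after a
failed SSL_read, SSL_get_error tells whether the error queue may contain
details, and ERR_peek_error exposes them without consuming the queue:

#include <openssl/ssl.h>
#include <openssl/err.h>

static unsigned long read_and_peek_error(SSL *ssl, void *buf, int len)
{
    int ret = SSL_read(ssl, buf, len);

    if (ret <= 0) {
        int code = SSL_get_error(ssl, ret);

        /* only these two codes leave details in the error queue */
        if (code == SSL_ERROR_SSL || code == SSL_ERROR_SYSCALL)
            return ERR_peek_error(); /* 0 if the queue is empty */
    }
    return 0;
}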
The ssl_bc_hsk_err sample fetch will need to raise more errors than only
handshake related ones hence its renaming to a more generic ssl_bc_err.
This patch is required because some handshake failures that should have
been caught by this fetch (verify error on the server side for instance)
were missed. This is caused by a change in TLS1.3 in which the
'Finished' state on the client is reached before its certificate is sent
(and verified) on the server side (see the "Protocol Overview" part of
RFC 8446).
This means that the SSL_do_handshake call is finished long before the
server can verify and potentially reject the client certificate.
The ssl_bc_hsk_err will then need to be expanded to catch other types of
errors.
This change is also applied to the frontend fetches (ssl_fc_hsk_err
becomes ssl_fc_err) and to their string counterparts.
In case of a connection error happening after the SSL handshake is
completed, the error code stored in the connection structure would not
always be set, hence having some connection failures being described as
successful in the fc_conn_err or bc_conn_err sample fetches.
The most common case in which it could happen is when the SSL server
rejects the client's certificate. The SSL_do_handshake call on the
client side would be successful because the client effectively sent its
client hello and certificate information to the server, but the next
call to SSL_read on the client side would raise an SSL_ERROR_SSL code
(through the SSL_get_error function) which is described in OpenSSL
documentation as a non-recoverable and fatal SSL error.
This patch ensures that in such a case, the connection's error code is
set to a special CO_ERR_SSL_FATAL value.
Set the streams' transport parameters which could not be initialized
because they were not available during initialization. Indeed, the
streams' transport parameters are provided by the peer during the handshake.
When we set tune.ssl.capture-cipherlist-size to a non-zero value
we are able to capture the cipher list supported by the client. To be able
to provide JA3-compatible TLS fingerprinting we need to capture more
information from the Client Hello message:
- SSL Version
- SSL Extensions
- Elliptic Curves
- Elliptic Curve Point Formats
This patch allows HAProxy to capture such information and store it for
later use.
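For illustration, a hedged sketch of such a capture using the OpenSSL
client hello API (OpenSSL >= 1.1.1), meant to be called from a ClientHello
callback; the structure and function names are ours and error handling is
trimmed:

#include <stddef.h>
#include <string.h>
#include <openssl/ssl.h>

struct hello_capture {
    unsigned int version;
    const unsigned char *ciphers;
    size_t ciphers_len;
    const unsigned char *groups;        /* elliptic curves */
    size_t groups_len;
    const unsigned char *formats;       /* EC point formats */
    size_t formats_len;
};

static void capture_hello_info(SSL *ssl, struct hello_capture *cap)
{
    memset(cap, 0, sizeof(*cap));

    cap->version = SSL_client_hello_get0_legacy_version(ssl);
    cap->ciphers_len = SSL_client_hello_get0_ciphers(ssl, &cap->ciphers);
    SSL_client_hello_get0_ext(ssl, TLSEXT_TYPE_supported_groups,
                              &cap->groups, &cap->groups_len);
    SSL_client_hello_get0_ext(ssl, TLSEXT_TYPE_ec_point_formats,
                              &cap->formats, &cap->formats_len);
    /* the list of extension types can also be retrieved with
     * SSL_client_hello_get1_extensions_present() (to be released
     * with OPENSSL_free once copied) */
}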
The X509_STORE_CTX_get0_cert did not exist yet on OpenSSL 1.0.2 and
neither did X509_STORE_CTX_get0_chain, which was not actually needed
since its get1 equivalent already existed.
Most of the SSL sample fetches related to the client certificate were
based on the SSL_get_peer_certificate function which returns NULL when
the verification process failed. This made it impossible to use those
fetches in a log format since they would always be empty.
The patch adds a reference to the X509 object representing the client
certificate in the SSL structure and makes use of this reference in the
fetches.
The reference can only be obtained in ssl_sock_bind_verifycbk which
means that in case of an SSL error occurring before the verification
process ("no shared cipher" for instance, which happens while processing
the Client Hello), we won't ever start the verification process and it
will be impossible to get information about the client certificate.
This patch also allows most of the ssl_c_XXX fetches to return a usable
value in case of connection failure (because of a verification error for
instance) by making the "conn->flags & CO_FL_WAIT_XPRT" test (which
requires a connection to be established) less strict.
Thanks to this patch, a log-format such as the following should return
usable information in case of an error occurring during the verification
process :
log-format "DN=%{+Q}[ssl_c_s_dn] serial=%[ssl_c_serial,hex] \
hash=%[ssl_c_sha1,hex]"
It should answer to GitHub issue #693.
This new sample fetch, along with the ssl_fc_hsk_err_str fetch, contains
the last SSL error of the error stack that occurred during the SSL
handshake (from the frontend's perspective). The errors happening during
the client's certificate verification will still be given by the
ssl_c_err and ssl_c_ca_err fetches. This new fetch will only hold errors
retrieved by the OpenSSL ERR_get_error function.
Use non-checked function to retrieve listener/server via obj_type. This
is done as a previous obj_type function ensures that the type is well
known and the instance is not NULL.
Incidentally, this should prevent the coverity report from the #1335
github issue which warns about a possible NULL dereference.
The function ssl_sock_load_srv_cert will be used at runtime for dynamic
servers. If the cert is not loaded in the ckch tree, we try to access it
from the file-system.
Now this access operation is rendered optional by a new function
argument. It is only allowed at parsing time, but will be disabled for
dynamic servers at runtime.
Explicitly call ssl_initialize_random to initialize the random generator
in init() global function. If the initialization fails, the startup is
interrupted.
This commit is in preparation for support of ssl on dynamic servers. To
be able to activate ssl on dynamic servers, it is necessary to ensure
that the random generator is initialized on startup regardless of the
config. It cannot be called at runtime as access to /dev/urandom is
required.
This also has the effect of fixing the previous inconsistent behavior.
Indeed, if a bind or server in the config was using ssl, the
initialization function was called, and if it failed, the startup was
interrupted. Otherwise, the ssl initialization code could have been
called through the ssl server for lua, but this time without blocking
the startup on error, or not called at all if lua was deactivated.
The global shctx lookups and misses were updated without using atomic
ops, so the stats available in "show info" are very likely off by a few
units over time. This should be backported as far as 1.8. Versions
without _HA_ATOMIC_INC() can use HA_ATOMIC_ADD(,1).
The ifdefs surrounding the "show ssl ocsp-response" functionality that
were supposed to disable the code with BoringSSL were built the wrong
way.
It does not need to be backported.
This patch adds the "show ssl ocsp-response [<id>]" CLI command. This
command can be used to display the IDs of the OCSP tree entries along
with details about the entries' certificate ID (issuer's name and key
hash + serial number), or to display the details of a single
ocsp-response if an ID is given. The details displayed in this latter
case are the ones shown by a "openssl ocsp -respin <ocsp-response>
-text" call.
The OCSP tree entry key is a serialized version of the OCSP_CERTID of
the entry which is stored in a buffer that can be at most 128 bytes.
Depending on the length of the serial number, the actual non-zero part
of the key can be smaller than 128 bytes and this new structure member
allows knowing how many of the bytes are filled. It will be useful when
dumping the key (in a "show ssl cert <cert>" output for instance).
The wey the "Next Update" field of the OCSP response is converted into a
timestamp relies on the use of signed integers for the year and month so
if the calculated timestamp happens to overflow INT_MAX, it ends up
being seen as negative and the OCSP response being ignored in
ssl_sock_ocsp_stapling_cbk (because of the "ocsp->expire < now.tv_sec"
test).
It could be backported to all stable branches.
Since commit 04a5a44 ("BUILD: ssl: use HAVE_OPENSSL_KEYLOG instead of
OpenSSL versions") the "tune.ssl.keylog" feature is broken because
HAVE_OPENSSL_KEYLOG does not exist.
Replace this by a HAVE_SSL_KEYLOG which is defined in openssl-compat.h.
Also add an error when not built with the right openssl version.
Must be backported as far as 2.3.
Initialize the parsing context when checking server config validity.
Adjust the log messages to remove redundant config file/line and server
name. Do a similar cleaning in prepare_srv from ssl_sock as this
function is called at the same stage.
This will standardize the stderr output on startup with the parse_server
function.
Set "config :" as a prefix for the user messages context before starting
the configuration parsing. All following stderr output will be prefixed
by it.
As a consequence, remove extraneous prefix "config" already specified in
various ha_alert/warning/notice calls.
Some changes in the OpenSSL API broke this syntax:
#if SSL_OP_NO_TLSv1_3
OpenSSL made this change which broke our usage in commit f04bb0bce490de847ed0482b8ec9eabedd173852:
-# define SSL_OP_NO_TLSv1_3 (uint64_t)0x20000000
+#define SSL_OP_BIT(n) ((uint64_t)1 << (uint64_t)n)
+# define SSL_OP_NO_TLSv1_3 SSL_OP_BIT(29)
Which can't be evaluated by the preprocessor anymore.
This patch replaces the test by an OpenSSL version test.
This fixes part of issue #1276.
A memory allocation failure happening during ssl_init_single_engine
would have resulted in a crash. This function is only called during
init.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
The CA/CRL hot update patches did not compile on some targets of the CI
(mainly gcc + ssl). This patch should fix almost all of them. It adds
missing variable initializations and return value checks to the
BIO_reset calls in show_crl_detail.
In order for the link between the cafile_entry and the default ckch
instance to be built, we need to give a pointer to the instance during
the ssl_sock_prepare_ctx call.
Each ca-file entry of the tree will now hold a list of the ckch
instances that use it so that we can iterate over them when updating the
ca-file via a cli command. Since the link between the SSL contexts and
the CA file tree entries is only built during the ssl_sock_prepare_ctx
function, which is called after all the ckch instances are created, we
need to add a little post processing after each ssl_sock_prepare_ctx
that builds the link between the corresponding ckch instance and CA file
tree entries.
In order to manage the ca-file and ca-verify-file options, any ckch
instance can be linked to multiple CA file tree entries and any CA file
entry can link multiple ckch instances. This is done thanks to a
dedicated list of ckch_inst references stored in the CA file tree
entries over which we can iterate (during an update for instance). We
avoid having one of those instances go stale by keeping a list of
references to those references in the instances.
When deleting a ckch_inst, we can then remove all the ckch_inst_link
instances that reference it, and when deleting a cafile_entry, we
iterate over the list of ckch_inst reference and clear the corresponding
entry in their own list of ckch_inst_link references.
This patch moves all the ssl_store related code to ssl_ckch.c since it
will mostly be used there once the CA file update CLI commands are all
implemented. It also makes the cafile_entry structure visible as well as
the cafile_tree.
This function is one of the few high-profile, unresolved ones in the memory
profile output; let's have it resolved to ease the matching of SSL
allocations, which are not easy to follow.
There were 102 CLI commands whose help were zig-zagging all along the dump
making them unreadable. This patch realigns all these messages so that the
command now uses up to 40 characters before the delimiting colon. About a
third of the commands did not correctly list their arguments which were
added after the first version, so they were all updated. Some abuses of
the term "id" were fixed to use a more explanatory term. The
"set ssl ocsp-response" command was not listed because it lacked a help
message, this was fixed as well. The deprecated enable/disable commands
for agent/health/server were prominently written as deprecated. Whenever
possible, clearer explanations were provided.
The current "ADD" vs "ADDQ" is confusing because when thinking in terms
of appending at the end of a list, "ADD" naturally comes to mind, but
here it does the opposite, it inserts. Several times already it's been
incorrectly used where ADDQ was expected, the latest of which was a
fortunate accident explained in 6fa922562 ("CLEANUP: stream: explain
why we queue the stream at the head of the server list").
Let's use more explicit (but slightly longer) names now:
LIST_ADD -> LIST_INSERT
LIST_ADDQ -> LIST_APPEND
LIST_ADDED -> LIST_INLIST
LIST_DEL -> LIST_DELETE
The same is true for MT_LISTs, including their "TRY" variant.
LIST_DEL_INIT keeps its short name to encourage to use it instead of the
lazier LIST_DELETE which is often less safe.
The change is large (~674 non-comment entries) but is mechanical enough
to remain safe. No permutation was performed, so any out-of-tree code
can easily map older names to new ones.
The list doc was updated.
This patch replaces roughly all occurrences of an HA_ATOMIC_ADD(&foo, 1)
or HA_ATOMIC_SUB(&foo, 1) with the equivalent HA_ATOMIC_INC(&foo) and
HA_ATOMIC_DEC(&foo) respectively. These are 507 changes over 45 files.
Currently our atomic ops return a value but it's never known whether
the fetch is done before or after the operation, which causes some
confusion each time the value is desired. Let's create an explicit
variant of these operations suffixed with _FETCH to explicitly mention
that the fetch occurs after the operation, and make use of it at the
few call places.
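As an illustration of the convention (macro names below are placeholders
on top of the GCC/Clang builtins the real macros typically map to), the
_FETCH variants return the value observed after the operation:

/* returns the new value, i.e. the fetch happens after the operation */
#define X_ATOMIC_ADD_FETCH(ptr, v) __atomic_add_fetch((ptr), (v), __ATOMIC_SEQ_CST)
#define X_ATOMIC_SUB_FETCH(ptr, v) __atomic_sub_fetch((ptr), (v), __ATOMIC_SEQ_CST)
/* e.g. the value after a decrement can be tested directly:
 *   if (X_ATOMIC_SUB_FETCH(&refcount, 1) == 0)
 *       release(obj);
 */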
The default SSL_CTX used by a specific frontend is the one of the first
ckch instance created for this frontend. If this instance has SNIs, then
the SSL context is linked to the instance through the list of SNIs
contained in it. If the instance does not have any SNIs though, then the
SSL_CTX is only referenced by the bind_conf structure and the instance
itself has no link to it.
When trying to update a certificate used by the default instance through
a cli command, a new version of the default instance was rebuilt but the
default SSL context referenced in the bind_conf structure would not be
changed, resulting in a buggy behavior in which, depending on the SNI
used, the client could be served either the new version of the updated
certificate or the original one.
This patch adds a reference to the default SSL context in the default
ckch instances so that it can be hot swapped during a certificate
update.
This should fix GitHub issue #1143.
It can be backported as far as 2.2.
If an unknown CA file was first mentioned in an "add ssl crt-list" CLI
command, it would result in a call to X509_STORE_load_locations which
performs a disk access which is forbidden during runtime. The same would
happen if a "ca-verify-file" or "crl-file" was specified. This was due
to the fact that the crt-list file parsing and the crt-list related CLI
commands parsing use the same functions.
The patch simply adds a new parameter to all the ssl_bind parsing
functions so that they know if the call is made during init or by the
CLI, and the ssl_store_load_locations function can then reject any new
cafile_entry creation coming from a CLI call.
It can be backported as far as 2.2.
pool_alloc_dirty() is the version below pool_alloc() that never performs
the memory poisoning. It should only be called directly for very large
unstructured areas for which enabling memory poisoning would not bring
anything but could significantly hurt performance (e.g. buffers). Using
this function here will not provide any benefit and will hurt the ability
to debug.
It would be desirable to backport this, although it does not cause any
user-visible bug, it just complicates debugging.
Add a start() method to ssl_sock. It is responsible for initiating the
SSL handshake, currently by just scheduling the tasklet, instead of doing
it in the init() method, when all the XPRT may not have been initialized.
Introduce a new XPRT method, start(). The init() method will now only
initialize whatever is needed for the XPRT to run, but any action the XPRT
has to do before being ready, such as handshakes, will be done in the new
start() method. That way, we will be sure the full stack of xprt will be
initialized before attempting to do anything.
The init() call is also moved to conn_prepare(). There's no longer any reason
to wait for the ctrl to be ready, any action will be deferred until start(),
anyway. This means conn_xprt_init() is no longer needed.
When tasklets were derived from tasks, there was no immediate need for
the scheduler to know their status after execution, and in a spirit of
simplicity they just started to always return NULL. The problem is that
it simply prevents the scheduler from 1) accounting their execution time,
and 2) keeping track of their current execution status. Indeed, a remote
wake-up could very well end up manipulating a tasklet that's currently
being executed. And this is the reason why those handlers have to take
the idle lock before checking their context.
In 2.5 we'll take care of making tasklets and tasks work more similarly,
but trouble is to be expected if we continue to propagate the trend of
returning NULL everywhere, especially if some fixes relying on a stricter
model later need to be backported. For this reason this patch updates all
known tasklet handlers to make them return NULL only when the tasklet was
freed. It has no effect for now and isn't even guaranteed to always be
100% safe but it puts the code into the right direction for this.
Emeric found that SSL+keepalive traffic had dropped quite a bit in the
recent changes, which could be bisected to recent commit 9205ab31d
("MINOR: ssl: mark the SSL handshake tasklet as heavy"). Indeed, a
first incarnation of this commit made use of the TASK_SELF_WAKING
flag but the last version directly used TASK_HEAVY, but it would still
continue to remove the already absent TASK_SELF_WAKING one instead of
TASK_HEAVY. As such, the SSL traffic remained processed with low
granularity.
No backport is needed as this is only 2.4.
There are multiple per-thread lists in the listeners, which isn't the
most efficient in terms of cache, and doesn't easily allow to store all
the per-thread stuff.
Now we introduce an srv_per_thread structure which the servers will have an
array of, and place the idle/safe/avail conns tree heads into. Overall this
was a fairly mechanical change, and the array is now always initialized for
all servers since we'll put more stuff there. It's worth noting that the Lua
code still has to deal with its own deinit by itself despite being in a
global list, because its server is not dynamically allocated.
Currently the SSL layer checks the validity of its tasklet's context just
in case it would have been stolen, had the connection been idle. Now it
will be able to be notified by the mux when this situation happens so as
not to have to grab the idle connection lock on each pass. This reuses the
TASK_F_USR1 flag just as the muxes do.
It's been too short for quite a while now and is now full. It's still
time to extend it to 32-bits since we have room for this without
wasting any space, so we now gained 16 new bits for future flags.
The values were not reassigned just in case there would be a few
hidden u16 or short somewhere in which these flags are placed (as
it used to be the case with stream->pending_events).
The patch is tagged MEDIUM because this required to update the task's
process() prototype to use an int instead of a short, that's quite a
bunch of places.
Errors reported by ssl_sock_dump_errors() to stderr would only report the
16 lower bits of the file descriptor because it used to be cast to ushort.
This can be backported to all versions but has really no importance in
practice since this is never seen.
There was a free(ptr) followed by ptr = malloc(len), which is the
equivalent of ptr = realloc(ptr, len) but slower and less clean. Let's
replace this.
In ssl_sock_free_srv_ctx() there are some calls to free() which are not
followed by a zeroing of the pointers. For now this function is only used
during deinit but it could be used at run time in the near future, so
better secure this.
This makes the code more readable and less prone to copy-paste errors.
In addition, it allows to place some __builtin_constant_p() predicates
to trigger a link-time error in case the compiler knows that the freed
area is constant. It will also produce compile-time error if trying to
free something that is not a regular pointer (e.g. a function).
The DEBUG_MEM_STATS macro now also defines an instance for ha_free()
so that all these calls can be checked.
178 occurrences were converted. The vast majority of them were handled
by the following Coccinelle script, some slightly refined to better deal
with "&*x" or with long lines:
@ rule @
expression E;
@@
- free(E);
- E = NULL;
+ ha_free(&E);
It was verified that the resulting code is the same, more or less a
handful of cases where the compiler optimized slightly differently
the temporary variable that holds the copy of the pointer.
A non-negligible amount of {free(str);str=NULL;str_len=0;} are still
present in the config part (mostly header names in proxies). These
ones should also be cleaned for the same reasons, and probably be
turned into ist strings.
There's a fairness issue between SSL and clear text. A full end-to-end
cleartext connection can require up to ~7.7 wakeups on average, plus 3.3
for the SSL tasklet, one of which is particularly expensive. So if we
accept to process many handshakes taking 1ms each, we significantly
increase the processing time of regular tasks just by adding an extra
delay between their calls. Ideally in order to be fair we should have a
1:18 call ratio, but this requires a bit more accounting. With very little
effort we can mark the SSL handshake tasklet as TASK_HEAVY until the
handshake completes, and remove it once done.
Doing so reduces from 14 to 3.0 ms the total response time experienced
by HTTP clients running in parallel to 1000 SSL clients doing full
handshakes in loops. Better, when tune.sched.low-latency is set to "on",
the latency further drops to 1.8 ms.
The tasks latency distribution explains pretty well what is happening:
Without the patch:
$ socat - /tmp/sock1 <<< "show profiling"
Per-task CPU profiling : on # set profiling tasks {on|auto|off}
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
ssl_sock_io_cb 2785375 19.35m 416.9us 5.401h 6.980ms
h1_io_cb 1868949 9.853s 5.271us 4.829h 9.302ms
process_stream 1864066 7.582s 4.067us 2.058h 3.974ms
si_cs_io_cb 1733808 1.932s 1.114us 26.83m 928.5us
h1_timeout_task 935760 - - 1.033h 3.975ms
accept_queue_process 303606 4.627s 15.24us 16.65m 3.291ms
srv_cleanup_toremove_connections452 64.31ms 142.3us 2.447s 5.415ms
task_run_applet 47 5.149ms 109.6us 57.09ms 1.215ms
srv_cleanup_idle_connections 34 2.210ms 65.00us 87.49ms 2.573ms
With the patch:
$ socat - /tmp/sock1 <<< "show profiling"
Per-task CPU profiling : on # set profiling tasks {on|auto|off}
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
ssl_sock_io_cb 3000365 21.08m 421.6us 20.30h 24.36ms
h1_io_cb 2031932 9.278s 4.565us 46.70m 1.379ms
process_stream 2010682 7.391s 3.675us 22.83m 681.2us
si_cs_io_cb 1702070 1.571s 922.0ns 8.732m 307.8us
h1_timeout_task 1009594 - - 17.63m 1.048ms
accept_queue_process 339595 4.792s 14.11us 3.714m 656.2us
srv_cleanup_toremove_connections779 75.42ms 96.81us 438.3ms 562.6us
srv_cleanup_idle_connections 48 2.498ms 52.05us 178.1us 3.709us
task_run_applet 17 1.738ms 102.3us 11.29ms 663.9us
other 1 947.8us 947.8us 202.6us 202.6us
=> h1_io_cb() and process_stream() are divided by 6 while ssl_sock_io_cb() is
multiplied by 4
And with low-latency on:
$ socat - /tmp/sock1 <<< "show profiling"
Per-task CPU profiling : on # set profiling tasks {on|auto|off}
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
ssl_sock_io_cb 3000565 20.96m 419.1us 20.74h 24.89ms
h1_io_cb 2019702 9.294s 4.601us 49.22m 1.462ms
process_stream 2009755 6.570s 3.269us 1.493m 44.57us
si_cs_io_cb 1997820 1.566s 783.0ns 2.985m 89.66us
h1_timeout_task 1009742 - - 1.647m 97.86us
accept_queue_process 494509 4.697s 9.498us 1.240m 150.4us
srv_cleanup_toremove_connections1120 92.32ms 82.43us 463.0ms 413.4us
srv_cleanup_idle_connections 70 2.703ms 38.61us 204.5us 2.921us
task_run_applet 13 1.303ms 100.3us 85.12us 6.548us
=> process_stream() is divided by 100 while ssl_sock_io_cb() is
multiplied by 4
Interestingly, the total HTTPS response time doesn't increase and even very
slightly decreases, with an overall ~1% higher request rate. The net effect
here is a redistribution of the CPU resources between internal tasks, and
in the case of SSL, handshakes wait a bit more but everything after completes
faster.
This was made simple enough to be backportable if it helps some users
suffering from high latencies in mixed traffic.
Remove ebmb_node entry from struct connection and create a dedicated
struct conn_hash_node. struct connection now contains only a pointer to
a conn_hash_node, allocated only for connections whose target is of type
OBJ_TYPE_SERVER. This will reduce the memory footprint of every
connection that does not need http-reuse, such as frontend connections.
The server idle/safe/available connection lists are replaced with ebmb-
trees. These are used to store backend connections, with the new
connection hash field as the key. The hash is an 8-byte field, used to
reflect specific connection parameters.
This is a preliminary work to be able to reuse connection with SNI,
explicit src/dst address or PROXY protocol.
This is a preparation work for connection reuse with sni/proxy
protocol/specific src-dst addresses.
Protect every access to idle conn lists with a lock. This is currently
strictly not needed because accesses to the lists are made with atomic
operations. However, to be able to reuse connections with specific
parameters, the list storage will be converted to eb-trees. As this
structure does not support atomic operations, it is mandatory to protect
it with a lock.
For this, the takeover lock is reused. Its role was to protect during
connection takeover. As it is now extended to general idle conns usage,
it is renamed to idle_conns_lock. A new lock section is also
instantiated named IDLE_CONNS_LOCK to isolate its impact on performance.
When adding the server side support for certificate update over the CLI
we encountered a design problem with the SSL session cache which was not
locked.
Indeed, once a certificate is updated we need to flush the cache, but we
also need to ensure that the cache is not used during the update.
To prevent the use of the cache during an update, this patch introduces
an rwlock for the SSL server session cache.
In the SSL session part this patch only locks in read, even if it writes.
The reason behind this, is that in the session part, there is one cache
storage per thread so it is not a problem to write in the cache from
several threads. The problem is only when trying to write in the cache
from the CLI (which could be on any thread) when a session is trying to
access the cache. So there is a write lock in the CLI part to prevent
simultaneous access by a session and the CLI.
This patch also removes the thread_isolate attempt which was eating too
much CPU time and was not protecting against the use of a freed pointer
in the session.
Both SSL_CTX_set_msg_callback and SSL_CTRL_SET_MSG_CALLBACK have been
defined since ea262260469e49149cb10b25a87dfd6ad3fbb4ba, so we can safely
switch to that guard instead of checking the OpenSSL version.
The special guard macro HAVE_SSL_CTX_ADD_SERVER_CUSTOM_EXT was defined
earlier exactly for guarding SSL_CTX_add_server_custom_ext; let us use it
wherever appropriate.
HAVE_SSL_CTX_ADD_SERVER_CUSTOM_EXT was introduced in ec60909871; however,
it was defined as HAVE_SL_CTX_ADD_SERVER_CUSTOM_EXT (missing "S"), so let
us fix that typo.
The CO_FL_EARLY_SSL_HS flag was unconditionally set on the connection,
resulting in SSL_read_early_data() always being used first in handshake
calculations. While this seems to work well (probably because there are
fallback paths inside openssl), it's particularly confusing and makes
debugging quite complicated. It is possibly not optimal either.
This flag ought to be set only when early_data is configured on the bind
line. Apparently there used to be a good reason for doing it this way in
1.8 times, but it really does not make sense anymore. It may be OK to
backport this to 2.3 if this helps with troubleshooting, but better not
go too far as it's unlikely to fix any real issue while it could introduce
some in old versions.
As spotted in issue #822, we're having a problem with error detection in
the SSL layer. The problem is that on an overwhelmed machine, accepted
connections can start to pile up, each of them requiring a slow handshake,
and during all this time if the client aborts, the handshake will still be
calculated.
The error controls are properly placed, it's just that the SSL layer
reads records exactly of the advertised size, without having the ability
to encounter a pending connection error. As such if injecting many TLS
connections to a listener with a huge backlog, it's fairly possible to
meet this situation:
12:50:48.236056 accept4(8, {sa_family=AF_INET, sin_port=htons(62794), sin_addr=inet_addr("127.0.0.1")}, [128->16], SOCK_NONBLOCK) = 1109
12:50:48.236071 setsockopt(1109, SOL_TCP, TCP_NODELAY, [1], 4) = 0
(process other connections' handshakes)
12:50:48.257270 getsockopt(1109, SOL_SOCKET, SO_ERROR, [ECONNRESET], [4]) = 0
(proof that error was detectable there but this code was added for the PoC)
12:50:48.257297 recvfrom(1109, "\26\3\1\2\0", 5, 0, NULL, NULL) = 5
12:50:48.257310 recvfrom(1109, "\1\0\1\3"..., 512, 0, NULL, NULL) = 512
(handshake calculation taking 700us)
12:50:48.258004 sendto(1109, "\26\3\3\0z"..., 1421, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = -1 EPIPE (Broken pipe)
12:50:48.258036 close(1109) = 0
The situation was amplified by the multi-queue accept code, as it resulted
in many incoming connections being accepted long before they could be
handled. Prior to this they would have been accepted and the handshake
immediately started, which would have resulted in most of the connections
waiting in the system's accept queue, and dying there when the client
aborted, thus the error would have been detected before even trying to
pass them to the handshake code.
As a result, with a listener running on a very large backlog, it's possible
to quickly accept tens of thousands of connections and waste time slowly
running their handshakes while they get replaced by other ones.
This patch adds an SO_ERROR check on the connection's FD before starting
the handshake. This is not pretty as it requires accessing the FD, but it
does the job.
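A minimal sketch of such a pre-handshake check, assuming a plain socket FD
(the helper name is made up for the example; HAProxy's real code path and
flag handling differ):

#include <sys/socket.h>

/* returns non-zero if the socket already reports a pending error */
static int fd_has_pending_error(int fd)
{
        int skerr = 0;
        socklen_t lskerr = sizeof(skerr);

        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &skerr, &lskerr) == 0 &&
            skerr != 0)
                return 1; /* e.g. ECONNRESET: skip the expensive handshake */
        return 0;
}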
Some improvements should be made over the long term so that the transport
layers can report extra information with their ->rcv_buf() call, or at the
very least implement a ->get_conn_status() function to report various
flags such as shutr, shutw and error at various stages, allowing an upper
layer to decide whether it is worth engaging in a long operation when the
connection is known not to be usable anymore. An even simpler step could
probably consist in implementing this in the control layer.
This patch is simple enough to be backported as far as 2.0.
Many thanks to @ngaugler for his numerous tests with detailed feedback.
The new ckch_inst_new_load_srv_store() function, which mimics the
ckch_inst_new_load_store() function, includes some dead code that was only
relevant in the original function.
Fix issue #1081.
This patch makes things more consistent between the bind_conf functions
and the server ones:
- ssl_sock_load_srv_ckchs() loads the SSL_CTX in the server
  (ssl_sock_load_ckchs() loads the SNIs in the bind_conf)
- add the server parameter to ssl_sock_load_srv_ckchs()
- changes made to the ckch_inst are done in
ckch_inst_new_load_srv_store()
Since the server SSL_CTX is now stored in the ckch_inst, it is not
needed anymore to pass an SSL_CTX to ckch_inst_new_load_srv_store() and
ssl_sock_load_srv_ckchs().
When trying to update a backend certificate, we should find a
server-side ckch instance thanks to which we can rebuild a new ssl
context and a new ckch instance that replace the previous ones in the
server structure. This way any new ssl session will be built out of the
new ssl context and the newly updated certificate.
This resolves a subpart of GitHub issue #427 (the certificate part)
In order for the backend server's certificate to be hot-updatable, it
needs to fit into the implementation used for the "bind" certificates.
This patch follows the architecture implemented for the frontend
implementation and reuses its structures and general function calls
(adapted for the server side).
The ckch store logic is kept and a dedicated ckch instance is used (one
per server). The whole sni_ctx logic was not kept though because it is
not needed.
All the new functions added in this patch are basically server-side
copies of functions that already exist on the frontend side, with all the
sni and bind_conf references removed.
The ckch_inst structure has a new 'is_server_instance' flag which is
used to distinguish regular instances from the server-side ones, and a
new pointer to the server's structure in case of backend instance.
Since the new server ckch instances are linked to a standard ckch_store,
a lookup in the ckch store table will succeed, so the cli code used to
update bind certificates needs to be adapted to manage those new
server-side ckch instances.
Split the server's ssl context initialization into the general ssl
related initializations and the actual initialization of a single
SSL_CTX structure. This way the context's initialization will be
usable by itself from elsewhere.
If a subscriber's tasklet was called more than one million times, if
the ssl_ctx's connection doesn't match the current one, or if the
connection appears closed in one direction while the SSL stack is
still subscribed, the FD is reported as suspicious. The close cases
may occasionally trigger a false positive during very short and rare
windows. Similarly the 1M calls will trigger after 16GB are transferred
over a given connection. These are rare enough events to be reported as
suspicious.
Now the show_fd helpers at the transport and mux levels return an integer
which indicates whether or not the inspected entry looks suspicious. When
an entry is reported as suspicious, "show fd" will suffix it with an
exclamation mark ('!') in the dump, that is supposed to help detecting
them.
For now, helpers were adjusted to adapt to the new API but none of them
reports any suspicious entry yet.
In FD dumps it's often very important to figure what upper layer function
is going to be called. Let's export the few I/O callbacks that appear as
tasklet functions so that "show fd" can resolve them instead of printing
a pointer relative to main. For example:
1028 : st=0x21(R:rA W:Ra) ev=0x01(heopI) [lc] tmask=0x2 umask=0x2 owner=0x7f00b889b200 iocb=0x65b638(sock_conn_iocb) back=0 cflg=0x00001300 fe=recv mux=H2 ctx=0x7f00c8824de0 h2c.st0=FRH .err=0 .maxid=795 .lastid=-1 .flg=0x0000 .nbst=0 .nbcs=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=0 .orph_cnt=0 .sub=1 .dsi=795 .dbuf=0@(nil)+0/0 .msi=-1 .mbuf=[1..1|32],h=[0@(nil)+0/0],t=[0@(nil)+0/0] xprt=SSL xprt_ctx=0x7f00c86d0750 xctx.st=0 .xprt=RAW .wait.ev=1 .subs=0x7f00c88252e0(ev=1 tl=0x7f00a07d1aa0 tl.calls=1047 tl.ctx=0x7f00c8824de0 tl.fct=h2_io_cb) .sent_early=0 .early_in=0
The SSL context contains a lot of important details that are currently
missing from debug outputs. Now that we detect ssl_sock, we can perform
some sanity checks, print the next xprt, the subscriber callback's context,
handler and number of calls. The process function is also resolved. This
now gives for example on an H2 connection:
1029 : st=0x21(R:rA W:Ra) ev=0x01(heopI) [lc] tmask=0x2 umask=0x2 owner=0x7fc714881700 iocb=0x65b528(sock_conn_iocb) back=0 cflg=0x00001300 fe=recv mux=H2 ctx=0x7fc734545e50 h2c.st0=FRH .err=0 .maxid=217 .lastid=-1 .flg=0x0000 .nbst=0 .nbcs=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=0 .orph_cnt=0 .sub=1 .dsi=217 .dbuf=0@(nil)+0/0 .msi=-1 .mbuf=[1..1|32],h=[0@(nil)+0/0],t=[0@(nil)+0/0] xprt=SSL xprt_ctx=0x7fc73478f230 xctx.st=0 .xprt=RAW .wait.ev=1 .subs=0x7fc734546350(ev=1 tl=0x7fc7346702e0 tl.calls=278 tl.ctx=0x7fc734545e50 tl.fct=main-0x144efa) .sent_early=0 .early_in=0
As reported in issue #1010, gcc-11 as of 2021-01-05 is overzealous in
its -Warray-bounds check as it considers that a cast of a global struct
accesses the entire struct even if only one specific element is accessed.
This instantly breaks all lists making use of container_of() to build
their iterators as soon as the starting point is known if the next
element is retrieved from the list head in a way that is visible to the
compiler's optimizer, because it decides that accessing the list's next
element dereferences the list as a larger struct (which it does not).
The temporary workaround consisted in disabling -Warray-bounds, but this
warning is traditionally quite effective at spotting real bugs, and we
actually have a single occurrence of this issue in the whole code.
By changing the tlskeys_list_get_next() function to take a list element
as the starting point instead of the current element, we can avoid
the starting point issue, but this requires changing all call places
to write hideous casts made of &((struct blah*)ref)->list. At the
moment we only have two such call places, the first one being used to
initialize the list (which is the one causing the warning) and which
is thus easy to simplify, and the second one for which we already have
an aliased pointer to the reference that is still valid at the call
place, and given the original pointer also remained unchanged, we can
safely use this alias, and this is safer than leaving a cast there.
Let's make this change now while it's still easy.
The generated code only changed in function cli_io_handler_tlskeys_files()
due to register allocation and the change of variable scope between the
old one and the new one.
`getnext` was only used to fill `ref` at the beginning of the function. Both
have the same type. Replace the parameter name by `ref` to remove the useless
local variable.
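A simplified sketch of the resulting shape, with reduced and partly made-up
types (the real iterator also filters entries and uses HAProxy's own list
primitives):

#include <stddef.h>

/* minimal list node and container_of(), mirroring the generic primitives */
struct list { struct list *n, *p; };
#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct tls_keys_ref {
        struct list list;
        int unique_id;
};

/* the iterator now takes a list element (possibly the list head itself) as
 * the starting point, so the first call no longer needs to cast a fake
 * reference back into a struct tls_keys_ref */
static struct tls_keys_ref *tlskeys_list_get_next(struct list *ref,
                                                  struct list *end)
{
        if (ref->n == end)
                return NULL;
        return container_of(ref->n, struct tls_keys_ref, list);
}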
Make TLS/TCP and QUIC share the same CTX initializer so as not to modify the
caller, which is an XPRT callback used both by the QUIC xprt and the SSL xprt
over TCP.
QUIC needs to initialize its BIO and SSL session the same way as for SSL over TCP
connections. It needs also to use the same ClientHello callback.
This patch only exports functions and variables shared between QUIC and SSL/TCP
connections.
This patch extracts the code which initializes the BIO and SSL session
objects so that it can be reused later for QUIC connections, which only
need SSL and BIO objects at the TLS layer level to work.
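A rough sketch of what such a shared initialization amounts to, assuming
OpenSSL 1.1.0+ style BIO accessors and a custom BIO method (names here are
illustrative, not HAProxy's exact ones):

#include <openssl/ssl.h>
#include <openssl/bio.h>

/* illustrative context; the real one carries more state */
struct ssl_io_ctx {
        SSL *ssl;
        BIO *bio;
};

static int ssl_bio_and_sess_init(struct ssl_io_ctx *ctx, SSL_CTX *ssl_ctx,
                                 BIO_METHOD *ha_meth, void *io_private)
{
        ctx->ssl = SSL_new(ssl_ctx);
        if (!ctx->ssl)
                return -1;

        ctx->bio = BIO_new(ha_meth);        /* custom BIO driving our own I/O */
        if (!ctx->bio) {
                SSL_free(ctx->ssl);
                return -1;
        }
        BIO_set_data(ctx->bio, io_private); /* let the BIO callbacks find us */
        SSL_set_bio(ctx->ssl, ctx->bio, ctx->bio);
        return 0;
}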
SSL_CTX_get0_privatekey is an openssl/boringssl-specific function present
since openssl-1.0.2; let us define a readable guard for it instead of
depending on HA_OPENSSL_VERSION.
SSL_CTX_add_server_custom_ext is an openssl-specific function present
since openssl-1.0.2; let us define a readable guard for it instead of
depending on HA_OPENSSL_VERSION.
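A sketch of the intent: the guard is derived once in an openssl-compat.h
style header from what the library provides, then only the guard is tested
at call sites. The condition below is an assumption for illustration, not
HAProxy's real one.

#include <openssl/ssl.h>

/* compat header: define the feature guards once (assumed condition) */
#if (HA_OPENSSL_VERSION_NUMBER >= 0x1000200fL) && !defined(LIBRESSL_VERSION_NUMBER)
#define HAVE_SSL_CTX_GET0_PRIVATEKEY
#define HAVE_SSL_CTX_ADD_SERVER_CUSTOM_EXT
#endif

/* call site: test the feature, not the library version */
static EVP_PKEY *ctx_privatekey(SSL_CTX *ctx)
{
#ifdef HAVE_SSL_CTX_GET0_PRIVATEKEY
        return SSL_CTX_get0_privatekey(ctx);
#else
        (void)ctx;
        return NULL;
#endif
}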
Now we don't touch the fd anymore there, instead we rely on the ->drain()
provided by the control layer. As such the function was renamed to
conn_ctrl_drain().
Since HAProxy 2.3, OpenSSL 1.1.1 is a requirement for using a
multi-certificate bundle in the configuration. This patch emits a fatal
error when HAProxy tries to load a bundle with an older version of
OpenSSL.
This problem was encountered by a user in issue #990.
This must be backported in 2.3.
HAVE_SSL_CTX_SET_CIPHERSUITES is a newly defined macro set in openssl-compat.h
which helps identify SSL libs (currently OpenSSL 1.1.1 only) that support
TLS 1.3 ciphersuites manipulation on a TLS 1.3 context.
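For illustration, a guarded call site could look like the following sketch
(the ciphersuite string is only an example, not HAProxy's default):

#include <openssl/ssl.h>

static int apply_tls13_ciphersuites(SSL_CTX *ctx, const char *suites)
{
#ifdef HAVE_SSL_CTX_SET_CIPHERSUITES
        /* e.g. suites = "TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256" */
        return SSL_CTX_set_ciphersuites(ctx, suites) ? 0 : -1;
#else
        (void)ctx; (void)suites;
        return -1; /* not supported by this SSL library */
#endif
}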
When a non-existing file is specified in the configuration, haproxy
does not exit with an error, which is not normal.
This bug was introduced by dfa93be ("MEDIUM: ssl: emulate multi-cert
bundles loading in standard loading") which does nothing if the stat
failed.
This patch introduces a "found" variable which is checked at the end of
the function so that we exit with an error if no file was found.
Must be backported to 2.3.
In bug #959 it was reported that haproxy segfaults on startup when trying
to load a certificate which uses the X509v3 AKID extension but without the
keyid field.
This field is not mandatory and could be replaced by the serial or the
DirName.
For example:
    X509v3 extensions:
        X509v3 Basic Constraints:
            CA:FALSE
        X509v3 Subject Key Identifier:
            42:7D:5F:6C:3E:0D:B7:2C:FD:6A:8A:32:C6:C6:B9:90:05:D1:B2:9B
        X509v3 Authority Key Identifier:
            DirName:/O=HAProxy Technologies/CN=HAProxy Test Intermediate CA
            serial:F2:AB:C1:41:9F:AB:45:8E:86:23:AD:C5:54:ED:DF:FA
This bug was introduced by 70df7b ("MINOR: ssl: add "issuers-chain-path" directive").
This patch must be backported as far as 2.2.
In the context of a progressive backend migration, we want to be able to
activate SSL on outgoing connections to the server at runtime without
reloading.
This patch adds a `set server ssl` command; in order to allow that:
- add `srv_use_ssl` to `show servers state` command for compatibility,
also update associated parsing
- when using default-server ssl setting, and `no-ssl` on server line,
init SSL ctx without activating it
- when triggering ssl API, de/activate SSL connections as requested
- clean ongoing connections as it is done for addr/port changes, without
checking prior server state
example config:
backend be_foo
default-server ssl
server srv0 127.0.0.1:6011 weight 1 no-ssl
show servers state:
5 be_foo 1 srv0 127.0.0.1 2 0 1 1 15 1 0 4 0 0 0 0 - 6011 - -1
where srv0 can switch to ssl later during the runtime:
set server be_foo/srv0 ssl on
5 be_foo 1 srv0 127.0.0.1 2 0 1 1 15 1 0 4 0 0 0 0 - 6011 - 1
Also update existing tests and create a new one.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Return ERR_NONE instead of 0 on success for all config callbacks that should
return ERR_* codes. There is no change because ERR_NONE is a macro equal to
0, but this makes the return value more explicit.
Since commit d0447a7c3 ("MINOR: ssl: add counters for ssl sessions"),
gcc 9+ complains about this:
CC src/ssl_sock.o
src/ssl_sock.c: In function 'ssl_sock_io_cb':
src/ssl_sock.c:5416:3: warning: 'counters_px' may be used uninitialized in this function [-Wmaybe-uninitialized]
5416 | ++counters_px->reused_sess;
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
src/ssl_sock.c:5133:23: note: 'counters_px' was declared here
5133 | struct ssl_counters *counters, *counters_px;
| ^~~~~~~~~~~
Either a listener or a server is expected there, so the counters are
always initialized, but the compiler cannot know this. Let's preset
them and test them before updating the counter; we're not in a hot path
here.
No backport is needed.
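A minimal sketch of the resulting pattern, with simplified stand-in types
(the real code derives the pointers from the session's listener or server):

struct ssl_counters { long long reused_sess; };

struct listener { struct ssl_counters *counters; };
struct server   { struct ssl_counters *counters; };

static void count_reused_sess(struct listener *li, struct server *srv)
{
        /* preset to NULL: the compiler cannot know that exactly one of
         * li or srv is always set, so make initialization explicit */
        struct ssl_counters *counters = NULL;

        if (li)
                counters = li->counters;
        else if (srv)
                counters = srv->counters;

        /* and test before updating; we're not in a hot path here */
        if (counters)
                ++counters->reused_sess;
}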
The default dh_param value is 2048 and it's preset to zero unless explicitly
set, so we must not report a warning about DH params not being loadable in
1024 bits when we're going to use 2048. Thanks to Dinko for reporting this.
This should be backported to 2.2.
Let us use SSL_CTX_set1_curves_list, defined by OpenSSL (and in
openssl-compat when SSL_CTRL_SET_CURVES_LIST is present, as with BoringSSL),
for feature detection instead of version checks.
BoringSSL pretends to be OpenSSL version 1.1.1, which messes up some
version-based feature presence checks, for example the OpenSSL-specific
early data support.
Let us change that feature detection to a check on the
SSL_READ_EARLY_DATA_SUCCESS macro instead of a version comparison.
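As an illustration, such a guard could look like the sketch below (the
wrapper name is made up for the example):

#include <openssl/ssl.h>

/* Prefer a feature macro over a version compare: SSL_READ_EARLY_DATA_SUCCESS
 * only exists when the OpenSSL-style early data API is really available. */
#ifdef SSL_READ_EARLY_DATA_SUCCESS
static int read_early(SSL *ssl, void *buf, size_t len, size_t *readbytes)
{
        return SSL_read_early_data(ssl, buf, len, readbytes);
}
#else
static int read_early(SSL *ssl, void *buf, size_t len, size_t *readbytes)
{
        (void)ssl; (void)buf; (void)len; (void)readbytes;
        return -1; /* early data not supported by this library */
}
#endif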
It's a regression from b3201a3e "BUG/MINOR: disable dynamic OCSP load
with BoringSSL". The original bug is linked to 76b4a12 "BUG/MEDIUM: ssl:
memory leak of ocsp data at SSL_CTX_free()": ssl_sock_free_ocsp()
should be inside #ifndef OPENSSL_IS_BORINGSSL.
To avoid a long #ifdef for a small piece of code, the BoringSSL part of the
ocsp load is isolated in a simple #ifdef.
This must be backported in 2.2 and 2.1
In issue #785, users report that it's not convenient to load a
".crt.key" when the configuration contains a ".crt".
This option allows removing the extension of the certificate before
trying to load any extra SSL file (.key, .ocsp, .sctl, .issuer etc.)
The patch slightly changes the way ssl_sock_load_files_into_ckch()
looks for the file.
On deinit, when the server SSL ctx is released, we must take care to release
the cached SSL sessions stored in the array <ssl_ctx.reused_sess>. There are
global.nbthread entries in this array, each of which may hold a pointer to a
cached session.
This patch should fix the issue #802. No backport needed.
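A rough sketch of the kind of cleanup involved, assuming the per-thread
layout described above (field names follow the text and may differ from the
actual structure):

#include <stdlib.h>

/* assumed shape of one per-thread cache slot */
struct reused_sess_slot {
        unsigned char *ptr;  /* serialized session, or NULL */
        int size;
};

static void srv_release_reused_sessions(struct reused_sess_slot *slots,
                                        int nbthread)
{
        int i;

        for (i = 0; i < nbthread; i++) {
                free(slots[i].ptr);   /* free(NULL) is a no-op */
                slots[i].ptr = NULL;
        }
}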
Remove the last utility functions for handling the multi-cert bundles
and remove the multi-variable from the ckch structure.
With this patch, the bundles are completely removed.
Since the removal of the multi-certificates bundle support, this
variable is not useful anymore; we can remove all tests on this
variable and assume that every ckch contains a single certificate.
Like the previous commit, this one emulates the bundling by loading each
certificate separately and storing it in a separate SSL_CTX.
This patch does it for the standard certificate loading, which means
outside directories or crt-list.
The multi-certificates bundle was the common way of offering multiple
certificates of different types (ecdsa and rsa) for the same SSL_CTX.
This was implemented with OpenSSL 1.0.2, before the client_hello callback
was available.
Now that all versions which do not support this callback are
deprecated (< 1.1.0), we can safely remove the support for the bundle,
which was inconvenient and complexified the code too much.
In the SSL code, when we were waiting for the availability of the crypto
engine, once it is ready and its fd's I/O handler is called, don't call
ssl_sock_io_cb() directly, instead, call tasklet_wakeup() on the
ssl_sock_ctx's tasklet. We were calling ssl_sock_io_cb() with NULL as
a tasklet, which used to be fine, but it is no longer true since the
fd takeover changes. We could just provide the tasklet, but let's just
wake the tasklet, as is done for other FDs, for fairness.
This should fix github issue #856.
This should be backported into 2.2.
In bug #835, @arjenzorgdoc reported that the verifyhost option on the
server line is case-sensitive, which shouldn't be the case.
This patch fixes the issue by replacing memcmp by strncasecmp and strcmp
by strcasecmp. The patch was suggested by @arjenzorgdoc.
This must be backported in all versions supporting the verifyhost
option.
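A simplified sketch of the swap (the real hostname matching also handles
wildcards and length checks; the helper name is illustrative):

#include <string.h>
#include <strings.h>

/* hostnames are case-insensitive, so compare with the case-insensitive
 * variants instead of memcmp()/strcmp() */
static int names_match(const char *a, const char *b, size_t wildcard_len)
{
        if (wildcard_len)
                return strncasecmp(a, b, wildcard_len) == 0;
        return strcasecmp(a, b) == 0;
}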
Changes performed using the following coccinelle patch:
@@
type T;
expression E;
expression t;
@@
(
t = calloc(E, sizeof(*t))
|
- t = calloc(E, sizeof(T))
+ t = calloc(E, sizeof(*t))
)
Looking through the commit history, grepping for coccinelle shows that the same
replacement with a different patch was already performed in the past in commit
02779b6263.
We used to require fd_remove() to remove an FD from a poller when we
still had the FD cache and it was not possible to directly act on the
pollers. Nowadays we don't need this anymore as the pollers will
automatically unregister disabled FDs. The fd_remove() hack is
particularly problematic because it additionally hides the FD from
the known FD list and could make one think it's closed.
It's used at two places:
- with the async SSL engine
- with the listeners (when unbinding from an fd for another process)
Let's just use fd_stop_both() instead, which will propagate down the
stack to do the right thing, without removing the FD from the array
of known ones.
Now when dumping FDs using "show fd" on a process which still knows some
of the other workers' FDs, the FD will properly be listed with a listener
state equal to "ZOM" for "zombie". This guarantees that the FD is still
known and will properly be passed using _getsocks().
The fix 7df5c2d ("BUG/MEDIUM: ssl: fix ssl_bind_conf double free") was
not complete. The problem still occurs when using wildcards in
certificates, during the deinit.
This patch removes the free of the ssl_conf structure in
ssl_sock_free_all_ctx() since it's already done in the crtlist deinit.
It must be backported in 2.2.
The use of Common Name is fading out in favor of the RFC recommended
way of using SAN extensions. For example, Chrome from version 58
will only match server name against SAN.
The following patch adds SAN extension by default to all generated certificates.
The SAN extension will be of type DNS and based on the server name.
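A hedged sketch of adding such a SAN entry with the OpenSSL X509v3 helpers
(error handling trimmed; HAProxy's certificate generation context is more
involved and the helper name is made up):

#include <stdio.h>
#include <openssl/x509v3.h>

/* add a "DNS:<servername>" subjectAltName to a freshly generated cert */
static int add_san_ext(X509 *newcrt, X509 *cacrt, const char *servername)
{
        X509V3_CTX v3ctx;
        X509_EXTENSION *ext;
        char san[256];
        int ret = 0;

        snprintf(san, sizeof(san), "DNS:%s", servername);
        X509V3_set_ctx(&v3ctx, cacrt, newcrt, NULL, NULL, 0);
        ext = X509V3_EXT_conf_nid(NULL, &v3ctx, NID_subject_alt_name, san);
        if (ext) {
                ret = X509_add_ext(newcrt, ext, -1);
                X509_EXTENSION_free(ext);
        }
        return ret; /* 1 on success, 0 on failure */
}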
haproxy supports generating SSL certificates based on SNI using a provided
CA signing certificate. Because CA certificates may be signed by multiple
CAs, in some scenarios it is necessary for the server to attach the trust
chain in addition to the generated certificate.
The following patch adds the ability to serve the entire trust chain with
the generated certificate. The chain is loaded from the provided
`ca-sign-file` PEM file.
The negative filters, which are supposed to exclude a SNI from a
wildcard, never worked. Indeed the negative filters were skipped in the
code.
To fix the issue, this patch looks for negative filters that are on the
same line as the wildcard that just matched.
This patch should fix issue #818. It must be backported in 2.2. The
problem also exists in versions > 1.8 but the infrastructure required to
fix this was only introduced in 2.1. In older versions we should
probably change the documentation to state that negative filters are
useless.
In bug #810, SNIs are not matched correctly: when trying to match a
certificate type in ssl_sock_switchctx_cbk(), not all SNIs were looked up
correctly.
In the case you have in a crt-list:
wildcard.subdomain.domain.tld.pem.rsa *.subdomain.domain.tld record.subdomain.domain.tld
record.subdomain.domain.tld.pem.ecdsa record.subdomain.domain.tld another-record.subdomain.domain.tld
If the client only supports RSA and requests
"another-record.subdomain.domain.tld", HAProxy will find the single
ECDSA certificate and won't try to look for a wildcard RSA
certificate.
This patch fixes the code so that we look up all single and
wildcard names before choosing the certificate type.
This bug was introduced by commit 3777e3a ("BUG/MINOR: ssl: certificate
choice can be unexpected with openssl >= 1.1.1").
It must be backported as far as 1.8 once it is heavily tested.
In bug #781 it was reported that HAProxy completes the certificate chain
using the verify store in the case there is no chain.
Indeed, according to the OpenSSL documentation, when generating the chain,
OpenSSL uses the chain store OR the verify store when there is no
chain store.
As a workaround, this patch always puts a chain (possibly empty) in the
SSL_CTX so OpenSSL does not try to complete it.
This must be backported in all branches; the code could be different, but
the important part is to ALWAYS set a chain, and to use sk_X509_new_null()
if the chain is NULL.
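A minimal sketch of that workaround, assuming the parsed chain is available
as a STACK_OF(X509) (the surrounding ckch fields are omitted):

#include <openssl/ssl.h>

/* always install a chain, even an empty one, so that OpenSSL never tries
 * to rebuild it from the verify store */
static int set_ctx_chain(SSL_CTX *ctx, STACK_OF(X509) *chain)
{
        STACK_OF(X509) *find_chain = chain;
        int ret;

        if (!find_chain)
                find_chain = sk_X509_new_null();   /* empty, but not NULL */

        /* SSL_CTX_set1_chain() copies the stack, the caller keeps ownership */
        ret = SSL_CTX_set1_chain(ctx, find_chain);

        if (!chain)
                sk_X509_free(find_chain);
        return ret; /* 1 on success, 0 on failure */
}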
In commit f187ce6, the ssl-skip-self-issued-ca option was accidentally
made useless by reverting the SSL_CTX reworking.
The previous attempt at implementing this feature put each certificate
of the chain in the SSL_CTX with SSL_CTX_add_extra_chain_cert() and
skipped the Root CA.
The problem here is that doing it this way instead of doing a
SSL_CTX_set1_chain() breaks the support for multi-certificate bundles.
The SSL_CTX_build_cert_chain() function allows one to remove the Root CA
with the SSL_BUILD_CHAIN_FLAG_NO_ROOT flag. Use it instead of doing
tricks with the CA.
Should fix issue #804.
Must be backported in 2.2.
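For reference, a hedged sketch of using that flag (the real code only does
this when the option is enabled):

#include <openssl/ssl.h>

/* build the cert chain for the context, dropping the self-issued root CA */
static int build_chain_without_root(SSL_CTX *ctx)
{
        if (SSL_CTX_build_cert_chain(ctx, SSL_BUILD_CHAIN_FLAG_NO_ROOT) != 1)
                return -1; /* chain could not be built */
        return 0;
}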
This bug affects all versions of HAProxy since the OCSP data is not freed
in the deinit(), but leaking on exit() is not really an issue. However,
when doing dynamic updates of certificates over the CLI, those data are
not freed when the SSL_CTX is freed.
3 leaks are happening: the first one is the ocsp_arg structure, which
serves to contain the pointers in the case of a multi-certificate bundle;
the second one is the ocsp struct itself; and the third one is the struct
buffer in the ocsp struct.
The problem lies with SSL_CTX_set_tlsext_status_arg() which does not
provide a way to free the argument upon an SSL_CTX_free().
This fix uses ex index functions instead of registering a
tlsext_status_arg(). This is really convenient because it allows
registering a free callback which will free the ex index content upon
SSL_CTX_free().
A refcount was also added to the ocsp_response structure since it is
stored in a tree and can be reused in another SSL_CTX.
Should fix part of the issue #746.
This must be backported in 2.2 and 2.1.
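A hedged sketch of the ex_data mechanism this relies on (callback and index
names are illustrative; HAProxy drops a refcount rather than calling free()
directly):

#include <stdlib.h>
#include <openssl/ssl.h>
#include <openssl/crypto.h>

static int ssl_ocsp_ex_index = -1;

/* called automatically by OpenSSL when the owning SSL_CTX is freed */
static void ssl_ocsp_ex_free(void *parent, void *ptr, CRYPTO_EX_DATA *ad,
                             int idx, long argl, void *argp)
{
        (void)parent; (void)ad; (void)idx; (void)argl; (void)argp;
        free(ptr);
}

static int ocsp_ex_init(void)
{
        ssl_ocsp_ex_index = SSL_CTX_get_ex_new_index(0, NULL, NULL, NULL,
                                                     ssl_ocsp_ex_free);
        return ssl_ocsp_ex_index >= 0 ? 0 : -1;
}

/* attach the ocsp data to a context; it is released on SSL_CTX_free() */
static int ocsp_attach(SSL_CTX *ctx, void *ocsp)
{
        return SSL_CTX_set_ex_data(ctx, ssl_ocsp_ex_index, ocsp);
}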
Fix a memory leak when loading an OCSP file when the file was already
loaded elsewhere in the configuration.
Indeed, if the OCSP file already exists, a useless chunk_dup() will be
done during the load.
To fix it we revert "ocsp" to "iocsp" as it was done previously.
This was introduced by commit 246c024 ("MINOR: ssl: load the ocsp
in/from the ckch").
Should fix part of the issue #746.
It must be backported in 2.1 and 2.2.
Check the return of the calloc in ssl_sock_load_ocsp() which could lead
to a NULL dereference.
This was introduced by commit be2774d ("MEDIUM: ssl: Added support for
Multi-Cert OCSP Stapling").
Could be backported as far as 1.7.
When a connection is added to an idle list, it's already detached and
cannot be seen by two threads at once, so there's no point using
TRY_ADDQ, there will never be any conflict. Let's just use the cheaper
ADDQ.
Initially when mt_lists were added, their purpose was to be used with
the scheduler, where anyone may concurrently add the same tasklet, so
it sounded natural to implement a check in MT_LIST_ADD{,Q}. Later their
usage was extended and MT_LIST_ADD{,Q} started to be used on situations
where the element to be added was exclusively owned by the one performing
the operation so a conflict was impossible. This became more obvious with
the idle connections and the new macro was called MT_LIST_ADDQ_NOCHECK.
But this remains confusing and at many places it's not expected that
an MT_LIST_ADD could possibly fail, and worse, at some places we start
by initializing it before adding (and the test is superfluous), so let's
rename them to something more conventional to denote the presence of the
check or not:
MT_LIST_ADD{,Q}    : unconditional operation, the caller owns the
                     element, and doesn't care about the element's
                     current state (exactly like LIST_ADD)
MT_LIST_TRY_ADD{,Q}: only perform the operation if the element is not
                     already added or in the process of being added.
This means that the previously "safe" MT_LIST_ADD{,Q} are not "safe"
anymore. This also means that in case of backport mistakes in the
future causing this to be overlooked, the slower and safer functions
will still be used by default.
Note that the missing unchecked MT_LIST_ADD macro was added.
The rest of the code will have to be reviewed so that a number of
callers of MT_LIST_TRY_ADDQ are changed to MT_LIST_ADDQ to remove
the unneeded test.
OpenSSL 1.1.1 provides a callback registering function,
SSL_CTX_set_keylog_callback, which allows one to receive a string
containing the keys to decipher TLSv1.3.
Unfortunately it is not possible to store this data in binary form and
we can only get this information using the callback. This means that we
need to store it until the connection is closed.
This patch adds 2 pools: the first one, pool_head_ssl_keylog, is used to
store a struct ssl_keylog which will be inserted as ex_data in an SSL *.
The second one is pool_head_ssl_keylog_str, which will be used to store
the hexadecimal strings.
To enable the capture of the keys, you need to set "tune.ssl.keylog on"
in your configuration.
The following fetches were implemented:
ssl_fc_client_early_traffic_secret,
ssl_fc_client_handshake_traffic_secret,
ssl_fc_server_handshake_traffic_secret,
ssl_fc_client_traffic_secret_0,
ssl_fc_server_traffic_secret_0,
ssl_fc_exporter_secret,
ssl_fc_early_exporter_secret
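A minimal sketch of registering such a callback (here it just prints the
line; HAProxy instead copies the secrets into the pools described above):

#include <stdio.h>
#include <openssl/ssl.h>

/* called by OpenSSL once per key-log line, e.g.
 * "SERVER_HANDSHAKE_TRAFFIC_SECRET <client_random> <secret>" */
static void keylog_cb(const SSL *ssl, const char *line)
{
        (void)ssl;
        fprintf(stderr, "%s\n", line);
}

static void enable_keylog(SSL_CTX *ctx)
{
        SSL_CTX_set_keylog_callback(ctx, keylog_cb);
}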
NetBSD apparently uses macros for tolower/toupper and complains about
the use of char for array subscripts. Let's properly cast all of them
to unsigned char where they are used.
This is needed to fix issue #729.
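The pattern is simply the following (sketch):

#include <ctype.h>

/* ctype functions take an int that must be representable as unsigned char
 * or EOF, so a plain (possibly signed) char must be cast first */
static void str_tolower(char *s)
{
        for (; *s; s++)
                *s = (char)tolower((unsigned char)*s);
}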
When we takeover a connection, let the xprt layer know. If it has its own
tasklet, and it is already scheduled, then it has to be destroyed, otherwise
it may run the new mux tasklet on the old thread.
Note that we only do this for the ssl xprt for now, because the only other
one that might wake the mux up is the handshake one, which is supposed to
disappear before idle connections exist.
No backport is needed, this is for 2.2.
Move the ckch_deinit() and crtlist_deinit() calls to ssl_sock.c,
and also unlink the SNIs from the ckch_inst because they are freed earlier
in ssl_sock_free_all_ctx().
Since commit 2954c47 ("MEDIUM: ssl: allow crt-list caching"), the
ssl_bind_conf is allocated directly in the crt-list, and the crt-list
can be shared between several bind_conf. The deinit() code wasn't
changed to handle that.
This patch fixes the issue by removing the free of the ssl_conf in
ssl_sock_free_all_ctx().
It should be completed with a patch that frees the ssl_conf and the
crt-list.
Fix issue #700.
In bug #676, it was reported that ssl-min-ver SSLv3 does not work in
Amazon environments with OpenSSL 1.0.2.
The reason for this is a patch in Amazon's OpenSSL which sets
SSL_OP_NO_SSLv3 in SSL_CTX_new(), which is a problem for our
implementation of ssl-{min,max}-ver on old OpenSSL versions, because it
does not try to clear existing version flags.
This patch fixes the bug by clearing the version flags known by HAProxy in
the SSL_CTX before applying the right ones.
Should be backported as far as 1.8.
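A hedged sketch of the clearing step with the option-based API (the exact
set of flags handled by HAProxy may differ):

#include <openssl/ssl.h>

/* clear any preset version-disabling flags before applying our own,
 * so vendor-patched defaults (e.g. SSL_OP_NO_SSLv3) cannot interfere */
static void reset_version_options(SSL_CTX *ctx)
{
        long flags = SSL_OP_NO_SSLv3 | SSL_OP_NO_TLSv1 |
                     SSL_OP_NO_TLSv1_1 | SSL_OP_NO_TLSv1_2;

        SSL_CTX_clear_options(ctx, flags);
}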
This patch fixes all the leftovers from the include cleanup campaign. There
were not that many (~400 entries in ~150 files) but it was definitely worth
doing it as it revealed a few duplicates.
This one was not easy because it was embarking many includes with it,
which other files would automatically find. At least global.h, arg.h
and tools.h were identified. 93 total locations were identified, 8
additional includes had to be added.
In the rare files where it was possible to finalize the sorting of
includes by adjusting only one or two extra lines, it was done. But
all files would need to be rechecked and cleaned up now.
It was the last set of files in types/ and proto/ and these directories
must not be reused anymore.
extern struct dict server_name_dict was moved from the type file to the
main file. A handful of inlined functions were moved to the bottom of
the file. Call places were updated to use server-t.h when relevant, or
to simply drop the entry when not needed.
This one is particularly difficult to split because it provides all the
functions used to manipulate a proxy state and to retrieve names or IDs
for error reporting, and as such, it was included in 73 files (down to
68 after cleanup). It would deserve a small cleanup though the cut points
are not obvious at the moment given the number of structs involved in
the struct proxy itself.
The current state of the logging is a real mess. The main problem is
that almost all files include log.h just in order to have access to
the alert/warning functions like ha_alert() etc, and don't care about
logs. But log.h also deals with real logging as well as log-format and
depends on stream.h and various other things. As such it forces a few
heavy files like stream.h to be loaded early and to hide missing
dependencies depending where it's loaded. Among the missing ones is
syslog.h which was often automatically included resulting in no less
than 3 users missing it.
Among 76 users, only 5 could be removed, and probably 70 don't need the
full set of dependencies.
A good approach would consist in splitting that file in 3 parts:
- one for error output ("errors" ?).
- one for log_format processing
- and one for actual logging.
It was moved without any change, however many callers didn't need it at
all. This was a consequence of the split of proto_http.c into several
parts that resulted in many locations still referencing it.
Almost no change except moving the cli_kw struct definition after the
defines. Almost all users had both types&proto included, which is not
surprising since this code is old and it used to be the norm a decade
ago. These places were cleaned.
Just some minor reordering, and the usual cleanup of call places for
those which didn't need it. We don't include the whole tools.h into
stats-t anymore but just tools-t.h.
The type file was slightly tidied. The cli-specific APPCTX_CLI_ST1_* flag
definitions were moved to cli.h. The type file was adjusted to include
buf-t.h and not the huge buf.h. A few call places were fixed because they
did not need this include.
The files were moved almost as-is, just dropping arg-t and auth-t from
acl-t but keeping arg-t in acl.h. It was useful to revisit the call places
since a handful of files used to continue to include acl.h while they did
not need it at all. Struct stream was only made a forward declaration
since not otherwise needed.
The type file is becoming a mess, half of it is for the proxy protocol,
another good part describes conn_streams and mux ops, it would deserve
being split again. At least it was reordered so that elements are easier
to find, with the PP-stuff left at the end. The MAX_SEND_FD macro was moved
to compat.h as it's said to be the value for Linux.
The TASK_IS_TASKLET() macro was moved to the proto file instead of the
type one. The proto part was a bit reordered to remove a number of ugly
forward declarations of static inline functions. About ten C and H
files had their dependency dropped since they were not using anything
from task.h.
global.h was one of the messiest files, it has accumulated tons of
implicit dependencies and declares many globals that make almost all
other file include it. It managed to silence a dependency loop between
server.h and proxy.h by being well placed to pre-define the required
structs, forcing struct proxy and struct server to be forward-declared
in a significant number of files.
It was split in two: one part with the global struct definition and the
few macros and flags, and the rest containing the function prototypes.
The UNIX_MAX_PATH definition was moved to compat.h.
It was moved as-is, except for extern declaration of pattern_reference.
A few C files used to include it but didn't need it anymore after having
been split apart so this was cleaned.
A few includes were missing in each file. A definition of
struct polled_mask was moved to fd-t.h. The MAX_POLLERS macro was
moved to defaults.h.
Stdio used to be silently inherited from whatever path but it's needed
for list_pollers() which takes a FILE* and which can thus not be
forward-declared.
And also rename standard.c to tools.c. The original split between
tools.h and standard.h dates from version 1.3-dev and was mostly an
accident. This patch moves the files back to what they were expected
to be, and takes care of not changing anything else. However this
time tools.h was split between functions and types, because it contains
a small number of commonly used macros and structures (e.g. name_desc)
which in turn cause the massive list of includes of tools.h to conflict
with the callers.
They remain the ugliest files of the whole project and definitely need
to be cleaned and split apart. A few types are defined there only for
functions provided there, and some parts are even OS-specific and should
move somewhere else, such as the symbol resolution code.
The pretty confusing "buffer.h" was in fact not the place to look for
the definition of "struct buffer" but the one responsible for dynamic
buffer allocation. As such it defines the struct buffer_wait and the
few functions to allocate a buffer or wait for one.
This patch moves it renaming it to dynbuf.h. The type definition was
moved to its own file since it's included in a number of other structs.
Doing this cleanup revealed that a significant number of files used to
rely on this one to inherit struct buffer through it but didn't need
anything from this file at all.
types/freq_ctr.h was moved to haproxy/freq_ctr-t.h and proto/freq_ctr.h
was moved to haproxy/freq_ctr.h. Files were updated accordingly, no other
change was applied.
This one is included almost everywhere and used to rely on a few other
.h that are not needed (unistd, stdlib, standard.h). It could possibly
make sense to split it into multiple parts to distinguish operations
performed on timers and the internal time accounting, but at this point
it does not appear much important.
This one used to be stored into debug.h but the debug tools got larger
and require a lot of other includes, which can't use BUG_ON() anymore
because of this. It does not make sense and instead this macro should
be placed into the lower includes and given its omnipresence, the best
solution is to create a new bug.h with the few surrounding macros needed
to trigger bugs and place assertions anywhere.
Another benefit is that it won't be required to add include <debug.h>
anymore to use BUG_ON, it will automatically be covered by api.h. No
less than 32 occurrences were dropped.
The FSM_PRINTF macro was dropped since not used at all anymore (probably
since 1.6 or so).
This file is to openssl what compat.h is to the libc, so it makes sense
to move it to haproxy/. It could almost be part of api.h but given the
amount of openssl stuff that gets loaded I fear it could increase the
build time.
Note that this file contains lots of inlined functions. But since it
does not depend on anything else in haproxy, it remains safe to keep
all that together.
All files that were including one of the following include files have
been updated to only include haproxy/api.h or haproxy/api-t.h once instead:
- common/config.h
- common/compat.h
- common/compiler.h
- common/defaults.h
- common/initcall.h
- common/tools.h
The choice is simple: if the file only requires type definitions, it includes
api-t.h, otherwise it includes the full api.h.
In addition, in these files, explicit includes for inttypes.h and limits.h
were dropped since these are now covered by api.h and api-t.h.
No other change was performed, given that this patch is large and
affects 201 files. At least one (tools.h) was already freestanding and
didn't get the new one added.
This is where other imported components are located. All files which
used to directly include ebtree were touched to update their include
path so that "import/" is now prefixed before the ebtree-related files.
The ebtree.h file was slightly adjusted to read compiler.h from the
common/ subdirectory (this is the only change).
A build issue was encountered when eb32sctree.h is loaded before
eb32tree.h because only the former checks for the latter before
defining type u32. This was addressed by adding the reverse ifdef
in eb32tree.h.
No further cleanup was done yet in order to keep changes minimal.
This reverts commit 4fed93eb72.
This commit was simplifying the certificate chain loading with
SSL_CTX_add_extra_chain_cert() which is available in all SSL libraries.
Unfortunately this function is not compatible with the
multi-certificates bundles: it has the effect of concatenating the
chains of all certificate types instead of creating a chain for each
type (RSA, ECDSA etc.)
Should fix issue #655.
Using ssl-max-ver without ssl-min-ver is ambiguous.
When the ssl-min-ver is not configured, and ssl-max-ver is set to a
value lower than the default ssl-min-ver (which is TLSv1.2 currently),
set the ssl-min-ver to the value of ssl-max-ver, and emit a warning.
Since HAProxy 1.8, the TLS default minimum version was set to TLSv1.0 to
avoid using the deprecated SSLv3.0. Since then, the standard changed and
the recommended TLS version is now TLSv1.2.
This patch changes the minimum default version to TLSv1.2 on bind lines.
If you need to use prior TLS version, this is still possible by
using the ssl-min-ver keyword.
In order to move all SSL sample fetches to another file, moving the
ssl_sock_ctx definition to a .h file is required.
Unfortunately it became a cross-dependency hell to solve because of
the struct wait_event field, so <types/connection.h> is needed, which
created other problems.