If the SSL_CTX of a previous instance (ckch_inst) was used as a
default_ctx, replace the default_ctx of the bind_conf by the first
SSL_CTX inserted in the SNI tree.
Use the RWLOCK of the sni tree to handle the change of the default_ctx.
When trying to update a certificate <file>.{rsa,ecdsa,dsa} which does
not exist, while <file> was used as a regular file in the
configuration, the error was ambiguous. Correct it so that a
'certificate not found' error is returned.
Commit bc6ca7c ("MINOR: ssl/cli: rework 'set ssl cert' as 'set/commit'")
broke the ability to commit a unique certificate which does not use a
bundle extension .{rsa,ecdsa,dsa}.
When doing a 'set ssl cert' with a certificate which does not exist in
the configuration, appctx->ctx.ssl.old_ckchs->path was duplicated while
appctx->ctx.ssl.old_ckchs was NULL, resulting in a NULL dereference.
Move the code so the 'not referenced' error is reported before this
dereference.
The fix from commit 541a534 ("BUG/MINOR: ssl/cli: fix build of SCTL and
OCSP") was not enough.
[wla: It will probably be better later to put the #ifdef in the
functions so they can return an error if they are not implemented]
Remove the leftovers of the certificate + bundle updating in 'ssl set
cert' and 'commit ssl cert'.
* Remove the 'it' variable in appctx.ctx.ssl.
* Stop doing everything twice.
* Indent
This patch splits the 'set ssl cert' CLI command into 2 commands.
The previous way of updating a certificate on the CLI was limited with
bundles: it was only able to apply one of the three parts of the
certificate during an update, which meant that 3 updates were needed to
update a full 3-cert bundle.
It was also not possible to apply several parts of a certificate
atomically, with the ability to roll back on error (for example
applying a .pem, then a .ocsp, then a .sctl).
The command 'set ssl cert' will now duplicate the certificate (or
bundle) and update it in a temporary transaction.
The second command 'commit ssl cert' will commit all the changes made
during the transaction for the certificate.
This commit breaks the ability to update a certificate which was used as
a unique file and as a bundle in the HAProxy configuration. This way of
using the certificates wasn't making any sense.
Example:
// For a bundle:
$ echo -e "set ssl cert localhost.pem.rsa <<\n$(cat kikyo.pem.rsa)\n" | socat /tmp/sock1 -
Transaction created for certificate localhost.pem!
$ echo -e "set ssl cert localhost.pem.dsa <<\n$(cat kikyo.pem.dsa)\n" | socat /tmp/sock1 -
Transaction updated for certificate localhost.pem!
$ echo -e "set ssl cert localhost.pem.ecdsa <<\n$(cat kikyo.pem.ecdsa)\n" | socat /tmp/sock1 -
Transaction updated for certificate localhost.pem!
$ echo "commit ssl cert localhost.pem" | socat /tmp/sock1 -
Committing localhost.pem.
Success!
Actually gcc believes it has detected a possible truncation but it
cannot happen since the output string is necessarily at least one char
shorter than what it expects. However, addressing it is easy and
removes the need for an intermediate copy, so let's do it.
Rework the 'set ssl cert' IO handler so it is clearer.
Use its own SETCERT_ST_* states instead of the STAT_ST ones.
Use an inner loop in SETCERT_ST_GEN and SETCERT_ST_INSERT to do the work
for both the certificate and the bundle.
The io_release() is now called only when the CKCH spinlock is taken so
we can unlock during a release without any condition.
Since commit 90b098c ("BUG/MINOR: cli: don't call the kw->io_release if
kw->parse failed"), the io_release() callback is not called anymore when
the parse() failed. Call it directly on the error path of the
cli_parse_set_cert() function.
Commit 541a534 ("BUG/MINOR: ssl/cli: fix build of SCTL and OCSP")
introduced a bug in which we iterate outside the array during a 'set
ssl cert' if HAProxy wasn't built with OCSP or SCTL support.
To avoid affecting the traffic too much during a certificate update,
create the SNIs in an IO handler which yields every 10 ckch instances.
This way haproxy continues to respond even when updating a certificate
which has 50 000 instances.
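As an illustration, the batching logic boils down to something like the
following self-contained sketch (this is not HAProxy's applet API; the
work_state struct and the do_one callback are stand-ins):
#include <stddef.h>
/* Handle at most 10 items per call and report whether more work remains,
 * so the IO handler can yield and be rescheduled instead of blocking the
 * event loop. Returns 1 when done, 0 to be called again, -1 on error. */
struct work_state {
    size_t next;                        /* next instance to rebuild */
};
static int process_some(struct work_state *st, size_t total,
                        int (*do_one)(size_t idx))
{
    size_t batch = 0;

    while (st->next < total) {
        if (!do_one(st->next))
            return -1;                  /* error: the caller rolls back */
        st->next++;
        if (++batch >= 10)
            return 0;                   /* yield after 10 ckch instances */
    }
    return 1;                           /* all instances were rebuilt */
}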
When updating a certificate from the CLI, it is not possible to revert
some of the changes if part of the certificate update failed. We now
create a copy of the ckch_store for the changes so we can revert if
something goes wrong.
Even if the ckch_store was affected before this change, it wasn't
affecting the SSL_CTXs used for the traffic. It was only a problem when
trying to update a certificate again after the first attempt failed.
The new ckch_store is also linked to the new sni_ctxs so it's easy to
insert the sni_ctxs before removing the old ones.
ssl_sock_copy_cert_key_and_chain() copies the content of a <src>
cert_key_and_chain to a <dst>.
It increases the refcount on every SSL structure (X509, DH, private
key...) and allocates new buffers for the other fields.
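For illustration, a reduced sketch of such a copy, assuming OpenSSL >=
1.1.0 for the *_up_ref() helpers (the structure below is a stand-in, not
HAProxy's exact cert_key_and_chain, and error unwinding is omitted):
#include <string.h>
#include <openssl/evp.h>
#include <openssl/x509.h>
/* Shared SSL objects get their refcount bumped, plain buffers are
 * duplicated. */
struct ckch_sketch {
    X509 *cert;
    EVP_PKEY *key;
    STACK_OF(X509) *chain;
    char *ocsp_response;                /* plain buffer: duplicated */
};
static int ckch_copy_sketch(struct ckch_sketch *dst,
                            const struct ckch_sketch *src)
{
    if (src->cert && !X509_up_ref(src->cert))
        return -1;
    if (src->key && !EVP_PKEY_up_ref(src->key))
        return -1;
    dst->cert  = src->cert;
    dst->key   = src->key;
    dst->chain = src->chain ? X509_chain_up_ref(src->chain) : NULL;
    dst->ocsp_response = src->ocsp_response ? strdup(src->ocsp_response) : NULL;
    return 0;
}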
It is now possible to update new parts of a CKCH from the CLI.
Currently you are able to update a PEM (by default), an OCSP response
in base64, an issuer file, and an SCTL file.
Each update will create a new CKCH and new sni_ctx structures, so we
will need a "commit" command later to apply several changes and create
the sni_ctx only once.
Split ssl_sock_load_crt_file_into_ckch() in two functions:
- ssl_sock_load_files_into_ckch(), which is dedicated to opening every
file related to a filename during configuration parsing (PEM, sctl,
ocsp, issuer, etc.)
- ssl_sock_load_pem_into_ckch() which is dedicated to opening a PEM,
either in a file or a buffer
ssl_sock_load_issuer_file_into_ckch() is a new function which is able
to load an issuer from a buffer or from a file into a CKCH.
Use this function directly in ssl_sock_load_crt_file_into_ckch().
The ssl_sock_load_sctl_from_file() function was modified to fill a
struct cert_key_and_chain directly.
The function prototype was normalized in order to be used with the CLI
payload parser.
This function either reads text from a buffer or reads a file on the
filesystem.
It fills the sctl buffer of the struct cert_key_and_chain.
The ssl_sock_load_ocsp_response_from_file() function was modified to
fill a struct cert_key_and_chain directly.
The function prototype was normalized in order to be used with the CLI
payload parser.
This function either reads a base64 string from a buffer or reads a
binary file on the filesystem.
It fills the ocsp_response buffer of the struct cert_key_and_chain.
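The normalized prototypes roughly take the following shape (illustrative
declarations, the exact signatures in the tree may differ):
/* <path> is read from the filesystem when <buf> is NULL, otherwise <buf>
 * holds the CLI payload; the result is stored into <ckch> and <err>
 * receives an allocated error message. Returns 0 on success. */
struct cert_key_and_chain;
int ssl_sock_load_sctl_from_file(const char *path, char *buf,
                                 struct cert_key_and_chain *ckch, char **err);
int ssl_sock_load_ocsp_response_from_file(const char *path, char *buf,
                                          struct cert_key_and_chain *ckch,
                                          char **err);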
A trick is used to set the SESSION_ID and SESSION_ID_CONTEXT lengths
to 0 and avoid ASN1 encoding of these values.
There is no specific function to set the length of those parameters to
0, so we fake it by calling the setter functions with the same buffer
but a length of zero.
However, those functions don't check for a zero length before
performing a memcpy of length zero with src and dst pointing to the
same buffer, causing valgrind to bark.
So the code was reworked to pass them different pointers even if the
buffer content is unused.
In a second step, when resetting the value, a memcpy overlap happened
on the SESSION_ID_CONTEXT. It was reworked and is now reset using the
constant global value SHCTX_APPNAME, which is a different pointer with
the same content.
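A sketch of the trick, using OpenSSL's SSL_SESSION_set1_id() and
SSL_SESSION_set1_id_context() setters (available in recent OpenSSL
releases) purely for illustration:
#include <openssl/ssl.h>
/* Zero the lengths so ASN1 encoding skips these fields, but hand the
 * setters a pointer that cannot alias the session's internal buffers,
 * since they perform a zero-length memcpy() that valgrind flags when
 * src and dst overlap. */
static void strip_session_ids(SSL_SESSION *sess)
{
    static const unsigned char dummy[1] = { 0 }; /* distinct pointer, content unused */

    SSL_SESSION_set1_id(sess, dummy, 0);
    SSL_SESSION_set1_id_context(sess, dummy, 0);
}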
This patch should be backported in every version since ssl
support was added to haproxy if we want valgrind to shut up.
This is tracked in github issue #56.
When an error occurred in the function bind_parse_tls_ticket_keys()
during configuration parsing, the opened file was not always closed. To
fix the bug, all errors are caught at the same place, where all
resources are released.
This patch fixes bug #325. It must be backported as far as 1.7.
If openssl 1.1.1 is used, commit c2aae74f0 mistakenly enables the DH
automatic feature from openssl instead of the ECDH automatic feature.
There is no impact for the ECDH one because the feature is always
enabled for that version. But doing this, 'tune.ssl.default-dh-param'
was completely ignored for DH parameters.
This patch fixes the bug by calling 'SSL_CTX_set_ecdh_auto' instead of
'SSL_CTX_set_dh_auto'.
Currently some users may be using a 2048-bit DH parameter while
thinking they're using a 1024-bit one. Doing this, they may experience
performance issues on light hardware.
This patch warns the user if haproxy fails to configure the given DH
parameter. In this case, and if the openssl version is > 1.1.0, haproxy
will let openssl automatically choose a default DH parameter. For other
openssl versions, the DH ciphers won't be usable.
A common case of failure is the security level of openssl.cnf, which
could refuse a 1024-bit DH parameter for a 2048-bit key:
$ cat /etc/ssl/openssl.cnf
...
[system_default_sect]
MinProtocol = TLSv1
CipherString = DEFAULT@SECLEVEL=2
This should be backported into any branch containing the commit
c2aae74f0.
It requires all or part of the previous CLEANUP series.
This addresses github issue #324.
ssl_sock_load_dh_params used to return >0 or -1 to indicate success
or failure. Make it return a set of ERR_* instead so that its callers
can transparently report its status. Given that its callers only used
to know about ERR_ALERT | ERR_FATAL, this is the only code returned
for now. An error message was added in the case of failure and the
comment was updated.
ssl_sock_put_ckch_into_ctx used to return 0 or >0 to indicate success
or failure. Make it return a set of ERR_* instead so that its callers
can transparently report its status. Given that its callers only used
to know about ERR_ALERT | ERR_FATAL, this is the only code returned
for now. And a comment was updated.
ckch_inst_new_load_store() and ckch_inst_new_load_multi_store used to
return 0 or >0 to indicate success or failure. Make it return a set of
ERR_* instead so that its callers can transparently report its status.
Given that its callers only used to know about ERR_ALERT | ERR_FATAL,
this is the only code returned for now. And the comment was updated.
ssl_sock_load_ckchs() used to return 0 or >0 to indicate success or
failure even though this was not documented. Make it return a set of
ERR_* instead so that its callers can transparently report its status.
Given that its callers only used to know about ERR_ALERT | ERR_FATAL,
this is the only code returned for now. And a comment was added.
These functions were returning only 0 or 1 to mention success or error,
and made it impossible to return a warning. Let's make them return error
codes from ERR_* and map all errors to ERR_ALERT|ERR_FATAL for now since
this is the only code that was set on non-zero return value.
In addition some missing comments were added or adjusted around the
functions' return values.
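The resulting convention looks roughly like the standalone sketch below
(the ERR_* values are illustrative stand-ins for HAProxy's flags, and
asprintf() stands in for the memprintf() helper the code actually uses):
#define _GNU_SOURCE
#include <stdio.h>
#define ERR_NONE  0x00                  /* illustrative values */
#define ERR_ALERT 0x10
#define ERR_FATAL 0x80
/* Return a set of ERR_* flags and fill <err> with an allocated message
 * on failure, so the caller can report the status transparently. */
static int load_something(const char *path, char **err)
{
    FILE *f = fopen(path, "r");

    if (!f) {
        if (err && asprintf(err, "unable to open '%s'.\n", path) < 0)
            *err = NULL;
        return ERR_ALERT | ERR_FATAL;
    }
    fclose(f);
    return ERR_NONE;
}
A caller then simply accumulates the result with
'cfgerr |= load_something(path, &err);' and reports <err> when
ERR_FATAL is set.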
Commit 246c024 ("MINOR: ssl: load the ocsp in/from the ckch") broke
the loading of OCSP files. The function
ssl_sock_load_ocsp_response_from_file() was not returning 0 upon
success, which led to an error after the .ocsp was read.
The error messages for OCSP in ssl_sock_load_crt_file_into_ckch() add
a double extension to the filename, which can be confusing: the
messages reference a .issuer.issuer file.
The SSL engines code was written below the OCSP #ifdef, which means
you can't build the engines code if OCSP is deactivated in the SSL
lib.
Could be backported in every version since 1.8.
A NULL dereference can occur when inserting SNIs while checking for
duplicates, if several sni_ctx already exist with the same key.
Fix issue #321.
Don't try to load the files containing the issuer and the OCSP
response each time we generate an SSL_CTX.
The .ocsp and the .issuer are now loaded in the struct
cert_key_and_chain only once, and then loaded from this structure when
creating an SSL_CTX.
Don't try to load the file containing the sctl each time we generate
an SSL_CTX.
The .sctl is now loaded in the struct cert_key_and_chain only once, and
then loaded from this structure when creating an SSL_CTX.
Note that this now makes it possible to use sctl with multi-cert
bundles.
$ echo -e "set ssl cert certificate.pem <<\n$(cat certificate2.pem)\n" | \
socat stdio /var/run/haproxy.stat
Certificate updated!
The operation is locked at the ckch level with a HA_SPINLOCK_T which
prevents the ckch architecture (ckch_store, ckch_inst..) from being
modified at the same time. So you can't perform two certificate updates
at the same time from multiple CLI connections.
SNI trees are also locked with a HA_RWLOCK_T so reading operations are
locked only during a certificate update.
Bundles are supported but you need to update each file (.rsa|.ecdsa|.dsa)
independently. If a file is used in the configuration as a bundle AND
as a unique certificate, both will be updated.
Bundles, directories and crt-list are supported, however filters in
crt-list are currently unsupported.
The code tries to allocate every SNI and certificate instance first, so
it can roll back the operation if that was unsuccessful.
If you have too many instances of the certificate (at least 20 000 in
my tests on my laptop), the function can take too much time and be
killed by the watchdog. This will be fixed later. Also with too many
certificates it's possible that socat exits before the end of the
generation without displaying a message, consider increasing the socat
timeout in this case (-t2 for example).
The size of the certificate is currently limited by the maximum size of
a payload, that must fit in a buffer.
The ssl_sock_load_{multi}_ckchs() functions were renamed and modified:
- they allocate a ckch_inst and load the SNIs in it
- they return a ckch_inst or NULL
- the sni_ctx are not added to the SNI trees from there anymore
- they were renamed to ckch_inst_new_load_{multi}_store()
- a new ssl_sock_load_ckchs() function calls
ckch_inst_new_load_{multi}_store() and adds the sni_ctx to the SNI
trees.
ssl_sock_load_multi_ckchs() is now able to fail without polluting the
bind_conf trees and leaking memory.
It is a prerequisite for loading certificates on the fly from the CLI.
The insertion of the sni_ctxs in the trees is done once everything has
been allocated correctly.
ssl_sock_load_ckchn() is now able to fail without polluting the
bind_conf trees and leaking memory.
It is a prerequisite for loading certificates on the fly from the CLI.
The insertion of the sni_ctxs in the trees is done once everything has
been allocated correctly.
In order to allow the creation of sni_ctx at runtime, we need to split
the function to allow rollback.
We need to be able to allocate all the required sni_ctxs before
inserting them, so we can roll back if an allocation fails.
The function was split into 2 parts.
The first one, ckch_inst_add_cert_sni(), allocates a struct sni_ctx,
fills it with the right data and inserts it in the ckch_inst's list of
sni_ctx.
The second one takes every sni_ctx in the ckch_inst and inserts them in
the bind_conf's sni tree.
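The shape of this two-step scheme, as a self-contained sketch with
simplified stand-in types (nothing is published to the shared tree until
every allocation has succeeded):
#include <stdlib.h>
struct sni_sketch {
    struct sni_sketch *next;
    const char *name;
};
/* step 1: allocate every entry into a private list; on failure free the
 * list and report an error, nothing has been published yet */
static int prepare_snis(struct sni_sketch **head, const char *const *names,
                        size_t n)
{
    size_t i;

    for (i = 0; i < n; i++) {
        struct sni_sketch *s = calloc(1, sizeof(*s));

        if (!s)
            goto rollback;
        s->name = names[i];
        s->next = *head;
        *head = s;
    }
    return 0;
 rollback:
    while (*head) {
        struct sni_sketch *s = *head;

        *head = s->next;
        free(s);
    }
    return -1;
}
/* step 2: only now insert every prepared entry into the shared tree */
static void publish_snis(struct sni_sketch *head,
                         void (*insert_in_tree)(struct sni_sketch *))
{
    for (; head; head = head->next)
        insert_in_tree(head);
}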
struct ckch_inst represents an instance of a certificate (ckch_node)
used in a bind_conf. Every sni_ctx created for one ckch_node in a
bind_conf is linked in this structure.
This patch allocates a ckch_inst for each bind_conf and inserts the
sni_ctx in its linked list.
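A simplified view of the relationship (illustrative field names, not
HAProxy's exact layout):
struct list_sketch { struct list_sketch *n, *p; };
struct ckch_node;                       /* the stored certificate(s) */
struct bind_conf;                       /* the bind line */
/* one instance per (certificate, bind_conf) pair */
struct ckch_inst_sketch {
    struct ckch_node *node;             /* certificate it was built from */
    struct bind_conf *bind_conf;        /* bind_conf it was generated for */
    struct list_sketch sni_ctx;         /* every sni_ctx created for this pair */
    struct list_sketch by_node;         /* link in the ckch_node's list of instances */
};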
The ssl_sock_populate_sni_keytypes_hplr() function does not return an
error upon an allocation failure.
The process would probably crash during configuration parsing if the
allocation fails, since it tries to copy data into the allocated
memory.
This patch could be backported as far as 1.5.
This patch frees the sni_keytype nodes once the sni_ctxs have been
allocated in ssl_sock_load_multi_ckchn().
Could be backported in every version using the multi-cert SSL bundles.
The ssl_sock_add_cert_sni() function never returns an error when a
sni_ctx allocation fails. It silently ignores the problem and continues
trying to allocate the other SNIs.
It is unlikely that a sni_ctx allocation will succeed after one
failure, and starting with a configuration that misses some SNIs is not
desirable. To avoid any problem we now return -1 upon a sni_ctx
allocation error and stop the configuration parsing.
This patch must be backported in every version supporting the crt-list
sni filters (as far as 1.5).
A ckch_store is a storage which contains a pointer to one or several
cert_key_and_chain structures.
This patch renames ckch_node to ckch_store, and ckch_n, ckchn to ckchs.
src/ssl_sock.c:2928:12: warning: ‘ssl_sock_is_ckch_valid’ defined but not used [-Wunused-function]
static int ssl_sock_is_ckch_valid(struct cert_key_and_chain *ckch)
This function is only used with openssl >= 1.0.2, this patch adds a
condition to build the function.
In several SSL functions, the XPRT context is retrieved before any check on the
connection. In the function ssl_sock_is_ssl(), a test suggests the connection
may be null. So, it is safer to test the ssl connection before retrieving its
XPRT context. It removes any ambiguities and prevents possible null pointer
dereferences.
This patch fixes the issue #265. It must be backported to 2.0.
'ssl_sock' wasn't fully initialized so a new session can inherit some
flags from an old one.
This causes some fetches, related to client's certificate presence or
its verify status and errors, returning erroneous values.
This issue could generate other unexpected behaviors because a new
session could also inherit other flags such as SSL_SOCK_ST_FL_16K_WBFSIZE,
SSL_SOCK_SEND_UNLIMITED, or SSL_SOCK_RECV_HEARTBEAT from an old session.
This must be backported to 2.0 but it's useless for previous versions.
There were 221 places where a status message or an error message was
built to be returned on the CLI. All of them were replaced to use
cli_err(), cli_msg(), cli_dynerr() or cli_dynmsg() depending on what
was expected.
This removed a lot of duplicated code because most of the time, 4 lines
are replaced by a single, safer one.
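For illustration, the transformation at each call place was roughly the
following (code fragments, with field names as found in the CLI applet;
the exact message obviously varies). Before:
    appctx->ctx.cli.severity = LOG_ERR;
    appctx->ctx.cli.msg = "'set ssl cert' expects a filename\n";
    appctx->st0 = CLI_ST_PRINT;
    return 1;
After:
    return cli_err(appctx, "'set ssl cert' expects a filename\n");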
CO_FL_EARLY_SSL_HS/CO_FL_EARLY_DATA are removed for BoringSSL. Early
data can be checked via the BoringSSL API, and ssl_fc_has_early can use
it.
This should be backported to all versions till 1.8.
Since BoringSSL commit 777a2391 "Hold off flushing NewSessionTicket
until write.", 0-RTT doesn't work. It appears that half-RTT data
(responses to 0-RTT) never worked before the BoringSSL fix. For HAProxy
the regression comes from 010941f8 "BUG/MEDIUM: ssl: Use the early_data
API the right way.": the problem is linked to the logic of
CO_FL_EARLY_SSL_HS used for OpenSSL. With BoringSSL, the handshake is
done before reading early data, and 0-RTT data and half-RTT data are
processed as normal data: CO_FL_EARLY_SSL_HS/CO_FL_EARLY_DATA are not
needed, so simply remove them.
This should be backported to all versions till 1.8.
When using a ckch we should never try to free its content, because it
won't be usable afterwards and this can result in a NULL dereference
during parsing.
The content was previously freed because the ckch wasn't stored in a
tree to be used later; now that we use it multiple times, we need to
keep the data.
Recent changes use struct cert_key_and_chain to load all certificates
in frontends; this structure was previously used only to load
multi-cert bundles, which are supported only on openssl >= 1.0.2.
OPENSSL_NO_DH can be defined to avoid obsolete and heavy DH processing.
With OPENSSL_NO_DH, parsing the entire PEM file to look for a DHparam
is a waste of time.
Loading a PEM certificate and using it in a CTX are now decorrelated.
Checking the certificate and private key consistency can be done
earlier: in the loading phase instead of the CTX set phase.
cert_key_and_chain handling is now outside the openssl 1.0.2 #if: the
code must be libssl compatible. SSL_CTX_add1_chain_cert and
SSL_CTX_set1_chain require openssl >= 1.0.2, so fall back to the legacy
SSL_CTX_add_extra_chain_cert when SSL_CTX_set1_chain is not provided.
Load the DH param at the same time as the certificate: we don't need
to open the file once more and read it again. We store it in the
ckch_node.
There is a minor change compared to the previous way of loading the DH
param in a bundle. With a bundle, the DH param in a certificate file
was never loaded; only the global DH or the default DH was used. Now
the DH param from a certificate file can be used.
Don't read a certificate file again if it was already stored in the
ckchn tree. It allows HAProxy to start more quickly if the same
certificate is used at different places in the configuration.
HAProxy does the lookup in the ssl_sock_load_cert() function; doing it
at this level allows skipping the read of the certificate from the
filesystem.
If the certificate is not found in the tree, we insert the ckch_node in
the tree once the certificate has been read from the filesystem; the
filename or the bundle name is used as the key.
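The lookup-before-load logic boils down to the following self-contained
sketch; HAProxy keys an ebtree on the file name, a plain linked list is
used here only to keep the example standalone, and load_from_disk()
stands in for the real file parsing:
#include <stdlib.h>
#include <string.h>
struct ckchs_sketch {
    struct ckchs_sketch *next;
    char *path;                         /* key: certificate or bundle name */
    void *data;                         /* parsed content */
};
static struct ckchs_sketch *ckchs_cache;
static struct ckchs_sketch *ckchs_load_or_reuse(const char *path,
                                                void *(*load_from_disk)(const char *))
{
    struct ckchs_sketch *node;

    for (node = ckchs_cache; node; node = node->next)
        if (strcmp(node->path, path) == 0)
            return node;                /* already loaded: no filesystem access */

    node = calloc(1, sizeof(*node));
    if (!node)
        return NULL;
    node->path = strdup(path);
    node->data = node->path ? load_from_disk(path) : NULL;
    if (!node->path || !node->data) {   /* only index successfully parsed files */
        free(node->path);
        free(node);
        return NULL;
    }
    node->next = ckchs_cache;
    ckchs_cache = node;
    return node;
}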
Split the functions which open the certificates.
Instead of opening directly the certificates and inserting them directly
into a SSL_CTX, we use a struct cert_key_and_chain to store them in
memory and then we associate a SSL_CTX to the certificate stored in that
structure.
Introduce the struct ckch_node for the multi-cert bundles so we can
store multiple cert_key_and_chain in the same structure.
The functions ssl_sock_load_multi_cert() and ssl_sock_load_cert_file()
were modified so they don't open the certificates on the filesystem
anymore (they still open the sctl and ocsp though). These functions
were renamed ssl_sock_load_ckchn() and ssl_sock_load_multi_ckchn().
The new function ckchn_load_cert_file() is in charge of loading the
files in the cert_key_and_chain. (TODO: load ocsp and sctl from there
too).
The ultimate goal is to be able to load a certificate from a certificate
tree without doing any filesystem access, so we don't try to open it
again if it was already loaded, and we share its configuration.
This structure was only used in the case of the multi-cert bundle.
Using these primitives everywhere we load the files is a first step in
the deduplication of the code.
This commit merges the function ssl_sock_load_cert_file() and
ssl_sock_load_cert_chain_file().
The goal is to refactor the SSL code and use the cert_key_and_chain
struct to load everything.
The old module proto_http does not exist anymore. All code dedicated to the HTTP
analysis is now grouped in the file proto_htx.c. So, to finish the polishing
after removing the legacy HTTP code, proto_htx.{c,h} files have been moved in
http_ana.{c,h} files.
In addition, all HTX analyzers and related functions prefixed with "htx_" have
been renamed to start with "http_" instead.
Commit 54832b97 ("BUILD: enable several LibreSSL hacks, including")
changed empty handshake detection in OpenSSL <= 1.0.2 and LibreSSL,
from accessing packet_length directly (not available in LibreSSL) to
calling SSL_state() instead.
However, SSL_state() appears to be fully broken in both OpenSSL and
LibreSSL.
Since there is no possibility in LibreSSL to detect an empty handshake,
let's not try (like BoringSSL) and restore this functionality for
OpenSSL 1.0.2 and older, by reverting to the previous behavior.
Should be backported to 2.0.
Checks use ssl_sock_set_alpn() to set the ALPN if check-alpn is used, however
check-alpn failed to check if the connection was indeed using SSL, and thus,
would crash if check-alpn was used on a non-SSL connection. Fix this by
making sure the connection uses SSL before attempting to set the ALPN.
This should be backported to 2.0 and 1.9.
In ssl_subscribe(), make sure we have a ssl_sock_ctx before doing anything.
When ssl_sock_close() is called, it wakes any subscriber up, and that
subscriber may decide to subscribe again, for some reason. If we no longer
have a context, there's not much we can do.
This should be backported to 2.0.
When we're done sending/receiving early data, and we add the handshake
flags on the connection, make sure we wake the associated tasklet up, so that
the handshake will be initiated.
It's really confusing to call it a task because it's a tasklet and used
in places where tasks and tasklets are used together. Let's rename it
to tasklet to remove this confusion.
As reported in GH issue #109 and in discourse issue
https://discourse.haproxy.org/t/haproxy-returns-408-or-504-error-when-timeout-client-value-is-every-25d
the time parser doesn't report errors on overflows or underflows. This
is a recurring problem which additionally has the bad taste of taking a
long time before hitting the user.
This patch makes parse_time_err() return special error codes for
overflows and underflows, and adds checks at the call places to report
suitable errors depending on the requested unit. In practice,
underflows are almost never returned since the parsing function takes
care of rounding values up; one might still be reported if a 64-bit
overflow ends up at exactly zero after rounding. It is not really
possible to cut the patch into pieces as it changes the function's API,
hence all callers.
Tests were run on about every relevant part (cookie maxlife/maxidle,
server inter, stats timeout, timeout*, cli's set timeout command,
tcp-request/response inspect-delay).
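A caller-side sketch of the new contract (the PARSE_TIME_* values and
the prototype are re-declared here only so the fragment stands alone;
see the patch for the real definitions):
#include <stdio.h>
#define PARSE_TIME_OVER  ((char *)1)    /* stand-ins mirroring the patch */
#define PARSE_TIME_UNDER ((char *)2)
const char *parse_time_err(const char *text, unsigned *ret, unsigned unit_flags);
static int check_timer_arg(const char *arg, unsigned *val, unsigned unit_flags)
{
    const char *res = parse_time_err(arg, val, unit_flags);

    if (res == PARSE_TIME_OVER) {
        fprintf(stderr, "timer overflow in argument '%s'\n", arg);
        return -1;
    }
    if (res == PARSE_TIME_UNDER) {
        fprintf(stderr, "timer underflow in argument '%s'\n", arg);
        return -1;
    }
    if (res) {
        fprintf(stderr, "unexpected character '%c' in '%s'\n", *res, arg);
        return -1;
    }
    return 0;
}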
When creating a new ssl_sock_ctx, don't forget to initialize its send_recv
and recv_wait to NULL, or we may end up dereferencing random values, and
crash.
Now that the various handshakes come with their own XPRT, there's no
need for the CONN_FL_SOCK* flags, and the conn_sock_want|stop functions,
so garbage-collect them.
Add a new method to xprt_ops, remove_xprt. When called, if the provided
xprt_ctx is the same as the xprt's underlying xprt_ctx, it then uses the
new xprt provided, otherwise it calls the remove_xprt method of the next
xprt.
The goal is to be able to add a temporary xprt, that removes itself from
the chain when it did what it had to do. This will be used to implement
a pseudo-xprt for anything that just requires a handshake (such as the
proxy protocol).
As the SSL code may have different needs than the upper layer, i.e. it
may want to receive when the upper layer wants to write, handle the
subscribe ourselves instead of directly forwarding it to the underlying
xprt. The SSL code will remember any subscribe call, and wake the
tasklet when it is ready for more I/O.
This adds 4 sample fetches:
- ssl_fc_client_random
- ssl_fc_server_random
- ssl_bc_client_random
- ssl_bc_server_random
These fetches retrieve the client or server random value sent during the
handshake.
Their use is to be able to decrypt traffic sent using ephemeral ciphers. Tools
like wireshark expect a TLS log file with lines in a few known formats
(https://code.wireshark.org/review/gitweb?p=wireshark.git;a=blob;f=epan/dissectors/packet-tls-utils.c;h=28a51fb1fb029eae5cea52d37ff5b67d9b11950f;hb=HEAD#l5209).
Previously the only format supported using data retrievable from HAProxy state
was the one utilizing the Session-ID. However an SSL/TLS session ID is
optional, and thus cannot be relied upon for this purpose.
This change introduces the ability to extract the client random instead which
can be used for one of the other formats. The change also adds the ability to
extract the server random, just in case it might have some other use, as the
code change to support this was trivial.