Until now, the code assumed that it could get the offset to the first TLV
header simply by subtracting the length of the TLV part from the length of
the complete buffer. However, if the buffer contains actual data after
the header, this computation is wrong and leads haproxy to try to parse
TLV headers out of the proxied data.
This change fixes this by calculating the offset to the first TLV
header from the start of the buffer instead -- simply by adding the
size of the proxy protocol v2 header plus the address family-dependent
size of the address information block.
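As an illustration, here is a minimal sketch of the fixed computation,
using the sizes defined in the PROXY protocol v2 spec (names are
illustrative, not haproxy's actual code):

    #include <stddef.h>
    #include <stdint.h>

    #define PP2_HEADER_LEN      16  /* 12-byte signature + ver_cmd + fam + len */
    #define PP2_ADDR_LEN_INET   12  /* two IPv4 addresses + two ports */
    #define PP2_ADDR_LEN_INET6  36  /* two IPv6 addresses + two ports */
    #define PP2_ADDR_LEN_UNIX  216  /* two 108-byte socket paths */

    /* offset of the first TLV, counted from the start of the buffer */
    static size_t pp2_first_tlv_offset(uint8_t fam)
    {
        switch (fam >> 4) {     /* upper nibble encodes the address family */
        case 0x1: return PP2_HEADER_LEN + PP2_ADDR_LEN_INET;
        case 0x2: return PP2_HEADER_LEN + PP2_ADDR_LEN_INET6;
        case 0x3: return PP2_HEADER_LEN + PP2_ADDR_LEN_UNIX;
        default:  return PP2_HEADER_LEN; /* AF_UNSPEC: no address block */
        }
    }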
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved over there. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and the same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions in
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed; it doesn't seem like
any of them will need to move to the session.
It's now called conn_sock_drain() to make it clear that it only reads
at the sock layer and not at the data layer. The function was too big
to remain inlined and it's used at a few places where size counts.
This function is an equivalent to send() which operates over a connection
instead of a file descriptor. It checks that the control layer is ready
and that it's allowed to send. It automatically enables polling if it
cannot send. It simplifies the return checks by returning zero in all
cases where it cannot send so that the caller only has to care about
negative values indicating errors.
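A hedged sketch of these semantics, reduced to a bare file descriptor
(the real function works on a struct connection; names here are
illustrative):

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Returns the number of bytes sent, 0 when nothing could be sent
     * (the caller's poller should then watch the FD for write), or -1
     * on a genuine error -- the only case the caller must check.
     */
    static ssize_t sock_send_or_poll(int fd, const void *buf, size_t len)
    {
        ssize_t ret = send(fd, buf, len, MSG_DONTWAIT | MSG_NOSIGNAL);

        if (ret >= 0)
            return ret;
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            /* transient: enable write polling instead of reporting it */
            return 0;
        }
        return -1;
    }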
This patch makes it possible to create binds and servers in separate
namespaces. This can be used to proxy between multiple completely independent
virtual networks (with possibly overlapping IP addresses) and a
non-namespace-aware proxy implementation that supports the proxy protocol (v2).
The setup is something like this:
net1 on VLAN 1 (namespace 1) -\
net2 on VLAN 2 (namespace 2) -- haproxy ==== proxy (namespace 0)
net3 on VLAN 3 (namespace 3) -/
The proxy is configured to make server connections through haproxy and
to send the expected source/target addresses to haproxy using the proxy
protocol.
The network namespace setup on the haproxy node is something like this:
= 8< =
$ cat setup.sh
ip netns add 1
ip link add link eth1 name eth1.1 type vlan id 1
ip link set eth1.1 netns 1
ip netns exec 1 ip addr add 192.168.91.2/24 dev eth1.1
ip netns exec 1 ip link set eth1.1 up
...
= 8< =
= 8< =
$ cat haproxy.cfg
frontend clients
    bind 127.0.0.1:50022 namespace 1 transparent
    default_backend scb

backend scb
    mode tcp
    server server1 192.168.122.4:2222 namespace 2 send-proxy-v2
= 8< =
A bind line creates the listener in the specified namespace, and connections
originating from that listener also have their network namespace set to
that of the listener.
A server line either forces the connection to be made in a specified
namespace or may use the namespace from the client-side connection if that
was set.
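For illustration, here is a sketch of how a socket can be created inside
a named network namespace on Linux, assuming setns(2); the helper name
and the omitted restore of the original namespace are simplifications:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* e.g. ns_path = "/var/run/netns/1" as created by "ip netns add 1" */
    static int socket_in_netns(const char *ns_path, int domain, int type)
    {
        int sock_fd = -1;
        int ns_fd = open(ns_path, O_RDONLY);

        if (ns_fd < 0)
            return -1;
        if (setns(ns_fd, CLONE_NEWNET) == 0)   /* enter the namespace */
            sock_fd = socket(domain, type, 0); /* socket lives in it */
        /* a complete implementation would switch back to the original
         * namespace here before returning */
        close(ns_fd);
        return sock_fd;
    }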
For more details, please read the documentation included in the patch
itself.
Signed-off-by: KOVACS Tamas <ktamas@balabit.com>
Signed-off-by: Sarkozi Laszlo <laszlo.sarkozi@balabit.com>
Signed-off-by: KOVACS Krisztian <hidden@balabit.com>
Previously, if hdr_v2->len was less than the length of the protocol-
specific address information, we could have read past the end of the
buffer and initialized the sockaddr structure with junk.
Signed-off-by: KOVACS Krisztian <hidden@balabit.com>
[WT: this is only tagged medium since proxy protocol is only used from
trusted sources]
This must be backported to 1.5.
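A minimal sketch of the added check, assuming the address-block sizes
from the spec (illustrative names, not the actual patch):

    #include <arpa/inet.h>
    #include <stdint.h>

    /* Reject headers whose announced length cannot cover the address
     * block of the announced family, before copying anything into a
     * sockaddr. hdr_v2->len is in network byte order.
     */
    static int pp2_addr_len_ok(uint8_t fam, uint16_t len_net)
    {
        uint16_t len = ntohs(len_net);

        switch (fam >> 4) {
        case 0x1: return len >= 12;   /* AF_INET:  4+4+2+2 */
        case 0x2: return len >= 36;   /* AF_INET6: 16+16+2+2 */
        case 0x3: return len >= 216;  /* AF_UNIX:  108+108 */
        default:  return 1;           /* AF_UNSPEC carries no address */
        }
    }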
There are two sample fetches to get information about the presence of a
client certificate:
ssl_fc_has_crt is true if there is a certificate present in the current
connection;
ssl_c_used is true if there is a certificate present in the session.
If a session has been stopped and resumed, ssl_c_used can be true while
ssl_fc_has_crt is false.
In the client byte of the TLS TLV of Proxy Protocol V2, there is only one
bit to indicate whether a certificate is present on the connection. This
patch adds a second bit to indicate its presence for the session.
This maintains backward compatibility.
[wt: this should be backported to 1.5 to help maintain compatibility
between versions]
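The resulting bit layout of the client byte, as defined in the PROXY
protocol v2 spec:

    #define PP2_CLIENT_SSL        0x01  /* client connected over SSL/TLS */
    #define PP2_CLIENT_CERT_CONN  0x02  /* cert presented on this connection */
    #define PP2_CLIENT_CERT_SESS  0x04  /* cert present in the TLS session */

A resumed session where the client did not present its certificate again
would thus have PP2_CLIENT_CERT_SESS set but PP2_CLIENT_CERT_CONN clear,
matching the ssl_c_used/ssl_fc_has_crt distinction above.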
Last commit 77d1f01 ("BUG/MEDIUM: connection: fix memory corruption
when building a proxy v2 header") was wrong: it used &cn_trash instead
of cn_trash, resulting in a warning and in the client's SSL cert CN not
being stored at the proper location.
Thanks to Lukas Tribus for spotting this quickly.
This should be backported to 1.5 after the patch above is backported.
'ssl_sock_get_common_name', which applies to a connection, was renamed to
'ssl_sock_get_remote_common_name'. Currently, this function is only used
with protocol PROXYv2 to retrieve the client certificate's common name.
A further usage could be to retrieve the server certificate's common name
on an outgoing connection.
The "accept-proxy" statement of bind lines was still limited to version
1 of the protocol, while send-proxy-v2 is now available on the server
lines. This patch adds support for parsing v2 of the protocol on incoming
connections. The v2 header is automatically recognized so there is no
need for a new option.
This is in order to simplify the PPv2 header parsing code to look more
like the one provided as an example in the spec. No code change was
performed beyond just merging the proxy_addr union into the proxy_hdr_v2
struct.
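A sketch of the merged definition, using the field names from the spec's
example (haproxy's actual declaration may differ in detail):

    #include <stdint.h>

    struct proxy_hdr_v2 {
        uint8_t  sig[12];  /* \x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A */
        uint8_t  ver_cmd;  /* protocol version and command */
        uint8_t  fam;      /* address family and transport */
        uint16_t len;      /* bytes following this field, network order */
        union {
            struct {       /* for TCP/UDP over IPv4, len = 12 */
                uint32_t src_addr;
                uint32_t dst_addr;
                uint16_t src_port;
                uint16_t dst_port;
            } ip4;
            struct {       /* for TCP/UDP over IPv6, len = 36 */
                uint8_t  src_addr[16];
                uint8_t  dst_addr[16];
                uint16_t src_port;
                uint16_t dst_port;
            } ip6;
            struct {       /* for AF_UNIX sockets, len = 216 */
                uint8_t src_addr[108];
                uint8_t dst_addr[108];
            } unx;
        } addr;
    };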
If haproxy receives a connection over a unix socket and forwards it to
another haproxy instance using proxy protocol v1, it sends an UNKNOWN
protocol, which is rejected by the other side. Make the receiver accept
the UNKNOWN protocol as per the spec, and simply use the local
connection's addresses in that case.
This commit modifies the PROXY protocol V2 specification to support headers
longer than 255 bytes allowing for optional extensions. It implements the
PROXY protocol V2 which is a binary representation of V1. This will make
parsing more efficient for clients who will know in advance exactly how
many bytes to read. Also, it defines and implements some optional PROXY
protocol V2 extensions to send information about downstream SSL/TLS
connections. Support for PROXY protocol V1 remains unchanged.
It's easier and safer to rely on conn_ctrl_ready() everywhere than to
check the flag itself. It will also simplify adding extra checks later
if needed. Some useless checks for !ctrl have been removed, as the
CTRL_READY flag itself guarantees that ctrl is set.
We already have everything in the connection flags using the
CO_FL_DATA_*_ENA bits combined with the fd's ready state, so
we do not need to check fdtab[fd].ev anymore. This considerably
simplifies the connection handling logic since it doesn't
have to mix connection flags with past polling states.
The recv/send callbacks must check for readiness themselves instead of
having their callers do it. This will strengthen the test and will also
ensure we never refrain from calling a handshake handler because a
direction is being polled while the other one is ready.
We simply remove these functions and replace their calls with the
appropriate ones :
- if we're in the data phase, we can simply report wait on the FD
- if we're in the socket phase, we may also have to signal the
desire to read/write on the socket because it might not be
active yet.
These flags were used to report the readiness of the file descriptor.
Now this readiness is directly checked at the file descriptor itself.
This removes the need for constantly synchronizing updates between the
file descriptor and the connection and ensures that all layers share
the same level of information.
For now, the readiness is updated in conn_{sock,data}_poll_* by directly
touching the file descriptor. This must move to the lower layers instead
so that these functions can disappear as well. In this state, the change
works but is incomplete. It's sensitive enough already, so it's best to
avoid making it more complex.
Now the sock/data updates become much simpler because they just have to
enable/disable access to a file descriptor and not to care anymore about
its readiness.
This commit heavily changes the polling system in order to definitely
fix the frequent breakage of SSL, which needs to remember the last
EAGAIN before deciding whether to poll or not. Instead of the previous
and current states we had before, we now keep one state per direction
for each FD. An FD can be in up to 8 different states for each
direction, each being the result of a 3-bit combination. These 3 bits
indicate a wish to access the FD, the readiness of the FD and the
subscription of the FD to the polling system.
This means that it will now be possible to remember the state of a
file descriptor across disable/enable sequences that generally happen
during forwarding, where enabling reading on a previously disabled FD
would result in forgetting the EAGAIN flag it met last time.
Several new state manipulation functions have been introduced or
adapted :
- fd_want_{recv,send} : enable receiving/sending on the FD regardless
of its state (sets the ACTIVE flag) ;
- fd_stop_{recv,send} : stop receiving/sending on the FD regardless
of its state (clears the ACTIVE flag) ;
- fd_cant_{recv,send} : report a failure to receive/send on the FD
corresponding to EAGAIN (clears the READY flag) ;
- fd_may_{recv,send} : report the ability to receive/send on the FD
as reported by poll() (sets the READY flag) ;
Some functions are used to report the current FD status :
- fd_{recv,send}_active
- fd_{recv,send}_ready
- fd_{recv,send}_polled
Some functions were removed :
- fd_ev_clr(), fd_ev_set(), fd_ev_rem(), fd_ev_wai()
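An illustrative encoding of this state machine (bit names and positions
are assumptions, not haproxy's exact definitions):

    #define FD_EV_ACTIVE  0x01  /* we want to access the FD */
    #define FD_EV_READY   0x02  /* the FD is ready: no pending EAGAIN */
    #define FD_EV_POLLED  0x04  /* the FD is subscribed to the poller */

    struct fd_dir_state {
        unsigned char recv;  /* 3-bit combination for the recv direction */
        unsigned char send;  /* 3-bit combination for the send direction */
    };

    static inline void want_recv(struct fd_dir_state *s) { s->recv |= FD_EV_ACTIVE; }
    static inline void stop_recv(struct fd_dir_state *s) { s->recv &= ~FD_EV_ACTIVE; }
    static inline void cant_recv(struct fd_dir_state *s) { s->recv &= ~FD_EV_READY; }
    static inline void may_recv(struct fd_dir_state *s)  { s->recv |= FD_EV_READY; }

    /* since READY survives a stop_recv()/want_recv() sequence, an EAGAIN
     * seen before a disable/enable pair is not forgotten */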
The POLLHUP/POLLERR flags are now reported as ready so that the I/O
layers know they can try to access the file descriptor to get this
information.
In order to simplify the conditions to add/remove cache entries, a new
function fd_alloc_or_release_cache_entry() was created to be used from
pollers while scanning for updates.
The following pollers have been updated :
ev_select() : done, built, tested on Linux 3.10
ev_poll() : done, built, tested on Linux 3.10
ev_epoll() : done, built, tested on Linux 3.10 & 3.13
ev_kqueue() : done, built, tested on OpenBSD 5.2
We used to have two very similar functions for sending a PROXY protocol
line header. The reason is that the default one relies on the stream
interface to retrieve the other end's address, while the "local" one
performs a local address lookup and sends that instead (used by health
checks).
Now that the send_proxy_ofs is stored in the connection and not the
stream interface, we can make the local_send_proxy rely on it and
support partial sends. This also simplifies the code by removing the
local_send_proxy function, making health checks use send_proxy_ofs,
resulting in the removal of the CO_FL_LOCAL_SPROXY flag, and the
associated test in the connection handler. The other flag,
CO_FL_SI_SEND_PROXY was renamed without the "SI" part so that it
is clear that it is not dedicated anymore to a usage with a stream
interface.
Currently the control and transport layers of a connection are supposed
to be initialized when their respective pointers are not NULL. This will
not work anymore when we plan to reuse connections, because there is an
asymmetry between the accept() side and the connect() side :
- on accept() side, the fd is set first, then the ctrl layer then the
transport layer ; upon error, they must be undone in the reverse order,
then the FD must be closed. The FD must not be deleted if the control
layer was not yet initialized ;
- on the connect() side, the fd is set last and there is no reliable way
to know if it has been initialized or not. In practice it's initialized
to -1 first, but this is hackish and assumes that only local FDs will
ever be used. Also, there are even fewer ways to keep track of the
transport layer's state.
Also, we want to support a delayed close() when something (eg: logs)
tracks some information requiring the transport and/or control layers,
making it even more difficult to clean them up.
So the proposed solution is to add two flags to the connection :
- CO_FL_CTRL_READY is set when the control layer is initialized (fd_insert)
and cleared after it's released (fd_delete).
- CO_FL_XPRT_READY is set when the transport layer is initialized (xprt->init)
and cleared after it's released (xprt->close).
The functions have been adapted to rely on this and not on the pointers
anymore. conn_xprt_close() was unused and dangerous : it did not close
the control layer (eg: the socket itself) but still marked the transport
layer as closed, preventing any future call to conn_full_close() from
finishing the job.
The problem comes from conn_full_close() in fact. It needs to close the
xprt and ctrl layers independently. After that we still have an issue :
we don't know based on ->ctrl alone whether the fd was registered or not.
For this we use the two new flags CO_FL_XPRT_READY and CO_FL_CTRL_READY. We
now rely on this and not on conn->xprt nor conn->ctrl anymore to decide what
remains to be done on the connection.
In order not to miss some flag assignments, we introduce conn_ctrl_init()
to initialize the control layer, register the fd using fd_insert() and set
the flag, and conn_ctrl_close() which unregisters the fd and removes the
flag, but only if the transport layer was closed.
Similarly, at the transport layer, conn_xprt_init() calls ->init and sets
the flag, while conn_xprt_close() checks the flag, calls ->close and clears
the flag, regardless of xprt_ctx or xprt_st. This also ensures that the ->init
and ->close functions are called only once each, and in the correct order.
Note that conn_xprt_close() does nothing if the transport layer is still
tracked.
conn_full_close() now simply calls conn_xprt_close() then conn_ctrl_close()
in turn, which do nothing if CO_FL_XPRT_TRACKED is set.
In order to handle the error path, we also provide conn_force_close() which
ignores CO_FL_XPRT_TRACKED and closes the transport and the control layers
in turn. All relevant instances of fd_delete() have been replaced with
conn_force_close(). Now we always know what state the connection is in and
we can expect to split its initialization.
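A simplified sketch of the xprt side of this pairing (error handling
from ->init and the ctrl side are omitted; flag values are illustrative):

    struct connection;

    struct xprt_ops {
        int  (*init)(struct connection *conn);
        void (*close)(struct connection *conn);
    };

    struct connection {
        unsigned int flags;
        const struct xprt_ops *xprt;
    };

    #define CO_FL_XPRT_READY    0x0001
    #define CO_FL_XPRT_TRACKED  0x0002

    static inline void conn_xprt_init(struct connection *conn)
    {
        if (!(conn->flags & CO_FL_XPRT_READY)) {
            if (conn->xprt->init)
                conn->xprt->init(conn);
            conn->flags |= CO_FL_XPRT_READY;  /* ->init called once only */
        }
    }

    static inline void conn_xprt_close(struct connection *conn)
    {
        /* does nothing if not initialized or still tracked (eg: by logs) */
        if ((conn->flags & (CO_FL_XPRT_READY | CO_FL_XPRT_TRACKED))
                        == CO_FL_XPRT_READY) {
            if (conn->xprt->close)
                conn->xprt->close(conn);
            conn->flags &= ~CO_FL_XPRT_READY;
        }
    }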
Currently a connection is required on the remote side to emit a proxy
protocol header line. Let's support NULL addresses to emit an UNKNOWN
tag as well.
When we get a hard error from a syscall indicating the socket is dead,
it makes sense to set the CO_FL_SOCK_WR_SH and CO_FL_SOCK_RD_SH flags
to indicate that the socket may not be used anymore. It will ease the
error processing in health checks where the state of socket is very
important. We'll also be able to avoid some setsockopt(nolinger) after
an error.
For now, the rest of the code is not impacted because CO_FL_ERROR is
always tested prior to these flags.
Sending the PROXY protocol header was not tolerant to signals, so it
could cause a connection to report an error if a signal came in at the
exact moment the send was performed.
This is 1.5-specific and does not need any backport.
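The fix boils down to retrying on EINTR, roughly like this (hypothetical
helper, not the actual patch):

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    static ssize_t send_retry_on_signal(int fd, const void *buf, size_t len)
    {
        ssize_t ret;

        do {
            ret = send(fd, buf, len, MSG_DONTWAIT | MSG_NOSIGNAL);
        } while (ret == -1 && errno == EINTR);  /* interrupted: retry */

        return ret;
    }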
Both of these events are rare and are detected from the same source of
flags, so let's detect errors in the handshake loop and remove two tests
from the fast path. This seems to improve overall performance by less
than 0.5% on connection-bound workloads.
David Berard reported that send-proxy was broken on FreeBSD and tracked the
issue to be an error returned by send(). We already had the same issue in
the past in another area which was addressed by the following commit :
0ea0cf6 BUG: raw_sock: also consider ENOTCONN in addition to EAGAIN
In fact, on Linux send() returns EAGAIN when the connection is not yet
established while other OSes return ENOTCONN. Let's consider ENOTCONN for
send-proxy there as the same as EAGAIN.
David confirmed that this change properly fixed the issue.
Another place was affected as well (health checks with send-proxy), and
was fixed.
This fix does not need any backport since it only affects 1.5.
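In essence (illustrative snippet; the real checks live in the PROXY
header senders):

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* returns 1 if the send must simply be retried after polling for write */
    static int send_would_block(int fd, const void *buf, size_t len,
                                ssize_t *ret)
    {
        *ret = send(fd, buf, len, MSG_DONTWAIT | MSG_NOSIGNAL);
        if (*ret >= 0)
            return 0;
        /* Linux reports EAGAIN on a connection in progress, FreeBSD and
         * others report ENOTCONN: treat both as "not ready yet" */
        return errno == EAGAIN || errno == ENOTCONN;
    }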
stream_int_chk_rcv_conn() did not clear connection flags before updating
them. It is unclear whether this could have caused the stalled transfers
that have been reported since dev15.
In order to avoid such further issues, we now use a simple inline function
to do the whole job.
There were a few synchronous calls to polling updates in some functions
called from the connection handler. These are not needed and should be
replaced by more efficient and more debuggable asynchronous calls.
Bryan Berry and Baptiste Assmann both reported some occasional CPU
spinning loops where haproxy was still processing I/O but burning
CPU for apparently uncaught events.
What happens is the following sequence :
- proxy is in TCP mode
- a connection from a client initiates a connection to a server
- the connection to the server does not immediately happen and is
polled for
- in the meantime, the client speaks and the stream interface
calls ->chk_snd() on the peer connection to send the new data
- chk_snd() calls send_loop() to send the data. This last one
makes the connection succeed and empties the buffer, so it
disables polling on the connection and on the FD by creating
an update entry.
- before the update is processed, poll() succeeds and reports
a write event for this fd. The poller does fd_ev_set() on the
FD to switch it to speculative mode
- the IO handler is called with a connection which has no write
flag but an FD which is enabled in speculative mode.
- the connection does nothing useful.
- conn_update_polling() at the end of conn_fd_handler() cannot
disable the FD because there were no changes on this FD.
- the handler is left with speculative polling still enabled on
the FD, and will be called over and over until a poll event is
needed to transfer data.
There is no perfectly elegant solution to this. At least we should
update the flags indicating the current polling status to reflect
what is being done at the FD level. This will make it possible to
detect that the FD needs to be disabled upon exit.
chk_snd() also needs minor changes to correctly switch to speculative
polling before calling send_loop(), and to reflect this in the connection
flags. This is needed so that no event remains stuck there without any
polling. In fact, chk_snd() and chk_rcv() should perform the same number
of preparations and cleanups as conn_fd_handler().
Errors and Hangups are sticky events, which means that once they're
detected, we never clear them, allowing them to be handled later if
needed.
Till now when an error was reported, it used to register a speculative
I/O event for both recv and send. Since the connection had not requested
such events, it was not able to detect a change and did not clear them,
so the events were called in loops until a timeout caused their owner
task to die.
So this patch does two things :
- stop registering spec events when no I/O activity was requested,
so that we don't end up with non-disablable polling state ;
- keep the sticky polling flags (ERR and HUP) when leaving the
connection handler so that an error notification doesn't
magically become a normal recv() or send() report once the
event is converted to a spec event.
It is normally not needed to make the connection handler emit an
error when it detects POLL_ERR because either a registered data
handler will have done it, or the event will be disabled by the
wake() callback.
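Conceptually, when leaving the connection handler, everything except the
sticky events is cleared from the FD's event field (flag names follow
the usual poller conventions; values are illustrative):

    #define FD_POLL_IN      0x01
    #define FD_POLL_OUT     0x04
    #define FD_POLL_ERR     0x08
    #define FD_POLL_HUP     0x10
    #define FD_POLL_STICKY  (FD_POLL_ERR | FD_POLL_HUP)

    /* keep only ERR/HUP so a past error cannot morph into a plain
     * recv()/send() readiness report on the next pass */
    static inline unsigned char fd_keep_sticky(unsigned char ev)
    {
        return ev & FD_POLL_STICKY;
    }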
When the PROXY protocol header is expected and fails, leading to an
abort of the incoming connection, we now emit a log message. If option
dontlognull is set and it was just a port probe, then nothing is logged.
It's annoying that handshake handlers remove themselves from the
connection flags when they fail, because there is then no way to tell
which one failed. So now we only remove them when they succeed.
The conn_local_send_proxy() function has to retrieve the local and remote
addresses, but the getpeername() and getsockname() functions may fail until
the connection is established. So now we catch this error and poll for write
when this happens.
If an uncaught CO_FL_ERROR flag on a connection is detected, we
immediately go to the wakeup function. This ensures that even if
an error is asynchronously delivered, we don't risk re-enabling
polling or doing unexpected things in the handshake handlers.
Commit 0ffde2cc in 1.5-dev13 tried to always disable polling on file
descriptors when errors were encountered. Unfortunately it did not
always succeed in doing so because it relied on detecting polling
changes to disable it. Let's use a dedicated conn_stop_polling()
function that is unconditionally called upon error instead.
This managed to stop a busy loop observed when a health check makes
use of the send-proxy protocol and fails before the connection can
be established.
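A sketch of the idea (the real function also clears the connection's
own polling flags; names and values here are illustrative):

    /* illustrative: 'active' bits for both directions of an FD */
    #define FD_EV_ACTIVE_R  0x01
    #define FD_EV_ACTIVE_W  0x10

    /* on error: unconditionally clear both ACTIVE bits so the poller
     * unregisters the FD, instead of relying on a detected change */
    static inline void conn_stop_polling_sketch(unsigned char *fd_state)
    {
        *fd_state &= ~(FD_EV_ACTIVE_R | FD_EV_ACTIVE_W);
    }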
The CO_FL_WAIT_* flags were not cleared after updating polling flags.
This means that any caller of these functions that did not clear it
would enable polling instead of speculative I/O. This happens during
the stream interface update call which is performed from the session
handler for example.
For now this is not a problem because speculative I/O and polling
are handled the same way. However, with upcoming changes it does cause
some deadlocks because enabling read processing on a file descriptor
where everything was already read will do nothing until something new
happens on this FD.
The correct fix consists in clearing the flags when leaving the update
functions.
This fix does not need any backport as it was introduced with recent
connection changes (dev12) and not triggered until last commit.
This is the first step of a series of changes aiming at making the
polling totally event-driven. This first change consists in only
remembering at the connection level whether an FD was enabled or not,
regardless of whether it was being polled or cached. From now on, an
EAGAIN will always be considered a change, so that the pollers are
able to manage a cache and to flush it based on such events. One
noticeable effect is that conn_fd_handler() is called once more per
session (6 times instead of 5 at minimum), but other update functions
are called less often.
Note that the performance loss caused by this change at the moment is
quite significant, around 2.5%, but the change is needed to have SSL
working correctly in all situations, even when data were read from the
socket and stored in the invisible cache, waiting for some room in the
channel's buffer.
The trash is used everywhere to store the results of temporary strings
built out of s(n)printf, or as a storage for a chunk when chunks are
needed.
Using global.tune.bufsize is not the most convenient thing either.
So let's replace trash with a chunk and directly use it as such. We can
then use trash.size as the natural way to get its size, and get rid of
many intermediary chunks that were previously used.
The patch is huge because it touches many areas, but it makes the code
a lot clearer and even outlines places where trash was used without
that being obvious.
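The resulting pattern, with field names assumed from the 1.5-era chunk
type:

    #include <stdio.h>

    struct chunk {
        char *str;   /* beginning of the string */
        int   size;  /* allocated size, replaces global.tune.bufsize lookups */
        int   len;   /* current string length */
    };

    static struct chunk trash;   /* the shared scratch buffer */

    static void example(const char *host, int port)
    {
        /* temporary strings are built straight into the trash chunk */
        trash.len = snprintf(trash.str, trash.size, "%s:%d", host, port);
    }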
We will need to be able to switch server connections on a session and
to keep idle connections. In order to achieve this, the preliminary
requirement is that the connections can survive the session and be
detached from them.
Right now they're still allocated at exactly the same place, so when
there is a session, there are always 2 connections. We could soon
improve on this by allocating the outgoing connection only during a
connect().
This current patch touches a lot of code and intentionally does not
change any functionality. Performance tests show no regression (even
a very minor improvement). The doc has not yet been updated.
In some circumstances, if the connection to the server is aborted while
some data were planned to be sent and the poller reported an ability to
send, then conn_fd_handler() would still call conn->data->send(), causing
the data layer to dereference the now NULL conn->xprt and crash.
So we have to check for conn->xprt validity before calling the data
layer.
This issue was introduced after 1.5-dev12 so it does not need any backport
and does not affect any released version.
Special thanks go to Cristian Ditoiu who once again provided amazing help
to troubleshoot this bug !
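The guard amounts to something like this (types reduced to the strict
minimum for illustration):

    struct connection;

    struct data_cb {
        void (*recv)(struct connection *conn);
        void (*send)(struct connection *conn);
    };

    struct connection {
        const void *xprt;           /* NULL once the transport is released */
        const struct data_cb *data;
    };

    static void conn_try_send(struct connection *conn, int writable)
    {
        /* the poller may still report writability after an abort released
         * the transport layer: never call the data layer with a NULL xprt */
        if (writable && conn->xprt && conn->data->send)
            conn->data->send(conn);
    }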
Commit 9e272bf9 broke connection setup in TCP mode; the comment was
misleading and obviously wrong, as after a connection is established
we *do* have none of the CONNECT* flags. However we can never have
them all at the same time, so let's use this to trigger a detection.
This callback sends a PROXY protocol line on the outgoing connection,
with the local and remote endpoint information. This is used for local
connections (eg: health checks) where the other end needs to have a
valid address and no connection is relayed.
It was previously in frontend.c but there is no reason for this anymore
considering that all the information involved is only in the connection
itself. Theoretically this should be in the socket layer but we don't
have one yet.
We absolutely want to disable FD polling after an error is detected,
otherwise the data layer has to do it and it's far from being obvious
at these layers.
The way we did it in conn_update_*_polling and conn_*_polling_changes is
a bit tricky. However it has almost no impact on performance and code
size, both for the fast and the slow paths.
We'll now be able to remove some flag updates in the stream interface.
Just like ->init(), ->wake() may now be used to return an error and
abort the connection. Currently this is not used but will be with
embryonic sessions.
We now check the connection flags for changes in order not to call the
data->wake callback when there is no activity. Activity means a change
on any of the CO_FL_*_SH, CO_FL_ERROR, CO_FL_CONNECTED, CO_FL_WAIT_CONN*
flags, as well as a call to data->recv or data->send.
The generic data-layer init callback is now used after the transport
layer is complete and before calling the data layer recv/send callbacks.
This allows the session to switch from the embryonic session data layer
to the complete stream interface data layer, by making conn_session_complete()
the data layer's init callback.
It still looks awkward that the init() callback must be used upon error,
but short of adding yet another callback, it does not seem mergeable
into another function (eg: it should probably not be merged with ->wake
to avoid unneeded calls during the handshake, though semantically that
would make sense).