Compare commits

55 Commits

Author SHA1 Message Date
Olivier Houchard
b6702d5342 BUG/MEDIUM: ssl: fix build with AWS-LC
AWS-LC doesn't provide SSL_in_before(), and doesn't provide an easy way
to know if we already started the handshake or not. So instead, just add
a new field in ssl_sock_ctx, "can_write_early_data", that will be
initialized to 1, and will be set to 0 as soon as we start the
handshake.
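
A minimal sketch of the idea (the field name comes from the commit, the
surrounding code is assumed):

    struct ssl_sock_ctx {
        /* ... existing members ... */
        int can_write_early_data; /* 1 until the handshake is started */
    };

    ctx->can_write_early_data = 1;   /* at ctx initialization */
    ...
    ctx->can_write_early_data = 0;   /* as soon as the handshake starts */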

This should be backported up to 2.8 with
13aa5616c9.
2025-08-08 20:21:14 +02:00
Olivier Houchard
13aa5616c9 BUG/MEDIUM: ssl: Fix 0rtt to the server
In order to send early data, we have to make sure no handshake has been
initiated at all. To do that, we remove the CO_FL_SSL_WAIT_HS flag, so
that we won't attempt to start a handshake. However, by removing those
flags, we allow ssl_sock_to_buf() to call SSL_read(), as it's no longer
aware that no handshake has been done, and SSL_read() will begin the
handshake, thus preventing us from sending early data.
The fix is to just call SSL_in_before() to check if no handshake has
been done yet, in addition to checking CO_FL_SSL_WAIT_HS (both are
needed, as CO_FL_SSL_WAIT_HS may come back in case of renegotiation).
In ssl_sock_from_buf(), fix the check to see if we may attempt to send
early data. Use SSL_in_before() instead of SSL_is_init_finished(), as
SSL_is_init_finished() will return 1 if the handshake has been started,
but not terminated, and if the handshake has been started, we can no
longer send early data.
This fixes errors when attempting to send early data (as well as
actually sending early data).
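
A rough sketch of the send-side check; SSL_in_before() and
SSL_write_early_data() are standard OpenSSL APIs, the surrounding names are
illustrative:

    /* early data may only be sent if no handshake was ever started */
    if (SSL_in_before(ctx->ssl)) {
        size_t written = 0;
        ret = SSL_write_early_data(ctx->ssl, buf, len, &written);
    }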

This should be backported up to 2.8.
2025-08-08 19:13:37 +02:00
Ilia Shipitsin
c10e8401e2 CI: vtest: add Ubuntu arm64 builds
Reference: https://github.com/actions/partner-runner-images

Since GHA now supports arm64 as well, let's add those builds. We will
start with ASAN builds; others will be added later if required.
2025-08-08 15:36:11 +02:00
Ilia Shipitsin
6b2bbcb428 CI: vtest: add os name to OT cache key
Currently the OpenTracing cache key does not include the OS name, which
does not allow distinguishing, for example, between ubuntu-24.04 and
ubuntu-24.04-arm.
2025-08-08 15:36:12 +02:00
David Carlier
7fe8989fbb MINOR: sock: update broken accept4 detection for older hardware.
Some older embedded ARM environments set errno to EPERM instead of ENOSYS
for missing implementations (e.g. Freescale ARM, Linux 2.6.35).
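
A hedged illustration of the fallback logic (variable names are
hypothetical):

    int fd = accept4(lfd, addr, &len, SOCK_NONBLOCK);
    if (fd == -1 && (errno == ENOSYS || errno == EPERM)) {
        /* accept4() missing: some older ARM kernels return EPERM
         * instead of ENOSYS, so treat both as "not implemented"
         * and fall back to accept().
         */
        fd = accept(lfd, addr, &len);
    }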
2025-08-08 06:01:18 +02:00
Valentine Krasnobaeva
21d5f43aa6 BUG/MINOR: stick-table: cap sticky counter idx with tune.nb_stk_ctr instead of MAX_SESS_STKCTR
Cap the sticky counter index with tune.nb_stk_ctr instead of MAX_SESS_STKCTR
for sc-add-gpc. The same logic is already implemented for the sc-inc-gpc and
sc-set-gpt keywords, so it seems it was simply missed for sc-add-gpc.
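
A sketch of the capped bound check, mirroring what sc-inc-gpc does (the
exact parsing context and error path are assumed):

    /* reject counter indexes beyond the configured limit */
    if (idx >= global.tune.nb_stk_ctr) {
        memprintf(err, "invalid stick counter index %d (max %d)",
                  idx, global.tune.nb_stk_ctr - 1);
        return -1;
    }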

This fixes issue #3061 reported on GitHub. Thanks to @ma311 for
sharing their analysis of the issue.
This should be backported to all versions down to 2.8 included.
2025-08-08 05:26:30 +02:00
Aurelien DARRAGON
7656a41784 BUILD: restore USE_SHM_OPEN build option
Some optional features may still require the use of shm_open() in the
future. In this patch we restore the USE_SHM_OPEN build option that
was removed in 143be1b59 ("MEDIUM: errors: get rid of shm_open()") and
should guard the use of shm_open() in the code.
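
In other words, call sites are expected to look like this hypothetical
guard:

    #ifdef USE_SHM_OPEN
        fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    #else
        fd = -1; /* feature unavailable without shm_open() */
    #endif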
2025-08-07 22:27:22 +02:00
Aurelien DARRAGON
bcb124f92a MINOR: init: add REGISTER_POST_DEINIT_MASTER() hook
Similar to the REGISTER_POST_DEINIT() hook (which is invoked during deinit),
but for the master process only, when haproxy was started in master-worker
mode. The goal is to be able to register cleanup functions that will
only run for the master process right before exiting.
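
A usage sketch (the callback name is hypothetical):

    static void my_master_cleanup(void)
    {
        /* runs only in the master process, right before it exits */
    }
    REGISTER_POST_DEINIT_MASTER(my_master_cleanup);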
2025-08-07 22:27:14 +02:00
Aurelien DARRAGON
c8282f6138 MINOR: clock: add clock_get_now_offset() helper
Same as clock_set_now_offset() but to retrieve the offset from an external
location.
2025-08-07 22:27:09 +02:00
Aurelien DARRAGON
20f9d8fa4e MINOR: clock: add clock_set_now_offset() helper
Since now_offset is a static variable and is not exposed outside of
clock.c, let's add a helper so that it becomes possible to set its
value from another source file.
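
Together with clock_get_now_offset(), this allows a sketch like:

    llong ofs = clock_get_now_offset();  /* e.g. publish to shared memory */
    ...
    clock_set_now_offset(ofs);           /* e.g. restore in another process */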
2025-08-07 22:27:05 +02:00
Aurelien DARRAGON
4c3a36c609 MINOR: guid: add guid_count() function
Returns the total number of registered GUIDs in the guid_tree.
2025-08-07 22:26:58 +02:00
Aurelien DARRAGON
7c52964591 MINOR: guid: add guid_get() helper
guid_get() is a convenient function to get the actual key string
associated with a given guid_node struct.
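
Usage sketch:

    const char *key = guid_get(guid);  /* NULL if no key was set */
    if (key)
        printf("guid key: %s (total: %d)\n", key, guid_count());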
2025-08-07 22:26:52 +02:00
Aurelien DARRAGON
3759172015 BUG/MINOR: proxy: avoid NULL-deref in post_section_px_cleanup()
post_section_px_cleanup(), which was implemented in abcc73830
("MEDIUM: proxy: register a post-section cleanup function"), is called
for the current section even if parsing was aborted due to a fatal
error. In this case, the curproxy pointer may be NULL, yet
post_section_px_cleanup() assumes it is always valid, which could lead
to a NULL-deref.

For instance, the config below will cause SEGFAULT:

  listen toto titi

To fix the issue, let's simply consider that the curproxy pointer may
be NULL in post_section_px_cleanup(), in which case we skip the cleanup
since there is nothing we can do.
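
A minimal sketch of the guard (the function's exact signature is assumed):

    static int post_section_px_cleanup(void)
    {
        if (!curproxy)
            return 0; /* parsing aborted early: nothing to clean up */
        /* ... regular cleanup ... */
    }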

No backport needed
2025-08-07 22:26:47 +02:00
Aurelien DARRAGON
833158f9e0 BUG/MINOR: cfgparse-listen: update err_code for fatal error on proxy directive
When improper arguments are provided on proxy directive (listen,
frontend or backend), such alert may be emitted:

  "please use the 'bind' keyword for listening addresses"

This was introduced in 6e62fb6405 ("MEDIUM: cfgparse: check section
maximum number of arguments"). However, despite the error being reported
as an alert, the err_code isn't updated accordingly, which could make the
upper parser think there was no error while that isn't the case.

In practice, since the proxy directive is ignored, the following
proxy-related directives should raise errors, so this didn't cause much
harm; still, better to fix it.
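
The fix boils down to something like this (the exact flag combination is
assumed):

    ha_alert("please use the 'bind' keyword for listening addresses\n");
    err_code |= ERR_ALERT | ERR_FATAL;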

It could be backported to all stable versions.
2025-08-07 22:26:42 +02:00
Aurelien DARRAGON
525750e135 BUG/MINOR: cfgparse: immediately stop after hard error in srv_init()
Since 368d01361 ("MEDIUM: server: add and use srv_init() function"), in
case of srv_init() error, we simply increment the cfgerr variable and keep
going.

That isn't enough: some treatments occurring later in check_config_validity()
assume that srv_init() succeeded for servers, and may cause undefined
behavior. To fix the issue, let's consider that if (srv_init() & ERR_CODE)
returns true, then we must stop checking the config immediately.
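
A sketch of the resulting control flow (labels are illustrative):

    err = srv_init(srv);
    if (err & ERR_CODE) {
        cfgerr++;
        goto out; /* hard error: stop checking the config immediately */
    }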

No backport needed unless 368d01361 is.
2025-08-07 22:26:37 +02:00
Amaury Denoyelle
731b52ded9 MINOR: quic: prefer qc_is_back() usage over qc->target
Previously the quic_conn <target> member was used to determine whether the
quic_conn was used on the frontend side (as a server) or on the backend
side (as a client). A new helper function can now be used to directly
check the QUIC_FL_CONN_IS_BACK flag.

This reduces the dependency between quic_conn and their relative
listener/server instances.
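
The helper presumably looks like this (reconstructed from the commit
message):

    static inline int qc_is_back(const struct quic_conn *qc)
    {
        return !!(qc->flags & QUIC_FL_CONN_IS_BACK);
    }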
2025-08-07 16:59:59 +02:00
Amaury Denoyelle
cae828cbf5 MINOR: quic: define QUIC_FL_CONN_IS_BACK flag
Define a new quic_conn flag, set if the connection is used on the
backend side. This is similar to other haproxy components such as struct
connection and the muxes.

This flag is positioned in qc_new_conn(). Also update the quic traces to
mark the proxy side with an 'F' or 'B' suffix.
2025-08-07 16:59:59 +02:00
Amaury Denoyelle
e064e5d461 MINOR: quic: duplicate GSO unsupp status from listener to conn
QUIC emission can use GSO to emit multiple datagrams with a single
syscall invocation. However, this feature relies on several kernel
parameters which are checked on haproxy process startup.

Even if these checks report no issue, GSO may still be unusable due to
the underlying network adapter. Thus, if an EIO error occurs on
sendmsg() with GSO, the listener is flagged to mark GSO as unsupported.
This allows every other QUIC connection to share the status and avoid
using GSO on this listener.

Previously, the listener flag was checked for every QUIC emission. This
was done using an atomic operation to prevent races. Improve this by
duplicating the GSO unsupported status at the connection level. This is
done in qc_new_conn() and also on thread rebinding if a new listener
instance is used.

The main benefit from this patch is to reduce the dependency between
quic_conn and listener instances.
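
A hedged sketch of the duplication; the flag names below are hypothetical:

    /* in qc_new_conn() and on thread rebinding: copy the listener-level
     * "GSO unsupported" status once, so emission paths only check the
     * connection flag and no longer need an atomic read of the listener.
     */
    if (HA_ATOMIC_LOAD(&l->flags) & LI_F_UDP_GSO_NOTSUPP)
        qc->flags |= QUIC_FL_CONN_UDP_GSO_NOTSUPP;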
2025-08-07 16:36:26 +02:00
Willy Tarreau
d76ee72d03 [RELEASE] Released version 3.3-dev6
Released version 3.3-dev6 with the following main changes :
    - MINOR: acme: implement traces
    - BUG/MINOR: hlua: take default-path into account with lua-load-per-thread
    - CLEANUP: counters: rename counters_be_shared_init to counters_be_shared_prepare
    - MINOR: clock: make global_now_ms a pointer
    - MINOR: clock: make global_now_ns a pointer as well
    - MINOR: mux-quic: release conn after shutdown on BE reuse failure
    - MINOR: session: strengthen connection attach to session
    - MINOR: session: remove redundant target argument from session_add_conn()
    - MINOR: session: strengthen idle conn limit check
    - MINOR: session: do not release conn in session_check_idle_conn()
    - MINOR: session: streamline session_check_idle_conn() usage
    - MINOR: muxes: refactor private connection detach
    - BUG/MEDIUM: mux-quic: ensure Early-data header is set
    - BUILD: acme: avoid declaring TRACE_SOURCE in acme-t.h
    - MINOR: acme: emit a log for DNS-01 challenge response
    - MINOR: acme: emit the DNS-01 challenge details on the dpapi sink
    - MEDIUM: acme: allow to wait and restart the task for DNS-01
    - MINOR: acme: update the log for DNS-01
    - BUG/MINOR: acme: possible integer underflow in acme_txt_record()
    - BUG/MEDIUM: hlua_fcn: ensure systematic watcher cleanup for server list iterator
    - MINOR: sample: Add le2dec (little endian to decimal) sample fetch
    - BUILD: fcgi: fix the struct name of fcgi_flt_ctx
    - BUILD: compat: provide relaxed versions of the MIN/MAX macros
    - BUILD: quic: use _MAX() to avoid build issues in pools declarations
    - BUILD: compat: always set _POSIX_VERSION to ease comparisons
    - MINOR: implement ha_aligned_alloc() to return aligned memory areas
    - MINOR: pools: support creating a pool from a pool registration
    - MINOR: pools: add a new flag to declare static registrations
    - MINOR: pools: force the name at creation time to be a const.
    - MEDIUM: pools: change the static pool creation to pass a registration
    - DEBUG: pools: store the pool registration file name and line number
    - DEBUG: pools: also retrieve file and line for direct callers of create_pool()
    - MEDIUM: pools: add an alignment property
    - MINOR: pools: add macros to register aligned pools
    - MINOR: pools: add macros to declare pools based on a struct type
    - MEDIUM: pools: respect pool alignment in allocations
2025-08-06 21:50:00 +02:00
Willy Tarreau
ef915e672a MEDIUM: pools: respect pool alignment in allocations
Now pool_alloc_area() takes the alignment as an argument and makes use
of ha_aligned_alloc() instead of malloc(). pool_alloc_area_uaf()
simply applies the alignment before returning the mapped area. The
pool_free() function calls ha_aligned_free() so as to permit using
a specific API for aligned alloc/free, as mingw requires.

Note that it's possible to get warnings about mismatched sizes
during pool_free() since we know both the pool and the type. In
pool_free(), adding just this is sufficient to detect potential
offenders:

	WARN_ON(__alignof__(*__ptr) > pool->align);
2025-08-06 19:20:36 +02:00
Willy Tarreau
f0d0922aa1 MINOR: pools: add macros to declare pools based on a struct type
DECLARE_TYPED_POOL() and friends take a name, a type and an extra
size (to be added to the size of the element), and will use this
to create the pool. This has the benefit of letting the compiler
automatically adapt sizeof() and alignof() based on the type
declaration.
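
A usage sketch (pool and struct names are illustrative):

    /* sizeof()/alignof() are derived from the type automatically */
    DECLARE_TYPED_POOL(pool_head_foo, struct foo, 0);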
2025-08-06 19:20:36 +02:00
Willy Tarreau
6ea0e3e2f8 MINOR: pools: add macros to register aligned pools
This adds an alignment argument to create_pool_from_loc() and
completes the existing low-level macros with new ones that expose
the alignment and permit specifying it. For now they're not used.
2025-08-06 19:20:36 +02:00
Willy Tarreau
eb075d15f6 MEDIUM: pools: add an alignment property
This will be used to declare aligned pools. For now it's not used,
but it's properly set from the various registrations that compose
a pool, and rounded up to the next power of 2, with a minimum of
sizeof(void*).

The alignment is returned in the "show pools" part that indicates
the entry size. E.g. "(56 bytes/8)" means 56 bytes, aligned by 8.
2025-08-06 19:20:36 +02:00
Willy Tarreau
ac23b873f5 DEBUG: pools: also retrieve file and line for direct callers of create_pool()
Just like previous patch, we want to retrieve the location of the caller.
For this we turn create_pool() into a macro that collects __FILE__ and
__LINE__ and passes them to the now renamed function create_pool_with_loc().

Now the remaining ~30 pools also have their location stored.
2025-08-06 19:20:34 +02:00
Willy Tarreau
efa856a8b0 DEBUG: pools: store the pool registration file name and line number
When pools are declared using DECLARE_POOL(), REGISTER_POOL etc, we
know where they are and it's trivial to retrieve the file name and line
number, so let's store them in the pool_registration, and display them
when known in "show pools detailed".
2025-08-06 19:20:32 +02:00
Willy Tarreau
ff62aacb20 MEDIUM: pools: change the static pool creation to pass a registration
Now we're creating statically allocated registrations instead of
passing all the parameters and allocating them on the fly. Not only
is this simpler to extend (we're limited in the number of INITCALL args),
but it also leaves all of these in the data segment where they are
easier to find when debugging.
2025-08-06 19:20:30 +02:00
Willy Tarreau
f51d58bd2e MINOR: pools: force the name at creation time to be a const.
This is already the case as all names are constant, so that's fine. If
it ever had to change, it wouldn't be very hard to just replace the name
in-situ via a strdup() and set a flag to mention that it's dynamically
allocated. We just don't need this right now.

One immediately visible effect is in "show pools detailed" where the
names are no longer truncated.
2025-08-06 19:20:28 +02:00
Willy Tarreau
ee5bc28865 MINOR: pools: add a new flag to declare static registrations
We must not free these ones when destroying a pool, so let's dedicate
a flag to them to mention that they are static. For now we don't have any
such registrations.
2025-08-06 19:20:26 +02:00
Willy Tarreau
18505f9718 MINOR: pools: support creating a pool from a pool registration
We've recently introduced pool registrations to be able to enumerate
all pool creation requests with their respective parameters, but till
now they were only used for debugging ("show pools detailed"). Let's
go a step further and split create_pool() in two:
  - the first half only allocates and sets the pool registration
  - the second half creates the pool from the registration

This is what this patch does. This now opens the ability to pre-create
registrations and create pools directly from there.
2025-08-06 19:20:22 +02:00
Willy Tarreau
325d1bdcca MINOR: implement ha_aligned_alloc() to return aligned memory areas
We have two versions, _safe() which verifies and adjusts alignment,
and the regular one which trusts the caller. There's also a dedicated
ha_aligned_free() due to mingw.

The currently detected OSes are mingw, unixes older than POSIX 200112
which require memalign(), and those post 200112 which will use
posix_memalign(). Solaris 10 reports 200112 (probably through
_GNU_SOURCE since it does not do it by default), and Solaris 11 still
supports memalign() so for all Solaris we use memalign(). The memstats
wrappers are also implemented, and have the exported names. This was
the opportunity for providing a separate free call that lets the caller
specify the size (e.g. for use with pools).
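
A usage sketch (struct foo is hypothetical):

    /* _safe() verifies and adjusts the alignment, the regular call trusts it */
    struct foo *p = ha_aligned_alloc_safe(64, sizeof(struct foo));
    ...
    ha_aligned_free(p);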

For now this code is not used.
2025-08-06 19:19:27 +02:00
Willy Tarreau
e921fe894f BUILD: compat: always set _POSIX_VERSION to ease comparisons
Sometimes we need to compare it to known versions, let's make sure it's
always defined. We set it to zero if undefined so that it cannot match
any comparison.
2025-08-06 19:19:27 +02:00
Willy Tarreau
2ce0c63206 BUILD: quic: use _MAX() to avoid build issues in pools declarations
With the upcoming pool declaration, we're filling a struct's fields,
while older versions were relying on initcalls which could be turned
into function declarations. Thus the compound expressions that were
usable there are not necessarily usable anymore, as witnessed here with
gcc-5.5 on Solaris 10:

      In file included from include/haproxy/quic_tx.h:26:0,
                       from src/quic_tx.c:15:
      include/haproxy/compat.h:106:19: error: braced-group within expression allowed only inside a function
       #define MAX(a, b) ({    \
                         ^
      include/haproxy/pool.h:41:11: note: in definition of macro '__REGISTER_POOL'
         .size = _size,           \
                 ^
      ...
      include/haproxy/quic_tx-t.h:6:29: note: in expansion of macro 'MAX'
       #define QUIC_MAX_CC_BUFSIZE MAX(QUIC_INITIAL_IPV6_MTU, QUIC_INITIAL_IPV4_MTU)

Let's make the macro use _MAX() instead of MAX() since it relies on pure
constants.
2025-08-06 19:19:11 +02:00
Willy Tarreau
cf8871ae40 BUILD: compat: provide relaxed versions of the MIN/MAX macros
In 3.0 the MIN/MAX macros were converted to compound expressions with
commit 0999e3d959 ("CLEANUP: compat: make the MIN/MAX macros more
reliable"). However with older compilers these are not supported out
of code blocks (e.g. to initialize variables or struct members). This
is the case on Solaris 10 with gcc-5.5 when QUIC doesn't compile
anymore with the future pool registration:

  In file included from include/haproxy/quic_tx.h:26:0,
                   from src/quic_tx.c:15:
  include/haproxy/compat.h:106:19: error: braced-group within expression allowed only inside a function
   #define MAX(a, b) ({    \
                     ^
  include/haproxy/pool.h:41:11: note: in definition of macro '__REGISTER_POOL'
     .size = _size,           \
             ^
  ...
  include/haproxy/quic_tx-t.h:6:29: note: in expansion of macro 'MAX'
   #define QUIC_MAX_CC_BUFSIZE MAX(QUIC_INITIAL_IPV6_MTU, QUIC_INITIAL_IPV4_MTU)

Let's provide the old relaxed versions as _MIN/_MAX for use with constants
in cases like this one, where it's certain that there is no risk. A previous
attempt using __builtin_constant_p() to switch between the variants did not
work, and it's really not worth the hassle of going this far.
2025-08-06 19:18:42 +02:00
Willy Tarreau
b1f854bb2e BUILD: fcgi: fix the struct name of fcgi_flt_ctx
The struct was mistakenly spelled flt_fcgi_ctx in fcgi_flt_stop()
when it was introduced in 2.1 with commit 78fbb9f991 ("MEDIUM:
fcgi-app: Add FCGI application and filter"), causing build issues
when trying to get the alignment of the object in pool_free() for
debugging purposes. No backport is needed as it's just used to convey
a pointer.
2025-08-06 16:27:05 +02:00
Alexander Stephan
ffbb3cc306 MINOR: sample: Add le2dec (little endian to decimal) sample fetch
This commit introduces a sample fetch, `le2dec`, to convert
little-endian binary input samples into their decimal representations.
The function converts the input into a string containing unsigned
integer numbers, with each number derived from a specified number of
input bytes. The numbers are separated using a user-defined separator.

This new sample is implemented by adding a parametrized sample_conv_2dec
function, unifying the logic of the be2dec and le2dec converters.

Co-authored-by: Christian Norbert Menges <christian.norbert.menges@sap.com>
[wt: tracked as GH issue #2915]
Signed-off-by: Willy Tarreau <w@1wt.eu>
2025-08-05 13:47:53 +02:00
Aurelien DARRAGON
aeff2a3b2a BUG/MEDIUM: hlua_fcn: ensure systematic watcher cleanup for server list iterator
In 358166a ("BUG/MINOR: hlua_fcn: restore server pairs iterator pointer
consistency"), I wrongly assumed that because the iterator was a temporary
object, no specific cleanup was needed for the watcher.

In fact watcher_detach() is not only relevant for the watcher itself, but
especially for its parent list to remove the current watcher from it.

As iterators are temporary objects, failing to remove their watchers from
the server watcher list causes the server watcher list to be corrupted.

On a normal iteration sequence, the last watcher_next() receives NULL
as target so it successfully detaches the last watcher from the list.
However the corner case here is with interrupted iterators: users are
free to break away from the iteration loop, for instance when a specific
condition is met in the lua script. When this happens,
hlua_listable_servers_pairs_iterator() doesn't get a chance to detach the
last iterator.

Also, Lua doesn't tell us that the loop was interrupted,
so to fix the issue we rely on the garbage collector to force a last
detach right before the object is freed. To achieve that, watcher_detach()
was slightly modified so that it becomes possible to call it without
knowing whether the watcher is already detached: when called on a detached
watcher, the function now does nothing. This saves the caller from having
to track the watcher state and makes the API a little more convenient to
use. We now systematically call watcher_detach() for server iterators
right before they are garbage collected.

This was first reported in GH #3055. It can be observed when the server
list is browsed from Lua more than once for a given proxy and a previous
iteration was interrupted before the end. As the watcher list is
corrupted, the common symptom is watcher_attach() or watcher_next() never
returning, due to the internal mt_list call looping forever.

Thanks to GH user @sabretus for their precious help.

It should be backported everywhere 358166a was.
2025-08-05 13:06:46 +02:00
William Lallemand
66f28dbd3f BUG/MINOR: acme: possible integer underflow in acme_txt_record()
a2base64url() can return a negative value if olen is too short to
accept ilen. This is not supposed to happen since the sha256 should
always fit in the buffer, but it is confusing since a2base64()
returns a signed integer which is put in output->data, which is unsigned.

Fix the issue by setting ret to 0 instead of -1 upon error, and return
an unsigned integer instead of a signed one.
This patch also makes the caller check the return value in order
to emit an error, instead of setting trash.data, which is already done
by the function.
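
The caller-side check presumably amounts to something like this (names
reconstructed from the commit message):

    ret = a2base64url(digest, digest_len, trash.area, trash.size);
    if (ret == 0)
        goto error;   /* encoding failed: emit an error */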
2025-08-05 12:12:50 +02:00
William Lallemand
8afd3e588d MINOR: acme: update the log for DNS-01
Update the log for DNS-01 by mentioning the challenge_ready command
on the CLI.
2025-08-01 18:08:43 +02:00
William Lallemand
9ee14ed2d9 MEDIUM: acme: allow to wait and restart the task for DNS-01
DNS-01 needs an external process which would register a TXT record at a
DNS provider, using a REST API or something else.

To achieve this, the process should read the dpapi sink and wait for
events. With the DNS-01 challenge, HAProxy will put the task to sleep
before asking the ACME server to validate the challenge. The task then
needs to be woken up, using the command implemented by this patch.

This patch implements the "acme challenge_ready" command, which should be
used by the agent once the challenge has been configured, in order to wake
the task up.

Example:
    echo "@1 acme challenge_ready foobar.pem.rsa domain kikyo" | socat /tmp/master.sock -
2025-08-01 18:07:12 +02:00
William Lallemand
3dde7626ba MINOR: acme: emit the DNS-01 challenge details on the dpapi sink
This commit adds a new message to the dpapi sink, which is emitted during
the new authorization request.

One message is emitted per challenge to resolve. The certificate name as
well as the thumbprint of the account key are on the first line of the
message. A dump of the JSON response for one challenge follows, and the
message ends with a \0.

The agent consuming these messages MUST NOT access the URLs, and SHOULD
only use the thumbprint, dns and token to configure a challenge.

Example:

    $ ( echo "@@1 show events dpapi -w -0"; cat - ) | socat /tmp/master.sock -  | cat -e
    <0>2025-08-01T16:23:14.797733+02:00 acme deploy foobar.pem.rsa thumbprint Gv7pmGKiv_cjo3aZDWkUPz5ZMxctmd-U30P2GeqpnCo$
    {$
       "status": "pending",$
       "identifier": {$
          "type": "dns",$
          "value": "foobar.com"$
       },$
       "challenges": [$
          {$
             "type": "dns-01",$
             "url": "https://0.0.0.0:14000/chalZ/1o7sxLnwcVCcmeriH1fbHJhRgn4UBIZ8YCbcrzfREZc",$
             "token": "tvAcRXpNjbgX964ScRVpVL2NXPid1_V8cFwDbRWH_4Q",$
             "status": "pending"$
          },$
          {$
             "type": "dns-account-01",$
             "url": "https://0.0.0.0:14000/chalZ/z2_WzibwTPvE2zzIiP3BF0zNy3fgpU_8Nj-V085equ0",$
             "token": "UedIMFsI-6Y9Nq3oXgHcG72vtBFWBTqZx-1snG_0iLs",$
             "status": "pending"$
          },$
          {$
             "type": "tls-alpn-01",$
             "url": "https://0.0.0.0:14000/chalZ/AHnQcRvZlFw6e7F6rrc7GofUMq7S8aIoeDileByYfEI",$
             "token": "QhT4ejBEu6ZLl6pI1HsOQ3jD9piu__N0Hr8PaWaIPyo",$
             "status": "pending"$
          },$
          {$
             "type": "http-01",$
             "url": "https://0.0.0.0:14000/chalZ/Q_qTTPDW43-hsPW3C60NHpGDm_-5ZtZaRfOYDsK3kY8",$
             "token": "g5Y1WID1v-hZeuqhIa6pvdDyae7Q7mVdxG9CfRV2-t4",$
             "status": "pending"$
          }$
       ],$
       "expires": "2025-08-01T15:23:14Z"$
    }$
    ^@
2025-08-01 16:48:22 +02:00
William Lallemand
365a69648c MINOR: acme: emit a log for DNS-01 challenge response
This commit emits a log which outputs the TXT entry to create in case of
DNS-01. This is useful when you want to update your TXT entry
manually.

Example:

    acme: foobar.pem.rsa: DNS-01 requires to set the "acme-challenge.example.com" TXT record to "7L050ytWm6ityJqolX-PzBPR0LndHV8bkZx3Zsb-FMg"
2025-08-01 16:12:27 +02:00
William Lallemand
09275fd549 BUILD: acme: avoid declaring TRACE_SOURCE in acme-t.h
Files ending with '-t.h' are supposed to be used for structure
definitions and could be included in the same file to check API
definitions.

This patch removes TRACE_SOURCE from acme-t.h to avoid conflicts with
other TRACE_SOURCE definitions.
2025-07-31 16:03:28 +02:00
Amaury Denoyelle
a6e67e7b41 BUG/MEDIUM: mux-quic: ensure Early-data header is set
QUIC MUX may be initialized prior to handshake completion, when 0-RTT is
used. In this case, the connection is flagged with CO_FL_EARLY_SSL_HS,
which is notably used by the wait-for-hs http rule.

Early data may be subject to replay attacks. For this reason, haproxy
adds the header 'Early-data: 1' to all requests handled as TLS early
data. Thus the server can reject it if it is deemed unsafe. This header
injection is implemented by http-ana. However, it was not functional
with QUIC due to the missing CO_FL_EARLY_DATA connection flag.

Fix this by ensuring that QUIC MUX sets CO_FL_EARLY_DATA when needed.
This is performed during qcc_recv() for STREAM frame reception. It is
only set if QC_CF_WAIT_HS is set, meaning that the handshake is not yet
completed. After this, the request is considered safe and Early-data
header is not necessary anymore.
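
A rough sketch of the fix in qcc_recv() (member names are assumed):

    /* STREAM frame received before handshake completion: flag the
     * connection so http-ana injects "Early-data: 1".
     */
    if (qcc->flags & QC_CF_WAIT_HS)
        qcc->conn->flags |= CO_FL_EARLY_DATA;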

This should fix github issue #3054.

This must be backported up to 3.2 at least. If possible, it should be
backported to all stable releases as well. On these versions, the
current patch relies on the following refactoring commit :
  commit 0a53a008d0
  MINOR: mux-quic: refactor wait-for-handshake support
2025-07-31 15:25:59 +02:00
Amaury Denoyelle
697f7d1142 MINOR: muxes: refactor private connection detach
Following the latest adjustments on session_add_conn() /
session_check_idle_conn(), the muxes' detach callbacks were rewritten
for private connection handling.

Nothing really fancy here: some more explicit comments and the removal
of duplicate checks on idle conn status for muxes with true
multiplexing support.
2025-07-30 16:14:00 +02:00
Amaury Denoyelle
2ecc5290f2 MINOR: session: streamline session_check_idle_conn() usage
session_check_idle_conn() is called by muxes when a connection becomes
idle. It ensures that the session idle limit is not yet reached;
otherwise, the connection is removed from the session and can be freed.

Prior to this patch, session_check_idle_conn() was compatible with a
NULL session argument; in this case it would return true, considering
that no limit was reached and the connection not removed.

However, this renders the function error-prone and subject to future
bugs. This patch streamlines it by ensuring it is never called with a
NULL argument. Thus it can now only return true if the connection is
kept in the session, or false if it was removed, as first intended.
2025-07-30 16:13:30 +02:00
Amaury Denoyelle
dd9645d6b9 MINOR: session: do not release conn in session_check_idle_conn()
session_check_idle_conn() is called to flag a connection already
inserted in a session list as idle. If the session limit on the number
of idle connections (max-session-srv-conns) is exceeded, the connection
is removed from the session list.

In addition to the connection removal, session_check_idle_conn()
directly calls the MUX destroy callback on the connection. This means the
connection is freed by the function itself and must not be used by the
caller anymore.

This is not practical when an alternative connection closure method
should be used, such as a graceful shutdown with QUIC. As such, remove
the MUX destroy invocation: it is now the responsibility of the caller to
either close or immediately release the connection.
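
The expected caller pattern becomes something like this (illustrative):

    if (!session_check_idle_conn(sess, conn)) {
        /* conn was removed from the session list: the caller decides
         * between immediate destroy or a graceful shutdown (QUIC).
         */
        conn->mux->destroy(conn->ctx);
    }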
2025-07-30 11:43:41 +02:00
Amaury Denoyelle
57e9425dbc MINOR: session: strengthen idle conn limit check
Add a BUG_ON() in session_check_idle_conn() to ensure the connection is
not already flagged with CO_FL_SESS_IDLE.

This checks that the function is only called once per connection
transition from active to idle, which is necessary to ensure that the
session idle counter is only incremented once per connection.
2025-07-30 11:40:16 +02:00
Amaury Denoyelle
ec1ab8d171 MINOR: session: remove redundant target argument from session_add_conn()
session_add_conn() uses three arguments: the connection and session
instances, plus a void pointer labelled as target. Typically it
represents the server, but it can also be a backend instance (for example
on dispatch).

In fact, this argument is redundant as <target> is already a member of
the connection. This commit simplifies session_add_conn() by removing
it. A BUG_ON() on target is extended to ensure it is never NULL.
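
Call sites thus change along these lines (sketch):

    /* before: session_add_conn(sess, conn, conn->target); */
    session_add_conn(sess, conn); /* <target> is read from conn itself */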
2025-07-30 11:39:57 +02:00
Amaury Denoyelle
668c2cfb09 MINOR: session: strengthen connection attach to session
This commit is the first one of a series to refactor the insertion of
backend private connections into the session list.

session_add_conn() is used to attach a connection to a session list.
Previously, this function would report an error if the specified
connection was already attached to another session. However, this case
currently never happens, and thus can be considered buggy.

Remove this check and replace it with a BUG_ON(). This ensures that
session insertion remains consistent. The same check is also
transformed in session_check_idle_conn().
2025-07-30 11:39:26 +02:00
Amaury Denoyelle
cfe9bec1ea MINOR: mux-quic: release conn after shutdown on BE reuse failure
On stream detach on the backend side, the connection is inserted into the
proper server/session list to be able to reuse it later. If insertion
fails and the connection is idle, the connection can be removed
immediately.

If this occurs on a QUIC connection, QUIC MUX implements a graceful
shutdown to ensure the server is notified of the closure. However, the
connection instance is not freed. Change this to ensure that both
shutdown and release are performed.
2025-07-30 10:04:19 +02:00
Aurelien DARRAGON
14966c856b MINOR: clock: make global_now_ns a pointer as well
Similar to the previous commit, but for global_now_ns.
2025-07-29 18:04:15 +02:00
Aurelien DARRAGON
4a20b3835a MINOR: clock: make global_now_ms a pointer
This is preparation work for shared counters between co-processes. As
co-processes will need to share a common date, global_now_ms will be used
for that, as it will point to the shm when sharing is enabled.

Thus in this patch we turn global_now_ms into a pointer (and adjust the
places where it is written to and read from; fortunately atomic operations
through a pointer are already used, so the change is trivial).

For now global_now_ms points to the process-local _global_now_ms, which is
a fallback for when sharing through the shm is not enabled.
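
A sketch of the resulting pattern (variable types are assumed):

    /* process-local fallback; may later point into the shm */
    static uint _global_now_ms;
    volatile uint *global_now_ms = &_global_now_ms;

    /* readers and writers go through the pointer atomically */
    uint now_ms = HA_ATOMIC_LOAD(global_now_ms);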
2025-07-29 18:04:14 +02:00
Aurelien DARRAGON
713ebd2750 CLEANUP: counters: rename counters_be_shared_init to counters_be_shared_prepare
75e480d10 ("MEDIUM: stats: avoid 1 indirection by storing the shared
stats directly in counters struct") took care of renaming
counters_fe_shared_init() but we forgot counters_be_shared_init().

Let's fix that for consistency.
2025-07-29 18:00:13 +02:00
Aurelien DARRAGON
2ffe515d97 BUG/MINOR: hlua: take default-path into account with lua-load-per-thread
As discussed in GH #3051, default-path is not taken into account when
loading files using lua-load-per-thread. In fact, the initial
hlua_load_state() (performed on first thread which parses the config)
is successful, but other threads run hlua_load_state() later based
on config hints which were saved by the first thread, and those config
hints only contain the file path provided on the lua-load-per-thread
config line, not the absolute one. Indeed, `default-path` directive
changes the current working directory only for the thread parsing the
configuration.

To fix the issue, when storing config hints under hlua_load_per_thread()
we now make sure to save the absolute file path for `lua-load-per-thread'
argument.

Thanks to GH user @zhanhb for reporting the issue.

It may be backported to all stable versions.
2025-07-29 17:58:28 +02:00
William Lallemand
83a335f925 MINOR: acme: implement traces
Implement traces for the ACME protocol.

 -dt acme:data:complete will dump every input and output buffer,
 including decoded buffers before they are converted to JWS.
 It will also dump certificates in the traces.

 -dt acme:user:complete will only dump the state of the task handler.
2025-07-29 17:25:10 +02:00
61 changed files with 1165 additions and 248 deletions

.github/matrix.py

@@ -125,9 +125,11 @@ def main(ref_name):
     # Ubuntu

     if "haproxy-" in ref_name:
         os = "ubuntu-24.04"  # stable branch
+        os_arm = "ubuntu-24.04-arm"  # stable branch
     else:
         os = "ubuntu-24.04"  # development branch
+        os_arm = "ubuntu-24.04-arm"  # development branch

     TARGET = "linux-glibc"
     for CC in ["gcc", "clang"]:
@@ -172,36 +174,37 @@ def main(ref_name):
         # ASAN

-        matrix.append(
-            {
-                "name": "{}, {}, ASAN, all features".format(os, CC),
-                "os": os,
-                "TARGET": TARGET,
-                "CC": CC,
-                "FLAGS": [
-                    "USE_OBSOLETE_LINKER=1",
-                    'ARCH_FLAGS="-g -fsanitize=address"',
-                    'OPT_CFLAGS="-O1"',
-                    "USE_ZLIB=1",
-                    "USE_OT=1",
-                    "OT_INC=${HOME}/opt-ot/include",
-                    "OT_LIB=${HOME}/opt-ot/lib",
-                    "OT_RUNPATH=1",
-                    "USE_PCRE2=1",
-                    "USE_PCRE2_JIT=1",
-                    "USE_LUA=1",
-                    "USE_OPENSSL=1",
-                    "USE_WURFL=1",
-                    "WURFL_INC=addons/wurfl/dummy",
-                    "WURFL_LIB=addons/wurfl/dummy",
-                    "USE_DEVICEATLAS=1",
-                    "DEVICEATLAS_SRC=addons/deviceatlas/dummy",
-                    "USE_PROMEX=1",
-                    "USE_51DEGREES=1",
-                    "51DEGREES_SRC=addons/51degrees/dummy/pattern",
-                ],
-            }
-        )
+        for os_asan in [os, os_arm]:
+            matrix.append(
+                {
+                    "name": "{}, {}, ASAN, all features".format(os_asan, CC),
+                    "os": os_asan,
+                    "TARGET": TARGET,
+                    "CC": CC,
+                    "FLAGS": [
+                        "USE_OBSOLETE_LINKER=1",
+                        'ARCH_FLAGS="-g -fsanitize=address"',
+                        'OPT_CFLAGS="-O1"',
+                        "USE_ZLIB=1",
+                        "USE_OT=1",
+                        "OT_INC=${HOME}/opt-ot/include",
+                        "OT_LIB=${HOME}/opt-ot/lib",
+                        "OT_RUNPATH=1",
+                        "USE_PCRE2=1",
+                        "USE_PCRE2_JIT=1",
+                        "USE_LUA=1",
+                        "USE_OPENSSL=1",
+                        "USE_WURFL=1",
+                        "WURFL_INC=addons/wurfl/dummy",
+                        "WURFL_LIB=addons/wurfl/dummy",
+                        "USE_DEVICEATLAS=1",
+                        "DEVICEATLAS_SRC=addons/deviceatlas/dummy",
+                        "USE_PROMEX=1",
+                        "USE_51DEGREES=1",
+                        "51DEGREES_SRC=addons/51degrees/dummy/pattern",
+                    ],
+                }
+            )

     for compression in ["USE_ZLIB=1"]:
         matrix.append(


@@ -76,7 +76,7 @@ jobs:
         uses: actions/cache@v4
         with:
           path: '~/opt-ot/'
-          key: ot-${{ matrix.CC }}-${{ env.OT_CPP_VERSION }}-${{ contains(matrix.name, 'ASAN') }}
+          key: ${{ matrix.os }}-ot-${{ matrix.CC }}-${{ env.OT_CPP_VERSION }}-${{ contains(matrix.name, 'ASAN') }}
       - name: Install apt dependencies
         if: ${{ startsWith(matrix.os, 'ubuntu-') }}
         run: |


@@ -1,6 +1,44 @@
 ChangeLog :
 ===========

+2025/08/06 : 3.3-dev6
+    - MINOR: acme: implement traces
+    - BUG/MINOR: hlua: take default-path into account with lua-load-per-thread
+    - CLEANUP: counters: rename counters_be_shared_init to counters_be_shared_prepare
+    - MINOR: clock: make global_now_ms a pointer
+    - MINOR: clock: make global_now_ns a pointer as well
+    - MINOR: mux-quic: release conn after shutdown on BE reuse failure
+    - MINOR: session: strengthen connection attach to session
+    - MINOR: session: remove redundant target argument from session_add_conn()
+    - MINOR: session: strengthen idle conn limit check
+    - MINOR: session: do not release conn in session_check_idle_conn()
+    - MINOR: session: streamline session_check_idle_conn() usage
+    - MINOR: muxes: refactor private connection detach
+    - BUG/MEDIUM: mux-quic: ensure Early-data header is set
+    - BUILD: acme: avoid declaring TRACE_SOURCE in acme-t.h
+    - MINOR: acme: emit a log for DNS-01 challenge response
+    - MINOR: acme: emit the DNS-01 challenge details on the dpapi sink
+    - MEDIUM: acme: allow to wait and restart the task for DNS-01
+    - MINOR: acme: update the log for DNS-01
+    - BUG/MINOR: acme: possible integer underflow in acme_txt_record()
+    - BUG/MEDIUM: hlua_fcn: ensure systematic watcher cleanup for server list iterator
+    - MINOR: sample: Add le2dec (little endian to decimal) sample fetch
+    - BUILD: fcgi: fix the struct name of fcgi_flt_ctx
+    - BUILD: compat: provide relaxed versions of the MIN/MAX macros
+    - BUILD: quic: use _MAX() to avoid build issues in pools declarations
+    - BUILD: compat: always set _POSIX_VERSION to ease comparisons
+    - MINOR: implement ha_aligned_alloc() to return aligned memory areas
+    - MINOR: pools: support creating a pool from a pool registration
+    - MINOR: pools: add a new flag to declare static registrations
+    - MINOR: pools: force the name at creation time to be a const.
+    - MEDIUM: pools: change the static pool creation to pass a registration
+    - DEBUG: pools: store the pool registration file name and line number
+    - DEBUG: pools: also retrieve file and line for direct callers of create_pool()
+    - MEDIUM: pools: add an alignment property
+    - MINOR: pools: add macros to register aligned pools
+    - MINOR: pools: add macros to declare pools based on a struct type
+    - MEDIUM: pools: respect pool alignment in allocations
+
 2025/07/28 : 3.3-dev5
     - BUG/MEDIUM: queue/stats: also use stream_set_srv_target() for pendconns
     - DOC: list missing global QUIC settings


@@ -62,6 +62,7 @@
 #   USE_MEMORY_PROFILING : enable the memory profiler. Linux-glibc only.
 #   USE_LIBATOMIC        : force to link with/without libatomic. Automatic.
 #   USE_PTHREAD_EMULATION: replace pthread's rwlocks with ours
+#   USE_SHM_OPEN         : use shm_open() for features that can make use of shared memory
 #
 # Options can be forced by specifying "USE_xxx=1" or can be disabled by using
 # "USE_xxx=" (empty string). The list of enabled and disabled options for a
@@ -343,7 +344,7 @@ use_opts = USE_EPOLL USE_KQUEUE USE_NETFILTER USE_POLL \
            USE_MATH USE_DEVICEATLAS USE_51DEGREES \
            USE_WURFL USE_OBSOLETE_LINKER USE_PRCTL USE_PROCCTL \
            USE_THREAD_DUMP USE_EVPORTS USE_OT USE_QUIC USE_PROMEX \
-           USE_MEMORY_PROFILING \
+           USE_MEMORY_PROFILING USE_SHM_OPEN \
            USE_STATIC_PCRE USE_STATIC_PCRE2 \
            USE_PCRE USE_PCRE_JIT USE_PCRE2 USE_PCRE2_JIT USE_QUIC_OPENSSL_COMPAT
@@ -382,7 +383,7 @@ ifeq ($(TARGET),linux-glibc)
     USE_POLL USE_TPROXY USE_LIBCRYPT USE_DL USE_RT USE_CRYPT_H USE_NETFILTER \
     USE_CPU_AFFINITY USE_THREAD USE_EPOLL USE_LINUX_TPROXY USE_LINUX_CAP \
     USE_ACCEPT4 USE_LINUX_SPLICE USE_PRCTL USE_THREAD_DUMP USE_NS USE_TFO \
-    USE_GETADDRINFO USE_BACKTRACE)
+    USE_GETADDRINFO USE_BACKTRACE USE_SHM_OPEN)
   INSTALL = install -v
 endif
@@ -401,7 +402,7 @@ ifeq ($(TARGET),linux-musl)
     USE_POLL USE_TPROXY USE_LIBCRYPT USE_DL USE_RT USE_CRYPT_H USE_NETFILTER \
     USE_CPU_AFFINITY USE_THREAD USE_EPOLL USE_LINUX_TPROXY USE_LINUX_CAP \
     USE_ACCEPT4 USE_LINUX_SPLICE USE_PRCTL USE_THREAD_DUMP USE_NS USE_TFO \
-    USE_GETADDRINFO USE_BACKTRACE)
+    USE_GETADDRINFO USE_BACKTRACE USE_SHM_OPEN)
   INSTALL = install -v
 endif


@@ -1,2 +1,2 @@
 $Format:%ci$
-2025/07/28
+2025/08/06


@@ -1 +1 @@
-3.3-dev5
+3.3-dev6


@@ -3,7 +3,7 @@
                    Configuration Manual
                   ----------------------
                         version 3.3
-                        2025/07/28
+                        2025/08/06

 This document covers the configuration language as implemented in the version
@@ -19901,6 +19901,7 @@ and(value)                                   integer     integer
 b64dec                                       string      binary
 base64                                       binary      string
 be2dec(separator,chunk_size[,truncate])      binary      string
+le2dec(separator,chunk_size[,truncate])      binary      string
 be2hex([separator[,chunk_size[,truncate]]])  binary      string
 bool                                         integer     boolean
 bytes(offset[,length])                       binary      binary
@@ -20141,6 +20142,19 @@ be2dec(<separator>,<chunk_size>[,<truncate>])
       bin(01020304050607),be2dec(,2,1)   # 2587721286
       bin(7f000001),be2dec(.,1)          # 127.0.0.1

+le2dec(<separator>,<chunk_size>[,<truncate>])
+  Converts little-endian binary input sample to a string containing an unsigned
+  integer number per <chunk_size> input bytes. <separator> is inserted every
+  <chunk_size> binary input bytes if specified. The <truncate> flag indicates
+  whether the binary input is truncated at <chunk_size> boundaries. The maximum
+  value for <chunk_size> is limited by the size of long long int (8 bytes).
+
+  Example:
+      bin(01020304050607),le2dec(:,2)    # 513:1284:2055:7
+      bin(01020304050607),le2dec(-,2,1)  # 513-1284-2055
+      bin(01020304050607),le2dec(,2,1)   # 51312842055
+      bin(7f000001),le2dec(.,1)          # 127.0.0.1
+
 be2hex([<separator>[,<chunk_size>[,<truncate>]]])
   Converts big-endian binary input sample to a hex string containing two hex
   digits per input byte. It is used to log or transfer hex dumps of some


@@ -51,9 +51,11 @@ enum http_st {
 };

 struct acme_auth {
+	struct ist dns;    /* dns entry */
 	struct ist auth;   /* auth URI */
 	struct ist chall;  /* challenge URI */
 	struct ist token;  /* token */
+	int ready;         /* is the challenge ready ? */
 	void *next;
 };

@@ -79,6 +81,20 @@ struct acme_ctx {
 	X509_REQ *req;
 	struct ist finalize;
 	struct ist certificate;
+	struct task *task;
 	struct mt_list el;
 };

+#define ACME_EV_SCHED   (1ULL << 0)  /* scheduling wakeup */
+#define ACME_EV_NEW     (1ULL << 1)  /* new task */
+#define ACME_EV_TASK    (1ULL << 2)  /* Task handler */
+#define ACME_EV_REQ     (1ULL << 3)  /* HTTP Request */
+#define ACME_EV_RES     (1ULL << 4)  /* HTTP Response */
+
+#define ACME_VERB_CLEAN    1
+#define ACME_VERB_MINIMAL  2
+#define ACME_VERB_SIMPLE   3
+#define ACME_VERB_ADVANCED 4
+#define ACME_VERB_COMPLETE 5
+
 #endif


@@ -620,9 +620,92 @@ struct mem_stats {
 	_HA_ATOMIC_ADD(&_.size, __y); \
 	strdup(__x); \
 })

+#undef ha_aligned_alloc
+#define ha_aligned_alloc(a,s) ({ \
+	size_t __a = (a); \
+	size_t __s = (s); \
+	static struct mem_stats _ __attribute__((used,__section__("mem_stats"),__aligned__(sizeof(void*)))) = { \
+		.caller = { \
+			.file = __FILE__, .line = __LINE__, \
+			.what = MEM_STATS_TYPE_MALLOC, \
+			.func = __func__, \
+		}, \
+	}; \
+	HA_WEAK(__start_mem_stats); \
+	HA_WEAK(__stop_mem_stats); \
+	_HA_ATOMIC_INC(&_.calls); \
+	_HA_ATOMIC_ADD(&_.size, __s); \
+	_ha_aligned_alloc(__a, __s); \
+})
+
+#undef ha_aligned_alloc_safe
+#define ha_aligned_alloc_safe(a,s) ({ \
+	size_t __a = (a); \
+	size_t __s = (s); \
+	static struct mem_stats _ __attribute__((used,__section__("mem_stats"),__aligned__(sizeof(void*)))) = { \
+		.caller = { \
+			.file = __FILE__, .line = __LINE__, \
+			.what = MEM_STATS_TYPE_MALLOC, \
+			.func = __func__, \
+		}, \
+	}; \
+	HA_WEAK(__start_mem_stats); \
+	HA_WEAK(__stop_mem_stats); \
+	_HA_ATOMIC_INC(&_.calls); \
+	_HA_ATOMIC_ADD(&_.size, __s); \
+	_ha_aligned_alloc_safe(__a, __s); \
+})
+
+#undef ha_aligned_free
+#define ha_aligned_free(x) ({ \
+	typeof(x) __x = (x); \
+	static struct mem_stats _ __attribute__((used,__section__("mem_stats"),__aligned__(sizeof(void*)))) = { \
+		.caller = { \
+			.file = __FILE__, .line = __LINE__, \
+			.what = MEM_STATS_TYPE_FREE, \
+			.func = __func__, \
+		}, \
+	}; \
+	HA_WEAK(__start_mem_stats); \
+	HA_WEAK(__stop_mem_stats); \
+	if (__builtin_constant_p((x))) { \
+		HA_LINK_ERROR(call_to_ha_aligned_free_attempts_to_free_a_constant); \
+	} \
+	if (__x) \
+		_HA_ATOMIC_INC(&_.calls); \
+	_ha_aligned_free(__x); \
+})
+
+#undef ha_aligned_free_size
+#define ha_aligned_free_size(p,s) ({ \
+	void *__p = (p); size_t __s = (s); \
+	static struct mem_stats _ __attribute__((used,__section__("mem_stats"),__aligned__(sizeof(void*)))) = { \
+		.caller = { \
+			.file = __FILE__, .line = __LINE__, \
+			.what = MEM_STATS_TYPE_FREE, \
+			.func = __func__, \
+		}, \
+	}; \
+	HA_WEAK(__start_mem_stats); \
+	HA_WEAK(__stop_mem_stats); \
+	if (__builtin_constant_p((p))) { \
+		HA_LINK_ERROR(call_to_ha_aligned_free_attempts_to_free_a_constant); \
+	} \
+	if (__p) { \
+		_HA_ATOMIC_INC(&_.calls); \
+		_HA_ATOMIC_ADD(&_.size, __s); \
+	} \
+	_ha_aligned_free(__p); \
+})
+
 #else // DEBUG_MEM_STATS

 #define will_free(x, y) do { } while (0)
+#define ha_aligned_alloc(a,s)      _ha_aligned_alloc(a, s)
+#define ha_aligned_alloc_safe(a,s) _ha_aligned_alloc_safe(a, s)
+#define ha_aligned_free(p)         _ha_aligned_free(p)
+#define ha_aligned_free_size(p,s)  _ha_aligned_free(p)

 #endif /* DEBUG_MEM_STATS*/


@@ -28,7 +28,7 @@
 extern struct timeval              start_date;    /* the process's start date in wall-clock time */
 extern struct timeval              ready_date;    /* date when the process was considered ready */
 extern ullong                      start_time_ns; /* the process's start date in internal monotonic time (ns) */
-extern volatile ullong             global_now_ns; /* common monotonic date between all threads, in ns (wraps every 585 yr) */
+extern volatile ullong            *global_now_ns; /* common monotonic date between all threads, in ns (wraps every 585 yr) */
 extern THREAD_LOCAL ullong         now_ns;        /* internal monotonic date derived from real clock, in ns (wraps every 585 yr) */

 extern THREAD_LOCAL struct timeval date;          /* the real current date (wall-clock time) */
@@ -49,6 +49,8 @@ uint clock_report_idle(void);
 void clock_leaving_poll(int timeout, int interrupted);
 void clock_entering_poll(void);
 void clock_adjust_now_offset(void);
+void clock_set_now_offset(llong ofs);
+llong clock_get_now_offset(void);

 static inline void clock_update_date(int max_wait, int interrupted)
 {


@@ -94,11 +94,21 @@ typedef struct { } empty_t;
 # endif
 #endif

+/* unsafe ones for use with constant macros needed in initializers */
+#ifndef _MIN
+#define _MIN(a, b) ((a < b) ? a : b)
+#endif
+
+#ifndef _MAX
+#define _MAX(a, b) ((a > b) ? a : b)
+#endif
+
+/* safe versions for use anywhere except in initializers */
 #ifndef MIN
 #define MIN(a, b) ({ \
 	typeof(a) _a = (a); \
 	typeof(a) _b = (b); \
-	((_a < _b) ? _a : _b); \
+	_MIN(_a, _b); \
 })
 #endif

@@ -106,10 +116,15 @@
 #define MAX(a, b) ({ \
 	typeof(a) _a = (a); \
 	typeof(a) _b = (b); \
-	((_a > _b) ? _a : _b); \
+	_MAX(_a, _b); \
 })
 #endif

+/* always set a _POSIX_VERSION if there isn't any, in order to ease compares */
+#ifndef _POSIX_VERSION
+# define _POSIX_VERSION 0
+#endif
+
 /* this is for libc5 for example */
 #ifndef TCP_NODELAY
 #define TCP_NODELAY 1


@@ -28,7 +28,7 @@
 #include <haproxy/guid-t.h>

 int counters_fe_shared_prepare(struct fe_counters_shared *counters, const struct guid_node *guid);
-int counters_be_shared_init(struct be_counters_shared *counters, const struct guid_node *guid);
+int counters_be_shared_prepare(struct be_counters_shared *counters, const struct guid_node *guid);

 void counters_fe_shared_drop(struct fe_counters_shared *counters);
 void counters_be_shared_drop(struct be_counters_shared *counters);


@@ -12,7 +12,16 @@ int guid_insert(enum obj_type *obj_type, const char *uid, char **errmsg);
 void guid_remove(struct guid_node *guid);
 struct guid_node *guid_lookup(const char *uid);

+/* Returns the actual text key associated to <guid> node or NULL if not
+ * set
+ */
+static inline const char *guid_get(const struct guid_node *guid)
+{
+	return guid->node.key;
+}
+
 int guid_is_valid_fmt(const char *uid, char **errmsg);
 char *guid_name(const struct guid_node *guid);
+int guid_count(void);

 #endif /* _HAPROXY_GUID_H */


@@ -14,6 +14,7 @@ extern struct list post_server_check_list;
 extern struct list per_thread_alloc_list;
 extern struct list per_thread_init_list;
 extern struct list post_deinit_list;
+extern struct list post_deinit_master_list;
 extern struct list proxy_deinit_list;
 extern struct list server_deinit_list;
 extern struct list per_thread_free_list;
@@ -24,6 +25,7 @@ void hap_register_post_check(int (*fct)());
 void hap_register_post_proxy_check(int (*fct)(struct proxy *));
 void hap_register_post_server_check(int (*fct)(struct server *));
 void hap_register_post_deinit(void (*fct)());
+void hap_register_post_deinit_master(void (*fct)());
 void hap_register_proxy_deinit(void (*fct)(struct proxy *));
 void hap_register_server_deinit(void (*fct)(struct server *));
@@ -63,6 +65,10 @@ void hap_register_unittest(const char *name, int (*fct)(int, char **));
 #define REGISTER_POST_DEINIT(fct) \
 	INITCALL1(STG_REGISTER, hap_register_post_deinit, (fct))

+/* simplified way to declare a post-deinit (master process when launched in master/worker mode) callback in a file */
+#define REGISTER_POST_DEINIT_MASTER(fct) \
+	INITCALL1(STG_REGISTER, hap_register_post_deinit_master, (fct))
+
 /* simplified way to declare a proxy-deinit callback in a file */
 #define REGISTER_PROXY_DEINIT(fct) \
 	INITCALL1(STG_REGISTER, hap_register_proxy_deinit, (fct))


@@ -284,10 +284,11 @@ static __inline void watcher_attach(struct watcher *w, void *target)
 	MT_LIST_APPEND(list, &w->el);
 }

-/* Untracks target via <w> watcher. Invalid if <w> is not attached first. */
+/* Untracks target via <w> watcher. Does nothing if <w> is not attached */
 static __inline void watcher_detach(struct watcher *w)
 {
-	BUG_ON_HOT(!MT_LIST_INLIST(&w->el));
+	if (!MT_LIST_INLIST(&w->el))
+		return;
 	*w->pptr = NULL;
 	MT_LIST_DELETE(&w->el);
 }


@@ -25,6 +25,7 @@
 #include <sys/mman.h>
 #include <stdlib.h>
 #include <haproxy/api.h>
+#include <haproxy/tools.h>
 
 /************* normal allocator *************/
 
@@ -32,9 +33,9 @@
 /* allocates an area of size <size> and returns it. The semantics are similar
  * to those of malloc().
  */
-static forceinline void *pool_alloc_area(size_t size)
+static forceinline void *pool_alloc_area(size_t size, size_t align)
 {
-	return malloc(size);
+	return ha_aligned_alloc(align, size);
 }
 
 /* frees an area <area> of size <size> allocated by pool_alloc_area(). The
@@ -43,8 +44,7 @@ static forceinline void *pool_alloc_area(size_t size)
  */
 static forceinline void pool_free_area(void *area, size_t __maybe_unused size)
 {
-	will_free(area, size);
-	free(area);
+	ha_aligned_free_size(area, size);
 }
 
 /************* use-after-free allocator *************/
 
@@ -52,14 +52,15 @@ static forceinline void pool_free_area(void *area, size_t __maybe_unused size)
 /* allocates an area of size <size> and returns it. The semantics are similar
  * to those of malloc(). However the allocation is rounded up to 4kB so that a
  * full page is allocated. This ensures the object can be freed alone so that
- * future dereferences are easily detected. The returned object is always
- * 16-bytes aligned to avoid issues with unaligned structure objects. In case
- * some padding is added, the area's start address is copied at the end of the
- * padding to help detect underflows.
+ * future dereferences are easily detected. The returned object is always at
+ * least 16-bytes aligned to avoid issues with unaligned structure objects, and
+ * in any case, is always at least aligned as required by the pool, though no
+ * more than 4096. In case some padding is added, the area's start address is
+ * copied at the end of the padding to help detect underflows.
  */
-static inline void *pool_alloc_area_uaf(size_t size)
+static inline void *pool_alloc_area_uaf(size_t size, size_t align)
 {
-	size_t pad = (4096 - size) & 0xFF0;
+	size_t pad = (4096 - size) & 0xFF0 & -align;
 	void *ret;
 
 	ret = mmap(NULL, (size + 4095) & -4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
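
As an aside, the padding arithmetic relies on "& -align" acting as an alignment mask; a hedged standalone sketch with illustrative values (plain C, not part of the patch):

    #include <stdio.h>

    int main(void)
    {
        size_t size = 100, align = 64;
        /* keep 16-byte granularity (0xFF0) and round the padding down to a
         * multiple of <align>, so the object starts on an <align>-aligned
         * address as close to the end of the page as possible
         */
        size_t pad = (4096 - size) & 0xFF0 & -align;
        printf("pad=%zu\n", pad); /* 3968 for these values */
        return 0;
    }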


@@ -28,6 +28,7 @@
 #define MEM_F_SHARED	0x1
 #define MEM_F_EXACT	0x2
 #define MEM_F_UAF	0x4
+#define MEM_F_STATREG	0x8  /* static registration: do not free it! */
 
 /* A special pointer for the pool's free_list that indicates someone is
  * currently manipulating it. Serves as a short-lived lock.
@@ -69,7 +70,9 @@ struct pool_cache_head {
  */
 struct pool_registration {
 	struct list list;	/* link element */
-	char name[12];		/* name of the pool */
+	const char *name;	/* name of the pool */
+	const char *file;	/* where the pool is declared */
+	unsigned int line;	/* line in the file where the pool is declared, 0 if none */
 	unsigned int size;	/* expected object size */
 	unsigned int flags;	/* MEM_F_* */
 	unsigned int align;	/* expected alignment; 0=unspecified */
@@ -125,6 +128,7 @@ struct pool_head {
 	unsigned int minavail;	/* how many chunks are expected to be used */
 	unsigned int size;	/* chunk size */
 	unsigned int flags;	/* MEM_F_* */
+	unsigned int align;	/* alignment size */
 	unsigned int users;	/* number of pools sharing this zone */
 	unsigned int alloc_sz;	/* allocated size (includes hidden fields) */
 	unsigned int sum_size;	/* sum of all registered users' size */


@@ -30,19 +30,71 @@
 #include <haproxy/pool-t.h>
 #include <haproxy/thread.h>
 
-/* This registers a call to create_pool_callback(ptr, name, size) */
+/* This creates a pool_reg and registers a call to create_pool_callback(ptr) with it.
+ * Do not use this one, use REGISTER_POOL() instead.
+ */
+#define __REGISTER_POOL(_line, _ptr, _name, _size, _align) \
+	static struct pool_registration __pool_reg_##_line = { \
+		.name = _name, \
+		.file = __FILE__, \
+		.line = __LINE__, \
+		.size = _size, \
+		.flags = MEM_F_STATREG, \
+		.align = _align, \
+	}; \
+	INITCALL3(STG_POOL, create_pool_callback, (_ptr), (_name), &__pool_reg_##_line);
+
+/* intermediary level for line number resolution, do not use this one, use
+ * REGISTER_POOL() instead.
+ */
+#define _REGISTER_POOL(line, ptr, name, size, align) \
+	__REGISTER_POOL(line, ptr, name, size, align)
+
+/* This registers a call to create_pool_callback(ptr) with these args */
 #define REGISTER_POOL(ptr, name, size) \
-	INITCALL3(STG_POOL, create_pool_callback, (ptr), (name), (size))
+	_REGISTER_POOL(__LINE__, ptr, name, size, 0)
 
 /* This macro declares a pool head <ptr> and registers its creation */
 #define DECLARE_POOL(ptr, name, size) \
 	struct pool_head *(ptr) __read_mostly = NULL; \
-	REGISTER_POOL(&ptr, name, size)
+	_REGISTER_POOL(__LINE__, &ptr, name, size, 0)
 
 /* This macro declares a static pool head <ptr> and registers its creation */
 #define DECLARE_STATIC_POOL(ptr, name, size) \
 	static struct pool_head *(ptr) __read_mostly; \
-	REGISTER_POOL(&ptr, name, size)
+	_REGISTER_POOL(__LINE__, &ptr, name, size, 0)
+
+/*** below are the aligned pool macros, taking one extra arg for alignment ***/
+
+/* This registers a call to create_pool_callback(ptr) with these args */
+#define REGISTER_ALIGNED_POOL(ptr, name, size, align) \
+	_REGISTER_POOL(__LINE__, ptr, name, size, align)
+
+/* This macro declares an aligned pool head <ptr> and registers its creation */
+#define DECLARE_ALIGNED_POOL(ptr, name, size, align) \
+	struct pool_head *(ptr) __read_mostly = NULL; \
+	_REGISTER_POOL(__LINE__, &ptr, name, size, align)
+
+/* This macro declares a static aligned pool head <ptr> and registers its creation */
+#define DECLARE_STATIC_ALIGNED_POOL(ptr, name, size, align) \
+	static struct pool_head *(ptr) __read_mostly; \
+	_REGISTER_POOL(__LINE__, &ptr, name, size, align)
+
+/*** below are the typed pool macros, taking a type and an extra size ***/
+
+/* This registers a call to create_pool_callback(ptr) with these args */
+#define REGISTER_TYPED_POOL(ptr, name, type, extra) \
+	_REGISTER_POOL(__LINE__, ptr, name, sizeof(type) + extra, __alignof__(type))
+
+/* This macro declares a typed pool head <ptr> and registers its creation */
+#define DECLARE_TYPED_POOL(ptr, name, type, extra) \
+	struct pool_head *(ptr) __read_mostly = NULL; \
+	_REGISTER_POOL(__LINE__, &ptr, name, sizeof(type) + extra, __alignof__(type))
+
+/* This macro declares a static typed pool head <ptr> and registers its creation */
+#define DECLARE_STATIC_TYPED_POOL(ptr, name, type, extra) \
+	static struct pool_head *(ptr) __read_mostly; \
+	_REGISTER_POOL(__LINE__, &ptr, name, sizeof(type) + extra, __alignof__(type))
 
 /* By default, free objects are linked by a pointer stored at the beginning of
  * the memory area. When DEBUG_MEMORY_POOLS is set, the allocated area is
@@ -123,14 +175,22 @@ unsigned long long pool_total_allocated(void);
 unsigned long long pool_total_used(void);
 void pool_flush(struct pool_head *pool);
 void pool_gc(struct pool_head *pool_ctx);
-struct pool_head *create_pool(char *name, unsigned int size, unsigned int flags);
-void create_pool_callback(struct pool_head **ptr, char *name, unsigned int size);
+struct pool_head *create_pool_with_loc(const char *name, unsigned int size, unsigned int align,
+                                       unsigned int flags, const char *file, unsigned int line);
+struct pool_head *create_pool_from_reg(const char *name, struct pool_registration *reg);
+void create_pool_callback(struct pool_head **ptr, char *name, struct pool_registration *reg);
 void *pool_destroy(struct pool_head *pool);
 void pool_destroy_all(void);
 void *__pool_alloc(struct pool_head *pool, unsigned int flags);
 void __pool_free(struct pool_head *pool, void *ptr);
 void pool_inspect_item(const char *msg, struct pool_head *pool, const void *item, const void *caller, ssize_t ofs);
 
+#define create_pool(name, size, flags) \
+	create_pool_with_loc(name, size, 0, flags, __FILE__, __LINE__)
+
+#define create_aligned_pool(name, size, align, flags) \
+	create_pool_with_loc(name, size, align, flags, __FILE__, __LINE__)
+
 /****************** Thread-local cache management ******************/
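
For illustration, a hedged sketch of how the new typed declaration might be used by a subsystem; the type and names are hypothetical, only the macro and pool_alloc() come from the tree:

    /* hypothetical cache-line-sensitive object */
    struct my_entry {
        unsigned long long counters[8];
    };

    /* size and alignment are derived from the type; 16 extra bytes are
     * reserved after each object for optional trailing data
     */
    DECLARE_STATIC_TYPED_POOL(pool_head_my_entry, "my_entry", struct my_entry, 16);

    static struct my_entry *my_entry_new(void)
    {
        return pool_alloc(pool_head_my_entry);
    }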


@@ -448,9 +448,9 @@ struct quic_conn_closed {
 #define QUIC_FL_CONN_ANTI_AMPLIFICATION_REACHED      (1U << 0)
 #define QUIC_FL_CONN_SPIN_BIT                        (1U << 1) /* Spin bit set by remote peer */
 #define QUIC_FL_CONN_NEED_POST_HANDSHAKE_FRMS        (1U << 2) /* HANDSHAKE_DONE must be sent */
-/* gap here */
+#define QUIC_FL_CONN_IS_BACK                         (1U << 3) /* conn used on backend side */
 #define QUIC_FL_CONN_ACCEPT_REGISTERED               (1U << 4)
-/* gap here */
+#define QUIC_FL_CONN_UDP_GSO_EIO                     (1U << 5) /* GSO disabled due to an EIO occurring on the same listener */
 #define QUIC_FL_CONN_IDLE_TIMER_RESTARTED_AFTER_READ (1U << 6)
 #define QUIC_FL_CONN_RETRANS_NEEDED                  (1U << 7)
 #define QUIC_FL_CONN_RETRANS_OLD_DATA                (1U << 8) /* retransmission in progress for probing with already sent data */
@@ -488,7 +488,9 @@ static forceinline char *qc_show_flags(char *buf, size_t len, const char *delim,
 	_(QUIC_FL_CONN_ANTI_AMPLIFICATION_REACHED,
 	_(QUIC_FL_CONN_SPIN_BIT,
 	_(QUIC_FL_CONN_NEED_POST_HANDSHAKE_FRMS,
+	_(QUIC_FL_CONN_IS_BACK,
 	_(QUIC_FL_CONN_ACCEPT_REGISTERED,
+	_(QUIC_FL_CONN_UDP_GSO_EIO,
 	_(QUIC_FL_CONN_IDLE_TIMER_RESTARTED_AFTER_READ,
 	_(QUIC_FL_CONN_RETRANS_NEEDED,
 	_(QUIC_FL_CONN_RETRANS_OLD_DATA,
@@ -507,7 +509,7 @@ static forceinline char *qc_show_flags(char *buf, size_t len, const char *delim,
 	_(QUIC_FL_CONN_EXP_TIMER,
 	_(QUIC_FL_CONN_CLOSING,
 	_(QUIC_FL_CONN_DRAINING,
-	_(QUIC_FL_CONN_IMMEDIATE_CLOSE)))))))))))))))))))))));
+	_(QUIC_FL_CONN_IMMEDIATE_CLOSE)))))))))))))))))))))))));
 	/* epilogue */
 	_(~0U);
 	return buf;


@@ -82,6 +82,12 @@ void qc_check_close_on_released_mux(struct quic_conn *qc);
 int quic_stateless_reset_token_cpy(unsigned char *pos, size_t len,
                                    const unsigned char *salt, size_t saltlen);
 
+/* Returns true if <qc> is used on the backend side (as a client). */
+static inline int qc_is_back(const struct quic_conn *qc)
+{
+	return qc->flags & QUIC_FL_CONN_IS_BACK;
+}
+
 /* Free the CIDs attached to <conn> QUIC connection. */
 static inline void free_quic_conn_cids(struct quic_conn *conn)
 {
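
A hedged sketch of the kind of caller this helper enables (the helper name is hypothetical; only qc_is_back() comes from the patch):

    /* hypothetical helper choosing a log prefix by connection side */
    static inline const char *qc_side_str(const struct quic_conn *qc)
    {
        return qc_is_back(qc) ? "back" : "front";
    }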


@@ -3,7 +3,7 @@
 #define QUIC_MIN_CC_PKTSIZE  128
 #define QUIC_DGRAM_HEADLEN  (sizeof(uint16_t) + sizeof(void *))
-#define QUIC_MAX_CC_BUFSIZE  MAX(QUIC_INITIAL_IPV6_MTU, QUIC_INITIAL_IPV4_MTU)
+#define QUIC_MAX_CC_BUFSIZE  _MAX(QUIC_INITIAL_IPV6_MTU, QUIC_INITIAL_IPV4_MTU)
 
 /* Sendmsg input buffer cannot be bigger than 65535 bytes. This comes from UDP
  * header which uses a 2-bytes length field. QUIC datagrams are limited to 1252


@@ -171,25 +171,31 @@ static inline void session_unown_conn(struct session *sess, struct connection *c
 	}
 }
 
-/* Add the connection <conn> to the private conns list of session <sess>. This
- * function is called only if the connection is private. Nothing is performed
- * if the connection is already in the session list or if the session does not
- * own the connection.
+/* Add the connection <conn> to the private conns list of session <sess>. Each
+ * connection is indexed by its respective target in the session. Nothing is
+ * performed if the connection is already in the session list.
+ *
+ * Returns true if conn is inserted or already present, else false if a failure
+ * occurs during insertion.
  */
-static inline int session_add_conn(struct session *sess, struct connection *conn, void *target)
+static inline int session_add_conn(struct session *sess, struct connection *conn)
 {
 	struct sess_priv_conns *pconns = NULL;
 	struct server *srv = objt_server(conn->target);
 	int found = 0;
 
-	BUG_ON(objt_listener(conn->target));
+	/* Connection target is used to index it in the session. Only BE conns are expected in session list. */
+	BUG_ON(!conn->target || objt_listener(conn->target));
 
-	/* Already attached to the session or not the connection owner */
-	if (!LIST_ISEMPTY(&conn->sess_el) || (conn->owner && conn->owner != sess))
+	/* A connection cannot be attached already to another session. */
+	BUG_ON(conn->owner && conn->owner != sess);
+
+	/* Already attached to the session */
+	if (!LIST_ISEMPTY(&conn->sess_el))
 		return 1;
 
 	list_for_each_entry(pconns, &sess->priv_conns, sess_el) {
-		if (pconns->target == target) {
+		if (pconns->target == conn->target) {
 			found = 1;
 			break;
 		}
@@ -199,7 +205,7 @@ static inline int session_add_conn(struct session *sess, struct connection *conn
 		pconns = pool_alloc(pool_head_sess_priv_conns);
 		if (!pconns)
 			return 0;
-		pconns->target = target;
+		pconns->target = conn->target;
 		LIST_INIT(&pconns->conn_list);
 		LIST_APPEND(&sess->priv_conns, &pconns->sess_el);
@@ -219,25 +225,34 @@ static inline int session_add_conn(struct session *sess, struct connection *conn
 	return 1;
 }
 
-/* Returns 0 if the session can keep the idle conn, -1 if it was destroyed. The
- * connection must be private.
+/* Check that session <sess> is able to keep idle connection <conn>. This must
+ * be called each time a connection stored in a session becomes idle.
+ *
+ * Returns 0 if the connection is kept, else non-zero if the connection was
+ * explicitly removed from the session.
  */
 static inline int session_check_idle_conn(struct session *sess, struct connection *conn)
 {
-	/* Another session owns this connection */
-	if (conn->owner != sess)
+	/* Connection must be attached to session prior to this function call. */
+	BUG_ON(!conn->owner || conn->owner != sess);
+
+	/* Connection is not attached to a session. */
+	if (!conn->owner)
 		return 0;
 
+	/* Ensure conn is not already accounted as idle to prevent sess idle count excess increment. */
+	BUG_ON(conn->flags & CO_FL_SESS_IDLE);
+
 	if (sess->idle_conns >= sess->fe->max_out_conns) {
 		session_unown_conn(sess, conn);
 		conn->owner = NULL;
+		conn->flags &= ~CO_FL_SESS_IDLE;
+		conn->mux->destroy(conn->ctx);
 		return -1;
-	} else {
+	}
+	else {
 		conn->flags |= CO_FL_SESS_IDLE;
 		sess->idle_conns++;
 	}
 	return 0;
 }
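
For illustration, a hedged sketch of the calling convention these comments imply; the function below is hypothetical, only session_add_conn() and session_check_idle_conn() come from the tree:

    /* hypothetical detach path keeping an idle private connection on its
     * owning session, or letting the session destroy it when full
     */
    static void my_mux_keep_idle(struct session *sess, struct connection *conn)
    {
        if (!session_add_conn(sess, conn))
            return; /* insertion failure is handled later in mux->detach() */
        /* non-zero return means the connection was removed from the
         * session and destroyed by session_check_idle_conn() itself
         */
        session_check_idle_conn(sess, conn);
    }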


@@ -258,6 +258,7 @@ struct ssl_sock_ctx {
 	unsigned long error_code;  /* last error code of the error stack */
 	struct buffer early_buf;   /* buffer to store the early data received */
 	int sent_early_data;       /* Amount of early data we sent so far */
+	int can_send_early_data;   /* We did not start the handshake yet so we can send early data */
 
 #ifdef USE_QUIC
 	struct quic_conn *qc;


@@ -64,7 +64,7 @@
 
 /* currently updated and stored in time.c */
 extern THREAD_LOCAL unsigned int now_ms; /* internal date in milliseconds (may wrap) */
-extern volatile unsigned int global_now_ms;
+extern volatile unsigned int *global_now_ms;
 
 /* return 1 if tick is set, otherwise 0 */
 static inline int tick_isset(int expire)


@@ -1178,6 +1178,80 @@ static inline void *my_realloc2(void *ptr, size_t size)
 	return ret;
 }
 
+/* portable memalign(): tries to accommodate OS specificities, and may fall
+ * back to plain malloc() if not supported, meaning that the requested
+ * alignment is only a performance bonus and is not guaranteed. The caller is
+ * responsible for guaranteeing that the requested alignment is at least
+ * sizeof(void*) and a power of two. If uncertain, use _ha_aligned_alloc_safe()
+ * instead. The pointer needs to be passed to ha_aligned_free() for freeing
+ * (due to cygwin). Please use ha_aligned_alloc() instead (which does perform
+ * accounting).
+ */
+static inline void *_ha_aligned_alloc(size_t alignment, size_t size)
+{
+	/* let's consider that most OSes have posix_memalign() and make the
+	 * exception for the other ones. This way if an OS fails to build,
+	 * we'll know about it and handle it as a new exception instead of
+	 * relying on old fallbacks that may break (e.g. most BSDs have
+	 * dropped memalign()).
+	 */
+#if defined(_WIN32)
+	/* MINGW (Cygwin) uses _aligned_malloc() */
+	return _aligned_malloc(size, alignment);
+#elif _POSIX_VERSION < 200112L || defined(__sun)
+	/* Old OSes or Solaris */
+	return memalign(alignment, size);
+#else
+	void *ret;
+
+	/* most BSD, Linux since glibc 2.2, Solaris 11 */
+	if (posix_memalign(&ret, alignment, size) == 0)
+		return ret;
+	else
+		return NULL;
+#endif
+}
+
+/* portable memalign(): tries to accommodate OS specificities, and may fall
+ * back to plain malloc() if not supported, meaning that the requested
+ * alignment is only a performance bonus and is not guaranteed. The alignment
+ * will automatically be rounded up to the next power of two and set to a
+ * minimum of sizeof(void*). The checks are cheap and generally optimized away
+ * by the compiler since most input arguments are build time constants. The
+ * pointer needs to be passed to ha_aligned_free() for freeing (due to cygwin).
+ * Please use ha_aligned_alloc_safe() instead (which does perform accounting).
+ */
+static inline void *_ha_aligned_alloc_safe(size_t alignment, size_t size)
+{
+	if (unlikely(alignment < sizeof(void*)))
+		alignment = sizeof(void*);
+	else if (unlikely(alignment & (alignment - 1))) {
+		/* not power of two! round up to next power of two by filling
+		 * all LSB in O(log(log(N))) then increment the result.
+		 */
+		int shift = 1;
+
+		do {
+			alignment |= alignment >> shift;
+			shift *= 2;
+		} while (unlikely(alignment & (alignment + 1)));
+		alignment++;
+	}
+	return _ha_aligned_alloc(alignment, size);
+}
+
+/* To be used to free a pointer returned by _ha_aligned_alloc() or
+ * _ha_aligned_alloc_safe(). Please use ha_aligned_free() instead
+ * (which does perform accounting).
+ */
+static inline void _ha_aligned_free(void *ptr)
+{
+#if defined(_WIN32)
+	return _aligned_free(ptr);
+#else
+	free(ptr);
+#endif
+}
+
 int parse_dotted_uints(const char *s, unsigned int **nums, size_t *sz);
 
 /* PRNG */
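
The power-of-two rounding above can be checked in isolation; a hedged standalone sketch (plain C, illustrative values only):

    #include <stdio.h>

    /* round <alignment> up to the next power of two by filling all low
     * bits then incrementing; mirrors the loop in _ha_aligned_alloc_safe()
     */
    static size_t round_align(size_t alignment)
    {
        if (alignment < sizeof(void *))
            return sizeof(void *);
        if (alignment & (alignment - 1)) {
            int shift = 1;

            do {
                alignment |= alignment >> shift;
                shift *= 2;
            } while (alignment & (alignment + 1));
            alignment++;
        }
        return alignment;
    }

    int main(void)
    {
        printf("%zu %zu %zu\n", round_align(3), round_align(24), round_align(64));
        /* expected: 8 32 64 on LP64 targets */
        return 0;
    }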


@@ -0,0 +1,56 @@
+varnishtest "le2dec converter Test"
+
+feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(3.0-dev0)'"
+feature ignore_unknown_macro
+
+server s1 {
+	rxreq
+	txresp -hdr "Connection: close"
+} -repeat 3 -start
+
+haproxy h1 -conf {
+    defaults
+	mode http
+	timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
+	timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
+	timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
+
+    frontend fe
+	bind "fd@${fe}"
+
+	#### requests
+	http-request set-var(txn.input) req.hdr(input)
+
+	http-response set-header le2dec-1 "%[var(txn.input),le2dec(:,1)]"
+	http-response set-header le2dec-2 "%[var(txn.input),le2dec(-,3)]"
+	http-response set-header le2dec-3 "%[var(txn.input),le2dec(::,3,1)]"
+
+	default_backend be
+
+    backend be
+	server s1 ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe_sock} {
+	txreq -url "/" \
+	    -hdr "input:"
+	rxresp
+	expect resp.status == 200
+	expect resp.http.le2dec-1 == ""
+	expect resp.http.le2dec-2 == ""
+	expect resp.http.le2dec-3 == ""
+	txreq -url "/" \
+	    -hdr "input: 0123456789"
+	rxresp
+	expect resp.status == 200
+	expect resp.http.le2dec-1 == "48:49:50:51:52:53:54:55:56:57"
+	expect resp.http.le2dec-2 == "3289392-3486771-3684150-57"
+	expect resp.http.le2dec-3 == "3289392::3486771::3684150"
+	txreq -url "/" \
+	    -hdr "input: abcdefghijklmnopqrstuvwxyz"
+	rxresp
+	expect resp.status == 200
+	expect resp.http.le2dec-1 == "97:98:99:100:101:102:103:104:105:106:107:108:109:110:111:112:113:114:115:116:117:118:119:120:121:122"
+	expect resp.http.le2dec-2 == "6513249-6710628-6908007-7105386-7302765-7500144-7697523-7894902-31353"
+	expect resp.http.le2dec-3 == "6513249::6710628::6908007::7105386::7302765::7500144::7697523::7894902"
+} -run
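
The expected values follow directly from the converter's little-endian grouping; a hedged sketch verifying the first 3-byte group of "0123456789" (plain C, illustrative only):

    #include <stdio.h>

    int main(void)
    {
        /* '0','1','2' read little-endian: 48 + 49*256 + 50*65536 */
        const unsigned char *s = (const unsigned char *)"012";
        unsigned int v = s[0] | (s[1] << 8) | (s[2] << 16);
        printf("%u\n", v); /* 3289392, matching the le2dec-2 expectation */
        return 0;
    }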


@@ -34,9 +34,111 @@
 #include <haproxy/ssl_sock.h>
 #include <haproxy/ssl_utils.h>
 #include <haproxy/tools.h>
+#include <haproxy/trace.h>
+
+#define TRACE_SOURCE &trace_acme
 
 #if defined(HAVE_ACME)
 
+static void acme_trace(enum trace_level level, uint64_t mask, const struct trace_source *src,
+                       const struct ist where, const struct ist func,
+                       const void *a1, const void *a2, const void *a3, const void *a4);
+
+static const struct trace_event acme_trace_events[] = {
+	{ .mask = ACME_EV_SCHED, .name = "acme_sched", .desc = "Wakeup scheduled ACME task" },
+	{ .mask = ACME_EV_NEW,   .name = "acme_new",   .desc = "New ACME task" },
+	{ .mask = ACME_EV_TASK,  .name = "acme_task",  .desc = "ACME task" },
+	{ }
+};
+
+static const struct name_desc acme_trace_lockon_args[4] = {
+	/* arg1 */ { .name = "acme_ctx", .desc = "ACME context" },
+	/* arg2 */ { },
+	/* arg3 */ { },
+	/* arg4 */ { }
+};
+
+static const struct name_desc acme_trace_decoding[] = {
+	{ .name = "clean",    .desc = "only user-friendly stuff, generally suitable for level \"user\"" },
+	{ .name = "minimal",  .desc = "report only conn, no real decoding" },
+	{ .name = "simple",   .desc = "add error messages" },
+	{ .name = "advanced", .desc = "add handshake-related details" },
+	{ .name = "complete", .desc = "add full data dump when available" },
+	{ /* end */ }
+};
+
+struct trace_source trace_acme = {
+	.name = IST("acme"),
+	.desc = "ACME",
+	.arg_def = TRC_ARG_PRIV,
+	.default_cb = acme_trace,
+	.known_events = acme_trace_events,
+	.lockon_args = acme_trace_lockon_args,
+	.decoding = acme_trace_decoding,
+	.report_events = ~0, /* report everything by default */
+};
+
+INITCALL1(STG_REGISTER, trace_register_source, &trace_acme);
+
+static void acme_trace(enum trace_level level, uint64_t mask, const struct trace_source *src,
+                       const struct ist where, const struct ist func,
+                       const void *a1, const void *a2, const void *a3, const void *a4)
+{
+	const struct acme_ctx *ctx = a1;
+
+	if (src->verbosity <= ACME_VERB_CLEAN)
+		return;
+
+	chunk_appendf(&trace_buf, " :");
+
+	if (mask >= ACME_EV_NEW)
+		chunk_appendf(&trace_buf, " acme_ctx=%p", ctx);
+
+	if (mask == ACME_EV_NEW)
+		chunk_appendf(&trace_buf, ", crt=%s", ctx->store->path);
+
+	if (mask >= ACME_EV_TASK) {
+		switch (ctx->http_state) {
+		case ACME_HTTP_REQ:
+			chunk_appendf(&trace_buf, ", http_st: ACME_HTTP_REQ");
+			break;
+		case ACME_HTTP_RES:
+			chunk_appendf(&trace_buf, ", http_st: ACME_HTTP_RES");
+			break;
+		}
+		chunk_appendf(&trace_buf, ", st: ");
+		switch (ctx->state) {
+		case ACME_RESOURCES:    chunk_appendf(&trace_buf, "ACME_RESOURCES");    break;
+		case ACME_NEWNONCE:     chunk_appendf(&trace_buf, "ACME_NEWNONCE");     break;
+		case ACME_CHKACCOUNT:   chunk_appendf(&trace_buf, "ACME_CHKACCOUNT");   break;
+		case ACME_NEWACCOUNT:   chunk_appendf(&trace_buf, "ACME_NEWACCOUNT");   break;
+		case ACME_NEWORDER:     chunk_appendf(&trace_buf, "ACME_NEWORDER");     break;
+		case ACME_AUTH:         chunk_appendf(&trace_buf, "ACME_AUTH");         break;
+		case ACME_CHALLENGE:    chunk_appendf(&trace_buf, "ACME_CHALLENGE");    break;
+		case ACME_CHKCHALLENGE: chunk_appendf(&trace_buf, "ACME_CHKCHALLENGE"); break;
+		case ACME_FINALIZE:     chunk_appendf(&trace_buf, "ACME_FINALIZE");     break;
+		case ACME_CHKORDER:     chunk_appendf(&trace_buf, "ACME_CHKORDER");     break;
+		case ACME_CERTIFICATE:  chunk_appendf(&trace_buf, "ACME_CERTIFICATE");  break;
+		case ACME_END:          chunk_appendf(&trace_buf, "ACME_END");          break;
+		}
+	}
+
+	if (mask & (ACME_EV_REQ|ACME_EV_RES)) {
+		const struct ist *url = a2;
+		const struct buffer *buf = a3;
+
+		if (mask & ACME_EV_REQ)
+			chunk_appendf(&trace_buf, " url: %.*s", (int)url->len, url->ptr);
+
+		if (src->verbosity >= ACME_VERB_COMPLETE && level >= TRACE_LEVEL_DATA) {
+			chunk_appendf(&trace_buf, " Buffer Dump:\n");
+			chunk_appendf(&trace_buf, "%.*s", (int)buf->data, buf->area);
+		}
+	}
+}
+
 struct mt_list acme_tasks = MT_LIST_HEAD_INIT(acme_tasks);
@@ -653,6 +755,7 @@ static void acme_ctx_destroy(struct acme_ctx *ctx)
 		istfree(&auth->auth);
 		istfree(&auth->chall);
 		istfree(&auth->token);
+		istfree(&auth->dns);
 
 		next = auth->next;
 		free(auth);
 		auth = next;
@@ -788,6 +891,43 @@ int acme_http_req(struct task *task, struct acme_ctx *ctx, struct ist url, enum
 }
 
+/*
+ * compute a TXT record for DNS-01 challenge
+ * base64url(sha256(token || '.' || base64url(Thumbprint(accountKey))))
+ *
+ * https://datatracker.ietf.org/doc/html/rfc8555/#section-8.4
+ *
+ */
+unsigned int acme_txt_record(const struct ist thumbprint, const struct ist token, struct buffer *output)
+{
+	unsigned char md[EVP_MAX_MD_SIZE];
+	struct buffer *tmp = NULL;
+	unsigned int size;
+	int ret = 0;
+
+	if ((tmp = alloc_trash_chunk()) == NULL)
+		goto out;
+
+	chunk_istcat(tmp, token);
+	chunk_appendf(tmp, ".");
+	chunk_istcat(tmp, thumbprint);
+
+	if (EVP_Digest(tmp->area, tmp->data, md, &size, EVP_sha256(), NULL) == 0)
+		goto out;
+
+	ret = a2base64url((const char *)md, size, output->area, output->size);
+	if (ret < 0)
+		ret = 0;
+	output->data = ret;
+
+out:
+	free_trash_chunk(tmp);
+	return ret;
+}
+
 int acme_jws_payload(struct buffer *req, struct ist nonce, struct ist url, EVP_PKEY *pkey, struct ist kid, struct buffer *output, char **errmsg)
 {
 	struct buffer *b64payload = NULL;
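
For intuition, a hedged standalone sketch of the same key-authorization digest using plain OpenSSL, without HAProxy's chunk helpers (inputs hypothetical):

    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>

    int main(void)
    {
        /* hypothetical inputs: challenge token and account-key thumbprint */
        const char *token = "tok", *thumbprint = "thumb";
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int mdlen;
        char keyauth[256];

        /* key authorization = token '.' thumbprint (RFC 8555) */
        snprintf(keyauth, sizeof(keyauth), "%s.%s", token, thumbprint);
        if (!EVP_Digest(keyauth, strlen(keyauth), md, &mdlen, EVP_sha256(), NULL))
            return 1;
        /* the TXT record value is base64url(md); HAProxy uses a2base64url() */
        printf("digest length: %u\n", mdlen);
        return 0;
    }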
@@ -930,6 +1070,8 @@ int acme_res_certificate(struct task *task, struct acme_ctx *ctx, char **errmsg)
 		}
 	}
 
+	TRACE_DATA(__FUNCTION__, ACME_EV_RES, ctx, NULL, &hc->res.buf);
+
 	if (hc->res.status < 200 || hc->res.status >= 300) {
 		if ((ret = mjson_get_string(hc->res.buf.area, hc->res.buf.data, "$.detail", t1->area, t1->size)) > -1)
 			t1->data = ret;
@@ -1001,6 +1143,8 @@ int acme_res_chkorder(struct task *task, struct acme_ctx *ctx, char **errmsg)
 		}
 	}
 
+	TRACE_DATA(__FUNCTION__, ACME_EV_RES, ctx, NULL, &hc->res.buf);
+
 	if (hc->res.status < 200 || hc->res.status >= 300) {
 		if ((ret = mjson_get_string(hc->res.buf.area, hc->res.buf.data, "$.detail", t1->area, t1->size)) > -1)
 			t1->data = ret;
@@ -1130,6 +1274,8 @@ int acme_res_finalize(struct task *task, struct acme_ctx *ctx, char **errmsg)
 		}
 	}
 
+	TRACE_DATA(__FUNCTION__, ACME_EV_RES, ctx, NULL, &hc->res.buf);
+
 	if (hc->res.status < 200 || hc->res.status >= 300) {
 		if ((ret = mjson_get_string(hc->res.buf.area, hc->res.buf.data, "$.detail", t1->area, t1->size)) > -1)
 			t1->data = ret;
@@ -1174,9 +1320,13 @@ int acme_req_challenge(struct task *task, struct acme_ctx *ctx, struct acme_auth
 	chunk_printf(req_in, "{}");
 
+	TRACE_DATA("REQ challenge dec", ACME_EV_REQ, ctx, &auth->chall, req_in);
+
 	if (acme_jws_payload(req_in, ctx->nonce, auth->chall, ctx->cfg->account.pkey, ctx->kid, req_out, errmsg) != 0)
 		goto error;
 
+	TRACE_DATA("REQ challenge enc", ACME_EV_REQ, ctx, &auth->chall, req_out);
+
 	if (acme_http_req(task, ctx, auth->chall, HTTP_METH_POST, hdrs, ist2(req_out->area, req_out->data)))
 		goto error;
 
@@ -1211,6 +1361,8 @@ enum acme_ret acme_res_challenge(struct task *task, struct acme_ctx *ctx, struct
 	hdrs = hc->res.hdrs;
 
+	TRACE_DATA(__FUNCTION__, ACME_EV_RES, ctx, NULL, &hc->res.buf);
+
 	for (hdr = hdrs; isttest(hdr->v); hdr++) {
 		if (isteqi(hdr->n, ist("Replay-Nonce"))) {
 			istfree(&ctx->nonce);
@@ -1284,10 +1436,14 @@ int acme_post_as_get(struct task *task, struct acme_ctx *ctx, struct ist url, ch
 	if ((req_out = alloc_trash_chunk()) == NULL)
 		goto error_alloc;
 
+	TRACE_USER("POST-as-GET ", ACME_EV_REQ, ctx, &url);
+
 	/* empty payload */
 	if (acme_jws_payload(req_in, ctx->nonce, url, ctx->cfg->account.pkey, ctx->kid, req_out, errmsg) != 0)
 		goto error_jws;
 
+	TRACE_DATA("POST-as-GET enc", ACME_EV_REQ, ctx, &url, req_out);
+
 	if (acme_http_req(task, ctx, url, HTTP_METH_POST, hdrs, ist2(req_out->area, req_out->data)))
 		goto error_http;
@@ -1342,6 +1498,7 @@ int acme_res_auth(struct task *task, struct acme_ctx *ctx, struct acme_auth *aut
 		}
 	}
 
+	TRACE_DATA(__FUNCTION__, ACME_EV_RES, ctx, NULL, &hc->res.buf);
 
 	if (hc->res.status < 200 || hc->res.status >= 300) {
 		/* XXX: need a generic URN error parser */
@@ -1356,6 +1513,23 @@ int acme_res_auth(struct task *task, struct acme_ctx *ctx, struct acme_auth *aut
 		goto error;
 	}
 
+	/* check and save the DNS entry */
+	ret = mjson_get_string(hc->res.buf.area, hc->res.buf.data, "$.identifier.type", t1->area, t1->size);
+	if (ret == -1) {
+		memprintf(errmsg, "couldn't get a type \"dns\" from Authorization URL \"%s\"", auth->auth.ptr);
+		goto error;
+	}
+	t1->data = ret;
+
+	ret = mjson_get_string(hc->res.buf.area, hc->res.buf.data, "$.identifier.value", t2->area, t2->size);
+	if (ret == -1) {
+		memprintf(errmsg, "couldn't get a type \"dns\" from Authorization URL \"%s\"", auth->auth.ptr);
+		goto error;
+	}
+	t2->data = ret;
+
+	auth->dns = istdup(ist2(t2->area, t2->data));
+
 	/* get the multiple challenges and select the one from the configuration */
 	for (i = 0; ; i++) {
 		int ret;
@@ -1405,6 +1579,35 @@ int acme_res_auth(struct task *task, struct acme_ctx *ctx, struct acme_auth *aut
 		goto error;
 	}
 
+	/* compute a response for the TXT entry */
+	if (strcasecmp(ctx->cfg->challenge, "DNS-01") == 0) {
+		struct sink *dpapi;
+		struct ist line[7];
+
+		if (acme_txt_record(ist(ctx->cfg->account.thumbprint), auth->token, &trash) == 0) {
+			memprintf(errmsg, "couldn't compute the DNS-01 challenge");
+			goto error;
+		}
+
+		send_log(NULL, LOG_NOTICE, "acme: %s: DNS-01 requires to set the \"_acme-challenge.%.*s\" TXT record to \"%.*s\" and use the \"acme challenge_ready\" command over the CLI\n",
+		         ctx->store->path, (int)auth->dns.len, auth->dns.ptr, (int)trash.data, trash.area);
+
+		/* dump to the "dpapi" sink */
+		line[0] = ist("acme deploy ");
+		line[1] = ist(ctx->store->path);
+		line[2] = ist(" thumbprint ");
+		line[3] = ist(ctx->cfg->account.thumbprint);
+		line[4] = ist("\n");
+		line[5] = ist2(hc->res.buf.area, hc->res.buf.data); /* dump the HTTP response */
+		line[6] = ist("\n\0");
+
+		dpapi = sink_find("dpapi");
+		if (dpapi)
+			sink_write(dpapi, LOG_HEADER_NONE, 0, line, 7);
+	}
+
+	/* only useful for HTTP-01 */
 	if (acme_add_challenge_map(ctx->cfg->map, auth->token.ptr, ctx->cfg->account.thumbprint, errmsg) != 0) {
 		memprintf(errmsg, "couldn't add the token to the '%s' map: %s", ctx->cfg->map, *errmsg);
 		goto error;
@@ -1455,10 +1658,13 @@ int acme_req_neworder(struct task *task, struct acme_ctx *ctx, char **errmsg)
 	chunk_appendf(req_in, " ] }");
 
+	TRACE_DATA("NewOrder Decode", ACME_EV_REQ, ctx, &ctx->resources.newOrder, req_in);
+
 	if (acme_jws_payload(req_in, ctx->nonce, ctx->resources.newOrder, ctx->cfg->account.pkey, ctx->kid, req_out, errmsg) != 0)
 		goto error;
 
+	TRACE_DATA("NewOrder JWS ", ACME_EV_REQ, ctx, &ctx->resources.newOrder, req_out);
+
 	if (acme_http_req(task, ctx, ctx->resources.newOrder, HTTP_METH_POST, hdrs, ist2(req_out->area, req_out->data)))
 		goto error;
 
@@ -1507,6 +1713,7 @@ int acme_res_neworder(struct task *task, struct acme_ctx *ctx, char **errmsg)
 			ctx->order = istdup(hdr->v);
 		}
 	}
+	TRACE_DATA(__FUNCTION__, ACME_EV_RES, ctx, NULL, &hc->res.buf);
 
 	if (hc->res.status < 200 || hc->res.status >= 300) {
 		if ((ret = mjson_get_string(hc->res.buf.area, hc->res.buf.data, "$.detail", t1->area, t1->size)) > -1)
@@ -1550,6 +1757,11 @@ int acme_res_neworder(struct task *task, struct acme_ctx *ctx, char **errmsg)
 			goto error;
 		}
 
+		/* if the challenge is not DNS-01, consider that the challenge
+		 * is ready because it is computed by HAProxy */
+		if (strcasecmp(ctx->cfg->challenge, "DNS-01") != 0)
+			auth->ready = 1;
+
 		auth->next = ctx->auths;
 		ctx->auths = auth;
 		ctx->next_auth = auth;
@@ -1610,6 +1822,8 @@ int acme_req_account(struct task *task, struct acme_ctx *ctx, int newaccount, ch
 	else
 		chunk_printf(req_in, "%s", accountreq);
 
+	TRACE_DATA("newAccount Decoded", ACME_EV_REQ, ctx, &ctx->resources.newAccount, req_in);
+
 	if (acme_jws_payload(req_in, ctx->nonce, ctx->resources.newAccount, ctx->cfg->account.pkey, ctx->kid, req_out, errmsg) != 0)
 		goto error;
 
@@ -1659,6 +1873,8 @@ int acme_res_account(struct task *task, struct acme_ctx *ctx, int newaccount, ch
 		}
 	}
 
+	TRACE_DATA(__FUNCTION__, ACME_EV_RES, ctx, NULL, &hc->res.buf);
+
 	if (hc->res.status < 200 || hc->res.status >= 300) {
 		if ((ret = mjson_get_string(hc->res.buf.area, hc->res.buf.data, "$.detail", t1->area, t1->size)) > -1)
 			t1->data = ret;
@@ -1705,6 +1921,8 @@ int acme_nonce(struct task *task, struct acme_ctx *ctx, char **errmsg)
 		goto error;
 	}
 
+	TRACE_DATA(__FUNCTION__, ACME_EV_RES, ctx, NULL, &hc->res.buf);
+
 	hdrs = hc->res.hdrs;
 
 	for (hdr = hdrs; isttest(hdr->v); hdr++) {
@@ -1743,6 +1961,8 @@ int acme_directory(struct task *task, struct acme_ctx *ctx, char **errmsg)
 		goto error;
 	}
 
+	TRACE_DATA(__FUNCTION__, ACME_EV_RES, ctx, NULL, &hc->res.buf);
+
 	if ((ret = mjson_get_string(hc->res.buf.area, hc->res.buf.data, "$.newNonce", trash.area, trash.size)) <= 0) {
 		memprintf(errmsg, "couldn't get newNonce URL from the directory URL");
 		goto error;
@@ -1806,6 +2026,7 @@ struct task *acme_process(struct task *task, void *context, unsigned int state)
 	struct mt_list tmp = MT_LIST_LOCK_FULL(&ctx->el);
 
 re:
+	TRACE_USER("ACME Task Handle", ACME_EV_TASK, ctx, &st);
 
 	switch (st) {
 		case ACME_RESOURCES:
@@ -1899,6 +2120,11 @@ struct task *acme_process(struct task *task, void *context, unsigned int state)
 			break;
 		case ACME_CHALLENGE:
 			if (http_st == ACME_HTTP_REQ) {
+				/* if the challenge is not ready, wait to be woken up */
+				if (!ctx->next_auth->ready)
+					goto wait;
+
 				if (acme_req_challenge(task, ctx, ctx->next_auth, &errmsg) != 0)
 					goto retry;
 			}
@@ -1999,6 +2225,8 @@ struct task *acme_process(struct task *task, void *context, unsigned int state)
 	/* this is called when changing step in the state machine */
 	http_st = ACME_HTTP_REQ;
 	ctx->retries = ACME_RETRY; /* reinit the retries */
+	ctx->http_state = http_st;
+	ctx->state = st;
 
 	if (ctx->retryafter == 0)
 		goto re; /* optimize by not leaving the task for the next httpreq to init */
@@ -2006,8 +2234,6 @@ struct task *acme_process(struct task *task, void *context, unsigned int state)
 	/* if we have a retryafter, wait before next request (usually finalize) */
 	task->expire = tick_add(now_ms, ctx->retryafter * 1000);
 	ctx->retryafter = 0;
-	ctx->http_state = http_st;
-	ctx->state = st;
 
 	MT_LIST_UNLOCK_FULL(&ctx->el, tmp);
 	return task;
@@ -2055,8 +2281,16 @@ struct task *acme_process(struct task *task, void *context, unsigned int state)
 	task = NULL;
 	return task;
+
+wait:
+	/* wait for a task_wakeup */
+	ctx->http_state = ACME_HTTP_REQ;
+	ctx->state = st;
+	task->expire = TICK_ETERNITY;
+	MT_LIST_UNLOCK_FULL(&ctx->el, tmp);
+	return task;
 }
 
 /*
 * Return 1 if the certificate must be regenerated
 * Check if the notAfter date will happen in (validity period / 12) or 7 days per default
@@ -2133,6 +2367,7 @@ struct task *acme_scheduler(struct task *task, void *context, unsigned int state
 		if (store->conf.acme.id) {
 			if (acme_will_expire(store)) {
+				TRACE_USER("ACME Scheduling start", ACME_EV_SCHED);
 				if (acme_start_task(store, &errmsg) != 0) {
 					send_log(NULL, LOG_NOTICE, "acme: %s: %s Aborting.\n", store->path, errmsg ? errmsg : "");
 					ha_free(&errmsg);
@@ -2321,12 +2556,14 @@ static int acme_start_task(struct ckch_store *store, char **errmsg)
 	ctx->store = newstore;
 	ctx->cfg = cfg;
 	task->context = ctx;
+	ctx->task = task;
 
 	MT_LIST_INIT(&ctx->el);
 	MT_LIST_APPEND(&acme_tasks, &ctx->el);
 
 	send_log(NULL, LOG_NOTICE, "acme: %s: Starting update of the certificate.\n", ctx->store->path);
 
+	TRACE_USER("ACME Task start", ACME_EV_NEW, ctx);
 	task_wakeup(task, TASK_WOKEN_INIT);
 	return 0;
@@ -2372,6 +2609,55 @@ static int cli_acme_renew_parse(char **args, char *payload, struct appctx *appct
 	return cli_dynerr(appctx, errmsg);
 }
 
+static int cli_acme_chall_ready_parse(char **args, char *payload, struct appctx *appctx, void *private)
+{
+	char *errmsg = NULL;
+	const char *crt;
+	const char *dns;
+	struct mt_list back;
+	struct acme_ctx *ctx;
+	struct acme_auth *auth;
+	int found = 0;
+
+	if (!*args[2] && !*args[3] && !*args[4]) {
+		memprintf(&errmsg, ": not enough parameters\n");
+		goto err;
+	}
+
+	crt = args[2];
+	dns = args[4];
+
+	MT_LIST_FOR_EACH_ENTRY_LOCKED(ctx, &acme_tasks, el, back) {
+		if (strcmp(ctx->store->path, crt) != 0)
+			continue;
+
+		auth = ctx->auths;
+		while (auth) {
+			if (strncmp(dns, auth->dns.ptr, auth->dns.len) == 0) {
+				if (!auth->ready) {
+					auth->ready = 1;
+					task_wakeup(ctx->task, TASK_WOKEN_MSG);
+					found = 1;
+				} else {
+					memprintf(&errmsg, "ACME challenge for crt \"%s\" and dns \"%s\" was already READY!\n", crt, dns);
+				}
+				break;
+			}
+			auth = auth->next;
+		}
+	}
+
+	if (!found) {
+		memprintf(&errmsg, "Couldn't find the ACME task using crt \"%s\" and dns \"%s\"!\n", crt, dns);
+		goto err;
+	}
+
+	return cli_msg(appctx, LOG_INFO, "Challenge Ready!");
+err:
+	return cli_dynerr(appctx, errmsg);
+}
+
 static int cli_acme_status_io_handler(struct appctx *appctx)
 {
 	struct ebmb_node *node = NULL;
@@ -2454,6 +2740,7 @@ static int cli_acme_ps(char **args, char *payload, struct appctx *appctx, void *
 static struct cli_kw_list cli_kws = {{ },{
 	{ { "acme", "renew", NULL }, "acme renew <certfile>                   : renew a certificate using the ACME protocol", cli_acme_renew_parse, NULL, NULL, NULL, 0 },
 	{ { "acme", "status", NULL }, "acme status                             : show status of certificates configured with ACME", cli_acme_ps, cli_acme_status_io_handler, NULL, NULL, 0 },
+	{ { "acme", "challenge_ready", NULL }, "acme challenge_ready <certfile> domain <domain> : mark the DNS-01 challenge of <domain> as ready", cli_acme_chall_ready_parse, NULL, NULL, NULL, 0 },
 	{ { NULL }, NULL, NULL, NULL }
 }};


@@ -1425,7 +1425,7 @@ struct connection *conn_backend_get(int reuse_mode,
 		if (reuse_mode == PR_O_REUSE_SAFE && conn->mux->flags & MX_FL_HOL_RISK) {
 			/* attach the connection to the session private list */
 			conn->owner = sess;
-			session_add_conn(sess, conn, conn->target);
+			session_add_conn(sess, conn);
 		}
 		else {
 			srv_add_to_avail_list(srv, conn);
@@ -2159,7 +2159,7 @@ int connect_server(struct stream *s)
 			    (reuse_mode == PR_O_REUSE_SAFE &&
 			     srv_conn->mux->flags & MX_FL_HOL_RISK)) {
 				/* If it fail now, the same will be done in mux->detach() callback */
-				session_add_conn(s->sess, srv_conn, srv_conn->target);
+				session_add_conn(s->sess, srv_conn);
 			}
 		}
 	}


@@ -367,8 +367,10 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
 		if ((*args[2] && (!*args[3] || strcmp(args[2], "from") != 0)) ||
 		    alertif_too_many_args(3, file, linenum, args, &err_code)) {
-			if (rc & PR_CAP_FE)
+			if (rc & PR_CAP_FE) {
+				err_code |= ERR_ALERT | ERR_FATAL;
 				ha_alert("parsing [%s:%d] : please use the 'bind' keyword for listening addresses.\n", file, linenum);
+			}
 			goto out;
 		}
 	}


@@ -2824,10 +2824,9 @@ int check_config_validity()
 	 * as some of the fields may be accessed soon
 	 */
 	MT_LIST_FOR_EACH_ENTRY_LOCKED(newsrv, &servers_list, global_list, back) {
-		if (srv_init(newsrv) & ERR_CODE) {
-			cfgerr++;
-			continue;
-		}
+		err_code |= srv_init(newsrv);
+		if (err_code & ERR_CODE)
+			goto out;
 	}
 
 	/* starting to initialize the main proxies list */


@@ -29,8 +29,10 @@
 struct timeval                   start_date;      /* the process's start date in wall-clock time */
 struct timeval                   ready_date;      /* date when the process was considered ready */
 ullong                           start_time_ns;   /* the process's start date in internal monotonic time (ns) */
-volatile ullong                  global_now_ns;   /* common monotonic date between all threads, in ns (wraps every 585 yr) */
-volatile uint                    global_now_ms;   /* common monotonic date in milliseconds (may wrap) */
+volatile ullong                  _global_now_ns;  /* locally stored common monotonic date between all threads, in ns (wraps every 585 yr) */
+volatile ullong                 *global_now_ns;   /* common monotonic date, may point to _global_now_ns or shared memory */
+volatile uint                    _global_now_ms;  /* locally stored common monotonic date in milliseconds (may wrap) */
+volatile uint                   *global_now_ms;   /* common monotonic date in milliseconds (may wrap), may point to _global_now_ms or shared memory */
 
 /* when CLOCK_MONOTONIC is supported, the offset is applied from th_ctx->prev_mono_time instead */
 THREAD_ALIGNED(64) static llong now_offset; /* global offset between system time and global time in ns */
@@ -238,7 +240,7 @@ void clock_update_local_date(int max_wait, int interrupted)
 	now_ns += ms_to_ns(max_wait);
 
 	/* consider the most recent known date */
-	now_ns = MAX(now_ns, HA_ATOMIC_LOAD(&global_now_ns));
+	now_ns = MAX(now_ns, HA_ATOMIC_LOAD(global_now_ns));
 
 	/* this event is rare, but it requires proper handling because if
 	 * we just left now_ns where it was, the date will not be updated
@@ -269,8 +271,8 @@ void clock_update_global_date()
 	 * realistic regarding the global date, which only moves forward,
 	 * otherwise catch up.
	 */
-	old_now_ns = _HA_ATOMIC_LOAD(&global_now_ns);
-	old_now_ms = _HA_ATOMIC_LOAD(&global_now_ms);
+	old_now_ns = _HA_ATOMIC_LOAD(global_now_ns);
+	old_now_ms = _HA_ATOMIC_LOAD(global_now_ms);
 
 	do {
 		if (now_ns < old_now_ns)
@@ -299,8 +301,8 @@ void clock_update_global_date()
 		/* let's try to update the global_now_ns (both in nanoseconds
		 * and ms forms) or loop again.
		 */
-	} while ((!_HA_ATOMIC_CAS(&global_now_ns, &old_now_ns, now_ns) ||
-		  (now_ms != old_now_ms && !_HA_ATOMIC_CAS(&global_now_ms, &old_now_ms, now_ms))) &&
+	} while ((!_HA_ATOMIC_CAS(global_now_ns, &old_now_ns, now_ns) ||
+		  (now_ms != old_now_ms && !_HA_ATOMIC_CAS(global_now_ms, &old_now_ms, now_ms))) &&
		 __ha_cpu_relax());
 
 	if (!th_ctx->curr_mono_time) {
@@ -322,11 +324,12 @@ void clock_init_process_date(void)
 	th_ctx->prev_mono_time = th_ctx->curr_mono_time = before_poll_mono_ns;
 	gettimeofday(&date, NULL);
 	after_poll = before_poll = date;
-	global_now_ns = th_ctx->curr_mono_time;
-	if (!global_now_ns) // CLOCK_MONOTONIC not supported
-		global_now_ns = tv_to_ns(&date);
-	now_ns = global_now_ns;
-	global_now_ms = ns_to_ms(now_ns);
+	_global_now_ns = th_ctx->curr_mono_time;
+	if (!_global_now_ns) // CLOCK_MONOTONIC not supported
+		_global_now_ns = tv_to_ns(&date);
+	now_ns = _global_now_ns;
+
+	_global_now_ms = ns_to_ms(now_ns);
 
 	/* force time to wrap 20s after boot: we first compute the time offset
	 * that once applied to the wall-clock date will make the local time
@@ -334,14 +337,19 @@ void clock_init_process_date(void)
	 * and will be used to recompute the local time, both of which will
	 * match and continue from this shifted date.
	 */
-	now_offset = sec_to_ns((uint)((uint)(-global_now_ms) / 1000U - BOOT_TIME_WRAP_SEC));
-	global_now_ns += now_offset;
-	now_ns = global_now_ns;
+	now_offset = sec_to_ns((uint)((uint)(-_global_now_ms) / 1000U - BOOT_TIME_WRAP_SEC));
+	_global_now_ns += now_offset;
+	now_ns = _global_now_ns;
 	now_ms = ns_to_ms(now_ns);
 
 	/* correct for TICK_ETERNITY (0) */
 	if (now_ms == TICK_ETERNITY)
		now_ms++;
-	global_now_ms = now_ms;
+	_global_now_ms = now_ms;
+
+	/* for now global_now_ms points to the process-local _global_now_ms */
+	global_now_ms = &_global_now_ms;
+
+	/* same goes for global_now_ns */
+	global_now_ns = &_global_now_ns;
 
 	th_ctx->idle_pct = 100;
 	clock_update_date(0, 1);
@@ -356,6 +364,16 @@ void clock_adjust_now_offset(void)
 	HA_ATOMIC_STORE(&now_offset, now_ns - tv_to_ns(&date));
 }
 
+void clock_set_now_offset(llong ofs)
+{
+	HA_ATOMIC_STORE(&now_offset, ofs);
+}
+
+llong clock_get_now_offset(void)
+{
+	return HA_ATOMIC_LOAD(&now_offset);
+}
+
 /* must be called once per thread to initialize their thread-local variables.
 * Note that other threads might also be initializing and running in parallel.
 */
@@ -364,7 +382,7 @@ void clock_init_thread_date(void)
 	gettimeofday(&date, NULL);
 	after_poll = before_poll = date;
 
-	now_ns = _HA_ATOMIC_LOAD(&global_now_ns);
+	now_ns = _HA_ATOMIC_LOAD(global_now_ns);
 	th_ctx->idle_pct = 100;
 	th_ctx->prev_cpu_time = now_cpu_time();
 	th_ctx->prev_mono_time = now_mono_time();
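
A hedged note on the new indirection: readers now dereference the pointer, which targets either the process-local word or a shared-memory one. A sketch of a caller (the helper below is hypothetical; tick_add() and HA_ATOMIC_LOAD() come from the tree):

    /* hypothetical: compute an expiry one second from the shared clock */
    static inline int my_expire_in_1s(void)
    {
        return tick_add(HA_ATOMIC_LOAD(global_now_ms), 1000);
    }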


@@ -117,7 +117,7 @@ int conn_create_mux(struct connection *conn, int *closed_connection)
 	}
 	else if (conn->flags & CO_FL_PRIVATE) {
 		/* If it fail now, the same will be done in mux->detach() callback */
-		session_add_conn(sess, conn, conn->target);
+		session_add_conn(sess, conn);
 	}
 	return 0;
 fail:


@@ -52,12 +52,12 @@ void counters_be_shared_drop(struct be_counters_shared *counters)
 	_counters_shared_drop(counters);
 }
 
-/* retrieved shared counters pointer for a given <guid> object
+/* prepare shared counters pointer for a given <guid> object
 * <size> hint is expected to reflect the actual tg member size (fe/be)
 * if <guid> is not set, then sharing is disabled
 * Returns the pointer on success or NULL on failure
 */
-static int _counters_shared_init(struct counters_shared *shared, const struct guid_node *guid, size_t size)
+static int _counters_shared_prepare(struct counters_shared *shared, const struct guid_node *guid, size_t size)
 {
 	int it = 0;
 
@@ -85,11 +85,11 @@ static int _counters_shared_prepare(struct counters_shared *shared, const struct gu
 /* prepare shared fe counters pointer for a given <guid> object */
 int counters_fe_shared_prepare(struct fe_counters_shared *shared, const struct guid_node *guid)
 {
-	return _counters_shared_init((struct counters_shared *)shared, guid, sizeof(struct fe_counters_shared_tg));
+	return _counters_shared_prepare((struct counters_shared *)shared, guid, sizeof(struct fe_counters_shared_tg));
 }
 
 /* prepare shared be counters pointer for a given <guid> object */
-int counters_be_shared_init(struct be_counters_shared *shared, const struct guid_node *guid)
+int counters_be_shared_prepare(struct be_counters_shared *shared, const struct guid_node *guid)
 {
-	return _counters_shared_init((struct counters_shared *)shared, guid, sizeof(struct be_counters_shared_tg));
+	return _counters_shared_prepare((struct counters_shared *)shared, guid, sizeof(struct be_counters_shared_tg));
 }


@@ -290,7 +290,7 @@ static int fcgi_flt_start(struct stream *s, struct filter *filter)
 
 static void fcgi_flt_stop(struct stream *s, struct filter *filter)
 {
-	struct flt_fcgi_ctx *fcgi_ctx = filter->ctx;
+	struct fcgi_flt_ctx *fcgi_ctx = filter->ctx;
 
 	if (!fcgi_ctx)
 		return;


@@ -33,7 +33,7 @@ uint update_freq_ctr_period_slow(struct freq_ctr *ctr, uint period, uint inc)
	 */
 	for (;; __ha_cpu_relax()) {
 		curr_tick = HA_ATOMIC_LOAD(&ctr->curr_tick);
-		now_ms_tmp = HA_ATOMIC_LOAD(&global_now_ms);
+		now_ms_tmp = HA_ATOMIC_LOAD(global_now_ms);
 
 		if (now_ms_tmp - curr_tick < period)
 			return HA_ATOMIC_ADD_FETCH(&ctr->curr_ctr, inc);
@@ -81,7 +81,7 @@ ullong _freq_ctr_total_from_values(uint period, int pend,
 {
 	int remain;
 
-	remain = tick + period - HA_ATOMIC_LOAD(&global_now_ms);
+	remain = tick + period - HA_ATOMIC_LOAD(global_now_ms);
 	if (unlikely(remain < 0)) {
 		/* We're past the first period, check if we can still report a
		 * part of last period or if we're too far away.
@@ -239,7 +239,7 @@ int freq_ctr_overshoot_period(const struct freq_ctr *ctr, uint period, uint freq
 		return 0;
 	}
 
-	elapsed = HA_ATOMIC_LOAD(&global_now_ms) - tick;
+	elapsed = HA_ATOMIC_LOAD(global_now_ms) - tick;
 	if (unlikely(elapsed < 0 || elapsed > period)) {
 		/* The counter is in the future or the elapsed time is higher than the period, there is no overshoot */
 		return 0;

View File

@@ -11,6 +11,7 @@
 /* GUID global tree */
 struct eb_root guid_tree = EB_ROOT_UNIQUE;
 __decl_thread(HA_RWLOCK_T guid_lock);
+static int _guid_count = 0;

 /* Initialize <guid> members. */
 void guid_init(struct guid_node *guid)
@@ -69,15 +70,19 @@ int guid_insert(enum obj_type *objt, const char *uid, char **errmsg)
 		memprintf(errmsg, "duplicate entry with %s", dup_name);
 		goto err;
 	}
+	_guid_count += 1;
 	HA_RWLOCK_WRUNLOCK(GUID_LOCK, &guid_lock);

 	guid->obj_type = objt;
 	return 0;

 err:
 	if (guid)
 		ha_free(&guid->node.key);
 	ha_free(&dup_name);
+	if (guid)
+		guid->node.key = NULL; /* so that we can check that guid is not in a tree */
 	return 1;
 }
@@ -88,6 +93,8 @@ void guid_remove(struct guid_node *guid)
 {
 	HA_RWLOCK_WRLOCK(GUID_LOCK, &guid_lock);
 	ebpt_delete(&guid->node);
+	if (guid->node.key)
+		_guid_count--;
 	ha_free(&guid->node.key);
 	HA_RWLOCK_WRUNLOCK(GUID_LOCK, &guid_lock);
 }
@@ -171,3 +178,14 @@ char *guid_name(const struct guid_node *guid)

 	return NULL;
 }
+
+/* returns the number of guid objects inserted in guid_tree */
+int guid_count(void)
+{
+	int count;
+
+	HA_RWLOCK_WRLOCK(GUID_LOCK, &guid_lock);
+	count = _guid_count;
+	HA_RWLOCK_WRUNLOCK(GUID_LOCK, &guid_lock);
+	return count;
+}
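Since guid_count() only snapshots a value that may change as soon as the lock is released, callers should treat it as a hint. A hypothetical use, not part of this patch, written against the function above:

#include <stdlib.h>

int guid_count(void); /* from the hunk above */

/* Hypothetical helper: size a scratch table from the current number of
 * registered GUIDs. The count is only a snapshot, so callers must still
 * bound-check while filling the table.
 */
static void *alloc_guid_dump_table(size_t entry_sz, int *nb)
{
	*nb = guid_count();
	return calloc(*nb ? *nb : 1, entry_sz);
}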

View File

@@ -13363,7 +13363,24 @@ static int hlua_load_per_thread(char **args, int section_type, struct proxy *cur
 		return -1;
 	}

 	for (i = 1; *(args[i]) != 0; i++) {
-		per_thread_load[len][i - 1] = strdup(args[i]);
+		/* first arg is the filename */
+		if (i == 1 && args[1][0] != '/') {
+			char *curpath;
+			char *fullpath = NULL;
+
+			/* the filename was provided as a relative path: store the
+			 * absolute path so that the current chdir is taken into
+			 * account for the file loads on other threads, which occur
+			 * later
+			 */
+			curpath = getcwd(trash.area, trash.size);
+			if (!curpath) {
+				memprintf(err, "failed to retrieve cur path");
+				return -1;
+			}
+			per_thread_load[len][i - 1] = memprintf(&fullpath, "%s/%s", curpath, args[1]);
+		}
+		else
+			per_thread_load[len][i - 1] = strdup(args[i]);
+
 		if (per_thread_load[len][i - 1] == NULL) {
 			memprintf(err, "out of memory error");
 			return -1;
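For readers outside the HAProxy tree, the same technique in isolation: anchor a relative path to the current working directory before any later chdir() can change its meaning. A minimal sketch with standard libc calls only, not the patch's code (which goes through the trash chunk and memprintf()):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Return a malloc'ed copy of <path>, made absolute against the current
 * working directory when it is relative, or NULL on error.
 */
static char *anchor_to_cwd(const char *path)
{
	char cwd[PATH_MAX];
	char *out;
	size_t len;

	if (path[0] == '/')
		return strdup(path);          /* already absolute */

	if (!getcwd(cwd, sizeof(cwd)))
		return NULL;                  /* cwd unavailable */

	len = strlen(cwd) + 1 + strlen(path) + 1;
	out = malloc(len);
	if (out)
		snprintf(out, len, "%s/%s", cwd, path);
	return out;
}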

View File

@@ -1913,6 +1913,21 @@ int hlua_listable_servers_pairs_iterator(lua_State *L)
 	return 2;
 }

+/* ensure proper cleanup for listable_servers_pairs */
+int hlua_listable_servers_pairs_gc(lua_State *L)
+{
+	struct hlua_server_list_iterator_context *ctx;
+
+	ctx = lua_touserdata(L, 1);
+
+	/* we need to make sure that the watcher leaves in detached state even
+	 * if the iterator was interrupted (ie: "break" from the loop), else
+	 * the server watcher list will become corrupted
+	 */
+	watcher_detach(&ctx->srv_watch);
+	return 0;
+}
+
 /* init the iterator context, return iterator function
  * with context as closure. The only argument is a
  * server list object.
@@ -1925,6 +1940,12 @@ int hlua_listable_servers_pairs(lua_State *L)
 	hlua_srv_list = hlua_check_server_list(L, 1);

 	ctx = lua_newuserdata(L, sizeof(*ctx));
+
+	/* add gc metamethod to the newly created userdata */
+	lua_newtable(L);
+	hlua_class_function(L, "__gc", hlua_listable_servers_pairs_gc);
+	lua_setmetatable(L, -2);
+
 	ctx->px = hlua_srv_list->px;
 	ctx->next = NULL;
 	watcher_init(&ctx->srv_watch, &ctx->next, offsetof(struct server, watcher_list));
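The generic shape of this fix, using the plain Lua C API instead of HAProxy's hlua_class_function() wrapper: give the iterator userdata a metatable whose __gc releases the native state, so an early "break" cannot leave it attached. All my_* names below are illustrative:

#include <lua.h>
#include <lauxlib.h>

struct my_iter_ctx {
	void *native_state;             /* whatever the iterator tracks */
};

static void my_iter_release(struct my_iter_ctx *ctx)
{
	(void)ctx;                      /* detach watchers, free state... */
}

/* __gc runs even when the pairs() loop is left early with "break",
 * which is the whole point of the fix above.
 */
static int my_iter_gc(lua_State *L)
{
	my_iter_release(lua_touserdata(L, 1));
	return 0;
}

static void push_my_iter(lua_State *L)
{
	lua_newuserdata(L, sizeof(struct my_iter_ctx));
	lua_newtable(L);                  /* the metatable */
	lua_pushcfunction(L, my_iter_gc);
	lua_setfield(L, -2, "__gc");
	lua_setmetatable(L, -2);          /* attach it to the userdata */
}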

View File

@@ -1641,7 +1641,7 @@ int http_wait_for_response(struct stream *s, struct channel *rep, int an_bit)
 				conn_set_owner(srv_conn, sess, NULL);
 				conn_set_private(srv_conn);
 				/* If it fails now, the same will be done in mux->detach() callback */
-				session_add_conn(srv_conn->owner, srv_conn, srv_conn->target);
+				session_add_conn(srv_conn->owner, srv_conn);
 				break;
 			}
 		}

View File

@@ -57,6 +57,11 @@ struct list per_thread_init_list = LIST_HEAD_INIT(per_thread_init_list);
  */
 struct list post_deinit_list = LIST_HEAD_INIT(post_deinit_list);

+/* These functions are called after everything is stopped, right before exit(),
+ * for the master process when haproxy was started in master-worker mode. They
+ * don't return anything.
+ */
+struct list post_deinit_master_list = LIST_HEAD_INIT(post_deinit_master_list);
+
 /* These functions are called when freeing a proxy during the deinit, after
  * everything is stopped. They don't return anything. They should not release
  * the proxy itself or any shared resources that are possibly used by other
@@ -160,6 +165,22 @@ void hap_register_post_deinit(void (*fct)())
 	LIST_APPEND(&post_deinit_list, &b->list);
 }

+/* used to register some de-initialization functions to call after everything
+ * has stopped, but only for the master process (when started in master-worker
+ * mode).
+ */
+void hap_register_post_deinit_master(void (*fct)())
+{
+	struct post_deinit_fct *b;
+
+	b = calloc(1, sizeof(*b));
+	if (!b) {
+		fprintf(stderr, "out of memory\n");
+		exit(1);
+	}
+	b->fct = fct;
+	LIST_APPEND(&post_deinit_master_list, &b->list);
+}
+
 /* used to register some per proxy de-initialization functions to call after
  * everything has stopped.
  */
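A sketch of how a module would use the new hook; the module name and the cleanup body are hypothetical, only hap_register_post_deinit_master() comes from the hunk above:

#include <unistd.h>

void hap_register_post_deinit_master(void (*fct)()); /* from the hunk above */

/* Hypothetical module cleanup: runs in the master only, right before
 * exit(), once all workers are gone.
 */
static void my_module_master_cleanup(void)
{
	unlink("/tmp/my_module.state"); /* illustrative resource */
}

/* called from the module's own init path */
static void my_module_init(void)
{
	hap_register_post_deinit_master(my_module_master_cleanup);
}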

View File

@@ -3723,22 +3723,25 @@ static void fcgi_detach(struct sedesc *sd)
 	    (fconn->flags & FCGI_CF_KEEP_CONN)) {
 		if (fconn->conn->flags & CO_FL_PRIVATE) {
 			/* Add the connection in the session serverlist, if not already done */
-			if (!session_add_conn(sess, fconn->conn, fconn->conn->target)) {
+			if (!session_add_conn(sess, fconn->conn))
 				fconn->conn->owner = NULL;
-				if (eb_is_empty(&fconn->streams_by_id)) {
-					/* let's kill the connection right away */
+
+			if (eb_is_empty(&fconn->streams_by_id)) {
+				if (!fconn->conn->owner) {
+					/* Session insertion above has failed and connection is idle, remove it. */
 					fconn->conn->mux->destroy(fconn);
 					TRACE_DEVEL("outgoing connection killed", FCGI_EV_STRM_END|FCGI_EV_FCONN_ERR);
 					return;
 				}
-			}
-			if (eb_is_empty(&fconn->streams_by_id)) {
+
 				/* mark that the tasklet may lose its context to another thread and
 				 * that the handler needs to check it under the idle conns lock.
 				 */
 				HA_ATOMIC_OR(&fconn->wait_event.tasklet->state, TASK_F_USR1);
-				if (session_check_idle_conn(fconn->conn->owner, fconn->conn) != 0) {
-					/* The connection is destroyed, let's leave */
+
+				/* Ensure session can keep a new idle connection. */
+				if (session_check_idle_conn(sess, fconn->conn) != 0) {
+					fconn->conn->mux->destroy(fconn);
 					TRACE_DEVEL("outgoing connection killed", FCGI_EV_STRM_END|FCGI_EV_FCONN_ERR);
 					return;
 				}
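The H1, H2, SPOP and QUIC mux patches below apply the same reshuffle as this FCGI hunk. A condensed, hedged sketch of the shared pattern; the helper name and the two function-pointer parameters are mine, only the session_*() calls are the API actually touched by these diffs:

#include <haproxy/connection.h>
#include <haproxy/session.h>

/* Common detach-time handling of a private connection: attach it to the
 * session, and if it is idle, either destroy it (insertion failed) or let
 * the session decide whether it may keep one more idle connection.
 */
static void detach_private_conn(struct session *sess, struct connection *conn,
                                void *mux_ctx,
                                int (*is_idle)(void *),
                                void (*destroy)(void *))
{
	/* attach the connection to the session; on failure it simply
	 * loses its owner
	 */
	if (!session_add_conn(sess, conn))
		conn->owner = NULL;

	if (is_idle(mux_ctx)) {
		if (!conn->owner) {
			destroy(mux_ctx);   /* insertion failed and conn is idle */
			return;
		}
		if (session_check_idle_conn(sess, conn) != 0)
			destroy(mux_ctx);   /* session refused one more idle conn */
	}
}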

View File

@@ -1138,20 +1138,24 @@ static int h1s_finish_detach(struct h1s *h1s)
 		if (h1c->conn->flags & CO_FL_PRIVATE) {
 			/* Add the connection in the session server list, if not already done */
-			if (!session_add_conn(sess, h1c->conn, h1c->conn->target)) {
+			if (!session_add_conn(sess, h1c->conn)) {
+				/* HTTP/1.1 conn is always idle after detach, can be removed if session insert failed. */
 				h1c->conn->owner = NULL;
 				h1c->conn->mux->destroy(h1c);
 				goto released;
 			}

-			/* Always idle at this step */
+			/* HTTP/1.1 conn is always idle after detach. */
 			/* mark that the tasklet may lose its context to another thread and
 			 * that the handler needs to check it under the idle conns lock.
 			 */
 			HA_ATOMIC_OR(&h1c->wait_event.tasklet->state, TASK_F_USR1);
+
+			/* Ensure session can keep a new idle connection. */
 			if (session_check_idle_conn(sess, h1c->conn)) {
-				/* The connection got destroyed, let's leave */
-				TRACE_DEVEL("outgoing connection killed", H1_EV_STRM_END|H1_EV_H1C_END);
+				TRACE_DEVEL("outgoing connection rejected", H1_EV_STRM_END|H1_EV_H1C_END, h1c->conn);
+				h1c->conn->mux->destroy(h1c);
 				goto released;
 			}
 		}

View File

@@ -5533,21 +5533,25 @@ static void h2_detach(struct sedesc *sd)
 		if (h2c->conn->flags & CO_FL_PRIVATE) {
 			/* Add the connection in the session server list, if not already done */
-			if (!session_add_conn(sess, h2c->conn, h2c->conn->target)) {
+			if (!session_add_conn(sess, h2c->conn))
 				h2c->conn->owner = NULL;
-				if (eb_is_empty(&h2c->streams_by_id)) {
+
+			if (eb_is_empty(&h2c->streams_by_id)) {
+				if (!h2c->conn->owner) {
+					/* Session insertion above has failed and connection is idle, remove it. */
 					h2c->conn->mux->destroy(h2c);
 					TRACE_DEVEL("leaving on error after killing outgoing connection", H2_EV_STRM_END|H2_EV_H2C_ERR);
 					return;
 				}
-			}
-			if (eb_is_empty(&h2c->streams_by_id)) {
+
 				/* mark that the tasklet may lose its context to another thread and
 				 * that the handler needs to check it under the idle conns lock.
 				 */
 				HA_ATOMIC_OR(&h2c->wait_event.tasklet->state, TASK_F_USR1);
-				if (session_check_idle_conn(h2c->conn->owner, h2c->conn) != 0) {
-					/* At this point either the connection is destroyed, or it's been added to the server idle list, just stop */
+
+				/* Ensure session can keep a new idle connection. */
+				if (session_check_idle_conn(sess, h2c->conn) != 0) {
+					h2c->conn->mux->destroy(h2c);
 					TRACE_DEVEL("leaving without reusable idle connection", H2_EV_STRM_END);
 					return;
 				}

View File

@@ -1857,6 +1857,14 @@ int qcc_recv(struct qcc *qcc, uint64_t id, uint64_t len, uint64_t offset,
 		offset = qcs->rx.offset;
 	}

+	if (len && (qcc->flags & QC_CF_WAIT_HS)) {
+		if (!(qcc->conn->flags & CO_FL_EARLY_DATA)) {
+			/* Ensure 'Early-data: 1' will be set on the request. */
+			TRACE_PROTO("received early data", QMUX_EV_QCC_RECV|QMUX_EV_QCS_RECV, qcc->conn, qcs);
+			qcc->conn->flags |= CO_FL_EARLY_DATA;
+		}
+	}
+
 	left = len;
 	while (left) {
 		struct qc_stream_rxbuf *buf;
@@ -3784,26 +3792,25 @@ static void qmux_strm_detach(struct sedesc *sd)
 	if (conn->flags & CO_FL_PRIVATE) {
 		TRACE_DEVEL("handle private connection reuse", QMUX_EV_STRM_END, conn);
-		/* Add connection into session. If an error occured,
-		 * conn will be closed if idle, or insert will be
-		 * retried on next detach.
+		/* Ensure conn is attached into session. Most of the times
+		 * this is already done during connect so this is a no-op.
 		 */
-		if (!session_add_conn(sess, conn, conn->target)) {
+		if (!session_add_conn(sess, conn)) {
 			TRACE_ERROR("error during connection insert into session list", QMUX_EV_STRM_END, conn);
 			conn->owner = NULL;
-			if (!qcc->nb_sc) {
-				qcc_shutdown(qcc);
-				goto end;
-			}
 		}

-		/* If conn is idle, check if session can keep it. Conn is freed if this is not the case.
-		 * TODO graceful shutdown should be preferable instead of plain mux->destroy().
-		 */
-		if (!qcc->nb_sc && session_check_idle_conn(sess, conn)) {
-			TRACE_DEVEL("idle conn rejected by session", QMUX_EV_STRM_END);
-			conn = NULL;
-			goto end;
+		if (!qcc->nb_sc) {
+			if (!conn->owner) {
+				/* Session insertion above has failed and connection is idle, remove it. */
+				goto release;
+			}
+
+			/* Ensure session can keep a new idle connection. */
+			if (session_check_idle_conn(sess, conn)) {
+				TRACE_DEVEL("idle conn rejected by session", QMUX_EV_STRM_END, conn);
+				goto release;
+			}
 		}
 	}
 	else {
@@ -3812,8 +3819,9 @@ static void qmux_strm_detach(struct sedesc *sd)
 		if (!srv_add_to_idle_list(objt_server(conn->target), conn, 1)) {
 			/* Idle conn insert failure, gracefully close the connection. */
 			TRACE_DEVEL("idle connection cannot be kept on the server", QMUX_EV_STRM_END, conn);
-			qcc_shutdown(qcc);
+			goto release;
 		}
 		goto end;
 	}
 	else if (!conn->hash_node->node.node.leaf_p &&

View File

@@ -2977,21 +2977,25 @@ static void spop_detach(struct sedesc *sd)
 	if (!(spop_conn->flags & (SPOP_CF_RCVD_SHUT|SPOP_CF_ERR_PENDING|SPOP_CF_ERROR))) {
 		if (spop_conn->conn->flags & CO_FL_PRIVATE) {
 			/* Add the connection in the session server list, if not already done */
-			if (!session_add_conn(sess, spop_conn->conn, spop_conn->conn->target)) {
+			if (!session_add_conn(sess, spop_conn->conn))
 				spop_conn->conn->owner = NULL;
-				if (eb_is_empty(&spop_conn->streams_by_id)) {
+
+			if (eb_is_empty(&spop_conn->streams_by_id)) {
+				if (!spop_conn->conn->owner) {
+					/* Session insertion above has failed and connection is idle, remove it. */
 					spop_conn->conn->mux->destroy(spop_conn);
 					TRACE_DEVEL("leaving on error after killing outgoing connection", SPOP_EV_STRM_END|SPOP_EV_SPOP_CONN_ERR);
 					return;
 				}
-			}
-			if (eb_is_empty(&spop_conn->streams_by_id)) {
+
 				/* mark that the tasklet may lose its context to another thread and
 				 * that the handler needs to check it under the idle conns lock.
 				 */
 				HA_ATOMIC_OR(&spop_conn->wait_event.tasklet->state, TASK_F_USR1);
-				if (session_check_idle_conn(spop_conn->conn->owner, spop_conn->conn) != 0) {
-					/* At this point either the connection is destroyed, or it's been added to the server idle list, just stop */
+
+				/* Ensure session can keep a new idle connection. */
+				if (session_check_idle_conn(sess, spop_conn->conn) != 0) {
+					spop_conn->conn->mux->destroy(spop_conn);
 					TRACE_DEVEL("leaving without reusable idle connection", SPOP_EV_STRM_END);
 					return;
 				}

View File

@@ -29,6 +29,7 @@
 #include <haproxy/list.h>
 #include <haproxy/log.h>
 #include <haproxy/listener.h>
+#include <haproxy/list.h>
 #include <haproxy/mworker.h>
 #include <haproxy/peers.h>
 #include <haproxy/proto_sockpair.h>
@@ -625,7 +626,13 @@ void mworker_catch_sigchld(struct sig_handler *sh)
 		}
 		/* Better rely on the system than on a list of processes to check if it was the last one */
 		else if (exitpid == -1 && errno == ECHILD) {
+			struct post_deinit_fct *pdff;
+
 			ha_warning("All workers exited. Exiting... (%d)\n", (exitcode > 0) ? exitcode : EXIT_SUCCESS);
+
+			list_for_each_entry(pdff, &post_deinit_master_list, list)
+				pdff->fct();
+
 			atexit_flag = 0;
 			if (exitcode > 0)
 				exit(exitcode); /* parent must leave using the status code that provoked the exit */

View File

@@ -290,27 +290,67 @@ static int mem_should_fail(const struct pool_head *pool)
  * is available for a new creation. Two flags are supported :
  *   - MEM_F_SHARED to indicate that the pool may be shared with other users
  *   - MEM_F_EXACT to indicate that the size must not be rounded up
+ * The name must be a stable pointer during all the program's life time.
+ * The file and line are passed to store the registration location in the
+ * registration struct. Use create_pool() instead which does it for free.
+ * The alignment will be stored as-is in the registration.
  */
-struct pool_head *create_pool(char *name, unsigned int size, unsigned int flags)
+struct pool_head *create_pool_with_loc(const char *name, unsigned int size,
+                                       unsigned int align, unsigned int flags,
+                                       const char *file, unsigned int line)
 {
-	unsigned int extra_mark, extra_caller, extra;
 	struct pool_registration *reg;
 	struct pool_head *pool;
+
+	reg = calloc(1, sizeof(*reg));
+	if (!reg)
+		return NULL;
+
+	reg->name = name;
+	reg->file = file;
+	reg->line = line;
+	reg->size = size;
+	reg->flags = flags;
+	reg->align = align;
+
+	pool = create_pool_from_reg(name, reg);
+	if (!pool)
+		free(reg);
+	return pool;
+}
+
+/* create a pool from a pool registration. All configuration is taken from
+ * there. The alignment will automatically be raised to sizeof(void*) or the
+ * next power of two so that it's always possible to lazily pass alignof() or
+ * sizeof(). Alignments are always respected when merging pools.
+ */
+struct pool_head *create_pool_from_reg(const char *name, struct pool_registration *reg)
+{
+	unsigned int extra_mark, extra_caller, extra;
+	unsigned int flags = reg->flags;
+	unsigned int size = reg->size;
+	unsigned int alignment = reg->align;
+	struct pool_head *pool = NULL;
 	struct pool_head *entry;
 	struct list *start;
 	unsigned int align;
 	unsigned int best_diff;
 	int thr __maybe_unused;

-	pool = NULL;
-	reg = calloc(1, sizeof(*reg));
-	if (!reg)
-		goto fail;
-
-	strlcpy2(reg->name, name, sizeof(reg->name));
-	reg->size = size;
-	reg->flags = flags;
-	reg->align = 0;
+	/* extend alignment if needed */
+	if (alignment < sizeof(void*))
+		alignment = sizeof(void*);
+	else if (alignment & (alignment - 1)) {
+		/* not power of two! round up to next power of two by filling
+		 * all LSB in O(log(log(N))) then increment the result.
+		 */
+		int shift = 1;
+		do {
+			alignment |= alignment >> shift;
+			shift *= 2;
+		} while (alignment & (alignment + 1));
+		alignment++;
+	}

 	extra_mark = (pool_debugging & POOL_DBG_TAG) ? POOL_EXTRA_MARK : 0;
 	extra_caller = (pool_debugging & POOL_DBG_CALLER) ? POOL_EXTRA_CALLER : 0;
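The LSB-filling loop deserves a worked example. Take alignment = 24 (binary 11000): after v |= v >> 1 it becomes 11100, after v |= v >> 2 it becomes 11111, and incrementing yields 100000 = 32. A standalone test of the same technique:

#include <assert.h>

/* Round <v> up to the next power of two by propagating the highest set
 * bit into all lower positions, then adding one: the same technique as
 * the loop in the hunk above.
 */
static unsigned int round_up_pow2(unsigned int v)
{
	int shift = 1;

	if (v && !(v & (v - 1)))
		return v;               /* already a power of two */
	do {
		v |= v >> shift;        /* fill all bits below the MSB */
		shift *= 2;
	} while (v & (v + 1));          /* stop once all LSB are set */
	return v + 1;
}

int main(void)
{
	assert(round_up_pow2(3)  == 4);
	assert(round_up_pow2(24) == 32);
	assert(round_up_pow2(64) == 64);
	return 0;
}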
@@ -407,6 +447,7 @@ struct pool_head *create_pool(char *name, unsigned int size, unsigned int flags)
 		strlcpy2(pool->name, name, sizeof(pool->name));
 		pool->alloc_sz = size + extra;
 		pool->size = size;
+		pool->align = alignment;
 		pool->flags = flags;
 		LIST_APPEND(start, &pool->list);
 		LIST_INIT(&pool->regs);
@@ -426,6 +467,8 @@ struct pool_head *create_pool(char *name, unsigned int size, unsigned int flags)
 			pool->size = size;
 			pool->alloc_sz = size + extra;
 		}
+		if (alignment > pool->align)
+			pool->align = alignment;
 		DPRINTF(stderr, "Sharing %s with %s\n", name, pool->name);
 	}
@@ -433,10 +476,8 @@ struct pool_head *create_pool(char *name, unsigned int size, unsigned int flags)
 	pool->users++;
 	pool->sum_size += size;
-	return pool;

 fail:
-	free(reg);
-	return NULL;
+	return pool;
 }

 /* Tries to allocate an object for the pool <pool> using the system's allocator
@@ -449,9 +490,9 @@ void *pool_get_from_os_noinc(struct pool_head *pool)
 		void *ptr;

 		if ((pool_debugging & POOL_DBG_UAF) || (pool->flags & MEM_F_UAF))
-			ptr = pool_alloc_area_uaf(pool->alloc_sz);
+			ptr = pool_alloc_area_uaf(pool->alloc_sz, pool->align);
 		else
-			ptr = pool_alloc_area(pool->alloc_sz);
+			ptr = pool_alloc_area(pool->alloc_sz, pool->align);
 		if (ptr)
 			return ptr;
 		_HA_ATOMIC_INC(&pool->buckets[pool_tbucket()].failed);
@@ -1037,7 +1078,8 @@ void *pool_destroy(struct pool_head *pool)
 		list_for_each_entry_safe(reg, back, &pool->regs, list) {
 			LIST_DELETE(&reg->list);
-			free(reg);
+			if (!(reg->flags & MEM_F_STATREG))
+				free(reg);
 		}

 		LIST_DELETE(&pool->list);
@@ -1291,10 +1333,10 @@ void dump_pools_to_trash(int how, int max, const char *pfx)
 		chunk_appendf(&trash, ". Use SIGQUIT to flush them.\n");

 	for (i = 0; i < nbpools && i < max; i++) {
-		chunk_appendf(&trash, " - Pool %s (%lu bytes) : %lu allocated (%lu bytes), %lu used"
+		chunk_appendf(&trash, " - Pool %s (%u bytes/%u) : %lu allocated (%lu bytes), %lu used"
 			      " (~%lu by thread caches)"
 			      ", needed_avg %lu, %lu failures, %u users, @%p%s\n",
-		              pool_info[i].entry->name, (ulong)pool_info[i].entry->size,
+		              pool_info[i].entry->name, pool_info[i].entry->size, pool_info[i].entry->align,
 		              pool_info[i].alloc_items, pool_info[i].alloc_bytes,
 		              pool_info[i].used_items, pool_info[i].cached_items,
 		              pool_info[i].need_avg, pool_info[i].failed_items,
@@ -1307,8 +1349,12 @@ void dump_pools_to_trash(int how, int max, const char *pfx)
 		if (detailed) {
 			struct pool_registration *reg;

-			list_for_each_entry(reg, &pool_info[i].entry->regs, list)
-				chunk_appendf(&trash, "   >  %-12s: size=%u flags=%#x align=%u\n", reg->name, reg->size, reg->flags, reg->align);
+			list_for_each_entry(reg, &pool_info[i].entry->regs, list) {
+				chunk_appendf(&trash, "   >  %-12s: size=%u flags=%#x align=%u", reg->name, reg->size, reg->flags, reg->align);
+				if (reg->file && reg->line)
+					chunk_appendf(&trash, " [%s:%u]", reg->file, reg->line);
+				chunk_appendf(&trash, "\n");
+			}
 		}
 	}
@@ -1522,12 +1568,12 @@ static int cli_io_handler_dump_pools(struct appctx *appctx)
  * resulting pointer into <ptr>. If the allocation fails, it quits after
  * emitting an error message.
  */
-void create_pool_callback(struct pool_head **ptr, char *name, unsigned int size)
+void create_pool_callback(struct pool_head **ptr, char *name, struct pool_registration *reg)
 {
-	*ptr = create_pool(name, size, MEM_F_SHARED);
+	*ptr = create_pool_from_reg(name, reg);
 	if (!*ptr) {
 		ha_alert("Failed to allocate pool '%s' of size %u : %s. Aborting.\n",
-			 name, size, strerror(errno));
+			 name, reg->size, strerror(errno));
 		exit(1);
 	}
 }

View File

@@ -1768,7 +1768,7 @@ static int proxy_postcheck(struct proxy *px)
 	 * be_counters may be used even if the proxy lacks the backend
 	 * capability
 	 */
-	if (!counters_be_shared_init(&px->be_counters.shared, &px->guid)) {
+	if (!counters_be_shared_prepare(&px->be_counters.shared, &px->guid)) {
 		ha_alert("out of memory while setting up shared counters for %s %s\n",
 		         proxy_type_str(px), px->id);
 		err_code |= ERR_ALERT | ERR_FATAL;
@@ -2823,6 +2823,8 @@ void proxy_adjust_all_maxconn()
  */
 static int post_section_px_cleanup()
 {
+	if (!curproxy)
+		return 0; // nothing to do
+
 	if ((curproxy->cap & PR_CAP_LISTEN) && !(curproxy->cap & PR_CAP_DEF)) {
 		/* This is a regular proxy (not defaults). It doesn't need
 		 * to keep a default-server section if it still had one. We

View File

@@ -151,7 +151,7 @@ static int quic_conn_init_idle_timer_task(struct quic_conn *qc, struct proxy *px
 /* Returns 1 if the peer has validated <qc> QUIC connection address, 0 if not. */
 int quic_peer_validated_addr(struct quic_conn *qc)
 {
-	if (objt_server(qc->target))
+	if (qc_is_back(qc))
 		return 1;

 	if (qc->flags & QUIC_FL_CONN_PEER_VALIDATED_ADDR)
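These QUIC hunks replace pointer-type checks on qc->target with a dedicated helper. Judging from the QUIC_FL_CONN_IS_BACK flag set in qc_new_conn() further down, qc_is_back() presumably reduces to a flag test along these lines (my sketch, not the patch's own definition):

/* Assumed shape of the helper: a connection is a backend (client-side)
 * one when QUIC_FL_CONN_IS_BACK was set at qc_new_conn() time, which
 * avoids dereferencing qc->target everywhere.
 */
static inline int qc_is_back(const struct quic_conn *qc)
{
	return !!(qc->flags & QUIC_FL_CONN_IS_BACK);
}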
@@ -478,7 +478,7 @@ int quic_build_post_handshake_frames(struct quic_conn *qc)
 	qel = qc->ael;
 	/* Only servers must send a HANDSHAKE_DONE frame. */
-	if (objt_listener(qc->target)) {
+	if (!qc_is_back(qc)) {
 		size_t new_token_frm_len;

 		frm = qc_frm_alloc(QUIC_FT_HANDSHAKE_DONE);
@@ -825,7 +825,7 @@ struct task *quic_conn_io_cb(struct task *t, void *context, unsigned int state)
 	st = qc->state;
-	if (objt_listener(qc->target)) {
+	if (!qc_is_back(qc)) {
 		if (st >= QUIC_HS_ST_COMPLETE && !quic_tls_pktns_is_dcd(qc, qc->hpktns))
 			discard_hpktns = 1;
 	}
@@ -841,13 +841,13 @@ struct task *quic_conn_io_cb(struct task *t, void *context, unsigned int state)
 		qc_set_timer(qc);
 		qc_el_rx_pkts_del(qc->hel);
 		qc_release_pktns_frms(qc, qc->hel->pktns);
-		if (objt_server(qc->target)) {
+		if (qc_is_back(qc)) {
 			/* I/O callback switch */
 			qc->wait_event.tasklet->process = quic_conn_app_io_cb;
 		}
 	}

-	if (objt_listener(qc->target) && st >= QUIC_HS_ST_COMPLETE) {
+	if (!qc_is_back(qc) && st >= QUIC_HS_ST_COMPLETE) {
 		/* Note: if no token for address validation was received
 		 * for a 0RTT connection, some 0RTT packet could still be
 		 * waiting for HP removal AFTER the successful handshake completion.
@@ -913,7 +913,7 @@ struct task *quic_conn_io_cb(struct task *t, void *context, unsigned int state)
 	 * discard Initial keys when it first sends a Handshake packet...
 	 */
-	if (objt_server(qc->target) && !quic_tls_pktns_is_dcd(qc, qc->ipktns) &&
+	if (qc_is_back(qc) && !quic_tls_pktns_is_dcd(qc, qc->ipktns) &&
 	    qc->hpktns && qc->hpktns->tx.in_flight > 0) {
 		/* Discard the Initial packet number space. */
 		TRACE_PROTO("discarding Initial pktns", QUIC_EV_CONN_PRSHPKT, qc);
@@ -1029,7 +1029,7 @@ struct task *qc_process_timer(struct task *task, void *ctx, unsigned int state)
 			}
 		}
 	}
-	else if (objt_server(qc->target) && qc->state <= QUIC_HS_ST_COMPLETE) {
+	else if (qc_is_back(qc) && qc->state <= QUIC_HS_ST_COMPLETE) {
 		if (quic_tls_has_tx_sec(qc->hel))
 			qc->hel->pktns->tx.pto_probe = 1;
 		if (quic_tls_has_tx_sec(qc->iel))
@@ -1178,6 +1178,11 @@ struct quic_conn *qc_new_conn(const struct quic_version *qv, int ipv4,
 			cc_algo = l->bind_conf->quic_cc_algo;

 		qc->flags = 0;
+
+		/* Duplicate GSO status on listener to connection */
+		if (HA_ATOMIC_LOAD(&l->flags) & LI_F_UDP_GSO_NOTSUPP)
+			qc->flags |= QUIC_FL_CONN_UDP_GSO_EIO;
+
 		/* Mark this connection as having not received any token when 0-RTT is enabled. */
 		if (l->bind_conf->ssl_conf.early_data && !token)
 			qc->flags |= QUIC_FL_CONN_NO_TOKEN_RCVD;
@@ -1193,7 +1198,7 @@ struct quic_conn *qc_new_conn(const struct quic_version *qv, int ipv4,
 	else {
 		struct quic_connection_id *conn_cid = NULL;

-		qc->flags = QUIC_FL_CONN_PEER_VALIDATED_ADDR;
+		qc->flags = QUIC_FL_CONN_IS_BACK|QUIC_FL_CONN_PEER_VALIDATED_ADDR;
 		qc->state = QUIC_HS_ST_CLIENT_INITIAL;

 		/* This is the original connection ID from the peer server
@@ -1603,7 +1608,7 @@ int quic_conn_release(struct quic_conn *qc)
 	/* Connection released before handshake completion. */
 	if (unlikely(qc->state < QUIC_HS_ST_COMPLETE)) {
-		if (objt_listener(qc->target)) {
+		if (!qc_is_back(qc)) {
 			BUG_ON(__objt_listener(qc->target)->rx.quic_curr_handshake == 0);
 			HA_ATOMIC_DEC(&__objt_listener(qc->target)->rx.quic_curr_handshake);
 		}
@@ -2013,9 +2018,16 @@ void qc_bind_tid_commit(struct quic_conn *qc, struct listener *new_li)
 	/* At this point no connection was accounted for yet on this
 	 * listener so it's OK to just swap the pointer.
 	 */
-	if (new_li && new_li != __objt_listener(qc->target))
+	if (new_li && new_li != __objt_listener(qc->target)) {
 		qc->target = &new_li->obj_type;

+		/* Update GSO conn support based on new listener status. */
+		if (HA_ATOMIC_LOAD(&new_li->flags) & LI_F_UDP_GSO_NOTSUPP)
+			qc->flags |= QUIC_FL_CONN_UDP_GSO_EIO;
+		else
+			qc->flags &= ~QUIC_FL_CONN_UDP_GSO_EIO;
+	}
+
 	/* Rebind the connection FD. */
 	if (qc_test_fd(qc)) {
 		/* Reading is reactivated by the new thread. */

View File

@@ -150,22 +150,22 @@ void quic_tls_compat_keylog_callback(const SSL *ssl, const char *line)
 	if (sizeof(QUIC_OPENSSL_COMPAT_CLIENT_HANDSHAKE) - 1 == n &&
 	    !strncmp(start, QUIC_OPENSSL_COMPAT_CLIENT_HANDSHAKE, n)) {
 		level = ssl_encryption_handshake;
-		write = objt_listener(qc->target) ? 0 : 1;
+		write = !qc_is_back(qc) ? 0 : 1;
 	}
 	else if (sizeof(QUIC_OPENSSL_COMPAT_SERVER_HANDSHAKE) - 1 == n &&
 	         !strncmp(start, QUIC_OPENSSL_COMPAT_SERVER_HANDSHAKE, n)) {
 		level = ssl_encryption_handshake;
-		write = objt_listener(qc->target) ? 1 : 0;
+		write = !qc_is_back(qc) ? 1 : 0;
 	}
 	else if (sizeof(QUIC_OPENSSL_COMPAT_CLIENT_APPLICATION) - 1 == n &&
 	         !strncmp(start, QUIC_OPENSSL_COMPAT_CLIENT_APPLICATION, n)) {
 		level = ssl_encryption_application;
-		write = objt_listener(qc->target) ? 0 : 1;
+		write = !qc_is_back(qc) ? 0 : 1;
 	}
 	else if (sizeof(QUIC_OPENSSL_COMPAT_SERVER_APPLICATION) - 1 == n &&
 	         !strncmp(start, QUIC_OPENSSL_COMPAT_SERVER_APPLICATION, n)) {
 		level = ssl_encryption_application;
-		write = objt_listener(qc->target) ? 1 : 0;
+		write = !qc_is_back(qc) ? 1 : 0;
 	}
 	else
 		goto leave;

View File

@@ -166,7 +166,7 @@ void qc_prep_fast_retrans(struct quic_conn *qc,
 	/* When building a packet from another one, the field which may increase the
 	 * packet size is the packet number. And the maximum increase is 4 bytes.
 	 */
-	if (!quic_peer_validated_addr(qc) && objt_listener(qc->target) &&
+	if (!quic_peer_validated_addr(qc) && !qc_is_back(qc) &&
 	    pkt->len + 4 > quic_may_send_bytes(qc)) {
 		qc->flags |= QUIC_FL_CONN_ANTI_AMPLIFICATION_REACHED;
 		TRACE_PROTO("anti-amplification limit would be reached", QUIC_EV_CONN_SPPKTS, qc, pkt);
@@ -230,7 +230,7 @@ void qc_prep_hdshk_fast_retrans(struct quic_conn *qc,
 	/* When building a packet from another one, the field which may increase the
 	 * packet size is the packet number. And the maximum increase is 4 bytes.
 	 */
-	if (!quic_peer_validated_addr(qc) && objt_listener(qc->target)) {
+	if (!quic_peer_validated_addr(qc) && !qc_is_back(qc)) {
 		size_t dglen = pkt->len + 4;
 		size_t may_send;

View File

@@ -920,7 +920,7 @@ static int qc_parse_pkt_frms(struct quic_conn *qc, struct quic_rx_packet *pkt,
 			break;

 		case QUIC_RX_RET_FRM_DUP:
-			if (objt_listener(qc->target) && qel == qc->iel &&
+			if (!qc_is_back(qc) && qel == qc->iel &&
 			    !(qc->flags & QUIC_FL_CONN_HANDSHAKE_SPEED_UP)) {
 				fast_retrans = 1;
 			}
@@ -936,7 +936,7 @@ static int qc_parse_pkt_frms(struct quic_conn *qc, struct quic_rx_packet *pkt,
 			break;

 		case QUIC_FT_NEW_TOKEN:
-			if (objt_listener(qc->target)) {
+			if (!qc_is_back(qc)) {
 				TRACE_ERROR("reject NEW_TOKEN frame emitted by client",
 				            QUIC_EV_CONN_PRSHPKT, qc);
@@ -1096,7 +1096,7 @@ static int qc_parse_pkt_frms(struct quic_conn *qc, struct quic_rx_packet *pkt,
 			}
 			break;
 		case QUIC_FT_HANDSHAKE_DONE:
-			if (objt_listener(qc->target)) {
+			if (!qc_is_back(qc)) {
 				TRACE_ERROR("non accepted QUIC_FT_HANDSHAKE_DONE frame",
 				            QUIC_EV_CONN_PRSHPKT, qc);
@@ -1186,7 +1186,7 @@ static int qc_parse_pkt_frms(struct quic_conn *qc, struct quic_rx_packet *pkt,
 	 * has successfully parsed a Handshake packet. The Initial encryption must also
 	 * be discarded.
 	 */
-	if (pkt->type == QUIC_PACKET_TYPE_HANDSHAKE && objt_listener(qc->target)) {
+	if (pkt->type == QUIC_PACKET_TYPE_HANDSHAKE && !qc_is_back(qc)) {
 		if (qc->state >= QUIC_HS_ST_SERVER_INITIAL) {
 			if (qc->ipktns && !quic_tls_pktns_is_dcd(qc, qc->ipktns)) {
 				/* Discard the handshake packet number space. */
@@ -1225,7 +1225,7 @@ static inline void qc_handle_spin_bit(struct quic_conn *qc, struct quic_rx_packe
 	    pkt->pn <= largest_pn)
 		return;

-	if (objt_listener(qc->target)) {
+	if (!qc_is_back(qc)) {
 		if (pkt->flags & QUIC_FL_RX_PACKET_SPIN_BIT)
 			qc->flags |= QUIC_FL_CONN_SPIN_BIT;
 		else
@@ -1248,7 +1248,7 @@ static void qc_rm_hp_pkts(struct quic_conn *qc, struct quic_enc_level *el)
 	TRACE_ENTER(QUIC_EV_CONN_ELRMHP, qc);

 	/* A server must not process incoming 1-RTT packets before the handshake is complete. */
-	if (el == qc->ael && objt_listener(qc->target) && qc->state < QUIC_HS_ST_COMPLETE) {
+	if (el == qc->ael && !qc_is_back(qc) && qc->state < QUIC_HS_ST_COMPLETE) {
 		TRACE_PROTO("RX hp not removed (handshake not completed)",
 		            QUIC_EV_CONN_ELRMHP, qc);
 		goto out;

View File

@@ -232,7 +232,7 @@ static int ha_quic_set_encryption_secrets(SSL *ssl, enum ssl_encryption_level_t
 	 * listener and if a token was received. Note that a listener derives only RX
 	 * secrets for this level.
 	 */
-	if (objt_listener(qc->target) && level == ssl_encryption_early_data) {
+	if (!qc_is_back(qc) && level == ssl_encryption_early_data) {
 		if (qc->flags & QUIC_FL_CONN_NO_TOKEN_RCVD) {
 			/* Leave a chance to the address validation to be completed by the
 			 * handshake without starting the mux: one does not want to process
@@ -281,7 +281,7 @@ static int ha_quic_set_encryption_secrets(SSL *ssl, enum ssl_encryption_level_t
 	}

 	/* Set the transport parameters in the TLS stack. */
-	if (level == ssl_encryption_handshake && objt_listener(qc->target) &&
+	if (level == ssl_encryption_handshake && !qc_is_back(qc) &&
 	    !qc_ssl_set_quic_transport_params(qc->xprt_ctx->ssl, qc, ver, 1))
 		goto leave;

@@ -292,7 +292,7 @@ static int ha_quic_set_encryption_secrets(SSL *ssl, enum ssl_encryption_level_t
 		struct quic_tls_kp *nxt_tx = &qc->ku.nxt_tx;

 #if !defined(USE_QUIC_OPENSSL_COMPAT) && !defined(HAVE_OPENSSL_QUIC)
-		if (objt_server(qc->target)) {
+		if (qc_is_back(qc)) {
 			const unsigned char *tp;
 			size_t tplen;

@@ -580,7 +580,6 @@ static int ha_quic_ossl_got_transport_params(SSL *ssl, const unsigned char *para
 {
 	int ret = 0;
 	struct quic_conn *qc = SSL_get_ex_data(ssl, ssl_qc_app_data_index);
-	struct listener *l = objt_listener(qc->target);

 	TRACE_ENTER(QUIC_EV_TRANSP_PARAMS, qc);

@@ -589,7 +588,7 @@ static int ha_quic_ossl_got_transport_params(SSL *ssl, const unsigned char *para
 		           QUIC_EV_TRANSP_PARAMS, qc);
 		ret = 1;
 	}
-	else if (!quic_transport_params_store(qc, !l, params, params + params_len)) {
+	else if (!quic_transport_params_store(qc, qc_is_back(qc), params, params + params_len)) {
 		goto err;
 	}

@@ -956,7 +955,7 @@ static int qc_ssl_provide_quic_data(struct ncbuf *ncbuf,
 	 * provided by the stack. This happens after having received the peer
 	 * handshake level CRYPTO data which are validated by the TLS stack.
 	 */
-	if (objt_listener(qc->target)) {
+	if (!qc_is_back(qc)) {
 		if (__objt_listener(qc->target)->bind_conf->ssl_conf.early_data &&
 		    (!qc->ael || !qc->ael->tls_ctx.rx.secret)) {
 			TRACE_PROTO("SSL handshake in progress",
@@ -970,7 +969,7 @@ static int qc_ssl_provide_quic_data(struct ncbuf *ncbuf,
 #endif

 		/* Check the alpn could be negotiated */
-		if (objt_listener(qc->target)) {
+		if (!qc_is_back(qc)) {
 			if (!qc->app_ops) {
 				TRACE_ERROR("No negotiated ALPN", QUIC_EV_CONN_IO_CB, qc, &state);
 				quic_set_tls_alert(qc, SSL_AD_NO_APPLICATION_PROTOCOL);
@@ -1000,7 +999,7 @@ static int qc_ssl_provide_quic_data(struct ncbuf *ncbuf,
 		}

 		qc->flags |= QUIC_FL_CONN_NEED_POST_HANDSHAKE_FRMS;
-		if (objt_listener(ctx->qc->target)) {
+		if (!qc_is_back(qc)) {
 			struct listener *l = __objt_listener(qc->target);
 			/* I/O callback switch */
 			qc->wait_event.tasklet->process = quic_conn_app_io_cb;
@@ -1245,7 +1244,7 @@ int qc_alloc_ssl_sock_ctx(struct quic_conn *qc, struct connection *conn)
 	ctx->sent_early_data = 0;
 	ctx->qc = qc;

-	if (objt_listener(qc->target)) {
+	if (!qc_is_back(qc)) {
 		struct bind_conf *bc = __objt_listener(qc->target)->bind_conf;

 		if (qc_ssl_sess_init(qc, bc->initial_ctx, &ctx->ssl, NULL, 1) == -1)

View File

@@ -115,8 +115,9 @@ static void quic_trace(enum trace_level level, uint64_t mask, const struct trace
 	if (qc) {
 		const struct quic_tls_ctx *tls_ctx;

-		chunk_appendf(&trace_buf, " : qc@%p idle_timer_task@%p flags=0x%x",
-		              qc, qc->idle_timer_task, qc->flags);
+		chunk_appendf(&trace_buf, " : qc@%p(%c) idle_timer_task@%p flags=0x%x",
+		              qc, (qc->flags & QUIC_FL_CONN_IS_BACK) ? 'B' : 'F',
+		              qc->idle_timer_task, qc->flags);

 		if (mask & QUIC_EV_CONN_NEW) {
 			const int *ssl_err = a2;

View File

@@ -307,11 +307,7 @@ static int qc_send_ppkts(struct buffer *buf, struct ssl_sock_ctx *ctx)
 		/* If datagram bigger than MTU, several ones were encoded for GSO usage. */
 		if (dglen > qc->path->mtu) {
-			/* TODO: note that at this time for connection to backends this
-			 * part is not run because no more than an MTU has been prepared for
-			 * such connections (dglen <= qc->path->mtu). So, here l is not NULL.
-			 */
-			if (likely(!(HA_ATOMIC_LOAD(&l->flags) & LI_F_UDP_GSO_NOTSUPP))) {
+			if (likely(!(qc->flags & QUIC_FL_CONN_UDP_GSO_EIO))) {
 				TRACE_PROTO("send multiple datagrams with GSO", QUIC_EV_CONN_SPPKTS, qc);
 				gso = qc->path->mtu;
 			}
@@ -333,6 +329,9 @@ static int qc_send_ppkts(struct buffer *buf, struct ssl_sock_ctx *ctx)
 		int ret = qc_snd_buf(qc, &tmpbuf, tmpbuf.data, 0, gso);
 		if (ret < 0) {
 			if (gso && ret == -EIO) {
+				/* GSO must not be used if already disabled. */
+				BUG_ON(qc->flags & QUIC_FL_CONN_UDP_GSO_EIO);
+
 				/* TODO: note that at this time for connection to backends this
 				 * part is not run because no more than an MTU has been
 				 * prepared for such connections (l is not NULL).
@@ -342,6 +341,7 @@ static int qc_send_ppkts(struct buffer *buf, struct ssl_sock_ctx *ctx)
 				 */
 				TRACE_ERROR("mark listener UDP GSO as unsupported", QUIC_EV_CONN_SPPKTS, qc, first_pkt);
 				HA_ATOMIC_OR(&l->flags, LI_F_UDP_GSO_NOTSUPP);
+				qc->flags |= QUIC_FL_CONN_UDP_GSO_EIO;
 				continue;
 			}
@@ -586,7 +586,6 @@ static int qc_prep_pkts(struct quic_conn *qc, struct buffer *buf,
 	int dgram_cnt = 0;
 	/* Restrict GSO emission to comply with sendmsg limitation. See QUIC_MAX_GSO_DGRAMS for more details. */
 	uchar gso_dgram_cnt = 0;
-	struct listener *l = objt_listener(qc->target);

 	TRACE_ENTER(QUIC_EV_CONN_IO_CB, qc);

 	/* Currently qc_prep_pkts() does not handle buffer wrapping so the
@@ -650,7 +649,7 @@ static int qc_prep_pkts(struct quic_conn *qc, struct buffer *buf,
 		 * to stay under MTU limit.
 		 */
 		if (!dglen) {
-			if (!quic_peer_validated_addr(qc) && objt_listener(qc->target))
+			if (!quic_peer_validated_addr(qc) && !qc_is_back(qc))
 				end = pos + QUIC_MIN(qc->path->mtu, quic_may_send_bytes(qc));
 			else
 				end = pos + qc->path->mtu;
@@ -672,7 +671,7 @@ static int qc_prep_pkts(struct quic_conn *qc, struct buffer *buf,
 		 * datagrams carrying ack-eliciting Initial packets to at least the
 		 * smallest allowed maximum datagram size of 1200 bytes.
 		 */
-		if (qel == qc->iel && (!l || !LIST_ISEMPTY(frms) || probe)) {
+		if (qel == qc->iel && (qc_is_back(qc) || !LIST_ISEMPTY(frms) || probe)) {
 			/* Ensure that no Initial packets are sent into too small datagrams */
 			if (end - pos < QUIC_INITIAL_PACKET_MINLEN) {
 				TRACE_PROTO("No more enough room to build an Initial packet",
@@ -704,8 +703,8 @@ static int qc_prep_pkts(struct quic_conn *qc, struct buffer *buf,
 		cur_pkt = qc_build_pkt(&pos, end, qel, tls_ctx, frms,
 		                       qc, ver, dglen, pkt_type, must_ack,
 		                       padding &&
-		                       ((!l && (!next_qel || LIST_ISEMPTY(next_qel->send_frms))) ||
-		                        (l && !next_qel && (!probe || !LIST_ISEMPTY(frms)))),
+		                       ((qc_is_back(qc) && (!next_qel || LIST_ISEMPTY(next_qel->send_frms))) ||
+		                        (!qc_is_back(qc) && !next_qel && (!probe || !LIST_ISEMPTY(frms)))),
 		                       probe, cc, &err);
 		if (!cur_pkt) {
 			switch (err) {
@@ -788,10 +787,10 @@ static int qc_prep_pkts(struct quic_conn *qc, struct buffer *buf,
 			prv_pkt = cur_pkt;
 		}
 		else if (!(quic_tune.options & QUIC_TUNE_NO_UDP_GSO) &&
+		         !(qc->flags & QUIC_FL_CONN_UDP_GSO_EIO) &&
 		         dglen == qc->path->mtu &&
 		         (char *)end < b_wrap(buf) &&
-		         ++gso_dgram_cnt < QUIC_MAX_GSO_DGRAMS &&
-		         l && !(HA_ATOMIC_LOAD(&l->flags) & LI_F_UDP_GSO_NOTSUPP)) {
+		         ++gso_dgram_cnt < QUIC_MAX_GSO_DGRAMS) {
 			/* TODO: note that for backends GSO is not used. No more than
 			 * an MTU is prepared.
 			 */
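For context on what the EIO fallback is about, here is a standalone sketch of Linux UDP GSO as qc_snd_buf() is assumed to use it: one large sendmsg() on a connected UDP socket carrying a UDP_SEGMENT cmsg, with EIO latched into a flag so later sends fall back to one datagram per call. This is my illustration, not the patch's code:

#define _GNU_SOURCE
#include <errno.h>
#include <netinet/in.h>
#include <netinet/udp.h>   /* UDP_SEGMENT, Linux >= 4.18 assumed */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Try to hand the kernel one large buffer segmented every <mtu> bytes.
 * On EIO (no driver support), latch <*gso_broken> so that the caller
 * retries datagram by datagram, mirroring QUIC_FL_CONN_UDP_GSO_EIO.
 */
static ssize_t send_with_gso(int fd, const void *buf, size_t len,
                             uint16_t mtu, int *gso_broken)
{
	char cbuf[CMSG_SPACE(sizeof(uint16_t))] = { 0 };
	struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = cbuf, .msg_controllen = sizeof(cbuf),
	};
	struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
	ssize_t ret;

	cm->cmsg_level = IPPROTO_UDP;    /* same value as SOL_UDP */
	cm->cmsg_type  = UDP_SEGMENT;    /* split <buf> every <mtu> bytes */
	cm->cmsg_len   = CMSG_LEN(sizeof(uint16_t));
	memcpy(CMSG_DATA(cm), &mtu, sizeof(uint16_t));

	ret = sendmsg(fd, &msg, 0);
	if (ret < 0 && errno == EIO)
		*gso_broken = 1;         /* disable GSO on this path from now on */
	return ret;
}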

View File

@@ -1983,7 +1983,7 @@ int sample_conv_var2smp_str(const struct arg *arg, struct sample *smp)
 	}
 }

-static int sample_conv_be2dec_check(struct arg *args, struct sample_conv *conv,
+static int sample_conv_2dec_check(struct arg *args, struct sample_conv *conv,
                                     const char *file, int line, char **err)
 {
 	if (args[1].data.sint <= 0 || args[1].data.sint > sizeof(unsigned long long)) {
@@ -1999,13 +1999,13 @@ static int sample_conv_be2dec_check(struct arg *args, struct sample_conv *conv,
 	return 1;
 }

-/* Converts big-endian binary input sample to a string containing an unsigned
+/* Converts big-endian/little-endian binary input sample to a string containing an unsigned
  * integer number per <chunk_size> input bytes separated with <separator>.
  * Optional <truncate> flag indicates if input is truncated at <chunk_size>
  * boundaries.
- * Arguments: separator (string), chunk_size (integer), truncate (0,1)
+ * Arguments: separator (string), chunk_size (integer), truncate (0,1), big endian (0,1)
  */
-static int sample_conv_be2dec(const struct arg *args, struct sample *smp, void *private)
+static int sample_conv_2dec(const struct arg *args, struct sample *smp, void *private, int be)
 {
 	struct buffer *trash = get_trash_chunk();
 	const int last = args[2].data.sint ? smp->data.u.str.data - args[1].data.sint + 1 : smp->data.u.str.data;
@@ -2029,8 +2029,12 @@ static int sample_conv_be2dec(const struct arg *args, struct sample *smp, void *
 		max_size -= args[0].data.str.data;

 		/* Add integer */
-		for (number = 0, i = 0; i < args[1].data.sint && ptr < smp->data.u.str.data; i++)
-			number = (number << 8) + (unsigned char)smp->data.u.str.area[ptr++];
+		for (number = 0, i = 0; i < args[1].data.sint && ptr < smp->data.u.str.data; i++) {
+			if (be)
+				number = (number << 8) + (unsigned char)smp->data.u.str.area[ptr++];
+			else
+				number |= (unsigned long long)(unsigned char)smp->data.u.str.area[ptr++] << (i * 8);
+		}

 		pos = ulltoa(number, trash->area + trash->data, trash->size - trash->data);
 		if (pos)
@@ -2047,6 +2051,28 @@ static int sample_conv_be2dec(const struct arg *args, struct sample *smp, void *
 	return 1;
 }

+/* Converts big-endian binary input sample to a string containing an unsigned
+ * integer number per <chunk_size> input bytes separated with <separator>.
+ * Optional <truncate> flag indicates if input is truncated at <chunk_size>
+ * boundaries.
+ * Arguments: separator (string), chunk_size (integer), truncate (0,1)
+ */
+static int sample_conv_be2dec(const struct arg *args, struct sample *smp, void *private)
+{
+	return sample_conv_2dec(args, smp, private, 1);
+}
+
+/* Converts little-endian binary input sample to a string containing an unsigned
+ * integer number per <chunk_size> input bytes separated with <separator>.
+ * Optional <truncate> flag indicates if input is truncated at <chunk_size>
+ * boundaries.
+ * Arguments: separator (string), chunk_size (integer), truncate (0,1)
+ */
+static int sample_conv_le2dec(const struct arg *args, struct sample *smp, void *private)
+{
+	return sample_conv_2dec(args, smp, private, 0);
+}
+
 static int sample_conv_be2hex_check(struct arg *args, struct sample_conv *conv,
                                     const char *file, int line, char **err)
 {
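A worked example of the two byte orders for chunk_size = 3 on the input bytes 01 02 03: be2dec yields 0x010203 = 66051 while le2dec yields 0x030201 = 197121. The loop below mirrors the converter's arithmetic:

#include <stdio.h>

int main(void)
{
	const unsigned char in[3] = { 0x01, 0x02, 0x03 };
	unsigned long long be = 0, le = 0;
	int i;

	for (i = 0; i < 3; i++) {
		be = (be << 8) + in[i];                         /* be2dec */
		le |= (unsigned long long)in[i] << (i * 8);     /* le2dec */
	}
	printf("be2dec: %llu, le2dec: %llu\n", be, le);  /* 66051, 197121 */
	return 0;
}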
@@ -5415,7 +5441,8 @@ static struct sample_conv_kw_list sample_conv_kws = {ILH, {
 	{ "upper",  sample_conv_str2upper, 0,            NULL, SMP_T_STR,  SMP_T_STR  },
 	{ "lower",  sample_conv_str2lower, 0,            NULL, SMP_T_STR,  SMP_T_STR  },
 	{ "length", sample_conv_length,    0,            NULL, SMP_T_STR,  SMP_T_SINT },
-	{ "be2dec", sample_conv_be2dec,    ARG3(1,STR,SINT,SINT), sample_conv_be2dec_check, SMP_T_BIN, SMP_T_STR },
+	{ "be2dec", sample_conv_be2dec,    ARG3(1,STR,SINT,SINT), sample_conv_2dec_check, SMP_T_BIN, SMP_T_STR },
+	{ "le2dec", sample_conv_le2dec,    ARG3(1,STR,SINT,SINT), sample_conv_2dec_check, SMP_T_BIN, SMP_T_STR },
 	{ "be2hex", sample_conv_be2hex,    ARG3(1,STR,SINT,SINT), sample_conv_be2hex_check, SMP_T_BIN, SMP_T_STR },
 	{ "hex",    sample_conv_bin2hex,   0,            NULL, SMP_T_BIN,  SMP_T_STR  },
 	{ "hex2i",  sample_conv_hex2int,   0,            NULL, SMP_T_STR,  SMP_T_SINT },

View File

@@ -3450,7 +3450,7 @@ int srv_init(struct server *srv)
 	if (err_code & ERR_CODE)
 		goto out;

-	if (!counters_be_shared_init(&srv->counters.shared, &srv->guid)) {
+	if (!counters_be_shared_prepare(&srv->counters.shared, &srv->guid)) {
 		ha_alert("memory error while setting up shared counters for %s/%s server\n", srv->proxy->id, srv->id);
 		err_code |= ERR_ALERT | ERR_FATAL;
 		goto out;

View File

@@ -88,9 +88,13 @@ struct connection *sock_accept_conn(struct listener *l, int *status)
 	 * the legacy accept() + fcntl().
 	 */
 	if (unlikely(accept4_broken) ||
+	    /* Albeit it appears it does not make sense to carry on with accept
+	     * if we encounter EPERM, some old embedded ARM Linux 2.6.x sets
+	     * errno to EPERM instead of ENOSYS.
+	     */
 	    (((cfd = accept4(l->rx.fd, (struct sockaddr*)addr, &laddr,
 	                     SOCK_NONBLOCK | (master ? SOCK_CLOEXEC : 0))) == -1) &&
-	     (errno == ENOSYS || errno == EINVAL || errno == EBADF) &&
+	     (errno == ENOSYS || errno == EINVAL || errno == EBADF || errno == EPERM) &&
 	     ((accept4_broken = 1))))
 #endif
 	{
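Outside the #ifdef maze, the latch logic reads as follows; a minimal sketch assuming Linux (accept4() needs _GNU_SOURCE), with the same errno set as the hunk above:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>

/* Prefer accept4() and latch any "not implemented"-style errno,
 * including the bogus EPERM seen on some old embedded kernels, so
 * that all later calls go straight to accept() + fcntl().
 */
static int accept_nonblock(int lfd, struct sockaddr *addr, socklen_t *len)
{
	static int accept4_broken;
	int fd;

	if (!accept4_broken) {
		fd = accept4(lfd, addr, len, SOCK_NONBLOCK);
		if (fd >= 0)
			return fd;
		if (errno != ENOSYS && errno != EINVAL &&
		    errno != EBADF && errno != EPERM)
			return -1;      /* genuine failure */
		accept4_broken = 1;     /* fall through to the legacy path */
	}

	fd = accept(lfd, addr, len);
	if (fd >= 0)
		fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
	return fd;
}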

View File

@@ -491,11 +491,11 @@ int is_inet6_reachable(void)
 	int fd;

 	if (tick_isset(last_check) &&
-	    !tick_is_expired(tick_add(last_check, INET6_CONNECTIVITY_CACHE_TIME), HA_ATOMIC_LOAD(&global_now_ms)))
+	    !tick_is_expired(tick_add(last_check, INET6_CONNECTIVITY_CACHE_TIME), HA_ATOMIC_LOAD(global_now_ms)))
 		return HA_ATOMIC_LOAD(&sock_inet6_seems_reachable);

 	/* update the test date to ensure nobody else does it in parallel */
-	HA_ATOMIC_STORE(&last_inet6_check, HA_ATOMIC_LOAD(&global_now_ms));
+	HA_ATOMIC_STORE(&last_inet6_check, HA_ATOMIC_LOAD(global_now_ms));

 	fd = socket(AF_INET6, SOCK_DGRAM, 0);
 	if (fd >= 0) {

View File

@@ -5104,6 +5104,7 @@ static int ssl_sock_init(struct connection *conn, void **xprt_ctx)
 	ctx->xprt_st = 0;
 	ctx->xprt_ctx = NULL;
 	ctx->error_code = 0;
+	ctx->can_send_early_data = 1;

 	next_sslconn = increment_sslconn();
 	if (!next_sslconn) {
@@ -5458,6 +5459,7 @@ static int ssl_sock_handshake(struct connection *conn, unsigned int flag)
 			/* read some data: consider handshake completed */
 			goto reneg_ok;
 		}
+		ctx->can_send_early_data = 0;
 		ret = SSL_do_handshake(ctx->ssl);
 check_error:
 		if (ret != 1) {
@@ -5928,7 +5930,12 @@ static size_t ssl_sock_to_buf(struct connection *conn, void *xprt_ctx, struct bu
 	}
 #endif

-	if (conn->flags & (CO_FL_WAIT_XPRT | CO_FL_SSL_WAIT_HS)) {
+	/*
+	 * We have to check can_send_early_data here, as the handshake flags
+	 * may have been removed in case we want to try to send early data.
+	 */
+	if (ctx->can_send_early_data ||
+	    (conn->flags & (CO_FL_WAIT_XPRT | CO_FL_SSL_WAIT_HS))) {
 		/* a handshake was requested */
 		TRACE_LEAVE(SSL_EV_CONN_RECV, conn);
 		return 0;
@@ -6101,7 +6108,7 @@ static size_t ssl_sock_from_buf(struct connection *conn, void *xprt_ctx, const s
 	ctx->xprt_st &= ~SSL_SOCK_SEND_MORE;

 #ifdef SSL_READ_EARLY_DATA_SUCCESS
-	if (!SSL_is_init_finished(ctx->ssl) && conn_is_back(conn)) {
+	if (ctx->can_send_early_data && conn_is_back(conn)) {
 		unsigned int max_early;

 		if (objt_listener(conn->target))
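A condensed sketch of the new gating, assuming OpenSSL's SSL_write_early_data()/SSL_do_handshake() API: the latch starts at 1 when the context is created and is cleared on the first handshake attempt, which is exactly when early data stops being possible. The helper names are mine:

#include <openssl/ssl.h>

/* Early data can only be written before any handshake step has run,
 * so the caller keeps a latch that is cleared just before the first
 * SSL_do_handshake() call.
 */
static int try_write_early(SSL *ssl, int *can_send_early_data,
                           const void *buf, size_t len, size_t *written)
{
	if (!*can_send_early_data)
		return 0;                /* handshake already started */

	if (SSL_write_early_data(ssl, buf, len, written) != 1)
		return 0;                /* fall back to the normal path */
	return 1;
}

static int run_handshake(SSL *ssl, int *can_send_early_data)
{
	*can_send_early_data = 0;        /* point of no return */
	return SSL_do_handshake(ssl);
}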

View File

@@ -3157,9 +3157,9 @@ static enum act_parse_ret parse_add_gpc(const char **args, int *arg, struct prox
 			return ACT_RET_PRS_ERR;
 		}

-		if (rule->arg.gpc.sc >= MAX_SESS_STKCTR) {
-			memprintf(err, "invalid stick table track ID '%s' for '%s'. The max allowed ID is %d",
-			          cmd_name, args[*arg-1], MAX_SESS_STKCTR-1);
+		if (rule->arg.gpc.sc >= global.tune.nb_stk_ctr) {
+			memprintf(err, "invalid stick table track ID '%s'. The max allowed ID is %d (tune.stick-counters)",
+			          args[*arg-1], global.tune.nb_stk_ctr - 1);
 			return ACT_RET_PRS_ERR;
 		}
 	}