This patch introduces a new setting: tune.h2.fe.max-total-streams. It
sets the HTTP/2 maximum number of total streams processed per incoming
connection. Once this limit is reached, HAProxy will send a graceful GOAWAY
frame informing the client that it will close the connection after all
pending streams have been closed. In practice, clients tend to close as fast
as possible when receiving this, and to establish a new connection for their
next requests. Doing this is sometimes useful and desired in situations where
clients stay connected for a very long time and cause some imbalance inside a
farm. For example, in some highly dynamic environments, it is possible that
new load balancers are instantiated on the fly to adapt to a load increase,
and that once the load goes down they should be stopped without breaking
established connections. By setting a limit here, the connections will have
a limited lifetime and will be frequently renewed, with some possibly being
established to other nodes, so that existing resources are quickly released.
The default value is zero, which enforces no limit beyond those implied by
the protocol (2^30 ~= 1.07 billion). Values around 1000 were found to
already cause frequent enough connection renewal without causing any
perceptible latency to most clients. One notable exception here is h2load
which reports errors for all requests that were expected to be sent over
a given connection after it receives a GOAWAY. This is an already known
limitation: https://github.com/nghttp2/nghttp2/issues/981
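For example, such a limit may be set from the global section (the value 1000
below is just the illustrative order of magnitude mentioned above):

  global
    tune.h2.fe.max-total-streams 1000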
The patch was made in two parts inside h2_frt_handle_headers():
- the first one, at the end of the function, verifies whether the
  configured limit was reached and whether a GOAWAY needs to be emitted;
- the second, just before decoding the stream frame, verifies whether
  a previously advertised limit was ignored by the client, and closes
  the connection if this happens. Indeed, one reason for a connection
  to stay alive for too long definitely comes from a stupid bot that
  periodically fetches the same resource, scans lots of URLs or tries
  to brute-force something. Such clients are more likely to just ignore
  the last stream ID advertised in the GOAWAY than a regular browser, or
  a well-behaving client such as curl, which respects it. So in order
  to make sure we can close the connection, we need to enforce the
  advertised limit.
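For illustration only, the two parts could be sketched as follows (a rough
sketch assuming the mux's internal context; the helper and field names used
here are hypothetical, not necessarily those of the patch):

  /* part 1, at the end of h2_frt_handle_headers(): limit reached,
   * send a graceful GOAWAY advertising the last acceptable stream ID */
  if (limit && h2c->stream_cnt >= limit && h2c->last_sid < 0)
      h2c_send_goaway(h2c);               /* hypothetical helper */

  /* part 2, before decoding the stream frame: the client ignored the
   * advertised limit, close the connection to enforce it */
  if (h2c->last_sid >= 0 && id > h2c->last_sid)
      h2c_close(h2c);                     /* hypothetical helper */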
Note that a regular client will not face a problem with that because in
the worst case it will have max_concurrent_streams in flight and this
limit is taken into account when calculating the advertised last
acceptable stream ID.
Just a note: it may also be possible to move the first part above to
h2s_frt_stream_new() instead so that it's not processed for trailers,
though it doesn't seem to be more interesting, notably because that
function has two return points.
This is something that may be backported to 2.9 and 2.8 to offer more
control to those dealing with dynamic infrastructures, especially since
for now we cannot force a connection to be cleanly closed using rules
(e.g. github issues #946, #2146).
This test was already performed when a new message is queued into the
sending queue, but not when the last applet is released, in
spoe_release_appctx(). It is a quite old bug, introduced by commit
6f1296b5c7 ("BUG/MEDIUM: spoe: Create a SPOE applet if necessary when the
last one is released").
Because of this bug, new SPOE applets may be created and quickly released
in a loop because no server is up, as long as there is at least one message
in the sending queue, consuming all the CPU. It is quite visible if the
processing timeout is high.
To fix the bug, the conditions to create or not a SPOE applet are now
centralized in spoe_create_appctx(). The tests on the max connections per
second and on the number of active servers are moved into this function.
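In other words (a simplified sketch, not the literal patch; the helper names
are hypothetical):

  /* sketch: spoe_create_appctx() now performs all the checks itself */
  if (!agent_has_active_servers(agent) ||     /* hypothetical */
      agent_maxconn_per_sec_reached(agent))   /* hypothetical */
      return NULL;  /* no applet is created, no busy loop */
  /* ... otherwise actually create the applet ... */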
This patch must be backported to all stable versions.
During zero-copy forwarding, if the consumer side reports that it is
blocked, it means it is blocked on send. At the stream-connector level, the
event must be reported to be sure to set/update the fsb date. Otherwise,
write timeouts cannot be properly reported. If this happens when no other
timeout is armed, this freezes the stream.
This patch must be backported to 2.9.
The commit b9c87f8082 ("BUG/MEDIUM: stconn/stream: Forward shutdown on write
timeout") introduced a regression. In sc_cond_forward_shut(), the write
timeout is considered too early when deciding to forward the shutdown. In
fact, it is always considered, even if the shutdown is not forwardable yet,
which is of course unexpected. It is especially an issue when a write
timeout is encountered on the server side during connection establishment.
In this case, if the shutdown is forwarded too early on the client side,
the connection is closed before the 503 error is sent.
So the write timeout must indeed be considered to forward the shutdown to
the underlying layer, but only if the shutdown is forwardable. Otherwise, we
should do nothing.
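A simplified sketch of the fixed logic (hypothetical predicate names, for
illustration only):

  /* in sc_cond_forward_shut(): the write timeout may only be
   * considered once the shutdown is forwardable */
  if (!shut_is_forwardable(sc))         /* hypothetical */
      return 0;                         /* do nothing for now */
  if (write_timeout_expired(sc))        /* hypothetical */
      return 1;                         /* forward the shutdown */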
This patch should fix the issue #2404. It must be backported as far as 2.2.
Improve the h3_resp_trailers_send() return value to be consistent with
h3_resp_data_send(). In particular, if the QCS Tx buffer does not have
enough space for trailer encoding, 0 is returned instead of an error value,
with QC_SF_BLK_MROOM set.
This unifies the HTTP/3 headers/data/trailers encoding functions. Negative
error codes are limited to fatal errors which should cause a connection
closure. Not enough output buffer space is only a transient condition
which is reflected by the QC_SF_BLK_MROOM flag.
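The resulting convention, common to the three h3_resp_*_send() functions,
can be summarized as follows:

  /* unified return convention (sketch):
   *   < 0 : fatal error, the connection must be closed
   *   = 0 : not enough room, QC_SF_BLK_MROOM is set, retry later
   *   > 0 : number of bytes consumed from the HTX message
   */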
h3_resp_data_send() is used to transcode HTX data into H3 data frames.
If the QCS Tx buffer is not aligned when first invoked, two separate frames
may be built: first until the buffer end, then with the remaining space in
front.
If buffer space is not enough for at least the H3 frame header, -1 is
returned with the flag QC_SF_BLK_MROOM set to wait for more room. An
issue arises if this occurs for the second frame: -1 is returned even
though HTX data were properly transcoded and removed in the first step.
This causes the snd_buf callback to return an incorrect value to the stream
layer, which in the end will corrupt the channel output buffer.
To fix this, stop considering that not enough remaining space is an
error case. Instead, return 0 if this is encountered for the first frame,
or the removed HTX block size for the second one. As QC_SF_BLK_MROOM is
set, this will correctly interrupt H3 encoding. The err label is thus
properly limited to fatal errors which should cause a connection closure.
A new BUG_ON() has been added which should prevent similar issues in the
future.
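A simplified sketch of the fix (variable names approximate):

  /* in h3_resp_data_send(): lack of room is not an error anymore */
  if (b_room(res) < frame_hdr_len) {
      qcs->flags |= QC_SF_BLK_MROOM;
      /* 0 for the first frame, or the HTX bytes already consumed by
       * the first frame when this happens on the second one */
      return total;
  }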
This issue was detected using the following client:
$ ngtcp2-client --no-quic-dump --no-http-dump --exit-on-all-streams-close \
127.0.0.1 20443 -n2 "http://127.0.0.1:20443/?s=50k"
This triggers the following CHECK_IF statement. Note that it may be
necessary to disable fast forwarding to enforce snd_buf usage.
Thread 1 "haproxy" received signal SIGILL, Illegal instruction.
0x00005555558bc48a in co_data (c=0x5555561ed428) at include/haproxy/channel.h:130
130 CHECK_IF_HOT(c->output > c_data(c));
[ ## gdb ## ] bt
#0 0x00005555558bc48a in co_data (c=0x5555561ed428) at include/haproxy/channel.h:130
#1 0x00005555558c1d69 in sc_conn_send (sc=0x5555561f92d0) at src/stconn.c:1637
#2 0x00005555558c2683 in sc_conn_io_cb (t=0x5555561f7f10, ctx=0x5555561f92d0, state=32832) at src/stconn.c:1824
#3 0x000055555590c48f in run_tasks_from_lists (budgets=0x7fffffffdaa0) at src/task.c:596
#4 0x000055555590cf88 in process_runnable_tasks () at src/task.c:876
#5 0x00005555558aae3b in run_poll_loop () at src/haproxy.c:3049
#6 0x00005555558ab57e in run_thread_poll_loop (data=0x555555d9fa00 <ha_thread_info>) at src/haproxy.c:3251
#7 0x00005555558ad053 in main (argc=6, argv=0x7fffffffddd8) at src/haproxy.c:3948
In case CHECK_IF statements are not activated, this may cause crashes or
incorrect transfers.
This was introduced by the following commit:
2144d24186
BUG/MINOR: h3: close connection on sending alloc errors
This must be backported wherever the above patch is.
Such "#ifdef USE_QUIC" prepocessor statements are used by QUIC C header
to avoid inclusion of QUIC headers when the QUIC support is not enabled
(by USE_QUIC make variable). Furthermore, this allows inclusions of QUIC
header from C file without having to protect them with others "#ifdef USE_QUIC"
statements as follows:
#ifdef USE_QUIC
#include <a QUIC header>
#include <another one QUIC header>
#endif /* USE_QUIC */
So, here if this quic_ssl.h header was included by a C file, and compiled without
QUIC support, this will lead to build errrors as follows:
In file included from <a C file...>:
include/haproxy/quic_ssl.h:35:35: warning: ‘enum ssl_encryption_level_t’
declared inside parameter list will not be visible outside of this
definition or declaration
Should be backported to 2.9 to avoid such build issues in the future.
Remove some QUIC member definitions from the server structure, as the
haproxy QUIC stack does not support the server part (QUIC client) at all
at this time. Also remove the statements related to their initialization.
This patch should be backported as far as 2.6 to save memory.
Previous patch fixed a regression which caused some config with
attach-srv to be rejected if the rule was declared before the target
server itself. To better detect this kind of error, mix the declaration
order in the corresponding regtest.
Since the below commit, server_find_by_name() now searches using the
'used_server_id' proxy backend tree:
4bcfe30414
OPTIM: server: eb lookup for server_find_by_name()
This introduces a regression if server_find_by_name() is used via
check_config_validity() during post-parsing. Indeed, the used_server_id
tree is populated at the same stage, so it's possible to not find an
existing server. This can cause incorrect rejection of a previously valid
configuration file.
To fix this, servers are now inserted in used_server_id tree during
parsing via parse_server(). This guarantees that server instances can be
retrieved during post parsing.
A known feature which uses server_find_by_name() during post-parsing is
the attach-srv tcp-rule used for reverse HTTP. Prior to the current fix, a
config was wrongly rejected if the rule was declared before the server
line.
This should not be backported unless the mentioned commit is.
The "show dev" CLI command is still missing useful elements such as the
build options, SSL version etc. Let's just add the build features and
the build options there so that it's possible to collect all of this
from a running process without having to start the executable with -vv.
This is still dumped all at once from the parsing function since the
output is small. If it were to grow, this would possibly require to be
reworked to support a context.
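For example, assuming the stats socket is bound at /var/run/haproxy.sock,
the new information can be collected with:

  $ echo "show dev" | socat stdio /var/run/haproxy.sock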
It might be helpful to backport this to 2.9 since it can help narrow
down certain issues.
The new function hap_get_next_build_opt() will iterate over the list of
build options. This will be used for debugging, so that the build options
can be retrieved from the CLI.
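A possible usage sketch (the iterator signature shown here is assumed, not
guaranteed to match the final API):

  const char *opt = NULL;

  while ((opt = hap_get_next_build_opt(opt)))  /* signature assumed */
      printf("  %s\n", opt);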
A new flag RX_F_PASS_PKTINFO is now available, whose purpose is to mark
that the destination address is about to be retrieved on some listeners.
The address can be retrieved from the first received datagram; this relies
on IP_PKTINFO, IP_RECVDSTADDR and IPV6_RECVPKTINFO support.
snr_set_srv_down() (formerly known as snr_update_srv_status()) is
still too ambiguous because it's not clear whether we will be putting
the server under maintenance or not. This is mainly due to the fact that
the function behaves differently if has_no_ip is set or not.
By reviewing the function callers, it has now become clear that
snr_resolution_cb() is always calling the function with a valid resolution
so we only want to put the server under maintenance if we don't have a
valid IP address. On the other hand, snr_resolution_error_cb() always
calls the function on error, with either no resolution (for SRV requests)
or with a failing resolution (all cases except RSLV_STATUS_VALID), so in
this case we decide whether to put the server under maintenance case by
case (ie: expired? timeout?).
As a result, let's simplify snr_set_srv_down() so that it is only called
when the caller really thinks that the server should be put under
maintenance, which means always for snr_resolution_error_cb(), and only
if the resolution didn't yield a usable IP for snr_resolution_cb().
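The resulting call sites can be summarized as follows (sketch, the
condition name is hypothetical):

  /* snr_resolution_error_cb(): resolution failed, always an error */
  snr_set_srv_down(s);

  /* snr_resolution_cb(): only when no usable IP was obtained */
  if (!has_usable_ip)
      snr_set_srv_down(s);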
RSLV_UPD_CNAME and RSLV_UPD_NAME_ERROR flags have now become useless since
3cf7f987 ("MINOR: dns: proper domain name validation when receiving DNS
response") as they are never set, but we forgot to remove them.
RSLV_UPD_OBSOLETE_IP was introduced with commit a8c6db8d2 ("MINOR: dns:
Cache previous DNS answers.") but the commit didn't make any use of it,
and today the flag is still unused. Since we have no valid use for it,
better remove it to prevent confusion.
A leftover check was left by the recent patch series about server
addr:svc_port propagation: a check on (msg) being set was performed
in srv_update_addr_port(), but msg is always set, so the check is not
needed and confuses Coverity (see GH #2399).
In table_process_entry(), stktable_data_ptr() result is dereferenced
without checking if it's NULL first, which may happen when bad inputs
are provided to the function.
However, data_type and ts arguments were already checked prior to calling
the function, so we know for sure that stktable_data_ptr() will never
return NULL in this case.
Yet some static code analyzers such as Coverity are confused because
they think that the result might possibly be NULL
(see GH #2398).
To make it explicit that we always provide good inputs and expect valid
result, let's switch to the __stktable_data_ptr() unsafe function.
This reverts commit 18f2ccd244.
The issues previously found with QUIC fast-forwarding were resolved (see
github issue #2372). Reenable it by default. If any issue arises, it can be
disabled using the global statement:
tune.quic.zero-copy-fwd-send off
This can be backported to 2.9, but only after a sensible period of
observation.
If the QCS Tx buffer cannot be allocated in the nego_ff callback, disable
fast-forward for this connection and return immediately. If snd_buf is
later finally used but still no buffer can be allocated, the connection
will be closed on error.
This should fix the Coverity report from github issue #2390.
This should be backported up to 2.9.
When encoding new HTTP/3 frames, the QCS Tx buffer must be allocated if
currently NULL. Previously, allocation failure was not properly checked,
leaving the connection in an unspecified state, or worse, risking a
crash.
Fix this by setting <h3c.err> to H3_INTERNAL_ERROR each time the
allocation fails. This will stop sending and close the connection. In
the future, it may be better to put the connection on pause waiting for
allocation to succeed but this is too complicated to implement for now
in a reliable way.
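A sketch of the added check (the buffer retrieval function is the one
mentioned in the backport note below; surrounding details are assumed):

  /* before encoding a new H3 frame */
  if (!(res = qcc_get_stream_txbuf(qcs))) {
      h3c->err = H3_INTERNAL_ERROR;  /* stop sending, close the connection */
      return -1;
  }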
Along with the current change, the return values of all HTX parsing
functions (h3_resp_*_send) were set to a negative value in case of error.
A new BUG_ON() in h3_snd_buf() ensures that if such a value is returned,
either a connection error is registered (via <h3c.err>) or the buffer is
temporarily full (flag QC_SF_BLK_MROOM).
This should fix github issue #2389.
This should be backported up to 2.6. Note that qcc_get_stream_txbuf()
does not exist in 2.9 and below. mux_get_buf() is its equivalent. An
explicit check b_is_null(&qcs.tx.buf) should be used there.
When parsing an HTX response, if too many headers are present, stop
sending and close the connection with error code H3_INTERNAL_ERROR.
Previously, no error was reported despite the interruption of header
parsing, which caused an infinite loop. However, this is considered
minor as it happens on the response path from the backend side.
This should be backported up to 2.6.
It relies on previous commit
"MINOR: h3: check connection error during sending".
If an error occurs during HTX to H3 encoding, h3_snd_buf() should be
interrupted. This commit adds this possibility by checking the <h3c.err>
member value. If non-null, the sending loop is stopped and an error is
reported using qcc_set_error().
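A sketch of the check added to the h3_snd_buf() loop (the exact
qcc_set_error() arguments are assumed):

  if (h3c->err) {
      qcc_set_error(qcs->qcc, h3c->err, 1);  /* arguments assumed */
      break;  /* interrupt the sending loop */
  }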
This commit does not change any behavior for now, as <h3c.err> is never
set during sending. However, this will change in future commits, most
notably to reject too many headers or handle buffer allocation failure.
As such, this commit should be backported along with the following fixes.
Note that in 2.6 qcc_set_error() does not exist and must be replaced by
qcc_emit_cc_app().
This bug impacts only the QUIC OpenSSL compatibility module (USE_QUIC_OPENSSL_COMPAT).
The TLS capture of information from the client hello enabled by
tune.ssl.capture-buffer-size could not work with USE_QUIC_OPENSSL_COMPAT.
This is due to the fact that the callback set for this feature was replaced
by quic_tls_compat_msg_callback(). In fact this callback must be registered
by ssl_sock_register_msg_callback(), as is done for the TLS client hello
capture. A call to this function appends the function passed as parameter
to a list of callbacks to be called when the TLS stack parses a TLS message.
quic_tls_compat_msg_callback() had to be modified to return immediately if
it is called for a non-QUIC TLS session.
Must be backported to 2.8.
This bug impacts only the QUIC OpenSSL compatibility module (USE_QUIC_OPENSSL_COMPAT).
To make this module work, the quic_tls_compat_keylog_callback() function
must be set as the keylog callback, or at least be called by another keylog
callback. This is what SSL_CTX_keylog() was supposed to do. In addition to
exporting the TLS secrets via sample fetches, the latter also calls
quic_tls_compat_keylog_callback() when compiled with USE_QUIC_OPENSSL_COMPAT
defined.
Before this patch, SSL_CTX_keylog() was replaced by quic_tls_compat_keylog_callback()
and the TLS secrets were no longer exported by sample fetches.
Must be backported to 2.8.
Add a check on nego_ff to ensure the connection is not on error. If this is
the case, fast-forward is disabled to prevent unnecessary sending. If
snd_buf is later called, the stconn will be notified of the error to
interrupt the stream.
This check is necessary to ensure snd_buf and nego_ff are consistent.
Note that previously, if fast-forward was conducted even on connection
error, no sending would occur as qcc_io_send() also checks these flags.
However, there is a risk that the stconn is never notified of the error
status, thus it is considered a bug.
Its impact is minimal for now as fast-forward is disabled by default on
QUIC. By fixing it, it should be possible to reactivate it soon.
This should be backported up to 2.9.
Previously, if a snd_buf operation was conducted despite the QCS already
being locally closed, the input buffer was silently dropped. This situation
could happen if a RESET_STREAM was emitted but its emission was not
reported to the stream layer. Silently resetting the buffer ensured the
QUIC MUX remained compliant with RFC 9000, which forbids emission after
RESET_STREAM.
Since the previous commit, it is now ensured that RESET_STREAM sending will
always be reported to the stream layer. Thus, there is no need anymore to
silently reset the buffer. A BUG_ON() statement is added to ensure this
assumption will remain valid.
The new code is deemed cleaner as it does not hide a missing error
notification on the stconn layer. Previously, if an error notification was
missing, sending would continue unnecessarily with a false success status
reported for the stream.
Note that the BUG_ON() statement was also added into the nego_ff callback.
This is necessary to ensure both sending paths remain consistent.
This patch is labelled as MEDIUM as issues were already encountered in the
snd_buf/nego_ff implementation and it's not easy to cover all occurrences
during testing. If the BUG_ON() is triggered without any apparent
stream-layer issue, this commit should be reverted.
On RESET_STREAM emission, the stream Tx channel is closed. This event
must be reported to the stream-conn layer to interrupt future send
operations.
Previously, se_fl_set_error() was manually invoked before/after
qcc_reset_stream(). Change this by moving the se_fl_set_error() invocation
into the latter. This ensures that the notification won't be forgotten,
most notably in the HTTP/3 layer.
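A sketch of the change (placement and details approximate):

  void qcc_reset_stream(struct qcs *qcs, int err)
  {
      /* ... schedule RESET_STREAM emission ... */
      se_fl_set_error(qcs->sd);  /* notification now performed here */
  }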
In most cases, behavior should be identical as both functions were
already called together except when unnecessary. However, there is one
exception which could cause a RESET_STREAM emission without error
notification: this happens on an H3 trailers parsing error. All other H3
errors happen before the stream-layer creation and thus the error is
notified on stream creation. This regression was caused by the following
patch:
152beeec34
MINOR: mux-quic: report error on stream-endpoint earlier
Thus it should be backported up to 2.7.
Note that the case described above did not cause any crash or protocol
error. This is because currently the QUIC MUX snd_buf operation silently
resets the buffer on transmission if the QCS is already closed locally.
This will however be removed in a future commit, so the current patch is
necessary to prevent an invalid behavior.
All map_* converters now have an additional output type: key. Such
converters will return the matched entry's key (as found in the map file)
as a string instead of the value.
Consider this example map file:
|example.com value1
|haproxy value2
With the above map file:
str(test.example.com/url),map_dom_key(file.map) will return "example.com"
str(running haproxy),map_sub_key(file.map) will return "haproxy"
This should address GH #1446.
This patch adds support for an optional ptr argument (0xffff form) instead
of the key argument to match against existing sticktable entries, ie: if
the key is empty or cannot be matched on the CLI due to incompatible
characters.
The lookup is performed using a linear search, so it will be slower than
the key search which relies on an eb tree lookup.
Example:
set table mytable key mykey data.gpc0 1
show table mytable
> 0x7fbd00032bd8: key=mykey use=0 exp=86373242 shard=0 gpc0=1
clear table mytable ptr 0x7fbd00032bd8
This patch depends on:
- "MINOR: stktable: add table_process_entry helper function"
It should solve GH #2118
Only keep key-related logic in table_process_entry_per_key() function,
and then use table_process_entry() function that takes an entry pointer
as argument to process the entry.
Similarly to the previous commit, we get rid of the unused peer member.
peer->addr was only used to save a copy of the server's addr at parsing
time. But instead of relying on an intermediate variable, we can actually
use the server's address directly when initiating the peer session.
As with other streams created from server's settings (tcp/http, log, ring),
we should rely on srv->svc_port for the port part of the address. This
shouldn't change anything for peers since the address is fully resolved
at parsing time and runtime changes are not supported, but this should
help to make the code future-proof.
peer->proto and peer->xprt struct members are now pure legacy: they are
only set during parsing but never used afterwards.
This is due to commit 02efedac ("MINOR: peers: now remove the remote
connection setup code") which did some cleanup in the past, but the
proto and xprt members were probably left behind by mistake.
Since we don't have valid uses for them, we remove them.
Also, the peer_xprt() helper function was removed since it was related to
the peer->xprt struct member.
Historically, we used the internal peer proxy as the stream target, because
then we only cared about initiating a basic tcp connection with the
endpoint, and relying on the parent proxy settings was enough.
But later, we introduced the possibility to connect to an SSL peer by
taking the server's SSL parameters into account. This was done in commit
1055e687 ("MINOR: peers: Make outgoing connection to SSL/TLS peers work.")
However, the above commit introduced an ambiguity:
peer_session_target() was introduced, and the function will
either return the peers proxy's object or the current server's object
depending on whether ssl is configured or not.
While this works fine to ensure proper SSL handling while being
conservative with the historical behavior, it causes other server
transport-related settings to only work when ssl settings are provided,
which is quite debatable.
Indeed, while we're there, why not always use the server's object as
the stream target, to ensure all transport-related options are properly
handled? Moreover, the peers documentation says this:
... "support for all "server" parameters found in 5.2 paragraph that
are related to transport settings" ...
To remove the ambiguity and fully comply with the documentation, we make
peer_session_target() always return the server's object.
This was the last missing bit from cd994407a ("BUG/MAJOR: server/addr:
fix a race during server addr:svc_port updates")
Indeed, despite the fix, svc_port updates from resolvers were still
directly performed on the server's struct.
Now they make proper use of the server_set_inetaddr() function so the port
change (+ optional addr change with AR) will be propagated atomically.
This patch depends on:
- "MINOR: server: ensure connection cleanup on server addr changes"
- "CLEANUP: server/event_hdl: remove purge_conn hint in INETADDR event"
- "MEDIUM: server: merge srv_update_addr() and srv_update_addr_port() logic"
- "MEDIUM: server: make server_set_inetaddr() updater serializable"
- "MINOR: server/event_hdl: expose updater info through INETADDR event"
- "MINOR: server: add dns hint in server_inetaddr_updater struct"
- "MEDIUM: server/dns: clear RMAINT when addr resolves again"
While it could be backported to 2.9 with cd994407a ("BUG/MAJOR:
server/addr: fix a race during server addr:svc_port updates") to ensure
addr and svc_port updates performed by the resolver's code comply with the
API taking care of pushing the update (and thus avoid any race), some
patch dependencies are quite sensitive so it's probably best to avoid
backporting for no good reason, or at least wait for it to be considered
stable to prevent any breakages.
As seen before, server's addr and svc_port should not be updated directly
during runtime, because even if the update is performed under the lock,
some competing threads might be reading ->addr and ->svc_port without
the lock because they simply cannot afford it.
To prevent races with such competing threads, server's addr and port
should only be updated using server_set_inetaddr() function or similar.
This patch depends on:
- "MINOR: server: ensure connection cleanup on server addr changes"
- "CLEANUP: server/event_hdl: remove purge_conn hint in INETADDR event"
- "MEDIUM: server: merge srv_update_addr() and srv_update_addr_port() logic"
- "MEDIUM: server: make server_set_inetaddr() updater serializable"
- "MINOR: server/event_hdl: expose updater info through INETADDR event"
- "MINOR: server: add dns hint in server_inetaddr_updater struct"
- "MEDIUM: server/dns: clear RMAINT when addr resolves again"
While it could be backported to 2.9 with cd994407a ("BUG/MAJOR:
server/addr: fix a race during server addr:svc_port updates") to ensure
addr and svc_port resets performed by the resolver's code comply with the
API taking care of pushing the update (and thus avoid any race), some
patch dependencies are quite sensitive so it's probably best to avoid
backporting for no good reason, or at least wait for it to be considered
stable to prevent any breakages.
snr_update_srv_status() and srvrq_update_srv_status() will both set or
clear the server RMAINT state depending on the result of the current dns
resolution.
This used to work pretty well in the past, but now that addr:svc_port
updates are applied atomically through a dedicated task, the change is
performed asynchronously, so this can cause some flapping issues if the
server is put out of maintenance while the server's address is still
unassigned.
To prevent errors, the resolver's code is now only allowed to put the
server under maintenance but not to remove it from maintenance:
the decision to remove a server from maintenance is performed by the task
responsible for updating the server's addr: if the addr resolves again
thanks to a valid DNS resolution and the server was previously under
RMAINT, then it is cleared from the RMAINT state.
srvrq_update_srv_status() was renamed srvrq_set_srv_down(), since it is
only called to put the server in maintenance as a result of a failing
SRV entry.
snr_update_srv_status() was renamed snr_set_srv_down() and slightly
modified so that it only takes care of putting the server under
maintenance when needed.
The cli command "set server x/y addr" does not need to remove the RMAINT
flag anymore.
server_set_inetaddr() updater argument is a simple char * string
containing information about the caller responsible for the update.
In this patch, we try to make this argument serializable, that is, make
it so that we can easily export it without having to keep the original
pointer passed by the caller or having to work with strings of variable
lengths.
This was a prerequisite for exposing more updater information through
SERVER_INETADDR event (upcoming patch).
Static strings were simply mapped to a fixed ID that can be converted back
to a string when needed using server_inetaddr_updater_by_to_str(). One
special case was made for the SERVER_INETADDR_UPDATER_DNS_RESOLVER updater
since in this case the updater hint has to be generated from the
corresponding resolver id / nameserver id combination. This was achieved
by saving the nameserver id within the updater struct. Knowing that the
resolver id can be guessed from the server struct directly, it was not
exposed through the updater struct.
This patch depends on:
- "MINOR: resolvers: add unique numeric id to nameservers"
No functional change should be expected.
When we want to avoid keeping pointers to a nameserver struct, it's not
always convenient to refer to a nameserver using its text-based unique
identifier, since it's not limited in length and thus cannot be serialized
and deserialized safely.
To address this limitation, we add a new ->puid member in dns_nameserver
struct which is a parent-unique numeric value that can be used to refer
to the dns nameserver within its parent resolver context.
To achieve this, we reused the resolver->nb_nameserver member that wasn't
used. Each time we add a new nameserver to a resolver, we set ns->puid to
the current number of nameservers within the resolver and we increment
this number right away.
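In other words, at nameserver registration time (one-line sketch):

  ns->puid = resolver->nb_nameserver++;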
Public helper function find_nameserver_by_resolvers_and_id() was added to
help retrieve nameserver pointer from (resolver X nameserver puid)
combination.
dns_dgram_init() function prototype was found in both resolvers and dns
header files, but it should belong to the dns header file, so the
duplicate entry was simply removed.