strace shows a lot of EAGAIN on small response messages. This
is caused by the fact that the READ_DONTWAIT flag is not set
on response messages; it is only set when we want to flush
pending data.
For small responses, it's a waste of CPU cycles to call recv()
for nothing since most of the time, everything we'll need will
be in the first response. Also, this will offer more opportunities
for using splice() to transfer data.
It's becoming increasingly difficult to ignore unwanted function returns in
debug code with gcc. Now even when you try to work around it, it suggests a
way to write your code differently. For example :
src/frontend.c:187:65: warning: if statement has empty body [-Wempty-body]
if (write(1, trash.str, trash.len) < 0) /* shut gcc warning */;
^
src/frontend.c:187:65: note: put the semicolon on a separate line to silence this warning
1 warning generated.
This is totally unacceptable: this code already had to be written this
way to shut it up in earlier versions. And now it comments on the form ?
What's the purpose of the C language if you can no longer write the code
that does what you want ?
Emeric proposed to simply keep a global variable to drain such useless
results so that gcc stops complaining every time it believes people who
write code are monkeys. The solution is acceptable because the useless
assignment is only done in debug code so it will not impact performance.
This patch implements this, until gcc becomes even "smarter" and detects
that we tried to cheat.
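For illustration, the trick boils down to something like this (names
here are made up, not necessarily those used in the patch):

#include <unistd.h>

/* purely illustrative: a global sink makes the result "used", which
 * silences gcc without affecting the fast path */
static int gcc_result_sink;

static inline void drain_ret(int ret)
{
    gcc_result_sink = ret;
}

/* debug code then becomes (buf/len illustrative): */
void debug_dump(const char *buf, size_t len)
{
    drain_ret(write(1, buf, len));
}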
With this patch, patterns can be compiled for two modes :
- match
- lookup
The match mode is used for example in ACLs or maps. The lookup mode
is used to lookup a key for pattern maintenance. For example, looking
up a network is different from looking up one address belonging to
this network.
A special case is made for regexes: in lookup mode they return the
input regex string and do not compile it.
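The network case above illustrates the difference well. As a
self-contained sketch (simplified types, not haproxy's actual code):

#include <stdbool.h>
#include <stdint.h>

/* match asks whether a sample address belongs to a stored network;
 * lookup asks whether this exact network is the stored entry */
struct net_pattern {
    uint32_t addr;
    uint32_t mask;
};

/* match mode: used when evaluating a sample against the pattern */
static bool net_match(const struct net_pattern *p, uint32_t sample_addr)
{
    return (sample_addr & p->mask) == (p->addr & p->mask);
}

/* lookup mode: used for maintenance, to find one exact entry */
static bool net_lookup(const struct net_pattern *p,
                       uint32_t addr, uint32_t mask)
{
    return p->addr == addr && p->mask == mask;
}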
Now, the pat_parse_*() functions parse the incoming data. The input
"pattern" struct can be preallocated. If the parser needs to add some
buffers, it allocates memory.
The function pattern_register() calls the parser, processes the
key indexing and associates the "sample_storage" used by maps.
This is used later for increasing the compatibility with incoming
sample types. When multiple compatible types are supported, one
is arbitrarily used (eg: UINT).
SSL and keep-alive will need to be able to fail on allocation errors,
and the stream interface did not allow reporting such a cause. The flag
will then be "RC" as already documented.
This reduces its size, though the saved space is not reused by anything
else. However it will significantly improve the debugger's output since
we'll now get real state values.
The default case had to be added in the parsers because gcc tries to
optimize the switch/case, notices that some values are missing from
the enums, and emits a warning.
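For example, a switch on such an enum now carries a default case
(illustrative sketch reusing the enums introduced by this series):

enum acl_test_res {
    ACL_TEST_FAIL = 0,  /* test failed */
    ACL_TEST_MISS = 1,  /* test may pass with more info */
    ACL_TEST_PASS = 3,  /* test passed */
};

static int res_to_bool(enum acl_test_res res)
{
    switch (res) {
    case ACL_TEST_PASS:
        return 1;
    default:            /* covers FAIL and MISS, silences the warning */
        return 0;
    }
}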
Here again we had some oversized and misaligned entries. The method
and the status don't need 4 bytes each, and there was a hole after
the status that does not exist anymore. That's 8 additional bytes
saved from http_txn and as much for the session.
Also, some fields were slightly moved to present better memory access
patterns, resulting in a steady 0.5% performance increase.
The task returned by stream_int_register_handler() is never used, but we
always need to access the appctx afterwards, so make it return the appctx
instead. We already plan for it to fail, which is the reason for the addition
of a few tests and the possibility for the HTTP analyser to return a status
code 500.
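The new convention can be sketched as follows (types reduced to a
minimum, purely illustrative):

struct stream_interface;
struct si_applet;
struct appctx { unsigned int st0, st1; };

/* hedged sketch: registration returns the appctx the caller needs
 * anyway, NULL meaning allocation failure */
struct appctx *stream_int_register_handler(struct stream_interface *si,
                                           struct si_applet *app);

int example_caller(struct stream_interface *si, struct si_applet *app)
{
    struct appctx *appctx = stream_int_register_handler(si, app);

    if (!appctx)
        return 500;   /* the HTTP analyser can report a 500 here */
    appctx->st0 = 0;  /* the applet context is immediately usable */
    return 0;
}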
We're about to remove si->appctx, so first let's replace all occurrences
of its usage with a dynamic extract from si->end. A lot of code was changed
by search-n-replace, but the behaviour was intentionally not altered.
The code surrounding calls to stream_int_register_handler() was slightly
changed since we can only use si->end *after* the registration.
The outgoing connection is now allocated dynamically upon the first attempt
to touch the connection's source or destination address. If this allocation
fails, we fail on SN_ERR_RESOURCE.
As we didn't use si->conn anymore, it was removed. The endpoints are released
upon session_free(), on the error path, and upon a new transaction. That way
we are able to carry the existing server's address across retries.
The stream interfaces are not initialized anymore before session_complete(),
so we could even think about allocating them dynamically as well, though
that would not provide much savings.
The session initialization now makes use of conn_new()/conn_free(). This
slightly simplifies the code and makes it more logical. The connection
initialization code is now shorter by about 120 bytes because it's done
at once, allowing the compiler to remove all redundant initializations.
The si_attach_applet() function now takes care of first detaching the
existing endpoint, and it is called from stream_int_register_handler(),
so we can safely remove the calls to si_release_endpoint() in the
application code around this call.
A call to si_detach() was made upon stream_int_unregister_handler() to
ensure we always free the allocated connection if one was allocated in
parallel to setting an applet (eg: detect HTTP proxy while proceeding
with stats maybe).
Currently the control and transport layers of a connection are supposed
to be initialized when their respective pointers are not NULL. This will
not work anymore when we plan to reuse connections, because there is an
asymmetry between the accept() side and the connect() side :
- on accept() side, the fd is set first, then the ctrl layer then the
transport layer ; upon error, they must be undone in the reverse order,
then the FD must be closed. The FD must not be deleted if the control
layer was not yet initialized ;
- on the connect() side, the fd is set last and there is no reliable way
to know if it has been initialized or not. In practice it's initialized
to -1 first, but this is hackish and assumes that only local FDs will
ever be used. Also, there are even fewer solutions for keeping track
of the transport layer's state.
Also, it is possible to support a delayed close() when something (eg: logs)
tracks some information requiring the transport and/or control layers,
making it even more difficult to clean them up.
So the proposed solution is to add two flags to the connection :
- CO_FL_CTRL_READY is set when the control layer is initialized (fd_insert)
and cleared after it's released (fd_delete).
- CO_FL_XPRT_READY is set when the transport layer is initialized (xprt->init)
and cleared after it's released (xprt->close).
The functions have been adapted to rely on this and not on the pointers
anymore. conn_xprt_close() was unused and dangerous : it did not close
the control layer (eg: the socket itself) but still marked the transport
layer as closed, preventing any future call to conn_full_close() from
finishing the job.
The problem comes from conn_full_close() in fact. It needs to close the
xprt and ctrl layers independently. After that we're still having an issue :
we don't know based on ->ctrl alone whether the fd was registered or not.
For this we use the two new flags CO_FL_XPRT_READY and CO_FL_CTRL_READY. We
now rely on this and not on conn->xprt nor conn->ctrl anymore to decide what
remains to be done on the connection.
In order not to miss some flag assignments, we introduce conn_ctrl_init()
to initialize the control layer, register the fd using fd_insert() and set
the flag, and conn_ctrl_close() which unregisters the fd and removes the
flag, but only if the transport layer was closed.
Similarly, at the transport layer, conn_xprt_init() calls ->init and sets
the flag, while conn_xprt_close() checks the flag, calls ->close and clears
the flag, regardless of xprt_ctx or xprt_st. This also ensures that the ->init
and the ->close functions are called only once each and in the correct order.
Note that conn_xprt_close() does nothing if the transport layer is still
tracked.
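A minimal sketch of this pairing between the flags and the callbacks
(simplified types and illustrative flag values, not the real haproxy
structures):

struct connection;

struct xprt_ops {
    int  (*init)(struct connection *conn);
    void (*close)(struct connection *conn);
};

struct connection {
    unsigned int flags;
    const struct xprt_ops *xprt;
};

#define CO_FL_XPRT_READY   0x0001  /* illustrative value */
#define CO_FL_XPRT_TRACKED 0x0002  /* illustrative value */

static void conn_xprt_init(struct connection *conn)
{
    if (!(conn->flags & CO_FL_XPRT_READY)) {
        if (conn->xprt->init(conn) >= 0)
            conn->flags |= CO_FL_XPRT_READY;   /* ->init runs only once */
    }
}

static void conn_xprt_close(struct connection *conn)
{
    if ((conn->flags & CO_FL_XPRT_READY) &&
        !(conn->flags & CO_FL_XPRT_TRACKED)) {
        conn->xprt->close(conn);               /* ->close runs only once */
        conn->flags &= ~CO_FL_XPRT_READY;
    }
}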
conn_full_close() now simply calls conn_xprt_close() then conn_ctrl_close()
in turn; these do nothing if CO_FL_XPRT_TRACKED is set.
In order to handle the error path, we also provide conn_force_close() which
ignores CO_FL_XPRT_TRACKED and closes the transport and the control layers
in turn. All relevant instances of fd_delete() have been replaced with
conn_force_close(). Now we always know what state the connection is in and
we can expect to split its initialization.
The connection will only remain there as a pre-allocated entity whose
goal is to be placed in ->end when establishing an outgoing connection.
All connection initialization can be made on this connection, but all
information retrieved should be applied to the end point only.
This change is huge because there were many users of si->conn. Now the
only users are those who initialize the new connection. The difficulty
appears in a few places such as backend.c, proto_http.c, peers.c where
si->conn is used to hold the connection's target address before assigning
the connection to the stream interface. This is why we have to keep
si->conn for now. A future improvement might consist in dynamically
allocating the connection when it is needed.
Since this is the applet context, call it ->appctx to avoid the confusion
with the pointer to the applet. Many places were changed but it's only a
renaming.
At the moment, stats require some preliminary storage just to store
some flags and codes that are parsed very early and used later. In
fact that doesn't make much sense and makes it very hard to allocate
the applet dynamically.
This patch changes this. Now stats_check_uri() only checks for the
validity of the request and the fact that it matches the stats uri.
It's handle_stats() which parses it. It makes more sense because
handle_stats() used to already perform some preliminary processing
such as verifying that POST contents are not missing, etc...
There is only one minor hiccup in doing so : the reqrep rules might
be processed in between. This has been addressed by moving
http_handle_stats() just after stats_check_uri() and setting s->target
at the same time. Now that s->target is totally operational, it's used
to mark the current request as being targeted at the stats, and this
information is used after the request processing to remove the HTTP
analysers and only let the applet handle the request.
Thus we guarantee that the storage for the applet is filled with the
relevant information and not overwritten when we switch to the applet.
There is a big problem with the way POST is handled for the admin
stats page. The POST parameters are extracted from some http-request
rules, and if not found they return zero, hoping to be called again
when more data passes. This results in the HTTP analyser being called
several times and all the rules prior to the stats being executed
multiple times as well. That includes rewrite rules.
So instead of doing this, we now move all the processing of the stats
into the stats applet.
That way we just set the stats applet in the HTTP analyser when a stats
request is detected, and the applet takes the time it needs to read the
arguments and respond. We could even imagine improving the applet to
support requests larger than a single buffer.
The code was almost only moved and minimally changed. Several new HTTP
states were added to the stats applet to emit headers, redirects and
to read POST. It was necessary to do this because the headers sent
depend on the parsing of the POST request. In the end it's beneficial
because we removed two stream_int_retnclose() calls.
In preparation for moving the POST processing to the applet, we first
add new states to the HTTP I/O handler. Till now st0 was only 0/1 for
start/end. We now replace it with an enum.
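As an illustration only (the actual state names may differ), the enum
could look like:

/* st0 moves from a 0/1 start/end flag to explicit states covering
 * headers, POST parsing, redirects and the dump itself */
enum http_stats_state {
    STAT_HTTP_INIT = 0,  /* nothing emitted yet (was "0") */
    STAT_HTTP_HEAD,      /* emit the response headers */
    STAT_HTTP_POST,      /* read and parse the POST body */
    STAT_HTTP_REDIR,     /* emit a redirect after a POST */
    STAT_HTTP_DUMP,      /* dump the stats page */
    STAT_HTTP_DONE,      /* processing finished (was "1") */
};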
These two fetch methods predate the samples and used to store the
destination address into the server-facing connection's address field
because we had no other place at that time.
This will become problematic with the current connection changes, so
let's fix this.
This field was used by dumpstats to retrieve a pointer to the current
session, which may already be found from ->owner. With this change,
the stats code doesn't need the connection at all anymore.
We're trying to move the applets out of the struct connection. So
let's remove the dependence on xprt_st and introduce si->applet.st2
to store the missing contextual data instead.
In commit 8c3d0be (MEDIUM: Add DRAIN state and report it on the stats page),
the drain state was updated on every weight change except those that can be
sent via the web interface. This caused inconsistent state combinations to
be reported in the stats depending on the sequence (web then cli vs cli
then web).
It would seem that a call to set_server_drain_state() from within
server_recalc_eweight() would simplify things but that's not completely
certain yet.
We need to initialize the rdr_fmt list unconditionally. Using only
a redirect rule without an "http-request redirect" rule may cause a
crash during deinit because the list is iterated from a NULL pointer.
We handle "http-request redirect" with a log-format string now, but we
leave "redirect" unaffected.
Note that the control of the special "/" case is moved from the runtime
execution to the configuration parsing. If the format rule list is
empty, the build_logline() function does nothing.
We now have the following enums and all related functions return them and
consume them :
enum pat_match_res {
    PAT_NOMATCH = 0,     /* sample didn't match any pattern */
    PAT_MATCH = 3,       /* sample matched at least one pattern */
};

enum acl_test_res {
    ACL_TEST_FAIL = 0,   /* test failed */
    ACL_TEST_MISS = 1,   /* test may pass with more info */
    ACL_TEST_PASS = 3,   /* test passed */
};

enum acl_cond_pol {
    ACL_COND_NONE,       /* no polarity set yet */
    ACL_COND_IF,         /* positive condition (after 'if') */
    ACL_COND_UNLESS,     /* negative condition (after 'unless') */
};
It's just in order to avoid doubts when reading some code.
This patch just renames functions, types and enums. No code was changed.
A significant number of files were touched, especially the ACL arrays,
so it is likely that some external patches will not apply anymore.
One important thing is that we had to split ACL_PAT_* into two groups :
- ACL_TEST_{PASS|MISS|FAIL}
- PAT_{MATCH|NOMATCH}
A future patch will enforce enums on all these places to avoid confusion.
This patch just moves code without any change.
ACLs are just the association between a sample and a pattern. The
pattern contains the match method and the parse method. These two
things are different, so this patch cleans up the code by splitting them.
This will be used later with maps. Each map will associate an entry with
a sample_storage value.
This patch changes the "parse" prototype and all the parsing methods.
The goal is to associate "struct sample_storage" to each entry of
"struct acl_pattern". Only the "parse" function can add the sample value
into the "struct acl_pattern".
This is achieved by moving rise and fall from struct server to struct check.
After this move the behaviour of the primary check, server->check, is
unchanged. However, the secondary agent check, server->agent, now has
independent rise and fall values, each of which is set to 1.
The result is that receiving "fail", "stopped" or "down" just once from the
agent will mark the server as down. And receiving a weight just once will
allow the server to be marked up if its primary check is in good health.
This opens up the scope to allow the rise and fall values of the agent
check to be configurable, however this has not been implemented at this
stage.
Signed-off-by: Simon Horman <horms@verge.net.au>
A crash was reported by Igor at owind when changing a server's weight
on the CLI. Lukas Tribus could reproduce a related bug where setting
a server's weight would result in the new weight being multiplied by
the initial one. The two bugs are the same.
The incorrect weight calculation results in the total farm weight being
larger than what was initially allocated, causing the map index to be out
of bounds on some hashes. It's easy to reproduce using "balance url_param"
with a variable param, or with "balance static-rr".
It appears that the calculation is made in many places and is not
always right, nor always wrong in the same way. Thus, this patch introduces a
new function "server_recalc_eweight()" which is dedicated to this task
of computing ->eweight from many other elements including uweight and
current time (for slowstart), and all users now switch to use this
function.
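The idea can be sketched as follows (a deliberately simplified
illustration, not the real function, which accounts for more states):

/* derive the effective weight from the user weight in one place,
 * applying the slowstart ramp when active */
static unsigned int recalc_eweight(unsigned int uweight,
                                   unsigned int now,
                                   unsigned int last_change,
                                   unsigned int slowstart)
{
    if (slowstart && now - last_change < slowstart)
        /* ramp the weight up linearly during the slowstart window */
        return 1 + uweight * (now - last_change) / slowstart;
    return uweight;
}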
The patch is a bit large but the code was not trivially fixable in a way
that could guarantee this situation would not occur anymore. The fix is
much more readable and has been verified to work with all algorithms,
with both consistent and map-based hashes, and even with static-rr.
Slowstart was tested as well, just like enable/disable server.
The same bug is very likely present in 1.4 as well, so the patch will
probably need to be backported even though it will not apply as-is.
Thanks to Lukas and Igor for the information they provided to reproduce it.
This is in preparation for associating an agent check
with a server, which runs alongside the server's existing check.
Signed-off-by: Simon Horman <horms@verge.net.au>
Parameterise the following functions over the check of a server
* set_server_down
* set_server_up
* srv_getinter
* server_status_printf
* set_server_check_status
* set_server_disabled
* set_server_enabled
Generally the server parameter of these functions has been removed.
Where it is still needed it is obtained using check->server.
This is in preparation for associating an agent check
with a server, which runs alongside the server's existing check.
By parameterising these functions they may act on each of the checks
without further significant modification.
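A sketch of the shape of this change (types reduced to a minimum,
illustrative only):

struct server;

struct check {
    struct server *server;  /* back-pointer to the owning server */
    int rise, fall;         /* per-check thresholds */
    int health;             /* current health counter */
};

/* functions now take the check; the server stays reachable through
 * check->server when still needed */
static void check_set_status(struct check *check, int status)
{
    struct server *s = check->server;  /* only if the server is needed */

    (void)s;
    (void)status;
    /* ... update the check's state here ... */
}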
Explanation of the SSP_O_HCHK portion of this change:
* Prior to this patch SSP_O_HCHK serves a single purpose which
is to tell server_status_printf() whether it should print
the details of the check of a server or not.
With the parameterisation that this patch adds there are two cases.
1) Printing the details of the check, in which case a
valid check parameter is needed.
2) Not printing the details of the check, in which case
the contents of the check parameter are unused.
In case 1) we could pass SSP_O_HCHK and a valid check, and
in case 2) we could pass !SSP_O_HCHK and any value for check,
including NULL.
If NULL is used for case 2), then SSP_O_HCHK becomes superfluous,
and as NULL is indeed used for case 2), SSP_O_HCHK has been removed.
Signed-off-by: Simon Horman <horms@verge.net.au>
commit 39c63c5 "url32+src - like base32+src but whole url including parameters"
was missing the last argument "const char *kw", resulting in the build warning
below :
src/proto_http.c:10351:2: warning: initialization from incompatible pointer type [enabled by default]
src/proto_http.c:10351:2: warning: (near initialization for 'sample_fetch_keywords.kw[50].process') [enabled by default]
src/proto_http.c:10352:2: warning: initialization from incompatible pointer type [enabled by default]
src/proto_http.c:10352:2: warning: (near initialization for 'sample_fetch_keywords.kw[51].process') [enabled by default]
It's harmless since it's not needed there anyway.
Baptiste Assmann reported a bug affecting the "http-request redirect"
parser. It may randomly crash when reporting an error message if the
syntax is not OK. It happens that this is caused by the output error
message pointer which was not initialized to NULL.
This bug is 1.5-specific (introduced in dev17), no backport is needed.
I have a need to limit traffic to each url from each source address,
much like base32+src but with the whole url including parameters (this
came from looking at the recent 'Haproxy rate limit per matching
request' thread).
Attached is a patch that seems to do the job; it's a copy-and-paste job
of the base32 functions.
The url32 function seems to work too: using 2 machines to request the
same url locks me out of both if I abuse from either with the url32 key
function, and only the one if I use url32_src.
Neil
The reqdeny/reqtarpit and http-request deny/tarpit were using
a copy-paste of the error handling code because originally the
req* actions used to maintain their own stats. This is not the
case anymore so we can use the same error blocks for both.
The http-request rulesets still have precedence over req* so no
functionality was changed.
The reqdeny/reqideny and reqtarpit/reqitarpit rules used to maintain
the stats counters themselves while http-request deny/tarpit and
rspdeny/rspideny used to centralize them at the point where the
error is processed.
Thus, let's do the same for reqdeny/reqtarpit so that the functions
which iterate over the rules do not have to deal with these counters
anymore.
When a connection is tarpitted, a denied req is counted once when the
action is applied, and then a failed req is counted when the tarpit
timeout expires. This is completely wrong, as a tarpit is exactly
equivalent to a deny: it is just a disguised deny.
So let's not increment the failed req anymore.
This fix may be backported to 1.4 which has the same issue.
Currently url_decode returns 1 or 0 depending on whether it could decode
the string or not. For some future use cases, it will be needed to get the
decoded string length after a successful decoding, so let's make it return
that value, and fall back to a negative one in case of error.
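A self-contained sketch of the new contract (not haproxy's actual
implementation):

/* return the decoded length on success, a negative value on error */
static int hex2i(int c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return -1;
}

int url_decode(char *string)
{
    char *in = string, *out = string;

    while (*in) {
        if (*in == '%') {
            int hi = hex2i((unsigned char)in[1]);
            if (hi < 0)
                return -1;              /* truncated/invalid escape */
            int lo = hex2i((unsigned char)in[2]);
            if (lo < 0)
                return -1;
            *out++ = (char)((hi << 4) | lo);
            in += 3;
        } else if (*in == '+') {
            *out++ = ' ';               /* '+' decodes to a space */
            in++;
        } else {
            *out++ = *in++;
        }
    }
    *out = 0;
    return out - string;                /* decoded length */
}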
Bertrand Jacquin reported a bug when using tcp-request content rules
on large POST HTTP requests. The issue is that smp_prefetch_http()
first tries to validate an input buffer, but only if the buffer is
not full. This test is wrong since it must only be performed after
the parsing has failed, otherwise we don't accept POST requests which
fill the buffer as valid HTTP requests.
This bug is 1.5-specific, no backport needed.
At the moment, HTTP response time is computed after response headers are
processed. This can misleadingly assign to the server some heavy local
processing (eg: regex), and also prevents response headers from passing
information related to the response time (which can sometimes be useful
for stats).
Let's retrieve the response time before processing the headers instead.
Note that in order to remain compatible with what was previously done,
we disable the response time when we get a 502 or any bad response. This
should probably be changed in 1.6 since it does not make sense anymore
to lose this information.
When a request failed, the unique_id was allocated but not generated:
the string was left uninitialized and junk was printed in the log with
%ID. This patch changes the behavior of the unique_id: it is now
generated when a request fails.
This bug was reported by Patrick Hemmer.
The HTTP request counter is incremented non-atomically, which means that
many requests can log the same ID. Let's increment it when it is consumed
so that we avoid this case.
This bug was reported by Patrick Hemmer. It's 1.5-specific and does not
need to be backported.
We're having a lot of duplicate code just because of minor variants between
fetch functions that could be dealt with if the functions had the pointer to
the original keyword, so let's pass it as the last argument. An earlier
version used to pass a pointer to the sample_fetch element, but this is not
the best solution for two reasons :
- fetch functions will solely rely on the keyword string
- some other smp_fetch_* users do not have the pointer to the original
keyword and were forced to pass NULL.
So finally we're passing a pointer to the keyword as a const char *, which
perfectly fits the original purpose.
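For illustration, two keyword table entries can now share one handler
which switches on the keyword it receives (simplified types, not
haproxy's actual ones):

#include <string.h>

struct sample;
typedef int (*fetch_fn)(struct sample *smp, const char *kw);

struct fetch_kw { const char *kw; fetch_fn process; };

static int smp_fetch_url32(struct sample *smp, const char *kw)
{
    int with_src = !strcmp(kw, "url32+src");  /* variant chosen via kw */

    (void)smp;
    /* ... compute the hash, folding in the source address if with_src ... */
    return with_src;
}

static const struct fetch_kw fetch_keywords[] = {
    { "url32",     smp_fetch_url32 },
    { "url32+src", smp_fetch_url32 },  /* same handler, different kw */
    { NULL, NULL }
};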
Converts an integer supposed to contain a date since epoch to
a string representing this date in a format suitable for use
in HTTP header fields. If an offset value is specified, then
it is a number of seconds that is added to the date before the
conversion is operated. This is particularly useful to emit
Date header fields, Expires values in responses when combined
with a positive offset, or Last-Modified values when the
offset is negative.
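The conversion itself boils down to something like this sketch
(illustrative, not the converter's real code):

#include <time.h>

/* epoch + offset formatted as an RFC 1123 date usable in HTTP
 * header fields */
static int epoch_to_http_date(long long epoch, long long offset,
                              char *buf, size_t len)
{
    time_t when = (time_t)(epoch + offset);
    struct tm tm;

    if (!gmtime_r(&when, &tm))
        return 0;
    /* e.g. "Thu, 01 Jan 1970 00:00:00 GMT" */
    return strftime(buf, len, "%a, %d %b %Y %H:%M:%S GMT", &tm) != 0;
}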
When ACLs and samples were converged in 1.5-dev18, function
"acl_prefetch_http" was not properly converted after commit 8ed669b1.
It used to return -1 when contents did not match HTTP traffic, which
was considered as a "true" boolean result by the ACL execution code,
possibly causing crashes due to missing data when checking for HTTP
traffic in TCP rules.
Another issue is that when the function returned zero, it did not
set the SMP_F_MAY_CHANGE flag, so it could randomly exit on partial
requests before waiting for a complete one.
Last issue is that when it returned 1, it did not set smp->data.uint,
so this last one would retain a random value from a past execution.
This could randomly cause some matches to fail as well.
Thanks to Remo Eichenberger for reporting this issue with a detailed
explanation and configuration.
This bug is 1.5-specific, no backport is needed.
The checkcache option checks for cacheable responses with a set-cookie
header. Since the response processing code was refactored in 1.3.8
(commit a15645d4), the check was broken because the no-cache value
is only checked as no-cache="set-cookie", and not alone.
Thanks to Hervé Commowick for reporting this stupid bug!
The fix should be backported to 1.4 and 1.3.