All files that were including one of the following include files have
been updated to only include haproxy/api.h or haproxy/api-t.h once instead:
- common/config.h
- common/compat.h
- common/compiler.h
- common/defaults.h
- common/initcall.h
- common/tools.h
The choice is simple: if the file only requires type definitions, it includes
api-t.h, otherwise it includes the full api.h.
In addition, in these files, explicit includes for inttypes.h and limits.h
were dropped since these are now covered by api.h and api-t.h.
No other change was performed, given that this patch is large and
affects 201 files. At least one (tools.h) was already freestanding and
didn't get the new one added.
This is where other imported components are located. All files which
used to directly include ebtree were touched to update their include
path so that "import/" is now prefixed before the ebtree-related files.
The ebtree.h file was slightly adjusted to read compiler.h from the
common/ subdirectory (this is the only change).
A build issue was encountered when eb32sctree.h is loaded before
eb32tree.h because only the former checks for the latter before
defining type u32. This was addressed by adding the reverse ifdef
in eb32tree.h.
No further cleanup was done yet in order to keep changes minimal.
The behavior of calloc() when being passed `0` as `nelem` is implementation
defined. It may return a NULL pointer.
Avoid this issue by checking before allocating. While doing so, adjust the local
integer variables used to refer to memory offsets to `size_t`.
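For illustration, a minimal sketch of this kind of guard (the function and
variable names are made up for the example, not the actual haproxy code):

    #include <stdlib.h>

    /* Illustrative only: never call calloc() with a zero count, so that a
     * NULL return can only ever mean a real allocation failure.
     */
    static int alloc_offsets(size_t nelem, size_t **out)
    {
        if (!nelem) {            /* nothing to allocate: report success */
            *out = NULL;
            return 0;
        }

        *out = calloc(nelem, sizeof(**out));
        return *out ? 0 : -1;
    }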
This issue was introduced in commit f91ac19299. This
patch should be backported together with that commit.
This is the replacement of the failed attempt to add thread safety and
per-process sequences of random numbers initially tried with commit
1c306aa84d ("BUG/MEDIUM: random: implement per-thread and per-process
random sequences").
This new version takes a completely different approach and doesn't try
to work around the horrible OS-specific and non-portable random API
anymore. Instead it implements "xoroshiro128**", a reputedly high
quality random number generator. It is one of the many variants of
xorshift, passes all quality tests, and is described here:
http://prng.di.unimi.it/
While not cryptographically secure, it is fast and features a 2^128-1
period. It supports fast jumps that allow cutting the period into smaller
non-overlapping sequences, which we use here to support up to 2^32
processes each having their own, non-overlapping sequence of 2^96
numbers (~7*10^28). This is enough to provide 1 billion randoms per
second and per process for 2200 billion years.
The implementation was made thread-safe either by using a double 64-bit
CAS on platforms supporting it (x86_64, aarch64) or by using a local
lock for the time needed to perform the shift operations. This ensures
that all threads pick numbers from the same pool, removing the need to
assign per-thread ranges. For processes we use the fast jump
method to advance the sequence by 2^96 for each process.
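For reference, a minimal sketch of the xoroshiro128** step as published on
http://prng.di.unimi.it/ (the actual haproxy code additionally wraps the
state update in the double-width CAS or lock described above):

    #include <stdint.h>

    static uint64_t rng_state[2];   /* 128-bit state, must not be all zero */

    static inline uint64_t rotl64(uint64_t x, int k)
    {
        return (x << k) | (x >> (64 - k));
    }

    /* one xoroshiro128** step: returns 64 bits and advances the state */
    static uint64_t prng_next(void)
    {
        const uint64_t s0 = rng_state[0];
        uint64_t s1 = rng_state[1];
        const uint64_t result = rotl64(s0 * 5, 7) * 9;

        s1 ^= s0;
        rng_state[0] = rotl64(s0, 24) ^ s1 ^ (s1 << 16);
        rng_state[1] = rotl64(s1, 37);
        return result;
    }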
Before this patch, the following config:
    global
        nbproc 8

    frontend f
        bind :4445
        mode http
        log stdout format raw daemon
        log-format "%[uuid] %pid"
        redirect location /
Would produce this output:
a4d0ad64-2645-4b74-b894-48acce0669af 12987
a4d0ad64-2645-4b74-b894-48acce0669af 12992
a4d0ad64-2645-4b74-b894-48acce0669af 12986
a4d0ad64-2645-4b74-b894-48acce0669af 12988
a4d0ad64-2645-4b74-b894-48acce0669af 12991
a4d0ad64-2645-4b74-b894-48acce0669af 12989
a4d0ad64-2645-4b74-b894-48acce0669af 12990
82d5f6cd-f6c1-4f85-a89c-36ae85d26fb9 12987
82d5f6cd-f6c1-4f85-a89c-36ae85d26fb9 12992
82d5f6cd-f6c1-4f85-a89c-36ae85d26fb9 12986
(...)
And now produces:
f94b29b3-da74-4e03-a0c5-a532c635bad9 13011
47470c02-4862-4c33-80e7-a952899570e5 13014
86332123-539a-47bf-853f-8c8ea8b2a2b5 13013
8f9efa99-3143-47b2-83cf-d618c8dea711 13012
3cc0f5c7-d790-496b-8d39-bec77647af5b 13015
3ec64915-8f95-4374-9e66-e777dc8791e0 13009
0f9bf894-dcde-408c-b094-6e0bb3255452 13011
49c7bfde-3ffb-40e9-9a8d-8084d650ed8f 13014
e23f6f2e-35c5-4433-a294-b790ab902653 13012
There are multiple benefits to using this method. First, it doesn't
depend on a non-portable API anymore. Second, it's thread-safe. Third, it
is fast and better proven than any hack we could attempt to work around
the deficiencies of the various implementations around.
This commit depends on the previous patches "MINOR: tools: add 64-bit rotate
operators" and "BUG/MEDIUM: random: initialize the random pool a bit
better", both of which will need to be backported at least as far as
version 2.0. It no longer requires backporting the build fixes for the
circular include file dependency.
This reverts commit 1c306aa84d.
It breaks the build on all non-glibc platforms. I got confused by the
man page (which possibly is the most confusing man page I've ever read
about a standard libc function) and mistakenly understood that random_r
was portable, especially since it appears in the latest FreeBSD sources as
well but not in released versions, and with a slightly different API :-/
We need to find a different solution with a fallback. Among the
possibilities, we may reintroduce this one with a fallback relying on
locking around the standard functions, keeping fingers crossed for no
other library function to call them in parallel, or we may also provide
our own PRNG, which is not necessarily more difficult than working
around the totally broken design of the portable API.
As mentioned in the previous patch, the random number generator was never
made thread-safe, which used not to be a problem for health check
spreading, until the uuid sample fetch function appeared. Currently
it is possible for two threads or processes to produce exactly the
same UUID. In fact it's extremely likely that this will happen for
processes, as can be seen with this config:
    global
        nbproc 8

    frontend f
        bind :4445
        mode http
        log stdout format raw daemon
        log-format "%[uuid] %pid"
        redirect location /
It typically produces this log:
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30645
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30641
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30644
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30639
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30646
07764439-c24d-4e6f-a5a6-0138be59e7a8 30645
07764439-c24d-4e6f-a5a6-0138be59e7a8 30639
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30643
07764439-c24d-4e6f-a5a6-0138be59e7a8 30646
b6773fdd-678f-4d04-96f2-4fb11ad15d6b 30646
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30642
07764439-c24d-4e6f-a5a6-0138be59e7a8 30642
What this patch does is use a distinct per-thread and per-process
seed to make sure the same sequences will not appear, and then
extend these seeds by "burning" a number of randoms that depends on
the global random seed, the thread ID and the process ID. This adds
roughly 20 extra bits of randomness, resulting in 52 bits total per
thread and per process.
It only takes a few milliseconds to burn these randoms and, given
that threads start with a different seed, we know they will not
catch up with each other. So these extra random bits are essentially
added to ensure randomness between boots and cluster instances.
This replaces all uses of random() with ha_random() which uses the
thread-local state.
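As a purely illustrative sketch of the drop-in replacement idea (the
generator shown below is a placeholder, not the one used by this patch;
THREAD_LOCAL is haproxy's thread-local storage macro):

    #include <stdint.h>

    /* each thread draws from its own state, so threads can no longer
     * interleave on the libc's single hidden state.
     */
    static THREAD_LOCAL uint64_t ha_rand_state;

    static inline long ha_random(void)
    {
        /* any per-thread generator works here; a trivial LCG for the sketch */
        ha_rand_state = ha_rand_state * 6364136223846793005ULL
                      + 1442695040888963407ULL;
        return (long)(ha_rand_state >> 33);
    }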
This must be backported as far as 2.0 or any version having the
UUID sample-fetch function since it's the main victim here.
It's important to note that this patch, in addition to depending on
the previous one "BUG/MEDIUM: init: initialize the random pool a bit
better", also depends on the preceeding build fixes to address a
circular dependency issue in the include files that prevented it
from building. Part or all of these patches may need to be backported
or adapted as well.
pattern_finalize_config() uses an inefficient algorithm which is a
problem with very large configuration files. This affects startup, and
therefore reload time. When haproxy is deployed as a router in a
Kubernetes cluster, the generated configuration file may be large and
reloads occur frequently, which makes this a significant issue.
The old algorithm is O(n^2):
* allocate missing uids - O(n^2)
* sort linked list - O(n^2)
The new algorithm is O(n log n):
* find the user allocated uids - O(n)
* store them for efficient lookup - O(n log n)
* allocate missing uids - n times O(log n)
* sort all uids - O(n log n)
* convert back to linked list - O(n)
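A rough sketch of the direction taken, using an eb32 tree as the efficient
lookup structure (types and helper names below are illustrative, not the
exact patch):

    #include <stdlib.h>
    #include <import/eb32tree.h>

    /* illustrative node wrapping one pattern reference */
    struct uid_node {
        struct eb32_node node;   /* node.key holds the unique_id */
        void *ref;               /* the pat_ref this uid belongs to */
    };

    /* step 1: index a user-assigned uid, O(log n) per insertion */
    static void record_uid(struct eb_root *used, struct uid_node *n, unsigned int uid)
    {
        n->node.key = uid;
        eb32_insert(used, &n->node);
    }

    /* step 2: allocate the next uid not already present in the tree;
     * carrying <from> forward across calls keeps this at O(log n) each.
     */
    static unsigned int next_free_uid(struct eb_root *used, unsigned int from)
    {
        while (eb32_lookup(used, from))
            from++;
        return from;
    }

    /* step 3: walking the tree with eb32_first()/eb32_next() then yields
     * all uids in sorted order, ready to be relinked into the list.
     */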
Performance examples, startup time in seconds:

    pat_refs     old      new
        1000     0.02     0.01
       10000     2.1      0.04
       20000    12.3      0.07
       30000    27.9      0.10
       40000    52.5      0.14
       50000    77.5      0.17
Please backport to 1.8, 2.0 and 2.1.
Commit 3c79d4bdc introduced the use of errno in pattern.c without
including errno.h.
If we build haproxy without any build option, errno is not defined and the
build fails.
We need to do some error handling after we call fgets to make sure everything
went fine. If we don't, users can be fooled into thinking they can load patterns
from a directory because cfgparse doesn't flinch. This applies to ACL pattern
and map files.
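A minimal sketch of the kind of check meant here (the function is
illustrative; memprintf() is haproxy's error formatting helper):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* returns 1 on success, 0 (with <err> filled) when the read loop
     * stopped on an I/O error rather than on end of file.
     */
    static int read_patterns(FILE *f, const char *filename, char **err)
    {
        char line[LINESIZE];

        while (fgets(line, sizeof(line), f) != NULL) {
            /* ... parse and index the pattern ... */
        }

        if (ferror(f)) {
            memprintf(err, "error encountered while reading <%s> : %s",
                      filename, strerror(errno));
            return 0;
        }
        return 1;
    }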
This should be backported to all supported versions.
As reported in issue #335, a lot of contention happens on the PATLRU lock
when performing expensive regex lookups. This is absurd since the purpose
of the LRU cache was to have a fast cache for expressions, thus the cache
must not be shared between threads and must remain lockless.
This commit makes the LRU cache thread-local and gets rid of the PATLRU
lock. A test with 7 threads on 4 cores climbed from 67kH/s to 369kH/s,
or a scalability factor of 5.5.
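The direction of the change can be sketched as follows (declarations are
illustrative, assuming haproxy's THREAD_LOCAL macro and the lru64 cache
from import/lru.h):

    #include <import/lru.h>

    /* one private cache and seed per thread: no PATLRU lock needed anymore */
    static THREAD_LOCAL struct lru64_head *pat_lru_tree;
    static THREAD_LOCAL unsigned long long pat_lru_seed;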
Given the huge performance difference and the regression caused to
users migrating from processes to threads, this should be backported at
least to 2.0.
Thanks to Brian Diekelman for his detailed report about this regression.
gcc-3.4 fails to compile pattern.c :
src/pattern.c: In function `pat_match_ip':
src/pattern.c:1092: error: unrecognizable insn:
(insn 186 185 187 9 src/pattern.c:970 (set (reg/f:SI 179)
(high:SI (const:SI (plus:SI (symbol_ref:SI ("static_pattern") [flags 0x22] <var_decl fe5bae80 static_pattern>)
(const_int 8 [0x8]))))) -1 (nil)
(nil))
src/pattern.c:1092: internal compiler error: in extract_insn, at recog.c:2083
This happens when performing the memcpy() on the union, and in this
case the workaround is trivial (and even cleaner) using a cast instead.
Now we atomically allocate the my_regex struct within function
regex_comp() and compile the regex or free both in case of failure. The
pointer to the allocated my_regex struct is returned directly. The
my_regex* argument to regex_comp() is removed.
Function regex_free() was modified so that it systematically frees the
my_regex entry. The function does nothing when called with NULL as its
argument (like free()). This avoids the existing risk of not properly
freeing the initialized area.
Other structures are also updated in order to be compatible (the ones
related to Lua and action rules).
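The resulting usage pattern looks roughly like this (a sketch based on the
prototypes described above; the calling function is made up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <common/regex.h>   /* struct my_regex, regex_comp(), regex_free() */

    static int compile_and_use(const char *expr_str)
    {
        struct my_regex *re;
        char *err = NULL;

        re = regex_comp(expr_str, 1 /* case sensitive */, 0 /* no captures */, &err);
        if (!re) {
            /* regex_comp() already freed everything it allocated itself */
            fprintf(stderr, "cannot compile '%s': %s\n",
                    expr_str, err ? err : "out of memory");
            free(err);
            return 0;
        }

        /* ... use <re> ... */

        regex_free(re);   /* NULL-safe, like free() */
        return 1;
    }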
The allocated regex is not freed properly and can cause a memory leak,
e.g. when patterns are updated via the CLI socket.
This patch should be backported to all supported versions.
A previous commit 8d85aa44d ("BUG/MAJOR: map: fix segfault during
'show map/acl' on cli.") was provided to address a concurrency issue
between "show acl" and "clear acl" on the CLI. Sadly the code placed
there was copy-pasted without changing the element type (which was
struct stream in the original code) and not tested since the crash
is still present.
The reproducer is simple: load a large ACL file (e.g. geolocation
addresses), issue "show acl #0" in a loop in one window and issue a
"clear acl #0" in the other one; haproxy crashes.
This fix was also tested with threads enabled and looks good since
the locking seems to work correctly in these areas. It will
have to be backported as far as 1.6 since the commit above went
that far as well...
This patch replaces a number of __decl_hathreads() followed by HA_SPIN_INIT
or HA_RWLOCK_INIT with the new __decl_spinlock() or __decl_rwlock() which
automatically registers the lock for initialization during the STG_LOCK
init stage. A few static modifiers were lost in the process, but since they
were not essential at all it was not worth extending the API to provide such
a variant.
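For illustration, the before/after shape of such a declaration (sketch; the
lock name is made up):

    /* before: declare the lock, then remember to initialize it somewhere */
    __decl_hathreads(HA_SPINLOCK_T my_lock);
    /* ... and in some init function: HA_SPIN_INIT(&my_lock); */

    /* after: one declaration, initialization registered for STG_LOCK */
    __decl_spinlock(my_lock);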
A null pointer assignment was missing after free() in function
pat_ref_reload() which can lead to segfault.
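The class of fix is the usual one (sketch, with an illustrative field name):

    free(elt->sample);
    elt->sample = NULL;   /* prevents a later dereference or double free */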
This bug was introduced in commit b5997f7 ("MAJOR: threads/map: Make
acls/maps thread safe").
Must be backported to 1.8.
Now all the code used to manipulate chunks uses a struct buffer instead.
The functions are still called "chunk*", and some of them will progressively
move to the generic buffer handling code as they are cleaned up.
Chunks are only a subset of a buffer (a non-wrapping version with no head
offset). Despite this we still carry a lot of duplicated code between
buffers and chunks. Replacing chunks with buffers would significantly
reduce the maintenance efforts. This first patch renames the chunk's
fields to match the name and types used by struct buffers, with the goal
of isolating the code changes from the declaration changes.
Most of the changes were made with spatch using this coccinelle script:
@rule_d1@
typedef chunk;
struct chunk chunk;
@@
- chunk.str
+ chunk.area
@rule_d2@
typedef chunk;
struct chunk chunk;
@@
- chunk.len
+ chunk.data
@rule_i1@
typedef chunk;
struct chunk *chunk;
@@
- chunk->str
+ chunk->area
@rule_i2@
typedef chunk;
struct chunk *chunk;
@@
- chunk->len
+ chunk->data
Some minor updates to 3 http functions had to be performed for them to
take size_t instead of int in order to match the unsigned length here.
The bug happens with an existing entry, when you try to overwrite the
value with wrong data, for example, a string when the type is INT.
The code path was not secure and tried to set *err and *merr while
err = merr = NULL when performing an http action.
Must be backported to 1.6, 1.7 and 1.8.
pat_ref_newid() is lacking a spinlock init. It was probably forgotten
in b5997f740b ("MAJOR: threads/map: Make acls/maps thread safe").
Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
This macro should be used to declare variables or struct members depending on
the USE_THREAD compile option. It avoids the encapsulation of such declarations
between #ifdef/#endif. It is used to declare all lock variables.
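Its probable shape and intended use can be sketched as follows (sketch
only, not the verbatim macro):

    #ifdef USE_THREAD
    #define __decl_hathreads(decl) decl
    #else
    #define __decl_hathreads(decl)
    #endif

    /* a lock that only exists in threaded builds, with no #ifdef at the
     * point of use:
     */
    __decl_hathreads(HA_SPINLOCK_T pattern_reference_lock);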
To be thread-safe, the function pattern_exec_match copies data (the pattern and
the inner sample) into thread-local variables. But when the sample is duplicated,
we must check its type and not the pattern's one.
This is specific to threads, no backport is needed.
Locks have been added to the pat_ref and pattern_expr structures to protect all
accesses to an instance of one of them. Moreover, a global lock has been added to
protect the LRU cache used for pattern matching.
Patterns are now duplicated after a successful match, to avoid modification
by other threads while the result is used.
Finally, the function reloading a pattern list has been modified to be
thread-safe.
The bug: Maps/ACLs using the same file/id can mistakenly inherit
their flags from the last declared one.
For example:

    $ cat haproxy.conf
    listen mylistener
        mode http
        bind 0.0.0.0:8080
        acl myacl1 url -i -f mine.acl
        acl myacl2 url -f mine.acl
        acl myacl3 url -i -f mine.acl
        redirect location / if myacl2

    $ cat mine.acl
    foobar
Shows an unexpected redirect for request 'GET /FOObAR HTTP/1.0\n\n'.
This fix should be backported to mainline branches v1.6 and v1.7.
The reference to the current map/acl element being dumped could
be destroyed if the map is updated from an 'http-request del-map'
configuration rule or through a 'del map/acl' on the CLI.
We use a 'back_refs' chaining element to fix this, as is done
when dumping sessions.
This patch also needs the fix
'BUG/MAJOR: cli: fix custom io_release was crushed by NULL.'
to clean the back_ref and avoid a crash on a further
del/clear map operation.
Those fixes should be backported to mainline branches 1.7 and 1.6.
This patch won't directly apply to 1.6.
pattern_new_expr() failed to free the allocated list element when an
out-of-memory error occurs during initialization of the element. As
this only happens when loading the configuration file or evaluating
commands via the CLI, it is unlikely for this leak to be relevant
unless the user makes automated, heavy use of the CLI.
Found in HAProxy 1.5.14.
Ignore samples that are neither SMP_T_IPV4 nor SMP_T_IPV6 instead of
matching with an uninitialized value in this case.
This situation should not occur in the current codebase but triggers
warnings in static code analysis tools.
Found in haproxy 1.5.
The union name "data" is a little bit heavy while we read the source
code because we can read "data.data.sint". The rename from "data" to "u"
makes the read easiest like "data.u.sint".
This patch removes the struct information stored both in the struct
sample_data and in the struct sample. Now, only the struct sample_data
contains data, and the struct sample uses the struct sample_data for storing
its own data.
This patch removes the 32-bit unsigned integer and the 32-bit signed
integer. It replaces these types with a single 64-bit signed type.
This makes integers easier to use and clarifies signed and unsigned usage.
With the previous version, signed and unsigned were sometimes used in
place of one another, and sometimes the converters lost the sign. For
example, divisions were processed as "unsigned", so if one operand was
negative, the result was wrong.
Note that the integer pattern matching and dotted version pattern matching
already work with signed 64-bit integer values.
There is one user-visible change: the "uint()" and "sint()" sample fetch
functions which used to return a constant integer have been replaced with
a new more natural, unified "int()" function. These functions were only
introduced in the latest 1.6-dev2 so there's no impact on regular
deployments.
Now, when an item is committed into an LRU tree, you can define a function to free
the data owned by this item. This function will be called when the item is removed
from the LRU tree or when the tree is destroyed.
They're caused by the cast from a pointer to long long on 32-bit platforms.
src/pattern.c: In function 'pat_match_str':
src/pattern.c:479:44: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
The principle of this cache is to have a global cache for all pattern
matching operations which rely on lists (reg, sub, dir, dom, ...). The
input data, the expression and a random seed are used as a hashing key.
The cached entries contain a pointer to the expression and a revision
number for that expression so that we don't accidentally use obsolete
data after a pattern update or a very unlikely hash collision.
Regarding the risk of collisions, 10k entries at 10k req/s mean 1% risk
of a collision after 60 years, which is already much less than the memory's
reliability in most machines and longer than most admins' life
expectancy. A collision will result in a valid result being returned
for a different entry from the same list. If this is not acceptable,
the cache can be disabled using tune.pattern.cache-size.
A test on a file containing 10k small regex showed that the regex
matching was limited to 6k/s instead of 70k with regular strings.
When enabling the LRU cache, the performance was back to 70k/s.
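The lookup can be sketched as follows (the hashing and argument details are
assumptions for illustration, not the exact haproxy code):

    #include <stddef.h>
    #include <import/lru.h>
    #include <import/xxhash.h>

    static struct lru64_head *pat_lru_tree;    /* tune.pattern.cache-size entries */
    static unsigned long long pat_lru_seed;    /* random seed picked at boot */

    static void *cached_match(void *expr, unsigned long long revision,
                              const char *data, size_t len)
    {
        unsigned long long key;
        struct lru64 *lru;

        /* key = hash of the input data, salted by the seed and the expression */
        key = XXH64(data, len, pat_lru_seed) ^ (unsigned long)expr;

        lru = lru64_get(key, pat_lru_tree, expr, revision);
        if (lru && lru->domain)
            return lru->data;    /* hit: reuse the previously computed result */

        /* miss: run the regular list walk, then commit the result into <lru>
         * together with <expr> and <revision> so the entry is automatically
         * invalidated by any later update of the expression.
         */
        return NULL;
    }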
This will be used to detect any change on the pattern list between
two operations, ultimately making it possible to implement a cache
which immediately invalidates obsolete keys after an update. The
revision is simply taken from the timestamp counter to ensure that
even upon a pointer reuse we cannot accidentally come back to the
same (expr,revision) tuple.
ACL or map entries are not deleted by the command "del acl" or "del map"
if the case-insensitive flag is set.
This is because the case-insensitive strings are stored in a list and the
default delete function associated with strings looks in a tree. I add a check
of the case-insensitive flag and execute the delete function for lists if it
is set.
This patch must be backported to 1.5.
Dmitry Sivachenko <trtrmitya@gmail.com> reported that commit 315ec42
("BUG/MEDIUM: pattern: don't load more than once a pattern list.")
relies on an uninitialized variable on the stack. While it used to
work fine during the tests, if the uninitialized variable is non-null,
some patterns may be aggregated if loaded multiple times, resulting in
slower processing, which was the original issue it tried to address.
The fix needs to be backported to 1.5.
A memory optimization can use the same pattern expression for many
identical pattern lists (same parse method, index method and index_smp
method).
The pattern expression is returned by "pattern_new_expr", but this
function doesn't indicate whether the returned expression is already in use.
So, the caller reloads the list of patterns in addition to the
already existing ones. This behavior is not a problem with tree-indexed
patterns, but it grows the list-indexed patterns.
This fix adds a "reuse" flag returned by the function "pattern_new_expr".
If the flag is set, the patterns are assumed to be already loaded.
This fix must be backported to 1.5.
Just like the previous patch, this is a leftover from an early implementation. Also
fix the outdated comments above. The fix may be backported to 1.5 though the
bug cannot be triggered, so it's just a matter of keeping the code clean.
This patch renames "regex_exec" to "regex_exec2" and adds new
"regex_exec", "regex_exec_match" and "regex_exec_match2" functions. The
*_match functions can match a regex and return an array containing the
matching parts. In all cases, the functions use the compiled method (JIT,
PCRE or POSIX).
JIT requires a subject with its length. PCREPOSIX and native POSIX regexes
require a null-terminated subject. The regex_exec* functions are therefore
split into two versions. The first version takes a null-terminated string,
but executes strlen() on the subject if the regex was compiled with JIT. The
second version (suffixed with "2") takes the subject and the length. This
version adds a null character to the subject if the regex was compiled with
PCREPOSIX or native POSIX functions.
The documentation of POSIX regex and pcreposix says that the function
returns 0 if the string matches, otherwise it returns REG_NOMATCH. The
REG_NOMATCH macro takes the value 1 with POSIX regex and the value 17
with pcreposix. According to the documentation of the native PCRE API
(used with JIT), the function returns a negative number if there is no
match, otherwise 0 or a positive number.
This patch also fixes the return codes of the regex_exec* functions. Now,
these functions return true if the string matches, otherwise they return
false.
Being able to map prefixes to values is already used for IPv4/IPv6
but was not yet used with strings. It can be very convenient to map
directories to server farms but large lists may be slow.
By using ebmb_insert_prefix() and ebmb_lookup_longest(), we can
insert strings with their own length as a prefix, look up
candidate strings, and ensure that the longest matching one is
returned, which is the longest string matching the entry.
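A sketch of the idea (the wrapper type and helpers are illustrative; the
ebtree calls are the ones named above, and the include path depends on
where the tree code lives):

    #include <stdlib.h>
    #include <string.h>
    #include <ebmbtree.h>

    /* illustrative wrapper: the string lives in the node's flexible key[] */
    struct dir_entry {
        struct ebmb_node node;   /* must stay first: we cast back from it */
    };

    /* index <dir> with its own length (in bits) as the prefix */
    static void index_dir(struct eb_root *root, const char *dir)
    {
        size_t len = strlen(dir);
        struct dir_entry *e = calloc(1, sizeof(*e) + len + 1);

        if (!e)
            return;
        memcpy(e->node.key, dir, len + 1);
        e->node.node.pfx = len * 8;              /* prefix length in bits */
        ebmb_insert_prefix(root, &e->node, len);
    }

    /* return the entry holding the longest indexed prefix of <path>, or NULL */
    static struct dir_entry *lookup_dir(struct eb_root *root, const char *path)
    {
        struct ebmb_node *n = ebmb_lookup_longest(root, path);

        return n ? (struct dir_entry *)n : NULL;
    }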
The last fix addressed the issue for inlined patterns, but it was not
enough because the flags are lost as well when updating patterns
dynamically over the CLI.
Also if the same file was used once with -i and another time without
-i, their references would have been merged and both would have used
the same matching method.
It appears that the patterns have two types of flags. The first
ones are relative to the pattern matching, and the second are
relative to the pattern storage. The pattern matching flags are
the same for all the patterns of one expression. Now they are
stored in the expression. The storage flags are information
returned by the pattern matching function. This information is
relative to each entry and is stored in the "struct pattern".
Now, the expression matching flags are forwarded to the parse
and index functions. These flags are stored during the
configuration parsing, and they are used during the parse and
index actions.
This issue was introduced in dev23 with the major pattern rework,
and is a continuation of commit a631fc8 ("BUG/MAJOR: patterns: -i
and -n are ignored for inlined patterns"). No backport is needed.
These flags are only passed to pattern_read_from_file() which
loads the patterns from a file. The functions used to parse the
patterns from the current line do not provide the means to pass
the pattern flags so they're lost.
This issue was introduced in dev23 with the major pattern rework,
and was reported by Graham Morley. No backport is needed.
Dmitry Sivachenko reported this nice warning:
src/pattern.c:2243:43: warning: if statement has empty body [-Wempty-body]
if (&ref2->list == &pattern_reference);
^
src/pattern.c:2243:43: note: put the semicolon on a separate line to silence
this warning
It was merged as is with the code from commit af5a29d ("MINOR: pattern:
Each pattern is identified by unique id").
So it looks like we can reassign an ID which is still in use because of
this.
This function is used to dynamically update all the patterns
attached to one file. This function is atomic. All parsing or indexation
failures are reported in the haproxy logs.
The ACL changes made in the last patchset force the execution
of each pattern matching function. The function pat_match_nothing
was never meant to be executed; it was just used as a flag that
was checked by the ACL execution code. Now this function is
executed and always returns false.
This patch makes it work as expected. Now, it returns the boolean
status of the received sample just as was done previously in the
ACL code.
This bug is a part of the patchset just merged. It does not need
to be backported.
The function str2net runs a DNS resolution if a valid IP address cannot
be parsed. The DNS function used is the standard libc function, and it
performs a blocking network request.
Such a blocking request is not compatible with the haproxy
architecture.
str2net() is used at runtime through the CLI "socket".
This patch removes the DNS resolution at runtime.
The indexation functions now accept duplicates. This way it is possible
to always have some consistency between lists and trees. The "add" command
will always add regardless of any previous existence. The new entry will
not be used because both trees and lists retrieve keys in insertion order.
Thus the "add" operation will always succeed (as long as there is enough
memory).
The pointer <regstr> is only used to compare and identify the original
regex string with the patterns. Now the patterns have a reference map
containing this original string. It is useless to store this value
twice.
Before this patch, this function tried to add values in best-effort mode. If
the parsing of a value failed, the operation continued until the end.
Now, this function stops on the first error and leaves the pattern in a
coherent state.
This patch adds a new display type. This display returns an allocated string;
when the string is flushed into the buffers, it is freed. This makes it possible
to return the content of "memprintf(err, ...)" messages.
The pat_ref_add functions have been changed to return errors.
The format of ACL files is not the same as the format of map
files. In some cases, the same file can be used, but this is ambiguous
for the user because the patterns are not the expected ones.
The acl and map functions do the same work when parsing files. This
patch merges this code into a single one.
Note that the function map_read_entries_from_file() in the file "map.c"
is moved to the function pat_ref_read_from_file_smp() in the file
"pattern.c". The code of this function is not modified; only the
name and the argument order have changed.
The find_smp function searches for the sample using the value of the
pat_ref_elt pointer. The pat_find_smp_* functions are no longer used. The
function pattern_find_smp() knows all pattern indexation methods and can
look up the sample in any of them.
All the pattern delete functions can use their reference to the original
"struct pat_ref_elt" to find the element to remove. The functions
pat_del_list_str() and pat_del_meth() were deleted because, after
applying this modification, they have the same code as pat_del_list_ptr().
Now, each pattern entry knows the original "struct pat_ref_elt" from
which it was built. This makes it possible to delete each pattern entry
without confusion. After this patch, each reference can use its pointer
to be targeted.
The function pattern_add() is only used by pat_ref_push(). This patch
removes the function pattern_add() and merges its code into the function
pat_ref_push().
The pattern references are stored with two identifiers: the unique_id and
the reference.
The reference identifies a file. Each file with the same name points to the
same reference. We can register one file many times. If the file is
modified, all of its dependencies are also modified. The reference can be
used with maps or ACLs.
The unique_id identifies an inline ACL. The unique id is unique for each ACL.
You cannot force the same id in the configuration file, because this
reports an error.
The format of the ACL and map listing through the "socket" has changed
to display these new ids.
This patch extracts the expect_type variable from the "struct pattern" into
"struct pattern_head". This variable is set during the declaration of
ACLs and MAPs. With this change, the function "pat_parse_len()" becomes
useless and can be replaced by "pat_parse_int()".
Implicit ACLs by default rely on the fetch's output type, so let's simply do
the same for all other ones. It has been verified that they all match.
Sometimes the same pattern file is used with the same index, parse and
parse_smp functions. If these two conditions are true, the two patterns
are identical and the same struct can be used.
This patch adds the following socket command line options:
show acl [<id>]
clear acl <id>
get acl <id> <pattern>
del acl <id> <pattern>
add acl <id> <pattern>
The system already used for maps is ported to the pattern functions.
Some functions need to change the sample associated with a pattern. This
new pointer makes it possible to return a pointer to the sample pointer.
The caller can use or change the value.
This commit adds a delete function for patterns. It looks up all
instances of the pattern to delete and deletes them all. The fetch
keyword declarations have been extended to point to the appropriate
delete function.
This commit adds a second tree node in the pattern struct and uses it to
index IPv6 addresses. This ports the feature already used with the lists.
If an IPv4 sample does not match the IPv4 tree, it is converted to IPv6 by
prefixing the IPv4 address with "::ffff:"; after this operation, the match
function tries a lookup in the IPv6 tree. If an IPv6 sample does not match
the IPv6 tree, IPv6 addresses of the forms "2002:IPv4", "::ffff:IPv4" and
"::0000:IPv4" are converted to IPv4 addresses; after this operation, the
match function tries a lookup in the IPv4 tree.
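The IPv4 to IPv6 mapping mentioned above can be sketched like this (an
illustrative helper, not the haproxy function):

    #include <string.h>
    #include <netinet/in.h>

    /* build the "::ffff:a.b.c.d" mapped form of an IPv4 address so the
     * sample can also be looked up in the IPv6 tree.
     */
    static void map_v4_to_v6(struct in6_addr *v6, const struct in_addr *v4)
    {
        memset(v6, 0, sizeof(*v6));
        v6->s6_addr[10] = 0xff;
        v6->s6_addr[11] = 0xff;
        memcpy(&v6->s6_addr[12], &v4->s_addr, 4);
    }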
The match function knows the format of the pattern. The pattern can be
stored in a list or in a tree. The pattern matching function itself uses
the right entry point and indexation type.
Each pattern matching function returns the struct pattern that matched. If
the flag "fill" is set, the struct pattern is filled, otherwise the
content of this struct must not be used.
With this feature, the general pattern matching function no longer needs
exceptions for building the "struct pattern".
Before this commit, the pattern_exec_match() function returned the
associated sample, the associated struct pattern or the associated struct
pattern_tree. This was complex to use, because the caller had to check
which type of information was returned.
Now the function always returns a "struct pattern". If <fill> is not set,
only the value of the pointer can be used as a boolean (NULL or not). If
<fill> is set, you can use the <smp> pointer and the pattern
information.
If information must be duplicated, it is stored in the trash buffer.
Otherwise, the pattern can point to existing strings.
The operations applied on types SMP_T_CSTR and SMP_T_STR are the same,
but the check code and the declarations are duplicated, because actions
must be declared for both SMP_T_C* and SMP_T_*. The declared actions and
checks are the same; this complicates the code. Only the "conv" functions
can change from "C*" to "*".
Now, if a function needs to modify the input string, it can call the new
function smp_dup(), which duplicates the data into a trash buffer.
The pattern parse functions put the parsed result in a "struct pattern"
without allocating memory. If the pattern must reference the input data
without changes, the pattern points to the parsed string. If buffers are
needed to store translated data, the trash buffer is used. The indexation
function allocates the memory later if it is needed.
Before this patch, the indexation function checked the declared pattern
matching function and indexed the data according to that function. This
makes it hard to add new indexation modes.
This commit adds dedicated indexation functions. Each struct pattern is
associated with one indexation function. This function indexes data
according to the type of pattern and the type of match.
This commit separates the "struct list" used for chaining from the "struct
pattern" which contains the pattern data. Later, this change will make it
possible to manipulate lists and trees with the same "struct pattern".
Each pattern parser takes only one string. This change is propagated to the
function prototype of "pattern_register()". Now, it is called with just one
string and no longer needs to browse the array of args.
After the previous patches, the "pat_parse_strcat()" function disappears,
and the "pat_parse_int()" and "pat_parse_dotted_ver()" functions no longer
use the "opaque" argument and take only one string as input.
So, after this patch, each pattern parser no longer uses the opaque
variable and takes only one string as input. This patch changes the
prototype of the pattern parsing functions.
Now, the "char **args" is replaced by a "char *arg", the "int *opaque"
is removed, and these functions return 1 on success and 0 on failure.
The goal of this patch is to simplify the prototype of the
"pat_parse_*()" functions. I want to replace the argument "char
**args" with a simple "char *arg" and remove the "opaque" argument.
"pat_parse_int()" and "pat_parse_dotted_ver()" are the only pattern
parsers using the "opaque" argument and using more than one string
argument from the char **args. These specificities are only used with
ACLs. Other systems using these pattern parsers (MAP and CLI) just use
one string for describing a range.
These two functions can read a range, but the min and the max must be
specified. This patch extends the syntax to describe a range with an
implicit min or max. This is used for operators like "lt", "le", "gt"
and "ge". The syntax is the following:
":x" -> no min to "x"
"x:" -> "x" to no max
This patch moves the parsing of the comparison operator from the
functions "pat_parse_int()" and "pat_parse_dotted_ver()" to the acl
parser. The acl parser reads the operator and the values and builds a
volatile string readable by the functions "pat_parse_int()" and
"pat_parse_dotted_ver()". The transformation is done with these rules:
If the parser is "pat_parse_int()":
"eq x" -> "x"
"le x" -> ":x"
"lt x" -> ":y" (with y = x - 1)
"ge x" -> "x:"
"gt x" -> "y:" (with y = x + 1)
If the parser is "pat_parse_dotted_ver()":
"eq x.y" -> "x.y"
"le x.y" -> ":x.y"
"lt x.y" -> ":w.z" (with w.z = x.y - 1)
"ge x.y" -> "x.y:"
"gt x.y" -> "w.z:" (with w.z = x.y + 1)
Note that, if "y" is not present, assume that is "0".
Now "pat_parse_int()" and "pat_parse_dotted_ver()" accept only one
pattern and the variable "opaque" is no longer used. The prototype of
the pattern parsers can be changed.
This patch removes the limit of 32 groups. It also makes it possible to use
the standard "pat_parse_str()" function in place of "pat_parse_strcat()".
The "pat_parse_strcat()" function is no longer used and is removed. Before
this patch, the groups were stored in a bitfield; now they are stored in a
list of strings. The matching is slower, but the number of groups is
low and generally the list of allowed groups is short.
The fetch function "smp_fetch_http_auth_grp()", used with the name
"http_auth_group", returns a valid username. It can be used as a string
for displaying the username or with the acl "http_auth_group" for checking
the group of the user.
Maybe the names of the ACL and fetch methods are no longer suitable, but
I keep the current names to preserve compatibility with existing
configurations.
The function "userlist_postinit()" is created from verification code
stored in the big function "check_config_validity()". The code is
adapted to the new authentication storage system and it is moved in the
"src/auth.c" file. This function is used to check the validity of the
users declared in groups and to check the validity of groups declared
on the "user" entries.
This resolve function is executed before the check of all proxy because
many acl needs solved users and groups.
This is a regression introduced by the patches "MINOR: pattern: Each
pattern sets the expected input type" and "MEDIUM: acl: Last patch
change the output type". The expected value is SMP_T_CSTR in place of
SMP_T_UINT.
This bug impacts all the ACLs using the parser "pat_parse_dotted_ver()".
The two ACLs are "req_ssl_ver()" and "req.ssl_ver()".
This is a recent bug, no backport is needed.
The pat_parse_* functions must return 0 on failure and the number of
elements consumed from **text otherwise. The function pat_parse_bin()
returned 0 or the parsed length. This caused a segfault. I just apply the
double "!" operator to the result of the function pat_parse_bin() so that
the return value matches the expected value.
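The shape of the fix (sketch, with an illustrative name for the underlying
helper that returns the parsed length):

    int len;

    len = parse_the_bin(text, pattern, err);   /* returns the parsed length */

    /* normalize into the 0/1 convention callers expect: 1 means exactly one
     * element was consumed, 0 means failure.
     */
    return !!len;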
If the pattern is set as case-insensitive, the string comparison
is executed twice: the first time is a case-insensitive comparison, the
second a case-sensitive one.
This is a recent bug, no backport is needed.
A null pointer assignment was missing after a free in commit 7148ce6 (MEDIUM:
pattern: Extract the index process from the pat_parse_*() functions), causing
a double free after loading a file of string patterns.
This bug was introduced in 1.5-dev20, no backport is needed.
Thanks to Sander Klein for reporting this bug and providing the config
needed to trigger it.
Gcc 3.4 warns that mask may be used uninitialized in pattern.c. This
is wrong since it's used in the same condition as its assignment,
although it's not necessarily obvious for the compiler. Fix this by
initializing the value.
This was introduced by recent commit 01cdcd4a so no backport is needed.