Without this, kube-router would end up sharing the index between IPv4
and IPv6, which would cause it to error out when one incremented beyond
the number of rules that actually existed in the chain.
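A minimal sketch of the idea, with hypothetical names (the real
kube-router code differs): keep one rule index per protocol family so
the two chains never share a counter.

    // perFamilyIndex tracks the next rule position separately for each
    // protocol family so the IPv4 and IPv6 chains never share a counter.
    type perFamilyIndex struct {
        v4 int
        v6 int
    }

    // next advances and returns the index for the requested family only,
    // so one family's counter can never run past the end of the other
    // family's chain.
    func (p *perFamilyIndex) next(isIPv6 bool) int {
        if isIPv6 {
            p.v6++
            return p.v6
        }
        p.v4++
        return p.v4
    }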
This change allows defining two cluster CIDRs for compatibility with
Kubernetes dual-stack, with the assumption that the two CIDRs are
usually one IPv4 and one IPv6.
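As a rough illustration of that assumption, the sketch below
(hypothetical helper, standard library only) splits a comma-separated
pair of CIDRs and classifies each one as IPv4 or IPv6:

    import (
        "net"
        "strings"
    )

    // splitClusterCIDRs parses up to two comma-separated cluster CIDRs
    // and returns them keyed by family, assuming at most one IPv4 and
    // one IPv6 CIDR are supplied.
    func splitClusterCIDRs(cidrs string) (v4, v6 *net.IPNet, err error) {
        for _, c := range strings.Split(cidrs, ",") {
            _, ipNet, parseErr := net.ParseCIDR(strings.TrimSpace(c))
            if parseErr != nil {
                return nil, nil, parseErr
            }
            if ipNet.IP.To4() != nil {
                v4 = ipNet
            } else {
                v6 = ipNet
            }
        }
        return v4, v6, nil
    }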
Signed-off-by: Michal Rostecki <vadorovsky@gmail.com>
Annotations were taken into account during startup, but after they were
advertised, the effect of annotations was only additive, because we were
only tracking the current state of VIPs that should be advertised and
not taking into account VIPs that should be withdrawn for anything other
than service locality.
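A minimal sketch of the withdrawal side (names are hypothetical): by
remembering what was advertised last time, the set of VIPs to withdraw
is the difference between the previous and the desired state, not just
the locality-based cases.

    // vipsToWithdraw returns the VIPs that were advertised previously
    // but are no longer desired, so removing an annotation results in an
    // actual withdrawal instead of a no-op.
    func vipsToWithdraw(previous, desired map[string]bool) []string {
        var withdraw []string
        for vip := range previous {
            if !desired[vip] {
                withdraw = append(withdraw, vip)
            }
        }
        return withdraw
    }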
Fixes #1491
Rather than setting BGP Graceful Restart on both IPv4 and IPv6
regardless of which family is enabled, check the current mode via
nrc.isIpv6 and only set it for the appropriate family.
Note that this mode is exclusive, as the current portions of the NRC
kube-router code are only meant to work with IPv4 or IPv6, not both at
the same time.
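A sketch of the selection logic (plain strings here rather than the
actual BGP API types):

    // gracefulRestartFamilies returns the single address family that
    // Graceful Restart should be configured for, mirroring the exclusive
    // IPv4-or-IPv6 mode described above.
    func gracefulRestartFamilies(isIPv6 bool) []string {
        if isIPv6 {
            return []string{"ipv6-unicast"}
        }
        return []string{"ipv4-unicast"}
    }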
Fixes #1323
Changes the custom import reject annotation support to block not only
the given subnet exactly, but also all subnets within the given subnet.
For example, with this change 10.100.100.0/24 is blocked when the
customimportreject annotation contains 10.100.0.0/16.
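A minimal sketch of the containment check (hypothetical helper, using
net/netip from the standard library):

    import "net/netip"

    // rejectedByAnnotation reports whether pfx is a rejected prefix
    // itself or any subnet of one, e.g. 10.100.100.0/24 is rejected when
    // the annotation lists 10.100.0.0/16.
    func rejectedByAnnotation(pfx netip.Prefix, rejected []netip.Prefix) bool {
        for _, r := range rejected {
            if r.Contains(pfx.Addr()) && pfx.Bits() >= r.Bits() {
                return true
            }
        }
        return false
    }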
Added the following items to the original logic (a sketch follows the list):
* Added map route entry deletion on withdrawal so that the system
doesn't incorrectly sync it back to the kernel's routing table
* Added an immediate route sync upon BGP path receive
* Added a mutex to ensure that deleted routes aren't accidentally synced
back to the system
* Added stopCh and wg (wait group) handling
* Increased the default sync time from 15 seconds to 1 minute since this
scenario is unlikely and netlink calls could potentially be burdensome
in large clusters.
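A rough sketch of how those pieces fit together (types and names are
simplified stand-ins, not the actual kube-router implementation):

    import "sync"

    type route struct{ dst, gw string } // simplified stand-in for a kernel route

    // routeSyncer holds the desired routes behind a mutex so a withdrawn
    // route cannot race with the periodic sync and get re-installed.
    type routeSyncer struct {
        mu     sync.Mutex
        routes map[string]route
    }

    // withdraw deletes the map entry so later syncs no longer push the
    // route back into the kernel's routing table.
    func (rs *routeSyncer) withdraw(dst string) {
        rs.mu.Lock()
        defer rs.mu.Unlock()
        delete(rs.routes, dst)
    }

    // addAndSync records a received BGP path and syncs immediately
    // rather than waiting for the (now one-minute) periodic sync.
    func (rs *routeSyncer) addAndSync(dst string, r route) {
        rs.mu.Lock()
        defer rs.mu.Unlock()
        rs.routes[dst] = r
        rs.syncLocked()
    }

    func (rs *routeSyncer) syncLocked() {
        // would apply rs.routes to the kernel via netlink; omitted here
    }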
This reverts commit 22b031beaa3393f8f02812242a9f637ce525b4eb.
@MikeSpreitzer pointed out that these metrics are already present in the
histogram type as *_count and *_sum, so these two added metrics are just
duplicates. I've also verified in my own environments that these metric
values are identical to the ones already carried in the histogram.
Previously, we would iterate over rulesFromNode, but then check each
rule against the entirety of the rulesNeeded hash. This resulted in the
loop breaking as soon as it found any matching rule from the host,
rather than breaking only when it matched the rule that we were
currently processing.
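The corrected shape, roughly (identifiers follow the description above,
details simplified): look for the specific rule being processed and only
then stop the inner loop.

    // missingRules returns the needed rules that are not present on the
    // node; the inner loop breaks only when the rule currently being
    // processed is found, not when any needed rule matches.
    func missingRules(rulesNeeded map[string]bool, rulesFromNode []string) []string {
        var missing []string
        for needed := range rulesNeeded {
            found := false
            for _, hostRule := range rulesFromNode {
                if hostRule == needed {
                    found = true
                    break
                }
            }
            if !found {
                missing = append(missing, needed)
            }
        }
        return missing
    }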
When we receive service or endpoint updates from Kubernetes, we process
a type of partial sync because the service and endpoints have already
been updated by the handler. However, previously, this partial update
did not include updating the hairpinning rules for services.
This would cause hairpinning changes to be delayed until the next full
sync or until a kube-router restart. This change adds hairpinning to the
partial service sync flow.
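A minimal sketch of the new flow (all names hypothetical): the
handler-driven partial sync now refreshes hairpin rules too, instead of
leaving them to the next full sync.

    type serviceInfo struct{ name string } // stand-in for the real service data

    type controller struct{}

    func (c *controller) syncIpvsService(svc serviceInfo) error { return nil }  // stub
    func (c *controller) syncHairpinRules(svc serviceInfo) error { return nil } // stub

    // onServiceUpdate is the partial-sync path run from the service and
    // endpoint handlers; hairpinning is now part of it.
    func (c *controller) onServiceUpdate(svc serviceInfo) error {
        if err := c.syncIpvsService(svc); err != nil {
            return err
        }
        return c.syncHairpinRules(svc)
    }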