The problem here stems from the fact that when netpol generates its list of expected ipsets, it includes the inet6:
prefix; however, when the proxy and routing controllers sent their lists of expected ipsets, they did not. This
meant that no matter how we handled it in ipset.go, it was wrong for one use case or the other.
I decided to standardize on the netpol way of sending the list of expected ipset names so that BuildIPSetRestore() can
function in the same way for all invocations.
Attempt to filter out sets that we are not authoritative for to avoid
race conditions with other operators (like Istio) that might be
attempting to modify ipsets at the same time.
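As a rough illustration of the shared convention (the set names and the kube-router prefix check below are hypothetical sketches, not the actual BuildIPSetRestore() internals):

```go
package main

import (
	"fmt"
	"strings"
)

// expectedSetName shows the convention the controllers now share: IPv6 sets
// carry the "inet6:" prefix so that BuildIPSetRestore() can treat every
// caller the same way. Hypothetical helper, not the real function.
func expectedSetName(name string, isIPv6 bool) string {
	if isIPv6 {
		return "inet6:" + name
	}
	return name
}

// isOurs filters out sets that kube-router is not authoritative for, so we
// don't race with other operators (e.g. Istio) that manage their own ipsets.
// The "kube-router-" prefix here is an assumption for illustration.
func isOurs(setName string) bool {
	return strings.HasPrefix(strings.TrimPrefix(setName, "inet6:"), "kube-router-")
}

func main() {
	sets := []string{
		expectedSetName("kube-router-local-ips", false),
		expectedSetName("kube-router-svip", true),
		"istio-inpod-probes", // someone else's set: leave it alone
	}
	for _, s := range sets {
		fmt.Printf("%-30s ours=%v\n", s, isOurs(s))
	}
}
```

Keeping the inet6: prefix in the expected names means the restore logic no longer has to special-case callers by address family.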
When we were using iproute2's CLI, we needed to pass the fwmark as a hex number, so we carried it around as a
string in that format. However, now that we use the netlink library directly, we already have the fwmark in the
form that we need it. So instead of doing all of these string <-> int conversions, let's just keep things simpler.
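A minimal sketch of what this looks like against the vishvananda/netlink API (the mark and table values are just examples):

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// The fwmark stays an integer the whole way through; no hex-string
	// round-tripping is needed once we talk to netlink directly.
	rule := netlink.NewRule()
	rule.Mark = 0x2000 // example mark, not kube-router's actual value
	rule.Table = 77    // example routing table number

	if err := netlink.RuleAdd(rule); err != nil { // requires CAP_NET_ADMIN
		log.Fatalf("failed to add rule: %v", err)
	}
}
```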
When IP rules are evaluated in the netlink library, default routes for
src and dst are represented as nil. This makes them difficult to compare
and requires additional handling.
I filed an issue upstream so that this could potentially get fixed:
https://github.com/vishvananda/netlink/issues/1080 however if it doesn't
get resolved, this should allow us to move forward.
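A hedged sketch of the kind of extra handling this implies when comparing rule selectors (hypothetical helper; it simply treats nil and the zero prefix as equivalent):

```go
package main

import (
	"fmt"
	"net"
)

// sameNet treats a nil *net.IPNet the way the netlink library does for
// default routes: nil and 0.0.0.0/0 (or ::/0) are considered equivalent.
// Hypothetical helper for illustration, not kube-router's exact code.
func sameNet(a, b *net.IPNet) bool {
	isDefault := func(n *net.IPNet) bool {
		if n == nil {
			return true
		}
		ones, _ := n.Mask.Size()
		return ones == 0 && n.IP.IsUnspecified()
	}
	if isDefault(a) || isDefault(b) {
		return isDefault(a) && isDefault(b)
	}
	return a.IP.Equal(b.IP) && a.Mask.String() == b.Mask.String()
}

func main() {
	_, defV4, _ := net.ParseCIDR("0.0.0.0/0")
	_, subnet, _ := net.ParseCIDR("10.0.0.0/24")

	fmt.Println(sameNet(nil, defV4))     // true: both mean "default"
	fmt.Println(sameNet(nil, subnet))    // false
	fmt.Println(sameNet(subnet, subnet)) // true
}
```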
It has proven to be tricky to insert new rules without calling the
designated NewRule() function from the netlink library. Usually attempts
will fail with an "operation not supported" message.
This improves the reliability of rule insertion.
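As a rough illustration of why (the explanation in the comments is my reading of the symptom, not a confirmed root cause):

```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

func main() {
	// A zero-valued literal leaves numeric fields at 0, which the library can
	// treat as intentionally set and encode into the request; the kernel then
	// tends to reject it with "operation not supported".
	literal := &netlink.Rule{Table: 100}

	// NewRule() pre-populates the sentinel defaults the library expects, so
	// only the fields we explicitly set end up in the netlink message.
	rule := netlink.NewRule()
	rule.Table = 100
	rule.Priority = 1000

	fmt.Printf("literal: %+v\nNewRule: %+v\n", literal, rule)

	if err := netlink.RuleAdd(rule); err != nil { // requires CAP_NET_ADMIN
		fmt.Println("RuleAdd:", err)
	}
}
```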
Avoiding direct exec calls to userspace binaries has long been a
principle of kube-router. When kube-router was first written, the netlink
library was missing significant features that forced us to exec out.
However, now netlink seems to have most of the functionality that we
need.
This converts all of the places where it is possible to use the netlink
library directly instead of exec'ing out.
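For example, a route that used to be added by exec'ing out to iproute2 can be built as a struct and handed to the library directly (illustrative addresses; requires CAP_NET_ADMIN):

```go
package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Previously something like:
	//   exec.Command("ip", "route", "add", "10.0.2.0/24", "via", "192.168.1.1")
	// With the netlink library we describe the same route as a struct.
	_, dst, err := net.ParseCIDR("10.0.2.0/24") // example destination
	if err != nil {
		log.Fatal(err)
	}
	route := &netlink.Route{
		Dst: dst,
		Gw:  net.ParseIP("192.168.1.1"), // example gateway
	}
	if err := netlink.RouteAdd(route); err != nil {
		log.Fatalf("failed to add route: %v", err)
	}
}
```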
Setting rp_filter to 2 when it is 0 overrides its status so that it is always enabled (in loose mode).
This behavior could break some networking solutions, as it makes packet admission rules more strict.
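A minimal sketch of the guard this implies, assuming a hypothetical helper and the standard /proc/sys path (kube-router's real sysctl handling is more involved):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureLooseRPFilter bumps strict mode (1) to loose mode (2) but leaves a
// disabled rp_filter (0) alone, since forcing 0 -> 2 would silently turn the
// filter on and could break other networking solutions.
func ensureLooseRPFilter(iface string) error {
	path := fmt.Sprintf("/proc/sys/net/ipv4/conf/%s/rp_filter", iface)
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	switch strings.TrimSpace(string(raw)) {
	case "1":
		return os.WriteFile(path, []byte("2"), 0o644)
	default:
		// 0 (disabled) or already 2 (loose): leave it as-is.
		return nil
	}
}

func main() {
	if err := ensureLooseRPFilter("eth0"); err != nil { // example interface
		fmt.Fprintln(os.Stderr, err)
	}
}
```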
Over time, user feedback has been that our interpretation of how the
kube-router service.local annotation interacts with the internal traffic
policy is too restrictive. In particular, aligning it with a Local
internal traffic policy goes too far. This commit changes that posture by
equating the service.local annotation with External Traffic Policy Local
and Internal Traffic Policy Cluster.
This means that when service.local is set the following will be true:
* ExternalIPs / LoadBalancer IPs will only be available on a node that
hosts the workload
* ExternalIPs / LoadBalancer IPs will only be BGP advertised (when
enabled) by nodes that host the workload
* Services will have the same posture as External Traffic Policy set to
local
* ClusterIPs will be available on all nodes for LoadBalancing
* ClusterIPs will only be BGP advertised (when enabled) by nodes that
host the workload
* Cluster IP services will have the same posture as Internal Traffic
Policy set to cluster
For anyone desiring the original functionality of the service.local
annotation that has been in place since kube-router v2.1.0, all that
would need to be done is to set `internalTrafficPolicy` to Local as
described here: https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/
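A sketch of the resulting per-node posture from the list above (illustrative helper only; the real logic is spread across the proxy and routing controllers):

```go
package main

import "fmt"

// postureWhenServiceLocal summarizes the list above for a single node when the
// kube-router.io/service-local annotation is set. Illustrative only.
func postureWhenServiceLocal(hasLocalEndpoints bool) (extIPsUsable, advertiseExtIPs, clusterIPUsable, advertiseClusterIP bool) {
	extIPsUsable = hasLocalEndpoints       // like externalTrafficPolicy: Local
	advertiseExtIPs = hasLocalEndpoints    // only workload-hosting nodes advertise (when BGP advertisement is enabled)
	clusterIPUsable = true                 // ClusterIPs stay available on every node for load balancing
	advertiseClusterIP = hasLocalEndpoints // but only workload-hosting nodes advertise them
	return
}

func main() {
	extIPs, advExt, cip, advCIP := postureWhenServiceLocal(false)
	fmt.Println(extIPs, advExt, cip, advCIP) // false false true false on a node without the workload
}
```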
Make the health controller more robust and extensible by adding
constants for heartbeats instead of arbitrary three-character strings
that are easy to get wrong.
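Something along these lines, with hypothetical constant names rather than the exact identifiers used in the code:

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical heartbeat component identifiers; previously each call site
// repeated a short ad-hoc string that was easy to mistype.
const (
	HeartbeatCompNSC = "NetworkServicesController"
	HeartbeatCompNPC = "NetworkPolicyController"
	HeartbeatCompNRC = "NetworkRoutingController"
)

// heartbeat is a stand-in for the health controller's heartbeat message.
type heartbeat struct {
	Component string
	SentAt    time.Time
}

func main() {
	fmt.Printf("%+v\n", heartbeat{Component: HeartbeatCompNRC, SentAt: time.Now()})
}
```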
This prepares the way for broader refactors in the way that we handle
nodes by:
* Separating frequently used node logic from the controller creation
steps
* Keeping reused code DRY-er
* Adding interface abstractions for key groups of node data and starting
to rely on those more rather than on concrete types (see the sketch after
this list)
* Separating node data from the rest of the controller data structure so
that smaller definitions of data can be passed around to the functions
that need them, rather than always passing the entire controller, which
contains more data / surface area than most functions need.
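For instance, the interface abstractions might look roughly like this (hypothetical names and methods, not the exact ones in the codebase):

```go
package main

import (
	"fmt"
	"net"
)

// Hypothetical interface abstractions for groups of node data. Functions can
// depend on just the slice of node information they need instead of the whole
// controller struct.
type NodeIPAware interface {
	GetPrimaryNodeIP() net.IP
	GetNodeIPAddrs() []net.IP
}

type NodeNameAware interface {
	GetNodeName() string
}

// localNode is a minimal concrete implementation for illustration.
type localNode struct {
	name string
	ips  []net.IP
}

func (n *localNode) GetPrimaryNodeIP() net.IP {
	if len(n.ips) > 0 {
		return n.ips[0]
	}
	return nil
}

func (n *localNode) GetNodeIPAddrs() []net.IP { return n.ips }
func (n *localNode) GetNodeName() string      { return n.name }

// advertiseFrom only needs the node's IPs, so it accepts the narrow interface
// rather than a whole controller.
func advertiseFrom(node NodeIPAware) {
	fmt.Println("advertising from", node.GetPrimaryNodeIP())
}

func main() {
	n := &localNode{name: "node-a", ips: []net.IP{net.ParseIP("10.0.0.1")}}
	advertiseFrom(n)
	fmt.Println("node name:", NodeNameAware(n).GetNodeName())
}
```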
This fixes the problem where, if a network policy is applied before any
communication between two pods has happened, all subsequent communication
fails because the two pods aren't able to discover each other.
Currently, kube-router just lists all IPVS services on the host and then
adds the load balancing service IPs to kube-router-svip blindly.
However, this assumes that the only IPVS entries on the host are ones
that kube-router has originated and that the user isn't running their
own IPVS services.
We want to make sure that we only create rules for services that we are
authoritative for, so we now double-check that a service is one of ours
before adding rules that may affect it.
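A hedged sketch of the extra check, using a hypothetical map of the services kube-router itself built from the Kubernetes API:

```go
package main

import "fmt"

// ipvsService is a stand-in for an IPVS virtual service discovered on the host.
type ipvsService struct {
	Address  string
	Port     uint16
	Protocol string
}

// key is a hypothetical identity for a virtual service.
func (s ipvsService) key() string {
	return fmt.Sprintf("%s/%s:%d", s.Protocol, s.Address, s.Port)
}

func main() {
	// Services kube-router built from the Kubernetes API (illustrative data).
	ours := map[string]bool{
		"tcp/10.96.0.10:53": true,
	}

	// Everything currently present in IPVS on the host, which may include
	// entries created by the user or by other tooling.
	onHost := []ipvsService{
		{Address: "10.96.0.10", Port: 53, Protocol: "tcp"},
		{Address: "192.168.50.5", Port: 8080, Protocol: "tcp"}, // not ours
	}

	for _, svc := range onHost {
		if !ours[svc.key()] {
			// Skip: don't add its VIP to the kube-router-svip ipset.
			continue
		}
		fmt.Println("adding to kube-router-svip:", svc.Address)
	}
}
```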
Instead of using the hairpin controller that I created, rely upon the
`hairpinMode` option of the bridge CNI plugin, which does this job
better. See https://www.kube-router.io/docs/user-guide/#hairpin-mode for
more information.
kube-router v2.X introduced the idea of iptables and ipset handlers that
allow kube-router to be dual-stack capable. However, the cleanup logic
for the various controllers was not properly ported when this happened.
When the cleanup functions run, their controllers often have not been
fully initialized, since cleanup should not be dependent on kube-router
being able to reach a kube-apiserver.
As a result, they were missing these handlers, and they either silently
ended up as no-ops or, worse, ran into nil pointer failures.
This corrects that, so that kube-router no longer fails this way and
cleans up as it had in v1.X.
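A hedged sketch of the idea using the coreos go-iptables library; the surrounding cleanup structure and handler map are illustrative, and the real cleanup also builds per-family ipset handlers:

```go
package main

import (
	"fmt"
	"log"

	"github.com/coreos/go-iptables/iptables"
)

// cleanupHandlers builds per-family iptables handlers on demand so that
// cleanup does not depend on a fully initialized controller (or on reaching
// a kube-apiserver).
func cleanupHandlers() (map[string]*iptables.IPTables, error) {
	handlers := make(map[string]*iptables.IPTables)

	v4, err := iptables.NewWithProtocol(iptables.ProtocolIPv4)
	if err != nil {
		return nil, fmt.Errorf("failed to create IPv4 iptables handler: %w", err)
	}
	handlers["ipv4"] = v4

	v6, err := iptables.NewWithProtocol(iptables.ProtocolIPv6)
	if err != nil {
		return nil, fmt.Errorf("failed to create IPv6 iptables handler: %w", err)
	}
	handlers["ipv6"] = v6

	return handlers, nil
}

func main() {
	handlers, err := cleanupHandlers()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("initialized %d handlers for cleanup\n", len(handlers))
}
```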
rp_filter on RedHat based OS's is often set to 1 instead of 2 which is
more permissive and allows the outbound route for traffic to differ from
the route of incoming traffic.
Upgrades to Go 1.21.7 now that Go 1.20 is no longer being maintained.
It also resolves the race conditions that we were seeing with the BGP
server tests when we upgraded from 1.20 -> 1.21. This appears to be
because of an efficiency change in 1.21 that caused BGP to write to the
events at the same time that the test harness was trying to read from
them. We solved this in a coarse manner by adding surrounding mutexes to
the test code.
Additionally, upgraded dependencies.
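The coarse fix looks roughly like this in the test code (simplified stand-in; the real tests read GoBGP server events):

```go
package main

import (
	"fmt"
	"sync"
)

// eventRecorder is a simplified stand-in for the shared slice the BGP server
// tests read while the server is still appending to it.
type eventRecorder struct {
	mu     sync.Mutex
	events []string
}

func (r *eventRecorder) add(ev string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.events = append(r.events, ev)
}

func (r *eventRecorder) snapshot() []string {
	r.mu.Lock()
	defer r.mu.Unlock()
	return append([]string(nil), r.events...)
}

func main() {
	rec := &eventRecorder{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			rec.add(fmt.Sprintf("peer-event-%d", i))
		}(i)
	}
	wg.Wait()
	fmt.Println(rec.snapshot())
}
```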
NodePort Health Check has long been part of the Kubernetes API, but
kube-router hasn't implemented it in the past. This is meant to be a
port that is assigned by the kube-controller-manager for LoadBalancer
services that have a traffic policy of `externalTrafficPolicy=Local`.
When set, the k8s networking implementation is meant to open a port and
provide HTTP responses that inform parties external to the Kubernetes
cluster about whether or not a local endpoint exists on the node. It
should return a 200 status if the node contains a local endpoint and
return a 503 status if the node does not contain a local endpoint.
This allows applications outside the cluster to choose their endpoint in
such a way that their source IP could be preserved. For more details
see:
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer
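A minimal sketch of the expected behavior (the port, path, and endpoint lookup are illustrative; the real port comes from the service's assigned healthCheckNodePort):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// hasLocalEndpoints is a placeholder for "does this node host an endpoint for
// the service?"; kube-router would derive this from its endpoint data.
func hasLocalEndpoints() bool {
	return true
}

func main() {
	mux := http.NewServeMux()
	// The path here is illustrative; the contract is about the status codes.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if hasLocalEndpoints() {
			w.WriteHeader(http.StatusOK) // 200: a local endpoint exists on this node
			fmt.Fprintln(w, "local endpoints available")
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable) // 503: no local endpoint here
		fmt.Fprintln(w, "no local endpoints")
	})

	// In practice the listen port is the healthCheckNodePort assigned by the
	// kube-controller-manager; 32000 is just an example.
	log.Fatal(http.ListenAndServe(":32000", mux))
}
```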
Add isReady, isServing, and isTerminating to internal EndpointSlice
struct so that downstream consumers have more information about the
service to make decisions later on.
In order to be compliant with upstream network implementation
expectations we choose to proxy an endpoint as long as it is either
ready OR serving. This means that endpoints that are terminating will
still be proxied which makes kube-router conformant with the upstream
e2e tests.
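In sketch form, with field names mirroring the description above rather than the exact internal struct:

```go
package main

import "fmt"

// endpointInfo mirrors the fields described above on kube-router's internal
// endpoint representation (illustrative).
type endpointInfo struct {
	isReady       bool
	isServing     bool
	isTerminating bool
}

// shouldProxy keeps an endpoint as long as it is ready OR serving, so
// terminating-but-still-serving endpoints continue to receive traffic,
// matching the upstream e2e expectations.
func shouldProxy(ep endpointInfo) bool {
	return ep.isReady || ep.isServing
}

func main() {
	terminating := endpointInfo{isReady: false, isServing: true, isTerminating: true}
	gone := endpointInfo{isReady: false, isServing: false, isTerminating: true}

	fmt.Println(shouldProxy(terminating)) // true: still serving while terminating
	fmt.Println(shouldProxy(gone))        // false
}
```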
Adds support for spec.internalTrafficPolicy and fixes support for
spec.externalTrafficPolicy so that it only affects external traffic.
Keeps existing support for kube-router.io/service-local annotation which
overrides both to local when set to true. Any other value in this
annotation is ignored.
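A hedged sketch of the resolution order described here (the annotation name comes from the text; the policy plumbing is simplified):

```go
package main

import "fmt"

const serviceLocalAnnotation = "kube-router.io/service-local"

// effectivePolicies resolves the traffic policies for a service: start from
// the spec fields, then let the annotation override both to Local only when
// it is exactly "true" (any other value is ignored). Illustrative only.
func effectivePolicies(annotations map[string]string, externalPolicy, internalPolicy string) (string, string) {
	if annotations[serviceLocalAnnotation] == "true" {
		return "Local", "Local"
	}
	return externalPolicy, internalPolicy
}

func main() {
	fmt.Println(effectivePolicies(map[string]string{serviceLocalAnnotation: "true"}, "Cluster", "Cluster"))
	fmt.Println(effectivePolicies(map[string]string{serviceLocalAnnotation: "yes"}, "Cluster", "Local")) // other values ignored
	fmt.Println(effectivePolicies(nil, "Local", "Cluster"))
}
```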