When resolving EndpointSlice -> Service ownership, do not raise an error if the ownerReference in the metadata does not
agree with the kubernetes.io/service-name label. Instead, just use the service-name label.
This aligns better to the way that other network plugins work.
The problem here stems from the fact that when netpol generated its list of expected ipsets, it included the inet6:
prefix; however, when the proxy and routing controllers sent their lists of expected ipsets, they did not. This
meant that no matter how we handled it in ipset.go, it was wrong for one or the other use case.
I decided to standardize on the netpol way of sending the list of expected ipset names so that BuildIPSetRestore() can
function in the same way for all invocations.
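A minimal sketch of the convention, assuming a simplified restore builder
and made-up set names (not kube-router's exact API): every caller now
passes fully prefixed names, so the restore logic derives the family from
the name instead of needing per-family handling.

```go
package ipsetutil

import (
	"fmt"
	"strings"
)

// buildIPSetRestore sketches a family-agnostic restore builder: the
// family is derived from the inet6: name prefix, so all callers can be
// handled identically. hash:ip is just a placeholder set type.
func buildIPSetRestore(expectedSets []string) string {
	var b strings.Builder
	for _, name := range expectedSets {
		family := "inet"
		if strings.HasPrefix(name, "inet6:") {
			family = "inet6"
			name = strings.TrimPrefix(name, "inet6:")
		}
		fmt.Fprintf(&b, "create %s hash:ip family %s\n", name, family)
	}
	return b.String()
}
```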
Attempt to filter out sets that we are not authoritative for to avoid
race conditions with other operators (like Istio) that might be
attempting to modify ipsets at the same time.
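A sketch of the idea, with a hypothetical prefix list (kube-router's real
naming conventions may differ): only sets whose names match our own
prefixes are considered ours to rewrite or destroy.

```go
package ipsetutil

import "strings"

// ownedPrefixes is a hypothetical list of name prefixes that kube-router
// considers itself authoritative for.
var ownedPrefixes = []string{"kube-router-", "KUBE-SRC-", "KUBE-DST-"}

// isAuthoritative reports whether we own the given set and may safely
// modify or delete it; anything else (e.g. sets created by Istio) is
// left alone to avoid racing with other operators.
func isAuthoritative(setName string) bool {
	name := strings.TrimPrefix(setName, "inet6:")
	for _, p := range ownedPrefixes {
		if strings.HasPrefix(name, p) {
			return true
		}
	}
	return false
}
```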
The rt_tables list is already ordered by priority. Once a table is
found, it should be considered the optimal one, and we should stop
looking for additional tables.
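Conceptually, the lookup reduces to a first-match scan (a sketch; the
function name is illustrative):

```go
package routetables

// findRouteTable returns the first table in the priority-ordered
// rt_tables list that matches the wanted name. Because the list is
// already sorted by priority, the first hit is the optimal one and the
// scan stops there.
func findRouteTable(orderedTables []string, wanted string) (string, bool) {
	for _, t := range orderedTables {
		if t == wanted {
			return t, true // first match is best; don't keep scanning
		}
	}
	return "", false
}
```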
Not making direct exec calls to user binary interfaces has long been a
principle of kube-router. When kube-router was first coded, the netlink
library was missing significant features that forced us to exec out.
However, now netlink seems to have most of the functionality that we
need.
This converts all of the places where we can to use the netlink library
instead of exec calls.
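As an example of the kind of conversion involved, here is a sketch using
the github.com/vishvananda/netlink package; the route values are made up:

```go
package routes

import (
	"net"

	"github.com/vishvananda/netlink"
)

// addPodRoute shows the shape of the change.
// Before: exec.Command("ip", "route", "add", "10.244.1.0/24", "via", "192.168.1.10")
// After: the same operation through the netlink library, with a typed
// error value instead of parsed command output.
func addPodRoute() error {
	_, dst, err := net.ParseCIDR("10.244.1.0/24")
	if err != nil {
		return err
	}
	return netlink.RouteAdd(&netlink.Route{
		Dst: dst,
		Gw:  net.ParseIP("192.168.1.10"),
	})
}
```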
Setting rp_filter to 2 when it is currently 0 overrides the disabled state and leaves reverse path filtering always
enabled (in loose mode). This behavior could break some networking solutions, as it makes packet admission rules
stricter.
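A sketch of the safer behavior (the sysctl path template is the standard
one, but the helper itself is hypothetical): only tighten strict mode (1)
to loose mode (2), and leave interfaces where the admin disabled it (0)
untouched.

```go
package sysctlutil

import (
	"fmt"
	"os"
	"strings"
)

// ensureLooseRPFilter raises rp_filter from strict (1) to loose (2) but
// deliberately leaves 0 (disabled) alone, since force-enabling filtering
// on an interface where the admin disabled it can break other
// networking solutions.
func ensureLooseRPFilter(iface string) error {
	path := fmt.Sprintf("/proc/sys/net/ipv4/conf/%s/rp_filter", iface)
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.TrimSpace(string(raw)) == "1" {
		return os.WriteFile(path, []byte("2"), 0644)
	}
	return nil
}
```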
This prepares the way for broader refactors in the way that we handle
nodes by:
* Separating frequently used node logic from the controller creation
steps
* Keeping reused code DRY-er
* Adding interface abstractions for key groups of node data and starting
to rely on those more rather than on concrete types (see the sketch
after this list)
* Separating node data from the rest of the controller data structure so
that smaller definitions of data can be passed around to functions that
need them, rather than always passing the entire controller, which
contains more data / surface area than most functions need.
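For illustration, this is the kind of narrow interface the refactor
enables; the names here are hypothetical stand-ins, not kube-router's
exact types:

```go
package nodeinfo

import "net"

// NodeIPAware is a hypothetical narrow interface: consumers declare
// that they need node addresses and nothing else.
type NodeIPAware interface {
	GetPrimaryNodeIP() net.IP
	GetNodeIPv4Addrs() []net.IP
	GetNodeIPv6Addrs() []net.IP
}

// advertiseNodeIPs only depends on node IPs, so it accepts the small
// interface rather than the entire controller struct.
func advertiseNodeIPs(n NodeIPAware) {
	for _, ip := range append(n.GetNodeIPv4Addrs(), n.GetNodeIPv6Addrs()...) {
		_ = ip // advertise ip ...
	}
}
```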
This fixes the problem where, if network policy is applied before any
communication between two pods, all subsequent communication fails
because the two pods aren't able to discover each other.
iptables provides distinct handlers because different commands execute
on different families. As such, in the code, it has become a paradigm to
iterate over the active handlers and execute their functions. However,
ipset does not treat families as distinct in any of the list or save
operations, and a single command handles both families. The only things
that are distinct about ipset families are their type and their name
prefix.
This means that operations like ipset.Save(), with its default
implementation, will pull both IPv4 and IPv6 sets into the same struct
and commingle them. This can cause problems down the line for any other
logic that assumes they will be distinct, like iptables.
This adds logic to separate them out on the Save() operation and ensure
that the sets don't become commingled.
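A sketch of the separation step, assuming the document's inet6: naming
convention (the function name is illustrative): after parsing the save
output, sets are bucketed by family so the two never end up commingled.

```go
package ipsetutil

import "strings"

// splitSetsByFamily takes set names parsed from an `ipset save` run and
// buckets them per family, keyed on the inet6: name prefix, so that
// downstream logic can keep treating IPv4 and IPv6 as distinct the same
// way the iptables code does.
func splitSetsByFamily(names []string) (v4, v6 []string) {
	for _, name := range names {
		if strings.HasPrefix(name, "inet6:") {
			v6 = append(v6, name)
		} else {
			v4 = append(v4, name)
		}
	}
	return v4, v6
}
```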
Before, the logic ran like the following in terms of preference:
1. Prefer environment var NODE_NAME
2. Use `os.Hostname()`
3. Fall back to `--hostname-override` passed by the user
This didn't make a whole lot of sense, as `--hostname-override` is
directly, and presumably intentionally, set by the user; therefore it
should be the MOST preferred, not the least preferred. Additionally,
none of the errors encountered were surfaced to the user, so if there
was an error at the API level, that error was swallowed. Now the logic
looks like:
1. Prefer `--hostname-override` if it is set. If it is set and we
weren't able to resolve to a node object, return the error
2. Use environment var NODE_NAME if it is set. If it is set and we
weren't able to resolve to a node object, return the error
3. Fall back to `os.Hostname()`. If we weren't able to resolve to a node
object, then return the error and give the user options
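A condensed sketch of the new ordering; the function name and the node
lookup callback are simplified stand-ins for the real API resolution:

```go
package nodeutil

import (
	"fmt"
	"os"
)

// resolveNode sketches the new preference order. lookupNode stands in
// for resolving a name to a Kubernetes Node object via the API.
func resolveNode(hostnameOverride string, lookupNode func(string) error) error {
	// 1. --hostname-override is explicit user intent, so it wins; its
	//    lookup errors are returned rather than swallowed.
	if hostnameOverride != "" {
		return lookupNode(hostnameOverride)
	}
	// 2. NODE_NAME from the environment, again surfacing any error.
	if name := os.Getenv("NODE_NAME"); name != "" {
		return lookupNode(name)
	}
	// 3. Last resort: the OS hostname, with a hint about the options.
	name, err := os.Hostname()
	if err != nil {
		return err
	}
	if err := lookupNode(name); err != nil {
		return fmt.Errorf("could not resolve node by hostname %q, consider "+
			"setting --hostname-override or NODE_NAME: %w", name, err)
	}
	return nil
}
```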
With the advent of IPv6, naming ipsets gets tricky because IPv6 ipsets
have to be prefixed with inet6:. This commit adds additional utilities
that help users find the correct name of ipsets.
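The utilities are along these lines (a sketch; kube-router's actual
helper names may differ):

```go
package ipsetutil

import "strings"

const ipv6SetPrefix = "inet6:"

// setNameFor returns the name an ipset must carry for the given family.
func setNameFor(base string, isIPv6 bool) string {
	if isIPv6 && !strings.HasPrefix(base, ipv6SetPrefix) {
		return ipv6SetPrefix + base
	}
	return base
}

// baseName strips the IPv6 prefix so callers can compare set names
// independently of family.
func baseName(name string) string {
	return strings.TrimPrefix(name, ipv6SetPrefix)
}
```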
Since version v6.5.0, iproute2 no longer automatically creates the
/etc/iproute2 files, instead preferring to add files to
/usr/lib/iproute2 and, later on, /usr/share/iproute2.
This adds fallback path matching to kube-router so that it can find
/etc/iproute2/rt_tables wherever it is defined instead of just failing.
This also means people running kube-router in containers will need to
change their mounts depending on where this file is located on their
host OS. However, copying this file to `/etc/iproute2` is a legitimate
way to keep its location consistent across a fleet of multiple OS
versions.
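The fallback search looks roughly like this, using the paths listed
above (the helper name is illustrative):

```go
package iproute2util

import (
	"fmt"
	"os"
	"path/filepath"
)

// findRTTablesFile checks the locations iproute2 has used across
// versions, in order of preference, and returns the first rt_tables
// file that exists instead of failing on the legacy path.
func findRTTablesFile() (string, error) {
	candidates := []string{
		"/etc/iproute2",
		"/usr/lib/iproute2",
		"/usr/share/iproute2",
	}
	for _, dir := range candidates {
		p := filepath.Join(dir, "rt_tables")
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("rt_tables not found in any of %v", candidates)
}
```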
Upgrades to Go 1.21.7 now that Go 1.20 is no longer being maintained.
It also resolves the race conditions that we were seeing with BGP
server tests when we upgraded from 1.20 -> 1.21. These appear to be
because some efficiency change in 1.21 caused BGP to write to the
events at the same time that the test harness was trying to read from
them. We solved this in a coarse manner by adding surrounding mutexes
to the test code.
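The coarse fix amounts to something like this (a sketch; the real test
types differ): wrap the shared event buffer in a mutex so the BGP
server's writes and the harness's reads can't interleave.

```go
package bgptest

import "sync"

// eventBuffer sketches a mutex-guarded buffer that serializes the BGP
// server's event writes against the test harness's reads.
type eventBuffer struct {
	mu     sync.Mutex
	events []string
}

func (b *eventBuffer) append(ev string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.events = append(b.events, ev)
}

// snapshot returns a copy so readers never observe a slice the writer
// is still appending to.
func (b *eventBuffer) snapshot() []string {
	b.mu.Lock()
	defer b.mu.Unlock()
	out := make([]string, len(b.events))
	copy(out, b.events)
	return out
}
```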
Additionally, upgraded dependencies.
Differentiate headless services from ClusterIP being None, in
preparation for handling the service.kubernetes.io/headless label. One
might think that handling these is similar, which it sort of is and sort
of isn't. ClusterIP is an immutable field, whereas labels are mutable.
This separates our handling of ClusterIP none-ness from the presence of
the headless label.
When we consider what to do with ClusterIP being None, that is
fundamentally different, because once it is None, the k8s API guarantees
that it won't ever change, whereas the label can be added and removed at
any time.
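The distinction in code looks roughly like this, using the upstream
k8s.io/api types (the helper names are ours, not kube-router's):

```go
package serviceutil

import corev1 "k8s.io/api/core/v1"

// isHeadlessClusterIP is the immutable case: once ClusterIP is None,
// the API guarantees it stays None for the lifetime of the service.
func isHeadlessClusterIP(svc *corev1.Service) bool {
	return svc.Spec.ClusterIP == corev1.ClusterIPNone
}

// hasHeadlessLabel is the mutable case: the label can be added or
// removed at any time, so it must be re-evaluated on every update.
func hasHeadlessLabel(svc *corev1.Service) bool {
	_, ok := svc.Labels["service.kubernetes.io/headless"]
	return ok
}
```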
Before this, we had 2 different ways to interact with ipsets: through
the handler interface, which had the best handling for IPv6 because NPC
heavily utilizes it, and through the ipset struct, which mostly repeated
the handler logic but didn't handle some key things.
NPC utilized the handler functions, and NSC / NRC mostly utilized the
old ipset struct functions. This caused a lot of duplication between the
two groups of functions and also caused issues with proper IPv6
handling.
This commit consolidates the two sets of usage into just the handler
interface. This greatly simplifies how the controllers interact with
ipsets and it also reduces the logic complexity on the ipset side.
This also fixes up some inconsistency in how we handled IPv6 ipset
names. ipset likes them to be prefixed with inet6:, but we weren't
always doing this in a way that made sense or was consistent across all
functions in the ipset struct.
With the advent of IPv6 integrated into the NSC, we no longer get all
IPs from Endpoints, but rather just the primary IP of the pod (which is
often, but not always, the IPv4 address).
In order to get all possible endpoint addresses for a given service, we
need to switch to using EndpointSlice, which nicely groups addresses
into IPv4 and IPv6 by AddressType, and which gives us more information
about the endpoint status via attributes for serving and terminating,
instead of just ready or not ready.
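A sketch of what consuming EndpointSlices looks like, using the upstream
discovery/v1 types (the grouping helper itself is ours):

```go
package endpointsutil

import discoveryv1 "k8s.io/api/discovery/v1"

// groupAddresses splits a service's EndpointSlice addresses by family
// and skips endpoints that are not serving. The slices themselves are
// typically listed via the kubernetes.io/service-name label
// (discoveryv1.LabelServiceName).
func groupAddresses(slices []discoveryv1.EndpointSlice) (v4, v6 []string) {
	for _, slice := range slices {
		for _, ep := range slice.Endpoints {
			if ep.Conditions.Serving != nil && !*ep.Conditions.Serving {
				continue // skip endpoints that are not serving
			}
			switch slice.AddressType {
			case discoveryv1.AddressTypeIPv4:
				v4 = append(v4, ep.Addresses...)
			case discoveryv1.AddressTypeIPv6:
				v6 = append(v6, ep.Addresses...)
			}
		}
	}
	return v4, v6
}
```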
This does mean that users will need to add another permission to their
RBAC in order for kube-router to access these objects.
Previously, we did an idempotent sync of all active VIPs any time we
got a service or endpoint update. However, this only worked when we
assumed a single-stack deployment model where IPs were never deleted
unless the whole service was deleted.
In a dual-stack model, we can add / remove LoadBalancer IPs and Cluster
IPs on updates. Given this, we need to take into account the specific
change that happens, and not just revert to sync-all, because otherwise
we'll never stop advertising IPs that should be removed.
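The per-update handling conceptually becomes a set difference (a sketch;
the advertise / withdraw callbacks stand in for the BGP operations):

```go
package vips

// reconcileVIPs computes the specific change between the previously
// advertised VIPs and the new desired set, withdrawing only what was
// removed and advertising only what was added.
func reconcileVIPs(oldVIPs, newVIPs []string, advertise, withdraw func(string)) {
	oldSet := make(map[string]bool, len(oldVIPs))
	for _, ip := range oldVIPs {
		oldSet[ip] = true
	}
	newSet := make(map[string]bool, len(newVIPs))
	for _, ip := range newVIPs {
		newSet[ip] = true
		if !oldSet[ip] {
			advertise(ip) // newly added VIP
		}
	}
	for _, ip := range oldVIPs {
		if !newSet[ip] {
			withdraw(ip) // VIP removed on this update
		}
	}
}
```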
As a fallback, we still have the outer Run loop that syncs all active
routes every X seconds (configured by a user CLI parameter). On that
timer, we'll still have something that syncs all active VIPs and works
as an outer control loop to ensure that desired state eventually
becomes active state if we accidentally remove a VIP that should have
been there.