includes a workaround for musl's hardcoded protocol table, which is
missing SCTP, by mapping protocol names to their numeric values in
ipset entries
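As a rough sketch, the mapping can be a small table keyed by protocol
name; the IANA numbers below are standard, but the table and helper
names are illustrative rather than kube-router's actual code:

    package example

    import "strings"

    // protocolNumbers maps protocol names to their IANA numbers so that
    // ipset entries never depend on the libc protocol table (musl's
    // hardcoded table is missing SCTP).
    var protocolNumbers = map[string]int{
        "tcp":  6,
        "udp":  17,
        "sctp": 132,
    }

    // protocolNumber returns the IANA number for a protocol name, or -1
    // when the protocol is unknown.
    func protocolNumber(name string) int {
        if num, ok := protocolNumbers[strings.ToLower(name)]; ok {
            return num
        }
        return -1
    }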
closes: https://github.com/cloudnativelabs/kube-router/issues/1019
Signed-off-by: Roman Kuzmitskii <roman@damex.org>
Not making direct exec calls to userspace binaries has long been a
principle of kube-router. When kube-router was first written, the netlink
library was missing significant features, which forced us to exec out.
However, now netlink seems to have most of the functionality that we
need.
This converts all of the places where we can use netlink to use the
netlink functionality.
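For illustration, a minimal sketch of what such a conversion looks like,
replacing something like `ip addr add 10.0.0.1/24 dev kube-bridge` with
the github.com/vishvananda/netlink library (the interface name and
address here are made up):

    package main

    import (
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        link, err := netlink.LinkByName("kube-bridge")
        if err != nil {
            log.Fatalf("failed to find link: %v", err)
        }

        addr, err := netlink.ParseAddr("10.0.0.1/24")
        if err != nil {
            log.Fatalf("failed to parse address: %v", err)
        }

        // AddrAdd does the same work as shelling out to `ip addr add`.
        if err := netlink.AddrAdd(link, addr); err != nil {
            log.Fatalf("failed to add address: %v", err)
        }
    }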
This prepares the way for broader refactors in the way that we handle
nodes by:
* Separating frequently used node logic from the controller creation
steps
* Keeping reused code DRY-er
* Adding interface abstractions for key groups of node data and starting
to rely on those more rather than on concrete types (see the sketch after
this list)
* Separating node data from the rest of the controller data structure so
that smaller definitions of data can be passed around to functions that
need them, rather than always passing the entire controller, which
contains more data / surface area than most functions need.
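As a sketch of the interface-abstraction idea (names here are
illustrative, not the actual kube-router definitions), functions take a
narrow interface over node data instead of the whole controller:

    package example

    import "net"

    // NodeIPAware exposes just the node addressing data a function needs.
    type NodeIPAware interface {
        GetPrimaryNodeIP() net.IP
        GetNodeIPv4Addrs() []net.IP
        GetNodeIPv6Addrs() []net.IP
    }

    // buildLocalRoutes only needs node IPs, so it accepts the narrow
    // interface rather than the full controller struct.
    func buildLocalRoutes(node NodeIPAware) []string {
        var routes []string
        for _, ip := range node.GetNodeIPv4Addrs() {
            routes = append(routes, ip.String()+"/32")
        }
        for _, ip := range node.GetNodeIPv6Addrs() {
            routes = append(routes, ip.String()+"/128")
        }
        return routes
    }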
Currently, kube-router just lists all IPVS services on the host and then
adds the load balancing service IPs to kube-router-svip blindly.
However, this assumes that the only IPVS entries on the host are ones
that kube-router originated, and that the user isn't using IPVS
themselves.
We want to make sure that we only create rules for services that we are
authoritative for. To this end, we now double-check that a service is
one of ours before adding rules that may affect it.
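A minimal sketch of that check, using the github.com/moby/ipvs library
for illustration (the ownership map and key format are stand-ins for
kube-router's internal bookkeeping):

    package example

    import (
        "fmt"

        "github.com/moby/ipvs"
    )

    // ownedVIPs lists only the virtual IPs of IPVS services that we are
    // authoritative for, skipping entries the user created themselves.
    func ownedVIPs(handle *ipvs.Handle, ours map[string]bool) ([]string, error) {
        svcs, err := handle.GetServices()
        if err != nil {
            return nil, err
        }
        var vips []string
        for _, svc := range svcs {
            key := fmt.Sprintf("%s:%d:%d", svc.Address, svc.Protocol, svc.Port)
            if !ours[key] {
                continue // not one of our services; leave it alone
            }
            vips = append(vips, svc.Address.String())
        }
        return vips, nil
    }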
Upgrades to Go 1.21.7 now that Go 1.20 is no longer being maintained.
This also resolves the race conditions that we were seeing with the BGP
server tests when we upgraded from 1.20 -> 1.21. These appear to stem
from an efficiency change in Go 1.21 that caused the BGP server to write
to the events at the same time that the test harness was trying to read
from them. Solved this in a coarse manner by adding surrounding mutexes
to the test code.
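The fix amounts to the following pattern in the test code (names are
illustrative): the harness wraps its event buffer in a mutex so the
writer and reader no longer race.

    package example

    import "sync"

    // eventRecorder guards the shared event slice used by the tests.
    type eventRecorder struct {
        mu     sync.Mutex
        events []string
    }

    // record is invoked from the event callback the test registers with
    // the BGP server.
    func (r *eventRecorder) record(ev string) {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.events = append(r.events, ev)
    }

    // snapshot is called by the test assertions; it copies under the
    // lock so the caller can inspect events without holding the mutex.
    func (r *eventRecorder) snapshot() []string {
        r.mu.Lock()
        defer r.mu.Unlock()
        out := make([]string, len(r.events))
        copy(out, r.events)
        return out
    }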
Additionally, upgraded dependencies.
It used to be that the kubelet handled setting hairpin mode for us:
https://github.com/kubernetes/kubernetes/pull/13628
Then this functionality moved to the dockershim:
https://github.com/kubernetes/kubernetes/pull/62212
Then the functionality was removed entirely:
https://github.com/kubernetes/kubernetes/commit/83265c9171f
Unfortunately, the fact that our hairpin implementation depended on this
was lost, if we ever knew it at all.
Additionally, I suspect that containerd and cri-o implementations never
worked correctly with hairpinning.
Without this, the NAT rules that we implement for hairpinning don't work
correctly. Because hairpin_mode isn't enabled on the container's virtual
interface on the host, the packet bubbles up to the kube-bridge. At some
point in the traffic flow, the route back to the pod gets resolved to the
MAC address inside the container; at that point, the packet's source MAC
and destination MAC don't match the kube-bridge interface, and the packet
is black-holed.
This could also be fixed by putting the kube-bridge interface into
promiscuous mode so that it accepts all MAC addresses, but going back to
the original behavior of enabling hairpin_mode on the veth interface of
the container is likely the lesser of two evils, as putting the
kube-bridge interface into promiscuous mode will likely have unintended
consequences.
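A minimal sketch of enabling hairpin_mode on a container's host-side
veth with github.com/vishvananda/netlink (the interface name is made
up):

    package main

    import (
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        veth, err := netlink.LinkByName("veth1234abcd")
        if err != nil {
            log.Fatalf("failed to find veth: %v", err)
        }

        // Equivalent to writing 1 to the bridge port's hairpin_mode file
        // under /sys/class/net/<veth>/brport/.
        if err := netlink.LinkSetHairpin(veth, true); err != nil {
            log.Fatalf("failed to set hairpin mode: %v", err)
        }
    }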
With the advent of IPv6 integrated into the NSC, we no longer get all IPs
from Endpoints objects, but rather just the primary IP of the pod (which
is often, but not always, the IPv4 address).
In order to get all possible endpoint addresses for a given service, we
need to switch to using EndpointSlice, which nicely groups addresses into
IPv4 and IPv6 by AddressType and gives us more information about endpoint
status via its serving and terminating conditions, instead of just ready
or not ready.
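A hedged sketch of what that lookup can look like with client-go (the
function and its wiring are illustrative, not kube-router's actual
code):

    package example

    import (
        "context"

        discoveryv1 "k8s.io/api/discovery/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // endpointAddrs gathers all endpoint addresses for a service,
    // grouped by AddressType (IPv4 vs IPv6).
    func endpointAddrs(cs kubernetes.Interface, ns, svc string) (map[discoveryv1.AddressType][]string, error) {
        slices, err := cs.DiscoveryV1().EndpointSlices(ns).List(context.TODO(),
            metav1.ListOptions{LabelSelector: discoveryv1.LabelServiceName + "=" + svc})
        if err != nil {
            return nil, err
        }
        addrs := make(map[discoveryv1.AddressType][]string)
        for _, slice := range slices.Items {
            for _, ep := range slice.Endpoints {
                // Serving/Terminating conditions are richer than the old
                // ready/not-ready split on Endpoints objects.
                if ep.Conditions.Serving != nil && !*ep.Conditions.Serving {
                    continue
                }
                addrs[slice.AddressType] = append(addrs[slice.AddressType], ep.Addresses...)
            }
        }
        return addrs, nil
    }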
This does mean that users will need to add another permission to their
RBAC in order for kube-router to access these objects.
lookupFWMarkByService() was previously returning an error when no fwmark
was found in the tracking map for a given service. However, this isn't
really an error condition and shouldn't be treated as such. When it was
treated as an error condition, users got a lot of confusing errors in the
logs.
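A minimal sketch of the new behavior, with the map and signature as
illustrative stand-ins: a missing entry is reported with a boolean, not
an error:

    package example

    // fwMarkMap tracks service key -> allocated FW mark.
    var fwMarkMap = map[string]uint32{}

    // lookupFWMarkByService returns the mark and whether one exists; a
    // missing entry is a normal outcome, so no error is returned.
    func lookupFWMarkByService(serviceKey string) (uint32, bool) {
        mark, ok := fwMarkMap[serviceKey]
        return mark, ok
    }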
* fact(NSC): consolidate constants to top
* fix(NSC): increase IPVS add service logging
* fix(NSC): improve logging for FWMark IPVS entries
* fix(NSC): add missing parameter to logging
* feat(NSC): generate unique FW marks
Because we trim the 32-bit FNV-1a hash to 16 bits, there is the potential
for FW marks to collide even for unique inputs of IP, protocol, and port.
This reduces that chance, up to the 16-bit maximum, by keeping track of
which FW marks we've already allocated and which IP, protocol, and port
combination they've been allocated for.
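A hedged sketch of that allocation scheme (structure and names are
illustrative): hash the tuple with FNV-1a, trim to 16 bits, and probe
already-allocated marks until a free one is found:

    package example

    import (
        "errors"
        "fmt"
        "hash/fnv"
    )

    type fwMarkAllocator struct {
        allocated map[uint32]string // mark -> "ip:proto:port" that owns it
    }

    func (a *fwMarkAllocator) allocate(ip, proto string, port uint16) (uint32, error) {
        key := fmt.Sprintf("%s:%s:%d", ip, proto, port)
        h := fnv.New32a()
        h.Write([]byte(key))
        mark := h.Sum32() & 0xFFFF // trim the 32-bit hash to 16 bits

        for i := 0; i < 1<<16; i++ {
            owner, taken := a.allocated[mark]
            if !taken {
                a.allocated[mark] = key
                return mark, nil
            }
            if owner == key {
                return mark, nil // already allocated for this tuple
            }
            mark = (mark + 1) & 0xFFFF // collision: probe the next mark
        }
        return 0, errors.New("all 16-bit FW marks are allocated")
    }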
Fixes #1045
* fact(NSC): move utility funcs to utils
* fix(NSC): reduce IPVS service shell outs
This also aligns it more closely with the almost identical function used
for non-FWMarked services, ipvsAddService(), which is also called from
setupExternalIPServices and is passed this same list of ipvsServices.
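Sketched under assumed signatures, the pattern is to list IPVS services
once and hand the result to both add paths (the stubs below stand in for
the real functions):

    package example

    import "github.com/moby/ipvs"

    func setupExternalIPServices(handle *ipvs.Handle) error {
        // Query IPVS a single time...
        ipvsServices, err := handle.GetServices()
        if err != nil {
            return err
        }
        // ...and share the list with both code paths instead of having
        // each one re-list IPVS services on its own.
        if err := ipvsAddService(ipvsServices); err != nil {
            return err
        }
        return ipvsAddFWMarkService(ipvsServices)
    }

    // Stubs standing in for the real functions, which take more parameters.
    func ipvsAddService(existing []*ipvs.Service) error       { return nil }
    func ipvsAddFWMarkService(existing []*ipvs.Service) error { return nil }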
* fix(NSC): fix & consolidate DSR cleanup code
A lot of this is refactor work, but it's important to know why the DSR
mangle tables were not being cleaned up in the first place. When we
transitioned to iptables-save to look over the mangle rules, we didn't
realize that iptables-save changes the format of the marks from integer
values (which is what the CLI works with) to hexadecimal.
This meant that we never actually matched a mangle rule, so they were all
left behind. When these mangle rules were left behind, IPs that used to
be part of a DSR service were essentially black-holed on the system and
no longer routable.
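The essence of the fix is to normalize both formats before comparing; for
example (names illustrative), marksEqual("0x4e21/0xffffffff", 20001) is
true because 0x4e21 == 20001:

    package example

    import (
        "strconv"
        "strings"
    )

    // marksEqual reports whether a mark string taken from iptables-save
    // output refers to the given FW mark, tolerating both the hex form
    // that iptables-save prints and the decimal form the CLI uses.
    func marksEqual(ruleMark string, fwMark uint32) bool {
        // iptables-save may append a mask, e.g. "0x4e21/0xffffffff".
        mark := strings.SplitN(ruleMark, "/", 2)[0]
        // Base 0 lets ParseUint accept "0x4e21" as well as "20001".
        v, err := strconv.ParseUint(mark, 0, 32)
        return err == nil && uint32(v) == fwMark
    }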
Fixes #1167
* doc(dsr): expand DSR documentation
fixes #1055
* ensure active service map is updated for non-DSR services
Co-authored-by: Murali Reddy <muralimmreddy@gmail.com>