Since v6.5.0, iproute2 no longer automatically creates the files under
/etc/iproute2, instead installing them to /usr/lib/iproute2 and, in
later releases, /usr/share/iproute2.
This adds fallback path matching to kube-router so that it can find
rt_tables wherever it is located instead of simply failing.
This also means that people running kube-router in containers will need
to adjust their mounts depending on where this file lives on their host
OS. However, copying the file to `/etc/iproute2` remains a legitimate
way to keep its location consistent across a fleet spanning multiple OS
versions.
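
A minimal sketch of how such a fallback lookup might work; the candidate
paths and helper name here are illustrative, not kube-router's actual
implementation:

```go
package main

import (
	"fmt"
	"os"
)

// rtTablesCandidates lists the locations where rt_tables may live, in the
// order we prefer them. The exact list here is an assumption for the example.
var rtTablesCandidates = []string{
	"/etc/iproute2/rt_tables",
	"/usr/lib/iproute2/rt_tables",
	"/usr/share/iproute2/rt_tables",
}

// findRTTables returns the first candidate path that exists on the host,
// or an error if none of them are present.
func findRTTables() (string, error) {
	for _, p := range rtTablesCandidates {
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("rt_tables not found in any of %v", rtTablesCandidates)
}

func main() {
	path, err := findRTTables()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("using rt_tables at", path)
}
```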
This is a feature that has been requested a few times over the years and
would bring us closer to feature parity with other k8s network
implementations' service proxies.
It used to be that the kubelet handled setting hairpin mode for us:
https://github.com/kubernetes/kubernetes/pull/13628
Then this functionality moved to the dockershim:
https://github.com/kubernetes/kubernetes/pull/62212
Then the functionality was removed entirely:
https://github.com/kubernetes/kubernetes/commit/83265c9171f
Unfortunately, the fact that our hairpin implementation depended on this
was lost along the way, if we ever knew it at all. Additionally, I
suspect that the containerd and cri-o implementations never handled
hairpinning correctly.
Without this, the NAT rules that we implement for hairpinning don't work
correctly. Because hairpin_mode isn't enabled on the container's virtual
interface on the host, the packet bubbles up to the kube-bridge. At some
point in the traffic flow, the route back to the pod resolves to the MAC
address inside the container; at that point, the packet's source and
destination MACs don't match the kube-bridge interface, and the packet
is black-holed.
This could also be fixed by putting the kube-bridge interface into
promiscuous mode so that it accepts all MAC addresses, but going back to
the original behavior of enabling hairpin_mode on the veth interface of
the container is likely the lesser of two evils here, as putting
kube-bridge into promiscuous mode would likely have unintended
consequences.
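
For illustration, enabling hairpin mode on a bridge port can be done
from the host side through the kernel's brport sysfs knob. This is a
rough sketch under that assumption; the interface name and helper are
hypothetical and kube-router's actual code may do this differently:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// setHairpinMode writes "1" to the bridge port's hairpin_mode sysfs file so
// the kernel will forward traffic back out the same port it arrived on.
// The interface must already be attached to a bridge for brport to exist.
func setHairpinMode(ifaceName string) error {
	sysPath := filepath.Join("/sys/class/net", ifaceName, "brport", "hairpin_mode")
	if err := os.WriteFile(sysPath, []byte("1"), 0644); err != nil {
		return fmt.Errorf("failed to enable hairpin_mode on %s: %w", ifaceName, err)
	}
	return nil
}

func main() {
	// "veth1234abcd" is a placeholder for the host-side veth of a pod.
	if err := setHairpinMode("veth1234abcd"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```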
Adds more logging (in the form of warnings) when we come across common
errors that are not serious enough to stop processing but would still
confuse users when the error gets bubbled up to the NSC.
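
As a hedged illustration of the pattern (the function and messages below
are made up, not taken from kube-router), the idea is to log a warning
at the point the recoverable condition is detected and keep going,
rather than letting only a terse error surface later from the NSC:

```go
package main

import (
	"k8s.io/klog/v2"
)

// syncEndpoints is a hypothetical example; it shows logging a warning for a
// recoverable problem instead of aborting the whole sync.
func syncEndpoints(endpoints []string) {
	for _, ep := range endpoints {
		if ep == "" {
			// Not fatal: skip the bad entry, but leave a breadcrumb so any
			// eventual NSC-level error is easier for the user to trace.
			klog.Warning("skipping endpoint with empty address; this may indicate a misconfigured service")
			continue
		}
		// ... process the endpoint ...
		_ = ep
	}
}

func main() {
	syncEndpoints([]string{"10.0.0.1", "", "10.0.0.2"})
	klog.Flush()
}
```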