During the initial run, fail fatally when we encounter problems rather than continuing on, which causes subsequent failures and can bury the real error.
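A minimal sketch of the intent, with setupPolicies() as a hypothetical stand-in for whatever the initial run does: exit on the first error instead of logging it and pressing on.

```go
package main

import (
	"fmt"
	"log"
)

// setupPolicies is a hypothetical stand-in for the initial-run work.
func setupPolicies() error { return fmt.Errorf("simulated failure") }

func main() {
	// Fail hard on the first error so it surfaces directly, instead of
	// being buried under the follow-on failures it would cause.
	if err := setupPolicies(); err != nil {
		log.Fatalf("initial run failed: %v", err)
	}
}
```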
This change allows defining two cluster CIDRs for compatibility with Kubernetes dual-stack, with the assumption that the two CIDRs are one IPv4 and one IPv6 CIDR.
Signed-off-by: Michal Rostecki <vadorovsky@gmail.com>
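A sketch of the validation this assumption implies; validateClusterCIDRs() and the exact checks are illustrative, not the change's actual code.

```go
package main

import (
	"fmt"
	"net"
)

// validateClusterCIDRs checks that two configured cluster CIDRs form an
// IPv4/IPv6 dual-stack pair, per the assumption described above.
func validateClusterCIDRs(cidrs []string) error {
	if len(cidrs) != 2 {
		return fmt.Errorf("expected exactly 2 cluster CIDRs, got %d", len(cidrs))
	}
	var sawV4, sawV6 bool
	for _, c := range cidrs {
		ip, _, err := net.ParseCIDR(c)
		if err != nil {
			return fmt.Errorf("invalid CIDR %q: %v", c, err)
		}
		if ip.To4() != nil {
			sawV4 = true
		} else {
			sawV6 = true
		}
	}
	if !sawV4 || !sawV6 {
		return fmt.Errorf("dual-stack expects one IPv4 and one IPv6 CIDR")
	}
	return nil
}

func main() {
	fmt.Println(validateClusterCIDRs([]string{"10.244.0.0/16", "fd00:10:96::/112"}))
}
```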
Previously, we would iterate over rulesFromNode, but then check each entry against the entirety of the rulesNeeded hash. This caused the loop to break as soon as it found any matching rule from the host, rather than breaking only when it matched the rule that we were currently processing.
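A sketch of the corrected matching logic; rulesNeeded and rulesFromNode are the identifiers from the change, but the string-based types here are simplifications.

```go
package main

import "fmt"

// missingRules returns the needed rules that are absent from the node. The
// inner loop only breaks when it finds the specific rule being processed,
// not when any host rule happens to match something in the needed set.
func missingRules(rulesNeeded map[string]bool, rulesFromNode []string) []string {
	var missing []string
	for needed := range rulesNeeded {
		found := false
		for _, hostRule := range rulesFromNode {
			if hostRule == needed {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, needed)
		}
	}
	return missing
}

func main() {
	needed := map[string]bool{"ruleA": true, "ruleB": true}
	fmt.Println(missingRules(needed, []string{"ruleA"})) // [ruleB]
}
```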
When we receive service or endpoint updates from Kubernetes, we process a type of partial sync, because the service and endpoints have already been updated by the handler. However, previously, this partial update did not include updating the hairpinning rules for services.
This would cause hairpinning changes to be delayed until the next full sync or until a kube-router restart. This change adds hairpinning to the partial service sync flow.
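A rough sketch of the flow change; the controller type and method names below are hypothetical stand-ins, not the real kube-router symbols.

```go
package main

import "fmt"

// NetworkServicesController and its methods are illustrative stubs.
type NetworkServicesController struct{}

func (nsc *NetworkServicesController) syncIpvsServices(svcID string) error {
	fmt.Println("partial IPVS sync for", svcID)
	return nil
}

func (nsc *NetworkServicesController) syncHairpinRules(svcID string) error {
	fmt.Println("refreshing hairpin rules for", svcID)
	return nil
}

// syncService is the partial sync run on service/endpoint updates; hairpin
// rules are now refreshed here too, instead of waiting for a full sync.
func (nsc *NetworkServicesController) syncService(svcID string) error {
	if err := nsc.syncIpvsServices(svcID); err != nil {
		return err
	}
	return nsc.syncHairpinRules(svcID)
}

func main() {
	nsc := &NetworkServicesController{}
	if err := nsc.syncService("default/my-svc"); err != nil {
		fmt.Println("partial sync failed:", err)
	}
}
```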
* fact(NSC): consolidate constants to top
* fix(NSC): increase IPVS add service logging
* fix(NSC): improve logging for FWMark IPVS entries
* fix(NSC): add missing parameter to logging
* feat(NSC): generate unique FW marks
Because we trim the 32-bit FNV-1a hash to 16 bits, there is the potential for FW marks to collide even for unique inputs of IP, protocol, and port. This change reduces that chance, up to the 16-bit maximum of unique marks, by keeping track of which FW marks we've already allocated and which IP, protocol, and port combination they were allocated for.
Fixes #1045
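A self-contained sketch of the allocation idea; the allocator type and the linear-probing strategy here are illustrative assumptions, but the hash-and-trim step matches the description above.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// fwMarkAllocator tracks which 16-bit FW marks have been handed out and for
// which IP/protocol/port combination, so collisions between distinct
// services can be detected and resolved.
type fwMarkAllocator struct {
	allocated map[uint16]string // mark -> "ip/protocol/port" it was issued for
}

func (a *fwMarkAllocator) allocate(ip, protocol string, port uint16) (uint16, error) {
	key := fmt.Sprintf("%s/%s/%d", ip, protocol, port)
	h := fnv.New32a()
	h.Write([]byte(key))
	mark := uint16(h.Sum32()) // trim the 32-bit FNV-1a hash to 16 bits

	// Probe until we find this key's existing mark or a free slot; uint16
	// arithmetic wraps, so the whole 16-bit space gets covered.
	for i := 0; i < 1<<16; i++ {
		owner, taken := a.allocated[mark]
		if !taken {
			a.allocated[mark] = key
			return mark, nil
		}
		if owner == key {
			return mark, nil // already allocated for this exact service
		}
		mark++ // collision with a different service: try the next mark
	}
	return 0, fmt.Errorf("all 16-bit FW marks are in use")
}

func main() {
	a := &fwMarkAllocator{allocated: map[uint16]string{}}
	m1, _ := a.allocate("10.0.0.1", "tcp", 80)
	m2, _ := a.allocate("10.0.0.2", "tcp", 443)
	fmt.Printf("marks: %#x %#x\n", m1, m2)
}
```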
* fact(NSC): move utility funcs to utils
* fix(NSC): reduce IPVS service shell outs
This also aligns it more closely with the almost identical function used for non-FWMarked services, ipvsAddService(), which is also called from setupExternalIPServices and is passed this same list of ipvsServices.
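The underlying pattern, sketched with simplified types (nothing below is the controller's real signature): query IPVS once and share the result across both add paths.

```go
package main

import "fmt"

// ipvsService is a simplified stand-in for an IPVS service entry.
type ipvsService struct{ addr string }

// getIpvsServices stands in for the shell out to IPVS; the point of the
// change is to do this once per sync rather than once per service.
func getIpvsServices() []ipvsService {
	fmt.Println("querying IPVS once")
	return []ipvsService{{addr: "10.0.0.1:80"}}
}

func ipvsAddService(existing []ipvsService, addr string)       { /* reuses the list */ }
func ipvsAddFWMarkService(existing []ipvsService, mark uint16) { /* reuses the list */ }

func main() {
	// Fetch once, then pass the same slice to both add paths, mirroring
	// how setupExternalIPServices shares its list with ipvsAddService().
	svcs := getIpvsServices()
	ipvsAddService(svcs, "10.0.0.2:443")
	ipvsAddFWMarkService(svcs, 0x2a)
}
```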
* fix(NSC): fix & consolidate DSR cleanup code
A lot of this is refactor work, but it's important to know why the DSR mangle rules were not being cleaned up in the first place. When we transitioned to iptables-save to look over the mangle rules, we didn't realize that iptables-save changes the format of the marks from integer values (which is what the CLI works with) to hexadecimal.
This made it so that we were never actually matching on a mangle rule, which left them all behind. When these mangle rules were left behind, IPs that used to be part of a DSR service were essentially black-holed on the system and were no longer routable.
Fixes #1167
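A small demonstration of the mismatch and the numeric comparison that resolves it; markMatches() is illustrative, not the actual cleanup code.

```go
package main

import (
	"fmt"
	"strconv"
)

// markMatches compares a mark as printed by iptables-save (hex, "0x2a")
// against one in CLI form (decimal, "42"). Base 0 lets ParseUint accept
// either notation, so the comparison is numeric rather than textual.
func markMatches(savedMark, wantedMark string) (bool, error) {
	saved, err := strconv.ParseUint(savedMark, 0, 32)
	if err != nil {
		return false, err
	}
	wanted, err := strconv.ParseUint(wantedMark, 0, 32)
	if err != nil {
		return false, err
	}
	return saved == wanted, nil
}

func main() {
	// The old string comparison never matched these:
	fmt.Println("0x2a" == "42") // false
	fmt.Println(markMatches("0x2a", "42")) // true <nil>
}
```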
* doc(dsr): expand DSR documentation
Fixes #1055
* ensure active service map is updated for non-DSR services
Co-authored-by: Murali Reddy <muralimmreddy@gmail.com>
I found that without taking a brief pause between iptables cleanup and
ipset deletion, sometimes the system still thought that there were
iptables references to the ipsets and would error instead of cleaning
the ipsets.
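A sketch of that ordering, with hypothetical helpers in place of the real iptables and ipset calls:

```go
package main

import (
	"fmt"
	"time"
)

// cleanupIptablesRules and deleteIpsets are stand-ins for the real work.
func cleanupIptablesRules() { fmt.Println("removing iptables rules") }
func deleteIpsets() error   { fmt.Println("destroying ipsets"); return nil }

func main() {
	cleanupIptablesRules()
	// Immediately after the rules are removed, the kernel may still report
	// iptables references to the ipsets, which makes the destroy fail. A
	// brief pause lets those references clear first.
	time.Sleep(1 * time.Second)
	if err := deleteIpsets(); err != nil {
		fmt.Println("ipset cleanup failed:", err)
	}
}
```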
* remove IPVS metrics
Remove metrics for IPVS services when the IPVS service is deleted so
that the number of metrics does not grow without bound.
Fixes #734
* delete metricsMap key when IPVS service is removed
Delete the key in NetworkServicesController.metricsMap when the
respective IPVS configuration is removed.
Remove a period from a comment to conform to kube-router norms
* cleanup stale metrics in a distinct method
* remove unnecessary error return value on cleanupStaleMetrics
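A sketch of the cleanup flow using the Prometheus client's DeleteLabelValues; the metric, its labels, and the metricsMap bookkeeping below are simplified stand-ins for the controller's.

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// serviceBytesIn is an illustrative per-service metric.
var serviceBytesIn = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{Name: "service_bytes_in", Help: "Bytes received per IPVS service."},
	[]string{"svc_vip", "protocol", "port"},
)

// metricsMap remembers which label combinations have been published so the
// matching series can be removed when an IPVS service goes away.
var metricsMap = map[string][]string{}

// cleanupStaleMetrics drops the series for any service that no longer has
// an IPVS configuration, keeping the metric count bounded.
func cleanupStaleMetrics(activeServices map[string]bool) {
	for key, labels := range metricsMap {
		if activeServices[key] {
			continue
		}
		serviceBytesIn.DeleteLabelValues(labels...)
		delete(metricsMap, key)
	}
}

func main() {
	prometheus.MustRegister(serviceBytesIn)
	serviceBytesIn.WithLabelValues("10.0.0.1", "tcp", "80").Set(1024)
	metricsMap["10.0.0.1-tcp-80"] = []string{"10.0.0.1", "tcp", "80"}

	// The service was removed from IPVS, so its series is deleted too.
	cleanupStaleMetrics(map[string]bool{})
	fmt.Println("remaining tracked series:", len(metricsMap))
}
```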
* fix(nsc): Overly eager IPVS updating
Switches the endpoint map comparison in OnEndpointsUpdate from a DeepEqual to instead checking that all services exist and that their associated endpoints are similar.
Ordering is no longer considered important with regard to the IPVS update check.
Fixes #1026
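A sketch of what an order-insensitive comparison can look like; the map shape and helper below are simplified illustrations, not the controller's actual types.

```go
package main

import "fmt"

// endpointsMapsEquivalent replaces a strict DeepEqual: every service must
// exist in both maps with the same set of endpoints, but the ordering of
// the endpoint slices is ignored.
func endpointsMapsEquivalent(a, b map[string][]string) bool {
	if len(a) != len(b) {
		return false
	}
	for svc, epsA := range a {
		epsB, ok := b[svc]
		if !ok || len(epsA) != len(epsB) {
			return false
		}
		seen := make(map[string]bool, len(epsA))
		for _, ep := range epsA {
			seen[ep] = true
		}
		for _, ep := range epsB {
			if !seen[ep] {
				return false
			}
		}
	}
	return true
}

func main() {
	old := map[string][]string{"default/web": {"10.1.0.2:80", "10.1.0.3:80"}}
	updated := map[string][]string{"default/web": {"10.1.0.3:80", "10.1.0.2:80"}}
	// A DeepEqual comparison would report a change here and trigger a
	// needless IPVS update; the set-based check says nothing changed.
	fmt.Println(endpointsMapsEquivalent(old, updated)) // true
}
```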