Previously, the logic ran in the following order of preference:
1. Prefer the environment variable NODE_NAME
2. Use `os.Hostname()`
3. Fall back to `--hostname-override` passed by the user
This didn't make much sense: `--hostname-override` is set directly, and
presumably intentionally, by the user, so it should be the MOST
preferred option, not the least. Additionally, none of the errors
encountered were passed back to the user; the code simply moved on to
the next condition, so if there was an error at the API level, that
error was swallowed. Now the logic looks like:
1. Prefer `--hostname-override` if it is set. If it is set and we
weren't able to resolve it to a node object, return the error
2. Use the environment variable NODE_NAME if it is set. If it is set
and we weren't able to resolve it to a node object, return the error
3. Fall back to `os.Hostname()`. If we weren't able to resolve it to a
node object, return the error and give the user options
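A minimal sketch of the new resolution order (the `resolveNode` helper and its signature are stand-ins for whatever looks the name up against the Kubernetes API, not kube-router's actual code):

```go
package sketch

import (
	"fmt"
	"os"
)

// getNodeName resolves the node name in the new order of preference.
// resolveNode stands in for a lookup against the Kubernetes API that
// returns an error when no node object matches the given name.
func getNodeName(hostnameOverride string, resolveNode func(string) error) (string, error) {
	if hostnameOverride != "" {
		if err := resolveNode(hostnameOverride); err != nil {
			return "", fmt.Errorf("--hostname-override %q did not resolve to a node: %w", hostnameOverride, err)
		}
		return hostnameOverride, nil
	}

	if nodeName := os.Getenv("NODE_NAME"); nodeName != "" {
		if err := resolveNode(nodeName); err != nil {
			return "", fmt.Errorf("NODE_NAME %q did not resolve to a node: %w", nodeName, err)
		}
		return nodeName, nil
	}

	hostname, err := os.Hostname()
	if err != nil {
		return "", fmt.Errorf("unable to determine hostname: %w", err)
	}
	if err := resolveNode(hostname); err != nil {
		return "", fmt.Errorf("hostname %q did not resolve to a node, consider setting --hostname-override or NODE_NAME: %w", hostname, err)
	}
	return hostname, nil
}
```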
Before this, we had two different ways to interact with ipsets: through
the handler interface, which had the best handling for IPv6 because NPC
heavily utilizes it, and through the ipset struct, which mostly repeated
the handler logic but didn't handle some key things.
NPC utilized the handler functions and NSC / NRC mostly utilized the old
ipset struct functions. This caused a lot of duplication between the two
groups of functions and also caused issues with proper IPv6 handling.
This commit consolidates the two sets of usage into just the handler
interface. This greatly simplifies how the controllers interact with
ipsets and it also reduces the logic complexity on the ipset side.
This also fixes some inconsistency in how we handled IPv6 ipset names.
ipset expects them to be prefixed with `inet6:`, but we weren't always
doing this in a way that was consistent across all functions in the
ipset struct.
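A simplified illustration of the consistent IPv6 naming this enables (the helper and the example set name below are illustrative, not the actual handler code):

```go
package sketch

import "strings"

const ipv6SetPrefix = "inet6:"

// ipSetName adds the inet6: prefix that ipset expects for IPv6 sets, exactly
// once, and leaves IPv4 set names untouched.
func ipSetName(name string, isIPv6 bool) string {
	if !isIPv6 || strings.HasPrefix(name, ipv6SetPrefix) {
		return name
	}
	return ipv6SetPrefix + name
}
```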
This adds a simple controller that will watch for services of type LoadBalancer
and try to allocate addresses from the specified IPv4 and/or IPv6 ranges.
It's assumed that kube-router (or another network controller) will announce the addresses.
As the controller uses leases for leader election and updates the service status, new
RBAC permissions are required.
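A minimal sketch of the allocation idea, assuming a simple in-memory allocator over a `netip.Prefix` range; the real controller additionally has to track existing allocations and write the result back to the Service status:

```go
package main

import (
	"fmt"
	"net/netip"
)

// allocateNext returns the first address in the range that is not already
// allocated.
func allocateNext(prefix netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, error) {
	for addr := prefix.Addr(); prefix.Contains(addr); addr = addr.Next() {
		if !allocated[addr] {
			return addr, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("range %s is exhausted", prefix)
}

func main() {
	prefix := netip.MustParsePrefix("10.1.2.0/30")
	used := map[netip.Addr]bool{netip.MustParseAddr("10.1.2.0"): true}
	addr, _ := allocateNext(prefix, used)
	fmt.Println(addr) // 10.1.2.1
}
```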
This change allows defining two cluster CIDRs for compatibility with
Kubernetes dual-stack, with the assumption that the two CIDRs are
usually IPv4 and IPv6.
Signed-off-by: Michal Rostecki <vadorovsky@gmail.com>
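A rough sketch of the dual-stack validation this implies (the comma-separated flag format and the helper below are assumptions for illustration only):

```go
package sketch

import (
	"fmt"
	"net"
	"strings"
)

// parseClusterCIDRs accepts up to two comma-separated CIDRs and, when two are
// given, checks that one is IPv4 and the other IPv6.
func parseClusterCIDRs(flagValue string) ([]*net.IPNet, error) {
	var cidrs []*net.IPNet
	for _, s := range strings.Split(flagValue, ",") {
		_, ipNet, err := net.ParseCIDR(strings.TrimSpace(s))
		if err != nil {
			return nil, fmt.Errorf("invalid cluster CIDR %q: %w", s, err)
		}
		cidrs = append(cidrs, ipNet)
	}
	if len(cidrs) > 2 {
		return nil, fmt.Errorf("at most two cluster CIDRs are supported, got %d", len(cidrs))
	}
	if len(cidrs) == 2 && (cidrs[0].IP.To4() != nil) == (cidrs[1].IP.To4() != nil) {
		return nil, fmt.Errorf("dual-stack expects one IPv4 and one IPv6 CIDR")
	}
	return cidrs, nil
}
```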
* fact(network_policy): validate ClusterIP CIDR
Ensure that `--service-cluster-ip-range` is a valid CIDR while the
controller is starting up.
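A minimal sketch of that startup check, assuming a plain `net.ParseCIDR` validation (function name and error wording are illustrative):

```go
package sketch

import (
	"fmt"
	"net"
)

// validateServiceClusterIPRange rejects an invalid --service-cluster-ip-range
// value at startup rather than failing later at runtime.
func validateServiceClusterIPRange(value string) (*net.IPNet, error) {
	_, ipNet, err := net.ParseCIDR(value)
	if err != nil {
		return nil, fmt.Errorf("failed to parse --service-cluster-ip-range %q: %w", value, err)
	}
	return ipNet, nil
}
```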
* fix(network_policy): parse/validate NodePort
Validate the NodePort range that is passed, and allow it to be specified
with a hyphen, which is what the previous example used to show and is
more consistent with the way NodePort ranges are specified when passed
to the kube-apiserver.
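A sketch of the hyphenated parsing with the kind of bounds checking this implies; the function name and exact limits shown are assumptions:

```go
package sketch

import (
	"fmt"
	"strconv"
	"strings"
)

// parseNodePortRange accepts ranges like "30000-32767" (hyphen-separated, as
// the kube-apiserver does) and validates the bounds.
func parseNodePortRange(value string) (start, end int, err error) {
	parts := strings.Split(value, "-")
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("NodePort range %q must be in the form start-end", value)
	}
	if start, err = strconv.Atoi(strings.TrimSpace(parts[0])); err != nil {
		return 0, 0, fmt.Errorf("invalid start port: %w", err)
	}
	if end, err = strconv.Atoi(strings.TrimSpace(parts[1])); err != nil {
		return 0, 0, fmt.Errorf("invalid end port: %w", err)
	}
	if start < 1 || end > 65535 || start >= end {
		return 0, 0, fmt.Errorf("NodePort range %d-%d is out of bounds or inverted", start, end)
	}
	return start, end, nil
}
```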
* test(network_policy): add tests for input validation
* feat(network_policy): permit ExternalIP on input
Fixes #934
* fix(network_policy): ensure pos with index offset
Because the iptables list function now appears to return -N and -P
items in the chain results, we need to account for them when computing
the rule position.
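A sketch of how that offset can be accounted for, assuming iptables-save style list output where `-N`/`-P` entries precede the `-A` rules; this is illustrative, not the actual implementation:

```go
package sketch

import (
	"fmt"
	"strings"
)

// rulePosition computes the 1-based position of a rule within a chain's list
// output, skipping the -N (chain declaration) and -P (policy) entries that
// now appear in the results but don't count as rules.
func rulePosition(listOutput []string, match string) (int, error) {
	pos := 0
	for _, line := range listOutput {
		if strings.HasPrefix(line, "-N ") || strings.HasPrefix(line, "-P ") {
			continue // not a rule, so it doesn't advance the rule position
		}
		pos++
		if strings.Contains(line, match) {
			return pos, nil
		}
	}
	return 0, fmt.Errorf("rule matching %q not found", match)
}
```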
* fix(network_policy): add uuid to comments on ensure
The iptables list output no longer preserves the position of
parameters, which means that we can't compare string to string. In the
absence of a better way to handle this, this adds a UUID to the comment
string which can then be looked for when determining what position a
rule occupies.
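A sketch of the UUID-comment approach, using the `github.com/google/uuid` package purely for illustration:

```go
package sketch

import (
	"strings"

	"github.com/google/uuid"
)

// tagRuleArgs appends a UUID-bearing comment to an iptables rule spec so the
// rule can be located later even if the list output reorders its parameters.
func tagRuleArgs(args []string) ([]string, string) {
	id := uuid.New().String()
	return append(args, "-m", "comment", "--comment", id), id
}

// findRuleByUUID returns the zero-based index of the listed rule whose
// comment contains the UUID, or -1 if no rule carries it.
func findRuleByUUID(rules []string, id string) int {
	for i, rule := range rules {
		if strings.Contains(rule, id) {
			return i
		}
	}
	return -1
}
```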
* feat(gitignore): don't track intellij files
* fact(network_policy): networkPoliciesInfo -> stack
Take networkPoliciesInfo off of the npc struct and convert it to a stack
variable that is easy to clean up.
* fix(network_policy): k8s obj memory accumulation
Kubernetes informers will block on handler execution and will then begin
to accumulate cached Kubernetes object information into the heap. This
change moves the full sync logic into its own goroutine where full
syncs are triggered and gated via writing to a single-item channel.
This ensures that:
- Syncs will only happen one at a time (as they are full syncs and we
can't process multiple at once)
- Sync requests are only ever delayed and never lost as they will be
added to the request channel
- After we make a sync request we return fast to ensure that the handler
execution returns fast and that we don't block the Kubernetes
informers
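A minimal sketch of the single-item channel pattern described above (all names are illustrative, not kube-router's actual identifiers):

```go
package main

import (
	"fmt"
	"time"
)

// syncRequester coalesces full-sync requests through a single-item channel.
type syncRequester struct {
	fullSyncRequests chan struct{}
}

func newSyncRequester() *syncRequester {
	return &syncRequester{fullSyncRequests: make(chan struct{}, 1)}
}

// RequestFullSync returns immediately; if a request is already pending, the
// new one is folded into it, so requests are delayed but never lost.
func (s *syncRequester) RequestFullSync() {
	select {
	case s.fullSyncRequests <- struct{}{}:
	default: // a sync is already queued; nothing more to do
	}
}

// runSyncLoop processes requests one at a time, so only one full sync is ever
// in flight.
func (s *syncRequester) runSyncLoop(stopCh <-chan struct{}, fullSync func()) {
	for {
		select {
		case <-s.fullSyncRequests:
			fullSync()
		case <-stopCh:
			return
		}
	}
}

func main() {
	s := newSyncRequester()
	stop := make(chan struct{})
	go s.runSyncLoop(stop, func() { fmt.Println("full sync") })
	s.RequestFullSync()
	s.RequestFullSync() // may coalesce with the still-pending request
	time.Sleep(100 * time.Millisecond)
	close(stop)
}
```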
* fact(network_policy): rework readyForUpdates
Now that we are better managing requests for full syncs, we no longer
need to manage readyForUpdates on the npc controller. We already enforce
non-blocking handlers and a single sync execution chain; whether a sync
comes from the controller in the form of a periodic sync or from a
Kubernetes informer, the result is the same: a non-blocking, single
thread of execution performing a full sync.
* fix(network_policy): address PR feedback