* fact(network_policy): validate ClusterIP CIDR
Ensure that --service-cluster-ip-range is a valid CIDR while the controller
is starting up.
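As an illustration, a minimal startup check along these lines could look like the sketch below; the `validateClusterIPRange` helper and its error message are hypothetical, not the controller's actual code.

```go
package main

import (
	"fmt"
	"net"
)

// validateClusterIPRange returns an error if the value passed via
// --service-cluster-ip-range is not a parseable CIDR.
// (Hypothetical helper for illustration only.)
func validateClusterIPRange(ipRange string) error {
	if _, _, err := net.ParseCIDR(ipRange); err != nil {
		return fmt.Errorf("failed to parse --service-cluster-ip-range: %v", err)
	}
	return nil
}

func main() {
	if err := validateClusterIPRange("10.96.0.0/12"); err != nil {
		panic(err)
	}
	fmt.Println("cluster IP range is a valid CIDR")
}
```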
* fix(network_policy): parse/validate NodePort
Validate the NodePort range that is passed and allow it to be specified
with hyphens, which is what the previous example showed and is more
consistent with the way NodePort ranges are specified when passed to the
kube-apiserver.
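A rough sketch of hyphen-friendly range parsing, assuming a hypothetical `parseNodePortRange` helper (the real implementation may differ, e.g. in how it tokenizes the input):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseNodePortRange accepts ranges in the kube-apiserver style, e.g.
// "30000-32767", while still tolerating the older "30000:32767" form,
// and validates the bounds. (Illustrative helper, not the exact code.)
func parseNodePortRange(nodePortOption string) (start, end int, err error) {
	sep := "-"
	if !strings.Contains(nodePortOption, sep) {
		sep = ":"
	}
	parts := strings.Split(nodePortOption, sep)
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("failed to parse node port range: %s", nodePortOption)
	}
	if start, err = strconv.Atoi(parts[0]); err != nil {
		return 0, 0, err
	}
	if end, err = strconv.Atoi(parts[1]); err != nil {
		return 0, 0, err
	}
	if start >= end || start < 1 || end > 65535 {
		return 0, 0, fmt.Errorf("invalid node port range: %d-%d", start, end)
	}
	return start, end, nil
}

func main() {
	start, end, err := parseNodePortRange("30000-32767")
	fmt.Println(start, end, err)
}
```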
* test(network_policy): add tests for input validation
* feat(network_policy): permit ExternalIP on input
Fixes #934
* fix(network_policy): ensure pos with index offset
Because the iptables list function now appears to return -N and -P
items in the chain results, we need to account for them when determining
the rule position.
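A sketch of the offset handling, assuming a hypothetical `rulePosition` helper operating on an `iptables -S` style listing; the chain name and inputs are illustrative only:

```go
package main

import (
	"fmt"
	"strings"
)

// rulePosition returns the 1-based position to pass to `iptables -I`,
// skipping the -N/-P chain declaration lines that now appear in the
// listing ahead of the actual rules. (Illustrative helper.)
func rulePosition(listing []string, ruleIdx int) int {
	offset := 0
	for _, rule := range listing {
		if strings.HasPrefix(rule, "-N") || strings.HasPrefix(rule, "-P") {
			offset++
		}
	}
	return ruleIdx - offset + 1
}

func main() {
	listing := []string{
		"-N KUBE-ROUTER-INPUT",
		"-A KUBE-ROUTER-INPUT -j RETURN",
	}
	// The rule at slice index 1 is the first real rule in the chain.
	fmt.Println(rulePosition(listing, 1)) // prints 1
}
```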
* fix(network_policy): add uuid to comments on ensure
The iptables list no longer keeps the position of parameters, which means
that we can't compare string to string. In the absence of a better way to
handle this, add a UUID to the comment string, which can then be searched
for when determining what position a rule occupies.
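A minimal sketch of the UUID-tagged comment approach, assuming the `github.com/google/uuid` package; the helper names and chain name are illustrative, not kube-router's actual API:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/google/uuid"
)

// buildRuleArgs tags a rule with a unique comment so it can be located
// later even when iptables normalizes or reorders the listed parameters.
func buildRuleArgs(baseArgs []string) (args []string, id string) {
	id = uuid.New().String()
	comment := "kube-router rule - " + id
	return append(baseArgs, "-m", "comment", "--comment", comment), id
}

// findRuleByUUID returns the index of the rule whose listing contains id,
// or -1 if the rule is not present.
func findRuleByUUID(listing []string, id string) int {
	for i, rule := range listing {
		if strings.Contains(rule, id) {
			return i
		}
	}
	return -1
}

func main() {
	args, id := buildRuleArgs([]string{"-A", "KUBE-ROUTER-INPUT", "-j", "RETURN"})
	listing := []string{strings.Join(args, " ")}
	fmt.Println(findRuleByUUID(listing, id)) // prints 0
}
```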
* feat(gitignore): don't track intellij files
* fact(network_policy): networkPoliciesInfo -> stack
Take networkPoliciesInfo off the npc struct and convert it to a stack
variable that is easy to clean up.
* fix(network_policy): k8s obj memory accumulation
Kubernetes informers block on handler execution and will then begin
to accumulate cached Kubernetes object information on the heap. This
change moves the full sync logic into its own goroutine, where full
syncs are triggered and gated by writing to a single item channel
(see the sketch after this list).
This ensures that:
- Syncs will only happen one at a time (as they are full syncs and we
can't process multiple at once)
- Sync requests are only ever delayed and never lost as they will be
added to the request channel
- After we make a sync request we return quickly, so that handler
  execution finishes fast and we don't block the Kubernetes
  informers
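A simplified sketch of the single-item channel pattern described above; the type and method names here are illustrative, not the controller's exact API:

```go
package main

import (
	"fmt"
	"time"
)

type controller struct {
	fullSyncRequestChan chan struct{} // buffered with capacity 1
}

// RequestFullSync returns immediately: if a request is already pending,
// the new one is coalesced into it rather than queued or lost.
func (c *controller) RequestFullSync() {
	select {
	case c.fullSyncRequestChan <- struct{}{}:
		fmt.Println("requested a full sync")
	default:
		fmt.Println("full sync already requested, skipping")
	}
}

// run performs full syncs one at a time in its own goroutine so that the
// informer handlers never block on sync work.
func (c *controller) run(stopCh <-chan struct{}) {
	for {
		select {
		case <-stopCh:
			return
		case <-c.fullSyncRequestChan:
			c.fullPolicySync()
		}
	}
}

func (c *controller) fullPolicySync() {
	time.Sleep(100 * time.Millisecond) // stand-in for real sync work
	fmt.Println("full sync complete")
}

func main() {
	c := &controller{fullSyncRequestChan: make(chan struct{}, 1)}
	stopCh := make(chan struct{})
	go c.run(stopCh)

	// Handlers can call RequestFullSync repeatedly without blocking.
	c.RequestFullSync()
	c.RequestFullSync() // coalesced while the first request is pending
	time.Sleep(300 * time.Millisecond)
	close(stopCh)
}
```

Buffering the channel with capacity 1 and using a non-blocking send is what lets a sync request be delayed but never lost while a full sync is already queued.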
* fact(network_policy): rework readyForUpdates
Now that we are better managing requests for full syncs, we no longer
need to manage readyForUpdates on the npc controller. We already enforce
non-blocking handlers and a single sync execution chain; whether the
request comes from the controller in the form of a periodic sync or from
a Kubernetes informer, the result is a non-blocking, single-threaded
full sync.
* fix(network_policy): address PR feedback