* delete package app/watchers since we're now using shared informers
* used shared informers for events and listing resources
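A dependency-free sketch of the shared-informer pattern these commits switch to: a single watch feeds one cache, and every interested controller registers an event handler, instead of each controller opening its own watch (the deleted app/watchers package). kube-router actually uses client-go's SharedInformerFactory; the types and names here are illustrative only.

```go
package main

import "fmt"

// sharedInformer stands in for one informer per resource type,
// shared by all controllers instead of one watcher each.
type sharedInformer struct {
	handlers []func(obj string)
}

// AddEventHandler registers another consumer on the same underlying watch.
func (s *sharedInformer) AddEventHandler(h func(obj string)) {
	s.handlers = append(s.handlers, h)
}

// notify fans a single event out to all registered consumers.
func (s *sharedInformer) notify(obj string) {
	for _, h := range s.handlers {
		h(obj)
	}
}

func main() {
	inf := &sharedInformer{}
	inf.AddEventHandler(func(o string) { fmt.Println("policy controller saw", o) })
	inf.AddEventHandler(func(o string) { fmt.Println("service controller saw", o) })
	inf.notify("pod-a")
}
```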
* install moq in travis test script
* - added protocol & port label to metrics
- removed some redundant code
* added example dashboard
* added dashboard screenshot
* updated dashboard json & screenshot
* amend bad dashboard export
* first new metric
* more metrics: controller_publish_metrics_time & controller_iptables_sync_time
* namespace redeclared
* fix typo in name
* small fixes
* new metric controller_bgp_peers & controller_bgp_internal_peers_sync_time
* typo fix
* new metric controller_ipvs_service_sync_time
* fix
* register metric
* fix
* added more metrics
* service controller log levels
* fix
* added metrics controller
* fixes
* fixed more log levels
* server and graceful shutdown
* fix
* code cleanup
* docs
* move metrics exporting to controller
* fixes
* fix missing
* test
* fix
* updated dashboard
* updates to metric controller
* fixed order in NewMetricsController
* err declared and not used
* updated dashboard
* updated dashboard screenshot
* removed --metrics & changed --metrics-port to enable / disable metrics
* https://github.com/cloudnativelabs/kube-router/issues/271
* cannot use config.MetricsPort (type uint16) as type int in assignment
* cannot use mc.MetricsPort (type uint16) as type int in argument to strconv.Itoa
* updated docs
* changed default metric port to 0, disabled
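A minimal sketch of the flag change above (and the uint16/int compile errors it triggered): a `--metrics-port` of 0, the new default, disables the exporter entirely, while any non-zero port enables it. The function names are illustrative, not kube-router's actual code.

```go
package main

import "fmt"

// metricsEnabled mirrors the new behaviour: port 0 (the default)
// disables metrics; any non-zero port enables them.
func metricsEnabled(port uint16) bool {
	return port > 0
}

// listenAddr shows the uint16 -> int conversion that the compiler
// errors above ("cannot use config.MetricsPort (type uint16) as type
// int") forced into the code.
func listenAddr(port uint16) string {
	return fmt.Sprintf(":%d", int(port))
}

func main() {
	fmt.Println(metricsEnabled(0), metricsEnabled(8080), listenAddr(8080))
}
```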
* added missing newline to .dockerignore
* add flag parsing to pick up -v directives
* test
* fix regression
* syntax error: non-declaration statement outside function body
* fix
* changed nsc to mc
* updated docs
* markdown fix
* moved metrics registration out to the respective controllers so only metrics for running parts will be exposed
* removed junk that came from Visual Studio Code
* fixed some typos
* Moved the metrics back into each controller and added expose behaviour so that only the running components' metrics are published
* removed too much; added back instantiation of the metrics controller
* fixed some invalid variable names
* fixed last typos on config name
* fixed order in NewNetworkServiceController
* updated metrics docs & removed the metrics sync period as it will obey the controllers sync period
* forgot to save options.go
* cleanup
* Updated metric name & docs
* updated metrics.md
* fixed a high cpu usage bug in the metrics_controller's wait loop
* Move getNodeIP logic to utils package
Remove redundant ipset lookups
utils.NewIPSet() does this for us.
* Don't masquerade pod -> nodeAddrsIPSet traffic
Previously with Pod egress enabled, this would get masqueraded.
This change also adds cleanup for said ipset.
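A sketch of the rule ordering this commit describes: traffic from the pod subnet to addresses in the node-addresses ipset is excluded from SNAT before the generic MASQUERADE rule. The rules are rendered as strings rather than applied, and the chain/set names are illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// egressRules returns the two POSTROUTING rules in the order they must
// be applied: the ipset RETURN exemption first, MASQUERADE last.
func egressRules(podCIDR, nodeSet string) []string {
	return []string{
		// pod -> node traffic: leave the source address intact
		fmt.Sprintf("-A POSTROUTING -s %s -m set --match-set %s dst -j RETURN", podCIDR, nodeSet),
		// everything else leaving the pod network gets masqueraded
		fmt.Sprintf("-A POSTROUTING -s %s -j MASQUERADE", podCIDR),
	}
}

func main() {
	fmt.Println(strings.Join(egressRules("10.1.0.0/16", "node-addrs"), "\n"))
}
```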
* Enhanced cleanup of Pod egress, overlay networking
- Delete old/bad pod egress iptables rule(s) from old versions
- When pod egress or overlay are disabled, cleanup as needed
* Update IPSet.Sets to map type
* ipset enhancements
- Avoid providing method that would delete all ipset sets on a system
- New method DestroyAllWithin() destroys sets tracked by an IPSet
- Create() now handles cases where Sets/System state are not in sync
- Refresh() now handles leftover -temp set gracefully
- Swap() now uses ipset swap
- Delete() improved sync of Sets and system state
- Get() now validates if map element exists before trying
- etc
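The Refresh()-with-temp-set behaviour listed above can be sketched as the command sequence below: build the new entries in a `-temp` set, atomically `ipset swap` it with the live set, then destroy the temp set. Commands are returned as strings instead of executed, and set names are illustrative; real code would shell out to `ipset`.

```go
package main

import "fmt"

// refreshCmds returns the ipset command sequence for an atomic refresh.
func refreshCmds(set string, entries []string) []string {
	tmp := set + "-temp"
	cmds := []string{
		fmt.Sprintf("ipset create %s hash:ip -exist", tmp),
		// flush handles a leftover -temp set from an interrupted run
		fmt.Sprintf("ipset flush %s", tmp),
	}
	for _, e := range entries {
		cmds = append(cmds, fmt.Sprintf("ipset add %s %s", tmp, e))
	}
	cmds = append(cmds,
		// swap makes the new contents live in one step
		fmt.Sprintf("ipset swap %s %s", tmp, set),
		fmt.Sprintf("ipset destroy %s", tmp),
	)
	return cmds
}

func main() {
	for _, c := range refreshCmds("kube-router-pods", []string{"10.0.0.1"}) {
		fmt.Println(c)
	}
}
```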
* Update routes controller to reflect ipset changes
Fix ensures below two cases are explicitly handled
- in the network policy spec's ingress rule, the 'ports' and 'from' details are optional;
  when not specified, this translates to match all ports and match all sources, respectively
- the user may explicitly give 'ports' and 'from' details in the ingress rule, but at any given point
  there may be no matching pods (with the labels defined in 'from') in the namespace.
Before the fix both cases were handled identically, resulting in unexpected behaviour
Fixes #85
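The two cases above can be sketched with a tiny model (types and names are illustrative, not kube-router's): a nil `Ports`/`From` means "match everything", while an explicit `From` selector that happens to match no pods must mean "match nothing", not "match all".

```go
package main

import "fmt"

// ingressRule models the relevant part of a NetworkPolicy ingress rule.
type ingressRule struct {
	Ports []int    // nil => match all ports
	From  []string // nil => match all sources; non-nil but empty => match none
}

func matchAllPorts(r ingressRule) bool   { return r.Ports == nil }
func matchAllSources(r ingressRule) bool { return r.From == nil }

func main() {
	unspecified := ingressRule{}                   // case 1: 'ports'/'from' omitted
	explicitEmpty := ingressRule{From: []string{}} // case 2: 'from' given, no matching pods
	fmt.Println(matchAllPorts(unspecified), matchAllSources(unspecified), matchAllSources(explicitEmpty))
}
```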
With this refactoring, support for network policy V1 (GA) is added.
Changes are backward compatible, so beta network policy semantics
are still available for k8s versions 1.6.* and earlier.
Fixes #16
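A minimal sketch of the version dispatch the note above implies, assuming the cutover is at 1.7 (V1 semantics for anything newer than 1.6.*, beta semantics otherwise); the function name and simplified version handling are illustrative.

```go
package main

import "fmt"

// useV1Semantics picks GA NetworkPolicy semantics on 1.7+ and keeps
// the beta semantics for 1.6.* and earlier.
func useV1Semantics(major, minor int) bool {
	return major > 1 || (major == 1 && minor >= 7)
}

func main() {
	fmt.Println(useV1Semantics(1, 6), useV1Semantics(1, 7))
}
```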
If NODE_NAME env is not set, fall back to hostname.
Partial fix towards #23; we still have an issue when kube-router runs as an agent
and kubelet is started with the --hostname-override flag.
Nodes can be registered by host name or FQDN, and kubelet can be started with
--hostname-override set to a configurable value. In an AWS environment it is
typically set to the FQDN obtained from the instance metadata. This fix ensures
we can deploy kube-router when nodes are registered with an FQDN.
Fixes #17