* [jjo] support advertising status.loadBalancer.ingress IPs via flag
* add `--advertise-loadbalancer-ip` flag, which will make a Service's
ingress IP(s), as set by the LoadBalancer:
- be locally added to nodes' `kube-dummy-if` network interface
- be advertised to BGP peers
* support "kube-router.io/service.skiplbips=true" per Service
annotation to selectively skip above
* refactor several functions with duplicated code to streamline the logic as
  (a minimal sketch of the flow follows this list):
  - `getIpsToAdvertise()`, which calls:
    - `getClusterIPs()`
    - `getExternalIPs()`
    - `getLoadBalancerIPs()`
    contains the nodeHasEndpoints logic, and returns:
    (ipsToAdvertise, ipsToUnAdvertise)
  - `advertiseIPs()`, which is essentially the previous `advertiseClusterIPs()`
    (which was actually used to advertise _any_ IP, i.e. misnamed),
    with logic to advertise or withdraw based on both passed arguments:
    (ipsToAdvertise, ipsToUnAdvertise)
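As a rough illustration of that flow (the helper names follow this changelog, but the types and field handling are hypothetical, not the exact kube-router source):

```go
package main

import "fmt"

// service holds just the fields relevant to IP advertisement; skipLBIPs
// stands in for the "kube-router.io/service.skiplbips=true" annotation.
type service struct {
	clusterIP       string
	externalIPs     []string
	loadBalancerIPs []string // from status.loadBalancer.ingress
	skipLBIPs       bool
}

// getIpsToAdvertise gathers every IP a service could advertise and splits
// them into advertise/withdraw sets based on whether this node has endpoints.
func getIpsToAdvertise(svc service, nodeHasEndpoints bool) (advertise, withdraw []string) {
	ips := append([]string{svc.clusterIP}, svc.externalIPs...)
	if !svc.skipLBIPs {
		ips = append(ips, svc.loadBalancerIPs...)
	}
	if nodeHasEndpoints {
		return ips, nil
	}
	return nil, ips
}

func main() {
	adv, unadv := getIpsToAdvertise(service{
		clusterIP:       "10.96.0.10",
		loadBalancerIPs: []string{"203.0.113.7"},
	}, true)
	fmt.Println("advertise:", adv, "withdraw:", unadv)
}
```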
* fix some leftovers from uselbips -> skiplbips annotation change
* added protocol & port labels to metrics;
  removed some redundant code (see the sketch below)
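For illustration, a minimal sketch of per-protocol/per-port labels with the Prometheus Go client; the metric name and namespace below are made up, not kube-router's actual metric:

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// serviceTraffic is a made-up metric demonstrating the protocol and port
// labels; each (protocol, port) pair becomes its own time series.
var serviceTraffic = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Namespace: "kube_router",
		Name:      "service_traffic_bytes",
		Help:      "Traffic per service, labeled by protocol and port.",
	},
	[]string{"protocol", "port"},
)

func main() {
	prometheus.MustRegister(serviceTraffic)
	serviceTraffic.WithLabelValues("tcp", "80").Set(1024)
}
```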
* added example dashboard
* added dashboard screenshot
* updated dashboard json & screenshot
* amend bad dashboard export
* first new metric
* more metrics: controller_publish_metrics_time & controller_iptables_sync_time
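A hedged sketch of how a sync-time metric like controller_iptables_sync_time can be recorded; the name mirrors this changelog, but the metric type, unit, and registration details here are assumptions:

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// controllerIptablesSyncTime records how long the last iptables sync took.
var controllerIptablesSyncTime = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "kube_router",
	Name:      "controller_iptables_sync_time",
	Help:      "Duration of the last iptables sync, in seconds.",
})

func syncIptables() {
	start := time.Now()
	defer func() {
		controllerIptablesSyncTime.Set(time.Since(start).Seconds())
	}()
	// ... actual iptables sync work would happen here ...
}

func main() {
	prometheus.MustRegister(controllerIptablesSyncTime)
	syncIptables()
}
```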
* fix: namespace redeclared
* fix typo in name
* small fixes
* new metric controller_bgp_peers & controller_bgp_internal_peers_sync_time
* typo fix
* new metric controller_ipvs_service_sync_time
* fix
* register metric
* fix
* added more metrics
* service controller log levels
* fix
* added metrics controller
* fixes
* fixed more log levels
* server and graceful shutdown
* fix
* code cleanup
* docs
* move metrics exporting to controller
* assorted fixes
* test
* fix
* updated dashboard
* updates to metric controller
* fixed order in `newmetricscontroller`
* fix: err declared and not used
* updated dashboard
* updated dashboard screenshot
* removed `--metrics` & changed `--metrics-port` to enable / disable metrics
* https://github.com/cloudnativelabs/kube-router/issues/271
* fix: cannot use config.MetricsPort (type uint16) as type int in assignment
* fix: cannot use mc.MetricsPort (type uint16) as type int in argument to strconv.Itoa
* updated docs
* changed default metrics port to 0 (disabled)
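A minimal sketch of the port-0-disables-metrics behaviour, assuming a hypothetical startMetricsServer helper and the standard Prometheus HTTP handler:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// startMetricsServer serves metrics only when a non-zero port is configured;
// the new default of 0 leaves metrics disabled entirely.
func startMetricsServer(port uint16, path string) {
	if port == 0 {
		return // metrics disabled
	}
	http.Handle(path, promhttp.Handler())
	go http.ListenAndServe(fmt.Sprintf(":%d", port), nil)
}

func main() {
	startMetricsServer(8080, "/metrics")
	select {} // block so the goroutine keeps serving
}
```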
* added missing newline to .dockerignore
* add flag parse to pick up on -v directives
* test
* fix regression
* fix: syntax error: non-declaration statement outside function body
* fix
* changed `nsc` to `mc`
* updated docs
* markdown fix
* moved metrics registration out to the respective controllers so only metrics for running parts will be exposed
* removed junk that came from Visual Studio Code
* fixed some typos
* Moved the metrics back into each controller and added expose behaviour so only the running components' metrics would be published
* removed too much; added back instantiation of the metrics controller
* fixed some invalid variable names
* fixed last typos on config name
* fixed order in `newnetworkservicecontroller`
* updated metrics docs & removed the metrics sync period, as it will obey the controllers' sync period
* forgot to save options.go
* cleanup
* Updated metric name & docs
* updated metrics.md
* fixed a high CPU usage bug in the metrics controller's wait loop
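For context, the usual remedy for a hot wait loop is to block on a ticker between sync rounds; a generic sketch of that pattern, not the actual kube-router diff:

```go
package main

import "time"

// waitLoop blocks on a ticker between sync rounds instead of spinning,
// which is the standard fix for a wait loop that burns CPU.
func waitLoop(stopCh <-chan struct{}, period time.Duration, sync func()) {
	t := time.NewTicker(period)
	defer t.Stop()
	for {
		select {
		case <-stopCh:
			return
		case <-t.C:
			sync()
		}
	}
}

func main() {
	stop := make(chan struct{})
	go waitLoop(stop, time.Second, func() { /* publish metrics here */ })
	time.Sleep(3 * time.Second)
	close(stop)
}
```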
* added prometheus metrics port option
* fix: proper config
* added option to change path
* added path config to prometheus
* updated readme
* fixed string that should be int
During the periodic sync of IPVS services there is a check whether the required service
already exists in IPVS. For the check, the list of current IPVS services is
read from IPVS. This causes a performance hit as the number of services increases.
With this fix, kube-router reads the list from IPVS once and reuses it for the rest of the service sync.
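A minimal sketch of the read-once pattern with assumed types (not the actual kube-router IPVS code):

```go
package main

import "fmt"

// ipvsService is a stand-in key for an IPVS virtual service.
type ipvsService struct {
	addr string
	port uint16
}

// syncServices lists IPVS services a single time and builds a lookup set,
// instead of re-reading the full list for every membership check.
func syncServices(listIPVS func() []ipvsService, wanted []ipvsService) {
	existing := make(map[ipvsService]bool)
	for _, s := range listIPVS() { // one read from IPVS per sync
		existing[s] = true
	}
	for _, w := range wanted {
		if !existing[w] {
			fmt.Println("creating IPVS service", w.addr, w.port)
		}
	}
}

func main() {
	listIPVS := func() []ipvsService { return []ipvsService{{"10.0.0.1", 80}} }
	syncServices(listIPVS, []ipvsService{{"10.0.0.1", 80}, {"10.0.0.2", 443}})
}
```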
- add a route to the external IP in a custom routing table to prevent martian packets
- switch between masquerade and tunnel forwarding when DSR is disabled and enabled:
  as you add and remove the DSR annotation, the IPVS server switches
  between tunneling and masquerade mode;
  also restrict preparing the pod for DSR to local pods only
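A hedged sketch of the annotation-driven switch; the annotation key matches kube-router's DSR docs, but the constants and helper are illustrative:

```go
package main

import "fmt"

// Illustrative constants mirroring the two IPVS forwarding modes involved.
const (
	fwdMasquerade = "masquerade"
	fwdTunnel     = "tunnel"
)

// forwardingMethodFor picks the forwarding method from a service's
// annotations: DSR-annotated services use tunneling, everything else NAT.
func forwardingMethodFor(annotations map[string]string) string {
	if annotations["kube-router.io/service.dsr"] == "tunnel" {
		return fwdTunnel
	}
	return fwdMasquerade
}

func main() {
	svc := map[string]string{"kube-router.io/service.dsr": "tunnel"}
	fmt.Println(forwardingMethodFor(svc)) // tunnel
	delete(svc, "kube-router.io/service.dsr")
	fmt.Println(forwardingMethodFor(svc)) // masquerade, after removing the annotation
}
```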
* Move getNodeIP logic to utils package
Remove redundant ipset lookups
utils.NewIPSet() does this for us.
* Don't masquerade pod -> nodeAddrsIPSet traffic
Previously with Pod egress enabled, this would get masqueraded.
This change also adds cleanup for said ipset.
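For illustration, the shape of such an exemption rule using the coreos/go-iptables package; the ipset name and rule position are assumptions, not kube-router's exact rule:

```go
package main

import (
	"log"

	"github.com/coreos/go-iptables/iptables"
)

func main() {
	ipt, err := iptables.New()
	if err != nil {
		log.Fatal(err)
	}
	// Exempt traffic destined to node addresses from the pod egress
	// MASQUERADE rule by matching the node-addresses ipset ahead of it.
	err = ipt.Insert("nat", "POSTROUTING", 1,
		"-m", "set", "--match-set", "kube-router-node-ips", "dst",
		"-j", "RETURN")
	if err != nil {
		log.Fatal(err)
	}
}
```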
* Enhanced cleanup of Pod egress, overlay networking
- Delete old/bad pod egress iptables rule(s) from old versions
- When pod egress or overlay are disabled, cleanup as needed
* Update IPSet.Sets to map type
* ipset enhancements
- Avoid providing a method that would delete all ipset sets on a system
- New method DestroyAllWithin() destroys sets tracked by an IPSet
- Create() now handles cases where Sets/system state are not in sync
- Refresh() now handles a leftover -temp set gracefully
- Swap() now uses ipset swap (see the sketch after this list)
- Delete() improved sync of Sets and system state
- Get() now validates that a map element exists before accessing it
- etc
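A minimal sketch of the temp-set refresh/swap pattern by shelling out to the ipset CLI; it assumes the destination set already exists, and the helper and set names are illustrative:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// refresh atomically replaces the contents of set `name` by filling a
// "-temp" set and swapping it in, so readers never see a half-built set.
func refresh(name, setType string, entries []string) error {
	tmp := name + "-temp"
	cmds := [][]string{
		{"-exist", "create", tmp, setType}, // tolerate a leftover temp set
		{"flush", tmp},
	}
	for _, e := range entries {
		cmds = append(cmds, []string{"add", tmp, e})
	}
	cmds = append(cmds, []string{"swap", tmp, name}, []string{"destroy", tmp})
	for _, args := range cmds {
		if out, err := exec.Command("ipset", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("ipset %v: %v (%s)", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := refresh("kube-router-pod-cidrs", "hash:net", []string{"10.244.0.0/24"}); err != nil {
		log.Fatal(err)
	}
}
```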
* Update routes controller to reflect ipset changes
* Add --peer-router-password option
Also:
- Consolidated NRC peer fields into a []config.NeighborConfig
to store the address, ASN, and password for each peer (sketched below).
- BREAKING: --peer-router and --peer-asn flags now take slices
rather than strings.
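A sketch of what that consolidation implies, with struct fields assumed from the description above:

```go
package main

import "fmt"

// NeighborConfig groups the per-peer settings that were previously spread
// across separate fields on the network routing controller.
type NeighborConfig struct {
	Address  string
	ASN      uint32
	Password string
}

func main() {
	// With the BREAKING change, --peer-router and --peer-asn accept lists,
	// yielding one NeighborConfig per peer.
	peers := []NeighborConfig{
		{Address: "192.0.2.1", ASN: 64512, Password: "s3cret"},
		{Address: "192.0.2.2", ASN: 64513},
	}
	for _, p := range peers {
		fmt.Printf("peering with %s (AS%d)\n", p.Address, p.ASN)
	}
}
```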
* Add password auth node annotation for external peer
* Update documentation
New CLI flags and annotations
Renamed ones as well
* Consistent CLI flags, annotations, and peer config
BGP configs now all accept multiple values and are treated consistently.
Other refactoring was done as well.
* Stop bgpserver on peering errors to avoid listener leak
* Clarify BGP doc sections
Fix some typos
This fix introduces the `--nodeport-bindon-all-ip` flag, with which you can have kube-proxy-like behaviour (see the sketch below). If not specified,
only the node IP will be open for connections.
Fixes #139
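A hedged sketch of the flag's effect when picking bind addresses for a NodePort service; the helper and its parameters are illustrative:

```go
package main

import "fmt"

// nodePortAddresses returns the addresses a NodePort service binds to:
// all node addresses when --nodeport-bindon-all-ip is set, otherwise only
// the primary node IP (the pre-existing behaviour).
func nodePortAddresses(bindOnAllIP bool, nodeIP string, allNodeIPs []string) []string {
	if bindOnAllIP {
		return allNodeIPs
	}
	return []string{nodeIP}
}

func main() {
	all := []string{"10.0.0.5", "192.168.1.5"}
	fmt.Println(nodePortAddresses(false, "10.0.0.5", all)) // [10.0.0.5]
	fmt.Println(nodePortAddresses(true, "10.0.0.5", all))  // both addresses
}
```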
* Enable pod egress masquerading by default
- Adds flag "--enable-pod-egress" (default: true)
- Removes previously created iptables rule if option is changed to false
* Use an ipset to match Pod egress traffic to be masqueraded
* Set --cluster-cidr as deprecated flag
If set to anything, normal dynamic Pod egress masquerading is turned on.
* Use Replace else Add logic for updating export policy
Fixes errors logged due to an existing statement in the policy.
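For illustration, the replace-else-add shape behind this fix, written against an assumed interface rather than the exact GoBGP calls:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("export policy not found")

// upsertExportPolicy tries an in-place replace first and only falls back
// to add, so re-applying the same policy no longer logs "already exists".
func upsertExportPolicy(replace, add func() error) error {
	if err := replace(); err != nil {
		if errors.Is(err, errNotFound) {
			return add()
		}
		return err
	}
	return nil
}

func main() {
	err := upsertExportPolicy(
		func() error { return errNotFound },                      // nothing to replace yet...
		func() error { fmt.Println("policy added"); return nil }, // ...so add it
	)
	if err != nil {
		fmt.Println("policy update failed:", err)
	}
}
```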
If the NODE_NAME env is not set, fall back to the hostname.
Partial fix towards #23; we still have an issue where kube-router is run as an agent
and kubelet is started with the --hostname-override flag.
Nodes can be registered by host name or FQDN, and kubelet can be started with --hostname-override set to a configurable value.
In an AWS environment it is typically set to the FQDN obtained from the metadata. This fix ensures
we can deploy kube-router in case nodes are registered with an FQDN (a sketch of the fallback follows).
Fixes #17
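A minimal sketch of that fallback using only the standard library; the env var name matches the commit, while the helper itself is illustrative:

```go
package main

import (
	"fmt"
	"os"
)

// nodeName prefers the NODE_NAME environment variable (e.g. injected via
// the downward API) and falls back to the OS hostname, which may be an FQDN.
func nodeName() (string, error) {
	if name := os.Getenv("NODE_NAME"); name != "" {
		return name, nil
	}
	return os.Hostname()
}

func main() {
	name, err := nodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot determine node name:", err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
```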