Remove occurrences of "master" from the project (#1636)

* initial removal of inappropriate terminology

Signed-off-by: Raffaele Di Fazio <raffo@github.com>

* removed other occurrences

Signed-off-by: Raffaele Di Fazio <raffo@github.com>

* gofmt

Signed-off-by: Raffaele Di Fazio <raffo@github.com>

* addresses comment

Signed-off-by: Raffaele Di Fazio <raffo@github.com>

* gofmt

Signed-off-by: Raffaele Di Fazio <raffo@github.com>
Raffaele Di Fazio 2020-07-08 10:13:08 +02:00 committed by GitHub
parent c9a60a54d7
commit 7505f29e4c
26 changed files with 102 additions and 102 deletions


@ -12,7 +12,7 @@ STOP -- PLEASE READ!
GitHub is not the right place for support requests.
If you're looking for help, check our [docs](https://github.com/kubernetes-sigs/external-dns/tree/master/docs).
If you're looking for help, check our [docs](https://github.com/kubernetes-sigs/external-dns/tree/HEAD/docs).
You can also post your question on the [Kubernetes Slack #external-dns](https://kubernetes.slack.com/archives/C771MKDKQ).

OWNERS

@ -1,5 +1,5 @@
# See the OWNERS file documentation:
# https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md
# https://github.com/kubernetes/community/blob/HEAD/contributors/guide/owners.md
approvers:
- hjacobs


@ -3,8 +3,8 @@
</p>
# ExternalDNS
[![Build Status](https://travis-ci.org/kubernetes-sigs/external-dns.svg?branch=master)](https://travis-ci.org/kubernetes-sigs/external-dns)
[![Coverage Status](https://coveralls.io/repos/github/kubernetes-sigs/external-dns/badge.svg?branch=master)](https://coveralls.io/github/kubernetes-sigs/external-dns?branch=master)
[![Build Status](https://travis-ci.org/kubernetes-sigs/external-dns.svg)](https://travis-ci.org/kubernetes-sigs/external-dns)
[![Coverage Status](https://coveralls.io/repos/github/kubernetes-sigs/external-dns/badge.svg)](https://coveralls.io/github/kubernetes-sigs/external-dns)
[![GitHub release](https://img.shields.io/github/release/kubernetes-sigs/external-dns.svg)](https://github.com/kubernetes-sigs/external-dns/releases)
[![go-doc](https://godoc.org/github.com/kubernetes-sigs/external-dns?status.svg)](https://godoc.org/github.com/kubernetes-sigs/external-dns)
[![Go Report Card](https://goreportcard.com/badge/github.com/kubernetes-sigs/external-dns)](https://goreportcard.com/report/github.com/kubernetes-sigs/external-dns)
@ -176,7 +176,7 @@ Assuming Go has been set up with module support, it can be built simply by running
$ make
```
This will create external-dns in the build directory directly from master.
This will create external-dns in the build directory directly from the default branch.
Next, run an application and expose it via a Kubernetes Service:
@ -276,12 +276,12 @@ Here's a rough outline on what is to come (subject to change):
### v0.6
- [ ] Ability to replace Kops' [DNS Controller](https://github.com/kubernetes/kops/tree/master/dns-controller) (This could also directly become `v1.0`)
- [ ] Ability to replace Kops' [DNS Controller](https://github.com/kubernetes/kops/tree/HEAD/dns-controller) (This could also directly become `v1.0`)
- [x] Support for OVH
### v1.0
- [ ] Ability to replace Kops' [DNS Controller](https://github.com/kubernetes/kops/tree/master/dns-controller)
- [ ] Ability to replace Kops' [DNS Controller](https://github.com/kubernetes/kops/tree/HEAD/dns-controller)
- [x] Ability to replace Zalando's [Mate](https://github.com/linki/mate)
- [x] Ability to replace Molecule Software's [route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes)
@ -322,7 +322,7 @@ For an overview on how to write new Sources and Providers check out [Sources and
ExternalDNS is an effort to unify the following similar projects in order to bring the Kubernetes community an easy and predictable way of managing DNS records across cloud providers based on their Kubernetes resources:
* Kops' [DNS Controller](https://github.com/kubernetes/kops/tree/master/dns-controller)
* Kops' [DNS Controller](https://github.com/kubernetes/kops/tree/HEAD/dns-controller)
* Zalando's [Mate](https://github.com/linki/mate)
* Molecule Software's [route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes)


@ -4,7 +4,7 @@
# to for triaging and handling of incoming issues.
#
# The below names agree to abide by the
# [Embargo Policy](https://github.com/kubernetes/sig-release/blob/master/security-release-process-documentation/security-release-process.md#embargo-policy)
# [Embargo Policy](https://github.com/kubernetes/sig-release/blob/HEAD/security-release-process-documentation/security-release-process.md#embargo-policy)
# and will be removed and replaced if they violate that agreement.
#
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE


@ -29,7 +29,7 @@ When the project was proposed (see the [original discussion](https://github.com/
* Mate - [https://github.com/linki/mate](https://github.com/linki/mate)
* DNS-controller from kops - [https://github.com/kubernetes/kops/tree/master/dns-controller](https://github.com/kubernetes/kops/tree/master/dns-controller)
* DNS-controller from kops - [https://github.com/kubernetes/kops/tree/HEAD/dns-controller](https://github.com/kubernetes/kops/tree/HEAD/dns-controller)
* Route53-kubernetes - [https://github.com/wearemolecule/route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes)
@ -135,7 +135,7 @@ The docker registry service is provided as best effort with no sort of SLA and t
Providing a vanity URL for the Docker images was considered a non-goal until now, but the community seems to want official images from a GCR domain, similar to what is available for other official Kubernetes projects.
ExternalDNS does not follow a specific release cycle. Releases are made often when there are major contributions (i.e. new providers) or important bug fixes. That said, the master is considered stable and can be used as well to build images.
ExternalDNS does not follow a specific release cycle. Releases are made often when there are major contributions (e.g. new providers) or important bug fixes. That said, the default branch is considered stable and can also be used to build images.
### Risks and Mitigations


@ -4,12 +4,12 @@ CRD source provides a generic mechanism to manage DNS records in your favourite
### Details
CRD source watches for a user specified CRD to extract [Endpoints](https://github.com/kubernetes-sigs/external-dns/blob/master/endpoint/endpoint.go) from its `Spec`.
CRD source watches for a user specified CRD to extract [Endpoints](https://github.com/kubernetes-sigs/external-dns/blob/HEAD/endpoint/endpoint.go) from its `Spec`.
Users therefore need to create such a CRD, register it with the Kubernetes cluster, and then create objects of that CRD specifying the Endpoints.
### Registering CRD
Here is typical example of [CRD API type](https://github.com/kubernetes-sigs/external-dns/blob/master/endpoint/endpoint.go) which provides Endpoints to `CRD source`:
Here is a typical example of a [CRD API type](https://github.com/kubernetes-sigs/external-dns/blob/HEAD/endpoint/endpoint.go) that provides Endpoints to the `CRD source`:
```go
type TTL int64
@ -100,7 +100,6 @@ Run external-dns in dry-mode to see whether external-dns picks up the DNS record
```
$ build/external-dns --source crd --crd-source-apiversion externaldns.k8s.io/v1alpha1 --crd-source-kind DNSEndpoint --provider inmemory --once --dry-run
INFO[0000] config: {Master: KubeConfig: Sources:[crd] Namespace: AnnotationFilter: FQDNTemplate: CombineFQDNAndAnnotation:false Compatibility: PublishInternal:false PublishHostIP:false ConnectorSourceServer:localhost:8080 Provider:inmemory GoogleProject: DomainFilter:[] ZoneIDFilter:[] AWSZoneType: AWSAssumeRole: AWSMaxChangeCount:4000 AWSEvaluateTargetHealth:true AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: CloudflareProxied:false InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:sync Registry:txt TXTOwnerID:default TXTPrefix: Interval:1m0s Once:true DryRun:true LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s ExoscaleEndpoint:https://community.exoscale.com/documentation/dns/api/ ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha1 CRDSourceKind:DNSEndpoint}
INFO[0000] running in dry-run mode. No changes to DNS records will be made.
INFO[0000] Connected to cluster at https://192.168.99.100:8443
INFO[0000] CREATE: foo.bar.com 180 IN A 192.168.99.216


@ -75,13 +75,13 @@ Regarding Ingress, we'll support:
* Google's Ingress Controller on GKE that integrates with their Layer 7 load balancers (GLBC)
* nginx-ingress-controller v0.9.x with a fronting Service
* Zalando's [AWS Ingress controller](https://github.com/zalando-incubator/kube-ingress-aws-controller), based on AWS ALBs and [Skipper](https://github.com/zalando/skipper)
* [Traefik](https://github.com/containous/traefik) 1.7 and above, when [`kubernetes.ingressEndpoint`](https://docs.traefik.io/v1.7/configuration/backends/kubernetes/#ingressendpoint) is configured (`kubernetes.ingressEndpoint.useDefaultPublishedService` in the [Helm chart](https://github.com/helm/charts/tree/master/stable/traefik#configuration))
* [Traefik](https://github.com/containous/traefik) 1.7 and above, when [`kubernetes.ingressEndpoint`](https://docs.traefik.io/v1.7/configuration/backends/kubernetes/#ingressendpoint) is configured (`kubernetes.ingressEndpoint.useDefaultPublishedService` in the [Helm chart](https://github.com/helm/charts/tree/HEAD/stable/traefik#configuration))
### Are other Ingress Controllers supported?
For Ingress objects, ExternalDNS will attempt to discover the target hostname of the relevant Ingress Controller automatically. If you are using an Ingress Controller that is not listed above, you may have issues with ExternalDNS not discovering Endpoints and consequently not creating any DNS records. As a workaround, it is possible to force-create an Endpoint by manually specifying a target host/IP for the records to be created by setting the annotation `external-dns.alpha.kubernetes.io/target` on the Ingress object.
Another reason you may want to override the ingress hostname or IP address is if you have an external mechanism for handling failover across ingress endpoints. Possible scenarios for this would include using [keepalived-vip](https://github.com/kubernetes/contrib/tree/master/keepalived-vip) to manage failover faster than DNS TTLs might expire.
Another reason you may want to override the ingress hostname or IP address is if you have an external mechanism for handling failover across ingress endpoints. Possible scenarios for this would include using [keepalived-vip](https://github.com/kubernetes/contrib/tree/HEAD/keepalived-vip) to manage failover faster than DNS TTLs might expire.
Note that if you set the target to a hostname, then a CNAME record will be created. In this case, the hostname specified in the Ingress object's annotation must already exist. (i.e. you have a Service resource for your Ingress Controller with the `external-dns.alpha.kubernetes.io/hostname` annotation set to the same value.)
@ -89,7 +89,7 @@ Note that if you set the target to a hostname, then a CNAME record will be creat
ExternalDNS is a joint effort to unify different projects accomplishing the same goals, namely:
* Kops' [DNS Controller](https://github.com/kubernetes/kops/tree/master/dns-controller)
* Kops' [DNS Controller](https://github.com/kubernetes/kops/tree/HEAD/dns-controller)
* Zalando's [Mate](https://github.com/linki/mate)
* Molecule Software's [route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes)
@ -205,7 +205,7 @@ $ docker run \
-e EXTERNAL_DNS_PROVIDER=google \
-e EXTERNAL_DNS_DOMAIN_FILTER=$'foo.com\nbar.com' \
registry.opensource.zalan.do/teapot/external-dns:latest
time="2017-08-08T14:10:26Z" level=info msg="config: &{Master: KubeConfig: Sources:[service ingress] Namespace: ...
time="2017-08-08T14:10:26Z" level=info msg="config: &{APIServerURL: KubeConfig: Sources:[service ingress] Namespace: ...
```
Locally:
@ -213,12 +213,12 @@ Locally:
```console
$ export EXTERNAL_DNS_SOURCE=$'service\ningress'
$ external-dns --provider=google
INFO[0000] config: &{Master: KubeConfig: Sources:[service ingress] Namespace: ...
INFO[0000] config: &{APIServerURL: KubeConfig: Sources:[service ingress] Namespace: ...
```
```
$ EXTERNAL_DNS_SOURCE=$'service\ningress' external-dns --provider=google
INFO[0000] config: &{Master: KubeConfig: Sources:[service ingress] Namespace: ...
INFO[0000] config: &{APIServerURL: KubeConfig: Sources:[service ingress] Namespace: ...
```
In a Kubernetes manifest:


@ -10,7 +10,7 @@ This document describes the initial design proposal.
External DNS is intended to fill the existing gap of creating DNS records for Kubernetes resources. While alternative solutions exist, this project is meant to be a standard way of managing DNS records for Kubernetes. The current project is a fusion of the following projects and driven by their maintainers:
1. [Kops DNS Controller](https://github.com/kubernetes/kops/tree/master/dns-controller)
1. [Kops DNS Controller](https://github.com/kubernetes/kops/tree/HEAD/dns-controller)
2. [Mate](https://github.com/linki/mate)
3. [wearemolecule/route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes)


@ -68,7 +68,7 @@ Brief summary of open PRs and what they are trying to address:
### PRs
1. https://github.com/kubernetes-sigs/external-dns/pull/243 - first attempt to add support for multiple targets. It is lagging far behind from master tip
1. https://github.com/kubernetes-sigs/external-dns/pull/243 - first attempt to add support for multiple targets. It lags far behind the tip of the default branch
*what it does*: unfinished attempt to extend `Endpoint` struct, for it to allow multiple targets (essentially `target string -> targets []string`)
@ -78,15 +78,15 @@ Brief summary of open PRs and what they are trying to address:
*what it does*: attempts to fix issues with `plan` described in the `Current Behaviour` section above. The included tests reveal the current problem with `plan`
*action*: rebase on master and make necessary changes to satisfy requirements listed in this document including back-reference to owning record
*action*: rebase on the default branch and make the necessary changes to satisfy the requirements listed in this document, including a back-reference to the owning record
3. https://github.com/kubernetes-sigs/external-dns/pull/326 - attempt to add multiple target support.
*what it does*: for each pair `DNS Name` + `Record Type` it aggregates **all** targets from the cluster and passes them to Provider. It adds basic support
for DO, Azura, Cloudflare, AWS, GCP, however those are not tested (?). (DNSSimple and Infoblox providers were not updated)
for DO, Azure, Cloudflare, AWS, GCP, however those are not tested (?). (DNSSimple and Infoblox providers were not updated)
*action*: the `plan` logic will probably need to be reworked, but the rest (the Provider support and the `Endpoint` struct extension) can be reused.
Rebase on master and add missing pieces. Depends on `2`.
Rebase on the default branch and add the missing pieces. Depends on `2`.
Related PRs: https://github.com/kubernetes-sigs/external-dns/pull/331/files, https://github.com/kubernetes-sigs/external-dns/pull/347/files - aiming at AWS Route53 weighted records.
These PRs should be considered after common agreement about the way to address multi-target support is achieved. Related discussion: https://github.com/kubernetes-sigs/external-dns/issues/196


@ -6,6 +6,6 @@ Currently we don't release regularly. Whenever we think it makes sense to releas
## How to release a new image
When releasing a new version of external-dns, we tag the branch by using **vX.Y.Z** as tag name. This PR includes the updated **CHANGELOG.md** with the latest commits since last tag. As soon as we merge this PR into master, Kubernetes based CI/CD system [Prow](https://prow.k8s.io/?repo=kubernetes-sigs%2Fexternal-dns) will trigger a job to push the image. We're using the Google Container Registry for our Docker images.
When releasing a new version of external-dns, we tag the branch using **vX.Y.Z** as the tag name. This PR includes the updated **CHANGELOG.md** with the latest commits since the last tag. As soon as we merge this PR into the default branch, the Kubernetes-based CI/CD system [Prow](https://prow.k8s.io/?repo=kubernetes-sigs%2Fexternal-dns) will trigger a job to push the image. We're using the Google Container Registry for our Docker images.
The job itself looks at external-dns's `cloudbuild.yaml` and executes the given steps. Inside, it runs `make release.staging`, which is basically only a `docker build` and `docker push`. The Docker image is pushed to `gcr.io/k8s-staging-external-dns/external-dns`, which is only a staging image and shouldn't be used. To promote the official image, we need to create another PR in [k8s.io](https://github.com/kubernetes/k8s.io), e.g. https://github.com/kubernetes/k8s.io/pull/540, referencing the current staging image by its sha256 digest.


@ -60,7 +60,7 @@ kiam or kube2iam.
### kiam
If you're using [kiam](https://github.com/uswitch/kiam), follow the
[instructions](https://github.com/uswitch/kiam/blob/master/docs/IAM.md) for
[instructions](https://github.com/uswitch/kiam/blob/HEAD/docs/IAM.md) for
creating the IAM role.
### kube2iam


@ -19,7 +19,7 @@ Therefore, please see the subsequent prerequisites.
Helm is used to deploy the ingress controller.
We employ the popular chart [stable/nginx-ingress](https://github.com/helm/charts/tree/master/stable/nginx-ingress).
We employ the popular chart [stable/nginx-ingress](https://github.com/helm/charts/tree/HEAD/stable/nginx-ingress).
```
$ helm install stable/nginx-ingress \


@ -7,7 +7,7 @@ Make sure to use **>=0.5.7** version of ExternalDNS for this tutorial.
This tutorial uses [Azure CLI 2.0](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli) for all
Azure commands and assumes that the Kubernetes cluster was created via Azure Container Services and `kubectl` commands
are being run on an orchestration master.
are being run on an orchestration node.
## Creating an Azure DNS zone
@ -167,7 +167,7 @@ Ensure that your nginx-ingress deployment has the following arg added to it:
- --publish-service=namespace/nginx-ingress-controller-svcname
```
For more details see here: [nginx-ingress external-dns](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/faq.md#why-is-externaldns-only-adding-a-single-ip-address-in-route-53-on-aws-when-using-the-nginx-ingress-controller-how-do-i-get-it-to-use-the-fqdn-of-the-elb-assigned-to-my-nginx-ingress-controller-service-instead)
For more details see here: [nginx-ingress external-dns](https://github.com/kubernetes-sigs/external-dns/blob/HEAD/docs/faq.md#why-is-externaldns-only-adding-a-single-ip-address-in-route-53-on-aws-when-using-the-nginx-ingress-controller-how-do-i-get-it-to-use-the-fqdn-of-the-elb-assigned-to-my-nginx-ingress-controller-service-instead)
Connect your `kubectl` client to the cluster you want to test ExternalDNS with.
Then apply one of the following manifests file to deploy ExternalDNS.


@ -107,7 +107,7 @@ spec:
### Verify External DNS works (IngressRoute example)
The following instructions are based on the
[Contour example workload](https://github.com/heptio/contour/blob/master/examples/example-workload/kuard-ingressroute.yaml).
[Contour example workload](https://github.com/heptio/contour/blob/HEAD/examples/example-workload/kuard-ingressroute.yaml).
#### Install a sample service
```bash


@ -24,13 +24,13 @@ helm install stable/etcd-operator --name my-etcd-op
```
The etcd cluster is installed with the example YAML from the etcd operator website.
```
kubectl apply -f https://raw.githubusercontent.com/coreos/etcd-operator/master/example/example-etcd-cluster.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/etcd-operator/HEAD/example/example-etcd-cluster.yaml
```
### Installing CoreDNS
To make CoreDNS work with the etcd backend, the chart's values.yaml must be updated with the corresponding configuration.
```
wget https://raw.githubusercontent.com/helm/charts/master/stable/coredns/values.yaml
wget https://raw.githubusercontent.com/helm/charts/HEAD/stable/coredns/values.yaml
```
You need to edit/patch the file with the diff below:


@ -15,7 +15,7 @@ this is not required.
For help setting up the Kubernetes Ingress AWS Controller, that can
create ALBs and NLBs, follow the [Setup Guide][2].
[2]: https://github.com/zalando-incubator/kube-ingress-aws-controller/tree/master/deploy
[2]: https://github.com/zalando-incubator/kube-ingress-aws-controller/tree/HEAD/deploy
### Optional RouteGroup
@ -26,7 +26,7 @@ create ALBs and NLBs, follow the [Setup Guide][2].
First, you have to apply the RouteGroup CRD to your cluster:
```
kubectl apply -f https://github.com/zalando/skipper/blob/master/dataclients/kubernetes/deploy/apply/routegroups_crd.yaml
kubectl apply -f https://github.com/zalando/skipper/blob/HEAD/dataclients/kubernetes/deploy/apply/routegroups_crd.yaml
```
You have to grant all controllers: [Skipper][4],


@ -105,7 +105,7 @@ spec:
### Verify External DNS works (OpenShift Route example)
The following instructions are based on the
[Hello Openshift](https://github.com/openshift/origin/tree/master/examples/hello-openshift).
[Hello Openshift](https://github.com/openshift/origin/tree/HEAD/examples/hello-openshift).
#### Install a sample service and expose it
```bash


@ -58,7 +58,7 @@ curl -XPOST -H "X-Ovh-Application: <ApplicationKey>" -H "Content-type: applicati
"path": "/domain/zone/*/refresh"
}
],
"redirection":"https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/ovh.md#creating-ovh-credentials"
"redirection":"https://github.com/kubernetes-sigs/external-dns/blob/HEAD/docs/tutorials/ovh.md#creating-ovh-credentials"
}'
```


@ -110,7 +110,7 @@ func main() {
CRDSourceAPIVersion: cfg.CRDSourceAPIVersion,
CRDSourceKind: cfg.CRDSourceKind,
KubeConfig: cfg.KubeConfig,
KubeMaster: cfg.Master,
APIServerURL: cfg.APIServerURL,
ServiceTypeFilter: cfg.ServiceTypeFilter,
CFAPIEndpoint: cfg.CFAPIEndpoint,
CFUsername: cfg.CFUsername,
@ -122,8 +122,8 @@ func main() {
// Lookup all the selected sources by names and pass them the desired configuration.
sources, err := source.ByNames(&source.SingletonClientGenerator{
KubeConfig: cfg.KubeConfig,
KubeMaster: cfg.Master,
KubeConfig: cfg.KubeConfig,
APIServerURL: cfg.APIServerURL,
// If update events are enabled, disable timeout.
RequestTimeout: func() time.Duration {
if cfg.UpdateEvents {


@ -38,9 +38,10 @@ var (
// Config is a project-wide configuration
type Config struct {
Master string
APIServerURL string
KubeConfig string
RequestTimeout time.Duration
IstioIngressGatewayServices []string
ContourLoadBalancerService string
SkipperRouteGroupVersion string
Sources []string
@ -144,7 +145,7 @@ type Config struct {
}
var defaultConfig = &Config{
Master: "",
APIServerURL: "",
KubeConfig: "",
RequestTimeout: time.Second * 30,
ContourLoadBalancerService: "heptio-contour/contour",
@ -283,7 +284,7 @@ func (cfg *Config) ParseFlags(args []string) error {
app.DefaultEnvars()
// Flags related to Kubernetes
app.Flag("master", "The Kubernetes API server to connect to (default: auto-detect)").Default(defaultConfig.Master).StringVar(&cfg.Master)
app.Flag("server", "The Kubernetes API server to connect to (default: auto-detect)").Default(defaultConfig.APIServerURL).StringVar(&cfg.APIServerURL)
app.Flag("kubeconfig", "Retrieve target cluster configuration from a Kubernetes configuration file (default: auto-detect)").Default(defaultConfig.KubeConfig).StringVar(&cfg.KubeConfig)
app.Flag("request-timeout", "Request timeout when calling Kubernetes APIs. 0s means no timeout").Default(defaultConfig.RequestTimeout.String()).DurationVar(&cfg.RequestTimeout)


@ -29,17 +29,17 @@ import (
var (
minimalConfig = &Config{
Master: "",
KubeConfig: "",
RequestTimeout: time.Second * 30,
ContourLoadBalancerService: "heptio-contour/contour",
SkipperRouteGroupVersion: "zalando.org/v1",
Sources: []string{"service"},
Namespace: "",
FQDNTemplate: "",
Compatibility: "",
Provider: "google",
GoogleProject: "",
APIServerURL: "",
KubeConfig: "",
RequestTimeout: time.Second * 30,
ContourLoadBalancerService: "heptio-contour/contour",
SkipperRouteGroupVersion: "zalando.org/v1",
Sources: []string{"service"},
Namespace: "",
FQDNTemplate: "",
Compatibility: "",
Provider: "google",
GoogleProject: "",
GoogleBatchChangeSize: 1000,
GoogleBatchChangeInterval: time.Second,
DomainFilter: []string{""},
@ -103,17 +103,17 @@ var (
}
overriddenConfig = &Config{
Master: "http://127.0.0.1:8080",
KubeConfig: "/some/path",
RequestTimeout: time.Second * 77,
ContourLoadBalancerService: "heptio-contour-other/contour-other",
SkipperRouteGroupVersion: "zalando.org/v2",
Sources: []string{"service", "ingress", "connector"},
Namespace: "namespace",
IgnoreHostnameAnnotation: true,
FQDNTemplate: "{{.Name}}.service.example.com",
Compatibility: "mate",
Provider: "google",
APIServerURL: "http://127.0.0.1:8080",
KubeConfig: "/some/path",
RequestTimeout: time.Second * 77,
ContourLoadBalancerService: "heptio-contour-other/contour-other",
SkipperRouteGroupVersion: "zalando.org/v2",
Sources: []string{"service", "ingress", "connector"},
Namespace: "namespace",
IgnoreHostnameAnnotation: true,
FQDNTemplate: "{{.Name}}.service.example.com",
Compatibility: "mate",
Provider: "google",
GoogleProject: "project",
GoogleBatchChangeSize: 100,
GoogleBatchChangeInterval: time.Second * 2,
@ -203,7 +203,7 @@ func TestParseFlags(t *testing.T) {
{
title: "override everything via flags",
args: []string{
"--master=http://127.0.0.1:8080",
"--server=http://127.0.0.1:8080",
"--kubeconfig=/some/path",
"--request-timeout=77s",
"--contour-load-balancer=heptio-contour-other/contour-other",
@ -294,7 +294,7 @@ func TestParseFlags(t *testing.T) {
title: "override everything via environment variables",
args: []string{},
envVars: map[string]string{
"EXTERNAL_DNS_MASTER": "http://127.0.0.1:8080",
"EXTERNAL_DNS_SERVER": "http://127.0.0.1:8080",
"EXTERNAL_DNS_KUBECONFIG": "/some/path",
"EXTERNAL_DNS_REQUEST_TIMEOUT": "77s",
"EXTERNAL_DNS_CONTOUR_LOAD_BALANCER": "heptio-contour-other/contour-other",


@ -60,7 +60,7 @@ var (
)
// AWSSDClient is the subset of the AWS Cloud Map API that we actually use. Add methods as required.
// Signatures must match exactly. Taken from https://github.com/aws/aws-sdk-go/blob/master/service/servicediscovery/api.go
// Signatures must match exactly. Taken from https://github.com/aws/aws-sdk-go/blob/HEAD/service/servicediscovery/api.go
type AWSSDClient interface {
CreateService(input *sd.CreateServiceInput) (*sd.CreateServiceOutput, error)
DeregisterInstance(input *sd.DeregisterInstanceInput) (*sd.DeregisterInstanceOutput, error)


@ -55,14 +55,14 @@ func addKnownTypes(scheme *runtime.Scheme, groupVersion schema.GroupVersion) err
}
// NewCRDClientForAPIVersionKind returns a rest client for the given apiVersion and kind of the CRD
func NewCRDClientForAPIVersionKind(client kubernetes.Interface, kubeConfig, kubeMaster, apiVersion, kind string) (*rest.RESTClient, *runtime.Scheme, error) {
func NewCRDClientForAPIVersionKind(client kubernetes.Interface, kubeConfig, apiServerURL, apiVersion, kind string) (*rest.RESTClient, *runtime.Scheme, error) {
if kubeConfig == "" {
if _, err := os.Stat(clientcmd.RecommendedHomeFile); err == nil {
kubeConfig = clientcmd.RecommendedHomeFile
}
}
config, err := clientcmd.BuildConfigFromFlags(kubeMaster, kubeConfig)
config, err := clientcmd.BuildConfigFromFlags(apiServerURL, kubeConfig)
if err != nil {
return nil, nil, err
}


@ -47,7 +47,7 @@ const (
type routeGroupSource struct {
cli routeGroupListClient
master string
apiServer string
namespace string
apiEndpoint string
annotationFilter string
@ -199,7 +199,7 @@ func parseTemplate(fqdnTemplate string) (tmpl *template.Template, err error) {
}
// NewRouteGroupSource creates a new routeGroupSource with the given config.
func NewRouteGroupSource(timeout time.Duration, token, tokenPath, master, namespace, annotationFilter, fqdnTemplate, routegroupVersion string, combineFqdnAnnotation, ignoreHostnameAnnotation bool) (Source, error) {
func NewRouteGroupSource(timeout time.Duration, token, tokenPath, apiServerURL, namespace, annotationFilter, fqdnTemplate, routegroupVersion string, combineFqdnAnnotation, ignoreHostnameAnnotation bool) (Source, error) {
tmpl, err := parseTemplate(fqdnTemplate)
if err != nil {
return nil, err
@ -210,7 +210,7 @@ func NewRouteGroupSource(timeout time.Duration, token, tokenPath, master, namesp
}
cli := newRouteGroupClient(token, tokenPath, timeout)
u, err := url.Parse(master)
u, err := url.Parse(apiServerURL)
if err != nil {
return nil, err
}
@ -223,7 +223,7 @@ func NewRouteGroupSource(timeout time.Duration, token, tokenPath, master, namesp
sc := &routeGroupSource{
cli: cli,
master: apiServer,
apiServer: apiServer,
namespace: namespace,
apiEndpoint: apiServer + fmt.Sprintf(routeGroupListResource, routegroupVersion),
annotationFilter: annotationFilter,


@ -1541,7 +1541,7 @@ func TestNodePortServices(t *testing.T) {
},
},
}},
[]string{"master-0"},
[]string{"pod-0"},
[]int{1},
[]v1.PodPhase{v1.PodRunning},
},


@ -53,7 +53,7 @@ type Config struct {
CRDSourceAPIVersion string
CRDSourceKind string
KubeConfig string
KubeMaster string
APIServerURL string
ServiceTypeFilter []string
CFAPIEndpoint string
CFUsername string
@ -76,7 +76,7 @@ type ClientGenerator interface {
// will be generated
type SingletonClientGenerator struct {
KubeConfig string
KubeMaster string
APIServerURL string
RequestTimeout time.Duration
kubeClient kubernetes.Interface
istioClient *istioclient.Clientset
@ -94,7 +94,7 @@ type SingletonClientGenerator struct {
func (p *SingletonClientGenerator) KubeClient() (kubernetes.Interface, error) {
var err error
p.kubeOnce.Do(func() {
p.kubeClient, err = NewKubeClient(p.KubeConfig, p.KubeMaster, p.RequestTimeout)
p.kubeClient, err = NewKubeClient(p.KubeConfig, p.APIServerURL, p.RequestTimeout)
})
return p.kubeClient, err
}
@ -103,7 +103,7 @@ func (p *SingletonClientGenerator) KubeClient() (kubernetes.Interface, error) {
func (p *SingletonClientGenerator) IstioClient() (istioclient.Interface, error) {
var err error
p.istioOnce.Do(func() {
p.istioClient, err = NewIstioClient(p.KubeConfig, p.KubeMaster)
p.istioClient, err = NewIstioClient(p.KubeConfig, p.APIServerURL)
})
return p.istioClient, err
}
@ -136,7 +136,7 @@ func NewCFClient(cfAPIEndpoint string, cfUsername string, cfPassword string) (*c
func (p *SingletonClientGenerator) DynamicKubernetesClient() (dynamic.Interface, error) {
var err error
p.contourOnce.Do(func() {
p.contourClient, err = NewDynamicKubernetesClient(p.KubeConfig, p.KubeMaster, p.RequestTimeout)
p.contourClient, err = NewDynamicKubernetesClient(p.KubeConfig, p.APIServerURL, p.RequestTimeout)
})
return p.contourClient, err
}
@ -145,7 +145,7 @@ func (p *SingletonClientGenerator) DynamicKubernetesClient() (dynamic.Interface,
func (p *SingletonClientGenerator) OpenShiftClient() (openshift.Interface, error) {
var err error
p.openshiftOnce.Do(func() {
p.openshiftClient, err = NewOpenShiftClient(p.KubeConfig, p.KubeMaster, p.RequestTimeout)
p.openshiftClient, err = NewOpenShiftClient(p.KubeConfig, p.APIServerURL, p.RequestTimeout)
})
return p.openshiftClient, err
}
@@ -236,35 +236,35 @@ func BuildWithConfig(source string, p ClientGenerator, cfg *Config) (Source, err
if err != nil {
return nil, err
}
-crdClient, scheme, err := NewCRDClientForAPIVersionKind(client, cfg.KubeConfig, cfg.KubeMaster, cfg.CRDSourceAPIVersion, cfg.CRDSourceKind)
+crdClient, scheme, err := NewCRDClientForAPIVersionKind(client, cfg.KubeConfig, cfg.APIServerURL, cfg.CRDSourceAPIVersion, cfg.CRDSourceKind)
if err != nil {
return nil, err
}
return NewCRDSource(crdClient, cfg.Namespace, cfg.CRDSourceKind, cfg.AnnotationFilter, scheme)
case "skipper-routegroup":
-master := cfg.KubeMaster
+apiServerURL := cfg.APIServerURL
tokenPath := ""
token := ""
-restConfig, err := GetRestConfig(cfg.KubeConfig, cfg.KubeMaster)
+restConfig, err := GetRestConfig(cfg.KubeConfig, cfg.APIServerURL)
if err == nil {
-master = restConfig.Host
+apiServerURL = restConfig.Host
tokenPath = restConfig.BearerTokenFile
token = restConfig.BearerToken
}
-return NewRouteGroupSource(cfg.RequestTimeout, token, tokenPath, master, cfg.Namespace, cfg.AnnotationFilter, cfg.FQDNTemplate, cfg.SkipperRouteGroupVersion, cfg.CombineFQDNAndAnnotation, cfg.IgnoreHostnameAnnotation)
+return NewRouteGroupSource(cfg.RequestTimeout, token, tokenPath, apiServerURL, cfg.Namespace, cfg.AnnotationFilter, cfg.FQDNTemplate, cfg.SkipperRouteGroupVersion, cfg.CombineFQDNAndAnnotation, cfg.IgnoreHostnameAnnotation)
}
return nil, ErrSourceNotFound
}
// GetRestConfig returns the rest clients config to get automatically
// data if you run inside a cluster or by passing flags.
-func GetRestConfig(kubeConfig, kubeMaster string) (*rest.Config, error) {
+func GetRestConfig(kubeConfig, apiServerURL string) (*rest.Config, error) {
if kubeConfig == "" {
if _, err := os.Stat(clientcmd.RecommendedHomeFile); err == nil {
kubeConfig = clientcmd.RecommendedHomeFile
}
}
-log.Debugf("kubeMaster: %s", kubeMaster)
+log.Debugf("apiServerURL: %s", apiServerURL)
log.Debugf("kubeConfig: %s", kubeConfig)
// evaluate whether to use kubeConfig-file or serviceaccount-token
@@ -277,7 +277,7 @@ func GetRestConfig(kubeConfig, kubeMaster string) (*rest.Config, error) {
config, err = rest.InClusterConfig()
} else {
log.Infof("Using kubeConfig")
-config, err = clientcmd.BuildConfigFromFlags(kubeMaster, kubeConfig)
+config, err = clientcmd.BuildConfigFromFlags(apiServerURL, kubeConfig)
}
if err != nil {
return nil, err
@@ -287,11 +287,11 @@ func GetRestConfig(kubeConfig, kubeMaster string) (*rest.Config, error) {
}
// NewKubeClient returns a new Kubernetes client object. It takes a Config and
-// uses KubeMaster and KubeConfig attributes to connect to the cluster. If
+// uses APIServerURL and KubeConfig attributes to connect to the cluster. If
// KubeConfig isn't provided it defaults to using the recommended default.
-func NewKubeClient(kubeConfig, kubeMaster string, requestTimeout time.Duration) (*kubernetes.Clientset, error) {
+func NewKubeClient(kubeConfig, apiServerURL string, requestTimeout time.Duration) (*kubernetes.Clientset, error) {
log.Infof("Instantiating new Kubernetes client")
-config, err := GetRestConfig(kubeConfig, kubeMaster)
+config, err := GetRestConfig(kubeConfig, apiServerURL)
if err != nil {
return nil, err
}
@@ -322,16 +322,16 @@ func NewKubeClient(kubeConfig, kubeMaster string, requestTimeout time.Duration)
// NB: Istio controls the creation of the underlying Kubernetes client, so we
// have no ability to tack on transport wrappers (e.g., Prometheus request
// wrappers) to the client's config at this level. Furthermore, the Istio client
-// constructor does not expose the ability to override the Kubernetes master,
-// so the Master config attribute has no effect.
-func NewIstioClient(kubeConfig string, kubeMaster string) (*istioclient.Clientset, error) {
+// constructor does not expose the ability to override the Kubernetes API server endpoint,
+// so the apiServerURL config attribute has no effect.
+func NewIstioClient(kubeConfig string, apiServerURL string) (*istioclient.Clientset, error) {
if kubeConfig == "" {
if _, err := os.Stat(clientcmd.RecommendedHomeFile); err == nil {
kubeConfig = clientcmd.RecommendedHomeFile
}
}
-restCfg, err := clientcmd.BuildConfigFromFlags(kubeMaster, kubeConfig)
+restCfg, err := clientcmd.BuildConfigFromFlags(apiServerURL, kubeConfig)
if err != nil {
return nil, err
}
@@ -345,16 +345,16 @@ func NewIstioClient(kubeConfig string, kubeMaster string) (*istioclient.Clientse
}
// NewDynamicKubernetesClient returns a new Dynamic Kubernetes client object. It takes a Config and
-// uses KubeMaster and KubeConfig attributes to connect to the cluster. If
+// uses APIServerURL and KubeConfig attributes to connect to the cluster. If
// KubeConfig isn't provided it defaults to using the recommended default.
-func NewDynamicKubernetesClient(kubeConfig, kubeMaster string, requestTimeout time.Duration) (dynamic.Interface, error) {
+func NewDynamicKubernetesClient(kubeConfig, apiServerURL string, requestTimeout time.Duration) (dynamic.Interface, error) {
if kubeConfig == "" {
if _, err := os.Stat(clientcmd.RecommendedHomeFile); err == nil {
kubeConfig = clientcmd.RecommendedHomeFile
}
}
-config, err := clientcmd.BuildConfigFromFlags(kubeMaster, kubeConfig)
+config, err := clientcmd.BuildConfigFromFlags(apiServerURL, kubeConfig)
if err != nil {
return nil, err
}
@@ -381,16 +381,16 @@ func NewDynamicKubernetesClient(kubeConfig, kubeMaster string, requestTimeout ti
}
// NewOpenShiftClient returns a new Openshift client object. It takes a Config and
-// uses KubeMaster and KubeConfig attributes to connect to the cluster. If
+// uses APIServerURL and KubeConfig attributes to connect to the cluster. If
// KubeConfig isn't provided it defaults to using the recommended default.
-func NewOpenShiftClient(kubeConfig, kubeMaster string, requestTimeout time.Duration) (*openshift.Clientset, error) {
+func NewOpenShiftClient(kubeConfig, apiServerURL string, requestTimeout time.Duration) (*openshift.Clientset, error) {
if kubeConfig == "" {
if _, err := os.Stat(clientcmd.RecommendedHomeFile); err == nil {
kubeConfig = clientcmd.RecommendedHomeFile
}
}
config, err := clientcmd.BuildConfigFromFlags(kubeMaster, kubeConfig)
config, err := clientcmd.BuildConfigFromFlags(apiServerURL, kubeConfig)
if err != nil {
return nil, err
}
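
The renamed `GetRestConfig` keeps the same fallback behavior: an explicit kubeconfig path wins, otherwise the recommended default (`~/.kube/config`) is used if it exists, and an empty result means the in-cluster service-account config applies. The standalone sketch below illustrates that decision logic only; it is not part of this commit, and the helper name `resolveKubeConfig` is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolveKubeConfig is a hypothetical helper mirroring the fallback in
// GetRestConfig: if no explicit kubeconfig path is given, fall back to the
// recommended default (~/.kube/config) when that file exists. An empty
// return value signals that the in-cluster config should be used.
func resolveKubeConfig(kubeConfig, homeDir string) string {
	if kubeConfig != "" {
		return kubeConfig // an explicit flag always wins
	}
	recommended := filepath.Join(homeDir, ".kube", "config")
	if _, err := os.Stat(recommended); err == nil {
		return recommended
	}
	return ""
}

func main() {
	// An explicit path is returned unchanged, regardless of what is on disk.
	fmt.Println(resolveKubeConfig("/tmp/kubeconfig", "/home/user"))
}
```

The same resolved path is then handed to `clientcmd.BuildConfigFromFlags` together with the `apiServerURL` override, as the diff above shows for each client constructor.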