Mirror of https://github.com/kubernetes-sigs/external-dns.git (synced 2025-08-05 17:16:59 +02:00)
Fix typos

* fix typos
* fix special quote characters
* fix syntax highlighting in some code blocks
Parent: f5aa1c4c37
Commit: 84a23191b5
@@ -17,7 +17,7 @@

 ## Summary

-[ExternalDNS](https://github.com/kubernetes-sigs/external-dns) is a project that synchronizes Kubernetes’ Services, Ingresses and other Kubernetes resources to DNS backends for several DNS providers.
+[ExternalDNS](https://github.com/kubernetes-sigs/external-dns) is a project that synchronizes Kubernetes' Services, Ingresses and other Kubernetes resources to DNS backends for several DNS providers.

 The projects was started as a Kubernetes Incubator project in February 2017 and being the Kubernetes incubation initiative officially over, the maintainers want to propose the project to be moved to the kubernetes GitHub organization or to kubernetes-sigs, under the sponsorship of sig-network.

@@ -33,7 +33,7 @@ When the project was proposed (see the [original discussion](https://github.com/

 * Route53-kubernetes - [https://github.com/wearemolecule/route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes)

-ExternalDNS’ goal from the beginning was to provide an officially supported solution to those problems.
+ExternalDNS' goal from the beginning was to provide an officially supported solution to those problems.

 After two years of development, the project is still in the kubernetes-sigs.

@@ -74,7 +74,7 @@ Moving the ExternalDNS project outside of Kubernetes projects would cause:

 * Problems (re-)establishing user trust which could eventually lead to fragmentation and duplication.

-* It would be hard to establish in which organization the project should be moved to. The most natural would be Zalando’s organization, being the company that put most of the work on the project. While it is possible to assume Zalando’s commitment to open-source, that would be a strategic mistake for the project community and for the Kubernetes ecosystem due to the obvious lack of neutrality.
+* It would be hard to establish in which organization the project should be moved to. The most natural would be Zalando's organization, being the company that put most of the work on the project. While it is possible to assume Zalando's commitment to open-source, that would be a strategic mistake for the project community and for the Kubernetes ecosystem due to the obvious lack of neutrality.

 * Lack of resources to test, lack of issue management via automation.

@@ -91,7 +91,7 @@ We have evidence that many companies are using ExternalDNS in production, but it

 The project was quoted by a number of tutorials on the web, including the [official tutorials from AWS](https://aws.amazon.com/blogs/opensource/unified-service-discovery-ecs-kubernetes/).

-ExternalDNS can’t be consider to be "done": while the core functionality has been implemented, there is lack of integration testing and structural changes that are needed.
+ExternalDNS can't be consider to be "done": while the core functionality has been implemented, there is lack of integration testing and structural changes that are needed.

 Those are identified in the project roadmap, which is roughly made of the following items:

@@ -129,7 +129,7 @@ The high number of providers contributed to the project pose a maintainability c

 The project uses the free quota of TravisCI to run tests for the project.

-The release pipeline for the project is currently fully owned by Zalando. It runs on the internal system of the company (closed source) which external maintainers/users can’t access and that pushes images to the publicly accessible docker registry available at the URL `registry.opensource.zalan.do`.
+The release pipeline for the project is currently fully owned by Zalando. It runs on the internal system of the company (closed source) which external maintainers/users can't access and that pushes images to the publicly accessible docker registry available at the URL `registry.opensource.zalan.do`.

 The docker registry service is provided as best effort with no sort of SLA and the maintainers team openly suggests the users to build and maintain their own docker image based on the provided Dockerfiles.

@@ -149,8 +149,8 @@ The following are risks that were identified:

 We think that the following actions will constitute appropriate mitigations:

-* Decoupling the providers via an API will allow us to resolve the problem of the providers. Being the project already more than 2 years old and given that there are 18 providers implemented, we possess enough informations to define an API that we can be stable in a short timeframe. Once this is stable, the problem of testing the providers can be deferred to be a provider’s responsibility. This will also reduce the scope of External DNS core code, which means that there will be no need for a further increase of the maintaining team.
+* Decoupling the providers via an API will allow us to resolve the problem of the providers. Being the project already more than 2 years old and given that there are 18 providers implemented, we possess enough information to define an API that we can be stable in a short timeframe. Once this is stable, the problem of testing the providers can be deferred to be a provider's responsibility. This will also reduce the scope of External DNS core code, which means that there will be no need for a further increase of the maintaining team.

 * We added integration testing for the main cloud providers to the roadmap for the 1.0 release to make sure that we cover the mostly used ones. We believe that this item should be tackled independently from the decoupling of providers as it would be capable of generating value independently from the result of the decoupling efforts.

-* With the move to the Kubernetes incubation, we hope that we will be able to access the testing resources of the Kubernetes project. In this way, we hope to decouple the project from the dependency on Zalando’s internal CI tool. This will help open up the possibility to increase the visibility on the project from external contributors, which currently would be blocked by the lack of access to the software used for the whole release pipeline.
+* With the move to the Kubernetes incubation, we hope that we will be able to access the testing resources of the Kubernetes project. In this way, we hope to decouple the project from the dependency on Zalando's internal CI tool. This will help open up the possibility to increase the visibility on the project from external contributors, which currently would be blocked by the lack of access to the software used for the whole release pipeline.
@@ -8,7 +8,7 @@
 - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl)

 Compile and run locally against a remote k8s cluster.
-```
+```shell
 git clone https://github.com/kubernetes-sigs/external-dns.git && cd external-dns
 make build
 # login to remote k8s cluster
@@ -16,14 +16,14 @@ make build
 ```

 Run linting, unit tests, and coverage report.
-```
+```shell
 make lint
 make test
 make cover-html
 ```

 Build container image.
-```
+```shell
 make build.docker
 ```

@@ -49,10 +49,10 @@ A typical way to start on, e.g. a CoreDNS provider, would be to add a `coredns.g

 Note, how your provider doesn't need to know anything about where the DNS records come from, nor does it have to figure out the difference between the current and the desired state, it merely executes the actions calculated by the plan.

-# Running Github Actions locally
+# Running GitHub Actions locally

-You can also extend the CI workflow which is currently implemented as Github Action within the [workflow](https://github.com/kubernetes-sigs/external-dns/tree/HEAD/.github/workflows) folder.
-In order to test your changes before committing you can leverage [act](https://github.com/nektos/act) to run the Github Action locally.
+You can also extend the CI workflow which is currently implemented as GitHub Action within the [workflow](https://github.com/kubernetes-sigs/external-dns/tree/HEAD/.github/workflows) folder.
+In order to test your changes before committing you can leverage [act](https://github.com/nektos/act) to run the GitHub Action locally.

 Follow the installation instructions in the nektos/act [README.md](https://github.com/nektos/act/blob/master/README.md).
 Afterwards just run `act` within the root folder of the project.
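The context line in the hunk above describes the provider contract: a provider only reports the records it currently manages and executes the actions the plan has already computed; it never derives the desired state itself. Below is a minimal sketch of what a hypothetical `coredns.go` provider could look like, modeled on the `Records`/`ApplyChanges` signatures visible in other hunks of this commit. The package name, struct name, and import paths are assumptions for illustration, not the project's actual code.

```go
package coredns

import (
	"context"

	// Import paths assumed from the project layout; adjust to the real module path.
	"sigs.k8s.io/external-dns/endpoint"
	"sigs.k8s.io/external-dns/plan"
)

// CoreDNSProvider is a hypothetical provider skeleton.
type CoreDNSProvider struct {
	// a client for the backing DNS store would live here
}

// Records returns the current state of the zone as the provider sees it.
func (p *CoreDNSProvider) Records(ctx context.Context) ([]*endpoint.Endpoint, error) {
	// read existing records from the backend and convert them to endpoints
	return []*endpoint.Endpoint{}, nil
}

// ApplyChanges only executes what the plan already decided:
// it never computes the diff between current and desired state.
func (p *CoreDNSProvider) ApplyChanges(ctx context.Context, changes *plan.Changes) error {
	for _, ep := range changes.Create {
		_ = ep // create the record in the backend
	}
	for _, ep := range changes.UpdateNew {
		_ = ep // update the record in the backend
	}
	for _, ep := range changes.Delete {
		_ = ep // delete the record from the backend
	}
	return nil
}
```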
@@ -2,7 +2,7 @@

 ## Background

-[Project proposal](https://groups.google.com/forum/#!searchin/kubernetes-dev/external$20dns%7Csort:relevance/kubernetes-dev/2wGQUB0fUuE/9OXz01i2BgAJ)
+[Project proposal](https://groups.google.com/forum/#!searching/kubernetes-dev/external$20dns%7Csort:relevance/kubernetes-dev/2wGQUB0fUuE/9OXz01i2BgAJ)

 [Initial discussion](https://docs.google.com/document/d/1ML_q3OppUtQKXan6Q42xIq2jelSoIivuXI8zExbc6ec/edit#heading=h.1pgkuagjhm4p)

@@ -155,7 +155,7 @@ $ kubectl get services echo
 $ kubectl get endpoints echo
 ```

-Make sure everything looks correct, i.e the service is defined and recieves a
+Make sure everything looks correct, i.e the service is defined and receives a
 public IP, and that the endpoint also has a pod IP.

 Once that's done, wait about 30s-1m (interval for external-dns to kick in), then do:
@@ -369,7 +369,7 @@ func (cfg *Config) ParseFlags(args []string) error {
 app.Flag("infoblox-max-results", "Add _max_results as query parameter to the URL on all API requests. The default is 0 which means _max_results is not set and the default of the server is used.").Default(strconv.Itoa(defaultConfig.InfobloxMaxResults)).IntVar(&cfg.InfobloxMaxResults)
 app.Flag("dyn-customer-name", "When using the Dyn provider, specify the Customer Name").Default("").StringVar(&cfg.DynCustomerName)
 app.Flag("dyn-username", "When using the Dyn provider, specify the Username").Default("").StringVar(&cfg.DynUsername)
-app.Flag("dyn-password", "When using the Dyn provider, specify the pasword").Default("").StringVar(&cfg.DynPassword)
+app.Flag("dyn-password", "When using the Dyn provider, specify the password").Default("").StringVar(&cfg.DynPassword)
 app.Flag("dyn-min-ttl", "Minimal TTL (in seconds) for records. This value will be used if the provided TTL for a service/ingress is lower than this.").IntVar(&cfg.DynMinTTLSeconds)
 app.Flag("oci-config-file", "When using the OCI provider, specify the OCI configuration file (required when --provider=oci").Default(defaultConfig.OCIConfigFile).StringVar(&cfg.OCIConfigFile)
 app.Flag("rcodezero-txt-encrypt", "When using the Rcodezero provider with txt registry option, set if TXT rrs are encrypted (default: false)").Default(strconv.FormatBool(defaultConfig.RcodezeroTXTEncrypt)).BoolVar(&cfg.RcodezeroTXTEncrypt)
@@ -157,7 +157,7 @@ func (p *CloudFlareProvider) Zones(ctx context.Context) ([]cloudflare.Zone, erro
 p.PaginationOptions.Page = 1

 // if there is a zoneIDfilter configured
-// && if the filter isnt just a blank string (used in tests)
+// && if the filter isn't just a blank string (used in tests)
 if len(p.zoneIDFilter.ZoneIDs) > 0 && p.zoneIDFilter.ZoneIDs[0] != "" {
 log.Debugln("zoneIDFilter configured. only looking up zone IDs defined")
 for _, zoneID := range p.zoneIDFilter.ZoneIDs {
@@ -195,7 +195,7 @@ func fixMissingTTL(ttl endpoint.TTL, minTTLSeconds int) string {
 return strconv.Itoa(i)
 }

-// merge produces a singe list of records that can be used as a replacement.
+// merge produces a single list of records that can be used as a replacement.
 // Dyn allows to replace all records with a single call
 // Invariant: the result contains only elements from the updateNew parameter
 func merge(updateOld, updateNew []*endpoint.Endpoint) []*endpoint.Endpoint {
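The comment in this hunk states an invariant for `merge`: because Dyn replaces all records in one call, the returned list is built only from `updateNew`, never from `updateOld`. The following is a sketch of a merge that honors that invariant, offered purely as an illustration; the function name, import path, and behavior are assumptions and the provider's real implementation may differ.

```go
package dyn

// Import path assumed from the project layout; adjust to the real module path.
import "sigs.k8s.io/external-dns/endpoint"

// mergeSketch illustrates the documented invariant: every element of the
// result originates from updateNew, while updateOld never leaks into it.
func mergeSketch(updateOld, updateNew []*endpoint.Endpoint) []*endpoint.Endpoint {
	// updateOld is accepted to mirror the real signature; the invariant means
	// none of its elements may appear in the result.
	_ = updateOld

	byKey := map[string]*endpoint.Endpoint{}
	order := []string{}
	for _, ep := range updateNew {
		key := ep.DNSName + "/" + ep.RecordType
		if existing, ok := byKey[key]; ok {
			// The same record appears more than once in updateNew: collapse the
			// targets so the result can replace all records in a single call.
			existing.Targets = append(existing.Targets, ep.Targets...)
			continue
		}
		byKey[key] = ep
		order = append(order, key)
	}

	merged := make([]*endpoint.Endpoint, 0, len(order))
	for _, key := range order {
		merged = append(merged, byKey[key])
	}
	return merged
}
```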
@@ -625,7 +625,7 @@ func (d *dynProviderState) Records(ctx context.Context) ([]*endpoint.Endpoint, e
 // this method does C + 2*Z requests: C=total number of changes, Z = number of
 // affected zones (1 login + 1 commit)
 func (d *dynProviderState) ApplyChanges(ctx context.Context, changes *plan.Changes) error {
-log.Debugf("Processing chages: %+v", changes)
+log.Debugf("Processing changes: %+v", changes)

 if d.DryRun {
 log.Infof("Will NOT delete these records: %+v", changes.Delete)
@@ -203,7 +203,7 @@ func (p *OCIProvider) Records(ctx context.Context) ([]*endpoint.Endpoint, error)

 // ApplyChanges applies a given set of changes to a given zone.
 func (p *OCIProvider) ApplyChanges(ctx context.Context, changes *plan.Changes) error {
-log.Debugf("Processing chages: %+v", changes)
+log.Debugf("Processing changes: %+v", changes)

 ops := []dns.RecordOperation{}
 ops = append(ops, p.newFilteredRecordOperations(changes.Create, dns.RecordOperationOperationAdd)...)
@@ -140,8 +140,8 @@ func TestOvhRecords(t *testing.T) {
 endpoints, err := provider.Records(context.TODO())
 assert.NoError(err)
 // Little fix for multi targets endpoint
-for _, endoint := range endpoints {
-sort.Strings(endoint.Targets)
+for _, endpoint := range endpoints {
+sort.Strings(endpoint.Targets)
 }
 assert.ElementsMatch(endpoints, []*endpoint.Endpoint{
 {DNSName: "example.org", RecordType: "A", RecordTTL: 10, Labels: endpoint.NewLabels(), Targets: []string{"203.0.113.42"}},
@@ -263,7 +263,7 @@ func (p *TransIPProvider) Records(ctx context.Context) ([]*endpoint.Endpoint, er
 }

 // endpointNameForRecord returns "www.example.org" for DNSEntry with Name "www" and
-// Doman with Name "example.org"
+// Domain with Name "example.org"
 func (p *TransIPProvider) endpointNameForRecord(r transip.DNSEntry, d transip.Domain) string {
 // root name is identified by "@" and should be translated to domain name for
 // the endpoint entry.
@@ -231,7 +231,7 @@ func TestUltraDNSProvider_ApplyChangesCNAME(t *testing.T) {
 assert.NotNil(t, err)
 }

-// This will work if you would set the environment variables such as "ULTRADNS_INTEGRATION" and zone should be avaialble "kubernetes-ultradns-provider-test.com"
+// This will work if you would set the environment variables such as "ULTRADNS_INTEGRATION" and zone should be available "kubernetes-ultradns-provider-test.com"
 func TestUltraDNSProvider_ApplyChanges_Integration(t *testing.T) {

 _, ok := os.LookupEnv("ULTRADNS_INTEGRATION")
@@ -311,7 +311,7 @@ func TestUltraDNSProvider_ApplyChanges_Integration(t *testing.T) {

 }

-// This will work if you would set the environment variables such as "ULTRADNS_INTEGRATION" and zone should be avaialble "kubernetes-ultradns-provider-test.com" for multiple target
+// This will work if you would set the environment variables such as "ULTRADNS_INTEGRATION" and zone should be available "kubernetes-ultradns-provider-test.com" for multiple target
 func TestUltraDNSProvider_ApplyChanges_MultipleTarget_integeration(t *testing.T) {
 _, ok := os.LookupEnv("ULTRADNS_INTEGRATION")
 if !ok {
@@ -652,7 +652,7 @@ func TestUltraDNSProvider_PoolConversionCase(t *testing.T) {
 resp, _ := provider.client.Do("GET", "zones/kubernetes-ultradns-provider-test.com./rrsets/A/ttl.kubernetes-ultradns-provider-test.com.", nil, udnssdk.RRSetListDTO{})
 assert.Equal(t, resp.Status, "200 OK")

-//Coverting to RD Pool
+//Converting to RD Pool
 _ = os.Setenv("ULTRADNS_POOL_TYPE", "rdpool")
 provider, _ = NewUltraDNSProvider(endpoint.NewDomainFilter([]string{"kubernetes-ultradns-provider-test.com"}), false)
 changes = &plan.Changes{}
@@ -662,7 +662,7 @@ func TestUltraDNSProvider_PoolConversionCase(t *testing.T) {
 resp, _ = provider.client.Do("GET", "zones/kubernetes-ultradns-provider-test.com./rrsets/A/ttl.kubernetes-ultradns-provider-test.com.", nil, udnssdk.RRSetListDTO{})
 assert.Equal(t, resp.Status, "200 OK")

-//Coverting back to SB Pool
+//Converting back to SB Pool
 _ = os.Setenv("ULTRADNS_POOL_TYPE", "sbpool")
 provider, _ = NewUltraDNSProvider(endpoint.NewDomainFilter([]string{"kubernetes-ultradns-provider-test.com"}), false)
 changes = &plan.Changes{}
@@ -217,7 +217,7 @@ func NewRouteGroupSource(timeout time.Duration, token, tokenPath, apiServerURL,
 }

 apiServer := u.String()
-// strip port if well known port, because of TLS certifcate match
+// strip port if well known port, because of TLS certificate match
 if u.Scheme == "https" && u.Port() == "443" {
 apiServer = "https://" + u.Hostname()
 }
@@ -315,7 +315,7 @@ func testServiceSourceEndpoints(t *testing.T) {
 false,
 },
 {
-"FQDN template with multiple hostnames return an endpoint with target IP when ignoreing annotations",
+"FQDN template with multiple hostnames return an endpoint with target IP when ignoring annotations",
 "",
 "",
 "testing",
@@ -75,7 +75,7 @@ func TestGetTTLFromAnnotations(t *testing.T) {
 expectedErr: nil,
 },
 {
-title: "TTL annotation value is set correcly using duration (fractional)",
+title: "TTL annotation value is set correctly using duration (fractional)",
 annotations: map[string]string{ttlAnnotationKey: "20.5s"},
 expectedTTL: endpoint.TTL(20),
 expectedErr: nil,
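The test case in this last hunk expects the fractional duration `"20.5s"` to produce `endpoint.TTL(20)`. A small standalone illustration of why, assuming the TTL is obtained by truncating the parsed duration to whole seconds; this is not the project's actual parsing helper, only a demonstration of the arithmetic the test asserts.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Parse the annotation value used in the test case above.
	d, err := time.ParseDuration("20.5s")
	if err != nil {
		panic(err)
	}
	// Converting to an integer number of seconds drops the fractional part:
	// 20.5s becomes a TTL of 20.
	ttlSeconds := int64(d.Seconds())
	fmt.Println(ttlSeconds) // prints 20
}
```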