ci(docs): add markdown linters and editorconfig (#5055)

* ci(docs): add markdown linters

* fixes issues in md detected by the linter

* fix workflow

* pre commit

* add editor config

* fix test

* review
Michel Loiseleur 2025-02-09 23:07:56 +01:00 committed by GitHub
parent e7ff1c9c44
commit ac4049bf03
85 changed files with 1137 additions and 777 deletions

.editorconfig (new file)

@@ -0,0 +1,20 @@
# EditorConfig helps developers define and maintain consistent
# coding styles between different editors and IDEs
# editorconfig.org
root = true
[*]
# Change these settings to your own preference
indent_style = space
indent_size = 2
# We recommend you keep these unchanged
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
[Makefile]
indent_style = tab
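
These settings take effect automatically in editors with EditorConfig support. To check files against them from the command line, one option (an assumption about tooling, not something this commit adds) is the editorconfig-checker CLI:

```shell
# Hypothetical local check; editorconfig-checker must be installed separately.
editorconfig-checker
```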

View File

@@ -20,6 +20,7 @@ assignees: ''
**Anything else we need to know?**:
**Environment**:
- External-DNS version (use `external-dns --version`):
- DNS provider:
- Others:

View File

@@ -1,4 +1,4 @@
name: json-yaml-validate
on:
  push:
    branches: [ master ]

View File

@@ -1,29 +1,27 @@
name: Lint
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
permissions:
  contents: read  # to fetch code (actions/checkout)
  checks: write
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    checks: write
  lint:
    name: Markdown, Go and OAS
    runs-on: ubuntu-latest
    permissions:
      contents: read  # to fetch code (actions/checkout)
      checks: write  # to create a new check based on the results (shogo82148/actions-goveralls)
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Lint markdown
        uses: nosborn/github-action-markdown-cli@v3.3.0
        with:
          files: '.'
          config_file: ".markdownlint.json"
      - name: Set up Go 1.x
        uses: actions/setup-go@v5
        with:
.markdownlint.json (new file)

@@ -0,0 +1,13 @@
{
  "default": true,
  "MD010": { "code_blocks": false },
  "MD013": { "line_length": "300" },
  "MD033": false,
  "MD036": false,
  "MD024": false,
  "MD041": false,
  "MD029": false,
  "MD034": false,
  "MD038": false,
  "MD046": false
}
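
With this file in place, the CI check can be reproduced locally, assuming markdownlint-cli is installed:

```shell
# npm install -g markdownlint-cli
markdownlint --config .markdownlint.json .
```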

.pre-commit-config.yaml (new file)

@@ -0,0 +1,27 @@
---
default_language_version:
  node: system
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: check-added-large-files
      - id: check-case-conflict
      - id: check-executables-have-shebangs
      - id: check-merge-conflict
      - id: check-shebang-scripts-are-executable
      - id: check-symlinks
      - id: destroyed-symlinks
      - id: end-of-file-fixer
      - id: fix-byte-order-marker
      - id: forbid-new-submodules
      - id: mixed-line-ending
      - id: trailing-whitespace
  - repo: https://github.com/igorshubovych/markdownlint-cli
    rev: v0.44.0
    hooks:
      - id: markdownlint
minimum_pre_commit_version: !!str 3.2
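
A sketch of typical local usage, assuming pre-commit itself is installed (e.g. via pip):

```shell
pre-commit install          # register the hooks defined above in .git/hooks
pre-commit run --all-files  # run every hook against the whole tree once
```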

View File

@@ -2,7 +2,7 @@
Welcome to Kubernetes. We are excited about the prospect of you joining our [community](https://git.k8s.io/community)! The Kubernetes community abides by the CNCF [code of conduct](code-of-conduct.md). Here is an excerpt:
_As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities._
_In the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or other activities._
## Getting Started

View File

@@ -162,6 +162,7 @@ ko:
	scripts/install-ko.sh

# generate-flags-documentation: Generate documentation (docs/flags.md)
.PHONE: generate-flags-documentation
.PHONY: generate-flags-documentation
generate-flags-documentation:
	go run internal/gen/docs/flags/main.go

pre-commit-install: ## Install pre-commit hooks
	@pre-commit install
	@pre-commit gc

pre-commit-uninstall: ## Uninstall hooks
	@pre-commit uninstall

pre-commit-validate: ## Validate files with pre-commit hooks
	@pre-commit run --all-files
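
These targets wrap the pre-commit CLI, so the hook suite can also be driven through make:

```shell
make pre-commit-install   # one-time hook setup
make pre-commit-validate  # run all hooks against all files
```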

README.md

@@ -5,13 +5,17 @@ hide:
---
<p align="center">
  <img src="docs/img/external-dns.png" width="40%" align="center" alt="ExternalDNS">
</p>
# ExternalDNS
[![Build Status](https://github.com/kubernetes-sigs/external-dns/workflows/Go/badge.svg)](https://github.com/kubernetes-sigs/external-dns/actions) [![Coverage Status](https://coveralls.io/repos/github/kubernetes-sigs/external-dns/badge.svg)](https://coveralls.io/github/kubernetes-sigs/external-dns) [![GitHub release](https://img.shields.io/github/release/kubernetes-sigs/external-dns.svg)](https://github.com/kubernetes-sigs/external-dns/releases) [![go-doc](https://godoc.org/github.com/kubernetes-sigs/external-dns?status.svg)](https://godoc.org/github.com/kubernetes-sigs/external-dns) [![Go Report Card](https://goreportcard.com/badge/github.com/kubernetes-sigs/external-dns)](https://goreportcard.com/report/github.com/kubernetes-sigs/external-dns) [![ExternalDNS docs](https://img.shields.io/badge/docs-external--dns-blue)](https://kubernetes-sigs.github.io/external-dns/)
[![Build Status](https://github.com/kubernetes-sigs/external-dns/workflows/Go/badge.svg)](https://github.com/kubernetes-sigs/external-dns/actions)
[![Coverage Status](https://coveralls.io/repos/github/kubernetes-sigs/external-dns/badge.svg)](https://coveralls.io/github/kubernetes-sigs/external-dns)
[![GitHub release](https://img.shields.io/github/release/kubernetes-sigs/external-dns.svg)](https://github.com/kubernetes-sigs/external-dns/releases)
[![go-doc](https://godoc.org/github.com/kubernetes-sigs/external-dns?status.svg)](https://godoc.org/github.com/kubernetes-sigs/external-dns)
[![Go Report Card](https://goreportcard.com/badge/github.com/kubernetes-sigs/external-dns)](https://goreportcard.com/report/github.com/kubernetes-sigs/external-dns)
[![ExternalDNS docs](https://img.shields.io/badge/docs-external--dns-blue)](https://kubernetes-sigs.github.io/external-dns/)
ExternalDNS synchronizes exposed Kubernetes Services and Ingresses with DNS providers.
@@ -21,7 +25,9 @@ This README is a part of the complete documentation, available [here](https://ku
## What It Does
Inspired by [Kubernetes DNS](https://github.com/kubernetes/dns), Kubernetes' cluster-internal DNS server, ExternalDNS makes Kubernetes resources discoverable via public DNS servers. Like KubeDNS, it retrieves a list of resources (Services, Ingresses, etc.) from the [Kubernetes API](https://kubernetes.io/docs/api/) to determine a desired list of DNS records. *Unlike* KubeDNS, however, it's not a DNS server itself, but merely configures other DNS providers accordingly—e.g. [AWS Route 53](https://aws.amazon.com/route53/) or [Google Cloud DNS](https://cloud.google.com/dns/docs/).
Inspired by [Kubernetes DNS](https://github.com/kubernetes/dns), Kubernetes' cluster-internal DNS server, ExternalDNS makes Kubernetes resources discoverable via public DNS servers.
Like KubeDNS, it retrieves a list of resources (Services, Ingresses, etc.) from the [Kubernetes API](https://kubernetes.io/docs/api/) to determine a desired list of DNS records.
*Unlike* KubeDNS, however, it's not a DNS server itself, but merely configures other DNS providers accordingly—e.g. [AWS Route 53](https://aws.amazon.com/route53/) or [Google Cloud DNS](https://cloud.google.com/dns/docs/).
In a broader sense, ExternalDNS allows you to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way.
@@ -34,42 +40,45 @@ To see ExternalDNS in action, have a look at this [video](https://www.youtube.co
- [current release process](./docs/release.md)
ExternalDNS allows you to keep selected zones (via `--domain-filter`) synchronized with Ingresses and Services of `type=LoadBalancer` and nodes in various DNS providers:
* [Google Cloud DNS](https://cloud.google.com/dns/docs/)
* [AWS Route 53](https://aws.amazon.com/route53/)
* [AWS Cloud Map](https://docs.aws.amazon.com/cloud-map/)
* [AzureDNS](https://azure.microsoft.com/en-us/services/dns)
* [Civo](https://www.civo.com)
* [CloudFlare](https://www.cloudflare.com/dns)
* [DigitalOcean](https://www.digitalocean.com/products/networking)
* [DNSimple](https://dnsimple.com/)
* [OpenStack Designate](https://docs.openstack.org/designate/latest/)
* [PowerDNS](https://www.powerdns.com/)
* [CoreDNS](https://coredns.io/)
* [Exoscale](https://www.exoscale.com/dns/)
* [Oracle Cloud Infrastructure DNS](https://docs.cloud.oracle.com/iaas/Content/DNS/Concepts/dnszonemanagement.htm)
* [Linode DNS](https://www.linode.com/docs/networking/dns/)
* [RFC2136](https://tools.ietf.org/html/rfc2136)
* [NS1](https://ns1.com/)
* [TransIP](https://www.transip.eu/domain-name/)
* [OVH](https://www.ovh.com)
* [Scaleway](https://www.scaleway.com)
* [Akamai Edge DNS](https://learn.akamai.com/en-us/products/cloud_security/edge_dns.html)
* [GoDaddy](https://www.godaddy.com)
* [Gandi](https://www.gandi.net)
* [IBM Cloud DNS](https://www.ibm.com/cloud/dns)
* [TencentCloud PrivateDNS](https://cloud.tencent.com/product/privatedns)
* [TencentCloud DNSPod](https://cloud.tencent.com/product/cns)
* [Plural](https://www.plural.sh/)
* [Pi-hole](https://pi-hole.net/)
ExternalDNS is, by default, aware of the records it is managing, therefore it can safely manage non-empty hosted zones. We strongly encourage you to set `--txt-owner-id` to a unique value that doesn't change for the lifetime of your cluster. You might also want to run ExternalDNS in a dry run mode (`--dry-run` flag) to see the changes to be submitted to your DNS Provider API.
- [Google Cloud DNS](https://cloud.google.com/dns/docs/)
- [AWS Route 53](https://aws.amazon.com/route53/)
- [AWS Cloud Map](https://docs.aws.amazon.com/cloud-map/)
- [AzureDNS](https://azure.microsoft.com/en-us/services/dns)
- [Civo](https://www.civo.com)
- [CloudFlare](https://www.cloudflare.com/dns)
- [DigitalOcean](https://www.digitalocean.com/products/networking)
- [DNSimple](https://dnsimple.com/)
- [OpenStack Designate](https://docs.openstack.org/designate/latest/)
- [PowerDNS](https://www.powerdns.com/)
- [CoreDNS](https://coredns.io/)
- [Exoscale](https://www.exoscale.com/dns/)
- [Oracle Cloud Infrastructure DNS](https://docs.cloud.oracle.com/iaas/Content/DNS/Concepts/dnszonemanagement.htm)
- [Linode DNS](https://www.linode.com/docs/networking/dns/)
- [RFC2136](https://tools.ietf.org/html/rfc2136)
- [NS1](https://ns1.com/)
- [TransIP](https://www.transip.eu/domain-name/)
- [OVH](https://www.ovh.com)
- [Scaleway](https://www.scaleway.com)
- [Akamai Edge DNS](https://learn.akamai.com/en-us/products/cloud_security/edge_dns.html)
- [GoDaddy](https://www.godaddy.com)
- [Gandi](https://www.gandi.net)
- [IBM Cloud DNS](https://www.ibm.com/cloud/dns)
- [TencentCloud PrivateDNS](https://cloud.tencent.com/product/privatedns)
- [TencentCloud DNSPod](https://cloud.tencent.com/product/cns)
- [Plural](https://www.plural.sh/)
- [Pi-hole](https://pi-hole.net/)
ExternalDNS is, by default, aware of the records it is managing, therefore it can safely manage non-empty hosted zones.
We strongly encourage you to set `--txt-owner-id` to a unique value that doesn't change for the lifetime of your cluster.
You might also want to run ExternalDNS in a dry run mode (`--dry-run` flag) to see the changes to be submitted to your DNS Provider API.
Note that all flags can be replaced with environment variables; for instance,
`--dry-run` could be replaced with `EXTERNAL_DNS_DRY_RUN=1`.
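
For example, the following two invocations behave identically (the provider and source values are illustrative):

```shell
external-dns --provider=google --source=service --dry-run
EXTERNAL_DNS_DRY_RUN=1 external-dns --provider=google --source=service
```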
## New providers
No new provider will be added to ExternalDNS _in-tree_.
No new provider will be added to ExternalDNS *in-tree*.
ExternalDNS has introduced a webhook system, which can be used to add a new provider.
See PR #3063 for all the discussions about it.
@@ -101,9 +110,11 @@ Known providers using webhooks:
## Status of in-tree providers
ExternalDNS supports multiple DNS providers which have been implemented by the [ExternalDNS contributors](https://github.com/kubernetes-sigs/external-dns/graphs/contributors). Maintaining all of those in a central repository is a challenge, which introduces lots of toil and potential risks.
ExternalDNS supports multiple DNS providers which have been implemented by the [ExternalDNS contributors](https://github.com/kubernetes-sigs/external-dns/graphs/contributors).
Maintaining all of those in a central repository is a challenge, which introduces lots of toil and potential risks.
This mean that `external-dns` has begun the process to move providers out of tree. See #4347 for more details. Those who are interested can create a webhook provider based on an _in-tree_ provider and after submit a PR to reference it here.
This means that `external-dns` has begun the process of moving providers out of tree. See #4347 for more details.
Those who are interested can create a webhook provider based on an *in-tree* provider and then submit a PR to reference it here.
We define the following stability levels for providers:
@@ -153,59 +164,59 @@ A [breaking change](https://github.com/kubernetes-sigs/external-dns/pull/2281) w
| Kubernetes >= 1.19 and <= 1.21 | :white_check_mark: | :white_check_mark: |
| Kubernetes >= 1.22 | :x: | :white_check_mark: |
## Running ExternalDNS:
## Running ExternalDNS
There are two ways of running ExternalDNS:
* Deploying to a Cluster
* Running Locally
- Deploying to a Cluster
- Running Locally
### Deploying to a Cluster
The following tutorials are provided:
* [Akamai Edge DNS](docs/tutorials/akamai-edgedns.md)
* [Alibaba Cloud](docs/tutorials/alibabacloud.md)
* AWS
* [AWS Load Balancer Controller](docs/tutorials/aws-load-balancer-controller.md)
* [Route53](docs/tutorials/aws.md)
* [Same domain for public and private Route53 zones](docs/tutorials/aws-public-private-route53.md)
* [Cloud Map](docs/tutorials/aws-sd.md)
* [Kube Ingress AWS Controller](docs/tutorials/kube-ingress-aws.md)
* [Azure DNS](docs/tutorials/azure.md)
* [Azure Private DNS](docs/tutorials/azure-private-dns.md)
* [Civo](docs/tutorials/civo.md)
* [Cloudflare](docs/tutorials/cloudflare.md)
* [CoreDNS](docs/tutorials/coredns.md)
* [DigitalOcean](docs/tutorials/digitalocean.md)
* [DNSimple](docs/tutorials/dnsimple.md)
* [Exoscale](docs/tutorials/exoscale.md)
* [ExternalName Services](docs/tutorials/externalname.md)
* Google Kubernetes Engine
* [Using Google's Default Ingress Controller](docs/tutorials/gke.md)
* [Using the Nginx Ingress Controller](docs/tutorials/gke-nginx.md)
* [Headless Services](docs/tutorials/hostport.md)
* [Istio Gateway Source](docs/sources/istio.md)
* [Linode](docs/tutorials/linode.md)
* [NS1](docs/tutorials/ns1.md)
* [NS Record Creation with CRD Source](docs/sources/ns-record.md)
* [MX Record Creation with CRD Source](docs/sources/mx-record.md)
* [TXT Record Creation with CRD Source](docs/sources/txt-record.md)
* [OpenStack Designate](docs/tutorials/designate.md)
* [Oracle Cloud Infrastructure (OCI) DNS](docs/tutorials/oracle.md)
* [PowerDNS](docs/tutorials/pdns.md)
* [RFC2136](docs/tutorials/rfc2136.md)
* [TransIP](docs/tutorials/transip.md)
* [OVH](docs/tutorials/ovh.md)
* [Scaleway](docs/tutorials/scaleway.md)
* [UltraDNS](docs/tutorials/ultradns.md)
* [GoDaddy](docs/tutorials/godaddy.md)
* [Gandi](docs/tutorials/gandi.md)
* [IBM Cloud](docs/tutorials/ibmcloud.md)
* [Nodes as source](docs/sources/nodes.md)
* [TencentCloud](docs/tutorials/tencentcloud.md)
* [Plural](docs/tutorials/plural.md)
* [Pi-hole](docs/tutorials/pihole.md)
- [Akamai Edge DNS](docs/tutorials/akamai-edgedns.md)
- [Alibaba Cloud](docs/tutorials/alibabacloud.md)
- AWS
- [AWS Load Balancer Controller](docs/tutorials/aws-load-balancer-controller.md)
- [Route53](docs/tutorials/aws.md)
- [Same domain for public and private Route53 zones](docs/tutorials/aws-public-private-route53.md)
- [Cloud Map](docs/tutorials/aws-sd.md)
- [Kube Ingress AWS Controller](docs/tutorials/kube-ingress-aws.md)
- [Azure DNS](docs/tutorials/azure.md)
- [Azure Private DNS](docs/tutorials/azure-private-dns.md)
- [Civo](docs/tutorials/civo.md)
- [Cloudflare](docs/tutorials/cloudflare.md)
- [CoreDNS](docs/tutorials/coredns.md)
- [DigitalOcean](docs/tutorials/digitalocean.md)
- [DNSimple](docs/tutorials/dnsimple.md)
- [Exoscale](docs/tutorials/exoscale.md)
- [ExternalName Services](docs/tutorials/externalname.md)
- Google Kubernetes Engine
- [Using Google's Default Ingress Controller](docs/tutorials/gke.md)
- [Using the Nginx Ingress Controller](docs/tutorials/gke-nginx.md)
- [Headless Services](docs/tutorials/hostport.md)
- [Istio Gateway Source](docs/sources/istio.md)
- [Linode](docs/tutorials/linode.md)
- [NS1](docs/tutorials/ns1.md)
- [NS Record Creation with CRD Source](docs/sources/ns-record.md)
- [MX Record Creation with CRD Source](docs/sources/mx-record.md)
- [TXT Record Creation with CRD Source](docs/sources/txt-record.md)
- [OpenStack Designate](docs/tutorials/designate.md)
- [Oracle Cloud Infrastructure (OCI) DNS](docs/tutorials/oracle.md)
- [PowerDNS](docs/tutorials/pdns.md)
- [RFC2136](docs/tutorials/rfc2136.md)
- [TransIP](docs/tutorials/transip.md)
- [OVH](docs/tutorials/ovh.md)
- [Scaleway](docs/tutorials/scaleway.md)
- [UltraDNS](docs/tutorials/ultradns.md)
- [GoDaddy](docs/tutorials/godaddy.md)
- [Gandi](docs/tutorials/gandi.md)
- [IBM Cloud](docs/tutorials/ibmcloud.md)
- [Nodes as source](docs/sources/nodes.md)
- [TencentCloud](docs/tutorials/tencentcloud.md)
- [Plural](docs/tutorials/plural.md)
- [Pi-hole](docs/tutorials/pihole.md)
### Running Locally
@@ -249,7 +260,8 @@ Locally run a single sync loop of ExternalDNS.
external-dns --txt-owner-id my-cluster-id --provider google --google-project example-project --source service --once --dry-run
```
This should output the DNS records it will modify to match the managed zone with the DNS records you desire. It also assumes you are running in the `default` namespace. See the [FAQ](docs/faq.md) for more information regarding namespaces.
This should output the DNS records it will modify to match the managed zone with the DNS records you desire.
It also assumes you are running in the `default` namespace. See the [FAQ](docs/faq.md) for more information regarding namespaces.
Note: TXT records will have the `my-cluster-id` value embedded. Those are used to ensure that ExternalDNS is aware of the records it manages.
@@ -267,10 +279,11 @@ dig +short nginx.example.org.
```
Now you can experiment and watch how ExternalDNS makes sure that your DNS records are configured as desired. Here are a couple of things you can try out:
* Change the desired hostname by modifying the Service's annotation.
* Recreate the Service and see that the DNS record will be updated to point to the new load balancer IP.
* Add another Service to create more DNS records.
* Remove Services to clean up your managed zone.
- Change the desired hostname by modifying the Service's annotation.
- Recreate the Service and see that the DNS record will be updated to point to the new load balancer IP.
- Add another Service to create more DNS records.
- Remove Services to clean up your managed zone.
The **tutorials** section contains examples, including Ingress resources, and shows you how to set up ExternalDNS in different environments such as other cloud providers and alternative Ingress controllers.
@@ -278,7 +291,8 @@ The **tutorials** section contains examples, including Ingress resources, and sh
If using a txt registry and attempting to use a CNAME, the `--txt-prefix` must be set to avoid conflicts. Changing `--txt-prefix` will result in lost ownership over previously created records.
If `externalIPs` list is defined for a `LoadBalancer` service, this list will be used instead of an assigned load balancer IP to create a DNS record. It's useful when you run bare metal Kubernetes clusters behind NAT or in a similar setup, where a load balancer IP differs from a public IP (e.g. with [MetalLB](https://metallb.universe.tf)).
If an `externalIPs` list is defined for a `LoadBalancer` service, this list will be used instead of an assigned load balancer IP to create a DNS record.
It's useful when you run bare-metal Kubernetes clusters behind NAT or in a similar setup, where the load balancer IP differs from the public IP (e.g. with [MetalLB](https://metallb.universe.tf)).
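
A minimal sketch of such a Service (the address is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.example.org
spec:
  type: LoadBalancer
  externalIPs:
    - 203.0.113.10  # public IP published in DNS instead of the LB address
  ports:
    - port: 80
  selector:
    app: nginx
```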
## Contributing
@@ -305,12 +319,12 @@ For an overview on how to write new Sources and Providers check out [Sources and
ExternalDNS is an effort to unify the following similar projects in order to bring the Kubernetes community an easy and predictable way of managing DNS records across cloud providers based on their Kubernetes resources:
* Kops' [DNS Controller](https://github.com/kubernetes/kops/tree/HEAD/dns-controller)
* Zalando's [Mate](https://github.com/linki/mate)
* Molecule Software's [route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes)
- Kops' [DNS Controller](https://github.com/kubernetes/kops/tree/HEAD/dns-controller)
- Zalando's [Mate](https://github.com/linki/mate)
- Molecule Software's [route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes)
### User Demo How-To Blogs and Examples
* A full demo on GKE Kubernetes. See [How-to Kubernetes with DNS management (ssl-manager pre-req)](https://medium.com/@jpantjsoha/how-to-kubernetes-with-dns-management-for-gitops-31239ea75d8d)
* Run external-dns on GKE with workload identity. See [Kubernetes, ingress-nginx, cert-manager & external-dns](https://blog.atomist.com/kubernetes-ingress-nginx-cert-manager-external-dns/)
* [ExternalDNS integration with Azure DNS using workload identity](https://cloudchronicles.blog/blog/ExternalDNS-integration-with-Azure-DNS-using-workload-identity/)
- A full demo on GKE Kubernetes. See [How-to Kubernetes with DNS management (ssl-manager pre-req)](https://medium.com/@jpantjsoha/how-to-kubernetes-with-dns-management-for-gitops-31239ea75d8d)
- Run external-dns on GKE with workload identity. See [Kubernetes, ingress-nginx, cert-manager & external-dns](https://blog.atomist.com/kubernetes-ingress-nginx-cert-manager-external-dns/)
- [ExternalDNS integration with Azure DNS using workload identity](https://cloudchronicles.blog/blog/ExternalDNS-integration-with-Azure-DNS-using-workload-identity/)

View File

@@ -32,7 +32,8 @@ helm upgrade --install external-dns external-dns/external-dns --version 1.15.1
## Providers
Configuring the _ExternalDNS_ provider should be done via the `provider.name` value with provider specific configuration being set via the `provider.<name>.<key>` values, where supported, and the `extraArgs` value. For legacy support `provider` can be set to the name of the provider with all additional configuration being set via the `extraArgs` value.
Configuring the _ExternalDNS_ provider should be done via the `provider.name` value, with provider-specific configuration being set via the `provider.<name>.<key>` values, where supported, and the `extraArgs` value.
For legacy support, `provider` can be set to the name of the provider, with all additional configuration being set via the `extraArgs` value.
See [documentation](https://kubernetes-sigs.github.io/external-dns/#new-providers) for more info on available providers and tutorials.
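
For example, a values.yaml sketch (the `aws` provider and the extra flag are illustrative; supported keys vary by chart version):

```yaml
provider:
  name: aws
extraArgs:
  - --txt-prefix=external-dns-
```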
### Providers with Specific Configuration Support
@@ -45,13 +46,13 @@ See [documentation](https://kubernetes-sigs.github.io/external-dns/#new-provider
To set up a specific provider using the Helm chart, see the following links:
- [AWS](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md#using-helm-with-oidc)
- [akamai-edgedns](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/akamai-edgedns.md#using-helm)
- [cloudflare](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/cloudflare.md#using-helm)
- [digitalocean](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/digitalocean.md#using-helm)
- [godaddy](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/godaddy.md#using-helm)
- [ns1](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/ns1.md#using-helm)
- [plural](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/plural.md#using-helm)
* [AWS](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md#using-helm-with-oidc)
* [akamai-edgedns](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/akamai-edgedns.md#using-helm)
* [cloudflare](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/cloudflare.md#using-helm)
* [digitalocean](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/digitalocean.md#using-helm)
* [godaddy](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/godaddy.md#using-helm)
* [ns1](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/ns1.md#using-helm)
* [plural](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/plural.md#using-helm)
## Namespaced Scoped Installation

View File

@@ -27,7 +27,8 @@ helm upgrade --install {{ template "chart.name" . }} external-dns/{{ template "c
## Providers
Configuring the _ExternalDNS_ provider should be done via the `provider.name` value with provider specific configuration being set via the `provider.<name>.<key>` values, where supported, and the `extraArgs` value. For legacy support `provider` can be set to the name of the provider with all additional configuration being set via the `extraArgs` value.
Configuring the _ExternalDNS_ provider should be done via the `provider.name` value, with provider-specific configuration being set via the `provider.<name>.<key>` values, where supported, and the `extraArgs` value.
For legacy support, `provider` can be set to the name of the provider, with all additional configuration being set via the `extraArgs` value.
See [documentation](https://kubernetes-sigs.github.io/external-dns/#new-providers) for more info on available providers and tutorials.
### Providers with Specific Configuration Support
@@ -40,13 +41,13 @@ See [documentation](https://kubernetes-sigs.github.io/external-dns/#new-provider
To set up a specific provider using the Helm chart, see the following links:
- [AWS](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md#using-helm-with-oidc)
- [akamai-edgedns](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/akamai-edgedns.md#using-helm)
- [cloudflare](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/cloudflare.md#using-helm)
- [digitalocean](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/digitalocean.md#using-helm)
- [godaddy](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/godaddy.md#using-helm)
- [ns1](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/ns1.md#using-helm)
- [plural](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/plural.md#using-helm)
* [AWS](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md#using-helm-with-oidc)
* [akamai-edgedns](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/akamai-edgedns.md#using-helm)
* [cloudflare](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/cloudflare.md#using-helm)
* [digitalocean](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/digitalocean.md#using-helm)
* [godaddy](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/godaddy.md#using-helm)
* [ns1](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/ns1.md#using-helm)
* [plural](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/plural.md#using-helm)
## Namespaced Scoped Installation

View File

@@ -61,7 +61,7 @@ rules:
    verbs: ["get","watch","list"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get","watch","list"]
{{- end }}
{{- if has "gateway-httproute" .Values.sources }}
  - apiGroups: ["gateway.networking.k8s.io"]

View File

@@ -2,16 +2,16 @@
<!-- TOC depthFrom:1 depthTo:6 withLinks:1 updateOnSave:1 orderedList:0 -->
- [Move ExternalDNS out of Kubernetes incubator](#move-externaldns-out-of-kubernetes-sigs)
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Proposal](#proposal)
- [Details](#details)
- [Graduation Criteria](#graduation-criteria)
- [Maintainers](#maintainers)
- [Release process, artifacts](#release-process-artifacts)
- [Risks and Mitigations](#risks-and-mitigations)
- [Move ExternalDNS out of Kubernetes incubator](#move-externaldns-out-of-kubernetes-incubator)
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Proposal](#proposal)
- [Details](#details)
- [Graduation Criteria](#graduation-criteria)
- [Maintainers](#maintainers)
- [Release process, artifacts](#release-process-artifacts)
- [Risks and Mitigations](#risks-and-mitigations)
<!-- /TOC -->
@@ -27,11 +27,11 @@ ExternalDNS started as a community project with the goal of unifying several exi
When the project was proposed (see the [original discussion](https://github.com/kubernetes/kubernetes/issues/28525#issuecomment-270766227)), there were at least 3 existing implementations of the same functionality:
* Mate - [https://github.com/linki/mate](https://github.com/linki/mate)
- Mate - [https://github.com/linki/mate](https://github.com/linki/mate)
* DNS-controller from kops - [https://github.com/kubernetes/kops/tree/HEAD/dns-controller](https://github.com/kubernetes/kops/tree/HEAD/dns-controller)
- DNS-controller from kops - [https://github.com/kubernetes/kops/tree/HEAD/dns-controller](https://github.com/kubernetes/kops/tree/HEAD/dns-controller)
* Route53-kubernetes - [https://github.com/wearemolecule/route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes)
- Route53-kubernetes - [https://github.com/wearemolecule/route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes)
ExternalDNS' goal from the beginning was to provide an officially supported solution to those problems.
@@ -51,36 +51,35 @@ This KEP is about moving External DNS out of the Kubernetes incubator. This sect
External DNS...
* Is the de facto solution to create DNS records for several Kubernetes resources.
- Is the de facto solution to create DNS records for several Kubernetes resources.
* Is a vital component to achieve an experience close to a PaaS that many Kubernetes users try to replicate on top of Kubernetes, by allowing to automatically create DNS records for web applications.
- Is a vital component to achieve an experience close to a PaaS that many Kubernetes users try to replicate on top of Kubernetes, by allowing DNS records for web applications to be created automatically.
* Supports already 18 different DNS providers including all major public clouds (AWS, Azure, GCP).
- Already supports 18 different DNS providers including all major public clouds (AWS, Azure, GCP).
Given that the kubernetes-sigs organization will eventually be shut down, the possible alternatives to moving to be an official Kubernetes project are the following:
* Shut down the project
- Shut down the project
* Move the project elsewhere
- Move the project elsewhere
We believe that those alternatives would result in a worse outcome for the community compared to moving the project to any of the other official Kubernetes organizations.
In fact, shutting down ExternalDNS can cause:
* The community to rebuild the same solution as already happened multiple times before the project was launched. Currently ExternalDNS is easy to be found, referenced in many articles/tutorials and for that reason not exposed to that risk.
- The community to rebuild the same solution, as already happened multiple times before the project was launched. Currently ExternalDNS is easy to find, referenced in many articles/tutorials, and for that reason not exposed to that risk.
* Existing users of the projects to be left without a future proof working solution.
- Existing users of the projects to be left without a future proof working solution.
Moving the ExternalDNS project outside of Kubernetes projects would cause:
* Problems (re-)establishing user trust which could eventually lead to fragmentation and duplication.
- Problems (re-)establishing user trust which could eventually lead to fragmentation and duplication.
* It would be hard to establish in which organization the project should be moved to. The most natural would be Zalando's organization, being the company that put most of the work on the project. While it is possible to assume Zalando's commitment to open-source, that would be a strategic mistake for the project community and for the Kubernetes ecosystem due to the obvious lack of neutrality.
- It would be hard to establish in which organization the project should be moved to.
* Lack of resources to test, lack of issue management via automation.
- Lack of resources to test, lack of issue management via automation.
For those reasons, we propose to move ExternalDNS out of the Kubernetes incubator, to live either under the kubernetes or kubernetes-sigs organization to keep being a vital part of the Kubernetes ecosystem.
## Details
### Graduation Criteria
@@ -95,15 +94,15 @@ ExternalDNS can't be considered to be "done": while the core functionality has bee
Those are identified in the project roadmap, which is roughly made of the following items:
* Decoupling of the providers
- Decoupling of the providers
* Implementation proposal
- Implementation proposal
* Development
- Development
* Bug fixing and performance optimization (i.e. rate limiting on cloud providers)
- Bug fixing and performance optimization (i.e. rate limiting on cloud providers)
* Integration testing suite, to be implemented at least for the "stable" providers
- Integration testing suite, to be implemented at least for the "stable" providers
For those reasons, we consider ExternalDNS to be in Beta state as a project. We believe that once the items mentioned above are implemented, the project can reach a declared GA status.
@@ -113,17 +112,18 @@ There are a number of other factors that need to be covered to fully describe th
The project has the following maintainers:
* hjacobs
- hjacobs
* Raffo
- Raffo
* linki
- linki
* njuettner
- njuettner
The list of maintainers shrunk over time as people moved out of the original development team (all the team members were working at Zalando at the time of project creation) and the project required less work.
The high number of providers contributed to the project pose a maintainability challenge: it is hard to bring the providers forward in terms of functionalities or even test them. The maintainers believe that the plan to transform the current Provider interface from a Go interface to an API will allow for enough decoupling and to hand over the maintenance of those plugins to the contributors themselves, see the risk and mitigations section for further details.
The high number of providers contributed to the project poses a maintainability challenge: it is hard to bring the providers forward in terms of functionality or even test them.
The maintainers believe that the plan to transform the current Provider interface from a Go interface to an API will allow for enough decoupling and to hand over the maintenance of those plugins to the contributors themselves; see the risks and mitigations section for further details.
### Release process, artifacts
@@ -141,16 +141,18 @@ ExternalDNS does not follow a specific release cycle. Releases are made often wh
The following are risks that were identified:
* Low number of maintainers: we are currently facing issues keeping up with the number of pull requests and issues giving the low number of maintainers. The list of maintainers already shrunk from 8 maintainers to 4.
- Low number of maintainers: we are currently facing issues keeping up with the number of pull requests and issues given the low number of maintainers. The list of maintainers already shrunk from 8 maintainers to 4.
* Issues maintaining community contributed providers: we often lack access to external providers (i.e. InfoBlox, etc.) and this means that we cannot verify the implementations and/or run regression tests that go beyond unit testing.
- Issues maintaining community contributed providers: we often lack access to external providers (e.g. InfoBlox) and this means that we cannot verify the implementations and/or run regression tests that go beyond unit testing.
* Somewhat low quality of releases due to lack of integration testing.
- Somewhat low quality of releases due to lack of integration testing.
We think that the following actions will constitute appropriate mitigations:
* Decoupling the providers via an API will allow us to resolve the problem of the providers. Being the project already more than 2 years old and given that there are 18 providers implemented, we possess enough information to define an API that we can be stable in a short timeframe. Once this is stable, the problem of testing the providers can be deferred to be a provider's responsibility. This will also reduce the scope of External DNS core code, which means that there will be no need for a further increase of the maintaining team.
- Decoupling the providers via an API will allow us to resolve the problem of the providers. With the project already more than 2 years old and 18 providers implemented, we possess enough information to define an API that can be stable in a short timeframe.
- Once this is stable, the problem of testing the providers can be deferred to be a provider's responsibility. This will also reduce the scope of External DNS core code, which means that there will be no need for a further increase of the maintaining team.
* We added integration testing for the main cloud providers to the roadmap for the 1.0 release to make sure that we cover the mostly used ones. We believe that this item should be tackled independently from the decoupling of providers as it would be capable of generating value independently from the result of the decoupling efforts.
- We added integration testing for the main cloud providers to the roadmap for the 1.0 release to make sure that we cover the mostly used ones.
- We believe that this item should be tackled independently from the decoupling of providers as it would be capable of generating value independently from the result of the decoupling efforts.
* With the move to the Kubernetes incubation, we hope that we will be able to access the testing resources of the Kubernetes project. In this way, we hope to decouple the project from the dependency on Zalando's internal CI tool. This will help open up the possibility to increase the visibility on the project from external contributors, which currently would be blocked by the lack of access to the software used for the whole release pipeline.
- With the move to the Kubernetes incubation, we hope that we will be able to access the testing resources of the Kubernetes project.

View File

@@ -59,7 +59,7 @@ Otherwise, use the `IP` of each of the `Service`'s `Endpoints`'s `Addresses`.
## external-dns.alpha.kubernetes.io/hostname
Specifies the domain for the resource's DNS records.
Multiple hostnames can be specified through a comma-separated list, e.g.
`svc.mydomain1.com,svc.mydomain2.com`.
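
For example (hypothetical domains):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc
  annotations:
    external-dns.alpha.kubernetes.io/hostname: svc.mydomain1.com,svc.mydomain2.com
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: svc
```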

View File

@@ -2,6 +2,8 @@
## Chart Changes
When contributing chart changes please follow the same process as when contributing other content but also please **DON'T** modify _Chart.yaml_ in the PR as this would result in a chart release when merged and will mean that your PR will need modifying before it can be accepted. The chart version will be updated as part of the PR to release the chart.
When contributing chart changes please follow the same process as when contributing other content, but please **DON'T** modify _Chart.yaml_ in the PR, as this would result in a chart release when merged and would mean that your PR needs modifying before it can be accepted.
The chart version will be updated as part of the PR to release the chart.
Please **DO** add your changes to the _CHANGELOG.md_ file in the chart directory under the `## [UNRELEASED]` section; if there isn't an uncommented `## [UNRELEASED]` section, please copy the commented-out template and use that.

View File

@@ -1,26 +1,29 @@
# Design
ExternalDNS's sources of DNS records live in package [source](https://github.com/kubernetes-sigs/external-dns/tree/master/source). They implement the `Source` interface that has a single method `Endpoints` which returns the represented source's objects converted to `Endpoints`. Endpoints are just a tuple of DNS name and target where target can be an IP or another hostname.
ExternalDNS's sources of DNS records live in package [source](https://github.com/kubernetes-sigs/external-dns/tree/master/source).
They implement the `Source` interface that has a single method `Endpoints` which returns the represented source's objects converted to `Endpoints`. Endpoints are just a tuple of DNS name and target where target can be an IP or another hostname.
For example, the `ServiceSource` returns all Services converted to `Endpoints` where the hostname is the value of the `external-dns.alpha.kubernetes.io/hostname` annotation and the target is the IP of the load balancer or where the hostname is the value of the `external-dns.alpha.kubernetes.io/internal-hostname` annotation and the target is the IP of the service ClusterIP.
For example, the `ServiceSource` returns all Services converted to `Endpoints`, where the hostname is the value of the `external-dns.alpha.kubernetes.io/hostname` annotation and the target is the IP of the load balancer, or where the hostname is the value of the `external-dns.alpha.kubernetes.io/internal-hostname` annotation and the target is the service's ClusterIP.
This list of endpoints is passed to the [Plan](https://github.com/kubernetes-sigs/external-dns/tree/master/plan) which determines the difference between the current DNS records and the desired list of `Endpoints`.
Once the difference has been figured out the list of intended changes is passed to a `Registry` which live in the [registry](https://github.com/kubernetes-sigs/external-dns/tree/master/registry) package. The registry is a wrapper and access point to DNS provider. Registry implements the ownership concept by marking owned records and filtering out records not owned by ExternalDNS before passing them to DNS provider.
Once the difference has been figured out, the list of intended changes is passed to a `Registry`, which lives in the [registry](https://github.com/kubernetes-sigs/external-dns/tree/master/registry) package.
The registry is a wrapper and access point to the DNS provider. The registry implements the ownership concept by marking owned records and filtering out records not owned by ExternalDNS before passing them to the DNS provider.
The [provider](https://github.com/kubernetes-sigs/external-dns/tree/master/provider) is the adapter to the DNS provider, e.g. Google Cloud DNS. It implements two methods: `ApplyChanges` to apply a set of changes filtered by `Registry` and `Records` to retrieve the current list of records from the DNS provider.
The [provider](https://github.com/kubernetes-sigs/external-dns/tree/master/provider) is the adapter to the DNS provider, e.g. Google Cloud DNS.
It implements two methods: `ApplyChanges` to apply a set of changes filtered by `Registry` and `Records` to retrieve the current list of records from the DNS provider.
The orchestration between the different components is controlled by the [controller](https://github.com/kubernetes-sigs/external-dns/tree/master/controller).
You can pick which `Source` and `Provider` to use at runtime via the `--source` and `--provider` flags, respectively.
# Adding a DNS Provider
## Adding a DNS Provider
A typical way to start on, e.g., a CoreDNS provider would be to add a `coredns.go` to the providers package and implement the interface methods. Then you would have to register your provider under a name in `main.go`, e.g. `coredns`, and would be able to trigger its functions via setting `--provider=coredns`.
Note how your provider doesn't need to know anything about where the DNS records come from, nor does it have to figure out the difference between the current and the desired state; it merely executes the actions calculated by the plan.
# Running GitHub Actions locally
## Running GitHub Actions locally
You can also extend the CI workflow which is currently implemented as GitHub Action within the [workflow](https://github.com/kubernetes-sigs/external-dns/tree/HEAD/.github/workflows) folder.
In order to test your changes before committing you can leverage [act](https://github.com/nektos/act) to run the GitHub Action locally.

View File

@@ -44,7 +44,8 @@ If added any flags, re-generate flags documentation
make generate-flags-documentation
```
We require all changes to be covered by acceptance tests and/or unit tests, depending on the situation. In the context of the `external-dns`, acceptance tests are tests of interactions with providers, such as creating, reading information about, and destroying DNS resources. In contrast, unit tests test functionality wholly within the codebase itself, such as function tests.
We require all changes to be covered by acceptance tests and/or unit tests, depending on the situation.
In the context of `external-dns`, acceptance tests are tests of interactions with providers, such as creating, reading information about, and destroying DNS resources. In contrast, unit tests test functionality wholly within the codebase itself, such as function tests.
### Continuous Integration
@@ -74,8 +75,8 @@ We use [Minikube](https://minikube.sigs.k8s.io/docs/start/?arch=%2Fmacos%2Fx86-6
- [Create local cluster](#create-a-local-cluster)
- [Build and load local images](#building-local-images)
- [Deploy with Helm](#deploy-with-helm)
- [Deploy with kubernetes manifests]()
- Deploy with Helm
- Deploy with kubernetes manifests
## Create a local cluster
@@ -177,7 +178,6 @@ Refer to [pushing images](https://minikube.sigs.k8s.io/docs/handbook/pushing/#4-
## Building image and push to a registry
Build container image and push to a specific registry
```shell
@@ -204,6 +204,7 @@ Deploy manifests to a cluster with required values
```
Modify chart or values and validate the diff
```sh
helm template external-dns charts/external-dns --output-dir _scratch
kubectl diff -f _scratch/external-dns --recursive=true --show-managed-fields=false
@@ -229,4 +230,4 @@ Install required dependencies. In order not to break system packages, we are
mkdocs serve
$$ ...
$$ Serving on http://127.0.0.1:8000/
```

View File

@@ -2,9 +2,10 @@
ExternalDNS supports swapping out endpoint **sources** and DNS **providers** and both sides are pluggable. There currently exist multiple sources for different provider implementations.
### Sources
## Sources
Sources are an abstraction over any kind of source of desired Endpoints, e.g.:
* a list of Service objects from Kubernetes
* a random list for testing purposes
* an aggregated list of multiple nested sources
@@ -13,7 +14,7 @@ The `Source` interface has a single method called `Endpoints` that should return
```go
type Source interface {
    Endpoints() ([]*endpoint.Endpoint, error)
}
```
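
As a concrete illustration, a hypothetical source that always returns one fixed endpoint (in the spirit of the testing sources below) could look like this; the names and values are illustrative:

```go
package source

import "sigs.k8s.io/external-dns/endpoint"

// fixedSource is a hypothetical Source returning a single static endpoint.
type fixedSource struct{}

// Endpoints satisfies the Source interface shown above.
func (s *fixedSource) Endpoints() ([]*endpoint.Endpoint, error) {
	return []*endpoint.Endpoint{
		endpoint.NewEndpoint("app.example.org", endpoint.RecordTypeA, "203.0.113.10"),
	}, nil
}
```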
@@ -28,23 +29,29 @@ All sources live in package `source`.
* `CRDSource`: returns a list of Endpoint objects sourced from the spec of CRD objects. For more details refer to [CRD source](crd-source.md) documentation.
* `EmptySource`: returns an empty list of Endpoint objects for the purpose of testing and cleaning out entries.
### Providers
## Providers
Providers are an abstraction over any kind of sink for desired Endpoints, e.g.:
* storing them in Google Cloud DNS
* printing them to stdout for testing purposes
* fanning out to multiple nested providers
The `Provider` interface has two methods: `Records` and `ApplyChanges`. `Records` should return all currently existing DNS records converted to Endpoint objects as a flat list. Upon receiving a change set (via an object of `plan.Changes`), `ApplyChanges` should translate these to the provider specific actions in order to persist them in the provider's storage.
The `Provider` interface has two methods: `Records` and `ApplyChanges`.
`Records` should return all currently existing DNS records converted to Endpoint objects as a flat list.
Upon receiving a change set (via an object of `plan.Changes`), `ApplyChanges` should translate these to the provider specific actions in order to persist them in the provider's storage.
```go
type Provider interface {
    Records() ([]*endpoint.Endpoint, error)
    ApplyChanges(changes *plan.Changes) error
}
```
The interface tries to be generic and assumes a flat list of records for both functions. However, many providers scope records into zones. Therefore, the provider implementation has to do some extra work to return that flat list. For instance, the AWS provider fetches the list of all hosted zones before it can return or apply the list of records. If the provider has no concept of zones or if it makes sense to cache the list of hosted zones it is happily allowed to do so. Furthermore, the provider should respect the `--domain-filter` flag to limit the affected records by a domain suffix. For instance, the AWS provider filters out all hosted zones that doesn't match that domain filter.
The interface tries to be generic and assumes a flat list of records for both functions. However, many providers scope records into zones.
Therefore, the provider implementation has to do some extra work to return that flat list. For instance, the AWS provider fetches the list of all hosted zones before it can return or apply the list of records.
If the provider has no concept of zones or if it makes sense to cache the list of hosted zones it is happily allowed to do so.
Furthermore, the provider should respect the `--domain-filter` flag to limit the affected records by a domain suffix. For instance, the AWS provider filters out all hosted zones that don't match that domain filter.
All providers live in package `provider`.
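
A minimal sketch of a provider satisfying the interface above, ignoring zones entirely (hypothetical, loosely modeled on the InMemoryProvider below):

```go
package provider

import (
	"sigs.k8s.io/external-dns/endpoint"
	"sigs.k8s.io/external-dns/plan"
)

// memoryProvider is a hypothetical provider keeping records in memory.
type memoryProvider struct {
	records []*endpoint.Endpoint
}

// Records returns the flat list of all records currently held.
func (p *memoryProvider) Records() ([]*endpoint.Endpoint, error) {
	return p.records, nil
}

// ApplyChanges persists a change set; a real provider would also translate
// changes.UpdateOld/UpdateNew and changes.Delete into API calls.
func (p *memoryProvider) ApplyChanges(changes *plan.Changes) error {
	p.records = append(p.records, changes.Create...)
	return nil
}
```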
@@ -53,6 +60,8 @@ All providers live in package `provider`.
* `AzureProvider`: returns and creates DNS records in Azure DNS
* `InMemoryProvider`: Keeps a list of records in local memory
### Usage
## Usage
You can choose any combination of sources and providers on the command line. Given a cluster on AWS you would most likely want to use the Service and Ingress Source in combination with the AWS provider. `Service` + `InMemory` is useful for testing your service collecting functionality, whereas `Fake` + `Google` is useful for testing that the Google provider behaves correctly, etc.
You can choose any combination of sources and providers on the command line.
Given a cluster on AWS you would most likely want to use the Service and Ingress Source in combination with the AWS provider.
`Service` + `InMemory` is useful for testing your service collecting functionality, whereas `Fake` + `Google` is useful for testing that the Google provider behaves correctly, etc.

View File

@@ -2,13 +2,16 @@
This document defines the Deprecation Policy for External DNS.
Kubernetes is a dynamic system driven by APIs, which evolve with each new release. A crucial aspect of any API-driven system is having a well-defined deprecation policy. This policy informs users about APIs that are slated for removal or modification. Kubernetes follows this principle and periodically refines or upgrades its APIs or capabilities. Consequently, older features are marked as deprecated and eventually phased out. To avoid breaking existing users, we should follow a simple deprecation policy for behaviors that a slated to be removed.
Kubernetes is a dynamic system driven by APIs, which evolve with each new release. A crucial aspect of any API-driven system is having a well-defined deprecation policy.
This policy informs users about APIs that are slated for removal or modification. Kubernetes follows this principle and periodically refines or upgrades its APIs or capabilities.
Consequently, older features are marked as deprecated and eventually phased out. To avoid breaking existing users, we should follow a simple deprecation policy for behaviors that are slated to be removed.
Features and capabilities either evolve or need to be removed.
## Deprecation Policy
We follow the [Kubernetes Deprecation Policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/) and [API Versioning Scheme](https://kubernetes.io/docs/reference/using-api/#api-versioning): alpha, beta, GA. It is therefore important to be aware of deprecation announcements and know when API versions will be removed, to help minimize the effect.
We follow the [Kubernetes Deprecation Policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/) and [API Versioning Scheme](https://kubernetes.io/docs/reference/using-api/#api-versioning): alpha, beta, GA.
It is therefore important to be aware of deprecation announcements and know when API versions will be removed, to help minimize the effect.
### Scope
@@ -24,13 +27,13 @@ Everything not listed in scope is not subject to this deprecation policy and it
This includes, but isn't limited to:
- Any feature/specific behavior not in Scope.
- Source code imports
- Source code refactorings
- Helm Charts
- Release process
- Docker Images (including multi-arch builds)
- Image Signature (including provenance, providers, keys)
* Any feature/specific behavior not in Scope.
* Source code imports
* Source code refactorings
* Helm Charts
* Release process
* Docker Images (including multi-arch builds)
* Image Signature (including provenance, providers, keys)
## Including features and behaviors to the Deprecation Policy
@@ -38,7 +41,7 @@ Any `maintainer` or `contributor` may propose including a feature, component, or
The proposal must clearly outline the rationale for inclusion, the impact on users, stability, long-term maintenance plan, and day-to-day activities, if any.
The proposal must be formalized by submitting a `docs/proposal/EDP-XXX.md` document in a Pull Request. The pull request must be labeled with `kind/proposal`.
The proposal template location is [here](docs/proposal/design-template.md). The template is quite complete; one can remove any unnecessary or irrelevant section from a specific proposal.
@@ -66,7 +69,7 @@ Votes may be conducted asynchronously, with a reasonable deadline for responses
Upon approval, the proposing maintainer is responsible for implementing the changes required to mark the feature as deprecated. This includes:
* Updating the codebase with deprecation warnings where applicable.
- log.Warn("The XXX is on the path of ***DEPRECATION***. We recommend that you use YYY (link to docs)")
* log.Warn("The XXX is on the path of ***DEPRECATION***. We recommend that you use YYY (link to docs)")
* Documenting the deprecation in release notes and relevant documentation.
* Updating APIs, metrics, or behaviors per the Kubernetes Deprecation Policy if in scope.
* If the feature is entirely deprecated, archival of any associated repositories (external provider as example).
@@ -77,7 +80,7 @@ Deprecation must be introduced in the next release. The release must follow sema
* If the project is in the 0.x stage, a `minor` version `bump` is required.
* For projects 1.x and beyond, a major version bump is required for features that are completely removed.
- If it's a flag change/flip, the `minor` version `bump` is acceptable
* If it's a flag change/flip, the `minor` version `bump` is acceptable
### Full Deprecation and Removal

View File

@@ -1,8 +1,9 @@
# Frequently asked questions
### How is ExternalDNS useful to me?
## How is ExternalDNS useful to me?
You've probably created many deployments. Typically, you expose your deployment to the Internet by creating a Service with `type=LoadBalancer`. Depending on your environment, this usually assigns a random publicly available endpoint to your service that you can access from anywhere in the world. On Google Kubernetes Engine, this is a public IP address:
You've probably created many deployments. Typically, you expose your deployment to the Internet by creating a Service with `type=LoadBalancer`.
Depending on your environment, this usually assigns a random publicly available endpoint to your service that you can access from anywhere in the world. On Google Kubernetes Engine, this is a public IP address:
```console
$ kubectl get svc
@ -26,85 +27,101 @@ But there's nothing that actually makes clients resolve those hostnames to the I
ExternalDNS can solve this for you as well.
## Which DNS providers are supported?
Please check the [provider status table](https://github.com/kubernetes-sigs/external-dns#status-of-in-tree-providers) for the list of supported providers and their status.
As stated in the README, we are currently looking for stable maintainers for those providers, to ensure that bugfixes and new features will be available for all of them.
## Which Kubernetes objects are supported?
Services exposed via `type=LoadBalancer`, `type=ExternalName`, or `type=NodePort`, the hostnames defined in Ingress objects, as well as [headless hostPort](tutorials/hostport.md) services.
## How do I specify a DNS name for my Kubernetes objects?
There are three sources of information for ExternalDNS to decide on a DNS name. ExternalDNS will pick one in the order listed below (a minimal example follows the list):
1. For ingress objects ExternalDNS will create a DNS record based on the hosts specified for the ingress object, as well as the `external-dns.alpha.kubernetes.io/hostname` annotation.
- For services ExternalDNS will look for the annotation `external-dns.alpha.kubernetes.io/hostname` on the service and use the loadbalancer IP; it will also look for the annotation `external-dns.alpha.kubernetes.io/internal-hostname` on the service and use the service IP.
- For ingresses, you can optionally force ExternalDNS to create records based on _either_ the hosts specified or the `external-dns.alpha.kubernetes.io/hostname` annotation. This behavior is controlled by
setting the `external-dns.alpha.kubernetes.io/ingress-hostname-source` annotation on that ingress to either `defined-hosts-only` or `annotation-only`.
2. If compatibility mode is enabled (e.g. the `--compatibility={mate,molecule}` flag), ExternalDNS will parse annotations used by Zalando's Mate and wearemolecule/route53-kubernetes. Compatibility mode with the Kops DNS Controller is planned to be added in the future.
3. If `--fqdn-template` flag is specified, e.g. `--fqdn-template={{.Name}}.my-org.com`, ExternalDNS will use service/ingress specifications for the provided template to generate DNS name.
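For instance, a minimal Service sketch using the hostname annotation might look like the following; the name `nginx` and the domain `nginx.example.com` are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx  # placeholder
  annotations:
    # ExternalDNS creates a record for this name pointing at the load balancer
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```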
## Can I specify multiple global FQDN templates?
Yes, you can. Pass a comma-separated list to `--fqdn-template`. Be aware that this will double (triple, etc.) the number of DNS entries, depending on how many services, ingresses and so on you have, and will move you faster toward the API request limit of your DNS provider.
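As a sketch, assuming a Deployment-managed ExternalDNS, the container args might pass two templates like this (the domains are placeholders); every matching resource then gets a record under both zones:

```yaml
args:
  - --source=service
  - --source=ingress
  # Comma-separated templates: one DNS name per template for each resource
  - --fqdn-template={{.Name}}.internal.example.com,{{.Name}}.example.com
```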
## Which Service and Ingress controllers are supported?
Regarding Services, we'll support the OSI Layer 4 load balancers that Kubernetes creates on AWS and Google Kubernetes Engine, and possibly other clusters running on Google Compute Engine.
Regarding Ingress, we'll support:
- Google's Ingress Controller on GKE that integrates with their Layer 7 load balancers (GLBC)
- nginx-ingress-controller v0.9.x with a fronting Service
- Zalando's [AWS Ingress controller](https://github.com/zalando-incubator/kube-ingress-aws-controller), based on AWS ALBs and [Skipper](https://github.com/zalando/skipper)
- [Traefik](https://github.com/containous/traefik)
- version 1.7, when [`kubernetes.ingressEndpoint`](https://docs.traefik.io/v1.7/configuration/backends/kubernetes/#ingressendpoint) is configured (`kubernetes.ingressEndpoint.useDefaultPublishedService` in the [Helm chart](https://github.com/helm/charts/tree/HEAD/stable/traefik#configuration))
- versions \>=2.0, when [`providers.kubernetesIngress.ingressEndpoint`](https://doc.traefik.io/traefik/providers/kubernetes-ingress/#ingressendpoint) is configured (`providers.kubernetesIngress.publishedService.enabled` is set to `true` in the [new Helm chart](https://github.com/traefik/traefik-helm-chart))
## Are other Ingress Controllers supported?
For Ingress objects, ExternalDNS will attempt to discover the target hostname of the relevant Ingress Controller automatically.
If you are using an Ingress Controller that is not listed above you may have issues with ExternalDNS not discovering Endpoints and consequently not creating any DNS records.
As a workaround, it is possible to force create an Endpoint by manually specifying a target host/IP for the records to be created by setting the annotation `external-dns.alpha.kubernetes.io/target` in the Ingress object.
Another reason you may want to override the ingress hostname or IP address is if you have an external mechanism for handling failover across ingress endpoints.
Possible scenarios for this would include using [keepalived-vip](https://github.com/kubernetes/contrib/tree/HEAD/keepalived-vip) to manage failover faster than DNS TTLs might expire.
Note that if you set the target to a hostname, then a CNAME record will be created.
In this case, the hostname specified in the Ingress object's annotation must already exist.
(i.e. you have a Service resource for your Ingress Controller with the `external-dns.alpha.kubernetes.io/hostname` annotation set to the same value)
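A hedged sketch of such an Ingress follows; the hostnames are placeholders, and the target must point at a name that already resolves (e.g. your controller's Service):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app  # placeholder
  annotations:
    # Overrides target discovery; a hostname value results in a CNAME record
    external-dns.alpha.kubernetes.io/target: ingress.example.com
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```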
## What about other projects similar to ExternalDNS?
ExternalDNS is a joint effort to unify different projects accomplishing the same goals, namely:
- Kops' [DNS Controller](https://github.com/kubernetes/kops/tree/HEAD/dns-controller)
- Zalando's [Mate](https://github.com/linki/mate)
- Molecule Software's [route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes)
We strive to make the migration from these implementations a smooth experience. This means that, for some time, we'll support their annotation semantics in ExternalDNS and allow both implementations to run side-by-side. This enables you to migrate incrementally and slowly phase out the other implementation.
## How does it work with other implementations and legacy records?
ExternalDNS will allow you to opt into any Services and Ingresses that you want it to consider, by an annotation.
This way, it can co-exist with other implementations running in the same cluster if they also support this pattern.
However, we'll most likely declare ExternalDNS to be the default implementation.
This means that ExternalDNS will consider Services and Ingresses that don't specifically declare which controller they want to be processed by; this is similar to the `ingress.class` annotation on GKE.
## I'm afraid you will mess up my DNS records
Since v0.3, ExternalDNS can be configured to use an ownership registry.
When this option is enabled, ExternalDNS will keep track of which records it has control over, and will never modify any records over which it doesn't have control.
This is a fundamental requirement to operate ExternalDNS safely when there might be other actors creating DNS records in the same target space.
For now ExternalDNS uses TXT records to label owned records; other alternatives may come in future releases.
## Does anyone use ExternalDNS in production?
Yes, multiple companies are using ExternalDNS in production. Zalando, as an example, has been using it in production since its v0.3 release, mostly using the AWS provider.
## How can we start using ExternalDNS?
Check out the following descriptive tutorials on how to run ExternalDNS in [GKE](tutorials/gke.md) and [AWS](tutorials/aws.md) or any other supported provider.
## Why is ExternalDNS only adding a single IP address in Route 53 on AWS when using the `nginx-ingress-controller`?
By default the `nginx-ingress-controller` assigns a single IP address to an Ingress resource when it's created. ExternalDNS uses what's assigned to the Ingress resource, so it too will use this single IP address when adding the record in Route 53.
### How do I get it to use the FQDN of the ELB assigned to my `nginx-ingress-controller` Service instead?
In most AWS deployments, you'll instead want the Route 53 entry to be the FQDN of the ELB that is assigned to the `nginx-ingress-controller` Service.
To accomplish this, when you create the `nginx-ingress-controller` Deployment, you need to provide the `--publish-service` option to the `/nginx-ingress-controller` executable under `args`.
Once this is deployed new Ingress resources will get the ELB's FQDN and ExternalDNS will use the same when creating records in Route 53.
According to the `nginx-ingress-controller` [docs](https://kubernetes.github.io/ingress-nginx/) the value you need to provide to `--publish-service` is:
@ -112,7 +129,7 @@ According to the `nginx-ingress-controller` [docs](https://kubernetes.github.io/
For example if your `nginx-ingress-controller` Service's name is `nginx-ingress-controller-svc` and it's in the `default` namespace the start of your resource YAML might look like the following. Note the second to last line.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
@ -139,38 +156,41 @@ spec:
- --configmap={your-configmap}
```
## I have a Service/Ingress but it's ignored by ExternalDNS. Why?
ExternalDNS can be configured to only use Services or Ingresses as a source. In case Services or Ingresses seem to be ignored in your setup, consider checking how the flag `--source` was configured when deployed. For reference, see the issue https://github.com/kubernetes-sigs/external-dns/issues/267.
## I'm using an ELB with TXT registry but the CNAME record clashes with the TXT record. How to avoid this?
CNAMEs cannot co-exist with other records; therefore you can use the `--txt-prefix` flag, which creates the TXT record with a name following the pattern `prefix.<CNAME record>`. For reference, see the issue https://github.com/kubernetes-sigs/external-dns/issues/262.
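For instance, a minimal args sketch; the prefix value `txt-` is just an assumption, any string that fits your zone works:

```yaml
args:
  - --registry=txt
  # The ownership TXT record then gets a prefixed name (e.g. txt-www.example.com)
  # instead of clashing with the CNAME record
  - --txt-prefix=txt-
```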
## Can I force ExternalDNS to create CNAME records for ELB/ALB?
The default logic is: when a target looks like an ELB/ALB, ExternalDNS will create ALIAS records for it.
Under certain circumstances you may want to force ExternalDNS to create CNAME records instead. To do that, start ExternalDNS with the `--aws-prefer-cname` flag.
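A minimal args sketch enabling this behavior:

```yaml
args:
  - --provider=aws
  # Create CNAME records instead of ALIAS records for ELB/ALB targets
  - --aws-prefer-cname
```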
Why would you want to force ExternalDNS to create CNAME records for ELB/ALB? Some user motivations were:
> "Our hosted zones records are synchronized with our enterprise DNS. The record type ALIAS is an AWS proprietary record type and AWS allows you to set a DNS record directly on AWS resources. Since this is not a DNS RfC standard and therefore can not be transferred and created in our enterprise DNS. So we need to force CNAME creation instead."
> "Our hosted zones records are synchronized with our enterprise DNS. The record type ALIAS is an AWS proprietary record type and AWS allows you to set a DNS record directly on AWS resources.
> Since this is not a DNS RfC standard and therefore can not be transferred and created in our enterprise DNS. So we need to force CNAME creation instead."
or
> "In case of ALIAS if we do nslookup with domain name, it will return only IPs of ELB. So it is always difficult for us to locate ELB in AWS console to which domain is pointing. If we configure it with CNAME it will return exact ELB CNAME, which is more helpful.!"
## Which permissions do I need when running ExternalDNS on a GCE or GKE node
You need to add either https://www.googleapis.com/auth/ndev.clouddns.readwrite or https://www.googleapis.com/auth/cloud-platform to your instance group's scopes.
## What metrics can I get from ExternalDNS and what do they mean?
ExternalDNS exposes two types of metrics: Source and Registry errors.
`Source`s are mostly Kubernetes API objects. Examples of `source` errors may be connection errors to the Kubernetes API server itself or missing RBAC permissions.
They can also stem from incompatible configuration in the objects themselves, like invalid characters, a broken fqdnTemplate, etc.
`Registry` errors are mostly Provider errors, unless there's some coding flaw in the registry package. Provider errors often arise when accessing their APIs, due to network issues or missing cloud-provider permissions when reading records.
When applying a changeset, errors will arise if the changeset applied is incompatible with the current state.
In case of an increased error count, you could correlate them with the `http_request_duration_seconds{handler="instrumented_http"}` metric which should show increased numbers for status codes 4xx (permissions, configuration, invalid changeset) or 5xx (apiserver down).
@ -193,7 +213,6 @@ Here is the full list of available metrics provided by ExternalDNS:
| external_dns_source_aaaa_records | Number of AAAA records in source | Gauge |
| external_dns_source_a_records | Number of A records in source | Gauge |
If you're using the webhook provider, the following additional metrics will be provided:
| Name | Description | Type |
@ -205,12 +224,11 @@ If you're using the webhook provider, the following additional metrics will be p
| external_dns_webhook_provider_adjustendpoints_errors_total | Number of errors with the /adjustendpoints method | Gauge |
| external_dns_webhook_provider_adjustendpoints_requests_total | Number of requests made to the /adjustendpoints method | Gauge |
## How can I run ExternalDNS under a specific GCP Service Account, e.g. to access DNS records in other projects?
Have a look at https://github.com/linki/mate/blob/v0.6.2/examples/google/README.md#permissions
## How do I configure multiple Sources via environment variables? (also applies to domain filters)
Separate the individual values via a line break. The equivalent of `--source=service --source=ingress` would be `service\ningress`. However, it can be tricky to define that depending on your environment. The following examples work (zsh):
@ -225,7 +243,6 @@ $ docker run \
time="2017-08-08T14:10:26Z" level=info msg="config: &{APIServerURL: KubeConfig: Sources:[service ingress] Namespace: ...
```
Locally:
```console
@ -234,7 +251,7 @@ $ external-dns --provider=google
INFO[0000] config: &{APIServerURL: KubeConfig: Sources:[service ingress] Namespace: ...
```
```sh
$ EXTERNAL_DNS_SOURCE=$'service\ningress' external-dns --provider=google
INFO[0000] config: &{APIServerURL: KubeConfig: Sources:[service ingress] Namespace: ...
```
@ -267,8 +284,7 @@ spec:
ingress
```
## Running an internal and external dns service
Sometimes you need to run an internal and an external dns service.
The internal one should provision hostnames used on the internal network (perhaps inside a VPC), and the external one to expose DNS to the internet.
@ -299,7 +315,7 @@ In larger clusters with many resources which change frequently this can cause pe
If only some resources need to be managed by an instance of external-dns then label filtering can be used instead of ingress class filtering (or legacy annotation filtering).
This means that only those resources which match the selector specified in `--label-filter` will be passed to the controller.
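For example, a sketch restricting one instance to resources carrying a hypothetical label:

```yaml
args:
  # Only resources matching this label selector are processed
  # (the key/value pair below is a placeholder)
  - --label-filter=team=infra
```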
## How do I specify that I want the DNS record to point to either the Node's public or private IP when it has both?
If your Nodes have both public and private IP addresses, you might want to write DNS records with one or the other.
For example, you may want to write a DNS record in a private zone that resolves to your Nodes' private IPs so that traffic never leaves your private network.
@ -312,20 +328,19 @@ If this annotation is not set, and the node has both public and private IP addre
Some loadbalancer implementations assign multiple IP addresses as external addresses. You can filter the generated targets by their networks
using `--target-net-filter=10.0.0.0/8` or `--exclude-target-net=10.0.0.0/8`.
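A sketch of both variants (the CIDR is a placeholder):

```yaml
args:
  # Keep only targets inside the private range...
  - --target-net-filter=10.0.0.0/8
  # ...or, alternatively, drop targets inside it
  # - --exclude-target-net=10.0.0.0/8
```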
## Can external-dns manage (add/remove) records in a hosted zone which is set up in a different AWS account?
Yes, give it the correct cross-account/assume-role permissions and use the `--aws-assume-role` flag https://github.com/kubernetes-sigs/external-dns/pull/524#issue-181256561
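As a hedged sketch, the role ARN below is a placeholder for a role in the account owning the hosted zone:

```yaml
args:
  - --provider=aws
  # Placeholder ARN; grant this role the required Route 53 permissions
  - --aws-assume-role=arn:aws:iam::123456789012:role/external-dns-cross-account
```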
## How do I provide multiple values to the annotation `external-dns.alpha.kubernetes.io/hostname`?
Separate them by `,`.
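For example, a Service metadata fragment with two placeholder names; ExternalDNS creates a record for each:

```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: svc.example.com,svc.example.org
```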
## Are there official Docker images provided?
When we tag a new release, we push a container image to the Kubernetes project's official container registry with the following name:
```sh
registry.k8s.io/external-dns/external-dns
```
@ -333,19 +348,21 @@ As tags, you use the external-dns release of choice(i.e. `v0.7.6`). A `latest` t
If you wish to build your own image, you can use the provided [.ko.yaml](https://github.com/kubernetes-sigs/external-dns/blob/master/.ko.yaml) as a starting point.
## Which architectures are supported?
From `v0.7.5` on we support `amd64`, `arm32v7` and `arm64v8`. This means that you can run ExternalDNS on a Kubernetes cluster backed by Raspberry Pis or on ARM instances in the cloud, as well as on more traditional machines backed by `amd64`-compatible CPUs.
## Which operating systems are supported?
At the time of writing we only support GNU/Linux, and we have no plans to support Windows or other operating systems.
## Why am I seeing time out errors even though I have connectivity to my cluster?
If you're seeing an error such as this:
```sh
FATA[0060] failed to sync cache: timed out waiting for the condition
```
You may not have the correct permissions required to query all the necessary resources in your Kubernetes cluster. Specifically, you may be running in a `namespace` that you don't have these permissions in.
By default, commands are run against the `default` namespace. Try changing this to your particular namespace to see if that fixes the issue.
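If you run ExternalDNS itself with namespace-scoped permissions, a sketch of the corresponding restriction (the namespace name is a placeholder):

```yaml
args:
  # Limit watched sources to a namespace the service account can read
  - --namespace=my-namespace
```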

View File

@ -2,6 +2,7 @@
<!-- THIS FILE MUST NOT BE EDITED BY HAND -->
<!-- ON NEW FLAG ADDED PLEASE RUN 'make generate-flags-documentation' -->
<!-- markdownlint-disable MD013 -->
| Flag | Description |
| :------ | :----------- |
@ -170,4 +171,4 @@
| `--webhook-provider-url="http://localhost:8888"` | The URL of the remote endpoint to call for the webhook provider (default: http://localhost:8888) |
| `--webhook-provider-read-timeout=5s` | The read timeout for the webhook provider in duration format (default: 5s) |
| `--webhook-provider-write-timeout=10s` | The write timeout for the webhook provider in duration format (default: 10s) |
| `--[no-]webhook-server` | When enabled, runs as a webhook server instead of a controller. (default: false). |

View File

@ -8,13 +8,14 @@
This document describes the initial design proposal.
External DNS is purposed to fill the existing gap of creating DNS records for Kubernetes resources. While there exist alternative solutions, this project is meant to be a standard way of managing DNS records for Kubernetes.
The current project is a fusion of the following projects and driven by its maintainers:
1. [Kops DNS Controller](https://github.com/kubernetes/kops/tree/HEAD/dns-controller)
2. [Mate](https://github.com/linki/mate)
3. [wearemolecule/route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes)
## Example use case
User runs `kubectl create -f ingress.yaml`; this will create an ingress as normal.
Typically the user would then have to manually create a DNS record pointing the ingress endpoint
@ -36,6 +37,7 @@ New cloud providers should be easily pluggable. Initially only AWS/Google platfo
### Configuration
DNS records will be automatically created in multiple situations (a combined sketch follows the list):
1. Setting `spec.rules.host` on an ingress object.
2. Setting `spec.tls.hosts` on an ingress object.
3. Adding the annotation `external-dns.alpha.kubernetes.io/hostname` on an ingress object.
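A hedged sketch combining all three triggers in one Ingress, using the current Ingress API for illustration (the hostnames are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    external-dns.alpha.kubernetes.io/hostname: alias.example.com  # situation 3
spec:
  tls:
    - hosts:
        - secure.example.com  # situation 2
  rules:
    - host: www.example.com   # situation 1
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example
                port:
                  number: 80
```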
@ -48,7 +50,8 @@ Record configuration should occur via resource annotations. Supported annotation
| Annotations | |
|---|---|
|Tag |external-dns.alpha.kubernetes.io/controller |
|Description | Tells a DNS controller to process this service. This is useful when running different DNS controllers at the same time (or different versions of the same controller). |
| Details | The v1 implementation of dns-controller would look for service annotations `dns-controller` and `dns-controller/v1` but not for `mate/v1` or `dns-controller/v2` |
|Default | dns-controller |
|Example|dns-controller/v1|
|Required| false |
@ -61,11 +64,11 @@ Record configuration should occur via resource annotations. Supported annotation
### Compatibility
External DNS should be compatible with annotations used by the three above-mentioned projects. The idea is that resources created and tagged with annotations for other projects should continue to be valid and now be managed by External DNS.
**Mate**
Mate does not require services/ingress to be tagged. Therefore, it is not safe to run both Mate and External-DNS simultaneously. The idea is that initial release (?) of External DNS will support Mate annotations, which indicates the hostname to be created. Therefore the switch should be simple.
|Annotations | |
|---|---|
@ -77,8 +80,9 @@ Mate does not require services/ingress to be tagged. Therefore, it is not safe t
**route53-kubernetes**
It should be safe to run both `route53-kubernetes` and `external-dns` simultaneously.
Since `route53-kubernetes` only looks at services with the label `dns=route53` and does not support ingress there should be no collisions between annotations.
If users desire to switch to `external-dns` they can run both controllers and migrate services over as they are able.
### Ownership
@ -86,4 +90,5 @@ External DNS should be *responsible* for the created records. Which means that t
#### Ownership via TXT records
Each record managed by External DNS is accompanied with a TXT record with a specific value to indicate that corresponding DNS record is managed by External DNS and it can be updated/deleted respectively.
TXT records are limited to lifetimes of service/ingress objects and are created/deleted once k8s resources are created/deleted.

View File

@ -1,21 +1,22 @@
# Configure NAT64 DNS Records
Some NAT64 configurations are entirely handled outside the Kubernetes cluster; therefore, Kubernetes does not know anything about the associated IPv4 addresses. ExternalDNS should also be able to create A records for those cases.
Therefore, we can configure `nat64-networks`, which **must** be a /96 network. You can also specify multiple `nat64-networks` for more complex setups.
This creates an additional A record with a NAT64-translated IPv4 address for each AAAA record pointing to an IPv6 address within the given `nat64-networks`.
This can be configured with the following flag passed to the operator binary. You can also pass multiple `nat64-networks` by using a comma as separator.
```sh
--nat64-networks="2001:db8:96::/96"
```
## Setup Example
We use an external NAT64 resolver and SIIT (Stateless IP/ICMP Translation). Therefore, our nodes only have IPv6 addresses but can reach IPv4 addresses *and* can be reached via IPv4.
Outgoing connections are a classic NAT64 setup, where all IPv6 addresses get translated to a small pool of IPv4 addresses.
Incoming connections are mapped on a different IPv4 pool, e.g. `198.51.100.0/24`, which can get translated one-to-one to IPv6 addresses.
We dedicate a `/96` network for this, for example `2001:db8:96::/96`, so `198.51.100.0/24` can be translated to `2001:db8:96::c633:6400/120`. Note: a `/120` IPv6 network has exactly as many IP addresses as a `/24` IPv4 network.
Therefore, the `/96` network can be configured as `nat64-networks`. This means, that `2001:0DB8:96::198.51.100.10` or `2001:db8:96::c633:640a` can be translated to `198.51.100.10`.
Any source can point a record to an IPv6 address within the given `nat64-networks`, for example `2001:db8:96::c633:640a`.
By default, this creates an AAAA record and, if `nat64-networks` is configured, also an A record with `198.51.100.10` as the target.

View File

@ -37,4 +37,4 @@
Material for MkDocs
</a>
{% endif %}
</div>

View File

@ -18,31 +18,41 @@ status: draft
<!-- /toc -->
## Summary
Please provide a summary of this proposal.
## Motivation
What is the motivation of this proposal? Why is it useful and relevant?
### Goals
What are the goals of this proposal? What's the problem we want to solve?
### Non-Goals
What are explicit non-goals of this proposal?
## Proposal
What does the proposal look like?
### User Stories
How would users use this feature? What are their needs?
### API
Please describe the API (CRD or other) and show some examples.
### Behavior
How should the new CRD or feature behave? Are there edge cases?
### Drawbacks
If we implement this feature, what are the drawbacks and disadvantages of this approach?
## Alternatives
What alternatives do we have and what are their pros and cons?

View File

@ -1,4 +1,5 @@
# Multiple Targets per hostname
*(November 2017)*
## Purpose
@ -13,13 +14,14 @@ ingress/service owns the record it can have multiple targets enable iff they are
See https://github.com/kubernetes-sigs/external-dns/issues/239
## Current behaviour
*(as of the moment of writing)*
The central piece of enabling multi-target support is consistent and correct behaviour in the `plan` component with regard to how endpoints generated
from Kubernetes resources are mapped to DNS records. The current implementation of `plan` behaves inconsistently in the following scenarios, all
of which must be resolved before multi-target support can be enabled in the provider implementations:
1. No records registered so far. Two **different** ingresses request same hostname but different targets, e.g. Ingress A: example.com -> 1.1.1.1 and Ingress B: example.com -> 2.2.2.2
* *Current Behaviour*: both are added to the "Create" (records to be created) list and passed to Provider
* *Expected Behaviour*: only one (random/ or according to predefined strategy) should be chosen and passed to Provider
@ -48,11 +50,11 @@ For this feature to work we have to make sure that:
should store back-reference for the resource this record was created for, i.e. `"heritage=external-dns,external-dns/resource=ingress/default/my-ingress-object-name"`
2. DNS records are updated only:
* If owning resource target list has changed
* If owning resource record is not found in the desired list (meaning it was deleted), therefore it will now be owned by another record. So its target list will be updated
* Changes related to other record properties (e.g. TTL)
4. All of the issues described in the `Current Behaviour` section are resolved
@ -93,6 +95,7 @@ These PRs should be considered after common agreement about the way to address m
### How to proceed from here
The following steps are needed:
1. Make sure consensus regarding the approach is achieved via collaboration on the current document
2. Notify all PR (see above) authors about the agreed approach
3. Implementation:
@ -114,5 +117,5 @@ The following steps are needed:
## Open questions
* Handling cases when ingress/service targets include both hostnames and IPs - postpone this until a use case occurs
* "Weighted records scope": https://github.com/kubernetes-sigs/external-dns/issues/196 - this should be considered once multi-target support is implemented

View File

@ -1,5 +1,4 @@
# DNS provider API rate limits considerations
## Introduction
@ -31,12 +30,12 @@ This option is enabled using the `--provider-cache-time=15m` command line argume
You can evaluate the behaviour of the cache thanks to the built-in metrics:
* `external_dns_provider_cache_records_calls`
* The number of calls to the provider cache Records list.
* The label `from_cache=true` indicates that the records were retrieved from memory and the DNS provider was not reached
* The label `from_cache=false` indicates that the cache was not used and the records were retrieved from the provider
* `external_dns_provider_cache_apply_changes_calls`
* The number of calls to the provider cache ApplyChanges.
* Each ApplyChange systematically invalidates the cache and makes subsequent Records list to be retrieved from the provider without cache.
## Related options
@ -60,7 +59,8 @@ to match the specific needs of your deployments, with the goal to reduce the num
* `--ovh-api-rate-limit=20` When using the OVH provider, specify the API request rate limit, X operations by seconds (default: 20)
* Global
* `--registry=txt` The registry implementation to use to keep track of DNS record ownership.
* Other registry options such as dynamodb can help mitigate rate limits by storing the registry outside of the DNS hosted zone (default: txt, options: txt, noop, dynamodb, aws-sd)
* `--txt-cache-interval=0s` The interval between cache synchronizations in duration format (default: disabled)
* `--interval=1m0s` The interval between two consecutive synchronizations in duration format (default: 1m)
* `--min-event-sync-interval=5s` The minimum interval between two consecutive synchronizations triggered from kubernetes events in duration format (default: 5s)

View File

@ -111,7 +111,7 @@ spec:
## Validate ExternalDNS works
Create either a [Service](../tutorials/aws.md#verify-externaldns-works-service-example) or an [Ingress](../tutorials/aws.md#verify-externaldns-works-ingress-example).
After roughly two minutes, check that the corresponding entry was created in the DynamoDB table:
@ -120,7 +120,8 @@ aws dynamodb scan --table-name external-dns
```
This will show something like:
```json
{
"Items": [
{

View File

@ -4,17 +4,21 @@ The TXT registry is the default registry.
It stores DNS record metadata in TXT records, using the same provider.
## Record Format Options
The TXT registry supports two formats for storing DNS record metadata:
- Legacy format: Creates a TXT record without record type information
- New format: Creates a TXT record with record type information (e.g., 'a-' prefix for A records)
By default, the TXT registry creates records in both formats for backwards compatibility. You can configure it to use only the new format by using the `--txt-new-format-only` flag. This reduces the number of TXT records created, which can be helpful when working with provider-specific record limits.
Note: The following record types always use only the new format regardless of this setting:
- AAAA records
- Encrypted TXT records (when using `--txt-encrypt-enabled`)
Example:
```sh
# Default behavior - creates both formats
external-dns --provider=aws --source=ingress --managed-record-types=A,TXT
@ -22,10 +26,13 @@ external-dns --provider=aws --source=ingress --managed-record-types=A,TXT
# Only create new format records (alongside other required flags)
external-dns --provider=aws --source=ingress --managed-record-types=A,TXT --txt-new-format-only
```
The `--txt-new-format-only` flag should be used in addition to your existing external-dns configuration flags. It does not implicitly configure TXT record handling - you still need to specify `--managed-record-types=TXT` if you want external-dns to manage TXT records.
### Migration to New Format Only
When transitioning from dual-format to new-format-only records:
- Ensure all your `external-dns` instances support the new format
- Enable the `--txt-new-format-only` flag on your external-dns instances
- Manually clean up any existing legacy format TXT records from your DNS provider
@ -64,22 +71,27 @@ key must be specified in URL-safe base64 form (recommended) or be a plain text,
Note that the key used for encryption should be a secure key and properly managed to ensure the security of your TXT records.
### Generating the TXT Encryption Key
Python
```python
python -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())'
```
Bash
```shell
dd if=/dev/urandom bs=32 count=1 2>/dev/null | base64 | tr -d -- '\n' | tr -- '+/' '-_'; echo
```
OpenSSL
```shell
openssl rand -base64 32 | tr -- '+/' '-_'
```
PowerShell
```powershell
# Add System.Web assembly to session, just in case
Add-Type -AssemblyName System.Web
@ -87,6 +99,7 @@ Add-Type -AssemblyName System.Web
```
Terraform
```hcl
resource "random_password" "txt_key" {
length = 32
@ -102,37 +115,37 @@ In some cases you might need to edit registry TXT records. The following example
package main
import (
"fmt"
"sigs.k8s.io/external-dns/endpoint"
"fmt"
"sigs.k8s.io/external-dns/endpoint"
)
func main() {
keys := []string{
"ZPitL0NGVQBZbTD6DwXJzD8RiStSazzYXQsdUowLURY=", // safe base64 url encoded 44 bytes and 32 when decoded
"01234567890123456789012345678901", // plain txt 32 bytes
"passphrasewhichneedstobe32bytes!", // plain txt 32 bytes
}
for _, k := range keys {
key := []byte(k)
if len(key) != 32 {
// if key is not a plain txt let's decode
var err error
if key, err = b64.StdEncoding.DecodeString(string(key)); err != nil || len(key) != 32 {
				fmt.Println("the AES Encryption key must have a length of 32 bytes")
}
}
encrypted, _ := endpoint.EncryptText(
"heritage=external-dns,external-dns/owner=example,external-dns/resource=ingress/default/example",
key,
nil,
)
decrypted, _, err := endpoint.DecryptText(encrypted, key)
if err != nil {
fmt.Println("Error decrypting:", err, "for key:", k)
}
fmt.Println(decrypted)
}
}
```

View File

@ -2,7 +2,8 @@
## Release cycle
Currently we don't release regularly. Whenever we think it makes sense to release a new version we do it.
You might want to ask in our Slack channel [external-dns](https://kubernetes.slack.com/archives/C771MKDKQ) when the next release will come out.
## Staging Release cycle
@ -39,7 +40,8 @@ You must be an official maintainer of the project to be able to do a release.
- Run `scripts/releaser.sh` to create a new GitHub release. Alternatively you can create a release in the GitHub UI, making sure to click on the autogenerate release notes feature.
- The step above will trigger the Kubernetes based CI/CD system [Prow](https://prow.k8s.io/?repo=kubernetes-sigs%2Fexternal-dns). Verify that a new image was built and uploaded to `gcr.io/k8s-staging-external-dns/external-dns`.
- Create a PR in the [k8s.io repo](https://github.com/kubernetes/k8s.io) by taking the current staging image using the sha256 digest. Once the PR is merged, the image will be live with the corresponding tag specified in the PR.
- See https://github.com/kubernetes/k8s.io/pull/540 for reference
- Verify that the image is pullable with the given tag (i.e. `v0.7.5`).
- Branch out from the default branch and run `scripts/kustomize-version-updater.sh` to update the image tag used in the kustomization.yaml.
- Create an issue to release the corresponding Helm chart via the chart release process (below) assigned to a chart maintainer

View File

@ -1,3 +1,3 @@
<head>
<meta http-equiv="Refresh" content="0; url='/external-dns/{{.}}/'" />
</head>

View File

@ -2,12 +2,12 @@
CRD source provides a generic mechanism to manage DNS records in your favourite DNS provider supported by external-dns.
## Details
CRD source watches for a user-specified CRD to extract [Endpoints](https://github.com/kubernetes-sigs/external-dns/blob/HEAD/endpoint/endpoint.go) from its `Spec`.
Users therefore need to create such a CRD, register it with the Kubernetes cluster, and then create new object(s) of the CRD specifying the Endpoints.
## Registering CRD
Here is a typical example of a [CRD API type](https://github.com/kubernetes-sigs/external-dns/blob/HEAD/endpoint/endpoint.go) which provides Endpoints to the `CRD source`:
@ -62,21 +62,20 @@ type DNSEndpoint struct {
Spec DNSEndpointSpec `json:"spec,omitempty"`
Status DNSEndpointStatus `json:"status,omitempty"`
}
```
Refer to [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) to create and register the CRD.
## Usage
One can use the CRD source by passing `crd` to the `--source` flag and specifying the ApiVersion and Kind of the CRD with `--crd-source-apiversion` and `--crd-source-kind` respectively.
For example:
```sh
build/external-dns --source crd --crd-source-apiversion externaldns.k8s.io/v1alpha1 --crd-source-kind DNSEndpoint --provider inmemory --once --dry-run
```
## Creating DNS Records
Create objects of the CRD type by filling in the CRD's fields, and DNS records will be created accordingly.
@ -85,21 +84,21 @@ Create the objects of CRD type by filling in the fields of CRD and DNS record wo
Here is an example [CRD manifest](crd/crd-manifest.yaml) generated by kubebuilder.
Apply this to register the CRD
```sh
$ kubectl apply --validate=false -f docs/sources/crd/crd-manifest.yaml
customresourcedefinition.apiextensions.k8s.io "dnsendpoints.externaldns.k8s.io" created
```
Then you can create the dns-endpoint yaml similar to [dnsendpoint-example](crd/dnsendpoint-example.yaml)
```sh
$ kubectl apply -f docs/sources/crd/dnsendpoint-example.yaml
dnsendpoint.externaldns.k8s.io "examplednsrecord" created
```
Run external-dns in dry-run mode to see whether external-dns picks up the DNS record from the CRD.
```sh
$ build/external-dns --source crd --crd-source-apiversion externaldns.k8s.io/v1alpha1 --crd-source-kind DNSEndpoint --provider inmemory --once --dry-run
INFO[0000] running in dry-run mode. No changes to DNS records will be made.
INFO[0000] Connected to cluster at https://192.168.99.100:8443
@ -107,7 +106,7 @@ INFO[0000] CREATE: foo.bar.com 180 IN A 192.168.99.216
INFO[0000] CREATE: foo.bar.com 0 IN TXT "heritage=external-dns,external-dns/owner=default"
```
### Using CRD source to manage DNS records in different DNS providers
[CRD source](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/sources/crd.md) provides a generic mechanism and declarative way to manage DNS records in different DNS providers using external-dns.
@ -170,10 +169,11 @@ spec:
- ns2.example.com
```
## RBAC configuration
If you use RBAC, extend the `external-dns` ClusterRole with:
```yaml
- apiGroups: ["externaldns.k8s.io"]
resources: ["dnsendpoints"]
verbs: ["get","watch","list"]

View File

@ -9,7 +9,7 @@ spec:
- name: "aws/failover"
value: "PRIMARY"
- name: "aws/health-check-id"
value: "asdf1234-as12-as12-as12-asdf12345678"
value: "asdf1234-as12-as12-as12-asdf12345678"
- name: "aws/evaluate-target-health"
value: "true"
recordType: CNAME

View File

@ -9,20 +9,24 @@ The F5 Networks TransportServer CRD is part of [this](https://github.com/F5Netwo
1. Make sure that you have the `k8s-bigip-ctlr` installed in your cluster. The needed CRDs are bundled within the controller.
2. In your Helm `values.yaml` add:
```yaml
sources:
- ...
- f5-transportserver
- ...
```
or add it in your `Deployment` if you aren't installing `external-dns` via Helm:
```yaml
args:
- --source=f5-transportserver
```
Note that, in case you're not installing via Helm, you'll need the following in the `ClusterRole` bound to the service account of `external-dns`:
```yaml
- apiGroups:
- cis.f5.com
resources:
@ -35,7 +39,7 @@ Note that, in case you're not installing via Helm, you'll need the following in
### Example TransportServer CR w/ host in spec
```yaml
apiVersion: cis.f5.com/v1
kind: TransportServer
metadata:
@ -58,7 +62,7 @@ spec:
If the `external-dns.alpha.kubernetes.io/target` annotation is set, the record created will reflect that and everything else will be ignored.
```yaml
apiVersion: cis.f5.com/v1
kind: TransportServer
metadata:
@ -83,7 +87,7 @@ spec:
If `virtualServerAddress` is set, the record created will reflect that. `external-dns.alpha.kubernetes.io/target` will take precedence though.
```yaml
apiVersion: cis.f5.com/v1
kind: TransportServer
metadata:

View File

@ -1,27 +1,33 @@
# F5 Networks VirtualServer Source
This tutorial describes how to configure ExternalDNS to use the F5 Networks VirtualServer Source. It is meant to supplement the other provider-specific setup tutorials.
The F5 Networks VirtualServer CRD is part of [this](https://github.com/F5Networks/k8s-bigip-ctlr) project.
See more in-depth info regarding the VirtualServer CRD [here](https://github.com/F5Networks/k8s-bigip-ctlr/blob/master/docs/config_examples/customResource/CustomResource.md#virtualserver).
## Start with ExternalDNS with the F5 Networks VirtualServer source
1. Make sure that you have the `k8s-bigip-ctlr` installed in your cluster. The needed CRDs are bundled within the controller.
2. In your Helm `values.yaml` add:
```yaml
sources:
- ...
- f5-virtualserver
- ...
```
or add it in your `Deployment` if you aren't installing `external-dns` via Helm:
```yaml
args:
- --source=f5-virtualserver
```
Note that, in case you're not installing via Helm, you'll need the following in the `ClusterRole` bound to the service account of `external-dns`:
```yaml
- apiGroups:
- cis.f5.com
resources:

View File

@ -36,6 +36,7 @@ specs to provide all intended hostnames, since the Gateway that ultimately route
requests/connections won't recognize additional hostnames from the annotation.
## Manifest with RBAC
```yaml
apiVersion: v1
kind: ServiceAccount
@ -52,7 +53,7 @@ rules:
resources: ["namespaces"]
verbs: ["get","watch","list"]
- apiGroups: ["gateway.networking.k8s.io"]
resources: ["gateways","httproutes","grpcroutes","tlsroutes","tcproutes","udproutes"]
resources: ["gateways","httproutes","grpcroutes","tlsroutes","tcproutes","udproutes"]
verbs: ["get","watch","list"]
---
apiVersion: rbac.authorization.k8s.io/v1

View File

@ -1,8 +1,10 @@
# Gloo Proxy Source
This tutorial describes how to configure ExternalDNS to use the Gloo Proxy source.
It is meant to supplement the other provider-specific setup tutorials.
## Manifest (for clusters without RBAC enabled)
```yaml
apiVersion: apps/v1
kind: Deployment
@ -31,7 +33,8 @@ spec:
- --txt-owner-id=my-identifier
```
## Manifest (for clusters with RBAC enabled)
This manifest could differ if you have multiple sources.
```yaml
@ -98,4 +101,3 @@ spec:
- --registry=txt
- --txt-owner-id=my-identifier
```

View File

@ -19,13 +19,13 @@ The domain names of the DNS entries created from an Ingress are sourced from the
This behavior is suppressed if the `--ignore-ingress-rules-spec` flag was specified
or the Ingress had an
`external-dns.alpha.kubernetes.io/ingress-hostname-source: annotation-only` annotation.
2. Iterates over the Ingress's `spec.tls`, adding each member of `hosts`.
This behavior is suppressed if the `--ignore-ingress-tls-spec` flag was specified
or the Ingress had an
`external-dns.alpha.kubernetes.io/ingress-hostname-source: annotation-only` annotation,
3. Adds the hostnames from any `external-dns.alpha.kubernetes.io/hostname` annotation.
@ -41,8 +41,8 @@ generated from any`--fqdn-template` flag.
The targets of the DNS entries created from an Ingress are sourced from the following places:
1. If the Ingress has an `external-dns.alpha.kubernetes.io/target` annotation, uses
the values from that.
2. Otherwise, iterates over the Ingress's `status.loadBalancer.ingress`,
adding each non-empty `ip` and `hostname`.

View File

@ -9,7 +9,7 @@ It is meant to supplement the other provider-specific setup tutorials.
* Manifest (for clusters with RBAC enabled)
* Update existing ExternalDNS Deployment
### Manifest (for clusters without RBAC enabled)
## Manifest (for clusters without RBAC enabled)
```yaml
apiVersion: apps/v1
@ -43,7 +43,7 @@ spec:
- --txt-owner-id=my-identifier
```
### Manifest (for clusters with RBAC enabled)
## Manifest (for clusters with RBAC enabled)
```yaml
apiVersion: v1
@ -114,7 +114,7 @@ spec:
- --txt-owner-id=my-identifier
```
### Update existing ExternalDNS Deployment
## Update existing ExternalDNS Deployment
* For clusters already running `external-dns`, you can just update the deployment.
* With access to the `kube-system` namespace, update the existing `external-dns` deployment.
@ -134,26 +134,29 @@ kubectl patch clusterrole external-dns --type='json' \
-p='[{"op": "add", "path": "/rules/4", "value": { "apiGroups": [ "networking.istio.io"], "resources": ["gateways"],"verbs": ["get", "watch", "list" ]} }]'
```
### Verify that Istio Gateway/VirtualService Source works
## Verify that Istio Gateway/VirtualService Source works
Follow the [Istio ingress traffic tutorial](https://istio.io/docs/tasks/traffic-management/ingress/)
to deploy a sample service that will be exposed outside of the service mesh.
The following are relevant snippets from that tutorial.
#### Install a sample service
### Install a sample service
With automatic sidecar injection:
```bash
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.6/samples/httpbin/httpbin.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.6/samples/httpbin/httpbin.yaml
```
Otherwise:
```bash
$ kubectl apply -f <(istioctl kube-inject -f https://raw.githubusercontent.com/istio/istio/release-1.6/samples/httpbin/httpbin.yaml)
kubectl apply -f <(istioctl kube-inject -f https://raw.githubusercontent.com/istio/istio/release-1.6/samples/httpbin/httpbin.yaml)
```
#### Using a Gateway as a source
### Using a Gateway as a source
##### Create an Istio Gateway:
#### Create an Istio Gateway
```bash
$ cat <<EOF | kubectl apply -f -
@ -175,7 +178,7 @@ spec:
EOF
```
##### Configure routes for traffic entering via the Gateway:
#### Configure routes for traffic entering via the Gateway
```bash
$ cat <<EOF | kubectl apply -f -
@ -202,9 +205,9 @@ spec:
EOF
```
#### Using a VirtualService as a source
### Using a VirtualService as a source
##### Create an Istio Gateway:
#### Create an Istio Gateway
```bash
$ cat <<EOF | kubectl apply -f -
@ -226,7 +229,7 @@ spec:
EOF
```
##### Configure routes for traffic entering via the Gateway:
#### Configure routes for traffic entering via the Gateway
```bash
$ cat <<EOF | kubectl apply -f -
@ -258,7 +261,8 @@ Please take a look at the [source service documentation](../sources/service.md)
It is also possible to set the targets manually by using the `external-dns.alpha.kubernetes.io/target` annotation on the Istio Ingress Gateway resource or the Istio VirtualService.
#### Access the sample service using `curl`
### Access the sample service using `curl`
```bash
$ curl -I http://httpbin.example.com/status/200
HTTP/1.1 200 OK
@ -272,6 +276,7 @@ x-envoy-upstream-service-time: 5
```
Accessing any other URL that has not been explicitly exposed should return an HTTP 404 error:
```bash
$ curl -I http://httpbin.example.com/headers
HTTP/1.1 404 Not Found
@ -282,7 +287,7 @@ transfer-encoding: chunked
**Note:** The `-H` flag in the original Istio tutorial is no longer necessary in the `curl` commands.
### Optional Gateway Annotation
## Optional Gateway Annotation
To support setups where an Ingress resource is used to provision an external LB, you can add the following annotation to your Gateway
@ -310,7 +315,7 @@ spec:
EOF
```
### Debug ExternalDNS
## Debug ExternalDNS
* Look for the deployment pod to see the status
@ -321,7 +326,7 @@ external-dns-6b84999479-4knv9 1/1 Running 0 3h29m
* Watch for the logs as follows
```console
$ kubectl logs -f external-dns-6b84999479-4knv9
kubectl logs -f external-dns-6b84999479-4knv9
```
At this point, you can `create` or `update` any `Istio Gateway` object with a `hosts` entries array.

View File

@ -3,7 +3,7 @@
This tutorial describes how to configure ExternalDNS to use the Kong TCPIngress source.
It is meant to supplement the other provider-specific setup tutorials.
### Manifest (for clusters without RBAC enabled)
## Manifest (for clusters without RBAC enabled)
```yaml
apiVersion: apps/v1
@ -32,7 +32,8 @@ spec:
- --txt-owner-id=my-identifier
```
### Manifest (for clusters with RBAC enabled)
## Manifest (for clusters with RBAC enabled)
Could be changed if you have multiple sources
```yaml

View File

@ -12,7 +12,7 @@ This avoid exposing Unhealthy, NotReady or SchedulingDisabled (cordon) nodes.
## Manifest (for cluster without RBAC enabled)
```
```yaml
---
apiVersion: apps/v1
kind: Deployment
@ -48,7 +48,7 @@ spec:
## Manifest (for cluster with RBAC enabled)
```
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:

View File

@ -3,13 +3,14 @@
This tutorial describes how to configure ExternalDNS to use the OpenShift Route source.
It is meant to supplement the other provider-specific setup tutorials.
### For OCP 4.x
## For OCP 4.x
In OCP 4.x, if you have multiple [OpenShift ingress controllers](https://docs.openshift.com/container-platform/4.9/networking/ingress-operator.html), then you must specify an ingress controller name (also called a router name); you can get it from the route's `status.ingress[*].routerName` field.
If you don't specify a router name when you have multiple ingress controllers in your cluster, the first router from the route's `status.ingress` will be used. Note that the router must have admitted the route in order to be selected.
Once the router is known, ExternalDNS will use this router's canonical hostname as the target for the CNAME record.
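For example, a minimal container args sketch pinning the source to a specific router (the router name `default` is an assumption; substitute the `routerName` found above):

```yaml
args:
- --source=openshift-route
# assumed flag value; take the name from the route's status.ingress[*].routerName
- --openshift-router-name=default
```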
Starting from OCP 4.10, you can use the [ExternalDNS Operator](https://github.com/openshift/external-dns-operator) to manage ExternalDNS instances. An example of its custom resource for the AWS provider:
```yaml
apiVersion: externaldns.olm.openshift.io/v1alpha1
kind: ExternalDNS
@ -27,7 +28,8 @@ Starting from OCP 4.10 you can use [ExternalDNS Operator](https://github.com/ope
```
This will create an ExternalDNS pod with the following container args in the `external-dns` namespace:
```
```yaml
spec:
containers:
- args:
@ -43,12 +45,15 @@ spec:
- --txt-prefix=external-dns-
```
### For OCP 3.11 environment
## For OCP 3.11 environment
### Prepare ROUTER_CANONICAL_HOSTNAME in default/router deployment
Read and go through [Finding the Host Name of the Router](https://docs.openshift.com/container-platform/3.11/install_config/router/default_haproxy_router.html#finding-router-hostname).
If no ROUTER_CANONICAL_HOSTNAME is set, you must annotate each route with `external-dns.alpha.kubernetes.io/target`!
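If ROUTER_CANONICAL_HOSTNAME cannot be set, a per-route sketch (all names and the target value are hypothetical) looks like:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-openshift
  annotations:
    # hypothetical target; point this at your router's canonical hostname
    external-dns.alpha.kubernetes.io/target: "router-default.apps.example.com"
spec:
  host: hello-openshift.example.com
  to:
    kind: Service
    name: hello-openshift
```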
### Manifest (for clusters without RBAC enabled)
```yaml
apiVersion: apps/v1
kind: Deployment
@ -79,6 +84,7 @@ spec:
```
### Manifest (for clusters with RBAC enabled)
```yaml
apiVersion: v1
kind: ServiceAccount
@ -94,7 +100,7 @@ rules:
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
@ -146,10 +152,12 @@ spec:
```
### Verify External DNS works (OpenShift Route example)
The following instructions are based on the
The following instructions are based on the
[Hello Openshift](https://github.com/openshift/origin/tree/HEAD/examples/hello-openshift).
#### Install a sample service and expose it
```bash
$ oc apply -f - <<EOF
apiVersion: apps/v1
@ -203,6 +211,7 @@ EOF
```
#### Access the sample route using `curl`
```bash
$ curl -i http://hello-openshift.example.com
HTTP/1.1 200 OK

View File

@ -12,9 +12,12 @@ By default, the pod source will look into the pod annotations to find the FQDN a
## Configuration for registering all pods with their associated PTR record
A use case where combining these options can be pertinent is when you are running on-premise Kubernetes clusters without SNAT enabled for the pod network. You might want to register all the pods in the DNS with their associated PTR record so that the source of some traffic outside of the cluster can be rapidly associated with a workload using the "nslookup" or "dig" command on the pod IP. This can be particularly useful if you are running a large number of Kubernetes clusters.
A use case where combining these options can be pertinent is when you are running on-premise Kubernetes clusters without SNAT enabled for the pod network.
You might want to register all the pods in the DNS with their associated PTR record so that the source of some traffic outside of the cluster can be rapidly associated with a workload using the "nslookup" or "dig" command on the pod IP.
This can be particularly useful if you are running a large number of Kubernetes clusters.
You will then use the following mix of options:
- `--domain-filter=example.org`
- `--domain-filter=10.0.0.in-addr.arpa`
- `--source=pod`
@ -22,4 +25,4 @@ You will then use the following mix of options:
- `--no-ignore-non-host-network-pods`
- `--rfc2136-create-ptr`
- `--rfc2136-zone=example.org`
- `--rfc2136-zone=10.0.0.in-addr.arpa`
- `--rfc2136-zone=10.0.0.in-addr.arpa`
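Put together as container args in the ExternalDNS `Deployment`, that mix of options would look roughly like this sketch (the `--provider` flag is an assumption implied by the `rfc2136-*` options):

```yaml
args:
- --source=pod
- --provider=rfc2136   # assumed; the rfc2136-* flags below imply the RFC2136 provider
- --domain-filter=example.org
- --domain-filter=10.0.0.in-addr.arpa
- --no-ignore-non-host-network-pods
- --rfc2136-create-ptr
- --rfc2136-zone=example.org
- --rfc2136-zone=10.0.0.in-addr.arpa
```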

View File

@ -112,4 +112,3 @@ external-dns ... --managed-record-types=A --managed-record-types=CNAME --managed
1. If the Service has one or more `spec.externalIPs`, uses the values in that field.
2. Otherwise, creates a target with the value of the Service's `externalName` field.

View File

@ -126,8 +126,8 @@ ExternalDNS uses this annotation to determine what services should be registered
Create the IngressRoute:
```
$ kubectl create -f traefik-ingress.yaml
```sh
kubectl create -f traefik-ingress.yaml
```
Depending on where you run your IngressRoute, it can take a little while for ExternalDNS to synchronize the DNS record.
@ -136,9 +136,9 @@ Depending where you run your IngressRoute it can take a little while for Externa
Now that we have verified that ExternalDNS will automatically manage Traefik DNS records, we can delete the tutorial's example:
```
$ kubectl delete -f traefik-ingress.yaml
$ kubectl delete -f externaldns.yaml
```sh
kubectl delete -f traefik-ingress.yaml
kubectl delete -f externaldns.yaml
```
## Additional Flags
@ -152,9 +152,8 @@ $ kubectl delete -f externaldns.yaml
Traefik has deprecated the legacy API group, traefik.containo.us, in favor of traefik.io. By default, the traefik-proxy source will listen for resources under both API groups; however, this may cause timeouts with the following message:
```
```sh
FATA[0060] failed to sync traefik.io/v1alpha1, Resource=ingressroutes: context deadline exceeded
```
In this case, you can disable one or the other API group with `--traefik-disable-new` or `--traefik-disable-legacy`.
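For example, a sketch that keeps only the new API group:

```yaml
args:
- --source=traefik-proxy
- --traefik-disable-legacy
```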

View File

@ -1,5 +1,4 @@
Configure DNS record TTL (Time-To-Live)
=======================================
# Configure DNS record TTL (Time-To-Live)
An optional annotation `external-dns.alpha.kubernetes.io/ttl` is available to customize the TTL value of a DNS record.
TTL is specified as an integer encoded as a string, representing seconds.
@ -32,8 +31,7 @@ Both examples result in the same value of 60 seconds TTL.
TTL must be a positive value.
Providers
=========
## Providers
- [x] AWS (Route53)
- [x] Azure
@ -49,29 +47,35 @@ Providers
PRs welcome!
Notes
=====
## Notes
When the `external-dns.alpha.kubernetes.io/ttl` annotation is not provided, the TTL will default to 0 seconds and `endpoint.TTL.isConfigured()` will be false.
### AWS Provider
The AWS Provider overrides the value to 300s when the TTL is 0.
This value is a constant in the provider code.
## Azure
### Azure
TTL value should be between 1 and 2,147,483,647 seconds.
By default it will be 300s.
## CloudFlare Provider
### CloudFlare Provider
CloudFlare overrides the value to "auto" when the TTL is 0.
### DigitalOcean Provider
The DigitalOcean Provider overrides the value to 300s when the TTL is 0.
This value is a constant in the provider code.
### DNSimple Provider
The DNSimple Provider default TTL is used when the TTL is 0. The default TTL is 3600s.
### Google Provider
Previously with the Google Provider, TTLs were hard-coded to 300s.
For safety, the Google Provider overrides the value to 300s when the TTL is 0.
This value is a constant in the provider code.
@ -80,10 +84,13 @@ For the moment, it is impossible to use a TTL value of 0 with the AWS, DigitalOc
This behavior may change in the future.
### Linode Provider
The Linode Provider default TTL is used when the TTL is 0. The default is 24 hours.
### TransIP Provider
The TransIP Provider minimal TTL is used when the TTL is 0. The minimal TTL is 60s.
### UltraDNS
The UltraDNS provider minimal TTL is used when the TTL is not provided. The default TTL is the account-level default TTL, if defined; otherwise 24 hours.

View File

@ -6,11 +6,12 @@ External-DNS v0.8.0 or greater.
### Zones
External-DNS manages service endpoints in existing DNS zones. The Akamai provider does not add, remove or configure new zones. The [Akamai Control Center](https://control.akamai.com) or [Akamai DevOps Tools](https://developer.akamai.com/devops), [Akamai CLI](https://developer.akamai.com/cli) and [Akamai Terraform Provider](https://developer.akamai.com/tools/integrations/terraform) can create and manage Edge DNS zones.
External-DNS manages service endpoints in existing DNS zones. The Akamai provider does not add, remove or configure new zones.
The [Akamai Control Center](https://control.akamai.com) or [Akamai DevOps Tools](https://developer.akamai.com/devops), [Akamai CLI](https://developer.akamai.com/cli) and [Akamai Terraform Provider](https://developer.akamai.com/tools/integrations/terraform) can create and manage Edge DNS zones.
### Akamai Edge DNS Authentication
The Akamai Edge DNS provider requires valid Akamai Edgegrid API authentication credentials to access zones and manage DNS records.
The Akamai Edge DNS provider requires valid Akamai Edgegrid API authentication credentials to access zones and manage DNS records.
Credentials can be set for the provider either directly by key or indirectly via a file. The Akamai credential keys and their mappings to the Akamai provider for the different presentation methods are:
@ -82,7 +83,6 @@ Finally, install the ExternalDNS chart with Helm using the configuration specifi
helm upgrade --install external-dns external-dns/external-dns --values values.yaml
```
### Manifest (for clusters without RBAC enabled)
```yaml
@ -224,8 +224,8 @@ spec:
Create the deployment for External-DNS:
```
$ kubectl apply -f externaldns.yaml
```sh
kubectl apply -f externaldns.yaml
```
## Deploying an Nginx Service
@ -271,8 +271,8 @@ spec:
Create the deployment and service object:
```
$ kubectl apply -f nginx.yaml
```sh
kubectl apply -f nginx.yaml
```
## Verify Akamai Edge DNS Records
@ -280,14 +280,14 @@ $ kubectl apply -f nginx.yaml
Wait 3-5 minutes before validating the records to allow the record changes to propagate to all the Akamai name servers.
Validate records using the [Akamai Control Center](http://control.akamai.com) or by executing a dig, nslookup or similar DNS command.
## Cleanup
Once you successfully configure and verify record management via External-DNS, you can delete the tutorial's examples:
```
$ kubectl delete -f nginx.yaml
$ kubectl delete -f externaldns.yaml
```sh
kubectl delete -f nginx.yaml
kubectl delete -f externaldns.yaml
```
## Additional Information

View File

@ -72,21 +72,20 @@ When running on Alibaba Cloud, you need to make sure that your nodes (on which E
## Set up an Alibaba Cloud DNS service or Private Zone service
Alibaba Cloud DNS Service is the domain name resolution and management service for public access. It routes access from end-users to the designated web app.
Alibaba Cloud Private Zone is the domain name resolution and management service for VPC internal access.
Alibaba Cloud Private Zone is the domain name resolution and management service for VPC internal access.
*If you prefer to try out ExternalDNS in an existing domain or zone, you can skip this step.*
Create a DNS domain which will contain the managed DNS records. For the public DNS service, the domain name should be valid and owned by you.
```console
$ aliyun alidns AddDomain --DomainName "external-dns-test.com"
aliyun alidns AddDomain --DomainName "external-dns-test.com"
```
Make a note of the ID of the hosted zone you just created.
```console
$ aliyun alidns DescribeDomains --KeyWord="external-dns-test.com" | jq -r '.Domains.Domain[0].DomainId'
aliyun alidns DescribeDomains --KeyWord="external-dns-test.com" | jq -r '.Domains.Domain[0].DomainId'
```
## Deploy ExternalDNS
@ -95,6 +94,7 @@ Connect your `kubectl` client to the cluster you want to test ExternalDNS with.
Then apply one of the following manifest files to deploy ExternalDNS.
### Manifest (for clusters without RBAC enabled)
```yaml
apiVersion: apps/v1
kind: Deployment
@ -150,7 +150,7 @@ rules:
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
@ -197,7 +197,7 @@ spec:
- --alibaba-cloud-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
- --registry=txt
- --txt-owner-id=my-identifier
- --alibaba-cloud-config-file= # enable sts token
- --alibaba-cloud-config-file= # enable sts token
volumeMounts:
- mountPath: /usr/share/zoneinfo
name: hostpath
@ -208,8 +208,6 @@ spec:
type: Directory
```
## Arguments
This is not the full list, but a few selected arguments.
@ -221,7 +219,6 @@ This list is not the full list, but a few arguments that where chosen.
* If value is `public`, it will sync with records in Alibaba Cloud DNS Service
* If value is `private`, it will sync with records in Alibaba Cloud Private Zone Service
## Verify ExternalDNS works (Ingress example)
Create an ingress resource manifest file.
@ -326,7 +323,7 @@ $ aliyun alidns DescribeDomainRecords --DomainName=external-dns-test.com
"Locked": false,
"Line": "default",
"TTL": 600
}
}
]
}
}
@ -337,7 +334,7 @@ Note created TXT record alongside ALIAS record. TXT record signifies that the co
Let's check that we can resolve this DNS name. We'll ask the nameservers assigned to your zone first.
```console
$ dig nginx.external-dns-test.com.
dig nginx.external-dns-test.com.
```
If you hooked up your DNS zone with its parent zone correctly you can use `curl` to access your site.
@ -380,13 +377,13 @@ This will set the DNS record's TTL to 60 seconds.
Make sure to delete all Service objects before terminating the cluster so all load balancers get cleaned up correctly.
```console
$ kubectl delete service nginx
kubectl delete service nginx
```
Give ExternalDNS some time to clean up the DNS records for you. Then delete the hosted zone if you created one for testing purposes.
```console
$ aliyun alidns DeleteDomain --DomainName external-dns-test.com
aliyun alidns DeleteDomain --DomainName external-dns-test.com
```
For more info about Alibaba Cloud ExternalDNS, please refer to this [doc](https://yq.aliyun.com/articles/633412)

View File

@ -1,6 +1,7 @@
# AWS Route53 with same domain for public and private zones
This tutorial describes how to setup ExternalDNS using the same domain for public and private Route53 zones and [nginx-ingress-controller](https://github.com/kubernetes/ingress-nginx). It also outlines how to use [cert-manager](https://github.com/jetstack/cert-manager) to automatically issue SSL certificates from [Let's Encrypt](https://letsencrypt.org/) for both public and private records.
This tutorial describes how to set up ExternalDNS using the same domain for public and private Route53 zones and [nginx-ingress-controller](https://github.com/kubernetes/ingress-nginx).
It also outlines how to use [cert-manager](https://github.com/jetstack/cert-manager) to automatically issue SSL certificates from [Let's Encrypt](https://letsencrypt.org/) for both public and private records.
## Deploy public nginx-ingress-controller

View File

@ -2,7 +2,8 @@
This tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster with [AWS Cloud Map API](https://docs.aws.amazon.com/cloud-map/).
**AWS Cloud Map** API is an alternative approach to managing DNS records directly using the Route53 API. It is more suitable for a dynamic environment where service endpoints change frequently. It abstracts away technical details of the DNS protocol and offers a simplified model. AWS Cloud Map consists of three main API calls:
**AWS Cloud Map** API is an alternative approach to managing DNS records directly using the Route53 API. It is more suitable for a dynamic environment where service endpoints change frequently.
It abstracts away technical details of the DNS protocol and offers a simplified model. AWS Cloud Map consists of three main API calls:
* CreatePublicDnsNamespace automatically creates a DNS hosted zone
* CreateService creates a new named service inside the specified namespace
@ -14,7 +15,7 @@ Learn more about the API in the [AWS Cloud Map API Reference](https://docs.aws.a
To use the AWS Cloud Map API, a user must have permissions to create the DNS namespace. You need to make sure that your nodes (on which External DNS runs) have an IAM instance profile with the `AWSCloudMapFullAccess` managed policy attached, which provides the following permissions:
```
```json
{
"Version": "2012-10-17",
"Statement": [
@ -43,10 +44,11 @@ To use the AWS Cloud Map API, a user must have permissions to create the DNS nam
```
### IAM Permissions with ABAC
You can use Attribute-based access control(ABAC) for advanced deployments.
You can define AWS tags that are applied to services created by the controller. By doing so, you can have precise control over your IAM policy to limit the scope of the permissions to services managed by the controller, rather than having to grant full permissions on your entire AWS account.
To pass tags to service creation, use either CLI flags or environment variables:
You can use Attribute-based access control (ABAC) for advanced deployments.
You can define AWS tags that are applied to services created by the controller. By doing so, you can have precise control over your IAM policy to limit the scope of the permissions to services managed by the controller, rather than having to grant full permissions on your entire AWS account.
To pass tags to service creation, use either CLI flags or environment variables:
*cli:* `--aws-sd-create-tag=key1=value1 --aws-sd-create-tag=key2=value2`
@ -54,7 +56,7 @@ To pass tags to service creation, use either CLI flags or environment variables:
Using tags, your `servicediscovery` policy can become:
```
```json
{
"Version": "2012-10-17",
"Statement": [
@ -123,13 +125,13 @@ Using tags, your `servicediscovery` policy can become:
Create a DNS namespace using the AWS Cloud Map API:
```console
$ aws servicediscovery create-public-dns-namespace --name "external-dns-test.my-org.com"
aws servicediscovery create-public-dns-namespace --name "external-dns-test.my-org.com"
```
Verify that the namespace was created:
```console
$ aws servicediscovery list-namespaces
aws servicediscovery list-namespaces
```
## Deploy ExternalDNS
@ -284,7 +286,6 @@ spec:
After one minute, check that a corresponding DNS record for your service was created in your hosted zone. We recommend that you use the [Amazon Route53 console](https://console.aws.amazon.com/route53) for that purpose.
## Custom TTL
The default DNS record TTL (time to live) is 300 seconds. You can customize this value by setting the annotation `external-dns.alpha.kubernetes.io/ttl`.
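A sketch (the service name and TTL value are illustrative) of setting that annotation on a service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # TTL in seconds; 60 is an illustrative value
    external-dns.alpha.kubernetes.io/ttl: "60"
```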
@ -336,7 +337,7 @@ spec:
Delete all service objects before terminating the cluster so all load balancers get cleaned up correctly.
```console
$ kubectl delete service nginx
kubectl delete service nginx
```
Give ExternalDNS some time to clean up the DNS records for you. Then delete the remaining service and namespace.
@ -356,7 +357,7 @@ $ aws servicediscovery list-services
```
```console
$ aws servicediscovery delete-service --id srv-6dygt5ywvyzvi3an
aws servicediscovery delete-service --id srv-6dygt5ywvyzvi3an
```
```console
@ -374,5 +375,5 @@ $ aws servicediscovery list-namespaces
```
```console
$ aws servicediscovery delete-namespace --id ns-durf2oxu4gxcgo6z
aws servicediscovery delete-namespace --id ns-durf2oxu4gxcgo6z
```

View File

@ -54,7 +54,6 @@ export POLICY_ARN=$(aws iam list-policies \
You can use [eksctl](https://eksctl.io) to easily provision an [Amazon Elastic Kubernetes Service](https://aws.amazon.com/eks) ([EKS](https://aws.amazon.com/eks)) cluster that is suitable for this tutorial. See [Getting started with Amazon EKS eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html).
```bash
export EKS_CLUSTER_NAME="my-externaldns-cluster"
export EKS_CLUSTER_REGION="us-east-2"
@ -63,7 +62,9 @@ export KUBECONFIG="$HOME/.kube/${EKS_CLUSTER_NAME}-${EKS_CLUSTER_REGION}.yaml"
eksctl create cluster --name $EKS_CLUSTER_NAME --region $EKS_CLUSTER_REGION
```
Feel free to use other provisioning tools or an existing cluster. If [Terraform](https://www.terraform.io/) is used, [vpc](https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/) and [eks](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/) modules are recommended for standing up an EKS cluster. Amazon has a workshop called [Amazon EKS Terraform Workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/afee4679-89af-408b-8108-44f5b1065cc7/) that may be useful for this process.
Feel free to use other provisioning tools or an existing cluster.
If [Terraform](https://www.terraform.io/) is used, [vpc](https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/) and [eks](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/) modules are recommended for standing up an EKS cluster.
Amazon has a workshop called [Amazon EKS Terraform Workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/afee4679-89af-408b-8108-44f5b1065cc7/) that may be useful for this process.
## Permissions to modify DNS zone
@ -73,13 +74,17 @@ You will need to use the above policy (represented by the `POLICY_ARN` environme
* [Static credentials](#static-credentials)
* [IAM Roles for Service Accounts](#iam-roles-for-service-accounts)
For this tutorial, ExternalDNS will use the environment variable `EXTERNALDNS_NS` to represent the namespace, defaulted to `default`. Feel free to change this to something else, such `externaldns` or `kube-addons`. Make sure to edit the `subjects[0].namespace` for the `ClusterRoleBinding` resource when deploying ExternalDNS with RBAC enabled. See [When using clusters with RBAC enabled](#when-using-clusters-with-rbac-enabled) for more information.
For this tutorial, ExternalDNS will use the environment variable `EXTERNALDNS_NS` to represent the namespace, defaulted to `default`.
Feel free to change this to something else, such as `externaldns` or `kube-addons`.
Make sure to edit the `subjects[0].namespace` for the `ClusterRoleBinding` resource when deploying ExternalDNS with RBAC enabled.
See [When using clusters with RBAC enabled](#when-using-clusters-with-rbac-enabled) for more information.
Additionally, throughout this tutorial, the example domain of `example.com` is used. Change this to an appropriate domain under your control. See the [Set up a hosted zone](#set-up-a-hosted-zone) section.
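As a sketch, the environment assumed throughout could be prepared like this (the namespace name is your choice):

```bash
export EXTERNALDNS_NS="default"   # or e.g. "externaldns", "kube-addons"
kubectl create namespace ${EXTERNALDNS_NS} --dry-run=client -o yaml | kubectl apply -f -
```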
### Node IAM Role
In this method, you can attach a policy to the Node IAM Role. This will allow nodes in the Kubernetes cluster to access Route53 zones, which allows ExternalDNS to update DNS records. Given that this allows all containers to access Route53, not just ExternalDNS, running on the node with these privileges, this method is not recommended, and is only suitable for limited test environments.
In this method, you can attach a policy to the Node IAM Role. This will allow nodes in the Kubernetes cluster to access Route53 zones, which allows ExternalDNS to update DNS records.
Given that this allows all containers running on the node to access Route53, not just ExternalDNS, this method is not recommended, and is only suitable for limited test environments.
If you are using eksctl to provision a new cluster, you add the policy at creation time with:
@ -132,8 +137,8 @@ get_instance_id() {
INSTANCE_NAME=$1 # example: ip-192-168-74-34.us-east-2.compute.internal
# get list of nodes
# ip-192-168-74-34.us-east-2.compute.internal aws:///us-east-2a/i-xxxxxxxxxxxxxxxxx
# ip-192-168-86-105.us-east-2.compute.internal aws:///us-east-2a/i-xxxxxxxxxxxxxxxxx
# ip-192-168-74-34.us-east-2.compute.internal aws:///us-east-2a/i-xxxxxxxxxxxxxxxxx
# ip-192-168-86-105.us-east-2.compute.internal aws:///us-east-2a/i-xxxxxxxxxxxxxxxxx
NODES=$(kubectl get nodes \
--output jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}')
@ -195,7 +200,9 @@ If ExternalDNS is not yet deployed, follow the steps under [Deploy ExternalDNS](
In this method, the policy is attached to an IAM user, and the credentials secrets for the IAM user are then made available using a Kubernetes secret.
This method is not the preferred method as the secrets in the credential file could be copied and used by an unauthorized threat actor. However, if the Kubernetes cluster is not hosted on AWS, it may be the only method available. Given this situation, it is important to limit the associated privileges to just minimal required privileges, i.e. read-write access to Route53, and not used a credentials file that has extra privileges beyond what is required.
This method is not the preferred method as the secrets in the credential file could be copied and used by an unauthorized threat actor.
However, if the Kubernetes cluster is not hosted on AWS, it may be the only method available.
Given this situation, it is important to limit the associated privileges to just the minimal required privileges, i.e. read-write access to Route53, and not to use a credentials file that has extra privileges beyond what is required.
#### Create IAM user and attach the policy
@ -242,15 +249,19 @@ Follow the steps under [Deploy ExternalDNS](#deploy-externaldns) using either RB
### IAM Roles for Service Accounts
[IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) ([IAM roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)) allows cluster operators to map AWS IAM Roles to Kubernetes Service Accounts. This essentially allows only ExternalDNS pods to access Route53 without exposing any static credentials.
[IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) ([IAM roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)) allows cluster operators to map AWS IAM Roles to Kubernetes Service Accounts.
This essentially allows only ExternalDNS pods to access Route53 without exposing any static credentials.
This is the preferred method as it implements [PoLP](https://csrc.nist.gov/glossary/term/principle_of_least_privilege) ([Principle of Least Privilege](https://csrc.nist.gov/glossary/term/principle_of_least_privilege)).
**IMPORTANT**: This method requires using KSA (Kubernetes service account) and RBAC.
> [!IMPORTANT]
> This method requires using KSA (Kubernetes service account) and RBAC.
This method requires deploying with RBAC. See [When using clusters with RBAC enabled](#when-using-clusters-with-rbac-enabled) when ready to deploy ExternalDNS.
**NOTE**: Similar methods to IRSA on AWS are [kiam](https://github.com/uswitch/kiam), which is in maintenence mode, and has [instructions](https://github.com/uswitch/kiam/blob/HEAD/docs/IAM.md) for creating an IAM role, and also [kube2iam](https://github.com/jtblin/kube2iam). IRSA is the officially supported method for EKS clusters, and so for non-EKS clusters on AWS, these other tools could be an option.
> [!NOTE]
> Similar methods to IRSA on AWS are [kiam](https://github.com/uswitch/kiam), which is in maintenance mode and has [instructions](https://github.com/uswitch/kiam/blob/HEAD/docs/IAM.md) for creating an IAM role, and also [kube2iam](https://github.com/jtblin/kube2iam).
> IRSA is the officially supported method for EKS clusters, and so for non-EKS clusters on AWS, these other tools could be an option.
#### Verify OIDC is supported
@ -349,8 +360,8 @@ When annotation is added to service account, the ExternalDNS pod(s) scheduled wi
Follow the steps under [When using clusters with RBAC enabled](#when-using-clusters-with-rbac-enabled). Make sure to comment out the service account section if this has been created already.
If you deployed ExternalDNS before adding the service account annotation and the corresponding role, you will likely see error with `failed to list hosted zones: AccessDenied: User`. You can delete the current running ExternalDNS pod(s) after updating the annotation, so that new pods scheduled will have appropriate configuration to access Route53.
If you deployed ExternalDNS before adding the service account annotation and the corresponding role, you will likely see an error such as `failed to list hosted zones: AccessDenied: User`.
You can delete the current running ExternalDNS pod(s) after updating the annotation, so that new pods scheduled will have appropriate configuration to access Route53.
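One way to recycle the pods is a rollout restart (a sketch; the deployment name and namespace assume the defaults used in this tutorial):

```bash
kubectl rollout restart deployment external-dns --namespace ${EXTERNALDNS_NS:-default}
```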
## Set up a hosted zone
@ -376,7 +387,7 @@ aws route53 list-resource-record-sets --output text \
This should yield something similar to this:
```
```sh
ns-695.awsdns-22.net.
ns-1313.awsdns-36.org.
ns-350.awsdns-43.com.
@ -525,13 +536,17 @@ Annotations which are specific to AWS.
### alias
`external-dns.alpha.kubernetes.io/alias` if set to `true` on an ingress, it will create an ALIAS record when the target is an ALIAS as well. To make the target an alias, the ingress needs to be configured correctly as described in [the docs](./gke-nginx.md#with-a-separate-tcp-load-balancer). In particular, the argument `--publish-service=default/nginx-ingress-controller` has to be set on the `nginx-ingress-controller` container. If one uses the `nginx-ingress` Helm chart, this flag can be set with the `controller.publishService.enabled` configuration option.
If `external-dns.alpha.kubernetes.io/alias` is set to `true` on an ingress, it will create an ALIAS record when the target is an ALIAS as well.
To make the target an alias, the ingress needs to be configured correctly as described in [the docs](./gke-nginx.md#with-a-separate-tcp-load-balancer).
In particular, the argument `--publish-service=default/nginx-ingress-controller` has to be set on the `nginx-ingress-controller` container.
If one uses the `nginx-ingress` Helm chart, this flag can be set with the `controller.publishService.enabled` configuration option.
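A sketch of the annotation on an ingress (metadata excerpt only):

```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/alias: "true"
```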
### target-hosted-zone
`external-dns.alpha.kubernetes.io/aws-target-hosted-zone` can optionally be set to the ID of a Route53 hosted zone. This will force external-dns to use the specified hosted zone when creating an ALIAS target.
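For example (the zone ID is hypothetical):

```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/aws-target-hosted-zone: "Z0123456789ABCDEFGHIJ"
```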
### aws-zone-match-parent
`aws-zone-match-parent` allows supporting subdomains within the same zone by using their parent domain, e.g. `--domain-filter=x.example.com` would create a DNS entry for `x.example.com` (and subdomains thereof).
```yaml
@ -545,7 +560,6 @@ Annotations which are specific to AWS.
Create the following sample application to test that ExternalDNS works.
> For services ExternalDNS will look for the annotation `external-dns.alpha.kubernetes.io/hostname` on the service and use the corresponding value.
> If you want to give multiple names to a service, you can set `external-dns.alpha.kubernetes.io/hostname` to a comma-separated list of names.
For this verification phase, you can use the default namespace or another one for the nginx demo, for example:
@ -708,7 +722,9 @@ With the previous `deployment` and `service` objects deployed, we can add an `in
For this tutorial, we have two endpoints: the service with `LoadBalancer` type and an ingress. For practical purposes, if an ingress is used, the service type can be changed to `ClusterIP` as two endpoints are unnecessary in this scenario.
**IMPORTANT**: This requires that an ingress controller has been installed in your Kubernetes cluster. EKS does not come with an ingress controller by default. A popular ingress controller is [ingress-nginx](https://github.com/kubernetes/ingress-nginx/), which can be installed by a [helm chart](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx) or by [manifests](https://kubernetes.github.io/ingress-nginx/deploy/#aws).
> [!IMPORTANT]
> This requires that an ingress controller has been installed in your Kubernetes cluster.
> EKS does not come with an ingress controller by default. A popular ingress controller is [ingress-nginx](https://github.com/kubernetes/ingress-nginx/), which can be installed by a [helm chart](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx) or by [manifests](https://kubernetes.github.io/ingress-nginx/deploy/#aws).
Create an ingress resource manifest file named `ingress.yaml` with the contents below:
@ -747,13 +763,12 @@ kubectl get ingress --watch --namespace ${NGINXDEMO_NS:-"default"}
You should see something like this:
```
```sh
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx <none> server.example.com 80 47s
nginx <none> server.example.com ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com. 80 54s
```
For the ingress test, run through similar checks, but use the domain name used for the ingress:
```bash
@ -843,7 +858,8 @@ args:
- --txt-prefix={{ YOUR_PREFIX }}
```
* The first two changes are needed if you use Route53 in Govcloud, which only supports private zones. There are also no cross account IAM whatsoever between Govcloud and commercial AWS accounts. If services and ingresses need to make Route 53 entries to an public zone in a commercial account, you will have set env variables of `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` with a key and secret to the commercial account that has the sufficient rights.
* The first two changes are needed if you use Route53 in Govcloud, which only supports private zones. There is also no cross-account IAM whatsoever between Govcloud and commercial AWS accounts.
* If services and ingresses need to make Route 53 entries to a public zone in a commercial account, you will have to set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` with a key and secret for a commercial account that has sufficient rights.
```yaml
env:
@ -914,6 +930,7 @@ aws iam delete-policy --policy-arn $POLICY_ARN
Route53 has a [5 API requests per second per account hard quota](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html#limits-api-requests-route-53).
Running several fast-polling ExternalDNS instances in a given account can easily hit that limit. Some ways to reduce the request rate include:
* Reduce the polling loop's synchronization interval at the possible cost of slower change propagation (but see `--events` below to reduce the impact).
* `--interval=5m` (default `1m`)
* Enable a Cache to store the zone records list. It comes with a cost: slower propagation when the zone gets modified from other sources such as the AWS console, terraform, cloudformation or anything similar.
@ -946,7 +963,7 @@ Running several fast polling ExternalDNS instances in a given account can easily
A simple way to implement randomised startup is with an init container:
```
```yaml
...
spec:
initContainers:
@ -974,8 +991,12 @@ An effective starting point for EKS with an ingress controller might look like:
### Batch size options
After external-dns generates all changes, it will perform a task to group those changes into batches. Each change will be validated against batch-change-size limits. If at least one of those parameters out of range - the change will be moved to a separate batch. If the change can't fit into any batch - *it will be skipped.*<br>
After external-dns generates all changes, it will perform a task to group those changes into batches. Each change will be validated against the batch-change-size limits.
If at least one of those parameters is out of range, the change will be moved to a separate batch.
If the change can't fit into any batch, *it will be skipped.*
There are 3 options to control batch size for AWS provider:
* Maximum amount of changes added to one batch
* `--aws-batch-change-size` (default `1000`)
* Maximum size of changes in bytes added to one batch
@ -985,7 +1006,9 @@ There are 3 options to control batch size for AWS provider:
`aws-batch-change-size` can be very useful for throttling purposes and can be set to any value.
Default values for flags `aws-batch-change-size-bytes` and `aws-batch-change-size-values` are taken from [AWS documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html#limits-api-requests) for Route53 API. **You should not change those values until you really have to.** <br>
Default values for the flags `aws-batch-change-size-bytes` and `aws-batch-change-size-values` are taken from the [AWS documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html#limits-api-requests) for the Route53 API.
> [!WARNING]
> **You should not change those values until you really have to.**
Because those limits are in place, `aws-batch-change-size` can be set to any value: even if your batch size is `4000` records, your change will be split into separate batches due to the bytes/values size limits, and the apply request will finish without issues.
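As a sketch, the three flags side by side (only the `--aws-batch-change-size` default is documented above; the byte/value figures shown are assumptions based on the Route53 quotas):

```yaml
args:
- --aws-batch-change-size=1000
- --aws-batch-change-size-bytes=32000   # assumed value, per the Route53 request-size quota
- --aws-batch-change-size-values=1000   # assumed value, per the Route53 values-per-request quota
```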
## Using CRD source to manage DNS records in AWS

View File

@ -3,6 +3,7 @@
This tutorial describes how to set up ExternalDNS for managing records in Azure Private DNS.
It comprises the following steps:
1) Provision Azure Private DNS
2) Configure service principal for managing the zone
3) Deploy ExternalDNS
@ -15,6 +16,7 @@ Everything will be deployed on Kubernetes.
Therefore, please see the subsequent prerequisites.
## Prerequisites
- Azure Kubernetes Service is deployed and ready
- [Azure CLI 2.0](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli) and `kubectl` installed on the box to execute the subsequent steps
@ -25,8 +27,8 @@ not automatically create zones.
For this tutorial, we will create an Azure resource group named 'externaldns' that can easily be deleted later.
```
$ az group create -n externaldns -l westeurope
```sh
az group create -n externaldns -l westeurope
```
Substitute a more suitable location for the resource group if desired.
@ -34,7 +36,7 @@ Substitute a more suitable location for the resource group if desired.
A prerequisite for Azure Private DNS to resolve records is to define links with VNETs.
Thus, first create a VNET.
```
```sh
$ az network vnet create \
--name myvnet \
--resource-group externaldns \
@ -46,20 +48,21 @@ $ az network vnet create \
Next, create an Azure Private DNS zone for "example.com":
```
$ az network private-dns zone create -g externaldns -n example.com
```sh
az network private-dns zone create -g externaldns -n example.com
```
Substitute a domain you own for "example.com" if desired.
Finally, create the mentioned link with the VNET.
```
```sh
$ az network private-dns link vnet create -g externaldns -n mylink \
-z example.com -v myvnet --registration-enabled false
```
## Configure service principal for managing the zone
ExternalDNS needs permissions to make changes in Azure Private DNS.
These permissions are roles assigned to the service principal used by ExternalDNS.
@ -67,7 +70,8 @@ A service principal with a minimum access level of `Private DNS Zone Contributor
More powerful role assignments like `Owner`, or assignments at the subscription level, work too.
Start off by **creating the service principal** without role-assignments.
```
```sh
$ az ad sp create-for-rbac --skip-assignment -n http://externaldns-sp
{
"appId": "appId GUID", <-- aadClientId value
@ -76,12 +80,13 @@ $ az ad sp create-for-rbac --skip-assignment -n http://externaldns-sp
"tenant": "AzureAD Tenant Id" <-- tenantId value
}
```
> Note: Alternatively, you can issue `az account show --query "tenantId"` to retrieve the ID of your AAD tenant too.
Next, assign the roles to the service principal.
But first **retrieve the IDs** of the objects to assign roles on.
```
```sh
# find out the resource ids of the resource group where the dns zone is deployed, and the dns zone itself
$ az group show --name externaldns --query id -o tsv
/subscriptions/id/resourceGroups/externaldns
@ -89,8 +94,10 @@ $ az group show --name externaldns --query id -o tsv
$ az network private-dns zone show --name example.com -g externaldns --query id -o tsv
/subscriptions/.../resourceGroups/externaldns/providers/Microsoft.Network/privateDnsZones/example.com
```
Now, **create role assignments**.
```
```sh
# 1. as a reader to the resource group
$ az role assignment create --role "Reader" --assignee <appId GUID> --scope <resource group resource id>
@ -103,6 +110,7 @@ $ az role assignment create --role "Private DNS Zone Contributor" --assignee <ap
When the list of zones managed by ExternalDNS doesn't change frequently, one can set `--azure-zones-cache-duration` (zones list cache time-to-live). The zones list cache is disabled by default, with a value of 0s.
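A sketch of enabling the cache (the `10m` duration is an illustrative value):

```yaml
args:
- --provider=azure-private-dns
- --azure-zones-cache-duration=10m
```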
## Deploy ExternalDNS
Configure `kubectl` to be able to communicate and authenticate with your cluster.
By default, this is done through the file `~/.kube/config`.
@ -116,6 +124,7 @@ Then apply one of the following manifests depending on whether you use RBAC or n
The credentials of the service principal are provided to ExternalDNS as environment variables.
### Manifest (for clusters without RBAC enabled)
```yaml
apiVersion: apps/v1
kind: Deployment
@ -153,6 +162,7 @@ spec:
```
### Manifest (for clusters with RBAC enabled, cluster access)
```yaml
apiVersion: v1
kind: ServiceAccount
@ -224,6 +234,7 @@ spec:
```
### Manifest (for clusters with RBAC enabled, namespace access)
This configuration is the same as above, except it only requires privileges for the current namespace, not for the whole cluster.
However, access to [nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) requires cluster access, so when using this manifest,
services with type `NodePort` will be skipped!
@ -296,8 +307,8 @@ spec:
Create the deployment for ExternalDNS:
```
$ kubectl create -f externaldns.yaml
```sh
kubectl create -f externaldns.yaml
```
## Create an nginx deployment
@ -350,7 +361,10 @@ spec:
type: LoadBalancer
```
In the service we used multiple annptations. The annotation `service.beta.kubernetes.io/azure-load-balancer-internal` is used to create an internal load balancer. The annotation `external-dns.alpha.kubernetes.io/hostname` is used to create a DNS record for the load balancer that will point to the internal IP address in the VNET allocated by the internal load balancer. The annotation `external-dns.alpha.kubernetes.io/internal-hostname` is used to create a private DNS record for the load balancer that will point to the cluster IP.
In the service we used multiple annotations.
The annotation `service.beta.kubernetes.io/azure-load-balancer-internal` is used to create an internal load balancer.
The annotation `external-dns.alpha.kubernetes.io/hostname` is used to create a DNS record for the load balancer that will point to the internal IP address in the VNET allocated by the internal load balancer.
The annotation `external-dns.alpha.kubernetes.io/internal-hostname` is used to create a private DNS record for the load balancer that will point to the cluster IP.
## Install NGINX Ingress Controller (Optional)
@ -358,7 +372,7 @@ Helm is used to deploy the ingress controller.
We employ the popular chart [ingress-nginx](https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx).
```
```sh
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install [RELEASE_NAME] ingress-nginx/ingress-nginx
@ -373,13 +387,13 @@ In the subsequent parameter we will make use of this. If you don't want to work
Verify the correct propagation of the load balancer's IP by listing the ingresses.
```
$ kubectl get ingress
```sh
kubectl get ingress
```
The address column should contain the IP for each ingress. ExternalDNS will pick up exactly this piece of information.
```
```sh
NAME HOSTS ADDRESS PORTS AGE
nginx1 sample1.aks.com 52.167.195.110 80 6d22h
nginx2 sample2.aks.com 52.167.195.110 80 6d21h
@ -387,7 +401,7 @@ nginx2 sample2.aks.com 52.167.195.110 80 6d21h
If you do not want to deploy the ingress controller with Helm, make sure to pass the following command-line flags to it through the mechanism of your choice:
```
```sh
flags:
--publish-service=<namespace of ingress-controller >/<svcname of ingress-controller>
--update-status=true (default-value)
@ -444,8 +458,8 @@ When those hostnames are removed or renamed the corresponding DNS records are al
Create the deployment, service and ingress object:
```
$ kubectl create -f nginx.yaml
```sh
kubectl create -f nginx.yaml
```
Since your external IP would have already been assigned to the nginx-ingress service, the DNS records pointing to the IP of the nginx-ingress service should be created within a minute.
@ -454,8 +468,8 @@ Since your external IP would have already been assigned to the nginx-ingress ser
Run the following command to view the A records for your Azure Private DNS zone:
```
$ az network private-dns record-set a list -g externaldns -z example.com
```sh
az network private-dns record-set a list -g externaldns -z example.com
```
Substitute the zone for the one created above if a different domain was used.

View File

@ -15,7 +15,7 @@ The Azure provider for ExternalDNS will find suitable zones for domains it manag
For this tutorial, we will create an Azure resource group named `MyDnsResourceGroup` that can easily be deleted later:
```bash
$ az group create --name "MyDnsResourceGroup" --location "eastus"
az group create --name "MyDnsResourceGroup" --location "eastus"
```
Substitute a more suitable location for the resource group if desired.
@ -23,7 +23,7 @@ Substitute a more suitable location for the resource group if desired.
Next, create an Azure DNS zone for `example.com`:
```bash
$ az network dns zone create --resource-group "MyDnsResourceGroup" --name "example.com"
az network dns zone create --resource-group "MyDnsResourceGroup" --name "example.com"
```
Substitute a domain you own for `example.com` if desired.
@ -67,10 +67,10 @@ The Azure DNS provider expects, by default, that the configuration file is at `/
ExternalDNS needs permissions to make changes to the Azure DNS zone. There are four ways to configure the access needed:
- [Service Principal](#service-principal)
- [Managed Identity Using AKS Kubelet Identity](#managed-identity-using-aks-kubelet-identity)
- [Managed Identity Using AAD Pod Identities](#managed-identity-using-aad-pod-identities)
- [Managed Identity Using Workload Identity](#managed-identity-using-workload-identity)
* [Service Principal](#service-principal)
* [Managed Identity Using AKS Kubelet Identity](#managed-identity-using-aks-kubelet-identity)
* [Managed Identity Using AAD Pod Identities](#managed-identity-using-aad-pod-identities)
* [Managed Identity Using Workload Identity](#managed-identity-using-workload-identity)
### Service Principal
@ -78,7 +78,8 @@ These permissions are defined in a Service Principal that should be made availab
#### Creating a service principal
A Service Principal with a minimum access level of `DNS Zone Contributor` or `Contributor` to the DNS zone(s) and `Reader` to the resource group containing the Azure DNS zone(s) is necessary for ExternalDNS to be able to edit DNS records. However, other more permissive access levels will work too (e.g. `Contributor` to the resource group or the whole subscription).
A Service Principal with a minimum access level of `DNS Zone Contributor` or `Contributor` to the DNS zone(s) and `Reader` to the resource group containing the Azure DNS zone(s) is necessary for ExternalDNS to be able to edit DNS records.
However, other more permissive access levels will work too (e.g. `Contributor` to the resource group or the whole subscription).
This is an Azure CLI example of how to query the Azure API for the information required for the Resource Group and DNS zone you would have already created in previous steps (requires `azure-cli` and `jq`)
@ -128,12 +129,14 @@ EOF
Use this file to create a Kubernetes secret:
```bash
$ kubectl create secret generic azure-config-file --namespace "default" --from-file /local/path/to/azure.json
kubectl create secret generic azure-config-file --namespace "default" --from-file /local/path/to/azure.json
```
### Managed identity using AKS Kubelet identity
The [managed identity](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) that is assigned to the underlying node pool in the AKS cluster can be given permissions to access Azure DNS. Managed identities are essentially a service principal whose lifecycle is managed, such as deleting the AKS cluster will also delete the service principals associated with the AKS cluster. The managed identity assigned Kubernetes node pool, or specifically the [VMSS](https://docs.microsoft.com/azure/virtual-machine-scale-sets/overview), is called the Kubelet identity.
The [managed identity](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) that is assigned to the underlying node pool in the AKS cluster can be given permissions to access Azure DNS.
Managed identities are essentially a service principal whose lifecycle is managed, such as deleting the AKS cluster will also delete the service principals associated with the AKS cluster.
The managed identity assigned to the Kubernetes node pool, or specifically to the [VMSS](https://docs.microsoft.com/azure/virtual-machine-scale-sets/overview), is called the Kubelet identity.
The managed identities were previously called MSI (Managed Service Identity) and are enabled by default when creating an AKS cluster.
@ -196,23 +199,24 @@ EOF
Use the `azure.json` file to create a Kubernetes secret:
```bash
$ kubectl create secret generic azure-config-file --namespace "default" --from-file /local/path/to/azure.json
kubectl create secret generic azure-config-file --namespace "default" --from-file /local/path/to/azure.json
```
### Managed identity using AAD Pod Identities
For this process, we will create a [managed identity](https://docs.microsoft.com//azure/active-directory/managed-identities-azure-resources/overview) that will be explicitly used by the ExternalDNS container. This process is similar to Kubelet identity except that this managed identity is not associated with the Kubernetes node pool, but rather associated with explicit ExternalDNS containers.
For this process, we will create a [managed identity](https://docs.microsoft.com//azure/active-directory/managed-identities-azure-resources/overview) that will be explicitly used by the ExternalDNS container.
This process is similar to Kubelet identity except that this managed identity is not associated with the Kubernetes node pool, but rather associated with explicit ExternalDNS containers.
#### Enable the AAD Pod Identities feature
For this solution, the [AAD Pod Identities](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity) preview feature can be enabled. The commands below should do the trick to enable this feature:
```bash
$ az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService
$ az feature register --name AutoUpgradePreview --namespace Microsoft.ContainerService
$ az extension add --name aks-preview
$ az extension update --name aks-preview
$ az provider register --namespace Microsoft.ContainerService
az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService
az feature register --name AutoUpgradePreview --namespace Microsoft.ContainerService
az extension add --name aks-preview
az extension update --name aks-preview
az provider register --namespace Microsoft.ContainerService
```
#### Deploy the AAD Pod Identities service
@ -220,10 +224,10 @@ $ az provider register --namespace Microsoft.ContainerService
Once enabled, you can update your cluster and install needed services for the [AAD Pod Identities](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity) feature.
```bash
$ AZURE_AKS_RESOURCE_GROUP="my-aks-cluster-group" # name of resource group where aks cluster was created
$ AZURE_AKS_CLUSTER_NAME="my-aks-cluster" # name of aks cluster previously created
AZURE_AKS_RESOURCE_GROUP="my-aks-cluster-group" # name of resource group where aks cluster was created
AZURE_AKS_CLUSTER_NAME="my-aks-cluster" # name of aks cluster previously created
$ az aks update --resource-group ${AZURE_AKS_RESOURCE_GROUP} --name ${AZURE_AKS_CLUSTER_NAME} --enable-pod-identity
az aks update --resource-group ${AZURE_AKS_RESOURCE_GROUP} --name ${AZURE_AKS_CLUSTER_NAME} --enable-pod-identity
```
Note that if you use the default network plugin `kubenet`, you need to add the command line option `--enable-pod-identity-with-kubenet` to the above command.
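For example, reusing the variables from above (a sketch; only the extra flag differs from the command shown earlier):

```bash
az aks update --resource-group ${AZURE_AKS_RESOURCE_GROUP} --name ${AZURE_AKS_CLUSTER_NAME} \
  --enable-pod-identity --enable-pod-identity-with-kubenet
```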
@ -278,12 +282,13 @@ EOF
Use the `azure.json` file to create a Kubernetes secret:
```bash
$ kubectl create secret generic azure-config-file --namespace "default" --from-file /local/path/to/azure.json
kubectl create secret generic azure-config-file --namespace "default" --from-file /local/path/to/azure.json
```
#### Creating an Azure identity binding
A binding between the managed identity and the ExternalDNS pods needs to be setup by creating `AzureIdentity` and `AzureIdentityBinding` resources. This will allow appropriately labeled ExternalDNS pods to authenticate using the managed identity. When AAD Pod Identity feature is enabled from previous steps above, the `az aks pod-identity add` can be used to create these resources:
A binding between the managed identity and the ExternalDNS pods needs to be set up by creating `AzureIdentity` and `AzureIdentityBinding` resources.
This will allow appropriately labeled ExternalDNS pods to authenticate using the managed identity. When the AAD Pod Identity feature is enabled from the previous steps above, `az aks pod-identity add` can be used to create these resources:
```bash
$ IDENTITY_RESOURCE_ID=$(az identity show --resource-group ${IDENTITY_RESOURCE_GROUP} \
@ -333,17 +338,18 @@ kubectl patch deployment external-dns --namespace "default" --patch \
### Managed identity using Workload Identity
For this process, we will create a [managed identity](https://docs.microsoft.com//azure/active-directory/managed-identities-azure-resources/overview) that will be explicitly used by the ExternalDNS container. This process is somewhat similar to Pod Identity except that this managed identity is associated with a kubernetes service account.
For this process, we will create a [managed identity](https://docs.microsoft.com//azure/active-directory/managed-identities-azure-resources/overview) that will be explicitly used by the ExternalDNS container.
This process is somewhat similar to Pod Identity except that this managed identity is associated with a Kubernetes service account.
#### Deploy OIDC issuer and Workload Identity services
Update your cluster to install [OIDC Issuer](https://learn.microsoft.com/en-us/azure/aks/use-oidc-issuer) and [Workload Identity](https://learn.microsoft.com/en-us/azure/aks/workload-identity-deploy-cluster):
```bash
$ AZURE_AKS_RESOURCE_GROUP="my-aks-cluster-group" # name of resource group where aks cluster was created
$ AZURE_AKS_CLUSTER_NAME="my-aks-cluster" # name of aks cluster previously created
AZURE_AKS_RESOURCE_GROUP="my-aks-cluster-group" # name of resource group where aks cluster was created
AZURE_AKS_CLUSTER_NAME="my-aks-cluster" # name of aks cluster previously created
$ az aks update --resource-group ${AZURE_AKS_RESOURCE_GROUP} --name ${AZURE_AKS_CLUSTER_NAME} --enable-oidc-issuer --enable-workload-identity
az aks update --resource-group ${AZURE_AKS_RESOURCE_GROUP} --name ${AZURE_AKS_CLUSTER_NAME} --enable-oidc-issuer --enable-workload-identity
```
#### Create a managed identity
@ -385,9 +391,9 @@ $ az role assignment create --role "Reader" \
A binding between the managed identity and the ExternalDNS service account needs to be set up by creating a federated identity resource:
```bash
$ OIDC_ISSUER_URL="$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv)"
OIDC_ISSUER_URL="$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv)"
$ az identity federated-credential create --name ${IDENTITY_NAME} --identity-name ${IDENTITY_NAME} --resource-group $AZURE_AKS_RESOURCE_GROUP} --issuer "$OIDC_ISSUER_URL" --subject "system:serviceaccount:default:external-dns"
az identity federated-credential create --name ${IDENTITY_NAME} --identity-name ${IDENTITY_NAME} --resource-group $AZURE_AKS_RESOURCE_GROUP} --issuer "$OIDC_ISSUER_URL" --subject "system:serviceaccount:default:external-dns"
```
NOTE: make sure the federated credential refers to the correct namespace and service account (`system:serviceaccount:<NAMESPACE>:<SERVICE_ACCOUNT>`)
@ -457,17 +463,19 @@ cat <<-EOF > /local/path/to/azure.json
}
EOF
```
NOTE: it's also possible to specify (or override) the client ID specified in the next section through the `aadClientId` field in this `azure.json` file.
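A minimal sketch of such a file (all values are placeholders; apart from `aadClientId`, the fields are assumed to match the file created above):

```bash
cat <<-EOF > /local/path/to/azure.json
{
  "subscriptionId": "<SUBSCRIPTION_ID>",
  "resourceGroup": "MyDnsResourceGroup",
  "tenantId": "<TENANT_ID>",
  "useWorkloadIdentityExtension": true,
  "aadClientId": "<IDENTITY_CLIENT_ID>"
}
EOF
```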
Use the `azure.json` file to create a Kubernetes secret:
```bash
$ kubectl create secret generic azure-config-file --namespace "default" --from-file /local/path/to/azure.json
kubectl create secret generic azure-config-file --namespace "default" --from-file /local/path/to/azure.json
```
##### Update labels and annotations on ExternalDNS service account
To instruct Workload Identity webhook to inject a projected token into the ExternalDNS pod, the pod needs to have a label `azure.workload.identity/use: "true"` (before Workload Identity 1.0.0, this label was supposed to be set on the service account instead). Also, the service account needs to have an annotation `azure.workload.identity/client-id: <IDENTITY_CLIENT_ID>`:
To instruct the Workload Identity webhook to inject a projected token into the ExternalDNS pod, the pod needs to have the label `azure.workload.identity/use: "true"` (before Workload Identity 1.0.0, this label was supposed to be set on the service account instead).
Also, the service account needs to have the annotation `azure.workload.identity/client-id: <IDENTITY_CLIENT_ID>`.
To patch the existing serviceaccount and deployment, use a command like the following:
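A minimal sketch of that patch (assuming the tutorial's `default` namespace with a serviceaccount and deployment both named `external-dns`):

```bash
# Annotate the service account with the managed identity's client ID
kubectl patch serviceaccount external-dns --namespace "default" --patch \
 '{"metadata": {"annotations": {"azure.workload.identity/client-id": "<IDENTITY_CLIENT_ID>"}}}'

# Label the pod template so the Workload Identity webhook injects the projected token
kubectl patch deployment external-dns --namespace "default" --patch \
 '{"spec": {"template": {"metadata": {"labels": {"azure.workload.identity/use": "true"}}}}}'
```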
@ -488,11 +496,12 @@ When the ExternalDNS managed zones list doesn't change frequently, one can set `
## Ingress used with ExternalDNS
This deployment assumes that you will be using nginx-ingress. When using nginx-ingress do not deploy it as a Daemon Set. This causes nginx-ingress to write the Cluster IP of the backend pods in the ingress status.loadbalancer.ip property which then has external-dns write the Cluster IP(s) in DNS vs. the nginx-ingress service external IP.
This deployment assumes that you will be using nginx-ingress. When using nginx-ingress, do not deploy it as a DaemonSet.
Doing so causes nginx-ingress to write the Cluster IP of the backend pods in the ingress `status.loadbalancer.ip` property, which then has external-dns write the Cluster IP(s) in DNS instead of the nginx-ingress service's external IP.
Ensure that your nginx-ingress deployment has the following arg added to it:
```
```sh
- --publish-service=namespace/nginx-ingress-controller-svcname
```
@ -505,6 +514,7 @@ Connect your `kubectl` client to the cluster you want to test ExternalDNS with.
The deployment assumes that ExternalDNS will be installed into the `default` namespace. If this namespace is different, the `ClusterRoleBinding` will need to be updated to reflect the desired alternative namespace, such as `external-dns`, `kube-addons`, etc.
### Manifest (for clusters without RBAC enabled)
```yaml
apiVersion: apps/v1
kind: Deployment
@ -610,6 +620,7 @@ spec:
```
### Manifest (for clusters with RBAC enabled, namespace access)
This configuration is the same as above, except it only requires privileges for the current namespace, not for the whole cluster.
However, access to [nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) requires cluster access, so when using this manifest,
services with type `NodePort` will be skipped!
@ -682,7 +693,7 @@ spec:
Create the deployment for ExternalDNS:
```bash
$ kubectl create --namespace "default" --filename externaldns.yaml
kubectl create --namespace "default" --filename externaldns.yaml
```
## Ingress Option: Expose an nginx service with an ingress
@ -745,15 +756,15 @@ spec:
When you use ExternalDNS with Ingress resources, it automatically creates DNS records based on the hostnames listed in those Ingress objects.
Those hostnames must match the filters that you defined (if any):
- By default, `--domain-filter` filters Azure DNS zone.
- If you use `--domain-filter` together with `--zone-name-filter`, the behavior changes: `--domain-filter` then filters Ingress domains, not the Azure DNS zone name.
* By default, `--domain-filter` filters Azure DNS zone.
* If you use `--domain-filter` together with `--zone-name-filter`, the behavior changes: `--domain-filter` then filters Ingress domains, not the Azure DNS zone name.
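For example, a sketch of the combined flags (hostnames are illustrative):

```sh
- --zone-name-filter=example.com    # selects the Azure DNS zone by name
- --domain-filter=app.example.com   # with --zone-name-filter set, this filters Ingress domains
```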
When those hostnames are removed or renamed, the corresponding DNS records are also altered.
Create the deployment, service and ingress object:
```bash
$ kubectl create --namespace "default" --filename nginx.yaml
kubectl create --namespace "default" --filename nginx.yaml
```
Since your external IP would have already been assigned to the nginx-ingress service, the DNS records pointing to the IP of the nginx-ingress service should be created within a minute.
@ -806,7 +817,7 @@ The annotation `external-dns.alpha.kubernetes.io/hostname` is used to specify th
Run the following command to view the A records for your Azure DNS zone:
```bash
$ az network dns record-set a list --resource-group "MyDnsResourceGroup" --zone-name example.com
az network dns record-set a list --resource-group "MyDnsResourceGroup" --zone-name example.com
```
Substitute the zone for the one created above if a different domain was used.
@ -819,7 +830,7 @@ Now that we have verified that ExternalDNS will automatically manage Azure DNS r
resource group:
```bash
$ az group delete --name "MyDnsResourceGroup"
az group delete --name "MyDnsResourceGroup"
```
## More tutorials
View File
@ -161,7 +161,7 @@ ExternalDNS uses this annotation to determine what services should be registered
Create the deployment and service:
```console
$ kubectl create -f nginx.yaml
kubectl create -f nginx.yaml
```
Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.
@ -180,7 +180,7 @@ This should show the external IP address of the service as the A record for your
Now that we have verified that ExternalDNS will automatically manage Civo DNS records, we can delete the tutorial's example:
```
$ kubectl delete service -f nginx.yaml
$ kubectl delete service -f externaldns.yaml
```sh
kubectl delete -f nginx.yaml
kubectl delete -f externaldns.yaml
```
View File
@ -14,9 +14,8 @@ We highly recommend to read this tutorial if you haven't used Cloudflare before:
Snippet from [Cloudflare - Getting Started](https://api.cloudflare.com/#getting-started-endpoints):
>Cloudflare's API exposes the entire Cloudflare infrastructure via a standardized programmatic interface. Using Cloudflare's API, you can do just about anything you can do on cloudflare.com via the customer dashboard.
>The Cloudflare API is a RESTful API based on HTTPS requests and JSON responses. If you are registered with Cloudflare, you can obtain your API key from the bottom of the "My Account" page, found here: [Go to My account](https://dash.cloudflare.com/profile).
> Cloudflare's API exposes the entire Cloudflare infrastructure via a standardized programmatic interface. Using Cloudflare's API, you can do just about anything you can do on cloudflare.com via the customer dashboard.
> The Cloudflare API is a RESTful API based on HTTPS requests and JSON responses. If you are registered with Cloudflare, you can obtain your API key from the bottom of the "My Account" page, found here: [Go to My account](https://dash.cloudflare.com/profile).
API Token will be preferred for authentication if `CF_API_TOKEN` environment variable is set.
Otherwise `CF_API_KEY` and `CF_API_EMAIL` should be set to run ExternalDNS with Cloudflare.
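For example, when running ExternalDNS outside of a manifest (values are placeholders):

```sh
export CF_API_TOKEN=YOUR_CLOUDFLARE_API_TOKEN
# or, alternatively:
export CF_API_KEY=YOUR_CLOUDFLARE_API_KEY
export CF_API_EMAIL=YOUR_CLOUDFLARE_EMAIL
```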
@ -31,7 +30,8 @@ If you would like to further restrict the API permissions to a specific zone (or
## Throttling
Cloudflare API has a [global rate limit of 1,200 requests per five minutes](https://developers.cloudflare.com/fundamentals/api/reference/limits/). Running several fast polling ExternalDNS instances in a given account can easily hit that limit. The AWS Provider [docs](./aws.md#throttling) has some recommendations that can be followed here too, but in particular, consider passing `--cloudflare-dns-records-per-page` with a high value (maximum is 5,000).
Cloudflare API has a [global rate limit of 1,200 requests per five minutes](https://developers.cloudflare.com/fundamentals/api/reference/limits/). Running several fast polling ExternalDNS instances in a given account can easily hit that limit.
The AWS Provider [docs](./aws.md#throttling) have some recommendations that can be followed here too, but in particular, consider passing `--cloudflare-dns-records-per-page` with a high value (maximum is 5,000).
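For example, as a container argument, with the value at the documented maximum:

```sh
- --cloudflare-dns-records-per-page=5000
```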
## Deploy ExternalDNS
@ -86,7 +86,6 @@ env:
key: apiKey
```
Finally, install the ExternalDNS chart with Helm using the configuration specified in your values.yaml file:
```shell
@ -273,7 +272,7 @@ will cause ExternalDNS to remove the corresponding DNS records.
Create the deployment and service:
```shell
$ kubectl create -f nginx.yaml
kubectl create -f nginx.yaml
```
Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.
@ -294,8 +293,8 @@ This should show the external IP address of the service as the A record for your
Now that we have verified that ExternalDNS will automatically manage Cloudflare DNS records, we can delete the tutorial's example:
```shell
$ kubectl delete -f nginx.yaml
$ kubectl delete -f externaldns.yaml
kubectl delete -f nginx.yaml
kubectl delete -f externaldns.yaml
```
## Setting cloudflare-proxied on a per-ingress basis
@ -310,4 +309,4 @@ If not set the value will default to `global`.
## Using CRD source to manage DNS records in Cloudflare
Please refer to the [CRD source documentation](../sources/crd.md#example) for more information.
Please refer to the [CRD source documentation](../sources/crd.md#example) for more information.
View File
@ -3,8 +3,9 @@
This tutorial describes how to configure External DNS to use the Contour `HTTPProxy` source.
Using the `HTTPProxy` resource with External DNS requires Contour version 1.5 or greater.
### Example manifests for External DNS
#### Without RBAC
## Example manifests for External DNS
### Without RBAC
```yaml
apiVersion: apps/v1
@ -37,7 +38,8 @@ spec:
- --txt-owner-id=my-identifier
```
#### With RBAC
### With RBAC
```yaml
apiVersion: v1
kind: ServiceAccount
@ -53,7 +55,7 @@ rules:
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
@ -107,10 +109,12 @@ spec:
```
### Verify External DNS works
The following instructions are based on the
The following instructions are based on the
[Contour example workload](https://github.com/projectcontour/contour/tree/master/examples/example-workload/httpproxy).
#### Install a sample service
### Install a sample service
```bash
$ kubectl apply -f - <<EOF
apiVersion: apps/v1
@ -153,7 +157,7 @@ EOF
Then create an `HTTPProxy`:
```
```sh
$ kubectl apply -f - <<EOF
apiVersion: projectcontour.io/v1
kind: HTTPProxy
@ -174,7 +178,8 @@ spec:
EOF
```
#### Access the sample service using `curl`
### Access the sample service using `curl`
```bash
$ curl -i http://kuard.example.com/healthy
HTTP/1.1 200 OK
View File
@ -31,7 +31,8 @@ helm init
### Installing etcd
[etcd operator](https://github.com/coreos/etcd-operator) is used to manage etcd clusters.
```
```sh
helm install stable/etcd-operator --name my-etcd-op
```
@ -45,7 +46,7 @@ kubectl apply -f https://raw.githubusercontent.com/coreos/etcd-operator/HEAD/exa
To make CoreDNS work with the etcd backend, the chart's values.yaml should be updated with the corresponding configuration.
```
```sh
wget https://raw.githubusercontent.com/helm/charts/HEAD/stable/coredns/values.yaml
```
View File
@ -7,6 +7,7 @@ This tutorial describes how to setup ExternalDNS for usage within a Kubernetes c
We are going to use the OpenStack CLI (the `openstack` utility), which is an umbrella application for most OpenStack clients, including `designate`.
All OpenStack CLIs require authentication parameters to be provided. These parameters include:
* URL of the OpenStack identity service (`keystone`), which is responsible for user authentication and also serves as a registry for other
OpenStack services. Designate endpoints must be registered in `keystone` in order for ExternalDNS and the OpenStack CLI to be able to find them.
* OpenStack region name
@ -29,8 +30,9 @@ way to get yourself an OpenStack installation to play with is to use [DevStack](
## Creating DNS zones
All domain names that ExternalDNS is going to create must belong to one of the DNS zones created in advance. Here is an example of how to create the `example.com` DNS zone:
```console
$ openstack zone create --email dnsmaster@example.com example.com.
openstack zone create --email dnsmaster@example.com example.com.
```
It is important to manually create all the zones that are going to be used for Kubernetes entities (ExternalDNS sources) before starting ExternalDNS.
@ -99,7 +101,7 @@ rules:
resources: ["pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
@ -159,15 +161,17 @@ spec:
Create the deployment for ExternalDNS:
```console
$ kubectl create -f externaldns.yaml
kubectl create -f externaldns.yaml
```
### Optional: Trust self-signed certificates
If your OpenStack installation is configured with a self-signed certificate, you can extend the `pod.spec` with the following secret mount:
```yaml
volumeMounts:
- mountPath: /etc/ssl/certs/
name: cacerts
name: cacerts
volumes:
- name: cacerts
secret:
@ -177,7 +181,6 @@ If your OpenStack-Installation is configured with a self-sign certificate, you c
The content of the secret `self-sign-certs` must be the certificate/chain in PEM format.
## Deploying an Nginx Service
Create a service file called 'nginx.yaml' with the following contents:
@ -225,10 +228,9 @@ ExternalDNS uses this annotation to determine what services should be registered
Create the deployment and service:
```console
$ kubectl create -f nginx.yaml
kubectl create -f nginx.yaml
```
Once the service has an external IP assigned, ExternalDNS will notice the new service IP address and notify Designate,
which in turn synchronizes DNS records with the underlying DNS server backend.
@ -237,13 +239,13 @@ which in turn synchronize DNS records with underlying DNS server backend.
To verify that DNS record was indeed created, you can use the following command:
```console
$ openstack recordset list example.com.
openstack recordset list example.com.
```
There should be a record for my-app.example.com with `ACTIVE` status. And of course, the ultimate way to verify is to issue a DNS query:
```console
$ dig my-app.example.com @controller
dig my-app.example.com @controller
```
## Cleanup
@ -251,6 +253,6 @@ $ dig my-app.example.com @controller
Now that we have verified that ExternalDNS created all DNS records, we can delete the tutorial's example:
```console
$ kubectl delete service -f nginx.yaml
$ kubectl delete service -f externaldns.yaml
kubectl delete -f nginx.yaml
kubectl delete -f externaldns.yaml
```
View File
@ -14,7 +14,9 @@ Create a new DNS zone where you want to create your records in. Let's use `examp
## Creating DigitalOcean Credentials
Generate a new personal token by going to [the API settings](https://cloud.digitalocean.com/settings/api/tokens) or follow [How To Use the DigitalOcean API v2](https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-api-v2) if you need more information. Give the token a name and choose read and write access. The token needs to be passed to ExternalDNS so make a note of it for later use.
Generate a new personal token by going to [the API settings](https://cloud.digitalocean.com/settings/api/tokens) or follow [How To Use the DigitalOcean API v2](https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-api-v2) if you need more information.
Give the token a name and choose read and write access.
The token needs to be passed to ExternalDNS so make a note of it for later use.
The environment variable `DO_TOKEN` will be needed to run ExternalDNS with DigitalOcean.
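If you prefer not to hardcode the token in manifests, one option is to store it in a Kubernetes secret first (a sketch; the secret name `digitalocean-api-key` is illustrative):

```sh
kubectl create secret generic digitalocean-api-key --from-literal=DO_TOKEN=YOUR_DO_TOKEN
```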
@ -37,7 +39,7 @@ Then apply one of the following manifests file to deploy ExternalDNS.
Create a values.yaml file to configure ExternalDNS to use DigitalOcean as the DNS provider. This file should include the necessary environment variables:
```shell
provider:
provider:
name: digitalocean
env:
- name: DO_TOKEN
@ -82,6 +84,7 @@ spec:
```
### Manifest (for clusters with RBAC enabled)
```yaml
apiVersion: v1
kind: ServiceAccount
@ -97,7 +100,7 @@ rules:
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
@ -148,7 +151,6 @@ spec:
key: DO_TOKEN
```
## Deploying an Nginx Service
Create a service file called 'nginx.yaml' with the following contents:
@ -197,7 +199,7 @@ ExternalDNS uses this annotation to determine what services should be registered
Create the deployment and service:
```console
$ kubectl create -f nginx.yaml
kubectl create -f nginx.yaml
```
Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.
@ -216,9 +218,9 @@ This should show the external IP address of the service as the A record for your
Now that we have verified that ExternalDNS will automatically manage DigitalOcean DNS records, we can delete the tutorial's example:
```
$ kubectl delete service -f nginx.yaml
$ kubectl delete service -f externaldns.yaml
```sh
kubectl delete -f nginx.yaml
kubectl delete -f externaldns.yaml
```
## Advanced Usage
@ -227,6 +229,6 @@ $ kubectl delete service -f externaldns.yaml
If you have a large number of domains and/or records within a domain, you may encounter API
rate limiting because of the number of API calls that external-dns must make to the DigitalOcean API to retrieve
the current DNS configuration during every reconciliation loop. If this is the case, use the
the current DNS configuration during every reconciliation loop. If this is the case, use the
`--digitalocean-api-page-size` option to increase the size of the pages used when querying the DigitalOcean API.
(Note: external-dns uses a default of 50.)
View File
@ -1,6 +1,5 @@
# DNSimple
This tutorial describes how to set up ExternalDNS for usage with DNSimple.
Make sure to use **>=0.4.6** version of ExternalDNS for this tutorial.
@ -12,8 +11,9 @@ A DNSimple API access token can be acquired by following the [provided documenta
The environment variable `DNSIMPLE_OAUTH` must be set to the generated API token to run ExternalDNS with DNSimple.
When the generated DNSimple API access token is a _User token_, as opposed to an _Account token_, the following environment variables must also be set:
- `DNSIMPLE_ACCOUNT_ID`: Set this to the account ID which the domains to be managed by ExternalDNS belong to (eg. `1001234`).
- `DNSIMPLE_ZONES`: Set this to a comma separated list of DNS zones to be managed by ExternalDNS (eg. `mydomain.com,example.com`).
- `DNSIMPLE_ACCOUNT_ID`: Set this to the account ID to which the domains to be managed by ExternalDNS belong (e.g. `1001234`).
- `DNSIMPLE_ZONES`: Set this to a comma separated list of DNS zones to be managed by ExternalDNS (e.g. `mydomain.com,example.com`).
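For example, when running ExternalDNS locally (a sketch reusing the sample values above):

```sh
export DNSIMPLE_OAUTH=YOUR_DNSIMPLE_API_TOKEN
export DNSIMPLE_ACCOUNT_ID=1001234
export DNSIMPLE_ZONES=mydomain.com,example.com
```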
## Deploy ExternalDNS
@ -21,6 +21,7 @@ Connect your `kubectl` client to the cluster you want to test ExternalDNS with.
Then apply one of the following manifests file to deploy ExternalDNS.
### Manifest (for clusters without RBAC enabled)
```yaml
apiVersion: apps/v1
kind: Deployment
@ -123,7 +124,6 @@ spec:
value: "SET THIS IF USING A DNSIMPLE USER ACCESS TOKEN"
```
## Deploying an Nginx Service
Create a service file called 'nginx.yaml' with the following contents:
@ -172,7 +172,7 @@ ExternalDNS uses this annotation to determine what services should be registered
Create the deployment and service:
```sh
$ kubectl create -f nginx.yaml
kubectl create -f nginx.yaml
```
Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service. Check the status by running
@ -211,7 +211,8 @@ You can view your DNSimple Record Editor at https://dnsimple.com/a/YOUR_ACCOUNT_
### Using the DNSimple Zone Records API
This approach allows for you to use the DNSimple [List records for a zone](https://developer.dnsimple.com/v2/zones/records/#listZoneRecords) endpoint to verify the creation of the A and TXT record. Ensure you substitute the value `YOUR_ACCOUNT_ID` with the ID of your DNSimple account and `example.com` with the correct domain that you used during validation.
This approach allows you to use the DNSimple [List records for a zone](https://developer.dnsimple.com/v2/zones/records/#listZoneRecords) endpoint to verify the creation of the A and TXT records.
Ensure you substitute the value `YOUR_ACCOUNT_ID` with the ID of your DNSimple account and `example.com` with the correct domain that you used during validation.
```sh
curl -H "Authorization: Bearer $DNSIMPLE_ACCOUNT_TOKEN" \
@ -224,8 +225,8 @@ curl -H "Authorization: Bearer $DNSIMPLE_ACCOUNT_TOKEN" \
Now that we have verified that ExternalDNS will automatically manage DNSimple DNS records, we can delete the tutorial's example:
```sh
$ kubectl delete -f nginx.yaml
$ kubectl delete -f externaldns.yaml
kubectl delete -f nginx.yaml
kubectl delete -f externaldns.yaml
```
### Deleting Created Records
View File
@ -9,6 +9,7 @@ The main use cases that inspired this feature is the necessity for having a subd
## Setup
### External DNS
```yaml
apiVersion: apps/v1
kind: Deployment
@ -54,7 +55,8 @@ spec:
```
This will create 2 CNAME records pointing to `aws.example.org`:
```
```sh
tenant1.example.org
tenant2.example.org
```
@ -76,7 +78,8 @@ spec:
```
This will create 2 A records pointing to `111.111.111.111`:
```
```sh
tenant1.example.org
tenant2.example.org
```
View File
@ -22,6 +22,7 @@ Connect your `kubectl` client to the cluster you want to test ExternalDNS with.
Then apply one of the following manifests file to deploy ExternalDNS.
### Manifest (for clusters without RBAC enabled)
```yaml
apiVersion: apps/v1
kind: Deployment
@ -52,6 +53,7 @@ spec:
```
### Manifest (for clusters with RBAC enabled)
```yaml
apiVersion: v1
kind: ServiceAccount
@ -67,7 +69,7 @@ rules:
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
@ -115,7 +117,6 @@ spec:
value: "YOUR_GANDI_PAT"
```
## Deploying an Nginx Service
Create a service file called 'nginx.yaml' with the following contents:
@ -164,7 +165,7 @@ ExternalDNS uses this annotation to determine what services should be registered
Create the deployment and service:
```console
$ kubectl create -f nginx.yaml
kubectl create -f nginx.yaml
```
Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.
@ -183,11 +184,11 @@ This should show the external IP address of the service as the A record for your
Now that we have verified that ExternalDNS will automatically manage Gandi DNS records, we can delete the tutorial's example:
```
$ kubectl delete service -f nginx.yaml
$ kubectl delete service -f externaldns.yaml
```sh
kubectl delete -f nginx.yaml
kubectl delete -f externaldns.yaml
```
# Additional options
## Additional options
If you're using organizations to separate your domains, you can pass the organization's ID in an environment variable called `GANDI_SHARING_ID` to get access to it.
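For example, as an extra environment variable on the ExternalDNS deployment (a sketch; the value is a placeholder):

```yaml
env:
  - name: GANDI_SHARING_ID
    value: "YOUR_GANDI_SHARING_ID"
```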
View File
@ -7,9 +7,9 @@ This tutorial describes how to setup ExternalDNS for usage within a GKE cluster
Set up your environment to work with Google Cloud Platform. Fill in your values as needed, e.g. the target project.
```console
$ gcloud config set project "zalando-external-dns-test"
$ gcloud config set compute/region "europe-west1"
$ gcloud config set compute/zone "europe-west1-d"
gcloud config set project "zalando-external-dns-test"
gcloud config set compute/region "europe-west1"
gcloud config set compute/zone "europe-west1-d"
```
## GKE Node Scopes
@ -109,11 +109,13 @@ spec:
#### Without a separate TCP load balancer
By default, the controller will update your Ingress objects with the public IPs of the nodes running your nginx controller instances. You should run multiple instances in case of pod or node failure. The controller will do leader election and will put multiple IPs as targets in your Ingress objects in that case. It could also make sense to run it as a DaemonSet. However, we'll just run a single replica. You have to open the respective ports on all of your worker nodes to allow nginx to receive traffic.
By default, the controller will update your Ingress objects with the public IPs of the nodes running your nginx controller instances.
You should run multiple instances in case of pod or node failure. The controller will do leader election and will put multiple IPs as targets in your Ingress objects in that case.
It could also make sense to run it as a DaemonSet. However, we'll just run a single replica. You have to open the respective ports on all of your worker nodes to allow nginx to receive traffic.
```console
$ gcloud compute firewall-rules create "allow-http" --allow tcp:80 --source-ranges "0.0.0.0/0" --target-tags "gke-external-dns-9488ba14-node"
$ gcloud compute firewall-rules create "allow-https" --allow tcp:443 --source-ranges "0.0.0.0/0" --target-tags "gke-external-dns-9488ba14-node"
gcloud compute firewall-rules create "allow-http" --allow tcp:80 --source-ranges "0.0.0.0/0" --target-tags "gke-external-dns-9488ba14-node"
gcloud compute firewall-rules create "allow-https" --allow tcp:443 --source-ranges "0.0.0.0/0" --target-tags "gke-external-dns-9488ba14-node"
```
Change `--target-tags` to the corresponding tags of your nodes. You can find them by describing your instances or by looking at the default firewall rules created by GKE for your cluster.
@ -158,7 +160,10 @@ spec:
#### With a separate TCP load balancer
However, you can also have the ingress controller proxied by a Kubernetes Service. This will instruct the controller to populate this Service's external IP as the external IP of the Ingress. This exposes the nginx proxies via a Layer 4 load balancer (`type=LoadBalancer`) which is more reliable than the other method. With that approach, you can run as many nginx proxy instances on your cluster as you like or have them autoscaled. This is the preferred way of running the nginx controller.
However, you can also have the ingress controller proxied by a Kubernetes Service.
This will instruct the controller to populate this Service's external IP as the external IP of the Ingress.
This exposes the nginx proxies via a Layer 4 load balancer (`type=LoadBalancer`) which is more reliable than the other method. With that approach, you can run as many nginx proxy instances on your cluster as you like or have them autoscaled.
This is the preferred way of running the nginx controller.
Apply the following manifests to your cluster. Note how the controller receives an additional flag telling it which Service it should treat as its public endpoint and how it doesn't need hostPorts anymore.
@ -236,7 +241,7 @@ rules:
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
@ -382,15 +387,15 @@ $ curl via-ingress.external-dns-test.gcp.zalan.do
Make sure to delete all Service and Ingress objects before terminating the cluster so all load balancers and DNS entries get cleaned up correctly.
```console
$ kubectl delete service nginx-ingress-controller
$ kubectl delete ingress nginx
kubectl delete service nginx-ingress-controller
kubectl delete ingress nginx
```
Give ExternalDNS some time to clean up the DNS records for you. Then delete the managed zone and cluster.
```console
$ gcloud dns managed-zones delete "external-dns-test-gcp-zalan-do"
$ gcloud container clusters delete "external-dns"
gcloud dns managed-zones delete "external-dns-test-gcp-zalan-do"
gcloud container clusters delete "external-dns"
```
Also delete the NS records for your removed zone from the parent zone.
@ -679,16 +684,16 @@ Make sure to delete all service and ingress objects before terminating the
cluster so all load balancers and DNS entries get cleaned up correctly.
```console
$ kubectl delete service --namespace=ingress-nginx ingress-nginx-controller
$ kubectl delete ingress nginx
kubectl delete service --namespace=ingress-nginx ingress-nginx-controller
kubectl delete ingress nginx
```
Give ExternalDNS some time to clean up the DNS records for you. Then delete the
managed zone and cluster.
```console
$ gcloud dns managed-zones delete external-dns-test-gcp-zalan-do
$ gcloud container clusters delete external-dns
gcloud dns managed-zones delete external-dns-test-gcp-zalan-do
gcloud container clusters delete external-dns
```
Also delete the NS records for your removed zone from the parent zone.
View File
@ -6,7 +6,8 @@ This tutorial describes how to setup ExternalDNS for usage within a [GKE](https:
*If you prefer to try out ExternalDNS in one of the existing environments, you can skip this step*
The following instructions use [access scopes](https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam) to provide ExternalDNS with the permissions it needs to manage DNS records within a single [project](https://cloud.google.com/docs/overview#projects), the organizing entity to allocate resources.
The following instructions use [access scopes](https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam) to provide ExternalDNS
with the permissions it needs to manage DNS records within a single [project](https://cloud.google.com/docs/overview#projects), the organizing entity to allocate resources.
Note that since these permissions are associated with the instance, all pods in the cluster will also have these permissions. As such, this approach is not suitable for anything but testing environments.
@ -41,11 +42,16 @@ gcloud container clusters create $GKE_CLUSTER_NAME \
--scopes "https://www.googleapis.com/auth/ndev.clouddns.readwrite"
```
**WARNING**: Note that this cluster will use the default [compute engine GSA](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) that contians the overly permissive project editor (`roles/editor`) role. So essentially, anything on the cluster could potentially grant escalated privileges. Also, as mentioned earlier, the access scope `ndev.clouddns.readwrite` will allow anything running on the cluster to have read/write permissions on all Cloud DNS zones within the same project.
> [!WARNING]
> Note that this cluster will use the default [compute engine GSA](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) that contains the overly permissive project editor (`roles/editor`) role.
> So essentially, anything on the cluster could potentially grant escalated privileges.
> Also, as mentioned earlier, the access scope `ndev.clouddns.readwrite` will allow anything running on the cluster to have read/write permissions on all Cloud DNS zones within the same project.
### Cloud DNS Zone
Create a DNS zone which will contain the managed DNS records. If using your own domain that was registered with a third-party domain registrar, you should point your domain's name servers to the values under the `nameServers` key. Please consult your registrar's documentation on how to do that. This tutorial will use example domain of `example.com`.
Create a DNS zone which will contain the managed DNS records.
If using your own domain that was registered with a third-party domain registrar, you should point your domain's name servers to the values under the `nameServers` key.
Please consult your registrar's documentation on how to do that. This tutorial will use the example domain `example.com`.
```bash
gcloud dns managed-zones create "example-com" --dns-name "example.com." \
@ -61,7 +67,7 @@ gcloud dns record-sets list \
Outputs:
```
```sh
NAME TYPE TTL DATA
example.com. NS 21600 ns-cloud-e1.googledomains.com.,ns-cloud-e2.googledomains.com.,ns-cloud-e3.googledomains.com.,ns-cloud-e4.googledomains.com.
```
@ -70,7 +76,8 @@ In this case it's `ns-cloud-{e1-e4}.googledomains.com.` but your's could slightl
## Cross project access scenario using Google Service Account
More often, following best practices in regards to security and operations, Cloud DNS zones will be managed in a separate project from the Kubernetes cluster. This section shows how setup ExternalDNS to access Cloud DNS from a different project. These steps will also work for single project scenarios as well.
More often, following best practices with regard to security and operations, Cloud DNS zones will be managed in a separate project from the Kubernetes cluster.
This section shows how to set up ExternalDNS to access Cloud DNS from a different project. These steps will also work for single-project scenarios.
ExternalDNS will need permissions to make changes to the Cloud DNS zone. There are three ways to configure the access needed:
@ -105,7 +112,7 @@ gcloud services enable "container.googleapis.com"
#### Provisioning Cloud DNS
Create a Cloud DNS zone in the designated DNS project.
Create a Cloud DNS zone in the designated DNS project.
```bash
gcloud dns managed-zones create "example-com" --project $DNS_PROJECT_ID \
@ -162,30 +169,30 @@ You have an option to chose from using the gcloud CLI or using Terraform.
The below instructions assume you are using the default Kubernetes Service account name of `external-dns` in the namespace `external-dns`
Grant the Kubernetes service account DNS `roles/dns.admin` at project level
```shell
gcloud projects add-iam-policy-binding projects/DNS_PROJECT_ID \
--role=roles/dns.admin \
--member=principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/external-dns/sa/external-dns \
--condition=None
```
Replace the following:
* `DNS_PROJECT_ID` : Project ID of your DNS project. If DNS is in the same project as your GKE cluster, use your GKE project.
* `PROJECT_ID`: your Google Cloud project ID of your GKE Cluster
* `PROJECT_NUMBER`: your numerical Google Cloud project number of your GKE cluster
If you wish to change the namespace, replace
If you wish to change the namespace, replace
* `ns/external-dns` with `ns/<your namespace`
* `ns/external-dns` with `ns/<your namespace>`
* `sa/external-dns` with `sa/<your ksa>`
=== "Terraform"
The below instructions assume you are using the default Kubernetes Service account name of `external-dns` in the namespace `external-dns`
Create a file called `main.tf` and place in it the below. _Note: If you're an experienced terraform user feel free to split these out in to different files_
Create a file called `main.tf` and place in it the below. _Note: If you're an experienced terraform user feel free to split these out into different files_
```hcl
variable "gke-project" {
@ -193,27 +200,27 @@ You have an option to chose from using the gcloud CLI or using Terraform.
description = "Name of the project that the GKE cluster exists in"
default = "GKE-PROJECT"
}
variable "ksa_name" {
type = string
description = "Name of the Kubernetes service account that will be accessing the DNS Zones"
default = "external-dns"
}
variable "kns_name" {
type = string
description = "Name of the Kubernetes Namespace"
default = "external-dns"
}
data "google_project" "project" {
project_id = var.gke-project
}
locals {
member = "principal://iam.googleapis.com/projects/${data.google_project.project.number}/locations/global/workloadIdentityPools/${var.gke-project}.svc.id.goog/subject/ns/${var.kns_name}/sa/${var.ksa_name}"
}
resource "google_project_iam_member" "external_dns" {
member = local.member
project = "DNS-PROJECT"
@ -227,20 +234,20 @@ You have an option to chose from using the gcloud CLI or using Terraform.
member = local.member
}
```
Replace the following
* `GKE-PROJECT` : Project that contains your GKE cluster
* `DNS-PROJECT` : Project that holds your DNS zones
You can also change the below if you plan to use a different service account name and namespace
* `variable "ksa_name"` : Name of the Kubernetes service account external-dns will use
* `variable "kns_name"` : Name of the Kubernetes Name Space that will have external-dns installed to
### Worker Node Service Account method
In this method, the GSA (Google Service Account) that is associated with GKE worker nodes will be configured to have access to Cloud DNS.
In this method, the GSA (Google Service Account) that is associated with GKE worker nodes will be configured to have access to Cloud DNS.
**WARNING**: This will grant access to modify the Cloud DNS zone records for all containers running on cluster, not just ExternalDNS, so use this option with caution. This is not recommended for production environments.
@ -257,7 +264,7 @@ After this, follow the steps in [Deploy ExternalDNS](#deploy-externaldns). Make
### Static Credentials
In this scenario, a new GSA (Google Service Account) is created that has access to the CloudDNS zone. The credentials for this GSA are saved and installed as a Kubernetes secret that will be used by ExternalDNS.
In this scenario, a new GSA (Google Service Account) is created that has access to the CloudDNS zone. The credentials for this GSA are saved and installed as a Kubernetes secret that will be used by ExternalDNS.
This allows only containers that have access to the secret, such as ExternalDNS, to update records on the Cloud DNS Zone.
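A sketch of those two steps (the GSA name `external-dns`, the DNS project, and the secret name are assumptions; the key file matches the `credentials.json` path referenced in the deployment below):

```sh
# Create and download a key for the GSA
gcloud iam service-accounts keys create credentials.json \
  --iam-account "external-dns@YOUR-DNS-PROJECT.iam.gserviceaccount.com"

# Install it as a Kubernetes secret for ExternalDNS to mount
kubectl create secret generic external-dns \
  --namespace external-dns --from-file credentials.json
```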
@ -301,7 +308,7 @@ Deploy ExternalDNS with the following steps below, documented under [Deploy Exte
#### Update ExternalDNS pods
!!! note "Only required if not enabled on all nodes"
If you have GKE Workload Identity enabled on all nodes in your cluster, the below step is not necessary
If you have GKE Workload Identity enabled on all nodes in your cluster, the below step is not necessary
Update the Pod spec to schedule the workloads on nodes that use Workload Identity and to use the annotated Kubernetes service account.
@ -360,7 +367,7 @@ kind: Deployment
metadata:
name: external-dns
labels:
app.kubernetes.io/name: external-dns
app.kubernetes.io/name: external-dns
spec:
strategy:
type: Recreate
@ -387,7 +394,7 @@ spec:
- --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
- --registry=txt
- --txt-owner-id=my-identifier
# # uncomment below if static credentials are used
# # uncomment below if static credentials are used
# env:
# - name: GOOGLE_APPLICATION_CREDENTIALS
# value: /etc/secrets/service-account/credentials.json
@ -464,7 +471,7 @@ gcloud dns record-sets list --zone "example-com" --name "nginx.example.com."
Example output:
```
```sh
NAME TYPE TTL DATA
nginx.example.com. A 300 104.155.60.49
nginx.example.com. TXT 300 "heritage=external-dns,external-dns/owner=my-identifier"
@ -514,7 +521,8 @@ Create the ingress objects with:
kubectl create --namespace "default" --filename ingress.yaml
```
Note that this will ingress object will use the default ingress controller that comes with GKE to create a L7 load balancer in addition to the L4 load balancer previously with the service object. To use only the L7 load balancer, update the service manafest to change the Service type to `NodePort` and remove the ExternalDNS annotation.
Note that this ingress object will use the default ingress controller that comes with GKE to create an L7 load balancer in addition to the L4 load balancer previously created with the service object.
To use only the L7 load balancer, update the service manifest to change the Service type to `NodePort` and remove the ExternalDNS annotation.
After roughly two minutes check that a corresponding DNS record for your Ingress was created.
@ -523,9 +531,10 @@ gcloud dns record-sets list \
--zone "example-com" \
--name "server.example.com." \
```
Output:
```
```sh
NAME TYPE TTL DATA
server.example.com. A 300 130.211.46.224
server.example.com. TXT 300 "heritage=external-dns,external-dns/owner=my-identifier"
View File
@ -29,8 +29,8 @@ Connect your `kubectl` client to the cluster with which you want to test Externa
Create a values.yaml file to configure ExternalDNS to use GoDaddy as the DNS provider. This file should include the necessary environment variables:
```shell
provider:
name: godaddy
provider:
name: godaddy
extraArgs:
- --godaddy-api-key=YOUR_API_KEY
- --godaddy-api-secret=YOUR_API_SECRET
@ -197,8 +197,8 @@ ExternalDNS uses the hostname annotation to determine which services should be r
### Create the deployment and service
```
$ kubectl create -f nginx.yaml
```sh
kubectl create -f nginx.yaml
```
Depending on where you run your service, it may take some time for your cloud provider to create an external IP for the service. Once an external IP is assigned, ExternalDNS detects the new service IP address and synchronizes the GoDaddy DNS records.
@ -211,7 +211,7 @@ Use the GoDaddy web console or API to verify that the A record for your domain s
Once you successfully configure and verify record management via ExternalDNS, you can delete the tutorial's example:
```
$ kubectl delete -f nginx.yaml
$ kubectl delete -f externaldns.yaml
```sh
kubectl delete -f nginx.yaml
kubectl delete -f externaldns.yaml
```
View File
@ -3,7 +3,9 @@
This tutorial describes how to set up ExternalDNS for usage in conjunction with a headless service.
## Use cases
The main use cases that inspired this feature is the necessity for fixed addressable hostnames with services, such as Kafka when trying to access them from outside the cluster. In this scenario, quite often, only the Node IP addresses are actually routable and as in systems like Kafka more direct connections are preferable.
The main use case that inspired this feature is the necessity for fixed addressable hostnames with services, such as Kafka, when trying to access them from outside the cluster.
In this scenario, quite often, only the Node IP addresses are actually routable, and in systems like Kafka more direct connections are preferable.
## Setup
We will go through a small example of deploying a simple Kafka with the use of a headless service.
### External DNS
A simple deploy could look like this:
### Manifest (for clusters without RBAC enabled)
```yaml
apiVersion: apps/v1
kind: Deployment
@ -37,13 +41,14 @@ spec:
- --source=service
- --source=ingress
- --namespace=dev
- --domain-filter=example.org.
- --domain-filter=example.org.
- --provider=aws
- --registry=txt
- --txt-owner-id=dev.example.org
```
### Manifest (for clusters with RBAC enabled)
```yaml
apiVersion: v1
kind: ServiceAccount
@ -59,7 +64,7 @@ rules:
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
@ -102,13 +107,12 @@ spec:
- --source=service
- --source=ingress
- --namespace=dev
- --domain-filter=example.org.
- --domain-filter=example.org.
- --provider=aws
- --registry=txt
- --txt-owner-id=dev.example.org
```
### Kafka Stateful Set
First let's deploy a Kafka StatefulSet, a simple example (a lot of stuff is missing) with a headless service called `ksvc`
@ -127,7 +131,7 @@ spec:
component: kafka
spec:
containers:
- name: kafka
- name: kafka
image: confluent/kafka
ports:
- containerPort: 9092
@ -155,6 +159,7 @@ spec:
requests:
storage: 500Gi
```
Very important here is to set the `hostPort` (this only works if the PodSecurityPolicy allows it). In case your app requires an actual hostname inside the container (unlike Kafka, which can advertise on another address), you have to set the hostname yourself.
### Headless Service
@ -162,7 +167,7 @@ Very important here, is to set the `hostPort`(only works if the PodSecurityPolic
Now we need to define a headless service to expose the Kafka pods. There are generally two approaches to expose the nodeport of a headless service:
1. Add `--fqdn-template={{name}}.example.org`
2. Use a full annotation
2. Use a full annotation
If you go with #1, you just need to define the headless service; here is an example of case #2:
@ -181,8 +186,10 @@ spec:
selector:
component: kafka
```
This will create 3 DNS records:
```
```sh
kafka-0.example.org
kafka-1.example.org
kafka-2.example.org
@ -192,7 +199,7 @@ If you set `--fqdn-template={{name}}.example.org` you can omit the annotation.
Generally it is a better approach to use `--fqdn-template={{name}}.example.org`, because then
you would get the service name inside the generated A records:
```
```sh
kafka-0.ksvc.example.org
kafka-1.ksvc.example.org
kafka-2.ksvc.example.org
View File
@ -7,29 +7,35 @@ IBM Cloud commands and assumes that the Kubernetes cluster was created via IBM C
are being run on an orchestration node.
## Creating an IBMCloud DNS zone
The IBMCloud provider for ExternalDNS will find suitable zones for domains it manages; it will
not automatically create zones.
For the public zone, this tutorial assumes that the [IBMCloud Internet Services](https://cloud.ibm.com/catalog/services/internet-services) instance was provisioned and the [cis cli plugin](https://cloud.ibm.com/docs/cis?topic=cis-cli-plugin-cis-cli) was installed with the IBMCloud CLI.
For the private zone, this tutorial assumes that the [IBMCloud DNS Services](https://cloud.ibm.com/catalog/services/dns-services) instance was provisioned and the [dns cli plugin](https://cloud.ibm.com/docs/dns-svcs?topic=dns-svcs-cli-plugin-dns-services-cli-commands) was installed with the IBMCloud CLI.
### Public Zone
For this tutorial, we create a public zone named `example.com` on the IBMCloud Internet Services instance `external-dns-public`
```sh
ibmcloud cis domain-add example.com -i external-dns-public
```
$ ibmcloud cis domain-add example.com -i external-dns-public
```
Follow this [step](https://cloud.ibm.com/docs/cis?topic=cis-getting-started#configure-your-name-servers-with-the-registrar-or-existing-dns-provider) to activate your zone
### Private Zone
For this tutorial, we create a private zone named `example.com` on the IBMCloud DNS Services instance `external-dns-private`
```
$ ibmcloud dns zone-create example.com -i external-dns-private
```sh
ibmcloud dns zone-create example.com -i external-dns-private
```
## Creating configuration file
The preferred way to inject the configuration file is by using a Kubernetes secret. The secret should contain an object named `ibmcloud.json` with content similar to this:
```
```json
{
"apiKey": "1234567890abcdefghijklmnopqrstuvwxyz",
"instanceCrn": "crn:v1:bluemix:public:internet-svcs:global:a/bcf1865e99742d38d2d5fc3fb80a5496:b950da8a-5be6-4691-810e-36388c77b0a3::"
@ -41,9 +47,11 @@ You can create or find the `apiKey` in your ibmcloud IAM --> [API Keys page](htt
You can find the `instanceCrn` in your service instance details
Now you can create a file named 'ibmcloud.json' with values gathered above and with the structure of the example above. Use this file to create a Kubernetes secret:
```sh
kubectl create secret generic ibmcloud-config-file --from-file=/local/path/to/ibmcloud.json
```
$ kubectl create secret generic ibmcloud-config-file --from-file=/local/path/to/ibmcloud.json
```
## Deploy ExternalDNS
Connect your `kubectl` client to the cluster you want to test ExternalDNS with.
@ -105,7 +113,7 @@ rules:
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
@ -213,8 +221,8 @@ will cause ExternalDNS to remove the corresponding DNS records.
Create the deployment and service:
```
$ kubectl create -f nginx.yaml
```sh
kubectl create -f nginx.yaml
```
Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.
@ -223,10 +231,12 @@ Once the service has an external IP assigned, ExternalDNS will notice the new se
the IBMCloud DNS records.
## Verifying IBMCloud DNS records
Run the following command to view the A records:
### Public Zone
```
```sh
# Get the domain ID with below command on IBMCloud Internet Services instance `external-dns-public`
$ ibmcloud cis domains -i external-dns-public
# Get the records with domain ID
@ -234,21 +244,23 @@ $ ibmcloud cis dns-records DOMAIN_ID -i external-dns-public
```
### Private Zone
```
```sh
# Get the domain ID with below command on IBMCloud DNS Services instance `external-dns-private`
$ ibmcloud dns zones -i external-dns-private
# Get the records with domain ID
$ ibmcloud dns resource-records ZONE_ID -i external-dns-public
```
This should show the external IP address of the service as the A record for your domain.
## Cleanup
Now that we have verified that ExternalDNS will automatically manage IBMCloud DNS records, we can delete the tutorial's example:
```
$ kubectl delete -f nginx.yaml
$ kubectl delete -f externaldns.yaml
```sh
kubectl delete -f nginx.yaml
kubectl delete -f externaldns.yaml
```
## Setting proxied records on public zone
@ -257,6 +269,9 @@ Using the `external-dns.alpha.kubernetes.io/ibmcloud-proxied: "true"` annotation
## Activate private zone with VPC allocated
By default, IBMCloud DNS Services don't active your private zone with new zone added, with externale DNS, you can use `external-dns.alpha.kubernetes.io/ibmcloud-vpc: "crn:v1:bluemix:public:is:us-south:a/bcf1865e99742d38d2d5fc3fb80a5496::vpc:r006-74353823-a60d-42e4-97c5-5e2551278435"` annotation on your ingress or service, it will active your private zone with in specific VPC for that record created in. this setting won't work if the private zone was active already.
By default, IBMCloud DNS Services doesn't activate your private zone when a new zone is added.
With External DNS, you can use the `external-dns.alpha.kubernetes.io/ibmcloud-vpc: "crn:v1:bluemix:public:is:us-south:a/bcf1865e99742d38d2d5fc3fb80a5496::vpc:r006-74353823-a60d-42e4-97c5-5e2551278435"` annotation on your ingress or service.
It will activate your private zone within the specific VPC that the record is created in.
This setting won't work if the private zone was active already.
Note: the annotation value is the VPC CRN; every IBM Cloud service has a valid CRN.
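For example, on a Service (a sketch reusing the CRN from above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/ibmcloud-vpc: "crn:v1:bluemix:public:is:us-south:a/bcf1865e99742d38d2d5fc3fb80a5496::vpc:r006-74353823-a60d-42e4-97c5-5e2551278435"
```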
View File
@ -17,7 +17,6 @@ create ALBs and NLBs, follow the [Setup Guide][2].
[2]: https://github.com/zalando-incubator/kube-ingress-aws-controller/tree/HEAD/deploy
### Optional RouteGroup
[RouteGroup][3] is a CRD that enables you to do complex routing with
@ -25,7 +24,7 @@ create ALBs and NLBs, follow the [Setup Guide][2].
First, you have to apply the RouteGroup CRD to your cluster:
```
```sh
kubectl apply -f https://github.com/zalando/skipper/blob/HEAD/dataclients/kubernetes/deploy/apply/routegroups_crd.yaml
```
@ -76,6 +75,7 @@ rules:
```
See also current RBAC yaml files:
- [kube-ingress-aws-controller](https://github.com/zalando-incubator/kubernetes-on-aws/blob/dev/cluster/manifests/ingress-controller/01-rbac.yaml)
- [skipper](https://github.com/zalando-incubator/kubernetes-on-aws/blob/dev/cluster/manifests/skipper/rbac.yaml)
- [external-dns](https://github.com/zalando-incubator/kubernetes-on-aws/blob/dev/cluster/manifests/external-dns/01-rbac.yaml)
@ -83,7 +83,6 @@ See also current RBAC yaml files:
[3]: https://opensource.zalando.com/skipper/kubernetes/routegroups/#routegroups
[4]: https://opensource.zalando.com/skipper
## Deploy an example application
Create the following sample "echoserver" application to demonstrate how
@ -206,7 +205,6 @@ annotation (which defaults to `ipv4`) to determine this. If this annotation is
set to `dualstack` then ExternalDNS will create two alias records (one A record
and one AAAA record) for each hostname associated with the Ingress object.
Example:
```yaml
View File
@ -162,7 +162,7 @@ ExternalDNS uses this annotation to determine what services should be registered
Create the deployment and service:
```console
kubectl create -f nginx.yaml
```
Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.
@ -181,7 +181,7 @@ This should show the external IP address of the service as the A record for your
Now that we have verified that ExternalDNS will automatically manage Linode DNS records, we can delete the tutorial's example:
```sh
kubectl delete service -f nginx.yaml
kubectl delete service -f externaldns.yaml
```
View File
@ -27,15 +27,16 @@ var `NS1_APIKEY` will be needed to run ExternalDNS with NS1.
### To add or delete an API key
1. Log into the NS1 portal at [my.nsone.net](http://my.nsone.net).
2. Click your username in the upper-right corner, and navigate to **Account Settings** \> **Users & Teams**.
3. Navigate to the _API Keys_ tab, and click **Add Key**.
4. Enter the name of the application and modify permissions and settings as desired. Once complete, click **Create Key**. The new API key appears in the list.
> [!NOTE]
> Set the permissions for your API keys just as you would for a user or team associated with your organization's NS1 account. For more information, refer to the article [Creating and Managing API Keys](https://help.ns1.com/hc/en-us/articles/360026140094-Creating-managing-users) in the NS1 Knowledge Base.
## Deploy ExternalDNS
@ -56,7 +57,7 @@ Then apply one of the following manifests file to deploy ExternalDNS.
Create a values.yaml file to configure ExternalDNS to use NS1 as the DNS provider. This file should include the necessary environment variables:
```shell
provider:
name: ns1
env:
- name: NS1_APIKEY
@ -122,7 +123,7 @@ rules:
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
@ -223,8 +224,8 @@ ExternalDNS uses the hostname annotation to determine which services should be r
### Create the deployment and service
```sh
kubectl create -f nginx.yaml
```
Depending on where you run your service, it may take some time for your cloud provider to create an external IP for the service. Once an external IP is assigned, ExternalDNS detects the new service IP address and synchronizes the NS1 DNS records.
@ -237,7 +238,7 @@ Use the NS1 portal or API to verify that the A record for your domain shows the
Once you successfully configure and verify record management via ExternalDNS, you can delete the tutorial's example:
```sh
kubectl delete -f nginx.yaml
kubectl delete -f externaldns.yaml
```
View File
@ -18,14 +18,14 @@ By default, the ExternalDNS OCI provider is configured to use Global OCI
DNS Zones. If you want to use Private OCI DNS Zones, add the following
argument to the ExternalDNS controller:
```sh
--oci-zone-scope=PRIVATE
```
To use both Global and Private OCI DNS Zones, set the OCI Zone Scope to be
empty:
```sh
--oci-zone-scope=
```
@ -60,7 +60,7 @@ compartment: ocid1.compartment.oc1...
Create a secret using the config file above:
```shell
kubectl create secret generic external-dns-config --from-file=oci.yaml
```
### OCI IAM Instance Principal
@ -73,7 +73,7 @@ the target compartment to the dynamic group covering your instance running
ExternalDNS.
E.g.:
```sql
Allow dynamic-group <dynamic-group-name> to manage dns in compartment id <target-compartment-OCID>
```
@ -93,7 +93,7 @@ OCI IAM policy exists with a statement granting the `manage dns` permission on z
records in the target compartment covering your OKE cluster running ExternalDNS.
E.g.:
```sql
Allow any-user to manage dns in compartment <compartment-name> where all {request.principal.type='workload',request.principal.cluster_id='<cluster-ocid>',request.principal.service_account='external-dns'}
```
@ -111,7 +111,7 @@ compartment: ocid1.compartment.oc1...
Create a secret using the config file above:
```shell
kubectl create secret generic external-dns-config --from-file=oci.yaml
```
## Manifest (for clusters with RBAC enabled)
@ -237,11 +237,10 @@ spec:
Apply the manifest above and wait roughly two minutes and check that a corresponding DNS record for your service was created.
```sh
kubectl apply -f nginx.yaml
```
[1]: https://docs.cloud.oracle.com/iaas/Content/DNS/Concepts/dnszonemanagement.htm
[2]: https://docs.cloud.oracle.com/iaas/Content/Identity/Reference/dnspolicyreference.htm
[3]: https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
View File
@ -21,6 +21,7 @@ You first need to create an OVH application.
Using the [OVH documentation](https://docs.ovh.com/gb/en/api/first-steps-with-ovh-api/#advanced-usage-pair-ovhcloud-apis-with-an-application_2) you will get your `Application key` and `Application secret`.
You will also need to generate your consumer key; the permissions needed are:
- GET on `/domain/zone`
- GET on `/domain/zone/*/record`
- GET on `/domain/zone/*/record/*`
@ -230,8 +231,8 @@ ExternalDNS uses the hostname annotation to determine which services should be r
### Create the deployment and service
```sh
kubectl create -f nginx.yaml
```
Depending on where you run your service, it may take some time for your cloud provider to create an external IP for the service. Once an external IP is assigned, ExternalDNS detects the new service IP address and synchronizes the OVH DNS records.
@ -244,7 +245,7 @@ Use the OVH manager or API to verify that the A record for your domain shows the
Once you successfully configure and verify record management via ExternalDNS, you can delete the tutorial's example:
```sh
kubectl delete -f nginx.yaml
kubectl delete -f externaldns.yaml
```
View File
@ -55,7 +55,8 @@ spec:
- --interval=30s
```
### Domain Filter (`--domain-filter`)
When the `--domain-filter` argument is specified, external-dns will only create DNS records for host names (specified in ingress objects and services with the external-dns annotation) related to zones that match the `--domain-filter` argument in the external-dns deployment manifest.
e.g. `--domain-filter=example.org` will allow for zone `example.org` and any zones in PowerDNS that end in `.example.org`, including `an.example.org`, i.e. the subdomains of example.org.
@ -65,11 +66,13 @@ eg. ```--domain-filter=.example.org``` will allow *only* zones that end in `.exa
The filter can also match parent zones. For example `--domain-filter=a.example.com` will allow for zone `example.com`. If you want to match parent zones, you cannot prepend your filter with a ".", e.g. `--domain-filter=.example.com` will not attempt to match parent zones.
### Regex Domain Filter (`--regex-domain-filter`)
`--regex-domain-filter` limits possible domains and target zone with a regex. It overrides domain filters and can be specified only once.
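As a sketch, the relevant container arguments could look like the following; the zone names are placeholders, and only one of the two filters should be used since `--regex-domain-filter` overrides `--domain-filter`:

```sh
--domain-filter=example.org          # plain filter: example.org and its subdomain zones
--regex-domain-filter=example\.org$  # alternative: regex filter, overrides the plain filter
```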
## RBAC
If your cluster is RBAC enabled, you also need to set up the following before you can run external-dns:
```yaml
apiVersion: v1
kind: ServiceAccount
@ -151,29 +154,32 @@ spec:
port: 80
targetPort: 5678
```
**Important!**: Don't run dig, nslookup or similar immediately (until you've
confirmed the record exists). You'll get hit by [negative DNS caching](https://tools.ietf.org/html/rfc2308), which is hard to flush.
Run the following to make sure everything is in order:
```bash
kubectl get services echo
kubectl get endpoints echo
```
Make sure everything looks correct, i.e. the service is defined and receives a
public IP, and that the endpoint also has a pod IP.
Once that's done, wait about 30s-1m (interval for external-dns to kick in), then do:
```bash
curl -H "X-API-Key: ${PDNS_API_KEY}" ${PDNS_API_URL}/api/v1/servers/localhost/zones/example.com. | jq '.rrsets[] | select(.name | contains("echo"))'
```
Once the API shows the record correctly, you can double check your record using:
```bash
dig @${PDNS_FQDN} echo.example.com.
```
## Using CRD source to manage DNS records in PowerDNS
Please refer to the [CRD source documentation](../sources/crd.md#example) for more information.
View File
@ -6,12 +6,11 @@ There is a pseudo-API exposed that ExternalDNS is able to use to manage these re
__NOTE:__ Your Pi-hole must be running [version 5.9 or newer](https://pi-hole.net/blog/2022/02/12/pi-hole-ftl-v5-14-web-v5-11-and-core-v5-9-released).
## Deploy ExternalDNS
You can skip to the [manifest](#externaldns-manifest) if authentication is disabled on your Pi-hole instance or you don't want to use secrets.
If your Pi-hole server's admin dashboard is protected by a password, you'll likely want to create a secret first containing its value.
This is optional since you _do_ retain the option to pass it as a flag with `--pihole-password`.
You can create the secret with:
@ -21,12 +20,12 @@ kubectl create secret generic pihole-password \
--from-literal EXTERNAL_DNS_PIHOLE_PASSWORD=supersecret
```
Replace __"supersecret"__ with the actual password to your Pi-hole server.
### ExternalDNS Manifest
Apply the following manifest to deploy ExternalDNS, editing values for your environment accordingly.
Be sure to change the namespace in the `ClusterRoleBinding` if you are using a namespace other than __default__.
```yaml
---
@ -107,9 +106,9 @@ spec:
### Arguments
- `--pihole-server (env: EXTERNAL_DNS_PIHOLE_SERVER)` - The address of the Pi-hole web server
- `--pihole-password (env: EXTERNAL_DNS_PIHOLE_PASSWORD)` - The password to the Pi-hole web server (if enabled)
- `--pihole-tls-skip-verify (env: EXTERNAL_DNS_PIHOLE_TLS_SKIP_VERIFY)` - Skip verification of any TLS certificates served by the Pi-hole web server.
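Putting these together, a minimal sketch of the relevant container fragment; the server address is a placeholder, and the secret name matches the one created above:

```yaml
args:
  - --source=service
  - --provider=pihole
  - --pihole-server=http://pihole.example.com  # placeholder address
env:
  - name: EXTERNAL_DNS_PIHOLE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: pihole-password
        key: EXTERNAL_DNS_PIHOLE_PASSWORD
```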
## Verify ExternalDNS Works
View File
@ -32,7 +32,7 @@ env:
name: PLURAL_ACCESS_TOKEN
key: plural-env
- name: PLURAL_ENDPOINT
value: https://app.plural.sh
```
Finally, install the ExternalDNS chart with Helm using the configuration specified in your values.yaml file:
@ -95,7 +95,7 @@ rules:
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
@ -199,8 +199,8 @@ will cause ExternalDNS to remove the corresponding DNS records.
Create the deployment and service:
```sh
kubectl create -f nginx.yaml
```
Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.
@ -218,6 +218,6 @@ The records should show the external IP address of the service as the A record f
Now that we have verified that ExternalDNS will automatically manage Plural DNS records, we can delete the tutorial's example:
```sh
kubectl delete -f nginx.yaml
kubectl delete -f externaldns.yaml
```
View File
@ -7,7 +7,7 @@ This tutorial describes how to use the RFC2136 with either BIND or Windows DNS.
To use external-dns with BIND: generate/procure a key, configure DNS and add a
deployment of external-dns.
### Server credentials
- RFC2136 was developed for and tested with [BIND](https://www.isc.org/downloads/bind/) DNS server.
This documentation assumes that you already have a configured and working server. If you don't,
@ -18,17 +18,18 @@ Skip the next steps wrt BIND setup.
```text
key "externaldns-key" {
algorithm hmac-sha256;
secret "96Ah/a2g0/nLeFGK+d/0tzQcccf9hCEIy34PoXX2Qg8=";
};
```
- If you are your own DNS administrator create a TSIG key. Use
`tsig-keygen -a hmac-sha256 externaldns` or on older distributions
`dnssec-keygen -a HMAC-SHA256 -b 256 -n HOST externaldns`. You will end up with
a key printed to standard out like above (or in the case of dnssec-keygen in a
file called `Kexternaldns......key`).
### BIND Configuration
If you do not administer your own DNS, skip to RFC provider configuration
@ -43,14 +44,17 @@ following.
step. (I put the zone in its own subdirectory because named,
which shouldn't be running as root, needs to create a journal file and the
default zone directory isn't writeable by named).
```text
zone "k8s.example.org" {
type master;
file "/etc/bind/pri/k8s/k8s.zone";
};
```
- Add your key to both transfer and update. For instance with our previous
zone.
```text
zone "k8s.example.org" {
type master;
@ -63,7 +67,9 @@ following.
};
};
```
- Create a zone file (k8s.zone):
```text
$TTL 60 ; 1 minute
k8s.example.org IN SOA k8s.example.org. root.k8s.example.org. (
@ -76,8 +82,8 @@ following.
NS ns.k8s.example.org.
ns A 123.456.789.012
```
- Reload (or restart) named
### Using external-dns
@ -119,7 +125,7 @@ spec:
The default DNS record TTL (Time-To-Live) is 0 seconds. You can customize this value by setting the annotation `external-dns.alpha.kubernetes.io/ttl`. e.g., modify the service manifest YAML file above:
```yaml
apiVersion: v1
kind: Service
metadata:
@ -144,15 +150,26 @@ If you want to generate reverse DNS records for your services, you have to enabl
flag. You also have to add the zone to the list of zones managed by ExternalDNS via the `--rfc2136-zone` and `--domain-filter` flags.
An example of a valid configuration is the following:
```sh
--domain-filter=157.168.192.in-addr.arpa --rfc2136-zone=157.168.192.in-addr.arpa
```
PTR record tracking is managed by the A/AAAA record so you can't create PTR records for already generated A/AAAA records.
### Test with external-dns installed on local machine (optional)
You may install external-dns and test on a local machine by running:
```sh
external-dns --txt-owner-id k8s --provider rfc2136 \
--rfc2136-host=192.168.0.1 --rfc2136-port=53 \
--rfc2136-zone=k8s.example.org \
--rfc2136-tsig-secret=96Ah/a2g0/nLeFGK+d/0tzQcccf9hCEIy34PoXX2Qg8= \
--rfc2136-tsig-secret-alg=hmac-sha256 \
--rfc2136-tsig-keyname=externaldns-key \
--rfc2136-tsig-axfr \
--source ingress --once \
--domain-filter=k8s.example.org --dry-run
```
- host should be the IP of your master DNS server.
@ -160,12 +177,14 @@ external-dns --txt-owner-id k8s --provider rfc2136 --rfc2136-host=192.168.0.1 --
- tsig-keyname needs to match the keyname you used (if you changed it).
- domain-filter can be used as shown to filter the domains you wish to update.
### RFC2136 provider configuration
In order to use external-dns with your cluster you need to add a deployment
with access to your ingress and service resources. The following are two
example manifests with and without RBAC respectively.
- With RBAC:
```text
apiVersion: v1
kind: Namespace
@ -257,6 +276,7 @@ spec:
```
- Without RBAC:
```text
apiVersion: v1
kind: Namespace
@ -317,7 +337,9 @@ existing DNS records from your DNS server, this could mean that you forgot about
##### Kerberos Configuration
DNS with secure updates relies upon a valid Kerberos configuration running within the `external-dns` container.
At this time, you will need to create a ConfigMap for the `external-dns` container to use and mount it in your deployment.
Below is an example of a working Kerberos configuration inside a ConfigMap definition. This may be different depending on many factors in your environment:
```yaml
apiVersion: v1
@ -353,6 +375,7 @@ data:
yourdomain.com = YOUR-REALM.COM
.yourdomain.com = YOUR-REALM.COM
```
In most cases, the realm name will probably be the same as the domain name, so you can simply replace `YOUR-REALM.COM` with something like `YOURDOMAIN.COM`.
Once the ConfigMap is created, the `external-dns` container needs to be told to mount that ConfigMap as a volume at the default Kerberos configuration location. The pod spec should include a similar configuration to the following:
@ -428,11 +451,11 @@ You'll want to configure `external-dns` similarly to the following:
If your DNS server does zone transfers over TLS, you can instruct `external-dns` to connect over TLS with the following flags:
- `--rfc2136-use-tls` Will enable TLS for both zone transfers and for updates.
- `--tls-ca=<cert-file>` Is the path to a file containing certificate(s) that can be used to verify the DNS server
- `--tls-client-cert=<client-cert-file>` and
- `--tls-client-cert-key=<client-key-file>` Set the client certificate and key for mutual verification
- `--rfc2136-skip-tls-verify` Disables verification of the certificate supplied by the DNS server.
Doing only zone transfers over TLS, but not the updates, is currently not supported; they are enabled and disabled together.
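As a sketch, a TLS-enabled invocation might combine these flags as follows; the certificate paths are placeholders:

```sh
external-dns --provider rfc2136 \
  --rfc2136-use-tls \
  --tls-ca=/etc/external-dns/ca.crt \
  --tls-client-cert=/etc/external-dns/client.crt \
  --tls-client-cert-key=/etc/external-dns/client.key
```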
@ -478,4 +501,4 @@ external-dns \
- Distributes the load of DNS updates across multiple data centers, preventing any single DC from becoming a bottleneck.
- Provides flexibility to choose different load balancing strategies based on the environment and requirements.
- Improves the resilience and reliability of DNS updates by introducing a retry mechanism with a list of hosts.
View File
@ -22,14 +22,18 @@ You can either use existing ones or you can create a new token, as explained in
Scaleway provider supports configuring credentials using profiles or supplying them directly with environment variables.
### Configuration using a config file
You can supply the credentials through a config file:
1. Create the config file. Check out [Scaleway docs](https://github.com/scaleway/scaleway-sdk-go/blob/master/scw/README.md#scaleway-config) for instructions
2. Mount it as a Secret into the Pod
3. Configure environment variable `SCW_PROFILE` to match the profile name in the config file
4. Configure environment variable `SCW_CONFIG_PATH` to match the location of the mounted config file
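A minimal sketch of the resulting pod template fragment, assuming the secret is named `scaleway-config`, its key is `config.yaml`, and the profile inside it is called `external-dns` (all placeholders):

```yaml
containers:
  - name: external-dns
    env:
      - name: SCW_PROFILE
        value: external-dns  # placeholder profile name from the config file
      - name: SCW_CONFIG_PATH
        value: /etc/scaleway/config.yaml  # where the secret is mounted
    volumeMounts:
      - name: scaleway-config
        mountPath: /etc/scaleway
volumes:
  - name: scaleway-config
    secret:
      secretName: scaleway-config  # placeholder secret name
```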
### Configuration using environment variables
Two environment variables are needed to run ExternalDNS with Scaleway DNS:
- `SCW_ACCESS_KEY` which is the Access Key.
- `SCW_SECRET_KEY` which is the Secret Key.
@ -41,6 +45,7 @@ Then apply one of the following manifests file to deploy ExternalDNS.
The following examples are suited for development. For production usage, prefer secrets over environment variables, and use a [tagged release](https://github.com/kubernetes-sigs/external-dns/releases).
### Manifest (for clusters without RBAC enabled)
```yaml
apiVersion: apps/v1
kind: Deployment
@ -87,6 +92,7 @@ spec:
```
### Manifest (for clusters with RBAC enabled)
```yaml
apiVersion: v1
kind: ServiceAccount
@ -166,7 +172,6 @@ spec:
###
```
## Deploying an Nginx Service
Create a service file called 'nginx.yaml' with the following contents:
@ -215,7 +220,7 @@ ExternalDNS uses this annotation to determine what services should be registered
Create the deployment and service:
```console
kubectl create -f nginx.yaml
```
Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.
@ -234,7 +239,7 @@ This should show the external IP address of the service as the A record for your
Now that we have verified that ExternalDNS will automatically manage Scaleway DNS records, we can delete the tutorial's example:
```sh
kubectl delete service -f nginx.yaml
kubectl delete service -f externaldns.yaml
```
View File
@ -10,10 +10,12 @@ Tencent Cloud DNSPod Service is the domain name resolution and management servic
Tencent Cloud PrivateDNS Service is the domain name resolution and management service for VPC internal access.
* If you want to use the internal DNS service in Tencent Cloud:
1. Set up the args `--tencent-cloud-zone-type=private`
2. Create a DNS domain in the PrivateDNS console. This DNS domain will contain the managed DNS records.
* If you want to use the public DNS service in Tencent Cloud:
1. Set up the args `--tencent-cloud-zone-type=public`
2. Create a domain in the DNSPod console. This DNS domain will contain the managed DNS records.
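For instance, a sketch of the corresponding container argument; use exactly one of the two values:

```sh
--tencent-cloud-zone-type=private  # PrivateDNS zones, resolved inside the VPC
--tencent-cloud-zone-type=public   # DNSPod public zones
```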
@ -55,9 +57,9 @@ In Tencent CAM Console. you may get the secretId and secretKey pair. make sure t
}
```
## Deploy ExternalDNS
### Manifest (for clusters with RBAC enabled)
```yaml
apiVersion: v1
@ -163,9 +165,9 @@ spec:
name: config-volume
```
## Example
### Service
```yaml
apiVersion: v1
@ -210,7 +212,5 @@ spec:
`nginx-internal.external-dns-test.com` will resolve to the ClusterIP.
All of the DNS records will have a TTL of 600.
> [!WARNING]
> This makes ExternalDNS safe for running in environments where there are other records managed via other means.
View File
@ -9,12 +9,13 @@ Make sure to use **>=0.5.14** version of ExternalDNS for this tutorial, have at
To use the TransIP API you need an account at TransIP and enable API usage as described in the [knowledge base](https://www.transip.eu/knowledgebase/entry/77-want-use-the-transip-api/). With the private key generated by the API, we create a kubernetes secret:
```console
kubectl create secret generic transip-api-key --from-file=transip-api-key=/path/to/private.key
```
## Deploy ExternalDNS
Below are example manifests, for clusters both with and without RBAC enabled. Don't forget to replace `YOUR_TRANSIP_ACCOUNT_NAME` with your TransIP account name.
In these examples, an example domain-filter is defined. Such a filter can be used to prevent ExternalDNS from touching any domain not listed in the filter. Refer to the docs for any other command-line parameters you might want to use.
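As a sketch, the relevant container arguments could look like the following; the domain is a placeholder and the key file path is assumed to be where the secret created above is mounted:

```yaml
args:
  - --source=service
  - --provider=transip
  - --transip-account=YOUR_TRANSIP_ACCOUNT_NAME
  - --transip-keyfile=/transip/transip-api-key  # assumed mount path of the secret
  - --domain-filter=example.com                 # placeholder domain filter
```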
### Manifest (for clusters without RBAC enabled)
@ -171,7 +172,7 @@ ExternalDNS uses this annotation to determine what services should be registered
Create the deployment and service:
```console
kubectl create -f nginx.yaml
```
Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.
View File
@ -24,6 +24,7 @@ Then, apply one of the following manifests file to deploy ExternalDNS.
- Note: We are assuming the zone is already present within UltraDNS.
- Note: While creating CNAMEs as target endpoints, the `--txt-prefix` option is mandatory.
### Manifest (for clusters without RBAC enabled)
```yaml
@ -46,7 +47,7 @@ spec:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.15.1
args:
- --source=service
- --source=ingress # ingress is also possible
- --domain-filter=example.com # (Recommended) We recommend using this filter as it minimizes the time to propagate changes, since there are fewer zones to look into.
- --provider=ultradns
@ -118,7 +119,7 @@ spec:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.15.1
args:
- --source=service
- --source=ingress
- --domain-filter=example.com # (Recommended) We recommend using this filter as it minimizes the time to propagate changes, since there are fewer zones to look into.
- --provider=ultradns
@ -178,11 +179,11 @@ Please note the annotation on the service. Use the same hostname as the UltraDNS
ExternalDNS uses this annotation to determine what services should be registered with DNS. Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.
## Creating the Deployment and Service
```console
kubectl create -f nginx.yaml
kubectl create -f external-dns.yaml
```
Depending on where you run your service from, it can take a few minutes for your cloud provider to create an external IP for the service.
@ -203,13 +204,17 @@ The external IP address will be displayed as a CNAME record for your zone.
Now that we have verified that ExternalDNS will automatically manage your UltraDNS records, you can delete example zones that you created in this tutorial:
```sh
kubectl delete service -f nginx.yaml
kubectl delete service -f externaldns.yaml
```
## Examples to Manage your Records
### Creating Multiple A Records Target
- First, you want to create a service file called 'apple-banana-echo.yaml'
```yaml
---
kind: Pod
@ -235,7 +240,9 @@ spec:
ports:
- port: 5678 # Default port for image
```
- Then, create a service file called 'expose-apple-banana-app.yaml' to expose the services. For more information on deploying an ingress controller, refer to https://kubernetes.github.io/ingress-nginx/deploy/
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
@ -258,24 +265,31 @@ spec:
port:
number: 5678
```
- Then, create the deployment and service:
```console
kubectl create -f apple-banana-echo.yaml
kubectl create -f expose-apple-banana-app.yaml
kubectl create -f external-dns.yaml
```
- Depending on where you run your service from, it can take a few minutes for your cloud provider to create an external IP for the service.
- Please verify on the [UltraDNS UI](https://portal.ultradns.com/login) that the records have been created under the zone "example.com".
- Finally, you will need to clean up the deployment and service. Please verify on the UI afterwards that the records have been deleted from the zone "example.com":
```console
kubectl delete -f apple-banana-echo.yaml
kubectl delete -f expose-apple-banana-app.yaml
kubectl delete -f external-dns.yaml
```
### Creating CNAME Record
- Please note that prior to deploying the external-dns service, you will need to add the option `--txt-prefix=txt-` to external-dns.yaml. If this is not provided, your records will not be created.
- First, create a service file called 'apple-banana-echo.yaml'
- _Config file example: Kubernetes cluster on-premise, not on cloud_
```yaml
---
kind: Pod
@ -321,7 +335,9 @@ $ kubectl delete -f external-dns.yaml
port:
number: 5678
```
- _Config file example: Kubernetes cluster service from different cloud vendors_
```yaml
---
kind: Pod
@ -352,22 +368,29 @@ $ kubectl delete -f external-dns.yaml
port: 5678
targetPort: 5678
```
- Then, create the deployment and service:
```console
kubectl create -f apple-banana-echo.yaml
kubectl create -f external-dns.yaml
```
- Depending on where you run your service from, it can take a few minutes for your cloud provider to create an external IP for the service.
- Please verify on the [UltraDNS UI](https://portal.ultradns.com/login), that the records have been created under the zone "example.com".
- Finally, you will need to clean up the deployment and service. Please verify on the UI afterwards that the records have been deleted from the zone "example.com":
```console
kubectl delete -f apple-banana-echo.yaml
kubectl delete -f external-dns.yaml
```
### Creating Multiple Types Of Records
- Please note that prior to deploying the external-dns service, you will need to add the option `--txt-prefix=txt-` to external-dns.yaml. Since you will also be creating a CNAME record, your records will not be created if this is not provided.
- First, create a service file called 'apple-banana-echo.yaml'
- _Config file example: Kubernetes cluster on-premise, not on cloud_
```yaml
---
kind: Pod
@ -499,7 +522,9 @@ $ kubectl delete -f external-dns.yaml
port:
number: 5680
```
- _Config file example: Kubernetes cluster service from different cloud vendors_
```yaml
---
apiVersion: apps/v1
@ -623,14 +648,18 @@ $ kubectl delete -f external-dns.yaml
port:
number: 5679
```
- Then, create the deployment and service:
```console
kubectl create -f apple-banana-echo.yaml
kubectl create -f external-dns.yaml
```
- Depending on where you run your service from, it can take a few minutes for your cloud provider to create an external IP for the service.
- Please verify on the [UltraDNS UI](https://portal.ultradns.com/login), that the records have been created under the zone "example.com".
- Finally, you will need to clean up the deployment and service. Please verify on the UI afterwards that the records have been deleted from the zone "example.com":
```console
kubectl delete -f apple-banana-echo.yaml
kubectl delete -f external-dns.yaml
```
View File
@ -2,7 +2,8 @@
The "Webhook" provider allows integrating ExternalDNS with DNS providers through an HTTP interface.
The Webhook provider implements the `Provider` interface. Instead of implementing code specific to a provider, it implements an HTTP client that sends requests to an HTTP API.
The idea behind it is that providers can be implemented in separate programs: these programs expose an HTTP API that the Webhook provider interacts with.
The ideal setup for providers is to run as a sidecar in the same pod as the ExternalDNS container, listening only on localhost. This is not strictly a requirement, but we do not recommend other setups.
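As an illustration, a minimal sketch of that sidecar layout; the provider name, image, and port are hypothetical:

```yaml
containers:
  - name: external-dns
    image: registry.k8s.io/external-dns/external-dns:v0.15.1
    args:
      - --source=service
      - --provider=webhook  # ExternalDNS talks to the sidecar over HTTP on localhost
  - name: webhook-provider  # hypothetical provider implementation
    image: example.org/my-webhook-provider:latest
    ports:
      - containerPort: 8888  # assumed localhost-only listener
```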
## Architectural diagram
@ -10,13 +11,14 @@ The idea behind it is that providers can be implemented in separate programs: th
## API guarantees
Providers implementing the HTTP API have to keep in sync with changes to the JSON serialization of Go types `plan.Changes`, `endpoint.Endpoint`, and `endpoint.DomainFilter`.
Given the maturity of the project, we do not expect to make significant changes to those types, but can't exclude the possibility that changes will need to happen.
We commit to publishing changes to those in the release notes, to ensure that implementers of the API can keep their providers up to date quickly.
## Implementation requirements
The following table represents the methods to implement mapped to their HTTP method and route.
### Provider endpoints
| Provider method | HTTP Method | Route | Description |
@ -53,9 +55,10 @@ Custom annotations can be used to influence DNS record creation and updates. Pro
## Provider registry
To simplify the discovery of providers, we will accept pull requests that will add links to providers in this documentation.
This list will only serve the purpose of simplifying finding providers and will not constitute an official endorsement of any of the externally implemented providers unless otherwise stated.
## Run an ExternalDNS in-tree provider as a webhook
To test the Webhook provider and provide a reference implementation, we added the functionality to run ExternalDNS as a webhook. To run the AWS provider as a webhook, you need the following flags:
View File
@ -4,4 +4,4 @@
"path": "."
}
]
}
View File
@ -41,11 +41,13 @@ const markdownTemplate = `# Flags
<!-- THIS FILE MUST NOT BE EDITED BY HAND -->
<!-- ON NEW FLAG ADDED PLEASE RUN 'make generate-flags-documentation' -->
<!-- markdownlint-disable MD013 -->
| Flag | Description |
| :------ | :----------- |
{{- range . }}
| {{ .Name }} | {{ .Description }} |
{{- end -}}
`
// It generates a markdown file
@ -61,6 +63,7 @@ func main() {
if err != nil {
_ = fmt.Errorf("failed to generate markdown file '%s': %v\n", path, err.Error())
}
content = content + "\n"
_ = writeToFile(path, content)
}
View File
@ -73,6 +73,7 @@ func TestFlagsMdUpToDate(t *testing.T) {
flags := computeFlags()
actual, err := flags.generateMarkdownTable()
assert.NoError(t, err)
actual = actual + "\n"
assert.True(t, len(expected) == len(actual), "expected file '%s' to be up to date. execute 'make generate-flags-documentation", fileName)
}
scripts/update_route53_k8s_txt_owner.py Normal file → Executable file
View File