* Add annotation filter to Ambassador Host Source

This change makes the Ambassador Host source respect the External-DNS annotationFilter, allowing an Ambassador Host resource to specify which External-DNS deployment to use when there are multiple External-DNS deployments within the same cluster. Before this change, if you had two External-DNS deployments within the cluster and used the Ambassador Host source, the first External-DNS to process the resource would create the record, not the one specified in the filter annotation. I added the `filterByAnnotations` function so that it matches the way the other sources have implemented annotation filtering. I didn't add the controller check only because I wanted to keep this change to implementing the annotationFilter.

Example: Create two External-DNS deployments, one public and one private, and set the Ambassador Host to use the public External-DNS via the annotation filter.

```
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns-private
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns-private
  template:
    metadata:
      labels:
        app: external-dns-private
      annotations:
        iam.amazonaws.com/role: {ARN} # AWS ARN role
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:latest
        args:
        - --source=ambassador-host
        - --domain-filter=example.net # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
        - --provider=aws
        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
        - --aws-zone-type=private # only look at private hosted zones (valid values are public, private or no value for both)
        - --registry=txt
        - --txt-owner-id={Hosted Zone ID} # Insert Route53 Hosted Zone ID here
        - --annotation-filter=kubernetes.io/ingress.class in (private)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns-public
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns-public
  template:
    metadata:
      labels:
        app: external-dns-public
      annotations:
        iam.amazonaws.com/role: {ARN} # AWS ARN role
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:latest
        args:
        - --source=ambassador-host
        - --domain-filter=example.net # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
        - --provider=aws
        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
        - --aws-zone-type= # only look at public hosted zones (valid values are public, private or no value for both)
        - --registry=txt
        - --txt-owner-id={Hosted Zone ID} # Insert Route53 Hosted Zone ID here
        - --annotation-filter=kubernetes.io/ingress.class in (public)
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: your-hostname
  annotations:
    external-dns.ambassador-service: emissary-ingress/emissary
    kubernetes.io/ingress.class: public
spec:
  acmeProvider:
    authority: none
  hostname: your-hostname.example.com
```

Fixes kubernetes-sigs/external-dns#2632

* Add label filtering for Ambassador Host source

Currently the `--label-filter` flag can only be used to filter CRD, Ingress, Service, and OpenShift Route objects which match the label selector passed through that flag. This change extends that functionality to the Ambassador Host object type. When the flag is not specified, the default value is `labels.Everything()`, which is an empty string, the same as before.

An annotation-based filter is inefficient because the filtering has to be done in the controller instead of in the API server, as with label filtering. Annotation-based filtering has been left in for legacy reasons, so the Ambassador Host source can be used in conjunction with the other sources that don't yet support label filtering.

It is possible to combine label-based filtering with annotation-based filtering, so you can initially filter by label and then filter the returned Hosts by annotation. This is not recommended.

* Update Ambassador Host source docs

Add that the Ambassador Host source now supports both annotation and label filtering.
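As an illustration of combining the two filters described above, a minimal sketch (the `team=platform` label and its value are assumptions made up for this example, not taken from the change itself):

```
# External-DNS container args (sketch): only watch Hosts carrying this label,
# then narrow further with the annotation filter
- --source=ambassador-host
- --label-filter=team=platform
- --annotation-filter=kubernetes.io/ingress.class in (public)
---
# A Host that the label filter selects server-side
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: your-hostname
  labels:
    team: platform
  annotations:
    kubernetes.io/ingress.class: public
spec:
  hostname: your-hostname.example.com
```

The label filter is evaluated by the API server, so non-matching Hosts are never delivered to the controller; the annotation filter then runs in the controller over the remaining Hosts.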
ExternalDNS
ExternalDNS synchronizes exposed Kubernetes Services and Ingresses with DNS providers.
Documentation
This README is a part of the complete documentation, available here.
What It Does
Inspired by Kubernetes DNS, Kubernetes' cluster-internal DNS server, ExternalDNS makes Kubernetes resources discoverable via public DNS servers. Like KubeDNS, it retrieves a list of resources (Services, Ingresses, etc.) from the Kubernetes API to determine a desired list of DNS records. Unlike KubeDNS, however, it's not a DNS server itself, but merely configures other DNS providers accordingly—e.g. AWS Route 53 or Google Cloud DNS.
In a broader sense, ExternalDNS allows you to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way.
The FAQ contains additional information and addresses several questions about key concepts of ExternalDNS.
To see ExternalDNS in action, have a look at this video or read this blogpost.
The Latest Release
ExternalDNS allows you to keep selected zones (via `--domain-filter`) synchronized with Ingresses and Services of `type=LoadBalancer` and nodes in various DNS providers:
- Google Cloud DNS
- AWS Route 53
- AWS Cloud Map
- AzureDNS
- BlueCat
- Civo
- CloudFlare
- RcodeZero
- DigitalOcean
- DNSimple
- Dyn
- OpenStack Designate
- PowerDNS
- CoreDNS
- Exoscale
- Oracle Cloud Infrastructure DNS
- Linode DNS
- RFC2136
- NS1
- TransIP
- VinylDNS
- Vultr
- OVH
- Scaleway
- Akamai Edge DNS
- GoDaddy
- Gandi
- ANS Group SafeDNS
- IBM Cloud DNS
- TencentCloud PrivateDNS
- TencentCloud DNSPod
- Plural
- Pi-hole
ExternalDNS is, by default, aware of the records it is managing, therefore it can safely manage non-empty hosted zones. We strongly encourage you to set `--txt-owner-id` to a unique value that doesn't change for the lifetime of your cluster. You might also want to run ExternalDNS in a dry run mode (`--dry-run` flag) to see the changes to be submitted to your DNS Provider API.
Note that all flags can be replaced with environment variables; for instance, `--dry-run` could be replaced with `EXTERNAL_DNS_DRY_RUN=1`.
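As a concrete sketch of that flag-to-environment-variable mapping (the `EXTERNAL_DNS_` prefix and `--dry-run` equivalence come from the text above; the owner-id variable name is inferred from the same uppercase/underscore pattern):

```
# Flag form
external-dns --dry-run --txt-owner-id=my-cluster-id

# Environment-variable form: EXTERNAL_DNS_ prefix, flag name uppercased,
# dashes replaced with underscores
EXTERNAL_DNS_DRY_RUN=1 \
EXTERNAL_DNS_TXT_OWNER_ID=my-cluster-id \
external-dns
```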
New providers
No new provider will be added to ExternalDNS in-tree.
ExternalDNS has introduced a webhook system, which can be used to add a new provider. See PR #3063 for all the discussions about it.
Known providers using webhooks:
Status of in-tree providers
ExternalDNS supports multiple DNS providers which have been implemented by the ExternalDNS contributors. Maintaining all of those in a central repository is a challenge, which introduces lots of toil and potential risks.
This means that `external-dns` has begun the process of moving providers out of tree. See #4347 for more details. Those who are interested can create a webhook provider based on an in-tree provider and then submit a PR to reference it here.
We define the following stability levels for providers:
- Stable: Used for smoke tests before a release, used in production and maintainers are active.
- Beta: Community supported, well tested, but maintainers have no access to resources to execute integration tests on the real platform and/or are not using it in production.
- Alpha: Community provided with no support from the maintainers apart from reviewing PRs.
The following table clarifies the current status of the providers according to the aforementioned stability levels:
Provider | Status | Maintainers |
---|---|---|
Google Cloud DNS | Stable | |
AWS Route 53 | Stable | |
AWS Cloud Map | Beta | |
Akamai Edge DNS | Beta | |
AzureDNS | Stable | |
BlueCat | Alpha | @seanmalloy @vinny-sabatini |
Civo | Alpha | @alejandrojnm |
CloudFlare | Beta | |
RcodeZero | Alpha | |
DigitalOcean | Alpha | |
DNSimple | Alpha | |
Dyn | Alpha | |
OpenStack Designate | Alpha | |
PowerDNS | Alpha | |
CoreDNS | Alpha | |
Exoscale | Alpha | |
Oracle Cloud Infrastructure DNS | Alpha | |
Linode DNS | Alpha | |
RFC2136 | Alpha | |
NS1 | Alpha | |
TransIP | Alpha | |
VinylDNS | Alpha | |
RancherDNS | Alpha | |
OVH | Alpha | |
Scaleway DNS | Alpha | @Sh4d1 |
Vultr | Alpha | |
UltraDNS | Alpha | |
GoDaddy | Alpha | |
Gandi | Alpha | @packi |
SafeDNS | Alpha | @assureddt |
IBMCloud | Alpha | @hughhuangzh |
TencentCloud | Alpha | @Hyzhou |
Plural | Alpha | @michaeljguarino |
Pi-hole | Alpha | @tinyzimmer |
Kubernetes version compatibility
A breaking change was added in external-dns v0.10.0.
ExternalDNS | <= 0.9.x | >= 0.10.0 |
---|---|---|
Kubernetes <= 1.18 | ✅ | ❌ |
Kubernetes >= 1.19 and <= 1.21 | ✅ | ✅ |
Kubernetes >= 1.22 | ❌ | ✅ |
Running ExternalDNS
There are two ways of running ExternalDNS:
- Deploying to a Cluster
- Running Locally
Deploying to a Cluster
The following tutorials are provided:
- Akamai Edge DNS
- Alibaba Cloud
- AWS
- Azure DNS
- Azure Private DNS
- Civo
- Cloudflare
- BlueCat
- CoreDNS
- DigitalOcean
- DNSimple
- Dyn
- Exoscale
- ExternalName Services
- Google Kubernetes Engine
- Headless Services
- Istio Gateway Source
- Kubernetes Security Context
- Linode
- Nginx Ingress Controller
- NS1
- NS Record Creation with CRD Source
- MX Record Creation with CRD Source
- OpenStack Designate
- Oracle Cloud Infrastructure (OCI) DNS
- PowerDNS
- RcodeZero
- RancherDNS (RDNS)
- RFC2136
- TransIP
- VinylDNS
- OVH
- Scaleway
- Vultr
- UltraDNS
- GoDaddy
- Gandi
- SafeDNS
- IBM Cloud
- Nodes as source
- TencentCloud
- Plural
- Pi-hole
Running Locally
See the contributor guide for details on compiling from source.
Setup Steps
Run an application and expose it via a Kubernetes Service:
```
kubectl run nginx --image=nginx --port=80
kubectl expose pod nginx --port=80 --target-port=80 --type=LoadBalancer
```
Annotate the Service with your desired external DNS name. Make sure to change `example.org` to your domain.
```
kubectl annotate service nginx "external-dns.alpha.kubernetes.io/hostname=nginx.example.org."
```
Optionally, you can customize the TTL value of the resulting DNS record by using the `external-dns.alpha.kubernetes.io/ttl` annotation:
```
kubectl annotate service nginx "external-dns.alpha.kubernetes.io/ttl=10"
```
For more details on configuring TTL, see here.
Use the `internal-hostname` annotation to create DNS records with the ClusterIP as the target.
```
kubectl annotate service nginx "external-dns.alpha.kubernetes.io/internal-hostname=nginx.internal.example.org."
```
If the Service is not of type `LoadBalancer`, you need the `--publish-internal-services` flag.
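Put together, a Service manifest carrying the annotations from the commands above might look like this (a sketch; the selector and port values are assumptions matching the `kubectl run nginx` example):

```
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.example.org.
    external-dns.alpha.kubernetes.io/internal-hostname: nginx.internal.example.org.
    external-dns.alpha.kubernetes.io/ttl: "10"
spec:
  type: LoadBalancer
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
```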
Locally, run a single sync loop of ExternalDNS:
```
external-dns --txt-owner-id my-cluster-id --provider google --google-project example-project --source service --once --dry-run
```
This should output the DNS records it will modify to match the managed zone with the DNS records you desire. It also assumes you are running in the `default` namespace. See the FAQ for more information regarding namespaces.
Note: TXT records will have the `my-cluster-id` value embedded. Those are used to ensure that ExternalDNS is aware of the records it manages.
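For example, the ownership TXT record written alongside the A record looks roughly like this (a sketch; the exact key/value layout varies between ExternalDNS versions):

```
nginx.example.org. 300 IN TXT "heritage=external-dns,external-dns/owner=my-cluster-id,external-dns/resource=service/default/nginx"
```

The `owner` field carries the `--txt-owner-id` value, which is how multiple ExternalDNS instances avoid touching each other's records.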
Once you're satisfied with the result, you can run ExternalDNS like you would run it in your cluster: as a control loop, and not in dry-run mode:
```
external-dns --txt-owner-id my-cluster-id --provider google --google-project example-project --source service
```
Check that ExternalDNS has created the desired DNS record for your Service and that it points to its load balancer's IP. Then try to resolve it:
```
dig +short nginx.example.org.
104.155.60.49
```
Now you can experiment and watch how ExternalDNS makes sure that your DNS records are configured as desired. Here are a couple of things you can try out:
- Change the desired hostname by modifying the Service's annotation.
- Recreate the Service and see that the DNS record will be updated to point to the new load balancer IP.
- Add another Service to create more DNS records.
- Remove Services to clean up your managed zone.
The tutorials section contains examples, including Ingress resources, and shows you how to set up ExternalDNS in different environments such as other cloud providers and alternative Ingress controllers.
Note
If using a txt registry and attempting to use a CNAME, the `--txt-prefix` must be set to avoid conflicts. Changing `--txt-prefix` will result in lost ownership over previously created records.
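As a sketch of setting the prefix (the `ednstxt-` value is an arbitrary example, not a recommended name), added to the invocation used earlier:

```
external-dns --txt-owner-id my-cluster-id --registry=txt --txt-prefix=ednstxt- --provider google --google-project example-project --source service
```

With a prefix, the ownership TXT record is created under a prefixed name rather than at the same name as the CNAME, which is what avoids the conflict.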
If an `externalIPs` list is defined for a `LoadBalancer` service, this list will be used instead of an assigned load balancer IP to create a DNS record. It's useful when you run bare-metal Kubernetes clusters behind NAT or in a similar setup, where a load balancer IP differs from a public IP (e.g. with MetalLB).
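A minimal sketch of such a Service (the IP is a documentation placeholder, and the selector/port values are assumptions):

```
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.example.org.
spec:
  type: LoadBalancer
  selector:
    run: nginx
  ports:
  - port: 80
  externalIPs:
  - 203.0.113.10  # ExternalDNS targets this IP instead of the LB-assigned one
```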
Contributing
Are you interested in contributing to external-dns? We, the maintainers and community, would love your suggestions, contributions, and help! Also, the maintainers can be contacted at any time to learn more about how to get involved.
We also encourage ALL active community participants to act as if they are maintainers, even if you don't have "official" write permissions. This is a community effort, we are here to serve the Kubernetes community. If you have an active interest and you want to get involved, you have real power! Don't assume that the only people who can get things done around here are the "maintainers". We also would love to add more "official" maintainers, so show us what you can do!
The external-dns project is currently in need of maintainers for specific DNS providers. Ideally each provider would have at least two maintainers. It would be nice if the maintainers ran the provider in production, but it is not strictly required. Providers listed here that do not have a maintainer listed are in need of assistance.
Read the contributing guidelines and have a look at the contributing docs to learn about building the project, the project structure, and the purpose of each package.
For an overview on how to write new Sources and Providers check out Sources and Providers.
Heritage
ExternalDNS is an effort to unify the following similar projects in order to bring the Kubernetes community an easy and predictable way of managing DNS records across cloud providers based on their Kubernetes resources:
- Kops' DNS Controller
- Zalando's Mate
- Molecule Software's route53-kubernetes
User Demo How-To Blogs and Examples
- A full demo on GKE Kubernetes. See How-to Kubernetes with DNS management (ssl-manager pre-req)
- Run external-dns on GKE with workload identity. See Kubernetes, ingress-nginx, cert-manager & external-dns
- ExternalDNS integration with Azure DNS using workload identity