This file was never truly necessary and has never actually been used in
the history of Tailscale's open source releases.
A Brief History of AUTHORS files
---
The AUTHORS file was a pattern developed at Google, originally for
Chromium, then adopted by Go and a bunch of other projects. The problem
was that Chromium originally had a copyright line only recognizing
Google as the copyright holder. Because Google (and most open source
projects) do not require copyright assignment for contributions, each
contributor maintains their copyright. Some large corporate contributors
then tried to add their own name to the copyright line in the LICENSE
file or in file headers. This quickly becomes unwieldy, and puts a
tremendous burden on anyone building on top of Chromium, since the
license requires that they keep all copyright lines intact.
The compromise was to create an AUTHORS file that would list all of the
copyright holders. The LICENSE file and source file headers would then
include that list by reference, listing the copyright holder as "The
Chromium Authors".
Keeping the file up to date also became cumbersome given the high
rate of new contributors. Plus it's not always obvious who the
copyright holder is. Sometimes it is the individual making the
contribution, but many times it may be their employer. There is no way
for the project maintainer to know.
Eventually, Google changed their policy to no longer recommend trying to
keep the AUTHORS file up to date proactively, and instead to only add to
it when requested: https://opensource.google/docs/releasing/authors.
They are also clear that:
> Adding contributors to the AUTHORS file is entirely within the
> project's discretion and has no implications for copyright ownership.
The AUTHORS file was primarily added to appease a small number of large
contributors who insisted that they be recognized as copyright holders (which was
entirely their right to do). But it's not truly necessary, and not even
the most accurate way of identifying contributors and/or copyright
holders.
In practice, we've never added anyone to our AUTHORS file. It only lists
Tailscale, so it's not really serving any purpose. It also causes
confusion because Tailscalars put the "Tailscale Inc & AUTHORS" header
in other open source repos which don't actually have an AUTHORS file, so
it's ambiguous what that means.
Instead, we just acknowledge that the contributors to Tailscale (whoever
they are) are copyright holders for their individual contributions. We
also have the benefit of using the DCO (developercertificate.org) which
provides some additional certification of their right to make the
contribution.
The source file changes were purely mechanical with:
git ls-files | xargs sed -i -e 's/\(Tailscale Inc &\) AUTHORS/\1 contributors/g'
Updates #cleanup
Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
This commit contains the implementation of multi-tailnet support within the Kubernetes Operator.
Each of our custom resources now expose the `spec.tailnet` field. This field is a string that must match the name of an existing `Tailnet` resource. A `Tailnet` resource looks like this:
```yaml
apiVersion: tailscale.com/v1alpha1
kind: Tailnet
metadata:
  name: example # This is the name that must be referenced by other resources
spec:
  credentials:
    secretName: example-oauth
```
Each `Tailnet` references a `Secret` resource that contains a set of oauth credentials. This secret must be created in the same namespace as the operator:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-oauth # This is the name that's referenced by the Tailnet resource.
  namespace: tailscale
stringData:
  client_id: "client-id"
  client_secret: "client-secret"
```
When created, the operator performs a basic check that the oauth client has access to all required scopes. This is done using read actions on devices, keys & services. While this doesn't capture a missing "write" permission, it catches completely missing permissions. Once this check passes, the `Tailnet` moves into a ready state and can be referenced. Attempting to use a `Tailnet` in a non-ready state will stall the deployment of `Connector`s, `ProxyGroup`s and `Recorder`s until the `Tailnet` becomes ready.
The `spec.tailnet` field informs the operator that a `Connector`, `ProxyGroup`, or `Recorder` must be given an auth key generated using the specified oauth client. For backwards compatibility, the set of credentials the operator is configured with are considered the default. That is, where `spec.tailnet` is not set, the resource will be deployed in the same tailnet as the operator.
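For illustration, a sketch of a `ProxyGroup` pinned to the `example`
`Tailnet` (only `spec.tailnet` is the new field; the rest of the spec is
a minimal egress ProxyGroup and the exact shape is per the CRD):
```yaml
apiVersion: tailscale.com/v1alpha1
kind: ProxyGroup
metadata:
  name: example-proxies
spec:
  type: egress
  tailnet: example # Must name a Tailnet resource in a ready state.
```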
Updates https://github.com/tailscale/corp/issues/34561
ArgoCD sends boolean values but the template expects strings, causing
"incompatible types for comparison" errors. Wrap values with toString
so both work.
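A minimal sketch of the pattern (the value name here is hypothetical):
```yaml
# Normalize before comparing, so a bool `true` sent by ArgoCD and the
# string "true" from values.yaml both match:
{{- if eq (toString .Values.someFeature.enabled) "true" }}
# ... render the guarded resources ...
{{- end }}
```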
Fixes #17158
Signed-off-by: Raj Singh <raj@tailscale.com>
The feature is currently in private alpha, so it requires a tailnet feature
flag. It initially focuses on supporting the operator's own auth, because the
operator is the only device we maintain that uses static long-lived
credentials. All other operator-created devices use single-use auth keys.
Testing steps:
* Create a cluster with an API server accessible over public internet
* kubectl get --raw /.well-known/openid-configuration | jq '.issuer'
* Create a federated OAuth client in the Tailscale admin console with:
* The issuer from the previous step
* Subject claim `system:serviceaccount:tailscale:operator`
* Write scopes services, devices:core, auth_keys
* Tag tag:k8s-operator
* Allow the Tailscale control plane to get the public portion of
the ServiceAccount token signing key without authentication:
* kubectl create clusterrolebinding oidc-discovery \
--clusterrole=system:service-account-issuer-discovery \
--group=system:unauthenticated
* helm install --set oauth.clientId=... --set oauth.audience=...
Updates #17457
Change-Id: Ib29c85ba97b093c70b002f4f41793ffc02e6c6e9
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Add a .gitignore for the chart version of the CRDs that we never commit,
because the static manifest CRD files are the canonical version. This
makes it easier to deploy the CRDs via the helm chart in a way that
reflects the production workflow without making the git checkout
"dirty".
Given that the chart CRDs are ignored, we can also now safely generate
them for the kube-generate-all Makefile target without being a nuisance
to the state of the git checkout. Added a slightly more robust repo root
detection to the generation logic to make sure the command works from
the context of both the Makefile and the image builder command we run
for releases in corp.
Updates tailscale/corp#32085
Change-Id: Id44a4707c183bfaf95a160911ec7a42ffb1a1287
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Adds a new k8s-proxy command to convert the operator's in-process proxy to
a separately deployable type of ProxyGroup: kube-apiserver. k8s-proxy
reads in a new config file written by the operator, modelled on tailscaled's
conffile but with some modifications to ensure multiple versions of the
config can co-exist within a file. This should make it much easier to
support reading that config file from a Kube Secret with a stable file name.
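A sketch of the intent (shape and field names hypothetical, not the
actual schema): each version of the config lives under its own key, so
the file in the Secret can keep a stable name across upgrades:
```yaml
# Hypothetical shape only: old and new proxy versions each read their
# own section of the same file, so the Secret key never has to change.
v1alpha1:
  authMode: true
```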
To avoid needing to give the operator ClusterRole{,Binding} permissions,
the helm chart now optionally deploys a new static ServiceAccount for
the API Server proxy to use if in auth mode.
Proxies deployed by kube-apiserver ProxyGroups currently work the same as
the operator's in-process proxy. They do not yet leverage Tailscale Services
for presenting a single HA DNS name.
Updates #13358
Change-Id: Ib6ead69b2173c5e1929f3c13fb48a9a5362195d8
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Based on feedback that it wasn't clear what the user is meant to do with
the output of the last command, clarify that it's an optional command to
explore what got created.
Updates #13427
Change-Id: Iff64ec6d02dc04bf4bbebf415d7ed1a44e7dd658
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
This commit modifies the k8s operator to allow for customisation of the ingress class name
via a new `OPERATOR_INGRESS_CLASS_NAME` environment variable. For backwards compatibility,
this defaults to `tailscale`.
When using helm, a new `ingress.name` value is provided that will set this environment variable
and modify the name of the deployed `IngressClass` resource.
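For example, a Helm values sketch (class name illustrative):
```yaml
# Renames the deployed IngressClass and sets
# OPERATOR_INGRESS_CLASS_NAME on the operator accordingly.
ingress:
  name: tailscale-prod
```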
Fixes https://github.com/tailscale/tailscale/issues/16248
Signed-off-by: David Bond <davidsbond93@gmail.com>
This commit modifies the operator helm chart values to bring the newly
added `loginServer` field to the top level. We felt as though it was a bit
confusing to be at the `operatorConfig` level as this value modifies the
behaviour of the operator, API server & all resources that the operator
manages.
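A values sketch of the new layout (URL illustrative):
```yaml
# loginServer now sits at the top level of the chart values,
# rather than under operatorConfig.
loginServer: https://login.example.com
```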
Updates https://github.com/tailscale/corp/issues/29847
Signed-off-by: David Bond <davidsbond93@gmail.com>
This commit modifies the kubernetes operator to allow for customisation of the tailscale
login URL. This provides some data locality for people who want to configure it.
This value is set in the `loginServer` helm value and is propagated down to all resources
managed by the operator. The only exception to this is recorder nodes, where additional
changes are required to support modifying the URL.
Updates https://github.com/tailscale/corp/issues/29847
Signed-off-by: David Bond <davidsbond93@gmail.com>
This commit adds a NOTES.txt to the operator helm chart that will be written to the
terminal upon successful installation of the operator.
It includes a small list of knowledgebase articles with possible next steps for
the person who installed the operator to the cluster. It also provides example
commands for explaining the custom resources.
Fixes #13427
Signed-off-by: David Bond <davidsbond93@gmail.com>
This reconciler allows users to make applications highly available at L3 by
leveraging Tailscale Virtual Services. Many Kubernetes Services
(irrespective of the cluster they reside in) can be mapped to a
Tailscale Virtual Service, allowing access to these Services at L3.
Updates #15895
Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
cmd/k8s-operator: configure HA Ingress replicas to share certs
Creates TLS certs Secret and RBAC that allows HA Ingress replicas
to read/write to the Secret.
Configures HA Ingress replicas to run in read-only mode.
Updates tailscale/corp#24795
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This change builds on top of #14436 to ensure minimum downtime during egress ProxyGroup update rollouts:
- adds a readiness gate for ProxyGroup replicas that prevents kubelet from marking
the replica Pod as ready before a corresponding readiness condition has been added
to the Pod
- adds a reconciler that reconciles egress ProxyGroup Pods and, for each that is not ready,
checks whether cluster traffic for the relevant egress endpoints is routed via that Pod;
if so, it adds the readiness condition to allow kubelet to mark the Pod as ready.
During sequenced StatefulSet update rollouts, a Pod is not restarted
before the previous replica has been updated and marked as ready, so
holding back readiness avoids a temporary post-update situation where
all replicas have been restarted, but none of the new ones is yet set
up as an endpoint for the egress service, and cluster traffic is dropped.
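A sketch of the readiness gate on the proxy Pod spec (the condition
type name here is illustrative):
```yaml
# kubelet will not mark the Pod Ready until a controller sets this
# condition on the Pod, in addition to the usual readiness probes.
spec:
  readinessGates:
    - conditionType: tailscale.com/egress-services
```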
Updates tailscale/tailscale#14326
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/k8s-operator/deploy/chart: allow reading OAuth creds from a CSI driver's volume and annotating operator's Service account
Updates #14264
Signed-off-by: Oliver Rahner <o.rahner@dke-data.com>
* cmd/k8s-operator,k8s-operator,go.mod: optionally create ServiceMonitor
Adds a new spec.metrics.serviceMonitor field to ProxyClass.
If that's set to true (and metrics are enabled), the operator
will create a Prometheus ServiceMonitor for each proxy to which
the ProxyClass applies.
Additionally, create a metrics Service for each proxy that has
metrics enabled.
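A sketch of the new field (exact shape per the generated CRD):
```yaml
apiVersion: tailscale.com/v1alpha1
kind: ProxyClass
metadata:
  name: metrics
spec:
  metrics:
    enable: true # Metrics must also be enabled for the ServiceMonitor.
    serviceMonitor:
      enable: true # Operator creates a ServiceMonitor per proxy.
```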
Updates tailscale/tailscale#11292
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
A small follow-up to #14112: ensures that the operator itself can emit
Events for its kube state store changes.
Updates tailscale/tailscale#14080
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Adds functionality to kube client to emit Events.
Updates kube store to emit Events when tailscaled state has been loaded or updated, or if any errors were
encountered during those operations.
This should help in cases where an error related to state loading/updating caused the Pod to crash in a loop:
unlike logs of the originally failed container instance, Events associated with the Pod will still be
accessible even after N restarts.
Updates tailscale/tailscale#14080
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Adds a new reconciler that reconciles ExternalName Services that define a
tailnet target that should be exposed to cluster workloads on a ProxyGroup's
proxies.
The reconciler ensures that for each such service, the config mounted to
the proxies is updated with the tailnet target definition and that
an EndpointSlice and a ClusterIP Service are created for the service.
Adds a new reconciler that ensures that as proxy Pods become ready to route
traffic to a tailnet target, the EndpointSlice for the target is updated
with the Pods' endpoints.
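For illustration, a sketch of such an ExternalName Service (annotation
names assumed; tailscale.com/tailnet-fqdn appears elsewhere in this
history, and the FQDN and names are illustrative):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: tailnet-db
  annotations:
    tailscale.com/tailnet-fqdn: db.example.ts.net # tailnet target
    tailscale.com/proxy-group: egress-proxies     # ProxyGroup to route via
spec:
  type: ExternalName
  externalName: placeholder # Operator creates the EndpointSlice and ClusterIP Service.
```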
Updates tailscale/tailscale#13406
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/k8s-operator,k8s-operator,kube: Add TSRecorder CRD + controller
Deploys tsrecorder images to the operator's cluster. S3 storage is
configured via environment variables from a k8s Secret. Currently
only supports a single tsrecorder replica, but I've tried to take early
steps towards supporting multiple replicas by e.g. having a separate
secret for auth and state storage.
Example CR:
```yaml
apiVersion: tailscale.com/v1alpha1
kind: Recorder
metadata:
  name: rec
spec:
  enableUI: true
```
Updates #13298
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
cmd/k8s-operator/deploy: replace wildcards in Kubernetes Operator RBAC role definitions with verbs
Fixes #13168
Signed-off-by: Pierig Le Saux <pierig@n3xt.io>
cmd/k8s-operator/deploy/chart: Support image 'repo' or 'repository' keys in helm values
Fixes #12100
Signed-off-by: Michael Long <michaelongdev@gmail.com>
cmd/k8s-operator: optionally update dnsrecords Configmap with DNS records for proxies.
This commit adds functionality to automatically populate
DNS records for the in-cluster ts.net nameserver
to allow cluster workloads to resolve MagicDNS names
associated with operator's proxies.
The records are created as follows:
* For tailscale Ingress proxies there will be
a record mapping the MagicDNS name of the Ingress
device to each proxy Pod's IP address.
* For cluster egress proxies, configured via
tailscale.com/tailnet-fqdn annotation, there will be
a record for each proxy Pod, mapping
the MagicDNS name of the exposed
tailnet workload to the proxy Pod's IP.
No records will be created for any other proxy types.
Records will only be created if users have configured
the operator to deploy an in-cluster ts.net nameserver
by applying tailscale.com/v1alpha1.DNSConfig.
It is the user's responsibility to add the ts.net nameserver
as a stub nameserver for ts.net DNS names.
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configuration-of-stub-domain-and-upstream-nameserver-using-coredns
https://cloud.google.com/kubernetes-engine/docs/how-to/kube-dns#upstream_nameservers
See also https://github.com/tailscale/tailscale/pull/11017
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/k8s-operator/deploy/chart: allow users to configure additional labels for the operator's Pod via Helm chart values.
Fixes #11947
Signed-off-by: Gabe Gorelick <gabe@hightouch.io>
* cmd/k8s-nameserver,k8s-operator: add a nameserver that can resolve ts.net DNS names in cluster.
Adds a simple nameserver that can respond to A record queries for ts.net DNS names.
It can respond to queries from in-memory records, populated from a ConfigMap
mounted at /config. It dynamically updates its records as the ConfigMap
contents change.
It will respond with NXDOMAIN to queries for any other record types
(AAAA to be implemented in the future).
It can respond to queries over UDP or TCP. It runs a miekg/dns
DNS server with a single registered handler for ts.net domain names.
Queries for other domain names will be refused.
The intended use of this is:
1) to allow non-tailnet cluster workloads to talk to HTTPS tailnet
services exposed via Tailscale operator egress over HTTPS
2) to allow non-tailnet cluster workloads to talk to workloads in
the same cluster that have been exposed to tailnet over their
MagicDNS names but on their cluster IPs.
DNSConfig CRD can be used to configure
the operator to deploy kube nameserver (./cmd/k8s-nameserver) to cluster.
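A minimal DNSConfig sketch (field shape per the alpha CRD; image
overrides omitted):
```yaml
apiVersion: tailscale.com/v1alpha1
kind: DNSConfig
metadata:
  name: ts-dns
spec:
  nameserver: {} # Deploys the kube nameserver with default settings.
```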
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator/deploy/crds,k8s-operator: add DNSConfig CustomResource Definition
DNSConfig CRD can be used to configure
the operator to deploy kube nameserver (./cmd/k8s-nameserver) to cluster.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator,k8s-operator: optionally reconcile nameserver resources
Adds a new reconciler that reconciles DNSConfig resources.
If a DNSConfig is deployed to cluster,
the reconciler creates kube nameserver resources.
This reconciler is only responsible for creating
nameserver resources and not for populating the nameserver's records.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/{k8s-operator,k8s-nameserver}: generate DNSConfig CRD for charts, append to static manifests
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
---------
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator,k8s-operator: introduce proxy configuration mechanism via ProxyClass custom resource.
ProxyClass custom resource can be used to specify customizations
for the proxy resources created by the operator.
Add a reconciler that validates ProxyClass resources
and sets a Ready condition to True or False with a corresponding reason and message.
This is required because some fields (labels and annotations)
require complex validations that cannot be performed at custom resource apply time.
Reconcilers that use the ProxyClass to configure proxy resources are expected to
verify that the ProxyClass is Ready and not proceed with resource creation
if configuration from a ProxyClass that is not yet Ready is required.
If a tailscale ingress/egress Service is annotated with a
tailscale.com/proxy-class annotation, look up the corresponding
ProxyClass and, if it is Ready, apply the configuration from the
ProxyClass to the proxy's StatefulSet.
If a tailscale Ingress has a tailscale.com/proxy-class annotation
and the referenced ProxyClass custom resource is available and Ready,
apply configuration from the ProxyClass to the proxy resources
that will be created for the Ingress.
Add a new .proxyClass field to the Connector spec.
If connector.spec.proxyClass is set to a ProxyClass that is available and Ready,
apply configuration from the ProxyClass to the proxy resources created for the Connector.
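For instance, a sketch of a ProxyClass and a Service that opts into it
(field shapes and label contents illustrative):
```yaml
apiVersion: tailscale.com/v1alpha1
kind: ProxyClass
metadata:
  name: prod
spec:
  statefulSet:
    pod:
      labels:
        team: infra # Labels like this are validated by the reconciler.
---
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    tailscale.com/expose: "true"    # Existing proxying annotation.
    tailscale.com/proxy-class: prod # Applied only once the ProxyClass is Ready.
spec:
  selector:
    app: web
  ports:
    - port: 80
```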
Ensure that when the Helm chart is packaged, the ProxyClass yaml is added to the chart templates, and that the static manifest generator adds the ProxyClass yaml to operator.yaml. Regenerate operator.yaml.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator/deploy: deploy a Tailscale IngressClass resource.
Some Ingress validating webhooks reject Ingresses with
.spec.ingressClassName for which there is no matching IngressClass.
Additionally, validate that the expected IngressClass is present,
when parsing a tailscale `Ingress`.
We currently do not utilize the IngressClass;
however, we might in the future, at which point
we might start requiring that the right class
for this controller instance actually exists.
Updates tailscale/tailscale#10820
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Anton Tolchanov <anton@tailscale.com>
The configuration knob (which defaulted to the Connector being disabled)
was added largely because the Connector CRD had to be installed in a separate step.
Now that the CRD has been added to both the chart and the static manifest, we can have it on by default.
Updates tailscale/tailscale#10878
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* k8s-operator,cmd/k8s-operator,Makefile,scripts,.github/workflows: add Connector kube CRD.
Connector CRD allows users to configure the Tailscale Kubernetes operator
to deploy a subnet router to expose cluster CIDRs or
other CIDRs available from within the cluster
to their tailnet.
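An illustrative Connector (CIDR illustrative):
```yaml
apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: prod-subnet-router
spec:
  subnetRouter:
    advertiseRoutes:
      - "10.40.0.0/14" # Cluster or other in-cluster-reachable CIDRs.
```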
Also adds various CRD related machinery to
generate CRD YAML, deep copy implementations etc.
Engineers will now have to run
`make kube-generate-all` after changing kube files
to ensure that all generated files are up to date.
* cmd/k8s-operator,k8s-operator: reconcile Connector resources
Reconcile Connector resources, create/delete subnet router resources in response to changes to Connector(s).
Connector reconciler will not be started unless
ENABLE_CONNECTOR env var is set to true.
This means that users who don't want to use the alpha
Connector custom resource don't have to install the Connector
CRD to their cluster.
For users who do want to use it the flow is:
- install the CRD
- install the operator (via Helm chart or using static manifests).
For Helm users, set .values.enableConnector to true; for static
manifest users, set ENABLE_CONNECTOR to true in the static manifest
(values sketch below).
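Helm values sketch for step two:
```yaml
# Opt in to the alpha Connector reconciler (default: false).
enableConnector: true
```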
Updates tailscale/tailscale#502
Signed-off-by: Irbe Krumina <irbe@tailscale.com>