With the advent of generics, reimplement the pointer helpers in-house and remove the `github.com/AlekSi/pointer` dependency.
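A minimal sketch of what such generics-based helpers could look like (the names `To` and `SafeDeref` are illustrative, not necessarily the actual replacement API):

```go
package main

import "fmt"

// To returns a pointer to the given value, replacing the per-type
// helpers previously pulled in from github.com/AlekSi/pointer.
func To[T any](v T) *T {
	return &v
}

// SafeDeref dereferences a pointer, returning the zero value for nil.
func SafeDeref[T any](p *T) T {
	if p == nil {
		var zero T

		return zero
	}

	return *p
}

func main() {
	enabled := To(true)

	fmt.Println(*enabled, SafeDeref[int](nil)) // true 0
}
```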
Signed-off-by: Dmitriy Matrenichev <dmitry.matrenichev@siderolabs.com>
The gist is that the `kubelet` service code now only manages the container
lifecycle, while the `kubelet` configuration is managed by controllers and
resources.
New resources:
* `secrets.Kubelet` contains Kubelet PKI derived directly from the
machine configuration
* `k8s.KubeletConfig` contains Kubelet non-secret config derived
directly from the machine configuration
* `k8s.NodeIPConfig` contains the configuration for picking the kubelet's
  Node IP (from the machine configuration)
* `k8s.NodeIP` contains the actual Node IPs, picked from the node addresses
  based on `NodeIPConfig`
* `k8s.KubeletSpec` contains the final `kubelet` container configuration,
  including merged arguments, the kubelet config, etc. It is derived from
  `KubeletConfig`, `Nodename`, and `NodeIP`.
The final controller, `KubeletServiceController`, writes the configuration
and PKI to disk and manages starting/restarting the `kubelet` service,
which is now a thin wrapper around the container lifecycle.
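To illustrate the kind of argument merging a `KubeletSpec` might involve, here is a hedged sketch (the helper and the sample flags are hypothetical, not the actual controller code):

```go
package main

import (
	"fmt"
	"sort"
)

// mergeArgs overlays user-supplied extra arguments on top of the
// defaults, mirroring how a final kubelet spec could combine inputs.
// Extra arguments win on conflict.
func mergeArgs(defaults, extra map[string]string) []string {
	merged := map[string]string{}

	for k, v := range defaults {
		merged[k] = v
	}

	for k, v := range extra {
		merged[k] = v
	}

	args := make([]string, 0, len(merged))

	for k, v := range merged {
		args = append(args, fmt.Sprintf("--%s=%s", k, v))
	}

	sort.Strings(args)

	return args
}

func main() {
	fmt.Println(mergeArgs(
		map[string]string{"container-runtime": "remote"},
		map[string]string{"node-labels": "example=true"},
	))
}
```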
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
The problem is that the kubelet kubeconfig gets created early, before the
actual client key and certificate files are written, so controllers spam
the logs with scary errors that the config is not valid. This PR removes
those scary messages while we wait for the kubeconfig to become usable.
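A rough sketch of the waiting approach (the helper and the certificate path are assumptions for illustration, not the actual Talos code):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until the given path exists, so callers can hold off
// on using a kubeconfig until its client certificate has been written.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)

	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}

		time.Sleep(time.Second)
	}

	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	// The path below is the standard kubelet client certificate location;
	// the actual file checked may differ.
	if err := waitForFile("/var/lib/kubelet/pki/kubelet-client-current.pem", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```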
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
* calculate covering IPPrefixes for each KubeSpan peer's `AllowedIPs` and
  check them for overlap (see the sketch after this list)
* don't use the KubeSpan IP as a potential node endpoint (inception!)
* allow applying a Wireguard config that doesn't change the peer endpoint
* support pre-shared Wireguard peer keys
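A small sketch of the overlap check using the standard library's `net/netip` (the actual implementation may use a different library and data structures):

```go
package main

import (
	"fmt"
	"net/netip"
)

// overlapsAny reports whether a candidate AllowedIPs prefix overlaps
// any prefix already assigned to another peer.
func overlapsAny(candidate netip.Prefix, existing []netip.Prefix) bool {
	for _, p := range existing {
		if p.Overlaps(candidate) {
			return true
		}
	}

	return false
}

func main() {
	existing := []netip.Prefix{
		netip.MustParsePrefix("10.244.0.0/24"),
		netip.MustParsePrefix("10.244.1.0/24"),
	}

	fmt.Println(overlapsAny(netip.MustParsePrefix("10.244.1.128/25"), existing)) // true
	fmt.Println(overlapsAny(netip.MustParsePrefix("10.244.2.0/24"), existing))   // false
}
```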
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
Signed-off-by: Seán C McCord <ulexus@gmail.com>
Co-authored-by: Seán C McCord <ulexus@gmail.com>
This implements pushing to and pulling from the Kubernetes cluster discovery
registry, which simply uses extra Talos annotations on the `Node`
resources.
Note: cluster discovery is still disabled by default.
This means that each Talos node pushes data from its own local
`Affiliate` structure to its `Node` resource, and watches the other
`Node`s to scrape the data needed to build `Affiliate`s for the other
cluster members.
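A hedged `client-go` sketch of publishing data through a `Node` annotation (the annotation key and payload are hypothetical, not the actual keys Talos uses):

```go
package main

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// annotateNode merge-patches a single annotation onto the named Node,
// which is how a registry built on Node annotations could publish a
// node's Affiliate data.
func annotateNode(ctx context.Context, client kubernetes.Interface, nodeName, key, value string) error {
	patch, err := json.Marshal(map[string]any{
		"metadata": map[string]any{
			"annotations": map[string]string{key: value},
		},
	})
	if err != nil {
		return err
	}

	_, err = client.CoreV1().Nodes().Patch(ctx, nodeName, types.MergePatchType, patch, metav1.PatchOptions{})

	return err
}

func main() {
	// Assumes the process runs in-cluster with RBAC to patch Nodes.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}

	// The annotation key and payload here are illustrative only.
	err = annotateNode(context.Background(), kubernetes.NewForConfigOrDie(cfg),
		"talos-node-1", "example.talos.dev/affiliate", `{"hostname":"talos-node-1"}`)
	if err != nil {
		panic(err)
	}
}
```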
Further down the pipeline, each `Affiliate` is converted to a cluster
`Member`, which provides an easy view of cluster membership.
In its current form, `talosctl get members` is mostly equivalent to
`kubectl get nodes`, but it will become more powerful as more registries
are added.
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>