Until now, the default manifests created by bootkube only enabled the
pod-checkpointer for kube-apiserver. This seems to cause issues in the
single-node control plane scenario: without the scheduler and
controller-manager, the node might fall into the `NodeAffinity` state.
See https://github.com/talos-systems/bootkube-plugin/pull/23
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
For single-node clusters, the control plane is unstable after a reboot,
so run the health check several times to let it settle down and avoid
failures in subsequent checks.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
The idea is to add an option to perform a "selective" reset: the
default reset operation wipes all partitions (triggering a reinstall),
while the spec allows wiping only some of the partitions.
All other operations are performed in exactly the same way for any
reset flow.
Possible use case: reset only the `EPHEMERAL` partition.
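A minimal sketch of how this could be driven from the CLI, assuming the spec is exposed via a `--system-labels-to-wipe` flag on `talosctl reset` (flag name and node address are illustrative):
```
# wipe only the EPHEMERAL partition, then reboot the node
talosctl -n 10.0.0.2 reset --system-labels-to-wipe EPHEMERAL --reboot
```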
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Allows merging two Talos configs into one. The config is merged into
whatever is set by `TALOSCONFIG` or `~/.talos/config`.
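For example (the path is illustrative):
```
# merge an additional cluster's talosconfig into ~/.talos/config (or $TALOSCONFIG)
talosctl config merge ./talosconfig
```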
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
A companion to `talosctl config merge`.
Got that idea after using talosctl for a weekend.
A command that lists the existing contexts in a table view, similar to
what `kubectl config get-contexts` does, feels like a good addition: it
avoids digging through the config file with all the certs and such.
It is called just `contexts` to align with what we have now (to switch
contexts you use `talosctl config context`).
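For example (the context name is illustrative):
```
# list all configured contexts in a table
talosctl config contexts

# switch to one of them
talosctl config context my-cluster
```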
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
The regular upgrade path takes just one reboot, but it requires all
processes on the node to be stopped before the upgrade can proceed.
Under some circumstances, and with potential Talos bugs, this might not
work, rendering Talos upgrades almost impossible.
Staged upgrades build upon the regular install flow to run the upgrade
on node reboot. Such upgrades require two reboots of the node and two
pulls of the installer image, but they should be much less susceptible
to failure. Once the upgrade is staged, the node can be rebooted in any
way, including a hard reset, and the upgrade is performed on the next
boot.
A new ADV format was implemented as well to store the install image
ref/options across reboots. The new format allows for bigger values and
takes 50% of the `META` partition. The old ADV is still kept for
compatibility reasons.
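A sketch of the staged flow, assuming it is exposed via a `--stage` flag on `talosctl upgrade` (the image reference and node address are illustrative):
```
# stage the upgrade; it is applied on the next boot
talosctl -n 10.0.0.2 upgrade --image ghcr.io/talos-systems/installer:v0.8.0 --stage

# any reboot (even a hard reset) now triggers the staged upgrade
talosctl -n 10.0.0.2 reboot
```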
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
For example:
```
$ talosctl gen config foo 192.168.0.1
no scheme and port specified for the cluster endpoint URL
try: "https://192.168.0.1:6443"
```
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Initial version which only allows setting the CNI using a preset; no
custom CNI URLs are supported at the moment. Still need to figure out
what kind of UI can be used for that.
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
Talos 0.8 is going to ship with K8s 1.20.x.
This includes changes to support the new `control-plane` node label,
and `upgrade-k8s` supports automated fixups for 1.20.
See also: https://github.com/talos-systems/bootkube-plugin/pull/22
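For example, for a cluster currently running 1.19 (versions and node address are illustrative):
```
# upgrade the Kubernetes control plane to 1.20 with the automated fixups applied
talosctl -n 10.0.0.2 upgrade-k8s --from 1.19.4 --to 1.20.1
```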
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This allows the config to be written to disk without being applied
immediately.
Small refactoring to extract common code paths.
At first, I tried to implement this via the sequencer, but it looks
like it's too hard to get right, as the sequencer lacks context and the
config to be written is not applied to the runtime.
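A sketch of how this could look from the CLI, assuming the behaviour is surfaced as an `--on-reboot` flag on `talosctl apply-config` (flag name, file name and node address are illustrative):
```
# write the new machine config to disk only; it takes effect on the next reboot
talosctl -n 10.0.0.2 apply-config --file controlplane.yaml --on-reboot
```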
Fixes #2828
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Fixes: https://github.com/talos-systems/talos/issues/2819
Only if the requested config type is not `TypeInit`.
This functionality will help implement the TUI installer cluster
extension workflow.
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
This is the initial commit of the interactive installer.
What's done:
- verifies node availability before starting any operations.
- gathers information about the disks on the machine.
- allows setting: install disk, hostname, machine type, installer
  image, Kubernetes version, DNS domain, cluster name.
- dumps/merges the talosconfig to a file after applying the
  configuration.
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
Bump to 0.7.0 as we have a new release.
Clean up the tests we run: 0.6.3 is the previous release, 0.7.0 is the
stable release, and the current version (0.8.x) is the "next" release.
We test the following:
* 0.6.3 -> 0.7.0
* 0.7.0 -> 0.8-current
* 0.7.0 -> 0.8-current (single node)
This way, upgrades are always tested between two releases.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Fixes: https://github.com/talos-systems/talos/issues/2766
This API is implemented in the Maintenance and Machine services.
It can be used to generate the configuration on the node, instead of
using talosctl to generate it locally.
To be used in the interactive installer and `talosctl gen config`.
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
This fixes the reverse Go dependency from the `pkg/machinery` package
to the `talos` package.
Add a check to the `Dockerfile` to prevent `pkg/machinery/go.mod` from
getting out of sync; this should prevent problems in the future.
Fix a potential security issue in the `token` authorizer by denying
requests without gRPC metadata.
In the provisioner, add support for launching nodes without a config
(the config is not delivered to the provisioned nodes).
Breaking change in `pkg/provision`: `NodeRequest.Type` should now be
set to the node type (as the config can be missing now).
In `talosctl cluster create`, add a flag to skip providing config to
the nodes so that they enter maintenance mode, while the generated
configs are written to disk (so they can be tweaked and applied
easily).
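A sketch of launching such a cluster, assuming the new flag is spelled `--skip-injecting-config` (flag name and provisioner are illustrative):
```
# nodes boot into maintenance mode; the generated configs are written to the
# cluster state directory so they can be tweaked and applied manually
talosctl cluster create --provisioner qemu --skip-injecting-config
```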
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
User disks are supported by the QEMU and Firecracker providers.
They can be defined by using the following parameter:
```
--user-disk /mount/path:1GB
```
More than one user disk can be defined.
The same set of user disks is created for all master and worker nodes.
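For example, with the QEMU provisioner (the path and size are illustrative):
```
# every node gets an extra 1GB disk mounted at /var/lib/extra
talosctl cluster create --provisioner qemu --user-disk /var/lib/extra:1GB
```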
Additionally, enable user disks in the QEMU e2e test.
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
For the 0.6 -> 0.7 upgrade, config.yaml is always preserved and moved
from `/boot` to `/system/state`.
For a single-node upgrade, the `EPHEMERAL` partition is not touched and
the other partitions are re-created as needed.
Bump provision tests to 0.6/0.7 upgrades as we get closer to the new
release.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This fixes A/B upgrades and the rollback API.
The installer manifest now supports an option to preserve partition
contents while the disk is being re-partitioned and the partitions are
re-formatted.
Mount the `/boot` partition as needed (to find the current label before
starting the installation and in the rollback API).
Fix the upgrade API for non-master nodes.
The contents of the `/boot`, `/system/state` and `META` partitions are
preserved in memory while the disk is re-partitioned.
Remove the `--save` flag from the installer as it's not being used.
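With the fixed rollback API, reverting to the previous boot partition is a single call, assuming it is exposed as `talosctl rollback` (the node address is illustrative):
```
# roll the node back to the previously installed A/B boot partition
talosctl -n 10.0.0.2 rollback
```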
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This enables golangci-lint via build tags for integration tests (this
should have been done long ago!), and fixes the linting errors.
Two tests were updated to reduce flakiness:
* apply config: wait for nodes to issue the "boot done" sequence event
before proceeding
* recover: kill pods even if they appear after the initial set gets
killed (potential race condition with the previous test).
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
The problem was that some of the health checks sort the list of nodes
in place (via `sort.Strings()`). If the cluster info provider returns
the original slice, it might be mutated in such a way that it gets
corrupted.
We never noticed it before CAPI clusters, as in our tests IPs are
assigned sequentially, so the sort operation is a no-op.
Specifically, the problem was with the `Nodes()` function: it returns
the `append(controlPlaneNodes, workerNodes...)` slice, which by
definition might share memory with the `controlPlaneNodes` slice. For
example, if the control plane nodes were `4, 5, 6` and the worker nodes
were `3`, the returned slice is `4, 5, 6, 3`, and it shares memory with
the `controlPlaneNodes` slice (the first three items). If we apply
`sort` to the returned slice, it is re-ordered to `3, 4, 5, 6`, but as
this is done in place, the `controlPlaneNodes` slice is now `3, 4, 5`,
which is obviously wrong.
Fix that by always returning a copy of the slice from the functions
implementing the `ClusterInfo` interface.
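A minimal Go illustration of the aliasing problem and the copy-based fix (simplified, not the actual cluster code):
```
package main

import (
	"fmt"
	"sort"
)

func main() {
	// controlPlaneNodes has spare capacity, so append() below reuses its backing array.
	controlPlaneNodes := make([]string, 0, 4)
	controlPlaneNodes = append(controlPlaneNodes, "4", "5", "6")
	workerNodes := []string{"3"}

	// What Nodes() used to return: this slice aliases controlPlaneNodes.
	nodes := append(controlPlaneNodes, workerNodes...)

	sort.Strings(nodes) // health checks sort the returned slice in place

	fmt.Println(nodes)             // [3 4 5 6]
	fmt.Println(controlPlaneNodes) // [3 4 5] -- corrupted by the in-place sort

	// The fix: build and return a fresh copy so callers can't mutate the inputs.
	safe := make([]string, 0, len(controlPlaneNodes)+len(workerNodes))
	safe = append(safe, controlPlaneNodes...)
	safe = append(safe, workerNodes...)
	fmt.Println(safe) // an independent slice; sorting it would not touch the inputs
}
```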
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
It only gets enabled if the output is a terminal. Failures which
resolve themselves are removed from the final output. A small spinner
indicates progress.
While I was at it, I fixed client-side `talosctl health` when the init
node is missing.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Kubeconfig merge was completely rewritten to be "smarter":
* automatically apply renames done at previous stages to avoid asking
over and over again (in general, it should ask just once)
* skip checks if parts of the config match exactly
* allow overwrite as an option
* flexible way to control the output
* activate the context at the end
* custom merged context name
Fixes #2578 Fixes #2587 Fixes #2577
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This also refactors much of the CLI code for `talosctl kubeconfig`:
1. Do all the checks before fetching the kubeconfig from the server: as
kubeconfig generation takes a few seconds, it doesn't make sense to
generate it if it's not going to be used.
2. Unify most of the merge and write-directly features.
3. Don't use the ExtractTarGz method, to be more flexible.
4. Allow custom paths for the kubeconfig, whether it is a directory or
the full path to the file to be created.
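For example, assuming the merge and overwrite behaviour is controlled by `--merge` and `--force` flags (paths and node address are illustrative):
```
# merge into ~/.kube/config, overwriting entries that already exist
talosctl -n 10.0.0.2 kubeconfig --force

# write a standalone kubeconfig to a custom path instead of merging
talosctl -n 10.0.0.2 kubeconfig ./kubeconfig --merge=false
```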
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Adds the ability to apply (replace) an existing node configuration with
a new one via the Machine API.
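For example (the node address and file name are illustrative):
```
# replace the node's current machine configuration with the contents of node.yaml
talosctl -n 10.0.0.2 apply-config --file node.yaml
```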
Fixes #2345
Signed-off-by: Seán C McCord <ulexus@gmail.com>
By default, a build outside of Drone works the same as before: it
builds only the amd64 version, loads images back into dockerd, etc.
If multiple platforms are used, multi-arch images are built; these
can't be exported to docker or to a `.tar` image, so they're always
pushed to the registry (even for PR builds, to our internal CI
registry).
File artifacts (initramfs, kernel) now have an `-arch` suffix:
`vmlinuz-amd64`, `initramfs-amd64.xz`. A "magic" script normalizes
output paths depending on whether a single platform or multiple
platforms were given.
VM provisioners accept a magic `${ARCH}` in initramfs/kernel paths
which gets replaced by the cluster architecture.
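For example, when creating a QEMU cluster from locally built artifacts (paths and flag names are assumptions, so double-check `talosctl cluster create --help`):
```
# ${ARCH} is substituted with the cluster architecture by the provisioner
talosctl cluster create --provisioner qemu \
  --vmlinuz-path '_out/vmlinuz-${ARCH}' \
  --initrd-path '_out/initramfs-${ARCH}.xz'
```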
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This adds a command that lists all of the images used by Talos. This
is useful for air-gapped installs, so that users know which images to
pull.
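For example:
```
# print the container images required by this Talos release
talosctl images

# illustrative: pre-pull them on a machine with internet access
talosctl images | xargs -n1 docker pull
```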
Signed-off-by: Andrew Rynhard <andrew@rynhard.io>
Add a Kubernetes upgrade as part of the provisioning (upgrade) tests:
first the K8s control plane is upgraded, then Talos is upgraded (with
the kubelet), and the e2e test is run last.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Add sonobuoy runner code with log fetching on failure. Use a
hand-picked set of e2e tests to run: verify basic pod functionality and
verify service connectivity.
Add an option `--run-e2e` to `talosctl health` to run a quick e2e test
to verify cluster health.
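For example (the node address is illustrative):
```
# run the standard health checks, then the quick e2e test against the cluster
talosctl -n 10.0.0.2 health --run-e2e
```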
Add an option to run provision tests with a custom CNI, and run one
track of provision tests with Cilium.
Bump Cilium to 1.8.2.
Talos 0.6 won't uncordon a node automatically after an upgrade from
0.5, as 0.5 doesn't set the annotation. Work around that in the upgrade
tests.
Bump the upgrade test version to the 0.6.0 release.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This moves to using GRUB instead of syslinux.
BREAKING CHANGE: Single-node upgrades will fail with this change. This
will also break the A/B fallback setup, since this version introduces
an entirely new partition scheme that any fallback will not know about.
We plan on addressing these issues in a follow-up change.
Signed-off-by: Andrew Rynhard <andrew@rynhard.io>