Implement the network config screen with input forms to configure the initial node networking by writing a config to the META partition.
Closes siderolabs/talos#6961.
Signed-off-by: Utku Ozdemir <utku.ozdemir@siderolabs.com>
This is handy for the case when a QEMU node went down and you
previously had to delete the folder manually after it restarted.
Signed-off-by: Dmitriy Matrenichev <dmitry.matrenichev@siderolabs.com>
There's a cyclic dependency on the siderolink library, which imports
talos machinery back. We will fix that after we get talos pushed under
the new name.
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
This is the first step towards replacing all import paths to be based
on `siderolabs/` instead of `talos-systems/`: e.g.
`github.com/talos-systems/talos/pkg/machinery` becomes
`github.com/siderolabs/talos/pkg/machinery`.
None of the updates contain functional changes, just refactorings to
adapt to the new path structure.
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
Fixes #6119
With the new stable default hostname feature, any default hostname is
disabled until the machine config is available.
Talos enters maintenance mode when the default config source is empty,
so it doesn't have any machine config available at the moment the
maintenance service is started.
The hostname might be set via other sources, e.g. kernel args or DHCP,
before the machine config is available, but if none of these sources is
available, the hostname won't be set at all.
This change stops waiting for the hostname, and skips setting any DNS
names in the maintenance mode certificate SANs if the hostname is not
available.
Also adds a regression test via a new `--disable-dhcp-hostname` flag to
`talosctl cluster create`.
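A sketch of how the regression scenario can be reproduced (other flags
elided):
```
$ talosctl cluster create --provisioner=qemu --disable-dhcp-hostname ...
```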
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
The default QEMU binary was previously only set to `qemu-system-<arch>`.
Signed-off-by: Davincible <david.brouwer.99@gmail.com>
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
Add an OVMF image source path for QEMU, needed on RHEL-based systems.
Signed-off-by: Han Cen <hi@chamburr.com>
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
talosctl could not find the OVMF flash image when provisioning a local
QEMU cluster on Fedora. We added the source path where the dnf package
manager places the image.
Fixes #5517
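As a sanity check (assuming Fedora's `edk2-ovmf` package; the exact
path may vary by release), the image location can be listed with:
```
$ rpm -ql edk2-ovmf | grep -i OVMF_CODE
```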
Signed-off-by: Philipp Sauter <philipp.sauter@siderolabs.com>
As QEMU clusters are used for testing, use unsafe cache options to
reduce the amount of fsyncs going to the host block device.
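This corresponds to the QEMU drive cache mode; a hand-run equivalent
would look something like (other flags elided):
```
$ qemu-system-x86_64 ... -drive format=raw,file=disk.img,cache=unsafe
```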
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
The CNI plugins are executed on the host. Even if we want to run an
arm64 QEMU VM, we still need to execute the amd64 CNI plugins on the
(amd64) host.
Signed-off-by: Florian Klink <flokli@flokli.de>
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
Talos shouldn't try to re-encode the machine config it was provided
with.
So this adds a `ReadonlyWrapper` around `*v1alpha1.Config` which makes
sure that the raw config object is not available anymore (it's a
private field), while config accessors remain available for read-only
access.
`ReadonlyWrapper` also preserves the original `[]byte` encoding of the
config, keeping it exactly as it was loaded from a file or read over
the network.
Improved `talosctl edit mc` to preserve the config as it was submitted,
and preserve the edits on error from Talos (previously edits were lost).
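For example (the node IP here is a placeholder):
```
$ talosctl -n <node-ip> edit mc
```
If Talos rejects the change, the editor is re-opened with the edits
preserved.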
`ReadonlyWrapper` is not used on the config generation path though:
config there is represented by `*v1alpha1.Config` and can be freely
modified.
Why only "almost" read-only? Some parts of Talos (platform code) still
patch the machine configuration with new data. We need to fix the
platforms to provide networking configuration in a different way, but
that will come in other PRs later.
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
Related to #4448
The only remaining part is filtering out SideroLink addresses when Talos
looks for a node address.
See also https://github.com/talos-systems/siderolink/pull/2
The way to test it out:
```
$ talosctl cluster create ... --extra-boot-kernel-args siderolink.api=172.20.0.1:4000
```
(where 172.20.0.1 is the bridge IP)
Run `siderolink-agent` (test implementation):
```
$ sudo _out/siderolink-agent-linux-amd64
```
Now on the host, there should be a `siderolink` Wireguard userspace
tunnel:
```
$ sudo wg
interface: siderolink
public key: 2aq/V91QyrHAoH24RK0bldukgo2rWk+wqE5Eg6TArCM=
private key: (hidden)
listening port: 51821
peer: Tyr6C/F3FFLWtnzqq7Dsm54B40bOPq6++PTiD/zqn2Y=
endpoint: 172.20.0.1:47857
allowed ips: fdae:41e4:649b:9303:b6db:d99c:215e:dfc4/128
latest handshake: 2 minutes, 2 seconds ago
transfer: 3.62 KiB received, 1012 B sent
...
```
Each Talos node will be registered as a peer, and a tunnel is established.
You can now ping Talos nodes from the host over the tunnel:
```
$ ping fdae:41e4:649b:9303:b6db:d99c:215e:dfc4
PING fdae:41e4:649b:9303:b6db:d99c:215e:dfc4(fdae:41e4:649b:9303:b6db:d99c:215e:dfc4) 56 data bytes
64 bytes from fdae:41e4:649b:9303:b6db:d99c:215e:dfc4: icmp_seq=1 ttl=64 time=0.352 ms
64 bytes from fdae:41e4:649b:9303:b6db:d99c:215e:dfc4: icmp_seq=2 ttl=64 time=0.437 ms
```
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
Note: Talos can still be run under Firecracker; support for
Firecracker was only removed from `talosctl cluster create`.
Reason:
* the code is untested/unmaintained, and probably doesn't work correctly
* the Firecracker Go SDK pulls in lots of dependencies, and it blocks
the CNI Go module update
Bonus: `talosctl-linux-amd64` shrinks by 2 MiB.
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
This has two big visible changes:
* `installer` image now contains assets for both `amd64` and `arm64`, so
it can be used to generate any Talos image (including RPi on amd64 host)
* Talos now uses cross-compilation instead of emulation to build
non-native architectures: on amd64, the Go compiler produces binaries
for both arm64 and amd64 (before this change, the arm64 Go toolchain
running under QEMU emulation produced arm64 binaries on amd64); see the
sketch after this list
CI implications: we no longer require arm64 nodes.
Changes walkthrough:
* `installer` container now keeps assets under `/usr/install/<arch>`
* the Dockerfile build now forces the toolchain/base image to use the
build host's native architecture, not the target architecture
* there is lots of duplication for amd64/arm64, as we want to combine
assets for both arches in a single image (e.g. the multi-arch
amd64/arm64 installer image has a native installer binary per arch, but
both arches contain the full set of amd64/arm64 assets)
* fixed a small bug preventing arm64-on-amd64 `talosctl cluster create`
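A sketch of the cross-compilation approach (an illustrative command,
not the exact build invocation):
```
$ GOOS=linux GOARCH=arm64 go build -o _out/talosctl-linux-arm64 ./cmd/talosctl
```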
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This drops support for 0.7.x in upgrade tests, and bumps tests to use
version 0.9.0-alpha.3 as the next stable (it will eventually graduate to
0.9.0).
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This adds a VIP (virtual IP) option to the network configuration of an
interface, which will allow a set of nodes to share a floating IP
address among them. For now, this is restricted to control plane use
and only a single shared IP is supported.
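A minimal sketch of the interface configuration (interface name and IP
are placeholders, written out as a file here for illustration):
```
$ cat > vip.yaml <<'EOF'
machine:
  network:
    interfaces:
      - interface: eth0
        dhcp: true
        vip:
          ip: 172.20.0.50
EOF
```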
Fixes #3111
Signed-off-by: Seán C McCord <ulexus@gmail.com>
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Allow setting individual options for the network interface while
generating the config, instead of providing the whole config. This
solves the problem of merging options from different sources when
building the config.
There should be no functional changes with this PR.
This is prep work for the control plane VIP.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Modify the provision library to support multiple IPs, CIDRs and
gateways, which can be IPv4/IPv6. Based on the IP types, enable
services in the cluster to run DHCPv4/DHCPv6 in the test environment.
There's an outstanding bug left with routes not being properly set up
in the cluster, so the IPs are not properly routable yet, but DHCPv6
works and IPs are allocated (which validates the DHCPv6 client).
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This refactoring is required to simplify the work to be done to support
disk encryption.
Tried to minimize the number of queries done by the `blockdevice`
`probe` methods.
Instead, wherever we have a `runtime.Runtime`, we get all the required
block devices from the blockdevice cache stored in
`State().Machine().Disk()`.
This opens a way to store encryption settings in the `Partition`
objects.
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
Ballooning is not automatic, but it can be verified via the QEMU
monitor by inflating/deflating the balloon inside the VM.
The monitor can be used like this:
```
$ sudo socat - unix-connect:/home/smira/.talos/clusters/talos-default/talos-default-master-1.monitor
QEMU 5.0.0 monitor - type 'help' for more information
(qemu) info status
info status
VM status: running
```
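Inflating/deflating can then be driven from the same monitor session;
`balloon` and `info balloon` are the relevant monitor commands (the
target size is in MB):
```
(qemu) balloon 512
(qemu) info balloon
```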
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This change should make Talos updates more straightforward in projects
that depend on Talos.
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
Fixes: https://github.com/talos-systems/talos/issues/2973
A disk image can now be supplied using the `--disk-image-path` flag.
You may need to enable `--with-apply-config` to bootstrap the nodes
properly.
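For example (the image path is a placeholder):
```
$ talosctl cluster create --provisioner=qemu --disk-image-path=./_out/disk.raw --with-apply-config
```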
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
If the disk is empty and an ISO path is given, the QEMU provisioner
mounts the ISO on the first boot.
To drop into maintenance mode:
```
talosctl cluster create --provisioner=qemu --iso-path=./_out/talos-amd64.iso --skip-injecting-config --wait=false
```
Then inject the config, bootstrap the node, and wait for it to come up
(via `talosctl cluster health`).
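A sketch of that flow (node IP and config file name are placeholders;
exact flags may vary between versions):
```
$ talosctl apply-config --insecure -n <node-ip> --file controlplane.yaml
$ talosctl -n <node-ip> bootstrap
$ talosctl cluster health
```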
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This introduces the notion of a "board" in Talos. A board is an
interface that is capable of modifying the installation in specific
ways for a given SBC. This also adds support for the
`libretech_all_h3_cc_h5` board.
Signed-off-by: Andrew Rynhard <andrew@rynhard.io>
This fixes the reverse Go dependency from `pkg/machinery` to `talos`
package.
Add a check to `Dockerfile` to prevent `pkg/machinery/go.mod` getting
out of sync, this should prevent problems in the future.
Fix a potential security issue in the `token` authorizer: requests
without gRPC metadata are now denied.
In the provisioner, add support for launching nodes without a config
(the config is not delivered to the provisioned nodes).
Breaking change in `pkg/provision`: now `NodeRequest.Type` should be set
to the node type (as config can be missing now).
In `talosctl cluster create`, add a flag to skip providing the config
to the nodes so that they enter maintenance mode, while the generated
configs are still written to disk (so they can be tweaked and applied
easily).
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Fixes were applied automatically.
Import ordering might be questionable, but it's strict:
* stdlib
* other packages
* same package imports
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This builds a bundle with CNI plugins for talosctl which is
automatically downloaded by `talosctl` if the CNI plugins are missing.
The CNI directories are moved by default to `~/.talos/cni`.
Also add a bunch of pre-flight checks to the QEMU provisioner to make
it easier to bootstrap a Talos QEMU cluster.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
User disks are supported by the QEMU and Firecracker provisioners.
They can be defined using the following parameter:
```
--user-disk /mount/path:1GB
```
More than one user disk can be specified (see the example below).
The same set of user disks will be created for all master and worker
nodes.
Additionally, enable user disks in the QEMU e2e test.
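For instance, to give every node two user disks (mount paths and sizes
are placeholders):
```
$ talosctl cluster create --provisioner=qemu \
    --user-disk /var/mnt/disk1:1GB \
    --user-disk /var/mnt/disk2:2GB
```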
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
A missing timeout in shutdown is the only reason I could find for
Sfyra tests getting stuck on teardown.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This allows changing the `Shutdown()` API behavior to halt the system
instead of powering it off.
This is useful for the QEMU provisioner, as it doesn't distinguish
between power off and reboot.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
The `blockdevice` library was extracted as
`talos-systems/go-blockdevice`; this PR finalizes the move by removing
the Talos copy of it.
Some functions around `mkfs`/`growfs` were extracted into a `makefs`
package, as they depend on the `cmd` package.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
By default, a build outside of Drone works the same as before: it
builds only the amd64 version, loads images back into dockerd, etc.
If multiple platforms are used, multi-arch images are built; these
can't be exported to docker or to a `.tar` image, so they're always
pushed to the registry (even for PR builds, to our internal CI
registry).
File artifacts (initramfs, kernel) now have an `-arch` suffix:
`vmlinuz-amd64`, `initramfs-amd64.xz`. A "magic" script normalizes
output paths depending on whether a single platform or multiple
platforms were given.
VM provisioners accept a magic `${ARCH}` string in initramfs/kernel
paths, which gets replaced by the cluster architecture.
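For example (single quotes keep the shell from expanding `${ARCH}`;
the flag names here are an assumption):
```
$ talosctl cluster create --provisioner=qemu \
    --vmlinuz-path='_out/vmlinuz-${ARCH}' \
    --initrd-path='_out/initramfs-${ARCH}.xz'
```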
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>