Docker by default disables IPv6 completely in the containers, which breaks
SideroLink on Docker-based clusters, as SideroLink uses IPv6
addresses for the WireGuard tunnel.
This change might break `talosctl cluster create` on host systems which
have IPv6 disabled completely, so a flag is provided to revert this
behavior.
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
Related to #4448
The only remaining part is filtering out SideroLink addresses when Talos
looks for a node address.
See also https://github.com/talos-systems/siderolink/pull/2
The way to test it out:
```
$ talosctl cluster create ... --extra-boot-kernel-args "siderolink.api=172.20.0.1:4000"
```
(where 172.20.0.1 is the bridge IP)
Run `siderolink-agent` (test implementation):
```
$ sudo _out/siderolink-agent-linux-amd64
```
Now on the host, there should be a `siderolink` WireGuard userspace
tunnel:
```
$ sudo wg
interface: siderolink
public key: 2aq/V91QyrHAoH24RK0bldukgo2rWk+wqE5Eg6TArCM=
private key: (hidden)
listening port: 51821
peer: Tyr6C/F3FFLWtnzqq7Dsm54B40bOPq6++PTiD/zqn2Y=
endpoint: 172.20.0.1:47857
allowed ips: fdae:41e4:649b:9303:b6db:d99c:215e:dfc4/128
latest handshake: 2 minutes, 2 seconds ago
transfer: 3.62 KiB received, 1012 B sent
...
```
Each Talos node is registered as a peer, and the tunnel is established.
You can now ping Talos nodes from the host over the tunnel:
```
$ ping fdae:41e4:649b:9303:b6db:d99c:215e:dfc4
PING fdae:41e4:649b:9303:b6db:d99c:215e:dfc4(fdae:41e4:649b:9303:b6db:d99c:215e:dfc4) 56 data bytes
64 bytes from fdae:41e4:649b:9303:b6db:d99c:215e:dfc4: icmp_seq=1 ttl=64 time=0.352 ms
64 bytes from fdae:41e4:649b:9303:b6db:d99c:215e:dfc4: icmp_seq=2 ttl=64 time=0.437 ms
```
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
Modify the provision library to support multiple IPs, CIDRs, and gateways,
which can be IPv4/IPv6. Based on the IP types, enable services in the
cluster to run DHCPv4/DHCPv6 in the test environment.
There's an outstanding bug with routes not being set up properly in
the cluster, so the IPs are not properly routable yet, but DHCPv6 works
and IPs are allocated (which validates the DHCPv6 client).
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This refactoring is required to simplify the work to be done to support
disk encryption.
Tried to minimize the number of queries done by the `blockdevice` `probe`
methods.
Instead, wherever we have `runtime.Runtime`, all required block devices
are obtained from the blockdevice cache stored in `State().Machine().Disk()`.
This opens a way to store encryption settings in the `Partition`
objects.
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
Fixes: https://github.com/talos-systems/talos/issues/2973
A disk image can now be supplied using the `--disk-image-path` flag.
Enabling `--with-apply-config` may be necessary to bootstrap the
nodes properly.
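A minimal example, assuming the QEMU provisioner and an illustrative image path:
```
$ talosctl cluster create --provisioner=qemu --disk-image-path=./_out/metal-amd64.img --with-apply-config
```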
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
If the disk is empty and an ISO path is given, the QEMU provisioner
mounts the ISO on the first boot.
To drop into maintenance mode:
```
talosctl cluster create --provisioner=qemu --iso-path=./_out/talos-amd64.iso --skip-injecting-config --wait=false
```
Then inject the config, bootstrap the node, and wait for it to come up
(via `talosctl cluster health`).
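A rough sketch of that sequence (the node IP and config file path are illustrative, and the exact flags depend on your setup):
```
$ talosctl apply-config --insecure --nodes 10.5.0.2 --file ./configs/init.yaml
$ talosctl bootstrap --nodes 10.5.0.2
$ talosctl cluster health
```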
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This fixes the reverse Go dependency from `pkg/machinery` to the `talos`
package.
Add a check to the `Dockerfile` to prevent `pkg/machinery/go.mod` from
getting out of sync; this should prevent problems in the future.
Fix a potential security issue in the `token` authorizer: requests
without gRPC metadata are now denied.
In the provisioner, add support for launching nodes without a config
(the config is not delivered to the provisioned nodes).
Breaking change in `pkg/provision`: `NodeRequest.Type` should now be set
to the node type (as the config can be missing now).
In `talosctl cluster create`, add a flag to skip providing config to the
nodes so that they enter maintenance mode, while the generated configs
are written to disk (so they can be tweaked and applied easily).
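For example (mirroring the maintenance-mode flow from the QEMU ISO example above):
```
$ talosctl cluster create --provisioner=qemu --skip-injecting-config --wait=false
```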
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This builds a bundle with CNI plugins which is automatically downloaded
by `talosctl` if the CNI plugins are missing.
The CNI directories are moved by default to the `~/.talos/cni` path.
Also add a bunch of pre-flight checks to the QEMU provisioner to make it
easier to bootstrap the Talos QEMU cluster.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
User disks are supported by the QEMU and Firecracker providers.
They can be defined using the following parameter:
```
--user-disk /mount/path:1GB
```
More than one user disk can be defined; see the example below.
The same set of user disks will be created for all master and worker nodes.
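For example, two user disks (the mount paths are illustrative, and the flag is assumed to be repeatable):
```
$ talosctl cluster create --provisioner=qemu --user-disk /var/lib/extra:1GB --user-disk /var/lib/storage:2GB
```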
Additionally, enable user disks in the QEMU e2e test.
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
This isn't supposed to ever be used in Talos directly, but rather only
in integration tests for Sidero.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This moves `pkg/config`, `pkg/client`, and `pkg/constants`
under the `pkg/machinery` umbrella.
`pkg/machinery` is published as a Go module inside the Talos repository.
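Consumers can then depend on the machinery module alone; a minimal example, assuming the module path follows the repository layout:
```
$ go get github.com/talos-systems/talos/pkg/machinery
```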
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This change is only moving packages and updating import paths.
Goal: expose `internal/pkg/provision` as `pkg/provision` to enable other
projects to import Talos provisioning library.
As cluster checks are almost always required as part of the provisioning
process, the package `internal/pkg/cluster` was also made public as
`pkg/cluster`.
Other changes update direct dependencies discovered by `importvet`.
Public packages (useful, general purpose packages with stable API):
* `internal/pkg/conditions` -> `pkg/conditions`
* `internal/pkg/tail` -> `pkg/tail`
Private packages (used only by the provisioning library internally):
* `internal/pkg/inmemhttp` -> `pkg/provision/internal/inmemhttp`
* `internal/pkg/kernel/vmlinuz` -> `pkg/provision/internal/vmlinuz`
* `internal/pkg/cniutils` -> `pkg/provision/internal/cniutils`
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>