Modify the provision library to support multiple IPs, CIDRs, and gateways,
which can be IPv4 and/or IPv6. Based on the IP types, enable services in the
cluster to run DHCPv4/DHCPv6 in the test environment.
There is an outstanding bug with routes not being properly set up in the
cluster, so the IPs are not properly routable; however, DHCPv6 works and IPs
are allocated, which validates the DHCPv6 client.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Ballooning is not automatic, but it can be verified via the QEMU monitor by
inflating/deflating the balloon inside the VM.
The monitor can be used like this:
```
$ sudo socat - unix-connect:/home/smira/.talos/clusters/talos-default/talos-default-master-1.monitor
QEMU 5.0.0 monitor - type 'help' for more information
(qemu) info status
info status
VM status: running
```
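The balloon itself can then be inflated or deflated from the same monitor session. A hypothetical example (the `balloon` target is given in MiB, and the reported `actual` value depends on how much memory the VM was started with):
```
(qemu) info balloon
balloon: actual=2048
(qemu) balloon 1024
(qemu) info balloon
balloon: actual=1024
```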
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
If the disk is empty and an ISO path is given, the QEMU provisioner mounts the
ISO on the first boot.
To drop into maintenance mode:
```
talosctl cluster create --provisioner=qemu --iso-path=./_out/talos-amd64.iso --skip-injecting-config --wait=false
```
Then inject the config, bootstrap the node, and wait for it to come up (via
`talosctl cluster health`).
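One possible sequence (a sketch only; `<node-ip>` and the config file name are placeholders, and `talosctl` is assumed to be pointed at the cluster's `talosconfig`):
```
$ talosctl apply-config --insecure --nodes <node-ip> --file controlplane.yaml
$ talosctl bootstrap --nodes <node-ip>
$ talosctl cluster health
```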
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This introduces the notion of a "board" in Talos. A board is an interface that is capable
of modifying the installation in specific ways for a given SBC. This also adds support for the
libretech_all_h3_cc_h5 board.
Signed-off-by: Andrew Rynhard <andrew@rynhard.io>
The fixes were applied automatically.
The import ordering might be questionable, but it is strict:
* stdlib
* other packages
* same-package imports
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
User disks are supported by the QEMU and Firecracker providers.
They can be defined using the following parameter:
```
--user-disk /mount/path:1GB
```
More than one user disk can be specified (see the example below).
The same set of user disks will be created for all master and worker nodes.
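For example (a sketch; this assumes the flag can simply be repeated, with placeholder mount paths and sizes):
```
talosctl cluster create --provisioner=qemu \
  --user-disk /var/lib/extra:1GB \
  --user-disk /var/lib/p1:350MB
```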
Additionally, enable user disks in the QEMU e2e test.
Signed-off-by: Artem Chernyshev <artem.0xD2@gmail.com>
A missing timeout in shutdown is the only reason I could find for the Sfyra
tests being stuck on teardown.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
The `blockdevice` library was extracted as `talos-systems/go-blockdevice`;
this PR finalizes the move by removing the Talos copy of it.
Some functions around `mkfs`/`growfs` were extracted into the `makefs`
package, as they depend on the `cmd` package.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Fixes #2515
This implements a simple HTTP API which should cover the same methods as the
IPMI methods in Sidero.
Examples:
```
$ curl http://172.20.0.1:34791/status
{"PoweredOn":false}
```
```
$ curl -X POST http://172.20.0.1:34791/poweroff
```
The API listens on the bridge address; each VM has a unique port which can be
found in the cluster state as `apiport: NNNN`.
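For example, to look the port up (assuming the provisioner keeps its state in `state.yaml` under the cluster directory; the exact file name may differ):
```
$ grep apiport ~/.talos/clusters/talos-default/state.yaml
    apiport: 34791
```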
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This isn't ever supposed to be used in Talos directly, but rather only in
integration tests for Sidero.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This change only moves packages and updates import paths.
Goal: expose `internal/pkg/provision` as `pkg/provision` to enable other
projects to import the Talos provisioning library.
As cluster checks are almost always required as part of the provisioning
process, the package `internal/pkg/cluster` was also made public as
`pkg/cluster`.
The other changes update direct dependencies discovered by `importvet`.
Public packages (useful, general-purpose packages with a stable API):
* `internal/pkg/conditions` -> `pkg/conditions`
* `internal/pkg/tail` -> `pkg/tail`
Private packages (used only by the provisioning library internally):
* `internal/pkg/inmemhttp` -> `pkg/provision/internal/inmemhttp`
* `internal/pkg/kernel/vmlinuz` -> `pkg/provision/internal/vmlinuz`
* `internal/pkg/cniutils` -> `pkg/provision/internal/cniutils`
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>