This implements `osctl cluster destroy` for Firecracker and adds a new
utility command, `osctl cluster show`.
Firecracker mode now has a control process for firecracker VMs, allowing
clean reboots and background operations.
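A minimal sketch of what such a control process can look like (names and
flags here are illustrative, not the actual osctl code): the wrapper keeps
relaunching firecracker for as long as it exits cleanly, since a clean exit
is how a guest-initiated reboot surfaces on the host, and stops on any error.

```
package main

import (
	"log"
	"os"
	"os/exec"
)

// Hypothetical control process: re-run firecracker whenever it exits
// cleanly, because a clean exit is what a guest reboot looks like from
// the host side; an error exit ends the loop.
func main() {
	args := os.Args[1:] // e.g. --api-sock and --config-file for the VM

	for {
		cmd := exec.Command("firecracker", args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr

		if err := cmd.Run(); err != nil {
			// Non-zero exit or signal: treat as a real failure and stop.
			log.Fatalf("firecracker exited with error: %v", err)
		}

		log.Println("firecracker exited cleanly, restarting to simulate VM reboot")
	}
}
```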
Lots of small fixes to Firecracker mode: clean CNI shutdown, cleaning up
netns, etc.
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This is an initial PR to push the initial code; it has several known
problems which are going to be addressed in follow-up PRs:
1. there's no "cluster destroy", so the only way to stop the VMs is to
`pkill firecracker`
2. the provisioner creates state in `/tmp` and never deletes it; keeping
this state is required to keep the cluster running after `osctl cluster
create` finishes
3. it doesn't run any controller process around firecracker to support
reboots/CNI cleanup (vethxyz interfaces linger on the host, as
they're never cleaned up)
The plan is to create some structure in `~/.talos` to manage cluster
state, e.g. `~/.talos/clusters/<name>` which will contain all the
required files (disk images, file sockets, VM logs, etc.). This
directory structure will also work as a way to detect running clusters
and clean them up.
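As a rough sketch of that layout (the helper names below are illustrative,
not a final API), the per-cluster state directory could be resolved and
enumerated like this:

```
package provision // hypothetical package name

import (
	"os"
	"path/filepath"
)

// statePath returns the hypothetical per-cluster state directory,
// e.g. ~/.talos/clusters/<name>, holding disk images, sockets, logs, etc.
func statePath(clusterName string) (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}

	return filepath.Join(home, ".talos", "clusters", clusterName), nil
}

// listClusters discovers clusters by listing the state directory;
// each subdirectory corresponds to a (possibly running) cluster.
func listClusters() ([]string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return nil, err
	}

	entries, err := os.ReadDir(filepath.Join(home, ".talos", "clusters"))
	if err != nil {
		return nil, err
	}

	var names []string

	for _, entry := range entries {
		if entry.IsDir() {
			names = append(names, entry.Name())
		}
	}

	return names, nil
}
```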
For point 3, `osctl cluster create` is going to exec a lightweight
process to control the firecracker VM process and to simulate VM reboots
when firecracker exits cleanly (which is what happens when the VM reboots).
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
We have been using two packages that define a config type and a machine
type, when really they are one and the same. This unifies the types down
to one set.
Signed-off-by: Andrew Rynhard <andrew@andrewrynhard.com>
This PR will allow users to issue `osctl config generate`, tweak the
configs to their liking, then use those configs to call `osctl cluster
create`.
Example workflow:
```
osctl config generate my-cluster https://10.5.0.2:6443 -o ./my-cluster
# edit the generated configs as needed
osctl cluster create --name my-cluster --input-dir "$PWD/my-cluster"
```
Signed-off-by: Spencer Smith <robertspencersmith@gmail.com>
This PR will allow users to use an existing Docker network for their
Talos cluster. Hopefully this will be useful for those wanting further
control and configuration of their local Docker clusters, as well as
possibly useful for us during CI. The Docker network can be pre-created
with something like: `docker network create my-cluster --subnet
192.168.0.0/24 --label talos.owned=true --label
talos.cluster.name=my-cluster`. Note that the labels are prerequisites for
our discovery and re-use of these networks.
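For the discovery side, the networks could be looked up by those labels via
the Docker client along these lines (a hedged sketch, not the exact code in
this PR):

```
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Filter on the labels the provisioner expects to find on the network.
	args := filters.NewArgs(
		filters.Arg("label", "talos.owned=true"),
		filters.Arg("label", "talos.cluster.name=my-cluster"),
	)

	networks, err := cli.NetworkList(context.Background(), types.NetworkListOptions{Filters: args})
	if err != nil {
		panic(err)
	}

	for _, n := range networks {
		fmt.Printf("found existing network %s (%s)\n", n.Name, n.ID)
	}
}
```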
Signed-off-by: Spencer Smith <robertspencersmith@gmail.com>
There are a few workarounds for the way Drone runs the integration test:
DinD runs as a separate pod, and we can only access the ports it exposes
on the "host", while this endpoint is not reachable from the Talos
cluster. So internally Talos nodes still use addresses like "10.5.0.2",
while the test uses "docker" to access the cluster (that's the name of
the `docker` service in the pipeline).
When running locally, 127.0.0.1 is used as the endpoint, which should work
fine on both OS X and Linux.
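In code the endpoint selection boils down to something like this (the
environment check is illustrative; the actual test wiring may differ):

```
package main

import (
	"fmt"
	"os"
)

// apiEndpoint picks the address the integration test dials: in Drone the
// exposed ports live on the "docker" DinD service, while locally they are
// published on loopback. Nodes still talk to each other via addresses
// like 10.5.0.2 on the Docker network.
func apiEndpoint() string {
	if os.Getenv("CI") != "" { // illustrative condition
		return "docker"
	}

	return "127.0.0.1"
}

func main() {
	fmt.Printf("using endpoint https://%s:6443\n", apiEndpoint())
}
```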
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This extracts the Docker Talos cluster provisioner as common code
which might be shared between `osctl cluster` and the integration test.
There should be almost no functional changes.
As a proof of concept, abstract cluster readiness checks were implemented
based on the provisioned cluster state. They implement the same checks as
`basic-integration.sh` in pure Go via Talos/K8s clients.
The `conditions` package was promoted from machined-internal to
`internal/pkg`, as it is used to run the checks.
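The checks follow the usual condition-polling shape; a simplified sketch
(the interface and helper names are hypothetical, not necessarily what the
`conditions` package exposes):

```
package main

import (
	"context"
	"fmt"
	"time"
)

// Condition is a single readiness check: nil means it passes,
// an error explains why it doesn't pass yet.
type Condition func(ctx context.Context) error

// waitFor polls a condition until it passes or the context expires,
// mirroring the retry loops basic-integration.sh did in shell.
func waitFor(ctx context.Context, c Condition) error {
	for {
		if err := c(ctx); err == nil {
			return nil
		}

		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(5 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	// Placeholder check; the real ones would use Talos/K8s clients
	// (e.g. all expected nodes registered, kube-system pods ready).
	allNodesReady := func(ctx context.Context) error {
		return fmt.Errorf("not implemented in this sketch")
	}

	if err := waitFor(ctx, allNodesReady); err != nil {
		fmt.Println("cluster not ready:", err)
	}
}
```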
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>