Conditions are now implemented as an interface with two methods: `Wait`, which
waits for the condition to become true (cancelable via context), and `String`,
which describes what the condition is waiting for.
A generic `WaitForAll` was implemented to wait for multiple conditions at
once.
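A sketch of the shape this implies; the concurrent `WaitForAll` below is my own
illustration using errgroup, not necessarily how the real package does it:
```
package conditions

import (
	"context"
	"strings"

	"golang.org/x/sync/errgroup"
)

// Condition is a single condition that can be waited on.
type Condition interface {
	// Wait blocks until the condition becomes true or the context is canceled.
	Wait(ctx context.Context) error
	// String describes what the condition is waiting for.
	String() string
}

// WaitForAll combines conditions into one that waits for all of them at once.
func WaitForAll(conditions ...Condition) Condition {
	return waitForAll(conditions)
}

type waitForAll []Condition

func (w waitForAll) Wait(ctx context.Context) error {
	eg, ctx := errgroup.WithContext(ctx)

	for _, c := range w {
		c := c // capture loop variable

		eg.Go(func() error {
			return c.Wait(ctx)
		})
	}

	return eg.Wait()
}

func (w waitForAll) String() string {
	descriptions := make([]string, len(w))
	for i, c := range w {
		descriptions[i] = c.String()
	}

	return strings.Join(descriptions, ", ")
}
```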
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
The stream chunker should be cancellable at any point of execution, and it
should stop chunking on EOF.
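Roughly the shape this implies; the interface below is an illustration, not
necessarily the actual one in the repository:
```
package chunker

import "context"

// Chunker streams a source in chunks.
type Chunker interface {
	// Read returns a channel of chunks; it stops chunking on EOF and can be
	// canceled at any point via the context.
	Read(ctx context.Context) <-chan []byte
}
```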
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This started as a simple unit test for the file chunker, but the first test
hung immediately, so I started looking into the code.
One problem was that the inotify() path didn't take context cancellation into
account. Another problem was that the fsnotify remove event was never
triggered, but I only saw that later with the unit test.
A small nit was that inotify() was initialized every time we hit EOF, which is
inefficient in "follow" mode.
So I moved inotify into the main loop and plugged a context-cancellation check
into the place where a chunk is delivered. The chunker code is supposed to
block in two places: when it tries to deliver the next chunk (as the client
might be slow to receive buffers) and when there is no new data (on inotify).
So it makes sense to assert the context-canceled condition in both cases.
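A minimal sketch of the resulting loop, assuming a follow-mode file chunker
built on fsnotify; names and buffer sizes are illustrative, not the actual
Talos code:
```
package chunker

import (
	"context"
	"io"
	"os"

	"github.com/fsnotify/fsnotify"
)

// FollowFile reads f in chunks and sends them on the returned channel,
// waiting on inotify (via fsnotify) whenever it hits EOF.
func FollowFile(ctx context.Context, f *os.File) <-chan []byte {
	ch := make(chan []byte)

	go func() {
		defer close(ch)

		// The watcher is created once, outside the per-read iteration,
		// instead of being re-initialized on every EOF.
		watcher, err := fsnotify.NewWatcher()
		if err != nil {
			return
		}
		defer watcher.Close() //nolint:errcheck

		if err = watcher.Add(f.Name()); err != nil {
			return
		}

		buf := make([]byte, 1024)

		for {
			n, err := f.Read(buf)
			if n > 0 {
				chunk := make([]byte, n)
				copy(chunk, buf[:n])

				// Blocking point 1: the client might be slow to receive,
				// so deliver the chunk or bail out on cancellation.
				select {
				case ch <- chunk:
				case <-ctx.Done():
					return
				}
			}

			if err == io.EOF {
				// Blocking point 2: no new data; wait for an fsnotify
				// event or cancellation.
				select {
				case event := <-watcher.Events:
					if event.Op&fsnotify.Remove != 0 {
						// File was removed: stop chunking.
						return
					}
				case <-ctx.Done():
					return
				}

				continue
			}

			if err != nil {
				return
			}
		}
	}()

	return ch
}
```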
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
This returns a list of all the registered services with their current status,
past events, health state, etc.
The new CLI is `osctl service [<id>]`: without `<id>` it prints a list of all
the services; with a specific `<id>` it provides details for that service.
I decided to create "parallel" data structures in protobuf, as the Go
structures don't map nicely onto what protoc generates: pointers vs. values,
additional fields like mutexes, etc. There's probably a better approach; I'm
open to it.
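A hypothetical illustration of what "parallel" means here: an internal struct
carrying a mutex next to a plain value-only message shaped like protoc output.
The type and field names are assumptions, not the actual Talos types:
```
package services

import (
	"sync"
	"time"
)

// serviceState is what the runtime keeps internally.
type serviceState struct {
	mu sync.Mutex

	ID        string
	State     string
	LastEvent string
	UpdatedAt time.Time
}

// ServiceInfo mirrors the protobuf message: values only, no mutexes.
type ServiceInfo struct {
	Id        string
	State     string
	LastEvent string
	UpdatedAt int64
}

// AsProto copies the internal state into the wire representation.
func (s *serviceState) AsProto() *ServiceInfo {
	s.mu.Lock()
	defer s.mu.Unlock()

	return &ServiceInfo{
		Id:        s.ID,
		State:     s.State,
		LastEvent: s.LastEvent,
		UpdatedAt: s.UpdatedAt.Unix(),
	}
}
```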
For the CLI, I tried to keep the CLI code in the `cmd/` package, and I also
created a simple wrapper to remove the duplicated code that sets up the client
for each command.
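A hedged sketch of that wrapper; the client type, constructor, and helper names
below are placeholders, not the actual osctl code:
```
package cmd

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// osdClient stands in for the real osd gRPC client (placeholder only).
type osdClient struct{}

func newOSDClient() (*osdClient, error) { return &osdClient{}, nil }
func (c *osdClient) Close() error       { return nil }

// withClient removes the duplicated setup/teardown from each command: it
// constructs the client, runs the command body, and closes the connection.
func withClient(action func(*osdClient) error) {
	c, err := newOSDClient()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer c.Close() //nolint:errcheck

	if err := action(c); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

// serviceCmd shows how a command would use the wrapper.
var serviceCmd = &cobra.Command{
	Use:   "service [<id>]",
	Short: "Retrieve the state of a service (or all services)",
	Run: func(cmd *cobra.Command, args []string) {
		withClient(func(c *osdClient) error {
			// ... call the service list/info API via the client here ...
			return nil
		})
	},
}
```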
Examples:
```
$ osctl service
SERVICE      STATE     HEALTH   LAST CHANGE   LAST EVENT
containerd   Running   OK       21s ago       Health check successful
kubeadm      Running   ?        2s ago        Started task kubeadm (PID 280) for container kubeadm
kubelet      Running   ?        0s ago        Started task kubelet (PID 383) for container kubelet
ntpd         Running   ?        14s ago       Started task ntpd (PID 129) for container ntpd
osd          Running   ?        14s ago       Started task osd (PID 126) for container osd
proxyd       Waiting   ?        14s ago       Waiting for conditions
trustd       Running   ?        14s ago       Started task trustd (PID 125) for container trustd
udevd        Running   ?        14s ago       Started task udevd (PID 130) for container udevd
```
```
$ osctl service proxyd
ID       proxyd
STATE    Running
HEALTH   ?
EVENTS   [Preparing]: Running pre state (22s ago)
         [Waiting]: Waiting for conditions (22s ago)
         [Preparing]: Creating service runner (6s ago)
         [Running]: Started task proxyd (PID 461) for container proxyd (6s ago)
```
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
As the containerd client wasn't closed after use, a connection leaked every
time the health check was run.
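A minimal sketch of the fix, assuming the health check constructs its own
containerd client per run; the function name is illustrative:
```
package health

import "github.com/containerd/containerd"

func check(address string) error {
	client, err := containerd.New(address)
	if err != nil {
		return err
	}
	// Without this Close, every health check run leaked a connection.
	defer client.Close() //nolint:errcheck

	// ... inspect container/task state via client ...

	return nil
}
```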
Signed-off-by: Andrey Smirnov <smirnov.andrey@gmail.com>
Without this we never set the namespace on the context, which prevents it from
functioning at all.
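A sketch of the kind of change implied, using containerd's namespaces package;
the namespace name "system" is an assumption for illustration:
```
package health

import (
	"context"

	"github.com/containerd/containerd/namespaces"
)

func withNamespace(ctx context.Context) context.Context {
	// Without a namespace set on the context, containerd API calls cannot
	// resolve any resources.
	return namespaces.WithNamespace(ctx, "system")
}
```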
Signed-off-by: Brad Beam <brad.beam@talos-systems.com>