In #68 I reduced the general limits for the backoff, thinking that it
would speed up the upload on average because it was retrying faster. But
because it was retrying faster, the 10 available retries were used up
before SSH became available.
The new 100 retries match the 3 minutes of total timeout that the
previous solution had, and should fix all issues.
In addition, I discovered that my implementation in
`hcloudimages/backoff.ExponentialBackoffWithLimit` has a bug where the
calculated offset could overflow before the limit was applied, resulting
in negative durations. I did not fix the issue because `hcloud-go`
provides such a method natively nowadays. Instead, I marked the method
as deprecated, to be removed in a later release.
Generate the help pages using `cobra`'s built-in functionality and commit
them to the repository. This gives users the ability to review the
options of `hcloud-upload-image` without having to install it first.
The current setup of the CLI requires the user to set `HCLOUD_TOKEN` for
every single invocation of the binary, even when we just want to
autocomplete some arguments or generate the completion scripts in CI.
This fixes the bug by initializing the hcloud-go client only in the
`cleanup` and `upload` subcommands.