* actions: use self-hosted runners in hashicorp/vault
While it is recommended that we use self-hosted runners for every
workflow in private and internal accounts, this change was primarily
motivated by different runner types using different cache paths. By
using the same runner type everywhere we can avoid caching the internal
Vault tools twice.
* disable the terraform wrapper in ci-bootstrap to handle updated action
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
* [VAULT-39671] tools: use github cache for external tools
We currently have ~13 tools that we need available both locally for
development and in CI for building, linting, formatting, and testing Vault.
Each branch that we maintain uses the same set of tools, but often pinned
to different versions.
For development, we have a `make tools` target that will execute the
`tools/tool.sh` installation script for the various tools at the correct pin.
This works well enough but is cumbersome if you’re working across many branches
that have divergent versions.
For CI the problem is speed and repetition. Each build job (~10) and Go test
job (16-52) has to install mostly the same set of tools. As we have extremely
limited Github Actions cache space we can’t afford to cache the entire Vault
Go build cache, so building the tools from source in every job would incur the
penalty of downloading all of their modules and compiling each tool.
This yields about an extra 2 minutes per job to install all of the tools. We’ve
worked around this problem by writing composite actions that download pre-built
binaries of the same tools instead of building them from source. That usually
takes a few seconds. The downside of that approach is rate limiting, which
Github has become much more aggressive in enforcing.
That leads us to where we are before this work:
- For builds in the compatibility docker container: the tools are built from
source and cached as a separate builder image layer. (usually fast when we get
cache hits, slow on cache misses)
- For builds that compile directly on the runner: the tools are installed on
each job runner by composite github actions (fast, uses API requests, prone
to throttling)
- For tests, they use the same composite actions to install the tools on each
job. (fast, uses API requests, prone to throttling)
This also leads to inconsistencies since there are two sources of truth: the
composite actions have their own version pins, separate from those in `tools.sh`.
This has led to drift.
We previously tried to save some API requests by moving all builds into
the container. That almost worked, but docker's build container had a hard
time with some esoteric builds. We could special-case those, but that's a
band-aid at best.
A prior version of this work (VAULT-39654) investigated using `go tool`, but
there were some showstopper issues with that workflow that made it a non-starter
for us. Instead, we’ll lean more heavily on the actions cache to resolve the
throttling. This gives us a single source of truth for tools and their pins, and
affords us the same speed on cache hits as we had previously, without
downloading the tools from github releases thousands of times per day.
We add a new composite github action for installing our tools.
- On cache misses it builds the tools and installs them into a cacheable path.
- On cache hits it restores the cacheable path.
- It adds the tools to the GITHUB_PATH to ensure runner-based jobs can find
them.
- For Docker builds it mounts the tools at `/opt/tools/bin` which is
part of the PATH in the container.
- It uses a cache key composed of the SHA of the tools directory along with
the working directory SHA, which is required to work around actions/cache
issues.
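A rough shell sketch of what those steps amount to (the real composite action is YAML; the install path, key format, and sub-command name below are illustrative assumptions):

```sh
# Hypothetical equivalent of the composite action's steps; in the real
# action the save/restore is handled by actions/cache.
TOOLS_DIR=/opt/tools/bin   # assumed cacheable install path

# The cache key combines the tools directory tree SHA with the working
# directory SHA to work around actions/cache key issues (format assumed).
KEY="tools-$(git rev-parse HEAD:tools)-$(git rev-parse HEAD)"
echo "tools cache key: ${KEY}"

if [ -z "$(ls -A "${TOOLS_DIR}" 2>/dev/null)" ]; then  # stand-in for a cache miss
  GOBIN="${TOOLS_DIR}" ./tools/tools.sh install        # build tools on a miss (sub-command assumed)
fi

# Make the tools visible to subsequent steps in runner-based jobs.
echo "${TOOLS_DIR}" >> "${GITHUB_PATH}"
```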
This results in:
- A single source of truth for tools and their pins
- A single cache for tools that can be re-used between all CI and build jobs
- No more Github API calls for tooling. *_Rate limiting will be a thing of
the past._*
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
Something recently changed in how Github Actions decodes and displays
git error output. Now previously benign errors are showing up as
annotations and causing confusion.
Signed-off-by: Ryan Cragun <me@ryan.ec>
As the Vault pipeline and release processes evolve over time, so too must the tooling that drives them. Historically we've utilized a combination of CI features and shell scripts wrapped into make targets to drive our CI. While this
approach has worked, it requires careful consideration of which features to use (bash in CI almost never matches bash on developer machines, etc.) and often requires a deep understanding of several CLI tools (jq, etc.). `make` itself also has limitations in user experience, e.g. passing flags.
As we're all in on Github Actions as our pipeline coordinator, continuing to utilize and build CLI tools to perform our pipeline tasks makes sense. This PR adds a new CLI tool called `pipeline` which we can use to build new isolated tasks that we can string together in Github Actions. We intend to use this utility as the interface for future release automation work, see VAULT-27514.
For the first tasks in this new `pipeline` tool, I've chosen to build two small sub-commands:
* `pipeline releases list-versions` - Allows us to list Vault versions within a range. The range is configurable either by setting `--upper` and/or `--lower` bounds, or by using `--nminus` to set how far back (N-X) to go from the current branch's version. As CE and ENT do not have version parity we also consider the `--edition` flag, as well as zero or more `--skip` flags to exclude specific versions.
* `pipeline generate enos-dynamic-config` - Creates dynamic enos configuration based on the branch and the current list of release versions. It takes largely the same flags as the `releases list-versions` command; however, it also expects a `--dir` for the enos directory and a `--file` where the dynamic configuration will be written. This allows us to dynamically update and feed the latest versions into our sampling algorithm to get coverage over all supported prior versions.
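For illustration, invocations might look roughly like this (the flag values, edition names, and output path are made up; consult the command help for the real interface):

```sh
# List CE versions within a bounded range, excluding one specific release.
pipeline releases list-versions --edition ce --lower 1.16.0 --upper 1.19.0 --skip 1.17.2

# Generate dynamic enos config for the current branch, going back N-3 versions.
pipeline generate enos-dynamic-config --edition ce --nminus 3 \
  --dir enos --file dynamic_config.hcl
```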
We then integrate these new tools into the pipeline itself and cache the dynamic config on a weekly basis. We also cache the `pipeline` tool itself, as it will likely become a repository for pipeline-specific tooling; that caching strategy makes most workflows that require it very fast.
Signed-off-by: Ryan Cragun <me@ryan.ec>
Pin to the latest actions in preparation for the migration to
`actions/upload-artifact@v4`, `actions/download-artifact@v4`, and
`hashicorp/actions-docker-build@v2` on May 6 or 7.
Signed-off-by: Ryan Cragun <me@ryan.ec>
We're on a quest to reduce our pipeline execution time, both to enhance
our developer productivity and to reduce the overall cost of the CI
pipeline. The strategy we use here reduces workflow execution time and
network I/O cost by reducing our module cache size and using binary
external tools when possible. We no longer download modules and build
many of the external tools thousands of times a day.
Our previous process of installing internal and external developer tools
was scattered and inconsistent. Some tools were installed via `go
generate -tags tools ./tools/...`,
others via various `make` targets, and some only in Github Actions
workflows. This process led to some undesirable side effects:
* The modules of some dev and test tools were included with those
of the Vault project. This forced us to manage our own Go module
dependencies alongside those of external tools. Prior to Go 1.16 this
was the recommended way to handle external tools, but now
`go install tool@version` is the recommended way to handle
external tools that need to be built from source, as it supports
pinning specific versions without modifying the go.mod (see the
sketch after this list).
* Due to Github cache constraints we combine our build and test Go
module caches together, but having our developer tools as deps in
our module results in a larger cache which is downloaded on every
build and test workflow runner. Removing the external tools that were
included in our go.mod reduced the expanded module cache size
by ~300MB, thus saving time and network I/O costs when downloading
the module cache.
* Not all of our developer tools were included in our modules. Some were
being installed with `go install` or `go run`, so they didn't take
advantage of a single module cache. This resulted in us downloading
Go modules on every CI and Build runner in order to build our
external tools.
* Building our developer tools from source in CI is slow. Where possible
we prefer to use pre-built binaries in CI workflows. No more
module downloads or tool compilation if we can avoid them.
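As a concrete example of the `go install tool@version` pattern referenced above (the tool and version shown are illustrative, not necessarily ones Vault pins):

```sh
# Installs a pinned tool into GOBIN without touching the project's go.mod.
go install mvdan.cc/gofumpt@v0.6.0

# Pointing GOBIN at a dedicated directory keeps CI tool installs cacheable.
GOBIN="$HOME/.local/tools/bin" go install mvdan.cc/gofumpt@v0.6.0
```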
I've refactored how we define internal and external build tools
in our Makefile and added several new targets to handle both building
the developer tools locally for development and verifying that they are
available. This allows for an easy developer bootstrap while also
supporting installation of many of the external developer tools from
pre-built binaries in CI. This reduces our network I/O and run time
across nearly all of our actions runners.
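A hypothetical bootstrap flow under that refactor (`make tools` is the entry point described above; the verification target name is an assumption):

```sh
make tools        # install internal and external developer tools at their pinned versions
make tools-check  # verify every required tool is present and on PATH (target name assumed)
```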
While working on this I caught and resolved a few unrelated issues:
* Both our Go and Proto format checks were being run incorrectly. In
CI they were writing changes but not failing if changes were
detected. The Go check was less of a problem as we have git hooks that
are intended to enforce formatting, however formatting still drifted over time.
* Our Git hooks couldn't handle removing a Go file without failing. I
moved the diff check into the new Go helper and updated it to handle
removing files.
* I combined a few separate scripts into helpers and added a few
new capabilities.
* I refactored how we install Go modules to make it easier to download
and tidy all of the project's go.mod files.
* Refactor our internal and external tool installation and verification
into a tools.sh helper.
* Combined more complex Go verification into `scripts/go-helper.sh` and
utilized it in the `Makefile` and git commit hooks.
* Add `Makefile` targets for executing our various tools.sh helpers.
* Update our existing `make` targets to use new tool targets.
* Normalize the output of our various scripts and targets to a consistent
format.
* In CI, install many of our external dependencies as binaries wherever
possible. When not possible we'll build them from scratch but not mess
with the shared module cache.
* [QT-641] Remove our external build tools from our project Go modules.
* [QT-641] Remove extraneous `go list` calls from our `set-up-to` composite
action.
* Fix formatting and regen our protos
Signed-off-by: Ryan Cragun <me@ryan.ec>
* Migrate protobuf generation to Buf
Buf simplifies the generation story and allows us to lean
into other features in the Buf ecosystem, such as dependency
management, linting, breaking change detection, formatting
and remote plugins.
* Format all protobuf files with buf
Also add a CI job to ensure formatting remains consistent
* Add CI job to warn on proto generate diffs
Some files were not regenerated with the latest version
of the protobuf binary. This CI job will ensure we always
detect if the protobuf files need regenerating.
* Add CI job for linting protobuf files
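Roughly, those CI jobs boil down to running Buf's standard commands against the repository (the exact invocations and any extra flags in the real workflow are assumptions):

```sh
buf format --diff --exit-code  # fail if any protobuf file is not formatted
buf lint                       # lint protobuf files against the configured rules
buf generate                   # regenerate stubs; CI diffs the result to catch drift
```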
In order to reliably store Go test times in the Github Actions cache we
need to reduce our cache thrashing by not using more than 10GB across all
of our caches. This change reduces our cache usage significantly by
sharing the Go module cache between our Go CI workflows and our build
workflows. We lose our per-builder cache, which will result in a bit of a
performance hit, but we'll enable better automatic rebalancing of our CI
workflows. Overall we should see a per-branch reduction in cache sizes
from ~17GB to ~850MB.
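As a rough sketch of the shared-cache idea (the key derivation and outputs below are assumptions, not the literal workflow configuration):

```sh
# Both the Go test workflows and the build workflows can restore the same
# actions/cache entry by deriving the key from go.sum alone.
key="go-mod-$(sha256sum go.sum | cut -d' ' -f1)"
echo "cache-key=${key}" >> "${GITHUB_OUTPUT}"
echo "cache-path=$(go env GOMODCACHE)" >> "${GITHUB_OUTPUT}"
```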
Some preliminary investigation into this new strategy:
Prior build workflow strategy on a cache miss:
Download modules: ~20s
Build Vault: ~40s
Upload cache: ~30s
Total: ~1m30s
Prior build workflow strategy on a cache hit:
Download and decompress modules and build cache: ~12s
Build Vault: ~15s
Total: ~28s
New build workflow strategy on a cache miss:
Download modules: ~20s
Build Vault: ~40s
Upload cache: ~6s
Total: ~1m6s
New build workflow strategy on a cache hit:
Download and decompress modules: ~3s
Build Vault: ~40s
Total: ~43s
Expected time if we used no Go caching:
Download modules: ~20s
Build Vault: ~40s
Total: ~1m
Signed-off-by: Ryan Cragun <me@ryan.ec>
* combine into one checker
* combine and simplify ci checks
* add to test package list
* remove testing test
* only run deprecations check
* only run deprecations check
* remove unneeded repo check
* fix bash options