Add a new `github create backport` sub-command that can create a
backport of a given pull request. The command has been designed around a
GitHub Actions workflow where it is triggered on a closed pull request
event with a guard that checks for merges:
```yaml
on:
  pull_request_target:
    types: [closed]

jobs:
  backport:
    if: github.event.pull_request.merged
    runs-on: "..."
```
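A minimal sketch of what the `backport` job might run; the flag names
below are illustrative assumptions, not the sub-command's actual
interface:
```shell
# Illustrative only: these flags are assumptions, not the real interface.
# The job would run the pipeline tool against the just-merged pull request.
go run ./tools/pipeline/... github create backport \
  -o hashicorp -r vault-enterprise \
  --pull-number "${{ github.event.pull_request.number }}"
```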
Eventually this sub-command (or another similar one) can be used to
implement backporting a CE pull request to the corresponding ce/*
branch in vault-enterprise. This functionality will be implemented in
VAULT-34827.
This backport runner has several new behaviors not present in the
existing backport assistant:
- If the source PR was made against an enterprise branch we'll assume
that we want to create a CE backport.
- Enterprise-only files will be automatically _removed_ from the CE
backport for you. This will not guarantee a working CE pull request,
but it does quite a bit of the heavy lifting for you.
- If the change only contains enterprise files we'll skip creating a
CE backport.
- If the corresponding CE branch is inactive (as defined in
.release/versions.hcl) then we will skip creating a backport in most
cases. The exceptions are changes that include docs, README, or
pipeline changes, as we assume that even inactive branches will want
those changes.
- Backport labels still work, but they apply _only_ to enterprise PRs.
It is assumed that when the subsequent PRs are merged, their
corresponding CE backports will be created.
- Backport labels no longer include editions. They now use the same
schema as the active versions defined in .release/versions.hcl, e.g.
`backport/1.19.x` (see the example after this list). `main` is always
assumed to be active.
- The runner will always try to update the source PR with a GitHub
comment regarding the status of each individual backport. Even if one
attempt at backporting fails we'll continue until we've attempted all
backports.
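For example, labeling an enterprise PR for a single active release line
(using the GitHub CLI here purely for illustration, with a hypothetical
PR number) would now look like:
```shell
# The label mirrors an active version branch from .release/versions.hcl;
# there is no edition component in the label itself.
gh pr edit 1234 --add-label "backport/1.19.x" --repo hashicorp/vault-enterprise
```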
Signed-off-by: Ryan Cragun <me@ryan.ec>
While working on VAULT-34829 it became apparent that if our new
backporter knew which branches are active and which of their CE
counterparts are active, we could omit the need for `ce` backport labels
entirely and instead automatically backport to the corresponding active
CE branches.
To facilitate that we can re-use our `.release/versions.hcl` file, as it
is already the source of truth for the existing backport assistant
workflow.
Here we add a new `pipeline releases list versions` command that is capable
of decoding that file and optionally displaying it. It will be used in the
next PR that fully implements VAULT-34829.
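As a rough sketch of the intended usage (the exact flags and output
format are not specified here), the command can be run straight from the
repo root:
```shell
# Decode .release/versions.hcl and display the versions it defines.
go run -race ./tools/pipeline/... releases list versions
```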
As part of this work we refactored `pipeline releases` to include a new
`list` sub-command and moved both `list-active-versions` and `versions`
under it.
We also include a few small fixes that were noticed:
- `.release/versions.hcl` was not up-to-date
- Our cached dynamic config was not getting recreated when the pipeline
tool changed. That has been fixed, so the dynamic config should now
always be recreated when the pipeline binary changes
- We now initialize a git client when using the `github` sub-command.
This will be used in more forthcoming work
- Update our changed file detection to resolve some incorrect groupings
- Add some additional changed file helpers that will be used in
forthcoming work
Signed-off-by: Ryan Cragun <me@ryan.ec>
* VAULT-33074: add `github` sub-command to `pipeline`
Investigating test workflow failures is a common task that engineers on
the sustaining rotation perform. It often requires quite a bit of manual
labor: inspecting every failed/cancelled workflow in the GitHub UI on a
per repo/branch/workflow basis and performing root cause analysis.
As we work to improve our pipeline discoverability, this PR adds a new
`github` sub-command to the `pipeline` utility that allows querying for
such workflows and returning either machine-readable or human-readable
summaries in a single place. Eventually we plan to automatically send a
summary of this data to an OTEL collector, but for now sustaining
engineers can use it to query for workflows with a variety of criteria.
A common pattern for investigating build/enos test failure workflows would be:
```shell
export GITHUB_TOKEN="YOUR_TOKEN"
go run -race ./tools/pipeline/... github list-workflow-runs -o hashicorp -r vault -d '2025-01-13..2025-01-23' --branch main --status failure build
```
This will list `build` workflow runs in the `hashicorp/vault` repo for
the `main` branch with a `status` or `conclusion` of `failure` within
the date range of `2025-01-13..2025-01-23`.
A sustaining engineer will likely do this for both the `vault` and
`vault-enterprise` repositories, along with the `enos-release-testing-oss`
and `enos-release-testing-ent` workflows in addition to `build`, in order
to get a full picture of the last week's failures.
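For example, repeating the query above for one of the release testing
workflows only changes the trailing workflow name:
```shell
go run -race ./tools/pipeline/... github list-workflow-runs -o hashicorp -r vault -d '2025-01-13..2025-01-23' --branch main --status failure enos-release-testing-oss
```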
You can also use this utility to summarize workflows based on other
statuses, branches, HEAD SHAs, event triggers, GitHub actors, etc. For a
full list of filter arguments you can pass `-h` to the sub-command.
> [!CAUTION]
> Be careful not to run this without setting strict filter arguments.
> Failing to do so could result in trying to summarize far too many
> workflows, which can get your API token disabled for an hour.
Signed-off-by: Ryan Cragun <me@ryan.ec>
As the Vault pipeline and release processes evolve over time, so too must the tooling that drives them. Historically we've utilized a combination of CI features and shell scripts wrapped in make targets to drive our CI. While this
approach has worked, it requires careful consideration of which features to use (bash in CI almost never matches bash on developer machines, etc.) and often requires a deep understanding of several CLI tools (jq, etc.). `make` itself also has limitations in user experience, e.g. passing flags.
As we're all in on GitHub Actions as our pipeline coordinator, continuing to build and utilize CLI tools to perform our pipeline tasks makes sense. This PR adds a new CLI tool called `pipeline` which we can use to build new isolated tasks that we can string together in GitHub Actions. We intend to use this utility as the interface for future release automation work; see VAULT-27514.
For the first task in this new `pipeline` tool, I've chosen to build two small sub-commands:
* `pipeline releases list-versions` - Allows us to list Vault versions within a range. The range is configurable either by setting `--upper` and/or `--lower` bounds, or by using `--nminus` to go back N-X versions from the current branch's version. As CE and ENT do not have version parity, we also consider the `--edition` flag, as well as zero or more `--skip` flags to exclude specific versions.
* `pipeline generate enos-dynamic-config` - Creates dynamic enos configuration based on the branch and the current list of release versions. It takes largely the same flags as the `releases list-versions` command; however, it also expects a `--dir` for the enos directory and a `--file` where the dynamic configuration will be written. This allows us to dynamically update and feed the latest versions into our sampling algorithm to get coverage over all supported prior versions.
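As a rough sketch of how these might be invoked (the flags come from the descriptions above; the specific values below are only examples):
```shell
# List the CE versions from the current branch's version back to N-3,
# skipping one specific version (values are illustrative).
go run ./tools/pipeline/... releases list-versions --edition ce --nminus 3 --skip 1.17.3

# Generate the dynamic enos configuration for those versions
# (directory and file name are illustrative).
go run ./tools/pipeline/... generate enos-dynamic-config --edition ce --nminus 3 \
  --dir ./enos --file dynamic-config.hcl
```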
We then integrate these new tools into the pipeline itself and cache the dynamic config on a weekly basis. We also cache the `pipeline` tool itself, as it will likely become a repository for pipeline-specific tooling. That caching strategy will make most workflows that require the tool super fast.
Signed-off-by: Ryan Cragun <me@ryan.ec>