5 Commits

Author SHA1 Message Date
Ryan Cragun
025a6d5071
[VAULT-34829] pipeline(backport): add github create backport command (#30713)
Add a new `github create backport` sub-command that can create a
backport of a given pull request. The command has been designed around a
Github Actions workflow where it is triggered on a closed pull request
event with a guard that checks for merges:

```yaml
pull_request_target:
  types: [closed]

jobs:
  backport:
    if: github.event.pull_request.merged
    runs-on: "..."
```

Eventually this sub-command (or another similar one) can be used to
implement backporting a CE pull request to the corresponding ce/*
branch in vault-enterprise. This functionality will be implemented in
VAULT-34827.

This backport runner has several new behaviors not present in the
existing backport assistant:
  - If the source PR was made against an enterprise branch we'll assume
    that we want to create a CE backport.
  - Enterprise-only files will be automatically _removed_ from the CE
    backport for you. This will not guarantee a working CE pull request
    but does quite a bit of the heavy lifting for you.
  - If the change only contains enterprise files we'll skip creating a
    CE backport.
  - If the corresponding CE branch is inactive (as defined in
    .release/versions.hcl) then we will skip creating a backport in most
    cases. The exceptions are changes that include docs, README, or
    pipeline changes, as we assume that even inactive branches will want
    those changes.
  - Backport labels still work, but _only_ on enterprise PRs. It is
    assumed that when the subsequent PRs are merged, their
    corresponding CE backports will be created.
  - Backport labels no longer include editions. They now use the
    same schema as the active versions defined in .release/versions.hcl, e.g.
    `backport/1.19.x`. `main` is always assumed to be active.
  - The runner will always try to update the source PR with a Github
    comment regarding the status of each individual backport. Even if
    one attempt at backporting fails, we'll continue until we've
    attempted all backports.
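
The skip/keep rules above can be sketched as a small decision function. This is a minimal illustration, not the tool's actual implementation: the file-path patterns used to detect enterprise-only and docs/README/pipeline files are hypothetical stand-ins.

```go
package main

import (
	"fmt"
	"strings"
)

// isEnterpriseOnly reports whether a path exists only in the enterprise
// repository. The suffix/prefix conventions here are assumptions for
// illustration.
func isEnterpriseOnly(path string) bool {
	return strings.Contains(path, "_ent.go") || strings.HasPrefix(path, "enterprise/")
}

// isAlwaysBackported reports whether a change should be backported even to
// inactive CE branches (docs, README, or pipeline changes, per the rules
// above). The prefixes are illustrative.
func isAlwaysBackported(path string) bool {
	return strings.HasPrefix(path, "website/") ||
		strings.HasPrefix(path, "README") ||
		strings.HasPrefix(path, "tools/pipeline/")
}

// shouldCreateCEBackport sketches the decision: skip when the change is
// enterprise-only, and skip inactive branches unless the change contains
// docs/README/pipeline files.
func shouldCreateCEBackport(changed []string, branchActive bool) bool {
	ceFiles, always := 0, false
	for _, p := range changed {
		if isEnterpriseOnly(p) {
			continue // these would be removed from the CE backport
		}
		ceFiles++
		if isAlwaysBackported(p) {
			always = true
		}
	}
	if ceFiles == 0 {
		return false // enterprise-only change: no CE backport
	}
	return branchActive || always
}

func main() {
	fmt.Println(shouldCreateCEBackport([]string{"vault/core_ent.go"}, true))           // false
	fmt.Println(shouldCreateCEBackport([]string{"vault/core.go"}, false))              // false
	fmt.Println(shouldCreateCEBackport([]string{"README.md", "vault/core.go"}, false)) // true
}
```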

Signed-off-by: Ryan Cragun <me@ryan.ec>
2025-05-23 14:02:24 -06:00
Ryan Cragun
c37b3c46b4
VAULT-34822: Add pipeline github list changed-files (#30100)
* VAULT-34822: Add `pipeline github list changed-files`

Add a new `github list changed-files` sub-command to `pipeline` command and
integrate it into the pipeline. This replaces our previous
`changed-files.sh` script.

This command works quite a bit differently than the full checkout and
diff based solution we used before. Instead of checking out the base ref
and head ref and comparing a diff, we now provide either a pull request
number or git commit SHA and use the Github REST API to determine the
changed files.
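
For reference, the two GitHub REST API endpoints that can serve a changed-files lookup look roughly like this. The raw paths are the API's own; how the pipeline command actually calls them (likely via a Go client library) is an assumption.

```go
package main

import "fmt"

// changedFilesEndpoint returns the GitHub REST API path for fetching changed
// files, given either a pull request number or a commit SHA.
func changedFilesEndpoint(owner, repo string, prNumber int, sha string) string {
	if prNumber > 0 {
		// GET /repos/{owner}/{repo}/pulls/{pull_number}/files
		return fmt.Sprintf("/repos/%s/%s/pulls/%d/files", owner, repo, prNumber)
	}
	// GET /repos/{owner}/{repo}/commits/{ref} — the response includes a
	// "files" list describing the commit's changes.
	return fmt.Sprintf("/repos/%s/%s/commits/%s", owner, repo, sha)
}

func main() {
	fmt.Println(changedFilesEndpoint("hashicorp", "vault", 30100, ""))
	fmt.Println(changedFilesEndpoint("hashicorp", "vault", 0, "c37b3c46b4"))
}
```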

This approach has several benefits:
  - Not requiring a local checkout of the repo to get the list of
    changed files. This yields a significant performance improvement in
    `setup` jobs where we typically determine the changed files list.
  - The CLI supports both PRs and commit SHAs.
  - The implementation is portable and doesn't require any system tools
    like `git` or `bash` to be installed.
  - A much more advanced system for adding group metadata to the changed
    files. These groupings are going to be used heavily in future
    pipeline automation work and will be used to make required jobs
    smarter.

The theoretical drawbacks:
   - It requires a GITHUB_TOKEN and only works for remote branches or
     commits in Github. We could eventually add a local diff sub-command
     or option to work locally, but that was not required for what we're
     trying to achieve here.

While the groupings that I added in this change are quite rudimentary,
the system will allow us to add additional groups with very little
overhead. I tried to make this change more or less a port of the old
system to enable future work. I did include one small change of
behavior, which is that we now build all extended targets if the
`go.mod` or `go.sum` files change. We do this to ensure that dependency
changes don't subtly result in some extended platform breakage.
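
The grouping idea can be sketched as a path-to-groups mapping. The group names and patterns below are illustrative assumptions, not the tool's actual taxonomy; only the go.mod/go.sum rule reflects the behavior change described above.

```go
package main

import (
	"fmt"
	"strings"
)

// groupsFor assigns rudimentary group metadata to a changed file. Adding a
// new group is just another case in the switch, which is what keeps the
// overhead of extending the system low.
func groupsFor(path string) []string {
	var groups []string
	switch {
	case path == "go.mod" || path == "go.sum":
		// Dependency changes build all extended targets so that platform
		// breakage isn't introduced subtly.
		groups = append(groups, "gomod", "build-all-extended")
	case strings.HasPrefix(path, "website/"):
		groups = append(groups, "docs")
	case strings.HasPrefix(path, ".github/") || strings.HasPrefix(path, "tools/pipeline/"):
		groups = append(groups, "pipeline")
	case strings.HasSuffix(path, ".go"):
		groups = append(groups, "gosrc")
	}
	return groups
}

func main() {
	fmt.Println(groupsFor("go.mod"))          // [gomod build-all-extended]
	fmt.Println(groupsFor("vault/core.go"))   // [gosrc]
}
```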

Signed-off-by: Ryan Cragun <me@ryan.ec>
2025-03-28 15:18:52 -06:00
Steven Clark
1802204dec
Update golang.org/x/net to v0.37.0 for GO-2025-3503 (#29925) 2025-03-14 11:53:38 -04:00
Ryan Cragun
cda9ad3491
VAULT-33074: add github sub-command to pipeline (#29403)
* VAULT-33074: add `github` sub-command to `pipeline`

Investigating test workflow failures is a common task that engineers on the
sustaining rotation perform. It often requires quite a bit of manual
labor: inspecting every failed or cancelled workflow in the Github UI on a
per repo/branch/workflow basis and performing root cause
analysis.

As we work to improve our pipeline discoverability this PR adds a new `github`
sub-command to the `pipeline` utility that allows querying for such workflows
and returning either machine readable or human readable summaries in a single
place. Eventually we plan to automate sending a summary of this data to
an OTEL collector, but for now sustaining engineers can
use it to query for workflows against a wide variety of criteria.

A common pattern for investigating build/enos test failure workflows would be:
```shell
export GITHUB_TOKEN="YOUR_TOKEN"
go run -race ./tools/pipeline/... github list-workflow-runs -o hashicorp -r vault -d '2025-01-13..2025-01-23' --branch main --status failure build
```

This will list `build` workflow runs in `hashicorp/vault` repo for the
`main` branch with the `status` or `conclusion` of `failure` within the date
range of `2025-01-13..2025-01-23`.

A sustaining engineer will likely do this for both `vault` and
`vault-enterprise` repositories along with `enos-release-testing-oss` and
`enos-release-testing-ent` workflows in addition to `build` in order to
get a full picture of the last week's failures.

You can also use this utility to summarize workflows based on other
statuses, branches, HEAD SHAs, event triggers, github actors, etc. For
a full list of filter arguments you can pass `-h` to the sub-command.
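
Under the hood, filters like these map naturally onto the query parameters of GitHub's `GET /repos/{owner}/{repo}/actions/runs` endpoint. The parameter names (`branch`, `status`, `created`, `actor`, `event`) are the REST API's own; mapping them one-to-one onto the CLI flags is an assumption for illustration.

```go
package main

import (
	"fmt"
	"net/url"
)

// workflowRunsQuery builds the query string for the workflow-runs listing
// endpoint from the kinds of filters the sub-command accepts. Empty filters
// are omitted; url.Values.Encode sorts keys deterministically.
func workflowRunsQuery(branch, status, created, actor, event string) string {
	q := url.Values{}
	for k, v := range map[string]string{
		"branch": branch, "status": status, "created": created,
		"actor": actor, "event": event,
	} {
		if v != "" {
			q.Set(k, v)
		}
	}
	return q.Encode()
}

func main() {
	fmt.Println(workflowRunsQuery("main", "failure", "2025-01-13..2025-01-23", "", ""))
	// branch=main&created=2025-01-13..2025-01-23&status=failure
}
```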

> [!CAUTION]
> Be careful not to run this without setting strict filter arguments.
> Failing to do so could result in trying to summarize way too many
> workflows resulting in your API token being disabled for an hour.

Signed-off-by: Ryan Cragun <me@ryan.ec>
2025-01-31 13:48:38 -07:00
Ryan Cragun
ce5885279b
VAULT-31181: Add pipeline tool to Vault (#28536)
As the Vault pipeline and release processes evolve over time, so too must the tooling that drives them. Historically we've utilized a combination of CI features and shell scripts wrapped into make targets to drive our CI. While this
approach has worked, it requires careful consideration of which features to use (bash in CI almost never matches bash on developer machines, etc.) and often requires a deep understanding of several CLI tools (jq, etc.). `make` itself also has limitations in user experience, e.g. passing flags.

As we're all in on Github Actions as our pipeline coordinator, continuing to utilize and build CLI tools to perform our pipeline tasks makes sense. This PR adds a new CLI tool called `pipeline` which we can use to build new isolated tasks that we can string together in Github Actions. We intend to use this utility as the interface for future release automation work, see VAULT-27514.

For the first task in this new `pipeline` tool, I've chosen to build two small sub-commands:

* `pipeline releases list-versions` - Allows us to list Vault versions within a range. The range is configurable either by setting `--upper` and/or `--lower` bounds, or by using `--nminus` to set the X in N-X to go back from the current branch's version. As CE and ENT do not have version parity, we also consider the `--edition` flag, as well as zero or more `--skip` flags to exclude specific versions.

* `pipeline generate enos-dynamic-config` - Creates dynamic enos configuration based on the branch and the current list of release versions. It takes largely the same flags as the `releases list-versions` command; however, it also expects a `--dir` for the enos directory and a `--file` where the dynamic configuration will be written. This allows us to dynamically update and feed the latest versions into our sampling algorithm to get coverage over all supported prior versions.
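
The N-X range semantics can be sketched as follows. This is purely illustrative of how `--nminus` expands into a version list; the real command also honors `--upper`/`--lower`, `--edition`, and `--skip`, which are omitted here.

```go
package main

import "fmt"

// versionsNMinus lists the release branch versions covered by an N-X range,
// counting back nminus minor versions from the current one.
func versionsNMinus(major, currentMinor, nminus int) []string {
	var out []string
	for m := currentMinor - nminus; m <= currentMinor; m++ {
		out = append(out, fmt.Sprintf("%d.%d.x", major, m))
	}
	return out
}

func main() {
	// Hypothetical example: current branch at 1.19 with --nminus 2.
	fmt.Println(versionsNMinus(1, 19, 2)) // [1.17.x 1.18.x 1.19.x]
}
```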

We then integrate these new tools into the pipeline itself and cache the dynamic config on a weekly basis. We also cache the pipeline tool itself as it will likely become a repository for pipeline specific tooling. The caching strategy for the `pipeline` tool itself will make most workflows that require it super fast.


Signed-off-by: Ryan Cragun <me@ryan.ec>
2024-10-23 15:31:24 -06:00