Verify Vault secret integrity in unauthenticated I/O streams (the audit log and STDOUT/STDERR via the systemd journal) by scanning the text with Vault Radar. We search for both known and unknown secrets using an index of KVv2 values as well as Radar's built-in heuristics for credentials, secrets, and keys.
The verification has been added to many scenarios where a slight time increase is allowed, as we now have to install Vault Radar and scan the text. In practice this adds less than 10 seconds to the overall duration of a scenario.
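The check itself is conceptually simple: collect the unauthenticated streams as plain text and hand them to the Vault Radar CLI. A minimal sketch follows; the audit log path and the `vault-radar` flags are assumptions here, not the module's actual invocation.

```
# Gather the unauthenticated streams as plain text.
journalctl -u vault --no-pager > /tmp/vault-journal.log        # STDOUT/STDERR via systemd
sudo cp /var/log/vault/vault_audit.log /tmp/vault-audit.log    # assumed audit device path

# Illustrative scan: fail the step if Radar reports any findings.
for f in /tmp/vault-journal.log /tmp/vault-audit.log; do
  vault-radar scan file --path "$f" --outfile "${f}.findings.json"   # flags are assumptions
  [ "$(jq 'length' "${f}.findings.json")" -eq 0 ] || exit 1          # assumes a JSON array of findings
done
```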
In the in-place upgrade scenario we explicitly exclude this verification when upgrading from a version that we know will fail the check. We also make the verification opt-in so as not to require a Vault Radar license to run Enos scenarios, though it will always be enabled in CI.
As part of this we also update our enos workflow to utilize secret values from our self-hosted Vault when executing in the vault-enterprise repo context.
Signed-off-by: Ryan Cragun <me@ryan.ec>
Fix two occasional flakes in the DR replication scenario:
* Always verify that all nodes in the cluster are unsealed before
verifying test data. Previously we only verified seal status on
followers.
* Fix an occasional timeout when waiting for the cluster to unseal by
  rewriting the module to retry for a set duration instead of using
  exponential backoff (see the sketch below).
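A minimal sketch of the retry-for-a-set-duration approach (the timeout and poll interval here are assumptions; the real module is implemented against the enos provider):

```
# Poll seal status at a fixed interval until the node reports unsealed
# or the deadline passes, instead of backing off exponentially.
wait_for_unseal() {
  local timeout_secs="${1:-300}"                       # assumed duration
  local deadline=$(( $(date +%s) + timeout_secs ))
  while :; do
    if [ "$(vault status -format=json | jq -r '.sealed')" = "false" ]; then
      return 0
    fi
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "timed out waiting for unseal" >&2
      return 1
    fi
    sleep 2
  done
}
```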
Signed-off-by: Ryan Cragun <me@ryan.ec>
* [VAULT-30189] enos: verify identity and OIDC tokens
Expand our baseline API and data verification by including the identity
and identity OIDC tokens secrets engines. We now create a test entity,
entity-alias, identity group, various policies, and associate them with
the entity. For the OIDC side, we now configure the OIDC issuer, create
and rotate named keys, create and associate roles with the named key,
and issue and introspect tokens.
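In Vault CLI terms the create phase looks roughly like the following; the names are illustrative and we assume a `userpass` auth mount for the entity alias:

```
# Entity, alias, and group associated with the userpass mount.
ACCESSOR="$(vault auth list -format=json | jq -r '."userpass/".accessor')"
ENTITY_ID="$(vault write -format=json identity/entity name=test-entity policies=default | jq -r '.data.id')"
vault write identity/entity-alias name=testuser canonical_id="$ENTITY_ID" mount_accessor="$ACCESSOR"
vault write identity/group name=test-group member_entity_ids="$ENTITY_ID"

# OIDC issuer, a named key (rotated once), and a role bound to that key.
vault write identity/oidc/config issuer="https://vault.example.com:8200"   # example issuer
vault write identity/oidc/key/test-key allowed_client_ids="*" algorithm=RS256
vault write -f identity/oidc/key/test-key/rotate
vault write identity/oidc/role/test-role key=test-key ttl=1h

# Tokens are issued for the calling entity, so the scenario authenticates
# as the test user before reading identity/oidc/token/test-role and then
# verifies the result with identity/oidc/introspect.
```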
During a second phase we also verify that those same entities, groups,
keys, roles, config, etc. all exist with the expected values.
This is useful to test durability after upgrades, migrations, etc.
This change also updates our prior `auth/userpass` and `kv`
verification. We had two modules that were loosely coupled and
interdependent. This restructures them into a single module with
child modules and removes the assumed values by requiring the read
module to verify against the created state.
Going forward we can continue to extend this secrets engine verification
module with additional create and read checks for new secrets engines.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* VAULT-29583: Modernize default distributions in enos scenarios
Our scenarios have been running the last gen of distributions in CI.
This updates our default distributions as follows:
- Amazon: 2023
- Leap: 15.6
- RHEL: 8.10, 9.4
- SLES: 15.6
- Ubuntu: 20.04, 24.04
With these changes we also unlock a few new variant combinations (see the filter example below):
- `distro:amzn seal:pkcs11`
- `arch:arm64 distro:leap`
We also normalize our distro key for Amazon Linux to `amzn`, which
matches the uname output on both versions that we've supported.
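The new combinations can be selected with scenario filters along these lines (scenario name and filter syntax shown as an example; add any other variants your run requires):

```
# arm64 on openSUSE Leap
enos scenario launch smoke arch:arm64 distro:leap

# pkcs11 seal on Amazon Linux 2023
enos scenario launch smoke distro:amzn seal:pkcs11
```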
Signed-off-by: Ryan Cragun <me@ryan.ec>
When verifying the Vault version, in addition to checking the CLI
version we also verify that the `/sys/version-history` endpoint contains
the expected version.
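The new check amounts to listing the endpoint and asserting the expected version is one of its keys, roughly:

```
# /sys/version-history is a LIST endpoint whose keys are the versions the
# cluster has run; EXPECTED_VERSION is provided by the scenario.
curl --silent --fail \
  --header "X-Vault-Token: $VAULT_TOKEN" \
  --request LIST \
  "$VAULT_ADDR/v1/sys/version-history" \
  | jq -e --arg v "$EXPECTED_VERSION" '.data.keys | index($v) != null'
```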
As part of this we also fix a bug in in-place upgrades with Debian or
RedHat packages: we now remove the self-managed `vault.service` systemd
unit to ensure that Vault correctly starts up using the new version.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* VAULT-28146: Add IPV6 support to enos scenarios
Add support for testing all raft storage scenarios and variants when
running Vault with IPV6 networking. We retain our previous support for
IPV4 and create a new variant `ip_version` which can be used to
configure the IP version that we wish to test with.
It's important to note that the VPC in IPV6 mode is technically mixed
and that target machines still associate public IPV4 addresses. That
allows us to execute our resources against them from IPV4 networks like
developer machines and CI runners. Despite that, we've taken care to
ensure that only IPV6 addresses are used in IPV6 mode.
Because we had previously assumed the IP version, Vault address, and
listener ports in so many places, this PR is essentially a rewrite that
removes those assumptions. There are also a few places where
improvements to scenarios have been included as I encountered them while
working on the IPV6 changes.
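One small but pervasive example of those assumptions: every place we construct a Vault address now has to handle the bracketed IPv6 literal form (addresses below are documentation examples):

```
# IPv4 mode
export VAULT_ADDR="http://10.13.10.23:8200"

# IPv6 mode: the literal is wrapped in brackets before the port.
export VAULT_ADDR="http://[2001:db8::10]:8200"
```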
Signed-off-by: Ryan Cragun <me@ryan.ec>
In preparation for arm64 builds of hsm, fips1402, and hsm.fips1402
editions of Vault Enterprise we'll allow them in our test scenarios.
Signed-off-by: Ryan Cragun <me@ryan.ec>
In order to take advantage of enos' ability to outline scenarios and to
inventory the verifications they perform, we needed to retrofit all of
that information onto our existing scenarios and steps.
This change introduces an initial set of descriptions and verification
declarations that we can continue to refine over time.
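Once the metadata is in place, the outline can be inspected from the enos CLI, along these lines (exact flags may differ between enos versions):

```
# Print a scenario outline, including its description and the quality
# verifications each step declares.
enos scenario outline upgrade --chdir ./enos

# Emit the same outline in a machine-readable format for inventorying.
enos scenario outline upgrade --chdir ./enos --format json
```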
As doing this required re-reading every scenario in its entirety, I
also updated and fixed a few things that I noticed along the way,
including adding a few small features to enos that we use to determine
initial versions programmatically instead of maintaining a delta between
our globals in each branch.
* Update autopilot and in-place upgrade initial versions
* Programmatically determine which initial versions to use based on Vault
version
* Partially normalize steps between scenarios to make comparisons easier
* Update the MOTD to explain that VAULT_ADDR and VAULT_TOKEN have been
set
* Add scenario and step descriptions to scenarios
* Add initial scenario quality verification declarations to scenarios
* Unpin Terraform in scenarios as >= 1.8.4 should work fine
Add `config_mode` variant to some scenarios so we can dynamically change
how we primarily configure the Vault cluster, either by a configuration
file or with environment variables.
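When the variant selects environment variables, the cluster leans on the server settings Vault can read from its environment; a hedged sketch of the idea (the exact set of variables the module manages isn't shown, and the listener/storage stanzas still come from a file here):

```
# Settings that would otherwise live in the config file are exported into
# the vault.service environment when config_mode selects env vars.
export VAULT_API_ADDR="http://10.13.10.23:8200"
export VAULT_CLUSTER_ADDR="http://10.13.10.23:8201"
export VAULT_LOG_LEVEL="trace"
export VAULT_LICENSE_PATH="/etc/vault.d/vault.hclic"   # assumed license path
vault server -config=/etc/vault.d/minimal.hcl
```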
As part of this change we also:
* Start consuming the Enos terraform provider from public Terraform
registry.
* Remove the old `seal_ha_beta` variant as it is no longer required.
* Add a module that performs a `vault operator step-down` so that we can
  force leader elections in scenarios (sketched below).
* Wire up an operator step-down into some scenarios to test both the old
and new multiseal code paths during leader elections.
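The step-down module boils down to forcing an election and waiting for a new active node, roughly:

```
# Ask the active node to step down, then poll sys/leader until a new
# active node has been elected (timeout handling elided for brevity).
vault operator step-down

until leader="$(vault read -format=json sys/leader | jq -r '.data.leader_address')" \
  && [ -n "$leader" ]; do
  sleep 1
done
echo "new active node: $leader"
```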
Signed-off-by: Ryan Cragun <me@ryan.ec>
Add support for testing `+ent.hsm` and `+ent.hsm.fips1402` Vault editions
with `pkcs11` seal types utilizing a shared `softhsm` token. Softhsm2 is
a software HSM that will load seal keys from a local disk via pkcs11.
The pkcs11 seal implementation is fairly complex, as we have to create
one or more shared tokens with various keys and distribute them to all
nodes in the cluster before starting Vault. We also have to ensure that
each set's labels are unique.
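Creating the shared token with SoftHSM2 and pointing the Enterprise `pkcs11` seal at it looks roughly like this; labels, PINs, and library paths are illustrative, and the real module also distributes the token directory to every node and fixes its ownership:

```
# Initialize a SoftHSM2 token to back the pkcs11 seal.
softhsm2-util --init-token --free \
  --label "vault-seal" --pin 1234 --so-pin 1234

# Environment read by Vault Enterprise's pkcs11 seal at startup.
export VAULT_HSM_LIB=/usr/lib/softhsm/libsofthsm2.so   # path varies by distro
export VAULT_HSM_TOKEN_LABEL="vault-seal"
export VAULT_HSM_PIN=1234
export VAULT_HSM_KEY_LABEL="vault-key-1"               # unique per key set
export VAULT_HSM_GENERATE_KEY=true                     # create the key if it doesn't exist
```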
We also make a few quality of life updates by utilizing globals for
variants that don't often change and update base versions for various
scenarios.
* Add `seal_pkcs11` module for creating a `pkcs11` seal key using
`softhsm2` as our backing implementation.
* Require the latest enos provider to gain access to the `enos_user`
resource to ensure correct ownership and permissions of the
`softhsm2` data directory and files.
* Add `pkcs11` seal to all scenarios that support configuring a seal
type.
* Extract system package installation out of the `vault_cluster` module
and into its own `install_package` module that we can reuse.
* Fix a bug when using the local builder variant that mangled the path.
This likely slipped in during the migration to auto-version bumping.
* Fix an issue where restarting Vault nodes with a socket seal would
fail because a seal socket sync wasn't available on all nodes. Now we
start the socket listener on all nodes to ensure any node can become
primary and "audit" to the socket listener.
* Remove unused attributes from some verify modules.
* Go back to using cheaper AWS regions.
* Use globals for variants.
* Update initial vault version for `upgrade` and `autopilot` scenarios.
* Update the consul versions for all scenarios that support a consul
storage backend.
Signed-off-by: Ryan Cragun <me@ryan.ec>
Add support for testing Vault Enterprise with HA seal support by adding
a new `seal_ha` scenario that configures more than one seal type for a
Vault cluster. We also extend existing scenarios to support testing
with or without the Seal HA code path enabled.
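The result is a cluster whose configuration carries more than one prioritized `seal` stanza. A minimal sketch of the shape, written from a provisioning script; attribute names follow the Seal HA configuration as we understand it, the key IDs and labels are placeholders, and enabling multiseal itself is gated separately:

```
# Append two prioritized seals to the node's config; priority 1 is
# preferred and the second seal acts as the HA peer.
cat >> /etc/vault.d/vault.hcl <<'EOF'
seal "awskms" {
  name       = "awskms-primary"
  priority   = "1"
  kms_key_id = "alias/enos-seal-key"
}

seal "pkcs11" {
  name        = "softhsm"
  priority    = "2"
  token_label = "vault-seal"
  key_label   = "vault-key-1"
  # lib and pin omitted for brevity
}
EOF
```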
* Extract starting vault into a separate enos module to allow for better
handling of complex clusters that need to be started more than once.
* Extract seal key creation into a separate module and provide it to
target modules. This allows us to create more than one seal key and
associate it with instances. This also allows us to forego creating
keys when using shamir seals.
* [QT-615] Add support for configuring more than one seal type to the
`vault_cluster` module.
* [QT-616] Add `seal_ha` scenario
* [QT-625] Add `seal_ha_beta` variant to existing scenarios to test with
both code paths.
* Unpin action-setup-terraform
* Add `kms:TagResource` to service user IAM profile
Signed-off-by: Ryan Cragun <me@ryan.ec>
Update our `proxy` and `agent` scenarios to support new variants and to
perform both baseline verification and their scenario-specific verification.
We integrate these updated scenarios into the pipeline by adding them
to artifact samples.
We've also improved the reliability of the `autopilot` and `replication`
scenarios by refactoring our IP address gathering. Previously, we'd ask
vault for the primary IP address and use some Terraform logic to determine
followers. The leader IP address gathering script was also implicitly
responsible for ensuring that a found leader was within a given group of
hosts, and thus waiting for a given cluster to have a leader, and also for
doing some arithmetic and outputting `replication` specific output data.
We've broken these responsibilities into individual modules, improved their
error messages, and fixed various races and bugs, including:
* Fix a race between creating the file audit device and installing and starting
vault in the `replication` scenario.
* Fix how we determine our leader and follower IP addresses. We now query
  Vault directly instead of inferring the followers, an approach that
  sometimes did not allow all nodes to be considered as the expected leader
  (see the sketch after this list).
* Fix a bug where we'd always fail on the first wrong condition
in the `vault_verify_performance_replication` module.
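Determining the leader and followers is now a direct query against Vault rather than Terraform-side inference; conceptually:

```
# The active node's API address for this cluster.
vault read -format=json sys/leader | jq -r '.data.leader_address'

# Every cluster member with its active flag; followers are the
# non-active entries.
vault read -format=json sys/ha-status \
  | jq -r '.data.nodes[] | select(.active_node == false) | .api_address'
```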
We also performed some maintenance tasks on Enos scenarios by updating our
references from `oss` to `ce` to handle the naming and license changes. We
also enabled `shellcheck` linting for enos module scripts.
* Rename `oss` to `ce` for license and naming changes.
* Convert template enos scripts to scripts that take environment
variables.
* Add `shellcheck` linting for enos module scripts.
* Add additional `backend` and `seal` support to `proxy` and `agent`
scenarios.
* Update scenarios to include all baseline verification.
* Add `proxy` and `agent` scenarios to artifact samples.
* Remove IP address verification from the `vault_get_cluster_ips`
modules and implement a new `vault_wait_for_leader` module.
* Determine follower IP addresses by querying vault in the
`vault_get_cluster_ips` module.
* Move replication-specific behavior out of the `vault_get_cluster_ips`
  module and into its own `replication_data` module.
* Extend initial version support for the `upgrade` and `autopilot`
scenarios.
We also discovered an issue with undo_logs that has been described in
VAULT-20259. As such, we've disabled the undo_logs check until
it has been fixed.
Signed-off-by: Ryan Cragun <me@ryan.ec>
Replace our prior implementation of Enos test groups with the new Enos
sampling feature. With this feature we're able to describe which
scenarios and variant combinations are valid for a given artifact and
allow enos to create a valid sample field (a matrix of all compatible
scenarios) and take an observation (select some to run) for us. This
ensures that every valid scenario and variant combination will
now be a candidate for testing in the pipeline. See QT-504[0] for further
details on the Enos sampling capabilities.
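In CI this means asking enos for an observation from a per-artifact sample rather than picking scenarios by hand; the subcommand and flags below are an approximation of the sampling CLI, and the sample name is an example:

```
# Take a seeded observation (a small subset of the valid scenario/variant
# matrix) from the sample that describes a given artifact.
enos scenario sample observe build_ce_linux_amd64_deb \
  --chdir ./enos --min 1 --max 2 --seed "$RANDOM" --format json
```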
Our prior implementation only tested the amd64 and arm64 zip artifacts,
as well as the Docker container. We now include the following new artifacts
in the test matrix:
* CE amd64 Debian package
* CE amd64 RPM package
* CE arm64 Debian package
* CE arm64 RPM package
Each artifact includes a sample definition for both pre-merge/post-merge
(build) and release testing.
Changes:
* Remove the hand-crafted `enos-run-matrices` CI matrix targets and replace
them with per-artifact samples.
* Use enos sampling to generate different sample groups on all pull
requests.
* Update the enos scenario matrices to handle HSM and FIPS packages.
* Simplify enos scenarios by using shared globals instead of
cargo-culted locals.
Note: This will require coordination with vault-enterprise to ensure a
smooth migration to the new system. Integrating new scenarios or
modifying existing scenarios/variants should be much smoother after this
initial migration.
[0] https://github.com/hashicorp/enos/pull/102
Signed-off-by: Ryan Cragun <me@ryan.ec>
* Adding explicit MPL license for sub-package.
This directory and its subdirectories (packages) contain files licensed with the MPLv2 `LICENSE` file in this directory and are intentionally licensed separately from the BSL `LICENSE` file at the root of this repository.
* Adding explicit MPL license for sub-package.
This directory and its subdirectories (packages) contain files licensed with the MPLv2 `LICENSE` file in this directory and are intentionally licensed separately from the BSL `LICENSE` file at the root of this repository.
* Updating the license from MPL to Business Source License.
Going forward, this project will be licensed under the Business Source License v1.1. Please see our blog post for more details at https://hashi.co/bsl-blog, FAQ at www.hashicorp.com/licensing-faq, and details of the license at www.hashicorp.com/bsl.
* add missing license headers
* Update copyright file headers to BUSL-1.1
* Fix test that expected exact offset on hcl file
---------
Co-authored-by: hashicorp-copywrite[bot] <110428419+hashicorp-copywrite[bot]@users.noreply.github.com>
Co-authored-by: Sarah Thompson <sthompson@hashicorp.com>
Co-authored-by: Brian Kassouf <bkassouf@hashicorp.com>
* Sync missing scenarios and modules
* Clean up variables and example vars
* Add a `lint` make target for enos
* Update enos `fmt` workflow to run the `lint` target.
* Always use ipv4 addresses in target security groups.
Signed-off-by: Ryan Cragun <me@ryan.ec>