Update the base images for all scenarios:
- RHEL: upgrade the RHEL 10 base image to 10.1
- RHEL: upgrade the RHEL 9 base image to 9.7
- SLES: upgrade the SLES 15 base image to 15.7
- SLES: add SLES 16.0 to the matrix
- OpenSUSE: remove OpenSUSE Leap from the matrix
I ended up removing OpenSUSE because the images we were on were rarely updated, which resulted in very slow scenarios due to package upgrades. Also, despite the latest Leap release being in October, I didn't find any public cloud images for the new version. We can consider adding it back later, but I'm comfortable leaving just SLES 15 and 16 in there for that test coverage.
I also ended up fixing a bug in our integration host setup where we'd provision three nodes instead of one. That ought to result in many fewer instance provisions per scenario. I also had to make a few small tweaks to how we detect whether SELinux is enabled, as the prior implementation did not work on SLES 16.
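For reference, a minimal sketch of a portable SELinux check along those lines (an assumed approach for illustration, not the exact Enos implementation):

```shell
# Prefer the SELinux utilities when present; otherwise fall back to sysfs,
# which covers minimal images (e.g. SLES 16) that don't ship selinuxenabled.
selinux_enabled() {
  if command -v selinuxenabled > /dev/null 2>&1; then
    selinuxenabled
  else
    [ -e /sys/fs/selinux/enforce ]
  fi
}
```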
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
* Add docker-based backend
* new line
* Add validation
* Add cloud_docker_vault_cluster
* Unify cloud scenario outputs
* Use min_vault_version consistently across both modules
* random network name for docker
* Add local build for docker
* Use environment instead of backend
* make use of existing modules for docker and k8s
* connect the peers
* formatting
* copyright
* Remove old duplicated code
* use enos local exec
* get version locally
* Don't use local time
* adjust bin path for docker
* use root dockerfile
* get dockerfile to work
* Build docker image from correct binary location
* Fix it... maybe
* Add docker admin token
* whitespace
* formatting and comment cleanup
* formatting
* undo
* Apply suggestion from @ryancragun
* Move build to make
* Default to local
* Revert k8s changes
* Add admin token
* Clean map
* whitespace
* whitespace
* Pull out k8s changes and vault_cluster_raft
* Some cleaning changes
* whitespace
* Naming cleanup
---------
Co-authored-by: Luis (LT) Carbonell <lt.carbonell@hashicorp.com>
Co-authored-by: Ryan Cragun <me@ryan.ec>
* change what performance replication checker script is checking
* fix lint errors
* enable consul backends for ent build samples
* fix up samples
* fix linting
* update release samples
* fix linting again
* output to stderr
Co-authored-by: Josh Black <raskchanky@gmail.com>
* license: update headers to IBM Corp.
* `make proto`
* update offset because source file changed
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
[VAULT-39160] actions(hcp): add support for testing custom images on HCP (#9345)
Add support for running the `cloud` scenario with a custom image in the
int HCP environment. We support two new tags that trigger new
functionality. If the `hcp/build-image` tag is present on a PR at the
time of `build`, we'll automatically trigger a custom build for the int
environment. If the `hcp/test` tag is present, we'll trigger a custom
build and run the `cloud` scenario with the resulting image.
* Fix a bug in our custom build pattern to handle prerelease versions.
* pipeline(hcp): add `--github-output` support to `show image` and
`wait image` commands.
* enos(hcp/create_vault_cluster): use a unique identifier for HVN
and vault clusters.
* actions(enos-cloud): add workflow to execute the `cloud` enos
scenario.
* actions(build): add support for triggering a custom build and running
the `enos-cloud` scenario.
* add more debug logging and query without a status
* add shim build-hcp-image for CE workflows
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
* [VAULT-39157] enos(cloud): add basic vault cloud scenario
Add the skeleton of a Vault Cloud scenario whereby we create an HCP
network, Vault Cloud cluster, and admin token.
In subsequent PRs we'll wire up building images, waiting on builds, and
ultimately fully testing the resulting image.
* copywrite: add headers
---------
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
Ubuntu 20.04 is EOL. Per our support and package policies we no longer
need to develop or test for that platform.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* Add Enos benchmark scenario
* add docs on how to run the scenario
* update description again
* see if this works better if we return an empty map
* hopefully disabling telemetry doesn't crash everything now
* yet another try at making telemetry configurable
* swap consul nodes over to be the same as the vault ones
* adjust up IOPs and add a note about it to the docs
* fix missing variables in the ec2 shim
* randomly pick an az for k6 and metrics instances
* enos(benchmark): further modularize and make target infra cloud agnostic
The initial goal of this was to resolve an issue where one or more target
instances would sometimes be provisioned in an availability zone that
doesn't support the requested instance type. The target_ec2_instances
module already supports assigning instances based on instance offerings,
so I wanted to use it for all instances. It also has the side effect of
provisioning instances in parallel, which speeds up overall scenario time.
I ended up further modularizing the `benchmark` module into several
sub-modules that each perform a single task well and rely on provisioning
in the root module. This will allow us to use the module in other clouds
more easily should we desire to do that in the future.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* add copywrite headers
Signed-off-by: Ryan Cragun <me@ryan.ec>
* address some feedback and limit disk iops to 16k by default
Signed-off-by: Ryan Cragun <me@ryan.ec>
---------
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
Fix a potential race where we might attempt to update the auth before
we've initially configured it. Also, rather than updating it on all nodes,
we now choose a node in the cluster at random.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* enos(artifactory): unify dev and test scenario artifactory metadata into new module
There was previously a lot of shared logic between
`build_artifactory_artifact` and `build_artifactory_package` when it
comes to building an artifact name. When it comes down to it, both
modules are very similar; their only major difference is searching
for any artifact (released or not) by a combination of
`revision`, `edition`, `version`, and `type` vs. searching for a
released artifact by a combination of `version`, `edition`, and
`type`.
Rather than bolt on new `s390x` and `fips1403` artifact metadata to
both, I factored their metadata for package names and such into a
unified and shared `artifact/metadata` module that is now called by
both.
This was tricky as dev and test scenarios currently differ in what
we pass in as the `vault_version`, but we hope to remove that
difference soon. We also add metadata support for the forthcoming
FIPS 140-3.
This commit was tested extensively, along with other test scenarios
in support of `s390x`, but it will be useful immediately for FIPS 140-3,
so I've extracted it out.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* Fix artifactory metadata before merge
The initial pass of the artifactory metadata was largely untested and
extracted from a different branch. After testing, this commit fixes a
few issues with the metadata module.
In order to test this I also had to fix an issue where AWS secrets
engine testing became a requirement but is impossible unless you execute
against a blessed AWS account that has the required roles. Instead, we now
make those verifications opt-in via a new variable.
We also make some improvements to the pki-verify-certificates script so
that it works reliably against all our supported distros.
We also update our dynamic configuration to use the updated versions in
samples.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* stop printing the actual secret value entered by the user during field validation
* add changelog
* upgrade vault radar version to 0.24.0
* feedback
* remove changelog
* require explicit value for disable_mlock
* set disable_mlock back to true for all docker tests
* fix build error
* update test config files
* change explicit mlock check to apply to integrated storage only.
* formatting and typo fixes
* added test for raft
* remove erroneous test
* remove unnecessary doc line
* remove unnecessary var
* pr suggestions
* test compile fix
* add mlock config value to enos tests
* enos lint
* update enos tests to pass disable_mlock value
* move mlock error to runtime to check for env var
* fixed mlock config detection logic
* call out mlock on/off tradeoffs to docs
* rewording production hardening section on mlock for clarity
* update error message when missing disable_mlock value to help customers with the previous default
* fix config doc error and update production-hardening doc to align with existing recommendations.
* remove extra check for mlock config value
* fix docker recovery test
* Update changelog/29974.txt
Explicitly call out that Vault will not start without disable_mlock included in the config.
Co-authored-by: Kuba Wieczorek <kuba.wieczorek@hashicorp.com>
* more docker test experimentation.
* passing disable_mlock into test cluster
* add VAULT_DISABLE_MLOCK envvar to docker tests and pass through the value
* add missing envvar for docker env test
* update additional docker test disable_mlock values
* Apply suggestions from code review
Use active voice.
Co-authored-by: Sarah Chavis <62406755+schavis@users.noreply.github.com>
---------
Co-authored-by: Kuba Wieczorek <kuba.wieczorek@hashicorp.com>
Co-authored-by: Sarah Chavis <62406755+schavis@users.noreply.github.com>
* add test
* add as module
* more debugging of scenario
* fixes
* smoke test working
* autopilot test working
* revert local autopilot changes, cleanup comments and raft remove peer changes
* enos fmt
* modules fmt
* add vault_install_dir
* skip removal correctly for consul
* lint
* pr fixes
* passed run
* pr comments
* change step name everywhere
* fix
* check correct field
* remove cluster_name
`$?` in bash is wonky. When you evaluate an expression in an `if`
statement, `$?` is only set to the expression's actual exit status within
blocks scoped to that statement. Therefore, since we rely on it in
synchronize-repos, we have to evaluate the rest of the function within the
scope of that statement.
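A minimal illustration of that behavior (`false` stands in for whatever expression synchronize-repos evaluates):

```shell
if false; then
  echo "never reached"
else
  # Inside the scoped block, $? still holds the condition's exit status (1).
  echo "inside else: \$? = $?"
fi
# After the statement, $? reflects the last command run inside it (the echo
# above), so the condition's failure is no longer visible here.
echo "after if: \$? = $?"
```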
Signed-off-by: Ryan Cragun <me@ryan.ec>
In the `synchronize-repos.sh` script we use `cloud-init status --wait`
to ensure that `cloud-init` is not running when we attempt to sync the
repositories. This is all well and good, except that modern versions of
`cloud-init` can exit with 2 if they encounter an error but recover.
Since we're running the script with `-e` and don't gate the exit status
with an expression, the script fails rather than recovering.
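A minimal sketch of gating the exit status under `set -e` (an assumed pattern for illustration, not the exact synchronize-repos.sh code):

```shell
set -e
if cloud-init status --wait; then
  echo "cloud-init finished cleanly"
elif [ "$?" -eq 2 ]; then
  # Modern cloud-init exits 2 when it hit an error but recovered.
  echo "cloud-init recovered from an error; continuing" >&2
else
  echo "cloud-init failed" >&2
  exit 1
fi
```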
Signed-off-by: Ryan Cragun <me@ryan.ec>
* adding logic to print failures and retry if there is a cloud-init error
* adding logic to print failures and retry if there is a cloud-init error
* fixing timeout error
* fixing timeout error
* fixing timeout error
* fixing timeout error
* fixing timeout error
* updating retry to 2
* updating cloud init status logic
* updating cloud init status logic
* addressing comments
* addressing comments
* fixing error from sync script
Verify vault secret integrity in unauthenticated I/O streams (audit log, STDOUT/STDERR via the systemd journal) by scanning the text with Vault Radar. We search for both known and unknown secrets using an index of KVv2 values as well as Radar's built-in heuristics for credentials, secrets, and keys.
The verification has been added to many scenarios where a slight time increase is allowed, as we now have to install Vault Radar and scan the text. In practice this adds less than 10 seconds to the overall duration of a scenario.
In the in-place upgrade scenario we explicitly exclude this verification when upgrading from a version that we know will fail the check. We also make the verification opt-in so as to not require a Vault Radar license to run Enos scenarios, though it will always be enabled in CI.
As part of this we also update our enos workflow to utilize secret values from our self-hosted Vault when executing in the vault-enterprise repo context.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* VAULT-31402: Add verification for all container images
Add verification for all container images that are generated as part of
the build. Before this change we only ever tested a limited subset of
"default" containers based on Alpine Linux that we publish via the
Docker hub and AWS ECR.
Now we support testing all Alpine and UBI based container images. We
also verify the repository and tag information embedded in each by
deploying them and verifying the repo and tag metadata match our
expectations.
This does change the k8s scenario interface quite a bit. We now take in
an archive image and set image/repo/tag information based on the
scenario variants.
To enable this I also needed to add `tar` to the UBI base image. It was
already available in the Alpine image and is used to copy utilities to
the image when deploying and configuring the cluster via Enos.
Since some images contain multiple tags we also add samples for each
image and randomly select which variant to test on a given PR.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* VAULT-30819: verify DR secondary leader before unsealing followers
After we've enabled DR replication on the secondary leader, the existing
cluster followers will be resealed with the primary cluster's encryption
keys. We have to unseal the followers to make them available. To take
every precaution before attempting to unseal the secondary followers, we
now verify that the secondary leader is the cluster leader, has a valid
merkle tree, and is streaming WALs from the primary cluster.
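Roughly, those checks map onto the public API like this (field names per `sys/leader` and `sys/replication/dr/status`; the exact assertions in the Enos module may differ):

```shell
vault read -format=json sys/leader | jq -e '.data.is_self == true'   # node is the cluster leader
status=$(vault read -format=json sys/replication/dr/status)
echo "$status" | jq -e '.data.mode == "secondary"'                   # DR secondary mode
echo "$status" | jq -e '.data.merkle_root != ""'                     # merkle tree is populated
echo "$status" | jq -e '.data.state == "stream-wals"'                # streaming WALs from the primary
```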
Signed-off-by: Ryan Cragun <me@ryan.ec>
Fix two occasional flakes in the DR replication scenario:
* Always verify that all nodes in the cluster are unsealed before
verifying test data. Previously we only verified seal status on
followers.
* Fix an occasional timeout when waiting for the cluster to unseal by
  rewriting the module to retry for a set duration instead of using
  exponential backoff (sketched below).
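A minimal sketch of the retry-for-a-duration approach (illustrative timings and check, not the actual module code):

```shell
deadline=$(( $(date +%s) + 300 ))          # keep retrying for up to 5 minutes
until vault status > /dev/null 2>&1; do    # vault status exits 0 once the node is unsealed
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "timed out waiting for the node to unseal" >&2
    exit 1
  fi
  sleep 2
done
```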
Signed-off-by: Ryan Cragun <me@ryan.ec>
* [VAULT-30189] enos: verify identity and OIDC tokens
Expand our baseline API and data verification by including the identity
and identity OIDC tokens secrets engines. We now create a test entity,
entity-alias, identity group, various policies, and associate them with
the entity. For the OIDC side, we now configure the OIDC issuer, create
and rotate named keys, create and associate roles with the named key,
and issue and introspect tokens.
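For context, the OIDC portion exercises roughly these public API paths (names like `test-key` and `test-role` are illustrative, not the scenario's actual values):

```shell
vault write identity/oidc/config issuer="https://vault.example.com:8200"
vault write identity/oidc/key/test-key allowed_client_ids="*" verification_ttl=2h
vault write -f identity/oidc/key/test-key/rotate
vault write identity/oidc/role/test-role key="test-key" ttl=1h
# Issuing a token must be done with a token that has an associated entity,
# e.g. after logging in as the test userpass user.
token=$(vault read -field=token identity/oidc/token/test-role)
vault write identity/oidc/introspect token="$token"
```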
During a second phase we also verify that those same entities,
groups, keys, roles, config, etc. all exist with the expected values.
This is useful for testing durability after upgrades, migrations, etc.
This change also updates our prior `auth/userpass` and `kv`
verification. We had two modules that were loosely coupled and
interdependent. This restructures both into a single module with
child modules and fixes the assumed values by requiring the read module
to verify against the created state.
Going forward we can continue to extend this secrets engine verification
module with additional create and read checks for new secrets engines.
Signed-off-by: Ryan Cragun <me@ryan.ec>
Previously our `shutdown_nodes` modules would halt the machine. While
this is useful for simulating a failure, it makes cleaning up the halted
machines very slow in AWS.
Instead, we now power off the machines and rely on EC2's instance-initiated
shutdown handling to immediately terminate the instances.
I've tested both scenarios locally with this change and both still
work as expected. I also timed before and after: this change saves 5
minutes of total runtime (~40%) for the PR replication scenario. I assume
it yields similar results for autopilot.
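In shell terms the difference is roughly the following (illustrative only):

```shell
# Previously: halt stops the OS but can leave the EC2 instance allocated.
# sudo halt
# Now: an ACPI power-off counts as an instance-initiated shutdown, which EC2
# can be configured to handle by terminating the instance immediately.
sudo shutdown -P now
```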
Signed-off-by: Ryan Cragun <me@ryan.ec>
Previously we'd fail in the verify-billing-start.sh retry loop instead
of returning 1. This fixes that and normalizes the script.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* auto-roll billing start enos test
* enos: don't expect curl available in docker image (#27984)
Signed-off-by: Ryan Cragun <me@ryan.ec>
* Update interoperability-matrix.mdx (#27977)
Updating the existing Vault/YubiHSM integration with a newer version of Vault as well as now supporting Managed Keys.
* Update hana db pkg (#27950)
* database/hana: use go-hdb v1.10.1
* docs/hana: quotes around password so dashes don't break it
* Clarify audit log failure telemetry docs. (#27969)
* Clarify audit log failure telemetry docs.
* Add the note about the misleading counts
* Auto-rolling billing start docs PR (#27926)
* auto-roll docs changes
* addressing comments
* address comments
* Update website/content/api-docs/system/internal-counters.mdx
Co-authored-by: Sarah Chavis <62406755+schavis@users.noreply.github.com>
* addressing some changes
* update docs
* update docs with common explanation file
* updated note info
* fix 1.18 upgrade doc
* fix content-check error
* Update website/content/partials/auto-roll-billing-start-example.mdx
Co-authored-by: miagilepner <mia.epner@hashicorp.com>
---------
Co-authored-by: Sarah Chavis <62406755+schavis@users.noreply.github.com>
Co-authored-by: miagilepner <mia.epner@hashicorp.com>
* docker: add upgrade notes for curl removal (#27995)
Signed-off-by: Ryan Cragun <me@ryan.ec>
* Update vault-plugin-auth-jwt to v0.21.1 (#27992)
* docs: fix upgrade 1.16.x (#27999)
Signed-off-by: Ryan Cragun <me@ryan.ec>
* UI: Add unsupportedCriticalCertExtensions to jwt config expected payload (#27996)
* Client Count Docs Updates/Cleanup (#27862)
* Docs changes
* More condensation of docs
* Added some clarity on date ranges
* Edited wording
* Added estimation client count info
* Update website/content/api-docs/system/internal-counters.mdx
Co-authored-by: miagilepner <mia.epner@hashicorp.com>
---------
Co-authored-by: miagilepner <mia.epner@hashicorp.com>
* update(kubernetes.mdx): k8s-tokenreview URL (#27993)
Co-authored-by: akshya96 <87045294+akshya96@users.noreply.github.com>
* Update programmatic-management.mdx to clarify Terraform prereqs (#27548)
* UI: Replace getNewModel with hydrateModel when model exists (#27978)
* Replace getNewModel with hydrateModel when model exists
* Update getNewModel to only handle nonexistent model types
* Update test
* clarify test
* Fix auth-config models which need hydration not generation
* rename file to match service name
* cleanup + tests
* Add comment about helpUrl method
* Changelog for 1.17.3, 1.16.7 enterprise, 1.15.13 enterprise (#28018)
* changelog for 1.17.3, 1.16.7 enterprise, 1.15.13 enterprise
* Add spacing to match older changelogs
* Fix typo in variables.tf (#27693)
intialize -> initialize
Co-authored-by: akshya96 <87045294+akshya96@users.noreply.github.com>
* Update 1_15-auto-upgrade.mdx (#27675)
* Update 1_15-auto-upgrade.mdx
* Update known issue version numbers for AP issue
---------
Co-authored-by: Sarah Chavis <62406755+schavis@users.noreply.github.com>
* Update 1_16-default-policy-needs-to-be-updated.mdx (#27157)
Made a few grammar changes plus updated the term from Vault IU to Vault UI
* change instances variable to hosts
* for each hosts
* add cluster addr port
* Add ENVs using NewTestDockerCluster (#27457)
* Add ENVs using NewTestDockerCluster
Previously, NewTestDockerCluster had no means of setting
environment variables. This made it tricky to create tests
for functionality that requires them, like having to set
AWS environment variables.
DockerClusterOptions now exposes an option to pass extra
environment variables to the containers, which are appended
to the existing ones.
* adding changelog
* added test case for setting env variables to containers
* fix changelog typo; env name
* Update changelog/27457.txt
Co-authored-by: akshya96 <87045294+akshya96@users.noreply.github.com>
* adding the missing copyright
---------
Co-authored-by: akshya96 <87045294+akshya96@users.noreply.github.com>
* UI: Build KV v2 overview page (#28106)
* move date-from-now helper to addon
* make overview cards consistent across engines
* make kv-paths-card component
* remove overview margin all together
* small styling changes for paths card
* small selector additions
* add overview card test
* add overview page and test
* add default timestamp format
* cleanup paths test
* fix dateFromNow import
* fix selectors, cleanup pki selectors
* and more selector cleanup
* make deactivated state single arg
* fix template and remove @isDeleted and @isDestroyed
* add test and hide badge unless deactivated
* address failings from changing selectors
* oops, not ready to show overview tab just yet!
* add deletionTime to currentSecret metadata getter
* Bump actions/download-artifact from 4.1.7 to 4.1.8 (#27704)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.1.7 to 4.1.8.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](65a9edc588...fa0a91b85d)
---
updated-dependencies:
- dependency-name: actions/download-artifact
dependency-type: direct:production
update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: akshya96 <87045294+akshya96@users.noreply.github.com>
* Bump actions/setup-node from 4.0.2 to 4.0.3 (#27738)
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 4.0.2 to 4.0.3.
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](60edb5dd54...1e60f620b9)
---
updated-dependencies:
- dependency-name: actions/setup-node
dependency-type: direct:production
update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: akshya96 <87045294+akshya96@users.noreply.github.com>
* Add valid IP callout (#28112)
Co-authored-by: Yoko Hyakuna <yoko@hashicorp.com>
* Refactor SSH Configuration workflow (#28122)
* initial copy from other #28004
* pr feedback
* grr
* Bump browser-actions/setup-chrome from 1.7.1 to 1.7.2 (#28101)
Bumps [browser-actions/setup-chrome](https://github.com/browser-actions/setup-chrome) from 1.7.1 to 1.7.2.
- [Release notes](https://github.com/browser-actions/setup-chrome/releases)
- [Changelog](https://github.com/browser-actions/setup-chrome/blob/master/CHANGELOG.md)
- [Commits](db1b524c26...facf10a55b)
---
updated-dependencies:
- dependency-name: browser-actions/setup-chrome
dependency-type: direct:production
update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: divyaac <divya.chandrasekaran@hashicorp.com>
* Bump vault-gcp-secrets-plugin (#28089)
Co-authored-by: divyaac <divya.chandrasekaran@hashicorp.com>
* docs: correct list syntax (#28119)
Co-authored-by: divyaac <divya.chandrasekaran@hashicorp.com>
* add semgrep constraint check in skip step
---------
Signed-off-by: Ryan Cragun <me@ryan.ec>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Adam Rowan <92474478+bear359@users.noreply.github.com>
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
Co-authored-by: Paul Banks <pbanks@hashicorp.com>
Co-authored-by: Sarah Chavis <62406755+schavis@users.noreply.github.com>
Co-authored-by: miagilepner <mia.epner@hashicorp.com>
Co-authored-by: Scott Miller <smiller@hashicorp.com>
Co-authored-by: Chelsea Shaw <82459713+hashishaw@users.noreply.github.com>
Co-authored-by: divyaac <divya.chandrasekaran@hashicorp.com>
Co-authored-by: Roman O'Brien <58272664+romanobrien@users.noreply.github.com>
Co-authored-by: Adrian Todorov <adrian.todorov@hashicorp.com>
Co-authored-by: VAL <val@hashicorp.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Owen Zhang <86668876+owenzorrin@users.noreply.github.com>
Co-authored-by: gkoutsou <gkoutsou@users.noreply.github.com>
Co-authored-by: claire bontempo <68122737+hellobontempo@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jonathan Frappier <92055993+jonathanfrappier@users.noreply.github.com>
Co-authored-by: Yoko Hyakuna <yoko@hashicorp.com>
Co-authored-by: Angel Garbarino <Monkeychip@users.noreply.github.com>
Co-authored-by: Max Levine <max@maxlevine.co.uk>
Co-authored-by: Steffy Fort <steffyfort@gmail.com>