Update the base images for all scenarios:
- RHEL: upgrade the RHEL 10 base image to 10.1
- RHEL: upgrade the RHEL 9 base image to 9.7
- SLES: upgrade the SLES 15 base image to 15.7
- SLES: add SLES 16.0 to the matrix
- OpenSUSE: remove OpenSUSE Leap from the matrix
I ended up removing OpenSUSE because the images we were on were rarely updated, which resulted in very slow scenarios because of package upgrades. Also, despite the latest release shipping in October, I didn't find any public cloud images for the new version of Leap. We can consider adding it back later, but I'm comfortable leaving just SLES 15 and 16 in there for that test coverage.
I also ended up fixing a bug in our integration host setup where we'd provision three nodes instead of one. That ought to result in many fewer instance provisions per scenario. I also had to make a few small tweaks to how we detect whether or not SELinux is enabled, as the prior implementation did not work on SLES 16.
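For reference, a minimal Go sketch of the kind of distro-agnostic check involved; the real scenario step is a shell script, and the `getenforce` fallback here is an assumption rather than the shipped logic:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// selinuxEnabled checks the kernel's selinuxfs interface first, which is
// present on any distro where SELinux is enabled, and only falls back to
// the getenforce userland tool when selinuxfs isn't mounted.
func selinuxEnabled() bool {
	if _, err := os.Stat("/sys/fs/selinux/enforce"); err == nil {
		return true
	}
	out, err := exec.Command("getenforce").Output()
	if err != nil {
		return false
	}
	status := strings.TrimSpace(string(out))
	return status == "Enforcing" || status == "Permissive"
}

func main() {
	fmt.Println("selinux enabled:", selinuxEnabled())
}
```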
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
Fix an incompatibility where we check out the repository with
checkout@v6 and then attempt to check it out again with checkout@v5 in the
set-product-version action.
* update enos directory to trigger lint
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
* Add docker-based backend
* new line
* Add validation
* Add cloud_docker_vault_cluster
* Unify cloud scenario outputs
* Use min_vault_version consistently across both modules
* random network name for docker
* Add local build for docker
* Use environment instead of backend
* make use of existing modules for docker and k8s
* connect the peers
* formatting
* copyright
* Remove old duplicated code
* use enos local exec
* get version locally
* Don't use local time
* adjust bin path for docker
* use root dockerfile
* get dockerfile to work
* Build docker image from correct binary location
* Fix it... maybe
* Add docker admin token
* whitespace
* formatting and comment cleanup
* formatting
* undo
* Apply suggestion from @ryancragun
* Move build to make
* Default to local
* Revert k8s changes
* Add admin token
* Clean map
* whitespace
* whitespace
* Pull out k8s changes and vault_cluster_raft
* Some cleaning changes
* whitespace
* Naming cleanup
---------
Co-authored-by: Luis (LT) Carbonell <lt.carbonell@hashicorp.com>
Co-authored-by: Ryan Cragun <me@ryan.ec>
* change what the performance replication checker script checks
* fix lint errors
* enable consul backends for ent build samples
* fix up samples
* fix linting
* update release samples
* fix linting again
* output to stderr
Co-authored-by: Josh Black <raskchanky@gmail.com>
* license: update headers to IBM Corp.
* `make proto`
* update offset because source file changed
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
[VAULT-39160] actions(hcp): add support for testing custom images on HCP (#9345)
Add support for running the `cloud` scenario with a custom image in the
int HCP environment. We support two new tags that trigger new
functionality. If the `hcp/build-image` tag is present on a PR at the
time of `build`, we'll automatically trigger a custom build for the int
environment. If the `hcp/test` tag is present, we'll trigger a custom
build and run the `cloud` scenario with the resulting image.
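Roughly, the tag handling reduces to a dispatch like the one below; a hypothetical Go sketch (the type and function names are invented), since the real decision logic lives in the workflow definitions:

```go
package main

import "fmt"

// buildPlan captures what the pipeline should do for a PR based on its
// tags. Hypothetical names; the real logic lives in the workflows.
type buildPlan struct {
	BuildHCPImage bool // build a custom image for the int HCP environment
	RunCloudTest  bool // run the `cloud` scenario against that image
}

func planFromTags(tags []string) buildPlan {
	var p buildPlan
	for _, t := range tags {
		switch t {
		case "hcp/build-image":
			p.BuildHCPImage = true
		case "hcp/test":
			// hcp/test implies a custom build plus the cloud scenario.
			p.BuildHCPImage = true
			p.RunCloudTest = true
		}
	}
	return p
}

func main() {
	fmt.Printf("%+v\n", planFromTags([]string{"hcp/test"}))
}
```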
* Fix a bug in our custom build pattern to handle prerelease versions.
* pipeline(hcp): add `--github-output` support to `show image` and
`wait image` commands (mechanism sketched after this list).
* enos(hcp/create_vault_cluster): use a unique identifier for HVN
and vault clusters.
* actions(enos-cloud): add workflow to execute the `cloud` enos
scenario.
* actions(build): add support for triggering a custom build and running
the `enos-cloud` scenario.
* add more debug logging and query without a status
* add shim build-hcp-image for CE workflows
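The `--github-output` support above boils down to appending `key=value` lines to the file GitHub Actions exposes via the `GITHUB_OUTPUT` environment variable. A minimal Go sketch of that mechanism; the function name and example values are illustrative, not the pipeline's actual API:

```go
package main

import (
	"fmt"
	"os"
)

// writeGithubOutput appends key=value pairs to the step-output file that
// GitHub Actions points to with GITHUB_OUTPUT, making the values
// available to later workflow steps.
func writeGithubOutput(kv map[string]string) error {
	path := os.Getenv("GITHUB_OUTPUT")
	if path == "" {
		return fmt.Errorf("GITHUB_OUTPUT is not set; not running in GitHub Actions?")
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	for k, v := range kv {
		if _, err := fmt.Fprintf(f, "%s=%s\n", k, v); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Hypothetical image ID for illustration only.
	if err := writeGithubOutput(map[string]string{"image-id": "ami-0123456789abcdef0"}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```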
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
* [VAULT-39157] enos(cloud): add basic vault cloud scenario
Add the skeleton of a Vault Cloud scenario whereby we create an HCP
network, Vault Cloud cluster, and admin token.
In subsequent PRs we'll wire up building images, waiting on builds, and
ultimately fully testing the resulting image.
* copywrite: add headers
---------
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
Ubuntu 20.04 is EOL. Per our support and package policies we no longer
need to develop or test for that platform.
Signed-off-by: Ryan Cragun <me@ryan.ec>
Right now our logic for consul doesn't consider whether or not a
version is available for both ent and ce. Make sure that the versions
we use are available for both.
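As a sketch, the guard amounts to intersecting the two release lists; the version numbers below are hypothetical:

```go
package main

import "fmt"

// availableForBoth returns only the versions present in both the CE and
// enterprise release lists, so samples never reference a version that
// exists for one edition but not the other.
func availableForBoth(ce, ent []string) []string {
	entSet := make(map[string]struct{}, len(ent))
	for _, v := range ent {
		entSet[v] = struct{}{}
	}
	var both []string
	for _, v := range ce {
		if _, ok := entSet[v]; ok {
			both = append(both, v)
		}
	}
	return both
}

func main() {
	ce := []string{"1.18.1", "1.19.2", "1.20.0"} // hypothetical
	ent := []string{"1.18.1", "1.20.0"}          // hypothetical
	fmt.Println(availableForBoth(ce, ent))       // [1.18.1 1.20.0]
}
```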
Signed-off-by: Ryan Cragun <me@ryan.ec>
* Add Enos benchmark scenario
* add docs on how to run the scenario
* update description again
* see if this works better if we return an empty map
* hopefully disabling telemetry doesn't crash everything now
* yet another try at making telemetry configurable
* swap consul nodes over to be the same as the vault ones
* adjust up IOPs and add a note about it to the docs
* fix missing variables in the ec2 shim
* randomly pick an az for k6 and metrics instances
* enos(benchmark): further modularize and make target infra cloud agnostic
The initial goal of this was to resolve an issue where the one or more
target instances would sometimes attempt to be provisioned in an
availability zone that doesn't support their instance type. The
target_ec2_instances module already supports assigning based on
instance offerings so I wanted to use it for all instances. It also has
the side effect of provisioning instances in parallel to speed up
overall scenario time.
I ended up further modularizing the `benchmark` module into several
sub-modules that each perform a single task well, and rely on
provisioning in the root module. This will allow us to utilize the
module in other clouds more easily should we desire to do that in the
future.
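As a sketch of the assignment idea, with a hypothetical offerings map standing in for the EC2 instance type offerings data the module actually queries:

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickAZ filters the candidate availability zones down to those that
// actually offer the requested instance type, then picks one at random.
func pickAZ(offerings map[string][]string, instanceType string) (string, error) {
	var candidates []string
	for az, types := range offerings {
		for _, t := range types {
			if t == instanceType {
				candidates = append(candidates, az)
				break
			}
		}
	}
	if len(candidates) == 0 {
		return "", fmt.Errorf("no availability zone offers %s", instanceType)
	}
	return candidates[rand.Intn(len(candidates))], nil
}

func main() {
	offerings := map[string][]string{ // hypothetical offerings data
		"us-east-1a": {"t3.medium", "m5.large"},
		"us-east-1b": {"t3.medium"},
		"us-east-1c": {"m5.large"},
	}
	fmt.Println(pickAZ(offerings, "m5.large"))
}
```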
Signed-off-by: Ryan Cragun <me@ryan.ec>
* add copywrite headers
Signed-off-by: Ryan Cragun <me@ryan.ec>
* address some feedback and limit disk iops to 16k by default
Signed-off-by: Ryan Cragun <me@ryan.ec>
---------
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
Fix a potential race where we might attempt to update the auth before
we've initially configured it. Also, rather than updating it on all
nodes, we now choose a node in the cluster at random.
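The selection itself is a one-liner; a sketch with hypothetical addresses:

```go
package main

import (
	"fmt"
	"math/rand"
)

// randomNode picks a single cluster node to run the auth update against,
// instead of applying it on every node and racing the initial setup.
func randomNode(nodes []string) string {
	return nodes[rand.Intn(len(nodes))]
}

func main() {
	nodes := []string{"10.0.1.10", "10.0.1.11", "10.0.1.12"} // hypothetical
	fmt.Println("updating auth on:", randomNode(nodes))
}
```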
Signed-off-by: Ryan Cragun <me@ryan.ec>
* enos(artifactory): unify dev and test scenario artifactory metadata into new module
There was previously a lot of shared logic between
`build_artifactory_artifact` and `build_artifactory_package` with
respect to building an artifact name. When it comes down to it, both
modules are very similar; their only major difference is searching for
any artifact (released or not) by a combination of `revision`,
`edition`, `version`, and `type` vs. searching for a released artifact
by a combination of `version`, `edition`, and `type`.
Rather than bolt on new `s390x` and `fips1403` artifact metadata to
both, I factored their metadata for package names and such into a
unified and shared `artifact/metadata` module that is now called by
both.
This was tricky as dev and test scenarios currently differ in what
we pass in as the `vault_version`, but we hope to remove that
difference soon. We also add metadata support for the forthcoming
FIPS 140-3.
This commit was tested extensively, along with other test scenarios, in
support of `s390x`, but it will be useful immediately for FIPS 140-3 so
I've extracted it out.
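For illustration, the unified metadata boils down to a single name builder over the shared inputs; the naming rules below are simplified stand-ins, not the module's exact scheme:

```go
package main

import "fmt"

// artifactName assembles an artifactory search name from the shared
// metadata inputs. Simplified; the real rules live in the unified
// artifact/metadata module.
func artifactName(product, version, edition, arch, pkgType string) string {
	name := product
	if edition != "ce" {
		name += "-" + edition
	}
	switch pkgType {
	case "rpm":
		return fmt.Sprintf("%s-%s-1.%s.rpm", name, version, arch)
	case "deb":
		return fmt.Sprintf("%s_%s-1_%s.deb", name, version, arch)
	default: // zip bundle
		return fmt.Sprintf("%s_%s_linux_%s.zip", name, version, arch)
	}
}

func main() {
	fmt.Println(artifactName("vault", "1.20.0", "ent.fips1403", "s390x", "rpm"))
}
```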
Signed-off-by: Ryan Cragun <me@ryan.ec>
* Fix artifactory metadata before merge
The initial pass of the artifactory metadata was largely untested and
extracted from a different branch. After testing, this commit fixes a
few issues with the metadata module.
In order to test this I also had to fix an issue where AWS secrets
engine testing became a requirement but is impossible unless you execute
against a blessed AWS account that has the required roles. Instead, we
now make those verifications opt-in via a new variable.
We also make some improvements to the pki-verify-certificates script so
that it works reliably against all our supported distros.
We also update our dynamic configuration to use the updated versions in
samples.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* stop printing the actual value of the secret entered by the user during field validation
* add changelog
* upgrade vault radar version to 0.24.0
* feedback
* remove changelog
* require explicit value for disable_mlock
* set disable_mlock back to true for all docker tests
* fix build error
* update test config files
* change explicit mlock check to apply to integrated storage only.
* formatting and typo fixes
* added test for raft
* remove erroneous test
* remove unnecessary doc line
* remove unnecessary var
* pr suggestions
* test compile fix
* add mlock config value to enos tests
* enos lint
* update enos tests to pass disable_mlock value
* move mlock error to runtime to check for env var (see the sketch after this list)
* fixed mlock config detection logic
* call out mlock on/off tradeoffs to docs
* rewording production hardening section on mlock for clarity
* update error message when missing disable_mlock value to help customers with the previous default
* fix config doc error and update production-hardening doc to align with existing recommendations.
* remove extra check for mlock config value
* fix docker recovery test
* Update changelog/29974.txt
Explicitly call out that Vault will not start without disable_mlock included in the config.
Co-authored-by: Kuba Wieczorek <kuba.wieczorek@hashicorp.com>
* more docker test experimentation.
* passing disable_mlock into test cluster
* add VAULT_DISABLE_MLOCK envvar to docker tests and pass through the value
* add missing envvar for docker env test
* update additional docker test disable_mlock values
* Apply suggestions from code review
Use active voice.
Co-authored-by: Sarah Chavis <62406755+schavis@users.noreply.github.com>
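Taken together, the mlock changes above reduce to a startup check roughly like the following; a hedged Go sketch where the type and field names are simplified stand-ins for Vault's real config types:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// storageConfig is a simplified stand-in for Vault's parsed storage stanza.
type storageConfig struct {
	Type            string
	DisableMlockSet bool // true if disable_mlock appeared in the config file
}

// checkDisableMlock enforces the requirement that integrated storage
// (raft) set disable_mlock explicitly. Running it at startup rather than
// at config parse time lets the VAULT_DISABLE_MLOCK environment variable
// satisfy the check.
func checkDisableMlock(cfg storageConfig) error {
	if cfg.Type != "raft" {
		return nil // the explicit-value requirement applies to raft only
	}
	if cfg.DisableMlockSet {
		return nil
	}
	if _, ok := os.LookupEnv("VAULT_DISABLE_MLOCK"); ok {
		return nil
	}
	return errors.New("disable_mlock must be set explicitly for raft storage, " +
		"either in the config file or via VAULT_DISABLE_MLOCK")
}

func main() {
	fmt.Println(checkDisableMlock(storageConfig{Type: "raft"}))
}
```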
---------
Co-authored-by: Kuba Wieczorek <kuba.wieczorek@hashicorp.com>
Co-authored-by: Sarah Chavis <62406755+schavis@users.noreply.github.com>
* add test
* add as module
* more debugging of scenario
* fixes
* smoke test working
* autopilot test working
* revert local autopilot changes, cleanup comments and raft remove peer changes
* enos fmt
* modules fmt
* add vault_install_dir
* skip removal correctly for consul
* lint
* pr fixes
* passed run
* pr comments
* change step name everywhere
* fix
* check correct field
* remove cluster_name