[QT-697] enos: add descriptions and quality verification (#27311)

In order to take advantage of enos' ability to outline scenarios and to
inventory the verification they perform, we needed to retrofit all of
that information onto our existing scenarios and steps.

This change introduces an initial set of descriptions and verification
declarations that we can continue to refine over time.

As doing this required that I re-read every scenario in its entirety, I
also updated and fixed a few things I noticed along the way, including
adding a few small features to enos that we utilize to make handling
initial versions programmatic between versions, instead of maintaining a
delta between our globals in each branch.

* Update autopilot and in-place upgrade initial versions
* Programmatically determine which initial versions to use based on Vault
  version
* Partially normalize steps between scenarios to make comparisons easier
* Update the MOTD to explain that VAULT_ADDR and VAULT_TOKEN have been
  set
* Add scenario and step descriptions to scenarios
* Add initial scenario quality verification declarations to scenarios
* Unpin Terraform in scenarios as >= 1.8.4 should work fine
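
The programmatic initial-version selection mentioned above could be sketched in enos HCL roughly as follows. This is only an illustration: `semverconstraint`, `var.vault_product_version`, and the global names here are assumptions, not necessarily what the change actually uses.

```hcl
globals {
  # Every initial version we know how to upgrade from; 1.8.x is the floor.
  upgrade_all_versions = [
    "1.8.12", "1.9.10", "1.10.11", "1.11.12", "1.12.11",
    "1.13.13", "1.14.13", "1.15.9", "1.16.3",
  ]

  # Hypothetical sketch: keep only versions older than the version under
  # test, so each release branch derives its own list instead of carrying
  # a globals delta between branches.
  upgrade_initial_versions = [
    for v in global.upgrade_all_versions : v
    if semverconstraint(v, "<${var.vault_product_version}")
  ]
}
```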
Ryan Cragun 2024-06-13 11:16:33 -06:00 committed by GitHub
parent c131b47535
commit 84935e4416
GPG Key ID: B5690EEEBB952194
19 changed files with 2546 additions and 420 deletions


@@ -41,7 +41,6 @@ jobs:
- uses: hashicorp/setup-terraform@v3
with:
terraform_wrapper: false
terraform_version: "1.7.5" # Pin until 1.8.x crash has been resolved
- uses: hashicorp/action-setup-enos@v1
with:
github-token: ${{ secrets.ELEVATED_GITHUB_TOKEN }}


@@ -38,7 +38,6 @@ jobs:
# the Terraform wrapper will break Terraform execution in Enos because
# it changes the output to text when we expect it to be JSON.
terraform_wrapper: false
terraform_version: "1.7.5" # Pin until 1.8.x crash has been resolved
- name: Set up Enos
uses: hashicorp/action-setup-enos@v1
with:


@@ -32,8 +32,6 @@ jobs:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6
- name: Set up Terraform
uses: hashicorp/setup-terraform@v3
with:
terraform_version: "1.7.5" # Pin until 1.8.x crash has been resolved
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2
with:


@@ -90,7 +90,6 @@ jobs:
with:
cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
terraform_wrapper: false
terraform_version: "1.7.5" # Pin until 1.8.x crash has been resolved
- name: Prepare scenario dependencies
run: |
mkdir -p ./enos/support/terraform-plugin-cache


@@ -58,7 +58,7 @@ jobs:
- id: metadata
run: |
build_date=$(make ci-get-date)
sample_seed=$(date +%s%N)
sample_seed=$(date +%s)
sample=$(enos scenario sample observe "${{ inputs.sample-name }}" --chdir ./enos --min 1 --max "${{ inputs.sample-max }}" --seed "${sample_seed}" --format json | jq -c ".observation.elements")
if [[ "${{ inputs.vault-edition }}" == "ce" ]]; then
vault_version="${{ inputs.vault-version }}"
@@ -113,7 +113,6 @@ jobs:
# the Terraform wrapper will break Terraform execution in Enos because
# it changes the output to text when we expect it to be JSON.
terraform_wrapper: false
terraform_version: "1.7.5" # Pin until 1.8.x crash has been resolved
- uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_CI }}

enos/enos-descriptions.hcl (new file, 195 additions)

@@ -0,0 +1,195 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: BUSL-1.1
globals {
description = {
build_vault = <<-EOF
Determine which Vault artifact we want to use for the scenario. Depending on the
'artifact_source' variant we'll either build Vault from the local branch, fetch a candidate
build from Artifactory, or use a local artifact that was built in CI via CRT.
EOF
create_backend_cluster = <<-EOF
Create a storage backend cluster if necessary. When configured to use Consul it will
install, configure, and start the Consul cluster on the target hosts and wait for the Consul
cluster to become healthy. When using integrated raft storage this step is a no-op as the
Vault cluster nodes will provide their own integrated storage.
EOF
create_seal_key = <<-EOF
Create the necessary seal key infrastructure for Vault's auto-unseal functionality. Depending
on the 'seal' variant this step will perform different actions. When using 'shamir' the step
is a no-op as we won't require an external seal mechanism. When using 'pkcs11' this step will
create a SoftHSM slot and associated token which can be distributed to all target nodes. When
using 'awskms' a new AWSKMS key will be created. The necessary security groups and policies
for Vault target nodes to access the AWSKMS key are handled in the target modules.
EOF
create_vault_cluster = <<-EOF
Create the Vault cluster. In this module we'll install, configure, start, initialize, and
unseal all of the nodes in the Vault cluster. After initialization it also enables various audit devices.
EOF
create_vault_cluster_backend_targets = <<-EOF
Create the target machines that we'll install Consul onto when using Consul for storage. We
also handle creating AWS instance profiles and security groups that allow for auto-discovery
via the retry_join functionality in Consul. The security group firewall rules will
automatically allow SSH access from the host external IP address of the machine executing
Enos, in addition to all of the required ports for Consul to function and be accessible in the
VPC.
EOF
create_vault_cluster_targets = <<-EOF
Create the target machines that we'll install Vault onto. We also handle creating AWS instance
profiles and security groups that allow for auto-discovery via the retry_join functionality in
Consul. The security group firewall rules will automatically allow SSH access from the host
external IP address of the machine executing Enos, in addition to all of the required ports
for Vault to function and be accessible in the VPC.
EOF
create_vpc = <<-EOF
Create an AWS VPC, internet gateway, default security group, and default subnet that allows
egress traffic via the internet gateway.
EOF
ec2_info = <<-EOF
Query various endpoints in AWS EC2 to gather metadata we'll use later in our run when creating
infrastructure for the Vault cluster. This metadata includes:
- AMI IDs for different Linux distributions and platform architectures
- Available EC2 Regions
- Availability Zones for our desired machine instance types
EOF
enable_multiseal = <<-EOF
Configure the Vault cluster with 'enable_multiseal' and up to three auto-unseal methods
via individual, prioritized 'seal' stanzas.
EOF
get_local_metadata = <<-EOF
Gather metadata for several Vault quality verifications that are dynamically modified based on
the Vault binary version, commit SHA, build date (commit SHA date), and edition metadata. When
we're testing existing artifacts this expected metadata is passed in via Enos variables. When
we're building a local artifact by using the 'artifact_source:local' variant, this step executes and
populates the expected metadata with that of our branch so that we don't have to update the
Enos variables on each commit.
EOF
get_vault_cluster_ip_addresses = <<-EOF
Map the public and private IP addresses of the Vault cluster nodes and segregate them by
their leader status. This allows us to easily determine the public IP addresses of the leader
and follower nodes.
EOF
read_backend_license = <<-EOF
When using Consul Enterprise as a storage backend, ensure that a Consul Enterprise license is
present on disk and read its contents so that we can utilize it when configuring the storage
cluster. Must have the 'backend:consul' and 'consul_edition:ent' variants.
EOF
read_vault_license = <<-EOF
When deploying Vault Enterprise, ensure a Vault Enterprise license is present on disk and
read its contents so that we can utilize it when configuring the Vault Enterprise cluster.
The 'edition' variant must be set to an Enterprise edition.
EOF
shutdown_nodes = <<-EOF
Shut down the nodes to ensure that they are no longer operating software as part of the
cluster.
EOF
start_vault_agent = <<-EOF
Create an agent approle in the auth engine, generate a Vault Agent configuration file, and
start the Vault agent.
EOF
stop_vault = <<-EOF
Stop the Vault cluster by stopping the vault service via systemctl.
EOF
vault_leader_step_down = <<-EOF
Force the Vault cluster leader to step down which forces the Vault cluster to perform a leader
election.
EOF
verify_agent_output = <<-EOF
Verify that Vault running in Agent mode uses templates to create the expected log output.
EOF
verify_raft_cluster_all_nodes_are_voters = <<-EOF
When configured with a 'backend:raft' variant, verify that all nodes in the cluster are
healthy and are voters.
EOF
verify_autopilot_idle_state = <<-EOF
Wait for Autopilot to upgrade the entire Vault cluster and ensure that the target version
matches the candidate version. Ensure that the cluster reaches an upgrade state of
'await-server-removal'.
EOF
verify_read_test_data = <<-EOF
Verify that we are able to read test data we've written in prior steps. This includes:
- Auth user policies
- Kv data
EOF
verify_replication_status = <<-EOF
Verify that the default replication status is correct depending on the edition of Vault that
has been deployed. When testing a Community Edition of Vault we'll ensure that replication is not
enabled. When testing any Enterprise edition of Vault we'll ensure that Performance and
Disaster Recovery replication are available.
EOF
verify_seal_rewrap_entries_processed_eq_entries_succeeded_post_rewrap = <<-EOF
Verify that the v1/sys/sealwrap/rewrap Vault API returns the rewrap data and
'entries.processed' equals 'entries.succeeded' after the rewrap has completed.
EOF
verify_seal_rewrap_entries_processed_is_gt_zero_post_rewrap = <<-EOF
Verify that the /sys/sealwrap/rewrap Vault API returns the rewrap data and the 'entries.processed' has
processed at least one entry after the rewrap has completed.
EOF
verify_seal_rewrap_is_running_false_post_rewrap = <<-EOF
Verify that the v1/sys/sealwrap/rewrap Vault API returns the rewrap data and 'is_running' is set to
'false' after a rewrap has completed.
EOF
verify_seal_rewrap_no_entries_fail_during_rewrap = <<-EOF
Verify that the v1/sys/sealwrap/rewrap Vault API returns the rewrap data and 'entries.failed' is '0'
after the rewrap has completed.
EOF
verify_seal_type = <<-EOF
Vault's reported seal type matches our configuration.
EOF
verify_write_test_data = <<-EOF
Verify that Vault is capable of mounting secrets engines and writing data to them. These currently include:
- Mount the auth engine
- Mount the kv engine
- Write auth user policies
- Write kv data
EOF
verify_ui = <<-EOF
Verify that the Vault UI assets are embedded in the Vault binary and are available when running.
EOF
verify_vault_unsealed = <<-EOF
Verify that the Vault cluster has successfully unsealed.
EOF
verify_vault_version = <<-EOF
Verify that the Vault cluster has the correct embedded version metadata. This metadata includes
the Vault version, edition, build date, and any special prerelease metadata.
EOF
wait_for_cluster_to_have_leader = <<-EOF
Wait for a leader election to occur before we proceed with any further quality verification.
EOF
wait_for_seal_rewrap = <<-EOF
Wait for the Vault cluster seal rewrap process to complete.
EOF
}
}
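
For context, the 'enable_multiseal' configuration that the descriptions above refer to looks roughly like the following Vault server configuration. This is a hedged sketch only: the key alias, seal names, HSM library path, labels, and PIN are all placeholders, not values from this change.

```hcl
# Sketch of a Vault server configuration using multiple prioritized
# auto-unseal methods. All identifiers below are placeholders.
enable_multiseal = true

seal "awskms" {
  priority   = "1"
  name       = "awskms_primary"
  kms_key_id = "alias/vault-unseal-primary" # placeholder key alias
}

seal "pkcs11" {
  priority    = "2"
  name        = "hsm_secondary"
  lib         = "/usr/lib/softhsm/libsofthsm2.so" # placeholder HSM library
  token_label = "vault"
  key_label   = "vault-unseal"
  pin         = "REPLACE_ME"
}
```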


@@ -14,11 +14,13 @@ scenario "dev_pr_replication" {
build and deploy the current branch!
In order to execute this scenario you'll need to install the enos CLI:
brew tap hashicorp/tap && brew update && brew install hashicorp/tap/enos
- $ brew tap hashicorp/tap && brew update && brew install hashicorp/tap/enos
You'll also need access to an AWS account with an SSH keypair.
Perform the steps here to get AWS access with Doormat https://eng-handbook.hashicorp.services/internal-tools/enos/common-setup-steps/#authenticate-with-doormat
Perform the steps here to get an AWS keypair set up: https://eng-handbook.hashicorp.services/internal-tools/enos/common-setup-steps/#set-your-aws-key-pair-name-and-private-key
You'll also need access to an AWS account via Doormat, follow the guide here:
https://eng-handbook.hashicorp.services/internal-tools/enos/common-setup-steps/#authenticate-with-doormat
Follow this guide to get an SSH keypair set up in the AWS account:
https://eng-handbook.hashicorp.services/internal-tools/enos/common-setup-steps/#set-your-aws-key-pair-name-and-private-key
Please note that this scenario requires several input variables to be set in order to function
properly. While not all variants will require all variables, it's suggested that you look over


@@ -37,7 +37,8 @@ globals {
"sles" = var.distro_version_sles
"ubuntu" = var.distro_version_ubuntu
}
editions = ["ce", "ent", "ent.fips1402", "ent.hsm", "ent.hsm.fips1402"]
editions = ["ce", "ent", "ent.fips1402", "ent.hsm", "ent.hsm.fips1402"]
enterprise_editions = [for e in global.editions : e if e != "ce"]
package_manager = {
"amzn2" = "yum"
"leap" = "zypper"
@@ -64,7 +65,7 @@ globals {
// release branch's version. Also beware if adding versions below 1.11.x. Some scenarios
// that use this global might not work as expected with earlier versions. Below 1.8.x is
// not supported in any way.
upgrade_initial_versions = ["1.11.12", "1.12.11", "1.13.11", "1.14.7", "1.15.3"]
upgrade_initial_versions = ["1.8.12", "1.9.10", "1.10.11", "1.11.12", "1.12.11", "1.13.13", "1.14.13", "1.15.9", "1.16.3"]
vault_install_dir = {
bundle = "/opt/vault/bin"
package = "/usr/bin"

enos/enos-qualities.hcl (new file, 478 additions)

@@ -0,0 +1,478 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: BUSL-1.1
quality "consul_api_agent_host_read" {
description = "The /v1/agent/host Consul API returns host info for each node in the cluster"
}
quality "consul_api_health_node_read" {
description = <<-EOF
The /v1/health/node/<node> Consul API returns health info for each node in the cluster
EOF
}
quality "consul_api_operator_raft_config_read" {
description = "The /v1/operator/raft/configuration Consul API returns raft info for the cluster"
}
quality "consul_autojoin_aws" {
description = "The Consul cluster auto-joins with AWS tag discovery"
}
quality "consul_cli_validate" {
description = "The 'consul validate' command validates the Consul configuration"
}
quality "consul_config_file" {
description = "Consul starts when configured with a configuration file"
}
quality "consul_ha_leader_election" {
description = "The Consul cluster elects a leader node on start up"
}
quality "consul_health_state_passing_read_nodes_minimum" {
description = <<-EOF
The Consul cluster meets the minimum of number of healthy nodes according to the
/v1/health/state/passing Consul API
EOF
}
quality "consul_operator_raft_configuration_read_voters_minimum" {
description = <<-EOF
The Consul cluster meets the minimum number of raft voters according to the
/v1/operator/raft/configuration Consul API
EOF
}
quality "consul_service_start_client" {
description = "The Consul service starts in client mode"
}
quality "consul_service_start_server" {
description = "The Consul service starts in server mode"
}
quality "consul_service_systemd_notified" {
description = "The Consul binary notifies systemd when the service is active"
}
quality "consul_service_systemd_unit" {
description = "The 'consul.service' systemd unit starts the service"
}
quality "vault_agent_auto_auth_approle" {
description = <<-EOF
Vault running in Agent mode utilizes the approle auth method to do auto-auth via a role and
read secrets from a file source
EOF
}
quality "vault_agent_log_template" {
description = global.description.verify_agent_output
}
quality "vault_api_sys_auth_userpass_user_write" {
description = "The v1/sys/auth/userpass/users/<user> Vault API associates a policy with a user"
}
quality "vault_api_sys_config_read" {
description = <<-EOF
The v1/sys/config/sanitized Vault API returns sanitized configuration which matches our given
configuration
EOF
}
quality "vault_api_sys_ha_status_read" {
description = "The v1/sys/ha-status Vault API returns the HA status of the cluster"
}
quality "vault_api_sys_health_read" {
description = <<-EOF
The v1/sys/health Vault API returns the correct codes depending on the replication and
'seal-status' of the cluster
EOF
}
quality "vault_api_sys_host_info_read" {
description = "The v1/sys/host-info Vault API returns the host info for each node in the cluster"
}
quality "vault_api_sys_leader_read" {
description = "The v1/sys/leader Vault API returns the cluster leader info"
}
quality "vault_api_sys_metrics_vault_core_replication_write_undo_logs_enabled" {
description = <<-EOF
The v1/sys/metrics Vault API returns metrics and verifies that
'Gauges[vault.core.replication.write_undo_logs]' is enabled
EOF
}
quality "vault_api_sys_policy_write" {
description = "The v1/sys/policy Vault API writes a superuser policy"
}
quality "vault_api_sys_quotas_lease_count_read_max_leases_default" {
description = <<-EOF
The v1/sys/quotas/lease-count/default Vault API returns the lease 'count' and 'max_leases' is
set to 300,000
EOF
}
quality "vault_api_sys_replication_performance_primary_enable_write" {
description = <<-EOF
The v1/sys/replication/performance/primary/enable Vault API enables performance replication
EOF
}
quality "vault_api_sys_replication_performance_primary_secondary_token_write" {
description = <<-EOF
The v1/sys/replication/performance/primary/secondary-token Vault API configures the replication
token
EOF
}
quality "vault_api_sys_replication_performance_secondary_enable_write" {
description = <<-EOF
The v1/sys/replication/performance/secondary/enable Vault API enables performance replication
EOF
}
quality "vault_api_sys_replication_performance_read_connection_status_connected" {
description = <<-EOF
The v1/sys/replication/performance/status Vault API returns status info and the
'connection_status' is correct for the given node
EOF
}
quality "vault_api_sys_replication_performance_status_known_primary_cluster_addrs" {
description = <<-EOF
The v1/sys/replication/performance/status Vault API returns the replication status and
'known_primary_cluster_address' is the expected primary cluster leader
EOF
}
quality "vault_api_sys_replication_performance_status_read" {
description = <<-EOF
The v1/sys/replication/performance/status Vault API returns the performance replication status
EOF
}
quality "vault_api_sys_replication_performance_status_read_cluster_address" {
description = <<-EOF
The v1/sys/replication/performance/status Vault API returns the performance replication status
and the '{primaries,secondaries}[*].cluster_address' is correct for the given node
EOF
}
quality "vault_api_sys_replication_performance_status_read_state_not_idle" {
description = <<-EOF
The v1/sys/replication/performance/status Vault API returns the performance replication status
and the state is not idle
EOF
}
quality "vault_api_sys_replication_status_read" {
description = <<-EOF
The v1/sys/replication/status Vault API returns the performance replication status of the
cluster
EOF
}
quality "vault_api_sys_seal_status_api_read_matches_sys_health" {
description = <<-EOF
The v1/sys/seal-status Vault API and v1/sys/health Vault API agree on the health of each node
and the cluster
EOF
}
quality "vault_api_sys_sealwrap_rewrap_read_entries_processed_eq_entries_succeeded_post_rewrap" {
description = global.description.verify_seal_rewrap_entries_processed_eq_entries_succeeded_post_rewrap
}
quality "vault_api_sys_sealwrap_rewrap_read_entries_processed_gt_zero_post_rewrap" {
description = global.description.verify_seal_rewrap_entries_processed_is_gt_zero_post_rewrap
}
quality "vault_api_sys_sealwrap_rewrap_read_is_running_false_post_rewrap" {
description = global.description.verify_seal_rewrap_is_running_false_post_rewrap
}
quality "vault_api_sys_sealwrap_rewrap_read_no_entries_fail_during_rewrap" {
description = global.description.verify_seal_rewrap_no_entries_fail_during_rewrap
}
quality "vault_api_sys_step_down_steps_down" {
description = <<-EOF
The v1/sys/step-down Vault API forces the cluster leader to step down and initiates a new leader
election
EOF
}
quality "vault_api_sys_storage_raft_autopilot_configuration_read" {
description = <<-EOF
The /sys/storage/raft/autopilot/configuration Vault API returns the autopilot configuration of
the cluster
EOF
}
quality "vault_api_sys_storage_raft_autopilot_state_read" {
description = <<-EOF
The v1/sys/storage/raft/autopilot/state Vault API returns the raft autopilot state of the
cluster
EOF
}
quality "vault_api_sys_storage_raft_autopilot_upgrade_info_read_status_matches" {
description = <<-EOF
The v1/sys/storage/raft/autopilot/state Vault API returns the raft autopilot state and the
'upgrade_info.status' matches our expected state
EOF
}
quality "vault_api_sys_storage_raft_autopilot_upgrade_info_target_version_read_matches_candidate" {
description = <<-EOF
The v1/sys/storage/raft/autopilot/state Vault API returns the raft autopilot state and the
'upgrade_info.target_version' matches the candidate version
EOF
}
quality "vault_api_sys_storage_raft_configuration_read" {
description = <<-EOF
The v1/sys/storage/raft/configuration Vault API returns the raft configuration of the cluster
EOF
}
quality "vault_api_sys_storage_raft_remove_peer_write_removes_peer" {
description = <<-EOF
The v1/sys/storage/raft/remove-peer Vault API removes the desired node from the raft sub-system
EOF
}
quality "vault_artifact_bundle" {
description = "The candidate binary packaged as a zip bundle is used for testing"
}
quality "vault_artifact_deb" {
description = "The candidate binary packaged as a deb package is used for testing"
}
quality "vault_artifact_rpm" {
description = "The candidate binary packaged as an rpm package is used for testing"
}
quality "vault_audit_log" {
description = "The Vault audit sub-system is enabled with the file device and writes to a log file"
}
quality "vault_audit_socket" {
description = "The Vault audit sub-system is enabled with the socket device and writes to a socket"
}
quality "vault_audit_syslog" {
description = "The Vault audit sub-system is enabled with the syslog device and writes to syslog"
}
quality "vault_auto_unseals_after_autopilot_upgrade" {
description = "Vault auto-unseals after upgrading the cluster with autopilot"
}
quality "vault_autojoins_new_nodes_into_initialized_cluster" {
description = "Vault successfully auto-joins new nodes into an existing cluster"
}
quality "vault_autojoin_aws" {
description = "Vault auto-joins nodes using AWS tag discovery"
}
quality "vault_autopilot_upgrade_leader_election" {
description = <<-EOF
Vault elects a new leader after upgrading the cluster with autopilot
EOF
}
quality "vault_cli_audit_enable" {
description = "The 'vault audit enable' command enables audit devices"
}
quality "vault_cli_auth_enable_approle" {
description = "The 'vault auth enable approle' command enables the approle auth method"
}
quality "vault_cli_operator_members" {
description = "The 'vault operator members' command returns the expected list of members"
}
quality "vault_cli_operator_raft_remove_peer" {
description = "The 'vault operator remove-peer' command removes the desired node"
}
quality "vault_cli_operator_step_down" {
description = "The 'vault operator step-down' command forces the cluster leader to step down"
}
quality "vault_cli_policy_write" {
description = "The 'vault policy write' command writes a policy"
}
quality "vault_cli_status_exit_code" {
description = <<-EOF
The 'vault status' command exits with the correct code depending on expected seal status
EOF
}
quality "vault_cluster_upgrade_in_place" {
description = <<-EOF
Vault starts with existing data and configuration in place and migrates the data
EOF
}
quality "vault_config_env_variables" {
description = "Vault starts when configured primarily with environment variables"
}
quality "vault_config_file" {
description = "Vault starts when configured primarily with a configuration file"
}
quality "vault_config_log_level" {
description = "The 'log_level' configuration value modifies Vault's log level"
}
quality "vault_config_multiseal_is_toggleable" {
description = <<-EOF
The Vault Cluster can be configured with a single unseal method regardless of the
'enable_multiseal' config value
EOF
}
quality "vault_init" {
description = "Vault initializes the cluster with the given seal parameters"
}
quality "vault_license_required_ent" {
description = "Vault Enterprise requires a license in order to start"
}
quality "vault_mount_auth" {
description = "Vault mounts the auth engine"
}
quality "vault_mount_kv" {
description = "Vault mounts the kv engine"
}
quality "vault_multiseal_enable" {
description = <<-EOF
The Vault Cluster starts with 'enable_multiseal' and multiple auto-unseal methods.
EOF
}
quality "vault_proxy_auto_auth_approle" {
description = <<-EOF
Vault Proxy utilizes the approle auth method to auto-auth via a role and reads secrets from a file source.
EOF
}
quality "vault_proxy_cli_access" {
description = <<-EOF
The Vault CLI accesses tokens through the Vault proxy without a VAULT_TOKEN available
EOF
}
quality "vault_raft_voters" {
description = global.description.verify_raft_cluster_all_nodes_are_voters
}
quality "vault_replication_ce_disabled" {
description = "Replication is not enabled for CE editions"
}
quality "vault_replication_ent_dr_available" {
description = "DR replication is available on Enterprise"
}
quality "vault_replication_ent_pr_available" {
description = "PR replication is available on Enterprise"
}
quality "vault_seal_awskms" {
description = "Vault auto-unseals with the awskms seal"
}
quality "vault_seal_shamir" {
description = <<-EOF
Vault manually unseals with the shamir seal when given the expected number of 'key_shares'
EOF
}
quality "vault_seal_pkcs11" {
description = "Vault auto-unseals with the pkcs11 seal"
}
quality "vault_secrets_auth_user_policy_write" {
description = "Vault creates auth user policies with the root token"
}
quality "vault_secrets_kv_read" {
description = "Vault kv secrets engine data is readable"
}
quality "vault_secrets_kv_write" {
description = "Vault kv secrets engine data is writable"
}
quality "vault_service_restart" {
description = "Vault restarts with existing configuration"
}
quality "vault_service_start" {
description = "Vault starts with the given configuration"
}
quality "vault_service_systemd_notified" {
description = "The Vault binary notifies systemd when the service is active"
}
quality "vault_service_systemd_unit" {
description = "The 'vault.service' systemd unit starts the service"
}
quality "vault_status_seal_type" {
description = global.description.verify_seal_type
}
quality "vault_storage_backend_consul" {
description = "Vault operates using Consul for storage"
}
quality "vault_storage_backend_raft" {
description = "Vault operates using integrated Raft storage"
}
quality "vault_ui_assets" {
description = global.description.verify_ui
}
quality "vault_ui_test" {
description = <<-EOF
The Vault Web UI test suite runs against a live Vault server with the embedded static assets
EOF
}
quality "vault_unseal_ha_leader_election" {
description = "Vault performs a leader election after it is unsealed"
}
quality "vault_version_build_date" {
description = "Vault's reported build date matches our expectations"
}
quality "vault_version_edition" {
description = "Vault's reported edition matches our expectations"
}
quality "vault_version_release" {
description = "Vault's reported release version matches our expectations"
}


@@ -2,6 +2,22 @@
# SPDX-License-Identifier: BUSL-1.1
scenario "agent" {
description = <<-EOF
The agent scenario verifies Vault when running in Agent mode. The build can be a local branch,
any CRT built Vault artifact saved to the local machine, or any CRT built Vault artifact in the
stable channel in Artifactory.
The scenario creates a new Vault Cluster using the candidate build and then runs the same Vault
build in Agent mode and verifies behavior against the Vault cluster. The scenario also performs
standard baseline verification that is not specific to the Agent mode deployment.
If you want to use the 'distro:leap' variant you must first accept SUSE's terms for the AWS
account. To verify that your account has agreed, sign in to your AWS account through Doormat
and visit the following links to verify your subscription or subscribe:
arm64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=a516e959-df54-4035-bb1a-63599b7a6df9
amd64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=5535c495-72d4-4355-b169-54ffa874f849
EOF
matrix {
arch = global.archs
artifact_source = global.artifact_sources
@@ -29,7 +45,7 @@ scenario "agent" {
# PKCS#11 can only be used on ent.hsm and ent.hsm.fips1402.
exclude {
seal = ["pkcs11"]
edition = ["ce", "ent", "ent.fips1402"]
edition = [for e in matrix.edition : e if !strcontains(e, "hsm")]
}
# arm64 AMIs are not offered for Leap
@@ -66,13 +82,9 @@ scenario "agent" {
manage_service = matrix.artifact_type == "bundle"
}
step "get_local_metadata" {
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
}
step "build_vault" {
module = "build_${matrix.artifact_source}"
description = global.description.build_vault
module = "build_${matrix.artifact_source}"
variables {
build_tags = var.vault_local_build_tags != null ? var.vault_local_build_tags : global.build_tags[matrix.edition]
@@ -93,11 +105,13 @@ scenario "agent" {
}
step "ec2_info" {
module = module.ec2_info
description = global.description.ec2_info
module = module.ec2_info
}
step "create_vpc" {
module = module.create_vpc
description = global.description.create_vpc
module = module.create_vpc
variables {
common_tags = global.tags
@@ -107,8 +121,9 @@ scenario "agent" {
// This step reads the contents of the backend license if we're using a Consul backend and
// an "ent" Consul edition.
step "read_backend_license" {
skip_step = matrix.backend == "raft" || matrix.consul_edition == "ce"
module = module.read_license
description = global.description.read_backend_license
skip_step = matrix.backend == "raft" || matrix.consul_edition == "ce"
module = module.read_license
variables {
file_name = global.backend_license_path
@@ -116,8 +131,9 @@ scenario "agent" {
}
step "read_vault_license" {
skip_step = matrix.edition == "ce"
module = module.read_license
description = global.description.read_vault_license
skip_step = matrix.edition == "ce"
module = module.read_license
variables {
file_name = global.vault_license_path
@@ -125,8 +141,9 @@ scenario "agent" {
}
step "create_seal_key" {
module = "seal_${matrix.seal}"
depends_on = [step.create_vpc]
description = global.description.create_seal_key
module = "seal_${matrix.seal}"
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@@ -139,8 +156,9 @@ scenario "agent" {
}
step "create_vault_cluster_targets" {
module = module.target_ec2_instances
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = module.target_ec2_instances
depends_on = [step.create_vpc]
providers = {
enos = local.enos_provider[matrix.distro]
@@ -156,8 +174,9 @@ scenario "agent" {
}
step "create_vault_cluster_backend_targets" {
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_backend_targets
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@@ -173,7 +192,8 @@ scenario "agent" {
}
step "create_backend_cluster" {
module = "backend_${matrix.backend}"
description = global.description.create_backend_cluster
module = "backend_${matrix.backend}"
depends_on = [
step.create_vault_cluster_backend_targets
]
@@ -182,6 +202,23 @@ scenario "agent" {
enos = provider.enos.ubuntu
}
verifies = [
// verified in modules
quality.consul_autojoin_aws,
quality.consul_config_file,
quality.consul_ha_leader_election,
quality.consul_service_start_server,
// verified in enos_consul_start resource
quality.consul_api_agent_host_read,
quality.consul_api_health_node_read,
quality.consul_api_operator_raft_config_read,
quality.consul_cli_validate,
quality.consul_health_state_passing_read_nodes_minimum,
quality.consul_operator_raft_configuration_read_voters_minimum,
quality.consul_service_systemd_unit,
quality.consul_service_systemd_notified,
]
variables {
cluster_name = step.create_vault_cluster_backend_targets.cluster_name
cluster_tag_key = global.backend_tag_key
@ -195,7 +232,8 @@ scenario "agent" {
}
step "create_vault_cluster" {
module = module.vault_cluster
description = global.description.create_vault_cluster
module = module.vault_cluster
depends_on = [
step.create_backend_cluster,
step.build_vault,
@ -206,6 +244,39 @@ scenario "agent" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
// verified in modules
quality.consul_service_start_client,
quality.vault_artifact_bundle,
quality.vault_artifact_deb,
quality.vault_artifact_rpm,
quality.vault_audit_log,
quality.vault_audit_syslog,
quality.vault_audit_socket,
quality.vault_autojoin_aws,
quality.vault_config_env_variables,
quality.vault_config_log_level,
quality.vault_config_file,
quality.vault_license_required_ent,
quality.vault_service_start,
quality.vault_init,
quality.vault_storage_backend_consul,
quality.vault_storage_backend_raft,
// verified in enos_vault_start resource
quality.vault_api_sys_config_read,
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_health_read,
quality.vault_api_sys_host_info_read,
quality.vault_api_sys_seal_status_api_read_matches_sys_health,
quality.vault_api_sys_storage_raft_autopilot_configuration_read,
quality.vault_api_sys_storage_raft_autopilot_state_read,
quality.vault_api_sys_storage_raft_configuration_read,
quality.vault_api_sys_replication_status_read,
quality.vault_cli_status_exit_code,
quality.vault_service_systemd_unit,
quality.vault_service_systemd_notified,
]
variables {
artifactory_release = matrix.artifact_source == "artifactory" ? step.build_vault.vault_artifactory_release : null
backend_cluster_name = step.create_vault_cluster_backend_targets.cluster_name
@ -230,15 +301,27 @@ scenario "agent" {
}
}
step "get_local_metadata" {
description = global.description.get_local_metadata
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
}
// Wait for our cluster to elect a leader
step "wait_for_leader" {
module = module.vault_wait_for_leader
depends_on = [step.create_vault_cluster]
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
@ -248,13 +331,19 @@ scenario "agent" {
}
step "start_vault_agent" {
module = "vault_agent"
description = global.description.start_vault_agent
module = "vault_agent"
depends_on = [
step.build_vault,
step.create_vault_cluster,
step.wait_for_leader,
]
verifies = [
quality.vault_agent_auto_auth_approle,
quality.vault_cli_auth_enable_approle,
]
providers = {
enos = local.enos_provider[matrix.distro]
}
@ -276,6 +365,8 @@ scenario "agent" {
step.wait_for_leader,
]
verifies = quality.vault_agent_log_template
providers = {
enos = local.enos_provider[matrix.distro]
}
@ -288,13 +379,20 @@ scenario "agent" {
}
step "get_vault_cluster_ips" {
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader]
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -303,13 +401,20 @@ scenario "agent" {
}
step "verify_vault_version" {
module = module.vault_verify_version
depends_on = [step.wait_for_leader]
description = global.description.verify_vault_version
module = module.vault_verify_version
depends_on = [step.wait_for_leader]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_version_build_date,
quality.vault_version_edition,
quality.vault_version_release,
]
variables {
vault_instances = step.create_vault_cluster_targets.hosts
vault_edition = matrix.edition
@ -322,13 +427,20 @@ scenario "agent" {
}
step "verify_vault_unsealed" {
module = module.vault_verify_unsealed
depends_on = [step.wait_for_leader]
description = global.description.verify_vault_unsealed
module = module.vault_verify_unsealed
depends_on = [step.wait_for_leader]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_seal_awskms,
quality.vault_seal_pkcs11,
quality.vault_seal_shamir,
]
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.create_vault_cluster_targets.hosts
@ -336,7 +448,8 @@ scenario "agent" {
}
step "verify_write_test_data" {
module = module.vault_verify_write_data
description = global.description.verify_write_test_data
module = module.vault_verify_write_data
depends_on = [
step.create_vault_cluster,
step.get_vault_cluster_ips
@ -346,6 +459,13 @@ scenario "agent" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_secrets_auth_user_policy_write,
quality.vault_secrets_kv_write,
quality.vault_mount_auth,
quality.vault_mount_kv,
]
variables {
leader_public_ip = step.get_vault_cluster_ips.leader_public_ip
leader_private_ip = step.get_vault_cluster_ips.leader_private_ip
@ -356,8 +476,9 @@ scenario "agent" {
}
step "verify_raft_auto_join_voter" {
skip_step = matrix.backend != "raft"
module = module.vault_verify_raft_auto_join_voter
description = global.description.verify_raft_cluster_all_nodes_are_voters
skip_step = matrix.backend != "raft"
module = module.vault_verify_raft_auto_join_voter
depends_on = [
step.create_vault_cluster,
step.get_vault_cluster_ips
@ -367,6 +488,8 @@ scenario "agent" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_raft_voters
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.create_vault_cluster_targets.hosts
@ -375,7 +498,8 @@ scenario "agent" {
}
step "verify_replication" {
module = module.vault_verify_replication
description = global.description.verify_replication_status
module = module.vault_verify_replication
depends_on = [
step.create_vault_cluster,
step.get_vault_cluster_ips
@ -385,6 +509,12 @@ scenario "agent" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_replication_ce_disabled,
quality.vault_replication_ent_dr_available,
quality.vault_replication_ent_pr_available,
]
variables {
vault_edition = matrix.edition
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -393,7 +523,8 @@ scenario "agent" {
}
step "verify_read_test_data" {
module = module.vault_verify_read_data
description = global.description.verify_read_test_data
module = module.vault_verify_read_data
depends_on = [
step.verify_write_test_data,
step.verify_replication
@ -403,6 +534,8 @@ scenario "agent" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_secrets_kv_read
variables {
node_public_ips = step.get_vault_cluster_ips.follower_public_ips
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -410,13 +543,16 @@ scenario "agent" {
}
step "verify_ui" {
module = module.vault_verify_ui
depends_on = [step.create_vault_cluster]
description = global.description.verify_ui
module = module.vault_verify_ui
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_ui_assets
variables {
vault_instances = step.create_vault_cluster_targets.hosts
}


@ -2,19 +2,40 @@
# SPDX-License-Identifier: BUSL-1.1
scenario "autopilot" {
description = <<-EOF
The autopilot scenario verifies Autopilot upgrades from previously released versions of
Vault Enterprise to a candidate build. The candidate build can be a local branch, any CRT built
Vault Enterprise artifact saved to the local machine, or any CRT built Vault Enterprise artifact
in the stable channel in Artifactory.
The scenario creates a new Vault Cluster with a previously released version of Vault, mounts
various engines and creates data, then performs an Autopilot upgrade with the candidate build.
The scenario also performs standard baseline verification that is not specific to the autopilot
upgrade.
If you want to use the 'distro:leap' variant you must first accept SUSE's terms for the AWS
account. To verify that your account has agreed, sign in to AWS through Doormat,
and visit the following links to verify your subscription or subscribe:
arm64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=a516e959-df54-4035-bb1a-63599b7a6df9
amd64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=5535c495-72d4-4355-b169-54ffa874f849
EOF
matrix {
arch = global.archs
artifact_source = global.artifact_sources
artifact_type = global.artifact_types
config_mode = global.config_modes
distro = global.distros
edition = global.editions
initial_version = global.upgrade_initial_versions
edition = global.enterprise_editions
// This reads the VERSION file, strips any pre-release metadata, and selects only initial
// versions that are less than our current version. E.g. A VERSION file containing 1.17.0-beta2
// would render: semverconstraint(v, "<1.17.0-0")
initial_version = [for v in global.upgrade_initial_versions : v if semverconstraint(v, "<${join("-", [split("-", chomp(file("../version/VERSION")))[0], "0"])}")]
seal = global.seals
# Autopilot wasn't available before 1.11.x
exclude {
initial_version = ["1.8.12", "1.9.10", "1.10.11"]
initial_version = [for e in matrix.initial_version : e if semverconstraint(e, "<1.11.0-0")]
}
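The `initial_version` comprehension and the `exclude` filter above both lean on `semverconstraint` with a `-0` pre-release bound. The effect can be sketched in Python (a hypothetical re-implementation for illustration, not the scenario's actual code):

```python
def core(v):
    # "1.17.0-beta2" -> (1, 17, 0): strip pre-release metadata, keep the core.
    return tuple(int(p) for p in v.split("-")[0].split("."))

def filter_initial_versions(candidates, current_version):
    # Mirrors semverconstraint(v, "<X.Y.Z-0"): the "-0" bound excludes the
    # current release and all of its pre-releases. Comparing release cores
    # is sufficient here because the candidate list holds plain releases.
    bound = core(current_version)
    return [v for v in candidates if core(v) < bound]

print(filter_initial_versions(["1.8.12", "1.10.11", "1.16.2", "1.17.0"], "1.17.0-beta2"))
# → ['1.8.12', '1.10.11', '1.16.2']
```

The same helper models the Autopilot exclusion: `filter_initial_versions(versions, "1.11.0")` drops every pre-1.11.x entry, which is why the hard-coded version list could be replaced by a comprehension.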
# Our local builder always creates bundles
@ -32,7 +53,7 @@ scenario "autopilot" {
# PKCS#11 can only be used on ent.hsm and ent.hsm.fips1402.
exclude {
seal = ["pkcs11"]
edition = ["ce", "ent", "ent.fips1402"]
edition = [for e in matrix.edition : e if !strcontains(e, "hsm")]
}
# arm64 AMIs are not offered for Leap
@ -72,7 +93,8 @@ scenario "autopilot" {
}
step "build_vault" {
module = "build_${matrix.artifact_source}"
description = global.description.build_vault
module = "build_${matrix.artifact_source}"
variables {
build_tags = var.vault_local_build_tags != null ? var.vault_local_build_tags : global.build_tags[matrix.edition]
@ -93,11 +115,13 @@ scenario "autopilot" {
}
step "ec2_info" {
module = module.ec2_info
description = global.description.ec2_info
module = module.ec2_info
}
step "create_vpc" {
module = module.create_vpc
description = global.description.create_vpc
module = module.create_vpc
variables {
common_tags = global.tags
@ -105,7 +129,8 @@ scenario "autopilot" {
}
step "read_license" {
module = module.read_license
description = global.description.read_vault_license
module = module.read_license
variables {
file_name = global.vault_license_path
@ -113,8 +138,9 @@ scenario "autopilot" {
}
step "create_seal_key" {
module = "seal_${matrix.seal}"
depends_on = [step.create_vpc]
description = global.description.create_seal_key
module = "seal_${matrix.seal}"
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -127,8 +153,9 @@ scenario "autopilot" {
}
step "create_vault_cluster_targets" {
module = module.target_ec2_instances
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = module.target_ec2_instances
depends_on = [step.create_vpc]
providers = {
enos = local.enos_provider[matrix.distro]
@ -144,8 +171,9 @@ scenario "autopilot" {
}
step "create_vault_cluster_upgrade_targets" {
module = module.target_ec2_instances
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = module.target_ec2_instances
depends_on = [step.create_vpc]
providers = {
enos = local.enos_provider[matrix.distro]
@ -161,6 +189,11 @@ scenario "autopilot" {
}
step "create_vault_cluster" {
description = <<-EOF
${global.description.create_vault_cluster} In this instance we'll create a Vault Cluster with
an older version and then use Autopilot to upgrade it.
EOF
module = module.vault_cluster
depends_on = [
step.build_vault,
@ -171,6 +204,38 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
// verified in modules
quality.vault_artifact_bundle,
quality.vault_artifact_deb,
quality.vault_artifact_rpm,
quality.vault_audit_log,
quality.vault_audit_socket,
quality.vault_audit_syslog,
quality.vault_autojoin_aws,
quality.vault_config_env_variables,
quality.vault_config_file,
quality.vault_config_log_level,
quality.vault_init,
quality.vault_license_required_ent,
quality.vault_service_start,
quality.vault_storage_backend_consul,
quality.vault_storage_backend_raft,
// verified in enos_vault_start resource
quality.vault_api_sys_config_read,
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_health_read,
quality.vault_api_sys_host_info_read,
quality.vault_api_sys_seal_status_api_read_matches_sys_health,
quality.vault_api_sys_storage_raft_autopilot_configuration_read,
quality.vault_api_sys_storage_raft_autopilot_state_read,
quality.vault_api_sys_storage_raft_configuration_read,
quality.vault_api_sys_replication_status_read,
quality.vault_cli_status_exit_code,
quality.vault_service_systemd_notified,
quality.vault_service_systemd_unit,
]
variables {
cluster_name = step.create_vault_cluster_targets.cluster_name
config_mode = matrix.config_mode
@ -193,18 +258,52 @@ scenario "autopilot" {
}
step "get_local_metadata" {
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
description = global.description.get_local_metadata
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
}
step "get_vault_cluster_ips" {
module = module.vault_get_cluster_ips
depends_on = [step.create_vault_cluster]
// Wait for our cluster to elect a leader
step "wait_for_leader" {
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = local.vault_install_dir
vault_root_token = step.create_vault_cluster.root_token
}
}
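The leader wait above amounts to polling the leader status until a leader appears or the 120 second timeout elapses. A minimal sketch of that loop, with the HTTP call injected so it runs anywhere (the helper and response shape are hypothetical, not the `vault_wait_for_leader` module's code):

```python
import time

def wait_for_leader(read_leader_status, timeout=120, interval=1.0):
    # Poll until the status callable (e.g. a GET against /v1/sys/leader)
    # reports a leader address, or give up once the timeout elapses.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = read_leader_status()
        if status.get("leader_address"):
            return status
        time.sleep(interval)
    raise TimeoutError("no leader elected within %ds" % timeout)

# Usage with a fake client: the leader appears on the third poll.
responses = iter([{}, {}, {"leader_address": "https://10.0.0.1:8200"}])
print(wait_for_leader(lambda: next(responses), timeout=5, interval=0)["leader_address"])
```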
step "get_vault_cluster_ips" {
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [
step.create_vault_cluster,
step.wait_for_leader,
]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_vault_cluster.target_hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -213,7 +312,8 @@ scenario "autopilot" {
}
step "verify_write_test_data" {
module = module.vault_verify_write_data
description = global.description.verify_write_test_data
module = module.vault_verify_write_data
depends_on = [
step.create_vault_cluster,
step.get_vault_cluster_ips
@ -223,6 +323,13 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_mount_auth,
quality.vault_mount_kv,
quality.vault_secrets_auth_user_policy_write,
quality.vault_secrets_kv_write,
]
variables {
leader_public_ip = step.get_vault_cluster_ips.leader_public_ip
leader_private_ip = step.get_vault_cluster_ips.leader_private_ip
@ -233,7 +340,11 @@ scenario "autopilot" {
}
step "create_autopilot_upgrade_storageconfig" {
module = module.autopilot_upgrade_storageconfig
description = <<-EOF
An arithmetic module used to dynamically create the autopilot storage configuration depending on
whether we're testing a local build or a candidate build.
EOF
module = module.autopilot_upgrade_storageconfig
variables {
vault_product_version = matrix.artifact_source == "local" ? step.get_local_metadata.version : var.vault_product_version
@ -278,7 +389,8 @@ scenario "autopilot" {
}
step "verify_vault_unsealed" {
module = module.vault_verify_unsealed
description = global.description.verify_vault_unsealed
module = module.vault_verify_unsealed
depends_on = [
step.create_vault_cluster,
step.create_vault_cluster_upgrade_targets,
@ -289,6 +401,13 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_auto_unseals_after_autopilot_upgrade,
quality.vault_seal_awskms,
quality.vault_seal_pkcs11,
quality.vault_seal_shamir,
]
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.upgrade_vault_cluster_with_autopilot.target_hosts
@ -296,7 +415,8 @@ scenario "autopilot" {
}
step "verify_raft_auto_join_voter" {
module = module.vault_verify_raft_auto_join_voter
description = global.description.verify_raft_cluster_all_nodes_are_voters
module = module.vault_verify_raft_auto_join_voter
depends_on = [
step.upgrade_vault_cluster_with_autopilot,
step.verify_vault_unsealed
@ -306,6 +426,8 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_raft_voters
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.upgrade_vault_cluster_with_autopilot.target_hosts
@ -314,7 +436,8 @@ scenario "autopilot" {
}
step "verify_autopilot_await_server_removal_state" {
module = module.vault_verify_autopilot
description = global.description.verify_autopilot_idle_state
module = module.vault_verify_autopilot
depends_on = [
step.create_vault_cluster_upgrade_targets,
step.upgrade_vault_cluster_with_autopilot,
@ -325,6 +448,11 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_storage_raft_autopilot_upgrade_info_read_status_matches,
quality.vault_api_sys_storage_raft_autopilot_upgrade_info_target_version_read_matches_candidate,
]
variables {
vault_autopilot_upgrade_version = matrix.artifact_source == "local" ? step.get_local_metadata.version : var.vault_product_version
vault_autopilot_upgrade_status = "await-server-removal"
@ -335,7 +463,8 @@ scenario "autopilot" {
}
step "wait_for_leader_in_upgrade_targets" {
module = module.vault_wait_for_leader
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [
step.create_vault_cluster,
step.create_vault_cluster_upgrade_targets,
@ -347,6 +476,11 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_autopilot_upgrade_leader_election,
]
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_root_token = step.create_vault_cluster.root_token
@ -355,7 +489,8 @@ scenario "autopilot" {
}
step "get_updated_vault_cluster_ips" {
module = module.vault_get_cluster_ips
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [
step.create_vault_cluster,
step.create_vault_cluster_upgrade_targets,
@ -368,6 +503,12 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.upgrade_vault_cluster_with_autopilot.target_hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -376,7 +517,8 @@ scenario "autopilot" {
}
step "verify_read_test_data" {
module = module.vault_verify_read_data
description = global.description.verify_read_test_data
module = module.vault_verify_read_data
depends_on = [
step.get_updated_vault_cluster_ips,
step.verify_write_test_data,
@ -388,6 +530,8 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_secrets_kv_read
variables {
node_public_ips = step.get_updated_vault_cluster_ips.follower_public_ips
vault_instance_count = 6
@ -396,7 +540,10 @@ scenario "autopilot" {
}
step "raft_remove_peers" {
module = module.vault_raft_remove_peer
description = <<-EOF
Remove the nodes that were running the prior version of Vault from the raft cluster.
EOF
module = module.vault_raft_remove_peer
depends_on = [
step.create_vault_cluster_upgrade_targets,
step.get_updated_vault_cluster_ips,
@ -408,6 +555,11 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_storage_raft_remove_peer_write_removes_peer,
quality.vault_cli_operator_raft_remove_peer,
]
variables {
operator_instance = step.get_updated_vault_cluster_ips.leader_public_ip
remove_vault_instances = step.create_vault_cluster.target_hosts
@ -418,7 +570,8 @@ scenario "autopilot" {
}
step "remove_old_nodes" {
module = module.shutdown_multiple_nodes
description = global.description.shutdown_nodes
module = module.shutdown_multiple_nodes
depends_on = [
step.create_vault_cluster,
step.raft_remove_peers
@ -435,7 +588,8 @@ scenario "autopilot" {
}
step "verify_autopilot_idle_state" {
module = module.vault_verify_autopilot
description = global.description.verify_autopilot_idle_state
module = module.vault_verify_autopilot
depends_on = [
step.create_vault_cluster_upgrade_targets,
step.upgrade_vault_cluster_with_autopilot,
@ -447,6 +601,11 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_storage_raft_autopilot_upgrade_info_read_status_matches,
quality.vault_api_sys_storage_raft_autopilot_upgrade_info_target_version_read_matches_candidate,
]
variables {
vault_autopilot_upgrade_version = matrix.artifact_source == "local" ? step.get_local_metadata.version : var.vault_product_version
vault_autopilot_upgrade_status = "idle"
@ -457,7 +616,8 @@ scenario "autopilot" {
}
step "verify_replication" {
module = module.vault_verify_replication
description = global.description.verify_replication_status
module = module.vault_verify_replication
depends_on = [
step.create_vault_cluster_upgrade_targets,
step.upgrade_vault_cluster_with_autopilot,
@ -469,6 +629,12 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_replication_ce_disabled,
quality.vault_replication_ent_dr_available,
quality.vault_replication_ent_pr_available,
]
variables {
vault_edition = matrix.edition
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -477,7 +643,8 @@ scenario "autopilot" {
}
step "verify_vault_version" {
module = module.vault_verify_version
description = global.description.verify_vault_version
module = module.vault_verify_version
depends_on = [
step.create_vault_cluster_upgrade_targets,
step.upgrade_vault_cluster_with_autopilot,
@ -489,6 +656,12 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_version_build_date,
quality.vault_version_edition,
quality.vault_version_release,
]
variables {
vault_instances = step.upgrade_vault_cluster_with_autopilot.target_hosts
vault_edition = matrix.edition
@ -501,7 +674,8 @@ scenario "autopilot" {
}
step "verify_ui" {
module = module.vault_verify_ui
description = global.description.verify_ui
module = module.vault_verify_ui
depends_on = [
step.create_vault_cluster_upgrade_targets,
step.upgrade_vault_cluster_with_autopilot,
@ -513,6 +687,8 @@ scenario "autopilot" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_ui_assets
variables {
vault_instances = step.upgrade_vault_cluster_with_autopilot.target_hosts
}
@ -522,7 +698,12 @@ scenario "autopilot" {
skip_step = true
# NOTE: temporarily disable undo logs checking until it is fixed. See VAULT-20259
# skip_step = semverconstraint(var.vault_product_version, "<1.13.0-0")
module = module.vault_verify_undo_logs
module = module.vault_verify_undo_logs
description = <<-EOF
Verifies that undo logs are correctly enabled on the newly upgraded target hosts. To do so it
queries the metrics system backend for the vault.core.replication.write_undo_logs gauge.
EOF
depends_on = [
step.create_vault_cluster_upgrade_targets,
step.remove_old_nodes,
@ -530,6 +711,8 @@ scenario "autopilot" {
step.verify_autopilot_idle_state
]
verifies = quality.vault_api_sys_metrics_vault_core_replication_write_undo_logs_enabled
providers = {
enos = local.enos_provider[matrix.distro]
}
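The undo-logs check described above boils down to finding one gauge in a `sys/metrics?format=json` style payload and confirming its value is 1. A sketch of that lookup (the `{"Gauges": [...]}` response shape is an assumption based on Vault's in-memory telemetry format, not taken from this module):

```python
def undo_logs_enabled(metrics):
    # Search the gauges list for vault.core.replication.write_undo_logs
    # and report whether it is set to 1 (enabled).
    for gauge in metrics.get("Gauges", []):
        if gauge.get("Name") == "vault.core.replication.write_undo_logs":
            return gauge.get("Value") == 1
    return False

sample = {"Gauges": [{"Name": "vault.core.replication.write_undo_logs", "Value": 1}]}
print(undo_logs_enabled(sample))  # → True
```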
@ -543,7 +726,11 @@ scenario "autopilot" {
# Verify that upgrading from a version <1.16.0 does not introduce Default LCQ
step "verify_default_lcq" {
module = module.vault_verify_default_lcq
description = <<-EOF
Verifies that the default max lease count is 300,000 when the upgraded nodes are running
Vault >= 1.16.0.
EOF
module = module.vault_verify_default_lcq
depends_on = [
step.create_vault_cluster_upgrade_targets,
step.remove_old_nodes,
@ -551,6 +738,8 @@ scenario "autopilot" {
step.verify_autopilot_idle_state
]
verifies = quality.vault_api_sys_quotas_lease_count_read_max_leases_default
providers = {
enos = local.enos_provider[matrix.distro]
}
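The default-LCQ expectation above is version-gated: upgraded nodes on Vault >= 1.16.0 should carry a default lease count quota of 300,000, while older versions have no default quota. A sketch of that decision (the `data.max_leases` response shape is a hypothetical stand-in for whatever the module actually reads):

```python
DEFAULT_MAX_LEASES = 300_000

def default_lcq_ok(quota_response, vault_version):
    # For Vault >= 1.16.0 expect the default lease-count quota of 300,000;
    # for older versions expect no default quota at all.
    major, minor, *_ = (int(p) for p in vault_version.split("-")[0].split("."))
    if (major, minor) < (1, 16):
        return quota_response is None
    return quota_response["data"]["max_leases"] == DEFAULT_MAX_LEASES

print(default_lcq_ok({"data": {"max_leases": 300_000}}, "1.16.2"))  # → True
```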


@ -2,6 +2,22 @@
# SPDX-License-Identifier: BUSL-1.1
scenario "proxy" {
description = <<-EOF
The proxy scenario verifies Vault when running in Proxy mode. The build can be a local branch,
any CRT built Vault artifact saved to the local machine, or any CRT built Vault artifact in the
stable channel in Artifactory.
The scenario creates a new Vault Cluster using the candidate build and then runs the same Vault
build in Proxy mode and verifies behavior against the Vault cluster. The scenario also performs
standard baseline verification that is not specific to the Proxy mode deployment.
If you want to use the 'distro:leap' variant you must first accept SUSE's terms for the AWS
account. To verify that your account has agreed, sign in to AWS through Doormat,
and visit the following links to verify your subscription or subscribe:
arm64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=a516e959-df54-4035-bb1a-63599b7a6df9
amd64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=5535c495-72d4-4355-b169-54ffa874f849
EOF
matrix {
arch = global.archs
artifact_source = global.artifact_sources
@ -29,7 +45,7 @@ scenario "proxy" {
# PKCS#11 can only be used on ent.hsm and ent.hsm.fips1402.
exclude {
seal = ["pkcs11"]
edition = ["ce", "ent", "ent.fips1402"]
edition = [for e in matrix.edition : e if !strcontains(e, "hsm")]
}
# arm64 AMIs are not offered for Leap
@ -68,12 +84,14 @@ scenario "proxy" {
}
step "get_local_metadata" {
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
description = global.description.get_local_metadata
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
}
step "build_vault" {
module = "build_${matrix.artifact_source}"
description = global.description.build_vault
module = "build_${matrix.artifact_source}"
variables {
build_tags = var.vault_local_build_tags != null ? var.vault_local_build_tags : global.build_tags[matrix.edition]
@ -94,11 +112,13 @@ scenario "proxy" {
}
step "ec2_info" {
module = module.ec2_info
description = global.description.ec2_info
module = module.ec2_info
}
step "create_vpc" {
module = module.create_vpc
description = global.description.create_vpc
module = module.create_vpc
variables {
common_tags = global.tags
@ -108,8 +128,9 @@ scenario "proxy" {
// This step reads the contents of the backend license if we're using a Consul backend and
// an "ent" Consul edition.
step "read_backend_license" {
skip_step = matrix.backend == "raft" || matrix.consul_edition == "ce"
module = module.read_license
description = global.description.read_backend_license
skip_step = matrix.backend == "raft" || matrix.consul_edition == "ce"
module = module.read_license
variables {
file_name = global.backend_license_path
@ -117,8 +138,9 @@ scenario "proxy" {
}
step "read_vault_license" {
skip_step = matrix.edition == "ce"
module = module.read_license
description = global.description.read_vault_license
skip_step = matrix.edition == "ce"
module = module.read_license
variables {
file_name = global.vault_license_path
@ -126,8 +148,9 @@ scenario "proxy" {
}
step "create_seal_key" {
module = "seal_${matrix.seal}"
depends_on = [step.create_vpc]
description = global.description.create_seal_key
module = "seal_${matrix.seal}"
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -140,8 +163,9 @@ scenario "proxy" {
}
step "create_vault_cluster_targets" {
module = module.target_ec2_instances
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = module.target_ec2_instances
depends_on = [step.create_vpc]
providers = {
enos = local.enos_provider[matrix.distro]
@ -157,8 +181,9 @@ scenario "proxy" {
}
step "create_vault_cluster_backend_targets" {
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -174,7 +199,8 @@ scenario "proxy" {
}
step "create_backend_cluster" {
module = "backend_${matrix.backend}"
description = global.description.create_backend_cluster
module = "backend_${matrix.backend}"
depends_on = [
step.create_vault_cluster_backend_targets
]
@ -183,6 +209,23 @@ scenario "proxy" {
enos = provider.enos.ubuntu
}
verifies = [
// verified in modules
quality.consul_autojoin_aws,
quality.consul_config_file,
quality.consul_ha_leader_election,
quality.consul_service_start_server,
// verified in enos_consul_start resource
quality.consul_api_agent_host_read,
quality.consul_api_health_node_read,
quality.consul_api_operator_raft_config_read,
quality.consul_cli_validate,
quality.consul_health_state_passing_read_nodes_minimum,
quality.consul_operator_raft_configuration_read_voters_minimum,
quality.consul_service_systemd_notified,
quality.consul_service_systemd_unit,
]
variables {
cluster_name = step.create_vault_cluster_backend_targets.cluster_name
cluster_tag_key = global.backend_tag_key
@ -196,7 +239,8 @@ scenario "proxy" {
}
step "create_vault_cluster" {
module = module.vault_cluster
description = global.description.create_vault_cluster
module = module.vault_cluster
depends_on = [
step.create_backend_cluster,
step.build_vault,
@ -207,6 +251,39 @@ scenario "proxy" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
// verified in modules
quality.consul_service_start_client,
quality.vault_artifact_bundle,
quality.vault_artifact_deb,
quality.vault_artifact_rpm,
quality.vault_audit_log,
quality.vault_audit_socket,
quality.vault_audit_syslog,
quality.vault_autojoin_aws,
quality.vault_config_env_variables,
quality.vault_config_file,
quality.vault_config_log_level,
quality.vault_init,
quality.vault_license_required_ent,
quality.vault_service_start,
quality.vault_storage_backend_consul,
quality.vault_storage_backend_raft,
// verified in enos_vault_start resource
quality.vault_api_sys_config_read,
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_health_read,
quality.vault_api_sys_host_info_read,
quality.vault_api_sys_replication_status_read,
quality.vault_api_sys_seal_status_api_read_matches_sys_health,
quality.vault_api_sys_storage_raft_autopilot_configuration_read,
quality.vault_api_sys_storage_raft_autopilot_state_read,
quality.vault_api_sys_storage_raft_configuration_read,
quality.vault_service_systemd_notified,
quality.vault_service_systemd_unit,
quality.vault_cli_status_exit_code,
]
variables {
artifactory_release = matrix.artifact_source == "artifactory" ? step.build_vault.vault_artifactory_release : null
backend_cluster_name = step.create_vault_cluster_backend_targets.cluster_name
@ -233,13 +310,19 @@ scenario "proxy" {
// Wait for our cluster to elect a leader
step "wait_for_leader" {
module = module.vault_wait_for_leader
depends_on = [step.create_vault_cluster]
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
@ -259,6 +342,12 @@ scenario "proxy" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_cli_auth_enable_approle,
quality.vault_proxy_auto_auth_approle,
quality.vault_proxy_cli_access,
]
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.create_vault_cluster_targets.hosts
@ -267,13 +356,20 @@ scenario "proxy" {
}
step "get_vault_cluster_ips" {
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader]
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -282,13 +378,20 @@ scenario "proxy" {
}
step "verify_vault_version" {
module = module.vault_verify_version
depends_on = [step.create_vault_cluster]
description = global.description.verify_vault_version
module = module.vault_verify_version
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_version_build_date,
quality.vault_version_edition,
quality.vault_version_release,
]
variables {
vault_instances = step.create_vault_cluster_targets.hosts
vault_edition = matrix.edition
@ -301,13 +404,20 @@ scenario "proxy" {
}
step "verify_vault_unsealed" {
module = module.vault_verify_unsealed
depends_on = [step.create_vault_cluster]
description = global.description.verify_vault_unsealed
module = module.vault_verify_unsealed
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_seal_awskms,
quality.vault_seal_pkcs11,
quality.vault_seal_shamir,
]
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.create_vault_cluster_targets.hosts
@ -315,7 +425,8 @@ scenario "proxy" {
}
step "verify_write_test_data" {
module = module.vault_verify_write_data
description = global.description.verify_write_test_data
module = module.vault_verify_write_data
depends_on = [
step.create_vault_cluster,
step.get_vault_cluster_ips
@ -325,6 +436,13 @@ scenario "proxy" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_mount_auth,
quality.vault_mount_kv,
quality.vault_secrets_auth_user_policy_write,
quality.vault_secrets_kv_write,
]
variables {
leader_public_ip = step.get_vault_cluster_ips.leader_public_ip
leader_private_ip = step.get_vault_cluster_ips.leader_private_ip
@ -335,14 +453,17 @@ scenario "proxy" {
}
step "verify_raft_auto_join_voter" {
skip_step = matrix.backend != "raft"
module = module.vault_verify_raft_auto_join_voter
depends_on = [step.create_vault_cluster]
description = global.description.verify_raft_cluster_all_nodes_are_voters
skip_step = matrix.backend != "raft"
module = module.vault_verify_raft_auto_join_voter
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_raft_voters
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.create_vault_cluster_targets.hosts
@ -351,13 +472,20 @@ scenario "proxy" {
}
step "verify_replication" {
module = module.vault_verify_replication
depends_on = [step.create_vault_cluster]
description = global.description.verify_replication_status
module = module.vault_verify_replication
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_replication_ce_disabled,
quality.vault_replication_ent_dr_available,
quality.vault_replication_ent_pr_available,
]
variables {
vault_edition = matrix.edition
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -366,7 +494,8 @@ scenario "proxy" {
}
step "verify_read_test_data" {
module = module.vault_verify_read_data
description = global.description.verify_read_test_data
module = module.vault_verify_read_data
depends_on = [
step.verify_write_test_data,
step.verify_replication
@ -376,6 +505,8 @@ scenario "proxy" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_secrets_kv_read
variables {
node_public_ips = step.get_vault_cluster_ips.follower_public_ips
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -383,13 +514,16 @@ scenario "proxy" {
}
step "verify_ui" {
module = module.vault_verify_ui
depends_on = [step.create_vault_cluster]
description = global.description.verify_ui
module = module.vault_verify_ui
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_ui_assets
variables {
vault_instances = step.create_vault_cluster_targets.hosts
}

View File

@ -1,10 +1,28 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: BUSL-1.1
// The replication scenario configures performance replication between two Vault clusters and verifies
// known_primary_cluster_addrs are updated on secondary Vault cluster with the IP addresses of replaced
// nodes on primary Vault cluster
scenario "replication" {
description = <<-EOF
The replication scenario configures performance replication between two Vault clusters and
verifies behavior and failure tolerance. The build can be a local branch, any CRT built Vault
Enterprise artifact saved to the local machine, or any CRT built Vault Enterprise artifact in
the stable channel in Artifactory.
The scenario deploys two Vault Enterprise clusters and establishes performance replication
between the primary cluster and the performance replication secondary cluster. Next, we simulate
a catastrophic failure event whereby the primary leader and a primary follower are ungracefully
removed from the cluster while running. This forces a leader election in the primary cluster
and requires the secondary cluster to recover replication and establish replication to the new
primary leader. The scenario also performs standard baseline verification that is not specific
to performance replication.
If you want to use the 'distro:leap' variant you must first accept SUSE's terms for the AWS
account. To verify that your account has agreed, sign in to your AWS account through Doormat,
and visit the following links to verify your subscription or subscribe:
arm64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=a516e959-df54-4035-bb1a-63599b7a6df9
amd64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=5535c495-72d4-4355-b169-54ffa874f849
EOF
matrix {
arch = global.archs
artifact_source = global.artifact_sources
@ -13,7 +31,7 @@ scenario "replication" {
consul_edition = global.consul_editions
consul_version = global.consul_versions
distro = global.distros
edition = global.editions
edition = global.enterprise_editions
primary_backend = global.backends
primary_seal = global.seals
secondary_backend = global.backends
@ -34,12 +52,12 @@ scenario "replication" {
# PKCS#11 can only be used on ent.hsm and ent.hsm.fips1402.
exclude {
primary_seal = ["pkcs11"]
edition = ["ce", "ent", "ent.fips1402"]
edition = [for e in matrix.edition : e if !strcontains(e, "hsm")]
}
exclude {
secondary_seal = ["pkcs11"]
edition = ["ce", "ent", "ent.fips1402"]
edition = [for e in matrix.edition : e if !strcontains(e, "hsm")]
}
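A worked example of the filter above may help: assuming `global.enterprise_editions` is a list like the one in the comment below (an assumption, not taken from this diff), the for-expression keeps only the non-HSM editions, so the exclude behaves as if written literally:

```hcl
# Assuming global.enterprise_editions = ["ent", "ent.fips1402", "ent.hsm", "ent.hsm.fips1402"],
# [for e in matrix.edition : e if !strcontains(e, "hsm")] evaluates to
# ["ent", "ent.fips1402"], so the block is equivalent to:
exclude {
  primary_seal = ["pkcs11"]
  edition      = ["ent", "ent.fips1402"] # pkcs11 seals require an HSM-enabled edition
}
```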
# arm64 AMIs are not offered for Leap
@ -82,13 +100,9 @@ scenario "replication" {
vault_install_dir = matrix.artifact_type == "bundle" ? var.vault_install_dir : global.vault_install_dir[matrix.artifact_type]
}
step "get_local_metadata" {
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
}
step "build_vault" {
module = "build_${matrix.artifact_source}"
description = global.description.build_vault
module = "build_${matrix.artifact_source}"
variables {
build_tags = var.vault_local_build_tags != null ? var.vault_local_build_tags : global.build_tags[matrix.edition]
@ -109,11 +123,13 @@ scenario "replication" {
}
step "ec2_info" {
module = module.ec2_info
description = global.description.ec2_info
module = module.ec2_info
}
step "create_vpc" {
module = module.create_vpc
description = global.description.create_vpc
module = module.create_vpc
variables {
common_tags = global.tags
@ -123,8 +139,9 @@ scenario "replication" {
// This step reads the contents of the backend license if we're using a Consul backend and
// an "ent" Consul edition.
step "read_backend_license" {
skip_step = (matrix.primary_backend == "raft" && matrix.secondary_backend == "raft") || matrix.consul_edition == "ce"
module = module.read_license
description = global.description.read_backend_license
skip_step = (matrix.primary_backend == "raft" && matrix.secondary_backend == "raft") || matrix.consul_edition == "ce"
module = module.read_license
variables {
file_name = global.backend_license_path
@ -132,7 +149,8 @@ scenario "replication" {
}
step "read_vault_license" {
module = module.read_license
description = global.description.read_vault_license
module = module.read_license
variables {
file_name = abspath(joinpath(path.root, "./support/vault.hclic"))
@ -140,8 +158,9 @@ scenario "replication" {
}
step "create_primary_seal_key" {
module = "seal_${matrix.primary_seal}"
depends_on = [step.create_vpc]
description = global.description.create_seal_key
module = "seal_${matrix.primary_seal}"
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -155,8 +174,9 @@ scenario "replication" {
}
step "create_secondary_seal_key" {
module = "seal_${matrix.secondary_seal}"
depends_on = [step.create_vpc]
description = global.description.create_seal_key
module = "seal_${matrix.secondary_seal}"
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -172,7 +192,8 @@ scenario "replication" {
# Create all of our instances for both primary and secondary clusters
step "create_primary_cluster_targets" {
module = module.target_ec2_instances
description = global.description.create_vault_cluster_targets
module = module.target_ec2_instances
depends_on = [
step.create_vpc,
]
@ -191,7 +212,8 @@ scenario "replication" {
}
step "create_primary_cluster_backend_targets" {
module = matrix.primary_backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
description = global.description.create_vault_cluster_targets
module = matrix.primary_backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [
step.create_vpc,
]
@ -210,7 +232,8 @@ scenario "replication" {
}
step "create_primary_cluster_additional_targets" {
module = module.target_ec2_instances
description = global.description.create_vault_cluster_targets
module = module.target_ec2_instances
depends_on = [
step.create_vpc,
step.create_primary_cluster_targets,
@ -231,8 +254,9 @@ scenario "replication" {
}
step "create_secondary_cluster_targets" {
module = module.target_ec2_instances
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = module.target_ec2_instances
depends_on = [step.create_vpc]
providers = {
enos = local.enos_provider[matrix.distro]
@ -248,8 +272,9 @@ scenario "replication" {
}
step "create_secondary_cluster_backend_targets" {
module = matrix.secondary_backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = matrix.secondary_backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -265,7 +290,8 @@ scenario "replication" {
}
step "create_primary_backend_cluster" {
module = "backend_${matrix.primary_backend}"
description = global.description.create_backend_cluster
module = "backend_${matrix.primary_backend}"
depends_on = [
step.create_primary_cluster_backend_targets,
]
@ -274,6 +300,23 @@ scenario "replication" {
enos = provider.enos.ubuntu
}
verifies = [
// verified in modules
quality.consul_autojoin_aws,
quality.consul_config_file,
quality.consul_ha_leader_election,
quality.consul_service_start_server,
// verified in enos_consul_start resource
quality.consul_api_agent_host_read,
quality.consul_api_health_node_read,
quality.consul_api_operator_raft_config_read,
quality.consul_cli_validate,
quality.consul_health_state_passing_read_nodes_minimum,
quality.consul_operator_raft_configuration_read_voters_minimum,
quality.consul_service_systemd_notified,
quality.consul_service_systemd_unit,
]
variables {
cluster_name = step.create_primary_cluster_backend_targets.cluster_name
cluster_tag_key = global.backend_tag_key
@ -287,7 +330,8 @@ scenario "replication" {
}
step "create_primary_cluster" {
module = module.vault_cluster
description = global.description.create_vault_cluster
module = module.vault_cluster
depends_on = [
step.create_primary_backend_cluster,
step.build_vault,
@ -298,6 +342,39 @@ scenario "replication" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
// verified in modules
quality.consul_service_start_client,
quality.vault_artifact_bundle,
quality.vault_artifact_deb,
quality.vault_artifact_rpm,
quality.vault_audit_log,
quality.vault_audit_socket,
quality.vault_audit_syslog,
quality.vault_autojoin_aws,
quality.vault_config_env_variables,
quality.vault_config_file,
quality.vault_config_log_level,
quality.vault_init,
quality.vault_license_required_ent,
quality.vault_service_start,
quality.vault_storage_backend_consul,
quality.vault_storage_backend_raft,
// verified in enos_vault_start resource
quality.vault_api_sys_config_read,
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_health_read,
quality.vault_api_sys_host_info_read,
quality.vault_api_sys_replication_status_read,
quality.vault_api_sys_seal_status_api_read_matches_sys_health,
quality.vault_api_sys_storage_raft_autopilot_configuration_read,
quality.vault_api_sys_storage_raft_autopilot_state_read,
quality.vault_api_sys_storage_raft_configuration_read,
quality.vault_cli_status_exit_code,
quality.vault_service_systemd_notified,
quality.vault_service_systemd_unit,
]
variables {
artifactory_release = matrix.artifact_source == "artifactory" ? step.build_vault.vault_artifactory_release : null
backend_cluster_name = step.create_primary_cluster_backend_targets.cluster_name
@ -322,8 +399,36 @@ scenario "replication" {
}
}
step "get_local_metadata" {
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
}
step "wait_for_primary_cluster_leader" {
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.create_primary_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_primary_cluster_targets.hosts
vault_install_dir = local.vault_install_dir
vault_root_token = step.create_primary_cluster.root_token
}
}
step "create_secondary_backend_cluster" {
module = "backend_${matrix.secondary_backend}"
description = global.description.create_backend_cluster
module = "backend_${matrix.secondary_backend}"
depends_on = [
step.create_secondary_cluster_backend_targets
]
@ -356,6 +461,22 @@ scenario "replication" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
// verified in modules
quality.consul_autojoin_aws,
quality.consul_config_file,
quality.consul_ha_leader_election,
quality.consul_service_start_client,
// verified in enos_consul_start resource
quality.consul_api_agent_host_read,
quality.consul_api_health_node_read,
quality.consul_api_operator_raft_config_read,
quality.consul_health_state_passing_read_nodes_minimum,
quality.consul_operator_raft_configuration_read_voters_minimum,
quality.consul_service_systemd_notified,
quality.consul_service_systemd_unit,
]
variables {
artifactory_release = matrix.artifact_source == "artifactory" ? step.build_vault.vault_artifactory_release : null
backend_cluster_name = step.create_secondary_cluster_backend_targets.cluster_name
@ -380,16 +501,47 @@ scenario "replication" {
}
}
step "wait_for_secondary_cluster_leader" {
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.create_secondary_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_secondary_cluster_targets.hosts
vault_install_dir = local.vault_install_dir
vault_root_token = step.create_secondary_cluster.root_token
}
}
step "verify_that_vault_primary_cluster_is_unsealed" {
module = module.vault_verify_unsealed
description = global.description.verify_vault_unsealed
module = module.vault_verify_unsealed
depends_on = [
step.create_primary_cluster
step.create_primary_cluster,
step.wait_for_primary_cluster_leader,
]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_auto_unseals_after_autopilot_upgrade,
quality.vault_seal_awskms,
quality.vault_seal_pkcs11,
quality.vault_seal_shamir,
]
variables {
vault_instances = step.create_primary_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -397,15 +549,24 @@ scenario "replication" {
}
step "verify_that_vault_secondary_cluster_is_unsealed" {
module = module.vault_verify_unsealed
description = global.description.verify_vault_unsealed
module = module.vault_verify_unsealed
depends_on = [
step.create_secondary_cluster
step.create_secondary_cluster,
step.wait_for_secondary_cluster_leader,
]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_auto_unseals_after_autopilot_upgrade,
quality.vault_seal_awskms,
quality.vault_seal_pkcs11,
quality.vault_seal_shamir,
]
variables {
vault_instances = step.create_secondary_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -413,15 +574,23 @@ scenario "replication" {
}
step "verify_vault_version" {
module = module.vault_verify_version
description = global.description.verify_vault_version
module = module.vault_verify_version
depends_on = [
step.create_primary_cluster
step.create_primary_cluster,
step.wait_for_primary_cluster_leader,
]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_version_build_date,
quality.vault_version_edition,
quality.vault_version_release,
]
variables {
vault_instances = step.create_primary_cluster_targets.hosts
vault_edition = matrix.edition
@ -434,32 +603,39 @@ scenario "replication" {
}
step "verify_ui" {
module = module.vault_verify_ui
description = global.description.verify_ui
module = module.vault_verify_ui
depends_on = [
step.create_primary_cluster
step.create_primary_cluster,
step.wait_for_primary_cluster_leader,
]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_ui_assets
variables {
vault_instances = step.create_primary_cluster_targets.hosts
}
}
step "get_primary_cluster_ips" {
module = module.vault_get_cluster_ips
depends_on = [
step.verify_vault_version,
step.verify_ui,
step.verify_that_vault_primary_cluster_is_unsealed,
]
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [step.verify_that_vault_primary_cluster_is_unsealed]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_primary_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -468,8 +644,12 @@ scenario "replication" {
}
step "get_primary_cluster_replication_data" {
module = module.replication_data
depends_on = [step.get_primary_cluster_ips]
description = <<-EOF
An arithmetic module that we use to determine various metadata about the leader and
follower nodes of the primary cluster so that we can correctly enable performance replication.
EOF
module = module.replication_data
depends_on = [step.get_primary_cluster_ips]
variables {
follower_hosts = step.get_primary_cluster_ips.follower_hosts
@ -477,13 +657,20 @@ scenario "replication" {
}
step "get_secondary_cluster_ips" {
module = module.vault_get_cluster_ips
depends_on = [step.verify_that_vault_secondary_cluster_is_unsealed]
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [step.verify_that_vault_secondary_cluster_is_unsealed]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_secondary_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -492,14 +679,22 @@ scenario "replication" {
}
step "write_test_data_on_primary" {
module = module.vault_verify_write_data
depends_on = [step.get_primary_cluster_ips]
description = global.description.verify_write_test_data
module = module.vault_verify_write_data
depends_on = [step.get_primary_cluster_ips]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_mount_auth,
quality.vault_mount_kv,
quality.vault_secrets_auth_user_policy_write,
quality.vault_secrets_kv_write,
]
variables {
leader_public_ip = step.get_primary_cluster_ips.leader_public_ip
leader_private_ip = step.get_primary_cluster_ips.leader_private_ip
@ -510,7 +705,12 @@ scenario "replication" {
}
step "configure_performance_replication_primary" {
module = module.vault_setup_perf_primary
description = <<-EOF
Create the superuser auth policy necessary for performance replication, assign it
to our previously created test user, and enable performance replication on the primary
cluster.
EOF
module = module.vault_setup_perf_primary
depends_on = [
step.get_primary_cluster_ips,
step.get_secondary_cluster_ips,
@ -521,6 +721,13 @@ scenario "replication" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_auth_userpass_user_write,
quality.vault_api_sys_policy_write,
quality.vault_api_sys_replication_performance_primary_enable_write,
quality.vault_cli_policy_write,
]
variables {
primary_leader_public_ip = step.get_primary_cluster_ips.leader_public_ip
primary_leader_private_ip = step.get_primary_cluster_ips.leader_private_ip
@ -530,8 +737,15 @@ scenario "replication" {
}
step "generate_secondary_token" {
module = module.generate_secondary_token
depends_on = [step.configure_performance_replication_primary]
description = <<-EOF
Generate a random secondary token id and use it to create a performance replication
secondary token on the primary cluster. Export the resulting wrapping token so that
the secondary cluster can utilize it.
EOF
module = module.generate_secondary_token
depends_on = [step.configure_performance_replication_primary]
verifies = quality.vault_api_sys_replication_performance_primary_secondary_token_write
providers = {
enos = local.enos_provider[matrix.distro]
@ -545,13 +759,19 @@ scenario "replication" {
}
step "configure_performance_replication_secondary" {
module = module.vault_setup_perf_secondary
depends_on = [step.generate_secondary_token]
description = <<-EOF
Enable performance replication on the secondary cluster with the wrapping token created by
the primary cluster.
EOF
module = module.vault_setup_perf_secondary
depends_on = [step.generate_secondary_token]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_api_sys_replication_performance_secondary_enable_write
variables {
secondary_leader_public_ip = step.get_secondary_cluster_ips.leader_public_ip
secondary_leader_private_ip = step.get_secondary_cluster_ips.leader_private_ip
@ -561,10 +781,14 @@ scenario "replication" {
}
}
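The three steps above implement the standard performance replication bootstrap. A minimal sketch of the underlying exchange, written as comments (the endpoint paths are Vault's public `sys/replication` API; the id and token values are placeholders):

```hcl
# Sketch of the token exchange the steps above automate:
#   1. POST sys/replication/performance/primary/enable           (on the primary leader)
#   2. POST sys/replication/performance/primary/secondary-token  { id = "<secondary_id>" }
#      -> returns a response-wrapping token
#   3. POST sys/replication/performance/secondary/enable         { token = "<wrapping_token>" }
#      (on the secondary leader; followers are then unsealed per the seal docs)
```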
// After replication is enabled, the secondary cluster followers need to be unsealed
// Secondary unseal keys are passed using the guide https://developer.hashicorp.com/vault/docs/enterprise/replication#seals
step "unseal_secondary_followers" {
module = module.vault_unseal_nodes
description = <<-EOF
After replication is enabled the secondary cluster followers need to be unsealed.
Secondary unseal keys are passed differently depending on the primary and secondary seal
type combinations. See the guide for more information:
https://developer.hashicorp.com/vault/docs/enterprise/replication#seals
EOF
module = module.vault_unseal_nodes
depends_on = [
step.create_primary_cluster,
step.create_secondary_cluster,
@ -585,7 +809,8 @@ scenario "replication" {
}
step "verify_secondary_cluster_is_unsealed_after_enabling_replication" {
module = module.vault_verify_unsealed
description = global.description.verify_vault_unsealed
module = module.vault_verify_unsealed
depends_on = [
step.unseal_secondary_followers
]
@ -594,6 +819,13 @@ scenario "replication" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_auto_unseals_after_autopilot_upgrade,
quality.vault_seal_awskms,
quality.vault_seal_pkcs11,
quality.vault_seal_shamir,
]
variables {
vault_instances = step.create_secondary_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -601,13 +833,25 @@ scenario "replication" {
}
step "verify_performance_replication" {
module = module.vault_verify_performance_replication
depends_on = [step.verify_secondary_cluster_is_unsealed_after_enabling_replication]
description = <<-EOF
Verify that the performance replication status meets our expectations after enabling replication
and ensuring that all secondary nodes are unsealed.
EOF
module = module.vault_verify_performance_replication
depends_on = [step.verify_secondary_cluster_is_unsealed_after_enabling_replication]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_replication_performance_read_connection_status_connected,
quality.vault_api_sys_replication_performance_status_read,
quality.vault_api_sys_replication_performance_status_read_cluster_address,
quality.vault_api_sys_replication_performance_status_read_state_not_idle,
quality.vault_api_sys_replication_performance_status_known_primary_cluster_addrs,
]
variables {
primary_leader_public_ip = step.get_primary_cluster_ips.leader_public_ip
primary_leader_private_ip = step.get_primary_cluster_ips.leader_private_ip
@ -618,7 +862,8 @@ scenario "replication" {
}
step "verify_replicated_data" {
module = module.vault_verify_read_data
description = global.description.verify_read_test_data
module = module.vault_verify_read_data
depends_on = [
step.verify_performance_replication,
step.get_secondary_cluster_ips,
@ -629,6 +874,8 @@ scenario "replication" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_secrets_kv_read
variables {
node_public_ips = step.get_secondary_cluster_ips.follower_public_ips
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -636,7 +883,11 @@ scenario "replication" {
}
step "add_additional_nodes_to_primary_cluster" {
module = module.vault_cluster
description = <<-EOF
Add additional nodes to the Vault cluster to prepare for our catastrophic failure
simulation. These nodes will use a different storage_node_prefix.
EOF
module = module.vault_cluster
depends_on = [
step.create_vpc,
step.create_primary_backend_cluster,
@ -649,6 +900,40 @@ scenario "replication" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
// unique to this invocation of the module
quality.vault_autojoins_new_nodes_into_initialized_cluster,
// verified in modules
quality.vault_artifact_bundle,
quality.vault_artifact_deb,
quality.vault_artifact_rpm,
quality.vault_audit_log,
quality.vault_audit_socket,
quality.vault_audit_syslog,
quality.vault_autojoin_aws,
quality.vault_config_env_variables,
quality.vault_config_file,
quality.vault_config_log_level,
quality.vault_init,
quality.vault_license_required_ent,
quality.vault_service_start,
quality.vault_storage_backend_consul,
quality.vault_storage_backend_raft,
// verified in enos_vault_start resource
quality.vault_api_sys_config_read,
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_health_read,
quality.vault_api_sys_host_info_read,
quality.vault_api_sys_replication_status_read,
quality.vault_api_sys_seal_status_api_read_matches_sys_health,
quality.vault_api_sys_storage_raft_configuration_read,
quality.vault_api_sys_storage_raft_autopilot_configuration_read,
quality.vault_api_sys_storage_raft_autopilot_state_read,
quality.vault_service_systemd_notified,
quality.vault_service_systemd_unit,
quality.vault_cli_status_exit_code,
]
variables {
artifactory_release = matrix.artifact_source == "artifactory" ? step.build_vault.vault_artifactory_release : null
backend_cluster_name = step.create_primary_cluster_backend_targets.cluster_name
@ -662,30 +947,39 @@ scenario "replication" {
} : null
enable_audit_devices = var.vault_enable_audit_devices
force_unseal = matrix.primary_seal == "shamir"
initialize_cluster = false
install_dir = global.vault_install_dir[matrix.artifact_type]
license = matrix.edition != "ce" ? step.read_vault_license.license : null
local_artifact_path = local.artifact_path
manage_service = local.manage_service
packages = concat(global.packages, global.distro_packages[matrix.distro])
root_token = step.create_primary_cluster.root_token
seal_attributes = step.create_primary_seal_key.attributes
seal_type = matrix.primary_seal
shamir_unseal_keys = matrix.primary_seal == "shamir" ? step.create_primary_cluster.unseal_keys_hex : null
storage_backend = matrix.primary_backend
storage_node_prefix = "newprimary_node"
target_hosts = step.create_primary_cluster_additional_targets.hosts
// Don't init when adding nodes into the cluster.
initialize_cluster = false
install_dir = global.vault_install_dir[matrix.artifact_type]
license = matrix.edition != "ce" ? step.read_vault_license.license : null
local_artifact_path = local.artifact_path
manage_service = local.manage_service
packages = concat(global.packages, global.distro_packages[matrix.distro])
root_token = step.create_primary_cluster.root_token
seal_attributes = step.create_primary_seal_key.attributes
seal_type = matrix.primary_seal
shamir_unseal_keys = matrix.primary_seal == "shamir" ? step.create_primary_cluster.unseal_keys_hex : null
storage_backend = matrix.primary_backend
storage_node_prefix = "newprimary_node"
target_hosts = step.create_primary_cluster_additional_targets.hosts
}
}
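The `storage_node_prefix` matters because the cluster module derives each node's raft node id from it, keeping the added nodes' identities distinct from the original cluster's. A hypothetical illustration of the resulting storage stanza for one added node (the path and auto_join values are illustrative, not the module's actual output):

```hcl
# Sketch of an added node's raft storage stanza, assuming node_id is built from
# storage_node_prefix plus an index (values below are placeholders):
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "newprimary_node_0" # storage_node_prefix + index
  retry_join {
    auto_join = "provider=aws tag_key=Type tag_value=<cluster_tag>"
  }
}
```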
step "verify_additional_primary_nodes_are_unsealed" {
module = module.vault_verify_unsealed
depends_on = [step.add_additional_nodes_to_primary_cluster]
description = global.description.verify_vault_unsealed
module = module.vault_verify_unsealed
depends_on = [step.add_additional_nodes_to_primary_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_auto_unseals_after_autopilot_upgrade,
quality.vault_seal_awskms,
quality.vault_seal_pkcs11,
quality.vault_seal_shamir,
]
variables {
vault_instances = step.create_primary_cluster_additional_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -693,8 +987,9 @@ scenario "replication" {
}
step "verify_raft_auto_join_voter" {
skip_step = matrix.primary_backend != "raft"
module = module.vault_verify_raft_auto_join_voter
description = global.description.verify_raft_cluster_all_nodes_are_voters
skip_step = matrix.primary_backend != "raft"
module = module.vault_verify_raft_auto_join_voter
depends_on = [
step.add_additional_nodes_to_primary_cluster,
step.create_primary_cluster,
@ -705,6 +1000,8 @@ scenario "replication" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_raft_voters
variables {
vault_instances = step.create_primary_cluster_additional_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -713,7 +1010,11 @@ scenario "replication" {
}
step "remove_primary_follower_1" {
module = module.shutdown_node
description = <<-EOF
Simulate a catastrophic failure by forcefully removing a follower node from the Vault
cluster.
EOF
module = module.shutdown_node
depends_on = [
step.get_primary_cluster_replication_data,
step.verify_additional_primary_nodes_are_unsealed
@ -729,7 +1030,11 @@ scenario "replication" {
}
step "remove_primary_leader" {
module = module.shutdown_node
description = <<-EOF
Simulate a catastrophic failure by forcefully removing the primary leader node from the
Vault cluster without allowing a graceful shutdown.
EOF
module = module.shutdown_node
depends_on = [
step.get_primary_cluster_ips,
step.remove_primary_follower_1
@ -744,9 +1049,15 @@ scenario "replication" {
}
}
// After we've removed two nodes from the cluster we need to get an updated set of vault hosts
// to work with.
step "get_remaining_hosts_replication_data" {
description = <<-EOF
An arithmetic module that we use to determine various metadata about the leader and
follower nodes of the primary cluster so that we can correctly enable performance replication.
We execute this again to determine information about our hosts after having forced the leader
and a follower from the cluster.
EOF
module = module.replication_data
depends_on = [
step.get_primary_cluster_ips,
@ -763,9 +1074,9 @@ scenario "replication" {
}
}
// Wait for the remaining hosts in our cluster to elect a new leader.
step "wait_for_leader_in_remaining_hosts" {
module = module.vault_wait_for_leader
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [
step.remove_primary_leader,
step.get_remaining_hosts_replication_data,
@ -775,6 +1086,11 @@ scenario "replication" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -783,9 +1099,9 @@ scenario "replication" {
}
}
// Get our new leader and follower IP addresses.
step "get_updated_primary_cluster_ips" {
module = module.vault_get_cluster_ips
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [
step.get_remaining_hosts_replication_data,
step.wait_for_leader_in_remaining_hosts,
@ -795,6 +1111,12 @@ scenario "replication" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.get_remaining_hosts_replication_data.remaining_hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -803,8 +1125,12 @@ scenario "replication" {
}
}
// Make sure the cluster has the correct performance replication state after the new leader election.
step "verify_updated_performance_replication" {
description = <<-EOF
Verify that the performance replication status meets our expectations after the new leader
election.
EOF
module = module.vault_verify_performance_replication
depends_on = [
step.get_remaining_hosts_replication_data,
@ -816,6 +1142,14 @@ scenario "replication" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_replication_performance_read_connection_status_connected,
quality.vault_api_sys_replication_performance_status_known_primary_cluster_addrs,
quality.vault_api_sys_replication_performance_status_read,
quality.vault_api_sys_replication_performance_status_read_state_not_idle,
quality.vault_api_sys_replication_performance_status_read_cluster_address,
]
variables {
primary_leader_public_ip = step.get_updated_primary_cluster_ips.leader_public_ip
primary_leader_private_ip = step.get_updated_primary_cluster_ips.leader_private_ip
View File
@ -2,6 +2,25 @@
# SPDX-License-Identifier: BUSL-1.1
scenario "seal_ha" {
description = <<-EOF
The seal_ha scenario verifies Vault Enterprise's seal HA capabilities. The build can be a local
branch, any CRT built Vault Enterprise artifact saved to the local machine, or any CRT built
Vault Enterprise artifact in the stable channel in Artifactory.
The scenario deploys a Vault Enterprise cluster with the candidate build and enables a single
primary seal, mounts various engines and writes data, then establishes seal HA with a secondary
seal, then removes the primary seal and verifies data integrity and seal data migration. It also
verifies that the cluster is able to recover from a forced leader election after the initial
seal rewrap. The scenario also performs standard baseline verification that is not specific to
seal_ha.
If you want to use the 'distro:leap' variant you must first accept SUSE's terms for the AWS
account. To verify that your account has agreed, sign in to AWS through Doormat,
and visit the following links to verify your subscription or subscribe:
arm64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=a516e959-df54-4035-bb1a-63599b7a6df9
amd64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=5535c495-72d4-4355-b169-54ffa874f849
EOF
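The `verifies` attributes used throughout the steps below reference named quality declarations that enos can inventory when outlining a scenario. A minimal sketch of what such a declaration might look like (the exact attribute set here is an assumption; see the quality definitions added elsewhere in this change):

```hcl
quality "vault_seal_awskms" {
  // Free-form description of the behavior this quality verifies.
  description = "Vault is able to auto-unseal using the AWS KMS seal device"
}
```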
matrix {
arch = global.archs
artifact_source = global.artifact_sources
@ -11,7 +30,7 @@ scenario "seal_ha" {
consul_edition = global.consul_editions
consul_version = global.consul_versions
distro = global.distros
edition = global.editions
edition = global.enterprise_editions
// Seal HA is only supported with auto-unseal devices.
primary_seal = ["awskms", "pkcs11"]
secondary_seal = ["awskms", "pkcs11"]
@ -31,12 +50,12 @@ scenario "seal_ha" {
# PKCS#11 can only be used on ent.hsm and ent.hsm.fips1402.
exclude {
primary_seal = ["pkcs11"]
edition = ["ce", "ent", "ent.fips1402"]
edition = [for e in matrix.edition : e if !strcontains(e, "hsm")]
}
exclude {
secondary_seal = ["pkcs11"]
edition = ["ce", "ent", "ent.fips1402"]
edition = [for e in matrix.edition : e if !strcontains(e, "hsm")]
}
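The exclude blocks above now derive the non-HSM editions with a for expression rather than a hardcoded list, so any future edition without "hsm" in its name is excluded automatically. The same pattern in isolation (illustrative values, not from this change):

```hcl
locals {
  // Hypothetical edition list for illustration only.
  editions = ["ent", "ent.fips1402", "ent.hsm", "ent.hsm.fips1402"]

  // strcontains (Terraform >= 1.5) returns true when the substring is present,
  // so the filter keeps only editions whose name does not mention "hsm".
  non_hsm_editions = [for e in local.editions : e if !strcontains(e, "hsm")]
  // non_hsm_editions == ["ent", "ent.fips1402"]
}
```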
# arm64 AMIs are not offered for Leap
@ -81,12 +100,14 @@ scenario "seal_ha" {
}
step "get_local_metadata" {
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
description = global.description.get_local_metadata
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
}
step "build_vault" {
module = "build_${matrix.artifact_source}"
description = global.description.build_vault
module = "build_${matrix.artifact_source}"
variables {
build_tags = var.vault_local_build_tags != null ? var.vault_local_build_tags : global.build_tags[matrix.edition]
@ -107,20 +128,45 @@ scenario "seal_ha" {
}
step "ec2_info" {
module = module.ec2_info
description = global.description.ec2_info
module = module.ec2_info
}
step "create_vpc" {
module = module.create_vpc
description = global.description.create_vpc
module = module.create_vpc
variables {
common_tags = global.tags
}
}
// This step reads the contents of the backend license if we're using a Consul backend and
// the Consul edition is "ent".
step "read_backend_license" {
description = global.description.read_backend_license
skip_step = matrix.backend == "raft" || matrix.consul_edition == "ce"
module = module.read_license
variables {
file_name = global.backend_license_path
}
}
step "read_vault_license" {
description = global.description.read_vault_license
skip_step = matrix.edition == "ce"
module = module.read_license
variables {
file_name = global.vault_license_path
}
}
step "create_primary_seal_key" {
module = "seal_${matrix.primary_seal}"
depends_on = [step.create_vpc]
description = global.description.create_seal_key
module = "seal_${matrix.primary_seal}"
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -134,8 +180,9 @@ scenario "seal_ha" {
}
step "create_secondary_seal_key" {
module = "seal_${matrix.secondary_seal}"
depends_on = [step.create_vpc]
description = global.description.create_seal_key
module = "seal_${matrix.secondary_seal}"
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -149,29 +196,10 @@ scenario "seal_ha" {
}
}
// This step reads the contents of the backend license if we're using a Consul backend and
// an "ent" Consul edition.
step "read_backend_license" {
skip_step = matrix.backend == "raft" || matrix.consul_edition == "ce"
module = module.read_license
variables {
file_name = global.backend_license_path
}
}
step "read_vault_license" {
skip_step = matrix.edition == "ce"
module = module.read_license
variables {
file_name = global.vault_license_path
}
}
step "create_vault_cluster_targets" {
module = module.target_ec2_instances
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = module.target_ec2_instances
depends_on = [step.create_vpc]
providers = {
enos = local.enos_provider[matrix.distro]
@ -187,8 +215,9 @@ scenario "seal_ha" {
}
step "create_vault_cluster_backend_targets" {
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -204,7 +233,8 @@ scenario "seal_ha" {
}
step "create_backend_cluster" {
module = "backend_${matrix.backend}"
description = global.description.create_backend_cluster
module = "backend_${matrix.backend}"
depends_on = [
step.create_vault_cluster_backend_targets
]
@ -213,6 +243,23 @@ scenario "seal_ha" {
enos = provider.enos.ubuntu
}
verifies = [
// verified in modules
quality.consul_autojoin_aws,
quality.consul_config_file,
quality.consul_ha_leader_election,
quality.consul_service_start_server,
// verified in enos_consul_start resource
quality.consul_api_agent_host_read,
quality.consul_api_health_node_read,
quality.consul_api_operator_raft_config_read,
quality.consul_cli_validate,
quality.consul_health_state_passing_read_nodes_minimum,
quality.consul_operator_raft_configuration_read_voters_minimum,
quality.consul_service_systemd_notified,
quality.consul_service_systemd_unit,
]
variables {
cluster_name = step.create_vault_cluster_backend_targets.cluster_name
cluster_tag_key = global.backend_tag_key
@ -226,7 +273,8 @@ scenario "seal_ha" {
}
step "create_vault_cluster" {
module = module.vault_cluster
description = global.description.create_vault_cluster
module = module.vault_cluster
depends_on = [
step.create_backend_cluster,
step.build_vault,
@ -237,6 +285,38 @@ scenario "seal_ha" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
// verified in modules
quality.vault_artifact_bundle,
quality.vault_artifact_deb,
quality.vault_artifact_rpm,
quality.vault_audit_log,
quality.vault_audit_socket,
quality.vault_audit_syslog,
quality.vault_autojoin_aws,
quality.vault_service_start,
quality.vault_config_env_variables,
quality.vault_config_file,
quality.vault_config_log_level,
quality.vault_init,
quality.vault_license_required_ent,
quality.vault_storage_backend_consul,
quality.vault_storage_backend_raft,
// verified in enos_vault_start resource
quality.vault_api_sys_health_read,
quality.vault_api_sys_config_read,
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_host_info_read,
quality.vault_api_sys_replication_status_read,
quality.vault_api_sys_seal_status_api_read_matches_sys_health,
quality.vault_api_sys_storage_raft_autopilot_configuration_read,
quality.vault_api_sys_storage_raft_autopilot_state_read,
quality.vault_api_sys_storage_raft_configuration_read,
quality.vault_cli_status_exit_code,
quality.vault_service_systemd_notified,
quality.vault_service_systemd_unit,
]
variables {
artifactory_release = matrix.artifact_source == "artifactory" ? step.build_vault.vault_artifactory_release : null
backend_cluster_name = step.create_vault_cluster_backend_targets.cluster_name
@ -264,13 +344,19 @@ scenario "seal_ha" {
// Wait for our cluster to elect a leader
step "wait_for_leader" {
module = module.vault_wait_for_leader
depends_on = [step.create_vault_cluster]
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
@ -280,13 +366,20 @@ scenario "seal_ha" {
}
step "get_vault_cluster_ips" {
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader]
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -295,13 +388,20 @@ scenario "seal_ha" {
}
step "verify_vault_unsealed" {
module = module.vault_verify_unsealed
depends_on = [step.wait_for_leader]
description = global.description.verify_vault_unsealed
module = module.vault_verify_unsealed
depends_on = [step.wait_for_leader]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_seal_awskms,
quality.vault_seal_pkcs11,
quality.vault_seal_shamir,
]
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.create_vault_cluster_targets.hosts
@ -310,7 +410,8 @@ scenario "seal_ha" {
// Write some test data before we create the new seal
step "verify_write_test_data" {
module = module.vault_verify_write_data
description = global.description.verify_write_test_data
module = module.vault_verify_write_data
depends_on = [
step.create_vault_cluster,
step.get_vault_cluster_ips,
@ -321,6 +422,13 @@ scenario "seal_ha" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_mount_auth,
quality.vault_mount_kv,
quality.vault_secrets_auth_user_policy_write,
quality.vault_secrets_kv_write,
]
variables {
leader_public_ip = step.get_vault_cluster_ips.leader_public_ip
leader_private_ip = step.get_vault_cluster_ips.leader_private_ip
@ -332,7 +440,8 @@ scenario "seal_ha" {
// Wait for the initial seal rewrap to complete before we add our HA seal.
step "wait_for_initial_seal_rewrap" {
module = module.vault_wait_for_seal_rewrap
description = global.description.wait_for_seal_rewrap
module = module.vault_wait_for_seal_rewrap
depends_on = [
step.verify_write_test_data,
]
@ -341,6 +450,13 @@ scenario "seal_ha" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_sealwrap_rewrap_read_entries_processed_eq_entries_succeeded_post_rewrap,
quality.vault_api_sys_sealwrap_rewrap_read_entries_processed_gt_zero_post_rewrap,
quality.vault_api_sys_sealwrap_rewrap_read_is_running_false_post_rewrap,
quality.vault_api_sys_sealwrap_rewrap_read_no_entries_fail_during_rewrap,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -348,9 +464,9 @@ scenario "seal_ha" {
}
}
// Stop the vault service on all nodes before we restart with new seal config
step "stop_vault" {
module = module.stop_vault
description = "${global.description.stop_vault}. We do this to write new seal configuration."
module = module.stop_vault
depends_on = [
step.create_vault_cluster,
step.verify_write_test_data,
@ -368,13 +484,16 @@ scenario "seal_ha" {
// Add the secondary seal to the cluster
step "add_ha_seal_to_cluster" {
module = module.start_vault
depends_on = [step.stop_vault]
description = global.description.enable_multiseal
module = module.start_vault
depends_on = [step.stop_vault]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_multiseal_enable
variables {
cluster_name = step.create_vault_cluster_targets.cluster_name
install_dir = global.vault_install_dir[matrix.artifact_type]
@ -391,13 +510,19 @@ scenario "seal_ha" {
// Wait for our cluster to elect a leader
step "wait_for_leader_election" {
module = module.vault_wait_for_leader
depends_on = [step.add_ha_seal_to_cluster]
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.add_ha_seal_to_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
@ -407,13 +532,20 @@ scenario "seal_ha" {
}
step "get_leader_ip_for_step_down" {
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader_election]
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader_election]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -423,13 +555,19 @@ scenario "seal_ha" {
// Force a step down to trigger a new leader election
step "vault_leader_step_down" {
module = module.vault_step_down
depends_on = [step.get_leader_ip_for_step_down]
description = global.description.vault_leader_step_down
module = module.vault_step_down
depends_on = [step.get_leader_ip_for_step_down]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_step_down_steps_down,
quality.vault_cli_operator_step_down,
]
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
leader_host = step.get_leader_ip_for_step_down.leader_host
@ -439,13 +577,19 @@ scenario "seal_ha" {
// Wait for our cluster to elect a leader
step "wait_for_new_leader" {
module = module.vault_wait_for_leader
depends_on = [step.vault_leader_step_down]
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.vault_leader_step_down]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
@ -455,13 +599,20 @@ scenario "seal_ha" {
}
step "get_updated_cluster_ips" {
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_new_leader]
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_new_leader]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -470,13 +621,20 @@ scenario "seal_ha" {
}
step "verify_vault_unsealed_with_new_seal" {
module = module.vault_verify_unsealed
depends_on = [step.wait_for_new_leader]
description = global.description.verify_vault_unsealed
module = module.vault_verify_unsealed
depends_on = [step.wait_for_new_leader]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_seal_awskms,
quality.vault_seal_pkcs11,
quality.vault_seal_shamir,
]
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.create_vault_cluster_targets.hosts
@ -485,7 +643,8 @@ scenario "seal_ha" {
// Wait for the seal rewrap to complete and verify that no entries failed
step "wait_for_seal_rewrap" {
module = module.vault_wait_for_seal_rewrap
description = global.description.wait_for_seal_rewrap
module = module.vault_wait_for_seal_rewrap
depends_on = [
step.add_ha_seal_to_cluster,
step.verify_vault_unsealed_with_new_seal,
@ -495,6 +654,13 @@ scenario "seal_ha" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_sealwrap_rewrap_read_entries_processed_eq_entries_succeeded_post_rewrap,
quality.vault_api_sys_sealwrap_rewrap_read_entries_processed_gt_zero_post_rewrap,
quality.vault_api_sys_sealwrap_rewrap_read_is_running_false_post_rewrap,
quality.vault_api_sys_sealwrap_rewrap_read_no_entries_fail_during_rewrap,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -504,13 +670,20 @@ scenario "seal_ha" {
// Perform all of our standard verifications after we've enabled multiseal
step "verify_vault_version" {
module = module.vault_verify_version
depends_on = [step.wait_for_seal_rewrap]
description = global.description.verify_vault_version
module = module.vault_verify_version
depends_on = [step.wait_for_seal_rewrap]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_version_build_date,
quality.vault_version_edition,
quality.vault_version_release,
]
variables {
vault_instances = step.create_vault_cluster_targets.hosts
vault_edition = matrix.edition
@ -523,14 +696,17 @@ scenario "seal_ha" {
}
step "verify_raft_auto_join_voter" {
skip_step = matrix.backend != "raft"
module = module.vault_verify_raft_auto_join_voter
depends_on = [step.wait_for_seal_rewrap]
description = global.description.verify_raft_cluster_all_nodes_are_voters
skip_step = matrix.backend != "raft"
module = module.vault_verify_raft_auto_join_voter
depends_on = [step.wait_for_seal_rewrap]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_raft_voters
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.create_vault_cluster_targets.hosts
@ -539,13 +715,20 @@ scenario "seal_ha" {
}
step "verify_replication" {
module = module.vault_verify_replication
depends_on = [step.wait_for_seal_rewrap]
description = global.description.verify_replication_status
module = module.vault_verify_replication
depends_on = [step.wait_for_seal_rewrap]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_replication_ce_disabled,
quality.vault_replication_ent_dr_available,
quality.vault_replication_ent_pr_available,
]
variables {
vault_edition = matrix.edition
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -555,13 +738,16 @@ scenario "seal_ha" {
// Make sure our data is still available
step "verify_read_test_data" {
module = module.vault_verify_read_data
depends_on = [step.wait_for_seal_rewrap]
description = global.description.verify_read_test_data
module = module.vault_verify_read_data
depends_on = [step.wait_for_seal_rewrap]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_secrets_kv_read
variables {
node_public_ips = step.get_updated_cluster_ips.follower_public_ips
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -569,20 +755,23 @@ scenario "seal_ha" {
}
step "verify_ui" {
module = module.vault_verify_ui
depends_on = [step.wait_for_seal_rewrap]
description = global.description.verify_ui
module = module.vault_verify_ui
depends_on = [step.wait_for_seal_rewrap]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_ui_assets
variables {
vault_instances = step.create_vault_cluster_targets.hosts
}
}
// Make sure we have a "multiseal" seal type
step "verify_seal_type" {
description = "${global.description.verify_seal_type} In this case we expect to have 'multiseal'."
// Don't run this on versions less than 1.16.0-beta1 until VAULT-21053 is fixed on prior branches.
skip_step = semverconstraint(var.vault_product_version, "< 1.16.0-beta1")
module = module.verify_seal_type
@ -592,6 +781,8 @@ scenario "seal_ha" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_status_seal_type
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_hosts = step.create_vault_cluster_targets.hosts
@ -603,7 +794,8 @@ scenario "seal_ha" {
// Stop the vault service on all nodes before we restart with new seal config
step "stop_vault_for_migration" {
module = module.stop_vault
description = "${global.description.stop_vault}. We do this to remove the old primary seal."
module = module.stop_vault
depends_on = [
step.wait_for_seal_rewrap,
step.verify_read_test_data,
@ -621,13 +813,19 @@ scenario "seal_ha" {
// Remove the "primary" seal from the cluster. Set our "secondary" seal to priority 1. We do this
// by restarting vault with the correct config.
step "remove_primary_seal" {
module = module.start_vault
depends_on = [step.stop_vault_for_migration]
description = <<-EOF
Reconfigure the vault cluster seal configuration with only our secondary seal config which
will force a seal migration to a single seal.
EOF
module = module.start_vault
depends_on = [step.stop_vault_for_migration]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_config_multiseal_is_toggleable
variables {
cluster_name = step.create_vault_cluster_targets.cluster_name
install_dir = global.vault_install_dir[matrix.artifact_type]
@ -643,13 +841,19 @@ scenario "seal_ha" {
// Wait for our cluster to elect a leader after restarting vault with a new primary seal
step "wait_for_leader_after_migration" {
module = module.vault_wait_for_leader
depends_on = [step.remove_primary_seal]
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.remove_primary_seal]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
@ -660,13 +864,20 @@ scenario "seal_ha" {
// Since we've restarted our cluster we might have a new leader and followers. Get the new IPs.
step "get_cluster_ips_after_migration" {
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader_after_migration]
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader_after_migration]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
View File
@ -2,6 +2,21 @@
# SPDX-License-Identifier: BUSL-1.1
scenario "smoke" {
description = <<-EOF
The smoke scenario verifies a Vault cluster in a fresh installation. The build can be a local
branch, any CRT built Vault artifact saved to the local machine, or any CRT built Vault artifact
in the stable channel in Artifactory.
The scenario deploys a Vault cluster with the candidate build and performs an extended set of
baseline verifications.
If you want to use the 'distro:leap' variant you must first accept SUSE's terms for the AWS
account. To verify that your account has agreed, sign in to AWS through Doormat,
and visit the following links to verify your subscription or subscribe:
arm64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=a516e959-df54-4035-bb1a-63599b7a6df9
amd64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=5535c495-72d4-4355-b169-54ffa874f849
EOF
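Steps attach their quality checks through a `verifies` attribute which, as seen throughout this change, accepts either a single quality reference or a list. A non-runnable sketch of both forms, using qualities that appear in this diff:

```hcl
step "verify_raft_auto_join_voter" {
  module = module.vault_verify_raft_auto_join_voter

  // A step that performs exactly one verification.
  verifies = quality.vault_raft_voters
}

step "wait_for_leader" {
  module = module.vault_wait_for_leader

  // A step that performs several verifications.
  verifies = [
    quality.vault_api_sys_leader_read,
    quality.vault_unseal_ha_leader_election,
  ]
}
```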
matrix {
arch = global.archs
artifact_source = global.artifact_sources
@ -29,7 +44,7 @@ scenario "smoke" {
# PKCS#11 can only be used on ent.hsm and ent.hsm.fips1402.
exclude {
seal = ["pkcs11"]
edition = ["ce", "ent", "ent.fips1402"]
edition = [for e in matrix.edition : e if !strcontains(e, "hsm")]
}
# arm64 AMIs are not offered for Leap
@ -66,13 +81,9 @@ scenario "smoke" {
manage_service = matrix.artifact_type == "bundle"
}
step "get_local_metadata" {
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
}
step "build_vault" {
module = "build_${matrix.artifact_source}"
description = global.description.build_vault
module = "build_${matrix.artifact_source}"
variables {
build_tags = var.vault_local_build_tags != null ? var.vault_local_build_tags : global.build_tags[matrix.edition]
@ -93,22 +104,23 @@ scenario "smoke" {
}
step "ec2_info" {
module = module.ec2_info
description = global.description.ec2_info
module = module.ec2_info
}
step "create_vpc" {
module = module.create_vpc
description = global.description.create_vpc
module = module.create_vpc
variables {
common_tags = global.tags
}
}
// This step reads the contents of the backend license if we're using a Consul backend and
// an "ent" Consul edition.
step "read_backend_license" {
skip_step = matrix.backend == "raft" || matrix.consul_edition == "ce"
module = module.read_license
description = global.description.read_backend_license
module = module.read_license
skip_step = matrix.backend == "raft" || matrix.consul_edition == "ce"
variables {
file_name = global.backend_license_path
@ -116,8 +128,9 @@ scenario "smoke" {
}
step "read_vault_license" {
skip_step = matrix.edition == "ce"
module = module.read_license
description = global.description.read_vault_license
skip_step = matrix.edition == "ce"
module = module.read_license
variables {
file_name = global.vault_license_path
@ -125,8 +138,9 @@ scenario "smoke" {
}
step "create_seal_key" {
module = "seal_${matrix.seal}"
depends_on = [step.create_vpc]
description = global.description.create_seal_key
module = "seal_${matrix.seal}"
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -139,8 +153,9 @@ scenario "smoke" {
}
step "create_vault_cluster_targets" {
module = module.target_ec2_instances
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = module.target_ec2_instances
depends_on = [step.create_vpc]
providers = {
enos = local.enos_provider[matrix.distro]
@ -156,8 +171,9 @@ scenario "smoke" {
}
step "create_vault_cluster_backend_targets" {
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -173,7 +189,8 @@ scenario "smoke" {
}
step "create_backend_cluster" {
module = "backend_${matrix.backend}"
description = global.description.create_backend_cluster
module = "backend_${matrix.backend}"
depends_on = [
step.create_vault_cluster_backend_targets
]
@ -182,6 +199,23 @@ scenario "smoke" {
enos = provider.enos.ubuntu
}
verifies = [
// verified in modules
quality.consul_autojoin_aws,
quality.consul_config_file,
quality.consul_ha_leader_election,
quality.consul_service_start_server,
// verified in enos_consul_start resource
quality.consul_api_agent_host_read,
quality.consul_api_health_node_read,
quality.consul_api_operator_raft_config_read,
quality.consul_cli_validate,
quality.consul_health_state_passing_read_nodes_minimum,
quality.consul_operator_raft_configuration_read_voters_minimum,
quality.consul_service_systemd_notified,
quality.consul_service_systemd_unit,
]
variables {
cluster_name = step.create_vault_cluster_backend_targets.cluster_name
cluster_tag_key = global.backend_tag_key
@ -195,7 +229,8 @@ scenario "smoke" {
}
step "create_vault_cluster" {
module = module.vault_cluster
description = global.description.create_vault_cluster
module = module.vault_cluster
depends_on = [
step.create_backend_cluster,
step.build_vault,
@ -206,6 +241,39 @@ scenario "smoke" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
// verified in modules
quality.consul_service_start_client,
quality.vault_artifact_bundle,
quality.vault_artifact_deb,
quality.vault_artifact_rpm,
quality.vault_audit_log,
quality.vault_audit_socket,
quality.vault_audit_syslog,
quality.vault_autojoin_aws,
quality.vault_config_env_variables,
quality.vault_config_file,
quality.vault_config_log_level,
quality.vault_init,
quality.vault_license_required_ent,
quality.vault_service_start,
quality.vault_storage_backend_consul,
quality.vault_storage_backend_raft,
// verified in enos_vault_start resource
quality.vault_api_sys_config_read,
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_health_read,
quality.vault_api_sys_host_info_read,
quality.vault_api_sys_replication_status_read,
quality.vault_api_sys_seal_status_api_read_matches_sys_health,
quality.vault_api_sys_storage_raft_autopilot_configuration_read,
quality.vault_api_sys_storage_raft_autopilot_state_read,
quality.vault_api_sys_storage_raft_configuration_read,
quality.vault_cli_status_exit_code,
quality.vault_service_systemd_notified,
quality.vault_service_systemd_unit,
]
variables {
artifactory_release = matrix.artifact_source == "artifactory" ? step.build_vault.vault_artifactory_release : null
backend_cluster_name = step.create_vault_cluster_backend_targets.cluster_name
@ -230,15 +298,27 @@ scenario "smoke" {
}
}
step "get_local_metadata" {
description = global.description.get_local_metadata
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
}
// Wait for our cluster to elect a leader
step "wait_for_new_leader" {
module = module.vault_wait_for_leader
depends_on = [step.create_vault_cluster]
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
@ -248,13 +328,20 @@ scenario "smoke" {
}
step "get_leader_ip_for_step_down" {
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_new_leader]
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_new_leader]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -264,13 +351,19 @@ scenario "smoke" {
// Force a step down to trigger a new leader election
step "vault_leader_step_down" {
module = module.vault_step_down
depends_on = [step.get_leader_ip_for_step_down]
description = global.description.vault_leader_step_down
module = module.vault_step_down
depends_on = [step.get_leader_ip_for_step_down]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_step_down_steps_down,
quality.vault_cli_operator_step_down,
]
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
leader_host = step.get_leader_ip_for_step_down.leader_host
@ -280,13 +373,19 @@ scenario "smoke" {
// Wait for our cluster to elect a leader
step "wait_for_leader" {
module = module.vault_wait_for_leader
depends_on = [step.vault_leader_step_down]
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.vault_leader_step_down]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_step_down,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
@ -296,13 +395,20 @@ scenario "smoke" {
}
step "get_vault_cluster_ips" {
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader]
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -311,13 +417,20 @@ scenario "smoke" {
}
step "verify_vault_version" {
module = module.vault_verify_version
depends_on = [step.create_vault_cluster]
description = global.description.verify_vault_version
module = module.vault_verify_version
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_version_build_date,
quality.vault_version_edition,
quality.vault_version_release,
]
variables {
vault_instances = step.create_vault_cluster_targets.hosts
vault_edition = matrix.edition
@ -330,13 +443,20 @@ scenario "smoke" {
}
step "verify_vault_unsealed" {
module = module.vault_verify_unsealed
depends_on = [step.wait_for_leader]
description = global.description.verify_vault_unsealed
module = module.vault_verify_unsealed
depends_on = [step.wait_for_leader]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_seal_awskms,
quality.vault_seal_pkcs11,
quality.vault_seal_shamir,
]
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.create_vault_cluster_targets.hosts
@ -344,7 +464,8 @@ scenario "smoke" {
}
step "verify_write_test_data" {
module = module.vault_verify_write_data
description = global.description.verify_write_test_data
module = module.vault_verify_write_data
depends_on = [
step.create_vault_cluster,
step.get_vault_cluster_ips
@ -354,6 +475,13 @@ scenario "smoke" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_mount_auth,
quality.vault_mount_kv,
quality.vault_secrets_auth_user_policy_write,
quality.vault_secrets_kv_write,
]
variables {
leader_public_ip = step.get_vault_cluster_ips.leader_public_ip
leader_private_ip = step.get_vault_cluster_ips.leader_private_ip
@ -364,8 +492,9 @@ scenario "smoke" {
}
step "verify_raft_auto_join_voter" {
skip_step = matrix.backend != "raft"
module = module.vault_verify_raft_auto_join_voter
description = global.description.verify_raft_cluster_all_nodes_are_voters
skip_step = matrix.backend != "raft"
module = module.vault_verify_raft_auto_join_voter
depends_on = [
step.create_vault_cluster,
step.get_vault_cluster_ips
@ -375,6 +504,8 @@ scenario "smoke" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_raft_voters
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.create_vault_cluster_targets.hosts
@ -383,7 +514,8 @@ scenario "smoke" {
}
step "verify_replication" {
module = module.vault_verify_replication
description = global.description.verify_replication_status
module = module.vault_verify_replication
depends_on = [
step.create_vault_cluster,
step.get_vault_cluster_ips
@ -393,6 +525,12 @@ scenario "smoke" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_replication_ce_disabled,
quality.vault_replication_ent_dr_available,
quality.vault_replication_ent_pr_available,
]
variables {
vault_edition = matrix.edition
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -401,7 +539,8 @@ scenario "smoke" {
}
step "verify_read_test_data" {
module = module.vault_verify_read_data
description = global.description.verify_read_test_data
module = module.vault_verify_read_data
depends_on = [
step.verify_write_test_data,
step.verify_replication
@ -411,6 +550,8 @@ scenario "smoke" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_secrets_kv_read
variables {
node_public_ips = step.get_vault_cluster_ips.follower_public_ips
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -418,7 +559,8 @@ scenario "smoke" {
}
step "verify_ui" {
module = module.vault_verify_ui
description = global.description.verify_ui
module = module.vault_verify_ui
depends_on = [
step.create_vault_cluster,
step.get_vault_cluster_ips
@ -428,6 +570,8 @@ scenario "smoke" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_ui_assets
variables {
vault_instances = step.create_vault_cluster_targets.hosts
}

@ -2,6 +2,16 @@
# SPDX-License-Identifier: BUSL-1.1
scenario "ui" {
description = <<-EOF
The UI scenario is designed to create a new cluster and run the existing Ember test suite
against a live Vault cluster instead of a binary in dev mode.
The UI scenario runs the Vault Ember test suite against a live Vault cluster. The build can be a
local branch, any CRT built Vault artifact saved to the local machine, or any CRT built Vault
artifact in the stable channel in Artifactory.
The scenario deploys a Vault cluster with the candidate build and executes the Ember test suite.
EOF
matrix {
backend = global.backends
consul_edition = global.consul_editions
@ -40,7 +50,8 @@ scenario "ui" {
}
step "build_vault" {
module = module.build_local
description = global.description.build_vault
module = module.build_local
variables {
build_tags = var.vault_local_build_tags != null ? var.vault_local_build_tags : local.build_tags[matrix.edition]
@ -54,31 +65,25 @@ scenario "ui" {
}
step "ec2_info" {
module = module.ec2_info
description = global.description.ec2_info
module = module.ec2_info
}
step "create_vpc" {
module = module.create_vpc
description = global.description.create_vpc
module = module.create_vpc
variables {
common_tags = local.tags
}
}
step "create_seal_key" {
module = "seal_${local.seal}"
variables {
cluster_id = step.create_vpc.cluster_id
common_tags = global.tags
}
}
// This step reads the contents of the backend license if we're using a Consul backend and
// an "ent" Consul edition.
// the edition is "ent".
step "read_backend_license" {
skip_step = matrix.backend == "raft" || matrix.consul_edition == "ce"
module = module.read_license
description = global.description.read_backend_license
skip_step = matrix.backend == "raft" || matrix.consul_edition == "ce"
module = module.read_license
variables {
file_name = local.backend_license_path
@ -86,17 +91,29 @@ scenario "ui" {
}
step "read_vault_license" {
skip_step = matrix.edition == "ce"
module = module.read_license
description = global.description.read_vault_license
skip_step = matrix.edition == "ce"
module = module.read_license
variables {
file_name = local.vault_license_path
}
}
step "create_seal_key" {
description = global.description.create_seal_key
module = "seal_${local.seal}"
variables {
cluster_id = step.create_vpc.cluster_id
common_tags = global.tags
}
}
step "create_vault_cluster_targets" {
module = module.target_ec2_instances
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = module.target_ec2_instances
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -112,8 +129,9 @@ scenario "ui" {
}
step "create_vault_cluster_backend_targets" {
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -129,7 +147,8 @@ scenario "ui" {
}
step "create_backend_cluster" {
module = "backend_${matrix.backend}"
description = global.description.create_backend_cluster
module = "backend_${matrix.backend}"
depends_on = [
step.create_vault_cluster_backend_targets,
]
@ -138,6 +157,23 @@ scenario "ui" {
enos = provider.enos.ubuntu
}
verifies = [
// verified in modules
quality.consul_autojoin_aws,
quality.consul_config_file,
quality.consul_ha_leader_election,
quality.consul_service_start_server,
// verified in enos_consul_start resource
quality.consul_api_agent_host_read,
quality.consul_api_health_node_read,
quality.consul_api_operator_raft_config_read,
quality.consul_cli_validate,
quality.consul_health_state_passing_read_nodes_minimum,
quality.consul_operator_raft_configuration_read_voters_minimum,
quality.consul_service_systemd_notified,
quality.consul_service_systemd_unit,
]
variables {
cluster_name = step.create_vault_cluster_backend_targets.cluster_name
cluster_tag_key = local.backend_tag_key
@ -151,7 +187,8 @@ scenario "ui" {
}
step "create_vault_cluster" {
module = module.vault_cluster
description = global.description.create_vault_cluster
module = module.vault_cluster
depends_on = [
step.create_backend_cluster,
step.build_vault,
@ -162,6 +199,39 @@ scenario "ui" {
enos = provider.enos.ubuntu
}
verifies = [
// verified in modules
quality.consul_service_start_client,
quality.vault_artifact_bundle,
quality.vault_artifact_deb,
quality.vault_artifact_rpm,
quality.vault_audit_log,
quality.vault_audit_socket,
quality.vault_audit_syslog,
quality.vault_autojoin_aws,
quality.vault_config_env_variables,
quality.vault_config_file,
quality.vault_config_log_level,
quality.vault_init,
quality.vault_license_required_ent,
quality.vault_service_start,
quality.vault_storage_backend_consul,
quality.vault_storage_backend_raft,
// verified in enos_vault_start resource
quality.vault_api_sys_config_read,
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_health_read,
quality.vault_api_sys_host_info_read,
quality.vault_api_sys_replication_status_read,
quality.vault_api_sys_seal_status_api_read_matches_sys_health,
quality.vault_api_sys_storage_raft_autopilot_configuration_read,
quality.vault_api_sys_storage_raft_autopilot_state_read,
quality.vault_api_sys_storage_raft_configuration_read,
quality.vault_cli_status_exit_code,
quality.vault_service_systemd_notified,
quality.vault_service_systemd_unit,
]
variables {
backend_cluster_name = step.create_vault_cluster_backend_targets.cluster_name
backend_cluster_tag_key = local.backend_tag_key
@ -185,13 +255,19 @@ scenario "ui" {
// Wait for our cluster to elect a leader
step "wait_for_leader" {
module = module.vault_wait_for_leader
depends_on = [step.create_vault_cluster]
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.create_vault_cluster]
providers = {
enos = provider.enos.ubuntu
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
@ -201,8 +277,14 @@ scenario "ui" {
}
step "test_ui" {
module = module.vault_test_ui
depends_on = [step.wait_for_leader]
description = <<-EOF
Verify that the Vault Web UI test suite can run against a live cluster with the compiled
assets.
EOF
module = module.vault_test_ui
depends_on = [step.wait_for_leader]
verifies = quality.vault_ui_test
variables {
vault_addr = step.create_vault_cluster_targets.hosts[0].public_ip

@ -2,6 +2,23 @@
# SPDX-License-Identifier: BUSL-1.1
scenario "upgrade" {
description = <<-EOF
The upgrade scenario verifies in-place upgrades from previously released versions of Vault
to a candidate build. The build can be a local branch, any CRT built Vault artifact
saved to the local machine, or any CRT built Vault artifact in the stable channel in
Artifactory.
The scenario first creates a new Vault cluster with a previously released version of Vault,
mounts engines and writes data, then performs an in-place upgrade to the candidate build and
runs quality verification.
If you want to use the 'distro:leap' variant you must first accept SUSE's terms for the AWS
account. To verify that your account has agreed, sign in to AWS through Doormat
and visit the following links to verify your subscription or subscribe:
arm64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=a516e959-df54-4035-bb1a-63599b7a6df9
amd64 AMI: https://aws.amazon.com/marketplace/server/procurement?productId=5535c495-72d4-4355-b169-54ffa874f849
EOF
matrix {
arch = global.archs
artifact_source = global.artifact_sources
@ -12,37 +29,42 @@ scenario "upgrade" {
consul_version = global.consul_versions
distro = global.distros
edition = global.editions
// NOTE: when backporting the initial version make sure we don't include initial versions that
// are a higher minor version than our release candidate. Also, prior to 1.11.x the
// /v1/sys/seal-status API has known issues that could cause this scenario to fail when using
// those earlier versions, therefore support from 1.8.x to 1.10.x is unreliable. Prior to 1.8.x
// is not supported due to changes with vault's signaling of systemd and the enos-provider
// no longer supporting setting the license via the license API.
initial_version = global.upgrade_initial_versions
// This reads the VERSION file, strips any pre-release metadata, and selects only initial
// versions that are less than our current version. E.g. A VERSION file containing 1.17.0-beta2
// would render: semverconstraint(v, "<1.17.0-0")
initial_version = [for v in global.upgrade_initial_versions : v if semverconstraint(v, "<${join("-", [split("-", chomp(file("../version/VERSION")))[0], "0"])}")]
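As a concrete illustration of the expression above (the version list and VERSION contents here are hypothetical), once the file is read and the pre-release metadata is stripped, the comprehension reduces to a plain `semverconstraint` filter:

```hcl
// Hypothetical example: with a VERSION file containing "1.17.0-beta2",
// split/chomp/join render the constraint "<1.17.0-0", so a global list of
// ["1.9.10", "1.16.2", "1.17.1"] keeps only the versions below 1.17.0:
initial_version = [
  for v in ["1.9.10", "1.16.2", "1.17.1"] : v
  if semverconstraint(v, "<1.17.0-0")
]
```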
seal = global.seals
# Our local builder always creates bundles
exclude {
artifact_source = ["local"]
artifact_type = ["package"]
}
# Don't upgrade from super-ancient versions in CI because there are known reliability issues
# in those versions that have already been fixed.
exclude {
initial_version = [for e in matrix.initial_version : e if semverconstraint(e, "<1.11.0-0")]
}
# FIPS 140-2 editions were not supported until 1.11.x, even though there are 1.10.x binaries
# published.
exclude {
edition = ["ent.fips1402", "ent.hsm.fips1402"]
initial_version = [for e in matrix.initial_version : e if semverconstraint(e, "<1.11.0-0")]
}
# HSM and FIPS 140-2 are only supported on amd64
exclude {
arch = ["arm64"]
edition = ["ent.fips1402", "ent.hsm", "ent.hsm.fips1402"]
}
# FIPS 140-2 editions began at 1.10
exclude {
edition = ["ent.fips1402", "ent.hsm.fips1402"]
initial_version = ["1.8.12", "1.9.10"]
}
# PKCS#11 can only be used on ent.hsm and ent.hsm.fips1402.
# PKCS#11 can only be used with hsm editions
exclude {
seal = ["pkcs11"]
edition = ["ce", "ent", "ent.fips1402"]
edition = [for e in matrix.edition : e if !strcontains(e, "hsm")]
}
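The `strcontains` filter above reads as: exclude every non-HSM edition from the `pkcs11` seal combinations. A hedged sketch with a hypothetical edition list shows what the comprehension evaluates to:

```hcl
// Hypothetical edition list; the filter keeps entries WITHOUT "hsm",
// which are then excluded, so only ent.hsm and ent.hsm.fips1402 remain
// eligible to pair with the pkcs11 seal:
exclude {
  seal    = ["pkcs11"]
  edition = [
    for e in ["ce", "ent", "ent.fips1402", "ent.hsm", "ent.hsm.fips1402"] : e
    if !strcontains(e, "hsm")
  ] // evaluates to ["ce", "ent", "ent.fips1402"]
}
```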
# arm64 AMIs are not offered for Leap
@ -79,14 +101,9 @@ scenario "upgrade" {
manage_service = matrix.artifact_type == "bundle"
}
step "get_local_metadata" {
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
}
# This step gets/builds the upgrade artifact that we will upgrade to
step "build_vault" {
module = "build_${matrix.artifact_source}"
description = global.description.build_vault
module = "build_${matrix.artifact_source}"
variables {
build_tags = var.vault_local_build_tags != null ? var.vault_local_build_tags : global.build_tags[matrix.edition]
@ -107,11 +124,13 @@ scenario "upgrade" {
}
step "ec2_info" {
module = module.ec2_info
description = global.description.ec2_info
module = module.ec2_info
}
step "create_vpc" {
module = module.create_vpc
description = global.description.create_vpc
module = module.create_vpc
variables {
common_tags = global.tags
@ -121,8 +140,9 @@ scenario "upgrade" {
// This step reads the contents of the backend license if we're using a Consul backend and
// an "ent" Consul edition.
step "read_backend_license" {
skip_step = matrix.backend == "raft" || matrix.consul_edition == "ce"
module = module.read_license
description = global.description.read_backend_license
skip_step = matrix.backend == "raft" || matrix.consul_edition == "ce"
module = module.read_license
variables {
file_name = global.backend_license_path
@ -130,8 +150,9 @@ scenario "upgrade" {
}
step "read_vault_license" {
skip_step = matrix.edition == "ce"
module = module.read_license
description = global.description.read_vault_license
skip_step = matrix.edition == "ce"
module = module.read_license
variables {
file_name = global.vault_license_path
@ -139,8 +160,9 @@ scenario "upgrade" {
}
step "create_seal_key" {
module = "seal_${matrix.seal}"
depends_on = [step.create_vpc]
description = global.description.create_seal_key
module = "seal_${matrix.seal}"
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -153,8 +175,9 @@ scenario "upgrade" {
}
step "create_vault_cluster_targets" {
module = module.target_ec2_instances
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = module.target_ec2_instances
depends_on = [step.create_vpc]
providers = {
enos = local.enos_provider[matrix.distro]
@ -170,8 +193,9 @@ scenario "upgrade" {
}
step "create_vault_cluster_backend_targets" {
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
description = global.description.create_vault_cluster_targets
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]
providers = {
enos = provider.enos.ubuntu
@ -187,7 +211,8 @@ scenario "upgrade" {
}
step "create_backend_cluster" {
module = "backend_${matrix.backend}"
description = global.description.create_backend_cluster
module = "backend_${matrix.backend}"
depends_on = [
step.create_vault_cluster_backend_targets,
]
@ -196,6 +221,23 @@ scenario "upgrade" {
enos = provider.enos.ubuntu
}
verifies = [
// verified in modules
quality.consul_autojoin_aws,
quality.consul_config_file,
quality.consul_ha_leader_election,
quality.consul_service_start_server,
// verified in enos_consul_start resource
quality.consul_api_agent_host_read,
quality.consul_api_health_node_read,
quality.consul_api_operator_raft_config_read,
quality.consul_cli_validate,
quality.consul_health_state_passing_read_nodes_minimum,
quality.consul_operator_raft_configuration_read_voters_minimum,
quality.consul_service_systemd_notified,
quality.consul_service_systemd_unit,
]
variables {
cluster_name = step.create_vault_cluster_backend_targets.cluster_name
cluster_tag_key = global.backend_tag_key
@ -209,7 +251,8 @@ scenario "upgrade" {
}
step "create_vault_cluster" {
module = module.vault_cluster
description = global.description.create_vault_cluster
module = module.vault_cluster
depends_on = [
step.create_backend_cluster,
step.build_vault,
@ -220,6 +263,39 @@ scenario "upgrade" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
// verified in modules
quality.consul_service_start_client,
quality.vault_artifact_bundle,
quality.vault_artifact_deb,
quality.vault_artifact_rpm,
quality.vault_audit_log,
quality.vault_audit_socket,
quality.vault_audit_syslog,
quality.vault_autojoin_aws,
quality.vault_config_env_variables,
quality.vault_config_file,
quality.vault_config_log_level,
quality.vault_init,
quality.vault_license_required_ent,
quality.vault_service_start,
quality.vault_storage_backend_consul,
quality.vault_storage_backend_raft,
// verified in enos_vault_start resource
quality.vault_api_sys_config_read,
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_health_read,
quality.vault_api_sys_host_info_read,
quality.vault_api_sys_replication_status_read,
quality.vault_api_sys_seal_status_api_read_matches_sys_health,
quality.vault_api_sys_storage_raft_autopilot_configuration_read,
quality.vault_api_sys_storage_raft_autopilot_state_read,
quality.vault_api_sys_storage_raft_configuration_read,
quality.vault_cli_status_exit_code,
quality.vault_service_systemd_notified,
quality.vault_service_systemd_unit,
]
variables {
backend_cluster_name = step.create_vault_cluster_backend_targets.cluster_name
backend_cluster_tag_key = global.backend_tag_key
@ -231,7 +307,6 @@ scenario "upgrade" {
version = matrix.consul_version
} : null
enable_audit_devices = var.vault_enable_audit_devices
install_dir = global.vault_install_dir[matrix.artifact_type]
license = matrix.edition != "ce" ? step.read_vault_license.license : null
packages = concat(global.packages, global.distro_packages[matrix.distro])
release = {
@ -245,14 +320,50 @@ scenario "upgrade" {
}
}
step "get_vault_cluster_ips" {
module = module.vault_get_cluster_ips
depends_on = [step.create_vault_cluster]
step "get_local_metadata" {
description = global.description.get_local_metadata
skip_step = matrix.artifact_source != "local"
module = module.get_local_metadata
}
// Wait for our cluster to elect a leader
step "wait_for_leader" {
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_root_token = step.create_vault_cluster.root_token
}
}
step "get_vault_cluster_ips" {
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [step.create_vault_cluster]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -261,7 +372,8 @@ scenario "upgrade" {
}
step "verify_write_test_data" {
module = module.vault_verify_write_data
description = global.description.verify_write_test_data
module = module.vault_verify_write_data
depends_on = [
step.create_vault_cluster,
step.get_vault_cluster_ips,
@ -271,6 +383,13 @@ scenario "upgrade" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_mount_auth,
quality.vault_mount_kv,
quality.vault_secrets_auth_user_policy_write,
quality.vault_secrets_kv_write,
]
variables {
leader_public_ip = step.get_vault_cluster_ips.leader_public_ip
leader_private_ip = step.get_vault_cluster_ips.leader_private_ip
@ -283,7 +402,11 @@ scenario "upgrade" {
# This step upgrades the Vault cluster to the var.vault_product_version
# by getting a bundle or package of that version from the matrix.artifact_source
step "upgrade_vault" {
module = module.vault_upgrade
description = <<-EOF
Perform an in-place upgrade of the Vault cluster nodes by installing a new version
of Vault on each node and then restarting the service.
EOF
module = module.vault_upgrade
depends_on = [
step.create_vault_cluster,
step.verify_write_test_data,
@ -293,6 +416,11 @@ scenario "upgrade" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_cluster_upgrade_in_place,
quality.vault_service_restart,
]
variables {
vault_api_addr = "http://localhost:8200"
vault_instances = step.create_vault_cluster_targets.hosts
@ -306,7 +434,8 @@ scenario "upgrade" {
// Wait for our upgraded cluster to elect a leader
step "wait_for_leader_after_upgrade" {
module = module.vault_wait_for_leader
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [
step.create_vault_cluster,
step.upgrade_vault,
@ -316,6 +445,11 @@ scenario "upgrade" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
@ -325,13 +459,20 @@ scenario "upgrade" {
}
step "get_leader_ip_for_step_down" {
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader_after_upgrade]
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [step.wait_for_leader_after_upgrade]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -341,13 +482,19 @@ scenario "upgrade" {
// Force a step down to trigger a new leader election
step "vault_leader_step_down" {
module = module.vault_step_down
depends_on = [step.get_leader_ip_for_step_down]
description = global.description.vault_leader_step_down
module = module.vault_step_down
depends_on = [step.get_leader_ip_for_step_down]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_step_down_steps_down,
quality.vault_cli_operator_step_down,
]
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
leader_host = step.get_leader_ip_for_step_down.leader_host
@ -357,13 +504,19 @@ scenario "upgrade" {
// Wait for our cluster to elect a leader
step "wait_for_leader_after_stepdown" {
module = module.vault_wait_for_leader
depends_on = [step.vault_leader_step_down]
description = global.description.wait_for_cluster_to_have_leader
module = module.vault_wait_for_leader
depends_on = [step.vault_leader_step_down]
providers = {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_leader_read,
quality.vault_unseal_ha_leader_election,
]
variables {
timeout = 120 # seconds
vault_hosts = step.create_vault_cluster_targets.hosts
@ -373,7 +526,8 @@ scenario "upgrade" {
}
step "get_updated_vault_cluster_ips" {
module = module.vault_get_cluster_ips
description = global.description.get_vault_cluster_ip_addresses
module = module.vault_get_cluster_ips
depends_on = [
step.wait_for_leader_after_stepdown,
]
@ -382,6 +536,12 @@ scenario "upgrade" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_api_sys_ha_status_read,
quality.vault_api_sys_leader_read,
quality.vault_cli_operator_members,
]
variables {
vault_hosts = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -390,7 +550,8 @@ scenario "upgrade" {
}
step "verify_vault_version" {
module = module.vault_verify_version
description = global.description.verify_vault_version
module = module.vault_verify_version
depends_on = [
step.get_updated_vault_cluster_ips,
]
@ -399,6 +560,12 @@ scenario "upgrade" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_version_build_date,
quality.vault_version_edition,
quality.vault_version_release,
]
variables {
vault_instances = step.create_vault_cluster_targets.hosts
vault_edition = matrix.edition
@ -411,7 +578,8 @@ scenario "upgrade" {
}
step "verify_vault_unsealed" {
module = module.vault_verify_unsealed
description = global.description.verify_vault_unsealed
module = module.vault_verify_unsealed
depends_on = [
step.get_updated_vault_cluster_ips,
]
@ -420,6 +588,12 @@ scenario "upgrade" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_seal_awskms,
quality.vault_seal_pkcs11,
quality.vault_seal_shamir,
]
variables {
vault_instances = step.create_vault_cluster_targets.hosts
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -427,7 +601,8 @@ scenario "upgrade" {
}
step "verify_read_test_data" {
module = module.vault_verify_read_data
description = global.description.verify_read_test_data
module = module.vault_verify_read_data
depends_on = [
step.get_updated_vault_cluster_ips,
step.verify_write_test_data,
@ -438,6 +613,13 @@ scenario "upgrade" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_mount_auth,
quality.vault_mount_kv,
quality.vault_secrets_auth_user_policy_write,
quality.vault_secrets_kv_write,
]
variables {
node_public_ips = step.get_updated_vault_cluster_ips.follower_public_ips
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -445,8 +627,9 @@ scenario "upgrade" {
}
step "verify_raft_auto_join_voter" {
skip_step = matrix.backend != "raft"
module = module.vault_verify_raft_auto_join_voter
description = global.description.verify_raft_cluster_all_nodes_are_voters
skip_step = matrix.backend != "raft"
module = module.vault_verify_raft_auto_join_voter
depends_on = [
step.get_updated_vault_cluster_ips,
]
@ -455,6 +638,8 @@ scenario "upgrade" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_raft_voters
variables {
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
vault_instances = step.create_vault_cluster_targets.hosts
@ -463,7 +648,8 @@ scenario "upgrade" {
}
step "verify_replication" {
module = module.vault_verify_replication
description = global.description.verify_replication_status
module = module.vault_verify_replication
depends_on = [
step.get_updated_vault_cluster_ips,
]
@ -472,6 +658,12 @@ scenario "upgrade" {
enos = local.enos_provider[matrix.distro]
}
verifies = [
quality.vault_replication_ce_disabled,
quality.vault_replication_ent_dr_available,
quality.vault_replication_ent_pr_available,
]
variables {
vault_edition = matrix.edition
vault_install_dir = global.vault_install_dir[matrix.artifact_type]
@ -480,7 +672,8 @@ scenario "upgrade" {
}
step "verify_ui" {
module = module.vault_verify_ui
description = global.description.verify_ui
module = module.vault_verify_ui
depends_on = [
step.get_updated_vault_cluster_ips,
]
@ -489,6 +682,8 @@ scenario "upgrade" {
enos = local.enos_provider[matrix.distro]
}
verifies = quality.vault_ui_assets
variables {
vault_instances = step.create_vault_cluster_targets.hosts
}

View File

@ -289,6 +289,37 @@ resource "enos_remote_exec" "configure_login_shell_profile" {
}
}
# Add a motd to assist people who might be logging in.
resource "enos_file" "motd" {
depends_on = [
enos_remote_exec.configure_login_shell_profile
]
for_each = var.target_hosts
destination = "/etc/motd"
content = <<EOF
We've added `vault` to the PATH for you and configured
VAULT_ADDR and VAULT_TOKEN using the root token.
Have fun!
EOF
transport = {
ssh = {
host = each.value.public_ip
}
}
}
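The MOTD above tells users that VAULT_ADDR and VAULT_TOKEN are already exported by the login shell profile. A minimal shell sketch of the sanity check a user (or a follow-up provisioning step) could run on a target host — the example values below are assumptions for illustration, not what the scenario actually sets:

```shell
# Example values only; on a real target host the login shell profile
# exports these before the user's shell starts.
export VAULT_ADDR="http://127.0.0.1:8200"
export VAULT_TOKEN="example-root-token"

# Fail fast with a readable message if either variable is missing.
: "${VAULT_ADDR:?VAULT_ADDR must be set by the login profile}"
: "${VAULT_TOKEN:?VAULT_TOKEN must be set by the login profile}"

echo "vault CLI will talk to ${VAULT_ADDR}"
```

Because the `vault` CLI reads both variables from the environment, a session that passes this check can run commands like `vault status` without any extra flags.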
# We need to ensure that the directory used for audit logs is present and accessible to the vault
# user on all nodes; even though audit logging only happens on the leader, leadership can move to
# any node.
resource "enos_remote_exec" "create_audit_log_dir" {

View File

@ -43,4 +43,4 @@ while [ "$(date +%s)" -lt "$end_time" ]; do
sleep "$RETRY_INTERVAL"
done
fail "Timed out waiting for Default LCQ verification to complete. Data:\n\t$(getMaxLeases)"
fail "Timed out waiting for Default LCQ verification to complete. Data:\n\t$(getMaxLeases)"
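The loop above polls until a deadline and calls `fail` if the Default LCQ verification never completes. A standalone sketch of the same retry-until-deadline pattern, with `check_ready` as a hypothetical stand-in for the real check (`getMaxLeases` is not reproduced here):

```shell
# Stand-in for the real verification; always succeeds in this sketch.
check_ready() {
  true
}

RETRY_INTERVAL=1
end_time=$(($(date +%s) + 5)) # 5-second deadline for the sketch

status="timed out"
while [ "$(date +%s)" -lt "$end_time" ]; do
  if check_ready; then
    status="verified"
    break
  fi
  sleep "$RETRY_INTERVAL"
done

echo "$status"
```

Using a wall-clock deadline (`date +%s`) rather than a fixed retry count keeps the total wait time predictable even if the check itself is slow.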