vault/enos/modules/backend_consul/variables.tf
Ryan Cragun 8d22142a3e
[QT-572][VAULT-17391] enos: use ec2 fleets for consul storage scenarios (#21400)
Begin the process of migrating away from the "strongly encouraged not to
use"[0] EC2 spot fleet API to the more modern `ec2:CreateFleet`.
Unfortunately, the `instant` fleet type does not guarantee fulfillment
with either on-demand or spot types. We'll need to add a feature similar
to `wait_for_fulfillment` on the `spot_fleet_request` resource[1] to
`ec2_fleet` before we can rely on it.
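
As a rough illustration, the new resource shape looks something like the
following (a minimal sketch; the `aws_launch_template.target` reference and
the capacities are hypothetical, not the scenario's actual values):

```hcl
resource "aws_ec2_fleet" "targets" {
  # "instant" fleets respond synchronously with whatever capacity could be
  # launched, but fulfillment itself is not guaranteed.
  type = "instant"

  launch_template_config {
    launch_template_specification {
      launch_template_id = aws_launch_template.target.id
      version            = aws_launch_template.target.latest_version
    }
  }

  target_capacity_specification {
    default_target_capacity_type = "spot"
    total_target_capacity        = 3
  }
}
```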

We also update the existing target fleets to support provisioning generic
targets. This has allowed us to remove our usage of `terraform-enos-aws-consul`
and replace it with a smaller `backend_consul` module in-repo.
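
Wiring the two together looks roughly like this (illustrative only; the
`module.consul_targets.hosts` output and `var.consul_license` are assumed
names, and the inputs mirror the variables file below):

```hcl
module "consul_cluster" {
  source = "./modules/backend_consul"

  cluster_name    = "consul-storage"
  cluster_tag_key = "consul-cluster"
  license         = var.consul_license
  release = {
    version = "1.15.3"
    edition = "oss"
  }
  # Assumed output shape: a map of { private_ip, public_ip } objects,
  # matching this module's target_hosts variable.
  target_hosts = module.consul_targets.hosts
}
```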

We also remove `terraform-enos-aws-infra` and replace it with two smaller
in-repo modules, `ec2_info` and `create_vpc`. This has allowed us to simplify
the VPC resources we use for each scenario, which in turn lets us avoid
relying on flaky resources.
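
The flattened networking amounts to roughly the following (a sketch with
illustrative CIDRs, not the module's exact values):

```hcl
resource "aws_vpc" "this" {
  cidr_block           = "10.13.0.0/16"
  enable_dns_hostnames = true
}

resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.this.id
}

# A single public subnet instead of one per availability zone.
resource "aws_subnet" "this" {
  vpc_id                  = aws_vpc.this.id
  cidr_block              = "10.13.10.0/24"
  map_public_ip_on_launch = true
}

# A single route on the VPC's default route table.
resource "aws_default_route_table" "this" {
  default_route_table_id = aws_vpc.this.default_route_table_id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.this.id
  }
}
```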

As part of this refactor we've also made it possible to provision
targets using different distro versions.
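
For example, resolving an AMI for a given distro/version/arch combination can
be done with an `aws_ami` data source along these lines (a sketch; the name
filter and owner ID shown are Canonical's publicly documented values for
Ubuntu 22.04 arm64):

```hcl
data "aws_ami" "ubuntu_2204_arm64" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-arm64-server-*"]
  }

  filter {
    name   = "architecture"
    values = ["arm64"]
  }
}
```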

[0] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-best-practices.html#which-spot-request-method-to-use
[1] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/spot_fleet_request#wait_for_fulfillment

* enos/consul: add `backend_consul` module that accepts target hosts.
* enos/target_ec2_spot_fleet: add support for consul networking.
* enos/target_ec2_spot_fleet: add support for customizing the cluster tag
  key (see the auto-join example after this list).
* enos/scenarios: create `target_ec2_fleet` which uses a more modern
  `ec2_fleet` API.
* enos/create_vpc: replace `terraform-enos-aws-infra` with smaller and
  simplified version. Flatten the networking to a single route on the
  default route table and a single subnet.
* enos/ec2_info: add a new module to give us useful EC2 information,
  including AMI IDs for various arch/distro/version combinations.
* enos/ci: update the service user role to allow managing EC2 fleets (see
  the policy sketch after this list).
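
On the cluster tag key: Consul's cloud auto-join discovers peers by an EC2
tag key/value pair, so making the key customizable feeds directly into the
agent config (illustrative values):

```hcl
# consul.hcl (illustrative): nodes tagged with this key/value find each other.
retry_join = ["provider=aws tag_key=consul-cluster tag_value=my-consul-cluster"]
```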
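
On the CI role: managing EC2 fleets requires permissions along these lines
(a sketch; the exact action list granted to the role may differ):

```hcl
data "aws_iam_policy_document" "ec2_fleet" {
  statement {
    effect = "Allow"
    actions = [
      "ec2:CreateFleet",
      "ec2:DeleteFleets",
      "ec2:DescribeFleets",
      "ec2:DescribeFleetInstances",
    ]
    resources = ["*"]
  }
}
```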

Signed-off-by: Ryan Cragun <me@ryan.ec>
2023-06-22 12:42:21 -06:00

# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0

variable "cluster_name" {
  type        = string
  description = "The name of the Consul cluster"
  default     = null
}

variable "cluster_tag_key" {
  type        = string
  description = "The tag key used to discover Consul nodes"
  default     = null
}

variable "config_dir" {
  type        = string
  description = "The directory where Consul will write config files"
  default     = "/etc/consul.d"
}

variable "data_dir" {
  type        = string
  description = "The directory where Consul will store data"
  default     = "/opt/consul/data"
}

variable "install_dir" {
  type        = string
  description = "The directory where the Consul binary will be installed"
  default     = "/opt/consul/bin"
}

variable "license" {
  type        = string
  sensitive   = true
  description = "The Consul Enterprise license"
  default     = null
}

variable "log_dir" {
  type        = string
  description = "The directory where Consul will write log files"
  default     = "/var/log/consul.d"
}

variable "log_level" {
  type        = string
  description = "The Consul service log level"
  default     = "info"

  validation {
    condition     = contains(["trace", "debug", "info", "warn", "error"], var.log_level)
    error_message = "The log_level must be one of 'trace', 'debug', 'info', 'warn', or 'error'."
  }
}

variable "release" {
  type = object({
    version = string
    edition = string
  })
  description = "The Consul release version and edition to install from releases.hashicorp.com"
  default = {
    version = "1.15.3"
    edition = "oss"
  }
}

variable "target_hosts" {
  description = "The target machine host addresses to use for the Consul cluster"
  type = map(object({
    private_ip = string
    public_ip  = string
  }))
}