Mirror of https://github.com/hashicorp/vault.git, synced 2025-08-22 23:21:08 +02:00
---
layout: docs
page_title: Use redundancy zones
description: >-
  Use redundancy zones with hot standby nodes for improved scalability and
  resiliency with Vault Enterprise clusters.
---

> [!IMPORTANT]
> **Documentation Update:** Product documentation, which was previously located in this repository under `/website`, is now located in [`hashicorp/web-unified-docs`](https://github.com/hashicorp/web-unified-docs), colocated with all other product documentation. Contributions to this content should be made in the `web-unified-docs` repo, not this one. Changes made to `/website` content in this repo will not be reflected on the developer.hashicorp.com website.

# Use redundancy zones

@include 'alerts/enterprise-only.mdx'

Vault Enterprise redundancy zones provide both read scaling and resiliency benefits by enabling
the deployment of non-voting nodes alongside voting nodes on a per availability zone basis.

When using redundancy zones, if an operator chooses to deploy Vault across three availability zones,
they could have two (or more) nodes (one voting, one or more non-voting) in each zone. In the event that a
voting node in an availability zone fails, the redundancy zone configuration automatically
promotes the non-voting node to a voting node. In the event that an entire availability zone is
lost, a non-voting node in one of the remaining availability zones is promoted to a voting
node, keeping quorum. This capability functions as a "hot standby" for server nodes while also
providing enhanced read scalability.

## Configuration

A new key can be added to Vault's `storage` configuration stanza: `autopilot_redundancy_zone`.
The value for this key is a string of your choosing and represents the zone this particular node
should be in.

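As a sketch, a Raft storage stanza for a node assigned to a zone named `zone-a` might look like the following. The path, node ID, and zone name here are illustrative placeholders:

```hcl
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "node-a2"

  # Place this node in redundancy zone "zone-a". The zone name is an
  # arbitrary string; nodes sharing the same value belong to the same zone.
  autopilot_redundancy_zone = "zone-a"
}
```

Each node in the same availability zone should use the same zone string, and the setting must be present before the node joins the cluster for Autopilot to place it correctly.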
## Mechanics

Vault's Autopilot subsystem will always attempt to maintain exactly one voting node per redundancy
zone. Any additional nodes beyond the first will be demoted to non-voting status. Non-voting
nodes can serve reads but cannot participate in cluster elections.

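The one-voter-per-zone rule can be modeled as follows. This is an illustrative sketch of the invariant, not Vault's actual Autopilot implementation:

```python
def assign_voters(nodes):
    """Given an ordered list of (node_id, zone) pairs, keep exactly one
    voter per redundancy zone: the first node seen in each zone votes,
    and every additional node in that zone is demoted to non-voting
    (read-serving, hot-standby) status."""
    roles = {}
    zones_with_voter = set()
    for node_id, zone in nodes:
        if zone not in zones_with_voter:
            zones_with_voter.add(zone)
            roles[node_id] = "voter"
        else:
            roles[node_id] = "non-voter"
    return roles
```

If the voter in a zone is lost, re-running the same selection over the surviving nodes promotes one of that zone's non-voters, which is the behavior the failover description above relies on.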

If redundancy zones are used in conjunction with automated upgrades, Autopilot will always try to
ensure that Vault is never moving from a more healthy state to a less healthy state. Autopilot will
wait to begin leadership transfer until it can ensure that there will be as much redundancy on the
new Vault version as there was on the old Vault version.

The status of redundancy zones can be monitored by consulting the [Autopilot state API endpoint](/vault/api-docs/system/storage/raftautopilot#get-cluster-state).
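
For example, the Autopilot state (including per-zone voter and failure tolerance details) can be retrieved with the CLI or by calling the API directly; `$VAULT_ADDR` and `$VAULT_TOKEN` are assumed to point at an unsealed cluster:

```shell-session
$ vault operator raft autopilot state

$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/sys/storage/raft/autopilot/state"
```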

## Optimistic failure tolerance

@include 'autopilot/redundancy-zones.mdx'