---
layout: docs
page_title: operator raft - Command
description: >-
  The "operator raft" command is used to interact with the integrated Raft storage backend.
---
# operator raft
This command groups subcommands for operators to manage the Integrated Storage Raft backend.
```text
Usage: vault operator raft <subcommand> [options] [args]

  This command groups subcommands for operators interacting with the Vault
  integrated Raft storage backend. Most users will not need to interact with
  these commands.

Subcommands:
    join           Joins a node to the Raft cluster
    list-peers     Returns the Raft peer set
    remove-peer    Removes a node from the Raft cluster
    snapshot       Restores and saves snapshots from the Raft cluster
```
## join
This command joins a new node as a peer to the Raft cluster. To join, the
cluster must already have at least one existing member. If a Shamir seal is in
use, unseal keys must be supplied before or after the join process, depending
on whether Raft is used exclusively for HA.

If Raft is used for `storage`, the node must be joined before unsealing, and the
`leader-api-addr` argument must be provided. If Raft is used for `ha_storage`,
the node must be unsealed before joining, and the `leader-api-addr` argument must
_not_ be provided.
```text
Usage: vault operator raft join [options] <leader-api-addr>

  Join the current node as a peer to the Raft cluster by providing the address
  of the Raft leader node.

      $ vault operator raft join "http://127.0.0.2:8200"
```
The `join` command also allows operators to specify a cloud auto-join configuration
instead of a static IP address or hostname. When an auto-join configuration is
provided, Vault attempts to automatically discover and resolve potential leader
addresses based on it.

Vault uses go-discover to support the auto-join functionality. See the
go-discover
[README](https://github.com/hashicorp/go-discover/blob/master/README.md) for
details on the configuration format.

By default, Vault attempts to reach discovered peers using HTTPS and port 8200.
Operators may override these defaults through the `--auto-join-scheme` and
`--auto-join-port` CLI flags respectively.
```text
Usage: vault operator raft join [options] <auto-join-configuration>

  Join the current node as a peer to the Raft cluster by providing cloud auto-join
  metadata configuration.

      $ vault operator raft join "provider=aws region=eu-west-1 ..."
```
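As a sketch, you can combine an auto-join configuration with the scheme and port
overrides described above; the AWS tag key and value shown here are hypothetical:

```shell-session
$ vault operator raft join \
    --auto-join-scheme="https" \
    --auto-join-port=8200 \
    "provider=aws region=eu-west-1 tag_key=vault tag_value=primary"
```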
### Parameters
The following flags are available for the `operator raft join` command.
- `-leader-ca-cert` `(string: "")` - CA cert to communicate with Raft leader.
- `-leader-client-cert` `(string: "")` - Client cert to authenticate to Raft leader.
- `-leader-client-key` `(string: "")` - Client key to authenticate to Raft leader.
- `-non-voter` `(bool: false) (enterprise)` - Makes the server a non-voting
  member: it does not participate in the Raft quorum and only receives the data
  replication stream. Non-voters can add read scalability to a cluster in cases
  where a high volume of reads to servers is needed. The default is false.
  See [`retry_join_as_non_voter`](/vault/docs/configuration/storage/raft#retry_join_as_non_voter)
  for the equivalent config option when using `retry_join` stanzas instead.
- `-retry` `(bool: false)` - Continuously retry joining the Raft cluster upon
failures. The default is false.
~> **Note:** Please be aware that the content (not the path to the file) of the certificate or key is expected for these parameters: `-leader-ca-cert`, `-leader-client-cert`, `-leader-client-key`.
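For example, a minimal sketch that passes certificate and key contents using
command substitution; the file paths and leader address are hypothetical:

```shell-session
$ vault operator raft join \
    -leader-ca-cert="$(cat /path/to/ca.pem)" \
    -leader-client-cert="$(cat /path/to/client.pem)" \
    -leader-client-key="$(cat /path/to/client-key.pem)" \
    "https://vault-leader.example.com:8200"
```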
## list-peers
This command is used to list the full set of peers in the Raft cluster.
```text
Usage: vault operator raft list-peers

  Provides the details of all the peers in the Raft cluster.

      $ vault operator raft list-peers
```
### Example output
```json
{
  ...
  "data": {
    "config": {
      "index": 62,
      "servers": [
        {
          "address": "127.0.0.2:8201",
          "leader": true,
          "node_id": "node1",
          "protocol_version": "3",
          "voter": true
        },
        {
          "address": "127.0.0.4:8201",
          "leader": false,
          "node_id": "node3",
          "protocol_version": "3",
          "voter": true
        }
      ]
    }
  }
}
```
Use the output of `list-peers` to ensure that your cluster is in the expected state.
If you've removed a server using `remove-peer`, the server should no longer be
listed in the `list-peers` output. If you've added a server using `join` or
through `retry_join`, check the `list-peers` output to see that it has been added
to the cluster and (if the node has not been added as a non-voter) that it has
been promoted to a voter.
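For a quick scripted check, one approach (a sketch that assumes `jq` is
installed) is to request JSON output and summarize each peer's voting status;
the sample output mirrors the JSON example above:

```shell-session
$ vault operator raft list-peers -format=json \
    | jq -r '.data.config.servers[] | "\(.node_id) voter=\(.voter) leader=\(.leader)"'
node1 voter=true leader=true
node3 voter=true leader=false
```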
## remove-peer
This command removes a node from being a peer in the Raft cluster. In certain
cases a peer may be left behind in the Raft configuration even though the
server is no longer present and known to the cluster; this command removes the
failed server so that it no longer affects the Raft quorum.
```text
Usage: vault operator raft remove-peer <server_id>

  Removes a node from the Raft cluster.

      $ vault operator raft remove-peer node1
```
<Note>
Once a node is removed, its Raft data needs to be deleted before it may be joined back into an existing cluster. This requires shutting down the Vault process, deleting the data, then restarting the Vault process on the removed node.
</Note>
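The exact commands depend on your deployment; the following sketch assumes a
systemd-managed node whose `raft` storage stanza uses a hypothetical
`/opt/vault/data` path:

```shell-session
# On the removed node: stop Vault and delete its Raft data
# (everything under the configured raft storage path).
$ sudo systemctl stop vault
$ sudo rm -rf /opt/vault/data/*

# Restart Vault, then rejoin and unseal the node as usual.
$ sudo systemctl start vault
$ vault operator raft join "https://active.vault.example.com:8200"
```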
## snapshot
This command groups subcommands for operators interacting with the snapshot
functionality of the integrated Raft storage backend.
@include 'tips/manage-snapshots.mdx'
```text
Usage: vault operator raft snapshot <subcommand> [options] [args]

  This command groups subcommands for operators interacting with the snapshot
  functionality of the integrated Raft storage backend. Here are a few examples
  of the Raft snapshot operator commands:

  Installs the provided snapshot, returning the cluster to the state defined in it:

      $ vault operator raft snapshot restore raft.snap

  Saves a snapshot of the current state of the Raft cluster into a file:

      $ vault operator raft snapshot save raft.snap

  Inspects a snapshot based on a file:

      $ vault operator raft snapshot inspect raft.snap

  Please see the individual subcommand help for detailed usage information.

Subcommands:
    inspect    Inspects raft snapshot
    load       Loads the provided local or cloud storage snapshot
    restore    Installs the provided snapshot, returning the cluster to the state defined in it
    save       Saves a snapshot of the current state of the Raft cluster into a file
    unload     Unloads the snapshot with provided ID
```
### snapshot inspect
Inspects a snapshot file taken from a Vault raft cluster and prints a table showing the number of keys and the amount of space used.
```text
Usage: vault operator raft snapshot inspect <snapshot_file>
```
For example:
```shell-session
$ vault operator raft snapshot inspect raft.snap
```
### snapshot load
@include 'alerts/enterprise-only.mdx'
Load a new snapshot into the Vault cluster without overwriting the cluster with
the snapshot's data.
@include 'recover/loadedsnapshots.mdx'
```text
Usage: vault operator raft snapshot load < SNAPSHOT_FILE | auto_snapshot_config=CONFIG_NAME url=URL >

  Loads the provided snapshot for reading or recovering data on supported paths.
  This command supports two modes of operation:

  1. Loading a local snapshot file from the filesystem. In this mode, the file
     path must be provided as the only positional argument.

      $ vault operator raft snapshot load raft.snap

  2. Loading a snapshot that was created by Vault's automated snapshot utility
     from cloud storage. In this case, the command accepts two K=V arguments:
     auto_snapshot_config and url.

      $ vault operator raft snapshot load auto_snapshot_config=foo url=https://foo.com/blob/snapshot
```
### snapshot restore
Restores a snapshot of Vault data taken with `vault operator raft snapshot save`.
```text
Usage: vault operator raft snapshot restore <snapshot_file>

  Installs the provided snapshot, returning the cluster to the state defined in it.

      $ vault operator raft snapshot restore raft.snap
```
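If the snapshot was taken from a different cluster, for example when rebuilding
a cluster during disaster recovery, the restore normally fails the
cluster-identity check. Assuming your Vault version supports the `-force` flag
for this command, a sketch of forcing the restore looks like:

```shell-session
$ vault operator raft snapshot restore -force raft.snap
```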
### snapshot save
Takes a snapshot of your Vault data. If you use integrated storage, you can use
the snapshot to restore Vault to the point in time when the snapshot was taken.
Vault does not support snapshots for deployments that use Raft only for
high-availability storage.
```text
Usage: vault operator raft snapshot save <snapshot_file>

  Saves a snapshot of the current state of the Raft cluster into a file.

      $ vault operator raft snapshot save raft.snap
```
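For recurring backups, a common pattern is to write timestamped snapshot files
so that older snapshots are retained; a minimal sketch (the filename convention
is illustrative):

```shell-session
$ vault operator raft snapshot save "vault-$(date -u +%Y%m%dT%H%M%SZ).snap"
```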
### snapshot unload
@include 'alerts/enterprise-only.mdx'
Unload a snapshot previously loaded with `vault operator raft snapshot load`.
```text
Usage: vault operator raft snapshot unload <SNAPSHOT_ID>

  Unloads the snapshot with the provided ID.

      $ vault operator raft snapshot unload 8a4f31cc-7bb2-227a-aa18-8f140fb40f10
```
## autopilot
This command groups subcommands for operators interacting with the autopilot
functionality of the integrated Raft storage backend. Three subcommands are
supported: `get-config`, `set-config`, and `state`.
For a more detailed overview of autopilot features, see the [concepts page](/vault/docs/concepts/integrated-storage/autopilot).
```text
Usage: vault operator raft autopilot <subcommand> [options] [args]

  This command groups subcommands for operators interacting with the autopilot
  functionality of the integrated Raft storage backend.

Subcommands:
    get-config    Returns the configuration of the autopilot subsystem under integrated storage
    set-config    Modify the configuration of the autopilot subsystem under integrated storage
    state         Displays the state of the raft cluster under integrated storage as seen by autopilot
```
### autopilot state
Displays the state of the Raft cluster under integrated storage as seen by
autopilot. It shows whether autopilot considers the cluster healthy, and
includes a list of all servers by node ID and IP address.
```text
Usage: vault operator raft autopilot state

  Displays the state of the raft cluster under integrated storage as seen by autopilot.

      $ vault operator raft autopilot state
```
#### Example output
```text
Healthy: true
Failure Tolerance: 1
Leader: vault_1
Voters:
   vault_1
   vault_2
   vault_3
Servers:
   vault_1
      Name: vault_1
      Address: 127.0.0.1:8201
      Status: leader
      Node Status: alive
      Healthy: true
      Last Contact: 0s
      Last Term: 3
      Last Index: 61
      Version: 1.17.3
      Node Type: voter
   vault_2
      Name: vault_2
      Address: 127.0.0.1:8203
      Status: voter
      Node Status: alive
      Healthy: true
      Last Contact: 564.765375ms
      Last Term: 3
      Last Index: 61
      Version: 1.17.3
      Node Type: voter
   vault_3
      Name: vault_3
      Address: 127.0.0.1:8205
      Status: voter
      Node Status: alive
      Healthy: true
      Last Contact: 3.814017875s
      Last Term: 3
      Last Index: 61
      Version: 1.17.3
      Node Type: voter
```
The "Failure Tolerance" of a cluster is the number of nodes in the cluster that could
fail gradually without causing an outage.
When verifying the health of your cluster, check the following fields of each server:
- Healthy: whether Autopilot considers this node healthy or not
- Status: the voting status of the node. This will be `voter`, `leader`, or [`non-voter`](/vault/docs/concepts/integrated-storage#non-voting-nodes-enterprise-only).
- Last Index: the index of the last applied Raft log. This should be close to the "Last Index" value of the leader.
- Version: the version of Vault running on the server
- Node Type: the type of node. On CE, this will always be `voter`. See below for an explanation of Enterprise node types.
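To check cluster health programmatically, one option is to query the underlying
autopilot state API endpoint, which returns the same information as JSON. The
sketch below assumes `curl` and `jq` are available and that `VAULT_ADDR` and
`VAULT_TOKEN` are set:

```shell-session
$ curl --silent --header "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/sys/storage/raft/autopilot/state" \
    | jq '.data.healthy, .data.failure_tolerance'
```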
Vault Enterprise will include additional output related to automated upgrades, optimistic failure tolerance, and redundancy zones.
#### Example Vault Enterprise output
```text
Redundancy Zones:
   a
      Servers: vault_1, vault_2, vault_5
      Voters: vault_1
      Failure Tolerance: 2
   b
      Servers: vault_3, vault_4
      Voters: vault_3
      Failure Tolerance: 1
Upgrade Info:
   Status: await-new-voters
   Target Version: 1.17.5
   Target Version Voters:
   Target Version Non-Voters: vault_5
   Other Version Voters: vault_1, vault_3
   Other Version Non-Voters: vault_2, vault_4
   Redundancy Zones:
      a
         Target Version Voters:
         Target Version Non-Voters: vault_5
         Other Version Voters: vault_1
         Other Version Non-Voters: vault_2
      b
         Target Version Voters:
         Target Version Non-Voters:
         Other Version Voters: vault_3
         Other Version Non-Voters: vault_4
```
"Optimistic Failure Tolerance" describes the number of healthy active and
back-up voting servers that can fail gradually without causing an outage.
@include 'autopilot/node-types.mdx'
### autopilot get-config
Returns the configuration of the autopilot subsystem under integrated storage.
```text
Usage: vault operator raft autopilot get-config

  Returns the configuration of the autopilot subsystem under integrated storage.

      $ vault operator raft autopilot get-config
```
### autopilot set-config
Modify the configuration of the autopilot subsystem under integrated storage.
```text
Usage: vault operator raft autopilot set-config [options]

  Modify the configuration of the autopilot subsystem under integrated storage.

      $ vault operator raft autopilot set-config -server-stabilization-time 10s
```
The following flags are available for the `operator raft autopilot set-config` command:
- `cleanup-dead-servers` `(bool: false)` - Controls whether to remove dead servers from
the Raft peer list periodically or when a new server joins. This requires that
`min-quorum` is also set.
- `last-contact-threshold` `(string: "10s")` - Limit on the amount of time a server can
go without leader contact before being considered unhealthy.
- `dead-server-last-contact-threshold` `(string: "24h")` - Limit on the amount of time
a server can go without leader contact before being considered failed. This
takes effect only when `cleanup_dead_servers` is set. When adding new nodes
to your cluster, the `dead_server_last_contact_threshold` needs to be larger
than the amount of time that it takes to load a Raft snapshot, otherwise the
newly added nodes will be removed from your cluster before they have finished
loading the snapshot and starting up. If you are using an [HSM](/vault/docs/enterprise/hsm), your
`dead_server_last_contact_threshold` needs to be larger than the response
time of the HSM.
<Warning>
We strongly recommend keeping `dead_server_last_contact_threshold` at a high
duration, such as a day, because a value that is too low could result in the
removal of nodes that aren't actually dead.
</Warning>
- `max-trailing-logs` `(int: 1000)` - Number of entries in the Raft log that a server
  can be behind before being considered unhealthy. If this value is too low,
  it can cause the cluster to lose quorum if a follower falls behind. This
  value only needs to be increased from the default if you have a very high
  write load on Vault and you see that it takes a long time to promote new
  servers to voters. This is an unlikely scenario and most users
  should not modify this value.
- `min-quorum` `(int)` - The minimum number of servers that should always be
present in a cluster. Autopilot will not prune servers below this number.
**There is no default for this value** and it should be set to the expected
number of voters in your cluster when `cleanup_dead_servers` is set as `true`.
Use the [quorum size guidance](/vault/docs/internals/integrated-storage#quorum-size-and-failure-tolerance)
to determine the proper minimum quorum size for your cluster.
- `server-stabilization-time` `(string: "10s")` - Minimum amount of time a server must be in a healthy state before it
can become a voter. Until that happens, it will be visible as a peer in the cluster, but as a non-voter, meaning it
won't contribute to quorum.
- `disable-upgrade-migration` `(bool: false)` - Controls whether to disable automated
upgrade migrations, an Enterprise-only feature.
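For example, a sketch that enables dead-server cleanup for a cluster expected to
have five voters; the specific values are illustrative, not recommendations:

```shell-session
$ vault operator raft autopilot set-config \
    -cleanup-dead-servers=true \
    -min-quorum=5 \
    -dead-server-last-contact-threshold=24h
```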