mirror of https://github.com/hashicorp/vault.git
commit cf5f6fc8e3 (parent d569b23c1b)
@@ -77,6 +77,8 @@ naming collisions could result in unexpected default behavior. Additionally, we rec
the corresponding details in the OIDC provider [concepts](/docs/concepts/oidc-provider) document
to understand how the built-in resources are used in the system.

@include 'raft-panic-old-tls-key.mdx'

## Known Issues

### Single Vault follower restart causes election even with established quorum
@@ -87,7 +89,7 @@ For more details, see the [Server Side Consistent Tokens FAQ](/docs/faq/ssct).
Since service tokens are always created on the leader, as long as the leader is not upgraded before performance standbys, service tokens will be of the old format and still be usable during the upgrade process. However, the usual upgrade process we recommend can't be relied upon to always upgrade the leader last. Due to this known [issue](https://github.com/hashicorp/vault/issues/14153), a Vault cluster using Integrated Storage may end up with a leader that is not upgraded last, which can trigger a re-election. This re-election can cause the upgraded node to become the leader, resulting in newly created tokens on the leader being unusable on nodes that have not yet been upgraded. Note that this issue does not impact Vault OSS users.
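A quick way to tell which format a given service token uses during a mixed-version upgrade is to check its prefix: Vault 1.10 moved service tokens from the `s.` prefix to `hvs.`. A minimal sketch of that check (the token values below are made up, not real credentials):

```shell
# Classify a Vault token by its prefix. Vault 1.10+ service tokens start with
# "hvs."; pre-1.10 service tokens start with "s.".
classify_token() {
  case "$1" in
    hvs.*) echo "new-format service token (1.10+)" ;;
    s.*)   echo "old-format service token (pre-1.10)" ;;
    *)     echo "unknown token format" ;;
  esac
}

# Hypothetical old-format token, as minted by a not-yet-upgraded leader:
classify_token "s.EXAMPLEOLDTOKEN"   # prints "old-format service token (pre-1.10)"
```

During the upgrade window described above, tokens minted by an already-upgraded leader would show the new prefix and fail on not-yet-upgraded standbys.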
-We will have a fix for this issue in Vault 1.10.1. Until this issue is fixed, you may be at risk of having performance standbys unable to service requests until all nodes are upgraded. We recommend that you plan for a maintenance window to upgrade.
+We will have a fix for this issue in Vault 1.10.3. Until this issue is fixed, you may be at risk of having performance standbys unable to service requests until all nodes are upgraded. We recommend that you plan for a maintenance window to upgrade.

### Limited policy shows unhelpful message in UI after mounting a secret engine
@@ -107,3 +109,4 @@ set to `unauth`.
There is a workaround for this error that will allow you to sign in to Vault using the OIDC
auth method. Select the "Other" tab instead of selecting the specific OIDC auth mount tab.
From there, select "OIDC" from the "Method" select box and proceed to sign in to Vault.
@@ -44,6 +44,9 @@ Notes](https://golang.org/doc/go1.16) for full details. Of particular note:
@include 'entity-alias-mapping.mdx'

@include 'pki-forwarding-bug.mdx'

@include 'raft-panic-old-tls-key.mdx'

## Known Issues

- MSSQL integrations (storage and secrets engine) will crash with a "panic: not implemented" error
@@ -97,6 +97,8 @@ See [this blog post](https://go.dev/blog/tls-cipher-suites) for more information
@include 'pki-forwarding-bug.mdx'

@include 'raft-panic-old-tls-key.mdx'

## Known Issues

### Identity Token Backend Key Rotations
17 website/content/partials/raft-panic-old-tls-key.mdx Normal file
@@ -0,0 +1,17 @@
## Integrated Storage panic related to old TLS key

Raft in Vault uses its own set of TLS certificates, independent of those that the user
controls to protect the API port and those used for replication and clustering. These
certs get rotated daily, but to ensure that nodes which were down or behind on Raft log
replication don't lose the ability to speak with other nodes, the newly generated daily
TLS cert only starts being used once we see that all nodes have received it.
A recent change related to a security audit results in this rotation code [getting a
panic](https://github.com/hashicorp/vault/issues/15147) when the current cert is
more than 24h old. This can happen if the cluster as a whole is down for a day
or more. It can also happen if a single node is unreachable for 24h, or sufficiently
backlogged in applying raft logs that it's more than a day behind.

Impacted versions: 1.10.1, 1.9.5, 1.8.10. Versions prior to these are unaffected.
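The trigger condition amounts to a simple age check on the active Raft TLS cert. The sketch below illustrates only the condition described above, not Vault's actual rotation code (the function name and timestamps are made up):

```shell
# Hedged sketch: affected versions panic when the active Raft TLS cert is
# more than 24 hours (86400 seconds) old.
SECONDS_PER_DAY=86400

cert_age_check() {
  # $1: cert creation time (unix seconds), $2: current time (unix seconds)
  age=$(( $2 - $1 ))
  if [ "$age" -gt "$SECONDS_PER_DAY" ]; then
    echo "cert older than 24h: affected versions (1.10.1, 1.9.5, 1.8.10) panic here"
  else
    echo "ok"
  fi
}

now=$(date +%s)
cert_age_check $(( now - 90000 )) "$now"   # cert issued ~25h ago, so the check trips
```

A whole-cluster outage of a day, or one node that is unreachable or replication-lagged past 24h, is enough to put the cert into this state.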
New releases addressing this panic are coming soon.