* add patch to kv adapter
* use query-param-string helper in fetchSubkeys
* one more whitespace helper
* move method because git diff was strange
* update path util tests
* Add LTS explanation and clarify other label explanations
* Link to doc containing LTS calendar
* Change order to reduce cognitive load
* A bit simpler based on feedback
* bumping versions for grpc and docker/docker
* go get github.com/docker/docker@v25.0.6 && go mod tidy
* updating to 25.0.6 in sdk
* updating grpc in sdk
Optimize the cost of the Security `scan` workflow by utilizing a
different runner. Previously this workflow used the
`custom-linux-xl` runner in `vault` vs. the `c6a.4xlarge` on-demand runner in
`vault-enterprise`. This resulted in the `vault` workflow costing an
order of magnitude more each month.
I tested with the following instance sizes to compare cost to execution
time:
| Runner              | Estimated Time | Cost Factor | Cost Score |
|---------------------|----------------|-------------|------------|
| ubuntu-latest       | 19m            | 1           | 19         |
| custom-linux-small  | 21.5m          | 2           | 43         |
| custom-linux-medium | 11.5m          | 4           | 46         |
| custom-linux-xl     | 8.5m           | 16          | 136        |
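(Cost Score = Estimated Time in minutes × Cost Factor, e.g. 8.5 × 16 = 136 for `custom-linux-xl`; lower is better.)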
Currently the `CI` and `build` required workflows take anywhere from
16-20 minutes on `vault`. Our goal is to not exceed that.
At this time we're going to try out `ubuntu-latest` as it gives us ~85%
savings and is by far the best bang for our buck. If it ends up being a
burden we can switch to `custom-linux-medium` for ~66% cost savings with
a still reasonable runtime.
Signed-off-by: Ryan Cragun <me@ryan.ec>
After VAULT-20259 we did not re-enable the undo logs verification. This
re-enables the check, but modifies it to check the status of the primary
and follower nodes separately, as they should have different values.
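As a rough sketch of the shape of that check, assuming the undo logs state is exposed as a gauge on the `v1/sys/metrics` telemetry endpoint (the gauge name and the exact comparison are assumptions for illustration, not the scenario's actual code):

```go
// Illustrative only: read a gauge from Vault's telemetry endpoint on two
// nodes and confirm the values differ. The gauge name below is an assumption.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// metricsSummary mirrors the relevant slice of go-metrics' JSON output.
type metricsSummary struct {
	Gauges []struct {
		Name  string  `json:"Name"`
		Value float64 `json:"Value"`
	} `json:"Gauges"`
}

func undoLogsGauge(addr, token string) (float64, error) {
	req, err := http.NewRequest(http.MethodGet, addr+"/v1/sys/metrics?format=json", nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("X-Vault-Token", token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var m metricsSummary
	if err := json.NewDecoder(resp.Body).Decode(&m); err != nil {
		return 0, err
	}
	for _, g := range m.Gauges {
		if g.Name == "vault.core.replication.write_undo_logs" { // assumed gauge name
			return g.Value, nil
		}
	}
	return 0, fmt.Errorf("undo logs gauge not found")
}

func main() {
	primary, err := undoLogsGauge("http://primary:8200", "root-token")
	if err != nil {
		panic(err)
	}
	follower, err := undoLogsGauge("http://follower:8200", "root-token")
	if err != nil {
		panic(err)
	}
	if primary == follower {
		fmt.Printf("unexpected: both nodes report undo logs = %v\n", primary)
	}
}
```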
While testing this I accidentally flubbed my version input and found the
diagnostic output confusing, so I updated the version-mismatch error
message to be easier to read.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* build kv-patch-editor component
* add tests
* use validator helpers in kv-object-editor
* update class name in version-history
* remove is- from css class
* move whitespace warning and non-string values warning messages to validators util
* break editor component into smaller ones
* fix typo
* add docs
* rename files and move to directory, add tests for new templates
* fix some bugs and add tests!
* fix validation bug and update tests
* capitalize item in helper
* remove comment
* and one more comment change
* return 204 instead of 400 when there is no export data
* add identity metadata to JSON and CSV exports, flattening nested metadata into CSV columns (see the sketch after this list)
* add condition to nil-check-physical-storage-by-nsid semgrep rule
* add TestActivityLog_Export_CSV_Header test
* fix tests
* add changelog entry
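As a rough illustration of what column flattening for the CSV export means, here is a minimal, self-contained Go sketch; the record shape and field names are hypothetical, not Vault's actual export types:

```go
// Illustrative sketch: flatten nested metadata maps into dotted CSV columns.
package main

import (
	"encoding/csv"
	"fmt"
	"os"
	"sort"
)

// flattenMetadata turns nested metadata like {"metadata": {"team": "eng"}}
// into flat column names like "metadata.team".
func flattenMetadata(prefix string, in map[string]any, out map[string]string) {
	for k, v := range in {
		key := k
		if prefix != "" {
			key = prefix + "." + k
		}
		switch val := v.(type) {
		case map[string]any:
			flattenMetadata(key, val, out)
		default:
			out[key] = fmt.Sprintf("%v", val)
		}
	}
}

func main() {
	records := []map[string]any{
		{"client_id": "abc", "metadata": map[string]any{"team": "eng", "org": "platform"}},
		{"client_id": "def", "metadata": map[string]any{"team": "sales"}},
	}

	// First pass: collect the union of flattened column names so every row
	// shares one header, even when rows carry different metadata keys.
	cols := map[string]bool{}
	flat := make([]map[string]string, 0, len(records))
	for _, r := range records {
		row := map[string]string{}
		flattenMetadata("", r, row)
		for k := range row {
			cols[k] = true
		}
		flat = append(flat, row)
	}
	header := make([]string, 0, len(cols))
	for k := range cols {
		header = append(header, k)
	}
	sort.Strings(header)

	// Second pass: emit rows against the stable header; missing keys
	// become empty cells.
	w := csv.NewWriter(os.Stdout)
	defer w.Flush()
	w.Write(header)
	for _, row := range flat {
		rec := make([]string, len(header))
		for i, h := range header {
			rec[i] = row[h]
		}
		w.Write(rec)
	}
}
```

The two-pass approach keeps the header stable even when individual clients have different metadata keys.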
Removing these recommendations because they are not in line with conventional wisdom or our HVDs. For example, AppRole should not be leveraged when a platform-native identity source is available (e.g., AWS, Azure, GCP, K8s, Nomad, etc.)
* Add trace logging to context creation during log request/response handling. Improve context sensitivity of sink nodes (file, socket); update eventlogger to include context info in errors
* changelog
* Queue for the lock but check the context immediately (see the sketch after this list)
* fix race in test
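A minimal sketch of the "queue for the lock but check the context immediately" pattern referenced above, with illustrative names rather than the actual change:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// lockWithContext queues for mu right away (keeping its place in line) but
// abandons the wait as soon as ctx is canceled. Illustrative only.
func lockWithContext(ctx context.Context, mu *sync.Mutex) error {
	// Check the context up front so an already-canceled caller never queues.
	if err := ctx.Err(); err != nil {
		return err
	}
	acquired := make(chan struct{})
	go func() {
		mu.Lock()
		select {
		case acquired <- struct{}{}:
			// The waiter is still there; it now owns the lock.
		default:
			// The waiter gave up; release so the lock doesn't leak.
			mu.Unlock()
		}
	}()
	select {
	case <-acquired:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	var mu sync.Mutex
	mu.Lock() // simulate a long-held lock

	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	if err := lockWithContext(ctx, &mu); err != nil {
		fmt.Println("gave up waiting:", err) // context deadline exceeded
	}
	mu.Unlock()
}
```

If the context is canceled while queued, the helper goroutine releases the lock whenever it finally arrives, so nothing leaks.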
* VAULT-29583: Modernize default distributions in enos scenarios
Our scenarios have been running the last gen of distributions in CI.
This updates our default distributions as follows:
- Amazon: 2023
- Leap: 15.6
- RHEL: 8.10, 9.4
- SLES: 15.6
- Ubuntu: 20.04, 24.04
With these changes we also unlock a few new variant combinations:
- `distro:amzn seal:pkcs11`
- `arch:arm64 distro:leap`
We also normalize our distro key for Amazon Linux to `amzn`, which
matches the uname output on both versions that we've supported.
Signed-off-by: Ryan Cragun <me@ryan.ec>
Sometimes the replication scenario races with other steps and
attempts to check the `v1/sys/version-history` API before the cluster is
ready. By the time the check is retried, some of the original nodes are
down, so it fails. This makes the verification happen later, only
after we've ensured the cluster is unsealed and have gotten the leader and
cluster IP addresses. We also make dependent steps require the version
verification so that if it does fail for some reason it will be retried
before the rest of the scenario runs.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* add inline cert auth to postgres db plugin
* handle both sslinline and new TLS plugin fields
* refactor PrepareTestContainerWithSSL
* add tests for postgres inline TLS fields
* changelog
* revert to errwrap since the middleware sanitizing depends on it
* enable only setting sslrootcert (see the sketch after this list)
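For illustration, a minimal sketch of a postgres DSN where only the CA is set, roughly the "only sslrootcert" case above. The host, paths, and helper name are made up, and this is not the plugin's actual implementation; the plugin change also covers inline (PEM-in-config) TLS fields, while this sketch uses a file path for simplicity:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildDSN assembles a postgres connection string where only the CA
// (sslrootcert) is supplied, with no client cert/key pair.
func buildDSN(host, user, db, caPath string) string {
	q := url.Values{}
	q.Set("sslmode", "verify-full") // verify the server cert against the CA
	q.Set("sslrootcert", caPath)    // CA only; sslcert/sslkey stay unset
	u := url.URL{
		Scheme:   "postgres",
		User:     url.User(user),
		Host:     host,
		Path:     "/" + db,
		RawQuery: q.Encode(),
	}
	return u.String()
}

func main() {
	fmt.Println(buildDSN("db.internal:5432", "vault", "appdb", "/etc/ssl/ca.pem"))
}
```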
* initial changes
* test selector and duplicate tests clean up
* check for flashDanger
* rename to make it easier to parse
* clean up selector names
* clean up
* add component test coverage
* remove true
* Edit alias_name_source explanation
We wanted to clarify the difference between the two options and their implications.
* Add missing backticks
* Add comma
* Update website/content/api-docs/auth/kubernetes.mdx
Co-authored-by: Sarah Chavis <62406755+schavis@users.noreply.github.com>
---------
Co-authored-by: Sarah Chavis <62406755+schavis@users.noreply.github.com>
* Update 1_15-auto-upgrade.mdx
* Update known issue version numbers for AP issue
---------
Co-authored-by: Sarah Chavis <62406755+schavis@users.noreply.github.com>
* Replace getNewModel with hydrateModel when model exists
* Update getNewModel to only handle nonexistent model types
* Update test
* clarify test
* Fix auth-config models which need hydration, not generation
* rename file to match service name
* cleanup + tests
* Add comment about helpUrl method