Mirror of https://github.com/hashicorp/vault.git, synced 2025-08-19 13:41:10 +02:00
3 Commits
1c4aa5369e
proto: rebuild with the latest protoc-gen-go (#27331)
Signed-off-by: Ryan Cragun <me@ryan.ec>
9a10689ca3
[QT-645] Restructure dev tools (#24559)
We're on a quest to reduce our pipeline execution time, both to enhance developer productivity and to reduce the overall cost of the CI pipeline. The strategy used here reduces workflow execution time and network I/O cost by shrinking our module cache and using binary external tools when possible. We no longer download modules and build many of the external tools thousands of times a day.

Our previous process of installing internal and external developer tools was scattered and inconsistent. Some tools were installed via `go generate -tags tools ./tools/...`, others via various `make` targets, and some only in GitHub Actions workflows. This process led to some undesirable side effects:

* The modules of some dev and test tools were included with those of the Vault project, which meant managing our own Go module dependencies alongside those of external tools. Prior to Go 1.16 this was the recommended way to handle external tools, but now `go install tool@version` is the recommended way to handle external tools that need to be built from source, as it supports pinning specific versions without modifying the go.mod (see the sketch after this commit message).
* Due to GitHub cache constraints we combine our build and test Go module caches together, but having our developer tools as deps in our module results in a larger cache which is downloaded on every build and test workflow runner. Removing the external tools that were included in our go.mod reduced the expanded module cache size by ~300MB, saving time and network I/O costs when downloading the module cache.
* Not all of our developer tools were included in our modules. Some were being installed with `go install` or `go run`, so they didn't take advantage of a single module cache. This resulted in us downloading Go modules on every CI and build runner in order to build our external tools.
* Building our developer tools from source in CI is slow. Where possible we prefer to use pre-built binaries in CI workflows. No more module downloads or tool compiles if we can avoid them.

I've refactored how we define internal and external build tools in our Makefile and added several new targets to handle both building the developer tools locally for development and verifying that they are available. This allows for an easy developer bootstrap while also supporting installation of many of the external developer tools from pre-built binaries in CI. This reduces our network I/O and run time across nearly all of our actions runners.

While working on this I caught and resolved a few unrelated issues:

* Both our Go and Proto format checks were being run incorrectly. In CI they were writing changes but not failing if changes were detected. The Go formatting was less of a problem, as we have git hooks that are intended to enforce formatting; however, we drifted over time.
* Our Git hooks couldn't handle removing a Go file without failing. I moved the diff check into the new Go helper and updated it to handle removed files.
* I combined a few separate scripts into helpers and added a few new capabilities.
* I refactored how we install Go modules to make it easier to download and tidy all of the project's go.mod files.
* Refactor our internal and external tool installation and verification into a tools.sh helper.
* Combine more complex Go verification into `scripts/go-helper.sh` and utilize it in the `Makefile` and git commit hooks.
* Add `Makefile` targets for executing our various tools.sh helpers.
* Update our existing `make` targets to use the new tool targets.
* Normalize the output of our various scripts and targets to a consistent format.
* In CI, install many of our external dependencies as binaries wherever possible. When not possible we'll build them from scratch, but not mess with the shared module cache.
* [QT-641] Remove our external build tools from our project Go modules.
* [QT-641] Remove extraneous `go list` calls from our `set-up-go` composite action.
* Fix formatting and regen our protos.

Signed-off-by: Ryan Cragun <me@ryan.ec>
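For context on the first bullet above, here is a minimal sketch of the go.mod-pinned "tools file" pattern this commit moves away from, followed by its `go install tool@version` replacement. The imported tool and the version below are illustrative placeholders, not Vault's actual dev tools.

```go
//go:build tools

// Package tools illustrates the pre-Go-1.16 pattern this change removes: dev
// tools are blank-imported so their versions are pinned in the project's
// go.mod, and `go generate -tags tools ./tools/...` installs them. The tool
// below is a hypothetical placeholder, not one of Vault's actual dev tools.
package tools

//go:generate go install golang.org/x/tools/cmd/stringer

import (
	_ "golang.org/x/tools/cmd/stringer" // hypothetical dev tool pinned via go.mod
)
```

With Go 1.16+, the equivalent is `go install golang.org/x/tools/cmd/stringer@v0.24.0` (tool and version hypothetical): the binary is still built at a pinned version, but the dependency never enters the project's module graph, which is what allows the shared module cache to shrink.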
3565c90cf8
feature: multiplexing support for database plugins (#14033)
* feat: DB plugin multiplexing (#13734)
* WIP: start from main and get a plugin runner from core
* move MultiplexedClient map to plugin catalog
  - call sys.NewPluginClient from PluginFactory
  - updates to getPluginClient
  - thread through isMetadataMode
* use go-plugin ClientProtocol interface
  - call sys.NewPluginClient from dbplugin.NewPluginClient
* move PluginSets to dbplugin package
  - export dbplugin HandshakeConfig
  - small refactor of PluginCatalog.getPluginClient
* add removeMultiplexedClient; clean up on Close()
  - call client.Kill from plugin catalog
  - set rpcClient when muxed client exists
* add ID to dbplugin.DatabasePluginClient struct
* only create one plugin process per plugin type
* update NewPluginClient to return connection ID to sdk (sketched below)
  - wrap grpc.ClientConn so we can inject the ID into context
  - get ID from context on grpc server
* add v6 multiplexing protocol version
* WIP: backwards compat for db plugins
* Ensure locking on plugin catalog access
  - Create public GetPluginClient method for plugin catalog
  - rename postgres db plugin
* use the New constructor for db plugins
* grpc server: use write lock for Close and rlock for CRUD
* cleanup MultiplexedClients on Close
* remove TODO
* fix multiplexing regression with grpc server connection
* cleanup grpc server instances on close
* embed ClientProtocol in Multiplexer interface
* use PluginClientConfig arg to make NewPluginClient plugin type agnostic
* create a new plugin process for non-muxed plugins
* feat: plugin multiplexing: handle plugin client cleanup (#13896)
* use closure for plugin client cleanup
* log and return errors; add comments
* move rpcClient wrapping to core for ID injection
* refactor core plugin client and sdk
* remove unused ID method
* refactor and only wrap clientConn on multiplexed plugins
* rename structs and do not export types
* Slight refactor of system view interface
* Revert "Slight refactor of system view interface"
  This reverts commit 73d420e5cd2f0415e000c5a9284ea72a58016dd6.
* Revert "Revert "Slight refactor of system view interface""
  This reverts commit f75527008a1db06d04a23e04c3059674be8adb5f.
* only provide pluginRunner arg to the internal newPluginClient method
* embed ClientProtocol in pluginClient and name logger
* Add back MLock support
* remove enableMlock arg from setupPluginCatalog
* rename plugin util interface to PluginClient

Co-authored-by: Brian Kassouf <bkassouf@hashicorp.com>

* feature: multiplexing: fix unit tests (#14007)
* fix grpc_server tests and add coverage
* update run_config tests
* add happy path test case for grpc_server ID from context
* update test helpers
* feat: multiplexing: handle v5 plugin compiled with new sdk
* add mux supported flag and increase test coverage
* set multiplexingSupport field in plugin server
* remove multiplexingSupport field in sdk
* revert postgres to non-multiplexed
* add comments on grpc server fields
* use pointer receiver on grpc server methods
* add changelog
* use pointer for grpcserver instance
* Use a gRPC server to determine if a plugin should be multiplexed
* Apply suggestions from code review

Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>

* add lock to removePluginClient
* add multiplexingSupport field to externalPlugin struct
* do not send nil to grpc MultiplexingSupport
* check err before logging
* handle locking scenario for cleanupFunc
* allow ServeConfigMultiplex to dispense v5 plugin
* reposition structs, add err check and comments
* add comment on locking for cleanupExternalPlugin

Co-authored-by: Brian Kassouf <bkassouf@hashicorp.com>
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
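The bullet above about wrapping grpc.ClientConn is the heart of the multiplexing scheme: a single plugin process serves many logical database connections, so every RPC must carry an ID that the server uses to route the call to the right instance. Below is a minimal sketch of that pattern, assuming hypothetical type names and a hypothetical metadata key rather than the Vault SDK's actual identifiers.

```go
package dbpluginmux

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"
)

// idMetadataKey carries the multiplexing ID on each RPC. The key name is a
// hypothetical placeholder, not the one used by the Vault SDK.
const idMetadataKey = "multiplexing_id"

// idClientConn wraps a *grpc.ClientConn so that every outgoing RPC is tagged
// with the ID of the logical plugin connection it belongs to.
type idClientConn struct {
	*grpc.ClientConn
	id string
}

var _ grpc.ClientConnInterface = (*idClientConn)(nil)

// Invoke appends the connection ID to the outgoing metadata before delegating
// to the embedded ClientConn.
func (c *idClientConn) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...grpc.CallOption) error {
	ctx = metadata.AppendToOutgoingContext(ctx, idMetadataKey, c.id)
	return c.ClientConn.Invoke(ctx, method, args, reply, opts...)
}

// idFromContext is what a multiplexed gRPC server would call inside each
// handler to recover the ID and select the matching database instance.
func idFromContext(ctx context.Context) (string, bool) {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok {
		return "", false
	}
	vals := md.Get(idMetadataKey)
	if len(vals) == 0 {
		return "", false
	}
	return vals[0], true
}
```

On the server side of the shared plugin process, each handler would call idFromContext and look up the per-connection database instance in a map keyed by that ID; that lookup is what lets one plugin process stand in for what used to be a separate process per mount.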