* Adding explicit MPL license for sub-package.
This directory and its subdirectories (packages) contain files licensed with the MPLv2 `LICENSE` file in this directory and are intentionally licensed separately from the BSL `LICENSE` file at the root of this repository.
* Updating the license from MPL to Business Source License.
Going forward, this project will be licensed under the Business Source License v1.1. Please see our blog post for more details at https://hashi.co/bsl-blog, FAQ at www.hashicorp.com/licensing-faq, and details of the license at www.hashicorp.com/bsl.
* add missing license headers
* Update copyright file headers to BUSL-1.1
* Fix test that expected exact offset on HCL file
---------
Co-authored-by: hashicorp-copywrite[bot] <110428419+hashicorp-copywrite[bot]@users.noreply.github.com>
Co-authored-by: Sarah Thompson <sthompson@hashicorp.com>
Co-authored-by: Brian Kassouf <bkassouf@hashicorp.com>
* add hashfunc field to EntryFormatter struct and adjust NewEntryFormatter function and tests
* add HeaderAdjuster interface and require it in EntryFormatter
adjust all references to NewEntryFormatter to include a HeaderAdjuster parameter
* replace use of hash function in AuditedHeadersConfig's ApplyConfig method with Salter interface instance (see the interface sketch after this commit list)
* review feedback
* Go doc typo
* add another test function
---------
Co-authored-by: Peter Wilson <peter.wilson@hashicorp.com>
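A minimal sketch of the interfaces these commits describe is below; every name and signature is inferred from the commit messages above, not taken from the actual code:

```go
package audit

import (
	"context"
	"fmt"
)

// Salter is assumed to supply the salt used when hashing audit entries,
// replacing the bare hash function the formatter previously held.
type Salter interface {
	// Salt returns the salt value to use for hashing, or an error.
	Salt(ctx context.Context) (string, error)
}

// HeaderAdjuster is assumed to hash/redact the configured request headers
// before they are written to an audit entry.
type HeaderAdjuster interface {
	ApplyConfig(ctx context.Context, headers map[string][]string, salter Salter) (map[string][]string, error)
}

// EntryFormatter requires a HeaderAdjuster rather than a raw hash function.
type EntryFormatter struct {
	headers HeaderAdjuster
	salter  Salter
}

// NewEntryFormatter sketches the adjusted constructor, which now takes the
// HeaderAdjuster as a required parameter.
func NewEntryFormatter(headers HeaderAdjuster, salter Salter) (*EntryFormatter, error) {
	if headers == nil {
		return nil, fmt.Errorf("header adjuster cannot be nil")
	}
	return &EntryFormatter{headers: headers, salter: salter}, nil
}
```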
* begin refactoring of event package into audit package
* audit options additions
* rename option structs
* Trying to remove 'audit' from the start of names.
* typo
* typo
* typo
* newEvent required params
* typo
* comments on noop sink
* more refactoring - merge json/jsonx formatters
* fix file backend and tests
* Moved unexported funcs to formatter, fixed file tests
* typos, comments, moved func
* fix corehelpers
* fix backends (syslog, socket)
* Moved some sinks back to generic event package.
* return of the file sink
* remove unneeded sink params/return vars
* Implement Register and Deregister Audit Devices for EventLogger Framework (#21940)
* add function to create StdoutSinkNode
* add boolean argument to audit Factory function
* create eventlogger nodes in backend factory functions
* simplify NewNoopSink function and remove DiscardSinkNode
* make the sanity test in the file backend mutually exclusive based on useEventLogger value
* remove test cases that no longer made sense and were failing
* NewFileSink attempts to open file for sanity check
* fix FileSink tests and update FileSink to remove discard, stdout but add /dev/null
* Moved WithPrefix from FileSink to EventFormatter
* move prefix in backend
* NewFormatterConfig and Options (tests fixed)
* Little tidy up
* add test where audit file is created with useEventLogger set to true
* only create eventlogger.Node instances when useEventLogger is true (see the sketch after this commit list)
fix failing test due to invalid string conversion of FileMode value
* moved variable definition to more appropriate scope
---------
Co-authored-by: Marc Boudreau <marc.boudreau@hashicorp.com>
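A rough sketch of the gating these commits describe, using stand-in types (the real backends build hashicorp/eventlogger nodes; everything named here is illustrative):

```go
package audit

import (
	"fmt"
	"os"
)

// Node stands in for an eventlogger.Node; the real interface is richer,
// but the gating pattern is the same.
type Node interface{}

type fileSinkNode struct{ path string }

// newFileSinkNode opens the file once as a sanity check (per "NewFileSink
// attempts to open file for sanity check"), so a bad path fails at enable
// time rather than on first write.
func newFileSinkNode(path string) (Node, error) {
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
	if err != nil {
		return nil, fmt.Errorf("sanity check failed for %q: %w", path, err)
	}
	f.Close()
	return &fileSinkNode{path: path}, nil
}

// Backend holds the optional nodes.
type Backend struct {
	nodes []Node
}

// Factory creates node instances only when useEventLogger is true.
func Factory(path string, useEventLogger bool) (*Backend, error) {
	b := &Backend{}
	if useEventLogger {
		sink, err := newFileSinkNode(path)
		if err != nil {
			return nil, err
		}
		b.nodes = append(b.nodes, sink)
	}
	return b, nil
}
```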
* add useEventLogger argument to audit Factory functions
* adjusting Factory functions defined in tests
This PR relates to a feature request logged through HashiCorp commercial
support.
Vault lacks pagination in its APIs. As a result, certain list operations
can return **very** large responses. The user's chosen audit sinks may
experience difficulty consuming audit records that swell to tens of
megabytes of JSON.
In our case, one of the systems consuming audit log data could not cope,
and failed.
The responses of list operations are typically not very interesting: they
are mostly lists of keys, and even when they include a "key_info" field,
they do not return confidential information. They become even less
interesting once HMAC-ed by the audit system.
Some example Vault "list" operations that are prone to becoming very
large in an active Vault installation are:
auth/token/accessors/
identity/entity/id/
identity/entity-alias/id/
pki/certs/
In response, I've coded a new option that can be applied to audit
backends, `elide_list_responses`. When enabled, response data is elided
from audit logs, but only when the operation type is "list".
For added safety, the elision only applies to the "keys" and "key_info"
fields within the response data - these are conventionally the only
fields present in a list response - see logical.ListResponse, and
logical.ListResponseWithInfo. However, other fields are technically
possible if a plugin author writes unusual code, and these will be
preserved in the audit log even with this option enabled.
The elision replaces the values of the "keys" and "key_info" fields with
an integer count of the number of entries. This allows even the elided
audit logs to still be useful for answering questions like "Was any data
returned?" or "How many records were listed?".
* Send a test message before committing a new audit device.
Also, lower timeout on connection attempts in socket device.
* added changelog
* go mod vendor (picked up some unrelated changes).
* Skip audit device check in integration test.
Co-authored-by: swayne275 <swayne@hashicorp.com>
Move audit.LogInput to sdk/logical. Allow the Data values in audited
logical.Request and Response to implement OptMarshaler, in which case
we delegate hashing/serializing responsibility to them. Add new
ClientCertificateSerialNumber audit request field.
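A hedged sketch of that delegation pattern; the interface and option shapes below are assumptions based on this description, not verified sdk/logical definitions:

```go
package logical

import "encoding/json"

// MarshalOptions carries audit-time options, such as the hash function the
// audit system would otherwise apply itself (assumed shape).
type MarshalOptions struct {
	ValueHasher func(string) string
}

// OptMarshaler lets a Data value serialize (and hash) itself for auditing,
// instead of the audit system walking and hashing it generically.
type OptMarshaler interface {
	MarshalJSONWithOptions(*MarshalOptions) ([]byte, error)
}

// SelfHashingSecret is a hypothetical implementer that hashes its own value.
type SelfHashingSecret struct {
	Value string
}

func (s SelfHashingSecret) MarshalJSONWithOptions(opts *MarshalOptions) ([]byte, error) {
	return json.Marshal(map[string]string{"value": opts.ValueHasher(s.Value)})
}
```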
SystemView can now be cast to ExtendedSystemView to expose the Auditor
interface, which allows submitting requests and responses to the audit
broker.
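Roughly, that cast could look like the following; the interface shapes are assumed from the description, with LogInput as a stand-in type:

```go
package logical

import (
	"context"
	"fmt"
)

// LogInput stands in for the audited request/response payload.
type LogInput struct{}

// Auditor is assumed to submit requests and responses to the audit broker.
type Auditor interface {
	AuditRequest(ctx context.Context, input *LogInput) error
	AuditResponse(ctx context.Context, input *LogInput) error
}

// ExtendedSystemView is assumed to expose the Auditor.
type ExtendedSystemView interface {
	Auditor() Auditor
}

// auditViaSystemView shows the cast: a plain system view (held here as
// interface{}) is type-asserted to reach the audit broker.
func auditViaSystemView(ctx context.Context, sysView interface{}, in *LogInput) error {
	esv, ok := sysView.(ExtendedSystemView)
	if !ok {
		return fmt.Errorf("system view does not support auditing")
	}
	return esv.Auditor().AuditRequest(ctx, in)
}
```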
A static token at the beginning of a log line can help systems parse
logs better. For example, rsyslog and syslog-ng will recognize the
'@cee: ' prefix and will parse the rest of the line as a valid json message.
This is useful in environments where there is a mix of structured and
unstructured logs.
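As an illustration (not the device's actual code), a sink could prepend the static token before each JSON entry like so:

```go
package audit

import (
	"encoding/json"
	"io"
)

// writePrefixedEntry writes prefix (e.g. "@cee: ") followed by the JSON
// encoding of entry and a newline, so parsers like rsyslog and syslog-ng
// can treat everything after the token as structured JSON.
func writePrefixedEntry(w io.Writer, prefix string, entry interface{}) error {
	b, err := json.Marshal(entry)
	if err != nil {
		return err
	}
	_, err = w.Write(append(append([]byte(prefix), b...), '\n'))
	return err
}
```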