Update README defaults and config

Lennart Jern 2021-03-12 07:53:45 +02:00
parent 8e5bf00c54
commit b40fe984b3

README.md (121 lines changed)

@@ -124,7 +124,7 @@ Though for a quickstart a compiled version of the Kubernetes [manifests](manifes
* Create the monitoring stack using the config in the `manifests` directory:
```shell
# Create the namespace and CRDs, and then wait for them to be availble before creating the remaining resources
# Create the namespace and CRDs, and then wait for them to be available before creating the remaining resources
kubectl create -f manifests/setup
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
kubectl create -f manifests/
@@ -287,7 +287,7 @@ The previous steps (compilation) has created a bunch of manifest files in the ma
Now simply use `kubectl` to install Prometheus and Grafana as per your configuration:
```shell
# Update the namespace and CRDs, and then wait for them to be availble before creating the remaining resources
# Update the namespace and CRDs, and then wait for them to be available before creating the remaining resources
$ kubectl apply -f manifests/setup
$ kubectl apply -f manifests/
```
@@ -332,71 +332,98 @@ Jsonnet has the concept of hidden fields. These are fields, that are not going t
These are the available fields with their respective default values:
```
{
_config+:: {
namespace: "default",
versions+:: {
alertmanager: "v0.17.0",
nodeExporter: "v0.18.1",
kubeStateMetrics: "v1.5.0",
kubeRbacProxy: "v0.4.1",
prometheusOperator: "v0.30.0",
prometheus: "v2.10.0",
values:: {
common: {
namespace: 'default',
ruleLabels: {
role: 'alert-rules',
prometheus: $.values.prometheus.name,
},
// to allow automatic upgrades of components, we store versions in autogenerated `versions.json` file and import it here
versions: {
alertmanager: error 'must provide version',
blackboxExporter: error 'must provide version',
grafana: error 'must provide version',
kubeStateMetrics: '1.9.8',
nodeExporter: error 'must provide version',
prometheus: error 'must provide version',
prometheusAdapter: error 'must provide version',
prometheusOperator: error 'must provide version',
} + (import 'versions.json'),
images: {
alertmanager: 'quay.io/prometheus/alertmanager:v' + $.values.common.versions.alertmanager,
blackboxExporter: 'quay.io/prometheus/blackbox-exporter:v' + $.values.common.versions.blackboxExporter,
grafana: 'grafana/grafana:v' + $.values.common.versions.grafana,
kubeStateMetrics: 'k8s.gcr.io/kube-state-metrics/kube-state-metrics:v' + $.values.common.versions.kubeStateMetrics,
nodeExporter: 'quay.io/prometheus/node-exporter:v' + $.values.common.versions.nodeExporter,
prometheus: 'quay.io/prometheus/prometheus:v' + $.values.common.versions.prometheus,
prometheusAdapter: 'directxman12/k8s-prometheus-adapter:v' + $.values.common.versions.prometheusAdapter,
prometheusOperator: 'quay.io/prometheus-operator/prometheus-operator:v' + $.values.common.versions.prometheusOperator,
prometheusOperatorReloader: 'quay.io/prometheus-operator/prometheus-config-reloader:v' + $.values.common.versions.prometheusOperator,
},
},
imageRepos+:: {
prometheus: "quay.io/prometheus/prometheus",
alertmanager: "quay.io/prometheus/alertmanager",
kubeStateMetrics: "quay.io/coreos/kube-state-metrics",
kubeRbacProxy: "quay.io/brancz/kube-rbac-proxy",
nodeExporter: "quay.io/prometheus/node-exporter",
prometheusOperator: "quay.io/prometheus-operator/prometheus-operator",
},
prometheus+:: {
names: 'k8s',
replicas: 2,
rules: {},
},
alertmanager+:: {
alertmanager: {
name: 'main',
config: |||
global:
resolve_timeout: 5m
inhibit_rules:
- source_match:
severity: critical
target_match_re:
severity: warning|info
equal: ['namespace', 'alertname']
- source_match:
severity: warning
target_match_re:
severity: info
equal: ['namespace', 'alertname']
route:
group_by: ['job']
group_by: ['namespace']
group_wait: 30s
group_interval: 5m
repeat_interval: 12h
receiver: 'null'
receiver: 'Default'
routes:
- match:
alertname: Watchdog
receiver: 'null'
receiver: Watchdog
- match:
severity: critical
receiver: Critical
receivers:
- name: 'null'
- name: Default
- name: Watchdog
- name: Critical
|||,
replicas: 3,
replicas: 3
},
kubeStateMetrics+:: {
collectors: '', // empty string gets a default set
kubeStateMetrics: {
resources: {
requests: { cpu: '10m', memory: '190Mi' },
limits: { cpu: '100m', memory: '250Mi' },
},
scrapeInterval: '30s',
scrapeTimeout: '30s',
baseCPU: '100m',
baseMemory: '150Mi',
},
nodeExporter+:: {
nodeExporter: {
listenAddress: '127.0.0.1',
port: 9100,
resources: {
requests: { cpu: '102m', memory: '180Mi' },
limits: { cpu: '250m', memory: '180Mi' },
},
},
},
prometheus: {
name: 'k8s',
replicas: 2,
resources: { memory: '400Mi' }
},
}
}
```
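As a hedged illustration (not part of this commit's diff), the hidden `values` fields above are meant to be overridden by merging your own object onto the library, following the `import 'kube-prometheus/kube-prometheus.libsonnet'` pattern used later in this README. The `monitoring` namespace, the replica count, and the final render line are illustrative assumptions modelled on the project's example.jsonnet:
```jsonnet
// Minimal sketch: override a couple of the defaults listed above.
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
  values+:: {
    common+: {
      namespace: 'monitoring',  // instead of the default 'default'
    },
    prometheus+: {
      replicas: 1,  // instead of the default 2
    },
  },
};

// Render the Prometheus-related objects with the overridden values applied.
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) }
```
Because the fields are hidden (declared with `::`), they influence the generated manifests but are never rendered themselves.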
The grafana definition is located in a different project (https://github.com/brancz/kubernetes-grafana), but needed configuration can be customized from the same top level `_config` field. For example to allow anonymous access to grafana, add the following `_config` section:
The grafana definition is located in a different project (https://github.com/brancz/kubernetes-grafana), but needed configuration can be customized from the same top level `values` field. For example to allow anonymous access to grafana, add the following `values` section:
```
grafana+:: {
config: { // http://docs.grafana.org/installation/configuration/
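    // Hedged sketch, not part of this commit's diff: the snippet above is cut off by the
    // hunk boundary. An anonymous-access override would roughly continue as below; the
    // `sections` and `auth.anonymous` keys follow the linked Grafana configuration docs
    // and are assumptions here.
    sections: {
      'auth.anonymous': { enabled: true },
    },
  },
},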
@@ -553,7 +580,7 @@ Standard Kubernetes manifests are all written using [ksonnet-lib](https://github
### Alertmanager configuration
The Alertmanager configuration is located in the `_config.alertmanager.config` configuration field. In order to set a custom Alertmanager configuration simply set this field.
The Alertmanager configuration is located in the `values.alertmanager.config` configuration field. In order to set a custom Alertmanager configuration simply set this field.
[embedmd]:# (examples/alertmanager-config.jsonnet)
```jsonnet
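// Hedged sketch, not the embedmd'd examples/alertmanager-config.jsonnet (which is cut
// off by the hunk boundary): setting the `values.alertmanager.config` field described
// above could look roughly like this; the inlined Alertmanager YAML is illustrative only.
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
  values+:: {
    alertmanager+: {
      config: |||
        global:
          resolve_timeout: 10m
        route:
          group_by: ['namespace']
          receiver: 'Default'
        receivers:
        - name: 'Default'
      |||,
    },
  },
};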
@@ -596,7 +623,7 @@ In the above example the configuration has been inlined, but can just as well be
### Adding additional namespaces to monitor
In order to monitor additional namespaces, the Prometheus server requires the appropriate `Role` and `RoleBinding` to be able to discover targets from that namespace. By default the Prometheus server is limited to the three namespaces it requires: default, kube-system and the namespace you configure the stack to run in via `$._config.namespace`. This is specified in `$._config.prometheus.namespaces`, to add new namespaces to monitor, simply append the additional namespaces:
In order to monitor additional namespaces, the Prometheus server requires the appropriate `Role` and `RoleBinding` to be able to discover targets from that namespace. By default the Prometheus server is limited to the three namespaces it requires: default, kube-system and the namespace you configure the stack to run in via `$.values.namespace`. This is specified in `$.values.prometheus.namespaces`, to add new namespaces to monitor, simply append the additional namespaces:
[embedmd]:# (examples/additional-namespaces.jsonnet)
```jsonnet
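// Hedged sketch, not the embedmd'd examples/additional-namespaces.jsonnet (which is
// cut off by the hunk boundary): appending to `values.prometheus.namespaces` as
// described above might look like this; the namespace names are placeholders.
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
  values+:: {
    prometheus+: {
      namespaces+: ['my-namespace', 'my-second-namespace'],
    },
  },
};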
@@ -764,7 +791,7 @@ See [exposing Prometheus/Alertmanager/Grafana](docs/exposing-prometheus-alertman
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
// ... all necessary mixins ...
{
_config+:: {
values+:: {
// ... configuration for other features ...
blackboxExporter+:: {
modules+:: {