docs: fix admonitions and add some inline syntax highlighting
This commit is contained in:
parent
86b25cff23
commit
a3cd56cfee
@@ -19,17 +19,17 @@ k3d makes it very easy to create single- and multi-node [k3s](https://github.com
 You have several options there:
 
 - use the install script to grab the latest release:
-  - wget: `wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash`
-  - curl: `curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash`
+  - wget: `#!bash wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash`
+  - curl: `#!bash curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash`
 - use the install script to grab a specific release (via `TAG` environment variable):
-  - wget: `wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | TAG=v3.0.0-beta.0 bash`
-  - curl: `curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | TAG=v3.0.0-beta.0 bash`
+  - wget: `#!bash wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | TAG=v3.0.0-beta.0 bash`
+  - curl: `#!bash curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | TAG=v3.0.0-beta.0 bash`
 
-- use [Homebrew](https://brew.sh): `brew install k3d` (Homebrew is available for MacOS and Linux)
+- use [Homebrew](https://brew.sh): `#!bash brew install k3d` (Homebrew is available for MacOS and Linux)
   - Formula can be found in [homebrew/homebrew-core](https://github.com/Homebrew/homebrew-core/blob/master/Formula/k3d.rb) and is mirrored to [homebrew/linuxbrew-core](https://github.com/Homebrew/linuxbrew-core/blob/master/Formula/k3d.rb)
 - install via [AUR](https://aur.archlinux.org/) package [rancher-k3d-bin](https://aur.archlinux.org/packages/rancher-k3d-bin/): `yay -S rancher-k3d-bin`
 - grab a release from the [release tab](https://github.com/rancher/k3d/releases) and install it yourself.
-- install via go: `go install github.com/rancher/k3d` (**Note**: this will give you unreleased/bleeding-edge changes)
+- install via go: `#!bash go install github.com/rancher/k3d` (**Note**: this will give you unreleased/bleeding-edge changes)
 
 ## Quick Start
 
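The piped one-liners in the hunk above can also be split into a download-then-review step; a minimal sketch (the URL and `TAG` value are taken verbatim from the list above, the temp-file path is an arbitrary choice):

```shell
# Fetch the installer to a file first so it can be reviewed before running.
command -v curl >/dev/null 2>&1 || { echo "curl not installed; skipping"; exit 0; }
curl -fsS -o /tmp/k3d-install.sh \
  https://raw.githubusercontent.com/rancher/k3d/master/install.sh || exit 0  # offline: skip

# After reviewing /tmp/k3d-install.sh, run it pinned to a release, e.g.:
#   TAG=v3.0.0-beta.0 bash /tmp/k3d-install.sh
```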
@@ -39,7 +39,7 @@ Create a cluster named `mycluster` with just a single master node:
 k3d create cluster mycluster
 ```
 
-Get the new cluster's connection details merged into your default kubeconfig (usually specified using the `KUBECONFIG` environment variable or the default path `$HOME/.kube/config`) and directly switch to the new context:
+Get the new cluster's connection details merged into your default kubeconfig (usually specified using the `KUBECONFIG` environment variable or the default path `#!bash $HOME/.kube/config`) and directly switch to the new context:
 
 ```bash
 k3d get kubeconfig mycluster --switch
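Put together, the quick start from the hunk above is a short session; `kubectl get nodes` is an assumed verification step, not part of the original docs:

```shell
# Guard: needs the k3d binary (and Docker) to actually run.
command -v k3d >/dev/null 2>&1 || { echo "k3d not installed; skipping"; exit 0; }

# Create a single-master cluster, then merge its kubeconfig and switch context
k3d create cluster mycluster
k3d get kubeconfig mycluster --switch

# Assumed verification step (not in the original docs): list the nodes
kubectl get nodes
```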
@@ -7,28 +7,28 @@ Therefore, we have to create the cluster in a way, that the internal port 80 (wh
 
 1. Create a cluster, mapping the ingress port 80 to localhost:8081
 
-   `k3d create cluster --api-port 6550 -p 8081:80@loadbalancer --workers 2`
+   `#!bash k3d create cluster --api-port 6550 -p 8081:80@loadbalancer --workers 2`
 
-   !!! note "NOTE"
-       - `--api-port 6550` is not required for the example to work. It's used to have `k3s`'s API-Server listening on port 6550 with that port mapped to the host system.
-       - the port-mapping construct `8081:80@loadbalancer` means
-         - map port `8081` from the host to port `80` on the container which matches the nodefilter `loadbalancer`
-         - the `loadbalancer` nodefilter matches only the `masterlb` that's deployed in front of a cluster's master nodes
-         - all ports exposed on the `masterlb` will be proxied to the same ports on all master nodes in the cluster
+   !!! info "Good to know"
+       - `--api-port 6550` is not required for the example to work. It's used to have `k3s`'s API-Server listening on port 6550 with that port mapped to the host system.
+       - the port-mapping construct `8081:80@loadbalancer` means
+         - map port `8081` from the host to port `80` on the container which matches the nodefilter `loadbalancer`
+         - the `loadbalancer` nodefilter matches only the `masterlb` that's deployed in front of a cluster's master nodes
+         - all ports exposed on the `masterlb` will be proxied to the same ports on all master nodes in the cluster
 
 2. Get the kubeconfig file
 
-   `export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"`
+   `#!bash export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"`
 
 3. Create a nginx deployment
 
-   `kubectl create deployment nginx --image=nginx`
+   `#!bash kubectl create deployment nginx --image=nginx`
 
 4. Create a ClusterIP service for it
 
-   `kubectl create service clusterip nginx --tcp=80:80`
+   `#!bash kubectl create service clusterip nginx --tcp=80:80`
 
-5. Create an ingress object for it with `kubectl apply -f`
+5. Create an ingress object for it with `#!bash kubectl apply -f`
    *Note*: `k3s` deploys [`traefik`](https://github.com/containous/traefik) as the default ingress controller
 
    ```YAML
@@ -50,19 +50,19 @@ Therefore, we have to create the cluster in a way, that the internal port 80 (wh
 
 6. Curl it via localhost
 
-   `curl localhost:8081/`
+   `#!bash curl localhost:8081/`
 
 ## 2. via NodePort
 
 1. Create a cluster, mapping the port 30080 from worker-0 to localhost:8082
 
-   `k3d create cluster mycluster -p 8082:30080@worker[0] --workers 2`
+   `#!bash k3d create cluster mycluster -p 8082:30080@worker[0] --workers 2`
 
   - Note: Kubernetes' default NodePort range is [`30000-32767`](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport)
 
 ... (Steps 2 and 3 like above) ...
 
-1. Create a NodePort service for it with `kubectl apply -f`
+1. Create a NodePort service for it with `#!bash kubectl apply -f`
 
    ```YAML
   apiVersion: v1
@@ -85,4 +85,4 @@ Therefore, we have to create the cluster in a way, that the internal port 80 (wh
 
 2. Curl it via localhost
 
-   `curl localhost:8082/`
+   `#!bash curl localhost:8082/`
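The `curl localhost:8081/` and `curl localhost:8082/` checks above can fail for a few seconds while nginx and the ingress/NodePort route are still coming up; a small retry helper (an assumed convenience, not part of the original docs) makes the check sturdier:

```shell
# Poll a URL until it answers, for up to a given number of one-second tries.
# Assumed helper, not from the k3d docs.
wait_for_http() {
  url=$1
  tries=${2:-30}                     # default: roughly 30 seconds
  for _ in $(seq 1 "$tries"); do
    if curl -s -o /dev/null "$url"; then
      echo "up: $url"
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Example usage against a live cluster (run manually):
#   wait_for_http http://localhost:8081/   # ingress example
#   wait_for_http http://localhost:8082/   # NodePort example
```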
@@ -3,7 +3,7 @@
 By default, k3d won't touch your kubeconfig without you telling it to do so.
 To get a kubeconfig set up for you to connect to a k3d cluster, you can go different ways.
 
-??? note "What is the default kubeconfig?"
+??? question "What is the default kubeconfig?"
     We determine the path of the used or default kubeconfig in two ways:
 
     1. Using the `KUBECONFIG` environment variable, if it specifies *exactly one* file
@@ -22,7 +22,7 @@ To get a kubeconfig set up for you to connect to a k3d cluster, you can go diffe
 - *Note:* this won't switch the current-context
   - The file will be created if it doesn't exist
 
-!!! note "Switching the current context"
+!!! info "Switching the current context"
     None of the above options switch the current-context.
     This is intended to be least intrusive, since the current-context has a global effect.
     You can switch the current-context directly with the `get kubeconfig` command by adding the `--switch` flag.
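The default-kubeconfig rule in the admonition above (use `KUBECONFIG` if it names exactly one file, otherwise fall back to `$HOME/.kube/config`) can be sketched in plain shell; this is an illustrative re-statement of the rule, not k3d's actual implementation:

```shell
# Sketch of the lookup described above: prefer KUBECONFIG when it names
# exactly one file, otherwise fall back to the well-known default path.
resolve_kubeconfig() {
  case "$KUBECONFIG" in
    "")  echo "$HOME/.kube/config" ;;   # unset/empty: default path
    *:*) echo "$HOME/.kube/config" ;;   # several files listed: default path
    *)   echo "$KUBECONFIG" ;;          # exactly one file: use it
  esac
}

KUBECONFIG="/tmp/single-config" resolve_kubeconfig   # prints /tmp/single-config
```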
@@ -1,6 +1,6 @@
 # Creating multi-master clusters
 
-!!! note "Important note"
+!!! info "Important note"
     For the best results (and less unexpected issues), choose 1, 3, 5, ... master nodes.
 
 ## Embedded dqlite
@@ -20,6 +20,6 @@ In theory (and also in practice in most cases), this is as easy as executing the
 k3d create node newmaster --cluster multimaster --role master
 ```
 
-!!! note "There's a trap!"
+!!! important "There's a trap!"
     If your cluster was initially created with only a single master node, then this will fail.
     That's because the initial master node was not started with the `--cluster-init` flag and thus is not using the dqlite backend.
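To avoid the trap described above, the cluster can be created with multiple masters from the start and only then grown; a hedged sketch, where `--masters 3` is an assumed flag spelling for the v3 beta CLI (check `k3d create cluster --help`), while the `k3d create node` command is taken from the docs above:

```shell
# Guard: these commands need the k3d binary and Docker available.
command -v k3d >/dev/null 2>&1 || { echo "k3d not installed; skipping"; exit 0; }

# Start with more than one master so the dqlite backend is used from the
# beginning (--masters is an assumed flag spelling, verify with --help)
k3d create cluster multimaster --masters 3

# Later, grow the cluster by one master node (command from the docs above)
k3d create node newmaster --cluster multimaster --role master
```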