Compare commits

143 Commits

a969f2fa50
def200ac0e
e00db082a4
28c13446f2
7340380ec7
c9c002207a
c7c4aea4cf
12b821e31c
b7416d77aa
5d1c83d47a
8c776dc471
436fd2df8b
3e12d67da3
a5489e08f7
8f35992cff
8582cd387d
676efcbf4a
6337c298e6
bb7b6e3605
f08bb6dad8
b595638644
9048c7526b
fb47728ddc
92aba64161
e184a90f2c
ac1dbb9f0a
5b7ec7e29d
da2d541a5c
34bf23ef85
0ec51e1318
fdefd62687
0b7e118046
ba8f931e65
2ec3efd877
c2aeeca226
80853aa848
840a61a47d
3c1422b9d3
bbdc073467
f865919c6b
2db715c064
5111fab6f9
ecd86ed993
49ff9cb0fd
4d8876f649
c2289c71b6
f80d7e8ba0
56f80d766f
c1190c669a
c972374434
fcaf3d79f4
a3696aa3d4
d8951c0eb3
4aff326f89
3896f5a019
e54f762832
284db4d74b
7ed52c687d
7275929907
e9007ba2ef
92e19d7e17
739486e382
5c5c4c54c6
59224236b9
01a0526301
2c747cf6d7
dc4c29361f
8a65268312
97bebbad66
6e05342812
122ea4637a
0eaaef4c45
f9fe8ef0c1
99d6a18be7
b1b7e957bc
c3b799c6e2
469b56c253
576ac040cc
fbe93eb039
10179f8555
8079d7d2be
3fd1061e8c
2205264a11
3b0c095765
85b19f7ef7
dc19eb7139
e79c3a98d8
9b8fcd17be
18a3db3d9b
02b80fecfc
e22299f485
6ad5a7b9dd
b9ea408d49
6d771a20fa
e7b372f0fd
c25618cbdc
1631a7b91a
fb74afceff
b485682448
645a5163d4
36bda94dbd
f3e2ef5508
9f56c006ee
6bdcb3f7f9
89cb12181a
d11b96ac7e
af0f30c8d4
5c00056118
b09a9680e6
836201611b
33c3f7cbda
9b888c8216
0d01441553
81748f9b3a
1e0aac19f6
418c1887fe
9aa5af8d5d
802038abb1
b5c4204303
44d6fef28c
61107ca5bf
95c62650fa
80f5aa7d58
1fabe118fa
3631e65570
164758d29a
eff97313da
552075709f
66e86e20ab
6581844d2e
bde5bc6ff3
0317cc395b
f33fa2d4cc
487418974e
9bb49fc860
ee1eb58466
e4e7a32e0d
40c9d4b1eb
3e80e37faf
c8f8fdb139
89ba0f67b7
53eba1828d
d6364af32c
`.github/ISSUE_TEMPLATE/bug_report.md` (vendored, 22 changed lines)

```diff
@@ -7,32 +7,34 @@ assignees: ''

 ---

-**What did you do?**
+### Describe what you did leading up to the issue

 - How was the cluster created?
   - `k3d create -x A -y B`

 - What did you do afterwards?
   - k3d commands?
   - docker commands?
   - OS operations (e.g. shutdown/reboot)?

-**What did you expect to happen?**
+### Describe what you expected to happen

 Concise description of what you expected to happen after doing what you described above.

-**Screenshots or terminal output**
+### Add screenshots or terminal output

 If applicable, add screenshots or terminal output (code block) to help explain your problem.

-**Which OS & Architecture?**
+### Describe your Setup

+#### OS / Architecture
+
 - Linux, Windows, MacOS / amd64, x86, ...?

-**Which version of `k3d`?**
+#### k3d version

 - output of `k3d --version`

-**Which version of docker?**
+### docker version

 - output of `docker version`
```
`.github/ISSUE_TEMPLATE/feature_request.md` (vendored, 12 changed lines)

```diff
@@ -7,24 +7,24 @@ assignees: ''

 ---

-**Is your feature request related to a problem or a Pull Request?**
+### Related Issues and/or Pull-Requests

 Please link to the issue/PR here and explain how your request is related to it.

-**Scope of your request**
+### Scope of your request

 Do you need...

 - a new command (next to e.g. `create`, `delete`, etc. used via `k3d <your-command>`)?
 - a new flag for a command (e.g. `k3d create --<your-flag>`)?
   - which command?
 - different functionality for an existing command/flag
   - which command or flag?

-**Describe the solution you'd like**
+### Describe the solution you'd like

 A clear and concise description of what you want to happen.

-**Describe alternatives you've considered**
+### Describe alternatives you've considered

 A clear and concise description of any alternative solutions or features you've considered.
```
`.github/ISSUE_TEMPLATE/help_question.md` (vendored, new file, 37 lines)

```diff
@@ -0,0 +1,37 @@
+---
+name: Help/Question
+about: Ask a question or request help for any challenge/issue
+title: "[HELP/QUESTION] "
+labels: help, question
+assignees: ''
+
+---
+
+### Your Question/Help Request
+
+What's up?
+
+### Information for Helpers
+
+**What did you do?**
+
+- How was the cluster created?
+  - `k3d create -x A -y B`
+
+- What did you do afterwards?
+  - k3d commands?
+  - docker commands?
+  - OS operations (e.g. shutdown/reboot)?
+  - kubectl commands?
+
+**Which OS & Architecture?**
+
+- Linux, Windows, MacOS / amd64, x86, ...?
+
+**Which version of `k3d`?**
+
+- output of `k3d --version`
+
+**Which version of docker?**
+
+- output of `docker version`
```
`.gitignore` (vendored, 3 changed lines)

```diff
@@ -16,4 +16,5 @@ _dist/
 *.out

 # Editors
 .vscode/
+.local/
```
```diff
@@ -3,16 +3,14 @@ language: go
 env:
 - GO111MODULE=on
 go:
-- 1.12.x
 - 1.13.x
 git:
   depth: 1
 install: true
 before_script:
-- curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh| sh -s -- -b ${GOPATH}/bin v1.17.1
-- go get github.com/mitchellh/gox@v1.0.1
+- make ci-setup
 script:
-- make fmt check build-cross
+- make ci-tests-dind ci-dist
 deploy:
   provider: releases
   skip_cleanup: true
```
`Dockerfile` (new file, 13 lines)

```diff
@@ -0,0 +1,13 @@
+FROM golang:1.13 as builder
+WORKDIR /app
+COPY . .
+RUN make build && bin/k3d --version
+
+FROM docker:19.03-dind
+
+# TODO: we could create a different stage for e2e tests
+RUN apk add bash curl sudo
+RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl && \
+    chmod +x ./kubectl && \
+    mv ./kubectl /usr/local/bin/kubectl
+COPY --from=builder /app/bin/k3d /bin/k3d
```
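The `RUN curl` line in the Dockerfile resolves the latest stable kubectl release with a nested command substitution: the inner `curl` fetches the version string from `stable.txt`, and the backticks splice it into the download URL. A minimal sketch of just the URL construction, with a hard-coded version standing in for the network call (the version value below is illustrative, not necessarily current):

```shell
# Stand-in for: curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt
stable="v1.18.0"

# The nested command substitution in the Dockerfile expands to a URL like this:
url="https://storage.googleapis.com/kubernetes-release/release/${stable}/bin/linux/amd64/kubectl"
echo "$url"
```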
`Makefile` (64 changed lines)

```diff
@@ -10,14 +10,17 @@ ifeq ($(GIT_TAG),)
 GIT_TAG := $(shell git describe --always)
 endif

-# get latest k3s version
-K3S_TAG := $(shell curl --silent "https://api.github.com/repos/rancher/k3s/releases/latest" | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/')
+# get latest k3s version: grep the tag JSON field, extract the tag and replace + with - (difference between git and dockerhub tags)
+K3S_TAG := $(shell curl --silent "https://api.github.com/repos/rancher/k3s/releases/latest" | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/' | sed -E 's/\+/\-/')
 ifeq ($(K3S_TAG),)
 $(warning K3S_TAG undefined: couldn't get latest k3s image tag!)
 $(warning Output of curl: $(shell curl --silent "https://api.github.com/repos/rancher/k3s/releases/latest"))
 $(error exiting)
 endif

+# determine if make is being executed from interactive terminal
+INTERACTIVE:=$(shell [ -t 0 ] && echo 1)
+
 # Go options
 GO ?= go
 PKG := $(shell go mod vendor)
@@ -29,13 +32,16 @@ GOFLAGS :=
 BINDIR := $(CURDIR)/bin
 BINARIES := k3d

+K3D_IMAGE_TAG := $(GIT_TAG)
+
 # Go Package required
 PKG_GOX := github.com/mitchellh/gox@v1.0.1
-PKG_GOLANGCI_LINT := github.com/golangci/golangci-lint/cmd/golangci-lint@v1.17.1
+PKG_GOLANGCI_LINT_VERSION := 1.23.8
+PKG_GOLANGCI_LINT_SCRIPT := https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh
+PKG_GOLANGCI_LINT := github.com/golangci/golangci-lint/cmd/golangci-lint@v${PKG_GOLANGCI_LINT_VERSION}

 # configuration adjustments for golangci-lint
-GOLANGCI_LINT_DISABLED_LINTERS := typecheck # disabling typecheck, because it currently (06.09.2019) fails with Go 1.13
+GOLANGCI_LINT_DISABLED_LINTERS := ""

 # Use Go Modules for everything
 export GO111MODULE=on
@@ -58,23 +64,34 @@ LINT_DIRS := $(DIRS) $(foreach dir,$(REC_DIRS),$(dir)/...)
 all: clean fmt check build

 build:
-	$(GO) build -i $(GOFLAGS) -tags '$(TAGS)' -ldflags '$(LDFLAGS)' -o '$(BINDIR)/$(BINARIES)'
+	CGO_ENABLED=0 $(GO) build $(GOFLAGS) -tags '$(TAGS)' -ldflags '$(LDFLAGS)' -o '$(BINDIR)/$(BINARIES)'

 build-cross: LDFLAGS += -extldflags "-static"
 build-cross:
 	CGO_ENABLED=0 gox -parallel=3 -output="_dist/$(BINARIES)-{{.OS}}-{{.Arch}}" -osarch='$(TARGETS)' $(GOFLAGS) $(if $(TAGS),-tags '$(TAGS)',) -ldflags '$(LDFLAGS)'

+build-dockerfile: Dockerfile
+	@echo "Building Docker image k3d:$(K3D_IMAGE_TAG)"
+	docker build -t k3d:$(K3D_IMAGE_TAG) .
+
 clean:
 	@rm -rf $(BINDIR) _dist/

 extra-clean: clean
-	go clean -i $(PKG_GOX)
-	go clean -i $(PKG_GOLANGCI_LINT)
+	$(GO) clean -i $(PKG_GOX)
+	$(GO) clean -i $(PKG_GOLANGCI_LINT)

 # fmt will fix the golang source style in place.
 fmt:
 	@gofmt -s -l -w $(GO_SRC)

+e2e: build
+	EXE='$(BINDIR)/$(BINARIES)' ./tests/runner.sh
+
+e2e-dind: build-dockerfile
+	@echo "Running e2e tests in k3d:$(K3D_IMAGE_TAG)"
+	tests/dind.sh "${K3D_IMAGE_TAG}"
+
 # check-fmt returns an error code if any source code contains format error.
 check-fmt:
 	@test -z $(shell gofmt -s -l $(GO_SRC) | tee /dev/stderr) || echo "[WARN] Fix formatting issues with 'make fmt'"
@@ -86,12 +103,39 @@ check: check-fmt lint

 # Check for required executables
 HAS_GOX := $(shell command -v gox 2> /dev/null)
-HAS_GOLANGCI := $(shell command -v golangci-lint 2> /dev/null)
+HAS_GOLANGCI := $(shell command -v golangci-lint)
+HAS_GOLANGCI_VERSION := $(shell golangci-lint --version | grep "version $(PKG_GOLANGCI_LINT_VERSION) " 2>&1)

 install-tools:
 ifndef HAS_GOX
-	(go get $(PKG_GOX))
+	($(GO) get $(PKG_GOX))
 endif
 ifndef HAS_GOLANGCI
-	(curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh| sh -s -- -b ${GOPATH}/bin v1.17.1)
+	(curl -sfL $(PKG_GOLANGCI_LINT_SCRIPT) | sh -s -- -b ${GOPATH}/bin v${PKG_GOLANGCI_LINT_VERSION})
 endif
+ifdef HAS_GOLANGCI
+ifeq ($(HAS_GOLANGCI_VERSION),)
+ifdef INTERACTIVE
+	@echo "Warning: Your installed version of golangci-lint (interactive: ${INTERACTIVE}) differs from what we'd like to use. Switch to v${PKG_GOLANGCI_LINT_VERSION}? [Y/n]"
+	@read line; if [ $$line == "y" ]; then (curl -sfL $(PKG_GOLANGCI_LINT_SCRIPT) | sh -s -- -b ${GOPATH}/bin v${PKG_GOLANGCI_LINT_VERSION}); fi
+else
+	@echo "Warning: you're not using the same version of golangci-lint as us (v${PKG_GOLANGCI_LINT_VERSION})"
+endif
+endif
+endif
+
+ci-setup:
+	@echo "Installing Go tools..."
+	curl -sfL $(PKG_GOLANGCI_LINT_SCRIPT) | sh -s -- -b ${GOPATH}/bin v$(PKG_GOLANGCI_LINT_VERSION)
+	go get $(PKG_GOX)
+
+	@echo "Installing kubectl..."
+	curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
+	chmod +x ./kubectl
+	sudo mv ./kubectl /usr/local/bin/kubectl
+
+ci-tests: fmt check e2e
+
+ci-dist: build-cross
+
+ci-tests-dind: fmt check e2e-dind
```
README.md
13
README.md
@ -2,6 +2,10 @@
|
|||||||
|
|
||||||
[](https://travis-ci.com/rancher/k3d)
|
[](https://travis-ci.com/rancher/k3d)
|
||||||
[](https://goreportcard.com/report/github.com/rancher/k3d)
|
[](https://goreportcard.com/report/github.com/rancher/k3d)
|
||||||
|
[](./LICENSE.md)
|
||||||
|

|
||||||
|
[](https://github.com/rancher/k3d/releases/latest)
|
||||||
|
[](https://formulae.brew.sh/formula/k3d)
|
||||||
|
|
||||||
## k3s in docker
|
## k3s in docker
|
||||||
|
|
||||||
@ -20,6 +24,13 @@ You have several options there:
|
|||||||
- use the install script to grab the latest release:
|
- use the install script to grab the latest release:
|
||||||
- wget: `wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash`
|
- wget: `wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash`
|
||||||
- curl: `curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash`
|
- curl: `curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash`
|
||||||
|
- use the install script to grab a specific release (via `TAG` environment variable):
|
||||||
|
- wget: `wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | TAG=v1.3.4 bash`
|
||||||
|
- curl: `curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | TAG=v1.3.4 bash`
|
||||||
|
|
||||||
|
- Use [Homebrew](https://brew.sh): `brew install k3d` (Homebrew is avaiable for MacOS and Linux)
|
||||||
|
- Formula can be found in [homebrew/homebrew-core](https://github.com/Homebrew/homebrew-core/blob/master/Formula/k3d.rb) and is mirrored to [homebrew/linuxbrew-core](https://github.com/Homebrew/linuxbrew-core/blob/master/Formula/k3d.rb)
|
||||||
|
- Install via [AUR](https://aur.archlinux.org/) package [rancher-k3d-bin](https://aur.archlinux.org/packages/rancher-k3d-bin/): `yay -S rancher-k3d-bin`
|
||||||
- Grab a release from the [release tab](https://github.com/rancher/k3d/releases) and install it yourself.
|
- Grab a release from the [release tab](https://github.com/rancher/k3d/releases) and install it yourself.
|
||||||
- Via go: `go install github.com/rancher/k3d` (**Note**: this will give you unreleased/bleeding-edge changes)
|
- Via go: `go install github.com/rancher/k3d` (**Note**: this will give you unreleased/bleeding-edge changes)
|
||||||
|
|
||||||
@ -40,6 +51,7 @@ or...
|
|||||||
Check out what you can do via `k3d help`
|
Check out what you can do via `k3d help`
|
||||||
|
|
||||||
Example Workflow: Create a new cluster and use it with `kubectl`
|
Example Workflow: Create a new cluster and use it with `kubectl`
|
||||||
|
(*Note:* `kubectl` is not part of `k3d`, so you have to [install it first if needed](https://kubernetes.io/docs/tasks/tools/install-kubectl/))
|
||||||
|
|
||||||
1. `k3d create` to create a new single-node cluster (docker container)
|
1. `k3d create` to create a new single-node cluster (docker container)
|
||||||
2. `export KUBECONFIG=$(k3d get-kubeconfig)` to make `kubectl` to use the kubeconfig for that cluster
|
2. `export KUBECONFIG=$(k3d get-kubeconfig)` to make `kubectl` to use the kubeconfig for that cluster
|
||||||
@ -56,6 +68,7 @@ Check out the [examples here](docs/examples.md).
|
|||||||
Find more details under the following Links:
|
Find more details under the following Links:
|
||||||
|
|
||||||
- [Further documentation](docs/documentation.md)
|
- [Further documentation](docs/documentation.md)
|
||||||
|
- [Using registries](docs/registries.md)
|
||||||
- [Usage examples](docs/examples.md)
|
- [Usage examples](docs/examples.md)
|
||||||
- [Frequently asked questions and nice-to-know facts](docs/faq.md)
|
- [Frequently asked questions and nice-to-know facts](docs/faq.md)
|
||||||
|
|
||||||
|
@ -5,7 +5,6 @@ import (
|
|||||||
"context"
|
"context"
|
||||||
"fmt"
|
"fmt"
|
||||||
"io/ioutil"
|
"io/ioutil"
|
||||||
"log"
|
|
||||||
"os"
|
"os"
|
||||||
"path"
|
"path"
|
||||||
"strconv"
|
"strconv"
|
||||||
@ -16,21 +15,13 @@ import (
|
|||||||
"github.com/docker/docker/client"
|
"github.com/docker/docker/client"
|
||||||
homedir "github.com/mitchellh/go-homedir"
|
homedir "github.com/mitchellh/go-homedir"
|
||||||
"github.com/olekukonko/tablewriter"
|
"github.com/olekukonko/tablewriter"
|
||||||
|
log "github.com/sirupsen/logrus"
|
||||||
)
|
)
|
||||||
|
|
||||||
const (
|
const (
|
||||||
defaultContainerNamePrefix = "k3d"
|
defaultContainerNamePrefix = "k3d"
|
||||||
)
|
)
|
||||||
|
|
||||||
type cluster struct {
|
|
||||||
name string
|
|
||||||
image string
|
|
||||||
status string
|
|
||||||
serverPorts []string
|
|
||||||
server types.Container
|
|
||||||
workers []types.Container
|
|
||||||
}
|
|
||||||
|
|
||||||
// GetContainerName generates the container names
|
// GetContainerName generates the container names
|
||||||
func GetContainerName(role, clusterName string, postfix int) string {
|
func GetContainerName(role, clusterName string, postfix int) string {
|
||||||
if postfix >= 0 {
|
if postfix >= 0 {
|
||||||
@ -65,11 +56,11 @@ func createDirIfNotExists(path string) error {
|
|||||||
func createClusterDir(name string) {
|
func createClusterDir(name string) {
|
||||||
clusterPath, _ := getClusterDir(name)
|
clusterPath, _ := getClusterDir(name)
|
||||||
if err := createDirIfNotExists(clusterPath); err != nil {
|
if err := createDirIfNotExists(clusterPath); err != nil {
|
||||||
log.Fatalf("ERROR: couldn't create cluster directory [%s] -> %+v", clusterPath, err)
|
log.Fatalf("Couldn't create cluster directory [%s] -> %+v", clusterPath, err)
|
||||||
}
|
}
|
||||||
// create subdir for sharing container images
|
// create subdir for sharing container images
|
||||||
if err := createDirIfNotExists(clusterPath + "/images"); err != nil {
|
if err := createDirIfNotExists(clusterPath + "/images"); err != nil {
|
||||||
log.Fatalf("ERROR: couldn't create cluster sub-directory [%s] -> %+v", clusterPath+"/images", err)
|
log.Fatalf("Couldn't create cluster sub-directory [%s] -> %+v", clusterPath+"/images", err)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -77,7 +68,7 @@ func createClusterDir(name string) {
|
|||||||
func deleteClusterDir(name string) {
|
func deleteClusterDir(name string) {
|
||||||
clusterPath, _ := getClusterDir(name)
|
clusterPath, _ := getClusterDir(name)
|
||||||
if err := os.RemoveAll(clusterPath); err != nil {
|
if err := os.RemoveAll(clusterPath); err != nil {
|
||||||
log.Printf("WARNING: couldn't delete cluster directory [%s]. You might want to delete it manually.", clusterPath)
|
log.Warningf("Couldn't delete cluster directory [%s]. You might want to delete it manually.", clusterPath)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -85,7 +76,7 @@ func deleteClusterDir(name string) {
|
|||||||
func getClusterDir(name string) (string, error) {
|
func getClusterDir(name string) (string, error) {
|
||||||
homeDir, err := homedir.Dir()
|
homeDir, err := homedir.Dir()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Printf("ERROR: Couldn't get user's home directory")
|
log.Error("Couldn't get user's home directory")
|
||||||
return "", err
|
return "", err
|
||||||
}
|
}
|
||||||
return path.Join(homeDir, ".config", "k3d", name), nil
|
return path.Join(homeDir, ".config", "k3d", name), nil
|
||||||
@ -120,15 +111,22 @@ func createKubeConfigFile(cluster string) error {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// get kubeconfig file from container and read contents
|
// get kubeconfig file from container and read contents
|
||||||
|
|
||||||
|
kubeconfigerror := func() {
|
||||||
|
log.Warnf("Couldn't get the kubeconfig from cluster '%s': Maybe it's not ready yet and you can try again later.", cluster)
|
||||||
|
}
|
||||||
|
|
||||||
reader, _, err := docker.CopyFromContainer(ctx, server[0].ID, "/output/kubeconfig.yaml")
|
reader, _, err := docker.CopyFromContainer(ctx, server[0].ID, "/output/kubeconfig.yaml")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("ERROR: couldn't copy kubeconfig.yaml from server container %s\n%+v", server[0].ID, err)
|
kubeconfigerror()
|
||||||
|
return fmt.Errorf(" Couldn't copy kubeconfig.yaml from server container %s\n%+v", server[0].ID, err)
|
||||||
}
|
}
|
||||||
defer reader.Close()
|
defer reader.Close()
|
||||||
|
|
||||||
readBytes, err := ioutil.ReadAll(reader)
|
readBytes, err := ioutil.ReadAll(reader)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("ERROR: couldn't read kubeconfig from container\n%+v", err)
|
kubeconfigerror()
|
||||||
|
return fmt.Errorf(" Couldn't read kubeconfig from container\n%+v", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
// create destination kubeconfig file
|
// create destination kubeconfig file
|
||||||
@ -139,7 +137,7 @@ func createKubeConfigFile(cluster string) error {
|
|||||||
|
|
||||||
kubeconfigfile, err := os.Create(destPath)
|
kubeconfigfile, err := os.Create(destPath)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("ERROR: couldn't create kubeconfig file %s\n%+v", destPath, err)
|
return fmt.Errorf(" Couldn't create kubeconfig file %s\n%+v", destPath, err)
|
||||||
}
|
}
|
||||||
defer kubeconfigfile.Close()
|
defer kubeconfigfile.Close()
|
||||||
|
|
||||||
@ -156,22 +154,27 @@ func createKubeConfigFile(cluster string) error {
|
|||||||
// set the host name to remote docker machine's IP address.
|
// set the host name to remote docker machine's IP address.
|
||||||
//
|
//
|
||||||
// Otherwise, the hostname remains as 'localhost'
|
// Otherwise, the hostname remains as 'localhost'
|
||||||
|
//
|
||||||
|
// Additionally, we replace every occurence of 'default' in the kubeconfig with the actual cluster name
|
||||||
apiHost := server[0].Labels["apihost"]
|
apiHost := server[0].Labels["apihost"]
|
||||||
|
|
||||||
|
s := string(trimBytes)
|
||||||
|
s = strings.ReplaceAll(s, "default", cluster)
|
||||||
if apiHost != "" {
|
if apiHost != "" {
|
||||||
s := string(trimBytes)
|
|
||||||
s = strings.Replace(s, "localhost", apiHost, 1)
|
s = strings.Replace(s, "localhost", apiHost, 1)
|
||||||
trimBytes = []byte(s)
|
s = strings.Replace(s, "127.0.0.1", apiHost, 1)
|
||||||
}
|
}
|
||||||
|
trimBytes = []byte(s)
|
||||||
|
|
||||||
_, err = kubeconfigfile.Write(trimBytes)
|
_, err = kubeconfigfile.Write(trimBytes)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("ERROR: couldn't write to kubeconfig.yaml\n%+v", err)
|
return fmt.Errorf("Couldn't write to kubeconfig.yaml\n%+v", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func getKubeConfig(cluster string) (string, error) {
|
func getKubeConfig(cluster string, overwrite bool) (string, error) {
|
||||||
kubeConfigPath, err := getClusterKubeConfigPath(cluster)
|
kubeConfigPath, err := getClusterKubeConfigPath(cluster)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return "", err
|
return "", err
|
||||||
@ -184,14 +187,25 @@ func getKubeConfig(cluster string) (string, error) {
|
|||||||
return "", fmt.Errorf("Cluster %s does not exist", cluster)
|
return "", fmt.Errorf("Cluster %s does not exist", cluster)
|
||||||
}
|
}
|
||||||
|
|
||||||
// If kubeconfi.yaml has not been created, generate it now
|
// Create or overwrite file no matter if it exists or not
|
||||||
if _, err := os.Stat(kubeConfigPath); err != nil {
|
if overwrite {
|
||||||
if os.IsNotExist(err) {
|
log.Debugf("Creating/Overwriting file %s...", kubeConfigPath)
|
||||||
if err = createKubeConfigFile(cluster); err != nil {
|
if err = createKubeConfigFile(cluster); err != nil {
|
||||||
|
return "", err
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
// If kubeconfi.yaml has not been created, generate it now
|
||||||
|
if _, err := os.Stat(kubeConfigPath); err != nil {
|
||||||
|
if os.IsNotExist(err) {
|
||||||
|
log.Debugf("File %s does not exist. Creating it now...", kubeConfigPath)
|
||||||
|
if err = createKubeConfigFile(cluster); err != nil {
|
||||||
|
return "", err
|
||||||
|
}
|
||||||
|
} else {
|
||||||
return "", err
|
return "", err
|
||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
return "", err
|
log.Debugf("File %s exists, leaving it as it is...", kubeConfigPath)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -199,14 +213,13 @@ func getKubeConfig(cluster string) (string, error) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// printClusters prints the names of existing clusters
|
// printClusters prints the names of existing clusters
|
||||||
func printClusters() {
|
func printClusters() error {
|
||||||
clusters, err := getClusters(true, "")
|
clusters, err := getClusters(true, "")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatalf("ERROR: Couldn't list clusters\n%+v", err)
|
log.Fatalf("Couldn't list clusters\n%+v", err)
|
||||||
}
|
}
|
||||||
if len(clusters) == 0 {
|
if len(clusters) == 0 {
|
||||||
log.Printf("No clusters found!")
|
return fmt.Errorf("No clusters found")
|
||||||
return
|
|
||||||
}
|
}
|
||||||
|
|
||||||
table := tablewriter.NewWriter(os.Stdout)
|
table := tablewriter.NewWriter(os.Stdout)
|
||||||
@ -226,6 +239,7 @@ func printClusters() {
|
|||||||
}
|
}
|
||||||
|
|
||||||
table.Render()
|
table.Render()
|
||||||
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// Classify cluster state: Running, Stopped or Abnormal
|
// Classify cluster state: Running, Stopped or Abnormal
|
||||||
@ -251,11 +265,11 @@ func getClusterStatus(server types.Container, workers []types.Container) string
|
|||||||
// When 'all' is true, 'cluster' contains all clusters found from the docker daemon
|
// When 'all' is true, 'cluster' contains all clusters found from the docker daemon
|
||||||
// When 'all' is false, 'cluster' contains up to one cluster whose name matches 'name'. 'cluster' can
|
// When 'all' is false, 'cluster' contains up to one cluster whose name matches 'name'. 'cluster' can
|
||||||
// be empty if no matching cluster is found.
|
// be empty if no matching cluster is found.
|
||||||
-func getClusters(all bool, name string) (map[string]cluster, error) {
+func getClusters(all bool, name string) (map[string]Cluster, error) {
 	ctx := context.Background()
 	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
 	if err != nil {
-		return nil, fmt.Errorf("ERROR: couldn't create docker client\n%+v", err)
+		return nil, fmt.Errorf(" Couldn't create docker client\n%+v", err)
 	}

 	// Prepare docker label filters
@@ -272,7 +286,7 @@ func getClusters(all bool, name string) (map[string]cluster, error) {
 		return nil, fmt.Errorf("WARNING: couldn't list server containers\n%+v", err)
 	}

-	clusters := make(map[string]cluster)
+	clusters := make(map[string]Cluster)

 	// don't filter for servers but for workers now
 	filters.Del("label", "component=server")
@@ -295,7 +309,7 @@ func getClusters(all bool, name string) (map[string]cluster, error) {
 			Filters: filters,
 		})
 		if err != nil {
-			log.Printf("WARNING: couldn't get worker containers for cluster %s\n%+v", clusterName, err)
+			log.Warningf("Couldn't get worker containers for cluster %s\n%+v", clusterName, err)
 		}

 		// save cluster information
@@ -303,7 +317,7 @@ func getClusters(all bool, name string) (map[string]cluster, error) {
 		for _, port := range server.Ports {
 			serverPorts = append(serverPorts, strconv.Itoa(int(port.PublicPort)))
 		}
-		clusters[clusterName] = cluster{
+		clusters[clusterName] = Cluster{
 			name:   clusterName,
 			image:  server.Image,
 			status: getClusterStatus(server, workers),
664  cli/commands.go
@@ -5,26 +5,20 @@ package run
 */

 import (
-	"bytes"
 	"context"
-	"errors"
 	"fmt"
-	"log"
 	"os"
 	"strconv"
 	"strings"
 	"time"

 	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
 	"github.com/docker/docker/client"
+	log "github.com/sirupsen/logrus"
 	"github.com/urfave/cli"
 )

-const (
-	defaultRegistry    = "docker.io"
-	defaultServerCount = 1
-)
-
 // CheckTools checks if the docker API server is responding
 func CheckTools(c *cli.Context) error {
 	log.Print("Checking docker...")
@@ -36,7 +30,7 @@ func CheckTools(c *cli.Context) error {
 	ping, err := docker.Ping(ctx)

 	if err != nil {
-		return fmt.Errorf("ERROR: checking docker failed\n%+v", err)
+		return fmt.Errorf(" Checking docker failed\n%+v", err)
 	}
 	log.Printf("SUCCESS: Checking docker succeeded (API: v%s)\n", ping.APIVersion)
 	return nil
@@ -45,44 +39,60 @@ func CheckTools(c *cli.Context) error {
 // CreateCluster creates a new single-node cluster container and initializes the cluster directory
 func CreateCluster(c *cli.Context) error {

-	if err := CheckClusterName(c.String("name")); err != nil {
-		return err
-	}
-
-	if cluster, err := getClusters(false, c.String("name")); err != nil {
-		return err
-	} else if len(cluster) != 0 {
-		// A cluster exists with the same name. Return with an error.
-		return fmt.Errorf("ERROR: Cluster %s already exists", c.String("name"))
-	}
-
 	// On Error delete the cluster. If there createCluster() encounter any error,
 	// call this function to remove all resources allocated for the cluster so far
 	// so that they don't linger around.
 	deleteCluster := func() {
+		log.Println("ERROR: Cluster creation failed, rolling back...")
 		if err := DeleteCluster(c); err != nil {
 			log.Printf("Error: Failed to delete cluster %s", c.String("name"))
 		}
 	}

-	// define image
-	image := c.String("image")
-	if c.IsSet("version") {
-		// TODO: --version to be deprecated
-		log.Println("[WARNING] The `--version` flag will be deprecated soon, please use `--image rancher/k3s:<version>` instead")
-		if c.IsSet("image") {
-			// version specified, custom image = error (to push deprecation of version flag)
-			log.Fatalln("[ERROR] Please use `--image <image>:<version>` instead of --image and --version")
-		} else {
-			// version specified, default image = ok (until deprecation of version flag)
-			image = fmt.Sprintf("%s:%s", strings.Split(image, ":")[0], c.String("version"))
-		}
-	}
-	if len(strings.Split(image, "/")) <= 2 {
-		// fallback to default registry
-		image = fmt.Sprintf("%s/%s", defaultRegistry, image)
+	// validate --wait flag
+	if c.IsSet("wait") && c.Int("wait") < 0 {
+		log.Fatalf("Negative value for '--wait' not allowed (set '%d')", c.Int("wait"))
 	}

+	/**********************
+	 *                    *
+	 *   CONFIGURATION    *
+	 * vvvvvvvvvvvvvvvvvv *
+	 **********************/
+
+	/*
+	 * --name, -n
+	 * Name of the cluster
+	 */
+
+	// ensure that it's a valid hostname, because it will be part of container names
+	if err := CheckClusterName(c.String("name")); err != nil {
+		return err
+	}
+
+	// check if the cluster name is already taken
+	if cluster, err := getClusters(false, c.String("name")); err != nil {
+		return err
+	} else if len(cluster) != 0 {
+		// A cluster exists with the same name. Return with an error.
+		return fmt.Errorf(" Cluster %s already exists", c.String("name"))
+	}
+
+	/*
+	 * --image, -i
+	 * The k3s image used for the k3d node containers
+	 */
+	// define image
+	image := c.String("image")
+	// if no registry was provided, use the default docker.io
+	if len(strings.Split(image, "/")) <= 2 {
+		image = fmt.Sprintf("%s/%s", DefaultRegistry, image)
+	}
+
+	/*
+	 * Cluster network
+	 * For proper communication, all k3d node containers have to be in the same docker network
+	 */
 	// create cluster network
 	networkID, err := createClusterNetwork(c.String("name"))
 	if err != nil {
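The `--image` handling above falls back to `docker.io` whenever the reference has at most two `/`-separated segments, i.e. no explicit registry host. A standalone sketch of that check (the `withDefaultRegistry` helper is ours, for illustration only; the diff does this inline):

```go
package main

import (
	"fmt"
	"strings"
)

const defaultRegistry = "docker.io"

// withDefaultRegistry prefixes the default registry when the image
// reference has at most two path segments (i.e. no registry host).
func withDefaultRegistry(image string) string {
	if len(strings.Split(image, "/")) <= 2 {
		return fmt.Sprintf("%s/%s", defaultRegistry, image)
	}
	return image
}

func main() {
	fmt.Println(withDefaultRegistry("rancher/k3s:v1.0.0"))     // docker.io/rancher/k3s:v1.0.0
	fmt.Println(withDefaultRegistry("quay.io/org/k3s:v1.0.0")) // unchanged: already has a registry host
}
```

Note this segment-count heuristic treats anything with three or more path segments as already carrying a registry host, which matches how the diff decides.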
@@ -90,23 +100,42 @@ func CreateCluster(c *cli.Context) error {
 	}
 	log.Printf("Created cluster network with ID %s", networkID)

+	/*
+	 * --env, -e
+	 * Environment variables that will be passed into the k3d node containers
+	 */
 	// environment variables
 	env := []string{"K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml"}
 	env = append(env, c.StringSlice("env")...)
 	env = append(env, fmt.Sprintf("K3S_CLUSTER_SECRET=%s", GenerateRandomString(20)))

-	// k3s server arguments
-	// TODO: --port will soon be --api-port since we want to re-use --port for arbitrary port mappings
-	if c.IsSet("port") {
-		log.Println("INFO: As of v2.0.0 --port will be used for arbitrary port mapping. Please use --api-port/-a instead for configuring the Api Port")
+	/*
+	 * --label, -l
+	 * Docker container labels that will be added to the k3d node containers
+	 */
+	// labels
+	labelmap, err := mapNodesToLabelSpecs(c.StringSlice("label"), GetAllContainerNames(c.String("name"), DefaultServerCount, c.Int("workers")))
+	if err != nil {
+		log.Fatal(err)
 	}

+	/*
+	 * Arguments passed on to the k3s server and agent, will be filled later
+	 */
+	k3AgentArgs := []string{}
+	k3sServerArgs := []string{}
+
+	/*
+	 * --api-port, -a
+	 * The port that will be used by the k3s API-Server
+	 * It will be mapped to localhost or to another hist interface, if specified
+	 * If another host is chosen, we also add a tls-san argument for the server to allow connections
+	 */
 	apiPort, err := parseAPIPort(c.String("api-port"))
 	if err != nil {
 		return err
 	}
+	k3sServerArgs = append(k3sServerArgs, "--https-listen-port", apiPort.Port)
-	k3AgentArgs := []string{}
-	k3sServerArgs := []string{"--https-listen-port", apiPort.Port}

 	// When the 'host' is not provided by --api-port, try to fill it using Docker Machine's IP address.
 	if apiPort.Host == "" {
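`parseAPIPort` (not shown in this diff) is what splits the `--api-port` value into the `apiPort.Host`/`apiPort.Port` pair consumed above. A rough, hypothetical sketch of the `HOST:PORT` splitting such a helper needs — the names, struct, and edge-case behavior here are assumptions, not the project's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// apiPortSpec mirrors the host/port pair the diff's parseAPIPort returns
// (field and type names here are illustrative).
type apiPortSpec struct {
	Host string
	Port string
}

// parsePortSpec accepts either "PORT" or "HOST:PORT".
func parsePortSpec(spec string) (apiPortSpec, error) {
	parts := strings.Split(spec, ":")
	switch len(parts) {
	case 1:
		// bare port, host left empty so the caller can fill in a default
		return apiPortSpec{Port: parts[0]}, nil
	case 2:
		return apiPortSpec{Host: parts[0], Port: parts[1]}, nil
	default:
		return apiPortSpec{}, fmt.Errorf("invalid api-port spec %q", spec)
	}
}

func main() {
	p, _ := parsePortSpec("0.0.0.0:6443")
	fmt.Println(p.Host, p.Port)
}
```

A naive colon split like this would mis-handle bracketed IPv6 hosts; it is only meant to show the shape of the parsing, not to be a drop-in replacement.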
@@ -116,101 +145,170 @@ func CreateCluster(c *cli.Context) error {
 		// In case of error, Log a warning message, and continue on. Since it more likely caused by a miss configured
 		// DOCKER_MACHINE_NAME environment variable.
 		if err != nil {
-			log.Printf("WARNING: Failed to get docker machine IP address, ignoring the DOCKER_MACHINE_NAME environment variable setting.\n")
+			log.Warning("Failed to get docker machine IP address, ignoring the DOCKER_MACHINE_NAME environment variable setting.")
 		}
 	}

+	// Add TLS SAN for non default host name
 	if apiPort.Host != "" {
-		// Add TLS SAN for non default host name
 		log.Printf("Add TLS SAN for %s", apiPort.Host)
 		k3sServerArgs = append(k3sServerArgs, "--tls-san", apiPort.Host)
 	}

+	/*
+	 * --server-arg, -x
+	 * Add user-supplied arguments for the k3s server
+	 */
 	if c.IsSet("server-arg") || c.IsSet("x") {
 		k3sServerArgs = append(k3sServerArgs, c.StringSlice("server-arg")...)
 	}

+	/*
+	 * --agent-arg
+	 * Add user-supplied arguments for the k3s agent
+	 */
 	if c.IsSet("agent-arg") {
+		if c.Int("workers") < 1 {
+			log.Warnln("--agent-arg supplied, but --workers is 0, so no agents will be created")
+		}
 		k3AgentArgs = append(k3AgentArgs, c.StringSlice("agent-arg")...)
 	}

+	/*
+	 * --port, -p, --publish, --add-port
+	 * List of ports, that should be mapped from some or all k3d node containers to the host system (or other interface)
+	 */
 	// new port map
-	portmap, err := mapNodesToPortSpecs(c.StringSlice("publish"), GetAllContainerNames(c.String("name"), defaultServerCount, c.Int("workers")))
+	portmap, err := mapNodesToPortSpecs(c.StringSlice("port"), GetAllContainerNames(c.String("name"), DefaultServerCount, c.Int("workers")))
 	if err != nil {
 		log.Fatal(err)
 	}

+	/*
+	 * Image Volume
+	 * A docker volume that will be shared by every k3d node container in the cluster.
+	 * This volume will be used for the `import-image` command.
+	 * On it, all node containers can access the image tarball.
+	 */
 	// create a docker volume for sharing image tarballs with the cluster
 	imageVolume, err := createImageVolume(c.String("name"))
 	log.Println("Created docker volume ", imageVolume.Name)
 	if err != nil {
 		return err
 	}
-	volumes := c.StringSlice("volume")
-	volumes = append(volumes, fmt.Sprintf("%s:/images", imageVolume.Name))

-	clusterSpec := &ClusterSpec{
-		AgentArgs:         k3AgentArgs,
-		APIPort:           *apiPort,
-		AutoRestart:       c.Bool("auto-restart"),
-		ClusterName:       c.String("name"),
-		Env:               env,
-		Image:             image,
-		NodeToPortSpecMap: portmap,
-		PortAutoOffset:    c.Int("port-auto-offset"),
-		ServerArgs:        k3sServerArgs,
-		Verbose:           c.GlobalBool("verbose"),
-		Volumes:           volumes,
+	/*
+	 * --volume, -v
+	 * List of volumes: host directory mounts for some or all k3d node containers in the cluster
+	 */
+	volumes := c.StringSlice("volume")
+
+	volumesSpec, err := NewVolumes(volumes)
+	if err != nil {
+		return err
 	}

-	// create the server
+	volumesSpec.DefaultVolumes = append(volumesSpec.DefaultVolumes, fmt.Sprintf("%s:/images", imageVolume.Name))

+	/*
+	 * --registry-file
+	 * check if there is a registries file
+	 */
+	registriesFile := ""
+	if c.IsSet("registries-file") {
+		registriesFile = c.String("registries-file")
+		if !fileExists(registriesFile) {
+			log.Fatalf("registries-file %q does not exists", registriesFile)
+		}
+	} else {
+		registriesFile, err = getGlobalRegistriesConfFilename()
+		if err != nil {
+			log.Fatal(err)
+		}
+		if !fileExists(registriesFile) {
+			// if the default registries file does not exists, go ahead but do not try to load it
+			registriesFile = ""
+		}
+	}
+
+	/*
+	 * clusterSpec
+	 * Defines, with which specifications, the cluster and the nodes inside should be created
+	 */
+	clusterSpec := &ClusterSpec{
+		AgentArgs:            k3AgentArgs,
+		APIPort:              *apiPort,
+		AutoRestart:          c.Bool("auto-restart"),
+		ClusterName:          c.String("name"),
+		Env:                  env,
+		NodeToLabelSpecMap:   labelmap,
+		Image:                image,
+		NodeToPortSpecMap:    portmap,
+		PortAutoOffset:       c.Int("port-auto-offset"),
+		RegistriesFile:       registriesFile,
+		RegistryEnabled:      c.Bool("enable-registry"),
+		RegistryCacheEnabled: c.Bool("enable-registry-cache"),
+		RegistryName:         c.String("registry-name"),
+		RegistryPort:         c.Int("registry-port"),
+		RegistryVolume:       c.String("registry-volume"),
+		ServerArgs:           k3sServerArgs,
+		Volumes:              volumesSpec,
+	}
+
+	/******************
+	 *                *
+	 *    CREATION    *
+	 * vvvvvvvvvvvvvv *
+	 ******************/

 	log.Printf("Creating cluster [%s]", c.String("name"))

+	/*
+	 * Cluster Directory
+	 */
 	// create the directory where we will put the kubeconfig file by default (when running `k3d get-config`)
 	createClusterDir(c.String("name"))

-	dockerID, err := createServer(clusterSpec)
+	/* (1)
+	 * Registry (optional)
+	 * Create the (optional) registry container
+	 */
+	var registryNameExists *dnsNameCheck
+	if clusterSpec.RegistryEnabled {
+		registryNameExists = newAsyncNameExists(clusterSpec.RegistryName, 1*time.Second)
+		if _, err = createRegistry(*clusterSpec); err != nil {
+			deleteCluster()
+			return err
+		}
+	}
+
+	/* (2)
+	 * Server
+	 * Create the server node container
+	 */
+	serverContainerID, err := createServer(clusterSpec)
 	if err != nil {
 		deleteCluster()
 		return err
 	}

-	ctx := context.Background()
-	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
-	if err != nil {
-		return err
-	}
-
-	// Wait for k3s to be up and running if wanted.
+	/* (2.1)
+	 * Wait
+	 * Wait for k3s server to be done initializing, if wanted
+	 */
 	// We're simply scanning the container logs for a line that tells us that everything's up and running
 	// TODO: also wait for worker nodes
-	start := time.Now()
-	timeout := time.Duration(c.Int("wait")) * time.Second
-	for c.IsSet("wait") {
-		// not running after timeout exceeded? Rollback and delete everything.
-		if timeout != 0 && time.Now().After(start.Add(timeout)) {
+	if c.IsSet("wait") {
+		if err := waitForContainerLogMessage(serverContainerID, "Wrote kubeconfig", c.Int("wait")); err != nil {
 			deleteCluster()
-			return errors.New("Cluster creation exceeded specified timeout")
+			return fmt.Errorf("ERROR: failed while waiting for server to come up\n%+v", err)
 		}
-
-		// scan container logs for a line that tells us that the required services are up and running
-		out, err := docker.ContainerLogs(ctx, dockerID, types.ContainerLogsOptions{ShowStdout: true, ShowStderr: true})
-		if err != nil {
-			out.Close()
-			return fmt.Errorf("ERROR: couldn't get docker logs for %s\n%+v", c.String("name"), err)
-		}
-		buf := new(bytes.Buffer)
-		nRead, _ := buf.ReadFrom(out)
-		out.Close()
-		output := buf.String()
-		if nRead > 0 && strings.Contains(string(output), "Running kubelet") {
-			break
-		}
-
-		time.Sleep(1 * time.Second)
 	}

-	// spin up the worker nodes
+	/* (3)
+	 * Workers
+	 * Create the worker node containers
+	 */
 	// TODO: do this concurrently in different goroutines
 	if c.Int("workers") > 0 {
 		log.Printf("Booting %s workers for cluster %s", strconv.Itoa(c.Int("workers")), c.String("name"))
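The hunk above replaces the inline log-polling loop with a `waitForContainerLogMessage` helper. The general shape of such a loop (poll the logs, succeed when a marker line appears, bail out after a timeout, with `0` meaning wait forever) can be sketched with a stand-in log source — `fetchLogs` below substitutes for `docker.ContainerLogs`, and the helper name is ours, not the project's:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

// waitForLogMessage polls fetchLogs until marker appears in the output or
// the timeout (in seconds, 0 = no timeout) elapses.
func waitForLogMessage(fetchLogs func() string, marker string, timeoutSecs int) error {
	start := time.Now()
	timeout := time.Duration(timeoutSecs) * time.Second
	for {
		// not running after timeout exceeded? give up so the caller can roll back
		if timeout != 0 && time.Now().After(start.Add(timeout)) {
			return errors.New("timeout exceeded while waiting for log message")
		}
		if strings.Contains(fetchLogs(), marker) {
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	// simulate a server whose logs only contain the marker on the third poll
	n := 0
	fetch := func() string {
		n++
		if n >= 3 {
			return "... Wrote kubeconfig ..."
		}
		return "starting..."
	}
	if err := waitForLogMessage(fetch, "Wrote kubeconfig", 5); err != nil {
		panic(err)
	}
	fmt.Println("server is up after", n, "polls")
}
```

The old code scanned for "Running kubelet" while the new helper is called with "Wrote kubeconfig"; the loop structure is the same either way.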
@@ -224,7 +322,21 @@ func CreateCluster(c *cli.Context) error {
 		}
 	}

+	/* (4)
+	 * Done
+	 * Finished creating resources.
+	 */
 	log.Printf("SUCCESS: created cluster [%s]", c.String("name"))

+	if clusterSpec.RegistryEnabled {
+		log.Printf("A local registry has been started as %s:%d", clusterSpec.RegistryName, clusterSpec.RegistryPort)
+
+		exists, err := registryNameExists.Exists()
+		if !exists || err != nil {
+			log.Printf("Make sure %s resolves to '127.0.0.1' (using /etc/hosts f.e)", clusterSpec.RegistryName)
+		}
+	}
+
 	log.Printf(`You can now use the cluster with:

export KUBECONFIG="$(%s get-kubeconfig --name='%s')"
@@ -235,12 +347,20 @@ kubectl cluster-info`, os.Args[0], c.String("name"))

 // DeleteCluster removes the containers belonging to a cluster and its local directory
 func DeleteCluster(c *cli.Context) error {

 	clusters, err := getClusters(c.Bool("all"), c.String("name"))

 	if err != nil {
 		return err
 	}

+	if len(clusters) == 0 {
+		if !c.IsSet("all") && c.IsSet("name") {
+			return fmt.Errorf("No cluster with name '%s' found (You can add `--all` and `--name <CLUSTER-NAME>` to delete other clusters)", c.String("name"))
+		}
+		return fmt.Errorf("No cluster(s) found")
+	}
+
 	// remove clusters one by one instead of appending all names to the docker command
 	// this allows for more granular error handling and logging
 	for _, cluster := range clusters {
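The `len(clusters) == 0` guard added here reappears almost verbatim in `StopCluster`, `StartCluster`, and `GetKubeConfig` further down, differing only in the verb of the hint. A hypothetical helper (ours, not part of this PR) that would factor out the repetition:

```go
package main

import "fmt"

// noClustersError reproduces the repeated guard: nil when clusters were
// found, otherwise the same error messages the diff builds inline. verb is
// the per-command hint ("delete", "stop", "start", "check").
func noClustersError(clusterCount int, allSet, nameSet bool, name, verb string) error {
	if clusterCount != 0 {
		return nil
	}
	if !allSet && nameSet {
		return fmt.Errorf("No cluster with name '%s' found (You can add `--all` and `--name <CLUSTER-NAME>` to %s other clusters)", name, verb)
	}
	return fmt.Errorf("No cluster(s) found")
}

func main() {
	fmt.Println(noClustersError(0, false, true, "dev", "delete"))
}
```

Each command would then reduce its guard to a single `if err := noClustersError(len(clusters), c.IsSet("all"), c.IsSet("name"), c.String("name"), "delete"); err != nil { return err }`.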
@@ -258,19 +378,43 @@ func DeleteCluster(c *cli.Context) error {
 		deleteClusterDir(cluster.name)
 		log.Println("...Removing server")
 		if err := removeContainer(cluster.server.ID); err != nil {
-			return fmt.Errorf("ERROR: Couldn't remove server for cluster %s\n%+v", cluster.name, err)
+			return fmt.Errorf(" Couldn't remove server for cluster %s\n%+v", cluster.name, err)
+		}
+
+		if err := disconnectRegistryFromNetwork(cluster.name, c.IsSet("keep-registry-volume")); err != nil {
+			log.Warningf("Couldn't disconnect Registry from network %s\n%+v", cluster.name, err)
+		}
+
+		if c.IsSet("prune") {
+			// disconnect any other container that is connected to the k3d network
+			nid, err := getClusterNetwork(cluster.name)
+			if err != nil {
+				log.Warningf("Couldn't get the network for cluster %q\n%+v", cluster.name, err)
+			}
+			cids, err := getContainersInNetwork(nid)
+			if err != nil {
+				log.Warningf("Couldn't get the list of containers connected to network %q\n%+v", nid, err)
+			}
+			for _, cid := range cids {
+				err := disconnectContainerFromNetwork(cid, nid)
+				if err != nil {
+					log.Warningf("Couldn't disconnect container %q from network %q", cid, nid)
+					continue
+				}
+				log.Printf("...%q has been forced to disconnect from %q's network", cid, cluster.name)
+			}
 		}

 		if err := deleteClusterNetwork(cluster.name); err != nil {
-			log.Printf("WARNING: couldn't delete cluster network for cluster %s\n%+v", cluster.name, err)
+			log.Warningf("Couldn't delete cluster network for cluster %s\n%+v", cluster.name, err)
 		}

 		log.Println("...Removing docker image volume")
 		if err := deleteImageVolume(cluster.name); err != nil {
-			log.Printf("WARNING: couldn't delete image docker volume for cluster %s\n%+v", cluster.name, err)
+			log.Warningf("Couldn't delete image docker volume for cluster %s\n%+v", cluster.name, err)
 		}

-		log.Printf("SUCCESS: removed cluster [%s]", cluster.name)
+		log.Infof("Removed cluster [%s]", cluster.name)
 	}

 	return nil
@@ -284,10 +428,17 @@ func StopCluster(c *cli.Context) error {
 		return err
 	}

+	if len(clusters) == 0 {
+		if !c.IsSet("all") && c.IsSet("name") {
+			return fmt.Errorf("No cluster with name '%s' found (You can add `--all` and `--name <CLUSTER-NAME>` to stop other clusters)", c.String("name"))
+		}
+		return fmt.Errorf("No cluster(s) found")
+	}
+
 	ctx := context.Background()
 	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
 	if err != nil {
-		return fmt.Errorf("ERROR: couldn't create docker client\n%+v", err)
+		return fmt.Errorf(" Couldn't create docker client\n%+v", err)
 	}

 	// remove clusters one by one instead of appending all names to the docker command
@@ -305,10 +456,10 @@ func StopCluster(c *cli.Context) error {
 		}
 		log.Println("...Stopping server")
 		if err := docker.ContainerStop(ctx, cluster.server.ID, nil); err != nil {
-			return fmt.Errorf("ERROR: Couldn't stop server for cluster %s\n%+v", cluster.name, err)
+			return fmt.Errorf(" Couldn't stop server for cluster %s\n%+v", cluster.name, err)
 		}

-		log.Printf("SUCCESS: Stopped cluster [%s]", cluster.name)
+		log.Infof("Stopped cluster [%s]", cluster.name)
 	}

 	return nil
@@ -322,10 +473,17 @@ func StartCluster(c *cli.Context) error {
 		return err
 	}

+	if len(clusters) == 0 {
+		if !c.IsSet("all") && c.IsSet("name") {
+			return fmt.Errorf("No cluster with name '%s' found (You can add `--all` and `--name <CLUSTER-NAME>` to start other clusters)", c.String("name"))
+		}
+		return fmt.Errorf("No cluster(s) found")
+	}
+
 	ctx := context.Background()
 	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
 	if err != nil {
-		return fmt.Errorf("ERROR: couldn't create docker client\n%+v", err)
+		return fmt.Errorf(" Couldn't create docker client\n%+v", err)
 	}

 	// remove clusters one by one instead of appending all names to the docker command
@@ -333,9 +491,23 @@ func StartCluster(c *cli.Context) error {
 	for _, cluster := range clusters {
 		log.Printf("Starting cluster [%s]", cluster.name)

+		// TODO: consider only touching the registry if it's really in use by a cluster
+		registryContainer, err := getRegistryContainer()
+		if err != nil {
+			log.Warn("Couldn't get registry container, if you know you have one, try starting it manually via `docker start`")
+		}
+		if registryContainer != "" {
+			log.Infof("...Starting registry container '%s'", registryContainer)
+			if err := docker.ContainerStart(ctx, registryContainer, types.ContainerStartOptions{}); err != nil {
+				log.Warnf("Failed to start the registry container '%s', try starting it manually via `docker start %s`", registryContainer, registryContainer)
+			}
+		} else {
+			log.Debugln("No registry container found. Proceeding.")
+		}
+
 		log.Println("...Starting server")
 		if err := docker.ContainerStart(ctx, cluster.server.ID, types.ContainerStartOptions{}); err != nil {
-			return fmt.Errorf("ERROR: Couldn't start server for cluster %s\n%+v", cluster.name, err)
+			return fmt.Errorf(" Couldn't start server for cluster %s\n%+v", cluster.name, err)
 		}

 		if len(cluster.workers) > 0 {
@@ -356,24 +528,40 @@ func StartCluster(c *cli.Context) error {

 // ListClusters prints a list of created clusters
 func ListClusters(c *cli.Context) error {
-	if c.IsSet("all") {
-		log.Println("INFO: --all is on by default, thus no longer required. This option will be removed in v2.0.0")
+	if err := printClusters(); err != nil {
+		return err
 	}
-	printClusters()
 	return nil
 }

 // GetKubeConfig grabs the kubeconfig from the running cluster and prints the path to stdout
 func GetKubeConfig(c *cli.Context) error {
-	cluster := c.String("name")
-	kubeConfigPath, err := getKubeConfig(cluster)
+	clusters, err := getClusters(c.Bool("all"), c.String("name"))
 	if err != nil {
 		return err
 	}

-	// output kubeconfig file path to stdout
-	fmt.Println(kubeConfigPath)
+	if len(clusters) == 0 {
+		if !c.IsSet("all") && c.IsSet("name") {
+			return fmt.Errorf("No cluster with name '%s' found (You can add `--all` and `--name <CLUSTER-NAME>` to check other clusters)", c.String("name"))
+		}
+		return fmt.Errorf("No cluster(s) found")
+	}
+
+	for _, cluster := range clusters {
+		kubeConfigPath, err := getKubeConfig(cluster.name, c.Bool("overwrite"))
+		if err != nil {
+			if !c.Bool("all") {
+				return err
+			}
+			log.Println(err)
+			continue
+		}
+
+		// output kubeconfig file path to stdout
+		fmt.Println(kubeConfigPath)
+	}

 	return nil
 }
@ -390,5 +578,263 @@ func ImportImage(c *cli.Context) error {
|
|||||||
} else {
|
} else {
|
||||||
images = append(images, c.Args()...)
|
images = append(images, c.Args()...)
|
||||||
}
|
}
|
||||||
|
if len(images) == 0 {
|
||||||
|
return fmt.Errorf("No images specified for import")
|
||||||
|
}
|
||||||
return importImage(c.String("name"), images, c.Bool("no-remove"))
|
return importImage(c.String("name"), images, c.Bool("no-remove"))
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// AddNode adds a node to an existing cluster
func AddNode(c *cli.Context) error {

	/*
	 * (0) Check flags
	 */

	clusterName := c.String("name")
	nodeCount := c.Int("count")

	clusterSpec := &ClusterSpec{
		AgentArgs:          nil,
		APIPort:            apiPort{},
		AutoRestart:        false,
		ClusterName:        clusterName,
		Env:                nil,
		NodeToLabelSpecMap: nil,
		Image:              "",
		NodeToPortSpecMap:  nil,
		PortAutoOffset:     0,
		ServerArgs:         nil,
		Volumes:            &Volumes{},
	}

	/* (0.1)
	 * --role
	 * Role of the node that has to be created.
	 * One of (server|master), (agent|worker)
	 */
	nodeRole := c.String("role")
	if nodeRole == "worker" {
		nodeRole = "agent"
	}
	if nodeRole == "master" {
		nodeRole = "server"
	}

	// TODO: support adding server nodes
	if nodeRole != "worker" && nodeRole != "agent" {
		return fmt.Errorf("Adding nodes of type '%s' is not supported", nodeRole)
	}

	/* (0.2)
	 * --image, -i
	 * The k3s image used for the k3d node containers
	 */
	// TODO: use the currently running image by default
	image := c.String("image")
	// if no registry was provided, use the default docker.io
	if len(strings.Split(image, "/")) <= 2 {
		image = fmt.Sprintf("%s/%s", DefaultRegistry, image)
	}
	clusterSpec.Image = image

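The registry-defaulting rule above can be exercised on its own: an image reference with at most two slash-separated parts is assumed to carry no registry host. A minimal runnable sketch, with `docker.io` standing in for the `DefaultRegistry` constant:

```go
package main

import (
	"fmt"
	"strings"
)

// withDefaultRegistry mirrors the check above: a reference with at most two
// slash-separated parts has no registry host, so the default is prepended.
// "docker.io" is an assumption standing in for DefaultRegistry.
func withDefaultRegistry(image string) string {
	if len(strings.Split(image, "/")) <= 2 {
		return fmt.Sprintf("%s/%s", "docker.io", image)
	}
	return image
}

func main() {
	fmt.Println(withDefaultRegistry("rancher/k3s:v1.17.2-k3s1"))
	fmt.Println(withDefaultRegistry("registry.example.com/rancher/k3s:v1.17.2-k3s1"))
}
```

Note the heuristic is intentionally simple: a two-part reference like `localhost:5000/k3s` would also get the default registry prepended.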
	/* (0.3)
	 * --env, -e <key1=val1>[,<keyX=valX>]
	 * Environment variables that will be passed to the node containers
	 */
	clusterSpec.Env = []string{}
	clusterSpec.Env = append(clusterSpec.Env, c.StringSlice("env")...)

	/* (0.4)
	 * --arg, -x <argument>
	 * Arguments passed to the k3s server/agent command
	 */
	clusterSpec.ServerArgs = append(clusterSpec.ServerArgs, c.StringSlice("arg")...)
	clusterSpec.AgentArgs = append(clusterSpec.AgentArgs, c.StringSlice("arg")...)

	/* (0.5)
	 * --volume, -v
	 * Add volume mounts
	 */
	volumeSpec, err := NewVolumes(c.StringSlice("volume"))
	if err != nil {
		return err
	}
	// TODO: volumeSpec.DefaultVolumes = append(volumeSpec.DefaultVolumes, "%s:/images", imageVolume.Name)
	clusterSpec.Volumes = volumeSpec

	/* (0.5) BREAKOUT
	 * --k3s <url>
	 * Connect to a non-dockerized k3s server
	 */

	if c.IsSet("k3s") {
		log.Infof("Adding %d %s-nodes to k3s cluster %s...\n", nodeCount, nodeRole, c.String("k3s"))
		if _, err := createClusterNetwork(clusterName); err != nil {
			return err
		}
		if err := addNodeToK3s(c, clusterSpec, nodeRole); err != nil {
			return err
		}
		return nil
	}

	/*
	 * (1) Check cluster
	 */

	ctx := context.Background()
	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Errorln("Failed to create docker client")
		return err
	}

	filters := filters.NewArgs()
	filters.Add("label", fmt.Sprintf("cluster=%s", clusterName))
	filters.Add("label", "app=k3d")

	/*
	 * (1.1) Verify that the cluster (i.e. the server) we want to connect to is running
	 */
	filters.Add("label", "component=server")

	serverList, err := docker.ContainerList(ctx, types.ContainerListOptions{
		Filters: filters,
	})
	if err != nil || len(serverList) == 0 {
		log.Errorf("Failed to get server container for cluster '%s'", clusterName)
		return err
	}

	/*
	 * (1.2) Extract cluster information from the server container
	 */
	serverContainer, err := docker.ContainerInspect(ctx, serverList[0].ID)
	if err != nil {
		log.Errorf("Failed to inspect server container '%s' to get cluster secret", serverList[0].ID)
		return err
	}

	/*
	 * (1.2.1) Extract the cluster secret from the server container's env
	 */
	clusterSecretEnvVar := ""
	for _, envVar := range serverContainer.Config.Env {
		if envVarSplit := strings.SplitN(envVar, "=", 2); envVarSplit[0] == "K3S_CLUSTER_SECRET" {
			clusterSecretEnvVar = envVar
		}
	}
	if clusterSecretEnvVar == "" {
		return fmt.Errorf("Failed to get cluster secret from server container")
	}

	clusterSpec.Env = append(clusterSpec.Env, clusterSecretEnvVar)

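The secret lookup above is a plain scan over `KEY=value` strings; `SplitN` with a limit of 2 splits only on the first `=`, so values that themselves contain `=` (e.g. base64 padding) survive intact. A self-contained sketch of the same scan:

```go
package main

import (
	"fmt"
	"strings"
)

// findEnvVar returns the full KEY=value entry for key, or "" if absent.
// SplitN(..., 2) splits only on the first '=', so '=' characters inside
// the value are preserved.
func findEnvVar(env []string, key string) string {
	for _, envVar := range env {
		if split := strings.SplitN(envVar, "=", 2); split[0] == key {
			return envVar
		}
	}
	return ""
}

func main() {
	// hypothetical container env for illustration
	env := []string{"PATH=/bin", "K3S_CLUSTER_SECRET=abc=="}
	fmt.Println(findEnvVar(env, "K3S_CLUSTER_SECRET")) // K3S_CLUSTER_SECRET=abc==
}
```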
	/*
	 * (1.2.2) Extract the API server port from the server container's cmd
	 */
	serverListenPort := ""
	for cmdIndex, cmdPart := range serverContainer.Config.Cmd {
		if cmdPart == "--https-listen-port" {
			serverListenPort = serverContainer.Config.Cmd[cmdIndex+1]
		}
	}
	if serverListenPort == "" {
		return fmt.Errorf("Failed to get https-listen-port from server container")
	}

	serverURLEnvVar := fmt.Sprintf("K3S_URL=https://%s:%s", strings.TrimLeft(serverContainer.Name, "/"), serverListenPort)
	clusterSpec.Env = append(clusterSpec.Env, serverURLEnvVar)

	/*
	 * (1.3) Get the docker network of the cluster that we want to connect to
	 */
	filters.Del("label", "component=server")

	networkList, err := docker.NetworkList(ctx, types.NetworkListOptions{
		Filters: filters,
	})
	if err != nil || len(networkList) == 0 {
		log.Errorf("Failed to find network for cluster '%s'", clusterName)
		return err
	}

	/*
	 * (2) Now identify any existing worker nodes IF we're adding a new one
	 */
	highestExistingWorkerSuffix := 0 // needs to be outside the conditional because of the branching below

	if nodeRole == "agent" {
		filters.Add("label", "component=worker")

		workerList, err := docker.ContainerList(ctx, types.ContainerListOptions{
			Filters: filters,
			All:     true,
		})
		if err != nil {
			log.Errorln("Failed to list worker node containers")
			return err
		}

		for _, worker := range workerList {
			split := strings.Split(worker.Names[0], "-")
			currSuffix, err := strconv.Atoi(split[len(split)-1])
			if err != nil {
				log.Errorln("Failed to get highest worker suffix")
				return err
			}
			if currSuffix > highestExistingWorkerSuffix {
				highestExistingWorkerSuffix = currSuffix
			}
		}
	}

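The suffix scan above assumes worker container names end in `-<number>` (e.g. `k3d-mycluster-worker-2`) and picks the largest suffix so that newly added workers don't collide with existing names. A standalone version of the same computation, using hypothetical names in the k3d naming scheme:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// highestSuffix extracts the numeric suffix after the last '-' of each name
// and returns the largest one; the next new worker would use highest+1.
func highestSuffix(names []string) (int, error) {
	highest := 0
	for _, name := range names {
		split := strings.Split(name, "-")
		suffix, err := strconv.Atoi(split[len(split)-1])
		if err != nil {
			return 0, err
		}
		if suffix > highest {
			highest = suffix
		}
	}
	return highest, nil
}

func main() {
	// hypothetical existing worker containers
	names := []string{"k3d-mycluster-worker-0", "k3d-mycluster-worker-2", "k3d-mycluster-worker-1"}
	h, _ := highestSuffix(names)
	fmt.Println(h) // the next worker gets suffix h+1
}
```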
	/*
	 * (3) Create the nodes with a configuration that automatically joins them to the cluster
	 */

	log.Infof("Adding %d %s-nodes to k3d cluster %s...\n", nodeCount, nodeRole, clusterName)

	if err := createNodes(clusterSpec, nodeRole, highestExistingWorkerSuffix+1, nodeCount); err != nil {
		return err
	}

	return nil
}

func addNodeToK3s(c *cli.Context, clusterSpec *ClusterSpec, nodeRole string) error {

	k3sURLEnvVar := fmt.Sprintf("K3S_URL=%s", c.String("k3s"))
	k3sConnSecretEnvVar := fmt.Sprintf("K3S_CLUSTER_SECRET=%s", c.String("k3s-secret"))
	if c.IsSet("k3s-token") {
		k3sConnSecretEnvVar = fmt.Sprintf("K3S_TOKEN=%s", c.String("k3s-token"))
	}

	clusterSpec.Env = append(clusterSpec.Env, k3sURLEnvVar, k3sConnSecretEnvVar)

	if err := createNodes(clusterSpec, nodeRole, 0, c.Int("count")); err != nil {
		return err
	}

	return nil
}

// createNodes helps create multiple nodes at once with an incrementing suffix in the name
func createNodes(clusterSpec *ClusterSpec, role string, suffixNumberStart int, count int) error {
	for suffix := suffixNumberStart; suffix < suffixNumberStart+count; suffix++ {
		containerID := ""
		var err error
		if role == "agent" {
			containerID, err = createWorker(clusterSpec, suffix)
		} else if role == "server" {
			containerID, err = createServer(clusterSpec)
		}
		if err != nil {
			log.Errorf("Failed to create %s-node", role)
			return err
		}
		log.Infof("Created %s-node with ID %s", role, containerID)
	}
	return nil
}

233 cli/container.go

@@ -6,40 +6,30 @@ package run
 */

import (
	"archive/tar"
	"bytes"
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"strings"
	"time"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/network"
	"github.com/docker/docker/client"
	"github.com/pkg/errors"
	log "github.com/sirupsen/logrus"
)

func createContainer(config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, containerName string) (string, error) {
	ctx := context.Background()

	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return "", fmt.Errorf("Couldn't create docker client\n%+v", err)
	}

	resp, err := docker.ContainerCreate(ctx, config, hostConfig, networkingConfig, containerName)
@@ -47,35 +37,45 @@ func startContainer(verbose bool, config *container.Config, hostConfig *containe
		log.Printf("Pulling image %s...\n", config.Image)
		reader, err := docker.ImagePull(ctx, config.Image, types.ImagePullOptions{})
		if err != nil {
			return "", fmt.Errorf("Couldn't pull image %s\n%+v", config.Image, err)
		}
		defer reader.Close()
		if ll := log.GetLevel(); ll == log.DebugLevel {
			_, err := io.Copy(os.Stdout, reader)
			if err != nil {
				log.Warningf("Couldn't get docker output\n%+v", err)
			}
		} else {
			_, err := io.Copy(ioutil.Discard, reader)
			if err != nil {
				log.Warningf("Couldn't get docker output\n%+v", err)
			}
		}
		resp, err = docker.ContainerCreate(ctx, config, hostConfig, networkingConfig, containerName)
		if err != nil {
			return "", fmt.Errorf("Couldn't create container after pull %s\n%+v", containerName, err)
		}
	} else if err != nil {
		return "", fmt.Errorf("Couldn't create container %s\n%+v", containerName, err)
	}

	return resp.ID, nil
}

func startContainer(ID string) error {
	ctx := context.Background()
	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return fmt.Errorf("Couldn't create docker client\n%+v", err)
	}

	if err := docker.ContainerStart(ctx, ID, types.ContainerStartOptions{}); err != nil {
		return err
	}

	return nil
}

func createServer(spec *ClusterSpec) (string, error) {
	log.Printf("Creating server using %s...\n", spec.Image)

@@ -87,8 +87,16 @@ func createServer(spec *ClusterSpec) (string, error) {
	containerName := GetContainerName("server", spec.ClusterName, -1)

	// labels to be attached to the server belong to the roles
	// all, server, master or <server-container-name>
	serverLabels, err := MergeLabelSpecs(spec.NodeToLabelSpecMap, "server", containerName)
	if err != nil {
		return "", err
	}
	containerLabels = MergeLabels(containerLabels, serverLabels)

	// ports to be assigned to the server belong to the roles
	// all, server, master or <server-container-name>
	serverPorts, err := MergePortSpecs(spec.NodeToPortSpecMap, "server", containerName)
	if err != nil {
		return "", err
	}
@@ -113,15 +121,14 @@ func createServer(spec *ClusterSpec) (string, error) {
	hostConfig := &container.HostConfig{
		PortBindings: serverPublishedPorts.PortBindings,
		Privileged:   true,
		Init:         &[]bool{true}[0],
	}

	if spec.AutoRestart {
		hostConfig.RestartPolicy.Name = "unless-stopped"
	}

	spec.Volumes.addVolumesToHostConfig(containerName, "server", hostConfig)

	networkingConfig := &network.NetworkingConfig{
		EndpointsConfig: map[string]*network.EndpointSettings{
@@ -139,9 +146,20 @@ func createServer(spec *ClusterSpec) (string, error) {
		Env:    spec.Env,
		Labels: containerLabels,
	}
	id, err := createContainer(config, hostConfig, networkingConfig, containerName)
	if err != nil {
		return "", fmt.Errorf("Couldn't create container %s\n%+v", containerName, err)
	}

	// copy the registry configuration
	if spec.RegistryEnabled || len(spec.RegistriesFile) > 0 {
		if err := writeRegistriesConfigInContainer(spec, id); err != nil {
			return "", err
		}
	}

	if err := startContainer(id); err != nil {
		return "", fmt.Errorf("Couldn't start container %s\n%+v", containerName, err)
	}

	return id, nil
@@ -156,11 +174,29 @@ func createWorker(spec *ClusterSpec, postfix int) (string, error) {
	containerLabels["cluster"] = spec.ClusterName

	containerName := GetContainerName("worker", spec.ClusterName, postfix)

	env := spec.Env

	needServerURL := true
	for _, envVar := range env {
		if strings.Split(envVar, "=")[0] == "K3S_URL" {
			needServerURL = false
			break
		}
	}
	if needServerURL {
		env = append(spec.Env, fmt.Sprintf("K3S_URL=https://k3d-%s-server:%s", spec.ClusterName, spec.APIPort.Port))
	}

	// labels to be attached to the worker belong to the roles
	// all, worker, agent or <server-container-name>
	workerLabels, err := MergeLabelSpecs(spec.NodeToLabelSpecMap, "worker", containerName)
	if err != nil {
		return "", err
	}
	containerLabels = MergeLabels(containerLabels, workerLabels)

	// ports to be assigned to the worker belong to the roles
	// all, worker, agent or <server-container-name>
	workerPorts, err := MergePortSpecs(spec.NodeToPortSpecMap, "worker", containerName)
	if err != nil {
		return "", err
	}
@@ -182,15 +218,14 @@ func createWorker(spec *ClusterSpec, postfix int) (string, error) {
		},
		PortBindings: workerPublishedPorts.PortBindings,
		Privileged:   true,
		Init:         &[]bool{true}[0],
	}

	if spec.AutoRestart {
		hostConfig.RestartPolicy.Name = "unless-stopped"
	}

	spec.Volumes.addVolumesToHostConfig(containerName, "worker", hostConfig)

	networkingConfig := &network.NetworkingConfig{
		EndpointsConfig: map[string]*network.EndpointSettings{
@@ -209,11 +244,21 @@ func createWorker(spec *ClusterSpec, postfix int) (string, error) {
		ExposedPorts: workerPublishedPorts.ExposedPorts,
	}

	id, err := createContainer(config, hostConfig, networkingConfig, containerName)
	if err != nil {
		return "", fmt.Errorf("Couldn't create container %s\n%+v", containerName, err)
	}

	// copy the registry configuration
	if spec.RegistryEnabled || len(spec.RegistriesFile) > 0 {
		if err := writeRegistriesConfigInContainer(spec, id); err != nil {
			return "", err
		}
	}

	if err := startContainer(id); err != nil {
		return "", fmt.Errorf("Couldn't start container %s\n%+v", containerName, err)
	}
	return id, nil
}

@@ -222,7 +267,7 @@ func removeContainer(ID string) error {
	ctx := context.Background()
	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return fmt.Errorf("Couldn't create docker client\n%+v", err)
	}

	options := types.ContainerRemoveOptions{
@@ -231,7 +276,109 @@ func removeContainer(ID string) error {
	}

	if err := docker.ContainerRemove(ctx, ID, options); err != nil {
		return fmt.Errorf("Couldn't delete container [%s] -> %+v", ID, err)
	}
	return nil
}

// getContainerNetworks returns the networks a container is connected to
func getContainerNetworks(ID string) (map[string]*network.EndpointSettings, error) {
	ctx := context.Background()
	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return nil, err
	}

	c, err := docker.ContainerInspect(ctx, ID)
	if err != nil {
		return nil, fmt.Errorf("Couldn't get details about container %s: %w", ID, err)
	}
	return c.NetworkSettings.Networks, nil
}

// connectContainerToNetwork connects a container to a given network
func connectContainerToNetwork(ID string, networkID string, aliases []string) error {
	ctx := context.Background()
	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return fmt.Errorf("Couldn't create docker client\n%+v", err)
	}

	networkingConfig := &network.EndpointSettings{
		Aliases: aliases,
	}

	return docker.NetworkConnect(ctx, networkID, ID, networkingConfig)
}

// disconnectContainerFromNetwork disconnects a container from a given network
func disconnectContainerFromNetwork(ID string, networkID string) error {
	ctx := context.Background()
	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return fmt.Errorf("Couldn't create docker client\n%+v", err)
	}

	return docker.NetworkDisconnect(ctx, networkID, ID, false)
}

func waitForContainerLogMessage(containerID string, message string, timeoutSeconds int) error {
	ctx := context.Background()
	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return fmt.Errorf("Couldn't create docker client\n%+v", err)
	}

	start := time.Now()
	timeout := time.Duration(timeoutSeconds) * time.Second
	for {
		// not running after timeout exceeded? Rollback and delete everything.
		if timeout != 0 && time.Now().After(start.Add(timeout)) {
			return fmt.Errorf("ERROR: timeout of %d seconds exceeded while waiting for log message '%s'", timeoutSeconds, message)
		}

		// scan container logs for a line that tells us that the required services are up and running
		out, err := docker.ContainerLogs(ctx, containerID, types.ContainerLogsOptions{ShowStdout: true, ShowStderr: true})
		if err != nil {
			return fmt.Errorf("ERROR: couldn't get docker logs from container %s\n%+v", containerID, err)
		}
		buf := new(bytes.Buffer)
		nRead, _ := buf.ReadFrom(out)
		out.Close()
		output := buf.String()
		if nRead > 0 && strings.Contains(output, message) {
			break
		}

		time.Sleep(1 * time.Second)
	}
	return nil
}

func copyToContainer(ID string, dstPath string, content []byte) error {
	ctx := context.Background()
	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return fmt.Errorf("Couldn't create docker client\n%+v", err)
	}

	buf := new(bytes.Buffer)
	tw := tar.NewWriter(buf)
	hdr := &tar.Header{Name: dstPath, Mode: 0644, Size: int64(len(content))}
	if err := tw.WriteHeader(hdr); err != nil {
		return errors.Wrap(err, "failed to write a tar header")
	}
	if _, err := tw.Write(content); err != nil {
		return errors.Wrap(err, "failed to write a tar body")
	}
	if err := tw.Close(); err != nil {
		return errors.Wrap(err, "failed to close tar archive")
	}

	r := bytes.NewReader(buf.Bytes())
	if err := docker.CopyToContainer(ctx, ID, "/", r, types.CopyToContainerOptions{AllowOverwriteDirWithFile: true}); err != nil {
		return errors.Wrap(err, "failed to copy source code")
	}
	return nil
}

@@ -1,10 +1,11 @@
package run

import (
	"os"
	"os/exec"
	"strings"

	log "github.com/sirupsen/logrus"
)

func getDockerMachineIp() (string, error) {

51 cli/image.go

@@ -4,7 +4,6 @@ import (
	"context"
	"fmt"
	"io/ioutil"
	"strings"
	"time"

@@ -12,6 +11,7 @@ import (
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/network"
	"github.com/docker/docker/client"
	log "github.com/sirupsen/logrus"
)

const (
@@ -24,17 +24,17 @@ func importImage(clusterName string, images []string, noRemove bool) error {
	ctx := context.Background()
	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return fmt.Errorf("Couldn't create docker client\n%+v", err)
	}

	// get the cluster directory to temporarily save the image tarball there
	imageVolume, err := getImageVolume(clusterName)
	if err != nil {
		return fmt.Errorf("Couldn't get image volume for cluster [%s]\n%+v", clusterName, err)
	}

	//*** first, save the images using the local docker daemon
	log.Infof("Saving images %s from local docker daemon...", images)
	toolsContainerName := fmt.Sprintf("k3d-%s-tools", clusterName)
	tarFileName := fmt.Sprintf("%s/k3d-%s-images-%s.tar", imageBasePathRemote, clusterName, time.Now().Format("20060102150405"))

@@ -58,16 +58,19 @@ func importImage(clusterName string, images []string, noRemove bool) error {
		},
	}

	toolsContainerID, err := createContainer(&containerConfig, &hostConfig, &network.NetworkingConfig{}, toolsContainerName)
	if err != nil {
		return err
	}
	if err := startContainer(toolsContainerID); err != nil {
		return fmt.Errorf("Couldn't start container %s\n%w", toolsContainerName, err)
	}

	defer func() {
		if err = docker.ContainerRemove(ctx, toolsContainerID, types.ContainerRemoveOptions{
			Force: true,
		}); err != nil {
			log.Warningf("Couldn't remove tools container\n%+v", err)
		}
	}()

@@ -75,14 +78,14 @@ func importImage(clusterName string, images []string, noRemove bool) error {
	for {
		cont, err := docker.ContainerInspect(ctx, toolsContainerID)
		if err != nil {
			return fmt.Errorf("Couldn't get helper container's exit code\n%+v", err)
		}
		if !cont.State.Running { // container finished...
			if cont.State.ExitCode == 0 { // ...successfully
				log.Info("Saved images to shared docker volume")
				break
			} else if cont.State.ExitCode != 0 { // ...failed
				errTxt := "Helper container failed to save images"
				logReader, err := docker.ContainerLogs(ctx, toolsContainerID, types.ContainerLogsOptions{
					ShowStdout: true,
					ShowStderr: true,
@ -103,7 +106,7 @@ func importImage(clusterName string, images []string, noRemove bool) error {
|
|||||||
// Get the container IDs for all containers in the cluster
|
// Get the container IDs for all containers in the cluster
|
||||||
clusters, err := getClusters(false, clusterName)
|
clusters, err := getClusters(false, clusterName)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("ERROR: couldn't get cluster by name [%s]\n%+v", clusterName, err)
|
return fmt.Errorf(" Couldn't get cluster by name [%s]\n%+v", clusterName, err)
|
||||||
}
|
}
|
||||||
containerList := []types.Container{clusters[clusterName].server}
|
containerList := []types.Container{clusters[clusterName].server}
|
||||||
containerList = append(containerList, clusters[clusterName].workers...)
|
containerList = append(containerList, clusters[clusterName].workers...)
|
||||||
@ -133,12 +136,12 @@ func importImage(clusterName string, images []string, noRemove bool) error {
|
|||||||
for _, container := range containerList {
|
for _, container := range containerList {
|
||||||
|
|
||||||
containerName := container.Names[0][1:] // trimming the leading "/" from name
|
containerName := container.Names[0][1:] // trimming the leading "/" from name
|
||||||
log.Printf("INFO: Importing images %s in container [%s]", images, containerName)
|
log.Infof("Importing images %s in container [%s]", images, containerName)
|
||||||
|
|
||||||
// create exec configuration
|
// create exec configuration
|
||||||
execResponse, err := docker.ContainerExecCreate(ctx, container.ID, execConfig)
|
execResponse, err := docker.ContainerExecCreate(ctx, container.ID, execConfig)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("ERROR: Failed to create exec command for container [%s]\n%+v", containerName, err)
|
return fmt.Errorf("Failed to create exec command for container [%s]\n%+v", containerName, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
// attach to exec process in container
|
// attach to exec process in container
|
||||||
@ -147,66 +150,66 @@ func importImage(clusterName string, images []string, noRemove bool) error {
|
|||||||
Tty: execAttachConfig.Tty,
|
Tty: execAttachConfig.Tty,
|
||||||
})
|
})
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("ERROR: couldn't attach to container [%s]\n%+v", containerName, err)
|
return fmt.Errorf(" Couldn't attach to container [%s]\n%+v", containerName, err)
|
||||||
}
|
}
|
||||||
defer containerConnection.Close()
|
defer containerConnection.Close()
|
||||||
|
|
||||||
// start exec
|
// start exec
|
||||||
err = docker.ContainerExecStart(ctx, execResponse.ID, execStartConfig)
|
err = docker.ContainerExecStart(ctx, execResponse.ID, execStartConfig)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("ERROR: couldn't execute command in container [%s]\n%+v", containerName, err)
|
return fmt.Errorf(" Couldn't execute command in container [%s]\n%+v", containerName, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
// get output from container
|
// get output from container
|
||||||
content, err := ioutil.ReadAll(containerConnection.Reader)
|
content, err := ioutil.ReadAll(containerConnection.Reader)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("ERROR: couldn't read output from container [%s]\n%+v", containerName, err)
|
return fmt.Errorf(" Couldn't read output from container [%s]\n%+v", containerName, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
// example output "unpacking image........ ...done"
|
// example output "unpacking image........ ...done"
|
||||||
if !strings.Contains(string(content), "done") {
|
if !strings.Contains(string(content), "done") {
|
||||||
return fmt.Errorf("ERROR: seems like something went wrong using `ctr image import` in container [%s]. Full output below:\n%s", containerName, string(content))
|
return fmt.Errorf("seems like something went wrong using `ctr image import` in container [%s]. Full output below:\n%s", containerName, string(content))
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
log.Printf("INFO: Successfully imported images %s in all nodes of cluster [%s]", images, clusterName)
|
log.Infof("Successfully imported images %s in all nodes of cluster [%s]", images, clusterName)
|
||||||
|
|
||||||
// remove tarball from inside the server container
|
// remove tarball from inside the server container
|
||||||
if !noRemove {
|
if !noRemove {
|
||||||
log.Println("INFO: Cleaning up tarball")
|
log.Info("Cleaning up tarball")
|
||||||
|
|
||||||
execID, err := docker.ContainerExecCreate(ctx, clusters[clusterName].server.ID, types.ExecConfig{
|
execID, err := docker.ContainerExecCreate(ctx, clusters[clusterName].server.ID, types.ExecConfig{
|
||||||
Cmd: []string{"rm", "-f", tarFileName},
|
Cmd: []string{"rm", "-f", tarFileName},
|
||||||
})
|
})
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Printf("WARN: failed to delete tarball: couldn't create remove in container [%s]\n%+v", clusters[clusterName].server.ID, err)
|
log.Warningf("Failed to delete tarball: couldn't create remove in container [%s]\n%+v", clusters[clusterName].server.ID, err)
|
||||||
}
|
}
|
||||||
err = docker.ContainerExecStart(ctx, execID.ID, types.ExecStartCheck{
|
err = docker.ContainerExecStart(ctx, execID.ID, types.ExecStartCheck{
|
||||||
Detach: true,
|
Detach: true,
|
||||||
})
|
})
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Printf("WARN: couldn't start tarball deletion action\n%+v", err)
|
log.Warningf("Couldn't start tarball deletion action\n%+v", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
for {
|
for {
|
||||||
execInspect, err := docker.ContainerExecInspect(ctx, execID.ID)
|
execInspect, err := docker.ContainerExecInspect(ctx, execID.ID)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Printf("WARN: couldn't verify deletion of tarball\n%+v", err)
|
log.Warningf("Couldn't verify deletion of tarball\n%+v", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
if !execInspect.Running {
|
if !execInspect.Running {
|
||||||
if execInspect.ExitCode == 0 {
|
if execInspect.ExitCode == 0 {
|
||||||
log.Println("INFO: deleted tarball")
|
log.Info("Deleted tarball")
|
||||||
break
|
break
|
||||||
} else {
|
} else {
|
||||||
log.Println("WARN: failed to delete tarball")
|
log.Warning("Failed to delete tarball")
|
||||||
break
|
break
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
log.Println("INFO: ...Done")
|
log.Info("...Done")
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
cli/label.go (new file, +121)
@@ -0,0 +1,121 @@
+package run
+
+import (
+	"regexp"
+	"strings"
+
+	log "github.com/sirupsen/logrus"
+)
+
+// mapNodesToLabelSpecs maps nodes to labelSpecs
+func mapNodesToLabelSpecs(specs []string, createdNodes []string) (map[string][]string, error) {
+	// check node-specifier possibilitites
+	possibleNodeSpecifiers := []string{"all", "workers", "agents", "server", "master"}
+	possibleNodeSpecifiers = append(possibleNodeSpecifiers, createdNodes...)
+
+	nodeToLabelSpecMap := make(map[string][]string)
+
+	for _, spec := range specs {
+		labelSpec, node := extractLabelNode(spec)
+
+		// check if node-specifier is valid (either a role or a name) and append to list if matches
+		nodeFound := false
+		for _, name := range possibleNodeSpecifiers {
+			if node == name {
+				nodeFound = true
+				nodeToLabelSpecMap[node] = append(nodeToLabelSpecMap[node], labelSpec)
+				break
+			}
+		}
+
+		// node extraction was a false positive, use full spec with default node
+		if !nodeFound {
+			nodeToLabelSpecMap[defaultLabelNodes] = append(nodeToLabelSpecMap[defaultLabelNodes], spec)
+		}
+	}
+
+	return nodeToLabelSpecMap, nil
+}
+
+// extractLabelNode separates the node specification from the actual label specs
+func extractLabelNode(spec string) (string, string) {
+	// label defaults to full spec
+	labelSpec := spec
+
+	// node defaults to "all"
+	node := defaultLabelNodes
+
+	// only split at the last "@"
+	re := regexp.MustCompile(`^(.*)@([^@]+)$`)
+	match := re.FindStringSubmatch(spec)
+
+	if len(match) > 0 {
+		labelSpec = match[1]
+		node = match[2]
+	}
+
+	return labelSpec, node
+}
+
+// splitLabel separates the label key from the label value
+func splitLabel(label string) (string, string) {
+	// split only on first '=' sign (like `docker run` do)
+	labelSlice := strings.SplitN(label, "=", 2)
+
+	if len(labelSlice) > 1 {
+		return labelSlice[0], labelSlice[1]
+	}
+
+	// defaults to label key with empty value (like `docker run` do)
+	return label, ""
+}
+
+// MergeLabelSpecs merges labels for a given node
+func MergeLabelSpecs(nodeToLabelSpecMap map[string][]string, role, name string) ([]string, error) {
+	labelSpecs := []string{}
+
+	// add portSpecs according to node role
+	for _, group := range nodeRuleGroupsMap[role] {
+		for _, v := range nodeToLabelSpecMap[group] {
+			exists := false
+			for _, i := range labelSpecs {
+				if v == i {
+					exists = true
+				}
+			}
+			if !exists {
+				labelSpecs = append(labelSpecs, v)
+			}
+		}
+	}
+
+	// add portSpecs according to node name
+	for _, v := range nodeToLabelSpecMap[name] {
+		exists := false
+		for _, i := range labelSpecs {
+			if v == i {
+				exists = true
+			}
+		}
+		if !exists {
+			labelSpecs = append(labelSpecs, v)
+		}
+	}
+
+	return labelSpecs, nil
+}
+
+// MergeLabels merges list of labels into a label map
+func MergeLabels(labelMap map[string]string, labels []string) map[string]string {
+	for _, label := range labels {
+		labelKey, labelValue := splitLabel(label)
+
+		if _, found := labelMap[labelKey]; found {
+			log.Warningf("Overriding already existing label [%s]", labelKey)
+		}
+
+		labelMap[labelKey] = labelValue
+	}
+
+	return labelMap
+}
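The two parsing helpers added above have subtle semantics: the node specifier is split off at the *last* `@` (so a label value may itself contain `@`), and the key/value split happens only on the *first* `=` (mirroring `docker run`). A minimal stdlib-only sketch of that behavior, with `"all"` standing in for the `defaultLabelNodes` constant (which is defined elsewhere in the repo):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// extractLabelNode splits only at the LAST "@"; specs without a node
// suffix fall back to the default node group ("all" here, assumed to
// match defaultLabelNodes in the diff).
func extractLabelNode(spec string) (labelSpec, node string) {
	labelSpec, node = spec, "all" // defaults
	re := regexp.MustCompile(`^(.*)@([^@]+)$`)
	if m := re.FindStringSubmatch(spec); len(m) > 0 {
		labelSpec, node = m[1], m[2]
	}
	return
}

// splitLabel splits only on the first "=", defaulting to an empty value.
func splitLabel(label string) (key, value string) {
	parts := strings.SplitN(label, "=", 2)
	if len(parts) > 1 {
		return parts[0], parts[1]
	}
	return label, ""
}

func main() {
	fmt.Println(extractLabelNode("env=dev@workers")) // env=dev workers
	fmt.Println(extractLabelNode("env=dev"))         // env=dev all
	k, v := splitLabel("a=b=c")
	fmt.Println(k, v) // a b=c
}
```

Because the regex is greedy, `"env=dev@workers@node-0"` would yield the node `node-0` and keep `env=dev@workers` as the label spec, which is why a false-positive match falls back to the full spec in `mapNodesToLabelSpecs`.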
@@ -3,11 +3,11 @@ package run
 import (
 	"context"
 	"fmt"
-	"log"

 	"github.com/docker/docker/api/types"
 	"github.com/docker/docker/api/types/filters"
 	"github.com/docker/docker/client"
+	log "github.com/sirupsen/logrus"
 )

 func k3dNetworkName(clusterName string) string {

@@ -20,7 +20,7 @@ func createClusterNetwork(clusterName string) (string, error) {
 	ctx := context.Background()
 	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
 	if err != nil {
-		return "", fmt.Errorf("ERROR: couldn't create docker client\n%+v", err)
+		return "", fmt.Errorf(" Couldn't create docker client\n%+v", err)
 	}

 	args := filters.NewArgs()

@@ -32,7 +32,7 @@ func createClusterNetwork(clusterName string) (string, error) {
 	}

 	if len(nl) > 1 {
-		log.Printf("WARNING: Found %d networks for %s when we only expect 1\n", len(nl), clusterName)
+		log.Warningf("Found %d networks for %s when we only expect 1\n", len(nl), clusterName)
 	}

 	if len(nl) > 0 {

@@ -47,18 +47,17 @@ func createClusterNetwork(clusterName string) (string, error) {
 		},
 	})
 	if err != nil {
-		return "", fmt.Errorf("ERROR: couldn't create network\n%+v", err)
+		return "", fmt.Errorf(" Couldn't create network\n%+v", err)
 	}

 	return resp.ID, nil
 }

-// deleteClusterNetwork deletes a docker network based on the name of a cluster it belongs to
-func deleteClusterNetwork(clusterName string) error {
+func getClusterNetwork(clusterName string) (string, error) {
 	ctx := context.Background()
 	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
 	if err != nil {
-		return fmt.Errorf("ERROR: couldn't create docker client\n%+v", err)
+		return "", fmt.Errorf(" Couldn't create docker client\n%+v", err)
 	}

 	filters := filters.NewArgs()

@@ -69,15 +68,53 @@ func deleteClusterNetwork(clusterName string) error {
 		Filters: filters,
 	})
 	if err != nil {
-		return fmt.Errorf("ERROR: couldn't find network for cluster %s\n%+v", clusterName, err)
+		return "", fmt.Errorf(" Couldn't find network for cluster %s\n%+v", clusterName, err)
+	}
+	if len(networks) == 0 {
+		return "", nil
+	}
+	// there should be only one network that matches the name... but who knows?
+	return networks[0].ID, nil
+}
+
+// deleteClusterNetwork deletes a docker network based on the name of a cluster it belongs to
+func deleteClusterNetwork(clusterName string) error {
+	nid, err := getClusterNetwork(clusterName)
+	if err != nil {
+		return fmt.Errorf(" Couldn't find network for cluster %s\n%+v", clusterName, err)
+	}
+	if nid == "" {
+		log.Warningf("couldn't remove network for cluster %s: network does not exist", clusterName)
+		return nil
 	}

-	// there should be only one network that matches the name... but who knows?
-	for _, network := range networks {
-		if err := docker.NetworkRemove(ctx, network.ID); err != nil {
-			log.Printf("WARNING: couldn't remove network for cluster %s\n%+v", clusterName, err)
-			continue
-		}
+	ctx := context.Background()
+	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
+	if err != nil {
+		return fmt.Errorf(" Couldn't create docker client\n%+v", err)
+	}
+	if err := docker.NetworkRemove(ctx, nid); err != nil {
+		log.Warningf("couldn't remove network for cluster %s\n%+v", clusterName, err)
 	}
 	return nil
 }
+
+// getContainersInNetwork gets a list of containers connected to a network
+func getContainersInNetwork(nid string) ([]string, error) {
+	ctx := context.Background()
+	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
+	if err != nil {
+		return nil, fmt.Errorf("Couldn't create docker client\n%+v", err)
+	}
+
+	options := types.NetworkInspectOptions{}
+	network, err := docker.NetworkInspect(ctx, nid, options)
+	if err != nil {
+		return nil, err
+	}
+	cids := []string{}
+	for cid := range network.Containers {
+		cids = append(cids, cid)
+	}
+	return cids, nil
+}
cli/port.go (27 changes)
@@ -2,27 +2,12 @@ package run

 import (
 	"fmt"
-	"log"
 	"strings"

 	"github.com/docker/go-connections/nat"
+	log "github.com/sirupsen/logrus"
 )

-// PublishedPorts is a struct used for exposing container ports on the host system
-type PublishedPorts struct {
-	ExposedPorts map[nat.Port]struct{}
-	PortBindings map[nat.Port][]nat.PortBinding
-}
-
-// defaultNodes describes the type of nodes on which a port should be exposed by default
-const defaultNodes = "server"
-
-// mapping a node role to groups that should be applied to it
-var nodeRuleGroupsMap = map[string][]string{
-	"worker": {"all", "workers"},
-	"server": {"all", "server", "master"},
-}
-
 // mapNodesToPortSpecs maps nodes to portSpecs
 func mapNodesToPortSpecs(specs []string, createdNodes []string) (map[string][]string, error) {

@@ -31,7 +16,7 @@ func mapNodesToPortSpecs(specs []string, createdNodes []string) (map[string][]st
 	}

 	// check node-specifier possibilitites
-	possibleNodeSpecifiers := []string{"all", "workers", "server", "master"}
+	possibleNodeSpecifiers := []string{"all", "workers", "agents", "server", "master"}
 	possibleNodeSpecifiers = append(possibleNodeSpecifiers, createdNodes...)

 	nodeToPortSpecMap := make(map[string][]string)

@@ -54,7 +39,7 @@ func mapNodesToPortSpecs(specs []string, createdNodes []string) (map[string][]st
 			}
 		}
 		if !nodeFound {
-			log.Printf("WARNING: Unknown node-specifier [%s] in port mapping entry [%s]", node, spec)
+			log.Warningf("Unknown node-specifier [%s] in port mapping entry [%s]", node, spec)
 		}
 	}
 	}

@@ -80,12 +65,12 @@ func validatePortSpecs(specs []string) error {
 		atSplit := strings.Split(spec, "@")
 		_, err := nat.ParsePortSpec(atSplit[0])
 		if err != nil {
-			return fmt.Errorf("ERROR: Invalid port specification [%s] in port mapping [%s]\n%+v", atSplit[0], spec, err)
+			return fmt.Errorf("Invalid port specification [%s] in port mapping [%s]\n%+v", atSplit[0], spec, err)
 		}
 		if len(atSplit) > 0 {
 			for i := 1; i < len(atSplit); i++ {
 				if err := ValidateHostname(atSplit[i]); err != nil {
-					return fmt.Errorf("ERROR: Invalid node-specifier [%s] in port mapping [%s]\n%+v", atSplit[i], spec, err)
+					return fmt.Errorf("Invalid node-specifier [%s] in port mapping [%s]\n%+v", atSplit[i], spec, err)
 				}
 			}
 		}

@@ -108,7 +93,7 @@ func extractNodes(spec string) ([]string, string) {
 	return nodes, portSpec
 }

 // Offset creates a new PublishedPort structure, with all host ports are changed by a fixed 'offset'
 func (p PublishedPorts) Offset(offset int) *PublishedPorts {
 	var newExposedPorts = make(map[nat.Port]struct{}, len(p.ExposedPorts))
 	var newPortBindings = make(map[nat.Port][]nat.PortBinding, len(p.PortBindings))
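The `PublishedPorts.Offset` method referenced in the last hunk rebuilds the exposed-ports and port-bindings maps with every host port shifted by a fixed offset, so each cluster's published ports land in a distinct range. A stdlib-only sketch of that idea (using a plain `"port/proto"` string instead of `nat.Port`; `offsetPort` is a hypothetical stand-in, not the k3d implementation):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// offsetPort shifts the numeric part of a "port/proto" spec by a fixed
// offset, mirroring what PublishedPorts.Offset does for every host port.
func offsetPort(portSpec string, offset int) (string, error) {
	parts := strings.SplitN(portSpec, "/", 2)
	p, err := strconv.Atoi(parts[0])
	if err != nil {
		return "", fmt.Errorf("invalid port spec %q: %w", portSpec, err)
	}
	proto := "tcp" // default protocol when none is given
	if len(parts) == 2 {
		proto = parts[1]
	}
	return fmt.Sprintf("%d/%s", p+offset, proto), nil
}

func main() {
	shifted, _ := offsetPort("6443/tcp", 1)
	fmt.Println(shifted) // 6444/tcp
}
```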
cli/registry.go (new file, +340, truncated below)
@@ -0,0 +1,340 @@
+package run
+
+import (
+	"context"
+	"fmt"
+	"io/ioutil"
+	"path"
+	"strconv"
+	"time"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/container"
+	"github.com/docker/docker/api/types/filters"
+	"github.com/docker/docker/api/types/network"
+	"github.com/docker/docker/client"
+	"github.com/mitchellh/go-homedir"
+	log "github.com/sirupsen/logrus"
+	"gopkg.in/yaml.v2"
+)
+
+const (
+	defaultRegistryContainerName = "k3d-registry"
+
+	defaultRegistryImage = "registry:2"
+
+	// Default registry port, both for the external and the internal ports
+	// Note well, that the internal port is never changed.
+	defaultRegistryPort = 5000
+
+	defaultFullRegistriesPath = "/etc/rancher/k3s/registries.yaml"
+
+	defaultRegistryMountPath = "/var/lib/registry"
+
+	defaultDockerHubAddress = "docker.io"
+
+	defaultDockerRegistryHubAddress = "registry-1.docker.io"
+)
+
+// default labels assigned to the registry container
+var defaultRegistryContainerLabels = map[string]string{
+	"app":       "k3d",
+	"component": "registry",
+}
+
+// default labels assigned to the registry volume
+var defaultRegistryVolumeLabels = map[string]string{
+	"app":       "k3d",
+	"component": "registry",
+	"managed":   "true",
+}
+
+// NOTE: structs copied from https://github.com/rancher/k3s/blob/master/pkg/agent/templates/registry.go
+// for avoiding a dependencies nightmare...
+
+// Registry is registry settings configured
+type Registry struct {
+	// Mirrors are namespace to mirror mapping for all namespaces.
+	Mirrors map[string]Mirror `toml:"mirrors" yaml:"mirrors"`
+
+	// Configs are configs for each registry.
+	// The key is the FDQN or IP of the registry.
+	Configs map[string]interface{} `toml:"configs" yaml:"configs"`
+
+	// Auths are registry endpoint to auth config mapping. The registry endpoint must
+	// be a valid url with host specified.
+	// DEPRECATED: Use Configs instead. Remove in containerd 1.4.
+	Auths map[string]interface{} `toml:"auths" yaml:"auths"`
+}
+
+// Mirror contains the config related to the registry mirror
+type Mirror struct {
+	// Endpoints are endpoints for a namespace. CRI plugin will try the endpoints
+	// one by one until a working one is found. The endpoint must be a valid url
+	// with host specified.
+	// The scheme, host and path from the endpoint URL will be used.
+	Endpoints []string `toml:"endpoint" yaml:"endpoint"`
+}
+
+// getGlobalRegistriesConfFilename gets the global registries file that will be used in all the servers/workers
+func getGlobalRegistriesConfFilename() (string, error) {
+	homeDir, err := homedir.Dir()
+	if err != nil {
+		log.Error("Couldn't get user's home directory")
+		return "", err
+	}
+
+	return path.Join(homeDir, ".k3d", "registries.yaml"), nil
+}
+
+// writeRegistriesConfigInContainer creates a valid registries configuration file in a container
+func writeRegistriesConfigInContainer(spec *ClusterSpec, ID string) error {
+	registryInternalAddress := fmt.Sprintf("%s:%d", spec.RegistryName, defaultRegistryPort)
+	registryExternalAddress := fmt.Sprintf("%s:%d", spec.RegistryName, spec.RegistryPort)
+
+	privRegistries := &Registry{}
+
+	// load the base registry file
+	if len(spec.RegistriesFile) > 0 {
+		log.Printf("Using registries definitions from %q...\n", spec.RegistriesFile)
+		privRegistryFile, err := ioutil.ReadFile(spec.RegistriesFile)
+		if err != nil {
+			return err // the file must exist at this point
+		}
+		if err := yaml.Unmarshal(privRegistryFile, &privRegistries); err != nil {
+			return err
+		}
+	}
+
+	if spec.RegistryEnabled {
+		if len(privRegistries.Mirrors) == 0 {
+			privRegistries.Mirrors = map[string]Mirror{}
+		}
+
+		// then add the private registry
+		privRegistries.Mirrors[registryExternalAddress] = Mirror{
+			Endpoints: []string{fmt.Sprintf("http://%s", registryInternalAddress)},
+		}
+
+		// with the cache, redirect all the PULLs to the Docker Hub to the local registry
+		if spec.RegistryCacheEnabled {
+			privRegistries.Mirrors[defaultDockerHubAddress] = Mirror{
+				Endpoints: []string{fmt.Sprintf("http://%s", registryInternalAddress)},
+			}
+		}
+	}
+
+	d, err := yaml.Marshal(&privRegistries)
+	if err != nil {
+		return err
+	}
+
+	return copyToContainer(ID, defaultFullRegistriesPath, d)
+}
+
+// createRegistry creates a registry, or connect the k3d network to an existing one
+func createRegistry(spec ClusterSpec) (string, error) {
+	netName := k3dNetworkName(spec.ClusterName)
+
+	// first, check we have not already started a registry (for example, for a different k3d cluster)
+	// all the k3d clusters should share the same private registry, so if we already have a registry just connect
+	// it to the network of this cluster.
+	cid, err := getRegistryContainer()
+	if err != nil {
+		return "", err
+	}
+
+	if cid != "" {
+		// TODO: we should check given-registry-name == existing-registry-name
+		log.Printf("Registry already present: ensuring that it's running and connecting it to the '%s' network...\n", netName)
+		if err := startContainer(cid); err != nil {
+			log.Warnf("Failed to start registry container. Try starting it manually via `docker start %s`", cid)
+		}
+		if err := connectRegistryToNetwork(cid, netName, []string{spec.RegistryName}); err != nil {
+			return "", err
+		}
+		return cid, nil
+	}
+
+	log.Printf("Creating Registry as %s:%d...\n", spec.RegistryName, spec.RegistryPort)
+
+	containerLabels := make(map[string]string)
+
+	// add a standard list of labels to our registry
+	for k, v := range defaultRegistryContainerLabels {
+		containerLabels[k] = v
+	}
+	containerLabels["created"] = time.Now().Format("2006-01-02 15:04:05")
+	containerLabels["hostname"] = spec.RegistryName
+
+	registryPortSpec := fmt.Sprintf("0.0.0.0:%d:%d/tcp", spec.RegistryPort, defaultRegistryPort)
+	registryPublishedPorts, err := CreatePublishedPorts([]string{registryPortSpec})
+	if err != nil {
+		log.Fatalf("Error: failed to parse port specs %+v \n%+v", registryPortSpec, err)
+	}
+
+	hostConfig := &container.HostConfig{
+		PortBindings: registryPublishedPorts.PortBindings,
+		Privileged:   true,
+		Init:         &[]bool{true}[0],
+	}
+
+	if spec.AutoRestart {
+		hostConfig.RestartPolicy.Name = "unless-stopped"
+	}
+
+	spec.Volumes = &Volumes{} // we do not need in the registry any of the volumes used by the other containers
+	if spec.RegistryVolume != "" {
+		vol, err := getVolume(spec.RegistryVolume, map[string]string{})
+		if err != nil {
+			return "", fmt.Errorf(" Couldn't check if volume %s exists: %w", spec.RegistryVolume, err)
+		}
+		if vol != nil {
+			log.Printf("Using existing volume %s for the Registry\n", spec.RegistryVolume)
+		} else {
+			log.Printf("Creating Registry volume %s...\n", spec.RegistryVolume)
+
+			// assign some labels (so we can recognize the volume later on)
+			volLabels := map[string]string{
+				"registry-name": spec.RegistryName,
+				"registry-port": strconv.Itoa(spec.RegistryPort),
+			}
+			for k, v := range defaultRegistryVolumeLabels {
+				volLabels[k] = v
+			}
+			_, err := createVolume(spec.RegistryVolume, volLabels)
+			if err != nil {
+				return "", fmt.Errorf(" Couldn't create volume %s for registry: %w", spec.RegistryVolume, err)
+			}
+		}
+		mount := fmt.Sprintf("%s:%s", spec.RegistryVolume, defaultRegistryMountPath)
+		hostConfig.Binds = []string{mount}
+	}
+
+	// connect the registry to this k3d network
+	networkingConfig := &network.NetworkingConfig{
+		EndpointsConfig: map[string]*network.EndpointSettings{
+			netName: {
+				Aliases: []string{spec.RegistryName},
+			},
+		},
+	}
+
+	config := &container.Config{
+		Hostname:     spec.RegistryName,
+		Image:        defaultRegistryImage,
+		ExposedPorts: registryPublishedPorts.ExposedPorts,
+		Labels:       containerLabels,
+	}
+
+	// we can enable the cache in the Registry by just adding a new env variable
+	// (see https://docs.docker.com/registry/configuration/#override-specific-configuration-options)
+	if spec.RegistryCacheEnabled {
+		log.Printf("Activating pull-through cache to Docker Hub\n")
+		cacheConfigKey := "REGISTRY_PROXY_REMOTEURL"
+		cacheConfigValues := fmt.Sprintf("https://%s", defaultDockerRegistryHubAddress)
+		config.Env = []string{fmt.Sprintf("%s=%s", cacheConfigKey, cacheConfigValues)}
+	}
+
+	id, err := createContainer(config, hostConfig, networkingConfig, defaultRegistryContainerName)
+	if err != nil {
+		return "", fmt.Errorf(" Couldn't create registry container %s\n%w", defaultRegistryContainerName, err)
+	}
+
+	if err := startContainer(id); err != nil {
+		return "", fmt.Errorf(" Couldn't start container %s\n%w", defaultRegistryContainerName, err)
+	}
+
+	return id, nil
+}
+
+// getRegistryContainer looks for the registry container
+func getRegistryContainer() (string, error) {
+	ctx := context.Background()
+	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
+	if err != nil {
+		return "", fmt.Errorf("Couldn't create docker client\n%+v", err)
+	}
+
+	cFilter := filters.NewArgs()
+	cFilter.Add("name", defaultRegistryContainerName)
+	// filter with the standard list of labels of our registry
+	for k, v := range defaultRegistryContainerLabels {
+		cFilter.Add("label", fmt.Sprintf("%s=%s", k, v))
+	}
+
+	containers, err := docker.ContainerList(ctx, types.ContainerListOptions{Filters: cFilter, All: true})
+	if err != nil {
+		return "", fmt.Errorf(" Couldn't list containers: %w", err)
+	}
+	if len(containers) == 0 {
+		return "", nil
+	}
+	return containers[0].ID, nil
+}
+
+// connectRegistryToNetwork connects the registry container to a given network
+func connectRegistryToNetwork(ID string, networkID string, aliases []string) error {
+	if err := connectContainerToNetwork(ID, networkID, aliases); err != nil {
+		return err
+	}
+	return nil
+}
+
+// disconnectRegistryFromNetwork disconnects the Registry from a Network
+// if the Registry container is not connected to any more networks, it is stopped
+func disconnectRegistryFromNetwork(name string, keepRegistryVolume bool) error {
+	// disconnect the registry from this cluster's network
+	netName := k3dNetworkName(name)
+	cid, err := getRegistryContainer()
+	if err != nil {
+		return err
+	}
+	if cid == "" {
+		return nil
+	}
+
+	log.Printf("...Disconnecting Registry from the %s network\n", netName)
+	if err := disconnectContainerFromNetwork(cid, netName); err != nil {
+		return err
+	}
+
+	// check if the registry is not connected to any other networks.
+	// in that case, we can safely stop the registry container
+	networks, err := getContainerNetworks(cid)
+	if err != nil {
+		return err
+	}
+	if len(networks) == 0 {
|
log.Printf("...Removing the Registry\n")
|
||||||
|
volName, err := getVolumeMountedIn(cid, defaultRegistryMountPath)
|
||||||
|
if err != nil {
|
||||||
|
log.Printf("...warning: could not detect registry volume\n")
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := removeContainer(cid); err != nil {
|
||||||
|
log.Println(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// check if the volume mounted in /var/lib/registry was managed by us. In that case (and only if
|
||||||
|
// the user does not want to keep the volume alive), delete the registry volume
|
||||||
|
if volName != "" {
|
||||||
|
vol, err := getVolume(volName, defaultRegistryVolumeLabels)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf(" Couldn't remove volume for registry %s\n%w", defaultRegistryContainerName, err)
|
||||||
|
}
|
||||||
|
if vol != nil {
|
||||||
|
if keepRegistryVolume {
|
||||||
|
log.Printf("...(keeping the Registry volume %s)\n", volName)
|
||||||
|
} else {
|
||||||
|
log.Printf("...Removing the Registry volume %s\n", volName)
|
||||||
|
if err := deleteVolume(volName); err != nil {
|
||||||
|
return fmt.Errorf(" Couldn't remove volume for registry %s\n%w", defaultRegistryContainerName, err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
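The comment in the code above points out that the registry's pull-through cache is enabled purely through an environment variable: the registry image lets any configuration key be overridden by upper-casing its path, replacing dots with underscores, and prefixing `REGISTRY_`. A minimal sketch of that mapping (`registryEnvVar` is a hypothetical helper for illustration; k3d itself hard-codes the one key it needs):

```go
package main

import (
	"fmt"
	"strings"
)

// registryEnvVar builds the override variable the registry image reads for a
// configuration key path such as "proxy.remoteurl": upper-case the path, turn
// dots into underscores, and prefix REGISTRY_.
// (Hypothetical helper, not part of the k3d codebase.)
func registryEnvVar(configPath, value string) string {
	key := "REGISTRY_" + strings.ToUpper(strings.ReplaceAll(configPath, ".", "_"))
	return fmt.Sprintf("%s=%s", key, value)
}

func main() {
	fmt.Println(registryEnvVar("proxy.remoteurl", "https://registry-1.docker.io"))
	// REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io
}
```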
```diff
@@ -47,11 +47,11 @@ func subShell(cluster, shell, command string) error {
 		}
 	}
 	if !supported {
-		return fmt.Errorf("ERROR: selected shell [%s] is not supported", shell)
+		return fmt.Errorf("selected shell [%s] is not supported", shell)
 	}

 	// get kubeconfig for selected cluster
-	kubeConfigPath, err := getKubeConfig(cluster)
+	kubeConfigPath, err := getKubeConfig(cluster, true)
 	if err != nil {
 		return err
 	}
```
**cli/types.go** (new file, +61)
```go
package run

import (
	"github.com/docker/docker/api/types"
	"github.com/docker/go-connections/nat"
)

// Globally used constants
const (
	DefaultRegistry    = "docker.io"
	DefaultServerCount = 1
)

// defaultNodes describes the type of nodes on which a port should be exposed by default
const defaultNodes = "server"

// defaultLabelNodes describes the type of nodes on which a label should be applied by default
const defaultLabelNodes = "all"

// mapping a node role to groups that should be applied to it
var nodeRuleGroupsMap = map[string][]string{
	"worker": {"all", "workers", "agents"},
	"server": {"all", "server", "master"},
}

// Cluster describes an existing cluster
type Cluster struct {
	name        string
	image       string
	status      string
	serverPorts []string
	server      types.Container
	workers     []types.Container
}

// ClusterSpec defines the specs for a cluster that's up for creation
type ClusterSpec struct {
	AgentArgs            []string
	APIPort              apiPort
	AutoRestart          bool
	ClusterName          string
	Env                  []string
	NodeToLabelSpecMap   map[string][]string
	Image                string
	NodeToPortSpecMap    map[string][]string
	PortAutoOffset       int
	RegistriesFile       string
	RegistryEnabled      bool
	RegistryCacheEnabled bool
	RegistryName         string
	RegistryPort         int
	RegistryVolume       string
	ServerArgs           []string
	Volumes              *Volumes
}

// PublishedPorts is a struct used for exposing container ports on the host system
type PublishedPorts struct {
	ExposedPorts map[nat.Port]struct{}
	PortBindings map[nat.Port][]nat.PortBinding
}
```
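The `nodeRuleGroupsMap` above is what lets a selector like `agents` or `master` address a whole class of nodes. The lookup it supports can be sketched in isolation (`rolesFor` is a helper invented for this sketch, not a function from the k3d codebase; the map is copied verbatim):

```go
package main

import (
	"fmt"
	"sort"
)

// nodeRuleGroupsMap, as defined in cli/types.go: node role -> selector names
// that address it.
var nodeRuleGroupsMap = map[string][]string{
	"worker": {"all", "workers", "agents"},
	"server": {"all", "server", "master"},
}

// rolesFor resolves a selector to the node roles it applies to — the lookup
// that port, label, and volume specs perform against this map.
func rolesFor(selector string) []string {
	var roles []string
	for role, names := range nodeRuleGroupsMap {
		for _, name := range names {
			if name == selector {
				roles = append(roles, role)
				break
			}
		}
	}
	sort.Strings(roles) // map iteration order is random; sort for stable output
	return roles
}

func main() {
	fmt.Println(rolesFor("agents")) // [worker]
	fmt.Println(rolesFor("all"))    // [server worker]
}
```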
**cli/util.go**
```diff
@@ -4,6 +4,7 @@ import (
 	"fmt"
 	"math/rand"
 	"net"
+	"os"
 	"strconv"
 	"strings"
 	"time"
@@ -55,10 +56,10 @@ const clusterNameMaxSize int = 35
 // within the 64 characters limit.
 func CheckClusterName(name string) error {
 	if err := ValidateHostname(name); err != nil {
-		return fmt.Errorf("[ERROR] Invalid cluster name\n%+v", ValidateHostname(name))
+		return fmt.Errorf("Invalid cluster name\n%+v", ValidateHostname(name))
 	}
 	if len(name) > clusterNameMaxSize {
-		return fmt.Errorf("[ERROR] Cluster name is too long (%d > %d)", len(name), clusterNameMaxSize)
+		return fmt.Errorf("Cluster name is too long (%d > %d)", len(name), clusterNameMaxSize)
 	}
 	return nil
 }
@@ -67,11 +68,11 @@ func CheckClusterName(name string) error {
 func ValidateHostname(name string) error {

 	if len(name) == 0 {
-		return fmt.Errorf("[ERROR] no name provided")
+		return fmt.Errorf("no name provided")
 	}

 	if name[0] == '-' || name[len(name)-1] == '-' {
-		return fmt.Errorf("[ERROR] Hostname [%s] must not start or end with - (dash)", name)
+		return fmt.Errorf("Hostname [%s] must not start or end with - (dash)", name)
 	}

 	for _, c := range name {
@@ -82,7 +83,7 @@ func ValidateHostname(name string) error {
 		case c == '-':
 			break
 		default:
-			return fmt.Errorf("[ERROR] Hostname [%s] contains characters other than 'Aa-Zz', '0-9' or '-'", name)
+			return fmt.Errorf("Hostname [%s] contains characters other than 'Aa-Zz', '0-9' or '-'", name)
 		}
 	}
@@ -115,8 +116,46 @@ func parseAPIPort(portSpec string) (*apiPort, error) {
 	}

 	if p < 0 || p > 65535 {
-		return nil, fmt.Errorf("ERROR: --api-port port value out of range")
+		return nil, fmt.Errorf("--api-port port value out of range")
 	}

 	return port, nil
 }
+
+func fileExists(filename string) bool {
+	_, err := os.Stat(filename)
+	return !os.IsNotExist(err)
+}
+
+type dnsNameCheck struct {
+	res     chan bool
+	err     chan error
+	timeout time.Duration
+}
+
+func newAsyncNameExists(name string, timeout time.Duration) *dnsNameCheck {
+	d := &dnsNameCheck{
+		res:     make(chan bool),
+		err:     make(chan error),
+		timeout: timeout,
+	}
+	go func() {
+		addresses, err := net.LookupHost(name)
+		if err != nil {
+			d.err <- err
+		}
+		d.res <- len(addresses) > 0
+	}()
+	return d
+}
+
+func (d dnsNameCheck) Exists() (bool, error) {
+	select {
+	case r := <-d.res:
+		return r, nil
+	case e := <-d.err:
+		return false, e
+	case <-time.After(d.timeout):
+		return false, nil
+	}
+}
```
**cli/volume.go**
```diff
@@ -3,61 +3,124 @@ package run
 import (
 	"context"
 	"fmt"
+	"strings"
+
 	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/container"
 	"github.com/docker/docker/api/types/filters"
 	"github.com/docker/docker/api/types/volume"
 	"github.com/docker/docker/client"
 )

-// createImageVolume will create a new docker volume used for storing image tarballs that can be loaded into the clusters
-func createImageVolume(clusterName string) (types.Volume, error) {
+type Volumes struct {
+	DefaultVolumes       []string
+	NodeSpecificVolumes  map[string][]string
+	GroupSpecificVolumes map[string][]string
+}
+
+// createVolume will create a new docker volume
+func createVolume(volName string, volLabels map[string]string) (types.Volume, error) {
 	var vol types.Volume

 	ctx := context.Background()
 	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
 	if err != nil {
-		return vol, fmt.Errorf("ERROR: couldn't create docker client\n%+v", err)
+		return vol, fmt.Errorf(" Couldn't create docker client\n%+v", err)
 	}

-	volName := fmt.Sprintf("k3d-%s-images", clusterName)
-
 	volumeCreateOptions := volume.VolumeCreateBody{
 		Name:   volName,
-		Labels: map[string]string{
-			"app":     "k3d",
-			"cluster": clusterName,
-		},
+		Labels: volLabels,
 		Driver: "local", //TODO: allow setting driver + opts
 		DriverOpts: map[string]string{},
 	}
 	vol, err = docker.VolumeCreate(ctx, volumeCreateOptions)
 	if err != nil {
-		return vol, fmt.Errorf("ERROR: failed to create image volume [%s] for cluster [%s]\n%+v", volName, clusterName, err)
+		return vol, fmt.Errorf("failed to create image volume [%s]\n%+v", volName, err)
 	}

 	return vol, nil
 }

-// deleteImageVolume will delete the volume we created for sharing images with this cluster
-func deleteImageVolume(clusterName string) error {
+// deleteVolume will delete a volume
+func deleteVolume(volName string) error {

 	ctx := context.Background()
 	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
 	if err != nil {
-		return fmt.Errorf("ERROR: couldn't create docker client\n%+v", err)
+		return fmt.Errorf(" Couldn't create docker client\n%+v", err)
 	}

-	volName := fmt.Sprintf("k3d-%s-images", clusterName)
-
 	if err = docker.VolumeRemove(ctx, volName, true); err != nil {
-		return fmt.Errorf("ERROR: couldn't remove volume [%s] for cluster [%s]\n%+v", volName, clusterName, err)
+		return fmt.Errorf(" Couldn't remove volume [%s]\n%+v", volName, err)
 	}

 	return nil
 }

+// getVolume checks if a docker volume exists. The volume can be specified with a name and/or some labels.
+func getVolume(volName string, volLabels map[string]string) (*types.Volume, error) {
+	ctx := context.Background()
+
+	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
+	if err != nil {
+		return nil, fmt.Errorf(" Couldn't create docker client: %w", err)
+	}
+
+	vFilter := filters.NewArgs()
+	if volName != "" {
+		vFilter.Add("name", volName)
+	}
+	for k, v := range volLabels {
+		vFilter.Add("label", fmt.Sprintf("%s=%s", k, v))
+	}
+
+	volumes, err := docker.VolumeList(ctx, vFilter)
+	if err != nil {
+		return nil, fmt.Errorf(" Couldn't list volumes: %w", err)
+	}
+	if len(volumes.Volumes) == 0 {
+		return nil, nil
+	}
+	return volumes.Volumes[0], nil
+}
+
+// getVolumeMountedIn gets the volume that is mounted in some container in some path
+func getVolumeMountedIn(ID string, path string) (string, error) {
+	ctx := context.Background()
+
+	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
+	if err != nil {
+		return "", fmt.Errorf(" Couldn't create docker client: %w", err)
+	}
+
+	c, err := docker.ContainerInspect(ctx, ID)
+	if err != nil {
+		return "", fmt.Errorf(" Couldn't inspect container %s: %w", ID, err)
+	}
+	for _, mount := range c.Mounts {
+		if mount.Destination == path {
+			return mount.Name, nil
+		}
+	}
+	return "", nil
+}
+
+// createImageVolume will create a new docker volume used for storing image tarballs that can be loaded into the clusters
+func createImageVolume(clusterName string) (types.Volume, error) {
+	volName := fmt.Sprintf("k3d-%s-images", clusterName)
+	volLabels := map[string]string{
+		"app":     "k3d",
+		"cluster": clusterName,
+	}
+	return createVolume(volName, volLabels)
+}
+
+// deleteImageVolume will delete the volume we created for sharing images with this cluster
+func deleteImageVolume(clusterName string) error {
+	volName := fmt.Sprintf("k3d-%s-images", clusterName)
+	return deleteVolume(volName)
+}
+
 // getImageVolume returns the docker volume object representing the imagevolume for the cluster
 func getImageVolume(clusterName string) (types.Volume, error) {
 	var vol types.Volume
@@ -66,7 +129,7 @@ func getImageVolume(clusterName string) (types.Volume, error) {
 	ctx := context.Background()
 	docker, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
 	if err != nil {
-		return vol, fmt.Errorf("ERROR: couldn't create docker client\n%+v", err)
+		return vol, fmt.Errorf(" Couldn't create docker client\n%+v", err)
 	}

 	filters := filters.NewArgs()
@@ -74,7 +137,7 @@ func getImageVolume(clusterName string) (types.Volume, error) {
 	filters.Add("label", fmt.Sprintf("cluster=%s", clusterName))
 	volumeList, err := docker.VolumeList(ctx, filters)
 	if err != nil {
-		return vol, fmt.Errorf("ERROR: couldn't get volumes for cluster [%s]\n%+v ", clusterName, err)
+		return vol, fmt.Errorf(" Couldn't get volumes for cluster [%s]\n%+v ", clusterName, err)
 	}
 	volFound := false
 	for _, volume := range volumeList.Volumes {
@@ -85,8 +148,87 @@ func getImageVolume(clusterName string) (types.Volume, error) {
 	}
 	if !volFound {
-		return vol, fmt.Errorf("ERROR: didn't find volume [%s] in list of volumes returned for cluster [%s]", volName, clusterName)
+		return vol, fmt.Errorf("didn't find volume [%s] in list of volumes returned for cluster [%s]", volName, clusterName)
 	}

 	return vol, nil
 }
+
+func NewVolumes(volumes []string) (*Volumes, error) {
+	volumesSpec := &Volumes{
+		DefaultVolumes:       []string{},
+		NodeSpecificVolumes:  make(map[string][]string),
+		GroupSpecificVolumes: make(map[string][]string),
+	}
+
+volumes:
+	for _, volume := range volumes {
+		if strings.Contains(volume, "@") {
+			split := strings.Split(volume, "@")
+			if len(split) != 2 {
+				return nil, fmt.Errorf("invalid node volume spec: %s", volume)
+			}
+
+			nodeVolumes := split[0]
+			node := strings.ToLower(split[1])
+			if len(node) == 0 {
+				return nil, fmt.Errorf("invalid node volume spec: %s", volume)
+			}
+
+			// check if node selector is a node group
+			for group, names := range nodeRuleGroupsMap {
+				added := false
+
+				for _, name := range names {
+					if name == node {
+						volumesSpec.addGroupSpecificVolume(group, nodeVolumes)
+						added = true
+						break
+					}
+				}
+
+				if added {
+					continue volumes
+				}
+			}
+
+			// otherwise this is a volume for a specific node
+			volumesSpec.addNodeSpecificVolume(node, nodeVolumes)
+		} else {
+			volumesSpec.DefaultVolumes = append(volumesSpec.DefaultVolumes, volume)
+		}
+	}
+
+	return volumesSpec, nil
+}
+
+// addVolumesToHostConfig adds all default volumes and node / group specific volumes to a HostConfig
+func (v Volumes) addVolumesToHostConfig(containerName string, groupName string, hostConfig *container.HostConfig) {
+	volumes := v.DefaultVolumes
+
+	if v, ok := v.NodeSpecificVolumes[containerName]; ok {
+		volumes = append(volumes, v...)
+	}
+
+	if v, ok := v.GroupSpecificVolumes[groupName]; ok {
+		volumes = append(volumes, v...)
+	}
+
+	if len(volumes) > 0 {
+		hostConfig.Binds = volumes
+	}
+}
+
+func (v *Volumes) addNodeSpecificVolume(node, volume string) {
+	if _, ok := v.NodeSpecificVolumes[node]; !ok {
+		v.NodeSpecificVolumes[node] = []string{}
+	}
+	v.NodeSpecificVolumes[node] = append(v.NodeSpecificVolumes[node], volume)
+}
+
+func (v *Volumes) addGroupSpecificVolume(group, volume string) {
+	if _, ok := v.GroupSpecificVolumes[group]; !ok {
+		v.GroupSpecificVolumes[group] = []string{}
+	}
+	v.GroupSpecificVolumes[group] = append(v.GroupSpecificVolumes[group], volume)
+}
```
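`NewVolumes` above accepts bind specs of the form `<bind>@<node-or-group>`, falling back to a default volume when no `@` is present. The split-and-validate step can be sketched on its own (`splitVolumeSpec` is a helper invented for this sketch that mirrors the parsing logic, not a function in the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// splitVolumeSpec mirrors the "<bind>@<node-or-group>" parsing in NewVolumes:
// a spec without '@' is a default volume for every node; otherwise the part
// after '@' selects a node name or node group (lower-cased, must be non-empty).
func splitVolumeSpec(spec string) (bind, node string, err error) {
	if !strings.Contains(spec, "@") {
		return spec, "", nil
	}
	split := strings.Split(spec, "@")
	if len(split) != 2 || len(split[1]) == 0 {
		return "", "", fmt.Errorf("invalid node volume spec: %s", spec)
	}
	return split[0], strings.ToLower(split[1]), nil
}

func main() {
	bind, node, _ := splitVolumeSpec("/data:/var/lib/rancher/k3s@SERVER")
	fmt.Println(bind, node) // /data:/var/lib/rancher/k3s server

	if _, _, err := splitVolumeSpec("a@b@c"); err != nil {
		fmt.Println(err) // invalid node volume spec: a@b@c
	}
}
```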
**docs/CNAME** (new file)

```diff
@@ -0,0 +1 @@
+k3d.io
```
````diff
@@ -2,24 +2,4 @@

 ## Functionality
-
-```shell
-COMMANDS:
-  check-tools, ct  Check if docker is running
-  shell            Start a subshell for a cluster
-  create, c        Create a single- or multi-node k3s cluster in docker containers
-  delete, d, del   Delete cluster
-  stop             Stop cluster
-  start            Start a stopped cluster
-  list, ls, l      List all clusters
-  get-kubeconfig   Get kubeconfig location for cluster
-  help, h          Shows a list of commands or help for one command
-
-GLOBAL OPTIONS:
-  --verbose      Enable verbose output
-  --help, -h     show help
-  --version, -v  print the version
-```
-
-## Compatibility with `k3s` functionality/options
-
 ... under construction ...
````
**docs/examples.md**
````diff
@@ -4,11 +4,14 @@

 ### 1. via Ingress

+In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress.
+Therefore, we have to create the cluster in a way, that the internal port 80 (where the `traefik` ingress controller is listening on) is exposed on the host system.
+
 1. Create a cluster, mapping the ingress port 80 to localhost:8081

    `k3d create --api-port 6550 --publish 8081:80 --workers 2`

-   - Note: `--api-port 6550` is not required for the example to work. It's used to have `k3s`'s ApiServer listening on port 6550 with that port mapped to the host system.
+   - Note: `--api-port 6550` is not required for the example to work. It's used to have `k3s`'s API-Server listening on port 6550 with that port mapped to the host system.

 2. Get the kubeconfig file
@@ -23,6 +26,7 @@
    `kubectl create service clusterip nginx --tcp=80:80`

 5. Create an ingress object for it with `kubectl apply -f`
+   *Note*: `k3s` deploys [`traefik`](https://github.com/containous/traefik) as the default ingress controller

 ```YAML
 apiVersion: extensions/v1beta1
@@ -80,106 +84,42 @@

 `curl localhost:8082/`

-## Connect with a local insecure registry
-
-This guide takes you through setting up a local insecure (http) registry and integrating it into your workflow so that:
-- you can push to the registry from your host
-- the cluster managed by k3d can pull from that registry
-
-The registry will be named `registry.local` and run on port `5000`.
-
-### Create the registry
-
-<pre>
-docker volume create local_registry
-
-docker container run -d --name <b>registry.local</b> -v local_registry:/var/lib/registry --restart always -p <b>5000:5000</b> registry:2
-</pre>
-
-### Create the cluster with k3d
-
-First we need a place to store the config template: `mkdir -p /home/${USER}/.k3d`
-
-Create a file named `config.toml.tmpl` in `/home/${USER}/.k3d`, with following content:
-
-<pre>
-# Original section: no changes
-[plugins.opt]
-path = "{{ .NodeConfig.Containerd.Opt }}"
-[plugins.cri]
-stream_server_address = "{{ .NodeConfig.AgentConfig.NodeName }}"
-stream_server_port = "10010"
-{{- if .IsRunningInUserNS }}
-disable_cgroup = true
-disable_apparmor = true
-restrict_oom_score_adj = true
-{{ end -}}
-{{- if .NodeConfig.AgentConfig.PauseImage }}
-sandbox_image = "{{ .NodeConfig.AgentConfig.PauseImage }}"
-{{ end -}}
-{{- if not .NodeConfig.NoFlannel }}
-[plugins.cri.cni]
-bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}"
-conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}"
-{{ end -}}
-
-# Added section: additional registries and the endpoints
-[plugins.cri.registry.mirrors]
-[plugins.cri.registry.mirrors."<b>registry.local:5000</b>"]
-endpoint = ["http://<b>registry.local:5000</b>"]
-</pre>
-
-Finally start a cluster with k3d, passing-in the config template:
-
-```
-CLUSTER_NAME=k3s-default
-k3d create \
---name ${CLUSTER_NAME} \
---wait 0 \
---auto-restart \
---volume /home/${USER}/.k3d/config.toml.tmpl:/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
-```
-
-### Wire them up
-
-- Connect the registry to the cluster network: `docker network connect k3d-k3s-default registry.local`
-- Add `127.0.0.1 registry.local` to your `/etc/hosts`
-
-### Test
-
-Push an image to the registry:
-
-```
-docker pull nginx:latest
-docker tag nginx:latest registry.local:5000/nginx:latest
-docker push registry.local:5000/nginx:latest
-```
-
-Deploy a pod referencing this image to your cluster:
-
-```
-cat <<EOF | kubectl apply -f -
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: nginx-test-registry
-  labels:
-    app: nginx-test-registry
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app: nginx-test-registry
-  template:
-    metadata:
-      labels:
-        app: nginx-test-registry
-    spec:
-      containers:
-      - name: nginx-test-registry
-        image: registry.local:5000/nginx:latest
-        ports:
-        - containerPort: 80
-EOF
-```
-
-... and check that the pod is running: `kubectl get pods -l "app=nginx-test-registry"`
+## Running on filesystems k3s doesn't like (btrfs, tmpfs, …)
+
+The following script leverages a [Docker loopback volume plugin](https://github.com/ashald/docker-volume-loopback) to mask the problematic filesystem away from k3s by providing a small ext4 filesystem underneath `/var/lib/rancher/k3s` (k3s' data dir).
+
+```bash
+#!/bin/bash -x
+
+CLUSTER_NAME="${1:-k3s-default}"
+NUM_WORKERS="${2:-2}"
+
+setup() {
+  PLUGIN_LS_OUT=`docker plugin ls --format '{{.Name}},{{.Enabled}}' | grep -E '^ashald/docker-volume-loopback'`
+  [ -z "${PLUGIN_LS_OUT}" ] && docker plugin install ashald/docker-volume-loopback DATA_DIR=/tmp/docker-loop/data
+  sleep 3
+  [ "${PLUGIN_LS_OUT##*,}" != "true" ] && docker plugin enable ashald/docker-volume-loopback
+
+  K3D_MOUNTS=()
+  for i in `seq 0 ${NUM_WORKERS}`; do
+    [ ${i} -eq 0 ] && VOLUME_NAME="k3d-${CLUSTER_NAME}-server" || VOLUME_NAME="k3d-${CLUSTER_NAME}-worker-$((${i}-1))"
+    docker volume create -d ashald/docker-volume-loopback ${VOLUME_NAME} -o sparse=true -o fs=ext4
+    K3D_MOUNTS+=('-v' "${VOLUME_NAME}:/var/lib/rancher/k3s@${VOLUME_NAME}")
+  done
+  k3d c -i rancher/k3s:v0.9.1 -n ${CLUSTER_NAME} -w ${NUM_WORKERS} ${K3D_MOUNTS[@]}
+}
+
+cleanup() {
+  K3D_VOLUMES=()
+  k3d d -n ${CLUSTER_NAME}
+  for i in `seq 0 ${NUM_WORKERS}`; do
+    [ ${i} -eq 0 ] && VOLUME_NAME="k3d-${CLUSTER_NAME}-server" || VOLUME_NAME="k3d-${CLUSTER_NAME}-worker-$((${i}-1))"
+    K3D_VOLUMES+=("${VOLUME_NAME}")
+  done
+  docker volume rm -f ${K3D_VOLUMES[@]}
+}
+
+setup
+#cleanup
+```
````
**docs/faq.md**
```diff
@@ -1,4 +1,13 @@
 # FAQ / Nice to know

 - As [@jaredallard](https://github.com/jaredallard) [pointed out](https://github.com/rancher/k3d/pull/48), people running `k3d` on a system with **btrfs**, may need to mount `/dev/mapper` into the nodes for the setup to work.
   - This will do: `k3d create -v /dev/mapper:/dev/mapper`
+  - An additional solution proposed by [@zer0def](https://github.com/zer0def) can be found in the [examples section](examples.md) (_Running on filesystems k3s doesn't like (btrfs, tmpfs, …)_)
+
+- Pods go to evicted state after doing X
+  - Related issues: [#133 - Pods evicted due to `NodeHasDiskPressure`](https://github.com/rancher/k3d/issues/133) (collection of #119 and #130)
+  - Background: somehow docker runs out of space for the k3d node containers, which triggers a hard eviction in the kubelet
+  - Possible [fix/workaround by @zer0def](https://github.com/rancher/k3d/issues/133#issuecomment-549065666):
+    - use a docker storage driver which cleans up properly (e.g. overlay2)
+    - clean up or expand docker root filesystem
+    - change the kubelet's eviction thresholds upon cluster creation: `k3d create --agent-arg '--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%' --agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%'`
```
**docs/registries.md** (new file, +244)
# Using registries with k3d

## <a name="registries-file"></a>Registries configuration file

You can add registries by specifying them in a `registries.yaml` in your `$HOME/.k3d` directory.
This file will be loaded automatically by k3d if present and will be shared between all your
k3d clusters, but you can also use a specific file for a new cluster with the
`--registries-file` argument.

This file is a regular [k3s registries configuration file](https://rancher.com/docs/k3s/latest/en/installation/airgap/#create-registry-yaml),
and looks like this:

```yaml
mirrors:
  "my.company.registry:5000":
    endpoint:
      - http://my.company.registry:5000
```

In this example, an image with a name like `my.company.registry:5000/nginx:latest` would be
_pulled_ from the registry running at `http://my.company.registry:5000`.

Note an important limitation: **this configuration file will only work with
k3s >= v0.10.0**. It fails silently with previous versions of k3s, but you will
find an alternative solution in the [section below](#k3s-old).
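Since the failure is silent, it is worth confirming which k3s version your setup actually uses; the `version` subcommand added in this changeset prints both versions (a quick sketch, assuming `k3d` is on your `PATH`):

```shell
# Print the k3d and the bundled k3s version to confirm k3s >= v0.10.0
k3d version 2>/dev/null || echo "k3d not found on PATH"
```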
This file can also be used for providing additional information necessary for accessing
some registries, like [authentication](#auth) and [certificates](#certs).

### <a name="auth"></a>Authenticated registries

When using authenticated registries, we can add the _username_ and _password_ in a
`configs` section in the `registries.yaml`, like this:

```yaml
mirrors:
  my.company.registry:
    endpoint:
      - http://my.company.registry

configs:
  my.company.registry:
    auth:
      username: aladin
      password: abracadabra
```
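From the host, the same credentials can be sanity-checked with a plain `docker login`; the registry name and credentials below are the hypothetical ones from the example above, so outside that setup the command simply reports failure:

```shell
# Try the example credentials against the (hypothetical) registry
echo "abracadabra" | docker login my.company.registry --username aladin --password-stdin \
  || echo "login failed (expected without a real registry at that address)"
```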
### <a name="certs"></a>Secure registries

When using secure registries, the [`registries.yaml` file](#registries-file) must include information
about the certificates. For example, if you want to use images from the secure registry
running at `https://my.company.registry`, you must first download a CA file valid for that server
and store it in some well-known directory like `${HOME}/.k3d/my-company-root.pem`.

Then you have to mount the CA file in some directory in the nodes of the cluster and
include that mounted file in a `configs` section in the [`registries.yaml` file](#registries-file).
For example, if we mount the CA file at `/etc/ssl/certs/my-company-root.pem`, the `registries.yaml`
will look like:

```yaml
mirrors:
  my.company.registry:
    endpoint:
      - https://my.company.registry

configs:
  my.company.registry:
    tls:
      # we will mount "my-company-root.pem" in the /etc/ssl/certs/ directory.
      ca_file: "/etc/ssl/certs/my-company-root.pem"
```

Finally, we can create the cluster, mounting the CA file at the path we
specified in `ca_file`:

```shell script
k3d create --volume ${HOME}/.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem ...
```
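Before wiring the CA file into the cluster, you can check on the host that it really matches what the registry serves; `my.company.registry` and the `.pem` path are the hypothetical names from the example above, so outside that setup the command just reports that the host is unreachable:

```shell
# Ask the registry for its certificate chain and verify it against the CA file
echo | openssl s_client -connect my.company.registry:443 \
    -CAfile "${HOME}/.k3d/my-company-root.pem" 2>/dev/null \
  | grep "Verify return code" \
  || echo "could not reach the registry (expected outside the example setup)"
```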
## Using a local registry

### Using the k3d registry

k3d can manage a local registry that you can use for pushing your images to, and your k3d nodes
will be able to use those images automatically. k3d will create the registry for you and connect
it to your k3d cluster. It is important to note that this registry will be shared between all
your k3d clusters, and it will be released when the last of the k3d clusters that was using it
is deleted.

In order to enable the k3d registry when creating a new cluster, you must run k3d with the
`--enable-registry` argument:

```shell script
k3d create --enable-registry ...
```

Then you must make it accessible as described in [the next section](#etc-hosts), and
then you should [check your local registry](#testing).
### Using your own local registry

If you don't want k3d to manage your registry, you can start it with some `docker` commands, like:

```shell script
docker volume create local_registry
docker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 5000:5000 registry:2
```

These commands will start your registry at `registry.localhost:5000`. In order to push to this registry, you will
need to make it accessible as described in [the next section](#etc-hosts). Once your
registry is up and running, you will need to add it to your [`registries.yaml` configuration file](#registries-file).
Finally, you must connect the registry network to the k3d cluster network:
`docker network connect k3d-k3s-default registry.localhost`. Then you can
[check your local registry](#testing).
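A quick way to confirm the registry container is alive and reachable from the host is to query the Docker registry v2 HTTP API (an empty registry answers `{"repositories":[]}`):

```shell
# List the repositories the local registry currently holds
curl -s http://registry.localhost:5000/v2/_catalog \
  || echo "registry not reachable (is the container running and the hostname resolvable?)"
```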
### <a name="etc-hosts"></a>Pushing to your local registry address

The registry will be located, by default, at `registry.localhost:5000` (customizable with the `--registry-name`
and `--registry-port` parameters). All the nodes in your k3d cluster can resolve this hostname (thanks to the
DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname
must also be resolvable from your host.

Luckily (for Linux users), [NSS-myhostname](http://man7.org/linux/man-pages/man8/nss-myhostname.8.html) ships with many Linux distributions
and should resolve `*.localhost` automatically to `127.0.0.1`.
Otherwise, it can be installed with `sudo apt install libnss-myhostname`.

If that's not the case, you can add an entry to your `/etc/hosts` file like this:

```shell script
127.0.0.1 registry.localhost
```

Once again, this will only work with k3s >= v0.10.0 (see the [section below](#k3s-old)
when using k3s <= v0.9.1).
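Whether your machine needs the `/etc/hosts` entry can be checked in one go (uses `getent`, so Linux; this mirrors the check the test suite in this changeset performs):

```shell
# Does registry.localhost resolve on this host (nss-myhostname or /etc/hosts)?
if getent hosts registry.localhost >/dev/null 2>&1; then
  echo "registry.localhost resolves"
else
  echo "add '127.0.0.1 registry.localhost' to /etc/hosts"
fi
```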
### <a name="registry-volume"></a>Local registry volume

The local k3d registry uses a volume for storing the images. This volume will be destroyed
when the k3d registry is released. In order to persist this volume and make these images survive
the removal of the registry, you can specify a volume with the `--registry-volume` flag and use the
`--keep-registry-volume` flag when deleting the cluster. This will create a volume with the given
name the first time the registry is used, while successive invocations will just mount this
existing volume in the k3d registry container.
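You can check from the host whether a given volume name already exists, and therefore whether `--registry-volume` would reuse it or create it; `local_registry` here is just an illustrative name:

```shell
# Would --registry-volume reuse an existing volume or create a new one?
docker volume inspect local_registry >/dev/null 2>&1 \
  && echo "volume exists and would be reused" \
  || echo "volume does not exist yet (or docker is unavailable)"
```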
### <a name="docker-hub-cache"></a>Docker Hub Cache

The local k3d registry can also be used for caching images from the Docker Hub. You can start the
registry as a pull-through cache when the cluster is created with `--enable-registry-cache`. Used
in conjunction with `--registry-volume`/`--keep-registry-volume`, this can speed up all the downloads
from the Hub by keeping a persistent cache of images on your local machine.

**Note**: This disables the registry for pushing local images to it! ([Comment](https://github.com/rancher/k3d/pull/207#issuecomment-617318637))
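Putting the flags together, a cluster whose registry both caches the Hub and survives recreation could be started like this (a sketch; `registry_cache` is an illustrative volume name, and the command needs `k3d` plus a running Docker daemon to succeed):

```shell
k3d create --enable-registry --enable-registry-cache \
    --registry-volume registry_cache 2>/dev/null \
  && echo "cluster created with a caching registry" \
  || echo "creation failed (k3d and a running docker daemon are required)"
```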
## <a name="testing"></a>Testing your registry

You should test that you can

* push to your registry from your local development machine.
* use images from that registry in `Deployments` in your k3d cluster.

We will verify these two things for a local registry (located at `registry.localhost:5000`) running
on your development machine. Things would be basically the same for checking an external
registry, but some additional configuration could be necessary on your local machine when
using an authenticated or secure registry (please refer to Docker's documentation for this).

First, we can download some image (like `nginx`) and push it to our local registry with:
```shell script
docker pull nginx:latest
docker tag nginx:latest registry.localhost:5000/nginx:latest
docker push registry.localhost:5000/nginx:latest
```

Then we can deploy a pod referencing this image in your cluster:

```shell script
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-registry
  labels:
    app: nginx-test-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test-registry
  template:
    metadata:
      labels:
        app: nginx-test-registry
    spec:
      containers:
      - name: nginx-test-registry
        image: registry.localhost:5000/nginx:latest
        ports:
        - containerPort: 80
EOF
```

Then you should check that the pod is running with `kubectl get pods -l "app=nginx-test-registry"`.
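For a less manual check, `kubectl rollout status` blocks until the Deployment created above is ready (or the timeout expires):

```shell
# Wait up to 60s for the test Deployment to become ready
kubectl rollout status deployment/nginx-test-registry --timeout=60s 2>/dev/null \
  || echo "deployment not ready (is the cluster from the steps above running?)"
```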
## <a name="k3s-old"></a>Configuring registries for k3s <= v0.9.1

k3s servers at v0.9.1 and below do not recognize the `registries.yaml` file described in
the [previous section](#registries-file), so you will need to embed the contents of that
file in a `containerd` configuration file. You will have to create your own `containerd`
configuration file at some well-known path like `${HOME}/.k3d/config.toml.tmpl`, like this:

<pre>
# Original section: no changes
[plugins.opt]
path = "{{ .NodeConfig.Containerd.Opt }}"
[plugins.cri]
stream_server_address = "{{ .NodeConfig.AgentConfig.NodeName }}"
stream_server_port = "10010"
{{- if .IsRunningInUserNS }}
disable_cgroup = true
disable_apparmor = true
restrict_oom_score_adj = true
{{ end -}}
{{- if .NodeConfig.AgentConfig.PauseImage }}
sandbox_image = "{{ .NodeConfig.AgentConfig.PauseImage }}"
{{ end -}}
{{- if not .NodeConfig.NoFlannel }}
[plugins.cri.cni]
  bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}"
  conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}"
{{ end -}}

# Added section: additional registries and the endpoints
[plugins.cri.registry.mirrors]
  [plugins.cri.registry.mirrors."<b>registry.localhost:5000</b>"]
    endpoint = ["http://<b>registry.localhost:5000</b>"]
</pre>

and then mount it at `/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl` (where
the `containerd` in your k3d nodes will load it) when creating the k3d cluster:

```bash
k3d create \
    --volume ${HOME}/.k3d/config.toml.tmpl:/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
```
go.mod
@@ -18,12 +18,13 @@ require (
 	github.com/olekukonko/tablewriter v0.0.1
 	github.com/opencontainers/go-digest v1.0.0-rc1 // indirect
 	github.com/opencontainers/image-spec v1.0.1 // indirect
-	github.com/pkg/errors v0.8.1 // indirect
+	github.com/pkg/errors v0.8.1
-	github.com/sirupsen/logrus v1.4.2 // indirect
+	github.com/sirupsen/logrus v1.5.0
 	github.com/stretchr/testify v1.3.0 // indirect
 	github.com/urfave/cli v1.20.0
 	golang.org/x/net v0.0.0-20190403144856-b630fd6fe46b // indirect
 	golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 // indirect
 	google.golang.org/grpc v1.22.0 // indirect
+	gopkg.in/yaml.v2 v2.2.7
 	gotest.tools v2.2.0+incompatible // indirect
 )
go.sum
@@ -49,10 +49,9 @@ github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
 github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4=
+github.com/sirupsen/logrus v1.5.0 h1:1N5EYkVAPEywqZRJd7cwnRtCb6xJx7NH3T3WUTF980Q=
-github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
+github.com/sirupsen/logrus v1.5.0/go.mod h1:+F7Ogzej0PZc/94MaYx/nvG9jOFMD2osvC3s+Squfpo=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
-github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
 github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
 github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
@@ -81,6 +80,10 @@ google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8 h1:Nw54tB0rB7hY/N0
 google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
 google.golang.org/grpc v1.22.0 h1:J0UbZOIrCAl+fpTOf8YLs4dJo8L/owV4LYVtAXQoPkw=
 google.golang.org/grpc v1.22.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo=
+gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
 gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
 honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
@@ -75,6 +75,11 @@ checkK3dInstalledVersion() {
 	fi
 }
 
+# checkTagProvided checks whether TAG has provided as an environment variable so we can skip checkLatestVersion.
+checkTagProvided() {
+	[[ ! -z "$TAG" ]]
+}
+
 # checkLatestVersion grabs the latest version string from the releases
 checkLatestVersion() {
 	local latest_release_url="$REPO_URL/releases/latest"
@@ -178,7 +183,7 @@ set +u
 initArch
 initOS
 verifySupported
-checkLatestVersion
+checkTagProvided || checkLatestVersion
 if ! checkK3dInstalledVersion; then
 	downloadFile
 	installFile
main.go
@@ -2,17 +2,22 @@ package main
 import (
 	"fmt"
-	"log"
+	"io/ioutil"
 	"os"
 
+	log "github.com/sirupsen/logrus"
+	"github.com/sirupsen/logrus/hooks/writer"
+	"github.com/urfave/cli"
+
 	run "github.com/rancher/k3d/cli"
 	"github.com/rancher/k3d/version"
-	"github.com/urfave/cli"
 )
 
 // defaultK3sImage specifies the default image being used for server and workers
 const defaultK3sImage = "docker.io/rancher/k3s"
 const defaultK3sClusterName string = "k3s-default"
+const defaultRegistryName = "registry.localhost"
+const defaultRegistryPort = 5000
 
 // main represents the CLI application
 func main() {
@@ -22,19 +27,6 @@ func main() {
 	app.Name = "k3d"
 	app.Usage = "Run k3s in Docker!"
 	app.Version = version.GetVersion()
-	app.Authors = []cli.Author{
-		{
-			Name:  "Thorsten Klein",
-			Email: "iwilltry42@gmail.com",
-		},
-		{
-			Name:  "Rishabh Gupta",
-			Email: "r.g.gupta@outlook.com",
-		},
-		{
-			Name: "Darren Shepherd",
-		},
-	}
-
 	// commands that you can execute
 	app.Commands = []cli.Command{
@@ -83,8 +75,9 @@ func main() {
 				Usage: "Mount one or more volumes into every node of the cluster (Docker notation: `source:destination`)",
 			},
 			cli.StringSliceFlag{
-				Name:  "publish, add-port",
-				Usage: "Publish k3s node ports to the host (Format: `[ip:][host-port:]container-port[/protocol]@node-specifier`, use multiple options to expose more ports)",
+				// TODO: remove publish/add-port soon, to clean up
+				Name:  "port, p, publish, add-port",
+				Usage: "Publish k3s node ports to the host (Format: `-p [ip:][host-port:]container-port[/protocol]@node-specifier`, use multiple options to expose more ports)",
 			},
 			cli.IntFlag{
 				Name: "port-auto-offset",
@@ -92,20 +85,14 @@ func main() {
 				Usage: "Automatically add an offset (* worker number) to the chosen host port when using `--publish` to map the same container-port from multiple k3d workers to the host",
 			},
 			cli.StringFlag{
-				// TODO: to be deprecated
-				Name:  "version",
-				Usage: "Choose the k3s image version",
-			},
-			cli.StringFlag{
-				// TODO: only --api-port, -a soon since we want to use --port, -p for the --publish/--add-port functionality
-				Name:  "api-port, a, port, p",
+				Name:  "api-port, a",
 				Value: "6443",
-				Usage: "Specify the Kubernetes cluster API server port (Format: `[host:]port` (Note: --port/-p will be used for arbitrary port mapping as of v2.0.0, use --api-port/-a instead for setting the api port)",
+				Usage: "Specify the Kubernetes cluster API server port (Format: `-a [host:]port`",
 			},
 			cli.IntFlag{
 				Name:  "wait, t",
-				Value: 0, // timeout
-				Usage: "Wait for the cluster to come up before returning until timoout (in seconds). Use --wait 0 to wait forever",
+				Value: -1,
+				Usage: "Wait for a maximum of `TIMEOUT` seconds (>= 0) for the cluster to be ready and rollback if it doesn't come up in time. Disabled by default (-1).",
 			},
 			cli.StringFlag{
 				Name:  "image, i",
@@ -124,6 +111,10 @@ func main() {
 				Name:  "env, e",
 				Usage: "Pass an additional environment variable (new flag per variable)",
 			},
+			cli.StringSliceFlag{
+				Name:  "label, l",
+				Usage: "Add a docker label to node container (Format: `key[=value][@node-specifier]`, new flag per label)",
+			},
 			cli.IntFlag{
 				Name:  "workers, w",
 				Value: 0,
@@ -133,9 +124,92 @@ func main() {
 				Name:  "auto-restart",
 				Usage: "Set docker's --restart=unless-stopped flag on the containers",
 			},
+			cli.BoolFlag{
+				Name:  "enable-registry",
+				Usage: "Start a local Docker registry",
+			},
+			cli.StringFlag{
+				Name:  "registry-name",
+				Value: defaultRegistryName,
+				Usage: "Name of the local registry container",
+			},
+			cli.IntFlag{
+				Name:  "registry-port",
+				Value: defaultRegistryPort,
+				Usage: "Port of the local registry container",
+			},
+			cli.StringFlag{
+				Name:  "registry-volume",
+				Usage: "Use a specific volume for the registry storage (will be created if not existing)",
+			},
+			cli.StringFlag{
+				Name:  "registries-file",
+				Usage: "registries.yaml config file",
+			},
+			cli.BoolFlag{
+				Name:  "enable-registry-cache",
+				Usage: "Use the local registry as a cache for the Docker Hub (Note: This disables pushing local images to the registry!)",
+			},
 			},
 			Action: run.CreateCluster,
 		},
+		/*
+		 * Add a new node to an existing k3d/k3s cluster (choosing k3d by default)
+		 */
+		{
+			Name:  "add-node",
+			Usage: "[EXPERIMENTAL] Add nodes to an existing k3d/k3s cluster (k3d by default)",
+			Flags: []cli.Flag{
+				cli.StringFlag{
+					Name:  "role, r",
+					Usage: "Choose role of the node you want to add [agent|server]",
+					Value: "agent",
+				},
+				cli.StringFlag{
+					Name:  "name, n",
+					Usage: "Name of the k3d cluster that you want to add a node to [only for node name if --k3s is set]",
+					Value: defaultK3sClusterName,
+				},
+				cli.IntFlag{
+					Name:  "count, c",
+					Usage: "Number of nodes that you want to add",
+					Value: 1,
+				},
+				cli.StringFlag{
+					Name:  "image, i",
+					Usage: "Specify a k3s image (Format: <repo>/<image>:<tag>)",
+					Value: fmt.Sprintf("%s:%s", defaultK3sImage, version.GetK3sVersion()),
+				},
+				cli.StringSliceFlag{
+					Name:  "arg, x",
+					Usage: "Pass arguments to the k3s server/agent command.",
+				},
+				cli.StringSliceFlag{
+					Name:  "env, e",
+					Usage: "Pass an additional environment variable (new flag per variable)",
+				},
+				cli.StringSliceFlag{
+					Name:  "volume, v",
+					Usage: "Mount one or more volumes into every created node (Docker notation: `source:destination`)",
+				},
+				/*
+				 * Connect to a non-dockerized k3s cluster
+				 */
+				cli.StringFlag{
+					Name:  "k3s",
+					Usage: "Add a k3d node to a non-k3d k3s cluster (specify k3s server URL like this `https://<host>:<port>`) [requires k3s-secret or k3s-token]",
+				},
+				cli.StringFlag{
+					Name:  "k3s-secret, s",
+					Usage: "Specify k3s cluster secret (or use --k3s-token to use a node token)",
+				},
+				cli.StringFlag{
+					Name:  "k3s-token, t",
+					Usage: "Specify k3s node token (or use --k3s-secret to use a cluster secret)[overrides k3s-secret]",
+				},
+			},
+			Action: run.AddNode,
+		},
 		{
 			// delete deletes an existing k3s cluster (remove container and cluster directory)
 			Name: "delete",
@@ -151,6 +225,14 @@ func main() {
 				Name:  "all, a",
 				Usage: "Delete all existing clusters (this ignores the --name/-n flag)",
 			},
+			cli.BoolFlag{
+				Name:  "prune",
+				Usage: "Disconnect any other non-k3d containers in the network before deleting the cluster",
+			},
+			cli.BoolFlag{
+				Name:  "keep-registry-volume",
+				Usage: "Do not delete the registry volume",
+			},
 			},
 			Action: run.DeleteCluster,
 		},
@@ -193,13 +275,7 @@ func main() {
 			Name:    "list",
 			Aliases: []string{"ls", "l"},
 			Usage:   "List all clusters",
-			Flags: []cli.Flag{
-				cli.BoolFlag{
-					Name:  "all, a",
-					Usage: "Also show non-running clusters",
-				},
-			},
 			Action: run.ListClusters,
 		},
 		{
 			// get-kubeconfig grabs the kubeconfig from the cluster and prints the path to it
@@ -215,6 +291,10 @@ func main() {
 				Name:  "all, a",
 				Usage: "Get kubeconfig for all clusters (this ignores the --name/-n flag)",
 			},
+			cli.BoolFlag{
+				Name:  "overwrite, o",
+				Usage: "Overwrite any existing file with the same name",
+			},
 			},
 			Action: run.GetKubeConfig,
 		},
@@ -236,6 +316,14 @@ func main() {
 			},
 			Action: run.ImportImage,
 		},
+		{
+			Name:  "version",
+			Usage: "print k3d and k3s version",
+			Action: func(c *cli.Context) {
+				fmt.Println("k3d version", version.GetVersion())
+				fmt.Println("k3s version", version.GetK3sVersion())
+			},
+		},
 	}
 
 	// Global flags
@@ -244,6 +332,47 @@ func main() {
 			Name:  "verbose",
 			Usage: "Enable verbose output",
 		},
+		cli.BoolFlag{
+			Name:  "timestamp",
+			Usage: "Enable timestamps in logs messages",
+		},
 	}
 
+	// init log level
+	app.Before = func(c *cli.Context) error {
+		log.SetOutput(ioutil.Discard)
+		log.AddHook(&writer.Hook{
+			Writer: os.Stderr,
+			LogLevels: []log.Level{
+				log.PanicLevel,
+				log.FatalLevel,
+				log.ErrorLevel,
+				log.WarnLevel,
+			},
+		})
+		log.AddHook(&writer.Hook{
+			Writer: os.Stdout,
+			LogLevels: []log.Level{
+				log.InfoLevel,
+				log.DebugLevel,
+			},
+		})
+		log.SetFormatter(&log.TextFormatter{
+			ForceColors: true,
+		})
+		if c.GlobalBool("verbose") {
+			log.SetLevel(log.DebugLevel)
+		} else {
+			log.SetLevel(log.InfoLevel)
+		}
+		if c.GlobalBool("timestamp") {
+			log.SetFormatter(&log.TextFormatter{
+				FullTimestamp: true,
+				ForceColors:   true,
+			})
+		}
+
+		return nil
+	}
+
 	// run the whole thing
tests/01-basic.sh (new executable file, +42)
@@ -0,0 +1,42 @@
#!/bin/bash
|
||||||
|
|
||||||
|
CURR_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
|
||||||
|
[ -d "$CURR_DIR" ] || { echo "FATAL: no current dir (maybe running in zsh?)"; exit 1; }
|
||||||
|
|
||||||
|
# shellcheck source=./common.sh
|
||||||
|
source "$CURR_DIR/common.sh"
|
||||||
|
|
||||||
|
#########################################################################################
|
||||||
|
|
||||||
|
FIXTURES_DIR=$CURR_DIR/fixtures
|
||||||
|
|
||||||
|
# a dummy registries.yaml file
|
||||||
|
REGISTRIES_YAML=$FIXTURES_DIR/01-registries-empty.yaml
|
||||||
|
|
||||||
|
#########################################################################################
|
||||||
|
|
||||||
|
info "Creating two clusters c1 and c2..."
|
||||||
|
$EXE create --wait 60 --name "c1" --api-port 6443 -v $REGISTRIES_YAML:/etc/rancher/k3s/registries.yaml || failed "could not create cluster c1"
|
||||||
|
$EXE create --wait 60 --name "c2" --api-port 6444 || failed "could not create cluster c2"
|
||||||
|
|
||||||
|
info "Checking we have access to both clusters..."
|
||||||
|
check_k3d_clusters "c1" "c2" || failed "error checking cluster"
|
||||||
|
dump_registries_yaml_in "c1" "c2"
|
||||||
|
|
||||||
|
info "Creating a cluster with a wrong --registries-file argument..."
|
||||||
|
$EXE create --wait 60 --name "c3" --api-port 6445 --registries-file /etc/inexistant || passed "expected error with a --registries-file that does not exist"
|
||||||
|
|
||||||
|
info "Attaching a container to c2"
|
||||||
|
background=$(docker run -d --rm alpine sleep 3000)
|
||||||
|
docker network connect "k3d-c2" "$background"
|
||||||
|
|
||||||
|
info "Deleting clusters c1 and c2..."
|
||||||
|
$EXE delete --name "c1" || failed "could not delete the cluster c1"
|
||||||
|
$EXE delete --name "c2" --prune || failed "could not delete the cluster c2"
|
||||||
|
|
||||||
|
info "Stopping attached container"
|
||||||
|
docker stop "$background" >/dev/null
|
||||||
|
|
||||||
|
exit 0
|
||||||
|
|
||||||
|
|
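The `--registries-file` check above inverts the usual pattern: the create command is expected to fail. A small helper makes that intent explicit; this is a sketch with a hypothetical `expect_failure` name, not a helper that exists in tests/common.sh:

```shell
#!/bin/bash
# expect_failure runs a command that SHOULD fail and returns 0 only if the
# command exited non-zero (hypothetical helper, not part of tests/common.sh)
expect_failure() {
    if "$@" >/dev/null 2>&1 ; then
        echo "FAIL: '$*' succeeded but was expected to fail" >&2
        return 1
    fi
    return 0
}

# example: reading a file that does not exist must fail
expect_failure cat /etc/inexistant && echo "got the expected failure"
```

With such a helper, an unexpected success aborts the test instead of passing silently.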
tests/02-registry.sh (new executable file, 105 lines):

#!/bin/bash

CURR_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
[ -d "$CURR_DIR" ] || { echo "FATAL: no current dir (maybe running in zsh?)"; exit 1; }

# shellcheck source=./common.sh
source "$CURR_DIR/common.sh"

#########################################################################################

REGISTRY_NAME="registry.localhost"
REGISTRY_PORT="5000"
REGISTRY="$REGISTRY_NAME:$REGISTRY_PORT"
TEST_IMAGE="nginx:latest"

FIXTURES_DIR=$CURR_DIR/fixtures

# a dummy registries.yaml file
REGISTRIES_YAML=$FIXTURES_DIR/01-registries-empty.yaml

#########################################################################################

info "Checking that $REGISTRY_NAME is resolvable"
if ! getent hosts "$REGISTRY_NAME" ; then
    [ "$CI" = "true" ] || abort "$REGISTRY_NAME is not resolvable. Please add an entry to /etc/hosts manually."

    info "Adding '127.0.0.1 $REGISTRY_NAME' to /etc/hosts"
    echo "127.0.0.1 $REGISTRY_NAME" | sudo tee -a /etc/hosts
else
    passed "... good: $REGISTRY_NAME is in /etc/hosts"
fi

info "Creating two clusters (with a registry)..."
$EXE create --wait 60 --name "c1" --api-port 6443 --enable-registry --registry-volume "reg-vol" || failed "could not create cluster c1"
$EXE create --wait 60 --name "c2" --api-port 6444 --enable-registry --registries-file "$REGISTRIES_YAML" || failed "could not create cluster c2"

check_k3d_clusters "c1" "c2" || failed "error checking clusters"
dump_registries_yaml_in "c1" "c2"

check_registry || abort "local registry not available at $REGISTRY"
passed "Local registry running at $REGISTRY"

info "Deleting the c1 cluster: the registry should remain..."
$EXE delete --name "c1" --keep-registry-volume || failed "could not delete the cluster c1"
check_registry || abort "local registry not available at $REGISTRY after removing c1"
passed "The local registry is still running"

info "Checking that the reg-vol volume still exists after removing c1"
check_volume_exists "reg-vol" || abort "the registry volume 'reg-vol' does not seem to exist"
passed "reg-vol still exists"

info "Pulling a test image..."
docker pull "$TEST_IMAGE"
docker tag "$TEST_IMAGE" "$REGISTRY/$TEST_IMAGE"

info "Pushing to $REGISTRY..."
docker push "$REGISTRY/$TEST_IMAGE"

info "Using the image in the registry in the second cluster..."
cat <<EOF | kubectl apply --kubeconfig=$($EXE get-kubeconfig --name "c2") -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-registry
  labels:
    app: test-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-registry
  template:
    metadata:
      labels:
        app: test-registry
    spec:
      containers:
      - name: test-registry
        image: $REGISTRY/$TEST_IMAGE
        ports:
        - containerPort: 80
EOF

kubectl --kubeconfig=$($EXE get-kubeconfig --name "c2") wait --for=condition=available --timeout=600s deployment/test-registry \
    || abort "deployment with the local registry failed"
passed "Local registry seems to be usable"

info "Deleting the c2 cluster: the registry should be removed now..."
$EXE delete --name "c2" --keep-registry-volume || failed "could not delete the cluster c2"
check_registry && abort "local registry still running at $REGISTRY"
passed "The local registry has been removed"

info "Creating a new cluster that uses a registry with an existing 'reg-vol' volume..."
check_volume_exists "reg-vol" || abort "the registry volume 'reg-vol' does not exist"
$EXE create --wait 60 --name "c3" --api-port 6445 --enable-registry --registry-volume "reg-vol" || failed "could not create cluster c3"

info "Deleting the c3 cluster: the registry should be removed and, this time, the volume too..."
$EXE delete --name "c3" || failed "could not delete the cluster c3"
check_volume_exists "reg-vol" && abort "the registry volume 'reg-vol' still exists"
passed "'reg-vol' has been removed"

exit 0
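The registry probe above runs once, which can race with the registry container still starting up. A retry wrapper is one way to harden such probes; this is a standalone sketch (the `retry` name is hypothetical, not part of tests/common.sh):

```shell
#!/bin/bash
# retry N CMD...: run CMD up to N times, sleeping 1s between attempts;
# succeeds as soon as CMD does, fails if every attempt fails
retry() {
    local attempts=$1 i
    shift
    for (( i = 1; i <= attempts; i++ )) ; do
        "$@" && return 0
        [ "$i" -lt "$attempts" ] && sleep 1
    done
    return 1
}

# example: a probe that succeeds on the first attempt
retry 5 true && echo "probe succeeded"
```

A call like `retry 10 check_registry` would then tolerate a slow registry start.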
tests/common.sh (new executable file, 88 lines):

#!/bin/bash

RED='\033[1;31m'
GRN='\033[1;32m'
YEL='\033[1;33m'
BLU='\033[1;34m'
WHT='\033[1;37m'
MGT='\033[1;95m'
CYA='\033[1;96m'
END='\033[0m'
BLOCK='\033[1;37m'

PATH=/usr/local/bin:$PATH
export PATH

log() { >&2 printf "${BLOCK}>>>${END} $1\n"; }

info() { log "${BLU}$1${END}"; }
highlight() { log "${MGT}$1${END}"; }

bye() {
  log "${BLU}$1... exiting${END}"
  exit 0
}

warn() { log "${RED}!!! WARNING !!! $1 ${END}"; }

abort() {
  log "${RED}FATAL: $1${END}"
  exit 1
}

command_exists() {
  command -v "$1" >/dev/null 2>&1
}

failed() {
  if [ -z "$1" ] ; then
    log "${RED}failed!!!${END}"
  else
    log "${RED}$1${END}"
  fi
  abort "test failed"
}

passed() {
  if [ -z "$1" ] ; then
    log "${GRN}done!${END}"
  else
    log "${GRN}$1${END}"
  fi
}

dump_registries_yaml_in() {
  for cluster in "$@" ; do
    info "registries.yaml in cluster $cluster:"
    docker exec -t "k3d-$cluster-server" cat /etc/rancher/k3s/registries.yaml
  done
}

# checks that a URL is available
check_url() {
  command_exists curl || abort "curl is not installed"
  curl -L --silent -k --output /dev/null --fail "$1"
}

check_k3d_clusters() {
  [ -n "$EXE" ] || abort "EXE is not defined"
  for c in "c1" "c2" ; do
    kc=$($EXE get-kubeconfig --name "$c")
    [ -n "$kc" ] || abort "could not obtain a kubeconfig for $c"
    if kubectl --kubeconfig="$kc" cluster-info ; then
      passed "cluster $c is reachable (with kubeconfig=$kc)"
    else
      warn "could not obtain cluster info for $c (with kubeconfig=$kc). Contents:\n$(cat "$kc")"
      return 1
    fi
  done
  return 0
}

check_registry() {
  check_url "$REGISTRY/v2/_catalog"
}

check_volume_exists() {
  docker volume inspect "$1" >/dev/null 2>&1
}
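Put together, a new test script sources these helpers and leans on `info`/`passed`/`failed` for flow control. A minimal standalone sketch of that pattern (the helpers are re-implemented inline here, without colors, so the example runs without the file):

```shell
#!/bin/bash
# inline re-implementation of the common.sh logging pattern, for illustration
log()    { >&2 printf ">>> %s\n" "$1"; }
info()   { log "$1"; }
passed() { log "${1:-done!}"; }
failed() { log "${1:-failed!!!}"; exit 1; }

info "Checking that a required command is present..."
command -v sh >/dev/null 2>&1 || failed "sh is not installed"
passed "sh is available"
```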
tests/dind.sh (new executable file, 32 lines):

#!/bin/bash

K3D_EXE=${EXE:-/bin/k3d}
K3D_IMAGE_TAG=$1

# set E2E_KEEP to a non-empty value to keep the e2e runner container after running the tests
E2E_KEEP=${E2E_KEEP:-}

####################################################################################

TIMESTAMP=$(date "+%m%d%H%M%S")

k3de2e=$(docker run -d \
          -v "$(pwd)"/tests:/tests \
          --privileged \
          -e EXE="$K3D_EXE" \
          -e CI="true" \
          --name "k3d-e2e-runner-$TIMESTAMP" \
          "k3d:$K3D_IMAGE_TAG")

sleep 5 # wait 5 seconds for docker to start

# Execute tests
finish() {
  docker stop "$k3de2e" || /bin/true
  if [ -z "$E2E_KEEP" ] ; then
    docker rm "$k3de2e" || /bin/true
  fi
}
trap finish EXIT

docker exec "$k3de2e" /tests/runner.sh
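The `finish`/`trap` pair above is the standard shell idiom for cleanup that runs on every exit path, not just the happy one. The same pattern with a temp file standing in for the container, as a self-contained sketch:

```shell
#!/bin/bash
# create a scratch file and guarantee its removal on exit,
# whether the script finishes normally or aborts early
scratch=$(mktemp)

finish() {
    rm -f "$scratch"
}
trap finish EXIT

echo "work in progress" > "$scratch"
grep -q "progress" "$scratch" && echo "work recorded"
# finish() runs automatically when the script exits
```

Because the trap fires on any exit, a `failed`/`abort` mid-run still leaves no stray runner container behind (unless E2E_KEEP asks for it).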
tests/fixtures/01-registries-empty.yaml (vendored, new file, 4 lines):

mirrors:
  "dummy.local":
    endpoint:
      - http://localhost:32000
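The fixture maps a dummy mirror to a local endpoint in the registries.yaml format that k3s reads. When a test needs such a file generated on the fly, a heredoc keeps it next to the test code; a sketch (the `write_registries_yaml` helper and its values are illustrative, not part of the suite):

```shell
#!/bin/bash
# write a single-mirror registries.yaml (illustrative helper and values)
write_registries_yaml() {
    local target=$1 mirror=$2 endpoint=$3
    cat > "$target" <<EOF
mirrors:
  "$mirror":
    endpoint:
      - $endpoint
EOF
}

write_registries_yaml /tmp/registries-test.yaml "dummy.local" "http://localhost:32000"
grep -q 'dummy.local' /tmp/registries-test.yaml && echo "fixture written"
rm -f /tmp/registries-test.yaml
```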
tests/runner.sh (new executable file, 22 lines):

#!/bin/bash

CURR_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
[ -d "$CURR_DIR" ] || { echo "FATAL: no current dir (maybe running in zsh?)"; exit 1; }

# shellcheck source=./common.sh
source "$CURR_DIR/common.sh"

#########################################################################################

[ -n "$EXE" ] || abort "no EXE provided"

info "Starting e2e tests..."

for i in "$CURR_DIR"/[0-9]*.sh ; do
  base=$(basename "$i" .sh)
  highlight "***** Running $base *****"
  $i || abort "test $base failed"
done

exit 0
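runner.sh discovers tests through the `[0-9]*.sh` glob, so the numeric prefix fixes the execution order (glob expansion is sorted). The same discovery loop over a scratch directory, as a standalone sketch:

```shell
#!/bin/bash
# demonstrate ordered discovery of numbered test scripts
dir=$(mktemp -d)
printf '#!/bin/sh\necho one\n' > "$dir/01-first.sh"
printf '#!/bin/sh\necho two\n' > "$dir/02-second.sh"
chmod +x "$dir"/*.sh

for t in "$dir"/[0-9]*.sh ; do
    base=$(basename "$t" .sh)
    echo "running $base"
    "$t" || { echo "test $base failed" >&2 ; exit 1 ; }
done
rm -rf "$dir"
```

Adding a new e2e test is therefore just a matter of dropping a numbered, executable script into tests/.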
vendor/github.com/sirupsen/logrus/.golangci.yml (generated, vendored, new file, 40 lines):

run:
  # do not run on test files yet
  tests: false

# all available settings of specific linters
linters-settings:
  errcheck:
    # report about not checking of errors in type assetions: `a := b.(MyStruct)`;
    # default is false: such cases aren't reported by default.
    check-type-assertions: false

    # report about assignment of errors to blank identifier: `num, _ := strconv.Atoi(numStr)`;
    # default is false: such cases aren't reported by default.
    check-blank: false

  lll:
    line-length: 100
    tab-width: 4

  prealloc:
    simple: false
    range-loops: false
    for-loops: false

  whitespace:
    multi-if: false   # Enforces newlines (or comments) after every multi-line if statement
    multi-func: false # Enforces newlines (or comments) after every multi-line function signature

linters:
  enable:
    - megacheck
    - govet
  disable:
    - maligned
    - prealloc
  disable-all: false
  presets:
    - bugs
    - unused
  fast: false
vendor/github.com/sirupsen/logrus/.travis.yml (generated, vendored):

@@ -4,21 +4,13 @@ git:
   depth: 1
 env:
   - GO111MODULE=on
-  - GO111MODULE=off
-go: [ 1.11.x, 1.12.x ]
-os: [ linux, osx ]
-matrix:
-  exclude:
-    - go: 1.12.x
-      env: GO111MODULE=off
-    - go: 1.11.x
-      os: osx
+go: [1.13.x, 1.14.x]
+os: [linux, osx]
 install:
   - ./travis/install.sh
-  - if [[ "$GO111MODULE" == "on" ]]; then go mod download; fi
-  - if [[ "$GO111MODULE" == "off" ]]; then go get github.com/stretchr/testify/assert golang.org/x/sys/unix github.com/konsorten/go-windows-terminal-sequences; fi
 script:
   - ./travis/cross_build.sh
+  - ./travis/lint.sh
   - export GOMAXPROCS=4
   - export GORACE=halt_on_error=1
   - go test -race -v ./...
vendor/github.com/sirupsen/logrus/README.md (generated, vendored):

@@ -1,8 +1,28 @@
 # Logrus <img src="http://i.imgur.com/hTeVwmJ.png" width="40" height="40" alt=":walrus:" class="emoji" title=":walrus:"/> [](https://travis-ci.org/sirupsen/logrus) [](https://godoc.org/github.com/sirupsen/logrus)

 Logrus is a structured logger for Go (golang), completely API compatible with
 the standard library logger.

+**Logrus is in maintenance-mode.** We will not be introducing new features. It's
+simply too hard to do in a way that won't break many people's projects, which is
+the last thing you want from your Logging library (again...).
+
+This does not mean Logrus is dead. Logrus will continue to be maintained for
+security, (backwards compatible) bug fixes, and performance (where we are
+limited by the interface).
+
+I believe Logrus' biggest contribution is to have played a part in today's
+widespread use of structured logging in Golang. There doesn't seem to be a
+reason to do a major, breaking iteration into Logrus V2, since the fantastic Go
+community has built those independently. Many fantastic alternatives have sprung
+up. Logrus would look like those, had it been re-designed with what we know
+about structured logging in Go today. Check out, for example,
+[Zerolog][zerolog], [Zap][zap], and [Apex][apex].
+
+[zerolog]: https://github.com/rs/zerolog
+[zap]: https://github.com/uber-go/zap
+[apex]: https://github.com/apex/log
+
 **Seeing weird case-sensitive problems?** It's in the past been possible to
 import Logrus as both upper- and lower-case. Due to the Go package environment,
 this caused issues in the community and we needed a standard. Some environments
@@ -15,11 +35,6 @@ comments](https://github.com/sirupsen/logrus/issues/553#issuecomment-306591437).
 For an in-depth explanation of the casing issue, see [this
 comment](https://github.com/sirupsen/logrus/issues/570#issuecomment-313933276).

-**Are you interested in assisting in maintaining Logrus?** Currently I have a
-lot of obligations, and I am unable to provide Logrus with the maintainership it
-needs. If you'd like to help, please reach out to me at `simon at author's
-username dot com`.
-
 Nicely color-coded in development (when a TTY is attached, otherwise just
 plain text):

@@ -187,7 +202,7 @@ func main() {
   log.Out = os.Stdout

   // You could set this to any `io.Writer` such as a file
-  // file, err := os.OpenFile("logrus.log", os.O_CREATE|os.O_WRONLY, 0666)
+  // file, err := os.OpenFile("logrus.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
   // if err == nil {
   //  log.Out = file
   // } else {
@@ -272,7 +287,7 @@ func init() {
 ```
 Note: Syslog hook also support connecting to local syslog (Ex. "/dev/log" or "/var/run/syslog" or "/var/run/log"). For the detail, please check the [syslog hook README](hooks/syslog/README.md).

-A list of currently known of service hook can be found in this wiki [page](https://github.com/sirupsen/logrus/wiki/Hooks)
+A list of currently known service hooks can be found in this wiki [page](https://github.com/sirupsen/logrus/wiki/Hooks)


 #### Level logging
@@ -354,6 +369,7 @@ The built-in logging formatters are:
   [github.com/mattn/go-colorable](https://github.com/mattn/go-colorable).
   * When colors are enabled, levels are truncated to 4 characters by default. To disable
   truncation set the `DisableLevelTruncation` field to `true`.
+  * When outputting to a TTY, it's often helpful to visually scan down a column where all the levels are the same width. Setting the `PadLevelText` field to `true` enables this behavior, by adding padding to the level text.
   * All options are listed in the [generated docs](https://godoc.org/github.com/sirupsen/logrus#TextFormatter).
 * `logrus.JSONFormatter`. Logs fields as JSON.
   * All options are listed in the [generated docs](https://godoc.org/github.com/sirupsen/logrus#JSONFormatter).
@@ -364,8 +380,10 @@ Third party logging formatters:
 * [`GELF`](https://github.com/fabienm/go-logrus-formatters). Formats entries so they comply to Graylog's [GELF 1.1 specification](http://docs.graylog.org/en/2.4/pages/gelf.html).
 * [`logstash`](https://github.com/bshuster-repo/logrus-logstash-hook). Logs fields as [Logstash](http://logstash.net) Events.
 * [`prefixed`](https://github.com/x-cray/logrus-prefixed-formatter). Displays log entry source along with alternative layout.
 * [`zalgo`](https://github.com/aybabtme/logzalgo). Invoking the P͉̫o̳̼̊w̖͈̰͎e̬͔̭͂r͚̼̹̲ ̫͓͉̳͈ō̠͕͖̚f̝͍̠ ͕̲̞͖͑Z̖̫̤̫ͪa͉̬͈̗l͖͎g̳̥o̰̥̅!̣͔̲̻͊̄ ̙̘̦̹̦.
 * [`nested-logrus-formatter`](https://github.com/antonfisher/nested-logrus-formatter). Converts logrus fields to a nested structure.
+* [`powerful-logrus-formatter`](https://github.com/zput/zxcTool). get fileName, log's line number and the latest function's name when print log; Sava log to files.
+* [`caption-json-formatter`](https://github.com/nolleh/caption_json_formatter). logrus's message json formatter with human-readable caption added.

 You can define your formatter by implementing the `Formatter` interface,
 requiring a `Format` method. `Format` takes an `*Entry`. `entry.Data` is a
@@ -430,14 +448,14 @@ entries. It should not be a feature of the application-level logger.

 | Tool | Description |
 | ---- | ----------- |
-|[Logrus Mate](https://github.com/gogap/logrus_mate)|Logrus mate is a tool for Logrus to manage loggers, you can initial logger's level, hook and formatter by config file, the logger will generated with different config at different environment.|
+|[Logrus Mate](https://github.com/gogap/logrus_mate)|Logrus mate is a tool for Logrus to manage loggers, you can initial logger's level, hook and formatter by config file, the logger will be generated with different configs in different environments.|
 |[Logrus Viper Helper](https://github.com/heirko/go-contrib/tree/master/logrusHelper)|An Helper around Logrus to wrap with spf13/Viper to load configuration with fangs! And to simplify Logrus configuration use some behavior of [Logrus Mate](https://github.com/gogap/logrus_mate). [sample](https://github.com/heirko/iris-contrib/blob/master/middleware/logrus-logger/example) |

 #### Testing

 Logrus has a built in facility for asserting the presence of log messages. This is implemented through the `test` hook and provides:

-* decorators for existing logger (`test.NewLocal` and `test.NewGlobal`) which basically just add the `test` hook
+* decorators for existing logger (`test.NewLocal` and `test.NewGlobal`) which basically just adds the `test` hook
 * a test logger (`test.NewNullLogger`) that just records log messages (and does not output any):

 ```go
@@ -465,7 +483,7 @@ func TestSomething(t*testing.T){

 Logrus can register one or more functions that will be called when any `fatal`
 level message is logged. The registered handlers will be executed before
-logrus performs a `os.Exit(1)`. This behavior may be helpful if callers need
+logrus performs an `os.Exit(1)`. This behavior may be helpful if callers need
 to gracefully shutdown. Unlike a `panic("Something went wrong...")` call which can be intercepted with a deferred `recover` a call to `os.Exit(1)` can not be intercepted.

 ```
@@ -490,6 +508,6 @@ Situation when locking is not needed includes:

 1) logger.Out is protected by locks.

-2) logger.Out is a os.File handler opened with `O_APPEND` flag, and every write is smaller than 4k. (This allow multi-thread/multi-process writing)
+2) logger.Out is an os.File handler opened with `O_APPEND` flag, and every write is smaller than 4k. (This allows multi-thread/multi-process writing)

 (Refer to http://www.notthewizard.com/2014/06/17/are-files-appends-really-atomic/)
vendor/github.com/sirupsen/logrus/entry.go (generated, vendored):

@@ -85,10 +85,15 @@ func NewEntry(logger *Logger) *Entry {
 	}
 }

+// Returns the bytes representation of this entry from the formatter.
+func (entry *Entry) Bytes() ([]byte, error) {
+	return entry.Logger.Formatter.Format(entry)
+}
+
 // Returns the string representation from the reader and ultimately the
 // formatter.
 func (entry *Entry) String() (string, error) {
-	serialized, err := entry.Logger.Formatter.Format(entry)
+	serialized, err := entry.Bytes()
 	if err != nil {
 		return "", err
 	}
@@ -103,7 +108,11 @@ func (entry *Entry) WithError(err error) *Entry {

 // Add a context to the Entry.
 func (entry *Entry) WithContext(ctx context.Context) *Entry {
-	return &Entry{Logger: entry.Logger, Data: entry.Data, Time: entry.Time, err: entry.err, Context: ctx}
+	dataCopy := make(Fields, len(entry.Data))
+	for k, v := range entry.Data {
+		dataCopy[k] = v
+	}
+	return &Entry{Logger: entry.Logger, Data: dataCopy, Time: entry.Time, err: entry.err, Context: ctx}
 }

 // Add a single field to the Entry.
@@ -113,6 +122,8 @@ func (entry *Entry) WithField(key string, value interface{}) *Entry {

 // Add a map of fields to the Entry.
 func (entry *Entry) WithFields(fields Fields) *Entry {
+	entry.Logger.mu.Lock()
+	defer entry.Logger.mu.Unlock()
 	data := make(Fields, len(entry.Data)+len(fields))
 	for k, v := range entry.Data {
 		data[k] = v
@@ -144,7 +155,11 @@ func (entry *Entry) WithFields(fields Fields) *Entry {

 // Overrides the time of the Entry.
 func (entry *Entry) WithTime(t time.Time) *Entry {
-	return &Entry{Logger: entry.Logger, Data: entry.Data, Time: t, err: entry.err, Context: entry.Context}
+	dataCopy := make(Fields, len(entry.Data))
+	for k, v := range entry.Data {
+		dataCopy[k] = v
+	}
+	return &Entry{Logger: entry.Logger, Data: dataCopy, Time: t, err: entry.err, Context: entry.Context}
 }

 // getPackageName reduces a fully qualified function name to the package name
@@ -165,15 +180,20 @@ func getPackageName(f string) string {

 // getCaller retrieves the name of the first non-logrus calling function
 func getCaller() *runtime.Frame {
 	// cache this package's fully-qualified name
 	callerInitOnce.Do(func() {
-		pcs := make([]uintptr, 2)
+		pcs := make([]uintptr, maximumCallerDepth)
 		_ = runtime.Callers(0, pcs)
-		logrusPackage = getPackageName(runtime.FuncForPC(pcs[1]).Name())

-		// now that we have the cache, we can skip a minimum count of known-logrus functions
-		// XXX this is dubious, the number of frames may vary
+		// dynamic get the package name and the minimum caller depth
+		for i := 0; i < maximumCallerDepth; i++ {
+			funcName := runtime.FuncForPC(pcs[i]).Name()
+			if strings.Contains(funcName, "getCaller") {
+				logrusPackage = getPackageName(funcName)
+				break
+			}
+		}
+
 		minimumCallerDepth = knownLogrusFrames
 	})

@@ -187,7 +207,7 @@ func getCaller() *runtime.Frame {

 		// If the caller isn't part of this package, we're done
 		if pkg != logrusPackage {
-			return &f
+			return &f //nolint:scopelint
 		}
 	}

@@ -217,9 +237,11 @@ func (entry Entry) log(level Level, msg string) {

 	entry.Level = level
 	entry.Message = msg
+	entry.Logger.mu.Lock()
 	if entry.Logger.ReportCaller {
 		entry.Caller = getCaller()
 	}
+	entry.Logger.mu.Unlock()

 	entry.fireHooks()

@@ -255,11 +277,10 @@ func (entry *Entry) write() {
 	serialized, err := entry.Logger.Formatter.Format(entry)
 	if err != nil {
 		fmt.Fprintf(os.Stderr, "Failed to obtain reader, %v\n", err)
-	} else {
-		_, err = entry.Logger.Out.Write(serialized)
-		if err != nil {
-			fmt.Fprintf(os.Stderr, "Failed to write to log, %v\n", err)
-		}
+		return
+	}
+	if _, err = entry.Logger.Out.Write(serialized); err != nil {
+		fmt.Fprintf(os.Stderr, "Failed to write to log, %v\n", err)
 	}
 }
vendor/github.com/sirupsen/logrus/exported.go (generated, vendored):

@@ -80,7 +80,7 @@ func WithFields(fields Fields) *Entry {
 	return std.WithFields(fields)
 }

-// WithTime creats an entry from the standard logger and overrides the time of
+// WithTime creates an entry from the standard logger and overrides the time of
 // logs generated with it.
 //
 // Note that it doesn't log until you call Debug, Print, Info, Warn, Fatal
3
vendor/github.com/sirupsen/logrus/go.mod
generated
vendored
3
vendor/github.com/sirupsen/logrus/go.mod
generated
vendored
@ -4,7 +4,8 @@ require (
|
|||||||
github.com/davecgh/go-spew v1.1.1 // indirect
|
github.com/davecgh/go-spew v1.1.1 // indirect
|
||||||
github.com/konsorten/go-windows-terminal-sequences v1.0.1
|
github.com/konsorten/go-windows-terminal-sequences v1.0.1
|
||||||
github.com/pmezard/go-difflib v1.0.0 // indirect
|
github.com/pmezard/go-difflib v1.0.0 // indirect
|
||||||
github.com/stretchr/objx v0.1.1 // indirect
|
|
||||||
github.com/stretchr/testify v1.2.2
|
github.com/stretchr/testify v1.2.2
|
||||||
golang.org/x/sys v0.0.0-20190422165155-953cdadca894
|
golang.org/x/sys v0.0.0-20190422165155-953cdadca894
|
||||||
)
|
)
|
||||||
|
|
||||||
|
go 1.13
|
||||||
|
vendor/github.com/sirupsen/logrus/go.sum  (6 changes, generated, vendored)

@@ -1,16 +1,10 @@
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/konsorten/go-windows-terminal-sequences v0.0.0-20180402223658-b729f2633dfe h1:CHRGQ8V7OlCYtwaKPJi3iA7J+YdNKdo8j7nG5IgDhjs=
-github.com/konsorten/go-windows-terminal-sequences v0.0.0-20180402223658-b729f2633dfe/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
 github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
 github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/stretchr/objx v0.1.1 h1:2vfRuCMp5sSVIDSqO8oNnWJq7mPa6KVP3iPIwFBuy8A=
-github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
 github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
-golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33 h1:I6FyU15t786LL7oL/hn43zqTuEGr4PN7F4XJ1p4E3Y8=
-golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190422165155-953cdadca894 h1:Cz4ceDQGXuKRnVBDTS23GTn/pU5OE2C0WrNTOYK1Uuc=
 golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
vendor/github.com/sirupsen/logrus/hooks/writer/README.md  (new file, 43 lines, generated, vendored)

@@ -0,0 +1,43 @@
+# Writer Hooks for Logrus
+
+Send logs of given levels to any object with `io.Writer` interface.
+
+## Usage
+
+If you want for example send high level logs to `Stderr` and
+logs of normal execution to `Stdout`, you could do it like this:
+
+```go
+package main
+
+import (
+	"io/ioutil"
+	"os"
+
+	log "github.com/sirupsen/logrus"
+	"github.com/sirupsen/logrus/hooks/writer"
+)
+
+func main() {
+	log.SetOutput(ioutil.Discard) // Send all logs to nowhere by default
+
+	log.AddHook(&writer.Hook{ // Send logs with level higher than warning to stderr
+		Writer: os.Stderr,
+		LogLevels: []log.Level{
+			log.PanicLevel,
+			log.FatalLevel,
+			log.ErrorLevel,
+			log.WarnLevel,
+		},
+	})
+	log.AddHook(&writer.Hook{ // Send info and debug logs to stdout
+		Writer: os.Stdout,
+		LogLevels: []log.Level{
+			log.InfoLevel,
+			log.DebugLevel,
+		},
+	})
+	log.Info("This will go to stdout")
+	log.Warn("This will go to stderr")
+}
+```
vendor/github.com/sirupsen/logrus/hooks/writer/writer.go  (new file, 29 lines, generated, vendored)

@@ -0,0 +1,29 @@
+package writer
+
+import (
+	"io"
+
+	log "github.com/sirupsen/logrus"
+)
+
+// Hook is a hook that writes logs of specified LogLevels to specified Writer
+type Hook struct {
+	Writer    io.Writer
+	LogLevels []log.Level
+}
+
+// Fire will be called when some logging function is called with current hook
+// It will format log entry to string and write it to appropriate writer
+func (hook *Hook) Fire(entry *log.Entry) error {
+	line, err := entry.Bytes()
+	if err != nil {
+		return err
+	}
+	_, err = hook.Writer.Write(line)
+	return err
+}
+
+// Levels define on which log levels this hook would trigger
+func (hook *Hook) Levels() []log.Level {
+	return hook.LogLevels
+}
vendor/github.com/sirupsen/logrus/json_formatter.go  (4 changes, generated, vendored)

@@ -28,6 +28,9 @@ type JSONFormatter struct {
 	// DisableTimestamp allows disabling automatic timestamps in output
 	DisableTimestamp bool

+	// DisableHTMLEscape allows disabling html escaping in output
+	DisableHTMLEscape bool
+
 	// DataKey allows users to put all the log entry parameters into a nested dictionary at a given key.
 	DataKey string

@@ -110,6 +113,7 @@ func (f *JSONFormatter) Format(entry *Entry) ([]byte, error) {
 	}

 	encoder := json.NewEncoder(b)
+	encoder.SetEscapeHTML(!f.DisableHTMLEscape)
 	if f.PrettyPrint {
 		encoder.SetIndent("", "  ")
 	}
vendor/github.com/sirupsen/logrus/logger.go  (11 changes, generated, vendored)

@@ -68,10 +68,10 @@ func (mw *MutexWrap) Disable() {
 // `Out` and `Hooks` directly on the default logger instance. You can also just
 // instantiate your own:
 //
-//    var log = &Logger{
+//    var log = &logrus.Logger{
 //      Out: os.Stderr,
-//      Formatter: new(JSONFormatter),
-//      Hooks: make(LevelHooks),
+//      Formatter: new(logrus.JSONFormatter),
+//      Hooks: make(logrus.LevelHooks),
 //      Level: logrus.DebugLevel,
 //    }
 //

@@ -100,8 +100,9 @@ func (logger *Logger) releaseEntry(entry *Entry) {
 	logger.entryPool.Put(entry)
 }

-// Adds a field to the log entry, note that it doesn't log until you call
-// Debug, Print, Info, Warn, Error, Fatal or Panic. It only creates a log entry.
+// WithField allocates a new entry and adds a field to it.
+// Debug, Print, Info, Warn, Error, Fatal or Panic must be then applied to
+// this new returned entry.
 // If you want multiple fields, use `WithFields`.
 func (logger *Logger) WithField(key string, value interface{}) *Entry {
 	entry := logger.newEntry()
vendor/github.com/sirupsen/logrus/logrus.go  (2 changes, generated, vendored)

@@ -51,7 +51,7 @@ func (level *Level) UnmarshalText(text []byte) error {
 		return err
 	}

-	*level = Level(l)
+	*level = l

 	return nil
 }
vendor/github.com/sirupsen/logrus/terminal_check_bsd.go  (2 changes, generated, vendored)

@@ -1,4 +1,5 @@
 // +build darwin dragonfly freebsd netbsd openbsd
+// +build !js

 package logrus

@@ -10,4 +11,3 @@ func isTerminal(fd int) bool {
 	_, err := unix.IoctlGetTermios(fd, ioctlReadTermios)
 	return err == nil
 }
-
vendor/github.com/sirupsen/logrus/terminal_check_js.go  (new file, 7 lines, generated, vendored)

@@ -0,0 +1,7 @@
+// +build js
+
+package logrus
+
+func isTerminal(fd int) bool {
+	return false
+}
vendor/github.com/sirupsen/logrus/terminal_check_unix.go  (2 changes, generated, vendored)

@@ -1,4 +1,5 @@
 // +build linux aix
+// +build !js

 package logrus

@@ -10,4 +11,3 @@ func isTerminal(fd int) bool {
 	_, err := unix.IoctlGetTermios(fd, ioctlReadTermios)
 	return err == nil
 }
-
vendor/github.com/sirupsen/logrus/text_formatter.go  (47 changes, generated, vendored)

@@ -6,9 +6,11 @@ import (
 	"os"
 	"runtime"
 	"sort"
+	"strconv"
 	"strings"
 	"sync"
 	"time"
+	"unicode/utf8"
 )

 const (

@@ -32,6 +34,9 @@ type TextFormatter struct {
 	// Force disabling colors.
 	DisableColors bool

+	// Force quoting of all values
+	ForceQuote bool
+
 	// Override coloring based on CLICOLOR and CLICOLOR_FORCE. - https://bixense.com/clicolors/
 	EnvironmentOverrideColors bool

@@ -57,6 +62,10 @@ type TextFormatter struct {
 	// Disables the truncation of the level text to 4 characters.
 	DisableLevelTruncation bool

+	// PadLevelText Adds padding the level text so that all the levels output at the same length
+	// PadLevelText is a superset of the DisableLevelTruncation option
+	PadLevelText bool
+
 	// QuoteEmptyFields will wrap empty fields in quotes if true
 	QuoteEmptyFields bool

@@ -79,23 +88,32 @@ type TextFormatter struct {
 	CallerPrettyfier func(*runtime.Frame) (function string, file string)

 	terminalInitOnce sync.Once
+
+	// The max length of the level text, generated dynamically on init
+	levelTextMaxLength int
 }

 func (f *TextFormatter) init(entry *Entry) {
 	if entry.Logger != nil {
 		f.isTerminal = checkIfTerminal(entry.Logger.Out)
 	}
+	// Get the max length of the level text
+	for _, level := range AllLevels {
+		levelTextLength := utf8.RuneCount([]byte(level.String()))
+		if levelTextLength > f.levelTextMaxLength {
+			f.levelTextMaxLength = levelTextLength
+		}
+	}
 }

 func (f *TextFormatter) isColored() bool {
 	isColored := f.ForceColors || (f.isTerminal && (runtime.GOOS != "windows"))

 	if f.EnvironmentOverrideColors {
-		if force, ok := os.LookupEnv("CLICOLOR_FORCE"); ok && force != "0" {
+		switch force, ok := os.LookupEnv("CLICOLOR_FORCE"); {
+		case ok && force != "0":
 			isColored = true
-		} else if ok && force == "0" {
-			isColored = false
-		} else if os.Getenv("CLICOLOR") == "0" {
+		case ok && force == "0", os.Getenv("CLICOLOR") == "0":
 			isColored = false
 		}
 	}

@@ -217,9 +235,18 @@ func (f *TextFormatter) printColored(b *bytes.Buffer, entry *Entry, keys []strin
 	}

 	levelText := strings.ToUpper(entry.Level.String())
-	if !f.DisableLevelTruncation {
+	if !f.DisableLevelTruncation && !f.PadLevelText {
 		levelText = levelText[0:4]
 	}
+	if f.PadLevelText {
+		// Generates the format string used in the next line, for example "%-6s" or "%-7s".
+		// Based on the max level text length.
+		formatString := "%-" + strconv.Itoa(f.levelTextMaxLength) + "s"
+		// Formats the level text by appending spaces up to the max length, for example:
+		// - "INFO   "
+		// - "WARNING"
+		levelText = fmt.Sprintf(formatString, levelText)
+	}

 	// Remove a single newline if it already exists in the message to keep
 	// the behavior of logrus text_formatter the same as the stdlib log package

@@ -243,11 +270,12 @@ func (f *TextFormatter) printColored(b *bytes.Buffer, entry *Entry, keys []strin
 		}
 	}

-	if f.DisableTimestamp {
+	switch {
+	case f.DisableTimestamp:
 		fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m%s %-44s ", levelColor, levelText, caller, entry.Message)
-	} else if !f.FullTimestamp {
+	case !f.FullTimestamp:
 		fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m[%04d]%s %-44s ", levelColor, levelText, int(entry.Time.Sub(baseTimestamp)/time.Second), caller, entry.Message)
-	} else {
+	default:
 		fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m[%s]%s %-44s ", levelColor, levelText, entry.Time.Format(timestampFormat), caller, entry.Message)
 	}
 	for _, k := range keys {

@@ -258,6 +286,9 @@ func (f *TextFormatter) printColored(b *bytes.Buffer, entry *Entry, keys []strin
 	}

 func (f *TextFormatter) needsQuoting(text string) bool {
+	if f.ForceQuote {
+		return true
+	}
 	if f.QuoteEmptyFields && len(text) == 0 {
 		return true
 	}
vendor/github.com/sirupsen/logrus/writer.go  (6 changes, generated, vendored)

@@ -6,10 +6,16 @@ import (
 	"runtime"
 )

+// Writer at INFO level. See WriterLevel for details.
 func (logger *Logger) Writer() *io.PipeWriter {
 	return logger.WriterLevel(InfoLevel)
 }

+// WriterLevel returns an io.Writer that can be used to write arbitrary text to
+// the logger at the given log level. Each line written to the writer will be
+// printed in the usual way using formatters and hooks. The writer is part of an
+// io.Pipe and it is the callers responsibility to close the writer when done.
+// This can be used to override the standard library logger easily.
 func (logger *Logger) WriterLevel(level Level) *io.PipeWriter {
 	return NewEntry(logger).WriterLevel(level)
 }
vendor/gopkg.in/yaml.v2/.travis.yml  (new file, 16 lines, generated, vendored)

@@ -0,0 +1,16 @@
+language: go
+
+go:
+    - "1.4.x"
+    - "1.5.x"
+    - "1.6.x"
+    - "1.7.x"
+    - "1.8.x"
+    - "1.9.x"
+    - "1.10.x"
+    - "1.11.x"
+    - "1.12.x"
+    - "1.13.x"
+    - "tip"
+
+go_import_path: gopkg.in/yaml.v2
vendor/gopkg.in/yaml.v2/LICENSE  (new file, 201 lines, generated, vendored)

@@ -0,0 +1,201 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "{}"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright {yyyy} {name of copyright owner}
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
vendor/gopkg.in/yaml.v2/LICENSE.libyaml  (new file, 31 lines, generated, vendored)

@@ -0,0 +1,31 @@
+The following files were ported to Go from C files of libyaml, and thus
+are still covered by their original copyright and license:
+
+    apic.go
+    emitterc.go
+    parserc.go
+    readerc.go
+    scannerc.go
+    writerc.go
+    yamlh.go
+    yamlprivateh.go
+
+Copyright (c) 2006 Kirill Simonov
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
13 vendor/gopkg.in/yaml.v2/NOTICE generated vendored Normal file
@@ -0,0 +1,13 @@
Copyright 2011-2016 Canonical Ltd.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
133 vendor/gopkg.in/yaml.v2/README.md generated vendored Normal file
@@ -0,0 +1,133 @@
# YAML support for the Go language

Introduction
------------

The yaml package enables Go programs to comfortably encode and decode YAML
values. It was developed within [Canonical](https://www.canonical.com) as
part of the [juju](https://juju.ubuntu.com) project, and is based on a
pure Go port of the well-known [libyaml](http://pyyaml.org/wiki/LibYAML)
C library to parse and generate YAML data quickly and reliably.

Compatibility
-------------

The yaml package supports most of YAML 1.1 and 1.2, including support for
anchors, tags, map merging, etc. Multi-document unmarshalling is not yet
implemented, and base-60 floats from YAML 1.1 are purposefully not
supported since they're a poor design and are gone in YAML 1.2.

Installation and usage
----------------------

The import path for the package is *gopkg.in/yaml.v2*.

To install it, run:

    go get gopkg.in/yaml.v2

API documentation
-----------------

If opened in a browser, the import path itself leads to the API documentation:

  * [https://gopkg.in/yaml.v2](https://gopkg.in/yaml.v2)

API stability
-------------

The package API for yaml v2 will remain stable as described in [gopkg.in](https://gopkg.in).

License
-------

The yaml package is licensed under the Apache License 2.0. Please see the LICENSE file for details.

Example
-------

```Go
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v2"
)

var data = `
a: Easy!
b:
  c: 2
  d: [3, 4]
`

// Note: struct fields must be public in order for unmarshal to
// correctly populate the data.
type T struct {
	A string
	B struct {
		RenamedC int   `yaml:"c"`
		D        []int `yaml:",flow"`
	}
}

func main() {
	t := T{}

	err := yaml.Unmarshal([]byte(data), &t)
	if err != nil {
		log.Fatalf("error: %v", err)
	}
	fmt.Printf("--- t:\n%v\n\n", t)

	d, err := yaml.Marshal(&t)
	if err != nil {
		log.Fatalf("error: %v", err)
	}
	fmt.Printf("--- t dump:\n%s\n\n", string(d))

	m := make(map[interface{}]interface{})

	err = yaml.Unmarshal([]byte(data), &m)
	if err != nil {
		log.Fatalf("error: %v", err)
	}
	fmt.Printf("--- m:\n%v\n\n", m)

	d, err = yaml.Marshal(&m)
	if err != nil {
		log.Fatalf("error: %v", err)
	}
	fmt.Printf("--- m dump:\n%s\n\n", string(d))
}
```

This example will generate the following output:

```
--- t:
{Easy! {2 [3 4]}}

--- t dump:
a: Easy!
b:
  c: 2
  d: [3, 4]


--- m:
map[a:Easy! b:map[c:2 d:[3 4]]]

--- m dump:
a: Easy!
b:
  c: 2
  d:
  - 3
  - 4
```
739 vendor/gopkg.in/yaml.v2/apic.go generated vendored Normal file
@@ -0,0 +1,739 @@
package yaml

import (
	"io"
)

func yaml_insert_token(parser *yaml_parser_t, pos int, token *yaml_token_t) {
	//fmt.Println("yaml_insert_token", "pos:", pos, "typ:", token.typ, "head:", parser.tokens_head, "len:", len(parser.tokens))

	// Check if we can move the queue at the beginning of the buffer.
	if parser.tokens_head > 0 && len(parser.tokens) == cap(parser.tokens) {
		if parser.tokens_head != len(parser.tokens) {
			copy(parser.tokens, parser.tokens[parser.tokens_head:])
		}
		parser.tokens = parser.tokens[:len(parser.tokens)-parser.tokens_head]
		parser.tokens_head = 0
	}
	parser.tokens = append(parser.tokens, *token)
	if pos < 0 {
		return
	}
	copy(parser.tokens[parser.tokens_head+pos+1:], parser.tokens[parser.tokens_head+pos:])
	parser.tokens[parser.tokens_head+pos] = *token
}

// Create a new parser object.
func yaml_parser_initialize(parser *yaml_parser_t) bool {
	*parser = yaml_parser_t{
		raw_buffer: make([]byte, 0, input_raw_buffer_size),
		buffer:     make([]byte, 0, input_buffer_size),
	}
	return true
}

// Destroy a parser object.
func yaml_parser_delete(parser *yaml_parser_t) {
	*parser = yaml_parser_t{}
}

// String read handler.
func yaml_string_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {
	if parser.input_pos == len(parser.input) {
		return 0, io.EOF
	}
	n = copy(buffer, parser.input[parser.input_pos:])
	parser.input_pos += n
	return n, nil
}

// Reader read handler.
func yaml_reader_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {
	return parser.input_reader.Read(buffer)
}

// Set a string input.
func yaml_parser_set_input_string(parser *yaml_parser_t, input []byte) {
	if parser.read_handler != nil {
		panic("must set the input source only once")
	}
	parser.read_handler = yaml_string_read_handler
	parser.input = input
	parser.input_pos = 0
}

// Set a file input.
func yaml_parser_set_input_reader(parser *yaml_parser_t, r io.Reader) {
	if parser.read_handler != nil {
		panic("must set the input source only once")
	}
	parser.read_handler = yaml_reader_read_handler
	parser.input_reader = r
}

// Set the source encoding.
func yaml_parser_set_encoding(parser *yaml_parser_t, encoding yaml_encoding_t) {
	if parser.encoding != yaml_ANY_ENCODING {
		panic("must set the encoding only once")
	}
	parser.encoding = encoding
}

// Create a new emitter object.
func yaml_emitter_initialize(emitter *yaml_emitter_t) {
	*emitter = yaml_emitter_t{
		buffer:     make([]byte, output_buffer_size),
		raw_buffer: make([]byte, 0, output_raw_buffer_size),
		states:     make([]yaml_emitter_state_t, 0, initial_stack_size),
		events:     make([]yaml_event_t, 0, initial_queue_size),
	}
}

// Destroy an emitter object.
func yaml_emitter_delete(emitter *yaml_emitter_t) {
	*emitter = yaml_emitter_t{}
}

// String write handler.
func yaml_string_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
	*emitter.output_buffer = append(*emitter.output_buffer, buffer...)
	return nil
}

// yaml_writer_write_handler uses emitter.output_writer to write the
// emitted text.
func yaml_writer_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
	_, err := emitter.output_writer.Write(buffer)
	return err
}

// Set a string output.
func yaml_emitter_set_output_string(emitter *yaml_emitter_t, output_buffer *[]byte) {
	if emitter.write_handler != nil {
		panic("must set the output target only once")
	}
	emitter.write_handler = yaml_string_write_handler
	emitter.output_buffer = output_buffer
}

// Set a file output.
func yaml_emitter_set_output_writer(emitter *yaml_emitter_t, w io.Writer) {
	if emitter.write_handler != nil {
		panic("must set the output target only once")
	}
	emitter.write_handler = yaml_writer_write_handler
	emitter.output_writer = w
}

// Set the output encoding.
func yaml_emitter_set_encoding(emitter *yaml_emitter_t, encoding yaml_encoding_t) {
	if emitter.encoding != yaml_ANY_ENCODING {
		panic("must set the output encoding only once")
	}
	emitter.encoding = encoding
}

// Set the canonical output style.
func yaml_emitter_set_canonical(emitter *yaml_emitter_t, canonical bool) {
	emitter.canonical = canonical
}

//// Set the indentation increment.
func yaml_emitter_set_indent(emitter *yaml_emitter_t, indent int) {
	if indent < 2 || indent > 9 {
		indent = 2
	}
	emitter.best_indent = indent
}

// Set the preferred line width.
func yaml_emitter_set_width(emitter *yaml_emitter_t, width int) {
	if width < 0 {
		width = -1
	}
	emitter.best_width = width
}

// Set if unescaped non-ASCII characters are allowed.
func yaml_emitter_set_unicode(emitter *yaml_emitter_t, unicode bool) {
	emitter.unicode = unicode
}

// Set the preferred line break character.
func yaml_emitter_set_break(emitter *yaml_emitter_t, line_break yaml_break_t) {
	emitter.line_break = line_break
}

///*
// * Destroy a token object.
// */
//
//YAML_DECLARE(void)
//yaml_token_delete(yaml_token_t *token)
//{
//    assert(token); // Non-NULL token object expected.
//
//    switch (token.type)
//    {
//    case YAML_TAG_DIRECTIVE_TOKEN:
//        yaml_free(token.data.tag_directive.handle);
//        yaml_free(token.data.tag_directive.prefix);
//        break;
//
//    case YAML_ALIAS_TOKEN:
//        yaml_free(token.data.alias.value);
//        break;
//
//    case YAML_ANCHOR_TOKEN:
//        yaml_free(token.data.anchor.value);
//        break;
//
//    case YAML_TAG_TOKEN:
//        yaml_free(token.data.tag.handle);
//        yaml_free(token.data.tag.suffix);
//        break;
//
//    case YAML_SCALAR_TOKEN:
//        yaml_free(token.data.scalar.value);
//        break;
//
//    default:
//        break;
//    }
//
//    memset(token, 0, sizeof(yaml_token_t));
//}
//
///*
// * Check if a string is a valid UTF-8 sequence.
// *
// * Check 'reader.c' for more details on UTF-8 encoding.
// */
//
//static int
//yaml_check_utf8(yaml_char_t *start, size_t length)
//{
//    yaml_char_t *end = start+length;
//    yaml_char_t *pointer = start;
//
//    while (pointer < end) {
//        unsigned char octet;
//        unsigned int width;
//        unsigned int value;
//        size_t k;
//
//        octet = pointer[0];
//        width = (octet & 0x80) == 0x00 ? 1 :
//                (octet & 0xE0) == 0xC0 ? 2 :
//                (octet & 0xF0) == 0xE0 ? 3 :
//                (octet & 0xF8) == 0xF0 ? 4 : 0;
//        value = (octet & 0x80) == 0x00 ? octet & 0x7F :
//                (octet & 0xE0) == 0xC0 ? octet & 0x1F :
//                (octet & 0xF0) == 0xE0 ? octet & 0x0F :
//                (octet & 0xF8) == 0xF0 ? octet & 0x07 : 0;
//        if (!width) return 0;
//        if (pointer+width > end) return 0;
//        for (k = 1; k < width; k ++) {
//            octet = pointer[k];
//            if ((octet & 0xC0) != 0x80) return 0;
//            value = (value << 6) + (octet & 0x3F);
//        }
//        if (!((width == 1) ||
//              (width == 2 && value >= 0x80) ||
//              (width == 3 && value >= 0x800) ||
//              (width == 4 && value >= 0x10000))) return 0;
//
//        pointer += width;
//    }
//
//    return 1;
//}
//

// Create STREAM-START.
func yaml_stream_start_event_initialize(event *yaml_event_t, encoding yaml_encoding_t) {
	*event = yaml_event_t{
		typ:      yaml_STREAM_START_EVENT,
		encoding: encoding,
	}
}

// Create STREAM-END.
func yaml_stream_end_event_initialize(event *yaml_event_t) {
	*event = yaml_event_t{
		typ: yaml_STREAM_END_EVENT,
	}
}

// Create DOCUMENT-START.
func yaml_document_start_event_initialize(
	event *yaml_event_t,
	version_directive *yaml_version_directive_t,
	tag_directives []yaml_tag_directive_t,
	implicit bool,
) {
	*event = yaml_event_t{
		typ:               yaml_DOCUMENT_START_EVENT,
		version_directive: version_directive,
		tag_directives:    tag_directives,
		implicit:          implicit,
	}
}

// Create DOCUMENT-END.
func yaml_document_end_event_initialize(event *yaml_event_t, implicit bool) {
	*event = yaml_event_t{
		typ:      yaml_DOCUMENT_END_EVENT,
		implicit: implicit,
	}
}

///*
// * Create ALIAS.
// */
//
//YAML_DECLARE(int)
//yaml_alias_event_initialize(event *yaml_event_t, anchor *yaml_char_t)
//{
//    mark yaml_mark_t = { 0, 0, 0 }
//    anchor_copy *yaml_char_t = NULL
//
//    assert(event) // Non-NULL event object is expected.
//    assert(anchor) // Non-NULL anchor is expected.
//
//    if (!yaml_check_utf8(anchor, strlen((char *)anchor))) return 0
//
//    anchor_copy = yaml_strdup(anchor)
//    if (!anchor_copy)
//        return 0
//
//    ALIAS_EVENT_INIT(*event, anchor_copy, mark, mark)
//
//    return 1
//}

// Create SCALAR.
func yaml_scalar_event_initialize(event *yaml_event_t, anchor, tag, value []byte, plain_implicit, quoted_implicit bool, style yaml_scalar_style_t) bool {
	*event = yaml_event_t{
		typ:             yaml_SCALAR_EVENT,
		anchor:          anchor,
		tag:             tag,
		value:           value,
		implicit:        plain_implicit,
		quoted_implicit: quoted_implicit,
		style:           yaml_style_t(style),
	}
	return true
}

// Create SEQUENCE-START.
func yaml_sequence_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_sequence_style_t) bool {
	*event = yaml_event_t{
		typ:      yaml_SEQUENCE_START_EVENT,
		anchor:   anchor,
		tag:      tag,
		implicit: implicit,
		style:    yaml_style_t(style),
	}
	return true
}

// Create SEQUENCE-END.
func yaml_sequence_end_event_initialize(event *yaml_event_t) bool {
	*event = yaml_event_t{
		typ: yaml_SEQUENCE_END_EVENT,
	}
	return true
}

// Create MAPPING-START.
func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_mapping_style_t) {
	*event = yaml_event_t{
		typ:      yaml_MAPPING_START_EVENT,
		anchor:   anchor,
		tag:      tag,
		implicit: implicit,
		style:    yaml_style_t(style),
	}
}

// Create MAPPING-END.
func yaml_mapping_end_event_initialize(event *yaml_event_t) {
	*event = yaml_event_t{
		typ: yaml_MAPPING_END_EVENT,
	}
}

// Destroy an event object.
func yaml_event_delete(event *yaml_event_t) {
	*event = yaml_event_t{}
}

///*
|
||||||
|
// * Create a document object.
|
||||||
|
// */
|
||||||
|
//
|
||||||
|
//YAML_DECLARE(int)
|
||||||
|
//yaml_document_initialize(document *yaml_document_t,
|
||||||
|
// version_directive *yaml_version_directive_t,
|
||||||
|
// tag_directives_start *yaml_tag_directive_t,
|
||||||
|
// tag_directives_end *yaml_tag_directive_t,
|
||||||
|
// start_implicit int, end_implicit int)
|
||||||
|
//{
|
||||||
|
// struct {
|
||||||
|
// error yaml_error_type_t
|
||||||
|
// } context
|
||||||
|
// struct {
|
||||||
|
// start *yaml_node_t
|
||||||
|
// end *yaml_node_t
|
||||||
|
// top *yaml_node_t
|
||||||
|
// } nodes = { NULL, NULL, NULL }
|
||||||
|
// version_directive_copy *yaml_version_directive_t = NULL
|
||||||
|
// struct {
|
||||||
|
// start *yaml_tag_directive_t
|
||||||
|
// end *yaml_tag_directive_t
|
||||||
|
// top *yaml_tag_directive_t
|
||||||
|
// } tag_directives_copy = { NULL, NULL, NULL }
|
||||||
|
// value yaml_tag_directive_t = { NULL, NULL }
|
||||||
|
// mark yaml_mark_t = { 0, 0, 0 }
|
||||||
|
//
|
||||||
|
// assert(document) // Non-NULL document object is expected.
|
||||||
|
// assert((tag_directives_start && tag_directives_end) ||
|
||||||
|
// (tag_directives_start == tag_directives_end))
|
||||||
|
// // Valid tag directives are expected.
|
||||||
|
//
|
||||||
|
// if (!STACK_INIT(&context, nodes, INITIAL_STACK_SIZE)) goto error
|
||||||
|
//
|
||||||
|
// if (version_directive) {
|
||||||
|
// version_directive_copy = yaml_malloc(sizeof(yaml_version_directive_t))
|
||||||
|
// if (!version_directive_copy) goto error
|
||||||
|
// version_directive_copy.major = version_directive.major
|
||||||
|
// version_directive_copy.minor = version_directive.minor
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// if (tag_directives_start != tag_directives_end) {
|
||||||
|
// tag_directive *yaml_tag_directive_t
|
||||||
|
// if (!STACK_INIT(&context, tag_directives_copy, INITIAL_STACK_SIZE))
|
||||||
|
// goto error
|
||||||
|
// for (tag_directive = tag_directives_start
|
||||||
|
// tag_directive != tag_directives_end; tag_directive ++) {
|
||||||
|
// assert(tag_directive.handle)
|
||||||
|
// assert(tag_directive.prefix)
|
||||||
|
// if (!yaml_check_utf8(tag_directive.handle,
|
||||||
|
// strlen((char *)tag_directive.handle)))
|
||||||
|
// goto error
|
||||||
|
// if (!yaml_check_utf8(tag_directive.prefix,
|
||||||
|
// strlen((char *)tag_directive.prefix)))
|
||||||
|
// goto error
|
||||||
|
// value.handle = yaml_strdup(tag_directive.handle)
|
||||||
|
// value.prefix = yaml_strdup(tag_directive.prefix)
|
||||||
|
// if (!value.handle || !value.prefix) goto error
|
||||||
|
// if (!PUSH(&context, tag_directives_copy, value))
|
||||||
|
// goto error
|
||||||
|
// value.handle = NULL
|
||||||
|
// value.prefix = NULL
|
||||||
|
// }
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// DOCUMENT_INIT(*document, nodes.start, nodes.end, version_directive_copy,
|
||||||
|
// tag_directives_copy.start, tag_directives_copy.top,
|
||||||
|
// start_implicit, end_implicit, mark, mark)
|
||||||
|
//
|
||||||
|
// return 1
|
||||||
|
//
|
||||||
|
//error:
|
||||||
|
// STACK_DEL(&context, nodes)
|
||||||
|
// yaml_free(version_directive_copy)
|
||||||
|
// while (!STACK_EMPTY(&context, tag_directives_copy)) {
|
||||||
|
// value yaml_tag_directive_t = POP(&context, tag_directives_copy)
|
||||||
|
// yaml_free(value.handle)
|
||||||
|
// yaml_free(value.prefix)
|
||||||
|
// }
|
||||||
|
// STACK_DEL(&context, tag_directives_copy)
|
||||||
|
// yaml_free(value.handle)
|
||||||
|
// yaml_free(value.prefix)
|
||||||
|
//
|
||||||
|
// return 0
|
||||||
|
//}
|
||||||
|
//
|
||||||
|
///*
|
||||||
|
// * Destroy a document object.
|
||||||
|
// */
|
||||||
|
//
|
||||||
|
//YAML_DECLARE(void)
|
||||||
|
//yaml_document_delete(document *yaml_document_t)
|
||||||
|
//{
|
||||||
|
// struct {
|
||||||
|
// error yaml_error_type_t
|
||||||
|
// } context
|
||||||
|
// tag_directive *yaml_tag_directive_t
|
||||||
|
//
|
||||||
|
// context.error = YAML_NO_ERROR // Eliminate a compiler warning.
|
||||||
|
//
|
||||||
|
// assert(document) // Non-NULL document object is expected.
|
||||||
|
//
|
||||||
|
// while (!STACK_EMPTY(&context, document.nodes)) {
|
||||||
|
// node yaml_node_t = POP(&context, document.nodes)
|
||||||
|
// yaml_free(node.tag)
|
||||||
|
// switch (node.type) {
|
||||||
|
// case YAML_SCALAR_NODE:
|
||||||
|
// yaml_free(node.data.scalar.value)
|
||||||
|
// break
|
||||||
|
// case YAML_SEQUENCE_NODE:
|
||||||
|
// STACK_DEL(&context, node.data.sequence.items)
|
||||||
|
// break
|
||||||
|
// case YAML_MAPPING_NODE:
|
||||||
|
// STACK_DEL(&context, node.data.mapping.pairs)
|
||||||
|
// break
|
||||||
|
// default:
|
||||||
|
// assert(0) // Should not happen.
|
||||||
|
// }
|
||||||
|
// }
|
||||||
|
// STACK_DEL(&context, document.nodes)
|
||||||
|
//
|
||||||
|
// yaml_free(document.version_directive)
|
||||||
|
// for (tag_directive = document.tag_directives.start
|
||||||
|
// tag_directive != document.tag_directives.end
|
||||||
|
// tag_directive++) {
|
||||||
|
// yaml_free(tag_directive.handle)
|
||||||
|
// yaml_free(tag_directive.prefix)
|
||||||
|
// }
|
||||||
|
// yaml_free(document.tag_directives.start)
|
||||||
|
//
|
||||||
|
// memset(document, 0, sizeof(yaml_document_t))
|
||||||
|
//}
|
||||||
|
//
|
||||||
|
///**
|
||||||
|
// * Get a document node.
|
||||||
|
// */
|
||||||
|
//
|
||||||
|
//YAML_DECLARE(yaml_node_t *)
|
||||||
|
//yaml_document_get_node(document *yaml_document_t, index int)
|
||||||
|
//{
|
||||||
|
// assert(document) // Non-NULL document object is expected.
|
||||||
|
//
|
||||||
|
// if (index > 0 && document.nodes.start + index <= document.nodes.top) {
|
||||||
|
// return document.nodes.start + index - 1
|
||||||
|
// }
|
||||||
|
// return NULL
|
||||||
|
//}
|
||||||
|
//
|
||||||
|
///**
|
||||||
|
// * Get the root object.
|
||||||
|
// */
|
||||||
|
//
|
||||||
|
//YAML_DECLARE(yaml_node_t *)
|
||||||
|
//yaml_document_get_root_node(document *yaml_document_t)
|
||||||
|
//{
|
||||||
|
// assert(document) // Non-NULL document object is expected.
|
||||||
|
//
|
||||||
|
// if (document.nodes.top != document.nodes.start) {
|
||||||
|
// return document.nodes.start
|
||||||
|
// }
|
||||||
|
// return NULL
|
||||||
|
//}
|
||||||
|
//
|
||||||
|
///*
|
||||||
|
// * Add a scalar node to a document.
|
||||||
|
// */
|
||||||
|
//
|
||||||
|
//YAML_DECLARE(int)
|
||||||
|
//yaml_document_add_scalar(document *yaml_document_t,
|
||||||
|
// tag *yaml_char_t, value *yaml_char_t, length int,
|
||||||
|
// style yaml_scalar_style_t)
|
||||||
|
//{
|
||||||
|
// struct {
|
||||||
|
// error yaml_error_type_t
|
||||||
|
// } context
|
||||||
|
// mark yaml_mark_t = { 0, 0, 0 }
|
||||||
|
// tag_copy *yaml_char_t = NULL
|
||||||
|
// value_copy *yaml_char_t = NULL
|
||||||
|
// node yaml_node_t
|
||||||
|
//
|
||||||
|
// assert(document) // Non-NULL document object is expected.
|
||||||
|
// assert(value) // Non-NULL value is expected.
|
||||||
|
//
|
||||||
|
// if (!tag) {
|
||||||
|
// tag = (yaml_char_t *)YAML_DEFAULT_SCALAR_TAG
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error
|
||||||
|
// tag_copy = yaml_strdup(tag)
|
||||||
|
// if (!tag_copy) goto error
|
||||||
|
//
|
||||||
|
// if (length < 0) {
|
||||||
|
// length = strlen((char *)value)
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// if (!yaml_check_utf8(value, length)) goto error
|
||||||
|
// value_copy = yaml_malloc(length+1)
|
||||||
|
// if (!value_copy) goto error
|
||||||
|
// memcpy(value_copy, value, length)
|
||||||
|
// value_copy[length] = '\0'
|
||||||
|
//
|
||||||
|
// SCALAR_NODE_INIT(node, tag_copy, value_copy, length, style, mark, mark)
|
||||||
|
// if (!PUSH(&context, document.nodes, node)) goto error
|
||||||
|
//
|
||||||
|
// return document.nodes.top - document.nodes.start
|
||||||
|
//
|
||||||
|
//error:
|
||||||
|
// yaml_free(tag_copy)
|
||||||
|
// yaml_free(value_copy)
|
||||||
|
//
|
||||||
|
// return 0
|
||||||
|
//}
|
||||||
|
//
|
||||||
|
///*
|
||||||
|
// * Add a sequence node to a document.
|
||||||
|
// */
|
||||||
|
//
|
||||||
|
//YAML_DECLARE(int)
|
||||||
|
//yaml_document_add_sequence(document *yaml_document_t,
|
||||||
|
// tag *yaml_char_t, style yaml_sequence_style_t)
|
||||||
|
//{
|
||||||
|
// struct {
|
||||||
|
// error yaml_error_type_t
|
||||||
|
// } context
|
||||||
|
// mark yaml_mark_t = { 0, 0, 0 }
|
||||||
|
// tag_copy *yaml_char_t = NULL
|
||||||
|
// struct {
|
||||||
|
// start *yaml_node_item_t
|
||||||
|
// end *yaml_node_item_t
|
||||||
|
// top *yaml_node_item_t
|
||||||
|
// } items = { NULL, NULL, NULL }
|
||||||
|
// node yaml_node_t
|
||||||
|
//
|
||||||
|
// assert(document) // Non-NULL document object is expected.
|
||||||
|
//
|
||||||
|
// if (!tag) {
|
||||||
|
// tag = (yaml_char_t *)YAML_DEFAULT_SEQUENCE_TAG
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error
|
||||||
|
// tag_copy = yaml_strdup(tag)
|
||||||
|
// if (!tag_copy) goto error
|
||||||
|
//
|
||||||
|
// if (!STACK_INIT(&context, items, INITIAL_STACK_SIZE)) goto error
|
||||||
|
//
|
||||||
|
// SEQUENCE_NODE_INIT(node, tag_copy, items.start, items.end,
|
||||||
|
// style, mark, mark)
|
||||||
|
// if (!PUSH(&context, document.nodes, node)) goto error
|
||||||
|
//
|
||||||
|
// return document.nodes.top - document.nodes.start
|
||||||
|
//
|
||||||
|
//error:
|
||||||
|
// STACK_DEL(&context, items)
|
||||||
|
// yaml_free(tag_copy)
|
||||||
|
//
|
||||||
|
// return 0
|
||||||
|
//}
|
||||||
|
//
|
||||||
|
///*
|
||||||
|
// * Add a mapping node to a document.
|
||||||
|
// */
|
||||||
|
//
|
||||||
|
//YAML_DECLARE(int)
|
||||||
|
//yaml_document_add_mapping(document *yaml_document_t,
|
||||||
|
// tag *yaml_char_t, style yaml_mapping_style_t)
|
||||||
|
//{
|
||||||
|
// struct {
|
||||||
|
// error yaml_error_type_t
|
||||||
|
// } context
|
||||||
|
// mark yaml_mark_t = { 0, 0, 0 }
|
||||||
|
// tag_copy *yaml_char_t = NULL
|
||||||
|
// struct {
|
||||||
|
// start *yaml_node_pair_t
|
||||||
|
// end *yaml_node_pair_t
|
||||||
|
// top *yaml_node_pair_t
|
||||||
|
// } pairs = { NULL, NULL, NULL }
|
||||||
|
// node yaml_node_t
|
||||||
|
//
|
||||||
|
// assert(document) // Non-NULL document object is expected.
|
//
//	if (!tag) {
//	    tag = (yaml_char_t *)YAML_DEFAULT_MAPPING_TAG
//	}
//
//	if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error
//	tag_copy = yaml_strdup(tag)
//	if (!tag_copy) goto error
//
//	if (!STACK_INIT(&context, pairs, INITIAL_STACK_SIZE)) goto error
//
//	MAPPING_NODE_INIT(node, tag_copy, pairs.start, pairs.end,
//	    style, mark, mark)
//	if (!PUSH(&context, document.nodes, node)) goto error
//
//	return document.nodes.top - document.nodes.start
//
//error:
//	STACK_DEL(&context, pairs)
//	yaml_free(tag_copy)
//
//	return 0
//}
//
///*
// * Append an item to a sequence node.
// */
//
//YAML_DECLARE(int)
//yaml_document_append_sequence_item(document *yaml_document_t,
//	sequence int, item int)
//{
//	struct {
//		error yaml_error_type_t
//	} context
//
//	assert(document) // Non-NULL document is required.
//	assert(sequence > 0
//	    && document.nodes.start + sequence <= document.nodes.top)
//	// Valid sequence id is required.
//	assert(document.nodes.start[sequence-1].type == YAML_SEQUENCE_NODE)
//	// A sequence node is required.
//	assert(item > 0 && document.nodes.start + item <= document.nodes.top)
//	// Valid item id is required.
//
//	if (!PUSH(&context,
//	    document.nodes.start[sequence-1].data.sequence.items, item))
//		return 0
//
//	return 1
//}
//
///*
// * Append a pair of a key and a value to a mapping node.
// */
//
//YAML_DECLARE(int)
//yaml_document_append_mapping_pair(document *yaml_document_t,
//	mapping int, key int, value int)
//{
//	struct {
//		error yaml_error_type_t
//	} context
//
//	pair yaml_node_pair_t
//
//	assert(document) // Non-NULL document is required.
//	assert(mapping > 0
//	    && document.nodes.start + mapping <= document.nodes.top)
//	// Valid mapping id is required.
//	assert(document.nodes.start[mapping-1].type == YAML_MAPPING_NODE)
//	// A mapping node is required.
//	assert(key > 0 && document.nodes.start + key <= document.nodes.top)
//	// Valid key id is required.
//	assert(value > 0 && document.nodes.start + value <= document.nodes.top)
//	// Valid value id is required.
//
//	pair.key = key
//	pair.value = value
//
//	if (!PUSH(&context,
//	    document.nodes.start[mapping-1].data.mapping.pairs, pair))
//		return 0
//
//	return 1
//}
//
//
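The commented-out transliteration above keeps the C control flow (1-based node ids indexing into the document's node array, with validity asserts before the push). A minimal runnable Go sketch of the same append operation, using simplified hypothetical `yamlNode`/`yamlDocument` types rather than the real yaml.v2 internals (which keep this API disabled), might look like:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for yaml_node_t / yaml_document_t;
// the vendored package keeps the corresponding API commented out.
type yamlNode struct {
	kind  string
	items []int // item ids, for a sequence node
}

type yamlDocument struct {
	nodes []yamlNode
}

// appendSequenceItem mirrors yaml_document_append_sequence_item:
// node ids are 1-based indexes into document.nodes, and the target
// must be a sequence node.
func appendSequenceItem(doc *yamlDocument, sequence, item int) bool {
	if sequence <= 0 || sequence > len(doc.nodes) {
		return false // valid sequence id is required
	}
	if item <= 0 || item > len(doc.nodes) {
		return false // valid item id is required
	}
	if doc.nodes[sequence-1].kind != "sequence" {
		return false // a sequence node is required
	}
	doc.nodes[sequence-1].items = append(doc.nodes[sequence-1].items, item)
	return true
}

func main() {
	doc := &yamlDocument{nodes: []yamlNode{
		{kind: "sequence"},
		{kind: "scalar"},
	}}
	fmt.Println(appendSequenceItem(doc, 1, 2)) // true
	fmt.Println(appendSequenceItem(doc, 2, 1)) // false: node 2 is not a sequence
	fmt.Println(doc.nodes[0].items)            // [2]
}
```

Note the sketch returns false where the C version asserts; the original treats bad ids as programmer error, while error returns are more idiomatic for a standalone example.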
815
vendor/gopkg.in/yaml.v2/decode.go
generated
vendored
Normal file
@@ -0,0 +1,815 @@
package yaml

import (
	"encoding"
	"encoding/base64"
	"fmt"
	"io"
	"math"
	"reflect"
	"strconv"
	"time"
)

const (
	documentNode = 1 << iota
	mappingNode
	sequenceNode
	scalarNode
	aliasNode
)

type node struct {
	kind         int
	line, column int
	tag          string
	// For an alias node, alias holds the resolved alias.
	alias    *node
	value    string
	implicit bool
	children []*node
	anchors  map[string]*node
}

// ----------------------------------------------------------------------------
// Parser, produces a node tree out of a libyaml event stream.

type parser struct {
	parser   yaml_parser_t
	event    yaml_event_t
	doc      *node
	doneInit bool
}

func newParser(b []byte) *parser {
	p := parser{}
	if !yaml_parser_initialize(&p.parser) {
		panic("failed to initialize YAML emitter")
	}
	if len(b) == 0 {
		b = []byte{'\n'}
	}
	yaml_parser_set_input_string(&p.parser, b)
	return &p
}

func newParserFromReader(r io.Reader) *parser {
	p := parser{}
	if !yaml_parser_initialize(&p.parser) {
		panic("failed to initialize YAML emitter")
	}
	yaml_parser_set_input_reader(&p.parser, r)
	return &p
}

func (p *parser) init() {
	if p.doneInit {
		return
	}
	p.expect(yaml_STREAM_START_EVENT)
	p.doneInit = true
}

func (p *parser) destroy() {
	if p.event.typ != yaml_NO_EVENT {
		yaml_event_delete(&p.event)
	}
	yaml_parser_delete(&p.parser)
}

// expect consumes an event from the event stream and
// checks that it's of the expected type.
func (p *parser) expect(e yaml_event_type_t) {
	if p.event.typ == yaml_NO_EVENT {
		if !yaml_parser_parse(&p.parser, &p.event) {
			p.fail()
		}
	}
	if p.event.typ == yaml_STREAM_END_EVENT {
		failf("attempted to go past the end of stream; corrupted value?")
	}
	if p.event.typ != e {
		p.parser.problem = fmt.Sprintf("expected %s event but got %s", e, p.event.typ)
		p.fail()
	}
	yaml_event_delete(&p.event)
	p.event.typ = yaml_NO_EVENT
}

// peek peeks at the next event in the event stream,
// puts the results into p.event and returns the event type.
func (p *parser) peek() yaml_event_type_t {
	if p.event.typ != yaml_NO_EVENT {
		return p.event.typ
	}
	if !yaml_parser_parse(&p.parser, &p.event) {
		p.fail()
	}
	return p.event.typ
}

func (p *parser) fail() {
	var where string
	var line int
	if p.parser.problem_mark.line != 0 {
		line = p.parser.problem_mark.line
		// Scanner errors don't iterate line before returning error
		if p.parser.error == yaml_SCANNER_ERROR {
			line++
		}
	} else if p.parser.context_mark.line != 0 {
		line = p.parser.context_mark.line
	}
	if line != 0 {
		where = "line " + strconv.Itoa(line) + ": "
	}
	var msg string
	if len(p.parser.problem) > 0 {
		msg = p.parser.problem
	} else {
		msg = "unknown problem parsing YAML content"
	}
	failf("%s%s", where, msg)
}

func (p *parser) anchor(n *node, anchor []byte) {
	if anchor != nil {
		p.doc.anchors[string(anchor)] = n
	}
}

func (p *parser) parse() *node {
	p.init()
	switch p.peek() {
	case yaml_SCALAR_EVENT:
		return p.scalar()
	case yaml_ALIAS_EVENT:
		return p.alias()
	case yaml_MAPPING_START_EVENT:
		return p.mapping()
	case yaml_SEQUENCE_START_EVENT:
		return p.sequence()
	case yaml_DOCUMENT_START_EVENT:
		return p.document()
	case yaml_STREAM_END_EVENT:
		// Happens when attempting to decode an empty buffer.
		return nil
	default:
		panic("attempted to parse unknown event: " + p.event.typ.String())
	}
}

func (p *parser) node(kind int) *node {
	return &node{
		kind:   kind,
		line:   p.event.start_mark.line,
		column: p.event.start_mark.column,
	}
}

func (p *parser) document() *node {
	n := p.node(documentNode)
	n.anchors = make(map[string]*node)
	p.doc = n
	p.expect(yaml_DOCUMENT_START_EVENT)
	n.children = append(n.children, p.parse())
	p.expect(yaml_DOCUMENT_END_EVENT)
	return n
}

func (p *parser) alias() *node {
	n := p.node(aliasNode)
	n.value = string(p.event.anchor)
	n.alias = p.doc.anchors[n.value]
	if n.alias == nil {
		failf("unknown anchor '%s' referenced", n.value)
	}
	p.expect(yaml_ALIAS_EVENT)
	return n
}

func (p *parser) scalar() *node {
	n := p.node(scalarNode)
	n.value = string(p.event.value)
	n.tag = string(p.event.tag)
	n.implicit = p.event.implicit
	p.anchor(n, p.event.anchor)
	p.expect(yaml_SCALAR_EVENT)
	return n
}

func (p *parser) sequence() *node {
	n := p.node(sequenceNode)
	p.anchor(n, p.event.anchor)
	p.expect(yaml_SEQUENCE_START_EVENT)
	for p.peek() != yaml_SEQUENCE_END_EVENT {
		n.children = append(n.children, p.parse())
	}
	p.expect(yaml_SEQUENCE_END_EVENT)
	return n
}

func (p *parser) mapping() *node {
	n := p.node(mappingNode)
	p.anchor(n, p.event.anchor)
	p.expect(yaml_MAPPING_START_EVENT)
	for p.peek() != yaml_MAPPING_END_EVENT {
		n.children = append(n.children, p.parse(), p.parse())
	}
	p.expect(yaml_MAPPING_END_EVENT)
	return n
}

// ----------------------------------------------------------------------------
// Decoder, unmarshals a node into a provided value.

type decoder struct {
	doc     *node
	aliases map[*node]bool
	mapType reflect.Type
	terrors []string
	strict  bool

	decodeCount int
	aliasCount  int
	aliasDepth  int
}

var (
	mapItemType    = reflect.TypeOf(MapItem{})
	durationType   = reflect.TypeOf(time.Duration(0))
	defaultMapType = reflect.TypeOf(map[interface{}]interface{}{})
	ifaceType      = defaultMapType.Elem()
	timeType       = reflect.TypeOf(time.Time{})
	ptrTimeType    = reflect.TypeOf(&time.Time{})
)

func newDecoder(strict bool) *decoder {
	d := &decoder{mapType: defaultMapType, strict: strict}
	d.aliases = make(map[*node]bool)
	return d
}

func (d *decoder) terror(n *node, tag string, out reflect.Value) {
	if n.tag != "" {
		tag = n.tag
	}
	value := n.value
	if tag != yaml_SEQ_TAG && tag != yaml_MAP_TAG {
		if len(value) > 10 {
			value = " `" + value[:7] + "...`"
		} else {
			value = " `" + value + "`"
		}
	}
	d.terrors = append(d.terrors, fmt.Sprintf("line %d: cannot unmarshal %s%s into %s", n.line+1, shortTag(tag), value, out.Type()))
}

func (d *decoder) callUnmarshaler(n *node, u Unmarshaler) (good bool) {
	terrlen := len(d.terrors)
	err := u.UnmarshalYAML(func(v interface{}) (err error) {
		defer handleErr(&err)
		d.unmarshal(n, reflect.ValueOf(v))
		if len(d.terrors) > terrlen {
			issues := d.terrors[terrlen:]
			d.terrors = d.terrors[:terrlen]
			return &TypeError{issues}
		}
		return nil
	})
	if e, ok := err.(*TypeError); ok {
		d.terrors = append(d.terrors, e.Errors...)
		return false
	}
	if err != nil {
		fail(err)
	}
	return true
}

// d.prepare initializes and dereferences pointers and calls UnmarshalYAML
// if a value is found to implement it.
// It returns the initialized and dereferenced out value, whether
// unmarshalling was already done by UnmarshalYAML, and if so whether
// its types unmarshalled appropriately.
//
// If n holds a null value, prepare returns before doing anything.
func (d *decoder) prepare(n *node, out reflect.Value) (newout reflect.Value, unmarshaled, good bool) {
	if n.tag == yaml_NULL_TAG || n.kind == scalarNode && n.tag == "" && (n.value == "null" || n.value == "~" || n.value == "" && n.implicit) {
		return out, false, false
	}
	again := true
	for again {
		again = false
		if out.Kind() == reflect.Ptr {
			if out.IsNil() {
				out.Set(reflect.New(out.Type().Elem()))
			}
			out = out.Elem()
			again = true
		}
		if out.CanAddr() {
			if u, ok := out.Addr().Interface().(Unmarshaler); ok {
				good = d.callUnmarshaler(n, u)
				return out, true, good
			}
		}
	}
	return out, false, false
}

const (
	// 400,000 decode operations is ~500kb of dense object declarations, or
	// ~5kb of dense object declarations with 10000% alias expansion
	alias_ratio_range_low = 400000

	// 4,000,000 decode operations is ~5MB of dense object declarations, or
	// ~4.5MB of dense object declarations with 10% alias expansion
	alias_ratio_range_high = 4000000

	// alias_ratio_range is the range over which we scale allowed alias ratios
	alias_ratio_range = float64(alias_ratio_range_high - alias_ratio_range_low)
)

func allowedAliasRatio(decodeCount int) float64 {
	switch {
	case decodeCount <= alias_ratio_range_low:
		// allow 99% to come from alias expansion for small-to-medium documents
		return 0.99
	case decodeCount >= alias_ratio_range_high:
		// allow 10% to come from alias expansion for very large documents
		return 0.10
	default:
		// scale smoothly from 99% down to 10% over the range.
		// this maps to 396,000 - 400,000 allowed alias-driven decodes over the range.
		// 400,000 decode operations is ~100MB of allocations in worst-case scenarios (single-item maps).
		return 0.99 - 0.89*(float64(decodeCount-alias_ratio_range_low)/alias_ratio_range)
	}
}

func (d *decoder) unmarshal(n *node, out reflect.Value) (good bool) {
	d.decodeCount++
	if d.aliasDepth > 0 {
		d.aliasCount++
	}
	if d.aliasCount > 100 && d.decodeCount > 1000 && float64(d.aliasCount)/float64(d.decodeCount) > allowedAliasRatio(d.decodeCount) {
		failf("document contains excessive aliasing")
	}
	switch n.kind {
	case documentNode:
		return d.document(n, out)
	case aliasNode:
		return d.alias(n, out)
	}
	out, unmarshaled, good := d.prepare(n, out)
	if unmarshaled {
		return good
	}
	switch n.kind {
	case scalarNode:
		good = d.scalar(n, out)
	case mappingNode:
		good = d.mapping(n, out)
	case sequenceNode:
		good = d.sequence(n, out)
	default:
		panic("internal error: unknown node kind: " + strconv.Itoa(n.kind))
	}
	return good
}

func (d *decoder) document(n *node, out reflect.Value) (good bool) {
	if len(n.children) == 1 {
		d.doc = n
		d.unmarshal(n.children[0], out)
		return true
	}
	return false
}

func (d *decoder) alias(n *node, out reflect.Value) (good bool) {
	if d.aliases[n] {
		// TODO this could actually be allowed in some circumstances.
		failf("anchor '%s' value contains itself", n.value)
	}
	d.aliases[n] = true
	d.aliasDepth++
	good = d.unmarshal(n.alias, out)
	d.aliasDepth--
	delete(d.aliases, n)
	return good
}

var zeroValue reflect.Value

func resetMap(out reflect.Value) {
	for _, k := range out.MapKeys() {
		out.SetMapIndex(k, zeroValue)
	}
}

func (d *decoder) scalar(n *node, out reflect.Value) bool {
	var tag string
	var resolved interface{}
	if n.tag == "" && !n.implicit {
		tag = yaml_STR_TAG
		resolved = n.value
	} else {
		tag, resolved = resolve(n.tag, n.value)
		if tag == yaml_BINARY_TAG {
			data, err := base64.StdEncoding.DecodeString(resolved.(string))
			if err != nil {
				failf("!!binary value contains invalid base64 data")
			}
			resolved = string(data)
		}
	}
	if resolved == nil {
		if out.Kind() == reflect.Map && !out.CanAddr() {
			resetMap(out)
		} else {
			out.Set(reflect.Zero(out.Type()))
		}
		return true
	}
	if resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() {
		// We've resolved to exactly the type we want, so use that.
		out.Set(resolvedv)
		return true
	}
	// Perhaps we can use the value as a TextUnmarshaler to
	// set its value.
	if out.CanAddr() {
		u, ok := out.Addr().Interface().(encoding.TextUnmarshaler)
		if ok {
			var text []byte
			if tag == yaml_BINARY_TAG {
				text = []byte(resolved.(string))
			} else {
				// We let any value be unmarshaled into TextUnmarshaler.
				// That might be more lax than we'd like, but the
				// TextUnmarshaler itself should bowl out any dubious values.
				text = []byte(n.value)
			}
			err := u.UnmarshalText(text)
			if err != nil {
				fail(err)
			}
			return true
		}
	}
	switch out.Kind() {
	case reflect.String:
		if tag == yaml_BINARY_TAG {
			out.SetString(resolved.(string))
			return true
		}
		if resolved != nil {
			out.SetString(n.value)
			return true
		}
	case reflect.Interface:
		if resolved == nil {
			out.Set(reflect.Zero(out.Type()))
		} else if tag == yaml_TIMESTAMP_TAG {
			// It looks like a timestamp but for backward compatibility
			// reasons we set it as a string, so that code that unmarshals
			// timestamp-like values into interface{} will continue to
			// see a string and not a time.Time.
			// TODO(v3) Drop this.
			out.Set(reflect.ValueOf(n.value))
		} else {
			out.Set(reflect.ValueOf(resolved))
		}
		return true
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		switch resolved := resolved.(type) {
		case int:
			if !out.OverflowInt(int64(resolved)) {
				out.SetInt(int64(resolved))
				return true
			}
		case int64:
			if !out.OverflowInt(resolved) {
				out.SetInt(resolved)
				return true
			}
		case uint64:
			if resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) {
				out.SetInt(int64(resolved))
				return true
			}
		case float64:
			if resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) {
				out.SetInt(int64(resolved))
				return true
			}
		case string:
			if out.Type() == durationType {
				d, err := time.ParseDuration(resolved)
				if err == nil {
					out.SetInt(int64(d))
					return true
				}
			}
		}
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
		switch resolved := resolved.(type) {
		case int:
			if resolved >= 0 && !out.OverflowUint(uint64(resolved)) {
				out.SetUint(uint64(resolved))
				return true
			}
		case int64:
			if resolved >= 0 && !out.OverflowUint(uint64(resolved)) {
				out.SetUint(uint64(resolved))
				return true
			}
		case uint64:
			if !out.OverflowUint(uint64(resolved)) {
				out.SetUint(uint64(resolved))
				return true
			}
		case float64:
			if resolved <= math.MaxUint64 && !out.OverflowUint(uint64(resolved)) {
				out.SetUint(uint64(resolved))
				return true
			}
		}
	case reflect.Bool:
		switch resolved := resolved.(type) {
		case bool:
			out.SetBool(resolved)
			return true
		}
	case reflect.Float32, reflect.Float64:
		switch resolved := resolved.(type) {
		case int:
			out.SetFloat(float64(resolved))
			return true
		case int64:
			out.SetFloat(float64(resolved))
			return true
		case uint64:
			out.SetFloat(float64(resolved))
			return true
		case float64:
			out.SetFloat(resolved)
			return true
		}
	case reflect.Struct:
		if resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() {
			out.Set(resolvedv)
			return true
		}
	case reflect.Ptr:
		if out.Type().Elem() == reflect.TypeOf(resolved) {
			// TODO DOes this make sense? When is out a Ptr except when decoding a nil value?
			elem := reflect.New(out.Type().Elem())
			elem.Elem().Set(reflect.ValueOf(resolved))
			out.Set(elem)
			return true
		}
	}
	d.terror(n, tag, out)
	return false
}

func settableValueOf(i interface{}) reflect.Value {
	v := reflect.ValueOf(i)
	sv := reflect.New(v.Type()).Elem()
	sv.Set(v)
	return sv
}

func (d *decoder) sequence(n *node, out reflect.Value) (good bool) {
	l := len(n.children)

	var iface reflect.Value
	switch out.Kind() {
	case reflect.Slice:
		out.Set(reflect.MakeSlice(out.Type(), l, l))
	case reflect.Array:
		if l != out.Len() {
			failf("invalid array: want %d elements but got %d", out.Len(), l)
		}
	case reflect.Interface:
		// No type hints. Will have to use a generic sequence.
		iface = out
		out = settableValueOf(make([]interface{}, l))
	default:
		d.terror(n, yaml_SEQ_TAG, out)
		return false
	}
	et := out.Type().Elem()

	j := 0
	for i := 0; i < l; i++ {
		e := reflect.New(et).Elem()
		if ok := d.unmarshal(n.children[i], e); ok {
			out.Index(j).Set(e)
			j++
		}
	}
	if out.Kind() != reflect.Array {
		out.Set(out.Slice(0, j))
	}
	if iface.IsValid() {
		iface.Set(out)
	}
	return true
}

func (d *decoder) mapping(n *node, out reflect.Value) (good bool) {
	switch out.Kind() {
	case reflect.Struct:
		return d.mappingStruct(n, out)
	case reflect.Slice:
		return d.mappingSlice(n, out)
	case reflect.Map:
		// okay
	case reflect.Interface:
		if d.mapType.Kind() == reflect.Map {
			iface := out
			out = reflect.MakeMap(d.mapType)
			iface.Set(out)
		} else {
			slicev := reflect.New(d.mapType).Elem()
			if !d.mappingSlice(n, slicev) {
				return false
			}
			out.Set(slicev)
			return true
		}
	default:
		d.terror(n, yaml_MAP_TAG, out)
		return false
	}
	outt := out.Type()
	kt := outt.Key()
	et := outt.Elem()

	mapType := d.mapType
	if outt.Key() == ifaceType && outt.Elem() == ifaceType {
		d.mapType = outt
	}

	if out.IsNil() {
		out.Set(reflect.MakeMap(outt))
	}
	l := len(n.children)
	for i := 0; i < l; i += 2 {
		if isMerge(n.children[i]) {
			d.merge(n.children[i+1], out)
			continue
		}
		k := reflect.New(kt).Elem()
		if d.unmarshal(n.children[i], k) {
			kkind := k.Kind()
			if kkind == reflect.Interface {
				kkind = k.Elem().Kind()
			}
			if kkind == reflect.Map || kkind == reflect.Slice {
				failf("invalid map key: %#v", k.Interface())
			}
			e := reflect.New(et).Elem()
			if d.unmarshal(n.children[i+1], e) {
				d.setMapIndex(n.children[i+1], out, k, e)
			}
		}
	}
	d.mapType = mapType
	return true
}

func (d *decoder) setMapIndex(n *node, out, k, v reflect.Value) {
	if d.strict && out.MapIndex(k) != zeroValue {
		d.terrors = append(d.terrors, fmt.Sprintf("line %d: key %#v already set in map", n.line+1, k.Interface()))
		return
	}
	out.SetMapIndex(k, v)
}

func (d *decoder) mappingSlice(n *node, out reflect.Value) (good bool) {
	outt := out.Type()
	if outt.Elem() != mapItemType {
		d.terror(n, yaml_MAP_TAG, out)
		return false
	}

	mapType := d.mapType
	d.mapType = outt

	var slice []MapItem
	var l = len(n.children)
	for i := 0; i < l; i += 2 {
		if isMerge(n.children[i]) {
			d.merge(n.children[i+1], out)
			continue
		}
		item := MapItem{}
		k := reflect.ValueOf(&item.Key).Elem()
		if d.unmarshal(n.children[i], k) {
			v := reflect.ValueOf(&item.Value).Elem()
			if d.unmarshal(n.children[i+1], v) {
				slice = append(slice, item)
			}
		}
	}
	out.Set(reflect.ValueOf(slice))
	d.mapType = mapType
	return true
}

func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
	sinfo, err := getStructInfo(out.Type())
	if err != nil {
		panic(err)
	}
	name := settableValueOf("")
	l := len(n.children)

	var inlineMap reflect.Value
	var elemType reflect.Type
	if sinfo.InlineMap != -1 {
		inlineMap = out.Field(sinfo.InlineMap)
		inlineMap.Set(reflect.New(inlineMap.Type()).Elem())
		elemType = inlineMap.Type().Elem()
	}

	var doneFields []bool
	if d.strict {
		doneFields = make([]bool, len(sinfo.FieldsList))
	}
	for i := 0; i < l; i += 2 {
		ni := n.children[i]
		if isMerge(ni) {
			d.merge(n.children[i+1], out)
			continue
		}
		if !d.unmarshal(ni, name) {
			continue
		}
		if info, ok := sinfo.FieldsMap[name.String()]; ok {
			if d.strict {
				if doneFields[info.Id] {
					d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s already set in type %s", ni.line+1, name.String(), out.Type()))
					continue
				}
				doneFields[info.Id] = true
			}
			var field reflect.Value
			if info.Inline == nil {
				field = out.Field(info.Num)
			} else {
				field = out.FieldByIndex(info.Inline)
			}
			d.unmarshal(n.children[i+1], field)
		} else if sinfo.InlineMap != -1 {
			if inlineMap.IsNil() {
				inlineMap.Set(reflect.MakeMap(inlineMap.Type()))
			}
			value := reflect.New(elemType).Elem()
			d.unmarshal(n.children[i+1], value)
			d.setMapIndex(n.children[i+1], inlineMap, name, value)
		} else if d.strict {
			d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s not found in type %s", ni.line+1, name.String(), out.Type()))
		}
	}
	return true
}

func failWantMap() {
	failf("map merge requires map or sequence of maps as the value")
}

func (d *decoder) merge(n *node, out reflect.Value) {
	switch n.kind {
	case mappingNode:
		d.unmarshal(n, out)
	case aliasNode:
		if n.alias != nil && n.alias.kind != mappingNode {
			failWantMap()
		}
		d.unmarshal(n, out)
	case sequenceNode:
		// Step backwards as earlier nodes take precedence.
		for i := len(n.children) - 1; i >= 0; i-- {
			ni := n.children[i]
			if ni.kind == aliasNode {
				if ni.alias != nil && ni.alias.kind != mappingNode {
					failWantMap()
				}
			} else if ni.kind != mappingNode {
				failWantMap()
			}
			d.unmarshal(ni, out)
		}
	default:
		failWantMap()
	}
}

func isMerge(n *node) bool {
	return n.kind == scalarNode && n.value == "<<" && (n.implicit == true || n.tag == yaml_MERGE_TAG)
}
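The alias-ratio guard in `unmarshal` above holds the allowed ratio at 0.99 up to the low bound, at 0.10 past the high bound, and interpolates linearly in between. A self-contained sketch of that schedule (same constants, renamed to Go-style identifiers for a standalone program):

```go
package main

import "fmt"

const (
	aliasRatioRangeLow  = 400000  // below this, allow 99% alias-driven decodes
	aliasRatioRangeHigh = 4000000 // above this, allow only 10%
)

// allowedAliasRatio mirrors the decoder's schedule: a flat 0.99 up to the
// low bound, a flat 0.10 past the high bound, and a linear ramp between.
func allowedAliasRatio(decodeCount int) float64 {
	switch {
	case decodeCount <= aliasRatioRangeLow:
		return 0.99
	case decodeCount >= aliasRatioRangeHigh:
		return 0.10
	default:
		r := float64(aliasRatioRangeHigh - aliasRatioRangeLow)
		return 0.99 - 0.89*float64(decodeCount-aliasRatioRangeLow)/r
	}
}

func main() {
	fmt.Println(allowedAliasRatio(1000))    // small doc: 0.99
	fmt.Println(allowedAliasRatio(5000000)) // huge doc: 0.1
	// Midpoint of the ramp: 0.99 - 0.89*0.5 = 0.545
	fmt.Printf("%.3f\n", allowedAliasRatio((aliasRatioRangeLow+aliasRatioRangeHigh)/2))
}
```

The guard only fires once `aliasCount > 100` and `decodeCount > 1000`, so small documents with heavy but bounded aliasing are never rejected.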
1685
vendor/gopkg.in/yaml.v2/emitterc.go
generated
vendored
Normal file
File diff suppressed because it is too large
390
vendor/gopkg.in/yaml.v2/encode.go
generated
vendored
Normal file
@@ -0,0 +1,390 @@
|
|||||||
|
package yaml
|
||||||
|
|
||||||
|
import (
|
||||||
|
"encoding"
|
||||||
|
"fmt"
|
||||||
|
"io"
|
||||||
|
"reflect"
|
||||||
|
"regexp"
|
||||||
|
"sort"
|
||||||
|
"strconv"
|
||||||
|
"strings"
|
||||||
|
"time"
|
||||||
|
"unicode/utf8"
|
||||||
|
)
|
||||||
|
|
||||||
|
// jsonNumber is the interface of the encoding/json.Number datatype.
|
||||||
|
// Repeating the interface here avoids a dependency on encoding/json, and also
|
||||||
|
// supports other libraries like jsoniter, which use a similar datatype with
|
||||||
|
// the same interface. Detecting this interface is useful when dealing with
|
||||||
|
// structures containing json.Number, which is a string under the hood. The
// encoder should prefer the use of Int64(), Float64() and string(), in that
// order, when encoding this type.
type jsonNumber interface {
	Float64() (float64, error)
	Int64() (int64, error)
	String() string
}

type encoder struct {
	emitter yaml_emitter_t
	event   yaml_event_t
	out     []byte
	flow    bool
	// doneInit holds whether the initial stream_start_event has been
	// emitted.
	doneInit bool
}

func newEncoder() *encoder {
	e := &encoder{}
	yaml_emitter_initialize(&e.emitter)
	yaml_emitter_set_output_string(&e.emitter, &e.out)
	yaml_emitter_set_unicode(&e.emitter, true)
	return e
}

func newEncoderWithWriter(w io.Writer) *encoder {
	e := &encoder{}
	yaml_emitter_initialize(&e.emitter)
	yaml_emitter_set_output_writer(&e.emitter, w)
	yaml_emitter_set_unicode(&e.emitter, true)
	return e
}

func (e *encoder) init() {
	if e.doneInit {
		return
	}
	yaml_stream_start_event_initialize(&e.event, yaml_UTF8_ENCODING)
	e.emit()
	e.doneInit = true
}

func (e *encoder) finish() {
	e.emitter.open_ended = false
	yaml_stream_end_event_initialize(&e.event)
	e.emit()
}

func (e *encoder) destroy() {
	yaml_emitter_delete(&e.emitter)
}

func (e *encoder) emit() {
	// This will internally delete the e.event value.
	e.must(yaml_emitter_emit(&e.emitter, &e.event))
}

func (e *encoder) must(ok bool) {
	if !ok {
		msg := e.emitter.problem
		if msg == "" {
			msg = "unknown problem generating YAML content"
		}
		failf("%s", msg)
	}
}
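As an aside on the error handling here: must() delegates to failf, which reports failures by panicking with a wrapper value that the public API boundary recovers into an ordinary error. A minimal stdlib-only sketch of that pattern, assuming illustrative names (yamlError, handleErr, emitSomething are not claimed to be the package's exact unexported API):

```go
package main

import "fmt"

// yamlError is a distinguishable panic payload: failf panics with it, and
// a deferred recover() at the API boundary turns it back into an error.
type yamlError struct{ err error }

func failf(format string, args ...interface{}) {
	panic(yamlError{fmt.Errorf("yaml: "+format, args...)})
}

// handleErr converts a failf panic into an error return; unrelated panics
// are re-raised untouched.
func handleErr(err *error) {
	if v := recover(); v != nil {
		if e, ok := v.(yamlError); ok {
			*err = e.err
		} else {
			panic(v)
		}
	}
}

// emitSomething stands in for a public entry point such as Marshal.
func emitSomething(ok bool) (err error) {
	defer handleErr(&err)
	if !ok {
		failf("%s", "unknown problem generating YAML content")
	}
	return nil
}

func main() {
	fmt.Println(emitSomething(true))  // <nil>
	fmt.Println(emitSomething(false)) // yaml: unknown problem generating YAML content
}
```

The payoff of this design is that deeply recursive emit paths never thread an error value through every call; only the boundary needs the recover.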
func (e *encoder) marshalDoc(tag string, in reflect.Value) {
	e.init()
	yaml_document_start_event_initialize(&e.event, nil, nil, true)
	e.emit()
	e.marshal(tag, in)
	yaml_document_end_event_initialize(&e.event, true)
	e.emit()
}

func (e *encoder) marshal(tag string, in reflect.Value) {
	if !in.IsValid() || in.Kind() == reflect.Ptr && in.IsNil() {
		e.nilv()
		return
	}
	iface := in.Interface()
	switch m := iface.(type) {
	case jsonNumber:
		integer, err := m.Int64()
		if err == nil {
			// In this case the json.Number is a valid int64
			in = reflect.ValueOf(integer)
			break
		}
		float, err := m.Float64()
		if err == nil {
			// In this case the json.Number is a valid float64
			in = reflect.ValueOf(float)
			break
		}
		// fallback case - no number could be obtained
		in = reflect.ValueOf(m.String())
	case time.Time, *time.Time:
		// Although time.Time implements TextMarshaler,
		// we don't want to treat it as a string for YAML
		// purposes because YAML has special support for
		// timestamps.
	case Marshaler:
		v, err := m.MarshalYAML()
		if err != nil {
			fail(err)
		}
		if v == nil {
			e.nilv()
			return
		}
		in = reflect.ValueOf(v)
	case encoding.TextMarshaler:
		text, err := m.MarshalText()
		if err != nil {
			fail(err)
		}
		in = reflect.ValueOf(string(text))
	case nil:
		e.nilv()
		return
	}
	switch in.Kind() {
	case reflect.Interface:
		e.marshal(tag, in.Elem())
	case reflect.Map:
		e.mapv(tag, in)
	case reflect.Ptr:
		if in.Type() == ptrTimeType {
			e.timev(tag, in.Elem())
		} else {
			e.marshal(tag, in.Elem())
		}
	case reflect.Struct:
		if in.Type() == timeType {
			e.timev(tag, in)
		} else {
			e.structv(tag, in)
		}
	case reflect.Slice, reflect.Array:
		if in.Type().Elem() == mapItemType {
			e.itemsv(tag, in)
		} else {
			e.slicev(tag, in)
		}
	case reflect.String:
		e.stringv(tag, in)
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		if in.Type() == durationType {
			e.stringv(tag, reflect.ValueOf(iface.(time.Duration).String()))
		} else {
			e.intv(tag, in)
		}
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
		e.uintv(tag, in)
	case reflect.Float32, reflect.Float64:
		e.floatv(tag, in)
	case reflect.Bool:
		e.boolv(tag, in)
	default:
		panic("cannot marshal type: " + in.Type().String())
	}
}
func (e *encoder) mapv(tag string, in reflect.Value) {
	e.mappingv(tag, func() {
		keys := keyList(in.MapKeys())
		sort.Sort(keys)
		for _, k := range keys {
			e.marshal("", k)
			e.marshal("", in.MapIndex(k))
		}
	})
}

func (e *encoder) itemsv(tag string, in reflect.Value) {
	e.mappingv(tag, func() {
		slice := in.Convert(reflect.TypeOf([]MapItem{})).Interface().([]MapItem)
		for _, item := range slice {
			e.marshal("", reflect.ValueOf(item.Key))
			e.marshal("", reflect.ValueOf(item.Value))
		}
	})
}

func (e *encoder) structv(tag string, in reflect.Value) {
	sinfo, err := getStructInfo(in.Type())
	if err != nil {
		panic(err)
	}
	e.mappingv(tag, func() {
		for _, info := range sinfo.FieldsList {
			var value reflect.Value
			if info.Inline == nil {
				value = in.Field(info.Num)
			} else {
				value = in.FieldByIndex(info.Inline)
			}
			if info.OmitEmpty && isZero(value) {
				continue
			}
			e.marshal("", reflect.ValueOf(info.Key))
			e.flow = info.Flow
			e.marshal("", value)
		}
		if sinfo.InlineMap >= 0 {
			m := in.Field(sinfo.InlineMap)
			if m.Len() > 0 {
				e.flow = false
				keys := keyList(m.MapKeys())
				sort.Sort(keys)
				for _, k := range keys {
					if _, found := sinfo.FieldsMap[k.String()]; found {
						panic(fmt.Sprintf("Can't have key %q in inlined map; conflicts with struct field", k.String()))
					}
					e.marshal("", k)
					e.flow = false
					e.marshal("", m.MapIndex(k))
				}
			}
		}
	})
}
func (e *encoder) mappingv(tag string, f func()) {
	implicit := tag == ""
	style := yaml_BLOCK_MAPPING_STYLE
	if e.flow {
		e.flow = false
		style = yaml_FLOW_MAPPING_STYLE
	}
	yaml_mapping_start_event_initialize(&e.event, nil, []byte(tag), implicit, style)
	e.emit()
	f()
	yaml_mapping_end_event_initialize(&e.event)
	e.emit()
}

func (e *encoder) slicev(tag string, in reflect.Value) {
	implicit := tag == ""
	style := yaml_BLOCK_SEQUENCE_STYLE
	if e.flow {
		e.flow = false
		style = yaml_FLOW_SEQUENCE_STYLE
	}
	e.must(yaml_sequence_start_event_initialize(&e.event, nil, []byte(tag), implicit, style))
	e.emit()
	n := in.Len()
	for i := 0; i < n; i++ {
		e.marshal("", in.Index(i))
	}
	e.must(yaml_sequence_end_event_initialize(&e.event))
	e.emit()
}
// isBase60Float returns whether s is in base 60 notation as defined in YAML 1.1.
//
// The base 60 float notation in YAML 1.1 is a terrible idea and is unsupported
// in YAML 1.2 and by this package, but these should be marshalled quoted for
// the time being for compatibility with other parsers.
func isBase60Float(s string) (result bool) {
	// Fast path.
	if s == "" {
		return false
	}
	c := s[0]
	if !(c == '+' || c == '-' || c >= '0' && c <= '9') || strings.IndexByte(s, ':') < 0 {
		return false
	}
	// Do the full match.
	return base60float.MatchString(s)
}

// From http://yaml.org/type/float.html, except the regular expression there
// is bogus. In practice parsers do not enforce the "\.[0-9_]*" suffix.
var base60float = regexp.MustCompile(`^[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+(?:\.[0-9_]*)?$`)
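The base60float pattern can be exercised directly; this standalone check (same regular expression, copied verbatim) shows which scalars would need quoting on output:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as base60float: sexagesimal scalars like 190:20:30 would be
// read as base-60 numbers by a YAML 1.1 parser, so they must be quoted.
var base60 = regexp.MustCompile(`^[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+(?:\.[0-9_]*)?$`)

func main() {
	fmt.Println(base60.MatchString("190:20:30")) // true
	fmt.Println(base60.MatchString("1:90"))      // false: 90 exceeds [0-5]?[0-9]
	fmt.Println(base60.MatchString("1.5"))       // false: no ':' group
}
```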
func (e *encoder) stringv(tag string, in reflect.Value) {
	var style yaml_scalar_style_t
	s := in.String()
	canUsePlain := true
	switch {
	case !utf8.ValidString(s):
		if tag == yaml_BINARY_TAG {
			failf("explicitly tagged !!binary data must be base64-encoded")
		}
		if tag != "" {
			failf("cannot marshal invalid UTF-8 data as %s", shortTag(tag))
		}
		// It can't be encoded directly as YAML so use a binary tag
		// and encode it as base64.
		tag = yaml_BINARY_TAG
		s = encodeBase64(s)
	case tag == "":
		// Check to see if it would resolve to a specific
		// tag when encoded unquoted. If it doesn't,
		// there's no need to quote it.
		rtag, _ := resolve("", s)
		canUsePlain = rtag == yaml_STR_TAG && !isBase60Float(s)
	}
	// Note: it's possible for user code to emit invalid YAML
	// if they explicitly specify a tag and a string containing
	// text that's incompatible with that tag.
	switch {
	case strings.Contains(s, "\n"):
		style = yaml_LITERAL_SCALAR_STYLE
	case canUsePlain:
		style = yaml_PLAIN_SCALAR_STYLE
	default:
		style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
	}
	e.emitScalar(s, "", tag, style)
}
func (e *encoder) boolv(tag string, in reflect.Value) {
	var s string
	if in.Bool() {
		s = "true"
	} else {
		s = "false"
	}
	e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}

func (e *encoder) intv(tag string, in reflect.Value) {
	s := strconv.FormatInt(in.Int(), 10)
	e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}

func (e *encoder) uintv(tag string, in reflect.Value) {
	s := strconv.FormatUint(in.Uint(), 10)
	e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}

func (e *encoder) timev(tag string, in reflect.Value) {
	t := in.Interface().(time.Time)
	s := t.Format(time.RFC3339Nano)
	e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}
func (e *encoder) floatv(tag string, in reflect.Value) {
	// Issue #352: When formatting, use the precision of the underlying value
	precision := 64
	if in.Kind() == reflect.Float32 {
		precision = 32
	}

	s := strconv.FormatFloat(in.Float(), 'g', -1, precision)
	switch s {
	case "+Inf":
		s = ".inf"
	case "-Inf":
		s = "-.inf"
	case "NaN":
		s = ".nan"
	}
	e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}

func (e *encoder) nilv() {
	e.emitScalar("null", "", "", yaml_PLAIN_SCALAR_STYLE)
}
func (e *encoder) emitScalar(value, anchor, tag string, style yaml_scalar_style_t) {
	implicit := tag == ""
	e.must(yaml_scalar_event_initialize(&e.event, []byte(anchor), []byte(tag), []byte(value), implicit, implicit, style))
	e.emit()
}
|
5
vendor/gopkg.in/yaml.v2/go.mod
generated
vendored
Normal file
5
vendor/gopkg.in/yaml.v2/go.mod
generated
vendored
Normal file
@ -0,0 +1,5 @@
module "gopkg.in/yaml.v2"

require (
	"gopkg.in/check.v1" v0.0.0-20161208181325-20d25e280405
)
1095  vendor/gopkg.in/yaml.v2/parserc.go  (generated, vendored, new file)
File diff suppressed because it is too large
412  vendor/gopkg.in/yaml.v2/readerc.go  (generated, vendored, new file)
@@ -0,0 +1,412 @@
package yaml

import (
	"io"
)

// Set the reader error and return 0.
func yaml_parser_set_reader_error(parser *yaml_parser_t, problem string, offset int, value int) bool {
	parser.error = yaml_READER_ERROR
	parser.problem = problem
	parser.problem_offset = offset
	parser.problem_value = value
	return false
}

// Byte order marks.
const (
	bom_UTF8    = "\xef\xbb\xbf"
	bom_UTF16LE = "\xff\xfe"
	bom_UTF16BE = "\xfe\xff"
)
// Determine the input stream encoding by checking the BOM symbol. If no BOM is
// found, the UTF-8 encoding is assumed. Return 1 on success, 0 on failure.
func yaml_parser_determine_encoding(parser *yaml_parser_t) bool {
	// Ensure that we had enough bytes in the raw buffer.
	for !parser.eof && len(parser.raw_buffer)-parser.raw_buffer_pos < 3 {
		if !yaml_parser_update_raw_buffer(parser) {
			return false
		}
	}

	// Determine the encoding.
	buf := parser.raw_buffer
	pos := parser.raw_buffer_pos
	avail := len(buf) - pos
	if avail >= 2 && buf[pos] == bom_UTF16LE[0] && buf[pos+1] == bom_UTF16LE[1] {
		parser.encoding = yaml_UTF16LE_ENCODING
		parser.raw_buffer_pos += 2
		parser.offset += 2
	} else if avail >= 2 && buf[pos] == bom_UTF16BE[0] && buf[pos+1] == bom_UTF16BE[1] {
		parser.encoding = yaml_UTF16BE_ENCODING
		parser.raw_buffer_pos += 2
		parser.offset += 2
	} else if avail >= 3 && buf[pos] == bom_UTF8[0] && buf[pos+1] == bom_UTF8[1] && buf[pos+2] == bom_UTF8[2] {
		parser.encoding = yaml_UTF8_ENCODING
		parser.raw_buffer_pos += 3
		parser.offset += 3
	} else {
		parser.encoding = yaml_UTF8_ENCODING
	}
	return true
}
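The BOM logic in yaml_parser_determine_encoding reduces to three prefix checks: the UTF-16 BOMs are two bytes, the UTF-8 BOM is three, and no BOM defaults to UTF-8. A standalone sketch of the same checks (detectEncoding is a hypothetical helper, not part of the package):

```go
package main

import (
	"bytes"
	"fmt"
)

// detectEncoding returns the encoding name and how many BOM bytes to skip,
// mirroring the prefix tests in yaml_parser_determine_encoding.
func detectEncoding(b []byte) (string, int) {
	switch {
	case bytes.HasPrefix(b, []byte("\xff\xfe")):
		return "utf-16le", 2
	case bytes.HasPrefix(b, []byte("\xfe\xff")):
		return "utf-16be", 2
	case bytes.HasPrefix(b, []byte("\xef\xbb\xbf")):
		return "utf-8", 3
	default:
		return "utf-8", 0 // no BOM: assume UTF-8, consume nothing
	}
}

func main() {
	enc, skip := detectEncoding([]byte("\xef\xbb\xbfkey: value"))
	fmt.Println(enc, skip) // utf-8 3
	enc, skip = detectEncoding([]byte("key: value"))
	fmt.Println(enc, skip) // utf-8 0
}
```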
// Update the raw buffer.
func yaml_parser_update_raw_buffer(parser *yaml_parser_t) bool {
	size_read := 0

	// Return if the raw buffer is full.
	if parser.raw_buffer_pos == 0 && len(parser.raw_buffer) == cap(parser.raw_buffer) {
		return true
	}

	// Return on EOF.
	if parser.eof {
		return true
	}

	// Move the remaining bytes in the raw buffer to the beginning.
	if parser.raw_buffer_pos > 0 && parser.raw_buffer_pos < len(parser.raw_buffer) {
		copy(parser.raw_buffer, parser.raw_buffer[parser.raw_buffer_pos:])
	}
	parser.raw_buffer = parser.raw_buffer[:len(parser.raw_buffer)-parser.raw_buffer_pos]
	parser.raw_buffer_pos = 0

	// Call the read handler to fill the buffer.
	size_read, err := parser.read_handler(parser, parser.raw_buffer[len(parser.raw_buffer):cap(parser.raw_buffer)])
	parser.raw_buffer = parser.raw_buffer[:len(parser.raw_buffer)+size_read]
	if err == io.EOF {
		parser.eof = true
	} else if err != nil {
		return yaml_parser_set_reader_error(parser, "input error: "+err.Error(), parser.offset, -1)
	}
	return true
}
// Ensure that the buffer contains at least `length` characters.
// Return true on success, false on failure.
//
// The length is supposed to be significantly less that the buffer size.
func yaml_parser_update_buffer(parser *yaml_parser_t, length int) bool {
	if parser.read_handler == nil {
		panic("read handler must be set")
	}

	// [Go] This function was changed to guarantee the requested length size at EOF.
	// The fact we need to do this is pretty awful, but the description above implies
	// for that to be the case, and there are tests

	// If the EOF flag is set and the raw buffer is empty, do nothing.
	if parser.eof && parser.raw_buffer_pos == len(parser.raw_buffer) {
		// [Go] ACTUALLY! Read the documentation of this function above.
		// This is just broken. To return true, we need to have the
		// given length in the buffer. Not doing that means every single
		// check that calls this function to make sure the buffer has a
		// given length is Go) panicking; or C) accessing invalid memory.
		//return true
	}

	// Return if the buffer contains enough characters.
	if parser.unread >= length {
		return true
	}

	// Determine the input encoding if it is not known yet.
	if parser.encoding == yaml_ANY_ENCODING {
		if !yaml_parser_determine_encoding(parser) {
			return false
		}
	}

	// Move the unread characters to the beginning of the buffer.
	buffer_len := len(parser.buffer)
	if parser.buffer_pos > 0 && parser.buffer_pos < buffer_len {
		copy(parser.buffer, parser.buffer[parser.buffer_pos:])
		buffer_len -= parser.buffer_pos
		parser.buffer_pos = 0
	} else if parser.buffer_pos == buffer_len {
		buffer_len = 0
		parser.buffer_pos = 0
	}

	// Open the whole buffer for writing, and cut it before returning.
	parser.buffer = parser.buffer[:cap(parser.buffer)]

	// Fill the buffer until it has enough characters.
	first := true
	for parser.unread < length {

		// Fill the raw buffer if necessary.
		if !first || parser.raw_buffer_pos == len(parser.raw_buffer) {
			if !yaml_parser_update_raw_buffer(parser) {
				parser.buffer = parser.buffer[:buffer_len]
				return false
			}
		}
		first = false

		// Decode the raw buffer.
	inner:
		for parser.raw_buffer_pos != len(parser.raw_buffer) {
			var value rune
			var width int

			raw_unread := len(parser.raw_buffer) - parser.raw_buffer_pos

			// Decode the next character.
			switch parser.encoding {
			case yaml_UTF8_ENCODING:
				// Decode a UTF-8 character. Check RFC 3629
				// (http://www.ietf.org/rfc/rfc3629.txt) for more details.
				//
				// The following table (taken from the RFC) is used for
				// decoding.
				//
				//    Char. number range |        UTF-8 octet sequence
				//      (hexadecimal)    |              (binary)
				//   --------------------+------------------------------------
				//   0000 0000-0000 007F | 0xxxxxxx
				//   0000 0080-0000 07FF | 110xxxxx 10xxxxxx
				//   0000 0800-0000 FFFF | 1110xxxx 10xxxxxx 10xxxxxx
				//   0001 0000-0010 FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
				//
				// Additionally, the characters in the range 0xD800-0xDFFF
				// are prohibited as they are reserved for use with UTF-16
				// surrogate pairs.

				// Determine the length of the UTF-8 sequence.
				octet := parser.raw_buffer[parser.raw_buffer_pos]
				switch {
				case octet&0x80 == 0x00:
					width = 1
				case octet&0xE0 == 0xC0:
					width = 2
				case octet&0xF0 == 0xE0:
					width = 3
				case octet&0xF8 == 0xF0:
					width = 4
				default:
					// The leading octet is invalid.
					return yaml_parser_set_reader_error(parser,
						"invalid leading UTF-8 octet",
						parser.offset, int(octet))
				}

				// Check if the raw buffer contains an incomplete character.
				if width > raw_unread {
					if parser.eof {
						return yaml_parser_set_reader_error(parser,
							"incomplete UTF-8 octet sequence",
							parser.offset, -1)
					}
					break inner
				}

				// Decode the leading octet.
				switch {
				case octet&0x80 == 0x00:
					value = rune(octet & 0x7F)
				case octet&0xE0 == 0xC0:
					value = rune(octet & 0x1F)
				case octet&0xF0 == 0xE0:
					value = rune(octet & 0x0F)
				case octet&0xF8 == 0xF0:
					value = rune(octet & 0x07)
				default:
					value = 0
				}

				// Check and decode the trailing octets.
				for k := 1; k < width; k++ {
					octet = parser.raw_buffer[parser.raw_buffer_pos+k]

					// Check if the octet is valid.
					if (octet & 0xC0) != 0x80 {
						return yaml_parser_set_reader_error(parser,
							"invalid trailing UTF-8 octet",
							parser.offset+k, int(octet))
					}

					// Decode the octet.
					value = (value << 6) + rune(octet&0x3F)
				}

				// Check the length of the sequence against the value.
				switch {
				case width == 1:
				case width == 2 && value >= 0x80:
				case width == 3 && value >= 0x800:
				case width == 4 && value >= 0x10000:
				default:
					return yaml_parser_set_reader_error(parser,
						"invalid length of a UTF-8 sequence",
						parser.offset, -1)
				}

				// Check the range of the value.
				if value >= 0xD800 && value <= 0xDFFF || value > 0x10FFFF {
					return yaml_parser_set_reader_error(parser,
						"invalid Unicode character",
						parser.offset, int(value))
				}

			case yaml_UTF16LE_ENCODING, yaml_UTF16BE_ENCODING:
				var low, high int
				if parser.encoding == yaml_UTF16LE_ENCODING {
					low, high = 0, 1
				} else {
					low, high = 1, 0
				}

				// The UTF-16 encoding is not as simple as one might
				// naively think. Check RFC 2781
				// (http://www.ietf.org/rfc/rfc2781.txt).
				//
				// Normally, two subsequent bytes describe a Unicode
				// character. However a special technique (called a
				// surrogate pair) is used for specifying character
				// values larger than 0xFFFF.
				//
				// A surrogate pair consists of two pseudo-characters:
				//    high surrogate area (0xD800-0xDBFF)
				//    low surrogate area (0xDC00-0xDFFF)
				//
				// The following formulas are used for decoding
				// and encoding characters using surrogate pairs:
				//
				//    U  = U' + 0x10000   (0x01 00 00 <= U <= 0x10 FF FF)
				//    U' = yyyyyyyyyyxxxxxxxxxx   (0 <= U' <= 0x0F FF FF)
				//    W1 = 110110yyyyyyyyyy
				//    W2 = 110111xxxxxxxxxx
				//
				// where U is the character value, W1 is the high surrogate
				// area, W2 is the low surrogate area.

				// Check for incomplete UTF-16 character.
				if raw_unread < 2 {
					if parser.eof {
						return yaml_parser_set_reader_error(parser,
							"incomplete UTF-16 character",
							parser.offset, -1)
					}
					break inner
				}

				// Get the character.
				value = rune(parser.raw_buffer[parser.raw_buffer_pos+low]) +
					(rune(parser.raw_buffer[parser.raw_buffer_pos+high]) << 8)

				// Check for unexpected low surrogate area.
				if value&0xFC00 == 0xDC00 {
					return yaml_parser_set_reader_error(parser,
						"unexpected low surrogate area",
						parser.offset, int(value))
				}

				// Check for a high surrogate area.
				if value&0xFC00 == 0xD800 {
					width = 4

					// Check for incomplete surrogate pair.
					if raw_unread < 4 {
						if parser.eof {
							return yaml_parser_set_reader_error(parser,
								"incomplete UTF-16 surrogate pair",
								parser.offset, -1)
						}
						break inner
					}

					// Get the next character.
					value2 := rune(parser.raw_buffer[parser.raw_buffer_pos+low+2]) +
						(rune(parser.raw_buffer[parser.raw_buffer_pos+high+2]) << 8)

					// Check for a low surrogate area.
					if value2&0xFC00 != 0xDC00 {
						return yaml_parser_set_reader_error(parser,
							"expected low surrogate area",
							parser.offset+2, int(value2))
					}

					// Generate the value of the surrogate pair.
					value = 0x10000 + ((value & 0x3FF) << 10) + (value2 & 0x3FF)
				} else {
					width = 2
				}

			default:
				panic("impossible")
			}

			// Check if the character is in the allowed range:
			//      #x9 | #xA | #xD | [#x20-#x7E]               (8 bit)
			//      | #x85 | [#xA0-#xD7FF] | [#xE000-#xFFFD]    (16 bit)
			//      | [#x10000-#x10FFFF]                        (32 bit)
			switch {
			case value == 0x09:
			case value == 0x0A:
			case value == 0x0D:
			case value >= 0x20 && value <= 0x7E:
			case value == 0x85:
			case value >= 0xA0 && value <= 0xD7FF:
			case value >= 0xE000 && value <= 0xFFFD:
			case value >= 0x10000 && value <= 0x10FFFF:
			default:
				return yaml_parser_set_reader_error(parser,
					"control characters are not allowed",
					parser.offset, int(value))
			}

			// Move the raw pointers.
			parser.raw_buffer_pos += width
			parser.offset += width

			// Finally put the character into the buffer.
			if value <= 0x7F {
				// 0000 0000-0000 007F . 0xxxxxxx
				parser.buffer[buffer_len+0] = byte(value)
				buffer_len += 1
			} else if value <= 0x7FF {
				// 0000 0080-0000 07FF . 110xxxxx 10xxxxxx
				parser.buffer[buffer_len+0] = byte(0xC0 + (value >> 6))
				parser.buffer[buffer_len+1] = byte(0x80 + (value & 0x3F))
				buffer_len += 2
			} else if value <= 0xFFFF {
				// 0000 0800-0000 FFFF . 1110xxxx 10xxxxxx 10xxxxxx
				parser.buffer[buffer_len+0] = byte(0xE0 + (value >> 12))
				parser.buffer[buffer_len+1] = byte(0x80 + ((value >> 6) & 0x3F))
				parser.buffer[buffer_len+2] = byte(0x80 + (value & 0x3F))
				buffer_len += 3
			} else {
				// 0001 0000-0010 FFFF . 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
				parser.buffer[buffer_len+0] = byte(0xF0 + (value >> 18))
				parser.buffer[buffer_len+1] = byte(0x80 + ((value >> 12) & 0x3F))
				parser.buffer[buffer_len+2] = byte(0x80 + ((value >> 6) & 0x3F))
				parser.buffer[buffer_len+3] = byte(0x80 + (value & 0x3F))
				buffer_len += 4
			}

			parser.unread++
		}

		// On EOF, put NUL into the buffer and return.
		if parser.eof {
			parser.buffer[buffer_len] = 0
			buffer_len++
			parser.unread++
			break
		}
	}
	// [Go] Read the documentation of this function above. To return true,
	// we need to have the given length in the buffer. Not doing that means
	// every single check that calls this function to make sure the buffer
	// has a given length is Go) panicking; or C) accessing invalid memory.
	// This happens here due to the EOF above breaking early.
	for buffer_len < length {
		parser.buffer[buffer_len] = 0
		buffer_len++
	}
	parser.buffer = parser.buffer[:buffer_len]
	return true
}
258  vendor/gopkg.in/yaml.v2/resolve.go  (generated, vendored, new file)
@@ -0,0 +1,258 @@
package yaml

import (
	"encoding/base64"
	"math"
	"regexp"
	"strconv"
	"strings"
	"time"
)

type resolveMapItem struct {
	value interface{}
	tag   string
}

var resolveTable = make([]byte, 256)
var resolveMap = make(map[string]resolveMapItem)

func init() {
	t := resolveTable
	t[int('+')] = 'S' // Sign
	t[int('-')] = 'S'
	for _, c := range "0123456789" {
		t[int(c)] = 'D' // Digit
	}
	for _, c := range "yYnNtTfFoO~" {
		t[int(c)] = 'M' // In map
	}
	t[int('.')] = '.' // Float (potentially in map)

	var resolveMapList = []struct {
		v   interface{}
		tag string
		l   []string
	}{
		{true, yaml_BOOL_TAG, []string{"y", "Y", "yes", "Yes", "YES"}},
		{true, yaml_BOOL_TAG, []string{"true", "True", "TRUE"}},
		{true, yaml_BOOL_TAG, []string{"on", "On", "ON"}},
		{false, yaml_BOOL_TAG, []string{"n", "N", "no", "No", "NO"}},
		{false, yaml_BOOL_TAG, []string{"false", "False", "FALSE"}},
		{false, yaml_BOOL_TAG, []string{"off", "Off", "OFF"}},
		{nil, yaml_NULL_TAG, []string{"", "~", "null", "Null", "NULL"}},
		{math.NaN(), yaml_FLOAT_TAG, []string{".nan", ".NaN", ".NAN"}},
		{math.Inf(+1), yaml_FLOAT_TAG, []string{".inf", ".Inf", ".INF"}},
		{math.Inf(+1), yaml_FLOAT_TAG, []string{"+.inf", "+.Inf", "+.INF"}},
		{math.Inf(-1), yaml_FLOAT_TAG, []string{"-.inf", "-.Inf", "-.INF"}},
		{"<<", yaml_MERGE_TAG, []string{"<<"}},
	}

	m := resolveMap
	for _, item := range resolveMapList {
		for _, s := range item.l {
			m[s] = resolveMapItem{item.v, item.tag}
		}
	}
}
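The resolveTable built in init() maps a plain scalar's first byte to a resolution hint ('S' sign, 'D' digit, 'M' map candidate, '.' float) before any parsing happens. A condensed standalone version of the same classification (hint is a hypothetical helper written for illustration):

```go
package main

import "fmt"

// hint classifies a scalar by its first byte, mirroring resolveTable.
func hint(s string) byte {
	if s == "" {
		return 'N' // no hint for empty input
	}
	c := s[0]
	switch {
	case c == '+' || c == '-':
		return 'S' // sign: int/float/timestamp candidates
	case c >= '0' && c <= '9':
		return 'D' // digit
	case c == '.':
		return '.' // float, possibly in the resolve map (.inf, .nan)
	}
	for _, m := range []byte("yYnNtTfFoO~") {
		if c == m {
			return 'M' // candidate for the bool/null map lookup
		}
	}
	return 0 // anything else resolves as a plain string
}

func main() {
	fmt.Printf("%c %c %c %c\n", hint("-12"), hint("3.5"), hint("yes"), hint(".inf"))
	// S D M .
}
```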
const longTagPrefix = "tag:yaml.org,2002:"

func shortTag(tag string) string {
	// TODO This can easily be made faster and produce less garbage.
	if strings.HasPrefix(tag, longTagPrefix) {
		return "!!" + tag[len(longTagPrefix):]
	}
	return tag
}

func longTag(tag string) string {
	if strings.HasPrefix(tag, "!!") {
		return longTagPrefix + tag[2:]
	}
	return tag
}

func resolvableTag(tag string) bool {
	switch tag {
	case "", yaml_STR_TAG, yaml_BOOL_TAG, yaml_INT_TAG, yaml_FLOAT_TAG, yaml_NULL_TAG, yaml_TIMESTAMP_TAG:
		return true
	}
	return false
}

var yamlStyleFloat = regexp.MustCompile(`^[-+]?(\.[0-9]+|[0-9]+(\.[0-9]*)?)([eE][-+]?[0-9]+)?$`)
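shortTag and longTag above are inverses over the standard tag namespace ("tag:yaml.org,2002:str" versus "!!str"); the same two functions copied standalone for a quick round-trip check:

```go
package main

import (
	"fmt"
	"strings"
)

const longTagPrefix = "tag:yaml.org,2002:"

// shortTag abbreviates a standard long-form tag to the !! notation;
// other tags pass through unchanged.
func shortTag(tag string) string {
	if strings.HasPrefix(tag, longTagPrefix) {
		return "!!" + tag[len(longTagPrefix):]
	}
	return tag
}

// longTag expands a !! tag back to its long form.
func longTag(tag string) string {
	if strings.HasPrefix(tag, "!!") {
		return longTagPrefix + tag[2:]
	}
	return tag
}

func main() {
	fmt.Println(shortTag("tag:yaml.org,2002:str")) // !!str
	fmt.Println(longTag("!!int"))                  // tag:yaml.org,2002:int
	fmt.Println(shortTag("!custom"))               // !custom (unchanged)
}
```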
func resolve(tag string, in string) (rtag string, out interface{}) {
	if !resolvableTag(tag) {
		return tag, in
	}

	defer func() {
		switch tag {
		case "", rtag, yaml_STR_TAG, yaml_BINARY_TAG:
			return
		case yaml_FLOAT_TAG:
			if rtag == yaml_INT_TAG {
				switch v := out.(type) {
				case int64:
					rtag = yaml_FLOAT_TAG
					out = float64(v)
					return
				case int:
					rtag = yaml_FLOAT_TAG
					out = float64(v)
					return
				}
			}
		}
		failf("cannot decode %s `%s` as a %s", shortTag(rtag), in, shortTag(tag))
	}()

	// Any data is accepted as a !!str or !!binary.
	// Otherwise, the prefix is enough of a hint about what it might be.
	hint := byte('N')
	if in != "" {
		hint = resolveTable[in[0]]
	}
	if hint != 0 && tag != yaml_STR_TAG && tag != yaml_BINARY_TAG {
		// Handle things we can lookup in a map.
		if item, ok := resolveMap[in]; ok {
			return item.tag, item.value
		}

		// Base 60 floats are a bad idea, were dropped in YAML 1.2, and
		// are purposefully unsupported here. They're still quoted on
		// the way out for compatibility with other parsers, though.

		switch hint {
		case 'M':
			// We've already checked the map above.

		case '.':
			// Not in the map, so maybe a normal float.
			floatv, err := strconv.ParseFloat(in, 64)
			if err == nil {
				return yaml_FLOAT_TAG, floatv
			}

		case 'D', 'S':
			// Int, float, or timestamp.
			// Only try values as a timestamp if the value is unquoted or there's an explicit
			// !!timestamp tag.
			if tag == "" || tag == yaml_TIMESTAMP_TAG {
				t, ok := parseTimestamp(in)
				if ok {
					return yaml_TIMESTAMP_TAG, t
				}
			}

			plain := strings.Replace(in, "_", "", -1)
			intv, err := strconv.ParseInt(plain, 0, 64)
			if err == nil {
				if intv == int64(int(intv)) {
					return yaml_INT_TAG, int(intv)
				} else {
					return yaml_INT_TAG, intv
				}
			}
			uintv, err := strconv.ParseUint(plain, 0, 64)
			if err == nil {
				return yaml_INT_TAG, uintv
			}
			if yamlStyleFloat.MatchString(plain) {
				floatv, err := strconv.ParseFloat(plain, 64)
				if err == nil {
					return yaml_FLOAT_TAG, floatv
				}
			}
			if strings.HasPrefix(plain, "0b") {
				intv, err := strconv.ParseInt(plain[2:], 2, 64)
				if err == nil {
					if intv == int64(int(intv)) {
						return yaml_INT_TAG, int(intv)
					} else {
						return yaml_INT_TAG, intv
					}
				}
				uintv, err := strconv.ParseUint(plain[2:], 2, 64)
				if err == nil {
					return yaml_INT_TAG, uintv
|
||||||
|
}
|
||||||
|
} else if strings.HasPrefix(plain, "-0b") {
|
||||||
|
intv, err := strconv.ParseInt("-" + plain[3:], 2, 64)
|
||||||
|
if err == nil {
|
||||||
|
if true || intv == int64(int(intv)) {
|
||||||
|
return yaml_INT_TAG, int(intv)
|
||||||
|
} else {
|
||||||
|
return yaml_INT_TAG, intv
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
default:
|
||||||
|
panic("resolveTable item not yet handled: " + string(rune(hint)) + " (with " + in + ")")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return yaml_STR_TAG, in
|
||||||
|
}
|
||||||
|
|
||||||
|
// encodeBase64 encodes s as base64 that is broken up into multiple lines
// as appropriate for the resulting length.
func encodeBase64(s string) string {
	const lineLen = 70
	encLen := base64.StdEncoding.EncodedLen(len(s))
	lines := encLen/lineLen + 1
	buf := make([]byte, encLen*2+lines)
	in := buf[0:encLen]
	out := buf[encLen:]
	base64.StdEncoding.Encode(in, []byte(s))
	k := 0
	for i := 0; i < len(in); i += lineLen {
		j := i + lineLen
		if j > len(in) {
			j = len(in)
		}
		k += copy(out[k:], in[i:j])
		if lines > 1 {
			out[k] = '\n'
			k++
		}
	}
	return string(out[:k])
}

// This is a subset of the formats allowed by the regular expression
// defined at http://yaml.org/type/timestamp.html.
var allowedTimestampFormats = []string{
	"2006-1-2T15:4:5.999999999Z07:00", // RFC3339Nano with short date fields.
	"2006-1-2t15:4:5.999999999Z07:00", // RFC3339Nano with short date fields and lower-case "t".
	"2006-1-2 15:4:5.999999999",       // space separated with no time zone
	"2006-1-2",                        // date only
	// Notable exception: time.Parse cannot handle: "2001-12-14 21:59:43.10 -5"
	// from the set of examples.
}

// parseTimestamp parses s as a timestamp string and
// returns the timestamp and reports whether it succeeded.
// Timestamp formats are defined at http://yaml.org/type/timestamp.html
func parseTimestamp(s string) (time.Time, bool) {
	// TODO write code to check all the formats supported by
	// http://yaml.org/type/timestamp.html instead of using time.Parse.

	// Quick check: all date formats start with YYYY-.
	i := 0
	for ; i < len(s); i++ {
		if c := s[i]; c < '0' || c > '9' {
			break
		}
	}
	if i != 4 || i == len(s) || s[i] != '-' {
		return time.Time{}, false
	}
	for _, format := range allowedTimestampFormats {
		if t, err := time.Parse(format, s); err == nil {
			return t, true
		}
	}
	return time.Time{}, false
}
2718 vendor/gopkg.in/yaml.v2/scannerc.go (generated, vendored, new file; diff suppressed because it is too large)

113 vendor/gopkg.in/yaml.v2/sorter.go (generated, vendored, new file)
@@ -0,0 +1,113 @@
package yaml

import (
	"reflect"
	"unicode"
)

type keyList []reflect.Value

func (l keyList) Len() int      { return len(l) }
func (l keyList) Swap(i, j int) { l[i], l[j] = l[j], l[i] }
func (l keyList) Less(i, j int) bool {
	a := l[i]
	b := l[j]
	ak := a.Kind()
	bk := b.Kind()
	for (ak == reflect.Interface || ak == reflect.Ptr) && !a.IsNil() {
		a = a.Elem()
		ak = a.Kind()
	}
	for (bk == reflect.Interface || bk == reflect.Ptr) && !b.IsNil() {
		b = b.Elem()
		bk = b.Kind()
	}
	af, aok := keyFloat(a)
	bf, bok := keyFloat(b)
	if aok && bok {
		if af != bf {
			return af < bf
		}
		if ak != bk {
			return ak < bk
		}
		return numLess(a, b)
	}
	if ak != reflect.String || bk != reflect.String {
		return ak < bk
	}
	ar, br := []rune(a.String()), []rune(b.String())
	for i := 0; i < len(ar) && i < len(br); i++ {
		if ar[i] == br[i] {
			continue
		}
		al := unicode.IsLetter(ar[i])
		bl := unicode.IsLetter(br[i])
		if al && bl {
			return ar[i] < br[i]
		}
		if al || bl {
			return bl
		}
		var ai, bi int
		var an, bn int64
		if ar[i] == '0' || br[i] == '0' {
			for j := i - 1; j >= 0 && unicode.IsDigit(ar[j]); j-- {
				if ar[j] != '0' {
					an = 1
					bn = 1
					break
				}
			}
		}
		for ai = i; ai < len(ar) && unicode.IsDigit(ar[ai]); ai++ {
			an = an*10 + int64(ar[ai]-'0')
		}
		for bi = i; bi < len(br) && unicode.IsDigit(br[bi]); bi++ {
			bn = bn*10 + int64(br[bi]-'0')
		}
		if an != bn {
			return an < bn
		}
		if ai != bi {
			return ai < bi
		}
		return ar[i] < br[i]
	}
	return len(ar) < len(br)
}

// keyFloat returns a float value for v if it is a number/bool
// and whether it is a number/bool or not.
func keyFloat(v reflect.Value) (f float64, ok bool) {
	switch v.Kind() {
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return float64(v.Int()), true
	case reflect.Float32, reflect.Float64:
		return v.Float(), true
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
		return float64(v.Uint()), true
	case reflect.Bool:
		if v.Bool() {
			return 1, true
		}
		return 0, true
	}
	return 0, false
}

// numLess returns whether a < b.
// a and b must necessarily have the same kind.
func numLess(a, b reflect.Value) bool {
	switch a.Kind() {
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return a.Int() < b.Int()
	case reflect.Float32, reflect.Float64:
		return a.Float() < b.Float()
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
		return a.Uint() < b.Uint()
	case reflect.Bool:
		return !a.Bool() && b.Bool()
	}
	panic("not a number")
}
26 vendor/gopkg.in/yaml.v2/writerc.go (generated, vendored, new file)
@@ -0,0 +1,26 @@
package yaml

// Set the writer error and return false.
func yaml_emitter_set_writer_error(emitter *yaml_emitter_t, problem string) bool {
	emitter.error = yaml_WRITER_ERROR
	emitter.problem = problem
	return false
}

// Flush the output buffer.
func yaml_emitter_flush(emitter *yaml_emitter_t) bool {
	if emitter.write_handler == nil {
		panic("write handler not set")
	}

	// Check if the buffer is empty.
	if emitter.buffer_pos == 0 {
		return true
	}

	if err := emitter.write_handler(emitter, emitter.buffer[:emitter.buffer_pos]); err != nil {
		return yaml_emitter_set_writer_error(emitter, "write error: "+err.Error())
	}
	emitter.buffer_pos = 0
	return true
}
466 vendor/gopkg.in/yaml.v2/yaml.go (generated, vendored, new file)
@@ -0,0 +1,466 @@
// Package yaml implements YAML support for the Go language.
//
// Source code and other details for the project are available at GitHub:
//
//   https://github.com/go-yaml/yaml
//
package yaml

import (
	"errors"
	"fmt"
	"io"
	"reflect"
	"strings"
	"sync"
)

// MapSlice encodes and decodes as a YAML map.
// The order of keys is preserved when encoding and decoding.
type MapSlice []MapItem

// MapItem is an item in a MapSlice.
type MapItem struct {
	Key, Value interface{}
}

// The Unmarshaler interface may be implemented by types to customize their
// behavior when being unmarshaled from a YAML document. The UnmarshalYAML
// method receives a function that may be called to unmarshal the original
// YAML value into a field or variable. It is safe to call the unmarshal
// function parameter more than once if necessary.
type Unmarshaler interface {
	UnmarshalYAML(unmarshal func(interface{}) error) error
}

// The Marshaler interface may be implemented by types to customize their
// behavior when being marshaled into a YAML document. The returned value
// is marshaled in place of the original value implementing Marshaler.
//
// If an error is returned by MarshalYAML, the marshaling procedure stops
// and returns with the provided error.
type Marshaler interface {
	MarshalYAML() (interface{}, error)
}

// Unmarshal decodes the first document found within the in byte slice
// and assigns decoded values into the out value.
//
// Maps and pointers (to a struct, string, int, etc) are accepted as out
// values. If an internal pointer within a struct is not initialized,
// the yaml package will initialize it if necessary for unmarshalling
// the provided data. The out parameter must not be nil.
//
// The type of the decoded values should be compatible with the respective
// values in out. If one or more values cannot be decoded due to a type
// mismatch, decoding continues partially until the end of the YAML
// content, and a *yaml.TypeError is returned with details for all
// missed values.
//
// Struct fields are only unmarshalled if they are exported (have an
// upper case first letter), and are unmarshalled using the field name
// lowercased as the default key. Custom keys may be defined via the
// "yaml" name in the field tag: the content preceding the first comma
// is used as the key, and the following comma-separated options are
// used to tweak the marshalling process (see Marshal).
// Conflicting names result in a runtime error.
//
// For example:
//
//     type T struct {
//         F int `yaml:"a,omitempty"`
//         B int
//     }
//     var t T
//     yaml.Unmarshal([]byte("a: 1\nb: 2"), &t)
//
// See the documentation of Marshal for the format of tags and a list of
// supported tag options.
//
func Unmarshal(in []byte, out interface{}) (err error) {
	return unmarshal(in, out, false)
}

// UnmarshalStrict is like Unmarshal except that any fields that are found
// in the data that do not have corresponding struct members, or mapping
// keys that are duplicates, will result in an error.
func UnmarshalStrict(in []byte, out interface{}) (err error) {
	return unmarshal(in, out, true)
}

// A Decoder reads and decodes YAML values from an input stream.
type Decoder struct {
	strict bool
	parser *parser
}

// NewDecoder returns a new decoder that reads from r.
//
// The decoder introduces its own buffering and may read
// data from r beyond the YAML values requested.
func NewDecoder(r io.Reader) *Decoder {
	return &Decoder{
		parser: newParserFromReader(r),
	}
}

// SetStrict sets whether strict decoding behaviour is enabled when
// decoding items in the data (see UnmarshalStrict). By default, decoding is not strict.
func (dec *Decoder) SetStrict(strict bool) {
	dec.strict = strict
}

// Decode reads the next YAML-encoded value from its input
// and stores it in the value pointed to by v.
//
// See the documentation for Unmarshal for details about the
// conversion of YAML into a Go value.
func (dec *Decoder) Decode(v interface{}) (err error) {
	d := newDecoder(dec.strict)
	defer handleErr(&err)
	node := dec.parser.parse()
	if node == nil {
		return io.EOF
	}
	out := reflect.ValueOf(v)
	if out.Kind() == reflect.Ptr && !out.IsNil() {
		out = out.Elem()
	}
	d.unmarshal(node, out)
	if len(d.terrors) > 0 {
		return &TypeError{d.terrors}
	}
	return nil
}

func unmarshal(in []byte, out interface{}, strict bool) (err error) {
	defer handleErr(&err)
	d := newDecoder(strict)
	p := newParser(in)
	defer p.destroy()
	node := p.parse()
	if node != nil {
		v := reflect.ValueOf(out)
		if v.Kind() == reflect.Ptr && !v.IsNil() {
			v = v.Elem()
		}
		d.unmarshal(node, v)
	}
	if len(d.terrors) > 0 {
		return &TypeError{d.terrors}
	}
	return nil
}

// Marshal serializes the value provided into a YAML document. The structure
// of the generated document will reflect the structure of the value itself.
// Maps and pointers (to struct, string, int, etc) are accepted as the in value.
//
// Struct fields are only marshalled if they are exported (have an upper case
// first letter), and are marshalled using the field name lowercased as the
// default key. Custom keys may be defined via the "yaml" name in the field
// tag: the content preceding the first comma is used as the key, and the
// following comma-separated options are used to tweak the marshalling process.
// Conflicting names result in a runtime error.
//
// The field tag format accepted is:
//
//     `(...) yaml:"[<key>][,<flag1>[,<flag2>]]" (...)`
//
// The following flags are currently supported:
//
//     omitempty    Only include the field if it's not set to the zero
//                  value for the type or to empty slices or maps.
//                  Zero valued structs will be omitted if all their public
//                  fields are zero, unless they implement an IsZero
//                  method (see the IsZeroer interface type), in which
//                  case the field will be included if that method returns true.
//
//     flow         Marshal using a flow style (useful for structs,
//                  sequences and maps).
//
//     inline       Inline the field, which must be a struct or a map,
//                  causing all of its fields or keys to be processed as if
//                  they were part of the outer struct. For maps, keys must
//                  not conflict with the yaml keys of other struct fields.
//
// In addition, if the key is "-", the field is ignored.
//
// For example:
//
//     type T struct {
//         F int `yaml:"a,omitempty"`
//         B int
//     }
//     yaml.Marshal(&T{B: 2}) // Returns "b: 2\n"
//     yaml.Marshal(&T{F: 1}) // Returns "a: 1\nb: 0\n"
//
func Marshal(in interface{}) (out []byte, err error) {
	defer handleErr(&err)
	e := newEncoder()
	defer e.destroy()
	e.marshalDoc("", reflect.ValueOf(in))
	e.finish()
	out = e.out
	return
}

// An Encoder writes YAML values to an output stream.
type Encoder struct {
	encoder *encoder
}

// NewEncoder returns a new encoder that writes to w.
// The Encoder should be closed after use to flush all data
// to w.
func NewEncoder(w io.Writer) *Encoder {
	return &Encoder{
		encoder: newEncoderWithWriter(w),
	}
}

// Encode writes the YAML encoding of v to the stream.
// If multiple items are encoded to the stream, the
// second and subsequent document will be preceded
// with a "---" document separator, but the first will not.
//
// See the documentation for Marshal for details about the conversion of Go
// values to YAML.
func (e *Encoder) Encode(v interface{}) (err error) {
	defer handleErr(&err)
	e.encoder.marshalDoc("", reflect.ValueOf(v))
	return nil
}

// Close closes the encoder by writing any remaining data.
// It does not write a stream terminating string "...".
func (e *Encoder) Close() (err error) {
	defer handleErr(&err)
	e.encoder.finish()
	return nil
}

func handleErr(err *error) {
	if v := recover(); v != nil {
		if e, ok := v.(yamlError); ok {
			*err = e.err
		} else {
			panic(v)
		}
	}
}

type yamlError struct {
	err error
}

func fail(err error) {
	panic(yamlError{err})
}

func failf(format string, args ...interface{}) {
	panic(yamlError{fmt.Errorf("yaml: "+format, args...)})
}

// A TypeError is returned by Unmarshal when one or more fields in
// the YAML document cannot be properly decoded into the requested
// types. When this error is returned, the value is still
// unmarshaled partially.
type TypeError struct {
	Errors []string
}

func (e *TypeError) Error() string {
	return fmt.Sprintf("yaml: unmarshal errors:\n  %s", strings.Join(e.Errors, "\n  "))
}

// --------------------------------------------------------------------------
// Maintain a mapping of keys to structure field indexes

// The code in this section was copied from mgo/bson.

// structInfo holds details for the serialization of fields of
// a given struct.
type structInfo struct {
	FieldsMap  map[string]fieldInfo
	FieldsList []fieldInfo

	// InlineMap is the number of the field in the struct that
	// contains an ,inline map, or -1 if there's none.
	InlineMap int
}

type fieldInfo struct {
	Key       string
	Num       int
	OmitEmpty bool
	Flow      bool
	// Id holds the unique field identifier, so we can cheaply
	// check for field duplicates without maintaining an extra map.
	Id int

	// Inline holds the field index if the field is part of an inlined struct.
	Inline []int
}

var structMap = make(map[reflect.Type]*structInfo)
var fieldMapMutex sync.RWMutex

func getStructInfo(st reflect.Type) (*structInfo, error) {
	fieldMapMutex.RLock()
	sinfo, found := structMap[st]
	fieldMapMutex.RUnlock()
	if found {
		return sinfo, nil
	}

	n := st.NumField()
	fieldsMap := make(map[string]fieldInfo)
	fieldsList := make([]fieldInfo, 0, n)
	inlineMap := -1
	for i := 0; i != n; i++ {
		field := st.Field(i)
		if field.PkgPath != "" && !field.Anonymous {
			continue // Private field
		}

		info := fieldInfo{Num: i}

		tag := field.Tag.Get("yaml")
		if tag == "" && strings.Index(string(field.Tag), ":") < 0 {
			tag = string(field.Tag)
		}
		if tag == "-" {
			continue
		}

		inline := false
		fields := strings.Split(tag, ",")
		if len(fields) > 1 {
			for _, flag := range fields[1:] {
				switch flag {
				case "omitempty":
					info.OmitEmpty = true
				case "flow":
					info.Flow = true
				case "inline":
					inline = true
				default:
					return nil, errors.New(fmt.Sprintf("Unsupported flag %q in tag %q of type %s", flag, tag, st))
				}
			}
			tag = fields[0]
		}

		if inline {
			switch field.Type.Kind() {
			case reflect.Map:
				if inlineMap >= 0 {
					return nil, errors.New("Multiple ,inline maps in struct " + st.String())
				}
				if field.Type.Key() != reflect.TypeOf("") {
					return nil, errors.New("Option ,inline needs a map with string keys in struct " + st.String())
				}
				inlineMap = info.Num
			case reflect.Struct:
				sinfo, err := getStructInfo(field.Type)
				if err != nil {
					return nil, err
				}
				for _, finfo := range sinfo.FieldsList {
					if _, found := fieldsMap[finfo.Key]; found {
						msg := "Duplicated key '" + finfo.Key + "' in struct " + st.String()
						return nil, errors.New(msg)
					}
					if finfo.Inline == nil {
						finfo.Inline = []int{i, finfo.Num}
					} else {
						finfo.Inline = append([]int{i}, finfo.Inline...)
					}
					finfo.Id = len(fieldsList)
					fieldsMap[finfo.Key] = finfo
					fieldsList = append(fieldsList, finfo)
				}
			default:
				//return nil, errors.New("Option ,inline needs a struct value or map field")
				return nil, errors.New("Option ,inline needs a struct value field")
			}
			continue
		}

		if tag != "" {
			info.Key = tag
		} else {
			info.Key = strings.ToLower(field.Name)
		}

		if _, found = fieldsMap[info.Key]; found {
			msg := "Duplicated key '" + info.Key + "' in struct " + st.String()
			return nil, errors.New(msg)
		}

		info.Id = len(fieldsList)
		fieldsList = append(fieldsList, info)
		fieldsMap[info.Key] = info
	}

	sinfo = &structInfo{
		FieldsMap:  fieldsMap,
		FieldsList: fieldsList,
		InlineMap:  inlineMap,
	}

	fieldMapMutex.Lock()
	structMap[st] = sinfo
	fieldMapMutex.Unlock()
	return sinfo, nil
}

// IsZeroer is used to check whether an object is zero to
// determine whether it should be omitted when marshaling
// with the omitempty flag. One notable implementation
// is time.Time.
type IsZeroer interface {
	IsZero() bool
}

func isZero(v reflect.Value) bool {
	kind := v.Kind()
	if z, ok := v.Interface().(IsZeroer); ok {
		if (kind == reflect.Ptr || kind == reflect.Interface) && v.IsNil() {
			return true
		}
		return z.IsZero()
	}
	switch kind {
	case reflect.String:
		return len(v.String()) == 0
	case reflect.Interface, reflect.Ptr:
		return v.IsNil()
	case reflect.Slice:
		return v.Len() == 0
	case reflect.Map:
		return v.Len() == 0
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return v.Int() == 0
	case reflect.Float32, reflect.Float64:
		return v.Float() == 0
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
		return v.Uint() == 0
	case reflect.Bool:
		return !v.Bool()
	case reflect.Struct:
		vt := v.Type()
		for i := v.NumField() - 1; i >= 0; i-- {
			if vt.Field(i).PkgPath != "" {
				continue // Private field
			}
			if !isZero(v.Field(i)) {
				return false
			}
		}
		return true
	}
	return false
}
738 vendor/gopkg.in/yaml.v2/yamlh.go (generated, vendored, new file)
@@ -0,0 +1,738 @@
package yaml
|
||||||
|
|
||||||
|
import (
|
||||||
|
"fmt"
|
||||||
|
"io"
|
||||||
|
)
|
||||||
|
|
||||||
|
// The version directive data.
|
||||||
|
type yaml_version_directive_t struct {
|
||||||
|
major int8 // The major version number.
|
||||||
|
minor int8 // The minor version number.
|
||||||
|
}
|
||||||
|
|
||||||
|
// The tag directive data.
|
||||||
|
type yaml_tag_directive_t struct {
|
||||||
|
handle []byte // The tag handle.
|
||||||
|
prefix []byte // The tag prefix.
|
||||||
|
}
|
||||||
|
|
||||||
|
type yaml_encoding_t int
|
||||||
|
|
||||||
|
// The stream encoding.
|
||||||
|
const (
|
||||||
|
// Let the parser choose the encoding.
|
||||||
|
yaml_ANY_ENCODING yaml_encoding_t = iota
|
||||||
|
|
||||||
|
yaml_UTF8_ENCODING // The default UTF-8 encoding.
|
||||||
|
yaml_UTF16LE_ENCODING // The UTF-16-LE encoding with BOM.
|
||||||
|
yaml_UTF16BE_ENCODING // The UTF-16-BE encoding with BOM.
|
||||||
|
)
|
||||||
|
|
||||||
|
type yaml_break_t int
|
||||||
|
|
||||||
|
// Line break types.
|
||||||
|
const (
|
||||||
|
// Let the parser choose the break type.
|
||||||
|
yaml_ANY_BREAK yaml_break_t = iota
|
||||||
|
|
||||||
|
yaml_CR_BREAK // Use CR for line breaks (Mac style).
|
||||||
|
yaml_LN_BREAK // Use LN for line breaks (Unix style).
|
||||||
|
yaml_CRLN_BREAK // Use CR LN for line breaks (DOS style).
|
||||||
|
)
|
||||||
|
|
||||||
|
type yaml_error_type_t int
|
||||||
|
|
||||||
|
// Many bad things could happen with the parser and emitter.
|
||||||
|
const (
|
||||||
|
// No error is produced.
|
||||||
|
yaml_NO_ERROR yaml_error_type_t = iota
|
||||||
|
|
||||||
|
yaml_MEMORY_ERROR // Cannot allocate or reallocate a block of memory.
|
||||||
|
yaml_READER_ERROR // Cannot read or decode the input stream.
|
||||||
|
yaml_SCANNER_ERROR // Cannot scan the input stream.
|
||||||
|
yaml_PARSER_ERROR // Cannot parse the input stream.
|
||||||
|
yaml_COMPOSER_ERROR // Cannot compose a YAML document.
|
||||||
|
yaml_WRITER_ERROR // Cannot write to the output stream.
|
||||||
|
yaml_EMITTER_ERROR // Cannot emit a YAML stream.
|
||||||
|
)
|
||||||
|
|
||||||
|
// The pointer position.
|
||||||
|
type yaml_mark_t struct {
|
||||||
|
index int // The position index.
|
||||||
|
line int // The position line.
|
||||||
|
column int // The position column.
|
||||||
|
}
|
||||||
|
|
||||||
|
// Node Styles
|
||||||
|
|
||||||
|
type yaml_style_t int8
|
||||||
|
|
||||||
|
type yaml_scalar_style_t yaml_style_t
|
||||||
|
|
||||||
|
// Scalar styles.
|
||||||
|
const (
|
||||||
|
// Let the emitter choose the style.
|
||||||
|
yaml_ANY_SCALAR_STYLE yaml_scalar_style_t = iota
|
||||||
|
|
||||||
|
yaml_PLAIN_SCALAR_STYLE // The plain scalar style.
|
||||||
|
yaml_SINGLE_QUOTED_SCALAR_STYLE // The single-quoted scalar style.
|
||||||
|
yaml_DOUBLE_QUOTED_SCALAR_STYLE // The double-quoted scalar style.
|
||||||
|
yaml_LITERAL_SCALAR_STYLE // The literal scalar style.
|
||||||
|
yaml_FOLDED_SCALAR_STYLE // The folded scalar style.
|
||||||
|
)
|
||||||
|
|
||||||
|
type yaml_sequence_style_t yaml_style_t
|
||||||
|
|
||||||
|
// Sequence styles.
|
||||||
|
const (
|
||||||
|
// Let the emitter choose the style.
|
||||||
|
yaml_ANY_SEQUENCE_STYLE yaml_sequence_style_t = iota
|
||||||
|
|
||||||
|
yaml_BLOCK_SEQUENCE_STYLE // The block sequence style.
|
||||||
|
yaml_FLOW_SEQUENCE_STYLE // The flow sequence style.
|
||||||
|
)
|
||||||
|
|
||||||
|
type yaml_mapping_style_t yaml_style_t
|
||||||
|
|
||||||
|
// Mapping styles.
|
||||||
|
const (
|
||||||
|
// Let the emitter choose the style.
|
||||||
|
yaml_ANY_MAPPING_STYLE yaml_mapping_style_t = iota
|
||||||
|
|
||||||
|
yaml_BLOCK_MAPPING_STYLE // The block mapping style.
|
||||||
|
yaml_FLOW_MAPPING_STYLE // The flow mapping style.
|
||||||
|
)
|
||||||
|
|
||||||
|
// Tokens

type yaml_token_type_t int

// Token types.
const (
	// An empty token.
	yaml_NO_TOKEN yaml_token_type_t = iota

	yaml_STREAM_START_TOKEN // A STREAM-START token.
	yaml_STREAM_END_TOKEN   // A STREAM-END token.

	yaml_VERSION_DIRECTIVE_TOKEN // A VERSION-DIRECTIVE token.
	yaml_TAG_DIRECTIVE_TOKEN     // A TAG-DIRECTIVE token.
	yaml_DOCUMENT_START_TOKEN    // A DOCUMENT-START token.
	yaml_DOCUMENT_END_TOKEN      // A DOCUMENT-END token.

	yaml_BLOCK_SEQUENCE_START_TOKEN // A BLOCK-SEQUENCE-START token.
	yaml_BLOCK_MAPPING_START_TOKEN  // A BLOCK-MAPPING-START token.
	yaml_BLOCK_END_TOKEN            // A BLOCK-END token.

	yaml_FLOW_SEQUENCE_START_TOKEN // A FLOW-SEQUENCE-START token.
	yaml_FLOW_SEQUENCE_END_TOKEN   // A FLOW-SEQUENCE-END token.
	yaml_FLOW_MAPPING_START_TOKEN  // A FLOW-MAPPING-START token.
	yaml_FLOW_MAPPING_END_TOKEN    // A FLOW-MAPPING-END token.

	yaml_BLOCK_ENTRY_TOKEN // A BLOCK-ENTRY token.
	yaml_FLOW_ENTRY_TOKEN  // A FLOW-ENTRY token.
	yaml_KEY_TOKEN         // A KEY token.
	yaml_VALUE_TOKEN       // A VALUE token.

	yaml_ALIAS_TOKEN  // An ALIAS token.
	yaml_ANCHOR_TOKEN // An ANCHOR token.
	yaml_TAG_TOKEN    // A TAG token.
	yaml_SCALAR_TOKEN // A SCALAR token.
)
func (tt yaml_token_type_t) String() string {
	switch tt {
	case yaml_NO_TOKEN:
		return "yaml_NO_TOKEN"
	case yaml_STREAM_START_TOKEN:
		return "yaml_STREAM_START_TOKEN"
	case yaml_STREAM_END_TOKEN:
		return "yaml_STREAM_END_TOKEN"
	case yaml_VERSION_DIRECTIVE_TOKEN:
		return "yaml_VERSION_DIRECTIVE_TOKEN"
	case yaml_TAG_DIRECTIVE_TOKEN:
		return "yaml_TAG_DIRECTIVE_TOKEN"
	case yaml_DOCUMENT_START_TOKEN:
		return "yaml_DOCUMENT_START_TOKEN"
	case yaml_DOCUMENT_END_TOKEN:
		return "yaml_DOCUMENT_END_TOKEN"
	case yaml_BLOCK_SEQUENCE_START_TOKEN:
		return "yaml_BLOCK_SEQUENCE_START_TOKEN"
	case yaml_BLOCK_MAPPING_START_TOKEN:
		return "yaml_BLOCK_MAPPING_START_TOKEN"
	case yaml_BLOCK_END_TOKEN:
		return "yaml_BLOCK_END_TOKEN"
	case yaml_FLOW_SEQUENCE_START_TOKEN:
		return "yaml_FLOW_SEQUENCE_START_TOKEN"
	case yaml_FLOW_SEQUENCE_END_TOKEN:
		return "yaml_FLOW_SEQUENCE_END_TOKEN"
	case yaml_FLOW_MAPPING_START_TOKEN:
		return "yaml_FLOW_MAPPING_START_TOKEN"
	case yaml_FLOW_MAPPING_END_TOKEN:
		return "yaml_FLOW_MAPPING_END_TOKEN"
	case yaml_BLOCK_ENTRY_TOKEN:
		return "yaml_BLOCK_ENTRY_TOKEN"
	case yaml_FLOW_ENTRY_TOKEN:
		return "yaml_FLOW_ENTRY_TOKEN"
	case yaml_KEY_TOKEN:
		return "yaml_KEY_TOKEN"
	case yaml_VALUE_TOKEN:
		return "yaml_VALUE_TOKEN"
	case yaml_ALIAS_TOKEN:
		return "yaml_ALIAS_TOKEN"
	case yaml_ANCHOR_TOKEN:
		return "yaml_ANCHOR_TOKEN"
	case yaml_TAG_TOKEN:
		return "yaml_TAG_TOKEN"
	case yaml_SCALAR_TOKEN:
		return "yaml_SCALAR_TOKEN"
	}
	return "<unknown token>"
}
// The token structure.
type yaml_token_t struct {
	// The token type.
	typ yaml_token_type_t

	// The start/end of the token.
	start_mark, end_mark yaml_mark_t

	// The stream encoding (for yaml_STREAM_START_TOKEN).
	encoding yaml_encoding_t

	// The alias/anchor/scalar value or tag/tag directive handle
	// (for yaml_ALIAS_TOKEN, yaml_ANCHOR_TOKEN, yaml_SCALAR_TOKEN, yaml_TAG_TOKEN, yaml_TAG_DIRECTIVE_TOKEN).
	value []byte

	// The tag suffix (for yaml_TAG_TOKEN).
	suffix []byte

	// The tag directive prefix (for yaml_TAG_DIRECTIVE_TOKEN).
	prefix []byte

	// The scalar style (for yaml_SCALAR_TOKEN).
	style yaml_scalar_style_t

	// The version directive major/minor (for yaml_VERSION_DIRECTIVE_TOKEN).
	major, minor int8
}
// Events

type yaml_event_type_t int8

// Event types.
const (
	// An empty event.
	yaml_NO_EVENT yaml_event_type_t = iota

	yaml_STREAM_START_EVENT   // A STREAM-START event.
	yaml_STREAM_END_EVENT     // A STREAM-END event.
	yaml_DOCUMENT_START_EVENT // A DOCUMENT-START event.
	yaml_DOCUMENT_END_EVENT   // A DOCUMENT-END event.
	yaml_ALIAS_EVENT          // An ALIAS event.
	yaml_SCALAR_EVENT         // A SCALAR event.
	yaml_SEQUENCE_START_EVENT // A SEQUENCE-START event.
	yaml_SEQUENCE_END_EVENT   // A SEQUENCE-END event.
	yaml_MAPPING_START_EVENT  // A MAPPING-START event.
	yaml_MAPPING_END_EVENT    // A MAPPING-END event.
)
var eventStrings = []string{
	yaml_NO_EVENT:             "none",
	yaml_STREAM_START_EVENT:   "stream start",
	yaml_STREAM_END_EVENT:     "stream end",
	yaml_DOCUMENT_START_EVENT: "document start",
	yaml_DOCUMENT_END_EVENT:   "document end",
	yaml_ALIAS_EVENT:          "alias",
	yaml_SCALAR_EVENT:         "scalar",
	yaml_SEQUENCE_START_EVENT: "sequence start",
	yaml_SEQUENCE_END_EVENT:   "sequence end",
	yaml_MAPPING_START_EVENT:  "mapping start",
	yaml_MAPPING_END_EVENT:    "mapping end",
}
func (e yaml_event_type_t) String() string {
	if e < 0 || int(e) >= len(eventStrings) {
		return fmt.Sprintf("unknown event %d", e)
	}
	return eventStrings[e]
}
// The event structure.
type yaml_event_t struct {

	// The event type.
	typ yaml_event_type_t

	// The start and end of the event.
	start_mark, end_mark yaml_mark_t

	// The document encoding (for yaml_STREAM_START_EVENT).
	encoding yaml_encoding_t

	// The version directive (for yaml_DOCUMENT_START_EVENT).
	version_directive *yaml_version_directive_t

	// The list of tag directives (for yaml_DOCUMENT_START_EVENT).
	tag_directives []yaml_tag_directive_t

	// The anchor (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT, yaml_ALIAS_EVENT).
	anchor []byte

	// The tag (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT).
	tag []byte

	// The scalar value (for yaml_SCALAR_EVENT).
	value []byte

	// Is the document start/end indicator implicit, or the tag optional?
	// (for yaml_DOCUMENT_START_EVENT, yaml_DOCUMENT_END_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT, yaml_SCALAR_EVENT).
	implicit bool

	// Is the tag optional for any non-plain style? (for yaml_SCALAR_EVENT).
	quoted_implicit bool

	// The style (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT).
	style yaml_style_t
}

func (e *yaml_event_t) scalar_style() yaml_scalar_style_t     { return yaml_scalar_style_t(e.style) }
func (e *yaml_event_t) sequence_style() yaml_sequence_style_t { return yaml_sequence_style_t(e.style) }
func (e *yaml_event_t) mapping_style() yaml_mapping_style_t   { return yaml_mapping_style_t(e.style) }
// Nodes

const (
	yaml_NULL_TAG      = "tag:yaml.org,2002:null"      // The tag !!null with the only possible value: null.
	yaml_BOOL_TAG      = "tag:yaml.org,2002:bool"      // The tag !!bool with the values: true and false.
	yaml_STR_TAG       = "tag:yaml.org,2002:str"       // The tag !!str for string values.
	yaml_INT_TAG       = "tag:yaml.org,2002:int"       // The tag !!int for integer values.
	yaml_FLOAT_TAG     = "tag:yaml.org,2002:float"     // The tag !!float for float values.
	yaml_TIMESTAMP_TAG = "tag:yaml.org,2002:timestamp" // The tag !!timestamp for date and time values.

	yaml_SEQ_TAG = "tag:yaml.org,2002:seq" // The tag !!seq is used to denote sequences.
	yaml_MAP_TAG = "tag:yaml.org,2002:map" // The tag !!map is used to denote mappings.

	// Not in original libyaml.
	yaml_BINARY_TAG = "tag:yaml.org,2002:binary"
	yaml_MERGE_TAG  = "tag:yaml.org,2002:merge"

	yaml_DEFAULT_SCALAR_TAG   = yaml_STR_TAG // The default scalar tag is !!str.
	yaml_DEFAULT_SEQUENCE_TAG = yaml_SEQ_TAG // The default sequence tag is !!seq.
	yaml_DEFAULT_MAPPING_TAG  = yaml_MAP_TAG // The default mapping tag is !!map.
)
type yaml_node_type_t int

// Node types.
const (
	// An empty node.
	yaml_NO_NODE yaml_node_type_t = iota

	yaml_SCALAR_NODE   // A scalar node.
	yaml_SEQUENCE_NODE // A sequence node.
	yaml_MAPPING_NODE  // A mapping node.
)

// An element of a sequence node.
type yaml_node_item_t int

// An element of a mapping node.
type yaml_node_pair_t struct {
	key   int // The key of the element.
	value int // The value of the element.
}
// The node structure.
type yaml_node_t struct {
	typ yaml_node_type_t // The node type.
	tag []byte           // The node tag.

	// The node data.

	// The scalar parameters (for yaml_SCALAR_NODE).
	scalar struct {
		value  []byte              // The scalar value.
		length int                 // The length of the scalar value.
		style  yaml_scalar_style_t // The scalar style.
	}

	// The sequence parameters (for yaml_SEQUENCE_NODE).
	sequence struct {
		items_data []yaml_node_item_t    // The stack of sequence items.
		style      yaml_sequence_style_t // The sequence style.
	}

	// The mapping parameters (for yaml_MAPPING_NODE).
	mapping struct {
		pairs_data  []yaml_node_pair_t   // The stack of mapping pairs (key, value).
		pairs_start *yaml_node_pair_t    // The beginning of the stack.
		pairs_end   *yaml_node_pair_t    // The end of the stack.
		pairs_top   *yaml_node_pair_t    // The top of the stack.
		style       yaml_mapping_style_t // The mapping style.
	}

	start_mark yaml_mark_t // The beginning of the node.
	end_mark   yaml_mark_t // The end of the node.
}
// The document structure.
type yaml_document_t struct {

	// The document nodes.
	nodes []yaml_node_t

	// The version directive.
	version_directive *yaml_version_directive_t

	// The list of tag directives.
	tag_directives_data  []yaml_tag_directive_t
	tag_directives_start int // The beginning of the tag directives list.
	tag_directives_end   int // The end of the tag directives list.

	start_implicit int // Is the document start indicator implicit?
	end_implicit   int // Is the document end indicator implicit?

	// The start/end of the document.
	start_mark, end_mark yaml_mark_t
}
// The prototype of a read handler.
//
// The read handler is called when the parser needs to read more bytes from the
// source. The handler should write not more than size bytes to the buffer.
// The number of written bytes should be set to the size_read variable.
//
// [in,out] data      A pointer to an application data specified by
//                    yaml_parser_set_input().
// [out]    buffer    The buffer to write the data from the source.
// [in]     size      The size of the buffer.
// [out]    size_read The actual number of bytes read from the source.
//
// On success, the handler should return 1. If the handler failed,
// the returned value should be 0. On EOF, the handler should set the
// size_read to 0 and return 1.
type yaml_read_handler_t func(parser *yaml_parser_t, buffer []byte) (n int, err error)
// This structure holds information about a potential simple key.
type yaml_simple_key_t struct {
	possible     bool        // Is a simple key possible?
	required     bool        // Is a simple key required?
	token_number int         // The number of the token.
	mark         yaml_mark_t // The position mark.
}
// The states of the parser.
type yaml_parser_state_t int

const (
	yaml_PARSE_STREAM_START_STATE yaml_parser_state_t = iota

	yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE           // Expect the beginning of an implicit document.
	yaml_PARSE_DOCUMENT_START_STATE                    // Expect DOCUMENT-START.
	yaml_PARSE_DOCUMENT_CONTENT_STATE                  // Expect the content of a document.
	yaml_PARSE_DOCUMENT_END_STATE                      // Expect DOCUMENT-END.
	yaml_PARSE_BLOCK_NODE_STATE                        // Expect a block node.
	yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE // Expect a block node or indentless sequence.
	yaml_PARSE_FLOW_NODE_STATE                         // Expect a flow node.
	yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE        // Expect the first entry of a block sequence.
	yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE              // Expect an entry of a block sequence.
	yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE         // Expect an entry of an indentless sequence.
	yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE           // Expect the first key of a block mapping.
	yaml_PARSE_BLOCK_MAPPING_KEY_STATE                 // Expect a block mapping key.
	yaml_PARSE_BLOCK_MAPPING_VALUE_STATE               // Expect a block mapping value.
	yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE         // Expect the first entry of a flow sequence.
	yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE               // Expect an entry of a flow sequence.
	yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE   // Expect a key of an ordered mapping.
	yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE // Expect a value of an ordered mapping.
	yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE   // Expect the end of an ordered mapping entry.
	yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE            // Expect the first key of a flow mapping.
	yaml_PARSE_FLOW_MAPPING_KEY_STATE                  // Expect a key of a flow mapping.
	yaml_PARSE_FLOW_MAPPING_VALUE_STATE                // Expect a value of a flow mapping.
	yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE          // Expect an empty value of a flow mapping.
	yaml_PARSE_END_STATE                               // Expect nothing.
)
func (ps yaml_parser_state_t) String() string {
	switch ps {
	case yaml_PARSE_STREAM_START_STATE:
		return "yaml_PARSE_STREAM_START_STATE"
	case yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE:
		return "yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE"
	case yaml_PARSE_DOCUMENT_START_STATE:
		return "yaml_PARSE_DOCUMENT_START_STATE"
	case yaml_PARSE_DOCUMENT_CONTENT_STATE:
		return "yaml_PARSE_DOCUMENT_CONTENT_STATE"
	case yaml_PARSE_DOCUMENT_END_STATE:
		return "yaml_PARSE_DOCUMENT_END_STATE"
	case yaml_PARSE_BLOCK_NODE_STATE:
		return "yaml_PARSE_BLOCK_NODE_STATE"
	case yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE:
		return "yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE"
	case yaml_PARSE_FLOW_NODE_STATE:
		return "yaml_PARSE_FLOW_NODE_STATE"
	case yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE:
		return "yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE"
	case yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE:
		return "yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE"
	case yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE:
		return "yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE"
	case yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE:
		return "yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE"
	case yaml_PARSE_BLOCK_MAPPING_KEY_STATE:
		return "yaml_PARSE_BLOCK_MAPPING_KEY_STATE"
	case yaml_PARSE_BLOCK_MAPPING_VALUE_STATE:
		return "yaml_PARSE_BLOCK_MAPPING_VALUE_STATE"
	case yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE:
		return "yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE"
	case yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE:
		return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE"
	case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE:
		return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE"
	case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE:
		return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE"
	case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE:
		return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE"
	case yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE:
		return "yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE"
	case yaml_PARSE_FLOW_MAPPING_KEY_STATE:
		return "yaml_PARSE_FLOW_MAPPING_KEY_STATE"
	case yaml_PARSE_FLOW_MAPPING_VALUE_STATE:
		return "yaml_PARSE_FLOW_MAPPING_VALUE_STATE"
	case yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE:
		return "yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE"
	case yaml_PARSE_END_STATE:
		return "yaml_PARSE_END_STATE"
	}
	return "<unknown parser state>"
}
// This structure holds aliases data.
type yaml_alias_data_t struct {
	anchor []byte      // The anchor.
	index  int         // The node id.
	mark   yaml_mark_t // The anchor mark.
}
// The parser structure.
//
// All members are internal. Manage the structure using the
// yaml_parser_ family of functions.
type yaml_parser_t struct {

	// Error handling

	error yaml_error_type_t // Error type.

	problem string // Error description.

	// The byte about which the problem occurred.
	problem_offset int
	problem_value  int
	problem_mark   yaml_mark_t

	// The error context.
	context      string
	context_mark yaml_mark_t

	// Reader stuff

	read_handler yaml_read_handler_t // Read handler.

	input_reader io.Reader // File input data.
	input        []byte    // String input data.
	input_pos    int

	eof bool // EOF flag

	buffer     []byte // The working buffer.
	buffer_pos int    // The current position of the buffer.

	unread int // The number of unread characters in the buffer.

	raw_buffer     []byte // The raw buffer.
	raw_buffer_pos int    // The current position of the buffer.

	encoding yaml_encoding_t // The input encoding.

	offset int         // The offset of the current position (in bytes).
	mark   yaml_mark_t // The mark of the current position.

	// Scanner stuff

	stream_start_produced bool // Have we started to scan the input stream?
	stream_end_produced   bool // Have we reached the end of the input stream?

	flow_level int // The number of unclosed '[' and '{' indicators.

	tokens          []yaml_token_t // The tokens queue.
	tokens_head     int            // The head of the tokens queue.
	tokens_parsed   int            // The number of tokens fetched from the queue.
	token_available bool           // Does the tokens queue contain a token ready for dequeueing?

	indent  int   // The current indentation level.
	indents []int // The indentation levels stack.

	simple_key_allowed bool                // May a simple key occur at the current position?
	simple_keys        []yaml_simple_key_t // The stack of simple keys.

	// Parser stuff

	state          yaml_parser_state_t    // The current parser state.
	states         []yaml_parser_state_t  // The parser states stack.
	marks          []yaml_mark_t          // The stack of marks.
	tag_directives []yaml_tag_directive_t // The list of TAG directives.

	// Dumper stuff

	aliases []yaml_alias_data_t // The alias data.

	document *yaml_document_t // The currently parsed document.
}
// Emitter Definitions

// The prototype of a write handler.
//
// The write handler is called when the emitter needs to flush the accumulated
// characters to the output. The handler should write @a size bytes of the
// @a buffer to the output.
//
// @param[in,out] data   A pointer to an application data specified by
//                       yaml_emitter_set_output().
// @param[in]     buffer The buffer with bytes to be written.
// @param[in]     size   The size of the buffer.
//
// @returns On success, the handler should return @c 1. If the handler failed,
// the returned value should be @c 0.
//
type yaml_write_handler_t func(emitter *yaml_emitter_t, buffer []byte) error
type yaml_emitter_state_t int

// The emitter states.
const (
	// Expect STREAM-START.
	yaml_EMIT_STREAM_START_STATE yaml_emitter_state_t = iota

	yaml_EMIT_FIRST_DOCUMENT_START_STATE       // Expect the first DOCUMENT-START or STREAM-END.
	yaml_EMIT_DOCUMENT_START_STATE             // Expect DOCUMENT-START or STREAM-END.
	yaml_EMIT_DOCUMENT_CONTENT_STATE           // Expect the content of a document.
	yaml_EMIT_DOCUMENT_END_STATE               // Expect DOCUMENT-END.
	yaml_EMIT_FLOW_SEQUENCE_FIRST_ITEM_STATE   // Expect the first item of a flow sequence.
	yaml_EMIT_FLOW_SEQUENCE_ITEM_STATE         // Expect an item of a flow sequence.
	yaml_EMIT_FLOW_MAPPING_FIRST_KEY_STATE     // Expect the first key of a flow mapping.
	yaml_EMIT_FLOW_MAPPING_KEY_STATE           // Expect a key of a flow mapping.
	yaml_EMIT_FLOW_MAPPING_SIMPLE_VALUE_STATE  // Expect a value for a simple key of a flow mapping.
	yaml_EMIT_FLOW_MAPPING_VALUE_STATE         // Expect a value of a flow mapping.
	yaml_EMIT_BLOCK_SEQUENCE_FIRST_ITEM_STATE  // Expect the first item of a block sequence.
	yaml_EMIT_BLOCK_SEQUENCE_ITEM_STATE        // Expect an item of a block sequence.
	yaml_EMIT_BLOCK_MAPPING_FIRST_KEY_STATE    // Expect the first key of a block mapping.
	yaml_EMIT_BLOCK_MAPPING_KEY_STATE          // Expect the key of a block mapping.
	yaml_EMIT_BLOCK_MAPPING_SIMPLE_VALUE_STATE // Expect a value for a simple key of a block mapping.
	yaml_EMIT_BLOCK_MAPPING_VALUE_STATE        // Expect a value of a block mapping.
	yaml_EMIT_END_STATE                        // Expect nothing.
)
// The emitter structure.
//
// All members are internal. Manage the structure using the @c yaml_emitter_
// family of functions.
type yaml_emitter_t struct {

	// Error handling

	error   yaml_error_type_t // Error type.
	problem string            // Error description.

	// Writer stuff

	write_handler yaml_write_handler_t // Write handler.

	output_buffer *[]byte   // String output data.
	output_writer io.Writer // File output data.

	buffer     []byte // The working buffer.
	buffer_pos int    // The current position of the buffer.

	raw_buffer     []byte // The raw buffer.
	raw_buffer_pos int    // The current position of the buffer.

	encoding yaml_encoding_t // The stream encoding.

	// Emitter stuff

	canonical   bool         // Is the output in the canonical style?
	best_indent int          // The number of indentation spaces.
	best_width  int          // The preferred width of the output lines.
	unicode     bool         // Allow unescaped non-ASCII characters?
	line_break  yaml_break_t // The preferred line break.

	state  yaml_emitter_state_t   // The current emitter state.
	states []yaml_emitter_state_t // The stack of states.

	events      []yaml_event_t // The event queue.
	events_head int            // The head of the event queue.

	indents []int // The stack of indentation levels.

	tag_directives []yaml_tag_directive_t // The list of tag directives.

	indent int // The current indentation level.

	flow_level int // The current flow level.

	root_context       bool // Is it the document root context?
	sequence_context   bool // Is it a sequence context?
	mapping_context    bool // Is it a mapping context?
	simple_key_context bool // Is it a simple mapping key context?

	line       int  // The current line.
	column     int  // The current column.
	whitespace bool // Was the last character a whitespace?
	indention  bool // Was the last character an indentation character (' ', '-', '?', ':')?
	open_ended bool // Is an explicit document end required?

	// Anchor analysis.
	anchor_data struct {
		anchor []byte // The anchor value.
		alias  bool   // Is it an alias?
	}

	// Tag analysis.
	tag_data struct {
		handle []byte // The tag handle.
		suffix []byte // The tag suffix.
	}

	// Scalar analysis.
	scalar_data struct {
		value                 []byte              // The scalar value.
		multiline             bool                // Does the scalar contain line breaks?
		flow_plain_allowed    bool                // Can the scalar be expressed in the flow plain style?
		block_plain_allowed   bool                // Can the scalar be expressed in the block plain style?
		single_quoted_allowed bool                // Can the scalar be expressed in the single quoted style?
		block_allowed         bool                // Can the scalar be expressed in the literal or folded styles?
		style                 yaml_scalar_style_t // The output style.
	}

	// Dumper stuff

	opened bool // Was the stream already opened?
	closed bool // Was the stream already closed?

	// The information associated with the document nodes.
	anchors *struct {
		references int  // The number of references.
		anchor     int  // The anchor id.
		serialized bool // Has the node been emitted?
	}

	last_anchor_id int // The last assigned anchor id.

	document *yaml_document_t // The currently emitted document.
}
vendor/gopkg.in/yaml.v2/yamlprivateh.go (173 lines, generated, vendored, new file)
package yaml
const (
	// The size of the input raw buffer.
	input_raw_buffer_size = 512

	// The size of the input buffer.
	// It should be possible to decode the whole raw buffer.
	input_buffer_size = input_raw_buffer_size * 3

	// The size of the output buffer.
	output_buffer_size = 128

	// The size of the output raw buffer.
	// It should be possible to encode the whole output buffer.
	output_raw_buffer_size = (output_buffer_size*2 + 2)

	// The size of other stacks and queues.
	initial_stack_size  = 16
	initial_queue_size  = 16
	initial_string_size = 16
)
// Check if the character at the specified position is an alphabetical
// character, a digit, '_', or '-'.
func is_alpha(b []byte, i int) bool {
	return b[i] >= '0' && b[i] <= '9' || b[i] >= 'A' && b[i] <= 'Z' || b[i] >= 'a' && b[i] <= 'z' || b[i] == '_' || b[i] == '-'
}

// Check if the character at the specified position is a digit.
func is_digit(b []byte, i int) bool {
	return b[i] >= '0' && b[i] <= '9'
}

// Get the value of a digit.
func as_digit(b []byte, i int) int {
	return int(b[i]) - '0'
}

// Check if the character at the specified position is a hex-digit.
func is_hex(b []byte, i int) bool {
	return b[i] >= '0' && b[i] <= '9' || b[i] >= 'A' && b[i] <= 'F' || b[i] >= 'a' && b[i] <= 'f'
}

// Get the value of a hex-digit.
func as_hex(b []byte, i int) int {
	bi := b[i]
	if bi >= 'A' && bi <= 'F' {
		return int(bi) - 'A' + 10
	}
	if bi >= 'a' && bi <= 'f' {
		return int(bi) - 'a' + 10
	}
	return int(bi) - '0'
}
// Check if the character is ASCII.
func is_ascii(b []byte, i int) bool {
	return b[i] <= 0x7F
}

// Check if the character at the start of the buffer can be printed unescaped.
func is_printable(b []byte, i int) bool {
	return ((b[i] == 0x0A) || // . == #x0A
		(b[i] >= 0x20 && b[i] <= 0x7E) || // #x20 <= . <= #x7E
		(b[i] == 0xC2 && b[i+1] >= 0xA0) || // #0xA0 <= . <= #xD7FF
		(b[i] > 0xC2 && b[i] < 0xED) ||
		(b[i] == 0xED && b[i+1] < 0xA0) ||
		(b[i] == 0xEE) ||
		(b[i] == 0xEF && // #xE000 <= . <= #xFFFD
			!(b[i+1] == 0xBB && b[i+2] == 0xBF) && // && . != #xFEFF
			!(b[i+1] == 0xBF && (b[i+2] == 0xBE || b[i+2] == 0xBF))))
}
// Check if the character at the specified position is NUL.
|
||||||
|
func is_z(b []byte, i int) bool {
|
||||||
|
return b[i] == 0x00
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if the beginning of the buffer is a BOM.
|
||||||
|
func is_bom(b []byte, i int) bool {
|
||||||
|
return b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if the character at the specified position is space.
|
||||||
|
func is_space(b []byte, i int) bool {
|
||||||
|
return b[i] == ' '
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if the character at the specified position is tab.
func is_tab(b []byte, i int) bool {
	return b[i] == '\t'
}

// Check if the character at the specified position is blank (space or tab).
func is_blank(b []byte, i int) bool {
	//return is_space(b, i) || is_tab(b, i)
	return b[i] == ' ' || b[i] == '\t'
}

// Check if the character at the specified position is a line break.
func is_break(b []byte, i int) bool {
	return (b[i] == '\r' || // CR (#xD)
		b[i] == '\n' || // LF (#xA)
		b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)
		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)
		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9) // PS (#x2029)
}

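The multi-byte comparisons in `is_break` are simply the UTF-8 encodings of the non-ASCII break characters (NEL is C2 85, LS is E2 80 A8, PS is E2 80 A9). A small sketch showing the rune-level equivalent (hypothetical name `isBreakRune`) and confirming one encoding with the standard library:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// isBreakRune mirrors is_break at the rune level:
// CR, LF, NEL, LS, and PS.
func isBreakRune(r rune) bool {
	switch r {
	case '\r', '\n', 0x85, 0x2028, 0x2029:
		return true
	}
	return false
}

func main() {
	// The C2 85 byte pair tested in is_break is just NEL in UTF-8.
	buf := make([]byte, 4)
	n := utf8.EncodeRune(buf, 0x85)
	fmt.Printf("NEL = % X\n", buf[:n]) // prints NEL = C2 85
}
```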
// Check if the character at the specified position is a CR+LF pair.
func is_crlf(b []byte, i int) bool {
	return b[i] == '\r' && b[i+1] == '\n'
}

// Check if the character is a line break or NUL.
func is_breakz(b []byte, i int) bool {
	//return is_break(b, i) || is_z(b, i)
	return ( // is_break:
	b[i] == '\r' || // CR (#xD)
		b[i] == '\n' || // LF (#xA)
		b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)
		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)
		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029)
		// is_z:
		b[i] == 0)
}

// Check if the character is a line break, space, or NUL.
func is_spacez(b []byte, i int) bool {
	//return is_space(b, i) || is_breakz(b, i)
	return ( // is_space:
	b[i] == ' ' ||
		// is_breakz:
		b[i] == '\r' || // CR (#xD)
		b[i] == '\n' || // LF (#xA)
		b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)
		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)
		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029)
		b[i] == 0)
}

// Check if the character is a line break, space, tab, or NUL.
func is_blankz(b []byte, i int) bool {
	//return is_blank(b, i) || is_breakz(b, i)
	return ( // is_blank:
	b[i] == ' ' || b[i] == '\t' ||
		// is_breakz:
		b[i] == '\r' || // CR (#xD)
		b[i] == '\n' || // LF (#xA)
		b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)
		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)
		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029)
		b[i] == 0)
}

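Predicates of this shape are typically used to delimit tokens while scanning: advance until the next blank, break, or NUL. A hedged sketch of that usage, simplified to the single-byte cases (`scanWord` is a hypothetical helper, not part of the file above):

```go
package main

import "fmt"

// isBlankz reports whether b[i] is a space, tab, line break, or NUL
// (simplified here to the single-byte cases; the original also
// recognizes the multi-byte NEL/LS/PS breaks).
func isBlankz(b []byte, i int) bool {
	return b[i] == ' ' || b[i] == '\t' || b[i] == '\r' || b[i] == '\n' || b[i] == 0
}

// scanWord returns the token starting at i and the index just past it.
func scanWord(b []byte, i int) (string, int) {
	start := i
	for i < len(b) && !isBlankz(b, i) {
		i++
	}
	return string(b[start:i]), i
}

func main() {
	buf := []byte("key: value\n")
	word, next := scanWord(buf, 0)
	fmt.Println(word, next) // prints key: 4
}
```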
// Determine the width of the character.
func width(b byte) int {
	// Don't replace these by a switch without first
	// confirming that it is being inlined.
	if b&0x80 == 0x00 {
		return 1
	}
	if b&0xE0 == 0xC0 {
		return 2
	}
	if b&0xF0 == 0xE0 {
		return 3
	}
	if b&0xF8 == 0xF0 {
		return 4
	}
	return 0
}
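The masks in `width` classify a UTF-8 lead byte by its high bits: `0xxxxxxx` is a 1-byte sequence, `110xxxxx` 2 bytes, `1110xxxx` 3 bytes, and `11110xxx` 4 bytes, with 0 for continuation or invalid lead bytes. A standalone sketch cross-checking that logic against the standard library (the name `leadWidth` is mine; the if-chain is kept rather than a switch, matching the inlining note above):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// leadWidth returns the byte length of a UTF-8 sequence from its lead
// byte alone, or 0 if the byte cannot start a sequence.
func leadWidth(b byte) int {
	if b&0x80 == 0x00 {
		return 1
	}
	if b&0xE0 == 0xC0 {
		return 2
	}
	if b&0xF0 == 0xE0 {
		return 3
	}
	if b&0xF8 == 0xF0 {
		return 4
	}
	return 0
}

func main() {
	// Widths 1..4: ASCII, Latin-1 supplement, BMP, and beyond the BMP.
	for _, r := range []rune{'A', 'é', '€', '𝄞'} {
		buf := make([]byte, 4)
		n := utf8.EncodeRune(buf, r)
		fmt.Printf("%q: leadWidth=%d utf8.RuneLen=%d\n", r, leadWidth(buf[0]), n)
	}
}
```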
vendor/modules.txt (vendored, 5 changes)
@@ -54,8 +54,9 @@ github.com/opencontainers/image-spec/specs-go
 github.com/opencontainers/image-spec/specs-go/v1
 # github.com/pkg/errors v0.8.1
 github.com/pkg/errors
-# github.com/sirupsen/logrus v1.4.2
+# github.com/sirupsen/logrus v1.5.0
 github.com/sirupsen/logrus
+github.com/sirupsen/logrus/hooks/writer
 # github.com/urfave/cli v1.20.0
 github.com/urfave/cli
 # golang.org/x/net v0.0.0-20190403144856-b630fd6fe46b
@@ -72,3 +73,5 @@ google.golang.org/grpc/connectivity
 google.golang.org/grpc/grpclog
 google.golang.org/grpc/internal
 google.golang.org/grpc/status
+# gopkg.in/yaml.v2 v2.2.7
+gopkg.in/yaml.v2