diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md
deleted file mode 100644
index a9771687d..000000000
--- a/.github/ISSUE_TEMPLATE.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
-**Is this a BUG REPORT or FEATURE REQUEST?**:
-
-> Uncomment only one, leave it on its own line:
->
-> /kind bug
-> /kind feature
-
-
-**What happened**:
-
-**What you expected to happen**:
-
-**How to reproduce it (as minimally and precisely as possible)**:
-
-
-**Anything else we need to know?**:
-
-**Environment**:
-- vsphere-cloud-controller-manager version:
-- OS (e.g. from /etc/os-release):
-- Kernel (e.g. `uname -a`):
-- Install tools:
-- Others:
diff --git a/.github/ISSUE_TEMPLATE/bug-report.yaml b/.github/ISSUE_TEMPLATE/bug-report.yaml
new file mode 100644
index 000000000..512acf42f
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/bug-report.yaml
@@ -0,0 +1,125 @@
+name: Bug Report
+description: Report a bug encountered while running the vSphere Cloud Provider (CPI)
+labels: kind/bug
+body:
+ - type: textarea
+ id: problem
+ attributes:
+ label: What happened?
+ description: |
+ Please provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner.
+ If this matter is security related, please disclose it privately via https://kubernetes.io/security
+ validations:
+ required: true
+
+ - type: textarea
+ id: expected
+ attributes:
+ label: What did you expect to happen?
+ validations:
+ required: true
+
+ - type: textarea
+ id: repro
+ attributes:
+ label: How can we reproduce it (as minimally and precisely as possible)?
+ validations:
+ required: true
+
+ - type: textarea
+ id: additional
+ attributes:
+ label: Anything else we need to know? (Please consider providing CPI logs at verbosity level 4 or above.)
+
+ - type: textarea
+ id: kubeVersion
+ attributes:
+ label: Kubernetes version
+ value: |
+
+
+ ```console
+ $ kubectl version
+ # paste output here
+ ```
+
+
+ validations:
+ required: true
+
+ - type: textarea
+ id: cloudProvider
+ attributes:
+ label: Cloud provider or hardware configuration
+ value: |
+
+
+
+ validations:
+ required: true
+
+ - type: textarea
+ id: osVersion
+ attributes:
+ label: OS version
+ value: |
+
+
+ ```console
+ # On Linux:
+ $ cat /etc/os-release
+ # paste output here
+ $ uname -a
+ # paste output here
+
+ # On Windows:
+ C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
+ # paste output here
+ ```
+
+
+
+ - type: textarea
+ id: kernel
+ attributes:
+ label: Kernel (e.g. `uname -a`)
+ value: |
+
+
+
+
+ - type: textarea
+ id: installer
+ attributes:
+ label: Install tools
+ value: |
+
+
+
+
+ - type: textarea
+ id: runtime
+ attributes:
+ label: Container runtime (CRI) and version (if applicable)
+ value: |
+
+
+
+
+ - type: textarea
+ id: plugins
+ attributes:
+ label: Related plugins (CNI, CSI, ...) and versions (if applicable)
+ value: |
+
+
+
+
+ - type: textarea
+ id: others
+ attributes:
+ label: Others
+ value: |
+
+
+
diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
new file mode 100644
index 000000000..4c74dfd79
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,4 @@
+contact_links:
+ - name: Support Request
+ url: https://discuss.kubernetes.io
+ about: Support request or question relating to Kubernetes
diff --git a/.github/ISSUE_TEMPLATE/enhancement.yaml b/.github/ISSUE_TEMPLATE/enhancement.yaml
new file mode 100644
index 000000000..c7b92496f
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/enhancement.yaml
@@ -0,0 +1,21 @@
+name: Enhancement Tracking Issue
+description: Provide supporting details for a feature in development
+labels: kind/feature
+body:
+ - type: textarea
+ id: feature
+ attributes:
+ label: What would you like to be added?
+ description: |
+ Feature requests are unlikely to make progress as issues. Please consider engaging with SIGs on Slack and mailing lists instead.
+ A proposal that works through the design along with the implications of the change can be opened as a KEP.
+ See https://git.k8s.io/enhancements/keps#kubernetes-enhancement-proposals-keps
+ validations:
+ required: true
+
+ - type: textarea
+ id: rationale
+ attributes:
+ label: Why is this needed?
+ validations:
+ required: true
diff --git a/.github/ISSUE_TEMPLATE/failing-test.yaml b/.github/ISSUE_TEMPLATE/failing-test.yaml
new file mode 100644
index 000000000..4f0469d55
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/failing-test.yaml
@@ -0,0 +1,48 @@
+name: Failing Test
+description: Report continuously failing tests or jobs in Kubernetes CI
+labels: kind/failing-test
+body:
+ - type: textarea
+ id: jobs
+ attributes:
+ label: Which jobs are failing?
+ placeholder: |
+ Please only use this template for submitting reports about continuously failing tests or jobs in Kubernetes CI.
+ validations:
+ required: true
+
+ - type: textarea
+ id: tests
+ attributes:
+ label: Which tests are failing?
+ validations:
+ required: true
+
+ - type: textarea
+ id: since
+ attributes:
+ label: Since when has it been failing?
+ validations:
+ required: true
+
+ - type: input
+ id: testgrid
+ attributes:
+ label: Testgrid link
+
+ - type: textarea
+ id: reason
+ attributes:
+ label: Reason for failure (if possible)
+
+ - type: textarea
+ id: additional
+ attributes:
+ label: Anything else we need to know?
+
+ - type: textarea
+ id: sigs
+ attributes:
+ label: Relevant SIG(s)
+ description: You can identify the SIG from the "prowjob_config_url" on the testgrid dashboard for a test.
+ value: /sig
diff --git a/.github/ISSUE_TEMPLATE/flaking-test.yaml b/.github/ISSUE_TEMPLATE/flaking-test.yaml
new file mode 100644
index 000000000..7bf0e5123
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/flaking-test.yaml
@@ -0,0 +1,50 @@
+name: Flaking Test
+description: Report flaky tests or jobs in Kubernetes CI
+labels: kind/flake
+body:
+ - type: textarea
+ id: jobs
+ attributes:
+ label: Which jobs are flaking?
+ description: |
+ Please only use this template for submitting reports about flaky tests or jobs (pass or fail with no underlying change in code) in Kubernetes CI.
+ Links to go.k8s.io/triage and/or links to specific failures in spyglass are appreciated.
+ Please see the deflaking doc (https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/flaky-tests.md) for more guidance.
+ validations:
+ required: true
+
+ - type: textarea
+ id: tests
+ attributes:
+ label: Which tests are flaking?
+ validations:
+ required: true
+
+ - type: textarea
+ id: since
+ attributes:
+ label: Since when has it been flaking?
+ validations:
+ required: true
+
+ - type: input
+ id: testgrid
+ attributes:
+ label: Testgrid link
+
+ - type: textarea
+ id: reason
+ attributes:
+ label: Reason for failure (if possible)
+
+ - type: textarea
+ id: additional
+ attributes:
+ label: Anything else we need to know?
+
+ - type: textarea
+ id: sigs
+ attributes:
+ label: Relevant SIG(s)
+ description: You can identify the SIG from the "prowjob_config_url" on the testgrid dashboard for a test.
+ value: /sig
diff --git a/.gitignore b/.gitignore
index 8f1003e44..bd85131f3 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,6 +4,9 @@
# Ignore the build output.
/.build
+# Ignore tooling binaries.
+/hack/tools/bin
+
# Ignore the environment variable and Prow configuration files at
# the root of the project.
/config.env
@@ -24,7 +27,6 @@
*.sublime-workspace
*.swp
.idea
-.DS_Store
# OSX leaves these everywhere on SMB shares
._*
@@ -145,6 +147,7 @@ zz_generated.openapi.go
/.make/
# Just in time generated data in the source, should never be committed
/test/e2e/generated/bindata.go
+/test/e2e/data
# This file used by some vendor repos (e.g. github.com/go-openapi/...) to store secret variables and should not be ignored
!\.drone\.sec
@@ -155,5 +158,9 @@ zz_generated.openapi.go
/bazel-*
*.pyc
+# Output artifacts & logs from e2e
+/_e2e-logs/
+/_e2e_artifacts/
+
# binaries
/vsphere-cloud-controller-manager
diff --git a/Makefile b/Makefile
index 3c48ef81c..d9505dec2 100644
--- a/Makefile
+++ b/Makefile
@@ -93,6 +93,14 @@ build build-bins: $(CCM_BIN)
build-with-docker:
hack/make.sh
+# Tooling binaries for e2e
+TOOLS_DIR := $(abspath hack/tools)
+TOOLS_BIN_DIR := $(TOOLS_DIR)/bin
+GINKGO := $(TOOLS_BIN_DIR)/ginkgo
+KIND := $(TOOLS_BIN_DIR)/kind
+TOOLING_BINARIES := $(GINKGO) $(KIND)
+E2E_DIR := $(abspath test/e2e)
+
################################################################################
## DIST ##
################################################################################
@@ -204,7 +212,7 @@ endif # ifndef X_BUILD_DISABLED
## TESTING ##
################################################################################
ifndef PKGS_WITH_TESTS
-export PKGS_WITH_TESTS := $(sort $(shell find . -name "*_test.go" -type f -exec dirname \{\} \;))
+export PKGS_WITH_TESTS := $(sort $(shell find ./pkg -name "*_test.go" -type f -exec dirname \{\} \;))
endif
TEST_FLAGS ?= -v
.PHONY: unit build-unit-tests
@@ -225,6 +233,15 @@ build-tests: build-unit-tests
cover: TEST_FLAGS += -cover
cover: test
+tools: $(TOOLING_BINARIES) ## Build tooling binaries
+.PHONY: $(TOOLING_BINARIES)
+$(TOOLING_BINARIES):
+ $(MAKE) -C $(TOOLS_DIR) $(@F)
+
+.PHONY: test-e2e
+test-e2e:
+ $(MAKE) -C $(E2E_DIR) run
+
.PHONY: integration-test
integration-test: | $(DOCKER_SOCK)
$(MAKE) -C test/integration
diff --git a/README.md b/README.md
index db3d76195..2a85e8472 100644
--- a/README.md
+++ b/README.md
@@ -23,10 +23,16 @@ Version matrix:
| Kubernetes Version | vSphere Cloud Provider Release Version | Cloud Provider Branch |
| ----------- | ----------- | ----------- |
+| v1.22.X | v1.22.X | release-1.22 |
+| v1.21.X | v1.21.X | release-1.21 |
| v1.20.X | v1.20.X | release-1.20 |
| v1.19.X | v1.19.X | release-1.19 |
| v1.18.X | v1.18.X | release-1.18 |
+Our current support policy is that when a new Kubernetes release comes out, we bump our k8s dependencies to the new version and cut a new CPI release, e.g. CPI v1.22.x was released after k8s v1.22 came out.
+
+The latest CPI version is ![GitHub release (latest SemVer including pre-releases)](https://img.shields.io/github/v/release/kubernetes/cloud-provider-vsphere?include_prereleases). The recommended way to upgrade CPI can be found on [this page](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/releases/README.md).
+
## Quickstart
Get started with Cloud controller manager for vSphere with Kubeadm with this [quickstart](https://cloud-provider-vsphere.sigs.k8s.io/tutorials/kubernetes-on-vsphere-with-kubeadm.html).
diff --git a/charts/vsphere-cpi/Chart.yaml b/charts/vsphere-cpi/Chart.yaml
index 913fd8e98..b9b8de66a 100644
--- a/charts/vsphere-cpi/Chart.yaml
+++ b/charts/vsphere-cpi/Chart.yaml
@@ -1,5 +1,5 @@
apiVersion: v2
-appVersion: 1.21.0
+appVersion: 1.22.2
description: A Helm chart for vSphere Cloud Provider Interface Manager (CPI)
name: vsphere-cpi
version: 1.0.0
diff --git a/charts/vsphere-cpi/README.md b/charts/vsphere-cpi/README.md
index 548864d05..c88c38493 100644
--- a/charts/vsphere-cpi/README.md
+++ b/charts/vsphere-cpi/README.md
@@ -8,7 +8,7 @@ This chart deploys all components required to run the external vSphere CPI as de
## Prerequisites
-- Has been tested on Kubernetes 1.21.X+
+- Has been tested on Kubernetes 1.22.X+
- Assumes your Kubernetes cluster has been configured to use the external cloud provider. Please take a look at configuration guidelines located in the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#running-cloud-controller-manager).
## Installing the Chart using Helm 3.0+
@@ -94,15 +94,18 @@ The following table lists the configurable parameters of the vSphere CPI chart a
| `podSecurityPolicy.enabled` | Enable pod sec policy (k8s > 1.17) | true |
| `podSecurityPolicy.annotations` | Annotations for pd sec policy | nil |
| `securityContext.enabled` | Enable sec context for container | false |
-| `securityContext.runAsUser` | RunAsUser. Default is `nobody` in | 1001 |
+| `securityContext.runAsUser` | RunAsUser. Default is `nobody` in | 1001 |
| | distroless image | |
-| `securityContext.fsGroup` | FsGroup. Default is `nobody` in | 1001 |
+| `securityContext.fsGroup` | FsGroup. Default is `nobody` in | 1001 |
| | distroless image | |
| `config.enabled` | Create a simple single VC config | false |
+| `config.name` | Name of the created VC configmap | vsphere-cloud-config |
| `config.vcenter` | FQDN or IP of vCenter | vcenter.local |
| `config.username` | vCenter username | user |
| `config.password` | vCenter password | pass |
| `config.datacenter` | Datacenters within the vCenter | dc |
+| `config.secret.create` | Create secret for VC config | true |
+| `config.secret.name` | Name of the created VC secret | vsphere-cloud-secret |
| `rbac.create` | Create roles and role bindings | true |
| `serviceAccount.create` | Create the service account | true |
| `serviceAccount.name` | Name of the created service account | cloud-controller-manager |
diff --git a/charts/vsphere-cpi/templates/configmap.yaml b/charts/vsphere-cpi/templates/configmap.yaml
index 9b4cdd134..93f2a5029 100644
--- a/charts/vsphere-cpi/templates/configmap.yaml
+++ b/charts/vsphere-cpi/templates/configmap.yaml
@@ -2,7 +2,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
- name: vsphere-cloud-config
+ name: {{ .Values.config.name | default "cloud-config" }}
labels:
app: {{ template "cpi.name" . }}
vsphere-cpi-infra: cloud-config
@@ -16,7 +16,7 @@ data:
# set insecure-flag to true if the vCenter uses a self-signed cert
insecureFlag: true
# settings for using k8s secret
- secretName: vsphere-cloud-secret
+ secretName: {{ .Values.config.secret.name }}
secretNamespace: {{ .Release.Namespace }}
# vcenter section
diff --git a/charts/vsphere-cpi/templates/daemonset.yaml b/charts/vsphere-cpi/templates/daemonset.yaml
index 9787f7b92..b2e3b408e 100644
--- a/charts/vsphere-cpi/templates/daemonset.yaml
+++ b/charts/vsphere-cpi/templates/daemonset.yaml
@@ -82,10 +82,7 @@ spec:
- name: VSPHERE_API_DISABLE
value: "true"
- name: VSPHERE_API_BINDING
- valueFrom:
- configMapKeyRef:
- name: {{ template "cpi.fullname" . }}
- key: api.binding
+ value: {{ template "api.binding" . }}
ports:
- containerPort: {{ .Values.service.endpointPort }}
protocol: TCP
@@ -101,4 +98,4 @@ spec:
volumes:
- name: vsphere-config-volume
configMap:
- name: cloud-config
+ name: {{ if .Values.config.enabled }}{{- .Values.config.name }}{{- else }}cloud-config{{- end }}
diff --git a/charts/vsphere-cpi/templates/secret.yaml b/charts/vsphere-cpi/templates/secret.yaml
index 338e6d9d7..961b8518e 100644
--- a/charts/vsphere-cpi/templates/secret.yaml
+++ b/charts/vsphere-cpi/templates/secret.yaml
@@ -1,14 +1,14 @@
-{{- if .Values.config.enabled | default .Values.global.config.enabled -}}
+{{- if and .Values.config.secret.create (.Values.config.enabled | default .Values.global.config.enabled) -}}
apiVersion: v1
kind: Secret
metadata:
- name: vsphere-cloud-secret
+ name: {{ .Values.config.secret.name | default "vsphere-cloud-secret" }}
labels:
app: {{ template "cpi.name" . }}
vsphere-cpi-infra: secret
component: cloud-controller-manager
namespace: {{ .Release.Namespace }}
stringData:
- {{ .Values.config.vcenter | default .Values.global.config.vcenter }}.username: {{ .Values.config.username | default .Values.global.config.username }}
- {{ .Values.config.vcenter | default .Values.global.config.vcenter }}.password: {{ .Values.config.password | default .Values.global.config.password }}
+ {{ .Values.config.vcenter | default .Values.global.config.vcenter }}.username: {{ .Values.config.username | default .Values.global.config.username | quote }}
+ {{ .Values.config.vcenter | default .Values.global.config.vcenter }}.password: {{ .Values.config.password | default .Values.global.config.password | quote }}
{{- end -}}
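With the chart defaults introduced above (`config.secret.create: true`, `config.secret.name: vsphere-cloud-secret`, a single vCenter), the template would render roughly as follows — the values shown are the chart's placeholder defaults, not real credentials, and the namespace is whatever `.Release.Namespace` resolves to:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-cloud-secret
  namespace: kube-system   # .Release.Namespace; labels omitted for brevity
stringData:
  vcenter.local.username: "user"
  vcenter.local.password: "pass"
```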
diff --git a/charts/vsphere-cpi/values.yaml b/charts/vsphere-cpi/values.yaml
index b05306623..b966e4a53 100644
--- a/charts/vsphere-cpi/values.yaml
+++ b/charts/vsphere-cpi/values.yaml
@@ -1,6 +1,6 @@
# Default values for vSphere CPI.
# This is a YAML-formatted file.
-# vSohere CPI values are grouped by component
+# vSphere CPI values are grouped by component
global:
config:
@@ -8,6 +8,7 @@ global:
config:
enabled: false
+ name: vsphere-cloud-config
vcenter: "vcenter.local"
username: "user"
password: "pass"
@@ -15,6 +16,13 @@ config:
region: "k8s-region"
zone: "k8s-zone"
+ secret:
+ # Specifies whether Secret should be created from config values
+ create: true
+ # The name of the Secret referred to in the vsphere-cloud-config ConfigMap
+ # If your Kubernetes platform provides this secret, set create to false and adjust the secret name
+ name: vsphere-cloud-secret
+
## Specify if a Pod Security Policy for kube-state-metrics must be created
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
@@ -50,7 +58,7 @@ serviceAccount:
daemonset:
annotations: {}
image: gcr.io/cloud-provider-vsphere/cpi/release/manager
- tag: v1.21.0
+ tag: v1.22.2
pullPolicy: IfNotPresent
dnsPolicy: ClusterFirst
cmdline:
diff --git a/cluster/images/controller-manager/Dockerfile b/cluster/images/controller-manager/Dockerfile
index 66aa419ba..7da12f67c 100644
--- a/cluster/images/controller-manager/Dockerfile
+++ b/cluster/images/controller-manager/Dockerfile
@@ -14,7 +14,7 @@
## BUILD ARGS ##
################################################################################
# This build arg allows the specification of a custom Golang image.
-ARG GOLANG_IMAGE=golang:1.16.7
+ARG GOLANG_IMAGE=golang:1.17.5
# The distroless image on which the CPI manager image is built.
#
@@ -33,7 +33,7 @@ ARG DISTROLESS_IMAGE=gcr.io/distroless/static@sha256:9b60270ec0991bc4f14bda475e8
FROM ${GOLANG_IMAGE} as builder
# This build arg is the version to embed in the CPI binary
-ARG VERSION=unknown
+ARG VERSION=1.22.3
# This build arg controls the GOPROXY setting
ARG GOPROXY
@@ -44,7 +44,7 @@ COPY pkg/ pkg/
COPY cmd/ cmd/
ENV CGO_ENABLED=0
ENV GOPROXY ${GOPROXY:-https://proxy.golang.org}
-RUN go build -a -ldflags='-w -s -extldflags=static -X main.version=${VERSION}' -o vsphere-cloud-controller-manager ./cmd/vsphere-cloud-controller-manager
+RUN go build -a -ldflags="-w -s -extldflags=static -X main.version=${VERSION}" -o vsphere-cloud-controller-manager ./cmd/vsphere-cloud-controller-manager
################################################################################
## MAIN STAGE ##
diff --git a/cmd/vsphere-cloud-controller-manager/main.go b/cmd/vsphere-cloud-controller-manager/main.go
index d98afb3bb..1a141b763 100644
--- a/cmd/vsphere-cloud-controller-manager/main.go
+++ b/cmd/vsphere-cloud-controller-manager/main.go
@@ -27,7 +27,6 @@ import (
"strings"
"time"
- "k8s.io/apimachinery/pkg/util/wait"
cloudprovider "k8s.io/cloud-provider"
"k8s.io/cloud-provider-vsphere/pkg/cloudprovider/vsphere"
"k8s.io/cloud-provider-vsphere/pkg/cloudprovider/vsphere/loadbalancer"
@@ -44,6 +43,7 @@ import (
"k8s.io/component-base/version/verflag"
klog "k8s.io/klog/v2"
+ "github.com/fsnotify/fsnotify"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
@@ -74,7 +74,8 @@ func main() {
c, err := ccmOptions.Config(app.ControllerNames(app.DefaultInitFuncConstructors), app.ControllersDisabledByDefault.List())
if err != nil {
- fmt.Fprintf(os.Stderr, "%v\n", err)
+ // explicitly ignore the error by Fprintf, exiting anyway
+ _, _ = fmt.Fprintf(os.Stderr, "%v\n", err)
os.Exit(1)
}
@@ -83,7 +84,9 @@ func main() {
// Default to the vsphere cloud provider if not set
cloudProviderFlag := cmd.Flags().Lookup("cloud-provider")
if cloudProviderFlag.Value.String() == "" {
- cloudProviderFlag.Value.Set(vsphere.RegisteredProviderName)
+ if err := cloudProviderFlag.Value.Set(vsphere.RegisteredProviderName); err != nil {
+ klog.Fatalf("cannot set RegisteredProviderName to %s: %v", vsphere.RegisteredProviderName, err)
+ }
}
cloudProvider := cloudProviderFlag.Value.String()
@@ -92,11 +95,24 @@ func main() {
}
completedConfig := c.Complete()
- cloud := cloudInitializer(completedConfig, cloudProvider)
+
+ cloud := initializeCloud(completedConfig, cloudProvider)
controllerInitializers = app.ConstructControllerInitializers(app.DefaultInitFuncConstructors, completedConfig, cloud)
- if err := app.Run(completedConfig, cloud, controllerInitializers, wait.NeverStop); err != nil {
- fmt.Fprintf(os.Stderr, "%v\n", err)
+ // initialize a notifier for cloud config update
+ cloudConfig := completedConfig.ComponentConfig.KubeCloudShared.CloudProvider.CloudConfigFile
+ klog.Infof("initializing notifier on cloud config update %s", cloudConfig)
+ watch, stop, err := initializeWatch(completedConfig, cloudConfig)
+ if err != nil {
+ klog.Fatalf("failed to initialize watch on config map %s: %v", cloudConfig, err)
+ }
+ defer func(watch *fsnotify.Watcher) {
+ _ = watch.Close() // ignore explicitly when the watch closes
+ }(watch)
+
+ if err := app.Run(completedConfig, cloud, controllerInitializers, stop); err != nil {
+ // explicitly ignore the error by Fprintf, exiting anyway due to app error
+ _, _ = fmt.Fprintf(os.Stderr, "%v\n", err)
os.Exit(1)
}
},
@@ -122,12 +138,16 @@ func main() {
usageFmt := "Usage:\n %s\n"
cols, _, _ := term.TerminalSize(command.OutOrStdout())
command.SetUsageFunc(func(cmd *cobra.Command) error {
- fmt.Fprintf(cmd.OutOrStderr(), usageFmt, cmd.UseLine())
+ if _, err := fmt.Fprintf(cmd.OutOrStderr(), usageFmt, cmd.UseLine()); err != nil {
+ return err
+ }
cliflag.PrintSections(cmd.OutOrStderr(), namedFlagSets, cols)
return nil
})
command.SetHelpFunc(func(cmd *cobra.Command, args []string) {
- fmt.Fprintf(cmd.OutOrStdout(), "%s\n\n"+usageFmt, cmd.Long, cmd.UseLine())
+ if _, err := fmt.Fprintf(cmd.OutOrStdout(), "%s\n\n"+usageFmt, cmd.Long, cmd.UseLine()); err != nil {
+ return
+ }
cliflag.PrintSections(cmd.OutOrStdout(), namedFlagSets, cols)
})
@@ -148,7 +168,9 @@ func main() {
// Set cloud-provider flag to vsphere
case "cloud-provider":
cloudProviderFlag = &flag.Value
- flag.Value.Set(vsphere.RegisteredProviderName)
+ if err := flag.Value.Set(vsphere.RegisteredProviderName); err != nil {
+ klog.Fatalf("cannot set RegisteredProviderName to %s: %v", vsphere.RegisteredProviderName, err)
+ }
flag.DefValue = vsphere.RegisteredProviderName
case "cluster-name":
clusterNameFlag = &flag.Value
@@ -187,12 +209,38 @@ func main() {
}
if err := command.Execute(); err != nil {
- fmt.Fprintf(os.Stderr, "error: %v\n", err)
+ // ignore error by Fprintf, exit anyway due to cmd execute error
+ _, _ = fmt.Fprintf(os.Stderr, "error: %v\n", err)
os.Exit(1)
}
}
-func cloudInitializer(config *appconfig.CompletedConfig, cloudProvider string) cloudprovider.Interface {
+// initializeWatch sets up a filesystem watcher for the cloud config mount and
+// signals the returned stop channel whenever the file is updated, so the app
+// restarts with the new configuration.
+func initializeWatch(_ *appconfig.CompletedConfig, cloudConfigPath string) (watch *fsnotify.Watcher, stopCh chan struct{}, err error) {
+ stopCh = make(chan struct{})
+ watch, err = fsnotify.NewWatcher()
+ if err != nil {
+ klog.Fatalf("failed to set up config watcher: %v", err)
+ }
+ go func() {
+ for {
+ select {
+ case err := <-watch.Errors:
+ klog.Warningf("watcher receives err: %v\n", err)
+ case event := <-watch.Events:
+ klog.Fatalf("config map %s has been updated, restarting pod, received event %v\n", cloudConfigPath, event)
+ stopCh <- struct{}{}
+ }
+ }
+ }()
+ if err := watch.Add(cloudConfigPath); err != nil {
+ klog.Fatalf("failed to watch cloud config file %s: %v", cloudConfigPath, err)
+ }
+ return
+}
+
+func initializeCloud(config *appconfig.CompletedConfig, cloudProvider string) cloudprovider.Interface {
cloudConfig := config.ComponentConfig.KubeCloudShared.CloudProvider
// initialize cloud provider with the cloud provider name and config file provided
diff --git a/docs/book/README.md b/docs/book/README.md
index 0ca2cedbb..6ec880b0c 100644
--- a/docs/book/README.md
+++ b/docs/book/README.md
@@ -4,7 +4,7 @@ This is documentation for the [Kubernetes vSphere Cloud Provider](https://github
## Introduction
-This documentation provides information about running Kubernetes on vSphere and specifically focuses on the Container Storage Interface (CSI) and Cloud Provider Interface (CPI), previously called Cloud Control Manager (CCM). This documentation covers key concepts, features, known issues, installation requirements, and offers sample procedures to run Kubernetes clusters on vSphere. Note that you can continue running Kubernetes clusters on vSphere without enabling the cloud provider integration. However, if you do not use the Cloud Provider Interface, your Kubernetes clusters will not have integration with the underlying infrastructure.
+This documentation provides information about running Kubernetes on vSphere and specifically focuses on the Cloud Provider Interface (CPI), previously called the Cloud Controller Manager (CCM). This documentation covers key concepts, features, known issues, installation requirements, and offers sample procedures to run Kubernetes clusters on vSphere. Note that you can continue running Kubernetes clusters on vSphere without enabling the cloud provider integration. However, if you do not use the Cloud Provider Interface, your Kubernetes clusters will not have integration with the underlying infrastructure.
## History
@@ -14,6 +14,8 @@ The in-tree provider for vSphere is called the vSphere Cloud Provider (VCP). The
This document covers both the in-tree and out-of-tree vSphere integrations for Kubernetes. For Kubernetes clusters on vSphere, both in-tree and out-of-tree modes of operation work. However, the out-of-tree vSphere cloud provider is recommended as future releases of Kubernetes will remove support for all in-tree cloud providers. Also, the in-tree VCP only has community support, unless support is provided by a managed Kubernetes offering.
+If you are looking for more information about the Container Storage Interface (CSI), please refer to [Kubernetes Container Storage Interface (CSI) Documentation](https://kubernetes-csi.github.io/docs/).
+
## Summary
* [Concepts](concepts.md)
@@ -21,10 +23,9 @@ This document covers both the in-tree and out-of-tree vSphere integrations for K
* [In-tree vs Out-of-Tree](concepts/in_tree_vs_out_of_tree.md)
* [Overview of the VCP](concepts/vcp_overview.md)
* [Overview of the CPI](concepts/cpi_overview.md)
- * [Overview of the CSI](concepts/csi_overview.md)
+ * [Overview of the CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md)
* [Glossary](glossary.md)
* [Cloud Provider Interface (CPI)](cloud_provider_interface.md)
- * [Container Storage Interface (CSI)](container_storage_interface.md)
* [Known Issues](known_issues.md)
## Tutorials
@@ -32,8 +33,12 @@ This document covers both the in-tree and out-of-tree vSphere integrations for K
### vSphere 6.7U3 tutorials
* [Deploying a new K8s cluster with CPI and CSI on vSphere 6.7U3 with kubeadm](./tutorials/kubernetes-on-vsphere-with-kubeadm.md)
-* [Deploying CPI and CSI with Zones Topology](./tutorials/deploying_cpi_and_csi_with_multi_dc_vc_aka_zones.md)
+* [Deploying CPI with Zones Topology](./tutorials/deploying_cpi_with_multi_dc_vc_aka_zones.md)
### Earlier tutorials
* [Deploying K8s with vSphere Cloud Provider (in-tree) using kubeadm (deprecated)](./tutorials/k8s-vcp-on-vsphere-with-kubeadm.md)
+
+## Developer Guide
+
+* [Release guide for CPI](./tutorials/make_a_new_cpi_release.md)
diff --git a/docs/book/SUMMARY.md b/docs/book/SUMMARY.md
index 346192c47..972bb5858 100644
--- a/docs/book/SUMMARY.md
+++ b/docs/book/SUMMARY.md
@@ -5,17 +5,17 @@
* [In-Tree and Out-of-Tree Implementation Models](concepts/in_tree_vs_out_of_tree.md)
* [About vSphere Cloud Provider](concepts/vcp_overview.md)
* [Overview of the CPI](concepts/cpi_overview.md)
- * [Overview of the CSI](concepts/csi_overview.md)
+ * [Overview of the CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md)
* [Glossary](glossary.md)
* [Cloud Provider Interface (CPI)](cloud_provider_interface.md)
-* [Container Storage Interface (CSI)](container_storage_interface.md)
* [Cloud Config Spec](cloud_config.md)
* [Known Issues](known_issues.md)
## Tutorials
-* [Deploying the vSphere CPI and CSI in a Multi-vCenter OR Multi-Datacenter Environment using Zones](/tutorials/deploying_cpi_and_csi_with_multi_dc_vc_aka_zones.md)
-* [Enabling vSphere CSI on an existing cluster](/tutorials/enabling-vsphere-csi-on-an-existing-cluster.md)
+* [Deploying the vSphere CPI in a Multi-vCenter OR Multi-Datacenter Environment using Zones](/tutorials/deploying_cpi_with_multi_dc_vc_aka_zones.md)
+* [Using vSphere Container Storage Plug-in](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-5D144DA0-4806-4DEB-8819-10A1C42E38AB.html)
* [Running a Kubernetes Cluster on vSphere with kubeadm](./tutorials/k8s-vcp-on-vsphere-with-kubeadm.md)
* [Deploying vSphere CPI using Helm](/tutorials/kubernetes-on-vsphere-with-helm.md)
+* [Deploying vSphere CPI with k3s](/tutorials/deploying-cpi-with-k3s.md)
* [Deploying a Kubernetes Cluster on vSphere with CSI and CPI](/tutorials/kubernetes-on-vsphere-with-kubeadm.md)
diff --git a/docs/book/cloud_config.md b/docs/book/cloud_config.md
index 8af2866a0..9ffc31b54 100644
--- a/docs/book/cloud_config.md
+++ b/docs/book/cloud_config.md
@@ -47,9 +47,17 @@ Here's the entire cloud config spec using example values:
[Labels]
region = k8s-region
zone = k8s-zone
+
+[Nodes]
+ internal-network-subnet-cidr = "192.0.2.0/24"
+ external-network-subnet-cidr = "198.51.100.0/24"
+ internal-vm-network-name = "Internal K8s Traffic"
+ external-vm-network-name = "External/Outbound Traffic"
+ exclude-internal-network-subnet-cidr = "192.0.2.0/24,fe80::1/128"
+ exclude-external-network-subnet-cidr = "192.1.2.0/24,fe80::2/128"
```
-There are 3 sections in the cloud config file, let's break down the fields in each section:
+There are 4 sections in the cloud config file; let's break down the fields in each section:
### Global
@@ -171,6 +179,61 @@ on your Nodes and PersistentVolumes based on the value of the tags specified her
zone = k8s-zone
```
+### Nodes
+
+The Nodes section defines the way that the Node IPs are selected from the
+addresses assigned to the Node in kube-api.
+
+Addresses in the optional `exclude-internal-network-subnet-cidr` and
+`exclude-external-network-subnet-cidr` lists are removed from consideration
+before any matching or selection happens.
+
+If provided, the `internal-network-subnet-cidr` and
+`external-network-subnet-cidr` matching will be attempted first. Addresses that
+fall within each of the provided CIDRs will be selected.
+
+If provided, and the subnet matching method does not select a matching address,
+the `internal-vm-network-name` and `external-vm-network-name` matching will be
+attempted. Addresses belonging to networks that match the name in vSphere will
+be selected.
+
+If these methods fail to select an address, or if none of these options were
+configured, the default behavior selects the first address that is not a
+localhost address.
+
+```bash
+[Nodes]
+ # If set, the vSphere cloud provider will select the first address that falls
+ # within the provided subnet and assign that value to the Internal network for
+ # the node.
+ internal-network-subnet-cidr = "192.0.2.0/24"
+
+ # If set, the vSphere cloud provider will select the first address that falls
+ # within the provided subnet and assign that value to the External network for
+ # the node.
+ external-network-subnet-cidr = "198.51.100.0/24"
+
+ # If set, the vSphere cloud provider will select the first address found in
+ # the VM network matching the provided name and assign that value to the
+ # Internal network for the node.
+ internal-vm-network-name = "Internal K8s Traffic"
+
+ # If set, the vSphere cloud provider will select the first address found in
+ # the VM network matching the provided name and assign that value to the
+ # External network for the node.
+ external-vm-network-name = "External/Outbound Traffic"
+
+ # If set, the vSphere cloud provider will never select addresses for the
+ # Internal network that fall within the provided subnet ranges. This
+ # configuration has the highest precedence. See notes above for details.
+ exclude-internal-network-subnet-cidr = "192.0.2.0/24,fe80::1/128"
+
+ # If set, the vSphere cloud provider will never select addresses for the
+ # External network that fall within the provided subnet ranges. This
+ # configuration has the highest precedence. See notes above for details.
+ exclude-external-network-subnet-cidr = "192.1.2.0/24,fe80::2/128"
+```
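The selection precedence described above (exclusions first, then subnet matching, then VM-network-name matching, then the first non-localhost fallback) can be sketched roughly as follows. This is an illustrative model, not the provider's actual code; the function name and the `(ip, vm_network_name)` input shape are assumptions for the example:

```python
import ipaddress

def select_node_ip(addresses, subnet_cidr=None, network_name=None,
                   exclude_cidrs=()):
    """Sketch of how one network's (Internal or External) address is
    chosen. `addresses` is a list of (ip, vm_network_name) tuples."""
    # Excluded subnets are removed from consideration before any matching.
    excluded = [ipaddress.ip_network(c) for c in exclude_cidrs]
    candidates = [
        (ip, net) for ip, net in addresses
        if not any(ipaddress.ip_address(ip) in ex for ex in excluded)
    ]

    # 1. Subnet matching is attempted first, if configured.
    if subnet_cidr:
        subnet = ipaddress.ip_network(subnet_cidr)
        for ip, _ in candidates:
            if ipaddress.ip_address(ip) in subnet:
                return ip

    # 2. VM-network-name matching is attempted next, if configured.
    if network_name:
        for ip, net in candidates:
            if net == network_name:
                return ip

    # 3. Default: the first address that is not a loopback address.
    for ip, _ in candidates:
        if not ipaddress.ip_address(ip).is_loopback:
            return ip
    return None

ips = [("127.0.0.1", "lo"), ("192.0.2.10", "Internal K8s Traffic"),
       ("198.51.100.7", "External/Outbound Traffic")]
print(select_node_ip(ips, subnet_cidr="192.0.2.0/24"))           # 192.0.2.10
print(select_node_ip(ips, network_name="External/Outbound Traffic"))  # 198.51.100.7
print(select_node_ip(ips, exclude_cidrs=("192.0.2.0/24",)))      # 198.51.100.7
```

Note how the third call falls through to the default rule once the excluded subnet removes the internal address: the loopback address is skipped and the remaining routable address is returned.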
+
### Storing vCenter Credentials in a Kubernetes Secret
## FAQ
diff --git a/docs/book/cloud_provider_interface.md b/docs/book/cloud_provider_interface.md
index f663c64a4..83bed4001 100644
--- a/docs/book/cloud_provider_interface.md
+++ b/docs/book/cloud_provider_interface.md
@@ -68,7 +68,7 @@ The instance type label is generally useful and can be used for:
### Node Zones/Regions Topology
-Similar to instance type, the cloud provider can also apply zones and region labels to your Kubernetes nodes. The zones and region topology labels are interesting because they originate from use-cases derived from public cloud providers where VMs are provisioned in physical zones and regions. For the case of vSphere, physical zones and regions may not always apply. However, the vSphere cloud provider allows you to configure the "zones" and "regions" topology arbitrarily on your clusters. This gives the vSphere admin flexibility to configure zones/region topology based on their use-case. A vSphere admin can enable zones/regions support by tagging VMs on vSphere with the desired zones/regions. To learn more about how to enable and operate zones on your cluster, see the [Zones Support Tutorial](https://github.com/cormachogan/cloud-provider-vsphere/blob/master/docs/book/tutorials/deploying_ccm_and_csi_with_multi_dc_vc_aka_zones.md).
+Similar to instance type, the cloud provider can also apply zones and region labels to your Kubernetes nodes. The zones and region topology labels are interesting because they originate from use-cases derived from public cloud providers where VMs are provisioned in physical zones and regions. For the case of vSphere, physical zones and regions may not always apply. However, the vSphere cloud provider allows you to configure the "zones" and "regions" topology arbitrarily on your clusters. This gives the vSphere admin flexibility to configure zones/region topology based on their use-case. A vSphere admin can enable zones/regions support by tagging VMs on vSphere with the desired zones/regions. To learn more about how to enable and operate zones on your cluster, see the [Zones Support Tutorial](https://github.com/cormachogan/cloud-provider-vsphere/blob/master/docs/book/tutorials/deploying_cpi_with_multi_dc_vc_aka_zones.md).
Once you have zones or regions enabled on your cluster, you can verify zones support by looking for the failure-domain.beta.kubernetes.io/zone and failure-domain.beta.kubernetes.io/region labels on your nodes. For this specific example, each rack represents a zone and the entire datacenter located in SFO represents a region.
diff --git a/docs/book/concepts/cpi_overview.md b/docs/book/concepts/cpi_overview.md
index 8091ac177..1d8d3b1f4 100644
--- a/docs/book/concepts/cpi_overview.md
+++ b/docs/book/concepts/cpi_overview.md
@@ -5,3 +5,13 @@ The Cloud Provider Interface (CPI) project decouples intelligence of underlying
The out-of-tree CPI integration connects to vCenter Server and maps information about your infrastructure, such as VMs, disks, and so on, back to the Kubernetes API. Only the cloud-controller-manager pod is required to have a valid config file and credentials to connect to vCenter Server. The following chapters offer more information on how to configure this provider. For now, assume that the cloud-controller-manager pod has access to the config file and credentials that allow access to vCenter Server. The following simplified diagram illustrates which components in your cluster should be connecting to vCenter Server.
![vSphere Out-of-Tree Cloud Provider Architecture](https://github.com/kubernetes/cloud-provider-vsphere/raw/master/docs/images/vsphere-out-of-tree-architecture.png "vSphere Out-of-Tree Cloud Provider Architecture")
+
+## Overview of the Container Storage Interface
+
+The [Container Storage Interface (CSI)](https://github.com/container-storage-interface/spec/blob/master/spec.md) is a specification designed to enable persistent storage volume management on Container Orchestrators (COs) such as Kubernetes. The specification allows storage systems to integrate with containerized workloads running on Kubernetes. Using CSI, storage providers, such as VMware, can write and deploy plugins for storage systems in Kubernetes without a need to modify any core Kubernetes code.
+
+CSI allows volume plugins to be installed on Kubernetes clusters as extensions. Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users can use the CSI to provision, attach, mount, and format the volumes exposed by the CSI driver. For vSphere, the CSI driver is csi.vsphere.vmware.com.
+
+## Dependency between CPI and CSI
+
+On Kubernetes, the vSphere CSI driver is used in conjunction with the out-of-tree vSphere CPI. The CPI initializes nodes with labels describing the topology information, such as zone and region. In the case of vSphere, these labels are inherited from vSphere tags, and are applied to the Kubernetes nodes as labels. Pods can then be provisioned using a variety of constraint options, such as node selector, or affinity and anti-affinity rules. The CSI driver can deploy Persistent Volumes (PVs) using the same constraints as Pods. Some of the tutorials explain the relationship between CPI and CSI in a more detailed way.
diff --git a/docs/book/concepts/csi_overview.md b/docs/book/concepts/csi_overview.md
deleted file mode 100644
index 3458b3fe9..000000000
--- a/docs/book/concepts/csi_overview.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# Overview of the Container Storage Interface
-
-The [Container Storage Interface (CSI)](https://github.com/container-storage-interface/spec/blob/master/spec.md) is a specification designed to enable persistent storage volume management on Container Orchestrators (COs) such as Kubernetes. The specification allows storage systems to integrate with containerized workloads running on Kubernetes. Using CSI, storage providers, such as VMware, can write and deploy plugins for storage systems in Kubernetes without a need to modify any core Kubernetes code.
-
-CSI allows volume plugins to be installed on Kubernetes clusters as extensions. Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users can use the CSI to provision, attach, mount, and format the volumes exposed by the CSI driver. For vSphere, the CSI driver is csi.vsphere.vmware.com.
-
-## Dependency between CPI and CSI
-
-On Kubernetes, the vSphere CSI driver is used in conjunction with the out-of-tree vSphere CPI. The CPI initializes nodes with labels describing the topology information, such as zone and region. In the case of vSphere, these labels are inherited from vSphere tags, and are applied to the Kubernetes nodes as labels. Pods can then be provisioned using a variety of constraint options, such as node selector, or affinity and anti-affinity rules. The CSI driver can deploy Persistent Volumes (PVs) using the same constraints as Pods. Some of the tutorials explain the relationship between CPI and CSI in a more detailed way.
diff --git a/docs/book/container_storage_interface.md b/docs/book/container_storage_interface.md
deleted file mode 100644
index 6cef108a3..000000000
--- a/docs/book/container_storage_interface.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# CSI - Container Storage Interface
-
-The goal of CSI is to establish a standardized mechanism for Container Orchestration Systems (COs) to expose arbitrary storage systems to their containerized workloads. The CSI specification emerged from cooperation between community members from various COs – including; Kubernetes, Mesos, Docker, and Cloud Foundry. The specification is developed, independent of Kubernetes, and maintained [here](https://github.com/container-storage-interface/spec/blob/master/spec.md).
-
-## Why do we need it?
-
-Historically, Kubernetes volume plugins were “in-tree”, meaning they’re linked, compiled, built, and shipped with the core kubernetes binaries. Adding support for a new storage system to Kubernetes (a volume plugin) required checking code into the core Kubernetes repository. But aligning with the Kubernetes release process was very painful for many plugin developers. CSI, the Container Storage Interface, makes installing new volume plugins as easy as deploying a pod. It also enables third-party storage providers to develop solutions without the need to add to the core Kubernetes codebase.
-
-CSI enables storage plugins to be developed out-of-tree, containerized, deployed via standard Kubernetes primitives, and consumed through the familiar Kubernetes storage primitives, such as `PersistentVolumeClaim`s, `PersistentVolume`s, and `StorageClass`es.
-
-## Which versions of Kubernetes/vSphere support it?
-
-With the GA release of the CSI driver, vSphere `6.7 U3` and above is required, and Kubernetes `v1.14` and above is required.
-
-## How do I install it?
-
-Full instructions on the setup and installation of the vSphere CSI driver and CPI can be found [here](https://cloud-provider-vsphere.sigs.k8s.io/tutorials/kubernetes-on-vsphere-with-kubeadm.html).
-
-_Note:_ The vSphere CSI driver requires the vSphere CPI to be installed as well (covered in the same article).
-
-## How do I use/consume it?
-
-If the vSphere CSI storage plugin is already deployed on your cluster, you can use it through the familiar Kubernetes storage primitives such as `PersistentVolumeClaim`s, `PersistentVolume`s, and `StorageClass`es.
-
-## Do you have an example StorageClass?
-
-See below, you should note that it is required to use a [Storage Policy](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-89091D59-D844-46B2-94C2-35A3961D23E7.html) in vSphere - even if you are using VMFS or NFS datastores (use [tag-based](https://blogs.vmware.com/virtualblocks/2018/07/26/using-tag-based-spbm-policies-to-manage-your-storage/) policies).
-
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: space-efficient
- annotations:
- storageclass.kubernetes.io/is-default-class: "true"
-provisioner: csi.vsphere.vmware.com
-parameters:
- storagepolicyname: "Space Efficient"
-```
diff --git a/docs/book/tutorials/deploying-cpi-with-k3s.md b/docs/book/tutorials/deploying-cpi-with-k3s.md
index 32b8b33ce..a3a47b15e 100644
--- a/docs/book/tutorials/deploying-cpi-with-k3s.md
+++ b/docs/book/tutorials/deploying-cpi-with-k3s.md
@@ -4,7 +4,7 @@ This document is designed to show you how to integrate k3s with cloud provider v
When running with a cloud-controller-manager, it is expected to pass the node provider ID to a CCM as `://`, in our case, `vsphere://1234567`. However, k3s passes it as `k3s://`, which makes vsphere CCM not be able to find the node.
-We only support `vsphere` as the provider name that is used for constructing **providerID** for both [vsphere](https://github.com/kubernetes/cloud-provider-vsphere/blob/v1.21.0/pkg/cloudprovider/vsphere/cloud.go#L51) and [vsphere-paravirtual](https://github.com/kubernetes/cloud-provider-vsphere/blob/v1.21.0/pkg/cloudprovider/vsphereparavirtual/cloud.go#L42).
+We only support `vsphere` as the provider name that is used for constructing **providerID** for both [vsphere](https://github.com/kubernetes/cloud-provider-vsphere/blob/v1.22.2/pkg/cloudprovider/vsphere/cloud.go#L51) and [vsphere-paravirtual](https://github.com/kubernetes/cloud-provider-vsphere/blob/v1.22.2/pkg/cloudprovider/vsphereparavirtual/cloud.go#L42).
## How to integrate k3s with cloud provider vsphere
@@ -54,10 +54,10 @@ curl -sfL https://get.k3s.io | K3S_TOKEN=${token} sh -s - agent \
### Install CCM
-Now after k3s server starts we need to install the CCM itself. Simply apply the yaml manifest that matches the CCM version you are using, e.g. for v1.21.0:
+Now after k3s server starts we need to install the CCM itself. Simply apply the yaml manifest that matches the CCM version you are using, e.g. for v1.22.1:
```shell
-kubectl apply -f releases/v1.21/
+kubectl apply -f releases/v1.22/
```
That’s it!
diff --git a/docs/book/tutorials/deploying_cpi_and_csi_with_multi_dc_vc_aka_zones.md b/docs/book/tutorials/deploying_cpi_and_csi_with_multi_dc_vc_aka_zones.md
deleted file mode 100644
index 293d19e33..000000000
--- a/docs/book/tutorials/deploying_cpi_and_csi_with_multi_dc_vc_aka_zones.md
+++ /dev/null
@@ -1,225 +0,0 @@
-# Deploying the vSphere CPI and CSI in a Multi-vCenter OR Multi-Datacenter Environment using Zones
-
-This document is designed to quickly get you up and running in a vSphere configuration that consists of multiple vCenter or a multiple Datacenter environment via using zones.
-
-Note: These steps need to be done at initial Kubernetes cluster deployment. It is not possible to add zone support after the Kubernetes cluster has been deployed.
-
-## Prerequisites
-
-This document assumes that you have read and understood the setup documentation for both the vSphere Cloud Provider Interface (also known as the vSphere Cloud Controller Manager - CCM) and vSphere Container Storage Interface (CSI) driver. This guide will go over the additional zone-based configuration needed to support a multi-vCenter or multi-Datacenter environment by using the previous documentation as a base. If you need to revisit the base CPI and CSI documentation, you can find the documentation links below:
-
-[Deploying Kubernetes Cluster on vSphere with CPI and CSI](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md)
-
-## Why Do We Need to Use Zones in a Multi-vCenter or Multi-Datacenter Environment
-
-There exist 2 significant issues when deploying Kubernetes workloads or pods in a mutli-vCenter or single vCenter with multiple Datacenters. They are:
-
-1. Datastore objects, specifically names and even morefs (Managed Object References), are not unique across vCenters instances
-2. Datastore objects, specifically names, are not unique within a single vCenter since objects of the same name can exist in different Datacenters
-
-![Which datastore?](https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/docs/images/whichdatastore.png)
-
-There needs to be a mechanism in place to allow end-users to continue to use the human readable "friendly" names for objects like datastores and datastore clusters and still be able to target workloads to use resources from them. This is where the concept of zones or zoning comes in. Zones allow you to partition datacenters and compute clusters so that the end-user can target workloads to specific locations in your vSphere environment.
-
-## Understanding Optimal Zone Configurations
-
-This section outlines some optimal configurations for Kubernetes zones in your vSphere environment/configuration. The implementation for zone support in the CPI and CSI driver are quite flexible but there are some configurations that can take advantage of features in vSphere and thus providing certain benefits. Here are a couple of common deployment scenarios for zones. If you cannot roll out or deploy zones in some of these suggested configurations, it might be worth consulting someone with familiarity with how zones are implemented.
-
-### Zones Per Cluster
-
-An ideal configuration is creating a zone per cluster. It follows that datastore and datastore clusters access be tied to the compute nodes within a given cluster. The main reason for this is to take advantage of the High Availability (HA) that clusters offer as well as features like vMotion and etc. Example diagrams or configurations appear below.
-
-![Cluster-based Zones](https://github.com/kubernetes/cloud-provider-vsphere/raw/master/docs/images/clusterbased.png)
-
-### Zones Per Datacenter
-
-Zones per datacenter can work as well, but there are some very important design considerations when doing this. If this deployment strategy is taken, it is important to understand that all compute nodes in that zone aka datacenter have access to provision VMDKs from a given shared datastore. The reason for this is CSI driver uses zones in order to target Kubenetes pods or workloads when provisioning external storage. Example diagrams or configurations appear below.
-
-![Datacenter-based Zones](https://github.com/kubernetes/cloud-provider-vsphere/raw/master/docs/images/datacenterbased.png)
-
-## Pitfalls: Zones as they Relate to Storage
-
-Here is a great example of a mistake you don't want to make. In a multi-Datacenter or multi-vCenter environment, you need to use to zones in an effective way especially when it comes to the use of persistent storage. The picture below shows an example of how zones can be incorrectly used.
-
-We have two clusters in `Datacenter 1`. If we deploy a pod to `Zone Engineering` what cluster will the pod land on? If you don't care, then this topology will work, but if you want to run a stateful pod with some storage to be provisioned, then placement really does matter. The `StorageClass` explicitly calls out from what datastore you want to provision a storage, or in our case a VMDK, out of. So placement in that case is very important and the zone configuration here is **insufficient** to handle that deployment scenario.
-
-![Pitfalls of Zones](https://github.com/kubernetes/cloud-provider-vsphere/raw/master/docs/images/pitfalls.png)
-
-## Wrap-Up Zone Considerations
-
-Some important takeaways for implementing zones:
-
-1. Zones allow you to target Kubernetes workloads to a specific group of vSphere infrastructure. This is handled by the CPI.
-2. Zones also define persistent storage boundaries. In other words, all compute nodes within a given zone must have access to shared storage if persistent storage (aka an FCD) is to be provisioned for stateful applications/pods/workloads.
-
-## Deployment Overview
-
-Steps that will be covered in order to setup zones for the vSphere CPI, vSphere CSI driver, and vSphere environment/configuration:
-
-1. Enabling Zones the `vsphere.conf` file
-2. Creating Zones in your vSphere Environment via Tags
-3. Updating your `StorageClass` when using Persistent Storage
-4. Example: Deploying a Kubernetes pod to a Specific Zone using Persistent Storage
-
-## Deploying Zones using the CPI and CSI driver
-
-### 1. Enabling Zones the `vsphere.conf` file
-
-> ***Note:*** The CSI and CPI drivers have their own vsphere.conf files. The following modifications need to be made in both configurations.
-
-The zones implementation depends on 2 sets of vSphere tags to be used on objects, such as datacenters or clusters. The first is a `region` tag and the second is a `zone` tag. vSphere tags are very simply put key/value pairs that can be assigned to objects and instead of using fixed keys to denote a `region` or a `zone`, we give the end-user the ability to come up with their own keys for a `region` and `zone` in the form of vSphere Tag Catagory. It just allows for a level of indirection in case you already have regions and zones setup in your configuration. Once a key/label or vSphere Tag Category is selected for each, create a `labels:` section in the `vsphere.conf` then assign tag names for both `region` and `zone`.
-
-**NOTE:** If you are using CPI version 1.1.0 or earlier, please use the `INI` based cloud configuration as outlined in the [Deploying a Kubernetes Cluster on vSphere with CSI and CPI](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md) documentation.
-
-In the example `vsphere.conf` below, `k8s-region` and `k8s-zone` was selected:
-
-```bash
-# Global properties in this section will be used for all specified vCenters unless overriden in VirtualCenter section.
-global:
- user: YourVCenterUser
- password: YourVCenterPass
- port: 443
- # set insecureFlag to true if the vCenter uses a self-signed cert
- insecureFlag: true
- # settings for using k8s secret
- secretName: cpi-secret
- secretNamespace: kube-system
-
-# VirtualCenter section
-vcenter:
- tenant1:
- user: YourVCenterUser
- password: YourVCenterPass
- server: 10.0.0.1
- datacenters:
- - mydc1
- tenant2:
- server: 127.0.0.1
- port: 448
- insecureFlag: false
- datacenters:
- - myotherdc1
- - myotherdc2
-
-# labels for regions and zones
-labels:
- region: k8s-region
- zone: k8s-zone
-```
-
-**NOTE:** For the `INI` based configuration the zones configuration would appear as the following at the end of your cloud-config file:
-
-```bash
-# labels for regions and zones
-[Labels]
-region = k8s-region
-zone = k8s-zone
-```
-
-### 2. Creating Zones in your vSphere Environment via Tags
-
- The `region` tag is just a construct that allows one to make a grouping for a specific set of resources. It could be used to indicate something like a geographic location like a country or perhaps a specific datacenter. This label is an arbitrary grouping that you decide on. The `zone` tag is another construct that allows you to further subdivide resources within a `region`. As an example, using the countries as a `region`, the `zone` could indicate a specific datacenter out of a list in that `region`. In the second example of using a datacenter as a `region`, you might use a `zone` to indicate a specific rack within the datacenter or even just a cluster within that datacenter. Then all hosts and subsequently all VMs acting as Kubernetes worker nodes under that tagged datacenter or cluster inherit the tags of those parent objects. How one chooses to group regions and zones is completely based on how you want to identify a specific group of resources.
-
-There are many options for creating vSphere tags. One such method would be to use [govc](https://github.com/vmware/govmomi/tree/master/govc). All the examples below will make use of this method. You could also create tags by accessing the vSphere REST APIs directly or by using the vSphere UI.
-
-> **NOTE**: The example commands below assume that you have exported the GOVC_URL before running said commands:
-
-```bash
-[k8suser@k8master ~]$ export GOVC_URL=https://REPLACE_VSPHERE_USERNAME:REPLACE_VSPHERE_PASSWORD@REPLACE_VSPHERE_IP/sdk
-```
-
-Using the example above, if it is decided that `k8s-region` and `k8s-zone` are to be used for your Category labels, then you can create those vSphere Categories using `govc` by running the following command:
-
-```bash
-[k8suser@k8master ~]$ ./govc tags.category.create -d "Kubernetes region" k8s-region
-[k8suser@k8master ~]$ ./govc tags.category.create -d "Kubernetes zone" k8s-zone
-```
-
-Say there are 2 `regions` in the US and EU that cover our vSphere environment, we can then create 2 region tags `k8s-region-us` and `k8s-region-eu` using `govc` by running the following command:
-
-```bash
-[k8suser@k8master ~]$ ./govc tags.create -d "Kubernetes Region US" -c k8s-region k8s-region-us
-[k8suser@k8master ~]$ ./govc tags.create -d "Kubernetes Region EU" -c k8s-region k8s-region-eu
-```
-
-Now say our colocations in those regions are fairly small and we have 2 datacenters (dcwest and dceast) in the US and 1 datacenter (dceu) in the EU each with just a small vSphere cluster in each datacenter. Let's each datacenter could represent a particular `zone` in those `regions`. In this example, we could simply create tags for each datacenter, such as `k8s-region-us-west`, `k8s-region-us-east` and `k8s-region-eu-all`, by running the following command:
-
-```bash
-[k8suser@k8master ~]$ ./govc tags.create -d "Kubernetes Zone US West" -c k8s-zone k8s-zone-us-west
-[k8suser@k8master ~]$ ./govc tags.create -d "Kubernetes Zone US East" -c k8s-zone k8s-zone-us-east
-[k8suser@k8master ~]$ ./govc tags.create -d "Kubernetes Zone EU All" -c k8s-zone k8s-region-eu-all
-```
-
-Now let's assign the region and zone tags to each of the datacenters in the vSphere environment by running the following command:
-
-```bash
-#dcwest
-[k8suser@k8master ~]$ ./govc tags.attach k8s-region k8s-region-us /dcwest
-[k8suser@k8master ~]$ ./govc tags.attach k8s-zone k8s-zone-us-west /dcwest
-#dceast
-[k8suser@k8master ~]$ ./govc tags.attach k8s-region k8s-region-us /dceast
-[k8suser@k8master ~]$ ./govc tags.attach k8s-zone k8s-zone-us-east /dceast
-#dceu
-[k8suser@k8master ~]$ ./govc tags.attach k8s-region k8s-region-eu /dceu
-[k8suser@k8master ~]$ ./govc tags.attach k8s-zone k8s-region-eu-all /dceu
-```
-
-And there you go! All setup with the correct tags.
-
-> **NOTE**: Since the CPI and CSI driver support multiple vCenter Servers, the datacenters in the US and EU could be distinctly different. In that case, the `govc` commands would be identical with the exception of replacing the proper vCenter username, password, and IP address for each command.
-
-### 3. Updating your `StorageClass` when using Persistent Storage
-
-Now that we have set the regions and zones within the vSphere environment, we can now target a specific region/zone to deploy a Kubernetes workload or pod into. If a persistent volume is required for that given Kubernetes pod, we need to update the `StorageClass` with the `region` and `zone` information that the particular datastore is in. This is what the `StorageClass` YAML might look like:
-
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: vsphere-csi
- namespace: kube-system
- annotations:
- storageclass.kubernetes.io/is-default-class: "true"
-provisioner: csi.vsphere.vmware.com
-parameters:
- datastoreurl: "URL_OF_DATASTORE" # optional parameter
- storagepolicyname: "STORAGE_POLICY" # optional parameter
- fstype: "FILESYSTEM_TYPE" # optional parameter
-allowedTopologies:
-- matchLabelExpressions:
- - key: failure-domain.beta.kubernetes.io/zone
- values:
- - IF_USING_ZONES_REPLACE_WITH_ZONE_VALUE
- - key: failure-domain.beta.kubernetes.io/region
- values:
- - IF_USING_ZONES_REPLACE_WITH_REGION_VALUE
-```
-
-### 4. Example: Deploying a Kubernetes pod to a Specific Zone using Persistent Storage
-
-Now if one wanted to deploy a Kubernetes pod into a specific `region` and `zone` also using the persistent volume above, the YAML would look something like this:
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: my-csi-app
-spec:
- containers:
- - name: my-frontend
- image: busybox
- volumeMounts:
- - mountPath: "/data"
- name: my-csi-volume
- command: [ "sleep", "1000000" ]
- volumes:
- - name: my-csi-volume
- persistentVolumeClaim:
- claimName: vsphere-csi-pvc
-```
-
-*IMPORTANT*: Just to re-emphasize topics discussed in this document, the datastore or datastore cluster that the persistent volume is to be provisioned from must be available to the `region` and `zone` and by all hosts within that `region` and `zone` pairing since Kubernetes is what is performing the scheduling of pods.
-
-## Wrapping Up
-
-That's it! Pretty straightforward. Questions, comments, concerns... please stop by the #sig-vmware channel at [kubernetes.slack.com](https://kubernetes.slack.com).
diff --git a/docs/book/tutorials/deploying_cpi_with_multi_dc_vc_aka_zones.md b/docs/book/tutorials/deploying_cpi_with_multi_dc_vc_aka_zones.md
new file mode 100644
index 000000000..c3bbde440
--- /dev/null
+++ b/docs/book/tutorials/deploying_cpi_with_multi_dc_vc_aka_zones.md
@@ -0,0 +1,180 @@
+# Deploying the vSphere CPI and CSI in a Multi-vCenter OR Multi-Datacenter Environment using Zones
+
+This document is designed to quickly get you up and running in a vSphere configuration that consists of multiple vCenters or multiple Datacenters by using zones.
+
+Note: These steps need to be done at initial Kubernetes cluster deployment. It is not possible to add zone support after the Kubernetes cluster has been deployed.
+
+## Prerequisites
+
+This document assumes that you have read and understood the setup documentation for both the vSphere Cloud Provider Interface (also known as the vSphere Cloud Controller Manager - CCM) and [vSphere Container Storage Interface (CSI) driver](https://github.com/kubernetes-sigs/vsphere-csi-driver). This guide will go over the additional zone-based configuration needed to support a multi-vCenter or multi-Datacenter environment by using the previous documentation as a base. If you need to revisit the base CPI and CSI documentation, you can find the documentation links below:
+
+[Deploying Kubernetes Cluster on vSphere with CPI and CSI](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md)
+
+[Deploy the vSphere Container Storage Plug-in with Topology](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-73D106A3-1D8A-4CDC-9762-6CB35A65B0B4.html)
+
+## Why Do We Need to Use Zones in a Multi-vCenter or Multi-Datacenter Environment
+
+There are 2 significant issues when deploying Kubernetes workloads or pods in a multi-vCenter environment or a single vCenter with multiple Datacenters. They are:
+
+1. Datastore objects, specifically names and even morefs (Managed Object References), are not unique across vCenter instances
+2. Datastore objects, specifically names, are not unique within a single vCenter since objects of the same name can exist in different Datacenters
+
+![Which datastore?](https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/docs/images/whichdatastore.png)
+
+There needs to be a mechanism in place that allows end-users to continue to use the human-readable "friendly" names for objects like datastores and datastore clusters and still be able to target workloads to use resources from them. This is where the concept of zones, or zoning, comes in. Zones allow you to partition datacenters and compute clusters so that the end-user can target workloads to specific locations in your vSphere environment.
+
+## Understanding Optimal Zone Configurations
+
+This section outlines some optimal configurations for Kubernetes zones in your vSphere environment/configuration. The implementation of zone support in the CPI and CSI driver is quite flexible, but some configurations can take advantage of features in vSphere and thus provide certain benefits. Here are a couple of common deployment scenarios for zones. If you cannot roll out or deploy zones in one of these suggested configurations, it might be worth consulting someone familiar with how zones are implemented.
+
+### Zones Per Cluster
+
+An ideal configuration is creating a zone per cluster. It follows that datastore and datastore cluster access should be tied to the compute nodes within a given cluster. The main reason for this is to take advantage of the High Availability (HA) that clusters offer, as well as features like vMotion. Example diagrams and configurations appear below.
+
+![Cluster-based Zones](https://github.com/kubernetes/cloud-provider-vsphere/raw/master/docs/images/clusterbased.png)
+
+### Zones Per Datacenter
+
+Zones per datacenter can work as well, but there are some very important design considerations when doing this. If this deployment strategy is taken, it is important to understand that all compute nodes in that zone (aka datacenter) must have access to provision VMDKs from a given shared datastore. The reason for this is that the CSI driver uses zones in order to target Kubernetes pods or workloads when provisioning external storage. Example diagrams and configurations appear below.
+
+![Datacenter-based Zones](https://github.com/kubernetes/cloud-provider-vsphere/raw/master/docs/images/datacenterbased.png)
+
+## Pitfalls: Zones as they Relate to Storage
+
+Here is a great example of a mistake you don't want to make. In a multi-Datacenter or multi-vCenter environment, you need to use zones in an effective way, especially when it comes to the use of persistent storage. The picture below shows an example of how zones can be incorrectly used.
+
+We have two clusters in `Datacenter 1`. If we deploy a pod to `Zone Engineering`, which cluster will the pod land on? If you don't care, then this topology will work; but if you want to run a stateful pod with provisioned storage, then placement really does matter. The `StorageClass` explicitly calls out the datastore from which the storage, in our case a VMDK, is to be provisioned. Placement in that case is very important, and the zone configuration here is **insufficient** to handle that deployment scenario.
+
+![Pitfalls of Zones](https://github.com/kubernetes/cloud-provider-vsphere/raw/master/docs/images/pitfalls.png)
+
+## Wrap-Up Zone Considerations
+
+Some important takeaways for implementing zones:
+
+1. Zones allow you to target Kubernetes workloads to a specific group of vSphere resources. This is handled by the CPI.
+2. Zones also define persistent storage boundaries. In other words, all compute nodes within a given zone must have access to shared storage if persistent storage (aka an FCD) is to be provisioned for stateful applications/pods/workloads.
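+
+Takeaway 2 shows up concretely in the `StorageClass` used for provisioning. The sketch below is illustrative only: the `topology.csi.vmware.com/k8s-zone` topology key and the zone value are assumptions that depend on your CSI driver version and the tag names you chose, so adjust both to your environment.
+
+```yaml
+# Sketch: restrict volume provisioning to a single zone.
+# The topology key and zone value below are assumptions; match them
+# to your CSI driver version and your own tag names.
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: zone-us-west-sc
+provisioner: csi.vsphere.vmware.com
+allowedTopologies:
+  - matchLabelExpressions:
+      - key: topology.csi.vmware.com/k8s-zone
+        values:
+          - k8s-zone-us-west
+```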
+
+## Deployment Overview
+
+For steps to deploy zones using the CSI driver, please refer to the [CSI docs](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-73D106A3-1D8A-4CDC-9762-6CB35A65B0B4.html).
+
+The following steps cover setting up zones for the vSphere CPI and your vSphere environment:
+
+1. Enabling Zones in the `vsphere.conf` file
+2. Creating Zones in your vSphere Environment via Tags
+3. Example: Deploying a Kubernetes pod to a Specific Zone using CPI
+
+## Deploying Zones using the CPI
+
+### 1. Enabling Zones in the `vsphere.conf` file
+
+> ***Note:*** CPI has its own `vsphere.conf` file. The following modifications need to be made in its configuration.
+
+The zones implementation depends on two sets of vSphere tags applied to objects such as datacenters or clusters: a `region` tag and a `zone` tag. vSphere tags are, simply put, key/value pairs that can be assigned to objects. Instead of using fixed keys to denote a `region` or a `zone`, the end-user chooses their own keys for `region` and `zone` in the form of vSphere Tag Categories; this allows a level of indirection in case you already have regions and zones set up in your environment. Once a key (vSphere Tag Category) is selected for each, create a `labels:` section in the `vsphere.conf` and assign the category names for both `region` and `zone`.
+
+**NOTE:** If you are using CPI version 1.1.0 or earlier, please use the `INI` based cloud configuration as outlined in the [Deploying a Kubernetes Cluster on vSphere with CSI and CPI](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/kubernetes-on-vsphere-with-kubeadm.md) documentation.
+
+In the example `vsphere.conf` below, `k8s-region` and `k8s-zone` were selected:
+
+```yaml
+# Global properties in this section will be used for all specified vCenters unless overridden in VirtualCenter section.
+global:
+ user: YourVCenterUser
+ password: YourVCenterPass
+ port: 443
+ # set insecureFlag to true if the vCenter uses a self-signed cert
+ insecureFlag: true
+ # settings for using k8s secret
+ secretName: cpi-secret
+ secretNamespace: kube-system
+
+# VirtualCenter section
+vcenter:
+ tenant1:
+ user: YourVCenterUser
+ password: YourVCenterPass
+ server: 10.0.0.1
+ datacenters:
+ - mydc1
+ tenant2:
+ server: 127.0.0.1
+ port: 448
+ insecureFlag: false
+ datacenters:
+ - myotherdc1
+ - myotherdc2
+
+# labels for regions and zones
+labels:
+ region: k8s-region
+ zone: k8s-zone
+```
+
+**NOTE:** For the `INI` based configuration, the zones configuration would appear as follows at the end of your cloud-config file:
+
+```ini
+# labels for regions and zones
+[Labels]
+region = k8s-region
+zone = k8s-zone
+```
+
+### 2. Creating Zones in your vSphere Environment via Tags
+
+The `region` tag is a construct that allows you to group a specific set of resources. It could indicate a geographic location, such as a country, or perhaps a specific datacenter; the grouping is arbitrary and entirely up to you. The `zone` tag is a construct that further subdivides resources within a `region`. For example, if countries are your `region`s, a `zone` could indicate a specific datacenter within that country; if a datacenter is your `region`, a `zone` might indicate a specific rack, or a cluster within that datacenter. All hosts, and subsequently all VMs acting as Kubernetes worker nodes, under a tagged datacenter or cluster inherit the tags of those parent objects. How you group regions and zones is based entirely on how you want to identify a specific group of resources.
+
+There are many options for creating vSphere tags. One such method would be to use [govc](https://github.com/vmware/govmomi/tree/master/govc). All the examples below will make use of this method. You could also create tags by accessing the vSphere REST APIs directly or by using the vSphere UI.
+
+> **NOTE**: The example commands below assume that you have exported the GOVC_URL before running said commands:
+
+```bash
+[k8suser@k8master ~]$ export GOVC_URL=https://REPLACE_VSPHERE_USERNAME:REPLACE_VSPHERE_PASSWORD@REPLACE_VSPHERE_IP/sdk
+```
+
+Using the example above, if `k8s-region` and `k8s-zone` are to be used as your category labels, you can create those vSphere Categories using `govc` by running the following commands:
+
+```bash
+[k8suser@k8master ~]$ ./govc tags.category.create -d "Kubernetes region" k8s-region
+[k8suser@k8master ~]$ ./govc tags.category.create -d "Kubernetes zone" k8s-zone
+```
+
+Say there are two `regions`, US and EU, that cover our vSphere environment. We can then create two region tags, `k8s-region-us` and `k8s-region-eu`, using `govc` by running the following commands:
+
+```bash
+[k8suser@k8master ~]$ ./govc tags.create -d "Kubernetes Region US" -c k8s-region k8s-region-us
+[k8suser@k8master ~]$ ./govc tags.create -d "Kubernetes Region EU" -c k8s-region k8s-region-eu
+```
+
+Now say our colocations in those regions are fairly small: we have two datacenters (dcwest and dceast) in the US and one datacenter (dceu) in the EU, each with a small vSphere cluster. Each datacenter could then represent a particular `zone` in those `regions`. In this example, we simply create a tag for each datacenter, namely `k8s-zone-us-west`, `k8s-zone-us-east` and `k8s-zone-eu-all`, by running the following commands:
+
+```bash
+[k8suser@k8master ~]$ ./govc tags.create -d "Kubernetes Zone US West" -c k8s-zone k8s-zone-us-west
+[k8suser@k8master ~]$ ./govc tags.create -d "Kubernetes Zone US East" -c k8s-zone k8s-zone-us-east
+[k8suser@k8master ~]$ ./govc tags.create -d "Kubernetes Zone EU All" -c k8s-zone k8s-zone-eu-all
+```
+
+Now let's assign the region and zone tags to each of the datacenters in the vSphere environment by running the following commands:
+
+```bash
+#dcwest
+[k8suser@k8master ~]$ ./govc tags.attach k8s-region k8s-region-us /dcwest
+[k8suser@k8master ~]$ ./govc tags.attach k8s-zone k8s-zone-us-west /dcwest
+#dceast
+[k8suser@k8master ~]$ ./govc tags.attach k8s-region k8s-region-us /dceast
+[k8suser@k8master ~]$ ./govc tags.attach k8s-zone k8s-zone-us-east /dceast
+#dceu
+[k8suser@k8master ~]$ ./govc tags.attach k8s-region k8s-region-eu /dceu
+[k8suser@k8master ~]$ ./govc tags.attach k8s-zone k8s-zone-eu-all /dceu
+```
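+
+To double-check the assignments, `govc` can list the objects a tag is attached to. This is a sketch, assuming the `tags.attached.ls` subcommand is available in your `govc` version:
+
+```bash
+[k8suser@k8master ~]$ ./govc tags.attached.ls k8s-region-us
+[k8suser@k8master ~]$ ./govc tags.attached.ls k8s-zone-us-west
+```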
+
+And there you go! You're all set up with the correct tags.
+
+> **NOTE**: Since CPI supports multiple vCenter Servers, the datacenters in the US and EU could be managed by different vCenters. In that case, the `govc` commands would be identical except for substituting the proper vCenter username, password, and IP address in each command.
+
+### 3. Setting up CSI Topology-Aware Volume Provisioning when using Persistent Storage
+
+Now that the regions and zones are set within the vSphere environment, we can target a specific region/zone when deploying a Kubernetes workload or pod. If a persistent volume is required for a given pod, we need to update the `StorageClass` with the `region` and `zone` information of the datastore it provisions from. Refer to the procedure in [this doc](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-61646244-E24F-4E7E-AB1A-F95B5A5DD518.html#GUID-61646244-E24F-4E7E-AB1A-F95B5A5DD518), which provides example YAMLs and explanations of each step.
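+
+As a concrete illustration of targeting a zone, a pod can be pinned to the nodes of one zone via the topology labels the CPI applies to nodes. This is a sketch: depending on your Kubernetes and CPI versions, the label key may be `topology.kubernetes.io/zone` or the older `failure-domain.beta.kubernetes.io/zone`, and the zone value must match one of your own tag names.
+
+```yaml
+# Sketch: schedule a pod only onto nodes in one zone.
+# The label key and zone value are assumptions; verify them with
+# `kubectl get nodes --show-labels`.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: zone-pinned-pod
+spec:
+  nodeSelector:
+    topology.kubernetes.io/zone: k8s-zone-us-west
+  containers:
+    - name: app
+      image: nginx
+```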
+
+## Wrapping Up
+
+That's it! Pretty straightforward. Questions, comments, concerns... please stop by the #sig-vmware channel at [kubernetes.slack.com](https://kubernetes.slack.com).
diff --git a/docs/book/tutorials/disable-node-deletion.md b/docs/book/tutorials/disable-node-deletion.md
new file mode 100644
index 000000000..db3b5cc38
--- /dev/null
+++ b/docs/book/tutorials/disable-node-deletion.md
@@ -0,0 +1,44 @@
+# Disable node deletion by CPI
+
+The [default behavior](https://github.com/kubernetes/cloud-provider/blob/e820ef550efff2654f98d08b66e03094ccc0d6d7/controllers/nodelifecycle/node_lifecycle_controller.go#L155) is that if the vSphere VM is no longer accessible or present according to the vCenter Server, the corresponding Kubernetes node object will be deleted. Specifically, [InstanceExistsByProviderID](https://github.com/kubernetes/cloud-provider-vsphere/blame/00587b422a0ef2b76e57233bca0e0e3b5380838e/pkg/cloudprovider/vsphere/instances.go#L164) returns `false, nil` when the VM on vSphere no longer exists. This cleans up Kubernetes node objects automatically in the event that a VM is deleted.
+
+In this tutorial, we provide a way to disable deleting the node object of a terminated VM in certain failure scenarios, e.g. when a VM is not accessible by vCenter but the corresponding Kubernetes node is still running (a network partition event).
+
+Note that if you disable node deletion, when VMs on vSphere become inaccessible or are not found, leftover node objects will remain and may introduce unexpected behavior. Moreover, this behavior is not consistent with other cloud providers. The `SKIP_NODE_DELETION` flag is a temporary one-off flag, and we will need to re-evaluate whether to change the current behavior.
+
+## Option 1
+
+Set the environment variable `SKIP_NODE_DELETION` for the vsphere-cloud-controller-manager container:
+
+```yaml
+env:
+  - name: SKIP_NODE_DELETION
+    value: "true"
+```
+
+Example temporary environment variable setting procedure:
+
+1. Add the environment variable with `kubectl set env daemonset vsphere-cloud-controller-manager -n kube-system SKIP_NODE_DELETION=true`.
+You can check if the env variable has been applied correctly by running `kubectl describe daemonset vsphere-cloud-controller-manager -n kube-system`.
+
+2. Terminate the running pod(s). The next pod created will pick up the environment variable.
+
+3. Wait for the new pod to start.
+
+4. View logs with `kubectl logs [POD_NAME] -n kube-system` and confirm everything is healthy.
+
+## Option 2
+
+Another option is to manually modify the environment variable via `kubectl edit ds -n kube-system vsphere-cloud-controller-manager`. After you save, a new pod will be started and the old one terminated.
+
+A sample YAML file can be found [here](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/tutorials/disable-node-deletion.yaml).
+
+## Option 3
+
+You can set the variable in a shell inside the running pod using `kubectl exec`, but this is only recommended for debugging purposes, since an exported variable affects only that shell session, not the already-running controller process:
+
+```bash
+kubectl exec -it vsphere-cloud-controller-manager -n kube-system -- /bin/bash
+export SKIP_NODE_DELETION=true
+```
diff --git a/docs/book/tutorials/disable-node-deletion.yaml b/docs/book/tutorials/disable-node-deletion.yaml
new file mode 100644
index 000000000..6ce0c8cdf
--- /dev/null
+++ b/docs/book/tutorials/disable-node-deletion.yaml
@@ -0,0 +1,256 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: cloud-controller-manager
+ labels:
+ vsphere-cpi-infra: service-account
+ component: cloud-controller-manager
+ namespace: kube-system
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: vsphere-cloud-secret
+ labels:
+ vsphere-cpi-infra: secret
+ component: cloud-controller-manager
+ namespace: kube-system
+ # NOTE: this is just an example configuration, update with real values based on your environment
+stringData:
+ 10.0.0.1.username: ""
+ 10.0.0.1.password: ""
+ 1.2.3.4.username: ""
+ 1.2.3.4.password: ""
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: vsphere-cloud-config
+ labels:
+ vsphere-cpi-infra: config
+ component: cloud-controller-manager
+ namespace: kube-system
+data:
+ # NOTE: this is just an example configuration, update with real values based on your environment
+ vsphere.conf: |
+ # Global properties in this section will be used for all specified vCenters unless overridden in VirtualCenter section.
+ global:
+ port: 443
+ # set insecureFlag to true if the vCenter uses a self-signed cert
+ insecureFlag: true
+ # settings for using k8s secret
+ secretName: vsphere-cloud-secret
+ secretNamespace: kube-system
+
+ # vcenter section
+ vcenter:
+ your-vcenter-name-here:
+ server: 10.0.0.1
+ user: use-your-vcenter-user-here
+ password: use-your-vcenter-password-here
+ datacenters:
+ - hrwest
+ - hreast
+ could-be-a-tenant-label:
+ server: 1.2.3.4
+ datacenters:
+ - mytenantdc
+ secretName: cpi-engineering-secret
+ secretNamespace: kube-system
+
+ # labels for regions and zones
+ labels:
+ region: k8s-region
+ zone: k8s-zone
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: servicecatalog.k8s.io:apiserver-authentication-reader
+ labels:
+ vsphere-cpi-infra: role-binding
+ component: cloud-controller-manager
+ namespace: kube-system
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: extension-apiserver-authentication-reader
+subjects:
+ - apiGroup: ""
+ kind: ServiceAccount
+ name: cloud-controller-manager
+ namespace: kube-system
+ - apiGroup: ""
+ kind: User
+ name: cloud-controller-manager
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: system:cloud-controller-manager
+ labels:
+ vsphere-cpi-infra: cluster-role-binding
+ component: cloud-controller-manager
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: system:cloud-controller-manager
+subjects:
+ - kind: ServiceAccount
+ name: cloud-controller-manager
+ namespace: kube-system
+ - kind: User
+ name: cloud-controller-manager
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: system:cloud-controller-manager
+ labels:
+ vsphere-cpi-infra: role
+ component: cloud-controller-manager
+rules:
+ - apiGroups:
+ - ""
+ resources:
+ - events
+ verbs:
+ - create
+ - patch
+ - update
+ - apiGroups:
+ - ""
+ resources:
+ - nodes
+ verbs:
+ - "*"
+ - apiGroups:
+ - ""
+ resources:
+ - nodes/status
+ verbs:
+ - patch
+ - apiGroups:
+ - ""
+ resources:
+ - services
+ verbs:
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - services/status
+ verbs:
+ - patch
+ - apiGroups:
+ - ""
+ resources:
+ - serviceaccounts
+ verbs:
+ - create
+ - get
+ - list
+ - watch
+ - update
+ - apiGroups:
+ - ""
+ resources:
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+ - update
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - endpoints
+ verbs:
+ - create
+ - get
+ - list
+ - watch
+ - update
+ - apiGroups:
+ - ""
+ resources:
+ - secrets
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - "coordination.k8s.io"
+ resources:
+ - leases
+ verbs:
+ - create
+ - get
+ - list
+ - watch
+ - update
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: vsphere-cloud-controller-manager
+ labels:
+ component: cloud-controller-manager
+ tier: control-plane
+ namespace: kube-system
+ annotations:
+ scheduler.alpha.kubernetes.io/critical-pod: ""
+spec:
+ selector:
+ matchLabels:
+ name: vsphere-cloud-controller-manager
+ updateStrategy:
+ type: RollingUpdate
+ template:
+ metadata:
+ labels:
+ name: vsphere-cloud-controller-manager
+ component: cloud-controller-manager
+ tier: control-plane
+ spec:
+ nodeSelector:
+ node-role.kubernetes.io/master: ""
+ tolerations:
+ - key: node.cloudprovider.kubernetes.io/uninitialized
+ value: "true"
+ effect: NoSchedule
+ - key: node-role.kubernetes.io/master
+ effect: NoSchedule
+ operator: Exists
+ - key: node.kubernetes.io/not-ready
+ effect: NoSchedule
+ operator: Exists
+ securityContext:
+ runAsUser: 1001
+ serviceAccountName: cloud-controller-manager
+ containers:
+ - name: vsphere-cloud-controller-manager
+ image: gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.22.2
+ args:
+ - --cloud-provider=vsphere
+ - --v=2
+ - --cloud-config=/etc/cloud/vsphere.conf
+ env:
+ - name: SKIP_NODE_DELETION
+ value: "true"
+ volumeMounts:
+ - mountPath: /etc/cloud
+ name: vsphere-config-volume
+ readOnly: true
+ resources:
+ requests:
+ cpu: 200m
+ hostNetwork: true
+ volumes:
+ - name: vsphere-config-volume
+ configMap:
+ name: vsphere-cloud-config
diff --git a/docs/book/tutorials/enabling-vsphere-csi-on-an-existing-cluster.md b/docs/book/tutorials/enabling-vsphere-csi-on-an-existing-cluster.md
deleted file mode 100644
index 877b40f41..000000000
--- a/docs/book/tutorials/enabling-vsphere-csi-on-an-existing-cluster.md
+++ /dev/null
@@ -1,133 +0,0 @@
-# Introduction
-
-This guide assumes you have an existing Kubernetes cluster, set up with either Kubeadm, or manually and covers only the enabling and troubleshooting of the vSphere CSI driver.
-
-## Infrastructure prerequisites
-
-This section will cover the prerequisites that need to be in place before attempting the deployment.
-
-### vSphere requirements
-
-vSphere 6.7U3 (or later) is a prerequisite for using CSI and CPI at the time of writing. This may change going forward, and the documentation will be updated to reflect any changes in this support statement. If you are on a vSphere version that is below 6.7 U3, you can either upgrade vSphere to 6.7U3 or follow one of the tutorials for earlier vSphere versions. Here is the tutorial on deploying Kubernetes with kubeadm, using the VCP - [Deploying Kubernetes using kubeadm with the vSphere Cloud Provider (in-tree)](./k8s-vcp-on-vsphere-with-kubeadm.md).
-
-### Firewall requirements
-
-Providing the K8s master node(s) access to the vCenter management interface will be sufficient, given the CPI and CSI pods are deployed on the master node(s). Should these components be deployed on worker nodes or otherwise - those nodes will also need access to the vCenter management interface.
-
-If you want to use topology-aware volume provisioning and the late binding feature using `zone`/`region`, the node needs to discover its topology by connecting to the vCenter, for this every node should be able to communicate to the vCenter. You can disable this optional feature if you want to open only the master node to the vCenter management interface.
-
-### Virtual Machine Hardware requirements
-
-Virtual Machine Hardware must be `version 15` or higher. For Virtual Machine CPU and Memory requirements, size adequately based on workload requirements.
-VMware also recommend that virtual machines use the VMware Paravirtual SCSI controller for Primary Disk on the Node VM. This should be the default, but it is always good practice to check.
-
-Finally, the `disk.EnableUUID` parameter must be set for each node VMs. This step is necessary so that the VMDK always presents a consistent UUID to the VM, thus allowing the disk to be mounted properly.
-It is recommended to not take snapshots of CNS node VMs to avoid errors and unpredictable behavior.
-
-#### disk.EnableUUID=1
-
-The following govc commands will set the disk.EnableUUID=1 on all nodes.
-
-```sh
-export GOVC_INSECURE=1
-export GOVC_URL='https://'
-export GOVC_USERNAME=VC_Admin_User
-export GOVC_PASSWORD=VC_Admin_Passwd
-```
-
-Check the connection to vCenter:
-
-```sh
-$ govc ls
-/datacenter/vm
-/datacenter/network
-/datacenter/host
-/datacenter/datastore
-```
-
-To retrieve all Node VMs, use the following command:
-
-```sh
-$ govc ls //vm
-/datacenter/vm/k8s-node3
-/datacenter/vm/k8s-node4
-/datacenter/vm/k8s-node1
-/datacenter/vm/k8s-node2
-/datacenter/vm/k8s-master
-```
-
-To use govc to enable Disk UUID, use the following command:
-
-```sh
-govc vm.change -vm '/datacenter/vm/k8s-node1' -e="disk.enableUUID=1"
-govc vm.change -vm '/datacenter/vm/k8s-node2' -e="disk.enableUUID=1"
-govc vm.change -vm '/datacenter/vm/k8s-node3' -e="disk.enableUUID=1"
-govc vm.change -vm '/datacenter/vm/k8s-node4' -e="disk.enableUUID=1"
-govc vm.change -vm '/datacenter/vm/k8s-master' -e="disk.enableUUID=1"
-```
-
-Further information on disk.enableUUID can be found in [VMware Knowledgebase Article 52815](https://kb.vmware.com/s/article/52815).
-
-#### Upgrade Virtual Machine Hardware
-
-VM Hardware should be at version 15 or higher.
-
-```bash
-govc vm.upgrade -version=15 -vm '/datacenter/vm/k8s-node1'
-govc vm.upgrade -version=15 -vm '/datacenter/vm/k8s-node2'
-govc vm.upgrade -version=15 -vm '/datacenter/vm/k8s-node3'
-govc vm.upgrade -version=15 -vm '/datacenter/vm/k8s-node4'
-govc vm.upgrade -version=15 -vm '/datacenter/vm/k8s-master'
-```
-
-Check the VM Hardware version after running the above command:
-
-```bash
-$ govc vm.option.info '/datacenter/vm/k8s-node1' | grep HwVersion
-HwVersion: 15
-```
-
-## Kubernetes changes
-
-### Node-level changes
-
-On each K8s node, set the `kubelet`’s `cloud-provider` flag to `external` on all nodes. This flag needs to be set in the service configuration file (usually `/etc/systemd/system/kubelet.service`) but this depends on how you installed Kubernetes or the distribution you are using.
-
-E.g:
-
-```sh
---cloud-provider=external
-```
-
-Restart the `kubelet` service on each node.
-
-```sh
-systemctl daemon-reload
-systemctl restart kubelet.service
-```
-
-### Kubernetes manifest changes
-
-Set taints on all nodes to allow them to be initialised by the vSphere Cloud Provider Interface, this allows them to have their `providerID` populated, which creates the link between the CSI and the VM in vCenter.
-
-On worker nodes set this taint:
-
-```sh
-kubectl taint nodes --selector='!node-role.kubernetes.io/master' node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
-```
-
-On master nodes set this taint:
-
-```sh
-kubectl taint nodes --selector='node-role.kubernetes.io/master' node-role.kubernetes.io/master=:NoSchedule
-```
-
-### Install the vSphere Cloud Provider Interface
-
-Please refer to this guide for details on installing the CPI –
-
-**Note: Taints needs to be set on the nodes BEFORE the installation of the CPI.**
-
-### Install the vSphere CSI Driver
-
-Please refer to this guide for details on installing the CSI Driver -
diff --git a/docs/book/tutorials/make_a_new_cpi_release.md b/docs/book/tutorials/make_a_new_cpi_release.md
new file mode 100644
index 000000000..86e3a5a51
--- /dev/null
+++ b/docs/book/tutorials/make_a_new_cpi_release.md
@@ -0,0 +1,50 @@
+# Release Guide for CPI
+
+When a new k8s version is available, we should bump our [k8s dependencies of CPI](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/go.mod) before cutting a new CPI release.
+
+In this tutorial, we provide detailed steps on how to cut an official release of CPI.
+
+## Create a PR to bump k8s dependencies
+
+We recommend upgrading and downgrading CPI dependencies with `go get`, which automatically updates the `go.mod` file.
+
+For example, to upgrade a dependency to the latest version:
+
+```shell
+go get k8s.io/cloud-provider/app@v0.22.1
+```
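+
+Since the `k8s.io` staging modules generally need to move in lockstep (as in the `go.mod` of this repository), a small loop can bump them all at once. This is a sketch; the module list and the `v0.23.1` version are illustrative, not prescriptive:
+
+```shell
+# Bump the k8s.io staging modules to a single version in one pass.
+for mod in api apimachinery client-go cloud-provider code-generator component-base; do
+  go get "k8s.io/${mod}@v0.23.1"
+done
+go mod tidy
+```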
+
+Remember to update the `version` value in the [Dockerfile for image building](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/cluster/images/controller-manager/Dockerfile#L36).
+
+Sample PR: [Bump k8s dependencies to 1.22 and go to 1.16](https://github.com/kubernetes/cloud-provider-vsphere/pull/496)
+
+## Test before release
+
+Before we release a new version, we should always make sure we've fully tested CPI. To build a docker image for testing, you can run:
+
+```shell
+make docker-image IMAGE=
+```
+
+## Create a sample release YAML
+
+For each release, we should provide its release YAML under [this folder](https://github.com/kubernetes/cloud-provider-vsphere/tree/master/releases). Please refer to [this PR](https://github.com/kubernetes/cloud-provider-vsphere/pull/487) to add the corresponding release YAML.
+
+## Create a GitHub Release
+
+Normally, we need to cut alpha and beta releases before the official release. For example, before cutting the official `v1.22.0` release, we first create an alpha release named `v1.22.0-alpha.1` and ensure that this alpha version of CPI works. If a new bug occurs, we fix it and cut another release named `v1.22.0-alpha.2`. Once the latest alpha release is stable, we cut a beta release named `v1.22.0-beta.1` and follow the same pattern as for the alpha releases. When the beta release is stable, we are ready to cut the official release `v1.22.0`.
+
+To create a new release, please refer to the following workflow:
+
+```shell
+$ git pull --rebase
+# RELEASE_NAME can be v1.22.0-alpha.1, v1.22.0-beta.1, v1.22.0, etc.
+$ git tag -a $RELEASE_NAME
+$ git push origin $RELEASE_NAME
+```
+
+Now we can open the [release page](https://github.com/kubernetes/cloud-provider-vsphere/releases) and click `Draft a new release`. Use the tag we just created, and edit the release message by referring to the major PRs for important user-facing features rather than minor bug fixes.
+
+Press `Publish Release` to publish the release from the existing tag. As soon as you publish it, the release appears under the Releases tab, which previously showed just the tag name.
+
+Please go to the [post-release pipeline](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/post-cloud-provider-vsphere-release/) to check the release logs and make sure the new image is published in `gcr.io/cloud-provider-vsphere/cpi/release/manager` with the correct version tag.
diff --git a/go.mod b/go.mod
index 408578c87..d34767026 100644
--- a/go.mod
+++ b/go.mod
@@ -1,14 +1,17 @@
module k8s.io/cloud-provider-vsphere
-go 1.16
+go 1.17
require (
- github.com/golang/mock v1.4.4
+ github.com/fsnotify/fsnotify v1.5.1
+ github.com/golang/mock v1.5.0
github.com/golang/protobuf v1.5.2
- github.com/google/uuid v1.1.2
+ github.com/google/uuid v1.2.0
+ github.com/onsi/ginkgo v1.16.5
+ github.com/onsi/gomega v1.17.0
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.11.0
- github.com/spf13/cobra v1.1.3
+ github.com/spf13/cobra v1.2.1
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.7.0
github.com/vmware-tanzu/vm-operator-api v0.1.4-0.20201118171008-5ca641b0e126
@@ -16,19 +19,147 @@ require (
github.com/vmware/vsphere-automation-sdk-go/lib v0.2.0
github.com/vmware/vsphere-automation-sdk-go/runtime v0.2.0
github.com/vmware/vsphere-automation-sdk-go/services/nsxt v0.3.0
- golang.org/x/net v0.0.0-20210520170846-37e1c6afe023
- google.golang.org/grpc v1.38.0
+ golang.org/x/net v0.0.0-20211209124913-491a49abca63
+ google.golang.org/grpc v1.40.0
gopkg.in/gcfg.v1 v1.2.3
- gopkg.in/warnings.v0 v0.1.2 // indirect
gopkg.in/yaml.v2 v2.4.0
- k8s.io/api v0.22.1
- k8s.io/apiextensions-apiserver v0.22.1 // indirect
- k8s.io/apimachinery v0.22.1
- k8s.io/client-go v0.22.1
- k8s.io/cloud-provider v0.22.1
- k8s.io/code-generator v0.22.1
- k8s.io/component-base v0.22.1
- k8s.io/klog/v2 v2.9.0
- sigs.k8s.io/controller-runtime v0.6.5
- sigs.k8s.io/yaml v1.2.0
+ k8s.io/api v0.23.1
+ k8s.io/apimachinery v0.23.1
+ k8s.io/client-go v0.23.1
+ k8s.io/cloud-provider v0.23.1
+ k8s.io/code-generator v0.23.1
+ k8s.io/component-base v0.23.1
+ k8s.io/klog/v2 v2.30.0
+ sigs.k8s.io/cluster-api/test v0.4.5
+ sigs.k8s.io/controller-runtime v0.11.0
+ sigs.k8s.io/yaml v1.3.0
+)
+
+require (
+ github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
+ github.com/BurntSushi/toml v0.3.1 // indirect
+ github.com/MakeNowJust/heredoc v1.0.0 // indirect
+ github.com/Microsoft/go-winio v0.5.0 // indirect
+ github.com/NYTimes/gziphandler v1.1.1 // indirect
+ github.com/PuerkitoBio/purell v1.1.1 // indirect
+ github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
+ github.com/alessio/shellescape v1.4.1 // indirect
+ github.com/beevik/etree v1.1.0 // indirect
+ github.com/beorn7/perks v1.0.1 // indirect
+ github.com/blang/semver v3.5.1+incompatible // indirect
+ github.com/cespare/xxhash/v2 v2.1.1 // indirect
+ github.com/containerd/containerd v1.5.2 // indirect
+ github.com/coredns/caddy v1.1.0 // indirect
+ github.com/coredns/corefile-migration v1.0.12 // indirect
+ github.com/coreos/go-semver v0.3.0 // indirect
+ github.com/coreos/go-systemd/v22 v22.3.2 // indirect
+ github.com/davecgh/go-spew v1.1.1 // indirect
+ github.com/docker/distribution v2.7.1+incompatible // indirect
+ github.com/docker/docker v20.10.7+incompatible // indirect
+ github.com/docker/go-connections v0.4.0 // indirect
+ github.com/docker/go-units v0.4.0 // indirect
+ github.com/drone/envsubst/v2 v2.0.0-20210615175204-7bf45dbf5372 // indirect
+ github.com/emicklei/go-restful v2.9.5+incompatible // indirect
+ github.com/evanphx/json-patch v4.12.0+incompatible // indirect
+ github.com/evanphx/json-patch/v5 v5.2.0 // indirect
+ github.com/felixge/httpsnoop v1.0.1 // indirect
+ github.com/gibson042/canonicaljson-go v1.0.3 // indirect
+ github.com/go-logr/logr v1.2.0 // indirect
+ github.com/go-openapi/jsonpointer v0.19.5 // indirect
+ github.com/go-openapi/jsonreference v0.19.5 // indirect
+ github.com/go-openapi/swag v0.19.14 // indirect
+ github.com/gobuffalo/flect v0.2.3 // indirect
+ github.com/gogo/protobuf v1.3.2 // indirect
+ github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
+ github.com/google/go-cmp v0.5.6 // indirect
+ github.com/google/go-github/v33 v33.0.0 // indirect
+ github.com/google/go-querystring v1.0.0 // indirect
+ github.com/google/gofuzz v1.2.0 // indirect
+ github.com/googleapis/gnostic v0.5.5 // indirect
+ github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
+ github.com/grpc-ecosystem/grpc-gateway v1.16.0 // indirect
+ github.com/hashicorp/hcl v1.0.0 // indirect
+ github.com/imdario/mergo v0.3.12 // indirect
+ github.com/inconshreveable/mousetrap v1.0.0 // indirect
+ github.com/josharian/intern v1.0.0 // indirect
+ github.com/json-iterator/go v1.1.12 // indirect
+ github.com/magiconair/properties v1.8.5 // indirect
+ github.com/mailru/easyjson v0.7.6 // indirect
+ github.com/mattn/go-isatty v0.0.12 // indirect
+ github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
+ github.com/mitchellh/mapstructure v1.4.1 // indirect
+ github.com/moby/term v0.0.0-20210610120745-9d4ed1856297 // indirect
+ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
+ github.com/modern-go/reflect2 v1.0.2 // indirect
+ github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
+ github.com/nxadm/tail v1.4.8 // indirect
+ github.com/opencontainers/go-digest v1.0.0 // indirect
+ github.com/opencontainers/image-spec v1.0.1 // indirect
+ github.com/pelletier/go-toml v1.9.3 // indirect
+ github.com/pmezard/go-difflib v1.0.0 // indirect
+ github.com/prometheus/client_model v0.2.0 // indirect
+ github.com/prometheus/common v0.28.0 // indirect
+ github.com/prometheus/procfs v0.6.0 // indirect
+ github.com/sirupsen/logrus v1.8.1 // indirect
+ github.com/spf13/afero v1.6.0 // indirect
+ github.com/spf13/cast v1.3.1 // indirect
+ github.com/spf13/jwalterweatherman v1.1.0 // indirect
+ github.com/spf13/viper v1.8.1 // indirect
+ github.com/subosito/gotenv v1.2.0 // indirect
+ go.etcd.io/etcd/api/v3 v3.5.0 // indirect
+ go.etcd.io/etcd/client/pkg/v3 v3.5.0 // indirect
+ go.etcd.io/etcd/client/v3 v3.5.0 // indirect
+ go.opentelemetry.io/contrib v0.20.0 // indirect
+ go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.20.0 // indirect
+ go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.20.0 // indirect
+ go.opentelemetry.io/otel v0.20.0 // indirect
+ go.opentelemetry.io/otel/exporters/otlp v0.20.0 // indirect
+ go.opentelemetry.io/otel/metric v0.20.0 // indirect
+ go.opentelemetry.io/otel/sdk v0.20.0 // indirect
+ go.opentelemetry.io/otel/sdk/export/metric v0.20.0 // indirect
+ go.opentelemetry.io/otel/sdk/metric v0.20.0 // indirect
+ go.opentelemetry.io/otel/trace v0.20.0 // indirect
+ go.opentelemetry.io/proto/otlp v0.7.0 // indirect
+ go.uber.org/atomic v1.7.0 // indirect
+ go.uber.org/multierr v1.6.0 // indirect
+ go.uber.org/zap v1.19.1 // indirect
+ golang.org/x/crypto v0.0.0-20210817164053-32db794688a5 // indirect
+ golang.org/x/mod v0.5.1 // indirect
+ golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f // indirect
+ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect
+ golang.org/x/sys v0.0.0-20211029165221-6e7872819dc8 // indirect
+ golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b // indirect
+ golang.org/x/text v0.3.7 // indirect
+ golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
+ golang.org/x/tools v0.1.8 // indirect
+ golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
+ gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
+ google.golang.org/appengine v1.6.7 // indirect
+ google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2 // indirect
+ google.golang.org/protobuf v1.27.1 // indirect
+ gopkg.in/inf.v0 v0.9.1 // indirect
+ gopkg.in/ini.v1 v1.62.0 // indirect
+ gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
+ gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
+ gopkg.in/warnings.v0 v0.1.2 // indirect
+ gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
+ k8s.io/apiextensions-apiserver v0.23.1 // indirect
+ k8s.io/apiserver v0.23.1 // indirect
+ k8s.io/cluster-bootstrap v0.21.4 // indirect
+ k8s.io/component-helpers v0.23.1 // indirect
+ k8s.io/controller-manager v0.23.1 // indirect
+ k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c // indirect
+ k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 // indirect
+ k8s.io/utils v0.0.0-20210930125809-cb0fa318a74b // indirect
+ sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.25 // indirect
+ sigs.k8s.io/cluster-api v0.4.5 // indirect
+ sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 // indirect
+ sigs.k8s.io/kind v0.11.1 // indirect
+ sigs.k8s.io/structured-merge-diff/v4 v4.2.0 // indirect
+)
+
+replace (
+ github.com/onsi/ginkgo => github.com/onsi/ginkgo v1.16.1
+ github.com/onsi/gomega => github.com/onsi/gomega v1.11.0
+ sigs.k8s.io/cluster-api => sigs.k8s.io/cluster-api v0.4.5
)
diff --git a/go.sum b/go.sum
index a45e743fd..3c9963e94 100644
--- a/go.sum
+++ b/go.sum
@@ -1,3 +1,4 @@
+bazil.org/fuse v0.0.0-20160811212531-371fbbdaa898/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
@@ -8,42 +9,80 @@ cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
-cloud.google.com/go v0.54.0 h1:3ithwDMr7/3vpAMXiH+ZQnYbuIsh+OPhUPMFC9enmn0=
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
+cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
+cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
+cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
+cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
+cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
+cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
+cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
+cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
+cloud.google.com/go v0.81.0 h1:at8Tk2zUz63cLPR0JPWm5vp77pEZmzxEQBEfRKn1VV8=
+cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
+cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
+cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
+cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
+cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
+cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
+cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
+github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210608223527-2377c96fe795/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
+github.com/Azure/go-autorest v10.8.1+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
-github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=
+github.com/Azure/go-autorest/autorest v0.11.1/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=
+github.com/Azure/go-autorest/autorest v0.11.12/go.mod h1:eipySxLmqSyC5s5k1CLupqet0PSENBEDP93LQ9a8QYw=
github.com/Azure/go-autorest/autorest v0.11.18/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA=
-github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0=
+github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg=
+github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
github.com/Azure/go-autorest/autorest/adal v0.9.13/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
-github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA=
github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
-github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
-github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
+github.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
-github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc=
+github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
-github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
+github.com/MakeNowJust/heredoc v0.0.0-20170808103936-bb23615498cd/go.mod h1:64YHyfSL2R96J44Nlwm39UHepQbyR5q10x7iYa1ks2E=
+github.com/MakeNowJust/heredoc v1.0.0 h1:cXCdzVdstXyiTqTvfqk9SDHpKNjxuom+DOlyEeQ4pzQ=
+github.com/MakeNowJust/heredoc v1.0.0/go.mod h1:mG5amYoWBHf8vpLOuehzbGGw0EHxpZZ6lCpQ4fNJ8LE=
+github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
+github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
+github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
+github.com/Microsoft/go-winio v0.4.16-0.20201130162521-d1ffc52c7331/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
+github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
+github.com/Microsoft/go-winio v0.4.17-0.20210211115548-6eac466e5fa3/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/go-winio v0.4.17-0.20210324224401-5516f17a5958/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/go-winio v0.4.17/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/go-winio v0.5.0 h1:Elr9Wn+sGKPlkaBvwu4mTrxtmOp3F3yV9qhaHbXGjwU=
+github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
+github.com/Microsoft/hcsshim v0.8.7-0.20190325164909-8abdbb8205e4/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
+github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
+github.com/Microsoft/hcsshim v0.8.9/go.mod h1:5692vkUqntj1idxauYlpoINNKeqCiG6Sg38RRsjT5y8=
+github.com/Microsoft/hcsshim v0.8.14/go.mod h1:NtVKoYxQuTLx6gEq0L96c9Ju4JbRJ4nY2ow3VK6a9Lg=
+github.com/Microsoft/hcsshim v0.8.15/go.mod h1:x38A4YbHbdxJtc0sF6oIz+RG0npwSCAvn69iY6URG00=
+github.com/Microsoft/hcsshim v0.8.16/go.mod h1:o5/SZqmR7x9JNKsW3pu+nqHm0MF8vbA+VxGOoXdC600=
+github.com/Microsoft/hcsshim/test v0.0.0-20201218223536-d3e5debf77da/go.mod h1:5hlzMzRKMLyo42nCZ9oml8AdTlq/0cvIaBv6tK1RehU=
+github.com/Microsoft/hcsshim/test v0.0.0-20210227013316-43a75bb4edd3/go.mod h1:mw7qgWloBUl75W/gVH3cQszUg1+gUITj7D6NY7ywVnY=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/NYTimes/gziphandler v1.1.1 h1:ZUDjpQae29j0ryrS0u/B8HZfJBtBQHjqw2rQ2cqUQ3I=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
@@ -55,33 +94,49 @@ github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbt
github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
+github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/agnivade/levenshtein v1.0.1/go.mod h1:CURSv5d9Uaml+FovSIICkLbAUZ9S4RqaHDIsdSBg7lM=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
+github.com/alessio/shellescape v1.4.1 h1:V7yhSDDn8LP4lc4jS8pFkt0zCnzVJlG5JXy9BVKJUX0=
+github.com/alessio/shellescape v1.4.1/go.mod h1:PZAiSCk0LJaZkiCSkPv8qIobYglO3FPpyFjDCtHLS30=
+github.com/alexflint/go-filemutex v0.0.0-20171022225611-72bdc8eae2ae/go.mod h1:CgnQgUtFrFz9mxFNtED3jI5tLDjKlOM+oUF/sTk6ps0=
github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
+github.com/antlr/antlr4/runtime/Go/antlr v0.0.0-20210826220005-b48c857c3a0e/go.mod h1:F7bn7fEU90QkQ3tnmaTx3LTKLEDqnwWODIYppRQ5hnY=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
+github.com/aws/aws-sdk-go v1.15.11/go.mod h1:mFuSZ37Z9YOHbQEwBWztmVzqXrEkub65tZoCYDt7FT0=
github.com/beevik/etree v1.1.0 h1:T0xke/WvNtMoCqgzPhkX2r4rjY3GDZFi+FjpRZY2Jbs=
github.com/beevik/etree v1.1.0/go.mod h1:r8Aw8JqVegEf0w2fDnATrX9VpkMcyFeM0FhwO62wh+A=
-github.com/benbjohnson/clock v1.0.3 h1:vkLuvpK4fmtSCuo60+yC63p7y0BmQ8gm5ZXGuBCJyXg=
github.com/benbjohnson/clock v1.0.3/go.mod h1:bGMdMPoPVvcYyt1gHDf4J2KE153Yf9BuiUKYMaxlTDM=
+github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
+github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
+github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
+github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
-github.com/blang/semver v3.5.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
+github.com/bketelsen/crypt v0.0.4/go.mod h1:aI6NrJ0pMGgvZKL1iVgXLnfIFJtfV+bKCoqOes/6LfM=
+github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver v3.5.1+incompatible h1:cQNTCjp13qL8KC3Nbxr/y2Bqb63oX6wdnnjpJbkM4JQ=
github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
+github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
+github.com/bshuster-repo/logrus-logstash-hook v0.4.1/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk=
+github.com/buger/jsonparser v0.0.0-20180808090653-f4dd9f5a6b44/go.mod h1:bbYlZJ7hK1yFx9hf58LP0zeX7UjIGs20ufpu3evjr+s=
+github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=
+github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
+github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/certifi/gocertifi v0.0.0-20191021191039-0944d244cd40/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/certifi/gocertifi v0.0.0-20200922220541-2c3bb06c6054/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
@@ -89,50 +144,171 @@ github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5/go.mod h1:/iP1qXHoty45bqomnu2LM+VVyAEdWN+vtSHGlQgyxbw=
+github.com/checkpoint-restore/go-criu/v4 v4.1.0/go.mod h1:xUQBLp4RLc5zJtWY++yjOoMoB5lihDt7fai+75m+rGw=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
+github.com/cilium/ebpf v0.0.0-20200110133405-4032b1d8aae3/go.mod h1:MA5e5Lr8slmEg9bt0VpxxWqJlO4iwu3FBdHUzV7wQVg=
+github.com/cilium/ebpf v0.0.0-20200702112145-1c8d4c9ef775/go.mod h1:7cR51M8ViRLIdUjrmSXlK9pkrsDlLHbO8jiB8X8JnOc=
+github.com/cilium/ebpf v0.2.0/go.mod h1:To2CFviqOWL/M0gIMsvSMlqe7em/l1ALkX1PyjrX2Qs=
+github.com/cilium/ebpf v0.4.0/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
+github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
+github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/cockroachdb/datadriven v0.0.0-20200714090401-bf6692d28da5/go.mod h1:h6jFvWxBdQXxjopDMZyH2UVceIRfR84bdzbkoKrsWNo=
github.com/cockroachdb/errors v1.2.4/go.mod h1:rQD95gz6FARkaKkQXUksEje/d9a6wBJoCr5oaCLELYA=
github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f/go.mod h1:i/u985jwjWRlyHXQbwatDASoW0RMlZ/3i9yJHE2xLkI=
+github.com/containerd/aufs v0.0.0-20200908144142-dab0cbea06f4/go.mod h1:nukgQABAEopAHvB6j7cnP5zJ+/3aVcE7hCYqvIwAHyE=
+github.com/containerd/aufs v0.0.0-20201003224125-76a6863f2989/go.mod h1:AkGGQs9NM2vtYHaUen+NljV0/baGCAPELGm2q9ZXpWU=
+github.com/containerd/aufs v0.0.0-20210316121734-20793ff83c97/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
+github.com/containerd/aufs v1.0.0/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
+github.com/containerd/btrfs v0.0.0-20201111183144-404b9149801e/go.mod h1:jg2QkJcsabfHugurUvvPhS3E08Oxiuh5W/g1ybB4e0E=
+github.com/containerd/btrfs v0.0.0-20210316141732-918d888fb676/go.mod h1:zMcX3qkXTAi9GI50+0HOeuV8LU2ryCE/V2vG/ZBiTss=
+github.com/containerd/btrfs v1.0.0/go.mod h1:zMcX3qkXTAi9GI50+0HOeuV8LU2ryCE/V2vG/ZBiTss=
+github.com/containerd/cgroups v0.0.0-20190717030353-c4b9ac5c7601/go.mod h1:X9rLEHIqSf/wfK8NsPqxJmeZgW4pcfzdXITDrUSJ6uI=
+github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
+github.com/containerd/cgroups v0.0.0-20200531161412-0dbf7f05ba59/go.mod h1:pA0z1pT8KYB3TCXK/ocprsh7MAkoW8bZVzPdih9snmM=
+github.com/containerd/cgroups v0.0.0-20200710171044-318312a37340/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
+github.com/containerd/cgroups v0.0.0-20200824123100-0b889c03f102/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
+github.com/containerd/cgroups v0.0.0-20210114181951-8a68de567b68/go.mod h1:ZJeTFisyysqgcCdecO57Dj79RfL0LNeGiFUqLYQRYLE=
+github.com/containerd/cgroups v1.0.1/go.mod h1:0SJrPIenamHDcZhEcJMNBB85rHcUsw4f25ZfBiPYRkU=
+github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
+github.com/containerd/console v0.0.0-20181022165439-0650fd9eeb50/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
+github.com/containerd/console v0.0.0-20191206165004-02ecf6a7291e/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
+github.com/containerd/console v1.0.1/go.mod h1:XUsP6YE/mKtz6bxc+I8UiKKTP04qjQL4qcS3XoQ5xkw=
+github.com/containerd/console v1.0.2/go.mod h1:ytZPjGgY2oeTkAONYafi2kSj0aYggsf8acV1PGKCbzQ=
+github.com/containerd/containerd v1.2.10/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.0/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.1-0.20191213020239-082f7e3aed57/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.0-beta.2.0.20200729163537-40b22ef07410/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.1/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.3/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.5.0-beta.1/go.mod h1:5HfvG1V2FsKesEGQ17k5/T7V960Tmcumvqn8Mc+pCYQ=
+github.com/containerd/containerd v1.5.0-beta.3/go.mod h1:/wr9AVtEM7x9c+n0+stptlo/uBBoBORwEx6ardVcmKU=
+github.com/containerd/containerd v1.5.0-beta.4/go.mod h1:GmdgZd2zA2GYIBZ0w09ZvgqEq8EfBp/m3lcVZIvPHhI=
+github.com/containerd/containerd v1.5.0-rc.0/go.mod h1:V/IXoMqNGgBlabz3tHD2TWDoTJseu1FGOKuoA4nNb2s=
+github.com/containerd/containerd v1.5.2 h1:MG/Bg1pbmMb61j3wHCFWPxESXHieiKr2xG64px/k8zQ=
+github.com/containerd/containerd v1.5.2/go.mod h1:0DOxVqwDy2iZvrZp2JUx/E+hS0UNTVn7dJnIOwtYR4g=
+github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/continuity v0.0.0-20190815185530-f2a389ac0a02/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/continuity v0.0.0-20191127005431-f65d91d395eb/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/continuity v0.0.0-20200710164510-efbc4488d8fe/go.mod h1:cECdGN1O8G9bgKTlLhuPJimka6Xb/Gg7vYzCTNVxhvo=
+github.com/containerd/continuity v0.0.0-20201208142359-180525291bb7/go.mod h1:kR3BEg7bDFaEddKm54WSmrol1fKWDU1nKYkgrcgZT7Y=
+github.com/containerd/continuity v0.0.0-20210208174643-50096c924a4e/go.mod h1:EXlVlkqNba9rJe3j7w3Xa924itAMLgZH4UD/Q4PExuQ=
+github.com/containerd/continuity v0.1.0/go.mod h1:ICJu0PwR54nI0yPEnJ6jcS+J7CZAUXrLh8lPo2knzsM=
+github.com/containerd/fifo v0.0.0-20180307165137-3d5202aec260/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
+github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
+github.com/containerd/fifo v0.0.0-20200410184934-f15a3290365b/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
+github.com/containerd/fifo v0.0.0-20201026212402-0724c46b320c/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
+github.com/containerd/fifo v0.0.0-20210316144830-115abcc95a1d/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
+github.com/containerd/fifo v1.0.0/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
+github.com/containerd/go-cni v1.0.1/go.mod h1:+vUpYxKvAF72G9i1WoDOiPGRtQpqsNW/ZHtSlv++smU=
+github.com/containerd/go-cni v1.0.2/go.mod h1:nrNABBHzu0ZwCug9Ije8hL2xBCYh/pjfMb1aZGrrohk=
+github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
+github.com/containerd/go-runc v0.0.0-20190911050354-e029b79d8cda/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
+github.com/containerd/go-runc v0.0.0-20200220073739-7016d3ce2328/go.mod h1:PpyHrqVs8FTi9vpyHwPwiNEGaACDxT/N/pLcvMSRA9g=
+github.com/containerd/go-runc v0.0.0-20201020171139-16b287bc67d0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
+github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
+github.com/containerd/imgcrypt v1.0.1/go.mod h1:mdd8cEPW7TPgNG4FpuP3sGBiQ7Yi/zak9TYCG3juvb0=
+github.com/containerd/imgcrypt v1.0.4-0.20210301171431-0ae5c75f59ba/go.mod h1:6TNsg0ctmizkrOgXRNQjAPFWpMYRWuiB6dSF4Pfa5SA=
+github.com/containerd/imgcrypt v1.1.1-0.20210312161619-7ed62a527887/go.mod h1:5AZJNI6sLHJljKuI9IHnw1pWqo/F0nGDOuR9zgTs7ow=
+github.com/containerd/imgcrypt v1.1.1/go.mod h1:xpLnwiQmEUJPvQoAapeb2SNCxz7Xr6PJrXQb0Dpc4ms=
+github.com/containerd/nri v0.0.0-20201007170849-eb1350a75164/go.mod h1:+2wGSDGFYfE5+So4M5syatU0N0f0LbWpuqyMi4/BE8c=
+github.com/containerd/nri v0.0.0-20210316161719-dbaa18c31c14/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
+github.com/containerd/nri v0.1.0/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
+github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
+github.com/containerd/ttrpc v0.0.0-20190828172938-92c8520ef9f8/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
+github.com/containerd/ttrpc v0.0.0-20191028202541-4f1b8fe65a5c/go.mod h1:LPm1u0xBw8r8NOKoOdNMeVHSawSsltak+Ihv+etqsE8=
+github.com/containerd/ttrpc v1.0.1/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
+github.com/containerd/ttrpc v1.0.2/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
+github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
+github.com/containerd/typeurl v0.0.0-20190911142611-5eb25027c9fd/go.mod h1:GeKYzf2pQcqv7tJ0AoCuuhtnqhva5LNU3U+OyKxxJpk=
+github.com/containerd/typeurl v1.0.1/go.mod h1:TB1hUtrpaiO88KEK56ijojHS1+NeF0izUACaJW2mdXg=
+github.com/containerd/typeurl v1.0.2/go.mod h1:9trJWW2sRlGub4wZJRTW83VtbOLS6hwcDZXTn6oPz9s=
+github.com/containerd/zfs v0.0.0-20200918131355-0a33824f23a2/go.mod h1:8IgZOBdv8fAgXddBT4dBXJPtxyRsejFIpXoklgxgEjw=
+github.com/containerd/zfs v0.0.0-20210301145711-11e8f1707f62/go.mod h1:A9zfAbMlQwE+/is6hi0Xw8ktpL+6glmqZYtevJgaB8Y=
+github.com/containerd/zfs v0.0.0-20210315114300-dde8f0fda960/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
+github.com/containerd/zfs v0.0.0-20210324211415-d5c4544f0433/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
+github.com/containerd/zfs v1.0.0/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
+github.com/containernetworking/cni v0.7.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/cni v0.8.0/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/cni v0.8.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/plugins v0.8.6/go.mod h1:qnw5mN19D8fIwkqW7oHHYDHVlzhJpcY6TQxn/fUyDDM=
+github.com/containernetworking/plugins v0.9.1/go.mod h1:xP/idU2ldlzN6m4p5LmGiwRDjeJr6FLK6vuiUwoH7P8=
+github.com/containers/ocicrypt v1.0.1/go.mod h1:MeJDzk1RJHv89LjsH0Sp5KTY3ZYkjXO/C+bKAeWFIrc=
+github.com/containers/ocicrypt v1.1.0/go.mod h1:b8AOe0YR67uU8OqfVNcznfFpAzu3rdgUV4GP9qXPfu4=
+github.com/containers/ocicrypt v1.1.1/go.mod h1:Dm55fwWm1YZAjYRaJ94z2mfZikIyIN4B0oB3dj3jFxY=
+github.com/coredns/caddy v1.1.0 h1:ezvsPrT/tA/7pYDBZxu0cT0VmWk75AfIaf6GSYCNMf0=
+github.com/coredns/caddy v1.1.0/go.mod h1:A6ntJQlAWuQfFlsd9hvigKbo2WS0VUs2l1e2F+BawD4=
+github.com/coredns/corefile-migration v1.0.12 h1:TJGATo0YLQJVIKJZLajXE1IrhRFtYTR1cYsGIT1YNEk=
+github.com/coredns/corefile-migration v1.0.12/go.mod h1:NJOI8ceUF/NTgEwtjD+TUq3/BnH/GF7WAM3RzCa3hBo=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
-github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
+github.com/coreos/go-iptables v0.4.5/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
+github.com/coreos/go-iptables v0.5.0/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
+github.com/coreos/go-systemd v0.0.0-20161114122254-48702e0da86b/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e h1:Wf6HqHfScWJN9/ZjdUKyjop4mf3Qdd+1TvvltAvM3m8=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-systemd/v22 v22.0.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
+github.com/coreos/go-systemd/v22 v22.1.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
github.com/coreos/go-systemd/v22 v22.3.2 h1:D9/bQk5vlXQFZ6Kwuu6zaiXJ9oTPe68++AzAJc1DzSI=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
-github.com/coreos/pkg v0.0.0-20180108230652-97fdf19511ea/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
-github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
+github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.11 h1:07n33Z8lZxZ2qwegKbObQohDhXDQxiMMz1NOUGYlesw=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
+github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4=
+github.com/d2g/dhcp4 v0.0.0-20170904100407-a1d1b6c41b1c/go.mod h1:Ct2BUK8SB0YC1SMSibvLzxjeJLnrYEVLULFNiHY9YfQ=
+github.com/d2g/dhcp4client v1.0.0/go.mod h1:j0hNfjhrt2SxUOw55nL0ATM/z4Yt3t2Kd1mW34z5W5s=
+github.com/d2g/dhcp4server v0.0.0-20181031114812-7d4a0a7f59a5/go.mod h1:Eo87+Kg/IX2hfWJfwxMzLyuSZyxSoAug2nGa1G2QAi8=
+github.com/d2g/hardwareaddr v0.0.0-20190221164911-e7d9fbe030e4/go.mod h1:bMl4RjIciD2oAxI7DmWRx6gbeqrkoLqv3MV0vzNad+I=
github.com/davecgh/go-spew v0.0.0-20151105211317-5215b55f46b2/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-xdr v0.0.0-20161123171359-e6a2ba005892/go.mod h1:CTDl0pzVzE5DEzZhPfvhY/9sPFMQIxaJ9VAMs9AagrE=
+github.com/daviddengcn/go-colortext v0.0.0-20160507010035-511bcaf42ccd/go.mod h1:dv4zxwHi5C/8AeI+4gX4dCWOIvNi7I6JCSX0HvlKPgE=
+github.com/denverdino/aliyungo v0.0.0-20190125010748-a747050bb1ba/go.mod h1:dV8lFg6daOBZbT6/BDGIz6Y3WFGn8juu6G+CQ6LHtl0=
+github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
-github.com/docker/docker v0.7.3-0.20190327010347-be7ac8be2ae0/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
+github.com/docker/distribution v0.0.0-20190905152932-14b96e55d84c/go.mod h1:0+TTO4EOBfRPhZXAeF1Vu+W3hHZ8eLp8PgKVZlcvtFY=
+github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
+github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
+github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
+github.com/docker/docker v20.10.7+incompatible h1:Z6O9Nhsjv+ayUEeI1IojKbYcsGdgYSNqxe1s2MYzUhQ=
+github.com/docker/docker v20.10.7+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
+github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
+github.com/docker/go-events v0.0.0-20170721190031-9461782956ad/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
+github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
+github.com/docker/go-metrics v0.0.0-20180209012529-399ea8c73916/go.mod h1:/u0gXw0Gay3ceNrsHubL3BtdOL2fHf93USgMTe0W5dI=
+github.com/docker/go-metrics v0.0.1/go.mod h1:cG1hvH2utMXtqgqqYE9plW6lDxS3/5ayHzueweSI3Vw=
github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
+github.com/drone/envsubst/v2 v2.0.0-20210615175204-7bf45dbf5372 h1:lMxlL2YBq247PkbbAhbcpEzDhqRp9IX6LSVy5WUz97s=
+github.com/drone/envsubst/v2 v2.0.0-20210615175204-7bf45dbf5372/go.mod h1:esf2rsHFNlZlxsqsZDojNBcnNs5REqIvRrWRHqX0vEU=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
@@ -144,23 +320,39 @@ github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
+github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
+github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
+github.com/evanphx/json-patch v0.5.2/go.mod h1:ZWS5hhDbVDyob71nXKNL0+PWn6ToqBHMikGIFbs31qQ=
github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.5.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
-github.com/evanphx/json-patch v4.11.0+incompatible h1:glyUF9yIYtMHzn8xaKw5rMhdWcwsYV8dZHIq5567/xs=
github.com/evanphx/json-patch v4.11.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
+github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
+github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
+github.com/evanphx/json-patch/v5 v5.2.0 h1:8ozOH5xxoMYDt5/u+yMTsVXydVCbTORFnOOoq2lumco=
+github.com/evanphx/json-patch/v5 v5.2.0/go.mod h1:G79N1coSVB93tBe7j6PhzjmR3/2VvlbKOFpnXhI9Bw4=
+github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d/go.mod h1:ZZMPRZwes7CROmyNKgQzC3XPs6L/G2EJLHddWejkmf4=
+github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwoZc+/fpc=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
+github.com/fatih/color v1.12.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM=
github.com/felixge/httpsnoop v1.0.1 h1:lvB5Jl89CsZtGIWuTcDM1E/vkVs49/Ml7JJe07l8SPQ=
github.com/felixge/httpsnoop v1.0.1/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
+github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/form3tech-oss/jwt-go v3.2.3+incompatible h1:7ZaBxOI7TMoYBfyA3cQHErNNyAWIKUMIwqxEtgHOs5c=
github.com/form3tech-oss/jwt-go v3.2.3+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
+github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
-github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
+github.com/fsnotify/fsnotify v1.5.1 h1:mZcQUHVQUQWoPXXtuf9yuEXKudkV2sx1E06UadKWpgI=
+github.com/fsnotify/fsnotify v1.5.1/go.mod h1:T3375wBYaZdLLcVNkcVbzGHY7f1l/uK5T5Ai1i3InKU=
+github.com/fullsailor/pkcs7 v0.0.0-20190404230743-d7302db945fa/go.mod h1:KnogPXtdwXqoenmZCw6S+25EAm2MkxbG0deNDu4cbSA=
+github.com/fvbommel/sortorder v1.0.1/go.mod h1:uk88iVf1ovNn1iLfgUVU2F9o5eO30ui720w+kxuqRs0=
+github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
+github.com/getkin/kin-openapi v0.76.0/go.mod h1:660oXbgy5JFMKreazJaQTw7o+X00qeSyhcnluiMv+Xg=
github.com/getsentry/raven-go v0.2.0/go.mod h1:KungGk8q33+aIAZUIVWZDr2OfAEBsO49PX4NzFV5kcQ=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
@@ -168,9 +360,11 @@ github.com/gibson042/canonicaljson-go v1.0.3 h1:EAyF8L74AWabkyUmrvEFHEt/AGFQeD6R
github.com/gibson042/canonicaljson-go v1.0.3/go.mod h1:DsLpJTThXyGNO+KZlI85C1/KDcImpP67k/RKVjcaEqo=
github.com/globalsign/mgo v0.0.0-20180905125535-1ca0a4f7cbcb/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
+github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm6/TyX73Q=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
@@ -179,10 +373,12 @@ github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
-github.com/go-logr/logr v0.4.0 h1:K7/B1jt6fIBQVd4Owv2MqGQClcgf0R266+7C/QjRcLc=
github.com/go-logr/logr v0.4.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
-github.com/go-logr/zapr v0.1.0 h1:h+WVe9j6HAA01niTJPA/kKH0i7e0rLZBCwauQFcRE54=
-github.com/go-logr/zapr v0.1.0/go.mod h1:tabnROwaDl0UNxkVeFRbY8bwB37GwRv0P8lg6aAiEnk=
+github.com/go-logr/logr v1.2.0 h1:QK40JKJyMdUDz+h+xvCsru/bJhvG0UxvePV0ufL/AcE=
+github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-logr/zapr v0.4.0/go.mod h1:tabnROwaDl0UNxkVeFRbY8bwB37GwRv0P8lg6aAiEnk=
+github.com/go-logr/zapr v1.2.0 h1:n4JnPI1T3Qq1SFEi/F8rwLrZERp2bso19PJZDB9dayk=
+github.com/go-logr/zapr v1.2.0/go.mod h1:Qa4Bsj2Vb+FAVeAKsLD8RLQ+YRJB8YDmOAKxaBQf7Ro=
github.com/go-openapi/analysis v0.0.0-20180825180245-b006789cd277/go.mod h1:k70tL6pCuVxPJOHXQ+wIac1FUrvNkHolPie/cLEU6hI=
github.com/go-openapi/analysis v0.17.0/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik=
github.com/go-openapi/analysis v0.18.0/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik=
@@ -218,10 +414,12 @@ github.com/go-openapi/spec v0.17.0/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsd
github.com/go-openapi/spec v0.18.0/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI=
github.com/go-openapi/spec v0.19.2/go.mod h1:sCxk3jxKgioEJikev4fgkNmwS+3kuYdJtcsZsD5zxMY=
github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
+github.com/go-openapi/spec v0.19.5/go.mod h1:Hm2Jr4jv8G1ciIAo+frC/Ft+rR2kQDh8JHKHb3gWUSk=
github.com/go-openapi/strfmt v0.17.0/go.mod h1:P82hnJI0CXkErkXi8IKjPbNBM6lV6+5pLP5l494TcyU=
github.com/go-openapi/strfmt v0.18.0/go.mod h1:P82hnJI0CXkErkXi8IKjPbNBM6lV6+5pLP5l494TcyU=
github.com/go-openapi/strfmt v0.19.0/go.mod h1:+uW+93UVvGGq2qGaZxdDeJqSAqBqBdl+ZPMF/cC8nDY=
github.com/go-openapi/strfmt v0.19.3/go.mod h1:0yX7dbo8mKIvc3XSKp7MNfxw4JytCfCD6+bY1AVL9LU=
+github.com/go-openapi/strfmt v0.19.5/go.mod h1:eftuHTlB/dI8Uq8JJOyRlieZf+WkkxUuk0dgdHXr2Qk=
github.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I=
github.com/go-openapi/swag v0.17.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
github.com/go-openapi/swag v0.18.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
@@ -231,16 +429,28 @@ github.com/go-openapi/swag v0.19.14 h1:gm3vOOXfiuw5i9p5N9xJvfjvuofpyvLA9Wr6QfK5F
github.com/go-openapi/swag v0.19.14/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-openapi/validate v0.18.0/go.mod h1:Uh4HdOzKt19xGIGm1qHf/ofbX1YQ4Y+MYsct2VUrAJ4=
github.com/go-openapi/validate v0.19.2/go.mod h1:1tRCw7m3jtI8eNWEEliiAqUIcBztB2KDnRCRMUi7GTA=
-github.com/go-openapi/validate v0.19.5/go.mod h1:8DJv2CVJQ6kGNpFW6eV9N3JviE1C85nY1c2z52x1Gk4=
+github.com/go-openapi/validate v0.19.8/go.mod h1:8DJv2CVJQ6kGNpFW6eV9N3JviE1C85nY1c2z52x1Gk4=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
+github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
+github.com/gobuffalo/flect v0.2.3 h1:f/ZukRnSNA/DUpSNDadko7Qc0PhGvsew35p/2tu+CRY=
+github.com/gobuffalo/flect v0.2.3/go.mod h1:vmkQwuZYhN5Pc4ljYQZzP+1sq+NEkK+lh20jmEmX3jc=
+github.com/gobuffalo/here v0.6.0/go.mod h1:wAG085dHOYqUpf+Ap+WOdrPTp5IYcDAs/x7PLa8Y5fM=
+github.com/godbus/dbus v0.0.0-20151105175453-c7fdd8b5cd55/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
+github.com/godbus/dbus v0.0.0-20180201030542-885f9cc04c9c/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
+github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=
+github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
+github.com/gogo/googleapis v1.2.0/go.mod h1:Njal3psf3qN6dwBtQfUmBZh2ybovJ0tlu3o/AC7HYjU=
+github.com/gogo/googleapis v1.4.0/go.mod h1:5YRNX2z1oM5gXdAkurHa942MDgEJyk02w4OecKY87+c=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
+github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
+github.com/golang/glog v1.0.0/go.mod h1:EWib/APOK0SL3dFbYqvxE3UYd8E6s1ouQ7iEp/0LWV4=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@@ -253,14 +463,17 @@ github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfb
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
-github.com/golang/mock v1.4.4 h1:l75CXGRSwbaYNpl/Z2X1XIIAMSCquvXgpVZDhwEIJsc=
+github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
+github.com/golang/mock v1.5.0 h1:jlYHihg//f7RRwuPfptm04yp4s7O6Kw8EZiVYIGcH0g=
+github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
github.com/golang/protobuf v0.0.0-20161109072736-4bd1920723d7/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
+github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
@@ -273,49 +486,76 @@ github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaS
github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
+github.com/golangplus/testing v0.0.0-20180327235837-af21d9c3145e/go.mod h1:0AA//k/eakGydO4jKRoRL2j92ZKSzTgj9tclaCrvXHk=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.1 h1:gK4Kx5IaGY9CD5sPJ36FHiBJ6ZXl0kilRiiCj+jdYp4=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
+github.com/google/cel-go v0.9.0/go.mod h1:U7ayypeSkw23szu4GaQTPJGx66c20mx8JklMSxrmI1w=
+github.com/google/cel-spec v0.6.0/go.mod h1:Nwjgxy5CbjlPrtCWjeDjUyKMl8w41YBYGjsyDdqk0xA=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
+github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-github/v33 v33.0.0 h1:qAf9yP0qc54ufQxzwv+u9H0tiVOnPJxo0lI/JXqw3ZM=
+github.com/google/go-github/v33 v33.0.0/go.mod h1:GMdDnVZY/2TsWgp/lkYnpSAh6TrzhANBBwm6k6TTEXg=
+github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk=
+github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v0.0.0-20161122191042-44d81051d367/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
-github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
+github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
+github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
+github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
+github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
github.com/google/uuid v0.0.0-20170306145142-6a5e28554805/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/google/uuid v1.1.2 h1:EVhdT+1Kseyi1/pUmXKaFxYsDNy9RQYkMWRH68J/W7Y=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.2.0 h1:qJYtXnJRWmpe7m/3XlyhrsLrEURqHRM2kxzoxXqyUDs=
+github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
-github.com/googleapis/gnostic v0.1.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
-github.com/googleapis/gnostic v0.3.1/go.mod h1:on+2t9HRStVgn95RSsFWFz+6Q0Snyqv1awfrALZdbtU=
+github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU=
github.com/googleapis/gnostic v0.5.5 h1:9fHAtK0uDfpveeqqo1hkEZJcFvYXAiCN3UutL8F9xHw=
github.com/googleapis/gnostic v0.5.5/go.mod h1:7+EbHbldMins07ALC74bsA81Ovc97DwqyJO1AENw9kA=
-github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8=
+github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
+github.com/gorilla/handlers v0.0.0-20150720190736-60c7bfde3e33/go.mod h1:Qkdc/uu4tH4g6mTK6auzZ766c4CA0Ng8+o/OAirnOIQ=
+github.com/gorilla/mux v1.7.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.4/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
+github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI=
+github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
+github.com/gosuri/uitable v0.0.4/go.mod h1:tKR86bXuXPZazfOTG1FIzvjIdXzd0mo4Vtn16vt0PJo=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
@@ -329,10 +569,12 @@ github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
+github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
+github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
@@ -343,18 +585,26 @@ github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
+github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
-github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
+github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
-github.com/imdario/mergo v0.3.9 h1:UauaLniWCFHWd+Jp9oCEkTBj8VO/9DKg3PV3VCNMDIg=
-github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/imdario/mergo v0.3.10/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
+github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
+github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
+github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
+github.com/j-keck/arping v0.0.0-20160618110441-2cf9dc699c56/go.mod h1:ymszkNOg6tORTn+6F6j+Jc8TOr5osrynvN6ivFWZ2GA=
+github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
+github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
+github.com/jmespath/go-jmespath v0.0.0-20160803190731-bd40a432e4c7/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jonboulle/clockwork v0.2.2 h1:UOGuzwb1PwsrDAObMuhUnj0p5ULPj8V/xJ7Kx9qUBdQ=
github.com/jonboulle/clockwork v0.2.2/go.mod h1:Pkfl5aHPm1nk2H9h0bjmnJD/BcgbGXUBGnn1kMkgxc8=
@@ -366,10 +616,12 @@ github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCV
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
-github.com/json-iterator/go v1.1.11 h1:uVUAXhF2To8cbw/3xN3pxj6kk7TYKs98NIrTqPlMWAQ=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
+github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
+github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
+github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
@@ -377,18 +629,27 @@ github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvW
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/klauspost/compress v1.11.3/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
+github.com/klauspost/compress v1.11.13/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE=
+github.com/lithammer/dedent v1.1.0/go.mod h1:jrXYCQtgg0nJiN+StA2KgR7w6CiQNv9Fd/Z9BP0jIOc=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
+github.com/magiconair/properties v1.8.5 h1:b6kJs+EmPFMYGkow9GiUyCyOvIwYetYJ3fSaWak/Gls=
+github.com/magiconair/properties v1.8.5/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190312143242-1de009706dbe/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
@@ -397,23 +658,43 @@ github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN
github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
github.com/mailru/easyjson v0.7.6 h1:8yTIVnZgCoiM1TgqoeTl+LfU5Jg6/xL3QhGQnimLYnA=
github.com/mailru/easyjson v0.7.6/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
+github.com/markbates/pkger v0.17.1/go.mod h1:0JoVlrol20BSywW79rN3kdFFsE5xYM+rSCQDXbLhiuI=
+github.com/marstr/guid v1.1.0/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
+github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
+github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHXY=
+github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
+github.com/mattn/go-runewidth v0.0.7/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
+github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
+github.com/mattn/go-shellwords v1.0.3/go.mod h1:3xCvwCdWdlDJUrvuMn7Wuy9eWs4pE8vqg+NOMyg4B2o=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 h1:I0XW9+e1XWDxdcEniV4rQAIOPUGDq67JSCiRCgGCZLI=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
+github.com/miekg/pkcs11 v1.0.3/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
+github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
+github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
+github.com/mitchellh/mapstructure v1.4.1 h1:CpVNEelQCZBooIPDn+AR3NpivK/TIKU8bDxdASFVQag=
+github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
+github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f/go.mod h1:OkQIRizQZAeMln+1tSwduZz7+Af5oFlKirV/MSYes2A=
+github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
+github.com/moby/sys/mountinfo v0.4.0/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
+github.com/moby/sys/mountinfo v0.4.1/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
+github.com/moby/sys/symlink v0.1.0/go.mod h1:GGDODQmbFOjFsXvfLVn3+ZRxkch54RkSiGqsZeMYowQ=
+github.com/moby/term v0.0.0-20200312100748-672ec06f55cd/go.mod h1:DdlQx2hp0Ss5/fLikoLlEeIYiATotOjgB//nb973jeo=
+github.com/moby/term v0.0.0-20201216013528-df9cb8a40635/go.mod h1:FBS0z0QWA44HXygs7VXDUOGoN/1TV3RuWkLO04am3wc=
github.com/moby/term v0.0.0-20210610120745-9d4ed1856297 h1:yH0SvLzcbZxcJXho2yh7CqdENGMQe73Cw3woZBpPli0=
github.com/moby/term v0.0.0-20210610120745-9d4ed1856297/go.mod h1:vgPCkQMyxTZ7IDy8SXRufE172gr8+K/JE/7hHFxHW3A=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -421,113 +702,175 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180320133207-05fbef0ca5da/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
-github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
+github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
+github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
+github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00/go.mod h1:Pm3mSP3c5uWn86xMLZ5Sa7JB9GsEZySvHYXCTK4E9q4=
+github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
+github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
+github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
+github.com/ncw/swift v1.0.47/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
-github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
-github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
+github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
+github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
-github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
-github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
-github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
-github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
-github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
-github.com/onsi/ginkgo v1.14.0 h1:2mOpI4JVVPBN+WQRa0WKH2eXR+Ey+uK4n7Zj0aYpIQA=
-github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
-github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
-github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
-github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
-github.com/onsi/gomega v1.10.1 h1:o0+MgICZLuZ7xjH7Vx6zS/zcu93/BEp1VwkIW1mEXCE=
-github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
+github.com/olekukonko/tablewriter v0.0.4/go.mod h1:zq6QwlOf5SlnkVbMSr5EoBv3636FWnp+qbPhuoO21uA=
+github.com/onsi/ginkgo v1.16.1 h1:foqVmeWDD6yYpK+Yz3fHyNIxFYNxswxqNFjSKe+vI54=
+github.com/onsi/ginkgo v1.16.1/go.mod h1:CObGmKUOKaSC0RjmoAK7tKyn4Azo5P2IWuoMnvwxz1E=
+github.com/onsi/gomega v1.11.0 h1:+CqWgvj0OZycCaqclBD1pxKHAU+tOkHmQIWvDHq2aug=
+github.com/onsi/gomega v1.11.0/go.mod h1:azGKhqFUon9Vuj0YmTfLSmx0FUwqXYSTl5re8lQLTUg=
+github.com/opencontainers/go-digest v0.0.0-20170106003457-a6d0ee40d420/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v1.0.0-rc1.0.20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
+github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
+github.com/opencontainers/image-spec v1.0.0/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/image-spec v1.0.1 h1:JMemWkRwHx4Zj+fVxWoMCFm/8sYGGrUVojFA6h/TRcI=
+github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc8.0.20190926000215-3e425f80a8c9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc93/go.mod h1:3NOsor4w32B2tC0Zbl8Knk4Wg84SM2ImC1fxBuqJ/H0=
+github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.2-0.20190207185410-29686dbc5559/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.3-0.20200929063507-e6143ca7d51d/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39/go.mod h1:r3f7wjNzSs2extwzU3Y+6pKfobzPh+kKFJ3ofN+3nfs=
+github.com/opencontainers/selinux v1.6.0/go.mod h1:VVGKuOLlE7v4PJyT6h7mNWvq1rzqiriPsEqVhc+svHE=
+github.com/opencontainers/selinux v1.8.0/go.mod h1:RScLhm78qiWa2gbVCcGkC7tCGdgk3ogry1nUQF8Evvo=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
+github.com/pelletier/go-toml v1.8.1/go.mod h1:T2/BmBdy8dvIRq1a/8aqjN41wvWlN4lrapLU/GW4pbc=
+github.com/pelletier/go-toml v1.9.3 h1:zeC5b1GviRUyKYd6OJPvBU/mcVDVoL1OhT17FCt5dSQ=
+github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/errors v0.8.1-0.20171018195549-f15c970de5b7/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
+github.com/prometheus/client_golang v0.0.0-20180209125602-c332b6f63c06/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
+github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0 h1:HNkLOAEQMIDv/K+04rukrLx6ch7msSRwf3/SASFAGtQ=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
+github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/common v0.0.0-20180110214958-89604d197083/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
+github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
-github.com/prometheus/common v0.26.0 h1:iMAkS2TDoNWnKM+Kopnx/8tnEStIfpYA0ur0xQzzhMQ=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
+github.com/prometheus/common v0.28.0 h1:vGVfV9KrDTvWt5boZO0I19g2E3CsWfpPPKZM9dt3mEw=
+github.com/prometheus/common v0.28.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
+github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
+github.com/prometheus/procfs v0.0.0-20190522114515-bc1a522cf7b1/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
-github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
+github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
+github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
+github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
+github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0 h1:mxy4L2jP6qMonqmq+aTtOx1ifVWUgG/TAmntgbh3xv4=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
+github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
+github.com/safchain/ethtool v0.0.0-20190326074333-42ed695e3de8/go.mod h1:Z0q5wiBQGYcxhMZ6gUqHn6pYNLypFAvaL3UvgZLR0U4=
+github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
+github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
+github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
+github.com/sirupsen/logrus v1.0.4-0.20170822132746-89742aefa4b2/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
+github.com/sirupsen/logrus v1.0.6/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
+github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
+github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
+github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
+github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/soheilhy/cmux v0.1.5 h1:jjzc5WVemNEDTLwv9tlmemhC73tI08BNOIGwBOo10Js=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
-github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
+github.com/spf13/afero v1.6.0 h1:xoax2sJ2DT8S8xA2paPFjDCScCNeWsg75VG0DLRreiY=
+github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
+github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
+github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
+github.com/spf13/cobra v0.0.2-0.20171109065643-2da4a54c5cee/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
-github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
-github.com/spf13/cobra v1.1.3 h1:xghbfqPkxzxP3C/f3n5DdpAbdKLj4ZE4BWQI362l53M=
+github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
+github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
github.com/spf13/cobra v1.1.3/go.mod h1:pGADOWyqRD/YMrPZigI/zbliZ2wVD/23d+is3pSWzOo=
+github.com/spf13/cobra v1.2.1 h1:+KmjbUw1hriSNMF55oPrkZcb27aECyrj8V2ytv7kWDw=
+github.com/spf13/cobra v1.2.1/go.mod h1:ExllRjgxM/piMAM+3tAZvg8fsklGAf3tPfi+i8t68Nk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
+github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
+github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.1-0.20171106142849-4c012f6dcd95/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
-github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
+github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
+github.com/spf13/viper v1.8.1 h1:Kq1fyeebqsBfbjZj4EL7gj2IO0mMaiyjYUWcUsl2O44=
+github.com/spf13/viper v1.8.1/go.mod h1:o0Pch8wJ9BVSWGQMbra6iw0oQ5oktSIBaujf1rJH9Ns=
+github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980/go.mod h1:AO3tvPzVZ/ayst6UlUKUv6rcPQInYe3IknH3jYhAKu8=
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
+github.com/stretchr/objx v0.0.0-20180129172003-8a3f7159479f/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/testify v0.0.0-20151208002404-e3a8ff8ce365/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
+github.com/stretchr/testify v0.0.0-20180303142811-b89eecf5ca5d/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
@@ -535,15 +878,29 @@ github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
+github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
+github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
+github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
+github.com/tchap/go-patricia v2.2.6+incompatible/go.mod h1:bmLyhP68RS6kStMGxByiQ23RP/odRBOTVjwp2cDyi6I=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20201229170055-e5319fda7802 h1:uruHq4dN7GR16kFc5fp3d1RIYzJW5onx8Ybykw2YQFA=
github.com/tmc/grpc-websocket-proxy v0.0.0-20201229170055-e5319fda7802/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
-github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
+github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
+github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
+github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
+github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/vektah/gqlparser v1.1.2/go.mod h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv3vaXspKw=
+github.com/vishvananda/netlink v0.0.0-20181108222139-023a6dafdcdf/go.mod h1:+SR5DhBJrl6ZM7CoCKvpw5BKroDKQ+PJqOg65H/2ktk=
+github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
+github.com/vishvananda/netlink v1.1.1-0.20201029203352-d40f9887b852/go.mod h1:twkDnbuQxJYemMlGd4JFIcuhgX83tXhKS2B/PRMpOho=
+github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
+github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
+github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
github.com/vmware-tanzu/vm-operator-api v0.1.4-0.20201118171008-5ca641b0e126 h1:kDAAVFnTW9YHGuUTMGQoWz9AV7aZo3+wCbtzyenVktI=
github.com/vmware-tanzu/vm-operator-api v0.1.4-0.20201118171008-5ca641b0e126/go.mod h1:mubK0QMyaA2TbeAmGsu2GVfiqDFppNUAUqoMPoKFgzM=
github.com/vmware/govmomi v0.22.1 h1:ZIEYmBdAS2i+s7RctapqdHfbeGiUcL8LRN05uS4TfPc=
@@ -555,18 +912,32 @@ github.com/vmware/vsphere-automation-sdk-go/runtime v0.2.0 h1:AM5AK9cyiJWFIfxrh1
github.com/vmware/vsphere-automation-sdk-go/runtime v0.2.0/go.mod h1:M6pTKDrJrPlVG++lboLRf0bDYc3TJ2fsR+KOoWXfCns=
github.com/vmware/vsphere-automation-sdk-go/services/nsxt v0.3.0 h1:Ekf0/umhKdr4N0oURDFlkhZHVm6w0eXzbsn6yc/vL+4=
github.com/vmware/vsphere-automation-sdk-go/services/nsxt v0.3.0/go.mod h1:k9tf91B5Ah7gkaM2s+Z6nATmn6gKmgt8AqJ8RUiKLfo=
+github.com/willf/bitset v1.1.11-0.20200630133818-d5bec3311243/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
+github.com/willf/bitset v1.1.11/go.mod h1:83CECat5yLh5zVOf4P1ErAgKA5UDvKtgyUABdr3+MjI=
+github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
+github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
+github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 h1:eY9dn8+vbi4tKz5Qo6v2eYzo7kUS51QINcR5jNpbZS8=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
+github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca/go.mod h1:ce1O1j6UtZfjr22oyGxGLbauSBp2YVXpARAosm7dHBg=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
+github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
+github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
+github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
+github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs=
+github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPSUX/bi6SeDMUh6brw0nXpxHnc96TguQh0+r/ssA=
+github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f/go.mod h1:GlGEuHIJweS1mbCqG+7vt2nvWLzLLnRHbXz5JKd/Qbg=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
+go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/bbolt v1.3.6 h1:/ecaJf0sk1l4l6V4awd65v2C3ILy7MSj+s/x1ADCIMU=
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
-go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738 h1:VcrIfasaLFkyjk6KNlXQSzO+B0fZcnECiDrKJsfxka0=
-go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg=
+go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489 h1:1JFLBqwIgdyHN1ZtgjTBwO+blA6gVOmZurpiMEsETKo=
+go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489/go.mod h1:yVHk9ub3CSBatqGNg7GRmsnfLWtoW60w4eDYfh7vHDg=
go.etcd.io/etcd/api/v3 v3.5.0 h1:GsV3S+OfZEOCNXdtNkBSR7kgLobAa/SO6tCxRa0GAYw=
go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/client/pkg/v3 v3.5.0 h1:2aQv6F436YnN7I4VbI8PPYrBhu+SmrTaADcf8Mi/6PU=
@@ -584,10 +955,14 @@ go.etcd.io/etcd/server/v3 v3.5.0/go.mod h1:3Ah5ruV+M+7RZr0+Y/5mNLwC+eQlni+mQmOVd
go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.1/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.2/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
+go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
+go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/contrib v0.20.0 h1:ubFQUn0VCZ0gPwIoJfBJVpeBlyRMxu8Mm/huKWYd9p0=
go.opentelemetry.io/contrib v0.20.0/go.mod h1:G/EtFaa6qaN7+LxqfIAT3GiZa7Wv5DTBUzl5H4LY0Kc=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.20.0 h1:sO4WKdPAudZGKPcpZT4MJn6JaDmpyLrMPDGGyA1SttE=
@@ -612,34 +987,43 @@ go.opentelemetry.io/otel/trace v0.20.0 h1:1DL6EXUdcg95gukhuRRvLDO/4X5THh/5dIV52l
go.opentelemetry.io/otel/trace v0.20.0/go.mod h1:6GjCW8zgDjwGHGa6GkyeB8+/5vjT16gUEi0Nf1iBdgw=
go.opentelemetry.io/proto/otlp v0.7.0 h1:rwOQPCuKAKmwGKq2aVNnYIibI6wnV7EvzgfTCzcdGg8=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
+go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5/go.mod h1:nmDLcffg48OtT/PSW0Hg7FvpRQsQh5OSqIylirxKC7o=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
-go.uber.org/goleak v1.1.10 h1:z+mqJhf6ss6BSfSM671tgKyZBFPTTJM+HLxnhPC3wu0=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
+go.uber.org/goleak v1.1.11-0.20210813005559-691160354723/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
+go.uber.org/goleak v1.1.12 h1:gZAh5/EyT/HQwlpkCy6wTpqfH9H8Lz8zbm3dZh+OyzA=
+go.uber.org/goleak v1.1.12/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.6.0 h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
-go.uber.org/zap v1.17.0 h1:MTjgFu6ZLKvY6Pvaqk97GlxNBuMpV4Hy/3P6tRGlI2U=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
+go.uber.org/zap v1.19.0/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
+go.uber.org/zap v1.19.1 h1:ue41HOKd1vGURxrmeKIgELGb3jPW9DMUDGtsinblHwI=
+go.uber.org/zap v1.19.1/go.mod h1:j3DNczoxDZroyBnOT1L/Q79cfUMGZxlv/9dzN7SM1rI=
+golang.org/x/crypto v0.0.0-20171113213409-9f005a07e0d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20181009213950-7c1a557ab941/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
-golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
-golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190320223903-b7391e95e576/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190617133340-57b3e21c3d56/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
-golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20200728195943-123391ffb6de/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
-golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83 h1:/ZScEX8SfEmUGRHs0gxpqteO5nfNW6axyZbBdw9A12g=
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
+golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
+golang.org/x/crypto v0.0.0-20210817164053-32db794688a5 h1:HWj/xjIHfjYU5nVXpTM0s39J9CbLn7Cc5a7IC5rwsMQ=
+golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -662,7 +1046,7 @@ golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHl
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
-golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 h1:VLliZ0d+/avPrXXH+OakdXhpJuEoBZuwh1m2j7U6Iug=
+golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
@@ -672,13 +1056,18 @@ golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzB
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
-golang.org/x/mod v0.4.2 h1:Gz96sIWK3OalVv/I/qNygP42zyoKp3xptRVCWRFEBvo=
+golang.org/x/mod v0.3.1-0.20200828183125-ce943fd02449/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.5.1 h1:OJxoQ/rynoF0dcCdI7cLPktw/hR2cueqYfjm43oqK38=
+golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181011144130-49bb7cea24b1/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -690,9 +1079,12 @@ golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190619014844-b5b0513f8c1b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
@@ -704,26 +1096,54 @@ golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
-golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210224082022-3d97a244fca7/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
-golang.org/x/net v0.0.0-20210520170846-37e1c6afe023 h1:ADo5wSpq2gqaCGQWzk7S5vd//0iyyLeAratkEoG5dLE=
-golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20210825183410-e898025ed96a/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20211209124913-491a49abca63 h1:iocB37TsdFuN6IBRZ+ry36wrkoV51/tl5vOWqkcPGvY=
+golang.org/x/net v0.0.0-20211209124913-491a49abca63/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
-golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d h1:TzXSXBo42m9gQenoE3b9BGiEpg5IG2JkU5FkPIawgtw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f h1:Qmd2pbz05z7z6lm0DrgQVVPuBm92jqujBKMHMOlOQEw=
+golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
@@ -736,8 +1156,6 @@ golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20190209173611-3b5209105503/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190321052220-f7bb7a8bee54/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -745,59 +1163,108 @@ golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190514135907-3a4b5fb9f71f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190522044717-8097e1b27ff5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190812073006-9eafafc0a87e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191002063906-3421d5a6bb1c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191210023423-ac6580df4449/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200120151820-655fe14d7479/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200728102440-3e129f6d46b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200817155316-9781c653f443/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200831180312-196b9ba8737a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200909081042-eff7692f9009/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200916030750-2334cc1a136f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200922070232-aee5d888a860/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200923182605-d9f96fdee20d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201117170446-d9b008d0a637/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201202213521-69691e467435/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210426230700-d19ff857e887/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20210616094352-59db8d763f22 h1:RqytpXGR1iVNX7psjB3ff8y7sNFinVFvkx1c8SjBkio=
+golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210817190340-bfb29a6856f2/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211029165221-6e7872819dc8 h1:M69LAlWZCshgp0QSzyDcSsSIejIEeuaCVpmwcKwyLMk=
+golang.org/x/sys v0.0.0-20211029165221-6e7872819dc8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d h1:SZxvLBoTP5yHO3Frd4z4vrF+DBX9vMVanchswa69toE=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
+golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b h1:9zKuko04nR4gjZ4+DNjHqRlAJqbJETHwiNKDqTfOjfE=
+golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
+golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac h1:7zkz7BUtwNFFqcowJ+RIgu2MaV/MapERkDIy+mwPyjs=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -823,7 +1290,6 @@ golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgw
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
-golang.org/x/tools v0.0.0-20190920225731-5eefd052ad72/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@@ -841,19 +1307,40 @@ golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapK
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
+golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
+golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
+golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
+golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
+golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
+golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
+golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
-golang.org/x/tools v0.1.2 h1:kRBLX7v7Af8W7Gdbbc908OJcdgtK8bOz9Uaj8/F1ACA=
+golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.6-0.20210820212750-d4cc65f0b2ff/go.mod h1:YD9qOF0M9xpSpdWTBbzEl5e/RnCefISl8E5Noe10jFM=
+golang.org/x/tools v0.1.8 h1:P1HhGGuLW4aAclzjtmJdf0mJOjVUZUzOTqkAkWL+l6w=
+golang.org/x/tools v0.1.8/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
-gomodules.xyz/jsonpatch/v2 v2.0.1 h1:xyiBuvkD2g5n7cYzx6u2sxQvsAy4QJsZFCzGVdzOXZ0=
-gomodules.xyz/jsonpatch/v2 v2.0.1/go.mod h1:IhYNNY4jnS53ZnfE4PAmpKtDpTCj1JFXc+3mwe7XcUU=
+gomodules.xyz/jsonpatch/v2 v2.2.0 h1:4pT439QV83L+G9FkcCriY6EkpcK6r6bK+A5FBUMI7qY=
+gomodules.xyz/jsonpatch/v2 v2.2.0/go.mod h1:WXp+iVDkoLQqPudfQ9GBlwB2eZ5DKOnjQZCYdOS8GPY=
+google.golang.org/api v0.0.0-20160322025152-9bf6e6e569ff/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
@@ -863,18 +1350,34 @@ google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsb
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
+google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
+google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
+google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
+google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
+google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34qYtE=
+google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8=
+google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
+google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
+google.golang.org/api v0.44.0/go.mod h1:EBOGZqzyhtvMDoxwS97ctnh0zUmYY6CxqXsc1AvkYD8=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
-google.golang.org/appengine v1.6.5 h1:tycE03LOZYQNhDpS27tcQdAzLCVMaj7QT2SXxebnpCM=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
+google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/cloud v0.0.0-20151119220103-975617b05ea8/go.mod h1:0H1ncTHf11KCFhTc/+EFRbzSCOZx+VUbRMk55Yv5MYk=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190522204451-c2c4e71fbf69/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
@@ -883,32 +1386,69 @@ google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvx
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200117163144-32f20d992d24/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200423170343-7949de9c1215/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
+google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
+google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201019141844-1ed22bb0c154/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
-google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c h1:wtujag7C+4D6KMoulW9YauvK2lgdvCMS260jsqqBXr0=
+google.golang.org/genproto v0.0.0-20201102152239-715cce707fb0/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
+google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2 h1:NHN4wOCScVzKhPenJ2dt+BTs3X/XkBVI/Rh4iDt55T8=
+google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
+google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
+google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
+google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
+google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
+google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
+google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
+google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
+google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
-google.golang.org/grpc v1.38.0 h1:/9BgsAsa5nWe26HqOlvlgJnqBuktYOLCgjCPqsa56W0=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
+google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
+google.golang.org/grpc v1.40.0 h1:AGJ0Ih4mHjSeibYkFGh1dD9KJ/eOtZ93I6hoHhukQ5Q=
+google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -920,26 +1460,33 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
-google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
+google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
+google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
+gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f h1:BLraFXnmrev5lT+xlilqcH8XK9/i0At2xKjWk4p6zsU=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
-gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/gcfg.v1 v1.2.3 h1:m8OOJ4ccYHnx2f4gQwpno8nAX5OGOh7RLaaz0pj3Ogs=
gopkg.in/gcfg.v1 v1.2.3/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o=
+gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
+gopkg.in/ini.v1 v1.62.0 h1:duBzk771uxoUuOlyRLkHsygud9+5lrlGjdFBb4mSKDU=
+gopkg.in/ini.v1 v1.62.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/natefinch/lumberjack.v2 v2.0.0 h1:1Lc07Kr7qY4U2YPouBjpCLxpiyxIVoxqXgkXLknAOE8=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
+gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
+gopkg.in/square/go-jose.v2 v2.5.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME=
@@ -950,6 +1497,7 @@ gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
@@ -969,69 +1517,125 @@ honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWh
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
+honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.17.4/go.mod h1:5qxx6vjmwUVG2nHQTKGlLts8Tbok8PzHl4vHtVFuZCA=
-k8s.io/api v0.18.6/go.mod h1:eeyxr+cwCjMdLAmr2W3RyDI0VvTawSg/3RFFBEnmZGI=
-k8s.io/api v0.22.1 h1:ISu3tD/jRhYfSW8jI/Q1e+lRxkR7w9UwQEZ7FgslrwY=
-k8s.io/api v0.22.1/go.mod h1:bh13rkTp3F1XEaLGykbyRD2QaTTzPm0e/BMd8ptFONY=
-k8s.io/apiextensions-apiserver v0.18.6/go.mod h1:lv89S7fUysXjLZO7ke783xOwVTm6lKizADfvUM/SS/M=
-k8s.io/apiextensions-apiserver v0.22.1 h1:YSJYzlFNFSfUle+yeEXX0lSQyLEoxoPJySRupepb0gE=
-k8s.io/apiextensions-apiserver v0.22.1/go.mod h1:HeGmorjtRmRLE+Q8dJu6AYRoZccvCMsghwS8XTUYb2c=
+k8s.io/api v0.20.1/go.mod h1:KqwcCVogGxQY3nBlRpwt+wpAMF/KjaCc7RpywacvqUo=
+k8s.io/api v0.20.4/go.mod h1:++lNL1AJMkDymriNniQsWRkMDzRaX2Y/POTUi8yvqYQ=
+k8s.io/api v0.20.6/go.mod h1:X9e8Qag6JV/bL5G6bU8sdVRltWKmdHsFUGS3eVndqE8=
+k8s.io/api v0.21.4/go.mod h1:fTVGP+M4D8+00FN2cMnJqk/eb/GH53bvmNs2SVTmpFk=
+k8s.io/api v0.23.0/go.mod h1:8wmDdLBHBNxtOIytwLstXt5E9PddnZb0GaMcqsvDBpg=
+k8s.io/api v0.23.1 h1:ncu/qfBfUoClqwkTGbeRqqOqBCRoUAflMuOaOD7J0c8=
+k8s.io/api v0.23.1/go.mod h1:WfXnOnwSqNtG62Y1CdjoMxh7r7u9QXGCkA1u0na2jgo=
+k8s.io/apiextensions-apiserver v0.21.4/go.mod h1:OoC8LhI9LnV+wKjZkXIBbLUwtnOGJiTRE33qctH5CIk=
+k8s.io/apiextensions-apiserver v0.23.0/go.mod h1:xIFAEEDlAZgpVBl/1VSjGDmLoXAWRG40+GsWhKhAxY4=
+k8s.io/apiextensions-apiserver v0.23.1 h1:xxE0q1vLOVZiWORu1KwNRQFsGWtImueOrqSl13sS5EU=
+k8s.io/apiextensions-apiserver v0.23.1/go.mod h1:0qz4fPaHHsVhRApbtk3MGXNn2Q9M/cVWWhfHdY2SxiM=
k8s.io/apimachinery v0.17.4/go.mod h1:gxLnyZcGNdZTCLnq3fgzyg2A5BVCHTNDFrw8AmuJ+0g=
-k8s.io/apimachinery v0.18.6/go.mod h1:OaXp26zu/5J7p0f92ASynJa1pZo06YlV9fG7BoWbCko=
-k8s.io/apimachinery v0.22.1 h1:DTARnyzmdHMz7bFWFDDm22AM4pLWTQECMpRTFu2d2OM=
-k8s.io/apimachinery v0.22.1/go.mod h1:O3oNtNadZdeOMxHFVxOreoznohCpy0z6mocxbZr7oJ0=
-k8s.io/apiserver v0.18.6/go.mod h1:Zt2XvTHuaZjBz6EFYzpp+X4hTmgWGy8AthNVnTdm3Wg=
-k8s.io/apiserver v0.22.1 h1:Ul9Iv8OMB2s45h2tl5XWPpAZo1VPIJ/6N+MESeed7L8=
-k8s.io/apiserver v0.22.1/go.mod h1:2mcM6dzSt+XndzVQJX21Gx0/Klo7Aen7i0Ai6tIa400=
-k8s.io/client-go v0.18.6/go.mod h1:/fwtGLjYMS1MaM5oi+eXhKwG+1UHidUEXRh6cNsdO0Q=
-k8s.io/client-go v0.22.1 h1:jW0ZSHi8wW260FvcXHkIa0NLxFBQszTlhiAVsU5mopw=
-k8s.io/client-go v0.22.1/go.mod h1:BquC5A4UOo4qVDUtoc04/+Nxp1MeHcVc1HJm1KmG8kk=
-k8s.io/cloud-provider v0.22.1 h1:bxNgHd0chiPpXQ8jzibRrbwuCRPrTgQiFSLbgVebzHs=
-k8s.io/cloud-provider v0.22.1/go.mod h1:Dm3xJ4j3l88rZ0LBCRLrt7V9Pz0avRAzZSU6ENwYnrw=
-k8s.io/code-generator v0.18.6/go.mod h1:TgNEVx9hCyPGpdtCWA34olQYLkh3ok9ar7XfSsr8b6c=
-k8s.io/code-generator v0.22.1 h1:zAcKpn+xe9Iyc4qtZlfg4tD0f+SO2h5+e/s4pZPOVhs=
-k8s.io/code-generator v0.22.1/go.mod h1:eV77Y09IopzeXOJzndrDyCI88UBok2h6WxAlBwpxa+o=
-k8s.io/component-base v0.18.6/go.mod h1:knSVsibPR5K6EW2XOjEHik6sdU5nCvKMrzMt2D4In14=
-k8s.io/component-base v0.22.1 h1:SFqIXsEN3v3Kkr1bS6rstrs1wd45StJqbtgbQ4nRQdo=
-k8s.io/component-base v0.22.1/go.mod h1:0D+Bl8rrnsPN9v0dyYvkqFfBeAd4u7n77ze+p8CMiPo=
-k8s.io/controller-manager v0.22.1 h1:6yu4ApWEk7DxIc4Bp7Ibxq46vopV9+VVEjZTNE+1Qd0=
-k8s.io/controller-manager v0.22.1/go.mod h1:HN5qzvZs8A4fd/xuqDZwqe+Nsz249a2Kbq/YqZ903n8=
+k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
+k8s.io/apimachinery v0.20.2/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
+k8s.io/apimachinery v0.20.4/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
+k8s.io/apimachinery v0.20.6/go.mod h1:ejZXtW1Ra6V1O5H8xPBGz+T3+4gfkTCeExAHKU57MAc=
+k8s.io/apimachinery v0.21.4/go.mod h1:H/IM+5vH9kZRNJ4l3x/fXP/5bOPJaVP/guptnZPeCFI=
+k8s.io/apimachinery v0.23.0/go.mod h1:fFCTTBKvKcwTPFzjlcxp91uPFZr+JA0FubU4fLzzFYc=
+k8s.io/apimachinery v0.23.1 h1:sfBjlDFwj2onG0Ijx5C+SrAoeUscPrmghm7wHP+uXlo=
+k8s.io/apimachinery v0.23.1/go.mod h1:SADt2Kl8/sttJ62RRsi9MIV4o8f5S3coArm0Iu3fBno=
+k8s.io/apiserver v0.20.1/go.mod h1:ro5QHeQkgMS7ZGpvf4tSMx6bBOgPfE+f52KwvXfScaU=
+k8s.io/apiserver v0.20.4/go.mod h1:Mc80thBKOyy7tbvFtB4kJv1kbdD0eIH8k8vianJcbFM=
+k8s.io/apiserver v0.20.6/go.mod h1:QIJXNt6i6JB+0YQRNcS0hdRHJlMhflFmsBDeSgT1r8Q=
+k8s.io/apiserver v0.21.4/go.mod h1:SErUuFBBPZUcD2nsUU8hItxoYheqyYr2o/pCINEPW8g=
+k8s.io/apiserver v0.23.0/go.mod h1:Cec35u/9zAepDPPFyT+UMrgqOCjgJ5qtfVJDxjZYmt4=
+k8s.io/apiserver v0.23.1 h1:vWGf8LcV9Pk/z5rdLmCiBDqE21ccbe930dzrtVMhw9g=
+k8s.io/apiserver v0.23.1/go.mod h1:Bqt0gWbeM2NefS8CjWswwd2VNAKN6lUKR85Ft4gippY=
+k8s.io/cli-runtime v0.21.4/go.mod h1:eRbLHYkdVWzvG87yrkgGd8CqX6/+fAG9DTdAqTXmlRY=
+k8s.io/client-go v0.20.1/go.mod h1:/zcHdt1TeWSd5HoUe6elJmHSQ6uLLgp4bIJHVEuy+/Y=
+k8s.io/client-go v0.20.4/go.mod h1:LiMv25ND1gLUdBeYxBIwKpkSC5IsozMMmOOeSJboP+k=
+k8s.io/client-go v0.20.6/go.mod h1:nNQMnOvEUEsOzRRFIIkdmYOjAZrC8bgq0ExboWSU1I0=
+k8s.io/client-go v0.21.4/go.mod h1:t0/eMKyUAq/DoQ7vW8NVVA00/nomlwC+eInsS8PxSew=
+k8s.io/client-go v0.23.0/go.mod h1:hrDnpnK1mSr65lHHcUuIZIXDgEbzc7/683c6hyG4jTA=
+k8s.io/client-go v0.23.1 h1:Ma4Fhf/p07Nmj9yAB1H7UwbFHEBrSPg8lviR24U2GiQ=
+k8s.io/client-go v0.23.1/go.mod h1:6QSI8fEuqD4zgFK0xbdwfB/PthBsIxCJMa3s17WlcO0=
+k8s.io/cloud-provider v0.23.1 h1:KQH/nq+stfw0i6/9H8LFZ69PLKjdsl4WuqT6wWmVUqw=
+k8s.io/cloud-provider v0.23.1/go.mod h1:kI8AnYwOSru5Bci8pPUWwV5kJMVkY1ICOp1p8KKZWpc=
+k8s.io/cluster-bootstrap v0.21.4 h1:dnCOcVJdCAMz8+nvqodrFv/yd/3Ae9Jn14cChpQjps8=
+k8s.io/cluster-bootstrap v0.21.4/go.mod h1:GtXGuiEtdV4XQJcscR6qQCm/vtQWkhUi3qnl9KL9jzw=
+k8s.io/code-generator v0.21.4/go.mod h1:K3y0Bv9Cz2cOW2vXUrNZlFbflhuPvuadW6JdnN6gGKo=
+k8s.io/code-generator v0.23.0/go.mod h1:vQvOhDXhuzqiVfM/YHp+dmg10WDZCchJVObc9MvowsE=
+k8s.io/code-generator v0.23.1 h1:ViFOlP/0bYD7VrnUDS+ch5ej5EIuMawFmHcRuv9Yxyw=
+k8s.io/code-generator v0.23.1/go.mod h1:V7yn6VNTCWW8GqodYCESVo95fuiEg713S8B7WacWZDA=
+k8s.io/component-base v0.20.1/go.mod h1:guxkoJnNoh8LNrbtiQOlyp2Y2XFCZQmrcg2n/DeYNLk=
+k8s.io/component-base v0.20.4/go.mod h1:t4p9EdiagbVCJKrQ1RsA5/V4rFQNDfRlevJajlGwgjI=
+k8s.io/component-base v0.20.6/go.mod h1:6f1MPBAeI+mvuts3sIdtpjljHWBQ2cIy38oBIWMYnrM=
+k8s.io/component-base v0.21.4/go.mod h1:ZKG0eHVX+tUDcaoIGpU3Vtk4TIjMddN9uhEWDmW6Nyg=
+k8s.io/component-base v0.23.0/go.mod h1:DHH5uiFvLC1edCpvcTDV++NKULdYYU6pR9Tt3HIKMKI=
+k8s.io/component-base v0.23.1 h1:j/BqdZUWeWKCy2v/jcgnOJAzpRYWSbGcjGVYICko8Uc=
+k8s.io/component-base v0.23.1/go.mod h1:6llmap8QtJIXGDd4uIWJhAq0Op8AtQo6bDW2RrNMTeo=
+k8s.io/component-helpers v0.21.4/go.mod h1:/5TBNWmxaAymZweO1JWv3Pt5rcYJV1LbWWY0x1rDdVU=
+k8s.io/component-helpers v0.23.1 h1:Xrtj0LwXUqYyTPvN2bOE2UcqURX+uSBmKX1koNGhVxI=
+k8s.io/component-helpers v0.23.1/go.mod h1:ZK24U+2oXnBPcas2KolLigVVN9g5zOzaHLkHiQMFGr0=
+k8s.io/controller-manager v0.23.1 h1:Uc4+xV/gJckpU6xVcyKMUTx1xHU1HeZyyF4eVmxGLi0=
+k8s.io/controller-manager v0.23.1/go.mod h1:AFE4qIllvTh+nRwGr3SRSUt7F+xVSzXCeb0hhzYlU4k=
+k8s.io/cri-api v0.17.3/go.mod h1:X1sbHmuXhwaHs9xxYffLqJogVsnI+f6cPRcgPel7ywM=
+k8s.io/cri-api v0.20.1/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
+k8s.io/cri-api v0.20.4/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
+k8s.io/cri-api v0.20.6/go.mod h1:ew44AjNXwyn1s0U4xCKGodU7J1HzBeZ1MpGrpa5r8Yc=
k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
-k8s.io/gengo v0.0.0-20200114144118-36b2048a9120/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
-k8s.io/gengo v0.0.0-20201214224949-b6c5ce23f027 h1:Uusb3oh8XcdzDF/ndlI4ToKTYVlkCSJP39SRY2mfRAw=
k8s.io/gengo v0.0.0-20201214224949-b6c5ce23f027/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
+k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c h1:GohjlNKauSai7gN4wsJkeZ3WAJx4Sh+oT/b5IYn5suA=
+k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
-k8s.io/klog v0.3.0/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v1.0.0 h1:Pt+yjF5aB1xDSVbau4VsWe+dQNzA0qv1LlXdC2dF6Q8=
k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
-k8s.io/klog/v2 v2.9.0 h1:D7HV+n1V57XeZ0m6tdRkfknthUaM06VFbWldOFh8kzM=
+k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
+k8s.io/klog/v2 v2.8.0/go.mod h1:hy9LJ/NvuK+iVyP4Ehqva4HxZG/oXyIS3n3Jmire4Ec=
k8s.io/klog/v2 v2.9.0/go.mod h1:hy9LJ/NvuK+iVyP4Ehqva4HxZG/oXyIS3n3Jmire4Ec=
+k8s.io/klog/v2 v2.30.0 h1:bUO6drIvCIsvZ/XFgfxoGFQU/a4Qkh0iAlvUR7vlHJw=
+k8s.io/klog/v2 v2.30.0/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20191107075043-30be4d16710a/go.mod h1:1TqjTSzOxsLGIKfj0lK8EeCP7K1iUG65v09OM0/WG5E=
-k8s.io/kube-openapi v0.0.0-20200410145947-61e04a5be9a6/go.mod h1:GRQhZsXIAJ1xR0C9bd8UpWHZ5plfAS9fzPjJuQ6JL3E=
-k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e h1:KLHHjkdQFomZy8+06csTWZ0m1343QqxZhR2LJ1OxCYM=
-k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e/go.mod h1:vHXdDvt9+2spS2Rx9ql3I8tycm3H9FDfdUoIuKCefvw=
-k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew=
-k8s.io/utils v0.0.0-20200603063816-c1c6865ac451/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
-k8s.io/utils v0.0.0-20210707171843-4b05e18ac7d9 h1:imL9YgXQ9p7xmPzHFm/vVd/cF78jad+n4wK1ABwYtMM=
-k8s.io/utils v0.0.0-20210707171843-4b05e18ac7d9/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
+k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM=
+k8s.io/kube-openapi v0.0.0-20210305001622-591a79e4bda7/go.mod h1:wXW5VT87nVfh/iLV8FpR2uDvrFyomxbtb1KivDbvPTE=
+k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 h1:E3J9oCLlaobFUqsjG9DfKbP2BmgwBL2p7pn0A3dG9W4=
+k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65/go.mod h1:sX9MT8g7NVZM5lVL/j8QyCCJe8YSMW30QvGZWaCIDIk=
+k8s.io/kubectl v0.21.4/go.mod h1:rRYB5HeScoGQKxZDQmus17pTSVIuqfm0D31ApET/qSM=
+k8s.io/kubernetes v1.13.0/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk=
+k8s.io/metrics v0.21.4/go.mod h1:uhWoVuVumUMSeCa1B1p2tm4Y4XuZIg0n24QEtB54wuA=
+k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
+k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
+k8s.io/utils v0.0.0-20210930125809-cb0fa318a74b h1:wxEMGetGMur3J1xuGLQY7GEQYg9bZxKn3tKo5k/eYcs=
+k8s.io/utils v0.0.0-20210930125809-cb0fa318a74b/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
-sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.7/go.mod h1:PHgbrJT7lCHcxMU+mDHEm+nx46H4zuuHZkDP6icnhu0=
-sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.22 h1:fmRfl9WJ4ApJn7LxNuED4m0t18qivVQOxP6aAYG9J6c=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.14/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.15/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.22/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
-sigs.k8s.io/controller-runtime v0.6.5 h1:DSRu6E4FBeVwd/p8niskCVWnX5TSC6ZT9L/OIWOBK7s=
-sigs.k8s.io/controller-runtime v0.6.5/go.mod h1:WlZNXcM0++oyaQt4B7C2lEE5JYRs8vJUzRP4N4JpdAY=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.25 h1:DEQ12ZRxJjsglk5JIi5bLgpKaHihGervKmg5uryaEHw=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.25/go.mod h1:Mlj9PNLmG9bZ6BHFwFKDo5afkpWyUISkb9Me0GnK66I=
+sigs.k8s.io/cluster-api v0.4.5 h1:yqn3WW28ZHzyhjEgAKC1gsURDsZTqyO3Lg06uZ9MGHw=
+sigs.k8s.io/cluster-api v0.4.5/go.mod h1:KKycu4yJEm1sxKG5UaHX9ZnYxRiBzJsFjJVmvMQUP2k=
+sigs.k8s.io/cluster-api/test v0.4.5 h1:3QRnrbCoNPPk5G2Yr6bU+b0E63HKjJlALYT5Kfbt4E4=
+sigs.k8s.io/cluster-api/test v0.4.5/go.mod h1:QSthG8w6jaeNE9hwDxntAspluG8xPJFzi1ptSGoNNbw=
+sigs.k8s.io/controller-runtime v0.9.7/go.mod h1:nExcHcQ2zvLMeoO9K7rOesGCmgu32srN5SENvpAEbGA=
+sigs.k8s.io/controller-runtime v0.11.0 h1:DqO+c8mywcZLFJWILq4iktoECTyn30Bkj0CwgqMpZWQ=
+sigs.k8s.io/controller-runtime v0.11.0/go.mod h1:KKwLiTooNGu+JmLZGn9Sl3Gjmfj66eMbCQznLP5zcqA=
+sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 h1:fD1pz4yfdADVNfFmcP2aBEtudwUQ1AlLnRBALr33v3s=
+sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6/go.mod h1:p4QtZmO4uMYipTQNzagwnNoseA6OxSUutVw05NhYDRs=
+sigs.k8s.io/kind v0.11.1 h1:pVzOkhUwMBrCB0Q/WllQDO3v14Y+o2V0tFgjTqIUjwA=
+sigs.k8s.io/kind v0.11.1/go.mod h1:fRpgVhtqAWrtLB9ED7zQahUimpUXuG/iHT88xYqEGIA=
+sigs.k8s.io/kustomize/api v0.8.8/go.mod h1:He1zoK0nk43Pc6NlV085xDXDXTNprtcyKZVm3swsdNY=
+sigs.k8s.io/kustomize/cmd/config v0.9.10/go.mod h1:Mrby0WnRH7hA6OwOYnYpfpiY0WJIMgYrEDfwOeFdMK0=
+sigs.k8s.io/kustomize/kustomize/v4 v4.1.2/go.mod h1:PxBvo4WGYlCLeRPL+ziT64wBXqbgfcalOS/SXa/tcyo=
+sigs.k8s.io/kustomize/kyaml v0.10.17/go.mod h1:mlQFagmkm1P+W4lZJbJ/yaxMd8PqMRSC4cPcfUVt5Hg=
sigs.k8s.io/structured-merge-diff v0.0.0-20190525122527-15d366b2352e h1:4Z09Hglb792X0kfOBBJUPFEyvVfQWrYT/l8h5EKA6JQ=
sigs.k8s.io/structured-merge-diff v0.0.0-20190525122527-15d366b2352e/go.mod h1:wWxsB5ozmmv/SG7nM11ayaAW51xMvak/t1r0CSlcokI=
-sigs.k8s.io/structured-merge-diff/v3 v3.0.0-20200116222232-67a7b8c61874/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw=
-sigs.k8s.io/structured-merge-diff/v3 v3.0.0/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
-sigs.k8s.io/structured-merge-diff/v4 v4.1.2 h1:Hr/htKFmJEbtMgS/UD0N+gtgctAqz81t3nu+sPzynno=
+sigs.k8s.io/structured-merge-diff/v4 v4.0.3/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.1.2/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
+sigs.k8s.io/structured-merge-diff/v4 v4.2.0 h1:kDvPBbnPk+qYmkHmSo8vKGp438IASWofnbbUKDE/bv0=
+sigs.k8s.io/structured-merge-diff/v4 v4.2.0/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
-sigs.k8s.io/yaml v1.2.0 h1:kr/MCeFWJWTwyaHoR9c8EjH9OumOmoF9YGiZd7lFm/Q=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
+sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
+sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=
diff --git a/hack/check-format.sh b/hack/check-format.sh
index fc932f452..b680c5c9c 100755
--- a/hack/check-format.sh
+++ b/hack/check-format.sh
@@ -42,6 +42,7 @@ rm -f "${out}" && touch "${out}"
# Run goimports on all the sources.
go get golang.org/x/tools/cmd/goimports
+go install golang.org/x/tools/cmd/goimports
cmd=$(go list -f \{\{\.Target\}\} golang.org/x/tools/cmd/goimports)
flags="-e -w"
[ -z "${PROW_JOB_ID-}" ] || flags="-d ${flags}"
diff --git a/hack/check-lint.sh b/hack/check-lint.sh
index a1c9a5e74..5bc2161d9 100755
--- a/hack/check-lint.sh
+++ b/hack/check-lint.sh
@@ -23,6 +23,7 @@ set -o pipefail
cd "$(dirname "${BASH_SOURCE[0]}")/.."
go get golang.org/x/lint/golint
+go install golang.org/x/lint/golint
CMD=$(go list -f \{\{\.Target\}\} golang.org/x/lint/golint)
diff --git a/hack/check-staticcheck.sh b/hack/check-staticcheck.sh
index 67d283cd8..35c6598dd 100755
--- a/hack/check-staticcheck.sh
+++ b/hack/check-staticcheck.sh
@@ -23,6 +23,7 @@ set -o pipefail
cd "$(dirname "${BASH_SOURCE[0]}")/.."
go get honnef.co/go/tools/cmd/staticcheck
+go install honnef.co/go/tools/cmd/staticcheck
CMD=$(go list -f \{\{\.Target\}\} honnef.co/go/tools/cmd/staticcheck)
# re-enable SA1019 when we upgrade to Go 1.14
diff --git a/hack/images/ci/Dockerfile b/hack/images/ci/Dockerfile
index 0cd9afe9f..d34bdb3ba 100644
--- a/hack/images/ci/Dockerfile
+++ b/hack/images/ci/Dockerfile
@@ -3,7 +3,7 @@
################################################################################
# The golang image is used to create the project's module and build caches
# and is also the image on which this image is based.
-ARG GOLANG_IMAGE=golang:1.16.7
+ARG GOLANG_IMAGE=golang:1.17.5
# The image from which the Terraform project used to turn up a K8s cluster is
# copied, as well as several programs.
diff --git a/hack/tools/Makefile b/hack/tools/Makefile
new file mode 100644
index 000000000..fe549b981
--- /dev/null
+++ b/hack/tools/Makefile
@@ -0,0 +1,49 @@
+# Copyright 2019 The Kubernetes Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# If you update this file, please follow
+# https://suva.sh/posts/well-documented-makefiles
+
+# Ensure Make is run with bash shell as some syntax below is bash-specific
+SHELL := /usr/bin/env bash
+
+.DEFAULT_GOAL := all
+
+# Use GOPROXY environment variable if set
+GOPROXY := $(shell go env GOPROXY)
+ifeq (,$(strip $(GOPROXY)))
+GOPROXY := https://proxy.golang.org
+endif
+export GOPROXY
+
+# Activate module mode, as we use Go modules to manage dependencies
+export GO111MODULE := on
+
+# Directories.
+BIN_DIR := bin
+SRCS := go.mod go.sum
+
+# Binaries.
+KIND := $(BIN_DIR)/kind
+GINKGO := $(BIN_DIR)/ginkgo
+
+all: kind ginkgo
+
+kind: $(KIND) $(SRCS)
+$(KIND): go.mod
+ go build -tags=tools -o $@ sigs.k8s.io/kind
+
+ginkgo: $(GINKGO) $(SRCS)
+$(GINKGO): go.mod
+ go build -tags=tools -o $@ github.com/onsi/ginkgo/ginkgo
\ No newline at end of file
diff --git a/hack/tools/go.mod b/hack/tools/go.mod
new file mode 100644
index 000000000..ea58ba9c5
--- /dev/null
+++ b/hack/tools/go.mod
@@ -0,0 +1,8 @@
+module tools
+
+go 1.16
+
+require (
+ github.com/onsi/ginkgo v1.16.4
+ sigs.k8s.io/kind v0.7.0
+)
diff --git a/hack/tools/go.sum b/hack/tools/go.sum
new file mode 100644
index 000000000..899b7de3f
--- /dev/null
+++ b/hack/tools/go.sum
@@ -0,0 +1,202 @@
+github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
+github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
+github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
+github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
+github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
+github.com/alessio/shellescape v0.0.0-20190409004728-b115ca0f9053 h1:H/GMMKYPkEIC3DF/JWQz8Pdd+Feifov2EIgGfNpeogI=
+github.com/alessio/shellescape v0.0.0-20190409004728-b115ca0f9053/go.mod h1:xW8sBma2LE3QxFSzCnH9qe6gAE2yO9GvQaWwX89HxbE=
+github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
+github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
+github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
+github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
+github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
+github.com/davecgh/go-spew v0.0.0-20151105211317-5215b55f46b2/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
+github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
+github.com/elazarl/goproxy v0.0.0-20170405201442-c4fc26588b6e/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
+github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
+github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
+github.com/evanphx/json-patch v4.5.0+incompatible h1:ouOWdg56aJriqS0huScTkVXPC5IcNrDCXZ6OoTAWu7M=
+github.com/evanphx/json-patch v4.5.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
+github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
+github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
+github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
+github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
+github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
+github.com/go-openapi/jsonpointer v0.0.0-20160704185906-46af16f9f7b1/go.mod h1:+35s3my2LFTysnkMfxsJBAMHj/DoqoB9knIWoYG/Vk0=
+github.com/go-openapi/jsonreference v0.0.0-20160704190145-13c6e3589ad9/go.mod h1:W3Z9FmVs9qj+KR4zFKmDPGiLdk1D9Rlm7cyMvf57TTg=
+github.com/go-openapi/spec v0.0.0-20160808142527-6aced65f8501/go.mod h1:J8+jY1nAiCcj+friV/PDoE1/3eeccG9LYBs0tYvLOWc=
+github.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I=
+github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 h1:p104kn46Q8WdvHunIJ9dAyjPVtrBPhSr3KT2yUst43I=
+github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
+github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
+github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/protobuf v0.0.0-20161109072736-4bd1920723d7/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
+github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
+github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
+github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
+github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
+github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
+github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
+github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/gofuzz v0.0.0-20161122191042-44d81051d367/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
+github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
+github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
+github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
+github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
+github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
+github.com/json-iterator/go v0.0.0-20180612202835-f2b4162afba3/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
+github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
+github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
+github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
+github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
+github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
+github.com/mattn/go-isatty v0.0.11 h1:FxPOTFNqGkuDUGi3H/qkUbQO4ZiBa2brKq5r0l8TGeM=
+github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
+github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
+github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
+github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
+github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
+github.com/modern-go/reflect2 v0.0.0-20180320133207-05fbef0ca5da/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
+github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
+github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
+github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
+github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
+github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
+github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
+github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
+github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
+github.com/onsi/ginkgo v1.16.4 h1:29JGrr5oVBm5ulCWet69zQkzWipVXIol6ygQUe/EzNc=
+github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
+github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
+github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
+github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
+github.com/onsi/gomega v1.10.1 h1:o0+MgICZLuZ7xjH7Vx6zS/zcu93/BEp1VwkIW1mEXCE=
+github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
+github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
+github.com/pelletier/go-toml v1.6.0 h1:aetoXYr0Tv7xRU/V4B4IZJ2QcbtMUFoNb3ORp7TzIK4=
+github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
+github.com/pkg/errors v0.9.0 h1:J8lpUdobwIeCI7OiSxHqEwJUKvJwicL5+3v1oe2Yb4k=
+github.com/pkg/errors v0.9.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
+github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
+github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
+github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
+github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
+github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
+github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
+github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/testify v0.0.0-20151208002404-e3a8ff8ce365/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
+github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
+github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
+github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
+github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
+github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
+github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
+github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
+github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20201021035429-f5854403a974 h1:IX6qOQeG5uLjB/hjjwjedwfjND0hgjPMMyO1RoIXQNI=
+golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210112080510-489259a85091 h1:DMyOG0U+gKfu8JZzg2UQe9MeaC1X+xQWlAKcRnjxjCw=
+golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
+golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
+golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20181011042414-1f849cf54d09/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e h1:4nW4NLDYnU28ojHaHO8OVxFHk/aQ33U01a9cjED+pzE=
+golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
+google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
+google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
+google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
+google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
+google.golang.org/protobuf v1.23.0 h1:4MY060fB1DLGMB/7MBTLnwQUY6+F09GEiz6SsrNqyzM=
+google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
+gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
+gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
+gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
+gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
+gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
+gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v3 v3.0.0-20191120175047-4206685974f2 h1:XZx7nhd5GMaZpmDaEHFVafUZC7ya0fuo7cSJ3UCKYmM=
+gopkg.in/yaml.v3 v3.0.0-20191120175047-4206685974f2/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+k8s.io/apimachinery v0.17.0 h1:xRBnuie9rXcPxUkDizUsGvPf1cnlZCFu210op7J7LJo=
+k8s.io/apimachinery v0.17.0/go.mod h1:b9qmWdKlLuU9EBh+06BtLcSf/Mu89rWL33naRxs1uZg=
+k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
+k8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
+k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
+k8s.io/kube-openapi v0.0.0-20191107075043-30be4d16710a/go.mod h1:1TqjTSzOxsLGIKfj0lK8EeCP7K1iUG65v09OM0/WG5E=
+sigs.k8s.io/kind v0.7.0 h1:7y7a8EYtGHM+auHmsvzuK5o84SrxPYGidlvfql7j/k4=
+sigs.k8s.io/kind v0.7.0/go.mod h1:An/AbWHT6pA/Lm0Og8j3ukGhfJP3RiVN/IBU6Lo3zl8=
+sigs.k8s.io/structured-merge-diff v0.0.0-20190525122527-15d366b2352e/go.mod h1:wWxsB5ozmmv/SG7nM11ayaAW51xMvak/t1r0CSlcokI=
+sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs=
+sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
diff --git a/hack/tools/tools.go b/hack/tools/tools.go
new file mode 100644
index 000000000..0aa1adc72
--- /dev/null
+++ b/hack/tools/tools.go
@@ -0,0 +1,26 @@
+//go:build tools
+// +build tools
+
+/*
+Copyright 2021 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// This package imports things required by build scripts, to force `go mod` to see them as dependencies
+package tools
+
+import (
+ _ "github.com/onsi/ginkgo/ginkgo"
+ _ "sigs.k8s.io/kind"
+)
diff --git a/index.yaml b/index.yaml
index bed7f181b..c9956c5fa 100644
--- a/index.yaml
+++ b/index.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
entries:
vsphere-cpi:
- apiVersion: v2
- appVersion: 1.21.0
+ appVersion: 1.22.2
created: "2021-07-28T14:36:52.229812-07:00"
description: A Helm chart for vSphere Cloud Provider Interface Manager (CPI)
digest: db24dcbcbdb250809c313912ea7bfc50fbf5b2a99ee0cc3bbd799cdf0db726ac
diff --git a/openshift-hack/images/cloud-controller-manager-openshift.Dockerfile b/openshift-hack/images/cloud-controller-manager-openshift.Dockerfile
index 26d1d568a..8e9f9b17f 100644
--- a/openshift-hack/images/cloud-controller-manager-openshift.Dockerfile
+++ b/openshift-hack/images/cloud-controller-manager-openshift.Dockerfile
@@ -1,4 +1,4 @@
-FROM registry.ci.openshift.org/ocp/builder:rhel-8-golang-1.16-openshift-4.10 AS builder
+FROM registry.ci.openshift.org/ocp/builder:rhel-8-golang-1.17-openshift-4.10 AS builder
WORKDIR /go/src/github.com/openshift/cloud-provider-vsphere
COPY . .
diff --git a/openshift-hack/test-unit-ci.sh b/openshift-hack/test-unit-ci.sh
index 66371c7eb..1da413a2b 100755
--- a/openshift-hack/test-unit-ci.sh
+++ b/openshift-hack/test-unit-ci.sh
@@ -34,9 +34,7 @@ OPENSHIFT_CI=${OPENSHIFT_CI:-""}
ARTIFACT_DIR=${ARTIFACT_DIR:-""}
function go_test() {
- local PKGS_WITH_TESTS
- PKGS_WITH_TESTS=$(find . -name "*_test.go" -type f -exec dirname \{\} \;)
- go test -v -tags=unit $PKGS_WITH_TESTS
+ go test ./pkg/...
}
runTestCI() {
diff --git a/pkg/cloudprovider/vsphere/config/config_ini_legacy.go b/pkg/cloudprovider/vsphere/config/config_ini_legacy.go
index 218b2beec..094b9ee26 100644
--- a/pkg/cloudprovider/vsphere/config/config_ini_legacy.go
+++ b/pkg/cloudprovider/vsphere/config/config_ini_legacy.go
@@ -35,10 +35,12 @@ func (cci *CPIConfigINI) CreateConfig() *CPIConfig {
cfg := &CPIConfig{
*cci.CommonConfigINI.CreateConfig(),
Nodes{
- InternalNetworkSubnetCIDR: cci.Nodes.InternalNetworkSubnetCIDR,
- ExternalNetworkSubnetCIDR: cci.Nodes.ExternalNetworkSubnetCIDR,
- InternalVMNetworkName: cci.Nodes.InternalVMNetworkName,
- ExternalVMNetworkName: cci.Nodes.ExternalVMNetworkName,
+ InternalNetworkSubnetCIDR: cci.Nodes.InternalNetworkSubnetCIDR,
+ ExternalNetworkSubnetCIDR: cci.Nodes.ExternalNetworkSubnetCIDR,
+ InternalVMNetworkName: cci.Nodes.InternalVMNetworkName,
+ ExternalVMNetworkName: cci.Nodes.ExternalVMNetworkName,
+ ExcludeInternalNetworkSubnetCIDR: cci.Nodes.ExcludeInternalNetworkSubnetCIDR,
+ ExcludeExternalNetworkSubnetCIDR: cci.Nodes.ExcludeExternalNetworkSubnetCIDR,
},
}
diff --git a/pkg/cloudprovider/vsphere/config/config_ini_legacy_test.go b/pkg/cloudprovider/vsphere/config/config_ini_legacy_test.go
index 841477f77..cc425a071 100644
--- a/pkg/cloudprovider/vsphere/config/config_ini_legacy_test.go
+++ b/pkg/cloudprovider/vsphere/config/config_ini_legacy_test.go
@@ -52,6 +52,21 @@ internal-vm-network-name = "Internal K8s Traffic"
external-vm-network-name = "External/Outbound Traffic"
`
+const excludeSubnetINIConfig = `
+[Global]
+server = 0.0.0.0
+port = 443
+user = user
+password = password
+insecure-flag = true
+datacenters = us-west
+ca-file = /some/path/to/a/ca.pem
+
+[Nodes]
+exclude-internal-network-subnet-cidr = "192.0.2.0/24,fe80::1/128"
+exclude-external-network-subnet-cidr = "192.1.2.0/24,fe80::2/128"
+`
+
func TestReadINIConfigSubnetCidr(t *testing.T) {
_, err := ReadCPIConfigINI(nil)
if err == nil {
@@ -94,3 +109,18 @@ func TestReadINIConfigNetworkName(t *testing.T) {
t.Errorf("incorrect internal vm network name: %s", cfg.Nodes.ExternalVMNetworkName)
}
}
+
+func TestReadINIConfigExcludeSubnetCidr(t *testing.T) {
+ cfg, err := ReadCPIConfigINI([]byte(excludeSubnetINIConfig))
+ if err != nil {
+ t.Fatalf("Should succeed when a valid config is provided: %s", err)
+ }
+
+ if cfg.Nodes.ExcludeInternalNetworkSubnetCIDR != "192.0.2.0/24,fe80::1/128" {
+ t.Errorf("incorrect exclude internal network subnet cidrs: %s", cfg.Nodes.ExcludeInternalNetworkSubnetCIDR)
+ }
+
+ if cfg.Nodes.ExcludeExternalNetworkSubnetCIDR != "192.1.2.0/24,fe80::2/128" {
+ t.Errorf("incorrect exclude external network subnet cidrs: %s", cfg.Nodes.ExcludeExternalNetworkSubnetCIDR)
+ }
+}
diff --git a/pkg/cloudprovider/vsphere/config/config_yaml.go b/pkg/cloudprovider/vsphere/config/config_yaml.go
index d53f02e95..89986f731 100644
--- a/pkg/cloudprovider/vsphere/config/config_yaml.go
+++ b/pkg/cloudprovider/vsphere/config/config_yaml.go
@@ -36,10 +36,12 @@ func (ccy *CPIConfigYAML) CreateConfig() *CPIConfig {
cfg := &CPIConfig{
*ccy.CommonConfigYAML.CreateConfig(),
Nodes{
- InternalNetworkSubnetCIDR: ccy.Nodes.InternalNetworkSubnetCIDR,
- ExternalNetworkSubnetCIDR: ccy.Nodes.ExternalNetworkSubnetCIDR,
- InternalVMNetworkName: ccy.Nodes.InternalVMNetworkName,
- ExternalVMNetworkName: ccy.Nodes.ExternalVMNetworkName,
+ InternalNetworkSubnetCIDR: ccy.Nodes.InternalNetworkSubnetCIDR,
+ ExternalNetworkSubnetCIDR: ccy.Nodes.ExternalNetworkSubnetCIDR,
+ InternalVMNetworkName: ccy.Nodes.InternalVMNetworkName,
+ ExternalVMNetworkName: ccy.Nodes.ExternalVMNetworkName,
+ ExcludeInternalNetworkSubnetCIDR: ccy.Nodes.ExcludeInternalNetworkSubnetCIDR,
+ ExcludeExternalNetworkSubnetCIDR: ccy.Nodes.ExcludeExternalNetworkSubnetCIDR,
},
}
diff --git a/pkg/cloudprovider/vsphere/config/config_yaml_test.go b/pkg/cloudprovider/vsphere/config/config_yaml_test.go
index ec17e98eb..97fa1ee9e 100644
--- a/pkg/cloudprovider/vsphere/config/config_yaml_test.go
+++ b/pkg/cloudprovider/vsphere/config/config_yaml_test.go
@@ -57,6 +57,22 @@ nodes:
externalVmNetworkName: External/Outbound Traffic
`
+const excludeSubnetCidrYAMLConfig = `
+global:
+ server: 0.0.0.0
+ port: 443
+ user: user
+ password: password
+ insecureFlag: true
+ datacenters:
+ - us-west
+ caFile: /some/path/to/a/ca.pem
+
+nodes:
+ excludeInternalNetworkSubnetCidr: "192.0.2.0/24,fe80::1/128"
+ excludeExternalNetworkSubnetCidr: "192.1.2.0/24,fe80::2/128"
+`
+
func TestReadYAMLConfigSubnetCidr(t *testing.T) {
_, err := ReadCPIConfigYAML(nil)
if err == nil {
@@ -99,3 +115,18 @@ func TestReadYAMLConfigNetworkName(t *testing.T) {
t.Errorf("incorrect internal vm network name: %s", cfg.Nodes.ExternalVMNetworkName)
}
}
+
+func TestReadYAMLConfigExcludeSubnetCidr(t *testing.T) {
+ cfg, err := ReadCPIConfigYAML([]byte(excludeSubnetCidrYAMLConfig))
+ if err != nil {
+ t.Fatalf("Should succeed when a valid config is provided: %s", err)
+ }
+
+ if cfg.Nodes.ExcludeInternalNetworkSubnetCIDR != "192.0.2.0/24,fe80::1/128" {
+ t.Errorf("incorrect exclude internal network subnet cidrs: %s", cfg.Nodes.ExcludeInternalNetworkSubnetCIDR)
+ }
+
+ if cfg.Nodes.ExcludeExternalNetworkSubnetCIDR != "192.1.2.0/24,fe80::2/128" {
+ t.Errorf("incorrect exclude external network subnet cidrs: %s", cfg.Nodes.ExcludeExternalNetworkSubnetCIDR)
+ }
+}
diff --git a/pkg/cloudprovider/vsphere/config/types_common.go b/pkg/cloudprovider/vsphere/config/types_common.go
index 412fd553b..b281e3f47 100644
--- a/pkg/cloudprovider/vsphere/config/types_common.go
+++ b/pkg/cloudprovider/vsphere/config/types_common.go
@@ -38,6 +38,11 @@ type Nodes struct {
// only have a single IP address assigned to it.
InternalVMNetworkName string
ExternalVMNetworkName string
+ // IP addresses in these subnet ranges will be excluded when selecting
+ // the IP address from the VirtualMachine for use in the
+ // status.addresses fields.
+ ExcludeInternalNetworkSubnetCIDR string
+ ExcludeExternalNetworkSubnetCIDR string
}
// CPIConfig is used to read and store information (related only to the CPI) from the cloud configuration file
diff --git a/pkg/cloudprovider/vsphere/config/types_ini_legacy.go b/pkg/cloudprovider/vsphere/config/types_ini_legacy.go
index 90974214b..d713da270 100644
--- a/pkg/cloudprovider/vsphere/config/types_ini_legacy.go
+++ b/pkg/cloudprovider/vsphere/config/types_ini_legacy.go
@@ -37,6 +37,11 @@ type NodesINI struct {
// only have a single IP address assigned to it.
InternalVMNetworkName string `gcfg:"internal-vm-network-name"`
ExternalVMNetworkName string `gcfg:"external-vm-network-name"`
+ // IP addresses in these subnet ranges will be excluded when selecting
+ // the IP address from the VirtualMachine for use in the
+ // status.addresses fields.
+ ExcludeInternalNetworkSubnetCIDR string `gcfg:"exclude-internal-network-subnet-cidr"`
+ ExcludeExternalNetworkSubnetCIDR string `gcfg:"exclude-external-network-subnet-cidr"`
}
// CPIConfigINI is the INI representation
diff --git a/pkg/cloudprovider/vsphere/config/types_yaml.go b/pkg/cloudprovider/vsphere/config/types_yaml.go
index c71a93d41..ed63ac1f7 100644
--- a/pkg/cloudprovider/vsphere/config/types_yaml.go
+++ b/pkg/cloudprovider/vsphere/config/types_yaml.go
@@ -41,6 +41,11 @@ type NodesYAML struct {
// only have a single IP address assigned to it.
InternalVMNetworkName string `yaml:"internalVmNetworkName"`
ExternalVMNetworkName string `yaml:"externalVmNetworkName"`
+ // IP addresses in these subnet ranges will be excluded when selecting
+ // the IP address from the VirtualMachine for use in the
+ // status.addresses fields.
+ ExcludeInternalNetworkSubnetCIDR string `yaml:"excludeInternalNetworkSubnetCidr"`
+ ExcludeExternalNetworkSubnetCIDR string `yaml:"excludeExternalNetworkSubnetCidr"`
}
// CPIConfigYAML is the YAML representation
diff --git a/pkg/cloudprovider/vsphere/instances.go b/pkg/cloudprovider/vsphere/instances.go
index bc9cfff3c..fe4a8675c 100644
--- a/pkg/cloudprovider/vsphere/instances.go
+++ b/pkg/cloudprovider/vsphere/instances.go
@@ -35,7 +35,7 @@ import (
var (
// ErrNotFound is returned by NodeAddresses, NodeAddressesByProviderID,
// and InstanceID when a node cannot be found.
- ErrNodeNotFound = errors.New("Node not found")
+ ErrNodeNotFound = errors.New("node not found")
)
func newInstances(nodeManager *NodeManager) cloudprovider.Instances {
@@ -52,12 +52,6 @@ var _ cloudprovider.Instances = &instances{}
func (i *instances) NodeAddresses(ctx context.Context, nodeName types.NodeName) ([]v1.NodeAddress, error) {
klog.V(4).Info("instances.NodeAddresses() called with ", string(nodeName))
- // Check if node has been discovered already
- if node, ok := i.nodeManager.nodeNameMap[string(nodeName)]; ok {
- klog.V(2).Info("instances.NodeAddresses() CACHED with ", string(nodeName))
- return node.NodeAddresses, nil
- }
-
if err := i.nodeManager.DiscoverNode(string(nodeName), cm.FindVMByName); err == nil {
if i.nodeManager.nodeNameMap[string(nodeName)] == nil {
klog.Errorf("DiscoverNode succeeded, but CACHE missed for node=%s. If this is a Linux VM, hostnames are case sensitive. Make sure they match.", string(nodeName))
@@ -77,12 +71,7 @@ func (i *instances) NodeAddresses(ctx context.Context, nodeName types.NodeName)
func (i *instances) NodeAddressesByProviderID(ctx context.Context, providerID string) ([]v1.NodeAddress, error) {
klog.V(4).Info("instances.NodeAddressesByProviderID() called with ", providerID)
- // Check if node has been discovered already
uid := GetUUIDFromProviderID(providerID)
- if node, ok := i.nodeManager.nodeUUIDMap[uid]; ok {
- klog.V(2).Info("instances.NodeAddressesByProviderID() CACHED with ", uid)
- return node.NodeAddresses, nil
- }
if err := i.nodeManager.DiscoverNode(uid, cm.FindVMByUUID); err == nil {
klog.V(2).Info("instances.NodeAddressesByProviderID() FOUND with ", uid)
@@ -194,7 +183,7 @@ func (i *instances) InstanceShutdownByProviderID(ctx context.Context, providerID
// Check if node has been discovered already
uid := GetUUIDFromProviderID(providerID)
if _, ok := i.nodeManager.nodeUUIDMap[uid]; !ok {
- // IF the uuid is not cached, we end up here
+ // if the uuid is not cached, we end up here
klog.V(2).Info("instances.InstanceShutdownByProviderID() NOT CACHED")
if err := i.nodeManager.DiscoverNode(uid, cm.FindVMByUUID); err != nil {
klog.V(4).Info("instances.InstanceShutdownByProviderID() NOT FOUND with ", uid)
diff --git a/pkg/cloudprovider/vsphere/instances_test.go b/pkg/cloudprovider/vsphere/instances_test.go
index 2ebbc25a5..dfcddc913 100644
--- a/pkg/cloudprovider/vsphere/instances_test.go
+++ b/pkg/cloudprovider/vsphere/instances_test.go
@@ -80,7 +80,7 @@ func TestInstance(t *testing.T) {
instances := newInstances(&nm.NodeManager)
vm := simulator.Map.Any("VirtualMachine").(*simulator.VirtualMachine)
- name := vm.Name
+ name := strings.ToLower(vm.Name)
vm.Guest.HostName = name
vm.Guest.Net = []vimtypes.GuestNicInfo{
{
diff --git a/pkg/cloudprovider/vsphere/loadbalancer/README.md b/pkg/cloudprovider/vsphere/loadbalancer/README.md
index c4858b3db..fe40627ef 100644
--- a/pkg/cloudprovider/vsphere/loadbalancer/README.md
+++ b/pkg/cloudprovider/vsphere/loadbalancer/README.md
@@ -186,6 +186,7 @@ the load balancer classes. The following attributes are supported:
|`size`|Size of load balancer service (`SMALL`,`MEDIUM`,`LARGE`,`XLARGE`)|
|`lbServiceId`|service id of the load balancer service to use (for unmanaged mode)|
|`tier1GatewayPath`|policy path for the tier1 gateway|
+|`snatDisabled`|Set to `true` to preserve the client IP (for inline mode)|
|`tags`|JSON map with name/value pairs used for creating additional tags for the generated NSX-T elements|
If the tag key `owner` is given it overwrites the default owner
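For illustration, the new attribute could be set in a YAML cloud config roughly as follows. This is a minimal sketch: the `tier1GatewayPath` value is a placeholder, and the surrounding fields are taken from the test fixtures in this PR rather than a verified production configuration.

```yaml
# Hypothetical loadBalancer section enabling SNAT-disabled (inline) mode.
loadBalancer:
  size: MEDIUM
  tier1GatewayPath: /infra/tier-1s/example-gw   # placeholder path
  tcpAppProfileName: default-tcp-lb-app-profile
  udpAppProfileName: default-udp-lb-app-profile
  snatDisabled: true   # preserve the client IP (inline mode)
```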
diff --git a/pkg/cloudprovider/vsphere/loadbalancer/access.go b/pkg/cloudprovider/vsphere/loadbalancer/access.go
index 4fd60f4f7..fdf54095d 100644
--- a/pkg/cloudprovider/vsphere/loadbalancer/access.go
+++ b/pkg/cloudprovider/vsphere/loadbalancer/access.go
@@ -21,6 +21,7 @@ import (
"time"
"github.com/pkg/errors"
+ "github.com/vmware/vsphere-automation-sdk-go/runtime/data"
"github.com/vmware/vsphere-automation-sdk-go/services/nsxt/model"
corev1 "k8s.io/api/core/v1"
@@ -273,9 +274,18 @@ func (a *access) DeleteVirtualServer(id string) error {
}
func (a *access) CreatePool(clusterName string, objectName types.NamespacedName, mapping Mapping, members []model.LBPoolMember, activeMonitorPaths []string) (*model.LBPool, error) {
- snatTranslation, err := newNsxtTypeConverter().createLBSnatAutoMap()
- if err != nil {
- return nil, errors.Wrapf(err, "creating pool failed on preparing LBSnatAutoMap failed")
+ var snatTranslation *data.StructValue
+ var err error
+ if a.config.LoadBalancer.SnatDisabled {
+ snatTranslation, err = newNsxtTypeConverter().createLBSnatDisabled()
+ if err != nil {
+ return nil, errors.Wrapf(err, "creating pool failed while preparing LBSnatDisabled")
+ }
+ } else {
+ snatTranslation, err = newNsxtTypeConverter().createLBSnatAutoMap()
+ if err != nil {
+ return nil, errors.Wrapf(err, "creating pool failed while preparing LBSnatAutoMap")
+ }
}
pool := model.LBPool{
Description: strptr(fmt.Sprintf("pool for cluster %s, service %s created by %s", clusterName, objectName, AppName)),
diff --git a/pkg/cloudprovider/vsphere/loadbalancer/cleanup.go b/pkg/cloudprovider/vsphere/loadbalancer/cleanup.go
index 39ea12f16..aeca253f7 100644
--- a/pkg/cloudprovider/vsphere/loadbalancer/cleanup.go
+++ b/pkg/cloudprovider/vsphere/loadbalancer/cleanup.go
@@ -20,6 +20,7 @@ import (
"context"
"time"
+ "github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
@@ -88,10 +89,10 @@ func (p *lbProvider) doCleanupStep(clusterName string, client clientcorev1.Servi
}
}
- return p.CleanupServices(clusterName, services)
+ return p.CleanupServices(clusterName, services, false)
}
-func (p *lbProvider) CleanupServices(clusterName string, validServices map[types.NamespacedName]corev1.Service) error {
+func (p *lbProvider) CleanupServices(clusterName string, validServices map[types.NamespacedName]corev1.Service, ensureLBServiceDeleted bool) error {
ipPoolIds := sets.NewString()
for _, name := range p.classes.GetClassNames() {
class := p.classes.GetClass(name)
@@ -99,7 +100,7 @@ func (p *lbProvider) CleanupServices(clusterName string, validServices map[types
}
lbs := map[types.NamespacedName]struct{}{}
- servers, err := p.access.ListVirtualServers(ClusterName)
+ servers, err := p.access.ListVirtualServers(clusterName)
if err != nil {
return err
}
@@ -164,5 +165,13 @@ func (p *lbProvider) CleanupServices(clusterName string, validServices map[types
}
}
}
+
+ // check for an orphaned unmanaged load balancer service when there are no virtual servers and ensureLBServiceDeleted is true
+ if len(lbs) == 0 && ensureLBServiceDeleted {
+ err = p.removeLoadBalancerServiceIfUnused(clusterName)
+ if err != nil && !isNotFoundError(err) {
+ return errors.Wrap(err, "removeLoadBalancerServiceIfUnused failed")
+ }
+ }
return nil
}
diff --git a/pkg/cloudprovider/vsphere/loadbalancer/config/config_ini_legacy.go b/pkg/cloudprovider/vsphere/loadbalancer/config/config_ini_legacy.go
index 63b64c185..b55a22f42 100644
--- a/pkg/cloudprovider/vsphere/loadbalancer/config/config_ini_legacy.go
+++ b/pkg/cloudprovider/vsphere/loadbalancer/config/config_ini_legacy.go
@@ -49,6 +49,7 @@ func (lbc *LBConfigINI) CreateConfig() *LBConfig {
cfg.LoadBalancer.Size = lbc.LoadBalancer.Size
cfg.LoadBalancer.LBServiceID = lbc.LoadBalancer.LBServiceID
cfg.LoadBalancer.Tier1GatewayPath = lbc.LoadBalancer.Tier1GatewayPath
+ cfg.LoadBalancer.SnatDisabled = lbc.LoadBalancer.SnatDisabled
cfg.LoadBalancer.AdditionalTags = lbc.LoadBalancer.AdditionalTags
//LoadBalancerClass
diff --git a/pkg/cloudprovider/vsphere/loadbalancer/config/config_ini_legacy_test.go b/pkg/cloudprovider/vsphere/loadbalancer/config/config_ini_legacy_test.go
index a072f0472..463aca53f 100644
--- a/pkg/cloudprovider/vsphere/loadbalancer/config/config_ini_legacy_test.go
+++ b/pkg/cloudprovider/vsphere/loadbalancer/config/config_ini_legacy_test.go
@@ -18,6 +18,8 @@ package config
import (
"testing"
+
+ "github.com/stretchr/testify/assert"
)
/*
@@ -34,6 +36,7 @@ lb-service-id = 4711
tier1-gateway-path = 1234
tcp-app-profile-name = default-tcp-lb-app-profile
udp-app-profile-name = default-udp-lb-app-profile
+snat-disabled = false
tags = {\"tag1\": \"value1\", \"tag2\": \"value 2\"}
[LoadBalancerClass "public"]
@@ -61,6 +64,7 @@ udp-app-profile-name = udp2
assertEquals("LoadBalancer.tcpAppProfileName", config.LoadBalancer.TCPAppProfileName, "default-tcp-lb-app-profile")
assertEquals("LoadBalancer.udpAppProfileName", config.LoadBalancer.UDPAppProfileName, "default-udp-lb-app-profile")
assertEquals("LoadBalancer.size", config.LoadBalancer.Size, "MEDIUM")
+ assert.Equal(t, false, config.LoadBalancer.SnatDisabled)
if len(config.LoadBalancerClass) != 2 {
t.Errorf("expected two LoadBalancerClass subsections, but got %d", len(config.LoadBalancerClass))
}
@@ -80,6 +84,7 @@ size = MEDIUM
tier1-gateway-path = 1234
tcp-app-profile-path = infra/xxx/tcp1234
udp-app-profile-path = infra/xxx/udp1234
+snat-disabled = false
`
config, err := ReadRawConfigINI([]byte(contents))
if err != nil {
@@ -96,4 +101,5 @@ udp-app-profile-path = infra/xxx/udp1234
assertEquals("LoadBalancer.tier1-gateway-path", config.LoadBalancer.Tier1GatewayPath, "1234")
assertEquals("LoadBalancer.tcp-app-profile-path", config.LoadBalancer.TCPAppProfilePath, "infra/xxx/tcp1234")
assertEquals("LoadBalancer.udp-app-profile-path", config.LoadBalancer.UDPAppProfilePath, "infra/xxx/udp1234")
+ assert.Equal(t, false, config.LoadBalancer.SnatDisabled)
}
diff --git a/pkg/cloudprovider/vsphere/loadbalancer/config/config_yaml.go b/pkg/cloudprovider/vsphere/loadbalancer/config/config_yaml.go
index 3f9d1c293..aeea2609c 100644
--- a/pkg/cloudprovider/vsphere/loadbalancer/config/config_yaml.go
+++ b/pkg/cloudprovider/vsphere/loadbalancer/config/config_yaml.go
@@ -48,6 +48,7 @@ func (lbc *LBConfigYAML) CreateConfig() *LBConfig {
cfg.LoadBalancer.Size = lbc.LoadBalancer.Size
cfg.LoadBalancer.LBServiceID = lbc.LoadBalancer.LBServiceID
cfg.LoadBalancer.Tier1GatewayPath = lbc.LoadBalancer.Tier1GatewayPath
+ cfg.LoadBalancer.SnatDisabled = lbc.LoadBalancer.SnatDisabled
cfg.LoadBalancer.AdditionalTags = lbc.LoadBalancer.AdditionalTags
//LoadBalancerClass
diff --git a/pkg/cloudprovider/vsphere/loadbalancer/config/config_yaml_test.go b/pkg/cloudprovider/vsphere/loadbalancer/config/config_yaml_test.go
index 396143f9e..251da95a1 100644
--- a/pkg/cloudprovider/vsphere/loadbalancer/config/config_yaml_test.go
+++ b/pkg/cloudprovider/vsphere/loadbalancer/config/config_yaml_test.go
@@ -18,6 +18,8 @@ package config
import (
"testing"
+
+ "github.com/stretchr/testify/assert"
)
/*
@@ -34,6 +36,7 @@ loadBalancer:
tier1GatewayPath: 1234
tcpAppProfileName: default-tcp-lb-app-profile
udpAppProfileName: default-udp-lb-app-profile
+ snatDisabled: false
tags:
tag1: value1
tag2: value 2
@@ -63,6 +66,7 @@ loadBalancerClass:
assertEquals("loadBalancer.tcpAppProfileName", config.LoadBalancer.TCPAppProfileName, "default-tcp-lb-app-profile")
assertEquals("loadBalancer.udpAppProfileName", config.LoadBalancer.UDPAppProfileName, "default-udp-lb-app-profile")
assertEquals("loadBalancer.size", config.LoadBalancer.Size, "MEDIUM")
+ assert.Equal(t, false, config.LoadBalancer.SnatDisabled)
if len(config.LoadBalancerClass) != 2 {
t.Errorf("expected two LoadBalancerClass subsections, but got %d", len(config.LoadBalancerClass))
}
@@ -82,6 +86,7 @@ loadBalancer:
tier1GatewayPath: 1234
tcpAppProfilePath: infra/xxx/tcp1234
udpAppProfilePath: infra/xxx/udp1234
+ snatDisabled: false
`
config, err := ReadRawConfigYAML([]byte(contents))
if err != nil {
@@ -98,4 +103,5 @@ loadBalancer:
assertEquals("loadBalancer.tier1GatewayPath", config.LoadBalancer.Tier1GatewayPath, "1234")
assertEquals("loadBalancer.tcpAppProfilePath", config.LoadBalancer.TCPAppProfilePath, "infra/xxx/tcp1234")
assertEquals("loadBalancer.udpAppProfilePath", config.LoadBalancer.UDPAppProfilePath, "infra/xxx/udp1234")
+ assert.Equal(t, false, config.LoadBalancer.SnatDisabled)
}
diff --git a/pkg/cloudprovider/vsphere/loadbalancer/config/types_common.go b/pkg/cloudprovider/vsphere/loadbalancer/config/types_common.go
index faa22d7dc..ea5e7e561 100644
--- a/pkg/cloudprovider/vsphere/loadbalancer/config/types_common.go
+++ b/pkg/cloudprovider/vsphere/loadbalancer/config/types_common.go
@@ -28,6 +28,7 @@ type LoadBalancerConfig struct {
Size string
LBServiceID string
Tier1GatewayPath string
+ SnatDisabled bool
AdditionalTags map[string]string
}
diff --git a/pkg/cloudprovider/vsphere/loadbalancer/config/types_ini_legacy.go b/pkg/cloudprovider/vsphere/loadbalancer/config/types_ini_legacy.go
index 8d803dddf..e11d779f6 100644
--- a/pkg/cloudprovider/vsphere/loadbalancer/config/types_ini_legacy.go
+++ b/pkg/cloudprovider/vsphere/loadbalancer/config/types_ini_legacy.go
@@ -28,6 +28,7 @@ type LoadBalancerConfigINI struct {
Size string `gcfg:"size"`
LBServiceID string `gcfg:"lb-service-id"`
Tier1GatewayPath string `gcfg:"tier1-gateway-path"`
+ SnatDisabled bool `gcfg:"snat-disabled"`
RawTags string `gcfg:"tags"`
AdditionalTags map[string]string
}
diff --git a/pkg/cloudprovider/vsphere/loadbalancer/config/types_yaml.go b/pkg/cloudprovider/vsphere/loadbalancer/config/types_yaml.go
index 9fb4079a9..9148d21fe 100644
--- a/pkg/cloudprovider/vsphere/loadbalancer/config/types_yaml.go
+++ b/pkg/cloudprovider/vsphere/loadbalancer/config/types_yaml.go
@@ -37,6 +37,7 @@ type LoadBalancerConfigYAML struct {
Size string `yaml:"size"`
LBServiceID string `yaml:"lbServiceId"`
Tier1GatewayPath string `yaml:"tier1GatewayPath"`
+ SnatDisabled bool `yaml:"snatDisabled"`
AdditionalTags map[string]string `yaml:"tags"`
// this struct use to inherit from LoadBalancerClassConfigYAML, but the YAML parser
diff --git a/pkg/cloudprovider/vsphere/loadbalancer/interface.go b/pkg/cloudprovider/vsphere/loadbalancer/interface.go
index 73c237250..62e0c48c5 100644
--- a/pkg/cloudprovider/vsphere/loadbalancer/interface.go
+++ b/pkg/cloudprovider/vsphere/loadbalancer/interface.go
@@ -31,7 +31,7 @@ import (
type LBProvider interface {
cloudprovider.LoadBalancer
Initialize(clusterName string, client clientset.Interface, stop <-chan struct{})
- CleanupServices(clusterName string, services map[types.NamespacedName]corev1.Service) error
+ CleanupServices(clusterName string, services map[types.NamespacedName]corev1.Service, ensureLBServiceDeleted bool) error
}
// NSXTAccess provides methods for dealing with NSX-T objects
diff --git a/pkg/cloudprovider/vsphere/loadbalancer/nsxt_type_converter.go b/pkg/cloudprovider/vsphere/loadbalancer/nsxt_type_converter.go
index 1daf8ccbb..82d1fa902 100644
--- a/pkg/cloudprovider/vsphere/loadbalancer/nsxt_type_converter.go
+++ b/pkg/cloudprovider/vsphere/loadbalancer/nsxt_type_converter.go
@@ -47,6 +47,19 @@ func (c *nsxtTypeConverter) createLBSnatAutoMap() (*data.StructValue, error) {
return dataValue.(*data.StructValue), nil
}
+func (c *nsxtTypeConverter) createLBSnatDisabled() (*data.StructValue, error) {
+ entry := model.LBSnatDisabled{
+ Type_: model.LBSnatDisabled__TYPE_IDENTIFIER,
+ }
+
+ dataValue, errs := c.ConvertToVapi(entry, model.LBSnatDisabledBindingType())
+ if errs != nil {
+ return nil, errs[0]
+ }
+
+ return dataValue.(*data.StructValue), nil
+}
+
func (c *nsxtTypeConverter) convertLBTCPMonitorProfileToStructValue(monitor model.LBTcpMonitorProfile) (*data.StructValue, error) {
dataValue, errs := c.ConvertToVapi(monitor, model.LBTcpMonitorProfileBindingType())
if errs != nil {
diff --git a/pkg/cloudprovider/vsphere/nodemanager.go b/pkg/cloudprovider/vsphere/nodemanager.go
index cfd1422c2..925fd8f63 100644
--- a/pkg/cloudprovider/vsphere/nodemanager.go
+++ b/pkg/cloudprovider/vsphere/nodemanager.go
@@ -33,6 +33,7 @@ import (
klog "k8s.io/klog/v2"
"github.com/vmware/govmomi/vim25/mo"
+ "github.com/vmware/govmomi/vim25/types"
)
// Errors
@@ -43,7 +44,7 @@ var (
// ErrDatacenterNotFound is returned when the configured datacenter cannot
// be found.
- ErrDatacenterNotFound = errors.New("Datacenter not found")
+ ErrDatacenterNotFound = errors.New("datacenter not found")
// ErrVMNotFound is returned when the specified VM cannot be found.
ErrVMNotFound = errors.New("VM not found")
@@ -103,6 +104,19 @@ func (nm *NodeManager) removeNode(uuid string, node *v1.Node) {
klog.V(4).Info("removeNode NodeName: ", node.GetName(), ", UID: ", uuid)
delete(nm.nodeRegUUIDMap, uuid)
nm.nodeRegInfoLock.Unlock()
+
+ nm.nodeInfoLock.Lock()
+ klog.V(4).Info("removeNode from UUID and Name cache. NodeName: ", node.GetName(), ", UID: ", uuid)
+ // in case of a race condition that node with same name create happens before delete event,
+ // delete the node based on uuid
+ name := nm.getNodeNameByUUID(uuid)
+ if name != "" {
+ delete(nm.nodeNameMap, name)
+ } else {
+ klog.V(4).Info("node name: ", node.GetName(), " has a different uuid. Skipping deletion of this node from the cache.")
+ }
+ delete(nm.nodeUUIDMap, uuid)
+ nm.nodeInfoLock.Unlock()
}
func (nm *NodeManager) shakeOutNodeIDLookup(ctx context.Context, nodeID string, searchBy cm.FindVM) (*cm.VMDiscoveryInfo, error) {
@@ -153,23 +167,13 @@ func (nm *NodeManager) shakeOutNodeIDLookup(ctx context.Context, nodeID string,
return nil, err
}
-func returnIPsFromSpecificFamily(family string, ips []string) []string {
- var matching []string
-
- for _, ip := range ips {
- if err := ErrOnLocalOnlyIPAddr(ip); err != nil {
- klog.V(4).Infof("IP is local only or there was an error. ip=%q err=%v", ip, err)
- continue
- }
-
- if strings.EqualFold(family, vcfg.IPv6Family) && net.ParseIP(ip).To4() == nil {
- matching = append(matching, ip)
- } else if strings.EqualFold(family, vcfg.IPv4Family) && net.ParseIP(ip).To4() != nil {
- matching = append(matching, ip)
- }
- }
+type ipAddrNetworkName struct {
+ ipAddr string
+ networkName string
+}
- return matching
+func (c *ipAddrNetworkName) ip() net.IP {
+ return net.ParseIP(c.ipAddr)
}
// DiscoverNode finds a node's VM using the specified search value and search
@@ -183,6 +187,10 @@ func (nm *NodeManager) DiscoverNode(nodeID string, searchBy cm.FindVM) error {
return err
}
+ if vmDI.UUID == "" {
+ return errors.New("discovered VM UUID is empty")
+ }
+
var oVM mo.VirtualMachine
err = vmDI.VM.Properties(ctx, vmDI.VM.Reference(), []string{"guest", "summary"}, &oVM)
if err != nil {
@@ -205,39 +213,42 @@ func (nm *NodeManager) DiscoverNode(nodeID string, searchBy cm.FindVM) error {
}
vcInstance := nm.connectionManager.VsphereInstanceMap[tenantRef]
- ipFamily := []string{vcfg.DefaultIPFamily}
+ ipFamilies := []string{vcfg.DefaultIPFamily}
if vcInstance != nil {
- ipFamily = vcInstance.Cfg.IPFamilyPriority
+ ipFamilies = vcInstance.Cfg.IPFamilyPriority
} else {
klog.Warningf("Unable to find vcInstance for %s. Defaulting to ipv4.", tenantRef)
}
- var internalNetworkSubnet *net.IPNet
- var externalNetworkSubnet *net.IPNet
+ var internalNetworkSubnets []*net.IPNet
+ var externalNetworkSubnets []*net.IPNet
+ var excludeInternalNetworkSubnets []*net.IPNet
+ var excludeExternalNetworkSubnets []*net.IPNet
var internalVMNetworkName string
var externalVMNetworkName string
if nm.cfg != nil {
- if nm.cfg.Nodes.InternalNetworkSubnetCIDR != "" {
- _, internalNetworkSubnet, err = net.ParseCIDR(nm.cfg.Nodes.InternalNetworkSubnetCIDR)
- if err != nil {
- return err
- }
+ internalNetworkSubnets, err = parseCIDRs(nm.cfg.Nodes.InternalNetworkSubnetCIDR)
+ if err != nil {
+ return err
}
- if nm.cfg.Nodes.ExternalNetworkSubnetCIDR != "" {
- _, externalNetworkSubnet, err = net.ParseCIDR(nm.cfg.Nodes.ExternalNetworkSubnetCIDR)
- if err != nil {
- return err
- }
+ externalNetworkSubnets, err = parseCIDRs(nm.cfg.Nodes.ExternalNetworkSubnetCIDR)
+ if err != nil {
+ return err
+ }
+ excludeInternalNetworkSubnets, err = parseCIDRs(nm.cfg.Nodes.ExcludeInternalNetworkSubnetCIDR)
+ if err != nil {
+ return err
+ }
+ excludeExternalNetworkSubnets, err = parseCIDRs(nm.cfg.Nodes.ExcludeExternalNetworkSubnetCIDR)
+ if err != nil {
+ return err
}
internalVMNetworkName = nm.cfg.Nodes.InternalVMNetworkName
externalVMNetworkName = nm.cfg.Nodes.ExternalVMNetworkName
}
- foundInternal := false
- foundExternal := false
addrs := []v1.NodeAddress{}
-
klog.V(2).Infof("Adding Hostname: %s", oVM.Guest.HostName)
v1helper.AddToNodeAddresses(&addrs,
v1.NodeAddress{
@@ -246,12 +257,8 @@ func (nm *NodeManager) DiscoverNode(nodeID string, searchBy cm.FindVM) error {
},
)
- for _, v := range oVM.Guest.Net {
- if v.DeviceConfigId == -1 {
- klog.V(4).Info("Skipping device because not a vNIC")
- continue
- }
-
+ nonVNICDevices := collectNonVNICDevices(oVM.Guest.Net)
+ for _, v := range nonVNICDevices {
klog.V(6).Infof("internalVMNetworkName = %s", internalVMNetworkName)
klog.V(6).Infof("externalVMNetworkName = %s", externalVMNetworkName)
klog.V(6).Infof("v.Network = %s", v.Network)
@@ -260,112 +267,58 @@ func (nm *NodeManager) DiscoverNode(nodeID string, searchBy cm.FindVM) error {
(externalVMNetworkName != "" && !strings.EqualFold(externalVMNetworkName, v.Network)) {
klog.V(4).Infof("Skipping device because vNIC Network=%s doesn't match internal=%s or external=%s network names",
v.Network, internalVMNetworkName, externalVMNetworkName)
- continue
}
+ }
- // Only return a single IP address based on the preference of IPFamily
- // Must break out of loop in the event of ipv6,ipv4 where the NIC does
- // contain a valid IPv6 and IPV4 address
- for _, family := range ipFamily {
-
- ips := returnIPsFromSpecificFamily(family, v.IpAddress)
-
- for _, ip := range ips {
- parsedIP := net.ParseIP(ip)
- if parsedIP == nil {
- return fmt.Errorf("can't parse IP: %s", ip)
- }
-
- // prioritize address masking over networkname
- if !foundInternal && internalNetworkSubnet != nil && internalNetworkSubnet.Contains(parsedIP) {
- klog.V(2).Infof("Adding Internal IP by AddressMatching: %s", ip)
- v1helper.AddToNodeAddresses(&addrs,
- v1.NodeAddress{
- Type: v1.NodeInternalIP,
- Address: ip,
- },
- )
- foundInternal = true
- }
- if !foundExternal && externalNetworkSubnet != nil && externalNetworkSubnet.Contains(parsedIP) {
- klog.V(2).Infof("Adding External IP by AddressMatching: %s", ip)
- v1helper.AddToNodeAddresses(&addrs,
- v1.NodeAddress{
- Type: v1.NodeExternalIP,
- Address: ip,
- },
- )
- foundExternal = true
- }
-
- // then use network name
- if !foundInternal && internalVMNetworkName != "" && strings.EqualFold(internalVMNetworkName, v.Network) {
- klog.V(2).Infof("Adding Internal IP by NetworkName: %s", ip)
- v1helper.AddToNodeAddresses(&addrs,
- v1.NodeAddress{
- Type: v1.NodeInternalIP,
- Address: ip,
- },
- )
- foundInternal = true
- }
- if !foundExternal && externalVMNetworkName != "" && strings.EqualFold(externalVMNetworkName, v.Network) {
- klog.V(2).Infof("Adding External IP by NetworkName: %s", ip)
- v1helper.AddToNodeAddresses(&addrs,
- v1.NodeAddress{
- Type: v1.NodeExternalIP,
- Address: ip,
- },
- )
- foundExternal = true
- }
- }
+ existingNetworkNames := toNetworkNames(nonVNICDevices)
+ if internalVMNetworkName != "" && externalVMNetworkName != "" {
+ if !ArrayContainsCaseInsensitive(existingNetworkNames, internalVMNetworkName) &&
+ !ArrayContainsCaseInsensitive(existingNetworkNames, externalVMNetworkName) {
+ return fmt.Errorf("unable to find suitable IP address for node")
+ }
+ }
- // At least one of the Internal or External addresses has been found.
- // Minimally the Internal needs to exist for the node to function correctly.
- // If only one was discovered, will log the warning and continue which will
- // ultimately be visible to the end user
- if foundInternal || foundExternal {
- if foundInternal && !foundExternal {
- klog.Warning("Internal address found, but external address not found. Returning what addresses were discovered.")
- } else if !foundInternal && foundExternal {
- klog.Warning("External address found, but internal address not found. Returning what addresses were discovered.")
- }
- break
- }
+ ipAddrNetworkNames := toIPAddrNetworkNames(nonVNICDevices)
+ nonLocalhostIPs := excludeLocalhostIPs(ipAddrNetworkNames)
+
+ for _, ipFamily := range ipFamilies {
+ klog.V(6).Infof("ipFamily: %q nonLocalhostIPs: %q", ipFamily, nonLocalhostIPs)
+ discoveredInternal, discoveredExternal := discoverIPs(
+ nonLocalhostIPs,
+ ipFamily,
+ internalNetworkSubnets,
+ externalNetworkSubnets,
+ excludeInternalNetworkSubnets,
+ excludeExternalNetworkSubnets,
+ internalVMNetworkName,
+ externalVMNetworkName,
+ )
+
+ klog.V(6).Infof("ipFamily: %q discovered Internal: %q discoveredExternal: %q",
+ ipFamily, discoveredInternal, discoveredExternal)
+
+ if discoveredInternal != nil {
+ v1helper.AddToNodeAddresses(&addrs,
+ v1.NodeAddress{Type: v1.NodeInternalIP, Address: discoveredInternal.ipAddr},
+ )
+ }
- // Neither internal or external addresses were found. This defaults to the old
- // address selection behavior which is we only support a single address and we
- // return the first one found
- klog.V(5).Info("Default address selection. Single NIC, Single IP Address")
- for _, ip := range ips {
- klog.V(2).Infof("Adding IP: %s", ip)
- v1helper.AddToNodeAddresses(&addrs,
- v1.NodeAddress{
- Type: v1.NodeInternalIP,
- Address: ip,
- },
- v1.NodeAddress{
- Type: v1.NodeExternalIP,
- Address: ip,
- },
- )
- foundInternal = true
- foundExternal = true
- break
- }
+ if discoveredExternal != nil {
+ v1helper.AddToNodeAddresses(&addrs,
+ v1.NodeAddress{Type: v1.NodeExternalIP, Address: discoveredExternal.ipAddr},
+ )
}
- }
- if len(oVM.Guest.Net) > 0 {
- if !foundInternal && !foundExternal {
- return fmt.Errorf("unable to find suitable IP address for node %s with IP family %s", nodeID, ipFamily)
+ if len(oVM.Guest.Net) > 0 {
+ if discoveredInternal == nil && discoveredExternal == nil {
+ return fmt.Errorf("unable to find suitable IP address for node %s with IP family %s", nodeID, ipFamilies)
+ }
}
}
klog.V(2).Infof("Found node %s as vm=%+v in vc=%s and datacenter=%s",
nodeID, vmDI.VM, vmDI.VcServer, vmDI.DataCenter.Name())
- klog.V(2).Info("Hostname: ", oVM.Guest.HostName, " UUID: ", oVM.Summary.Config.Uuid)
+ klog.V(2).Info("Hostname: ", oVM.Guest.HostName, " UUID: ", vmDI.UUID)
os := "unknown"
if g, ok := GuestOSLookup[oVM.Summary.Config.GuestId]; ok {
@@ -386,6 +339,241 @@ func (nm *NodeManager) DiscoverNode(nodeID string, searchBy cm.FindVM) error {
return nil
}
+// discoverIPs returns a pair of *ipAddrNetworkNames. The first represents
+// the internal network IP and the second the external network IP.
+//
+// The returned ipAddrNetworkNames will match the given ipFamily.
+//
+// ipAddrNetworkNames that are contained in the excludeInternalNetworkSubnets
+// will never be returned as an internal address, and similarly addresses
+// contained in the excludeExternalNetworkSubnets will never be returned
+// as an external address - no matter the method of discovery described below.
+//
+// The returned ipAddrNetworkNames will be selected first by attempting to
+// match the given internalNetworkSubnets and externalNetworkSubnets. Subnet
+// matching has the highest precedence.
+//
+// If subnet matches are not found, or if subnets are not provided, then an
+// attempt is made to select ipAddrNetworkNames that match the given network
+// names. Network name matching has the second highest precedence.
+//
+// If ipAddrNetworkNames are not found by either subnet or network name matching, then
+// the first ipAddrNetworkName of the desired family is returned as both the
+// internal and external matches.
+//
+// If either of these IPs cannot be discovered, nil will be returned instead.
+func discoverIPs(ipAddrNetworkNames []*ipAddrNetworkName, ipFamily string,
+ internalNetworkSubnets, externalNetworkSubnets,
+ excludeInternalNetworkSubnets, excludeExternalNetworkSubnets []*net.IPNet,
+ internalVMNetworkName, externalVMNetworkName string) (internal *ipAddrNetworkName, external *ipAddrNetworkName) {
+
+ ipFamilyMatches := collectMatchesForIPFamily(ipAddrNetworkNames, ipFamily)
+
+ var discoveredInternal *ipAddrNetworkName
+ var discoveredExternal *ipAddrNetworkName
+
+ filteredInternalMatches := filterSubnetExclusions(ipFamilyMatches, excludeInternalNetworkSubnets)
+ filteredExternalMatches := filterSubnetExclusions(ipFamilyMatches, excludeExternalNetworkSubnets)
+
+ if len(filteredInternalMatches) > 0 || len(filteredExternalMatches) > 0 {
+ discoveredInternal = findSubnetMatch(filteredInternalMatches, internalNetworkSubnets)
+ if discoveredInternal != nil {
+ klog.V(2).Infof("Adding Internal IP by AddressMatching: %s", discoveredInternal.ipAddr)
+ }
+ discoveredExternal = findSubnetMatch(filteredExternalMatches, externalNetworkSubnets)
+ if discoveredExternal != nil {
+ klog.V(2).Infof("Adding External IP by AddressMatching: %s", discoveredExternal.ipAddr)
+ }
+
+ if discoveredInternal == nil && internalVMNetworkName != "" {
+ discoveredInternal = findNetworkNameMatch(filteredInternalMatches, internalVMNetworkName)
+ if discoveredInternal != nil {
+ klog.V(2).Infof("Adding Internal IP by NetworkName: %s", discoveredInternal.ipAddr)
+ }
+ }
+
+ if discoveredExternal == nil && externalVMNetworkName != "" {
+ discoveredExternal = findNetworkNameMatch(filteredExternalMatches, externalVMNetworkName)
+ if discoveredExternal != nil {
+ klog.V(2).Infof("Adding External IP by NetworkName: %s", discoveredExternal.ipAddr)
+ }
+ }
+
+ // Neither internal nor external addresses were found. Fall back to the
+ // legacy address selection behavior, which supports only a single address
+ // and returns the first one found.
+ if discoveredInternal == nil && discoveredExternal == nil {
+ klog.V(5).Info("Default address selection.")
+ if len(filteredInternalMatches) > 0 {
+ klog.V(2).Infof("Adding Internal IP: %s", filteredInternalMatches[0].ipAddr)
+ discoveredInternal = filteredInternalMatches[0]
+ }
+
+ if len(filteredExternalMatches) > 0 {
+ klog.V(2).Infof("Adding External IP: %s", filteredExternalMatches[0].ipAddr)
+ discoveredExternal = filteredExternalMatches[0]
+ }
+ } else {
+ // At least one of the internal or external addresses has been found.
+ // Minimally, the internal address needs to exist for the node to function
+ // correctly. If only one was discovered, log a warning and continue; the
+ // warning will ultimately be visible to the end user.
+ if discoveredInternal != nil && discoveredExternal == nil {
+ klog.Warning("Internal address found, but external address not found. Returning what addresses were discovered.")
+ } else if discoveredInternal == nil && discoveredExternal != nil {
+ klog.Warning("External address found, but internal address not found. Returning what addresses were discovered.")
+ }
+ }
+ }
+ return discoveredInternal, discoveredExternal
+}
+
+// collectNonVNICDevices filters out NICs that are vNIC devices, i.e. NICs
+// whose DeviceConfigId is -1 and that are therefore not backed by a virtual
+// ethernet card. The IPs of these NICs should not be added to the node status.
+func collectNonVNICDevices(guestNicInfos []types.GuestNicInfo) []types.GuestNicInfo {
+ var toReturn []types.GuestNicInfo
+ for _, v := range guestNicInfos {
+ if v.DeviceConfigId == -1 {
+ klog.V(4).Info("Skipping vNIC device (DeviceConfigId == -1)")
+ continue
+ }
+ toReturn = append(toReturn, v)
+ }
+ return toReturn
+}
+
+// parseCIDRs converts a comma delimited string of CIDRs to
+// a slice of IPNet pointers.
+func parseCIDRs(cidrsString string) ([]*net.IPNet, error) {
+ if cidrsString == "" {
+ return nil, nil
+ }
+ cidrStringSlice := strings.Split(cidrsString, ",")
+ subnets := make([]*net.IPNet, len(cidrStringSlice))
+ for i, cidrString := range cidrStringSlice {
+ _, ipNet, err := net.ParseCIDR(cidrString)
+ if err != nil {
+ return nil, err
+ }
+ subnets[i] = ipNet
+ }
+ return subnets, nil
+}
+
+// toIPAddrNetworkNames maps an array of GuestNicInfo to an array of *ipAddrNetworkName.
+func toIPAddrNetworkNames(guestNicInfos []types.GuestNicInfo) []*ipAddrNetworkName {
+ var candidates []*ipAddrNetworkName
+ for _, v := range guestNicInfos {
+ for _, ip := range v.IpAddress {
+ candidates = append(candidates, &ipAddrNetworkName{ipAddr: ip, networkName: v.Network})
+ }
+ }
+ return candidates
+}
+
+// toNetworkNames maps an array of GuestNicInfo to an array of network name strings.
+func toNetworkNames(guestNicInfos []types.GuestNicInfo) []string {
+ var existingNetworkNames []string
+ for _, v := range guestNicInfos {
+ existingNetworkNames = append(existingNetworkNames, v.Network)
+ }
+ return existingNetworkNames
+}
+
+// collectMatchesForIPFamily collects all ipAddrNetworkNames that have IPs of
+// the desired IP family.
+func collectMatchesForIPFamily(ipAddrNetworkNames []*ipAddrNetworkName, ipFamily string) []*ipAddrNetworkName {
+ return filter(ipAddrNetworkNames, func(candidate *ipAddrNetworkName) bool {
+ return matchesFamily(candidate.ip(), ipFamily)
+ })
+}
+
+// matchesFamily detects whether a given IP matches the given IP family.
+func matchesFamily(ip net.IP, ipFamily string) bool {
+ if ipFamily == vcfg.IPv6Family {
+ return ip.To4() == nil && ip.To16() != nil
+ }
+
+ if ipFamily == vcfg.IPv4Family {
+ return ip.To4() != nil
+ }
+
+ return false
+}
+
+// filter returns a subset of given ipAddrNetworkNames based on whether the
+// items in the collection pass the given predicate function.
+func filter(ipAddrNetworkNames []*ipAddrNetworkName, predicate func(*ipAddrNetworkName) bool) []*ipAddrNetworkName {
+ var filtered []*ipAddrNetworkName
+ for _, item := range ipAddrNetworkNames {
+ if predicate(item) {
+ filtered = append(filtered, item)
+ }
+ }
+ return filtered
+}
+
+// findSubnetMatch finds the first *ipAddrNetworkName that has an IP in the
+// given network subnets.
+func findSubnetMatch(ipAddrNetworkNames []*ipAddrNetworkName, networkSubnets []*net.IPNet) *ipAddrNetworkName {
+ for _, networkSubnet := range networkSubnets {
+ match := findFirst(ipAddrNetworkNames, func(candidate *ipAddrNetworkName) bool {
+ return networkSubnet.Contains(candidate.ip())
+ })
+
+ if match != nil {
+ return match
+ }
+ }
+ return nil
+}
+
+// findNetworkNameMatch finds the first *ipAddrNetworkName that matches the
+// given network name, ignoring case.
+func findNetworkNameMatch(ipAddrNetworkNames []*ipAddrNetworkName, networkName string) *ipAddrNetworkName {
+ if networkName != "" {
+ return findFirst(ipAddrNetworkNames, func(candidate *ipAddrNetworkName) bool {
+ return strings.EqualFold(networkName, candidate.networkName)
+ })
+ }
+ return nil
+}
+
+// findFirst returns the first occurrence that matches the given predicate.
+func findFirst(ipAddrNetworkNames []*ipAddrNetworkName, predicate func(*ipAddrNetworkName) bool) *ipAddrNetworkName {
+ for _, item := range ipAddrNetworkNames {
+ if predicate(item) {
+ return item
+ }
+ }
+ return nil
+}
+
+// excludeLocalhostIPs collects ipAddrNetworkNames whose IPs are valid (IPv4
+// or IPv6) and are not localhost addresses. Localhost IPs should not be
+// added to the node status.
+func excludeLocalhostIPs(ipAddrNetworkNames []*ipAddrNetworkName) []*ipAddrNetworkName {
+ return filter(ipAddrNetworkNames, func(i *ipAddrNetworkName) bool {
+ err := ErrOnLocalOnlyIPAddr(i.ipAddr)
+ if err != nil {
+ klog.V(4).Infof("IP is local only or there was an error. ip=%q err=%v", i.ipAddr, err)
+ }
+ return err == nil
+ })
+}
+
+// filterSubnetExclusions drops any ipAddrNetworkName whose IP is contained
+// in one of the given exclusion subnets.
+func filterSubnetExclusions(ipAddrNetworkNames []*ipAddrNetworkName, exclusionSubnets []*net.IPNet) []*ipAddrNetworkName {
+ return filter(ipAddrNetworkNames, func(i *ipAddrNetworkName) bool {
+ for _, exclusionSubnet := range exclusionSubnets {
+ if exclusionSubnet.Contains(i.ip()) {
+ klog.V(4).Infof("IP %q is excluded because it is contained in exclusion subnet %q", i.ipAddr, exclusionSubnet.String())
+ return false
+ }
+ }
+ return true
+ })
+}
+
// GetNode gets the NodeInfo by UUID
func (nm *NodeManager) GetNode(UUID string, node *pb.Node) error {
nodeInfo, err := nm.FindNodeInfo(UUID)
@@ -535,3 +723,13 @@ func (nm *NodeManager) FindNodeInfo(UUID string) (*NodeInfo, error) {
klog.V(4).Infof("FindNodeInfo( %s ) FOUND", UUIDlower)
return nodeInfo, nil
}
+
+// getNodeNameByUUID returns the name of the registered node with the given
+// UUID, or an empty string if no such node exists.
+func (nm *NodeManager) getNodeNameByUUID(UUID string) string {
+ for k, v := range nm.nodeNameMap {
+ if v.UUID == UUID {
+ return k
+ }
+ }
+ return ""
+}
diff --git a/pkg/cloudprovider/vsphere/nodemanager_test.go b/pkg/cloudprovider/vsphere/nodemanager_test.go
index e58993054..6a1872b69 100644
--- a/pkg/cloudprovider/vsphere/nodemanager_test.go
+++ b/pkg/cloudprovider/vsphere/nodemanager_test.go
@@ -18,17 +18,18 @@ package vsphere
import (
"context"
+ "net"
"strings"
"testing"
"github.com/vmware/govmomi/simulator"
vimtypes "github.com/vmware/govmomi/vim25/types"
+ ccfg "k8s.io/cloud-provider-vsphere/pkg/cloudprovider/vsphere/config"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
pb "k8s.io/cloud-provider-vsphere/pkg/cloudprovider/vsphere/proto"
- vcfg "k8s.io/cloud-provider-vsphere/pkg/common/config"
cm "k8s.io/cloud-provider-vsphere/pkg/common/connectionmanager"
)
@@ -79,11 +80,11 @@ func TestRegUnregNode(t *testing.T) {
nm.UnregisterNode(node)
- if len(nm.nodeNameMap) != 1 {
- t.Errorf("Failed: nodeNameMap should be a length of 1")
+ if len(nm.nodeNameMap) != 0 {
+ t.Errorf("Failed: nodeNameMap should be a length of 0")
}
- if len(nm.nodeUUIDMap) != 1 {
- t.Errorf("Failed: nodeUUIDMap should be a length of 1")
+ if len(nm.nodeUUIDMap) != 0 {
+ t.Errorf("Failed: nodeUUIDMap should be a length of 0")
}
if len(nm.nodeRegUUIDMap) != 0 {
t.Errorf("Failed: nodeRegUUIDMap should be a length of 0")
@@ -244,26 +245,1661 @@ func TestExport(t *testing.T) {
nm.UnregisterNode(node)
}
-func TestReturnIPsFromSpecificFamily(t *testing.T) {
- ipFamilies := []string{
- "10.161.34.192",
- "fd01:0:101:2609:bdd2:ee20:7bd7:5836",
- "fe80::98b5:4834:27a8:c58d",
+func TestDiscoverNodeIPs(t *testing.T) {
+ type testSetup struct {
+ ipFamilyPriority []string
+ cpiConfig *ccfg.CPIConfig
+ networks []vimtypes.GuestNicInfo
+ }
+ testcases := []struct {
+ testName string
+ setup testSetup
+ expectedIPs []v1.NodeAddress
+ expectedErrorSubstring string
+ }{
+
+ {
+ testName: "BySubnet",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "10.10.0.0/16",
+ ExternalNetworkSubnetCIDR: "172.15.0.0/16",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ "20.30.40.50",
+ "10.10.1.22",
+ "10.10.1.23",
+ "172.15.108.10",
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ {Type: "ExternalIP", Address: "172.15.108.10"},
+ },
+ },
+ {
+ testName: "ByNetworkName",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalVMNetworkName: "internal_net",
+ ExternalVMNetworkName: "external_net",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "127.0.0.6",
+ "10.10.1.22",
+ "10.10.1.23",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "127.0.0.7",
+ "172.15.108.10",
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ {Type: "ExternalIP", Address: "172.15.108.10"},
+ },
+ },
+ {
+ testName: "ByDefaultSelection",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: nil,
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ "10.10.1.22",
+ "10.10.1.23",
+ },
+ },
+ {
+ Network: "test_another_nic",
+ IpAddress: []string{
+ "127.0.0.7",
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ {Type: "ExternalIP", Address: "10.10.1.22"},
+ },
+ },
+ {
+ testName: "BySubnetIPv6",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv6"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "fd00:cccc::/64",
+ ExternalNetworkSubnetCIDR: "fd00:bbbb::/64",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "fe80::1",
+ "fd00:aaaa::1",
+ "fd00:cccc::1",
+ "fd00:cccc::2",
+ "fd00:bbbb::1",
+ "fd00:bbbb::2",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "fd00:cccc::1"},
+ {Type: "ExternalIP", Address: "fd00:bbbb::1"},
+ },
+ },
+ {
+ testName: "ByNetworkNameIPv6",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv6"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalVMNetworkName: "internal_net",
+ ExternalVMNetworkName: "external_net",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "fe80::3",
+ "fd00:cccc::1",
+ "fd00:cccc::2",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "fe80::2",
+ "fd00:bbbb::1",
+ "fd00:bbbb::2",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "fd00:cccc::1"},
+ {Type: "ExternalIP", Address: "fd00:bbbb::1"},
+ },
+ },
+ {
+ testName: "ByDefaultSelectionIPv6",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv6"},
+ cpiConfig: nil,
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "fe80::3",
+ "fd00:cccc::1",
+ "fd00:cccc::2",
+ },
+ },
+ {
+ Network: "test_another_nic",
+ IpAddress: []string{
+ "fe80::2",
+ "fd00:bbbb::1",
+ "fd00:bbbb::2",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "fd00:cccc::1"},
+ {Type: "ExternalIP", Address: "fd00:cccc::1"},
+ },
+ },
+ {
+ testName: "ByNetworkNameAndTwoNICs_desiredIPsAfterFirstNIC",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalVMNetworkName: "internal_net",
+ ExternalVMNetworkName: "external_net",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ "169.0.1.2",
+ },
+ },
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "10.10.10.10",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.10.10"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ },
+ },
+ {
+ testName: "ByMultipleSubnets_dualstack_itSelectsBothIPv4andIPv6Addrs",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4", "ipv6"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "10.10.0.0/16,fd00:cccc::/64",
+ ExternalNetworkSubnetCIDR: "172.15.0.0/16,fd00:dddd::/64",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_foo",
+ IpAddress: []string{
+ "127.0.0.6",
+ "169.0.1.2",
+ },
+ },
+ {
+ Network: "net_bar",
+ IpAddress: []string{
+ "10.10.1.22",
+ "fd00:dddd::11",
+ },
+ },
+ {
+ Network: "net_baz",
+ IpAddress: []string{
+ "172.15.108.11",
+ "fd00:cccc::22",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ {Type: "InternalIP", Address: "fd00:cccc::22"},
+ {Type: "ExternalIP", Address: "fd00:dddd::11"},
+ },
+ },
+ {
+ testName: "ByMultipleSubnets_dualstack_WhenNoIPsOfFamilyMatchAnySubnets_itFallsThroughToDefaultSelection",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4", "ipv6"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "10.10.0.0/16,fd00:ffff::/64",
+ ExternalNetworkSubnetCIDR: "172.15.0.0/16,fd00:eeee::/64",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_foo",
+ IpAddress: []string{
+ "127.0.0.6",
+ "169.0.1.2",
+ },
+ },
+ {
+ Network: "net_bar",
+ IpAddress: []string{
+ "10.10.1.22",
+ "fd00:dddd::11",
+ },
+ },
+ {
+ Network: "net_baz",
+ IpAddress: []string{
+ "172.15.108.11",
+ "fd00:cccc::22",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ {Type: "InternalIP", Address: "fd00:dddd::11"},
+ {Type: "ExternalIP", Address: "fd00:dddd::11"},
+ },
+ },
+ {
+ testName: "ByMultipleSubnets_dualstack_WhenNoIPsOfFamilyMatchesInternalOrExternalSubnets_itUsesSubnetSelectionAndOmitsTheIPThatHasNoMatch",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4", "ipv6"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "10.10.0.0/16,fd00:ffff::/64",
+ ExternalNetworkSubnetCIDR: "172.15.0.0/16,fd00:dddd::/64",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_foo",
+ IpAddress: []string{
+ "127.0.0.6",
+ "169.0.1.2",
+ },
+ },
+ {
+ Network: "net_bar",
+ IpAddress: []string{
+ "10.10.1.22",
+ "fd00:dddd::11",
+ },
+ },
+ {
+ Network: "net_baz",
+ IpAddress: []string{
+ "172.15.108.11",
+ "fd00:cccc::22",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ {Type: "ExternalIP", Address: "fd00:dddd::11"},
+ },
+ },
+ {
+ testName: "ByMultipleSubnets",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "170.12.0.0/16,10.10.0.0/16",
+ ExternalNetworkSubnetCIDR: "172.15.0.0/16",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ "169.0.1.2",
+ },
+ },
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "10.10.1.22",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ },
+ },
+ {
+ testName: "BySubnetAndTwoNICs_desiredIPsAfterFirstNIC",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "10.10.0.0/16",
+ ExternalNetworkSubnetCIDR: "172.15.0.0/16",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ "169.0.1.2",
+ },
+ },
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "10.10.1.22",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ },
+ },
+ {
+ testName: "BySubnetAndTwoNICs_desiredIPsAreSplitAcrossNICs",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "10.10.0.0/16",
+ ExternalNetworkSubnetCIDR: "172.15.0.0/16",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ "169.0.1.2",
+ "10.10.1.22",
+ },
+ },
+ {
+ Network: "test_another_nic",
+ IpAddress: []string{
+ "127.0.0.7",
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ },
+ },
+ {
+ testName: "BySubnet_whenExternalCIDRHasNoMatch_itReturnsOnlyInternalIP",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "10.10.0.0/16",
+ ExternalNetworkSubnetCIDR: "172.15.0.0/16",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ "169.0.1.2",
+ "10.10.1.22",
+ },
+ },
+ {
+ Network: "test_another_nic",
+ IpAddress: []string{
+ "127.0.0.7",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ },
+ },
+ {
+ testName: "BySubnet_whenInternalCIDRHasNoMatch_itReturnsOnlyExternalIP",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "10.10.0.0/16",
+ ExternalNetworkSubnetCIDR: "172.15.0.0/16",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ "169.0.1.2",
+ "172.15.108.11",
+ },
+ },
+ {
+ Network: "test_another_nic",
+ IpAddress: []string{
+ "127.0.0.7",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ },
+ },
+ {
+ testName: "ByNetworkName_whenInternalNameHasNoMatch_itReturnsOnlyExternalIP",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalVMNetworkName: "no-matches",
+ ExternalVMNetworkName: "external_net",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ },
+ },
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "10.10.5.8",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "172.15.2.3",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "ExternalIP", Address: "172.15.2.3"},
+ },
+ },
+ {
+ testName: "ByNetworkName_whenExternalNameHasNoMatch_itReturnsOnlyInternalIP",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalVMNetworkName: "internal_net",
+ ExternalVMNetworkName: "no-matches",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ },
+ },
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "10.10.5.8",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "172.15.2.3",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.5.8"},
+ },
+ },
+ {
+ testName: "BySubnet_whenOnlyExternalCIDRIsSet_itReturnsOnlyExternalIP",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ ExternalNetworkSubnetCIDR: "172.15.0.0/16",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ "20.30.40.50",
+ "10.10.1.22",
+ "10.10.1.23",
+ "172.15.108.10",
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "ExternalIP", Address: "172.15.108.10"},
+ },
+ },
+ {
+ testName: "BySubnet_whenOnlyInternalCIDRIsSet_itReturnsOnlyInternalIP",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "10.10.0.0/16",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ "20.30.40.50",
+ "10.10.1.22",
+ "10.10.1.23",
+ "172.15.108.10",
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ },
+ },
+
+ {
+ testName: "ByNetworkName_selectsIgnoringCase",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalVMNetworkName: "InTerNal_NEt",
+ ExternalVMNetworkName: "ExTeRnAL_NeT",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "127.0.0.6",
+ "20.30.40.50",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "127.0.0.6",
+ "20.30.40.51",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "20.30.40.50"},
+ {Type: "ExternalIP", Address: "20.30.40.51"},
+ },
+ },
+ {
+ testName: "ByNetworkName_whenOnlyExternalNetworkIsSet_onlyExternalNetIsSet",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ // TODO: update test net names
+ ExternalVMNetworkName: "external_net",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "127.0.0.6",
+ "10.10.1.22",
+ "10.10.1.23",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "127.0.0.7",
+ "172.15.108.10",
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "ExternalIP", Address: "172.15.108.10"},
+ },
+ },
+ {
+ testName: "ByNetworkName_whenOnlyInternalNetworkIsSet_itReturnsOnlyInternalIP",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalVMNetworkName: "internal_net",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "127.0.0.6",
+ "10.10.1.22",
+ "10.10.1.23",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "127.0.0.7",
+ "172.15.108.10",
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ },
+ },
+ {
+ testName: "BySubnetAndNetworkNameTwoNICs_desiredIPsAreSplitAcrossNICs",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "10.10.0.0/16",
+ ExternalVMNetworkName: "test_another_nic",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "127.0.0.6",
+ "169.0.1.2",
+ "10.10.1.22",
+ },
+ },
+ {
+ Network: "test_another_nic",
+ IpAddress: []string{
+ "127.0.0.7",
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ },
+ },
+ {
+ testName: "BySettingBothNetworkNameAndSubnets_SubnetSelectionHasPrecedenceWhenMatchesAreFound",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "10.10.0.0/16",
+ ExternalNetworkSubnetCIDR: "172.15.0.0/16",
+ InternalVMNetworkName: "internal_net",
+ ExternalVMNetworkName: "external_net",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "22.22.22.22",
+ "172.15.108.11",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "33.33.33.33",
+ "10.10.1.22",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ },
+ },
+ {
+ testName: "BySettingBothNetworkNameAndSubnets_whenSubnetsMatchNoIPs_itUsesNetworkNameSelection",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "254.10.0.0/16",
+ ExternalNetworkSubnetCIDR: "253.15.0.0/16",
+ InternalVMNetworkName: "internal_net",
+ ExternalVMNetworkName: "external_net",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "22.22.22.22",
+ "172.15.108.11",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "33.33.33.33",
+ "10.10.1.22",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "22.22.22.22"},
+ {Type: "ExternalIP", Address: "33.33.33.33"},
+ },
+ },
+ {
+ testName: "ItIgnoresVNICDevices",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "254.10.0.0/16",
+ ExternalNetworkSubnetCIDR: "253.15.0.0/16",
+ InternalVMNetworkName: "internal_net",
+ ExternalVMNetworkName: "external_net",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ DeviceConfigId: -1,
+ Network: "vnic-device",
+ IpAddress: []string{
+ "254.10.1.2",
+ "253.15.2.4",
+ },
+ },
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "22.22.22.22",
+ "172.15.108.11",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "33.33.33.33",
+ "10.10.1.22",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "22.22.22.22"},
+ {Type: "ExternalIP", Address: "33.33.33.33"},
+ },
+ },
+ {
+ testName: "BySettingANetworkNameThatDoesntExist",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalVMNetworkName: "internal_net",
+ ExternalVMNetworkName: "external_net",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_a",
+ IpAddress: []string{
+ "10.10.1.22",
+ },
+ },
+ {
+ Network: "net_b",
+ IpAddress: []string{
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedErrorSubstring: "unable to find suitable IP address for node",
+ },
+ {
+ testName: "ByDiscoveringAnUnParsableIP_itIsIgnored",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: nil,
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_123abc",
+ IpAddress: []string{
+ "blarg",
+ "127.0.0.6",
+ "10.10.1.22",
+ "10.10.1.23",
+ },
+ },
+ {
+ Network: "test_another_nic",
+ IpAddress: []string{
+ "127.0.0.7",
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "10.10.1.22"},
+ {Type: "ExternalIP", Address: "10.10.1.22"},
+ },
+ },
+ {
+ testName: "ByDefaultSelection_whenTheSecondNICHasNoIPs",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: nil,
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_a",
+ IpAddress: []string{
+ "172.15.108.11",
+ },
+ },
+ {
+ Network: "net_b",
+ IpAddress: []string{},
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "172.15.108.11"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ },
+ },
+ {
+ testName: "ByDefaultSelection_whenTheFirstNICHasNoIPs",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: nil,
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_a",
+ IpAddress: []string{},
+ },
+ {
+ Network: "net_b",
+ IpAddress: []string{
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "172.15.108.11"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ },
+ },
+ {
+ testName: "ByDefaultSelection_whenTheFirstNICHasNoIPsOfTheDesiredFamily",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: nil,
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_a",
+ IpAddress: []string{
+ "fd00:cccc::1",
+ },
+ },
+ {
+ Network: "net_b",
+ IpAddress: []string{
+ "172.15.108.11",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "172.15.108.11"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ },
+ },
+ {
+ testName: "ByDefaultSelection_TheSecondNICHasNoIPsOfTheDesiredFamily",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4"},
+ cpiConfig: nil,
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_a",
+ IpAddress: []string{
+ "172.15.108.11",
+ "fe80:cccc::1",
+ },
+ },
+ {
+ Network: "net_b",
+ IpAddress: []string{
+ "fe80:cccc::2",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "172.15.108.11"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ },
+ },
+ {
+ testName: "ByDefaultSelection_whenDualStackIPv4Primary_itReturnsIPv4AddrsFirst",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4", "ipv6"},
+ cpiConfig: nil,
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_a",
+ IpAddress: []string{
+ "172.15.108.11",
+ "fd00:cccc::1",
+ },
+ },
+ {
+ Network: "net_b",
+ IpAddress: []string{
+ "fd00:cccc::2",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "172.15.108.11"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ {Type: "InternalIP", Address: "fd00:cccc::1"},
+ {Type: "ExternalIP", Address: "fd00:cccc::1"},
+ },
+ },
+ {
+ testName: "ByDefaultSelection_itDoesNotSelectIPsFromtheExclusionCIDRList",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4", "ipv6"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ ExcludeInternalNetworkSubnetCIDR: "172.15.108.11/32,fd00:cccc::1/128,fd00:cccc::2/128",
+ ExcludeExternalNetworkSubnetCIDR: "172.15.108.11/32,172.15.108.12/32,fd00:cccc::1/128",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_a",
+ IpAddress: []string{
+ "172.15.108.11",
+ "172.15.108.12",
+ "172.15.108.13",
+ "fd00:cccc::1",
+ },
+ },
+ {
+ Network: "net_b",
+ IpAddress: []string{
+ "fd00:cccc::2",
+ "fd00:cccc::3",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "172.15.108.12"},
+ {Type: "ExternalIP", Address: "172.15.108.13"},
+ {Type: "InternalIP", Address: "fd00:cccc::3"},
+ {Type: "ExternalIP", Address: "fd00:cccc::2"},
+ },
+ },
+ {
+ testName: "ByDefaultSelection_DualStackIPv6Primary_itReturnsIPv6AddrsFirst",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv6", "ipv4"},
+ cpiConfig: nil,
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "net_a",
+ IpAddress: []string{
+ "172.15.108.11",
+ "fd00:cccc::1",
+ },
+ },
+ {
+ Network: "net_b",
+ IpAddress: []string{
+ "fd00:cccc::2",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "fd00:cccc::1"},
+ {Type: "ExternalIP", Address: "fd00:cccc::1"},
+ {Type: "InternalIP", Address: "172.15.108.11"},
+ {Type: "ExternalIP", Address: "172.15.108.11"},
+ },
+ },
+ {
+ testName: "ByNetworkName_whenDualStack",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv6", "ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalVMNetworkName: "internal_net",
+ ExternalVMNetworkName: "external_net",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "172.15.108.11",
+ "fd00:cccc::1",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "fd00:cccc::2",
+ "172.15.108.12",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "fd00:cccc::1"},
+ {Type: "ExternalIP", Address: "fd00:cccc::2"},
+ {Type: "InternalIP", Address: "172.15.108.11"},
+ {Type: "ExternalIP", Address: "172.15.108.12"},
+ },
+ },
+ {
+ testName: "BySubnet_itDoesNotSelectIPsFromtheExclusionCIDRList",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4", "ipv6"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalNetworkSubnetCIDR: "172.15.0.0/16,fd00:cccc::0/32",
+ ExternalNetworkSubnetCIDR: "173.15.0.0/16,fd01:cccc::0/32",
+
+ ExcludeInternalNetworkSubnetCIDR: "172.15.108.11/32,fd00:cccc::1/128,fd00:cccc::2/128",
+ ExcludeExternalNetworkSubnetCIDR: "173.15.108.11/32,173.15.108.12/32,fd01:cccc::1/128",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "172.15.108.11",
+ "172.15.108.12",
+ "172.15.108.13",
+ "fd00:cccc::1",
+ "fd00:cccc::2",
+ "fd00:cccc::3",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "173.15.108.11",
+ "173.15.108.12",
+ "173.15.108.13",
+ "fd01:cccc::1",
+ "fd01:cccc::2",
+ "fd01:cccc::3",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "172.15.108.12"},
+ {Type: "ExternalIP", Address: "173.15.108.13"},
+ {Type: "InternalIP", Address: "fd00:cccc::3"},
+ {Type: "ExternalIP", Address: "fd01:cccc::2"},
+ },
+ },
+ {
+ testName: "ByNetworkName_itDoesNotSelectIPsFromtheExclusionCIDRList",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv4", "ipv6"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ InternalVMNetworkName: "internal_net",
+ ExternalVMNetworkName: "external_net",
+ ExcludeInternalNetworkSubnetCIDR: "172.15.108.11/32,fd00:cccc::1/128,fd00:cccc::2/128",
+ ExcludeExternalNetworkSubnetCIDR: "173.15.108.11/32,173.15.108.12/32,fd01:cccc::1/128",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "172.15.108.11",
+ "172.15.108.12",
+ "172.15.108.13",
+ "fd00:cccc::1",
+ "fd00:cccc::2",
+ "fd00:cccc::3",
+ },
+ },
+ {
+ Network: "external_net",
+ IpAddress: []string{
+ "173.15.108.11",
+ "173.15.108.12",
+ "173.15.108.13",
+ "fd01:cccc::1",
+ "fd01:cccc::2",
+ "fd01:cccc::3",
+ },
+ },
+ },
+ },
+ expectedIPs: []v1.NodeAddress{
+ {Type: "InternalIP", Address: "172.15.108.12"},
+ {Type: "ExternalIP", Address: "173.15.108.13"},
+ {Type: "InternalIP", Address: "fd00:cccc::3"},
+ {Type: "ExternalIP", Address: "fd01:cccc::2"},
+ },
+ },
+ {
+ testName: "Dualstack_ExcludingSubnets_whenNoIPv4AddrIsDiscovered",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv6", "ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ ExcludeInternalNetworkSubnetCIDR: "172.15.108.11/8",
+ ExcludeExternalNetworkSubnetCIDR: "172.15.108.11/8",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "172.15.108.11",
+ "fd00:cccc::1",
+ },
+ },
+ },
+ },
+ expectedErrorSubstring: "unable to find suitable IP address for node",
+ },
+ {
+ testName: "Dualstack_ExcludingSubnets_whenNoIPv6AddrIsDiscovered",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv6", "ipv4"},
+ cpiConfig: &ccfg.CPIConfig{
+ Nodes: ccfg.Nodes{
+ ExcludeInternalNetworkSubnetCIDR: "fd00:cccc::1/16",
+ ExcludeExternalNetworkSubnetCIDR: "fd00:cccc::1/16",
+ },
+ },
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "172.15.108.11",
+ "fd00:cccc::1",
+ },
+ },
+ },
+ },
+ expectedErrorSubstring: "unable to find suitable IP address for node",
+ },
+ {
+ testName: "DualStack_whenNoIPsOfOneFamilyAreDiscovered",
+ setup: testSetup{
+ ipFamilyPriority: []string{"ipv6", "ipv4"},
+ cpiConfig: nil,
+ networks: []vimtypes.GuestNicInfo{
+ {
+ Network: "internal_net",
+ IpAddress: []string{
+ "127.0.0.1",
+ "fd00:cccc::1",
+ },
+ },
+ },
+ },
+ expectedErrorSubstring: "unable to find suitable IP address for node",
+ },
+ }
+
+ for _, testcase := range testcases {
+ t.Run(testcase.testName, func(t *testing.T) {
+ cfg, fin := configFromEnvOrSim(true)
+ defer fin()
+
+ cfg.VirtualCenter[cfg.Global.VCenterIP].IPFamilyPriority = testcase.setup.ipFamilyPriority
+ connMgr := cm.NewConnectionManager(cfg, nil, nil)
+ defer connMgr.Logout()
+
+ nm := newNodeManager(testcase.setup.cpiConfig, connMgr)
+
+ vm := simulator.Map.Any("VirtualMachine").(*simulator.VirtualMachine)
+ vm.Guest.HostName = strings.ToLower(vm.Name) // simulator.SearchIndex.FindByDnsName matches against the guest.hostName property
+ vm.Guest.Net = testcase.setup.networks
+
+ name := vm.Name
+
+ err := connMgr.Connect(context.Background(), connMgr.VsphereInstanceMap[cfg.Global.VCenterIP])
+ if err != nil {
+ t.Errorf("Failed to Connect to vSphere: %s", err)
+ }
+
+ // subject
+ err = nm.DiscoverNode(name, cm.FindVMByName)
+ if testcase.expectedErrorSubstring != "" {
+ if err == nil {
+ t.Errorf("failed: expected DiscoverNode to return error containing: %q but no error occurred", testcase.expectedErrorSubstring)
+ return
+ }
+ if !strings.Contains(err.Error(), testcase.expectedErrorSubstring) {
+ t.Errorf("failed: expected DiscoverNode to return error containing: %q but was %q", testcase.expectedErrorSubstring, err.Error())
+ }
+ return
+ } else if err != nil {
+ t.Errorf("Failed DiscoverNode: %s", err)
+ return
+ }
+
+ nodeInfo, ok := nm.nodeNameMap[strings.ToLower(name)]
+ if !ok {
+ t.Errorf("failed: %v not found", name)
+ return
+ }
+
+ // hostname is always returned first, then the expected ips
+ expectations := append(
+ []v1.NodeAddress{{Type: "Hostname", Address: strings.ToLower(vm.Name)}},
+ testcase.expectedIPs...,
+ )
+ if len(nodeInfo.NodeAddresses) != len(expectations) {
+ t.Errorf("failed: nodeInfo.NodeAddresses should be length %d but was %d", len(testcase.expectedIPs)+1, len(nodeInfo.NodeAddresses))
+ }
+ for i, nodeAddress := range expectations {
+ if nodeInfo.NodeAddresses[i].Address != nodeAddress.Address {
+ t.Errorf("failed: NodeAddresses[%d].Address should eq %q but was %q", i, nodeAddress.Address, nodeInfo.NodeAddresses[i].Address)
+ }
+ if nodeInfo.NodeAddresses[i].Type != nodeAddress.Type {
+ t.Errorf("failed: NodeAddresses[%d].Type should eq %q but was %q", i, nodeAddress.Type, nodeInfo.NodeAddresses[i].Type)
+ }
+ }
+ })
+ }
+}
+
+func TestCollectNonVNICDevices(t *testing.T) {
+ guestNicInfos := []vimtypes.GuestNicInfo{
+ {DeviceConfigId: 10},
+ {DeviceConfigId: -1},
+ }
+
+ returnedGuestNicInfos := collectNonVNICDevices(guestNicInfos)
+
+ if len(returnedGuestNicInfos) != 1 {
+ t.Errorf("failed: expected one GuestNicInfo, got %d", len(returnedGuestNicInfos))
+ }
+
+ if returnedGuestNicInfos[0].DeviceConfigId != 10 {
+ t.Errorf("failed: expected GuestNicInfo.DeviceConfigId to equal 10 but was %d", returnedGuestNicInfos[0].DeviceConfigId)
+ }
+}
+
+func TestToIPAddrNetworkNames(t *testing.T) {
+ guestNicInfos := []vimtypes.GuestNicInfo{
+ {Network: "internal_net", IpAddress: []string{"192.168.1.1", "fd00:1:4::1"}},
+ {Network: "external_net", IpAddress: []string{"10.10.50.12", "fd00:100:64::1"}},
+ }
+
+ actual := toIPAddrNetworkNames(guestNicInfos)
+
+ if len(actual) != 4 {
+ t.Errorf("failed: expected four returned ipAddrNetworkNames, got: %d", len(actual))
+ }
+
+ if actual[0].networkName != "internal_net" || actual[0].ipAddr != "192.168.1.1" {
+ t.Errorf("failed: expected the first entry to have a networkName of \"internal_net\" and a ipAddr of \"192.168.1.1\", but got: %s %s", actual[0].networkName, actual[0].ipAddr)
+ }
+
+ if actual[1].networkName != "internal_net" || actual[1].ipAddr != "fd00:1:4::1" {
+ t.Errorf("failed: expected the first entry to have a networkName of \"internal_net\" and a ipAddr of \"fd00:1:4::1\", but got: %s %s", actual[1].networkName, actual[1].ipAddr)
+ }
+
+ if actual[2].networkName != "external_net" || actual[2].ipAddr != "10.10.50.12" {
+ t.Errorf("failed: expected the first entry to have a networkName of \"external_net\" and a ipAddr of \"10.10.50.12\", but got: %s %s", actual[2].networkName, actual[2].ipAddr)
+ }
+
+ if actual[3].networkName != "external_net" || actual[3].ipAddr != "fd00:100:64::1" {
+ t.Errorf("failed: expected the first entry to have a networkName of \"external_net\" and a ipAddr of \"fd00:100:64::1\", but got: %s %s", actual[3].networkName, actual[3].ipAddr)
+ }
+}
+
+func TestToNetworkNames(t *testing.T) {
+ guestNicInfos := []vimtypes.GuestNicInfo{
+ {Network: "internal_net"},
+ {Network: "external_net"},
+ }
+
+ actual := toNetworkNames(guestNicInfos)
+
+ if len(actual) != 2 {
+ t.Errorf("failed: expected two returned network names: %d", len(actual))
+ }
+
+ if actual[0] != "internal_net" {
+ t.Errorf("failed: expected the first entry to equal of \"internal_net\", but got: %s ", actual[0])
+ }
+
+ if actual[1] != "external_net" {
+ t.Errorf("failed: expected the first entry to equal of \"external_net\", but got: %s ", actual[1])
+ }
+}
+
+func TestCollectMatchesForIPFamily(t *testing.T) {
+ ipAddrNetworkNames := []*ipAddrNetworkName{
+ {ipAddr: "192.168.1.1"},
+ {ipAddr: "fd00:100:64::1"},
+ }
+
+ ipv4IPAddrs := collectMatchesForIPFamily(ipAddrNetworkNames, "ipv4")
+
+ if len(ipv4IPAddrs) != 1 {
+ t.Errorf("failed: expected one ipv4 match, but got: %d", len(ipv4IPAddrs))
+ }
+
+ if ipv4IPAddrs[0].ipAddr != "192.168.1.1" {
+ t.Errorf("failed: expected ipAddr to equal \"192.168.1.1\", but got: %s", ipv4IPAddrs[0].ipAddr)
+ }
+
+ ipv6IPAddrs := collectMatchesForIPFamily(ipAddrNetworkNames, "ipv6")
+
+ if len(ipv6IPAddrs) != 1 {
+ t.Errorf("failed: expected one ipv6 match, but got: %d", len(ipv4IPAddrs))
+ }
+
+ if ipv6IPAddrs[0].ipAddr != "fd00:100:64::1" {
+ t.Errorf("failed: expected ipAddr to equal \"fd00:100:64::1\", but got: %s", ipv6IPAddrs[0].ipAddr)
+ }
+}
+
+func TestMatchesFamily(t *testing.T) {
+ if !matchesFamily(net.ParseIP("192.168.1.1"), "ipv4") {
+ t.Errorf("failed: expected 192.168.1.1 to match ipFamily ipv4, but it did not")
+ }
+
+ if matchesFamily(net.ParseIP("192.168.1.1"), "ipv6") {
+ t.Errorf("failed: expected 192.168.1.1 not to match ipFamily ipv6, but it did")
+ }
+
+ if !matchesFamily(net.ParseIP("fd00:1::1"), "ipv6") {
+ t.Errorf("failed: expected fd00:1::1to match ipFamily ipv6, but it did not")
+ }
+
+ if matchesFamily(net.ParseIP("fd00:1::1"), "ipv4") {
+ t.Errorf("failed: expected fd00:1::1 not to match ipFamily ipv4, but it did")
+ }
+
+ if matchesFamily(net.ParseIP("garbage"), "ipv6") {
+ t.Errorf("failed: expected garbage not to match ipFamily ipv6, but it did")
+ }
+
+ if matchesFamily(net.ParseIP("garbage"), "ipv4") {
+ t.Errorf("failed: expected garbage not to match ipFamily ipv4, but it did")
+ }
+
+ if matchesFamily(net.ParseIP("fd00:1::1"), "ipv7") {
+ t.Errorf("failed: expected fd00:1::1 not to match ipFamily ipv7, but it did")
+ }
+
+ if matchesFamily(net.ParseIP("192.168.1.1"), "ipv7") {
+ t.Errorf("failed: expected 192.168.1.1 not to match ipFamily ipv7, but it did")
+ }
+}
+
+func TestFilter(t *testing.T) {
+ ipAddrNetworkNames := []*ipAddrNetworkName{
+ {networkName: "foo"},
+ {networkName: "bar"},
+ }
+
+ actual := filter(ipAddrNetworkNames, func(n *ipAddrNetworkName) bool {
+ return n.networkName == "foo"
+ })
+
+ if len(actual) != 1 {
+ t.Errorf("failed: expected one ipAddrNetworkName, but got: %d", len(actual))
+ }
+
+ if actual[0].networkName != "foo" {
+ t.Errorf("failed: expected filtered network name to be \"foo\", but got %s", actual[0].networkName)
+ }
+}
+
+func TestFindSubnetMatch(t *testing.T) {
+ ipAddrNetworkNames := []*ipAddrNetworkName{
+ {ipAddr: "192.168.1.1"},
+ {ipAddr: "10.10.1.2"},
+ {ipAddr: "10.10.1.3"},
+ }
+
+ _, ipNetA, err := net.ParseCIDR("10.11.0.0/16")
+ if err != nil {
+ t.Errorf("failed to parse CIDR")
+ }
+ _, ipNetB, err := net.ParseCIDR("10.10.0.0/16")
+ if err != nil {
+ t.Errorf("failed to parse CIDR")
+ }
+
+ actual := findSubnetMatch(ipAddrNetworkNames, []*net.IPNet{ipNetA, ipNetB})
+
+ if actual.ipAddr != "10.10.1.2" {
+ t.Errorf("failed: expected ipAddr to equal 10.10.1.2, but was %s", actual.ipAddr)
+ }
+
+ ipAddrNetworkNames = []*ipAddrNetworkName{
+ {ipAddr: "fc11::1"},
+ {ipAddr: "fd00:100:64::1"},
+ {ipAddr: "fd00:100:64::2"},
+ }
+
+ _, ipNet, err := net.ParseCIDR("fd00:100:64::/64")
+ if err != nil {
+ t.Errorf("failed to parse CIDR")
+ }
+
+ actual = findSubnetMatch(ipAddrNetworkNames, []*net.IPNet{ipNet})
+
+ if actual.ipAddr != "fd00:100:64::1" {
+ t.Errorf("failed: expected ipAddr to equal fd00:100:64::1, but was %s", actual.ipAddr)
+ }
+
+ ipAddrNetworkNames = []*ipAddrNetworkName{
+ {ipAddr: "fc11::1"},
+ {ipAddr: "fd00:101:64::2"},
+ {ipAddr: "fd00:100:64::1"},
+ {ipAddr: "fd00:100:64::2"},
+ }
+
+ _, ipNet1, err := net.ParseCIDR("fd00:100:64::/64")
+ if err != nil {
+ t.Errorf("failed to parse CIDR")
+ }
+
+ _, ipNet2, err := net.ParseCIDR("fd00:101:64::/64")
+ if err != nil {
+ t.Errorf("failed to parse CIDR")
+ }
+
+ actual = findSubnetMatch(ipAddrNetworkNames, []*net.IPNet{ipNet1, ipNet2})
+
+ if actual.ipAddr != "fd00:100:64::1" {
+ t.Errorf("failed: expected ipAddr to equal fd00:100:64::1, but was %s", actual.ipAddr)
+ }
+}
+
+func TestFindFirst(t *testing.T) {
+ ipAddrNetworkNames := []*ipAddrNetworkName{
+ {networkName: "foo", ipAddr: "::1"},
+ {networkName: "bar", ipAddr: "::2"},
+ {networkName: "baz", ipAddr: "::3"},
+ }
+
+ actual := findFirst(ipAddrNetworkNames, func(i *ipAddrNetworkName) bool {
+ return i.networkName == "bar"
+ })
+
+ if actual.networkName != "bar" {
+ t.Errorf("failed: expected ipAddr to have name 'bar', but was %s", actual.networkName)
+ }
+}
+
+func TestFindNetworkNameMatch(t *testing.T) {
+ ipAddrNetworkNames := []*ipAddrNetworkName{
+ {networkName: "foo", ipAddr: "::1"},
+ {networkName: "bar", ipAddr: "::1"},
+ {networkName: "bar", ipAddr: "192.168.1.1"},
+ }
+
+ match := findNetworkNameMatch(ipAddrNetworkNames, "bar")
+
+ if match.networkName != "bar" || match.ipAddr != "::1" {
+ t.Errorf("failed: expected a match of name \"bar\" with an ipAddr of \"::1\", but got: %s %s", match.networkName, match.ipAddr)
+ }
+}
+
+func TestExcludeLocalhostIPs(t *testing.T) {
+ ipAddrNetworkNames := []*ipAddrNetworkName{
+ // doesn't parse
+ {ipAddr: "garbage"},
+ // unspecified
+ {ipAddr: "0.0.0.0"},
+ {ipAddr: "::"},
+ // link local multicast
+ {ipAddr: "224.0.0.1"},
+ {ipAddr: "ff02::1"},
+ // link local unicast
+ {ipAddr: "169.254.0.1"},
+ {ipAddr: "fe80::1"},
+ // loopback
+ {ipAddr: "127.0.0.1"},
+ {ipAddr: "::1"},
+
+ {ipAddr: "192.168.1.1"},
+ {ipAddr: "fd00:100:64::1"},
+ }
+
+ actual := excludeLocalhostIPs(ipAddrNetworkNames)
+
+ if len(actual) != 2 {
+ t.Errorf("failure: expected non localhosts matches to have len 2, but was %d", len(actual))
}
- ips := returnIPsFromSpecificFamily(vcfg.IPv6Family, ipFamilies)
- size := len(ips)
- if size != 1 {
- t.Errorf("Should only return single IPv6 address. expected: 1, actual: %d", size)
- } else if !strings.EqualFold(ips[0], "fd01:0:101:2609:bdd2:ee20:7bd7:5836") {
- t.Errorf("IPv6 does not match. expected: fd01:0:101:2609:bdd2:ee20:7bd7:5836, actual: %s", ips[0])
+ if actual[0].ipAddr != "192.168.1.1" {
+ t.Errorf("failure: expected ipAddr to equal 192.168.1.1, but was %s", actual[0].ipAddr)
}
- ips = returnIPsFromSpecificFamily(vcfg.IPv4Family, ipFamilies)
- size = len(ips)
- if size != 1 {
- t.Errorf("Should only return single IPv4 address. expected: 1, actual: %d", size)
- } else if !strings.EqualFold(ips[0], "10.161.34.192") {
- t.Errorf("IPv6 does not match. expected: 10.161.34.192, actual: %s", ips[0])
+ if actual[1].ipAddr != "fd00:100:64::1" {
+ t.Errorf("failure: expected ipAddr to equal fd00:100:64::1, but was %s", actual[1].ipAddr)
}
}
diff --git a/pkg/cloudprovider/vsphere/util.go b/pkg/cloudprovider/vsphere/util.go
index d0f13bec5..520916b54 100644
--- a/pkg/cloudprovider/vsphere/util.go
+++ b/pkg/cloudprovider/vsphere/util.go
@@ -82,3 +82,14 @@ func ErrOnLocalOnlyIPAddr(addr string) error {
}
return nil
}
+
+// ArrayContainsCaseInsensitive detects whether a given array of string contains
+// the given string, ignoring case.
+func ArrayContainsCaseInsensitive(arr []string, str string) bool {
+ for _, a := range arr {
+ if strings.EqualFold(a, str) {
+ return true
+ }
+ }
+ return false
+}
diff --git a/pkg/cloudprovider/vsphere/util_test.go b/pkg/cloudprovider/vsphere/util_test.go
index da276148c..0e7fbca29 100644
--- a/pkg/cloudprovider/vsphere/util_test.go
+++ b/pkg/cloudprovider/vsphere/util_test.go
@@ -105,3 +105,43 @@ func TestUUIDConvertAndRevert(t *testing.T) {
t.Errorf("Failed to revert UUID")
}
}
+
+func TestArrayContainsCaseInsensitive(t *testing.T) {
+ arr := []string{"First", "second", "THIRD"}
+
+ if !ArrayContainsCaseInsensitive(arr, "First") {
+ t.Errorf("Failed to find First")
+ }
+
+ if !ArrayContainsCaseInsensitive(arr, "firsT") {
+ t.Errorf("Failed to find firsT")
+ }
+
+ if ArrayContainsCaseInsensitive(arr, "firs") {
+ t.Errorf("Found firs")
+ }
+
+ if !ArrayContainsCaseInsensitive(arr, "second") {
+ t.Errorf("Failed to find second")
+ }
+
+ if !ArrayContainsCaseInsensitive(arr, "Second") {
+ t.Errorf("Failed to find Second")
+ }
+
+ if ArrayContainsCaseInsensitive(arr, "SecondInLine") {
+ t.Errorf("Found SecondInLine")
+ }
+
+ if !ArrayContainsCaseInsensitive(arr, "THIRD") {
+ t.Errorf("Failed to find THIRD")
+ }
+
+ if !ArrayContainsCaseInsensitive(arr, "third") {
+ t.Errorf("Failed to find third")
+ }
+
+ if ArrayContainsCaseInsensitive(arr, "ThirdMakesACrowd") {
+ t.Errorf("Found ThirdMakesACrowd")
+ }
+}
diff --git a/pkg/cloudprovider/vsphereparavirtual/cloud.go b/pkg/cloudprovider/vsphereparavirtual/cloud.go
index 83db9c82b..8b602496c 100644
--- a/pkg/cloudprovider/vsphereparavirtual/cloud.go
+++ b/pkg/cloudprovider/vsphereparavirtual/cloud.go
@@ -146,6 +146,12 @@ func (cp *VSphereParavirtual) Initialize(clientBuilder cloudprovider.ControllerC
}
}
+ zones, err := NewZones(clusterNS, kcfg)
+ if err != nil {
+ klog.Errorf("Failed to init Zones: %v", err)
+ }
+ cp.zones = zones
+
cp.informMgr.Listen()
klog.V(0).Info("Initing vSphere Paravirtual Cloud Provider Succeeded")
}
@@ -174,7 +180,7 @@ func (cp *VSphereParavirtual) InstancesV2() (cloudprovider.InstancesV2, bool) {
// is supported, false otherwise.
func (cp *VSphereParavirtual) Zones() (cloudprovider.Zones, bool) {
klog.V(1).Info("Enabling Zones interface on vsphere paravirtual cloud provider")
- return nil, false
+ return cp.zones, true
}
// Clusters returns a clusters interface. Also returns true if the interface
diff --git a/pkg/cloudprovider/vsphereparavirtual/instances.go b/pkg/cloudprovider/vsphereparavirtual/instances.go
index f4ea548f6..37bb8bcd9 100644
--- a/pkg/cloudprovider/vsphereparavirtual/instances.go
+++ b/pkg/cloudprovider/vsphereparavirtual/instances.go
@@ -18,13 +18,13 @@ package vsphereparavirtual
import (
"context"
+ "errors"
"strings"
"time"
v1 "k8s.io/api/core/v1"
"k8s.io/client-go/rest"
- apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -32,10 +32,8 @@ import (
cloudprovider "k8s.io/cloud-provider"
"k8s.io/klog/v2"
- "k8s.io/cloud-provider-vsphere/pkg/cloudprovider/vsphereparavirtual/vmservice"
- "k8s.io/cloud-provider-vsphere/pkg/util"
-
vmopv1alpha1 "github.com/vmware-tanzu/vm-operator-api/api/v1alpha1"
+ "k8s.io/cloud-provider-vsphere/pkg/cloudprovider/vsphereparavirtual/vmservice"
)
type instances struct {
@@ -56,6 +54,10 @@ var DiscoverNodeBackoff = wait.Backoff{
Jitter: 1.0,
}
+var (
+ errBiosUUIDEmpty = errors.New("discovered Bios UUID is empty")
+)
+
func checkError(err error) bool {
return err != nil
}
@@ -63,61 +65,13 @@ func checkError(err error) bool {
// discoverNodeByProviderID takes a ProviderID and returns a VirtualMachine if one exists, or nil otherwise
// VirtualMachine not found is not an error
func (i instances) discoverNodeByProviderID(ctx context.Context, providerID string) (*vmopv1alpha1.VirtualMachine, error) {
- var discoveredNode *vmopv1alpha1.VirtualMachine = nil
-
- // Adding Retry here because there is no retry in caller from node controller
- // https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/cloud/node_controller.go#L368
- err := util.RetryOnError(
- DiscoverNodeBackoff,
- checkError,
- func() error {
- uuid := GetUUIDFromProviderID(providerID)
- vms := vmopv1alpha1.VirtualMachineList{}
- err := i.vmClient.List(ctx, &vms, &client.ListOptions{
- Namespace: i.namespace,
- })
- if err != nil {
- return err
- }
- for i := range vms.Items {
- vm := vms.Items[i]
- if uuid == vm.Status.BiosUUID {
- discoveredNode = &vm
- break
- }
- }
-
- return nil
- })
-
- return discoveredNode, err
+ return discoverNodeByProviderID(ctx, providerID, i.namespace, i.vmClient)
}
// discoverNodeByName takes a node name and returns a VirtualMachine if one exists, or nil otherwise
// VirtualMachine not found is not an error
func (i instances) discoverNodeByName(ctx context.Context, name types.NodeName) (*vmopv1alpha1.VirtualMachine, error) {
- var discoveredNode *vmopv1alpha1.VirtualMachine = nil
-
- // Adding Retry here because there is no retry in caller from node controller
- // https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/cloud/node_controller.go#L368
- err := util.RetryOnError(
- DiscoverNodeBackoff,
- checkError,
- func() error {
- vmKey := types.NamespacedName{Name: string(name), Namespace: i.namespace}
- vm := vmopv1alpha1.VirtualMachine{}
- err := i.vmClient.Get(ctx, vmKey, &vm)
- if err != nil {
- if apierrors.IsNotFound(err) {
- return nil
- }
- return err
- }
- discoveredNode = &vm
- return nil
- })
-
- return discoveredNode, err
+ return discoverNodeByName(ctx, name, i.namespace, i.vmClient)
}
// NewInstances returns an implementation of cloudprovider.Instances
@@ -197,6 +151,10 @@ func (i *instances) InstanceID(ctx context.Context, nodeName types.NodeName) (st
return "", cloudprovider.InstanceNotFound
}
+ if vm.Status.BiosUUID == "" {
+ return "", errBiosUUIDEmpty
+ }
+
klog.V(4).Infof("instances.InstanceID() called to get vm: %v uuid: %v", nodeName, vm.Status.BiosUUID)
return vm.Status.BiosUUID, nil
}
diff --git a/pkg/cloudprovider/vsphereparavirtual/instances_test.go b/pkg/cloudprovider/vsphereparavirtual/instances_test.go
index 71c86e563..fb9051930 100644
--- a/pkg/cloudprovider/vsphereparavirtual/instances_test.go
+++ b/pkg/cloudprovider/vsphereparavirtual/instances_test.go
@@ -21,22 +21,17 @@ import (
"fmt"
"testing"
+ "github.com/stretchr/testify/assert"
+ vmopv1alpha1 "github.com/vmware-tanzu/vm-operator-api/api/v1alpha1"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/types"
cloudprovider "k8s.io/cloud-provider"
-
- "github.com/stretchr/testify/assert"
-
+ "k8s.io/cloud-provider-vsphere/pkg/util"
"sigs.k8s.io/controller-runtime/pkg/client"
fakeClient "sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/controller-runtime/pkg/envtest"
-
- "k8s.io/apimachinery/pkg/types"
-
- vmopv1alpha1 "github.com/vmware-tanzu/vm-operator-api/api/v1alpha1"
-
- "k8s.io/cloud-provider-vsphere/pkg/util"
)
var (
@@ -137,6 +132,12 @@ func TestInstanceID(t *testing.T) {
expectedInstanceID: "",
expectedErr: cloudprovider.InstanceNotFound,
},
+ {
+ name: "cannot find virtualmachine with empty bios uuid",
+ testVM: createTestVM(string(testVMName), testClusterNameSpace, ""),
+ expectedInstanceID: "",
+ expectedErr: errBiosUUIDEmpty,
+ },
}
for _, testCase := range testCases {
@@ -165,7 +166,7 @@ func TestInstanceIDThrowsErr(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
instance, fcw := initTest(testCase.testVM)
- fcw.GetFunc = func(ctx context.Context, key client.ObjectKey, obj runtime.Object) error {
+ fcw.GetFunc = func(ctx context.Context, key client.ObjectKey, obj client.Object) error {
return fmt.Errorf("Internal error getting VMs")
}
@@ -324,7 +325,7 @@ func TestNodeAddressesByProviderIDInternalErr(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
instance, fcw := initTest(testCase.testVM)
- fcw.ListFunc = func(ctx context.Context, list runtime.Object, opts ...client.ListOption) error {
+ fcw.ListFunc = func(ctx context.Context, list client.ObjectList, opts ...client.ListOption) error {
return fmt.Errorf("Internal error listing VMs")
}
@@ -398,7 +399,7 @@ func TestNodeAddressesInternalErr(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
instance, fcw := initTest(testCase.testVM)
- fcw.GetFunc = func(ctx context.Context, key client.ObjectKey, obj runtime.Object) error {
+ fcw.GetFunc = func(ctx context.Context, key client.ObjectKey, obj client.Object) error {
return fmt.Errorf("Internal error getting VMs")
}
diff --git a/pkg/cloudprovider/vsphereparavirtual/loadbalancer_test.go b/pkg/cloudprovider/vsphereparavirtual/loadbalancer_test.go
index fac7f79fc..1d0c79644 100644
--- a/pkg/cloudprovider/vsphereparavirtual/loadbalancer_test.go
+++ b/pkg/cloudprovider/vsphereparavirtual/loadbalancer_test.go
@@ -181,7 +181,7 @@ func TestUpdateLoadBalancer(t *testing.T) {
if testCase.expectErr {
// Ensure that the client Update call returns an error on update
- fcw.UpdateFunc = func(ctx context.Context, obj runtime.Object, opts ...client.UpdateOption) error {
+ fcw.UpdateFunc = func(ctx context.Context, obj client.Object, opts ...client.UpdateOption) error {
return fmt.Errorf("Some undefined update error")
}
err = lb.UpdateLoadBalancer(context.Background(), testClustername, testK8sService, []*v1.Node{})
@@ -205,7 +205,7 @@ func TestEnsureLoadBalancer_VMServiceExternalTrafficPolicyLocal(t *testing.T) {
ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyTypeLocal,
},
}
- fcw.CreateFunc = func(ctx context.Context, obj runtime.Object, opts ...client.CreateOption) error {
+ fcw.CreateFunc = func(ctx context.Context, obj client.Object, opts ...client.CreateOption) error {
vms := &vmopv1alpha1.VirtualMachineService{
Status: vmopv1alpha1.VirtualMachineServiceStatus{
LoadBalancer: vmopv1alpha1.LoadBalancerStatus{
@@ -231,7 +231,7 @@ func TestEnsureLoadBalancer_VMServiceExternalTrafficPolicyLocal(t *testing.T) {
func TestEnsureLoadBalancer(t *testing.T) {
testCases := []struct {
name string
- createFunc func(ctx context.Context, obj runtime.Object, opts ...client.CreateOption) error
+ createFunc func(ctx context.Context, obj client.Object, opts ...client.CreateOption) error
expectErr error
}{
{
@@ -240,7 +240,7 @@ func TestEnsureLoadBalancer(t *testing.T) {
},
{
name: "when VMService creation failed",
- createFunc: func(ctx context.Context, obj runtime.Object, opts ...client.CreateOption) error {
+ createFunc: func(ctx context.Context, obj client.Object, opts ...client.CreateOption) error {
return fmt.Errorf(vmservice.ErrCreateVMService.Error())
},
expectErr: vmservice.ErrCreateVMService,
@@ -276,7 +276,7 @@ func TestEnsureLoadBalancer_VMServiceCreatedIPFound(t *testing.T) {
},
}
// Ensure that the client Create call returns a VMService with a valid IP
- fcw.CreateFunc = func(ctx context.Context, obj runtime.Object, opts ...client.CreateOption) error {
+ fcw.CreateFunc = func(ctx context.Context, obj client.Object, opts ...client.CreateOption) error {
vms := &vmopv1alpha1.VirtualMachineService{
Status: vmopv1alpha1.VirtualMachineServiceStatus{
LoadBalancer: vmopv1alpha1.LoadBalancerStatus{
@@ -324,18 +324,18 @@ func TestEnsureLoadBalancer_VMServiceCreatedIPFound(t *testing.T) {
func TestEnsureLoadBalancer_DeleteLB(t *testing.T) {
testCases := []struct {
name string
- deleteFunc func(ctx context.Context, obj runtime.Object, opts ...client.DeleteOption) error
+ deleteFunc func(ctx context.Context, obj client.Object, opts ...client.DeleteOption) error
expectErr string
}{
{
name: "should ignore not found error",
- deleteFunc: func(ctx context.Context, obj runtime.Object, opts ...client.DeleteOption) error {
+ deleteFunc: func(ctx context.Context, obj client.Object, opts ...client.DeleteOption) error {
return apierrors.NewNotFound(vmopv1alpha1.Resource("virtualmachineservice"), testClustername)
},
},
{
name: "should return error",
- deleteFunc: func(ctx context.Context, obj runtime.Object, opts ...client.DeleteOption) error {
+ deleteFunc: func(ctx context.Context, obj client.Object, opts ...client.DeleteOption) error {
return fmt.Errorf("an error occurred while deleting load balancer")
},
expectErr: "an error occurred while deleting load balancer",
diff --git a/pkg/cloudprovider/vsphereparavirtual/types.go b/pkg/cloudprovider/vsphereparavirtual/types.go
index 9e48b0089..1c987e0b6 100644
--- a/pkg/cloudprovider/vsphereparavirtual/types.go
+++ b/pkg/cloudprovider/vsphereparavirtual/types.go
@@ -33,4 +33,5 @@ type VSphereParavirtual struct {
loadBalancer cloudprovider.LoadBalancer
instances cloudprovider.Instances
routes RoutesProvider
+ zones cloudprovider.Zones
}
diff --git a/pkg/cloudprovider/vsphereparavirtual/vmoperator.go b/pkg/cloudprovider/vsphereparavirtual/vmoperator.go
new file mode 100644
index 000000000..f4ed891bb
--- /dev/null
+++ b/pkg/cloudprovider/vsphereparavirtual/vmoperator.go
@@ -0,0 +1,71 @@
+package vsphereparavirtual
+
+import (
+ "context"
+
+ vmopv1alpha1 "github.com/vmware-tanzu/vm-operator-api/api/v1alpha1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/cloud-provider-vsphere/pkg/util"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+)
+
+// discoverNodeByProviderID takes a ProviderID and returns a VirtualMachine if one exists, or nil otherwise
+// VirtualMachine not found is not an error
+func discoverNodeByProviderID(ctx context.Context, providerID string, namespace string, vmClient client.Client) (*vmopv1alpha1.VirtualMachine, error) {
+ var discoveredNode *vmopv1alpha1.VirtualMachine = nil
+
+ // Adding Retry here because there is no retry in caller from node controller
+ // https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/cloud/node_controller.go#L368
+ err := util.RetryOnError(
+ DiscoverNodeBackoff,
+ checkError,
+ func() error {
+ uuid := GetUUIDFromProviderID(providerID)
+ vms := vmopv1alpha1.VirtualMachineList{}
+ err := vmClient.List(ctx, &vms, &client.ListOptions{
+ Namespace: namespace,
+ })
+ if err != nil {
+ return err
+ }
+ for i := range vms.Items {
+ vm := vms.Items[i]
+ if uuid == vm.Status.BiosUUID {
+ discoveredNode = &vm
+ break
+ }
+ }
+
+ return nil
+ })
+
+ return discoveredNode, err
+}
+
+// discoverNodeByName takes a node name and returns a VirtualMachine if one exists, or nil otherwise
+// VirtualMachine not found is not an error
+func discoverNodeByName(ctx context.Context, name types.NodeName, namespace string, vmClient client.Client) (*vmopv1alpha1.VirtualMachine, error) {
+ var discoveredNode *vmopv1alpha1.VirtualMachine = nil
+
+ // Adding Retry here because there is no retry in caller from node controller
+ // https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/cloud/node_controller.go#L368
+ err := util.RetryOnError(
+ DiscoverNodeBackoff,
+ checkError,
+ func() error {
+ vmKey := types.NamespacedName{Name: string(name), Namespace: namespace}
+ vm := vmopv1alpha1.VirtualMachine{}
+ err := vmClient.Get(ctx, vmKey, &vm)
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ return nil
+ }
+ return err
+ }
+ discoveredNode = &vm
+ return nil
+ })
+
+ return discoveredNode, err
+}
diff --git a/pkg/cloudprovider/vsphereparavirtual/vmservice/vmservice_test.go b/pkg/cloudprovider/vsphereparavirtual/vmservice/vmservice_test.go
index 988a85c30..9ea0ab290 100644
--- a/pkg/cloudprovider/vsphereparavirtual/vmservice/vmservice_test.go
+++ b/pkg/cloudprovider/vsphereparavirtual/vmservice/vmservice_test.go
@@ -386,19 +386,19 @@ func TestCreateOrUpdateVMService(t *testing.T) {
func TestCreateOrUpdateVMService_RedefineGetFunc(t *testing.T) {
testCases := []struct {
name string
- getFunc func(ctx context.Context, key client.ObjectKey, obj runtime.Object) error
+ getFunc func(ctx context.Context, key client.ObjectKey, obj client.Object) error
expectedErr error
}{
{
name: "failed to create VirtualMachineService",
- getFunc: func(ctx context.Context, key client.ObjectKey, obj runtime.Object) error {
+ getFunc: func(ctx context.Context, key client.ObjectKey, obj client.Object) error {
return fmt.Errorf("failed to get VirtualMachineService")
},
expectedErr: ErrGetVMService,
},
{
name: "when VMService does not exist",
- getFunc: func(ctx context.Context, key client.ObjectKey, obj runtime.Object) error {
+ getFunc: func(ctx context.Context, key client.ObjectKey, obj client.Object) error {
return apierrors.NewNotFound(v1alpha1.Resource("virtualmachineservice"), testClustername)
},
expectedErr: ErrVMServiceIPNotFound,
@@ -419,7 +419,7 @@ func TestCreateOrUpdateVMService_RedefineGetFunc(t *testing.T) {
func TestCreateOrUpdateVMService_RedefineCreateFunc(t *testing.T) {
testK8sService, vms, fcw := initTest()
// Redefine Create in the client to return an error
- fcw.CreateFunc = func(ctx context.Context, obj runtime.Object, opts ...client.CreateOption) error {
+ fcw.CreateFunc = func(ctx context.Context, obj client.Object, opts ...client.CreateOption) error {
return fmt.Errorf("failed to create VirtualMachineService")
}
_, err := vms.CreateOrUpdate(context.Background(), testK8sService, testClustername)
diff --git a/pkg/cloudprovider/vsphereparavirtual/zone.go b/pkg/cloudprovider/vsphereparavirtual/zone.go
new file mode 100644
index 000000000..7f896d0b5
--- /dev/null
+++ b/pkg/cloudprovider/vsphereparavirtual/zone.go
@@ -0,0 +1,97 @@
+package vsphereparavirtual
+
+import (
+ "context"
+
+ vmopv1alpha1 "github.com/vmware-tanzu/vm-operator-api/api/v1alpha1"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/client-go/rest"
+ cloudprovider "k8s.io/cloud-provider"
+ "k8s.io/cloud-provider-vsphere/pkg/cloudprovider/vsphereparavirtual/vmservice"
+ "k8s.io/klog/v2"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+)
+
+type zones struct {
+ vmClient client.Client
+ namespace string
+}
+
+func (z zones) GetZone(ctx context.Context) (cloudprovider.Zone, error) {
+ zone := cloudprovider.Zone{}
+ return zone, cloudprovider.NotImplemented
+}
+
+func (z zones) GetZoneByProviderID(ctx context.Context, providerID string) (cloudprovider.Zone, error) {
+ zone := cloudprovider.Zone{}
+
+ vm, err := z.discoverNodeByProviderID(ctx, providerID)
+ if err != nil {
+ klog.Errorf("Error trying to find VM: %v", err)
+ return zone, err
+ }
+
+ if vm == nil {
+ klog.V(4).Info("zones.GetZoneByProviderID() InstanceNotFound ", providerID)
+ return zone, cloudprovider.InstanceNotFound
+ }
+
+ if val, ok := vm.Labels["topology.kubernetes.io/zone"]; ok {
+ klog.V(4).Info("retrieved zone ", val)
+ zone = cloudprovider.Zone{
+ FailureDomain: val,
+ }
+ }
+
+ return zone, nil
+}
+
+func (z zones) GetZoneByNodeName(ctx context.Context, nodeName types.NodeName) (cloudprovider.Zone, error) {
+ zone := cloudprovider.Zone{}
+
+ vm, err := z.discoverNodeByName(ctx, nodeName)
+ if err != nil {
+ klog.Errorf("Error trying to find VM: %v", err)
+ return zone, err
+ }
+
+ if vm == nil {
+ klog.V(4).Info("zones.GetZoneByNodeName() InstanceNotFound ", nodeName)
+ return zone, cloudprovider.InstanceNotFound
+ }
+
+ if val, ok := vm.Labels["topology.kubernetes.io/zone"]; ok {
+ klog.V(4).Info("retrieved zone ", val)
+ zone = cloudprovider.Zone{
+ FailureDomain: val,
+ }
+ }
+
+ return zone, nil
+}
+
+// discoverNodeByProviderID takes a ProviderID and returns a VirtualMachine if one exists, or nil otherwise
+// VirtualMachine not found is not an error
+func (z zones) discoverNodeByProviderID(ctx context.Context, providerID string) (*vmopv1alpha1.VirtualMachine, error) {
+ return discoverNodeByProviderID(ctx, providerID, z.namespace, z.vmClient)
+}
+
+// discoverNodeByName takes a node name and returns a VirtualMachine if one exists, or nil otherwise
+// VirtualMachine not found is not an error
+func (z zones) discoverNodeByName(ctx context.Context, name types.NodeName) (*vmopv1alpha1.VirtualMachine, error) {
+ return discoverNodeByName(ctx, name, z.namespace, z.vmClient)
+}
+
+// NewZones returns an implementation of cloudprovider.Zones
+func NewZones(namespace string, kcfg *rest.Config) (cloudprovider.Zones, error) {
+ vmClient, err := vmservice.GetVmopClient(kcfg)
+
+ if err != nil {
+ return nil, err
+ }
+
+ return &zones{
+ vmClient: vmClient,
+ namespace: namespace,
+ }, nil
+}
diff --git a/pkg/cloudprovider/vsphereparavirtual/zone_test.go b/pkg/cloudprovider/vsphereparavirtual/zone_test.go
new file mode 100644
index 000000000..3caefa6a5
--- /dev/null
+++ b/pkg/cloudprovider/vsphereparavirtual/zone_test.go
@@ -0,0 +1,175 @@
+package vsphereparavirtual
+
+import (
+ "context"
+ "testing"
+
+ "github.com/stretchr/testify/assert"
+ vmopv1alpha1 "github.com/vmware-tanzu/vm-operator-api/api/v1alpha1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/types"
+ cloudprovider "k8s.io/cloud-provider"
+ "k8s.io/cloud-provider-vsphere/pkg/util"
+ fakeClient "sigs.k8s.io/controller-runtime/pkg/client/fake"
+ "sigs.k8s.io/controller-runtime/pkg/envtest"
+)
+
+var (
+ vmName = types.NodeName("test-vm")
+ fakeVMName = types.NodeName("fake-vm")
+ vmuuid = "421960e7-3041-f44a-4b3f-ed99748c12d0"
+ providerid = "vsphere://" + vmuuid
+)
+
+func TestNewZones(t *testing.T) {
+ testCases := []struct {
+ name string
+ testEnv *envtest.Environment
+ expectedErr error
+ testVM *vmopv1alpha1.VirtualMachine
+ }{
+ {
+ name: "NewZone: when everything is ok",
+ testEnv: &envtest.Environment{},
+ testVM: createTestVMWithZone(string(vmName), testClusterNameSpace),
+ expectedErr: nil,
+ },
+ }
+
+ for _, testCase := range testCases {
+ t.Run(testCase.name, func(t *testing.T) {
+ cfg, err := testCase.testEnv.Start()
+ assert.NoError(t, err)
+ //initVMopClient(testCase.testVM)
+ _, err = NewZones(testClusterNameSpace, cfg)
+ assert.NoError(t, err)
+ assert.Equal(t, testCase.expectedErr, err)
+
+ err = testCase.testEnv.Stop()
+ assert.NoError(t, err)
+ })
+ }
+}
+
+func TestZonesByProviderID(t *testing.T) {
+ testCases := []struct {
+ name string
+ testEnv *envtest.Environment
+ expectedResult string
+ expectedErr error
+ testVM *vmopv1alpha1.VirtualMachine
+ }{
+ {
+ name: "TestZonesByProviderID should return true",
+ testVM: createTestVMWithZoneID(string(vmName), testClusterNameSpace, vmuuid),
+ expectedResult: "zone-a",
+ expectedErr: nil,
+ },
+ {
+ name: "TestZonesByProviderID should return error",
+ testVM: createTestVMWithZoneID(string(vmName), testClusterNameSpace, "fakeuuid"),
+ expectedResult: "",
+ expectedErr: cloudprovider.InstanceNotFound,
+ },
+ }
+
+ for _, testCase := range testCases {
+ t.Run(testCase.name, func(t *testing.T) {
+ ctx := context.Background()
+
+ zone, _ := initVMopClient(testCase.testVM)
+ z, err := zone.GetZoneByProviderID(ctx, providerid)
+
+ if testCase.expectedErr != nil {
+ assert.Equal(t, cloudprovider.InstanceNotFound, err)
+ } else {
+ assert.NoError(t, err)
+ }
+
+ assert.Equal(t, testCase.expectedResult, z.FailureDomain)
+ })
+ }
+}
+
+func TestZonesByNodeName(t *testing.T) {
+ testCases := []struct {
+ name string
+ testEnv *envtest.Environment
+ expectedResult string
+ expectedErr error
+ testVM *vmopv1alpha1.VirtualMachine
+ vmName types.NodeName
+ }{
+ {
+ name: "TestZonesByNodeName should return true",
+ testVM: createTestVMWithZoneID(string(vmName), testClusterNameSpace, vmuuid),
+ vmName: vmName,
+ expectedResult: "zone-a",
+ expectedErr: nil,
+ },
+ {
+ name: "TestZonesByNodeName should return error",
+ testVM: createTestVMWithZoneID(string(vmName), testClusterNameSpace, "fakeuuid"),
+ vmName: fakeVMName,
+ expectedResult: "",
+ expectedErr: cloudprovider.InstanceNotFound,
+ },
+ }
+
+ for _, testCase := range testCases {
+ t.Run(testCase.name, func(t *testing.T) {
+ ctx := context.Background()
+
+ zone, _ := initVMopClient(testCase.testVM)
+ z, err := zone.GetZoneByNodeName(ctx, testCase.vmName)
+
+ if testCase.expectedErr != nil {
+ assert.Equal(t, cloudprovider.InstanceNotFound, err)
+ } else {
+ assert.NoError(t, err)
+ }
+
+ assert.Equal(t, testCase.expectedResult, z.FailureDomain)
+ })
+ }
+}
+
+func initVMopClient(testVM *vmopv1alpha1.VirtualMachine) (zones, *util.FakeClientWrapper) {
+ scheme := runtime.NewScheme()
+ _ = vmopv1alpha1.AddToScheme(scheme)
+ fc := fakeClient.NewFakeClientWithScheme(scheme, testVM)
+ fcw := util.NewFakeClientWrapper(fc)
+ zone := zones{
+ vmClient: fcw,
+ namespace: testClusterNameSpace,
+ }
+ return zone, fcw
+}
+
+func createTestVMWithZone(name, namespace string) *vmopv1alpha1.VirtualMachine {
+ labels := make(map[string]string)
+ labels["topology.kubernetes.io/zone"] = "zone-a"
+ return &vmopv1alpha1.VirtualMachine{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: name,
+ Namespace: namespace,
+ Labels: labels,
+ },
+ }
+}
+
+func createTestVMWithZoneID(name, namespace, biosUUID string) *vmopv1alpha1.VirtualMachine {
+ labels := make(map[string]string)
+ labels["topology.kubernetes.io/zone"] = "zone-a"
+ return &vmopv1alpha1.VirtualMachine{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: name,
+ Namespace: namespace,
+ Labels: labels,
+ },
+ Status: vmopv1alpha1.VirtualMachineStatus{
+ BiosUUID: biosUUID,
+ },
+ }
+}
diff --git a/pkg/util/fake_client_wrapper.go b/pkg/util/fake_client_wrapper.go
index 0bf0d3557..294b35e22 100644
--- a/pkg/util/fake_client_wrapper.go
+++ b/pkg/util/fake_client_wrapper.go
@@ -19,7 +19,9 @@ package util
import (
"context"
- runtime "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/api/meta"
+ "k8s.io/apimachinery/pkg/runtime"
+
client "sigs.k8s.io/controller-runtime/pkg/client"
)
@@ -27,11 +29,21 @@ import (
type FakeClientWrapper struct {
fakeClient client.Client
// Set these functions if you want to override the default fakeClient behavior
- GetFunc func(ctx context.Context, key client.ObjectKey, obj runtime.Object) error
- CreateFunc func(ctx context.Context, obj runtime.Object, opts ...client.CreateOption) error
- UpdateFunc func(ctx context.Context, obj runtime.Object, opts ...client.UpdateOption) error
- DeleteFunc func(ctx context.Context, obj runtime.Object, opts ...client.DeleteOption) error
- ListFunc func(ctx context.Context, list runtime.Object, opts ...client.ListOption) error
+ GetFunc func(ctx context.Context, key client.ObjectKey, obj client.Object) error
+ CreateFunc func(ctx context.Context, obj client.Object, opts ...client.CreateOption) error
+ UpdateFunc func(ctx context.Context, obj client.Object, opts ...client.UpdateOption) error
+ DeleteFunc func(ctx context.Context, obj client.Object, opts ...client.DeleteOption) error
+ ListFunc func(ctx context.Context, list client.ObjectList, opts ...client.ListOption) error
+}
+
+// Scheme invokes the fakeClient's Scheme
+func (w *FakeClientWrapper) Scheme() *runtime.Scheme {
+ return w.fakeClient.Scheme()
+}
+
+// RESTMapper invokes the fakeClient's RESTMapper
+func (w *FakeClientWrapper) RESTMapper() meta.RESTMapper {
+ return w.fakeClient.RESTMapper()
}
// NewFakeClientWrapper creates a FakeClientWrapper
@@ -42,7 +54,7 @@ func NewFakeClientWrapper(fakeClient client.Client) *FakeClientWrapper {
}
// Get retrieves an obj for the given object key from the Kubernetes Cluster.
-func (w *FakeClientWrapper) Get(ctx context.Context, key client.ObjectKey, obj runtime.Object) error {
+func (w *FakeClientWrapper) Get(ctx context.Context, key client.ObjectKey, obj client.Object) error {
if w.GetFunc != nil {
return w.GetFunc(ctx, key, obj)
}
@@ -50,7 +62,7 @@ func (w *FakeClientWrapper) Get(ctx context.Context, key client.ObjectKey, obj r
}
// List retrieves list of objects for a given namespace and list options.
-func (w *FakeClientWrapper) List(ctx context.Context, list runtime.Object, opts ...client.ListOption) error {
+func (w *FakeClientWrapper) List(ctx context.Context, list client.ObjectList, opts ...client.ListOption) error {
if w.ListFunc != nil {
return w.ListFunc(ctx, list, opts...)
}
@@ -58,7 +70,7 @@ func (w *FakeClientWrapper) List(ctx context.Context, list runtime.Object, opts
}
// Create saves the object obj in the Kubernetes cluster.
-func (w *FakeClientWrapper) Create(ctx context.Context, obj runtime.Object, opts ...client.CreateOption) error {
+func (w *FakeClientWrapper) Create(ctx context.Context, obj client.Object, opts ...client.CreateOption) error {
if w.CreateFunc != nil {
return w.CreateFunc(ctx, obj, opts...)
}
@@ -66,7 +78,7 @@ func (w *FakeClientWrapper) Create(ctx context.Context, obj runtime.Object, opts
}
// Delete deletes the given obj from Kubernetes cluster.
-func (w *FakeClientWrapper) Delete(ctx context.Context, obj runtime.Object, opts ...client.DeleteOption) error {
+func (w *FakeClientWrapper) Delete(ctx context.Context, obj client.Object, opts ...client.DeleteOption) error {
if w.DeleteFunc != nil {
return w.DeleteFunc(ctx, obj, opts...)
}
@@ -74,7 +86,7 @@ func (w *FakeClientWrapper) Delete(ctx context.Context, obj runtime.Object, opts
}
// Update updates the given obj in the Kubernetes cluster.
-func (w *FakeClientWrapper) Update(ctx context.Context, obj runtime.Object, opts ...client.UpdateOption) error {
+func (w *FakeClientWrapper) Update(ctx context.Context, obj client.Object, opts ...client.UpdateOption) error {
if w.UpdateFunc != nil {
return w.UpdateFunc(ctx, obj, opts...)
}
@@ -82,12 +94,12 @@ func (w *FakeClientWrapper) Update(ctx context.Context, obj runtime.Object, opts
}
// Patch patches the given obj in the Kubernetes cluster.
-func (w *FakeClientWrapper) Patch(ctx context.Context, obj runtime.Object, patch client.Patch, opts ...client.PatchOption) error {
+func (w *FakeClientWrapper) Patch(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error {
return w.fakeClient.Patch(ctx, obj, patch, opts...)
}
// DeleteAllOf deletes all objects of the given type matching the given options.
-func (w *FakeClientWrapper) DeleteAllOf(ctx context.Context, obj runtime.Object, opts ...client.DeleteAllOfOption) error {
+func (w *FakeClientWrapper) DeleteAllOf(ctx context.Context, obj client.Object, opts ...client.DeleteAllOfOption) error {
return w.fakeClient.DeleteAllOf(ctx, obj, opts...)
}
diff --git a/releases/README.md b/releases/README.md
new file mode 100644
index 000000000..be0544fd4
--- /dev/null
+++ b/releases/README.md
@@ -0,0 +1,102 @@
+# Deploying the vSphere CPI using release manifests
+
+This document shows how to deploy the vSphere CPI using the release manifest YAMLs we provide.
+
+CPI publishes deployment YAML files for each Kubernetes release. You can find the corresponding release manifest YAML in [this repo](https://github.com/kubernetes/cloud-provider-vsphere/tree/master/releases).
+
+Note that the YAML files from the [manifests/controller-manager repo](https://github.com/kubernetes/cloud-provider-vsphere/tree/master/manifests/controller-manager) are deprecated.
+
+## Example workflow
+
+In this tutorial, we will perform a fresh install of the latest version of cloud-provider-vsphere (v1.22.3). If you already have an older version of CPI installed, the steps to deploy and upgrade CPI are the same. With the `RollingUpdate` update strategy, after you update the DaemonSet template, old DaemonSet pods are killed and new DaemonSet pods are created automatically.
+
+### Step 1: find the Kubernetes minor version you are using
+
+For example, the minor version of '1.22.x' is '1.22'. Then run:
+
+```bash
+VERSION=1.22
+wget https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/release-$VERSION/releases/v$VERSION/vsphere-cloud-controller-manager.yaml
+```
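If you only know the full patch version (e.g. from `kubectl version`), the `VERSION` value above can be derived with standard shell tools. The `v1.22.3` string below is just a placeholder:

```shell
# Derive the release branch (e.g. 1.22) from a full version string.
FULL_VERSION="v1.22.3"   # placeholder; substitute your cluster's version
VERSION="$(echo "${FULL_VERSION#v}" | cut -d. -f1,2)"
echo "$VERSION"
```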
+
+### Step 2: edit the Secret and ConfigMap inside `vsphere-cloud-controller-manager.yaml`
+
+The release YAML files provide only an example configuration; you will need to update it with real values for your environment.
+
+```yaml
+...
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: vsphere-cloud-secret
+ labels:
+ vsphere-cpi-infra: secret
+ component: cloud-controller-manager
+ namespace: kube-system
+ # NOTE: this is just an example configuration, update with real values based on your environment
+stringData:
+ 10.0.0.1.username: ""
+ 10.0.0.1.password: ""
+ 1.2.3.4.username: ""
+ 1.2.3.4.password: ""
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: vsphere-cloud-config
+ labels:
+ vsphere-cpi-infra: config
+ component: cloud-controller-manager
+ namespace: kube-system
+data:
+ # NOTE: this is just an example configuration, update with real values based on your environment
+ vsphere.conf: |
+ # Global properties in this section will be used for all specified vCenters unless overridden in VirtualCenter section.
+ global:
+ port: 443
+ # set insecureFlag to true if the vCenter uses a self-signed cert
+ insecureFlag: true
+ # settings for using k8s secret
+ secretName: vsphere-cloud-secret
+ secretNamespace: kube-system
+
+ # vcenter section
+ vcenter:
+ your-vcenter-name-here:
+ server: 10.0.0.1
+ user: use-your-vcenter-user-here
+ password: use-your-vcenter-password-here
+ datacenters:
+ - hrwest
+ - hreast
+ could-be-a-tenant-label:
+ server: 1.2.3.4
+ datacenters:
+ - mytenantdc
+ secretName: cpi-engineering-secret
+ secretNamespace: kube-system
+
+ # labels for regions and zones
+ labels:
+ region: k8s-region
+ zone: k8s-zone
+---
+...
+```
+
+### Step 3: apply the release manifest (with the updated Secret and ConfigMap values)
+
+```bash
+kubectl apply -f vsphere-cloud-controller-manager.yaml
+```
+
+This creates the Roles, RoleBindings, ServiceAccount, Service, Secret, ConfigMap, and the cloud-controller-manager Pod.
+
+### Step 4: clean up
+
+```bash
+rm vsphere-cloud-controller-manager.yaml
+```
+
+For more information, please refer to [this doc](https://github.com/kubernetes/cloud-provider-vsphere/blob/master/docs/book/cloud_provider_interface.md).
diff --git a/releases/v1.21/vsphere-cloud-controller-manager.yaml b/releases/v1.21/vsphere-cloud-controller-manager.yaml
index 8fabed84a..767c25261 100644
--- a/releases/v1.21/vsphere-cloud-controller-manager.yaml
+++ b/releases/v1.21/vsphere-cloud-controller-manager.yaml
@@ -234,7 +234,7 @@ spec:
serviceAccountName: cloud-controller-manager
containers:
- name: vsphere-cloud-controller-manager
- image: gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.21.0
+ image: gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.21.1
args:
- --cloud-provider=vsphere
- --v=2
diff --git a/releases/v1.22/vsphere-cloud-controller-manager.yaml b/releases/v1.22/vsphere-cloud-controller-manager.yaml
new file mode 100644
index 000000000..6d778c86b
--- /dev/null
+++ b/releases/v1.22/vsphere-cloud-controller-manager.yaml
@@ -0,0 +1,253 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: cloud-controller-manager
+ labels:
+ vsphere-cpi-infra: service-account
+ component: cloud-controller-manager
+ namespace: kube-system
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: vsphere-cloud-secret
+ labels:
+ vsphere-cpi-infra: secret
+ component: cloud-controller-manager
+ namespace: kube-system
+ # NOTE: this is just an example configuration, update with real values based on your environment
+stringData:
+ 10.0.0.1.username: ""
+ 10.0.0.1.password: ""
+ 1.2.3.4.username: ""
+ 1.2.3.4.password: ""
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: vsphere-cloud-config
+ labels:
+ vsphere-cpi-infra: config
+ component: cloud-controller-manager
+ namespace: kube-system
+data:
+ # NOTE: this is just an example configuration, update with real values based on your environment
+ vsphere.conf: |
+ # Global properties in this section will be used for all specified vCenters unless overridden in VirtualCenter section.
+ global:
+ port: 443
+ # set insecureFlag to true if the vCenter uses a self-signed cert
+ insecureFlag: true
+ # settings for using k8s secret
+ secretName: vsphere-cloud-secret
+ secretNamespace: kube-system
+
+ # vcenter section
+ vcenter:
+ your-vcenter-name-here:
+ server: 10.0.0.1
+ user: use-your-vcenter-user-here
+ password: use-your-vcenter-password-here
+ datacenters:
+ - hrwest
+ - hreast
+ could-be-a-tenant-label:
+ server: 1.2.3.4
+ datacenters:
+ - mytenantdc
+ secretName: cpi-engineering-secret
+ secretNamespace: kube-system
+
+ # labels for regions and zones
+ labels:
+ region: k8s-region
+ zone: k8s-zone
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: servicecatalog.k8s.io:apiserver-authentication-reader
+ labels:
+ vsphere-cpi-infra: role-binding
+ component: cloud-controller-manager
+ namespace: kube-system
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: extension-apiserver-authentication-reader
+subjects:
+ - apiGroup: ""
+ kind: ServiceAccount
+ name: cloud-controller-manager
+ namespace: kube-system
+ - apiGroup: ""
+ kind: User
+ name: cloud-controller-manager
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: system:cloud-controller-manager
+ labels:
+ vsphere-cpi-infra: cluster-role-binding
+ component: cloud-controller-manager
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: system:cloud-controller-manager
+subjects:
+ - kind: ServiceAccount
+ name: cloud-controller-manager
+ namespace: kube-system
+ - kind: User
+ name: cloud-controller-manager
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: system:cloud-controller-manager
+ labels:
+ vsphere-cpi-infra: role
+ component: cloud-controller-manager
+rules:
+ - apiGroups:
+ - ""
+ resources:
+ - events
+ verbs:
+ - create
+ - patch
+ - update
+ - apiGroups:
+ - ""
+ resources:
+ - nodes
+ verbs:
+ - "*"
+ - apiGroups:
+ - ""
+ resources:
+ - nodes/status
+ verbs:
+ - patch
+ - apiGroups:
+ - ""
+ resources:
+ - services
+ verbs:
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - services/status
+ verbs:
+ - patch
+ - apiGroups:
+ - ""
+ resources:
+ - serviceaccounts
+ verbs:
+ - create
+ - get
+ - list
+ - watch
+ - update
+ - apiGroups:
+ - ""
+ resources:
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+ - update
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - endpoints
+ verbs:
+ - create
+ - get
+ - list
+ - watch
+ - update
+ - apiGroups:
+ - ""
+ resources:
+ - secrets
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - "coordination.k8s.io"
+ resources:
+ - leases
+ verbs:
+ - create
+ - get
+ - list
+ - watch
+ - update
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: vsphere-cloud-controller-manager
+ labels:
+ component: cloud-controller-manager
+ tier: control-plane
+ namespace: kube-system
+ annotations:
+ scheduler.alpha.kubernetes.io/critical-pod: ""
+spec:
+ selector:
+ matchLabels:
+ name: vsphere-cloud-controller-manager
+ updateStrategy:
+ type: RollingUpdate
+ template:
+ metadata:
+ labels:
+ name: vsphere-cloud-controller-manager
+ component: cloud-controller-manager
+ tier: control-plane
+ spec:
+ nodeSelector:
+ node-role.kubernetes.io/master: ""
+ tolerations:
+ - key: node.cloudprovider.kubernetes.io/uninitialized
+ value: "true"
+ effect: NoSchedule
+ - key: node-role.kubernetes.io/master
+ effect: NoSchedule
+ operator: Exists
+ - key: node.kubernetes.io/not-ready
+ effect: NoSchedule
+ operator: Exists
+ securityContext:
+ runAsUser: 1001
+ serviceAccountName: cloud-controller-manager
+ containers:
+ - name: vsphere-cloud-controller-manager
+ image: gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.22.3
+ args:
+ - --cloud-provider=vsphere
+ - --v=2
+ - --cloud-config=/etc/cloud/vsphere.conf
+ volumeMounts:
+ - mountPath: /etc/cloud
+ name: vsphere-config-volume
+ readOnly: true
+ resources:
+ requests:
+ cpu: 200m
+ hostNetwork: true
+ volumes:
+ - name: vsphere-config-volume
+ configMap:
+ name: vsphere-cloud-config
diff --git a/test/e2e/Makefile b/test/e2e/Makefile
new file mode 100644
index 000000000..15f3281a5
--- /dev/null
+++ b/test/e2e/Makefile
@@ -0,0 +1,55 @@
+# Copyright 2021 The Kubernetes Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+REPO_ROOT := $(shell git rev-parse --show-toplevel)
+
+TMP_DIR := /tmp
+TMP_CAPV_DIR := $(TMP_DIR)/capv
+
+TOOLS_DIR := $(REPO_ROOT)/hack/tools
+TOOLS_BIN_DIR := $(TOOLS_DIR)/bin
+GINKGO := $(TOOLS_BIN_DIR)/ginkgo
+KIND := $(TOOLS_BIN_DIR)/kind
+
+TOOLING_BINARIES := $(GINKGO) $(KIND)
+
+# E2E_ARTIFACTS is the folder to store e2e test artifacts
+E2E_ARTIFACTS ?= $(REPO_ROOT)/_e2e_artifacts
+
+# E2E_CONF_FILE is the configuration file for e2e testing
+E2E_CONF_FILE ?= ${REPO_ROOT}/test/e2e/config/vsphere-dev.yaml
+
+# E2E_DATA_DIR contains provider manifests needed to create the bootstrap cluster, required by the E2E_CONF_FILE
+E2E_DATA_DIR := ${REPO_ROOT}/test/e2e/data
+
+# E2E_DATA_CAPV_VER defines which CAPV release branch to pull the provider manifests from
+E2E_DATA_CAPV_VER ?= release-1.0
+
+all: run
+
+$(TOOLING_BINARIES):
+ make -C $(TOOLS_DIR) $(@F)
+
+$(TMP_CAPV_DIR):
+ git clone -b $(E2E_DATA_CAPV_VER) git@github.com:kubernetes-sigs/cluster-api-provider-vsphere.git $(TMP_CAPV_DIR)
+
+$(E2E_DATA_DIR): $(TMP_CAPV_DIR)
+ cp -r $(TMP_CAPV_DIR)/test/e2e/data $(E2E_DATA_DIR) && \
+ cp $(TMP_CAPV_DIR)/metadata.yaml $(E2E_DATA_DIR)
+
+run: $(TOOLING_BINARIES) $(E2E_DATA_DIR)
+ $(GINKGO) -v . -- --e2e.config="$(E2E_CONF_FILE)" --e2e.artifacts-folder="$(E2E_ARTIFACTS)" --e2e.skip-resource-cleanup=false
+
+clean:
+ rm -rf $(E2E_DATA_DIR) $(TMP_CAPV_DIR)
diff --git a/test/e2e/README.md b/test/e2e/README.md
new file mode 100644
index 000000000..eab73a03a
--- /dev/null
+++ b/test/e2e/README.md
@@ -0,0 +1,33 @@
+# E2E test for cloud-provider-vsphere
+
+## Requirements
+
+In order to run the e2e tests against cloud-provider-vsphere, make sure that:
+
+* you have administrative access to a vSphere server
+* you have golang 1.16+
+* you have Docker installed ([download](https://www.docker.com/get-started))
+
+## Environment variables
+
+The first step to running the e2e tests is setting up the required environment variables in the [e2e config file](./config/vsphere-dev.yaml):
+
+| Environment variable | Description | Example |
+| -------------------------- | ----------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------- |
+| `VSPHERE_SERVER` | The IP address or FQDN of a vCenter 6.7u3 server (required) | `my.vcenter.com` |
+| `VSPHERE_USERNAME` | The username used to access the vSphere server (required) | `my-username` |
+| `VSPHERE_PASSWORD` | The password used to access the vSphere server (required) | `my-password` |
+| `VSPHERE_DATACENTER` | The unique name or inventory path of the datacenter in which VMs will be created | `my-datacenter` or `/my-datacenter` |
+| `VSPHERE_FOLDER` | The unique name or inventory path of the folder in which VMs will be created | `my-folder` or `/my-datacenter/vm/my-folder` |
+| `VSPHERE_RESOURCE_POOL` | The unique name or inventory path of the resource pool in which VMs will be created | `my-resource-pool` or `/my-datacenter/host/Cluster-1/Resources/my-resource-pool` |
+| `VSPHERE_DATASTORE`        | The unique name or inventory path of the datastore in which VMs will be created                         | `my-datastore` or `/my-datacenter/datastore/my-datastore`                        |
+| `VSPHERE_NETWORK` | The unique name or inventory path of the network to which VMs will be connected | `my-network` or `/my-datacenter/network/my-network` |
+| `VSPHERE_HAPROXY_TEMPLATE` | The unique name or inventory path of the template from which the HAProxy load balancer VMs are cloned | `my-haproxy-template` or `/my-datacenter/vm/my-haproxy-template` |
+| `VSPHERE_SSH_PRIVATE_KEY` | The file path of the private key used to ssh into the CAPV VMs | `/home/foo/bar-ssh.key` |
+| `VSPHERE_SSH_AUTHORIZED_KEY` | The public key that is added to the CAPV VMs | `ssh-rsa ABCDEF...XYZ=` |
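
As a minimal sketch, the required variables can be exported in the shell before running the tests; every value below is a placeholder:

```shell
# Placeholder values; replace with details for your vSphere environment.
export VSPHERE_SERVER="my.vcenter.com"
export VSPHERE_USERNAME="my-username"
export VSPHERE_PASSWORD="my-password"
export VSPHERE_DATACENTER="my-datacenter"

# Fail fast if any required variable is empty.
for v in VSPHERE_SERVER VSPHERE_USERNAME VSPHERE_PASSWORD VSPHERE_DATACENTER; do
  if [ -z "$(printenv "$v")" ]; then
    echo "missing $v" >&2
    exit 1
  fi
done
echo "required variables set"
```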
+
+## Running the e2e tests
+
+Check out the e2e directory `PROJECT_ROOT/test/e2e` and run the tests with `make`.
+
+Alternatively, run `make test-e2e` from `PROJECT_ROOT`.
diff --git a/test/e2e/config/vsphere-dev.yaml b/test/e2e/config/vsphere-dev.yaml
new file mode 100644
index 000000000..7bc9e0f4b
--- /dev/null
+++ b/test/e2e/config/vsphere-dev.yaml
@@ -0,0 +1,170 @@
+---
+# E2E test scenario using local dev images and manifests built from the source tree for following providers:
+# - cluster-api
+# - bootstrap kubeadm
+# - control-plane kubeadm
+# - vsphere
+
+images:
+ - name: gcr.io/k8s-staging-cluster-api/cluster-api-controller-amd64:v1.0.0
+ loadBehavior: tryLoad
+ - name: gcr.io/k8s-staging-cluster-api/kubeadm-bootstrap-controller-amd64:v1.0.0
+ loadBehavior: tryLoad
+ - name: gcr.io/k8s-staging-cluster-api/kubeadm-control-plane-controller-amd64:v1.0.0
+ loadBehavior: tryLoad
+ - name: gcr.io/cluster-api-provider-vsphere/release/manager:v1.0.1
+ loadBehavior: tryLoad
+ - name: quay.io/jetstack/cert-manager-cainjector:v0.16.1
+ loadBehavior: tryLoad
+ - name: quay.io/jetstack/cert-manager-webhook:v0.16.1
+ loadBehavior: tryLoad
+ - name: quay.io/jetstack/cert-manager-controller:v0.16.1
+ loadBehavior: tryLoad
+
+providers:
+ - name: cluster-api
+ type: CoreProvider
+ versions:
+ - name: v0.3.23 # latest published release in the v1alpha3 series; this is used for v1alpha3 --> v1beta1 clusterctl upgrades test only.
+ value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.23/core-components.yaml"
+ type: "url"
+ contract: v1alpha3
+ replacements:
+ - old: "imagePullPolicy: Always"
+ new: "imagePullPolicy: IfNotPresent"
+ files:
+ - sourcePath: "../data/shared/metadata.yaml"
+ - name: v0.4.4 # latest published release in the v1alpha4 series; this is used for v1alpha4 --> v1beta1 clusterctl upgrades test only.
+ value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.4/core-components.yaml"
+ type: "url"
+ contract: v1alpha4
+ replacements:
+ - old: "imagePullPolicy: Always"
+ new: "imagePullPolicy: IfNotPresent"
+ files:
+ - sourcePath: "../data/shared/metadata.yaml"
+ - name: v1.0.1 # latest published release in the v1beta1 series; this is used for v1beta1 --> main clusterctl upgrades test only.
+ value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.0.1/core-components.yaml"
+ type: "url"
+ contract: v1beta1
+ replacements:
+ - old: "imagePullPolicy: Always"
+ new: "imagePullPolicy: IfNotPresent"
+ files:
+ - sourcePath: "../data/shared/metadata.yaml"
+
+ - name: kubeadm
+ type: BootstrapProvider
+ versions:
+ - name: v0.3.23 # latest published release in the v1alpha3 series; this is used for v1alpha3 --> v1beta1 clusterctl upgrades test only.
+ value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.23/bootstrap-components.yaml"
+ type: "url"
+ contract: v1alpha3
+ replacements:
+ - old: "imagePullPolicy: Always"
+ new: "imagePullPolicy: IfNotPresent"
+ files:
+ - sourcePath: "../data/shared/metadata.yaml"
+ - name: v0.4.4 # latest published release in the v1alpha4 series; this is used for v1alpha4 --> v1beta1 clusterctl upgrades test only.
+ value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.4/bootstrap-components.yaml"
+ type: "url"
+ contract: v1alpha4
+ replacements:
+ - old: "imagePullPolicy: Always"
+ new: "imagePullPolicy: IfNotPresent"
+ files:
+ - sourcePath: "../data/shared/metadata.yaml"
+ - name: v1.0.1 # latest published release in the v1beta1 series; this is used for v1beta1 --> main clusterctl upgrades test only.
+ value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.0.1/bootstrap-components.yaml"
+ type: "url"
+ contract: v1beta1
+ replacements:
+ - old: "imagePullPolicy: Always"
+ new: "imagePullPolicy: IfNotPresent"
+ files:
+ - sourcePath: "../data/shared/metadata.yaml"
+
+ - name: kubeadm
+ type: ControlPlaneProvider
+ versions:
+ - name: v0.3.23 # latest published release in the v1alpha3 series; this is used for v1alpha3 --> v1beta1 clusterctl upgrades test only.
+ value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.23/control-plane-components.yaml"
+ type: "url"
+ contract: v1alpha3
+ replacements:
+ - old: "imagePullPolicy: Always"
+ new: "imagePullPolicy: IfNotPresent"
+ files:
+ - sourcePath: "../data/shared/metadata.yaml"
+ - name: v0.4.4 # latest published release in the v1alpha4 series; this is used for v1alpha4 --> v1beta1 clusterctl upgrades test only.
+ value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.4/control-plane-components.yaml"
+ type: "url"
+ contract: v1alpha4
+ replacements:
+ - old: "imagePullPolicy: Always"
+ new: "imagePullPolicy: IfNotPresent"
+ files:
+ - sourcePath: "../data/shared/metadata.yaml"
+ - name: v1.0.1 # latest published release in the v1beta1 series; this is used for v1beta1 --> main clusterctl upgrades test only.
+ value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.0.1/control-plane-components.yaml"
+ type: "url"
+ contract: v1beta1
+ replacements:
+ - old: "imagePullPolicy: Always"
+ new: "imagePullPolicy: IfNotPresent"
+ files:
+ - sourcePath: "../data/shared/metadata.yaml"
+
+ - name: vsphere
+ type: InfrastructureProvider
+ versions:
+ - name: v0.7.10
+ value: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/download/v0.7.10/infrastructure-components.yaml
+ type: url
+ contract: v1alpha3
+ replacements:
+ - old: "imagePullPolicy: Always"
+ new: "imagePullPolicy: IfNotPresent"
+ files:
+ # TODO: v1a3 cluster-template includes WORKLOAD_CONTROL_PLANE_ENDPOINT_IP
+ - sourcePath: "../data/metadata.yaml"
+ - sourcePath: "../data/infrastructure-vsphere/capi-upgrades/v1alpha3/cluster-template.yaml"
+ - name: v0.8.1
+ type: url
+ contract: v1alpha4
+ value: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/download/v0.8.1/infrastructure-components.yaml
+ replacements:
+ - old: "imagePullPolicy: Always"
+ new: "imagePullPolicy: IfNotPresent"
+ files:
+ - sourcePath: "../data/metadata.yaml"
+ - sourcePath: "../data/infrastructure-vsphere/capi-upgrades/v1alpha4/cluster-template.yaml"
+
+variables:
+ KUBERNETES_VERSION: "v1.22.3"
+ CNI: "./data/cni/calico/calico.yaml"
+ EXP_CLUSTER_RESOURCE_SET: "true"
+ CONTROL_PLANE_MACHINE_COUNT: 1
+ WORKER_MACHINE_COUNT: 1
+ IP_FAMILY: "IPv4"
+ # The following CAPV variables should be set before testing
+ VSPHERE_SERVER: "vcenter.vmware.com"
+ VSPHERE_DATACENTER: "dc0"
+ VSPHERE_DATASTORE: "WorkloadDatastore"
+ VSPHERE_STORAGE_POLICY: "Cluster API vSphere Storage Policy"
+ VSPHERE_FOLDER: "rp0"
+ VSPHERE_NETWORK: "VM Network"
+ VSPHERE_RESOURCE_POOL: "ResourcePool"
+ VSPHERE_TEMPLATE: "ubuntu-2004-kube-v1.21.2"
+ # The following variables are also required; use environment variables for them to avoid disclosing sensitive data
+ # VSPHERE_SSH_AUTHORIZED_KEY:
+ # VSPHERE_USERNAME:
+ # VSPHERE_PASSWORD:
+
+intervals:
+ default/wait-controllers: ["5m", "10s"]
+ default/wait-cluster: ["5m", "10s"]
+ default/wait-control-plane: ["20m", "10s"]
+ default/wait-worker-nodes: ["20m", "10s"]
+ default/wait-delete-cluster: ["5m", "10s"]
+ default/wait-machine-upgrade: ["15m", "1m"]
diff --git a/test/e2e/e2e_suite_test.go b/test/e2e/e2e_suite_test.go
new file mode 100644
index 000000000..51ec241d7
--- /dev/null
+++ b/test/e2e/e2e_suite_test.go
@@ -0,0 +1,181 @@
+/*
+Copyright 2021 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package e2e
+
+import (
+ "context"
+ "flag"
+ "os"
+ "path/filepath"
+ "strings"
+
+ "testing"
+
+ . "github.com/onsi/ginkgo"
+ . "github.com/onsi/gomega"
+
+ "k8s.io/apimachinery/pkg/runtime"
+ "sigs.k8s.io/cluster-api/test/e2e"
+ "sigs.k8s.io/cluster-api/test/framework"
+ "sigs.k8s.io/cluster-api/test/framework/bootstrap"
+ "sigs.k8s.io/cluster-api/test/framework/clusterctl"
+)
+
+// Test Suite flags
+var (
+ // configPath is the path to the e2e config file.
+ configPath string
+
+ // artifactFolder is the folder to store e2e test artifacts.
+ artifactFolder string
+
+ // clusterctlConfig is the file which tests will use as a clusterctl config.
+ // If it is not set, a local clusterctl repository (including a clusterctl config) will be created automatically.
+ clusterctlConfig string
+
+ // useExistingCluster instructs the test to use the current cluster instead of creating a new one (default discovery rules apply).
+ useExistingCluster bool
+
+ // skipCleanup prevents cleanup of test resources e.g. for debug purposes.
+ skipCleanup bool
+)
+
+func init() {
+ flag.StringVar(&configPath, "e2e.config", "", "path to the e2e config file")
+ flag.StringVar(&artifactFolder, "e2e.artifacts-folder", "", "folder where e2e test artifacts should be stored")
+ flag.StringVar(&clusterctlConfig, "e2e.clusterctl-config", "", "file which tests will use as a clusterctl config. If it is not set, a local clusterctl repository (including a clusterctl config) will be created automatically.")
+ flag.BoolVar(&useExistingCluster, "e2e.use-existing-cluster", false,
+ "if true, the test uses the current cluster instead of creating a new one (default discovery rules apply)")
+ flag.BoolVar(&skipCleanup, "e2e.skip-resource-cleanup", false, "if true, the resource cleanup after tests will be skipped")
+
+}
+
+// Global variables
+var (
+ ctx = context.Background()
+ err error
+
+ e2eConfig *clusterctl.E2EConfig
+ vsphere VSphereClient
+ clusterctlConfigPath string // path to the clusterctl config file
+
+ provider bootstrap.ClusterProvider
+ proxy framework.ClusterProxy
+ kubeconfig string
+)
+
+func defaultScheme() *runtime.Scheme {
+ sc := runtime.NewScheme()
+ framework.TryAddDefaultSchemes(sc)
+ return sc
+}
+
+func TestE2E(t *testing.T) {
+ RegisterFailHandler(Fail)
+
+ RunSpecs(t, "vsphere-cpi-e2e")
+}
+
+// Create a kind cluster that is shared across all the tests
+var _ = SynchronizedBeforeSuite(func() []byte {
+ By("load e2e config file", func() {
+ Expect(configPath).To(BeAnExistingFile(), "invalid test suite argument. e2e.config should be an existing file.")
+ e2eConfig = clusterctl.LoadE2EConfig(ctx, clusterctl.LoadE2EConfigInput{ConfigPath: configPath})
+ Expect(e2eConfig).NotTo(BeNil(), "cannot load e2e config file from ", configPath)
+ })
+
+ Expect(os.MkdirAll(artifactFolder, 0755)).To(Succeed(), "Invalid test suite argument. Can't create e2e.artifacts-folder %q", artifactFolder) //nolint:gosec
+ By("ensure clusterctl config", func() {
+ if clusterctlConfig == "" {
+ clusterctlConfigPath = createClusterctlLocalRepository(e2eConfig, filepath.Join(artifactFolder, "repository"))
+ } else {
+ clusterctlConfigPath = clusterctlConfig
+ }
+ })
+
+ By("init vSphere session", func() {
+ vsphere, err = CreateVSphereTestClient(ctx, e2eConfig)
+ Expect(err).Should(BeNil())
+ Expect(vsphere).NotTo(BeNil())
+ })
+
+ By("setup bootstrap cluster", func() {
+ provider = bootstrap.CreateKindBootstrapClusterAndLoadImages(ctx, bootstrap.CreateKindBootstrapClusterAndLoadImagesInput{
+ Name: e2eConfig.ManagementClusterName,
+ RequiresDockerSock: e2eConfig.HasDockerProvider(),
+ Images: e2eConfig.Images,
+ })
+ Expect(provider).NotTo(BeNil())
+
+ kubeconfig = provider.GetKubeconfigPath()
+ Expect(kubeconfig).NotTo(BeEmpty())
+ Expect(kubeconfig).To(BeAnExistingFile(), "kubeconfig for the bootstrap cluster does not exist")
+
+ proxy = framework.NewClusterProxy("bootstrap", kubeconfig, defaultScheme())
+ Expect(proxy).NotTo(BeNil())
+ })
+
+ By("initialize bootstrap cluster", func() {
+ clusterctl.InitManagementClusterAndWatchControllerLogs(ctx, clusterctl.InitManagementClusterAndWatchControllerLogsInput{
+ ClusterProxy: proxy,
+ ClusterctlConfigPath: clusterctlConfigPath,
+ LogFolder: filepath.Join(artifactFolder, "clusters", proxy.GetName()),
+ InfrastructureProviders: e2eConfig.InfrastructureProviders(),
+ }, e2eConfig.GetIntervals(proxy.GetName(), "wait-controllers")...)
+ })
+
+ return []byte(
+ strings.Join([]string{
+ artifactFolder,
+ configPath,
+ clusterctlConfigPath,
+ proxy.GetKubeconfigPath(),
+ }, ","))
+}, func(data []byte) {
+ // before each parallel thread
+})
+
+var _ = SynchronizedAfterSuite(func() {}, func() {
+ // after all parallel test cases finish
+ if !skipCleanup {
+ By("tear down the bootstrap cluster", func() {
+ Expect(provider).NotTo(BeNil())
+ Expect(proxy).NotTo(BeNil())
+
+ provider.Dispose(ctx)
+ proxy.Dispose(ctx)
+ })
+ }
+})
+
+func createClusterctlLocalRepository(config *clusterctl.E2EConfig, repositoryFolder string) string {
+ createRepositoryInput := clusterctl.CreateRepositoryInput{
+ E2EConfig: config,
+ RepositoryFolder: repositoryFolder,
+ }
+
+ // Ensure a CNI file is defined in the config and register a FileTransformation to inject the referenced file in place of the CNI_RESOURCES envSubst variable.
+ Expect(config.Variables).To(HaveKey(e2e.CNIPath), "Missing %s variable in the config", e2e.CNIPath)
+ cniPath := config.GetVariable(e2e.CNIPath)
+ Expect(cniPath).To(BeAnExistingFile(), "The %s variable should resolve to an existing file", e2e.CNIPath)
+
+ createRepositoryInput.RegisterClusterResourceSetConfigMapTransformation(cniPath, e2e.CNIResources)
+
+ clusterctlConfig := clusterctl.CreateRepository(ctx, createRepositoryInput)
+ Expect(clusterctlConfig).To(BeAnExistingFile(), "The clusterctl config file does not exist in the local repository %s", repositoryFolder)
+ return clusterctlConfig
+}
diff --git a/test/e2e/sanity_test.go b/test/e2e/sanity_test.go
new file mode 100644
index 000000000..aa6dd67e7
--- /dev/null
+++ b/test/e2e/sanity_test.go
@@ -0,0 +1,25 @@
+/*
+Copyright 2021 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package e2e
+
+import (
+ . "github.com/onsi/ginkgo"
+)
+
+var _ = Describe("Cluster creation with VSphere node resources", func() {
+ It("should pass sanity check, i.e. create and initialize the bootstrap cluster", func() {})
+})
diff --git a/test/e2e/vsphere.go b/test/e2e/vsphere.go
new file mode 100644
index 000000000..9af496ce5
--- /dev/null
+++ b/test/e2e/vsphere.go
@@ -0,0 +1,102 @@
+/*
+Copyright 2021 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package e2e
+
+import (
+ "context"
+ "errors"
+ "net/url"
+
+ "github.com/vmware/govmomi"
+ "github.com/vmware/govmomi/find"
+ "github.com/vmware/govmomi/object"
+ "github.com/vmware/govmomi/vim25/soap"
+
+ "sigs.k8s.io/cluster-api/test/framework/clusterctl"
+)
+
+var ErrFieldNotFound = errors.New("field not found in the e2e config")
+
+type VSphereClient interface {
+}
+
+// vSphereTestClient is a vSphere-specific client for e2e testing
+type vSphereTestClient struct {
+ Config *vSphereClientConfig
+ Client *govmomi.Client
+ Finder *find.Finder
+ Datacenter *object.Datacenter
+}
+
+// vSphereClientConfig holds the configuration for a VSphereClient
+type vSphereClientConfig struct {
+ username string
+ password string
+ server string
+ datacenter string
+}
+
+// NewVSphereClientConfigFromE2E extracts a vSphereClientConfig from the cluster-api e2e config
+func NewVSphereClientConfigFromE2E(e *clusterctl.E2EConfig) (*vSphereClientConfig, error) {
+ server, ok := e.Variables["VSPHERE_SERVER"]
+ if !ok {
+ return nil, ErrFieldNotFound
+ }
+ username, ok := e.Variables["VSPHERE_USERNAME"]
+ if !ok {
+ return nil, ErrFieldNotFound
+ }
+ password, ok := e.Variables["VSPHERE_PASSWORD"]
+ if !ok {
+ return nil, ErrFieldNotFound
+ }
+ datacenter, ok := e.Variables["VSPHERE_DATACENTER"]
+ if !ok {
+ return nil, ErrFieldNotFound
+ }
+ return &vSphereClientConfig{
+ username: username,
+ password: password,
+ server: server,
+ datacenter: datacenter,
+ }, nil
+}
+
+// CreateVSphereTestClient creates a vSphereTestClient when a config is provided
+func CreateVSphereTestClient(ctx context.Context, e2eConfig *clusterctl.E2EConfig) (VSphereClient, error) {
+ config, err := NewVSphereClientConfigFromE2E(e2eConfig)
+ if err != nil {
+ return nil, err
+ }
+ serverURL, err := soap.ParseURL(config.server)
+ if err != nil {
+ return nil, err
+ }
+ serverURL.User = url.UserPassword(config.username, config.password)
+
+ client, err := govmomi.NewClient(ctx, serverURL, true)
+ if err != nil {
+ return nil, err
+ }
+
+ finder := find.NewFinder(client.Client)
+ datacenter, err := finder.DatacenterOrDefault(ctx, config.datacenter)
+ if err != nil {
+ return nil, err
+ }
+ return vSphereTestClient{Config: config, Client: client, Finder: finder, Datacenter: datacenter}, nil
+}
diff --git a/vendor/github.com/Azure/go-ansiterm/go.mod b/vendor/github.com/Azure/go-ansiterm/go.mod
deleted file mode 100644
index 965cb8120..000000000
--- a/vendor/github.com/Azure/go-ansiterm/go.mod
+++ /dev/null
@@ -1,5 +0,0 @@
-module github.com/Azure/go-ansiterm
-
-go 1.16
-
-require golang.org/x/sys v0.0.0-20210616094352-59db8d763f22
diff --git a/vendor/github.com/Azure/go-ansiterm/go.sum b/vendor/github.com/Azure/go-ansiterm/go.sum
deleted file mode 100644
index 9f05d9d3e..000000000
--- a/vendor/github.com/Azure/go-ansiterm/go.sum
+++ /dev/null
@@ -1,2 +0,0 @@
-golang.org/x/sys v0.0.0-20210616094352-59db8d763f22 h1:RqytpXGR1iVNX7psjB3ff8y7sNFinVFvkx1c8SjBkio=
-golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
diff --git a/vendor/github.com/BurntSushi/toml/.gitignore b/vendor/github.com/BurntSushi/toml/.gitignore
new file mode 100644
index 000000000..0cd380037
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/.gitignore
@@ -0,0 +1,5 @@
+TAGS
+tags
+.*.swp
+tomlcheck/tomlcheck
+toml.test
diff --git a/vendor/github.com/BurntSushi/toml/.travis.yml b/vendor/github.com/BurntSushi/toml/.travis.yml
new file mode 100644
index 000000000..8b8afc4f0
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/.travis.yml
@@ -0,0 +1,15 @@
+language: go
+go:
+ - 1.1
+ - 1.2
+ - 1.3
+ - 1.4
+ - 1.5
+ - 1.6
+ - tip
+install:
+ - go install ./...
+ - go get github.com/BurntSushi/toml-test
+script:
+ - export PATH="$PATH:$HOME/gopath/bin"
+ - make test
diff --git a/vendor/github.com/BurntSushi/toml/COMPATIBLE b/vendor/github.com/BurntSushi/toml/COMPATIBLE
new file mode 100644
index 000000000..6efcfd0ce
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/COMPATIBLE
@@ -0,0 +1,3 @@
+Compatible with TOML version
+[v0.4.0](https://github.com/toml-lang/toml/blob/v0.4.0/versions/en/toml-v0.4.0.md)
+
diff --git a/vendor/github.com/BurntSushi/toml/COPYING b/vendor/github.com/BurntSushi/toml/COPYING
new file mode 100644
index 000000000..01b574320
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/COPYING
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2013 TOML authors
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/vendor/github.com/BurntSushi/toml/Makefile b/vendor/github.com/BurntSushi/toml/Makefile
new file mode 100644
index 000000000..3600848d3
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/Makefile
@@ -0,0 +1,19 @@
+install:
+ go install ./...
+
+test: install
+ go test -v
+ toml-test toml-test-decoder
+ toml-test -encoder toml-test-encoder
+
+fmt:
+ gofmt -w *.go */*.go
+ colcheck *.go */*.go
+
+tags:
+ find ./ -name '*.go' -print0 | xargs -0 gotags > TAGS
+
+push:
+ git push origin master
+ git push github master
+
diff --git a/vendor/github.com/BurntSushi/toml/README.md b/vendor/github.com/BurntSushi/toml/README.md
new file mode 100644
index 000000000..7c1b37ecc
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/README.md
@@ -0,0 +1,218 @@
+## TOML parser and encoder for Go with reflection
+
+TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
+reflection interface similar to Go's standard library `json` and `xml`
+packages. This package also supports the `encoding.TextUnmarshaler` and
+`encoding.TextMarshaler` interfaces so that you can define custom data
+representations. (There is an example of this below.)
+
+Spec: https://github.com/toml-lang/toml
+
+Compatible with TOML version
+[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)
+
+Documentation: https://godoc.org/github.com/BurntSushi/toml
+
+Installation:
+
+```bash
+go get github.com/BurntSushi/toml
+```
+
+Try the toml validator:
+
+```bash
+go get github.com/BurntSushi/toml/cmd/tomlv
+tomlv some-toml-file.toml
+```
+
+[![Build Status](https://travis-ci.org/BurntSushi/toml.svg?branch=master)](https://travis-ci.org/BurntSushi/toml) [![GoDoc](https://godoc.org/github.com/BurntSushi/toml?status.svg)](https://godoc.org/github.com/BurntSushi/toml)
+
+### Testing
+
+This package passes all tests in
+[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
+and the encoder.
+
+### Examples
+
+This package works similarly to how the Go standard library handles `XML`
+and `JSON`. Namely, data is loaded into Go values via reflection.
+
+For the simplest example, consider some TOML file as just a list of keys
+and values:
+
+```toml
+Age = 25
+Cats = [ "Cauchy", "Plato" ]
+Pi = 3.14
+Perfection = [ 6, 28, 496, 8128 ]
+DOB = 1987-07-05T05:45:00Z
+```
+
+Which could be defined in Go as:
+
+```go
+type Config struct {
+ Age int
+ Cats []string
+ Pi float64
+ Perfection []int
+ DOB time.Time // requires `import time`
+}
+```
+
+And then decoded with:
+
+```go
+var conf Config
+if _, err := toml.Decode(tomlData, &conf); err != nil {
+ // handle error
+}
+```
+
+You can also use struct tags if your struct field name doesn't map to a TOML
+key value directly:
+
+```toml
+some_key_NAME = "wat"
+```
+
+```go
+type TOML struct {
+ ObscureKey string `toml:"some_key_NAME"`
+}
+```
+
+### Using the `encoding.TextUnmarshaler` interface
+
+Here's an example that automatically parses duration strings into
+`time.Duration` values:
+
+```toml
+[[song]]
+name = "Thunder Road"
+duration = "4m49s"
+
+[[song]]
+name = "Stairway to Heaven"
+duration = "8m03s"
+```
+
+Which can be decoded with:
+
+```go
+type song struct {
+ Name string
+ Duration duration
+}
+type songs struct {
+ Song []song
+}
+var favorites songs
+if _, err := toml.Decode(blob, &favorites); err != nil {
+ log.Fatal(err)
+}
+
+for _, s := range favorites.Song {
+ fmt.Printf("%s (%s)\n", s.Name, s.Duration)
+}
+```
+
+And you'll also need a `duration` type that satisfies the
+`encoding.TextUnmarshaler` interface:
+
+```go
+type duration struct {
+ time.Duration
+}
+
+func (d *duration) UnmarshalText(text []byte) error {
+ var err error
+ d.Duration, err = time.ParseDuration(string(text))
+ return err
+}
+```
+
+### More complex usage
+
+Here's an example of how to load the example from the official spec page:
+
+```toml
+# This is a TOML document. Boom.
+
+title = "TOML Example"
+
+[owner]
+name = "Tom Preston-Werner"
+organization = "GitHub"
+bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
+dob = 1979-05-27T07:32:00Z # First class dates? Why not?
+
+[database]
+server = "192.168.1.1"
+ports = [ 8001, 8001, 8002 ]
+connection_max = 5000
+enabled = true
+
+[servers]
+
+ # You can indent as you please. Tabs or spaces. TOML don't care.
+ [servers.alpha]
+ ip = "10.0.0.1"
+ dc = "eqdc10"
+
+ [servers.beta]
+ ip = "10.0.0.2"
+ dc = "eqdc10"
+
+[clients]
+data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
+
+# Line breaks are OK when inside arrays
+hosts = [
+ "alpha",
+ "omega"
+]
+```
+
+And the corresponding Go types are:
+
+```go
+type tomlConfig struct {
+ Title string
+ Owner ownerInfo
+ DB database `toml:"database"`
+ Servers map[string]server
+ Clients clients
+}
+
+type ownerInfo struct {
+ Name string
+ Org string `toml:"organization"`
+ Bio string
+ DOB time.Time
+}
+
+type database struct {
+ Server string
+ Ports []int
+ ConnMax int `toml:"connection_max"`
+ Enabled bool
+}
+
+type server struct {
+ IP string
+ DC string
+}
+
+type clients struct {
+ Data [][]interface{}
+ Hosts []string
+}
+```
+
+Note that a case insensitive match will be tried if an exact match can't be
+found.
+
+A working example of the above can be found in `_examples/example.{go,toml}`.
diff --git a/vendor/github.com/BurntSushi/toml/decode.go b/vendor/github.com/BurntSushi/toml/decode.go
new file mode 100644
index 000000000..b0fd51d5b
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/decode.go
@@ -0,0 +1,509 @@
+package toml
+
+import (
+ "fmt"
+ "io"
+ "io/ioutil"
+ "math"
+ "reflect"
+ "strings"
+ "time"
+)
+
+func e(format string, args ...interface{}) error {
+ return fmt.Errorf("toml: "+format, args...)
+}
+
+// Unmarshaler is the interface implemented by objects that can unmarshal a
+// TOML description of themselves.
+type Unmarshaler interface {
+ UnmarshalTOML(interface{}) error
+}
+
+// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
+func Unmarshal(p []byte, v interface{}) error {
+ _, err := Decode(string(p), v)
+ return err
+}
+
+// Primitive is a TOML value that hasn't been decoded into a Go value.
+// When using the various `Decode*` functions, the type `Primitive` may
+// be given to any value, and its decoding will be delayed.
+//
+// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
+//
+// The underlying representation of a `Primitive` value is subject to change.
+// Do not rely on it.
+//
+// N.B. Primitive values are still parsed, so using them will only avoid
+// the overhead of reflection. They can be useful when you don't know the
+// exact type of TOML data until run time.
+type Primitive struct {
+ undecoded interface{}
+ context Key
+}
+
+// DEPRECATED!
+//
+// Use MetaData.PrimitiveDecode instead.
+func PrimitiveDecode(primValue Primitive, v interface{}) error {
+ md := MetaData{decoded: make(map[string]bool)}
+ return md.unify(primValue.undecoded, rvalue(v))
+}
+
+// PrimitiveDecode is just like the other `Decode*` functions, except it
+// decodes a TOML value that has already been parsed. Valid primitive values
+// can *only* be obtained from values filled by the decoder functions,
+// including this method. (i.e., `v` may contain more `Primitive`
+// values.)
+//
+// Meta data for primitive values is included in the meta data returned by
+// the `Decode*` functions with one exception: keys returned by the Undecoded
+// method will only reflect keys that were decoded. Namely, any keys hidden
+// behind a Primitive will be considered undecoded. Executing this method will
+// update the undecoded keys in the meta data. (See the example.)
+func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
+ md.context = primValue.context
+ defer func() { md.context = nil }()
+ return md.unify(primValue.undecoded, rvalue(v))
+}
+
+// Decode will decode the contents of `data` in TOML format into a pointer
+// `v`.
+//
+// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
+// used interchangeably.)
+//
+// TOML arrays of tables correspond to either a slice of structs or a slice
+// of maps.
+//
+// TOML datetimes correspond to Go `time.Time` values.
+//
+// All other TOML types (float, string, int, bool and array) correspond
+// to the obvious Go types.
+//
+// An exception to the above rules is if a type implements the
+// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
+// (floats, strings, integers, booleans and datetimes) will be converted to
+// a byte string and given to the value's UnmarshalText method. See the
+// Unmarshaler example for a demonstration with time duration strings.
+//
+// Key mapping
+//
+// TOML keys can map to either keys in a Go map or field names in a Go
+// struct. The special `toml` struct tag may be used to map TOML keys to
+// struct fields that don't match the key name exactly. (See the example.)
+// A case insensitive match to struct names will be tried if an exact match
+// can't be found.
+//
+// The mapping between TOML values and Go values is loose. That is, there
+// may exist TOML values that cannot be placed into your representation, and
+// there may be parts of your representation that do not correspond to
+// TOML values. This loose mapping can be made stricter by using the IsDefined
+// and/or Undecoded methods on the MetaData returned.
+//
+// This decoder will not handle cyclic types. If a cyclic type is passed,
+// `Decode` will not terminate.
+func Decode(data string, v interface{}) (MetaData, error) {
+ rv := reflect.ValueOf(v)
+ if rv.Kind() != reflect.Ptr {
+ return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
+ }
+ if rv.IsNil() {
+ return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
+ }
+ p, err := parse(data)
+ if err != nil {
+ return MetaData{}, err
+ }
+ md := MetaData{
+ p.mapping, p.types, p.ordered,
+ make(map[string]bool, len(p.ordered)), nil,
+ }
+ return md, md.unify(p.mapping, indirect(rv))
+}
+
+// DecodeFile is just like Decode, except it will automatically read the
+// contents of the file at `fpath` and decode it for you.
+func DecodeFile(fpath string, v interface{}) (MetaData, error) {
+ bs, err := ioutil.ReadFile(fpath)
+ if err != nil {
+ return MetaData{}, err
+ }
+ return Decode(string(bs), v)
+}
+
+// DecodeReader is just like Decode, except it will consume all bytes
+// from the reader and decode it for you.
+func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
+ bs, err := ioutil.ReadAll(r)
+ if err != nil {
+ return MetaData{}, err
+ }
+ return Decode(string(bs), v)
+}
+
+// unify performs a sort of type unification based on the structure of `rv`,
+// which is the client representation.
+//
+// Any type mismatch produces an error. Finding a type that we don't know
+// how to handle produces an unsupported type error.
+func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
+
+ // Special case. Look for a `Primitive` value.
+ if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
+ // Save the undecoded data and the key context into the primitive
+ // value.
+ context := make(Key, len(md.context))
+ copy(context, md.context)
+ rv.Set(reflect.ValueOf(Primitive{
+ undecoded: data,
+ context: context,
+ }))
+ return nil
+ }
+
+ // Special case. Unmarshaler Interface support.
+ if rv.CanAddr() {
+ if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
+ return v.UnmarshalTOML(data)
+ }
+ }
+
+ // Special case. Handle time.Time values specifically.
+ // TODO: Remove this code when we decide to drop support for Go 1.1.
+ // This isn't necessary in Go 1.2 because time.Time satisfies the encoding
+ // interfaces.
+ if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
+ return md.unifyDatetime(data, rv)
+ }
+
+ // Special case. Look for a value satisfying the TextUnmarshaler interface.
+ if v, ok := rv.Interface().(TextUnmarshaler); ok {
+ return md.unifyText(data, v)
+ }
+ // BUG(burntsushi)
+ // The behavior here is incorrect whenever a Go type satisfies the
+ // encoding.TextUnmarshaler interface but also corresponds to a TOML
+ // hash or array. In particular, the unmarshaler should only be applied
+ // to primitive TOML values. But at this point, it will be applied to
+ // all kinds of values and produce an incorrect error whenever those values
+ // are hashes or arrays (including arrays of tables).
+
+ k := rv.Kind()
+
+ // laziness
+ if k >= reflect.Int && k <= reflect.Uint64 {
+ return md.unifyInt(data, rv)
+ }
+ switch k {
+ case reflect.Ptr:
+ elem := reflect.New(rv.Type().Elem())
+ err := md.unify(data, reflect.Indirect(elem))
+ if err != nil {
+ return err
+ }
+ rv.Set(elem)
+ return nil
+ case reflect.Struct:
+ return md.unifyStruct(data, rv)
+ case reflect.Map:
+ return md.unifyMap(data, rv)
+ case reflect.Array:
+ return md.unifyArray(data, rv)
+ case reflect.Slice:
+ return md.unifySlice(data, rv)
+ case reflect.String:
+ return md.unifyString(data, rv)
+ case reflect.Bool:
+ return md.unifyBool(data, rv)
+ case reflect.Interface:
+ // we only support empty interfaces.
+ if rv.NumMethod() > 0 {
+ return e("unsupported type %s", rv.Type())
+ }
+ return md.unifyAnything(data, rv)
+ case reflect.Float32:
+ fallthrough
+ case reflect.Float64:
+ return md.unifyFloat64(data, rv)
+ }
+ return e("unsupported type %s", rv.Kind())
+}
+
+func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
+ tmap, ok := mapping.(map[string]interface{})
+ if !ok {
+ if mapping == nil {
+ return nil
+ }
+ return e("type mismatch for %s: expected table but found %T",
+ rv.Type().String(), mapping)
+ }
+
+ for key, datum := range tmap {
+ var f *field
+ fields := cachedTypeFields(rv.Type())
+ for i := range fields {
+ ff := &fields[i]
+ if ff.name == key {
+ f = ff
+ break
+ }
+ if f == nil && strings.EqualFold(ff.name, key) {
+ f = ff
+ }
+ }
+ if f != nil {
+ subv := rv
+ for _, i := range f.index {
+ subv = indirect(subv.Field(i))
+ }
+ if isUnifiable(subv) {
+ md.decoded[md.context.add(key).String()] = true
+ md.context = append(md.context, key)
+ if err := md.unify(datum, subv); err != nil {
+ return err
+ }
+ md.context = md.context[0 : len(md.context)-1]
+ } else if f.name != "" {
+ // Bad user! No soup for you!
+ return e("cannot write unexported field %s.%s",
+ rv.Type().String(), f.name)
+ }
+ }
+ }
+ return nil
+}
+
+func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
+ tmap, ok := mapping.(map[string]interface{})
+ if !ok {
+ if mapping == nil {
+ return nil
+ }
+ return badtype("map", mapping)
+ }
+ if rv.IsNil() {
+ rv.Set(reflect.MakeMap(rv.Type()))
+ }
+ for k, v := range tmap {
+ md.decoded[md.context.add(k).String()] = true
+ md.context = append(md.context, k)
+
+ rvkey := indirect(reflect.New(rv.Type().Key()))
+ rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
+ if err := md.unify(v, rvval); err != nil {
+ return err
+ }
+ md.context = md.context[0 : len(md.context)-1]
+
+ rvkey.SetString(k)
+ rv.SetMapIndex(rvkey, rvval)
+ }
+ return nil
+}
+
+func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
+ datav := reflect.ValueOf(data)
+ if datav.Kind() != reflect.Slice {
+ if !datav.IsValid() {
+ return nil
+ }
+ return badtype("slice", data)
+ }
+ sliceLen := datav.Len()
+ if sliceLen != rv.Len() {
+ return e("expected array length %d; got TOML array of length %d",
+ rv.Len(), sliceLen)
+ }
+ return md.unifySliceArray(datav, rv)
+}
+
+func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
+ datav := reflect.ValueOf(data)
+ if datav.Kind() != reflect.Slice {
+ if !datav.IsValid() {
+ return nil
+ }
+ return badtype("slice", data)
+ }
+ n := datav.Len()
+ if rv.IsNil() || rv.Cap() < n {
+ rv.Set(reflect.MakeSlice(rv.Type(), n, n))
+ }
+ rv.SetLen(n)
+ return md.unifySliceArray(datav, rv)
+}
+
+func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
+ sliceLen := data.Len()
+ for i := 0; i < sliceLen; i++ {
+ v := data.Index(i).Interface()
+ sliceval := indirect(rv.Index(i))
+ if err := md.unify(v, sliceval); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
+ if _, ok := data.(time.Time); ok {
+ rv.Set(reflect.ValueOf(data))
+ return nil
+ }
+ return badtype("time.Time", data)
+}
+
+func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
+ if s, ok := data.(string); ok {
+ rv.SetString(s)
+ return nil
+ }
+ return badtype("string", data)
+}
+
+func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
+ if num, ok := data.(float64); ok {
+ switch rv.Kind() {
+ case reflect.Float32:
+ fallthrough
+ case reflect.Float64:
+ rv.SetFloat(num)
+ default:
+ panic("bug")
+ }
+ return nil
+ }
+ return badtype("float", data)
+}
+
+func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
+ if num, ok := data.(int64); ok {
+ if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
+ switch rv.Kind() {
+ case reflect.Int, reflect.Int64:
+ // No bounds checking necessary.
+ case reflect.Int8:
+ if num < math.MinInt8 || num > math.MaxInt8 {
+ return e("value %d is out of range for int8", num)
+ }
+ case reflect.Int16:
+ if num < math.MinInt16 || num > math.MaxInt16 {
+ return e("value %d is out of range for int16", num)
+ }
+ case reflect.Int32:
+ if num < math.MinInt32 || num > math.MaxInt32 {
+ return e("value %d is out of range for int32", num)
+ }
+ }
+ rv.SetInt(num)
+ } else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
+ unum := uint64(num)
+ switch rv.Kind() {
+ case reflect.Uint, reflect.Uint64:
+ // No bounds checking necessary.
+ case reflect.Uint8:
+ if num < 0 || unum > math.MaxUint8 {
+ return e("value %d is out of range for uint8", num)
+ }
+ case reflect.Uint16:
+ if num < 0 || unum > math.MaxUint16 {
+ return e("value %d is out of range for uint16", num)
+ }
+ case reflect.Uint32:
+ if num < 0 || unum > math.MaxUint32 {
+ return e("value %d is out of range for uint32", num)
+ }
+ }
+ rv.SetUint(unum)
+ } else {
+ panic("unreachable")
+ }
+ return nil
+ }
+ return badtype("integer", data)
+}
+
+func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
+ if b, ok := data.(bool); ok {
+ rv.SetBool(b)
+ return nil
+ }
+ return badtype("boolean", data)
+}
+
+func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
+ rv.Set(reflect.ValueOf(data))
+ return nil
+}
+
+func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
+ var s string
+ switch sdata := data.(type) {
+ case TextMarshaler:
+ text, err := sdata.MarshalText()
+ if err != nil {
+ return err
+ }
+ s = string(text)
+ case fmt.Stringer:
+ s = sdata.String()
+ case string:
+ s = sdata
+ case bool:
+ s = fmt.Sprintf("%v", sdata)
+ case int64:
+ s = fmt.Sprintf("%d", sdata)
+ case float64:
+ s = fmt.Sprintf("%f", sdata)
+ default:
+ return badtype("primitive (string-like)", data)
+ }
+ if err := v.UnmarshalText([]byte(s)); err != nil {
+ return err
+ }
+ return nil
+}
+
+// rvalue returns a reflect.Value of `v`. All pointers are resolved.
+func rvalue(v interface{}) reflect.Value {
+ return indirect(reflect.ValueOf(v))
+}
+
+// indirect returns the value pointed to by a pointer.
+// Pointers are followed until the value is not a pointer.
+// New values are allocated for each nil pointer.
+//
+// An exception to this rule is if the value satisfies an interface of
+// interest to us (like encoding.TextUnmarshaler).
+func indirect(v reflect.Value) reflect.Value {
+ if v.Kind() != reflect.Ptr {
+ if v.CanSet() {
+ pv := v.Addr()
+ if _, ok := pv.Interface().(TextUnmarshaler); ok {
+ return pv
+ }
+ }
+ return v
+ }
+ if v.IsNil() {
+ v.Set(reflect.New(v.Type().Elem()))
+ }
+ return indirect(reflect.Indirect(v))
+}
+
+func isUnifiable(rv reflect.Value) bool {
+ if rv.CanSet() {
+ return true
+ }
+ if _, ok := rv.Interface().(TextUnmarshaler); ok {
+ return true
+ }
+ return false
+}
+
+func badtype(expected string, data interface{}) error {
+ return e("cannot load TOML value of type %T into a Go %s", data, expected)
+}
diff --git a/vendor/github.com/BurntSushi/toml/decode_meta.go b/vendor/github.com/BurntSushi/toml/decode_meta.go
new file mode 100644
index 000000000..b9914a679
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/decode_meta.go
@@ -0,0 +1,121 @@
+package toml
+
+import "strings"
+
+// MetaData allows access to meta information about TOML data that may not
+// be inferable via reflection. In particular, it records whether a key has
+// been defined and the TOML type of each key.
+type MetaData struct {
+ mapping map[string]interface{}
+ types map[string]tomlType
+ keys []Key
+ decoded map[string]bool
+ context Key // Used only during decoding.
+}
+
+// IsDefined returns true if the key given exists in the TOML data. The key
+// should be specified hierarchially. e.g.,
+//
+// // access the TOML key 'a.b.c'
+// IsDefined("a", "b", "c")
+//
+// IsDefined will return false if an empty key is given. Keys are case sensitive.
+func (md *MetaData) IsDefined(key ...string) bool {
+ if len(key) == 0 {
+ return false
+ }
+
+ var hash map[string]interface{}
+ var ok bool
+ var hashOrVal interface{} = md.mapping
+ for _, k := range key {
+ if hash, ok = hashOrVal.(map[string]interface{}); !ok {
+ return false
+ }
+ if hashOrVal, ok = hash[k]; !ok {
+ return false
+ }
+ }
+ return true
+}
+
+// Type returns a string representation of the type of the key specified.
+//
+// Type will return the empty string if given an empty key or a key that
+// does not exist. Keys are case sensitive.
+func (md *MetaData) Type(key ...string) string {
+ fullkey := strings.Join(key, ".")
+ if typ, ok := md.types[fullkey]; ok {
+ return typ.typeString()
+ }
+ return ""
+}
+
+// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
+// to get values of this type.
+type Key []string
+
+func (k Key) String() string {
+ return strings.Join(k, ".")
+}
+
+func (k Key) maybeQuotedAll() string {
+ var ss []string
+ for i := range k {
+ ss = append(ss, k.maybeQuoted(i))
+ }
+ return strings.Join(ss, ".")
+}
+
+func (k Key) maybeQuoted(i int) string {
+ quote := false
+ for _, c := range k[i] {
+ if !isBareKeyChar(c) {
+ quote = true
+ break
+ }
+ }
+ if quote {
+ return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
+ }
+ return k[i]
+}
+
+func (k Key) add(piece string) Key {
+ newKey := make(Key, len(k)+1)
+ copy(newKey, k)
+ newKey[len(k)] = piece
+ return newKey
+}
+
+// Keys returns a slice of every key in the TOML data, including key groups.
+// Each key is itself a slice, where the first element is the top of the
+// hierarchy and the last is the most specific.
+//
+// The list will have the same order as the keys appeared in the TOML data.
+//
+// All keys returned are non-empty.
+func (md *MetaData) Keys() []Key {
+ return md.keys
+}
+
+// Undecoded returns all keys that have not been decoded in the order in which
+// they appear in the original TOML document.
+//
+// This includes keys that haven't been decoded because of a Primitive value.
+// Once the Primitive value is decoded, the keys will be considered decoded.
+//
+// Also note that decoding into an empty interface will result in no decoding,
+// and so no keys will be considered decoded.
+//
+// In this sense, the Undecoded keys correspond to keys in the TOML document
+// that do not have a concrete type in your representation.
+func (md *MetaData) Undecoded() []Key {
+ undecoded := make([]Key, 0, len(md.keys))
+ for _, key := range md.keys {
+ if !md.decoded[key.String()] {
+ undecoded = append(undecoded, key)
+ }
+ }
+ return undecoded
+}
diff --git a/vendor/github.com/BurntSushi/toml/doc.go b/vendor/github.com/BurntSushi/toml/doc.go
new file mode 100644
index 000000000..b371f396e
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/doc.go
@@ -0,0 +1,27 @@
+/*
+Package toml provides facilities for decoding and encoding TOML configuration
+files via reflection. There is also support for delaying decoding with
+the Primitive type, and querying the set of keys in a TOML document with the
+MetaData type.
+
+The specification implemented: https://github.com/toml-lang/toml
+
+The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
+whether a file is a valid TOML document. It can also be used to print the
+type of each key in a TOML document.
+
+Testing
+
+There are two important types of tests used for this package. The first is
+contained inside '*_test.go' files and uses the standard Go unit testing
+framework. These tests are primarily devoted to holistically testing the
+decoder and encoder.
+
+The second type of testing is used to verify the implementation's adherence
+to the TOML specification. These tests have been factored into their own
+project: https://github.com/BurntSushi/toml-test
+
+The reason the tests are in a separate project is so that they can be used by
+any implementation of TOML; the test suite itself is language agnostic.
+*/
+package toml
diff --git a/vendor/github.com/BurntSushi/toml/encode.go b/vendor/github.com/BurntSushi/toml/encode.go
new file mode 100644
index 000000000..d905c21a2
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/encode.go
@@ -0,0 +1,568 @@
+package toml
+
+import (
+ "bufio"
+ "errors"
+ "fmt"
+ "io"
+ "reflect"
+ "sort"
+ "strconv"
+ "strings"
+ "time"
+)
+
+type tomlEncodeError struct{ error }
+
+var (
+ errArrayMixedElementTypes = errors.New(
+ "toml: cannot encode array with mixed element types")
+ errArrayNilElement = errors.New(
+ "toml: cannot encode array with nil element")
+ errNonString = errors.New(
+ "toml: cannot encode a map with non-string key type")
+ errAnonNonStruct = errors.New(
+ "toml: cannot encode an anonymous field that is not a struct")
+ errArrayNoTable = errors.New(
+ "toml: TOML array element cannot contain a table")
+ errNoKey = errors.New(
+ "toml: top-level values must be Go maps or structs")
+ errAnything = errors.New("") // used in testing
+)
+
+var quotedReplacer = strings.NewReplacer(
+ "\t", "\\t",
+ "\n", "\\n",
+ "\r", "\\r",
+ "\"", "\\\"",
+ "\\", "\\\\",
+)
+
+// Encoder controls the encoding of Go values to a TOML document to some
+// io.Writer.
+//
+// The indentation level can be controlled with the Indent field.
+type Encoder struct {
+ // A single indentation level. By default it is two spaces.
+ Indent string
+
+ // hasWritten is whether we have written any output to w yet.
+ hasWritten bool
+ w *bufio.Writer
+}
+
+// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
+// given. By default, a single indentation level is 2 spaces.
+func NewEncoder(w io.Writer) *Encoder {
+ return &Encoder{
+ w: bufio.NewWriter(w),
+ Indent: " ",
+ }
+}
+
+// Encode writes a TOML representation of the Go value to the underlying
+// io.Writer. If the value given cannot be encoded to a valid TOML document,
+// then an error is returned.
+//
+// The mapping between Go values and TOML values should be precisely the same
+// as for the Decode* functions. Similarly, the TextMarshaler interface is
+// supported by encoding the resulting bytes as strings. (If you want to write
+// arbitrary binary data then you will need to use something like base64 since
+// TOML does not have any binary types.)
+//
+// When encoding TOML hashes (i.e., Go maps or structs), keys without any
+// sub-hashes are encoded first.
+//
+// If a Go map is encoded, then its keys are sorted alphabetically for
+// deterministic output. More control over this behavior may be provided if
+// there is demand for it.
+//
+// Encoding Go values without a corresponding TOML representation---like map
+// types with non-string keys---will cause an error to be returned. Similarly
+// for mixed arrays/slices, arrays/slices with nil elements, embedded
+// non-struct types and nested slices containing maps or structs.
+// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
+// and so is []map[string][]string.)
+func (enc *Encoder) Encode(v interface{}) error {
+ rv := eindirect(reflect.ValueOf(v))
+ if err := enc.safeEncode(Key([]string{}), rv); err != nil {
+ return err
+ }
+ return enc.w.Flush()
+}
+
+func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
+ defer func() {
+ if r := recover(); r != nil {
+ if terr, ok := r.(tomlEncodeError); ok {
+ err = terr.error
+ return
+ }
+ panic(r)
+ }
+ }()
+ enc.encode(key, rv)
+ return nil
+}
+
+func (enc *Encoder) encode(key Key, rv reflect.Value) {
+ // Special case: time.Time needs to be in ISO 8601 format.
+ // Special case: if we can marshal the type to text, then we use that.
+ // This prevents the encoder from handling these types as generic
+ // structs (or whatever the underlying type of a TextMarshaler is).
+ switch rv.Interface().(type) {
+ case time.Time, TextMarshaler:
+ enc.keyEqElement(key, rv)
+ return
+ }
+
+ k := rv.Kind()
+ switch k {
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
+ reflect.Int64,
+ reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
+ reflect.Uint64,
+ reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
+ enc.keyEqElement(key, rv)
+ case reflect.Array, reflect.Slice:
+ if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
+ enc.eArrayOfTables(key, rv)
+ } else {
+ enc.keyEqElement(key, rv)
+ }
+ case reflect.Interface:
+ if rv.IsNil() {
+ return
+ }
+ enc.encode(key, rv.Elem())
+ case reflect.Map:
+ if rv.IsNil() {
+ return
+ }
+ enc.eTable(key, rv)
+ case reflect.Ptr:
+ if rv.IsNil() {
+ return
+ }
+ enc.encode(key, rv.Elem())
+ case reflect.Struct:
+ enc.eTable(key, rv)
+ default:
+ panic(e("unsupported type for key '%s': %s", key, k))
+ }
+}
+
+// eElement encodes any value that can be an array element (primitives and
+// arrays).
+func (enc *Encoder) eElement(rv reflect.Value) {
+ switch v := rv.Interface().(type) {
+ case time.Time:
+ // Special case time.Time as a primitive. Has to come before
+ // TextMarshaler below because time.Time implements
+ // encoding.TextMarshaler, but we need to always use UTC.
+ enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
+ return
+ case TextMarshaler:
+ // Special case. Use text marshaler if it's available for this value.
+ if s, err := v.MarshalText(); err != nil {
+ encPanic(err)
+ } else {
+ enc.writeQuoted(string(s))
+ }
+ return
+ }
+ switch rv.Kind() {
+ case reflect.Bool:
+ enc.wf(strconv.FormatBool(rv.Bool()))
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
+ reflect.Int64:
+ enc.wf(strconv.FormatInt(rv.Int(), 10))
+ case reflect.Uint, reflect.Uint8, reflect.Uint16,
+ reflect.Uint32, reflect.Uint64:
+ enc.wf(strconv.FormatUint(rv.Uint(), 10))
+ case reflect.Float32:
+ enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
+ case reflect.Float64:
+ enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
+ case reflect.Array, reflect.Slice:
+ enc.eArrayOrSliceElement(rv)
+ case reflect.Interface:
+ enc.eElement(rv.Elem())
+ case reflect.String:
+ enc.writeQuoted(rv.String())
+ default:
+ panic(e("unexpected primitive type: %s", rv.Kind()))
+ }
+}
+
+// By the TOML spec, all floats must have a decimal with at least one
+// number on either side.
+func floatAddDecimal(fstr string) string {
+ if !strings.Contains(fstr, ".") {
+ return fstr + ".0"
+ }
+ return fstr
+}
+
+func (enc *Encoder) writeQuoted(s string) {
+ enc.wf("\"%s\"", quotedReplacer.Replace(s))
+}
+
+func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
+ length := rv.Len()
+ enc.wf("[")
+ for i := 0; i < length; i++ {
+ elem := rv.Index(i)
+ enc.eElement(elem)
+ if i != length-1 {
+ enc.wf(", ")
+ }
+ }
+ enc.wf("]")
+}
+
+func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
+ if len(key) == 0 {
+ encPanic(errNoKey)
+ }
+ for i := 0; i < rv.Len(); i++ {
+ trv := rv.Index(i)
+ if isNil(trv) {
+ continue
+ }
+ panicIfInvalidKey(key)
+ enc.newline()
+ enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
+ enc.newline()
+ enc.eMapOrStruct(key, trv)
+ }
+}
+
+func (enc *Encoder) eTable(key Key, rv reflect.Value) {
+ panicIfInvalidKey(key)
+ if len(key) == 1 {
+ // Output an extra newline between top-level tables.
+ // (The newline isn't written if nothing else has been written though.)
+ enc.newline()
+ }
+ if len(key) > 0 {
+ enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
+ enc.newline()
+ }
+ enc.eMapOrStruct(key, rv)
+}
+
+func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
+ switch rv := eindirect(rv); rv.Kind() {
+ case reflect.Map:
+ enc.eMap(key, rv)
+ case reflect.Struct:
+ enc.eStruct(key, rv)
+ default:
+ panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
+ }
+}
+
+func (enc *Encoder) eMap(key Key, rv reflect.Value) {
+ rt := rv.Type()
+ if rt.Key().Kind() != reflect.String {
+ encPanic(errNonString)
+ }
+
+ // Sort keys so that we have deterministic output. And write keys directly
+ // underneath this key first, before writing sub-structs or sub-maps.
+ var mapKeysDirect, mapKeysSub []string
+ for _, mapKey := range rv.MapKeys() {
+ k := mapKey.String()
+ if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
+ mapKeysSub = append(mapKeysSub, k)
+ } else {
+ mapKeysDirect = append(mapKeysDirect, k)
+ }
+ }
+
+ var writeMapKeys = func(mapKeys []string) {
+ sort.Strings(mapKeys)
+ for _, mapKey := range mapKeys {
+ mrv := rv.MapIndex(reflect.ValueOf(mapKey))
+ if isNil(mrv) {
+ // Don't write anything for nil fields.
+ continue
+ }
+ enc.encode(key.add(mapKey), mrv)
+ }
+ }
+ writeMapKeys(mapKeysDirect)
+ writeMapKeys(mapKeysSub)
+}
+
+func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
+ // Write keys for fields directly under this key first, because if we write
+ // a field that creates a new table, then all keys under it will be in that
+ // table (not the one we're writing here).
+ rt := rv.Type()
+ var fieldsDirect, fieldsSub [][]int
+ var addFields func(rt reflect.Type, rv reflect.Value, start []int)
+ addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
+ for i := 0; i < rt.NumField(); i++ {
+ f := rt.Field(i)
+ // skip unexported fields
+ if f.PkgPath != "" && !f.Anonymous {
+ continue
+ }
+ frv := rv.Field(i)
+ if f.Anonymous {
+ t := f.Type
+ switch t.Kind() {
+ case reflect.Struct:
+ // Treat anonymous struct fields with
+ // tag names as though they are not
+ // anonymous, like encoding/json does.
+ if getOptions(f.Tag).name == "" {
+ addFields(t, frv, f.Index)
+ continue
+ }
+ case reflect.Ptr:
+ if t.Elem().Kind() == reflect.Struct &&
+ getOptions(f.Tag).name == "" {
+ if !frv.IsNil() {
+ addFields(t.Elem(), frv.Elem(), f.Index)
+ }
+ continue
+ }
+ // Fall through to the normal field encoding logic below
+ // for non-struct anonymous fields.
+ }
+ }
+
+ if typeIsHash(tomlTypeOfGo(frv)) {
+ fieldsSub = append(fieldsSub, append(start, f.Index...))
+ } else {
+ fieldsDirect = append(fieldsDirect, append(start, f.Index...))
+ }
+ }
+ }
+ addFields(rt, rv, nil)
+
+ var writeFields = func(fields [][]int) {
+ for _, fieldIndex := range fields {
+ sft := rt.FieldByIndex(fieldIndex)
+ sf := rv.FieldByIndex(fieldIndex)
+ if isNil(sf) {
+ // Don't write anything for nil fields.
+ continue
+ }
+
+ opts := getOptions(sft.Tag)
+ if opts.skip {
+ continue
+ }
+ keyName := sft.Name
+ if opts.name != "" {
+ keyName = opts.name
+ }
+ if opts.omitempty && isEmpty(sf) {
+ continue
+ }
+ if opts.omitzero && isZero(sf) {
+ continue
+ }
+
+ enc.encode(key.add(keyName), sf)
+ }
+ }
+ writeFields(fieldsDirect)
+ writeFields(fieldsSub)
+}
+
+// tomlTypeOfGo returns the TOML type of a Go value. The type may be `nil`,
+// which means no concrete TOML type could be found. It is also used to
+// determine whether the types of array elements are mixed (which is
+// forbidden); a nil Go value is illegal as an array element.
+func tomlTypeOfGo(rv reflect.Value) tomlType {
+ if isNil(rv) || !rv.IsValid() {
+ return nil
+ }
+ switch rv.Kind() {
+ case reflect.Bool:
+ return tomlBool
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
+ reflect.Int64,
+ reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
+ reflect.Uint64:
+ return tomlInteger
+ case reflect.Float32, reflect.Float64:
+ return tomlFloat
+ case reflect.Array, reflect.Slice:
+ if typeEqual(tomlHash, tomlArrayType(rv)) {
+ return tomlArrayHash
+ }
+ return tomlArray
+ case reflect.Ptr, reflect.Interface:
+ return tomlTypeOfGo(rv.Elem())
+ case reflect.String:
+ return tomlString
+ case reflect.Map:
+ return tomlHash
+ case reflect.Struct:
+ switch rv.Interface().(type) {
+ case time.Time:
+ return tomlDatetime
+ case TextMarshaler:
+ return tomlString
+ default:
+ return tomlHash
+ }
+ default:
+ panic("unexpected reflect.Kind: " + rv.Kind().String())
+ }
+}
+
+// tomlArrayType returns the element type of a TOML array. The type returned
+// may be nil if it cannot be determined (e.g., a nil slice or a zero-length
+// slice). This function may also panic if it finds a type that cannot be
+// expressed in TOML (such as nil elements, heterogeneous arrays or directly
+// nested arrays of tables).
+func tomlArrayType(rv reflect.Value) tomlType {
+ if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
+ return nil
+ }
+ firstType := tomlTypeOfGo(rv.Index(0))
+ if firstType == nil {
+ encPanic(errArrayNilElement)
+ }
+
+ rvlen := rv.Len()
+ for i := 1; i < rvlen; i++ {
+ elem := rv.Index(i)
+ switch elemType := tomlTypeOfGo(elem); {
+ case elemType == nil:
+ encPanic(errArrayNilElement)
+ case !typeEqual(firstType, elemType):
+ encPanic(errArrayMixedElementTypes)
+ }
+ }
+ // If we have a nested array, then we must make sure that the nested
+ // array contains ONLY primitives.
+ // This checks arbitrarily nested arrays.
+ if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
+ nest := tomlArrayType(eindirect(rv.Index(0)))
+ if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
+ encPanic(errArrayNoTable)
+ }
+ }
+ return firstType
+}
+
+type tagOptions struct {
+ skip bool // "-"
+ name string
+ omitempty bool
+ omitzero bool
+}
+
+func getOptions(tag reflect.StructTag) tagOptions {
+ t := tag.Get("toml")
+ if t == "-" {
+ return tagOptions{skip: true}
+ }
+ var opts tagOptions
+ parts := strings.Split(t, ",")
+ opts.name = parts[0]
+ for _, s := range parts[1:] {
+ switch s {
+ case "omitempty":
+ opts.omitempty = true
+ case "omitzero":
+ opts.omitzero = true
+ }
+ }
+ return opts
+}
+
+func isZero(rv reflect.Value) bool {
+ switch rv.Kind() {
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ return rv.Int() == 0
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+ return rv.Uint() == 0
+ case reflect.Float32, reflect.Float64:
+ return rv.Float() == 0.0
+ }
+ return false
+}
+
+func isEmpty(rv reflect.Value) bool {
+ switch rv.Kind() {
+ case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
+ return rv.Len() == 0
+ case reflect.Bool:
+ return !rv.Bool()
+ }
+ return false
+}
+
+func (enc *Encoder) newline() {
+ if enc.hasWritten {
+ enc.wf("\n")
+ }
+}
+
+func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
+ if len(key) == 0 {
+ encPanic(errNoKey)
+ }
+ panicIfInvalidKey(key)
+ enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
+ enc.eElement(val)
+ enc.newline()
+}
+
+func (enc *Encoder) wf(format string, v ...interface{}) {
+ if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
+ encPanic(err)
+ }
+ enc.hasWritten = true
+}
+
+func (enc *Encoder) indentStr(key Key) string {
+ return strings.Repeat(enc.Indent, len(key)-1)
+}
+
+func encPanic(err error) {
+ panic(tomlEncodeError{err})
+}
+
+func eindirect(v reflect.Value) reflect.Value {
+ switch v.Kind() {
+ case reflect.Ptr, reflect.Interface:
+ return eindirect(v.Elem())
+ default:
+ return v
+ }
+}
+
+func isNil(rv reflect.Value) bool {
+ switch rv.Kind() {
+ case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
+ return rv.IsNil()
+ default:
+ return false
+ }
+}
+
+func panicIfInvalidKey(key Key) {
+ for _, k := range key {
+ if len(k) == 0 {
+ encPanic(e("Key '%s' is not a valid table name. Key names "+
+ "cannot be empty.", key.maybeQuotedAll()))
+ }
+ }
+}
+
+func isValidKeyName(s string) bool {
+ return len(s) != 0
+}
diff --git a/vendor/github.com/BurntSushi/toml/encoding_types.go b/vendor/github.com/BurntSushi/toml/encoding_types.go
new file mode 100644
index 000000000..d36e1dd60
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/encoding_types.go
@@ -0,0 +1,19 @@
+// +build go1.2
+
+package toml
+
+// In order to support Go 1.1, we define our own TextMarshaler and
+// TextUnmarshaler types. For Go 1.2+, we just alias them with the
+// standard library interfaces.
+
+import (
+ "encoding"
+)
+
+// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
+// so that Go 1.1 can be supported.
+type TextMarshaler encoding.TextMarshaler
+
+// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
+// here so that Go 1.1 can be supported.
+type TextUnmarshaler encoding.TextUnmarshaler
diff --git a/vendor/github.com/BurntSushi/toml/encoding_types_1.1.go b/vendor/github.com/BurntSushi/toml/encoding_types_1.1.go
new file mode 100644
index 000000000..e8d503d04
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/encoding_types_1.1.go
@@ -0,0 +1,18 @@
+// +build !go1.2
+
+package toml
+
+// These interfaces were introduced in Go 1.2, so we add them manually when
+// compiling for Go 1.1.
+
+// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
+// so that Go 1.1 can be supported.
+type TextMarshaler interface {
+ MarshalText() (text []byte, err error)
+}
+
+// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
+// here so that Go 1.1 can be supported.
+type TextUnmarshaler interface {
+ UnmarshalText(text []byte) error
+}
diff --git a/vendor/github.com/BurntSushi/toml/lex.go b/vendor/github.com/BurntSushi/toml/lex.go
new file mode 100644
index 000000000..e0a742a88
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/lex.go
@@ -0,0 +1,953 @@
+package toml
+
+import (
+ "fmt"
+ "strings"
+ "unicode"
+ "unicode/utf8"
+)
+
+type itemType int
+
+const (
+ itemError itemType = iota
+ itemNIL // used in the parser to indicate no type
+ itemEOF
+ itemText
+ itemString
+ itemRawString
+ itemMultilineString
+ itemRawMultilineString
+ itemBool
+ itemInteger
+ itemFloat
+ itemDatetime
+ itemArray // the start of an array
+ itemArrayEnd
+ itemTableStart
+ itemTableEnd
+ itemArrayTableStart
+ itemArrayTableEnd
+ itemKeyStart
+ itemCommentStart
+ itemInlineTableStart
+ itemInlineTableEnd
+)
+
+const (
+ eof = 0
+ comma = ','
+ tableStart = '['
+ tableEnd = ']'
+ arrayTableStart = '['
+ arrayTableEnd = ']'
+ tableSep = '.'
+ keySep = '='
+ arrayStart = '['
+ arrayEnd = ']'
+ commentStart = '#'
+ stringStart = '"'
+ stringEnd = '"'
+ rawStringStart = '\''
+ rawStringEnd = '\''
+ inlineTableStart = '{'
+ inlineTableEnd = '}'
+)
+
+type stateFn func(lx *lexer) stateFn
+
+type lexer struct {
+ input string
+ start int
+ pos int
+ line int
+ state stateFn
+ items chan item
+
+ // Allow for backing up up to three runes.
+ // This is necessary because TOML contains 3-rune tokens (""" and ''').
+ prevWidths [3]int
+ nprev int // how many of prevWidths are in use
+ // If we emit an eof, we can still back up, but it is not OK to call
+ // next again.
+ atEOF bool
+
+ // A stack of state functions used to maintain context.
+ // The idea is to reuse parts of the state machine in various places.
+ // For example, values can appear at the top level or within arbitrarily
+ // nested arrays. The last state on the stack is used after a value has
+ // been lexed. Similarly for comments.
+ stack []stateFn
+}
+
+type item struct {
+ typ itemType
+ val string
+ line int
+}
+
+func (lx *lexer) nextItem() item {
+ for {
+ select {
+ case item := <-lx.items:
+ return item
+ default:
+ lx.state = lx.state(lx)
+ }
+ }
+}
+
+func lex(input string) *lexer {
+ lx := &lexer{
+ input: input,
+ state: lexTop,
+ line: 1,
+ items: make(chan item, 10),
+ stack: make([]stateFn, 0, 10),
+ }
+ return lx
+}
+
+func (lx *lexer) push(state stateFn) {
+ lx.stack = append(lx.stack, state)
+}
+
+func (lx *lexer) pop() stateFn {
+ if len(lx.stack) == 0 {
+ return lx.errorf("BUG in lexer: no states to pop")
+ }
+ last := lx.stack[len(lx.stack)-1]
+ lx.stack = lx.stack[0 : len(lx.stack)-1]
+ return last
+}
+
+func (lx *lexer) current() string {
+ return lx.input[lx.start:lx.pos]
+}
+
+func (lx *lexer) emit(typ itemType) {
+ lx.items <- item{typ, lx.current(), lx.line}
+ lx.start = lx.pos
+}
+
+func (lx *lexer) emitTrim(typ itemType) {
+ lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
+ lx.start = lx.pos
+}
+
+func (lx *lexer) next() (r rune) {
+ if lx.atEOF {
+ panic("next called after EOF")
+ }
+ if lx.pos >= len(lx.input) {
+ lx.atEOF = true
+ return eof
+ }
+
+ if lx.input[lx.pos] == '\n' {
+ lx.line++
+ }
+ lx.prevWidths[2] = lx.prevWidths[1]
+ lx.prevWidths[1] = lx.prevWidths[0]
+ if lx.nprev < 3 {
+ lx.nprev++
+ }
+ r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
+ lx.prevWidths[0] = w
+ lx.pos += w
+ return r
+}
+
+// ignore skips over the pending input before this point.
+func (lx *lexer) ignore() {
+ lx.start = lx.pos
+}
+
+// backup steps back one rune. Can be called up to three times between calls
+// to next (see prevWidths).
+func (lx *lexer) backup() {
+ if lx.atEOF {
+ lx.atEOF = false
+ return
+ }
+ if lx.nprev < 1 {
+ panic("backed up too far")
+ }
+ w := lx.prevWidths[0]
+ lx.prevWidths[0] = lx.prevWidths[1]
+ lx.prevWidths[1] = lx.prevWidths[2]
+ lx.nprev--
+ lx.pos -= w
+ if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
+ lx.line--
+ }
+}
+
+// accept consumes the next rune if it's equal to `valid`.
+func (lx *lexer) accept(valid rune) bool {
+ if lx.next() == valid {
+ return true
+ }
+ lx.backup()
+ return false
+}
+
+// peek returns but does not consume the next rune in the input.
+func (lx *lexer) peek() rune {
+ r := lx.next()
+ lx.backup()
+ return r
+}
+
+// skip ignores all input that matches the given predicate.
+func (lx *lexer) skip(pred func(rune) bool) {
+ for {
+ r := lx.next()
+ if pred(r) {
+ continue
+ }
+ lx.backup()
+ lx.ignore()
+ return
+ }
+}
+
+// errorf stops all lexing by emitting an error and returning `nil`.
+// Note that any value that is a character is escaped if it's a special
+// character (newlines, tabs, etc.).
+func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
+ lx.items <- item{
+ itemError,
+ fmt.Sprintf(format, values...),
+ lx.line,
+ }
+ return nil
+}
+
+// lexTop consumes elements at the top level of TOML data.
+func lexTop(lx *lexer) stateFn {
+ r := lx.next()
+ if isWhitespace(r) || isNL(r) {
+ return lexSkip(lx, lexTop)
+ }
+ switch r {
+ case commentStart:
+ lx.push(lexTop)
+ return lexCommentStart
+ case tableStart:
+ return lexTableStart
+ case eof:
+ if lx.pos > lx.start {
+ return lx.errorf("unexpected EOF")
+ }
+ lx.emit(itemEOF)
+ return nil
+ }
+
+ // At this point, the only valid item can be a key, so we back up
+ // and let the key lexer do the rest.
+ lx.backup()
+ lx.push(lexTopEnd)
+ return lexKeyStart
+}
+
+// lexTopEnd is entered whenever a top-level item has been consumed. (A value
+// or a table.) It must see only whitespace, and will turn back to lexTop
+// upon a newline. If it sees EOF, it will quit the lexer successfully.
+func lexTopEnd(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case r == commentStart:
+ // a comment will read to a newline for us.
+ lx.push(lexTop)
+ return lexCommentStart
+ case isWhitespace(r):
+ return lexTopEnd
+ case isNL(r):
+ lx.ignore()
+ return lexTop
+ case r == eof:
+ lx.emit(itemEOF)
+ return nil
+ }
+ return lx.errorf("expected a top-level item to end with a newline, "+
+ "comment, or EOF, but got %q instead", r)
+}
+
+// lexTable lexes the beginning of a table. Namely, it makes sure that
+// it starts with a character other than '.' and ']'.
+// It assumes that '[' has already been consumed.
+// It also handles the case that this is an item in an array of tables.
+// e.g., '[[name]]'.
+func lexTableStart(lx *lexer) stateFn {
+ if lx.peek() == arrayTableStart {
+ lx.next()
+ lx.emit(itemArrayTableStart)
+ lx.push(lexArrayTableEnd)
+ } else {
+ lx.emit(itemTableStart)
+ lx.push(lexTableEnd)
+ }
+ return lexTableNameStart
+}
+
+func lexTableEnd(lx *lexer) stateFn {
+ lx.emit(itemTableEnd)
+ return lexTopEnd
+}
+
+func lexArrayTableEnd(lx *lexer) stateFn {
+ if r := lx.next(); r != arrayTableEnd {
+ return lx.errorf("expected end of table array name delimiter %q, "+
+ "but got %q instead", arrayTableEnd, r)
+ }
+ lx.emit(itemArrayTableEnd)
+ return lexTopEnd
+}
+
+func lexTableNameStart(lx *lexer) stateFn {
+ lx.skip(isWhitespace)
+ switch r := lx.peek(); {
+ case r == tableEnd || r == eof:
+ return lx.errorf("unexpected end of table name " +
+ "(table names cannot be empty)")
+ case r == tableSep:
+ return lx.errorf("unexpected table separator " +
+ "(table names cannot be empty)")
+ case r == stringStart || r == rawStringStart:
+ lx.ignore()
+ lx.push(lexTableNameEnd)
+ return lexValue // reuse string lexing
+ default:
+ return lexBareTableName
+ }
+}
+
+// lexBareTableName lexes the name of a table. It assumes that at least one
+// valid character for the table has already been read.
+func lexBareTableName(lx *lexer) stateFn {
+ r := lx.next()
+ if isBareKeyChar(r) {
+ return lexBareTableName
+ }
+ lx.backup()
+ lx.emit(itemText)
+ return lexTableNameEnd
+}
+
+// lexTableNameEnd reads the end of a piece of a table name, optionally
+// consuming whitespace.
+func lexTableNameEnd(lx *lexer) stateFn {
+ lx.skip(isWhitespace)
+ switch r := lx.next(); {
+ case isWhitespace(r):
+ return lexTableNameEnd
+ case r == tableSep:
+ lx.ignore()
+ return lexTableNameStart
+ case r == tableEnd:
+ return lx.pop()
+ default:
+ return lx.errorf("expected '.' or ']' to end table name, "+
+ "but got %q instead", r)
+ }
+}
+
+// lexKeyStart consumes a key name up until the first non-whitespace character.
+// lexKeyStart will ignore whitespace.
+func lexKeyStart(lx *lexer) stateFn {
+ r := lx.peek()
+ switch {
+ case r == keySep:
+ return lx.errorf("unexpected key separator %q", keySep)
+ case isWhitespace(r) || isNL(r):
+ lx.next()
+ return lexSkip(lx, lexKeyStart)
+ case r == stringStart || r == rawStringStart:
+ lx.ignore()
+ lx.emit(itemKeyStart)
+ lx.push(lexKeyEnd)
+ return lexValue // reuse string lexing
+ default:
+ lx.ignore()
+ lx.emit(itemKeyStart)
+ return lexBareKey
+ }
+}
+
+// lexBareKey consumes the text of a bare key. Assumes that the first character
+// (which is not whitespace) has not yet been consumed.
+func lexBareKey(lx *lexer) stateFn {
+ switch r := lx.next(); {
+ case isBareKeyChar(r):
+ return lexBareKey
+ case isWhitespace(r):
+ lx.backup()
+ lx.emit(itemText)
+ return lexKeyEnd
+ case r == keySep:
+ lx.backup()
+ lx.emit(itemText)
+ return lexKeyEnd
+ default:
+ return lx.errorf("bare keys cannot contain %q", r)
+ }
+}
+
+// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
+// separator).
+func lexKeyEnd(lx *lexer) stateFn {
+ switch r := lx.next(); {
+ case r == keySep:
+ return lexSkip(lx, lexValue)
+ case isWhitespace(r):
+ return lexSkip(lx, lexKeyEnd)
+ default:
+ return lx.errorf("expected key separator %q, but got %q instead",
+ keySep, r)
+ }
+}
+
+// lexValue starts the consumption of a value anywhere a value is expected.
+// lexValue will ignore whitespace.
+// After a value is lexed, the last state on the stack is popped and returned.
+func lexValue(lx *lexer) stateFn {
+ // We allow whitespace to precede a value, but NOT newlines.
+ // In array syntax, the array states are responsible for ignoring newlines.
+ r := lx.next()
+ switch {
+ case isWhitespace(r):
+ return lexSkip(lx, lexValue)
+ case isDigit(r):
+ lx.backup() // avoid an extra state and use the same as above
+ return lexNumberOrDateStart
+ }
+ switch r {
+ case arrayStart:
+ lx.ignore()
+ lx.emit(itemArray)
+ return lexArrayValue
+ case inlineTableStart:
+ lx.ignore()
+ lx.emit(itemInlineTableStart)
+ return lexInlineTableValue
+ case stringStart:
+ if lx.accept(stringStart) {
+ if lx.accept(stringStart) {
+ lx.ignore() // Ignore """
+ return lexMultilineString
+ }
+ lx.backup()
+ }
+ lx.ignore() // ignore the '"'
+ return lexString
+ case rawStringStart:
+ if lx.accept(rawStringStart) {
+ if lx.accept(rawStringStart) {
+ lx.ignore() // Ignore """
+ return lexMultilineRawString
+ }
+ lx.backup()
+ }
+ lx.ignore() // ignore the "'"
+ return lexRawString
+ case '+', '-':
+ return lexNumberStart
+ case '.': // special error case, be kind to users
+ return lx.errorf("floats must start with a digit, not '.'")
+ }
+ if unicode.IsLetter(r) {
+ // Be permissive here; lexBool will give a nice error if the
+ // user wrote something like
+ // x = foo
+ // (i.e. not 'true' or 'false' but is something else word-like.)
+ lx.backup()
+ return lexBool
+ }
+ return lx.errorf("expected value but found %q instead", r)
+}
+
+// lexArrayValue consumes one value in an array. It assumes that '[' or ','
+// have already been consumed. All whitespace and newlines are ignored.
+func lexArrayValue(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case isWhitespace(r) || isNL(r):
+ return lexSkip(lx, lexArrayValue)
+ case r == commentStart:
+ lx.push(lexArrayValue)
+ return lexCommentStart
+ case r == comma:
+ return lx.errorf("unexpected comma")
+ case r == arrayEnd:
+ // NOTE(caleb): The spec isn't clear about whether you can have
+ // a trailing comma or not, so we'll allow it.
+ return lexArrayEnd
+ }
+
+ lx.backup()
+ lx.push(lexArrayValueEnd)
+ return lexValue
+}
+
+// lexArrayValueEnd consumes everything between the end of an array value and
+// the next value (or the end of the array): it ignores whitespace and newlines
+// and expects either a ',' or a ']'.
+func lexArrayValueEnd(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case isWhitespace(r) || isNL(r):
+ return lexSkip(lx, lexArrayValueEnd)
+ case r == commentStart:
+ lx.push(lexArrayValueEnd)
+ return lexCommentStart
+ case r == comma:
+ lx.ignore()
+ return lexArrayValue // move on to the next value
+ case r == arrayEnd:
+ return lexArrayEnd
+ }
+ return lx.errorf(
+ "expected a comma or array terminator %q, but got %q instead",
+ arrayEnd, r,
+ )
+}
+
+// lexArrayEnd finishes the lexing of an array.
+// It assumes that a ']' has just been consumed.
+func lexArrayEnd(lx *lexer) stateFn {
+ lx.ignore()
+ lx.emit(itemArrayEnd)
+ return lx.pop()
+}
+
+// lexInlineTableValue consumes one key/value pair in an inline table.
+// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
+func lexInlineTableValue(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case isWhitespace(r):
+ return lexSkip(lx, lexInlineTableValue)
+ case isNL(r):
+ return lx.errorf("newlines not allowed within inline tables")
+ case r == commentStart:
+ lx.push(lexInlineTableValue)
+ return lexCommentStart
+ case r == comma:
+ return lx.errorf("unexpected comma")
+ case r == inlineTableEnd:
+ return lexInlineTableEnd
+ }
+ lx.backup()
+ lx.push(lexInlineTableValueEnd)
+ return lexKeyStart
+}
+
+// lexInlineTableValueEnd consumes everything between the end of an inline table
+// key/value pair and the next pair (or the end of the table):
+// it ignores whitespace and expects either a ',' or a '}'.
+func lexInlineTableValueEnd(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case isWhitespace(r):
+ return lexSkip(lx, lexInlineTableValueEnd)
+ case isNL(r):
+ return lx.errorf("newlines not allowed within inline tables")
+ case r == commentStart:
+ lx.push(lexInlineTableValueEnd)
+ return lexCommentStart
+ case r == comma:
+ lx.ignore()
+ return lexInlineTableValue
+ case r == inlineTableEnd:
+ return lexInlineTableEnd
+ }
+ return lx.errorf("expected a comma or an inline table terminator %q, "+
+ "but got %q instead", inlineTableEnd, r)
+}
+
+// lexInlineTableEnd finishes the lexing of an inline table.
+// It assumes that a '}' has just been consumed.
+func lexInlineTableEnd(lx *lexer) stateFn {
+ lx.ignore()
+ lx.emit(itemInlineTableEnd)
+ return lx.pop()
+}
+
+// lexString consumes the inner contents of a string. It assumes that the
+// beginning '"' has already been consumed and ignored.
+func lexString(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case r == eof:
+ return lx.errorf("unexpected EOF")
+ case isNL(r):
+ return lx.errorf("strings cannot contain newlines")
+ case r == '\\':
+ lx.push(lexString)
+ return lexStringEscape
+ case r == stringEnd:
+ lx.backup()
+ lx.emit(itemString)
+ lx.next()
+ lx.ignore()
+ return lx.pop()
+ }
+ return lexString
+}
+
+// lexMultilineString consumes the inner contents of a string. It assumes that
+// the beginning '"""' has already been consumed and ignored.
+func lexMultilineString(lx *lexer) stateFn {
+ switch lx.next() {
+ case eof:
+ return lx.errorf("unexpected EOF")
+ case '\\':
+ return lexMultilineStringEscape
+ case stringEnd:
+ if lx.accept(stringEnd) {
+ if lx.accept(stringEnd) {
+ lx.backup()
+ lx.backup()
+ lx.backup()
+ lx.emit(itemMultilineString)
+ lx.next()
+ lx.next()
+ lx.next()
+ lx.ignore()
+ return lx.pop()
+ }
+ lx.backup()
+ }
+ }
+ return lexMultilineString
+}
+
+// lexRawString consumes a raw string. Nothing can be escaped in such a string.
+// It assumes that the beginning "'" has already been consumed and ignored.
+func lexRawString(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case r == eof:
+ return lx.errorf("unexpected EOF")
+ case isNL(r):
+ return lx.errorf("strings cannot contain newlines")
+ case r == rawStringEnd:
+ lx.backup()
+ lx.emit(itemRawString)
+ lx.next()
+ lx.ignore()
+ return lx.pop()
+ }
+ return lexRawString
+}
+
+// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
+// a string. It assumes that the beginning "'''" has already been consumed and
+// ignored.
+func lexMultilineRawString(lx *lexer) stateFn {
+ switch lx.next() {
+ case eof:
+ return lx.errorf("unexpected EOF")
+ case rawStringEnd:
+ if lx.accept(rawStringEnd) {
+ if lx.accept(rawStringEnd) {
+ lx.backup()
+ lx.backup()
+ lx.backup()
+ lx.emit(itemRawMultilineString)
+ lx.next()
+ lx.next()
+ lx.next()
+ lx.ignore()
+ return lx.pop()
+ }
+ lx.backup()
+ }
+ }
+ return lexMultilineRawString
+}
+
+// lexMultilineStringEscape consumes an escaped character. It assumes that the
+// preceding '\\' has already been consumed.
+func lexMultilineStringEscape(lx *lexer) stateFn {
+ // Handle the special case first:
+ if isNL(lx.next()) {
+ return lexMultilineString
+ }
+ lx.backup()
+ lx.push(lexMultilineString)
+ return lexStringEscape(lx)
+}
+
+func lexStringEscape(lx *lexer) stateFn {
+ r := lx.next()
+ switch r {
+ case 'b', 't', 'n', 'f', 'r', '"', '\\':
+ return lx.pop()
+ case 'u':
+ return lexShortUnicodeEscape
+ case 'U':
+ return lexLongUnicodeEscape
+ }
+ return lx.errorf("invalid escape character %q; only the following "+
+ "escape characters are allowed: "+
+ `\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
+}
+
+func lexShortUnicodeEscape(lx *lexer) stateFn {
+ var r rune
+ for i := 0; i < 4; i++ {
+ r = lx.next()
+ if !isHexadecimal(r) {
+ return lx.errorf(`expected four hexadecimal digits after '\u', `+
+ "but got %q instead", lx.current())
+ }
+ }
+ return lx.pop()
+}
+
+func lexLongUnicodeEscape(lx *lexer) stateFn {
+ var r rune
+ for i := 0; i < 8; i++ {
+ r = lx.next()
+ if !isHexadecimal(r) {
+ return lx.errorf(`expected eight hexadecimal digits after '\U', `+
+ "but got %q instead", lx.current())
+ }
+ }
+ return lx.pop()
+}
+
+// lexNumberOrDateStart consumes either an integer, a float, or datetime.
+func lexNumberOrDateStart(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexNumberOrDate
+ }
+ switch r {
+ case '_':
+ return lexNumber
+ case 'e', 'E':
+ return lexFloat
+ case '.':
+ return lx.errorf("floats must start with a digit, not '.'")
+ }
+ return lx.errorf("expected a digit but got %q", r)
+}
+
+// lexNumberOrDate consumes either an integer, float or datetime.
+func lexNumberOrDate(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexNumberOrDate
+ }
+ switch r {
+ case '-':
+ return lexDatetime
+ case '_':
+ return lexNumber
+ case '.', 'e', 'E':
+ return lexFloat
+ }
+
+ lx.backup()
+ lx.emit(itemInteger)
+ return lx.pop()
+}
+
+// lexDatetime consumes a Datetime, to a first approximation.
+// The parser validates that it matches one of the accepted formats.
+func lexDatetime(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexDatetime
+ }
+ switch r {
+ case '-', 'T', ':', '.', 'Z', '+':
+ return lexDatetime
+ }
+
+ lx.backup()
+ lx.emit(itemDatetime)
+ return lx.pop()
+}
+
+// lexNumberStart consumes either an integer or a float. It assumes that a sign
+// has already been read, but that *no* digits have been consumed.
+// lexNumberStart will move to the appropriate integer or float states.
+func lexNumberStart(lx *lexer) stateFn {
+ // We MUST see a digit. Even floats have to start with a digit.
+ r := lx.next()
+ if !isDigit(r) {
+ if r == '.' {
+ return lx.errorf("floats must start with a digit, not '.'")
+ }
+ return lx.errorf("expected a digit but got %q", r)
+ }
+ return lexNumber
+}
+
+// lexNumber consumes an integer or a float after seeing the first digit.
+func lexNumber(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexNumber
+ }
+ switch r {
+ case '_':
+ return lexNumber
+ case '.', 'e', 'E':
+ return lexFloat
+ }
+
+ lx.backup()
+ lx.emit(itemInteger)
+ return lx.pop()
+}
+
+// lexFloat consumes the elements of a float. It allows any sequence of
+// float-like characters, so floats emitted by the lexer are only a first
+// approximation and must be validated by the parser.
+func lexFloat(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexFloat
+ }
+ switch r {
+ case '_', '.', '-', '+', 'e', 'E':
+ return lexFloat
+ }
+
+ lx.backup()
+ lx.emit(itemFloat)
+ return lx.pop()
+}
+
+// lexBool consumes a bool string: 'true' or 'false'.
+func lexBool(lx *lexer) stateFn {
+ var rs []rune
+ for {
+ r := lx.next()
+ if !unicode.IsLetter(r) {
+ lx.backup()
+ break
+ }
+ rs = append(rs, r)
+ }
+ s := string(rs)
+ switch s {
+ case "true", "false":
+ lx.emit(itemBool)
+ return lx.pop()
+ }
+ return lx.errorf("expected value but found %q instead", s)
+}
+
+// lexCommentStart begins the lexing of a comment. It will emit
+// itemCommentStart and consume no characters, passing control to lexComment.
+func lexCommentStart(lx *lexer) stateFn {
+ lx.ignore()
+ lx.emit(itemCommentStart)
+ return lexComment
+}
+
+// lexComment lexes an entire comment. It assumes that '#' has been consumed.
+// It will consume *up to* the first newline character, and pass control
+// back to the last state on the stack.
+func lexComment(lx *lexer) stateFn {
+ r := lx.peek()
+ if isNL(r) || r == eof {
+ lx.emit(itemText)
+ return lx.pop()
+ }
+ lx.next()
+ return lexComment
+}
+
+// lexSkip ignores all slurped input and moves on to the next state.
+func lexSkip(lx *lexer, nextState stateFn) stateFn {
+ return func(lx *lexer) stateFn {
+ lx.ignore()
+ return nextState
+ }
+}
+
+// isWhitespace returns true if `r` is a whitespace character according
+// to the spec.
+func isWhitespace(r rune) bool {
+ return r == '\t' || r == ' '
+}
+
+func isNL(r rune) bool {
+ return r == '\n' || r == '\r'
+}
+
+func isDigit(r rune) bool {
+ return r >= '0' && r <= '9'
+}
+
+func isHexadecimal(r rune) bool {
+ return (r >= '0' && r <= '9') ||
+ (r >= 'a' && r <= 'f') ||
+ (r >= 'A' && r <= 'F')
+}
+
+func isBareKeyChar(r rune) bool {
+ return (r >= 'A' && r <= 'Z') ||
+ (r >= 'a' && r <= 'z') ||
+ (r >= '0' && r <= '9') ||
+ r == '_' ||
+ r == '-'
+}
+
+func (itype itemType) String() string {
+ switch itype {
+ case itemError:
+ return "Error"
+ case itemNIL:
+ return "NIL"
+ case itemEOF:
+ return "EOF"
+ case itemText:
+ return "Text"
+ case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
+ return "String"
+ case itemBool:
+ return "Bool"
+ case itemInteger:
+ return "Integer"
+ case itemFloat:
+ return "Float"
+ case itemDatetime:
+ return "DateTime"
+ case itemTableStart:
+ return "TableStart"
+ case itemTableEnd:
+ return "TableEnd"
+ case itemKeyStart:
+ return "KeyStart"
+ case itemArray:
+ return "Array"
+ case itemArrayEnd:
+ return "ArrayEnd"
+ case itemCommentStart:
+ return "CommentStart"
+ }
+ panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
+}
+
+func (item item) String() string {
+ return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
+}
diff --git a/vendor/github.com/BurntSushi/toml/parse.go b/vendor/github.com/BurntSushi/toml/parse.go
new file mode 100644
index 000000000..50869ef92
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/parse.go
@@ -0,0 +1,592 @@
+package toml
+
+import (
+ "fmt"
+ "strconv"
+ "strings"
+ "time"
+ "unicode"
+ "unicode/utf8"
+)
+
+type parser struct {
+ mapping map[string]interface{}
+ types map[string]tomlType
+ lx *lexer
+
+ // A list of keys in the order that they appear in the TOML data.
+ ordered []Key
+
+ // the full key for the current hash in scope
+ context Key
+
+ // the base key name for everything except hashes
+ currentKey string
+
+ // rough approximation of line number
+ approxLine int
+
+ // A map of 'key.group.names' to whether they were created implicitly.
+ implicits map[string]bool
+}
+
+type parseError string
+
+func (pe parseError) Error() string {
+ return string(pe)
+}
+
+func parse(data string) (p *parser, err error) {
+ defer func() {
+ if r := recover(); r != nil {
+ var ok bool
+ if err, ok = r.(parseError); ok {
+ return
+ }
+ panic(r)
+ }
+ }()
+
+ p = &parser{
+ mapping: make(map[string]interface{}),
+ types: make(map[string]tomlType),
+ lx: lex(data),
+ ordered: make([]Key, 0),
+ implicits: make(map[string]bool),
+ }
+ for {
+ item := p.next()
+ if item.typ == itemEOF {
+ break
+ }
+ p.topLevel(item)
+ }
+
+ return p, nil
+}
+
+func (p *parser) panicf(format string, v ...interface{}) {
+ msg := fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
+ p.approxLine, p.current(), fmt.Sprintf(format, v...))
+ panic(parseError(msg))
+}
+
+func (p *parser) next() item {
+ it := p.lx.nextItem()
+ if it.typ == itemError {
+ p.panicf("%s", it.val)
+ }
+ return it
+}
+
+func (p *parser) bug(format string, v ...interface{}) {
+ panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
+}
+
+func (p *parser) expect(typ itemType) item {
+ it := p.next()
+ p.assertEqual(typ, it.typ)
+ return it
+}
+
+func (p *parser) assertEqual(expected, got itemType) {
+ if expected != got {
+ p.bug("Expected '%s' but got '%s'.", expected, got)
+ }
+}
+
+func (p *parser) topLevel(item item) {
+ switch item.typ {
+ case itemCommentStart:
+ p.approxLine = item.line
+ p.expect(itemText)
+ case itemTableStart:
+ kg := p.next()
+ p.approxLine = kg.line
+
+ var key Key
+ for ; kg.typ != itemTableEnd && kg.typ != itemEOF; kg = p.next() {
+ key = append(key, p.keyString(kg))
+ }
+ p.assertEqual(itemTableEnd, kg.typ)
+
+ p.establishContext(key, false)
+ p.setType("", tomlHash)
+ p.ordered = append(p.ordered, key)
+ case itemArrayTableStart:
+ kg := p.next()
+ p.approxLine = kg.line
+
+ var key Key
+ for ; kg.typ != itemArrayTableEnd && kg.typ != itemEOF; kg = p.next() {
+ key = append(key, p.keyString(kg))
+ }
+ p.assertEqual(itemArrayTableEnd, kg.typ)
+
+ p.establishContext(key, true)
+ p.setType("", tomlArrayHash)
+ p.ordered = append(p.ordered, key)
+ case itemKeyStart:
+ kname := p.next()
+ p.approxLine = kname.line
+ p.currentKey = p.keyString(kname)
+
+ val, typ := p.value(p.next())
+ p.setValue(p.currentKey, val)
+ p.setType(p.currentKey, typ)
+ p.ordered = append(p.ordered, p.context.add(p.currentKey))
+ p.currentKey = ""
+ default:
+ p.bug("Unexpected type at top level: %s", item.typ)
+ }
+}
+
+// Gets a string for a key (or part of a key in a table name).
+func (p *parser) keyString(it item) string {
+ switch it.typ {
+ case itemText:
+ return it.val
+ case itemString, itemMultilineString,
+ itemRawString, itemRawMultilineString:
+ s, _ := p.value(it)
+ return s.(string)
+ default:
+ p.bug("Unexpected key type: %s", it.typ)
+ panic("unreachable")
+ }
+}
+
+// value translates an expected value from the lexer into a Go value wrapped
+// as an empty interface.
+func (p *parser) value(it item) (interface{}, tomlType) {
+ switch it.typ {
+ case itemString:
+ return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
+ case itemMultilineString:
+ trimmed := stripFirstNewline(stripEscapedWhitespace(it.val))
+ return p.replaceEscapes(trimmed), p.typeOfPrimitive(it)
+ case itemRawString:
+ return it.val, p.typeOfPrimitive(it)
+ case itemRawMultilineString:
+ return stripFirstNewline(it.val), p.typeOfPrimitive(it)
+ case itemBool:
+ switch it.val {
+ case "true":
+ return true, p.typeOfPrimitive(it)
+ case "false":
+ return false, p.typeOfPrimitive(it)
+ }
+ p.bug("Expected boolean value, but got '%s'.", it.val)
+ case itemInteger:
+ if !numUnderscoresOK(it.val) {
+ p.panicf("Invalid integer %q: underscores must be surrounded by digits",
+ it.val)
+ }
+ val := strings.Replace(it.val, "_", "", -1)
+ num, err := strconv.ParseInt(val, 10, 64)
+ if err != nil {
+ // Distinguish integer values. Normally, it'd be a bug if the lexer
+ // provides an invalid integer, but it's possible that the number is
+ // out of range of valid values (which the lexer cannot determine).
+ // So mark the former as a bug but the latter as a legitimate user
+ // error.
+ if e, ok := err.(*strconv.NumError); ok &&
+ e.Err == strconv.ErrRange {
+
+ p.panicf("Integer '%s' is out of the range of 64-bit "+
+ "signed integers.", it.val)
+ } else {
+ p.bug("Expected integer value, but got '%s'.", it.val)
+ }
+ }
+ return num, p.typeOfPrimitive(it)
+ case itemFloat:
+ parts := strings.FieldsFunc(it.val, func(r rune) bool {
+ switch r {
+ case '.', 'e', 'E':
+ return true
+ }
+ return false
+ })
+ for _, part := range parts {
+ if !numUnderscoresOK(part) {
+ p.panicf("Invalid float %q: underscores must be "+
+ "surrounded by digits", it.val)
+ }
+ }
+ if !numPeriodsOK(it.val) {
+ // As a special case, numbers like '123.' or '1.e2',
+ // which are valid as far as Go/strconv are concerned,
+ // must be rejected because TOML says that a fractional
+ // part consists of '.' followed by 1+ digits.
+ p.panicf("Invalid float %q: '.' must be followed "+
+ "by one or more digits", it.val)
+ }
+ val := strings.Replace(it.val, "_", "", -1)
+ num, err := strconv.ParseFloat(val, 64)
+ if err != nil {
+ if e, ok := err.(*strconv.NumError); ok &&
+ e.Err == strconv.ErrRange {
+
+ p.panicf("Float '%s' is out of the range of 64-bit "+
+ "IEEE-754 floating-point numbers.", it.val)
+ } else {
+ p.panicf("Invalid float value: %q", it.val)
+ }
+ }
+ return num, p.typeOfPrimitive(it)
+ case itemDatetime:
+ var t time.Time
+ var ok bool
+ var err error
+ for _, format := range []string{
+ "2006-01-02T15:04:05Z07:00",
+ "2006-01-02T15:04:05",
+ "2006-01-02",
+ } {
+ t, err = time.ParseInLocation(format, it.val, time.Local)
+ if err == nil {
+ ok = true
+ break
+ }
+ }
+ if !ok {
+ p.panicf("Invalid TOML Datetime: %q.", it.val)
+ }
+ return t, p.typeOfPrimitive(it)
+ case itemArray:
+ array := make([]interface{}, 0)
+ types := make([]tomlType, 0)
+
+ for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
+ if it.typ == itemCommentStart {
+ p.expect(itemText)
+ continue
+ }
+
+ val, typ := p.value(it)
+ array = append(array, val)
+ types = append(types, typ)
+ }
+ return array, p.typeOfArray(types)
+ case itemInlineTableStart:
+ var (
+ hash = make(map[string]interface{})
+ outerContext = p.context
+ outerKey = p.currentKey
+ )
+
+ p.context = append(p.context, p.currentKey)
+ p.currentKey = ""
+ for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
+ if it.typ == itemCommentStart {
+ p.expect(itemText)
+ continue
+ }
+ if it.typ != itemKeyStart {
+ p.bug("Expected key start but instead found %q, around line %d",
+ it.val, p.approxLine)
+ }
+
+ // retrieve key
+ k := p.next()
+ p.approxLine = k.line
+ kname := p.keyString(k)
+
+ // retrieve value
+ p.currentKey = kname
+ val, typ := p.value(p.next())
+ // make sure we keep metadata up to date
+ p.setType(kname, typ)
+ p.ordered = append(p.ordered, p.context.add(p.currentKey))
+ hash[kname] = val
+ }
+ p.context = outerContext
+ p.currentKey = outerKey
+ return hash, tomlHash
+ }
+ p.bug("Unexpected value type: %s", it.typ)
+ panic("unreachable")
+}
+
+// numUnderscoresOK checks whether each underscore in s is surrounded by
+// characters that are not underscores.
+func numUnderscoresOK(s string) bool {
+ accept := false
+ for _, r := range s {
+ if r == '_' {
+ if !accept {
+ return false
+ }
+ accept = false
+ continue
+ }
+ accept = true
+ }
+ return accept
+}
+
+// numPeriodsOK checks whether every period in s is followed by a digit.
+func numPeriodsOK(s string) bool {
+ period := false
+ for _, r := range s {
+ if period && !isDigit(r) {
+ return false
+ }
+ period = r == '.'
+ }
+ return !period
+}
+
+// establishContext sets the current context of the parser,
+// where the context is either a hash or an array of hashes. Which one is
+// set depends on the value of the `array` parameter.
+//
+// Establishing the context also makes sure that the key isn't a duplicate, and
+// will create implicit hashes automatically.
+func (p *parser) establishContext(key Key, array bool) {
+ var ok bool
+
+ // Always start at the top level and drill down for our context.
+ hashContext := p.mapping
+ keyContext := make(Key, 0)
+
+ // We only need implicit hashes for key[0:-1]
+ for _, k := range key[0 : len(key)-1] {
+ _, ok = hashContext[k]
+ keyContext = append(keyContext, k)
+
+ // No key? Make an implicit hash and move on.
+ if !ok {
+ p.addImplicit(keyContext)
+ hashContext[k] = make(map[string]interface{})
+ }
+
+ // If the hash context is actually an array of tables, then set
+ // the hash context to the last element in that array.
+ //
+ // Otherwise, it better be a table, since this MUST be a key group (by
+ // virtue of it not being the last element in a key).
+ switch t := hashContext[k].(type) {
+ case []map[string]interface{}:
+ hashContext = t[len(t)-1]
+ case map[string]interface{}:
+ hashContext = t
+ default:
+ p.panicf("Key '%s' was already created as a hash.", keyContext)
+ }
+ }
+
+ p.context = keyContext
+ if array {
+ // If this is the first element for this array, then allocate a new
+ // list of tables for it.
+ k := key[len(key)-1]
+ if _, ok := hashContext[k]; !ok {
+ hashContext[k] = make([]map[string]interface{}, 0, 5)
+ }
+
+ // Add a new table. But make sure the key hasn't already been used
+ // for something else.
+ if hash, ok := hashContext[k].([]map[string]interface{}); ok {
+ hashContext[k] = append(hash, make(map[string]interface{}))
+ } else {
+ p.panicf("Key '%s' was already created and cannot be used as "+
+ "an array.", keyContext)
+ }
+ } else {
+ p.setValue(key[len(key)-1], make(map[string]interface{}))
+ }
+ p.context = append(p.context, key[len(key)-1])
+}
+
+// setValue sets the given key to the given value in the current context.
+// It will make sure that the key hasn't already been defined, accounting
+// for implicit key groups.
+func (p *parser) setValue(key string, value interface{}) {
+ var tmpHash interface{}
+ var ok bool
+
+ hash := p.mapping
+ keyContext := make(Key, 0)
+ for _, k := range p.context {
+ keyContext = append(keyContext, k)
+ if tmpHash, ok = hash[k]; !ok {
+ p.bug("Context for key '%s' has not been established.", keyContext)
+ }
+ switch t := tmpHash.(type) {
+ case []map[string]interface{}:
+ // The context is a table of hashes. Pick the most recent table
+ // defined as the current hash.
+ hash = t[len(t)-1]
+ case map[string]interface{}:
+ hash = t
+ default:
+ p.bug("Expected hash to have type 'map[string]interface{}', but "+
+ "it has '%T' instead.", tmpHash)
+ }
+ }
+ keyContext = append(keyContext, key)
+
+ if _, ok := hash[key]; ok {
+ // Typically, if the given key has already been set, then we have
+ // to raise an error since duplicate keys are disallowed. However,
+ // it's possible that a key was previously defined implicitly. In this
+ // case, it is allowed to be redefined concretely. (See the
+ // `tests/valid/implicit-and-explicit-after.toml` test in `toml-test`.)
+ //
+ // But we have to make sure to stop marking it as an implicit. (So that
+ // another redefinition provokes an error.)
+ //
+ // Note that since it has already been defined (as a hash), we don't
+ // want to overwrite it. So our business is done.
+ if p.isImplicit(keyContext) {
+ p.removeImplicit(keyContext)
+ return
+ }
+
+ // Otherwise, we have a concrete key trying to override a previous
+ // key, which is *always* wrong.
+ p.panicf("Key '%s' has already been defined.", keyContext)
+ }
+ hash[key] = value
+}
+
+// setType sets the type of a particular value at a given key.
+// It should be called immediately AFTER setValue.
+//
+// Note that if `key` is empty, then the type given will be applied to the
+// current context (which is either a table or an array of tables).
+func (p *parser) setType(key string, typ tomlType) {
+ keyContext := make(Key, 0, len(p.context)+1)
+ for _, k := range p.context {
+ keyContext = append(keyContext, k)
+ }
+ if len(key) > 0 { // allow type setting for hashes
+ keyContext = append(keyContext, key)
+ }
+ p.types[keyContext.String()] = typ
+}
+
+// addImplicit sets the given Key as having been created implicitly.
+func (p *parser) addImplicit(key Key) {
+ p.implicits[key.String()] = true
+}
+
+// removeImplicit stops tagging the given key as having been implicitly
+// created.
+func (p *parser) removeImplicit(key Key) {
+ p.implicits[key.String()] = false
+}
+
+// isImplicit returns true if the key group pointed to by the key was created
+// implicitly.
+func (p *parser) isImplicit(key Key) bool {
+ return p.implicits[key.String()]
+}
+
+// current returns the full key name of the current context.
+func (p *parser) current() string {
+ if len(p.currentKey) == 0 {
+ return p.context.String()
+ }
+ if len(p.context) == 0 {
+ return p.currentKey
+ }
+ return fmt.Sprintf("%s.%s", p.context, p.currentKey)
+}
+
+func stripFirstNewline(s string) string {
+ if len(s) == 0 || s[0] != '\n' {
+ return s
+ }
+ return s[1:]
+}
+
+func stripEscapedWhitespace(s string) string {
+ esc := strings.Split(s, "\\\n")
+ if len(esc) > 1 {
+ for i := 1; i < len(esc); i++ {
+ esc[i] = strings.TrimLeftFunc(esc[i], unicode.IsSpace)
+ }
+ }
+ return strings.Join(esc, "")
+}
+
+func (p *parser) replaceEscapes(str string) string {
+ var replaced []rune
+ s := []byte(str)
+ r := 0
+ for r < len(s) {
+ if s[r] != '\\' {
+ c, size := utf8.DecodeRune(s[r:])
+ r += size
+ replaced = append(replaced, c)
+ continue
+ }
+ r += 1
+ if r >= len(s) {
+ p.bug("Escape sequence at end of string.")
+ return ""
+ }
+ switch s[r] {
+ default:
+ p.bug("Expected valid escape code after \\, but got %q.", s[r])
+ return ""
+ case 'b':
+ replaced = append(replaced, rune(0x0008))
+ r += 1
+ case 't':
+ replaced = append(replaced, rune(0x0009))
+ r += 1
+ case 'n':
+ replaced = append(replaced, rune(0x000A))
+ r += 1
+ case 'f':
+ replaced = append(replaced, rune(0x000C))
+ r += 1
+ case 'r':
+ replaced = append(replaced, rune(0x000D))
+ r += 1
+ case '"':
+ replaced = append(replaced, rune(0x0022))
+ r += 1
+ case '\\':
+ replaced = append(replaced, rune(0x005C))
+ r += 1
+ case 'u':
+ // At this point, we know we have a Unicode escape of the form
+ // `uXXXX` at [r, r+5). (Because the lexer guarantees this
+ // for us.)
+ escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
+ replaced = append(replaced, escaped)
+ r += 5
+ case 'U':
+ // At this point, we know we have a Unicode escape of the form
+ // `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
+ // for us.)
+ escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
+ replaced = append(replaced, escaped)
+ r += 9
+ }
+ }
+ return string(replaced)
+}
+
+func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
+ s := string(bs)
+ hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
+ if err != nil {
+ p.bug("Could not parse '%s' as a hexadecimal number, but the "+
+ "lexer claims it's OK: %s", s, err)
+ }
+ if !utf8.ValidRune(rune(hex)) {
+ p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
+ }
+ return rune(hex)
+}
+
+func isStringType(ty itemType) bool {
+ return ty == itemString || ty == itemMultilineString ||
+ ty == itemRawString || ty == itemRawMultilineString
+}
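The `asciiEscapeToUnicode` path above can be illustrated in isolation. A minimal sketch, assuming a hypothetical standalone helper (`decodeUnicodeEscape` is not part of the package); it parses the hex digits that follow `\u` or `\U` and rejects values that are not legal runes, such as surrogates:

```go
package main

import (
	"fmt"
	"strconv"
	"unicode/utf8"
)

// decodeUnicodeEscape parses the hex digits following \u or \U and
// validates that the result is a legal rune, as asciiEscapeToUnicode does.
// strconv.ParseUint with base 16 accepts both upper- and lowercase digits.
func decodeUnicodeEscape(hexDigits string) (rune, error) {
	n, err := strconv.ParseUint(hexDigits, 16, 32)
	if err != nil {
		return 0, err
	}
	if !utf8.ValidRune(rune(n)) {
		return 0, fmt.Errorf("\\u%s is not a valid rune", hexDigits)
	}
	return rune(n), nil
}

func main() {
	r, _ := decodeUnicodeEscape("00E9")
	fmt.Printf("%c\n", r) // é
}
```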
diff --git a/vendor/github.com/BurntSushi/toml/session.vim b/vendor/github.com/BurntSushi/toml/session.vim
new file mode 100644
index 000000000..562164be0
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/session.vim
@@ -0,0 +1 @@
+au BufWritePost *.go silent!make tags > /dev/null 2>&1
diff --git a/vendor/github.com/BurntSushi/toml/type_check.go b/vendor/github.com/BurntSushi/toml/type_check.go
new file mode 100644
index 000000000..c73f8afc1
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/type_check.go
@@ -0,0 +1,91 @@
+package toml
+
+// tomlType represents any Go type that corresponds to a TOML type.
+// While the first draft of the TOML spec has a simplistic type system that
+// probably doesn't need this level of sophistication, we seem to be moving
+// toward adding real composite types.
+type tomlType interface {
+ typeString() string
+}
+
+// typeEqual accepts any two types and returns true if they are equal.
+func typeEqual(t1, t2 tomlType) bool {
+ if t1 == nil || t2 == nil {
+ return false
+ }
+ return t1.typeString() == t2.typeString()
+}
+
+func typeIsHash(t tomlType) bool {
+ return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
+}
+
+type tomlBaseType string
+
+func (btype tomlBaseType) typeString() string {
+ return string(btype)
+}
+
+func (btype tomlBaseType) String() string {
+ return btype.typeString()
+}
+
+var (
+ tomlInteger tomlBaseType = "Integer"
+ tomlFloat tomlBaseType = "Float"
+ tomlDatetime tomlBaseType = "Datetime"
+ tomlString tomlBaseType = "String"
+ tomlBool tomlBaseType = "Bool"
+ tomlArray tomlBaseType = "Array"
+ tomlHash tomlBaseType = "Hash"
+ tomlArrayHash tomlBaseType = "ArrayHash"
+)
+
+// typeOfPrimitive returns a tomlType of any primitive value in TOML.
+// Primitive values are: Integer, Float, Datetime, String and Bool.
+//
+// Passing a lexer item other than the following will cause a BUG message
+// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
+func (p *parser) typeOfPrimitive(lexItem item) tomlType {
+ switch lexItem.typ {
+ case itemInteger:
+ return tomlInteger
+ case itemFloat:
+ return tomlFloat
+ case itemDatetime:
+ return tomlDatetime
+ case itemString:
+ return tomlString
+ case itemMultilineString:
+ return tomlString
+ case itemRawString:
+ return tomlString
+ case itemRawMultilineString:
+ return tomlString
+ case itemBool:
+ return tomlBool
+ }
+ p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
+ panic("unreachable")
+}
+
+// typeOfArray returns a tomlType for an array given a list of types of its
+// values.
+//
+// In the current spec, if an array is homogeneous, then its type is always
+// "Array". If the array is not homogeneous, an error is generated.
+func (p *parser) typeOfArray(types []tomlType) tomlType {
+ // Empty arrays are cool.
+ if len(types) == 0 {
+ return tomlArray
+ }
+
+ theType := types[0]
+ for _, t := range types[1:] {
+ if !typeEqual(theType, t) {
+ p.panicf("Array contains values of type '%s' and '%s', but "+
+ "arrays must be homogeneous.", theType, t)
+ }
+ }
+ return tomlArray
+}
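The homogeneity rule that `typeOfArray` enforces can be sketched on its own. Note this vendored copy predates TOML 1.0, which allows mixed-type arrays; the `homogeneous` helper below is illustrative, not part of the package:

```go
package main

import "fmt"

// homogeneous reports whether every type string matches the first one,
// the same check typeOfArray performs before returning tomlArray.
func homogeneous(types []string) bool {
	if len(types) == 0 {
		return true // empty arrays are always valid
	}
	for _, t := range types[1:] {
		if t != types[0] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(homogeneous([]string{"Integer", "Integer"})) // true
	fmt.Println(homogeneous([]string{"Integer", "String"}))  // false
}
```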
diff --git a/vendor/github.com/BurntSushi/toml/type_fields.go b/vendor/github.com/BurntSushi/toml/type_fields.go
new file mode 100644
index 000000000..608997c22
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/type_fields.go
@@ -0,0 +1,242 @@
+package toml
+
+// Struct field handling is adapted from code in encoding/json:
+//
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the Go distribution.
+
+import (
+ "reflect"
+ "sort"
+ "sync"
+)
+
+// A field represents a single field found in a struct.
+type field struct {
+ name string // the name of the field (`toml` tag included)
+ tag bool // whether field has a `toml` tag
+ index []int // represents the depth of an anonymous field
+ typ reflect.Type // the type of the field
+}
+
+// byName sorts field by name, breaking ties with depth,
+// then breaking ties with "name came from toml tag", then
+// breaking ties with index sequence.
+type byName []field
+
+func (x byName) Len() int { return len(x) }
+
+func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
+
+func (x byName) Less(i, j int) bool {
+ if x[i].name != x[j].name {
+ return x[i].name < x[j].name
+ }
+ if len(x[i].index) != len(x[j].index) {
+ return len(x[i].index) < len(x[j].index)
+ }
+ if x[i].tag != x[j].tag {
+ return x[i].tag
+ }
+ return byIndex(x).Less(i, j)
+}
+
+// byIndex sorts field by index sequence.
+type byIndex []field
+
+func (x byIndex) Len() int { return len(x) }
+
+func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
+
+func (x byIndex) Less(i, j int) bool {
+ for k, xik := range x[i].index {
+ if k >= len(x[j].index) {
+ return false
+ }
+ if xik != x[j].index[k] {
+ return xik < x[j].index[k]
+ }
+ }
+ return len(x[i].index) < len(x[j].index)
+}
+
+// typeFields returns a list of fields that TOML should recognize for the given
+// type. The algorithm is breadth-first search over the set of structs to
+// include - the top struct and then any reachable anonymous structs.
+func typeFields(t reflect.Type) []field {
+ // Anonymous fields to explore at the current level and the next.
+ current := []field{}
+ next := []field{{typ: t}}
+
+ // Count of queued names for current level and the next.
+ count := map[reflect.Type]int{}
+ nextCount := map[reflect.Type]int{}
+
+ // Types already visited at an earlier level.
+ visited := map[reflect.Type]bool{}
+
+ // Fields found.
+ var fields []field
+
+ for len(next) > 0 {
+ current, next = next, current[:0]
+ count, nextCount = nextCount, map[reflect.Type]int{}
+
+ for _, f := range current {
+ if visited[f.typ] {
+ continue
+ }
+ visited[f.typ] = true
+
+ // Scan f.typ for fields to include.
+ for i := 0; i < f.typ.NumField(); i++ {
+ sf := f.typ.Field(i)
+ if sf.PkgPath != "" && !sf.Anonymous { // unexported
+ continue
+ }
+ opts := getOptions(sf.Tag)
+ if opts.skip {
+ continue
+ }
+ index := make([]int, len(f.index)+1)
+ copy(index, f.index)
+ index[len(f.index)] = i
+
+ ft := sf.Type
+ if ft.Name() == "" && ft.Kind() == reflect.Ptr {
+ // Follow pointer.
+ ft = ft.Elem()
+ }
+
+ // Record found field and index sequence.
+ if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
+ tagged := opts.name != ""
+ name := opts.name
+ if name == "" {
+ name = sf.Name
+ }
+ fields = append(fields, field{name, tagged, index, ft})
+ if count[f.typ] > 1 {
+ // If there were multiple instances, add a second,
+ // so that the annihilation code will see a duplicate.
+ // It only cares about the distinction between 1 or 2,
+ // so don't bother generating any more copies.
+ fields = append(fields, fields[len(fields)-1])
+ }
+ continue
+ }
+
+ // Record new anonymous struct to explore in next round.
+ nextCount[ft]++
+ if nextCount[ft] == 1 {
+ f := field{name: ft.Name(), index: index, typ: ft}
+ next = append(next, f)
+ }
+ }
+ }
+ }
+
+ sort.Sort(byName(fields))
+
+ // Delete all fields that are hidden by the Go rules for embedded fields,
+ // except that fields with TOML tags are promoted.
+
+ // The fields are sorted in primary order of name, secondary order
+ // of field index length. Loop over names; for each name, delete
+ // hidden fields by choosing the one dominant field that survives.
+ out := fields[:0]
+ for advance, i := 0, 0; i < len(fields); i += advance {
+ // One iteration per name.
+ // Find the sequence of fields with the name of this first field.
+ fi := fields[i]
+ name := fi.name
+ for advance = 1; i+advance < len(fields); advance++ {
+ fj := fields[i+advance]
+ if fj.name != name {
+ break
+ }
+ }
+ if advance == 1 { // Only one field with this name
+ out = append(out, fi)
+ continue
+ }
+ dominant, ok := dominantField(fields[i : i+advance])
+ if ok {
+ out = append(out, dominant)
+ }
+ }
+
+ fields = out
+ sort.Sort(byIndex(fields))
+
+ return fields
+}
+
+// dominantField looks through the fields, all of which are known to
+// have the same name, to find the single field that dominates the
+// others using Go's embedding rules, modified by the presence of
+// TOML tags. If there are multiple top-level fields, the boolean
+// will be false: This condition is an error in Go and we skip all
+// the fields.
+func dominantField(fields []field) (field, bool) {
+ // The fields are sorted in increasing index-length order. The winner
+ // must therefore be one with the shortest index length. Drop all
+ // longer entries, which is easy: just truncate the slice.
+ length := len(fields[0].index)
+ tagged := -1 // Index of first tagged field.
+ for i, f := range fields {
+ if len(f.index) > length {
+ fields = fields[:i]
+ break
+ }
+ if f.tag {
+ if tagged >= 0 {
+ // Multiple tagged fields at the same level: conflict.
+ // Return no field.
+ return field{}, false
+ }
+ tagged = i
+ }
+ }
+ if tagged >= 0 {
+ return fields[tagged], true
+ }
+ // All remaining fields have the same length. If there's more than one,
+ // we have a conflict (two fields named "X" at the same level) and we
+ // return no field.
+ if len(fields) > 1 {
+ return field{}, false
+ }
+ return fields[0], true
+}
+
+var fieldCache struct {
+ sync.RWMutex
+ m map[reflect.Type][]field
+}
+
+// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
+func cachedTypeFields(t reflect.Type) []field {
+ fieldCache.RLock()
+ f := fieldCache.m[t]
+ fieldCache.RUnlock()
+ if f != nil {
+ return f
+ }
+
+ // Compute fields without lock.
+ // Might duplicate effort but won't hold other computations back.
+ f = typeFields(t)
+ if f == nil {
+ f = []field{}
+ }
+
+ fieldCache.Lock()
+ if fieldCache.m == nil {
+ fieldCache.m = map[reflect.Type][]field{}
+ }
+ fieldCache.m[t] = f
+ fieldCache.Unlock()
+ return f
+}
diff --git a/vendor/github.com/MakeNowJust/heredoc/LICENSE b/vendor/github.com/MakeNowJust/heredoc/LICENSE
new file mode 100644
index 000000000..6d0eb9d5d
--- /dev/null
+++ b/vendor/github.com/MakeNowJust/heredoc/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2014-2019 TSUYUSATO Kitsune
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/vendor/github.com/MakeNowJust/heredoc/README.md b/vendor/github.com/MakeNowJust/heredoc/README.md
new file mode 100644
index 000000000..e9924d297
--- /dev/null
+++ b/vendor/github.com/MakeNowJust/heredoc/README.md
@@ -0,0 +1,52 @@
+# heredoc
+
+[![Build Status](https://circleci.com/gh/MakeNowJust/heredoc.svg?style=svg)](https://circleci.com/gh/MakeNowJust/heredoc) [![GoDoc](https://godoc.org/github.com/MakeNowJust/heredoc?status.svg)](https://godoc.org/github.com/MakeNowJust/heredoc)
+
+## About
+
+Package heredoc provides here-documents with the indentation stripped.
+
+## Install
+
+```console
+$ go get github.com/MakeNowJust/heredoc
+```
+
+## Import
+
+```go
+// usual
+import "github.com/MakeNowJust/heredoc"
+```
+
+## Example
+
+```go
+package main
+
+import (
+ "fmt"
+ "github.com/MakeNowJust/heredoc"
+)
+
+func main() {
+ fmt.Println(heredoc.Doc(`
+ Lorem ipsum dolor sit amet, consectetur adipisicing elit,
+ sed do eiusmod tempor incididunt ut labore et dolore magna
+ aliqua. Ut enim ad minim veniam, ...
+ `))
+ // Output:
+ // Lorem ipsum dolor sit amet, consectetur adipisicing elit,
+ // sed do eiusmod tempor incididunt ut labore et dolore magna
+ // aliqua. Ut enim ad minim veniam, ...
+ //
+}
+```
+
+## API Document
+
+ - [heredoc - GoDoc](https://godoc.org/github.com/MakeNowJust/heredoc)
+
+## License
+
+This software is released under the MIT License, see LICENSE.
diff --git a/vendor/github.com/MakeNowJust/heredoc/heredoc.go b/vendor/github.com/MakeNowJust/heredoc/heredoc.go
new file mode 100644
index 000000000..1fc046955
--- /dev/null
+++ b/vendor/github.com/MakeNowJust/heredoc/heredoc.go
@@ -0,0 +1,105 @@
+// Copyright (c) 2014-2019 TSUYUSATO Kitsune
+// This software is released under the MIT License.
+// http://opensource.org/licenses/mit-license.php
+
+// Package heredoc provides creation of here-documents from raw strings.
+//
+// Go supports raw-string syntax.
+//
+// doc := `
+// Foo
+// Bar
+// `
+//
+// But raw strings do not strip indentation, so the content above is an indented string, equivalent to
+//
+// "\n\tFoo\n\tBar\n"
+//
+// I don't want this!
+//
+// However, this problem is solved by package heredoc.
+//
+// doc := heredoc.Doc(`
+// Foo
+// Bar
+// `)
+//
+// Is equivalent to
+//
+// "Foo\nBar\n"
+package heredoc
+
+import (
+ "fmt"
+ "strings"
+ "unicode"
+)
+
+const maxInt = int(^uint(0) >> 1)
+
+// Doc returns the unindented string as a here-document.
+func Doc(raw string) string {
+ skipFirstLine := false
+ if len(raw) > 0 && raw[0] == '\n' {
+ raw = raw[1:]
+ } else {
+ skipFirstLine = true
+ }
+
+ lines := strings.Split(raw, "\n")
+
+ minIndentSize := getMinIndent(lines, skipFirstLine)
+ lines = removeIndentation(lines, minIndentSize, skipFirstLine)
+
+ return strings.Join(lines, "\n")
+}
+
+// getMinIndent calculates the minimum indentation in lines, excluding empty lines.
+func getMinIndent(lines []string, skipFirstLine bool) int {
+ minIndentSize := maxInt
+
+ for i, line := range lines {
+ if i == 0 && skipFirstLine {
+ continue
+ }
+
+ indentSize := 0
+ for _, r := range []rune(line) {
+ if unicode.IsSpace(r) {
+ indentSize += 1
+ } else {
+ break
+ }
+ }
+
+ if len(line) == indentSize {
+ if i == len(lines)-1 && indentSize < minIndentSize {
+ lines[i] = ""
+ }
+ } else if indentSize < minIndentSize {
+ minIndentSize = indentSize
+ }
+ }
+ return minIndentSize
+}
+
+// removeIndentation removes n characters from the front of each line in lines.
+// Skips first line if skipFirstLine is true, skips empty lines.
+func removeIndentation(lines []string, n int, skipFirstLine bool) []string {
+ for i, line := range lines {
+ if i == 0 && skipFirstLine {
+ continue
+ }
+
+ if len(lines[i]) >= n {
+ lines[i] = line[n:]
+ }
+ }
+ return lines
+}
+
+// Docf returns the unindented and formatted string as a here-document.
+// Formatting is done as for fmt.Printf().
+func Docf(raw string, args ...interface{}) string {
+ return fmt.Sprintf(Doc(raw), args...)
+}
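The `Doc` algorithm above (strip one leading newline, measure the minimum indent of the non-blank lines, remove it everywhere) can be condensed into a single function. A simplified sketch: `dedent` is hypothetical and counts indentation in bytes over spaces and tabs, whereas the real package walks runes with `unicode.IsSpace`:

```go
package main

import (
	"fmt"
	"strings"
)

// dedent is a condensed version of Doc: drop one leading newline, measure
// the smallest indent of the non-blank lines, then strip that prefix from
// every line that is long enough to carry it.
func dedent(raw string) string {
	if len(raw) > 0 && raw[0] == '\n' {
		raw = raw[1:]
	}
	lines := strings.Split(raw, "\n")
	min := -1
	for _, line := range lines {
		trimmed := strings.TrimLeft(line, " \t")
		if trimmed == "" {
			continue // blank lines don't count toward the minimum
		}
		if indent := len(line) - len(trimmed); min < 0 || indent < min {
			min = indent
		}
	}
	if min <= 0 {
		return raw
	}
	for i, line := range lines {
		if len(line) >= min {
			lines[i] = line[min:]
		}
	}
	return strings.Join(lines, "\n")
}

func main() {
	fmt.Printf("%q\n", dedent("\n\tFoo\n\tBar\n")) // "Foo\nBar\n"
}
```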
diff --git a/vendor/github.com/Microsoft/go-winio/.gitignore b/vendor/github.com/Microsoft/go-winio/.gitignore
new file mode 100644
index 000000000..b883f1fdc
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/.gitignore
@@ -0,0 +1 @@
+*.exe
diff --git a/vendor/github.com/Microsoft/go-winio/CODEOWNERS b/vendor/github.com/Microsoft/go-winio/CODEOWNERS
new file mode 100644
index 000000000..ae1b4942b
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/CODEOWNERS
@@ -0,0 +1 @@
+ * @microsoft/containerplat
diff --git a/vendor/github.com/Microsoft/go-winio/LICENSE b/vendor/github.com/Microsoft/go-winio/LICENSE
new file mode 100644
index 000000000..b8b569d77
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/LICENSE
@@ -0,0 +1,22 @@
+The MIT License (MIT)
+
+Copyright (c) 2015 Microsoft
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
diff --git a/vendor/github.com/Microsoft/go-winio/README.md b/vendor/github.com/Microsoft/go-winio/README.md
new file mode 100644
index 000000000..60c93fe50
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/README.md
@@ -0,0 +1,22 @@
+# go-winio [![Build Status](https://github.com/microsoft/go-winio/actions/workflows/ci.yml/badge.svg)](https://github.com/microsoft/go-winio/actions/workflows/ci.yml)
+
+This repository contains utilities for efficiently performing Win32 IO operations in
+Go. Currently, this is focused on accessing named pipes and other file handles, and
+for using named pipes as a net transport.
+
+This code relies on IO completion ports to avoid blocking IO on system threads, allowing Go
+to reuse the thread to schedule another goroutine. This limits support to Windows Vista and
+newer operating systems. This is similar to the implementation of network sockets in Go's net
+package.
+
+Please see the LICENSE file for licensing information.
+
+This project has adopted the [Microsoft Open Source Code of
+Conduct](https://opensource.microsoft.com/codeofconduct/). For more information
+see the [Code of Conduct
+FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact
+[opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional
+questions or comments.
+
+Thanks to natefinch for the inspiration for this library. See https://github.com/natefinch/npipe
+for another named pipe implementation.
diff --git a/vendor/github.com/Microsoft/go-winio/backup.go b/vendor/github.com/Microsoft/go-winio/backup.go
new file mode 100644
index 000000000..2be34af43
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/backup.go
@@ -0,0 +1,280 @@
+// +build windows
+
+package winio
+
+import (
+ "encoding/binary"
+ "errors"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "os"
+ "runtime"
+ "syscall"
+ "unicode/utf16"
+)
+
+//sys backupRead(h syscall.Handle, b []byte, bytesRead *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupRead
+//sys backupWrite(h syscall.Handle, b []byte, bytesWritten *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupWrite
+
+const (
+ BackupData = uint32(iota + 1)
+ BackupEaData
+ BackupSecurity
+ BackupAlternateData
+ BackupLink
+ BackupPropertyData
+ BackupObjectId
+ BackupReparseData
+ BackupSparseBlock
+ BackupTxfsData
+)
+
+const (
+ StreamSparseAttributes = uint32(8)
+)
+
+const (
+ WRITE_DAC = 0x40000
+ WRITE_OWNER = 0x80000
+ ACCESS_SYSTEM_SECURITY = 0x1000000
+)
+
+// BackupHeader represents a backup stream of a file.
+type BackupHeader struct {
+ Id uint32 // The backup stream ID
+ Attributes uint32 // Stream attributes
+ Size int64 // The size of the stream in bytes
+ Name string // The name of the stream (for BackupAlternateData only).
+ Offset int64 // The offset of the stream in the file (for BackupSparseBlock only).
+}
+
+type win32StreamId struct {
+ StreamId uint32
+ Attributes uint32
+ Size uint64
+ NameSize uint32
+}
+
+// BackupStreamReader reads from a stream produced by the BackupRead Win32 API and produces a series
+// of BackupHeader values.
+type BackupStreamReader struct {
+ r io.Reader
+ bytesLeft int64
+}
+
+// NewBackupStreamReader produces a BackupStreamReader from any io.Reader.
+func NewBackupStreamReader(r io.Reader) *BackupStreamReader {
+ return &BackupStreamReader{r, 0}
+}
+
+// Next returns the next backup stream and prepares for calls to Read(). It skips the remainder of the current stream if
+// it was not completely read.
+func (r *BackupStreamReader) Next() (*BackupHeader, error) {
+ if r.bytesLeft > 0 {
+ if s, ok := r.r.(io.Seeker); ok {
+ // Make sure Seek on io.SeekCurrent sometimes succeeds
+ // before trying the actual seek.
+ if _, err := s.Seek(0, io.SeekCurrent); err == nil {
+ if _, err = s.Seek(r.bytesLeft, io.SeekCurrent); err != nil {
+ return nil, err
+ }
+ r.bytesLeft = 0
+ }
+ }
+ if _, err := io.Copy(ioutil.Discard, r); err != nil {
+ return nil, err
+ }
+ }
+ var wsi win32StreamId
+ if err := binary.Read(r.r, binary.LittleEndian, &wsi); err != nil {
+ return nil, err
+ }
+ hdr := &BackupHeader{
+ Id: wsi.StreamId,
+ Attributes: wsi.Attributes,
+ Size: int64(wsi.Size),
+ }
+ if wsi.NameSize != 0 {
+ name := make([]uint16, int(wsi.NameSize/2))
+ if err := binary.Read(r.r, binary.LittleEndian, name); err != nil {
+ return nil, err
+ }
+ hdr.Name = syscall.UTF16ToString(name)
+ }
+ if wsi.StreamId == BackupSparseBlock {
+ if err := binary.Read(r.r, binary.LittleEndian, &hdr.Offset); err != nil {
+ return nil, err
+ }
+ hdr.Size -= 8
+ }
+ r.bytesLeft = hdr.Size
+ return hdr, nil
+}
+
+// Read reads from the current backup stream.
+func (r *BackupStreamReader) Read(b []byte) (int, error) {
+ if r.bytesLeft == 0 {
+ return 0, io.EOF
+ }
+ if int64(len(b)) > r.bytesLeft {
+ b = b[:r.bytesLeft]
+ }
+ n, err := r.r.Read(b)
+ r.bytesLeft -= int64(n)
+ if err == io.EOF {
+ err = io.ErrUnexpectedEOF
+ } else if r.bytesLeft == 0 && err == nil {
+ err = io.EOF
+ }
+ return n, err
+}
+
+// BackupStreamWriter writes a stream compatible with the BackupWrite Win32 API.
+type BackupStreamWriter struct {
+ w io.Writer
+ bytesLeft int64
+}
+
+// NewBackupStreamWriter produces a BackupStreamWriter on top of an io.Writer.
+func NewBackupStreamWriter(w io.Writer) *BackupStreamWriter {
+ return &BackupStreamWriter{w, 0}
+}
+
+// WriteHeader writes the next backup stream header and prepares for calls to Write().
+func (w *BackupStreamWriter) WriteHeader(hdr *BackupHeader) error {
+ if w.bytesLeft != 0 {
+ return fmt.Errorf("missing %d bytes", w.bytesLeft)
+ }
+ name := utf16.Encode([]rune(hdr.Name))
+ wsi := win32StreamId{
+ StreamId: hdr.Id,
+ Attributes: hdr.Attributes,
+ Size: uint64(hdr.Size),
+ NameSize: uint32(len(name) * 2),
+ }
+ if hdr.Id == BackupSparseBlock {
+ // Include space for the int64 block offset
+ wsi.Size += 8
+ }
+ if err := binary.Write(w.w, binary.LittleEndian, &wsi); err != nil {
+ return err
+ }
+ if len(name) != 0 {
+ if err := binary.Write(w.w, binary.LittleEndian, name); err != nil {
+ return err
+ }
+ }
+ if hdr.Id == BackupSparseBlock {
+ if err := binary.Write(w.w, binary.LittleEndian, hdr.Offset); err != nil {
+ return err
+ }
+ }
+ w.bytesLeft = hdr.Size
+ return nil
+}
+
+// Write writes to the current backup stream.
+func (w *BackupStreamWriter) Write(b []byte) (int, error) {
+ if w.bytesLeft < int64(len(b)) {
+ return 0, fmt.Errorf("too many bytes by %d", int64(len(b))-w.bytesLeft)
+ }
+ n, err := w.w.Write(b)
+ w.bytesLeft -= int64(n)
+ return n, err
+}
+
+// BackupFileReader provides an io.ReadCloser interface on top of the BackupRead Win32 API.
+type BackupFileReader struct {
+ f *os.File
+ includeSecurity bool
+ ctx uintptr
+}
+
+// NewBackupFileReader returns a new BackupFileReader from a file handle. If includeSecurity is true,
+// Read will attempt to read the security descriptor of the file.
+func NewBackupFileReader(f *os.File, includeSecurity bool) *BackupFileReader {
+ r := &BackupFileReader{f, includeSecurity, 0}
+ return r
+}
+
+// Read reads a backup stream from the file by calling the Win32 API BackupRead().
+func (r *BackupFileReader) Read(b []byte) (int, error) {
+ var bytesRead uint32
+ err := backupRead(syscall.Handle(r.f.Fd()), b, &bytesRead, false, r.includeSecurity, &r.ctx)
+ if err != nil {
+ return 0, &os.PathError{"BackupRead", r.f.Name(), err}
+ }
+ runtime.KeepAlive(r.f)
+ if bytesRead == 0 {
+ return 0, io.EOF
+ }
+ return int(bytesRead), nil
+}
+
+// Close frees Win32 resources associated with the BackupFileReader. It does not close
+// the underlying file.
+func (r *BackupFileReader) Close() error {
+ if r.ctx != 0 {
+ backupRead(syscall.Handle(r.f.Fd()), nil, nil, true, false, &r.ctx)
+ runtime.KeepAlive(r.f)
+ r.ctx = 0
+ }
+ return nil
+}
+
+// BackupFileWriter provides an io.WriteCloser interface on top of the BackupWrite Win32 API.
+type BackupFileWriter struct {
+ f *os.File
+ includeSecurity bool
+ ctx uintptr
+}
+
+// NewBackupFileWriter returns a new BackupFileWriter from a file handle. If includeSecurity is true,
+// Write() will attempt to restore the security descriptor from the stream.
+func NewBackupFileWriter(f *os.File, includeSecurity bool) *BackupFileWriter {
+ w := &BackupFileWriter{f, includeSecurity, 0}
+ return w
+}
+
+// Write restores a portion of the file using the provided backup stream.
+func (w *BackupFileWriter) Write(b []byte) (int, error) {
+ var bytesWritten uint32
+ err := backupWrite(syscall.Handle(w.f.Fd()), b, &bytesWritten, false, w.includeSecurity, &w.ctx)
+ if err != nil {
+ return 0, &os.PathError{"BackupWrite", w.f.Name(), err}
+ }
+ runtime.KeepAlive(w.f)
+ if int(bytesWritten) != len(b) {
+ return int(bytesWritten), errors.New("not all bytes could be written")
+ }
+ return len(b), nil
+}
+
+// Close frees Win32 resources associated with the BackupFileWriter. It does not
+// close the underlying file.
+func (w *BackupFileWriter) Close() error {
+ if w.ctx != 0 {
+ backupWrite(syscall.Handle(w.f.Fd()), nil, nil, true, false, &w.ctx)
+ runtime.KeepAlive(w.f)
+ w.ctx = 0
+ }
+ return nil
+}
+
+// OpenForBackup opens a file or directory, potentially skipping access checks if the backup
+// or restore privileges have been acquired.
+//
+// If the file opened was a directory, it cannot be used with Readdir().
+func OpenForBackup(path string, access uint32, share uint32, createmode uint32) (*os.File, error) {
+ winPath, err := syscall.UTF16FromString(path)
+ if err != nil {
+ return nil, err
+ }
+ h, err := syscall.CreateFile(&winPath[0], access, share, nil, createmode, syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT, 0)
+ if err != nil {
+ err = &os.PathError{Op: "open", Path: path, Err: err}
+ return nil, err
+ }
+ return os.NewFile(uintptr(h), path), nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/ea.go b/vendor/github.com/Microsoft/go-winio/ea.go
new file mode 100644
index 000000000..4051c1b33
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/ea.go
@@ -0,0 +1,137 @@
+package winio
+
+import (
+ "bytes"
+ "encoding/binary"
+ "errors"
+)
+
+type fileFullEaInformation struct {
+ NextEntryOffset uint32
+ Flags uint8
+ NameLength uint8
+ ValueLength uint16
+}
+
+var (
+ fileFullEaInformationSize = binary.Size(&fileFullEaInformation{})
+
+ errInvalidEaBuffer = errors.New("invalid extended attribute buffer")
+ errEaNameTooLarge = errors.New("extended attribute name too large")
+ errEaValueTooLarge = errors.New("extended attribute value too large")
+)
+
+// ExtendedAttribute represents a single Windows EA.
+type ExtendedAttribute struct {
+ Name string
+ Value []byte
+ Flags uint8
+}
+
+func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) {
+ var info fileFullEaInformation
+ err = binary.Read(bytes.NewReader(b), binary.LittleEndian, &info)
+ if err != nil {
+ err = errInvalidEaBuffer
+ return
+ }
+
+ nameOffset := fileFullEaInformationSize
+ nameLen := int(info.NameLength)
+ valueOffset := nameOffset + int(info.NameLength) + 1
+ valueLen := int(info.ValueLength)
+ nextOffset := int(info.NextEntryOffset)
+ if valueLen+valueOffset > len(b) || nextOffset < 0 || nextOffset > len(b) {
+ err = errInvalidEaBuffer
+ return
+ }
+
+ ea.Name = string(b[nameOffset : nameOffset+nameLen])
+ ea.Value = b[valueOffset : valueOffset+valueLen]
+ ea.Flags = info.Flags
+ if info.NextEntryOffset != 0 {
+ nb = b[info.NextEntryOffset:]
+ }
+ return
+}
+
+// DecodeExtendedAttributes decodes a list of EAs from a FILE_FULL_EA_INFORMATION
+// buffer retrieved from BackupRead, ZwQueryEaFile, etc.
+func DecodeExtendedAttributes(b []byte) (eas []ExtendedAttribute, err error) {
+ for len(b) != 0 {
+ ea, nb, err := parseEa(b)
+ if err != nil {
+ return nil, err
+ }
+
+ eas = append(eas, ea)
+ b = nb
+ }
+ return
+}
+
+func writeEa(buf *bytes.Buffer, ea *ExtendedAttribute, last bool) error {
+ if int(uint8(len(ea.Name))) != len(ea.Name) {
+ return errEaNameTooLarge
+ }
+ if int(uint16(len(ea.Value))) != len(ea.Value) {
+ return errEaValueTooLarge
+ }
+ entrySize := uint32(fileFullEaInformationSize + len(ea.Name) + 1 + len(ea.Value))
+ withPadding := (entrySize + 3) &^ 3
+ nextOffset := uint32(0)
+ if !last {
+ nextOffset = withPadding
+ }
+ info := fileFullEaInformation{
+ NextEntryOffset: nextOffset,
+ Flags: ea.Flags,
+ NameLength: uint8(len(ea.Name)),
+ ValueLength: uint16(len(ea.Value)),
+ }
+
+ err := binary.Write(buf, binary.LittleEndian, &info)
+ if err != nil {
+ return err
+ }
+
+ _, err = buf.Write([]byte(ea.Name))
+ if err != nil {
+ return err
+ }
+
+ err = buf.WriteByte(0)
+ if err != nil {
+ return err
+ }
+
+ _, err = buf.Write(ea.Value)
+ if err != nil {
+ return err
+ }
+
+ _, err = buf.Write([]byte{0, 0, 0}[0 : withPadding-entrySize])
+ if err != nil {
+ return err
+ }
+
+ return nil
+}
+
+// EncodeExtendedAttributes encodes a list of EAs into a FILE_FULL_EA_INFORMATION
+// buffer for use with BackupWrite, ZwSetEaFile, etc.
+func EncodeExtendedAttributes(eas []ExtendedAttribute) ([]byte, error) {
+ var buf bytes.Buffer
+ for i := range eas {
+ last := false
+ if i == len(eas)-1 {
+ last = true
+ }
+
+ err := writeEa(&buf, &eas[i], last)
+ if err != nil {
+ return nil, err
+ }
+ }
+ return buf.Bytes(), nil
+}
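The EA encoder above packs each entry as a fixed 8-byte FILE_FULL_EA_INFORMATION header, the NUL-terminated name, the value, and then padding so the next entry starts on a 4-byte boundary. A minimal, platform-independent sketch of that layout arithmetic (the helper name is illustrative, not part of go-winio):

```go
package main

import "fmt"

// fileFullEaInformationSize mirrors the fixed header of
// FILE_FULL_EA_INFORMATION: NextEntryOffset (4) + Flags (1) +
// NameLength (1) + ValueLength (2) = 8 bytes.
const fileFullEaInformationSize = 8

// eaEntrySize returns the unpadded size of one EA record and the
// size rounded up to the next 4-byte boundary, as writeEa does.
func eaEntrySize(nameLen, valueLen int) (entry, padded uint32) {
	entry = uint32(fileFullEaInformationSize + nameLen + 1 + valueLen) // +1 for the name's NUL
	padded = (entry + 3) &^ 3                                          // align up to 4 bytes
	return
}

func main() {
	entry, padded := eaEntrySize(len("user.test"), 5)
	fmt.Println(entry, padded)
}
```

The padded size is what writeEa stores in NextEntryOffset for every entry except the last, whose NextEntryOffset is zero to terminate the list.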
diff --git a/vendor/github.com/Microsoft/go-winio/file.go b/vendor/github.com/Microsoft/go-winio/file.go
new file mode 100644
index 000000000..0385e4108
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/file.go
@@ -0,0 +1,323 @@
+// +build windows
+
+package winio
+
+import (
+ "errors"
+ "io"
+ "runtime"
+ "sync"
+ "sync/atomic"
+ "syscall"
+ "time"
+)
+
+//sys cancelIoEx(file syscall.Handle, o *syscall.Overlapped) (err error) = CancelIoEx
+//sys createIoCompletionPort(file syscall.Handle, port syscall.Handle, key uintptr, threadCount uint32) (newport syscall.Handle, err error) = CreateIoCompletionPort
+//sys getQueuedCompletionStatus(port syscall.Handle, bytes *uint32, key *uintptr, o **ioOperation, timeout uint32) (err error) = GetQueuedCompletionStatus
+//sys setFileCompletionNotificationModes(h syscall.Handle, flags uint8) (err error) = SetFileCompletionNotificationModes
+//sys wsaGetOverlappedResult(h syscall.Handle, o *syscall.Overlapped, bytes *uint32, wait bool, flags *uint32) (err error) = ws2_32.WSAGetOverlappedResult
+
+type atomicBool int32
+
+func (b *atomicBool) isSet() bool { return atomic.LoadInt32((*int32)(b)) != 0 }
+func (b *atomicBool) setFalse() { atomic.StoreInt32((*int32)(b), 0) }
+func (b *atomicBool) setTrue() { atomic.StoreInt32((*int32)(b), 1) }
+func (b *atomicBool) swap(new bool) bool {
+ var newInt int32
+ if new {
+ newInt = 1
+ }
+ return atomic.SwapInt32((*int32)(b), newInt) == 1
+}
+
+const (
+ cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS = 1
+ cFILE_SKIP_SET_EVENT_ON_HANDLE = 2
+)
+
+var (
+ ErrFileClosed = errors.New("file has already been closed")
+ ErrTimeout = &timeoutError{}
+)
+
+type timeoutError struct{}
+
+func (e *timeoutError) Error() string { return "i/o timeout" }
+func (e *timeoutError) Timeout() bool { return true }
+func (e *timeoutError) Temporary() bool { return true }
+
+type timeoutChan chan struct{}
+
+var ioInitOnce sync.Once
+var ioCompletionPort syscall.Handle
+
+// ioResult contains the result of an asynchronous IO operation
+type ioResult struct {
+ bytes uint32
+ err error
+}
+
+// ioOperation represents an outstanding asynchronous Win32 IO
+type ioOperation struct {
+ o syscall.Overlapped
+ ch chan ioResult
+}
+
+func initIo() {
+ h, err := createIoCompletionPort(syscall.InvalidHandle, 0, 0, 0xffffffff)
+ if err != nil {
+ panic(err)
+ }
+ ioCompletionPort = h
+ go ioCompletionProcessor(h)
+}
+
+// win32File implements Reader, Writer, and Closer on a Win32 handle without blocking in a syscall.
+// It takes ownership of this handle and will close it if it is garbage collected.
+type win32File struct {
+ handle syscall.Handle
+ wg sync.WaitGroup
+ wgLock sync.RWMutex
+ closing atomicBool
+ socket bool
+ readDeadline deadlineHandler
+ writeDeadline deadlineHandler
+}
+
+type deadlineHandler struct {
+ setLock sync.Mutex
+ channel timeoutChan
+ channelLock sync.RWMutex
+ timer *time.Timer
+ timedout atomicBool
+}
+
+// makeWin32File makes a new win32File from an existing file handle
+func makeWin32File(h syscall.Handle) (*win32File, error) {
+ f := &win32File{handle: h}
+ ioInitOnce.Do(initIo)
+ _, err := createIoCompletionPort(h, ioCompletionPort, 0, 0xffffffff)
+ if err != nil {
+ return nil, err
+ }
+ err = setFileCompletionNotificationModes(h, cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS|cFILE_SKIP_SET_EVENT_ON_HANDLE)
+ if err != nil {
+ return nil, err
+ }
+ f.readDeadline.channel = make(timeoutChan)
+ f.writeDeadline.channel = make(timeoutChan)
+ return f, nil
+}
+
+func MakeOpenFile(h syscall.Handle) (io.ReadWriteCloser, error) {
+ // If we return the result of makeWin32File directly, it can result in an
+ // interface-wrapped nil, rather than a nil interface value.
+ f, err := makeWin32File(h)
+ if err != nil {
+ return nil, err
+ }
+ return f, nil
+}
+
+// closeHandle closes the resources associated with a Win32 handle
+func (f *win32File) closeHandle() {
+ f.wgLock.Lock()
+ // Atomically set that we are closing, releasing the resources only once.
+ if !f.closing.swap(true) {
+ f.wgLock.Unlock()
+ // cancel all IO and wait for it to complete
+ cancelIoEx(f.handle, nil)
+ f.wg.Wait()
+ // at this point, no new IO can start
+ syscall.Close(f.handle)
+ f.handle = 0
+ } else {
+ f.wgLock.Unlock()
+ }
+}
+
+// Close closes a win32File.
+func (f *win32File) Close() error {
+ f.closeHandle()
+ return nil
+}
+
+// prepareIo prepares for a new IO operation.
+// The caller must call f.wg.Done() when the IO is finished, prior to Close() returning.
+func (f *win32File) prepareIo() (*ioOperation, error) {
+ f.wgLock.RLock()
+ if f.closing.isSet() {
+ f.wgLock.RUnlock()
+ return nil, ErrFileClosed
+ }
+ f.wg.Add(1)
+ f.wgLock.RUnlock()
+ c := &ioOperation{}
+ c.ch = make(chan ioResult)
+ return c, nil
+}
+
+// ioCompletionProcessor processes completed async IOs forever
+func ioCompletionProcessor(h syscall.Handle) {
+ for {
+ var bytes uint32
+ var key uintptr
+ var op *ioOperation
+ err := getQueuedCompletionStatus(h, &bytes, &key, &op, syscall.INFINITE)
+ if op == nil {
+ panic(err)
+ }
+ op.ch <- ioResult{bytes, err}
+ }
+}
+
+// asyncIo processes the return value from ReadFile or WriteFile, blocking until
+// the operation has actually completed.
+func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, err error) (int, error) {
+ if err != syscall.ERROR_IO_PENDING {
+ return int(bytes), err
+ }
+
+ if f.closing.isSet() {
+ cancelIoEx(f.handle, &c.o)
+ }
+
+ var timeout timeoutChan
+ if d != nil {
+ d.channelLock.Lock()
+ timeout = d.channel
+ d.channelLock.Unlock()
+ }
+
+ var r ioResult
+ select {
+ case r = <-c.ch:
+ err = r.err
+ if err == syscall.ERROR_OPERATION_ABORTED {
+ if f.closing.isSet() {
+ err = ErrFileClosed
+ }
+ } else if err != nil && f.socket {
+ // err is from Win32. Query the overlapped structure to get the winsock error.
+ var bytes, flags uint32
+ err = wsaGetOverlappedResult(f.handle, &c.o, &bytes, false, &flags)
+ }
+ case <-timeout:
+ cancelIoEx(f.handle, &c.o)
+ r = <-c.ch
+ err = r.err
+ if err == syscall.ERROR_OPERATION_ABORTED {
+ err = ErrTimeout
+ }
+ }
+
+ // runtime.KeepAlive is needed, as c is passed via native
+ // code to ioCompletionProcessor, c must remain alive
+ // until the channel read is complete.
+ runtime.KeepAlive(c)
+ return int(r.bytes), err
+}
+
+// Read reads from a file handle.
+func (f *win32File) Read(b []byte) (int, error) {
+ c, err := f.prepareIo()
+ if err != nil {
+ return 0, err
+ }
+ defer f.wg.Done()
+
+ if f.readDeadline.timedout.isSet() {
+ return 0, ErrTimeout
+ }
+
+ var bytes uint32
+ err = syscall.ReadFile(f.handle, b, &bytes, &c.o)
+ n, err := f.asyncIo(c, &f.readDeadline, bytes, err)
+ runtime.KeepAlive(b)
+
+ // Handle EOF conditions.
+ if err == nil && n == 0 && len(b) != 0 {
+ return 0, io.EOF
+ } else if err == syscall.ERROR_BROKEN_PIPE {
+ return 0, io.EOF
+ } else {
+ return n, err
+ }
+}
+
+// Write writes to a file handle.
+func (f *win32File) Write(b []byte) (int, error) {
+ c, err := f.prepareIo()
+ if err != nil {
+ return 0, err
+ }
+ defer f.wg.Done()
+
+ if f.writeDeadline.timedout.isSet() {
+ return 0, ErrTimeout
+ }
+
+ var bytes uint32
+ err = syscall.WriteFile(f.handle, b, &bytes, &c.o)
+ n, err := f.asyncIo(c, &f.writeDeadline, bytes, err)
+ runtime.KeepAlive(b)
+ return n, err
+}
+
+func (f *win32File) SetReadDeadline(deadline time.Time) error {
+ return f.readDeadline.set(deadline)
+}
+
+func (f *win32File) SetWriteDeadline(deadline time.Time) error {
+ return f.writeDeadline.set(deadline)
+}
+
+func (f *win32File) Flush() error {
+ return syscall.FlushFileBuffers(f.handle)
+}
+
+func (f *win32File) Fd() uintptr {
+ return uintptr(f.handle)
+}
+
+func (d *deadlineHandler) set(deadline time.Time) error {
+ d.setLock.Lock()
+ defer d.setLock.Unlock()
+
+ if d.timer != nil {
+ if !d.timer.Stop() {
+ <-d.channel
+ }
+ d.timer = nil
+ }
+ d.timedout.setFalse()
+
+ select {
+ case <-d.channel:
+ d.channelLock.Lock()
+ d.channel = make(chan struct{})
+ d.channelLock.Unlock()
+ default:
+ }
+
+ if deadline.IsZero() {
+ return nil
+ }
+
+ timeoutIO := func() {
+ d.timedout.setTrue()
+ close(d.channel)
+ }
+
+ now := time.Now()
+ duration := deadline.Sub(now)
+ if deadline.After(now) {
+ // Deadline is in the future, set a timer to wait
+ d.timer = time.AfterFunc(duration, timeoutIO)
+ } else {
+ // Deadline is in the past. Cancel all pending IO now.
+ timeoutIO()
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/fileinfo.go b/vendor/github.com/Microsoft/go-winio/fileinfo.go
new file mode 100644
index 000000000..3ab6bff69
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/fileinfo.go
@@ -0,0 +1,73 @@
+// +build windows
+
+package winio
+
+import (
+ "os"
+ "runtime"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+// FileBasicInfo contains file access time and file attributes information.
+type FileBasicInfo struct {
+ CreationTime, LastAccessTime, LastWriteTime, ChangeTime windows.Filetime
+ FileAttributes uint32
+ pad uint32 // padding
+}
+
+// GetFileBasicInfo retrieves times and attributes for a file.
+func GetFileBasicInfo(f *os.File) (*FileBasicInfo, error) {
+ bi := &FileBasicInfo{}
+ if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), windows.FileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil {
+ return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
+ }
+ runtime.KeepAlive(f)
+ return bi, nil
+}
+
+// SetFileBasicInfo sets times and attributes for a file.
+func SetFileBasicInfo(f *os.File, bi *FileBasicInfo) error {
+ if err := windows.SetFileInformationByHandle(windows.Handle(f.Fd()), windows.FileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil {
+ return &os.PathError{Op: "SetFileInformationByHandle", Path: f.Name(), Err: err}
+ }
+ runtime.KeepAlive(f)
+ return nil
+}
+
+// FileStandardInfo contains extended information for the file.
+// FILE_STANDARD_INFO in WinBase.h
+// https://docs.microsoft.com/en-us/windows/win32/api/winbase/ns-winbase-file_standard_info
+type FileStandardInfo struct {
+ AllocationSize, EndOfFile int64
+ NumberOfLinks uint32
+ DeletePending, Directory bool
+}
+
+// GetFileStandardInfo retrieves extended information for the file.
+func GetFileStandardInfo(f *os.File) (*FileStandardInfo, error) {
+ si := &FileStandardInfo{}
+ if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), windows.FileStandardInfo, (*byte)(unsafe.Pointer(si)), uint32(unsafe.Sizeof(*si))); err != nil {
+ return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
+ }
+ runtime.KeepAlive(f)
+ return si, nil
+}
+
+// FileIDInfo contains the volume serial number and file ID for a file. This pair should be
+// unique on a system.
+type FileIDInfo struct {
+ VolumeSerialNumber uint64
+ FileID [16]byte
+}
+
+// GetFileID retrieves the unique (volume, file ID) pair for a file.
+func GetFileID(f *os.File) (*FileIDInfo, error) {
+ fileID := &FileIDInfo{}
+ if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), windows.FileIdInfo, (*byte)(unsafe.Pointer(fileID)), uint32(unsafe.Sizeof(*fileID))); err != nil {
+ return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
+ }
+ runtime.KeepAlive(f)
+ return fileID, nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/hvsock.go b/vendor/github.com/Microsoft/go-winio/hvsock.go
new file mode 100644
index 000000000..b632f8f8b
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/hvsock.go
@@ -0,0 +1,307 @@
+// +build windows
+
+package winio
+
+import (
+ "fmt"
+ "io"
+ "net"
+ "os"
+ "syscall"
+ "time"
+ "unsafe"
+
+ "github.com/Microsoft/go-winio/pkg/guid"
+)
+
+//sys bind(s syscall.Handle, name unsafe.Pointer, namelen int32) (err error) [failretval==socketError] = ws2_32.bind
+
+const (
+ afHvSock = 34 // AF_HYPERV
+
+ socketError = ^uintptr(0)
+)
+
+// An HvsockAddr is an address for an AF_HYPERV socket.
+type HvsockAddr struct {
+ VMID guid.GUID
+ ServiceID guid.GUID
+}
+
+type rawHvsockAddr struct {
+ Family uint16
+ _ uint16
+ VMID guid.GUID
+ ServiceID guid.GUID
+}
+
+// Network returns the address's network name, "hvsock".
+func (addr *HvsockAddr) Network() string {
+ return "hvsock"
+}
+
+func (addr *HvsockAddr) String() string {
+ return fmt.Sprintf("%s:%s", &addr.VMID, &addr.ServiceID)
+}
+
+// VsockServiceID returns an hvsock service ID corresponding to the specified AF_VSOCK port.
+func VsockServiceID(port uint32) guid.GUID {
+ g, _ := guid.FromString("00000000-facb-11e6-bd58-64006a7986d3")
+ g.Data1 = port
+ return g
+}
+
+func (addr *HvsockAddr) raw() rawHvsockAddr {
+ return rawHvsockAddr{
+ Family: afHvSock,
+ VMID: addr.VMID,
+ ServiceID: addr.ServiceID,
+ }
+}
+
+func (addr *HvsockAddr) fromRaw(raw *rawHvsockAddr) {
+ addr.VMID = raw.VMID
+ addr.ServiceID = raw.ServiceID
+}
+
+// HvsockListener is a socket listener for the AF_HYPERV address family.
+type HvsockListener struct {
+ sock *win32File
+ addr HvsockAddr
+}
+
+// HvsockConn is a connected socket of the AF_HYPERV address family.
+type HvsockConn struct {
+ sock *win32File
+ local, remote HvsockAddr
+}
+
+func newHvSocket() (*win32File, error) {
+ fd, err := syscall.Socket(afHvSock, syscall.SOCK_STREAM, 1)
+ if err != nil {
+ return nil, os.NewSyscallError("socket", err)
+ }
+ f, err := makeWin32File(fd)
+ if err != nil {
+ syscall.Close(fd)
+ return nil, err
+ }
+ f.socket = true
+ return f, nil
+}
+
+// ListenHvsock listens for connections on the specified hvsock address.
+func ListenHvsock(addr *HvsockAddr) (_ *HvsockListener, err error) {
+ l := &HvsockListener{addr: *addr}
+ sock, err := newHvSocket()
+ if err != nil {
+ return nil, l.opErr("listen", err)
+ }
+ sa := addr.raw()
+ err = bind(sock.handle, unsafe.Pointer(&sa), int32(unsafe.Sizeof(sa)))
+ if err != nil {
+		return nil, l.opErr("listen", os.NewSyscallError("bind", err))
+ }
+ err = syscall.Listen(sock.handle, 16)
+ if err != nil {
+ return nil, l.opErr("listen", os.NewSyscallError("listen", err))
+ }
+ return &HvsockListener{sock: sock, addr: *addr}, nil
+}
+
+func (l *HvsockListener) opErr(op string, err error) error {
+ return &net.OpError{Op: op, Net: "hvsock", Addr: &l.addr, Err: err}
+}
+
+// Addr returns the listener's network address.
+func (l *HvsockListener) Addr() net.Addr {
+ return &l.addr
+}
+
+// Accept waits for the next connection and returns it.
+func (l *HvsockListener) Accept() (_ net.Conn, err error) {
+ sock, err := newHvSocket()
+ if err != nil {
+ return nil, l.opErr("accept", err)
+ }
+ defer func() {
+ if sock != nil {
+ sock.Close()
+ }
+ }()
+ c, err := l.sock.prepareIo()
+ if err != nil {
+ return nil, l.opErr("accept", err)
+ }
+ defer l.sock.wg.Done()
+
+ // AcceptEx, per documentation, requires an extra 16 bytes per address.
+ const addrlen = uint32(16 + unsafe.Sizeof(rawHvsockAddr{}))
+ var addrbuf [addrlen * 2]byte
+
+ var bytes uint32
+ err = syscall.AcceptEx(l.sock.handle, sock.handle, &addrbuf[0], 0, addrlen, addrlen, &bytes, &c.o)
+ _, err = l.sock.asyncIo(c, nil, bytes, err)
+ if err != nil {
+ return nil, l.opErr("accept", os.NewSyscallError("acceptex", err))
+ }
+ conn := &HvsockConn{
+ sock: sock,
+ }
+ conn.local.fromRaw((*rawHvsockAddr)(unsafe.Pointer(&addrbuf[0])))
+ conn.remote.fromRaw((*rawHvsockAddr)(unsafe.Pointer(&addrbuf[addrlen])))
+ sock = nil
+ return conn, nil
+}
+
+// Close closes the listener, causing any pending Accept calls to fail.
+func (l *HvsockListener) Close() error {
+ return l.sock.Close()
+}
+
+/* Need to finish ConnectEx handling
+func DialHvsock(ctx context.Context, addr *HvsockAddr) (*HvsockConn, error) {
+ sock, err := newHvSocket()
+ if err != nil {
+ return nil, err
+ }
+ defer func() {
+ if sock != nil {
+ sock.Close()
+ }
+ }()
+ c, err := sock.prepareIo()
+ if err != nil {
+ return nil, err
+ }
+ defer sock.wg.Done()
+ var bytes uint32
+ err = windows.ConnectEx(windows.Handle(sock.handle), sa, nil, 0, &bytes, &c.o)
+ _, err = sock.asyncIo(ctx, c, nil, bytes, err)
+ if err != nil {
+ return nil, err
+ }
+ conn := &HvsockConn{
+ sock: sock,
+ remote: *addr,
+ }
+ sock = nil
+ return conn, nil
+}
+*/
+
+func (conn *HvsockConn) opErr(op string, err error) error {
+ return &net.OpError{Op: op, Net: "hvsock", Source: &conn.local, Addr: &conn.remote, Err: err}
+}
+
+func (conn *HvsockConn) Read(b []byte) (int, error) {
+ c, err := conn.sock.prepareIo()
+ if err != nil {
+ return 0, conn.opErr("read", err)
+ }
+ defer conn.sock.wg.Done()
+ buf := syscall.WSABuf{Buf: &b[0], Len: uint32(len(b))}
+ var flags, bytes uint32
+ err = syscall.WSARecv(conn.sock.handle, &buf, 1, &bytes, &flags, &c.o, nil)
+ n, err := conn.sock.asyncIo(c, &conn.sock.readDeadline, bytes, err)
+ if err != nil {
+ if _, ok := err.(syscall.Errno); ok {
+ err = os.NewSyscallError("wsarecv", err)
+ }
+ return 0, conn.opErr("read", err)
+ } else if n == 0 {
+ err = io.EOF
+ }
+ return n, err
+}
+
+func (conn *HvsockConn) Write(b []byte) (int, error) {
+ t := 0
+ for len(b) != 0 {
+ n, err := conn.write(b)
+ if err != nil {
+ return t + n, err
+ }
+ t += n
+ b = b[n:]
+ }
+ return t, nil
+}
+
+func (conn *HvsockConn) write(b []byte) (int, error) {
+ c, err := conn.sock.prepareIo()
+ if err != nil {
+ return 0, conn.opErr("write", err)
+ }
+ defer conn.sock.wg.Done()
+ buf := syscall.WSABuf{Buf: &b[0], Len: uint32(len(b))}
+ var bytes uint32
+ err = syscall.WSASend(conn.sock.handle, &buf, 1, &bytes, 0, &c.o, nil)
+ n, err := conn.sock.asyncIo(c, &conn.sock.writeDeadline, bytes, err)
+ if err != nil {
+ if _, ok := err.(syscall.Errno); ok {
+ err = os.NewSyscallError("wsasend", err)
+ }
+ return 0, conn.opErr("write", err)
+ }
+ return n, err
+}
+
+// Close closes the socket connection, failing any pending read or write calls.
+func (conn *HvsockConn) Close() error {
+ return conn.sock.Close()
+}
+
+func (conn *HvsockConn) shutdown(how int) error {
+	err := syscall.Shutdown(conn.sock.handle, how)
+ if err != nil {
+ return os.NewSyscallError("shutdown", err)
+ }
+ return nil
+}
+
+// CloseRead shuts down the read end of the socket.
+func (conn *HvsockConn) CloseRead() error {
+ err := conn.shutdown(syscall.SHUT_RD)
+ if err != nil {
+ return conn.opErr("close", err)
+ }
+ return nil
+}
+
+// CloseWrite shuts down the write end of the socket, notifying the other endpoint that
+// no more data will be written.
+func (conn *HvsockConn) CloseWrite() error {
+ err := conn.shutdown(syscall.SHUT_WR)
+ if err != nil {
+ return conn.opErr("close", err)
+ }
+ return nil
+}
+
+// LocalAddr returns the local address of the connection.
+func (conn *HvsockConn) LocalAddr() net.Addr {
+ return &conn.local
+}
+
+// RemoteAddr returns the remote address of the connection.
+func (conn *HvsockConn) RemoteAddr() net.Addr {
+ return &conn.remote
+}
+
+// SetDeadline implements the net.Conn SetDeadline method.
+func (conn *HvsockConn) SetDeadline(t time.Time) error {
+ conn.SetReadDeadline(t)
+ conn.SetWriteDeadline(t)
+ return nil
+}
+
+// SetReadDeadline implements the net.Conn SetReadDeadline method.
+func (conn *HvsockConn) SetReadDeadline(t time.Time) error {
+ return conn.sock.SetReadDeadline(t)
+}
+
+// SetWriteDeadline implements the net.Conn SetWriteDeadline method.
+func (conn *HvsockConn) SetWriteDeadline(t time.Time) error {
+ return conn.sock.SetWriteDeadline(t)
+}
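VsockServiceID above maps an AF_VSOCK port onto the well-known hvsock template GUID by substituting the port into Data1. A dependency-free sketch of that mapping (the guid type here is a local stand-in for github.com/Microsoft/go-winio/pkg/guid):

```go
package main

import "fmt"

// guid mirrors the Windows GUID layout: Data1-Data3 are
// native-endian integers, Data4 is a byte array.
type guid struct {
	Data1 uint32
	Data2 uint16
	Data3 uint16
	Data4 [8]byte
}

func (g guid) String() string {
	return fmt.Sprintf("%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x",
		g.Data1, g.Data2, g.Data3,
		g.Data4[0], g.Data4[1], g.Data4[2], g.Data4[3],
		g.Data4[4], g.Data4[5], g.Data4[6], g.Data4[7])
}

// vsockServiceID reproduces the template
// 00000000-facb-11e6-bd58-64006a7986d3 with Data1 replaced by the
// vsock port, as VsockServiceID does.
func vsockServiceID(port uint32) guid {
	return guid{
		Data1: port,
		Data2: 0xfacb,
		Data3: 0x11e6,
		Data4: [8]byte{0xbd, 0x58, 0x64, 0x00, 0x6a, 0x79, 0x86, 0xd3},
	}
}

func main() {
	fmt.Println(vsockServiceID(80))
}
```

This is how a host can accept hvsock connections from guests that only speak AF_VSOCK port numbers: the port is recoverable from Data1 of the service GUID.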
diff --git a/vendor/github.com/Microsoft/go-winio/pipe.go b/vendor/github.com/Microsoft/go-winio/pipe.go
new file mode 100644
index 000000000..96700a73d
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/pipe.go
@@ -0,0 +1,517 @@
+// +build windows
+
+package winio
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "io"
+ "net"
+ "os"
+ "runtime"
+ "syscall"
+ "time"
+ "unsafe"
+)
+
+//sys connectNamedPipe(pipe syscall.Handle, o *syscall.Overlapped) (err error) = ConnectNamedPipe
+//sys createNamedPipe(name string, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *syscall.SecurityAttributes) (handle syscall.Handle, err error) [failretval==syscall.InvalidHandle] = CreateNamedPipeW
+//sys createFile(name string, access uint32, mode uint32, sa *syscall.SecurityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) [failretval==syscall.InvalidHandle] = CreateFileW
+//sys getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) = GetNamedPipeInfo
+//sys getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) = GetNamedPipeHandleStateW
+//sys localAlloc(uFlags uint32, length uint32) (ptr uintptr) = LocalAlloc
+//sys ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntstatus) = ntdll.NtCreateNamedPipeFile
+//sys rtlNtStatusToDosError(status ntstatus) (winerr error) = ntdll.RtlNtStatusToDosErrorNoTeb
+//sys rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntstatus) = ntdll.RtlDosPathNameToNtPathName_U
+//sys rtlDefaultNpAcl(dacl *uintptr) (status ntstatus) = ntdll.RtlDefaultNpAcl
+
+type ioStatusBlock struct {
+ Status, Information uintptr
+}
+
+type objectAttributes struct {
+ Length uintptr
+ RootDirectory uintptr
+ ObjectName *unicodeString
+ Attributes uintptr
+ SecurityDescriptor *securityDescriptor
+ SecurityQoS uintptr
+}
+
+type unicodeString struct {
+ Length uint16
+ MaximumLength uint16
+ Buffer uintptr
+}
+
+type securityDescriptor struct {
+ Revision byte
+ Sbz1 byte
+ Control uint16
+ Owner uintptr
+ Group uintptr
+ Sacl uintptr
+ Dacl uintptr
+}
+
+type ntstatus int32
+
+func (status ntstatus) Err() error {
+ if status >= 0 {
+ return nil
+ }
+ return rtlNtStatusToDosError(status)
+}
+
+const (
+ cERROR_PIPE_BUSY = syscall.Errno(231)
+ cERROR_NO_DATA = syscall.Errno(232)
+ cERROR_PIPE_CONNECTED = syscall.Errno(535)
+ cERROR_SEM_TIMEOUT = syscall.Errno(121)
+
+ cSECURITY_SQOS_PRESENT = 0x100000
+ cSECURITY_ANONYMOUS = 0
+
+ cPIPE_TYPE_MESSAGE = 4
+
+ cPIPE_READMODE_MESSAGE = 2
+
+ cFILE_OPEN = 1
+ cFILE_CREATE = 2
+
+ cFILE_PIPE_MESSAGE_TYPE = 1
+ cFILE_PIPE_REJECT_REMOTE_CLIENTS = 2
+
+ cSE_DACL_PRESENT = 4
+)
+
+var (
+ // ErrPipeListenerClosed is returned for pipe operations on listeners that have been closed.
+ // This error should match net.errClosing since docker takes a dependency on its text.
+ ErrPipeListenerClosed = errors.New("use of closed network connection")
+
+ errPipeWriteClosed = errors.New("pipe has been closed for write")
+)
+
+type win32Pipe struct {
+ *win32File
+ path string
+}
+
+type win32MessageBytePipe struct {
+ win32Pipe
+ writeClosed bool
+ readEOF bool
+}
+
+type pipeAddress string
+
+func (f *win32Pipe) LocalAddr() net.Addr {
+ return pipeAddress(f.path)
+}
+
+func (f *win32Pipe) RemoteAddr() net.Addr {
+ return pipeAddress(f.path)
+}
+
+func (f *win32Pipe) SetDeadline(t time.Time) error {
+ f.SetReadDeadline(t)
+ f.SetWriteDeadline(t)
+ return nil
+}
+
+// CloseWrite closes the write side of a message pipe in byte mode.
+func (f *win32MessageBytePipe) CloseWrite() error {
+ if f.writeClosed {
+ return errPipeWriteClosed
+ }
+ err := f.win32File.Flush()
+ if err != nil {
+ return err
+ }
+ _, err = f.win32File.Write(nil)
+ if err != nil {
+ return err
+ }
+ f.writeClosed = true
+ return nil
+}
+
+// Write writes bytes to a message pipe in byte mode. Zero-byte writes are ignored, since
+// they are used to implement CloseWrite().
+func (f *win32MessageBytePipe) Write(b []byte) (int, error) {
+ if f.writeClosed {
+ return 0, errPipeWriteClosed
+ }
+ if len(b) == 0 {
+ return 0, nil
+ }
+ return f.win32File.Write(b)
+}
+
+// Read reads bytes from a message pipe in byte mode. A read of a zero-byte message on a message
+// mode pipe will return io.EOF, as will all subsequent reads.
+func (f *win32MessageBytePipe) Read(b []byte) (int, error) {
+ if f.readEOF {
+ return 0, io.EOF
+ }
+ n, err := f.win32File.Read(b)
+ if err == io.EOF {
+ // If this was the result of a zero-byte read, then
+ // it is possible that the read was due to a zero-size
+ // message. Since we are simulating CloseWrite with a
+ // zero-byte message, ensure that all future Read() calls
+ // also return EOF.
+ f.readEOF = true
+ } else if err == syscall.ERROR_MORE_DATA {
+ // ERROR_MORE_DATA indicates that the pipe's read mode is message mode
+ // and the message still has more bytes. Treat this as a success, since
+ // this package presents all named pipes as byte streams.
+ err = nil
+ }
+ return n, err
+}
+
+func (s pipeAddress) Network() string {
+ return "pipe"
+}
+
+func (s pipeAddress) String() string {
+ return string(s)
+}
+
+// tryDialPipe attempts to dial the pipe at `path` until `ctx` cancellation or timeout.
+func tryDialPipe(ctx context.Context, path *string, access uint32) (syscall.Handle, error) {
+ for {
+
+ select {
+ case <-ctx.Done():
+ return syscall.Handle(0), ctx.Err()
+ default:
+ h, err := createFile(*path, access, 0, nil, syscall.OPEN_EXISTING, syscall.FILE_FLAG_OVERLAPPED|cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0)
+ if err == nil {
+ return h, nil
+ }
+ if err != cERROR_PIPE_BUSY {
+ return h, &os.PathError{Err: err, Op: "open", Path: *path}
+ }
+			// Wait 10 msec and try again. This is a simplistic retry
+			// loop: we poll at a fixed 10 ms interval until the pipe
+			// stops being busy or the context is done.
+ time.Sleep(10 * time.Millisecond)
+ }
+ }
+}
+
+// DialPipe connects to a named pipe by path, timing out if the connection
+// takes longer than the specified duration. If timeout is nil, then we use
+// a default timeout of 2 seconds. (We do not use WaitNamedPipe.)
+func DialPipe(path string, timeout *time.Duration) (net.Conn, error) {
+ var absTimeout time.Time
+ if timeout != nil {
+ absTimeout = time.Now().Add(*timeout)
+ } else {
+ absTimeout = time.Now().Add(2 * time.Second)
+ }
+ ctx, _ := context.WithDeadline(context.Background(), absTimeout)
+ conn, err := DialPipeContext(ctx, path)
+ if err == context.DeadlineExceeded {
+ return nil, ErrTimeout
+ }
+ return conn, err
+}
+
+// DialPipeContext attempts to connect to a named pipe by `path` until `ctx`
+// cancellation or timeout.
+func DialPipeContext(ctx context.Context, path string) (net.Conn, error) {
+ return DialPipeAccess(ctx, path, syscall.GENERIC_READ|syscall.GENERIC_WRITE)
+}
+
+// DialPipeAccess attempts to connect to a named pipe by `path` with `access` until `ctx`
+// cancellation or timeout.
+func DialPipeAccess(ctx context.Context, path string, access uint32) (net.Conn, error) {
+ var err error
+ var h syscall.Handle
+ h, err = tryDialPipe(ctx, &path, access)
+ if err != nil {
+ return nil, err
+ }
+
+ var flags uint32
+ err = getNamedPipeInfo(h, &flags, nil, nil, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ f, err := makeWin32File(h)
+ if err != nil {
+ syscall.Close(h)
+ return nil, err
+ }
+
+ // If the pipe is in message mode, return a message byte pipe, which
+ // supports CloseWrite().
+ if flags&cPIPE_TYPE_MESSAGE != 0 {
+ return &win32MessageBytePipe{
+ win32Pipe: win32Pipe{win32File: f, path: path},
+ }, nil
+ }
+ return &win32Pipe{win32File: f, path: path}, nil
+}
+
+type acceptResponse struct {
+ f *win32File
+ err error
+}
+
+type win32PipeListener struct {
+ firstHandle syscall.Handle
+ path string
+ config PipeConfig
+ acceptCh chan (chan acceptResponse)
+ closeCh chan int
+ doneCh chan int
+}
+
+func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (syscall.Handle, error) {
+ path16, err := syscall.UTF16FromString(path)
+ if err != nil {
+ return 0, &os.PathError{Op: "open", Path: path, Err: err}
+ }
+
+ var oa objectAttributes
+ oa.Length = unsafe.Sizeof(oa)
+
+ var ntPath unicodeString
+ if err := rtlDosPathNameToNtPathName(&path16[0], &ntPath, 0, 0).Err(); err != nil {
+ return 0, &os.PathError{Op: "open", Path: path, Err: err}
+ }
+ defer localFree(ntPath.Buffer)
+ oa.ObjectName = &ntPath
+
+ // The security descriptor is only needed for the first pipe.
+ if first {
+ if sd != nil {
+ len := uint32(len(sd))
+ sdb := localAlloc(0, len)
+ defer localFree(sdb)
+ copy((*[0xffff]byte)(unsafe.Pointer(sdb))[:], sd)
+ oa.SecurityDescriptor = (*securityDescriptor)(unsafe.Pointer(sdb))
+ } else {
+ // Construct the default named pipe security descriptor.
+ var dacl uintptr
+ if err := rtlDefaultNpAcl(&dacl).Err(); err != nil {
+ return 0, fmt.Errorf("getting default named pipe ACL: %s", err)
+ }
+ defer localFree(dacl)
+
+ sdb := &securityDescriptor{
+ Revision: 1,
+ Control: cSE_DACL_PRESENT,
+ Dacl: dacl,
+ }
+ oa.SecurityDescriptor = sdb
+ }
+ }
+
+ typ := uint32(cFILE_PIPE_REJECT_REMOTE_CLIENTS)
+ if c.MessageMode {
+ typ |= cFILE_PIPE_MESSAGE_TYPE
+ }
+
+ disposition := uint32(cFILE_OPEN)
+ access := uint32(syscall.GENERIC_READ | syscall.GENERIC_WRITE | syscall.SYNCHRONIZE)
+ if first {
+ disposition = cFILE_CREATE
+ // By not asking for read or write access, the named pipe file system
+ // will put this pipe into an initially disconnected state, blocking
+ // client connections until the next call with first == false.
+ access = syscall.SYNCHRONIZE
+ }
+
+	timeout := int64(-50 * 10000) // 50ms as a relative NT timeout (negative, in 100ns units)
+
+ var (
+ h syscall.Handle
+ iosb ioStatusBlock
+ )
+ err = ntCreateNamedPipeFile(&h, access, &oa, &iosb, syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE, disposition, 0, typ, 0, 0, 0xffffffff, uint32(c.InputBufferSize), uint32(c.OutputBufferSize), &timeout).Err()
+ if err != nil {
+ return 0, &os.PathError{Op: "open", Path: path, Err: err}
+ }
+
+ runtime.KeepAlive(ntPath)
+ return h, nil
+}
+
+func (l *win32PipeListener) makeServerPipe() (*win32File, error) {
+ h, err := makeServerPipeHandle(l.path, nil, &l.config, false)
+ if err != nil {
+ return nil, err
+ }
+ f, err := makeWin32File(h)
+ if err != nil {
+ syscall.Close(h)
+ return nil, err
+ }
+ return f, nil
+}
+
+func (l *win32PipeListener) makeConnectedServerPipe() (*win32File, error) {
+ p, err := l.makeServerPipe()
+ if err != nil {
+ return nil, err
+ }
+
+ // Wait for the client to connect.
+ ch := make(chan error)
+ go func(p *win32File) {
+ ch <- connectPipe(p)
+ }(p)
+
+ select {
+ case err = <-ch:
+ if err != nil {
+ p.Close()
+ p = nil
+ }
+ case <-l.closeCh:
+ // Abort the connect request by closing the handle.
+ p.Close()
+ p = nil
+ err = <-ch
+ if err == nil || err == ErrFileClosed {
+ err = ErrPipeListenerClosed
+ }
+ }
+ return p, err
+}
+
+func (l *win32PipeListener) listenerRoutine() {
+ closed := false
+ for !closed {
+ select {
+ case <-l.closeCh:
+ closed = true
+ case responseCh := <-l.acceptCh:
+ var (
+ p *win32File
+ err error
+ )
+ for {
+ p, err = l.makeConnectedServerPipe()
+ // If the connection was immediately closed by the client, try
+ // again.
+ if err != cERROR_NO_DATA {
+ break
+ }
+ }
+ responseCh <- acceptResponse{p, err}
+ closed = err == ErrPipeListenerClosed
+ }
+ }
+ syscall.Close(l.firstHandle)
+ l.firstHandle = 0
+ // Notify Close() and Accept() callers that the handle has been closed.
+ close(l.doneCh)
+}
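The concurrency pattern in `listenerRoutine` — a single goroutine that owns the handle, an accept channel carrying per-call response channels, and a `closeCh`/`doneCh` pair for shutdown — can be sketched platform-neutrally. All names below are illustrative stand-ins (plain strings play the role of pipe handles); this is not go-winio API:

```go
package main

import (
	"errors"
	"fmt"
)

var errListenerClosed = errors.New("listener closed")

type acceptResult struct {
	conn string
	err  error
}

type toyListener struct {
	acceptCh chan chan acceptResult // each Accept call sends its own response channel
	closeCh  chan struct{}
	doneCh   chan struct{}
}

func newToyListener() *toyListener {
	l := &toyListener{
		acceptCh: make(chan chan acceptResult),
		closeCh:  make(chan struct{}),
		doneCh:   make(chan struct{}),
	}
	go l.routine()
	return l
}

// routine serializes accepts and shuts down when closeCh fires,
// mirroring the structure of listenerRoutine above.
func (l *toyListener) routine() {
	closed := false
	n := 0
	for !closed {
		select {
		case <-l.closeCh:
			closed = true
		case ch := <-l.acceptCh:
			n++
			ch <- acceptResult{conn: fmt.Sprintf("conn-%d", n)}
		}
	}
	close(l.doneCh) // wake any Accept/Close callers blocked on doneCh
}

func (l *toyListener) Accept() (string, error) {
	ch := make(chan acceptResult)
	select {
	case l.acceptCh <- ch:
		r := <-ch
		return r.conn, r.err
	case <-l.doneCh:
		return "", errListenerClosed
	}
}

func (l *toyListener) Close() {
	select {
	case l.closeCh <- struct{}{}:
		<-l.doneCh
	case <-l.doneCh: // already closed
	}
}

func main() {
	l := newToyListener()
	c, err := l.Accept()
	fmt.Println(c, err)
	l.Close()
	_, err = l.Accept()
	fmt.Println(err)
}
```

Because the routine is the only goroutine touching listener state, no mutex is needed; Accept and Close race only on channel operations, exactly as in the real listener.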
+
+// PipeConfig contains configuration for the pipe listener.
+type PipeConfig struct {
+ // SecurityDescriptor contains a Windows security descriptor in SDDL format.
+ SecurityDescriptor string
+
+ // MessageMode determines whether the pipe is in byte or message mode. In either
+ // case the pipe is read in byte mode by default. The only practical difference in
+ // this implementation is that CloseWrite() is only supported for message mode pipes;
+ // CloseWrite() is implemented as a zero-byte write, but zero-byte writes are only
+ // transferred to the reader (and returned as io.EOF in this implementation)
+ // when the pipe is in message mode.
+ MessageMode bool
+
+ // InputBufferSize specifies the size of the input buffer, in bytes.
+ InputBufferSize int32
+
+ // OutputBufferSize specifies the size of the output buffer, in bytes.
+ OutputBufferSize int32
+}
+
+// ListenPipe creates a listener on a Windows named pipe path, e.g. \\.\pipe\mypipe.
+// The pipe must not already exist.
+func ListenPipe(path string, c *PipeConfig) (net.Listener, error) {
+ var (
+ sd []byte
+ err error
+ )
+ if c == nil {
+ c = &PipeConfig{}
+ }
+ if c.SecurityDescriptor != "" {
+ sd, err = SddlToSecurityDescriptor(c.SecurityDescriptor)
+ if err != nil {
+ return nil, err
+ }
+ }
+ h, err := makeServerPipeHandle(path, sd, c, true)
+ if err != nil {
+ return nil, err
+ }
+ l := &win32PipeListener{
+ firstHandle: h,
+ path: path,
+ config: *c,
+ acceptCh: make(chan (chan acceptResponse)),
+ closeCh: make(chan int),
+ doneCh: make(chan int),
+ }
+ go l.listenerRoutine()
+ return l, nil
+}
+
+func connectPipe(p *win32File) error {
+ c, err := p.prepareIo()
+ if err != nil {
+ return err
+ }
+ defer p.wg.Done()
+
+ err = connectNamedPipe(p.handle, &c.o)
+ _, err = p.asyncIo(c, nil, 0, err)
+ if err != nil && err != cERROR_PIPE_CONNECTED {
+ return err
+ }
+ return nil
+}
+
+func (l *win32PipeListener) Accept() (net.Conn, error) {
+ ch := make(chan acceptResponse)
+ select {
+ case l.acceptCh <- ch:
+ response := <-ch
+ err := response.err
+ if err != nil {
+ return nil, err
+ }
+ if l.config.MessageMode {
+ return &win32MessageBytePipe{
+ win32Pipe: win32Pipe{win32File: response.f, path: l.path},
+ }, nil
+ }
+ return &win32Pipe{win32File: response.f, path: l.path}, nil
+ case <-l.doneCh:
+ return nil, ErrPipeListenerClosed
+ }
+}
+
+func (l *win32PipeListener) Close() error {
+ select {
+ case l.closeCh <- 1:
+ <-l.doneCh
+ case <-l.doneCh:
+ }
+ return nil
+}
+
+func (l *win32PipeListener) Addr() net.Addr {
+ return pipeAddress(l.path)
+}
diff --git a/vendor/github.com/Microsoft/go-winio/pkg/guid/guid.go b/vendor/github.com/Microsoft/go-winio/pkg/guid/guid.go
new file mode 100644
index 000000000..f497c0e39
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/pkg/guid/guid.go
@@ -0,0 +1,237 @@
+// +build windows
+
+// Package guid provides a GUID type. The backing structure for a GUID is
+// identical to that used by the golang.org/x/sys/windows GUID type.
+// There are two main binary encodings used for a GUID, the big-endian encoding,
+// and the Windows (mixed-endian) encoding. See here for details:
+// https://en.wikipedia.org/wiki/Universally_unique_identifier#Encoding
+package guid
+
+import (
+ "crypto/rand"
+ "crypto/sha1"
+ "encoding"
+ "encoding/binary"
+ "fmt"
+ "strconv"
+
+ "golang.org/x/sys/windows"
+)
+
+// Variant specifies which GUID variant (or "type") a GUID is. It determines
+// how the rest of the GUID is interpreted.
+type Variant uint8
+
+// The variants specified by RFC 4122.
+const (
+ // VariantUnknown specifies a GUID variant which does not conform to one of
+ // the variant encodings specified in RFC 4122.
+ VariantUnknown Variant = iota
+ VariantNCS
+ VariantRFC4122
+ VariantMicrosoft
+ VariantFuture
+)
+
+// Version specifies how the bits in the GUID were generated. For instance, a
+// version 4 GUID is randomly generated, and a version 5 is generated from the
+// hash of an input string.
+type Version uint8
+
+var _ = (encoding.TextMarshaler)(GUID{})
+var _ = (encoding.TextUnmarshaler)(&GUID{})
+
+// GUID represents a GUID/UUID. It has the same structure as
+// golang.org/x/sys/windows.GUID so that it can be used with functions expecting
+// that type. It is defined as its own type so that stringification and
+// marshaling can be supported. The representation matches that used by native
+// Windows code.
+type GUID windows.GUID
+
+// NewV4 returns a new version 4 (pseudorandom) GUID, as defined by RFC 4122.
+func NewV4() (GUID, error) {
+ var b [16]byte
+ if _, err := rand.Read(b[:]); err != nil {
+ return GUID{}, err
+ }
+
+ g := FromArray(b)
+ g.setVersion(4) // Version 4 means randomly generated.
+ g.setVariant(VariantRFC4122)
+
+ return g, nil
+}
+
+// NewV5 returns a new version 5 (generated from a string via SHA-1 hashing)
+// GUID, as defined by RFC 4122. The RFC is unclear on the encoding of the name,
+// and the sample code treats it as a series of bytes, so we do the same here.
+//
+// Some implementations, such as those found on Windows, treat the name as a
+// big-endian UTF16 stream of bytes. If that is desired, the string can be
+// encoded as such before being passed to this function.
+func NewV5(namespace GUID, name []byte) (GUID, error) {
+ b := sha1.New()
+ namespaceBytes := namespace.ToArray()
+ b.Write(namespaceBytes[:])
+ b.Write(name)
+
+ a := [16]byte{}
+ copy(a[:], b.Sum(nil))
+
+ g := FromArray(a)
+ g.setVersion(5) // Version 5 means generated from a string.
+ g.setVariant(VariantRFC4122)
+
+ return g, nil
+}
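The version-5 construction above — SHA-1 over the namespace's big-endian bytes followed by the name, truncated to 16 bytes, with the version nibble and RFC 4122 variant bits forced — can be sketched portably. The `v5Bytes` helper is illustrative (not this package's API); the 16-byte constant is the RFC 4122 DNS namespace:

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// v5Bytes sketches RFC 4122 version-5 generation: hash namespace+name,
// then stamp the version and variant bits into the truncated digest.
func v5Bytes(namespace [16]byte, name []byte) [16]byte {
	h := sha1.New()
	h.Write(namespace[:])
	h.Write(name)
	var out [16]byte
	copy(out[:], h.Sum(nil))
	out[6] = (out[6] & 0x0f) | 0x50 // version 5 in the high nibble of byte 6 (Data3's high byte)
	out[8] = (out[8] & 0x3f) | 0x80 // RFC 4122 variant: top two bits of Data4[0] set to 10
	return out
}

func main() {
	dns := [16]byte{0x6b, 0xa7, 0xb8, 0x10, 0x9d, 0xa0, 0x11, 0xd1,
		0x80, 0xb4, 0x00, 0xc0, 0x4f, 0xd4, 0x30, 0xc8}
	g := v5Bytes(dns, []byte("example.com"))
	fmt.Printf("version=%d variantOK=%v\n", g[6]>>4, g[8]&0xc0 == 0x80)
}
```

Byte 6 and byte 8 of the big-endian array correspond to the fields that `setVersion` and `setVariant` manipulate on the structured form.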
+
+func fromArray(b [16]byte, order binary.ByteOrder) GUID {
+ var g GUID
+ g.Data1 = order.Uint32(b[0:4])
+ g.Data2 = order.Uint16(b[4:6])
+ g.Data3 = order.Uint16(b[6:8])
+ copy(g.Data4[:], b[8:16])
+ return g
+}
+
+func (g GUID) toArray(order binary.ByteOrder) [16]byte {
+ b := [16]byte{}
+ order.PutUint32(b[0:4], g.Data1)
+ order.PutUint16(b[4:6], g.Data2)
+ order.PutUint16(b[6:8], g.Data3)
+ copy(b[8:16], g.Data4[:])
+ return b
+}
+
+// FromArray constructs a GUID from a big-endian encoding array of 16 bytes.
+func FromArray(b [16]byte) GUID {
+ return fromArray(b, binary.BigEndian)
+}
+
+// ToArray returns an array of 16 bytes representing the GUID in big-endian
+// encoding.
+func (g GUID) ToArray() [16]byte {
+ return g.toArray(binary.BigEndian)
+}
+
+// FromWindowsArray constructs a GUID from a Windows encoding array of bytes.
+func FromWindowsArray(b [16]byte) GUID {
+ return fromArray(b, binary.LittleEndian)
+}
+
+// ToWindowsArray returns an array of 16 bytes representing the GUID in Windows
+// encoding.
+func (g GUID) ToWindowsArray() [16]byte {
+ return g.toArray(binary.LittleEndian)
+}
+
+func (g GUID) String() string {
+ return fmt.Sprintf(
+ "%08x-%04x-%04x-%04x-%012x",
+ g.Data1,
+ g.Data2,
+ g.Data3,
+ g.Data4[:2],
+ g.Data4[2:])
+}
+
+// FromString parses a string containing a GUID and returns the GUID. The only
+// format currently supported is the `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`
+// format.
+func FromString(s string) (GUID, error) {
+ if len(s) != 36 {
+ return GUID{}, fmt.Errorf("invalid GUID %q", s)
+ }
+ if s[8] != '-' || s[13] != '-' || s[18] != '-' || s[23] != '-' {
+ return GUID{}, fmt.Errorf("invalid GUID %q", s)
+ }
+
+ var g GUID
+
+ data1, err := strconv.ParseUint(s[0:8], 16, 32)
+ if err != nil {
+ return GUID{}, fmt.Errorf("invalid GUID %q", s)
+ }
+ g.Data1 = uint32(data1)
+
+ data2, err := strconv.ParseUint(s[9:13], 16, 16)
+ if err != nil {
+ return GUID{}, fmt.Errorf("invalid GUID %q", s)
+ }
+ g.Data2 = uint16(data2)
+
+ data3, err := strconv.ParseUint(s[14:18], 16, 16)
+ if err != nil {
+ return GUID{}, fmt.Errorf("invalid GUID %q", s)
+ }
+ g.Data3 = uint16(data3)
+
+ for i, x := range []int{19, 21, 24, 26, 28, 30, 32, 34} {
+ v, err := strconv.ParseUint(s[x:x+2], 16, 8)
+ if err != nil {
+ return GUID{}, fmt.Errorf("invalid GUID %q", s)
+ }
+ g.Data4[i] = uint8(v)
+ }
+
+ return g, nil
+}
+
+func (g *GUID) setVariant(v Variant) {
+ d := g.Data4[0]
+ switch v {
+ case VariantNCS:
+ d = (d & 0x7f)
+ case VariantRFC4122:
+ d = (d & 0x3f) | 0x80
+ case VariantMicrosoft:
+ d = (d & 0x1f) | 0xc0
+ case VariantFuture:
+ d = (d & 0x0f) | 0xe0
+ case VariantUnknown:
+ fallthrough
+ default:
+ panic(fmt.Sprintf("invalid variant: %d", v))
+ }
+ g.Data4[0] = d
+}
+
+// Variant returns the GUID variant, as defined in RFC 4122.
+func (g GUID) Variant() Variant {
+ b := g.Data4[0]
+ if b&0x80 == 0 {
+ return VariantNCS
+ } else if b&0xc0 == 0x80 {
+ return VariantRFC4122
+ } else if b&0xe0 == 0xc0 {
+ return VariantMicrosoft
+ } else if b&0xe0 == 0xe0 {
+ return VariantFuture
+ }
+ return VariantUnknown
+}
+
+func (g *GUID) setVersion(v Version) {
+ g.Data3 = (g.Data3 & 0x0fff) | (uint16(v) << 12)
+}
+
+// Version returns the GUID version, as defined in RFC 4122.
+func (g GUID) Version() Version {
+ return Version((g.Data3 & 0xF000) >> 12)
+}
+
+// MarshalText returns the textual representation of the GUID.
+func (g GUID) MarshalText() ([]byte, error) {
+ return []byte(g.String()), nil
+}
+
+// UnmarshalText takes the textual representation of a GUID and unmarshals it
+// into this GUID.
+func (g *GUID) UnmarshalText(text []byte) error {
+ g2, err := FromString(string(text))
+ if err != nil {
+ return err
+ }
+ *g = g2
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/privilege.go b/vendor/github.com/Microsoft/go-winio/privilege.go
new file mode 100644
index 000000000..c3dd7c217
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/privilege.go
@@ -0,0 +1,203 @@
+// +build windows
+
+package winio
+
+import (
+ "bytes"
+ "encoding/binary"
+ "fmt"
+ "runtime"
+ "sync"
+ "syscall"
+ "unicode/utf16"
+
+ "golang.org/x/sys/windows"
+)
+
+//sys adjustTokenPrivileges(token windows.Token, releaseAll bool, input *byte, outputSize uint32, output *byte, requiredSize *uint32) (success bool, err error) [true] = advapi32.AdjustTokenPrivileges
+//sys impersonateSelf(level uint32) (err error) = advapi32.ImpersonateSelf
+//sys revertToSelf() (err error) = advapi32.RevertToSelf
+//sys openThreadToken(thread syscall.Handle, accessMask uint32, openAsSelf bool, token *windows.Token) (err error) = advapi32.OpenThreadToken
+//sys getCurrentThread() (h syscall.Handle) = GetCurrentThread
+//sys lookupPrivilegeValue(systemName string, name string, luid *uint64) (err error) = advapi32.LookupPrivilegeValueW
+//sys lookupPrivilegeName(systemName string, luid *uint64, buffer *uint16, size *uint32) (err error) = advapi32.LookupPrivilegeNameW
+//sys lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) = advapi32.LookupPrivilegeDisplayNameW
+
+const (
+ SE_PRIVILEGE_ENABLED = 2
+
+ ERROR_NOT_ALL_ASSIGNED syscall.Errno = 1300
+
+ SeBackupPrivilege = "SeBackupPrivilege"
+ SeRestorePrivilege = "SeRestorePrivilege"
+ SeSecurityPrivilege = "SeSecurityPrivilege"
+)
+
+const (
+ securityAnonymous = iota
+ securityIdentification
+ securityImpersonation
+ securityDelegation
+)
+
+var (
+ privNames = make(map[string]uint64)
+ privNameMutex sync.Mutex
+)
+
+// PrivilegeError represents an error enabling privileges.
+type PrivilegeError struct {
+ privileges []uint64
+}
+
+func (e *PrivilegeError) Error() string {
+ s := ""
+ if len(e.privileges) > 1 {
+ s = "Could not enable privileges "
+ } else {
+ s = "Could not enable privilege "
+ }
+ for i, p := range e.privileges {
+ if i != 0 {
+ s += ", "
+ }
+ s += `"`
+ s += getPrivilegeName(p)
+ s += `"`
+ }
+ return s
+}
+
+// RunWithPrivilege enables a single privilege for a function call.
+func RunWithPrivilege(name string, fn func() error) error {
+ return RunWithPrivileges([]string{name}, fn)
+}
+
+// RunWithPrivileges enables privileges for a function call.
+func RunWithPrivileges(names []string, fn func() error) error {
+ privileges, err := mapPrivileges(names)
+ if err != nil {
+ return err
+ }
+ runtime.LockOSThread()
+ defer runtime.UnlockOSThread()
+ token, err := newThreadToken()
+ if err != nil {
+ return err
+ }
+ defer releaseThreadToken(token)
+ err = adjustPrivileges(token, privileges, SE_PRIVILEGE_ENABLED)
+ if err != nil {
+ return err
+ }
+ return fn()
+}
+
+func mapPrivileges(names []string) ([]uint64, error) {
+ var privileges []uint64
+ privNameMutex.Lock()
+ defer privNameMutex.Unlock()
+ for _, name := range names {
+ p, ok := privNames[name]
+ if !ok {
+ err := lookupPrivilegeValue("", name, &p)
+ if err != nil {
+ return nil, err
+ }
+ privNames[name] = p
+ }
+ privileges = append(privileges, p)
+ }
+ return privileges, nil
+}
+
+// EnableProcessPrivileges enables privileges globally for the process.
+func EnableProcessPrivileges(names []string) error {
+ return enableDisableProcessPrivilege(names, SE_PRIVILEGE_ENABLED)
+}
+
+// DisableProcessPrivileges disables privileges globally for the process.
+func DisableProcessPrivileges(names []string) error {
+ return enableDisableProcessPrivilege(names, 0)
+}
+
+func enableDisableProcessPrivilege(names []string, action uint32) error {
+ privileges, err := mapPrivileges(names)
+ if err != nil {
+ return err
+ }
+
+ p, _ := windows.GetCurrentProcess()
+ var token windows.Token
+ err = windows.OpenProcessToken(p, windows.TOKEN_ADJUST_PRIVILEGES|windows.TOKEN_QUERY, &token)
+ if err != nil {
+ return err
+ }
+
+ defer token.Close()
+ return adjustPrivileges(token, privileges, action)
+}
+
+func adjustPrivileges(token windows.Token, privileges []uint64, action uint32) error {
+ var b bytes.Buffer
+ binary.Write(&b, binary.LittleEndian, uint32(len(privileges)))
+ for _, p := range privileges {
+ binary.Write(&b, binary.LittleEndian, p)
+ binary.Write(&b, binary.LittleEndian, action)
+ }
+ prevState := make([]byte, b.Len())
+ reqSize := uint32(0)
+ success, err := adjustTokenPrivileges(token, false, &b.Bytes()[0], uint32(len(prevState)), &prevState[0], &reqSize)
+ if !success {
+ return err
+ }
+ if err == ERROR_NOT_ALL_ASSIGNED {
+ return &PrivilegeError{privileges}
+ }
+ return nil
+}
+
+func getPrivilegeName(luid uint64) string {
+ var nameBuffer [256]uint16
+ bufSize := uint32(len(nameBuffer))
+ err := lookupPrivilegeName("", &luid, &nameBuffer[0], &bufSize)
+ if err != nil {
+ return fmt.Sprintf("<unknown privilege %d>", luid)
+ }
+
+ var displayNameBuffer [256]uint16
+ displayBufSize := uint32(len(displayNameBuffer))
+ var langID uint32
+ err = lookupPrivilegeDisplayName("", &nameBuffer[0], &displayNameBuffer[0], &displayBufSize, &langID)
+ if err != nil {
+ return fmt.Sprintf("<unknown privilege %s>", string(utf16.Decode(nameBuffer[:bufSize])))
+ }
+
+ return string(utf16.Decode(displayNameBuffer[:displayBufSize]))
+}
+
+func newThreadToken() (windows.Token, error) {
+ err := impersonateSelf(securityImpersonation)
+ if err != nil {
+ return 0, err
+ }
+
+ var token windows.Token
+ err = openThreadToken(getCurrentThread(), syscall.TOKEN_ADJUST_PRIVILEGES|syscall.TOKEN_QUERY, false, &token)
+ if err != nil {
+ rerr := revertToSelf()
+ if rerr != nil {
+ panic(rerr)
+ }
+ return 0, err
+ }
+ return token, nil
+}
+
+func releaseThreadToken(h windows.Token) {
+ err := revertToSelf()
+ if err != nil {
+ panic(err)
+ }
+ h.Close()
+}
diff --git a/vendor/github.com/Microsoft/go-winio/reparse.go b/vendor/github.com/Microsoft/go-winio/reparse.go
new file mode 100644
index 000000000..fc1ee4d3a
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/reparse.go
@@ -0,0 +1,128 @@
+package winio
+
+import (
+ "bytes"
+ "encoding/binary"
+ "fmt"
+ "strings"
+ "unicode/utf16"
+ "unsafe"
+)
+
+const (
+ reparseTagMountPoint = 0xA0000003
+ reparseTagSymlink = 0xA000000C
+)
+
+type reparseDataBuffer struct {
+ ReparseTag uint32
+ ReparseDataLength uint16
+ Reserved uint16
+ SubstituteNameOffset uint16
+ SubstituteNameLength uint16
+ PrintNameOffset uint16
+ PrintNameLength uint16
+}
+
+// ReparsePoint describes a Win32 symlink or mount point.
+type ReparsePoint struct {
+ Target string
+ IsMountPoint bool
+}
+
+// UnsupportedReparsePointError is returned when trying to decode a reparse
+// point that is neither a symlink nor a mount point.
+type UnsupportedReparsePointError struct {
+ Tag uint32
+}
+
+func (e *UnsupportedReparsePointError) Error() string {
+ return fmt.Sprintf("unsupported reparse point %x", e.Tag)
+}
+
+// DecodeReparsePoint decodes a Win32 REPARSE_DATA_BUFFER structure containing either a symlink
+// or a mount point.
+func DecodeReparsePoint(b []byte) (*ReparsePoint, error) {
+ tag := binary.LittleEndian.Uint32(b[0:4])
+ return DecodeReparsePointData(tag, b[8:])
+}
+
+// DecodeReparsePointData decodes a reparse point payload (the bytes following
+// the 8-byte REPARSE_DATA_BUFFER header) with the given reparse tag.
+func DecodeReparsePointData(tag uint32, b []byte) (*ReparsePoint, error) {
+ isMountPoint := false
+ switch tag {
+ case reparseTagMountPoint:
+ isMountPoint = true
+ case reparseTagSymlink:
+ default:
+ return nil, &UnsupportedReparsePointError{tag}
+ }
+ nameOffset := 8 + binary.LittleEndian.Uint16(b[4:6])
+ if !isMountPoint {
+ nameOffset += 4
+ }
+ nameLength := binary.LittleEndian.Uint16(b[6:8])
+ name := make([]uint16, nameLength/2)
+ err := binary.Read(bytes.NewReader(b[nameOffset:nameOffset+nameLength]), binary.LittleEndian, &name)
+ if err != nil {
+ return nil, err
+ }
+ return &ReparsePoint{string(utf16.Decode(name)), isMountPoint}, nil
+}
+
+func isDriveLetter(c byte) bool {
+ return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
+}
+
+// EncodeReparsePoint encodes a Win32 REPARSE_DATA_BUFFER structure describing a symlink or
+// mount point.
+func EncodeReparsePoint(rp *ReparsePoint) []byte {
+ // Generate an NT path and determine if this is a relative path.
+ var ntTarget string
+ relative := false
+ if strings.HasPrefix(rp.Target, `\\?\`) {
+ ntTarget = `\??\` + rp.Target[4:]
+ } else if strings.HasPrefix(rp.Target, `\\`) {
+ ntTarget = `\??\UNC\` + rp.Target[2:]
+ } else if len(rp.Target) >= 2 && isDriveLetter(rp.Target[0]) && rp.Target[1] == ':' {
+ ntTarget = `\??\` + rp.Target
+ } else {
+ ntTarget = rp.Target
+ relative = true
+ }
+
+ // The paths must be NUL-terminated even though they are counted strings.
+ target16 := utf16.Encode([]rune(rp.Target + "\x00"))
+ ntTarget16 := utf16.Encode([]rune(ntTarget + "\x00"))
+
+ size := int(unsafe.Sizeof(reparseDataBuffer{})) - 8
+ size += len(ntTarget16)*2 + len(target16)*2
+
+ tag := uint32(reparseTagMountPoint)
+ if !rp.IsMountPoint {
+ tag = reparseTagSymlink
+ size += 4 // Add room for symlink flags
+ }
+
+ data := reparseDataBuffer{
+ ReparseTag: tag,
+ ReparseDataLength: uint16(size),
+ SubstituteNameOffset: 0,
+ SubstituteNameLength: uint16((len(ntTarget16) - 1) * 2),
+ PrintNameOffset: uint16(len(ntTarget16) * 2),
+ PrintNameLength: uint16((len(target16) - 1) * 2),
+ }
+
+ var b bytes.Buffer
+ binary.Write(&b, binary.LittleEndian, &data)
+ if !rp.IsMountPoint {
+ flags := uint32(0)
+ if relative {
+ flags |= 1
+ }
+ binary.Write(&b, binary.LittleEndian, flags)
+ }
+
+ binary.Write(&b, binary.LittleEndian, ntTarget16)
+ binary.Write(&b, binary.LittleEndian, target16)
+ return b.Bytes()
+}
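The DOS-to-NT path classification at the top of `EncodeReparsePoint` can be factored into a pure function: extended paths swap the `\\?\` prefix for `\??\`, UNC paths gain `\??\UNC\`, drive-letter paths gain `\??\`, and anything else is left as a relative path. A sketch with an illustrative `toNTPath` helper (not part of the package API):

```go
package main

import (
	"fmt"
	"strings"
)

func isDriveLetter(c byte) bool {
	return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
}

// toNTPath returns the NT-namespace form of target and whether the
// result is a relative path, mirroring EncodeReparsePoint's logic.
func toNTPath(target string) (string, bool) {
	switch {
	case strings.HasPrefix(target, `\\?\`):
		return `\??\` + target[4:], false
	case strings.HasPrefix(target, `\\`):
		return `\??\UNC\` + target[2:], false
	case len(target) >= 2 && isDriveLetter(target[0]) && target[1] == ':':
		return `\??\` + target, false
	default:
		return target, true
	}
}

func main() {
	for _, t := range []string{`\\?\C:\x`, `\\srv\share`, `C:\x`, `sub\dir`} {
		nt, rel := toNTPath(t)
		fmt.Printf("%-12s -> %-18s relative=%v\n", t, nt, rel)
	}
}
```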
diff --git a/vendor/github.com/Microsoft/go-winio/sd.go b/vendor/github.com/Microsoft/go-winio/sd.go
new file mode 100644
index 000000000..db1b370a1
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/sd.go
@@ -0,0 +1,98 @@
+// +build windows
+
+package winio
+
+import (
+ "syscall"
+ "unsafe"
+)
+
+//sys lookupAccountName(systemName *uint16, accountName string, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) = advapi32.LookupAccountNameW
+//sys convertSidToStringSid(sid *byte, str **uint16) (err error) = advapi32.ConvertSidToStringSidW
+//sys convertStringSecurityDescriptorToSecurityDescriptor(str string, revision uint32, sd *uintptr, size *uint32) (err error) = advapi32.ConvertStringSecurityDescriptorToSecurityDescriptorW
+//sys convertSecurityDescriptorToStringSecurityDescriptor(sd *byte, revision uint32, secInfo uint32, sddl **uint16, sddlSize *uint32) (err error) = advapi32.ConvertSecurityDescriptorToStringSecurityDescriptorW
+//sys localFree(mem uintptr) = LocalFree
+//sys getSecurityDescriptorLength(sd uintptr) (len uint32) = advapi32.GetSecurityDescriptorLength
+
+const (
+ cERROR_NONE_MAPPED = syscall.Errno(1332)
+)
+
+// AccountLookupError is returned when an account name cannot be resolved to a SID.
+type AccountLookupError struct {
+ Name string
+ Err error
+}
+
+func (e *AccountLookupError) Error() string {
+ if e.Name == "" {
+ return "lookup account: empty account name specified"
+ }
+ var s string
+ switch e.Err {
+ case cERROR_NONE_MAPPED:
+ s = "not found"
+ default:
+ s = e.Err.Error()
+ }
+ return "lookup account " + e.Name + ": " + s
+}
+
+// SddlConversionError is returned when an SDDL string cannot be converted to a
+// security descriptor.
+type SddlConversionError struct {
+ Sddl string
+ Err error
+}
+
+func (e *SddlConversionError) Error() string {
+ return "convert " + e.Sddl + ": " + e.Err.Error()
+}
+
+// LookupSidByName looks up the SID of an account by name.
+func LookupSidByName(name string) (sid string, err error) {
+ if name == "" {
+ return "", &AccountLookupError{name, cERROR_NONE_MAPPED}
+ }
+
+ var sidSize, sidNameUse, refDomainSize uint32
+ err = lookupAccountName(nil, name, nil, &sidSize, nil, &refDomainSize, &sidNameUse)
+ if err != nil && err != syscall.ERROR_INSUFFICIENT_BUFFER {
+ return "", &AccountLookupError{name, err}
+ }
+ sidBuffer := make([]byte, sidSize)
+ refDomainBuffer := make([]uint16, refDomainSize)
+ err = lookupAccountName(nil, name, &sidBuffer[0], &sidSize, &refDomainBuffer[0], &refDomainSize, &sidNameUse)
+ if err != nil {
+ return "", &AccountLookupError{name, err}
+ }
+ var strBuffer *uint16
+ err = convertSidToStringSid(&sidBuffer[0], &strBuffer)
+ if err != nil {
+ return "", &AccountLookupError{name, err}
+ }
+ sid = syscall.UTF16ToString((*[0xffff]uint16)(unsafe.Pointer(strBuffer))[:])
+ localFree(uintptr(unsafe.Pointer(strBuffer)))
+ return sid, nil
+}
+
+// SddlToSecurityDescriptor converts an SDDL string to a binary security descriptor.
+func SddlToSecurityDescriptor(sddl string) ([]byte, error) {
+ var sdBuffer uintptr
+ err := convertStringSecurityDescriptorToSecurityDescriptor(sddl, 1, &sdBuffer, nil)
+ if err != nil {
+ return nil, &SddlConversionError{sddl, err}
+ }
+ defer localFree(sdBuffer)
+ sd := make([]byte, getSecurityDescriptorLength(sdBuffer))
+ copy(sd, (*[0xffff]byte)(unsafe.Pointer(sdBuffer))[:len(sd)])
+ return sd, nil
+}
+
+// SecurityDescriptorToSddl converts a binary security descriptor to its SDDL string form.
+func SecurityDescriptorToSddl(sd []byte) (string, error) {
+ var sddl *uint16
+ // The returned string length seems to include an arbitrary number of terminating NULs.
+ // Don't use it.
+ err := convertSecurityDescriptorToStringSecurityDescriptor(&sd[0], 1, 0xff, &sddl, nil)
+ if err != nil {
+ return "", err
+ }
+ defer localFree(uintptr(unsafe.Pointer(sddl)))
+ return syscall.UTF16ToString((*[0xffff]uint16)(unsafe.Pointer(sddl))[:]), nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/syscall.go b/vendor/github.com/Microsoft/go-winio/syscall.go
new file mode 100644
index 000000000..5955c99fd
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/syscall.go
@@ -0,0 +1,3 @@
+package winio
+
+//go:generate go run golang.org/x/sys/windows/mkwinsyscall -output zsyscall_windows.go file.go pipe.go sd.go fileinfo.go privilege.go backup.go hvsock.go
diff --git a/vendor/github.com/Microsoft/go-winio/zsyscall_windows.go b/vendor/github.com/Microsoft/go-winio/zsyscall_windows.go
new file mode 100644
index 000000000..176ff75e3
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/zsyscall_windows.go
@@ -0,0 +1,427 @@
+// Code generated by 'go generate'; DO NOT EDIT.
+
+package winio
+
+import (
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+var _ unsafe.Pointer
+
+// Do the interface allocations only once for common
+// Errno values.
+const (
+ errnoERROR_IO_PENDING = 997
+)
+
+var (
+ errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
+ errERROR_EINVAL error = syscall.EINVAL
+)
+
+// errnoErr returns common boxed Errno values, to prevent
+// allocations at runtime.
+func errnoErr(e syscall.Errno) error {
+ switch e {
+ case 0:
+ return errERROR_EINVAL
+ case errnoERROR_IO_PENDING:
+ return errERROR_IO_PENDING
+ }
+ // TODO: add more here, after collecting data on the common
+ // error values seen on Windows. (perhaps when running
+ // all.bat?)
+ return e
+}
+
+var (
+ modadvapi32 = windows.NewLazySystemDLL("advapi32.dll")
+ modkernel32 = windows.NewLazySystemDLL("kernel32.dll")
+ modntdll = windows.NewLazySystemDLL("ntdll.dll")
+ modws2_32 = windows.NewLazySystemDLL("ws2_32.dll")
+
+ procAdjustTokenPrivileges = modadvapi32.NewProc("AdjustTokenPrivileges")
+ procConvertSecurityDescriptorToStringSecurityDescriptorW = modadvapi32.NewProc("ConvertSecurityDescriptorToStringSecurityDescriptorW")
+ procConvertSidToStringSidW = modadvapi32.NewProc("ConvertSidToStringSidW")
+ procConvertStringSecurityDescriptorToSecurityDescriptorW = modadvapi32.NewProc("ConvertStringSecurityDescriptorToSecurityDescriptorW")
+ procGetSecurityDescriptorLength = modadvapi32.NewProc("GetSecurityDescriptorLength")
+ procImpersonateSelf = modadvapi32.NewProc("ImpersonateSelf")
+ procLookupAccountNameW = modadvapi32.NewProc("LookupAccountNameW")
+ procLookupPrivilegeDisplayNameW = modadvapi32.NewProc("LookupPrivilegeDisplayNameW")
+ procLookupPrivilegeNameW = modadvapi32.NewProc("LookupPrivilegeNameW")
+ procLookupPrivilegeValueW = modadvapi32.NewProc("LookupPrivilegeValueW")
+ procOpenThreadToken = modadvapi32.NewProc("OpenThreadToken")
+ procRevertToSelf = modadvapi32.NewProc("RevertToSelf")
+ procBackupRead = modkernel32.NewProc("BackupRead")
+ procBackupWrite = modkernel32.NewProc("BackupWrite")
+ procCancelIoEx = modkernel32.NewProc("CancelIoEx")
+ procConnectNamedPipe = modkernel32.NewProc("ConnectNamedPipe")
+ procCreateFileW = modkernel32.NewProc("CreateFileW")
+ procCreateIoCompletionPort = modkernel32.NewProc("CreateIoCompletionPort")
+ procCreateNamedPipeW = modkernel32.NewProc("CreateNamedPipeW")
+ procGetCurrentThread = modkernel32.NewProc("GetCurrentThread")
+ procGetNamedPipeHandleStateW = modkernel32.NewProc("GetNamedPipeHandleStateW")
+ procGetNamedPipeInfo = modkernel32.NewProc("GetNamedPipeInfo")
+ procGetQueuedCompletionStatus = modkernel32.NewProc("GetQueuedCompletionStatus")
+ procLocalAlloc = modkernel32.NewProc("LocalAlloc")
+ procLocalFree = modkernel32.NewProc("LocalFree")
+ procSetFileCompletionNotificationModes = modkernel32.NewProc("SetFileCompletionNotificationModes")
+ procNtCreateNamedPipeFile = modntdll.NewProc("NtCreateNamedPipeFile")
+ procRtlDefaultNpAcl = modntdll.NewProc("RtlDefaultNpAcl")
+ procRtlDosPathNameToNtPathName_U = modntdll.NewProc("RtlDosPathNameToNtPathName_U")
+ procRtlNtStatusToDosErrorNoTeb = modntdll.NewProc("RtlNtStatusToDosErrorNoTeb")
+ procWSAGetOverlappedResult = modws2_32.NewProc("WSAGetOverlappedResult")
+ procbind = modws2_32.NewProc("bind")
+)
+
+func adjustTokenPrivileges(token windows.Token, releaseAll bool, input *byte, outputSize uint32, output *byte, requiredSize *uint32) (success bool, err error) {
+ var _p0 uint32
+ if releaseAll {
+ _p0 = 1
+ }
+ r0, _, e1 := syscall.Syscall6(procAdjustTokenPrivileges.Addr(), 6, uintptr(token), uintptr(_p0), uintptr(unsafe.Pointer(input)), uintptr(outputSize), uintptr(unsafe.Pointer(output)), uintptr(unsafe.Pointer(requiredSize)))
+ success = r0 != 0
+ if true {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func convertSecurityDescriptorToStringSecurityDescriptor(sd *byte, revision uint32, secInfo uint32, sddl **uint16, sddlSize *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procConvertSecurityDescriptorToStringSecurityDescriptorW.Addr(), 5, uintptr(unsafe.Pointer(sd)), uintptr(revision), uintptr(secInfo), uintptr(unsafe.Pointer(sddl)), uintptr(unsafe.Pointer(sddlSize)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func convertSidToStringSid(sid *byte, str **uint16) (err error) {
+ r1, _, e1 := syscall.Syscall(procConvertSidToStringSidW.Addr(), 2, uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(str)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func convertStringSecurityDescriptorToSecurityDescriptor(str string, revision uint32, sd *uintptr, size *uint32) (err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(str)
+ if err != nil {
+ return
+ }
+ return _convertStringSecurityDescriptorToSecurityDescriptor(_p0, revision, sd, size)
+}
+
+func _convertStringSecurityDescriptorToSecurityDescriptor(str *uint16, revision uint32, sd *uintptr, size *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procConvertStringSecurityDescriptorToSecurityDescriptorW.Addr(), 4, uintptr(unsafe.Pointer(str)), uintptr(revision), uintptr(unsafe.Pointer(sd)), uintptr(unsafe.Pointer(size)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func getSecurityDescriptorLength(sd uintptr) (len uint32) {
+ r0, _, _ := syscall.Syscall(procGetSecurityDescriptorLength.Addr(), 1, uintptr(sd), 0, 0)
+ len = uint32(r0)
+ return
+}
+
+func impersonateSelf(level uint32) (err error) {
+ r1, _, e1 := syscall.Syscall(procImpersonateSelf.Addr(), 1, uintptr(level), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func lookupAccountName(systemName *uint16, accountName string, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(accountName)
+ if err != nil {
+ return
+ }
+ return _lookupAccountName(systemName, _p0, sid, sidSize, refDomain, refDomainSize, sidNameUse)
+}
+
+func _lookupAccountName(systemName *uint16, accountName *uint16, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall9(procLookupAccountNameW.Addr(), 7, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(accountName)), uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(sidSize)), uintptr(unsafe.Pointer(refDomain)), uintptr(unsafe.Pointer(refDomainSize)), uintptr(unsafe.Pointer(sidNameUse)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(systemName)
+ if err != nil {
+ return
+ }
+ return _lookupPrivilegeDisplayName(_p0, name, buffer, size, languageId)
+}
+
+func _lookupPrivilegeDisplayName(systemName *uint16, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procLookupPrivilegeDisplayNameW.Addr(), 5, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(buffer)), uintptr(unsafe.Pointer(size)), uintptr(unsafe.Pointer(languageId)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func lookupPrivilegeName(systemName string, luid *uint64, buffer *uint16, size *uint32) (err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(systemName)
+ if err != nil {
+ return
+ }
+ return _lookupPrivilegeName(_p0, luid, buffer, size)
+}
+
+func _lookupPrivilegeName(systemName *uint16, luid *uint64, buffer *uint16, size *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procLookupPrivilegeNameW.Addr(), 4, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(luid)), uintptr(unsafe.Pointer(buffer)), uintptr(unsafe.Pointer(size)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func lookupPrivilegeValue(systemName string, name string, luid *uint64) (err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(systemName)
+ if err != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, err = syscall.UTF16PtrFromString(name)
+ if err != nil {
+ return
+ }
+ return _lookupPrivilegeValue(_p0, _p1, luid)
+}
+
+func _lookupPrivilegeValue(systemName *uint16, name *uint16, luid *uint64) (err error) {
+ r1, _, e1 := syscall.Syscall(procLookupPrivilegeValueW.Addr(), 3, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(luid)))
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func openThreadToken(thread syscall.Handle, accessMask uint32, openAsSelf bool, token *windows.Token) (err error) {
+ var _p0 uint32
+ if openAsSelf {
+ _p0 = 1
+ }
+ r1, _, e1 := syscall.Syscall6(procOpenThreadToken.Addr(), 4, uintptr(thread), uintptr(accessMask), uintptr(_p0), uintptr(unsafe.Pointer(token)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func revertToSelf() (err error) {
+ r1, _, e1 := syscall.Syscall(procRevertToSelf.Addr(), 0, 0, 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func backupRead(h syscall.Handle, b []byte, bytesRead *uint32, abort bool, processSecurity bool, context *uintptr) (err error) {
+ var _p0 *byte
+ if len(b) > 0 {
+ _p0 = &b[0]
+ }
+ var _p1 uint32
+ if abort {
+ _p1 = 1
+ }
+ var _p2 uint32
+ if processSecurity {
+ _p2 = 1
+ }
+ r1, _, e1 := syscall.Syscall9(procBackupRead.Addr(), 7, uintptr(h), uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), uintptr(unsafe.Pointer(bytesRead)), uintptr(_p1), uintptr(_p2), uintptr(unsafe.Pointer(context)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func backupWrite(h syscall.Handle, b []byte, bytesWritten *uint32, abort bool, processSecurity bool, context *uintptr) (err error) {
+ var _p0 *byte
+ if len(b) > 0 {
+ _p0 = &b[0]
+ }
+ var _p1 uint32
+ if abort {
+ _p1 = 1
+ }
+ var _p2 uint32
+ if processSecurity {
+ _p2 = 1
+ }
+ r1, _, e1 := syscall.Syscall9(procBackupWrite.Addr(), 7, uintptr(h), uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), uintptr(unsafe.Pointer(bytesWritten)), uintptr(_p1), uintptr(_p2), uintptr(unsafe.Pointer(context)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func cancelIoEx(file syscall.Handle, o *syscall.Overlapped) (err error) {
+ r1, _, e1 := syscall.Syscall(procCancelIoEx.Addr(), 2, uintptr(file), uintptr(unsafe.Pointer(o)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func connectNamedPipe(pipe syscall.Handle, o *syscall.Overlapped) (err error) {
+ r1, _, e1 := syscall.Syscall(procConnectNamedPipe.Addr(), 2, uintptr(pipe), uintptr(unsafe.Pointer(o)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func createFile(name string, access uint32, mode uint32, sa *syscall.SecurityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(name)
+ if err != nil {
+ return
+ }
+ return _createFile(_p0, access, mode, sa, createmode, attrs, templatefile)
+}
+
+func _createFile(name *uint16, access uint32, mode uint32, sa *syscall.SecurityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) {
+ r0, _, e1 := syscall.Syscall9(procCreateFileW.Addr(), 7, uintptr(unsafe.Pointer(name)), uintptr(access), uintptr(mode), uintptr(unsafe.Pointer(sa)), uintptr(createmode), uintptr(attrs), uintptr(templatefile), 0, 0)
+ handle = syscall.Handle(r0)
+ if handle == syscall.InvalidHandle {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func createIoCompletionPort(file syscall.Handle, port syscall.Handle, key uintptr, threadCount uint32) (newport syscall.Handle, err error) {
+ r0, _, e1 := syscall.Syscall6(procCreateIoCompletionPort.Addr(), 4, uintptr(file), uintptr(port), uintptr(key), uintptr(threadCount), 0, 0)
+ newport = syscall.Handle(r0)
+ if newport == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func createNamedPipe(name string, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *syscall.SecurityAttributes) (handle syscall.Handle, err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(name)
+ if err != nil {
+ return
+ }
+ return _createNamedPipe(_p0, flags, pipeMode, maxInstances, outSize, inSize, defaultTimeout, sa)
+}
+
+func _createNamedPipe(name *uint16, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *syscall.SecurityAttributes) (handle syscall.Handle, err error) {
+ r0, _, e1 := syscall.Syscall9(procCreateNamedPipeW.Addr(), 8, uintptr(unsafe.Pointer(name)), uintptr(flags), uintptr(pipeMode), uintptr(maxInstances), uintptr(outSize), uintptr(inSize), uintptr(defaultTimeout), uintptr(unsafe.Pointer(sa)), 0)
+ handle = syscall.Handle(r0)
+ if handle == syscall.InvalidHandle {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func getCurrentThread() (h syscall.Handle) {
+ r0, _, _ := syscall.Syscall(procGetCurrentThread.Addr(), 0, 0, 0, 0)
+ h = syscall.Handle(r0)
+ return
+}
+
+func getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) {
+ r1, _, e1 := syscall.Syscall9(procGetNamedPipeHandleStateW.Addr(), 7, uintptr(pipe), uintptr(unsafe.Pointer(state)), uintptr(unsafe.Pointer(curInstances)), uintptr(unsafe.Pointer(maxCollectionCount)), uintptr(unsafe.Pointer(collectDataTimeout)), uintptr(unsafe.Pointer(userName)), uintptr(maxUserNameSize), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procGetNamedPipeInfo.Addr(), 5, uintptr(pipe), uintptr(unsafe.Pointer(flags)), uintptr(unsafe.Pointer(outSize)), uintptr(unsafe.Pointer(inSize)), uintptr(unsafe.Pointer(maxInstances)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func getQueuedCompletionStatus(port syscall.Handle, bytes *uint32, key *uintptr, o **ioOperation, timeout uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procGetQueuedCompletionStatus.Addr(), 5, uintptr(port), uintptr(unsafe.Pointer(bytes)), uintptr(unsafe.Pointer(key)), uintptr(unsafe.Pointer(o)), uintptr(timeout), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func localAlloc(uFlags uint32, length uint32) (ptr uintptr) {
+ r0, _, _ := syscall.Syscall(procLocalAlloc.Addr(), 2, uintptr(uFlags), uintptr(length), 0)
+ ptr = uintptr(r0)
+ return
+}
+
+func localFree(mem uintptr) {
+ syscall.Syscall(procLocalFree.Addr(), 1, uintptr(mem), 0, 0)
+ return
+}
+
+func setFileCompletionNotificationModes(h syscall.Handle, flags uint8) (err error) {
+ r1, _, e1 := syscall.Syscall(procSetFileCompletionNotificationModes.Addr(), 2, uintptr(h), uintptr(flags), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntstatus) {
+ r0, _, _ := syscall.Syscall15(procNtCreateNamedPipeFile.Addr(), 14, uintptr(unsafe.Pointer(pipe)), uintptr(access), uintptr(unsafe.Pointer(oa)), uintptr(unsafe.Pointer(iosb)), uintptr(share), uintptr(disposition), uintptr(options), uintptr(typ), uintptr(readMode), uintptr(completionMode), uintptr(maxInstances), uintptr(inboundQuota), uintptr(outputQuota), uintptr(unsafe.Pointer(timeout)), 0)
+ status = ntstatus(r0)
+ return
+}
+
+func rtlDefaultNpAcl(dacl *uintptr) (status ntstatus) {
+ r0, _, _ := syscall.Syscall(procRtlDefaultNpAcl.Addr(), 1, uintptr(unsafe.Pointer(dacl)), 0, 0)
+ status = ntstatus(r0)
+ return
+}
+
+func rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntstatus) {
+ r0, _, _ := syscall.Syscall6(procRtlDosPathNameToNtPathName_U.Addr(), 4, uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(ntName)), uintptr(filePart), uintptr(reserved), 0, 0)
+ status = ntstatus(r0)
+ return
+}
+
+func rtlNtStatusToDosError(status ntstatus) (winerr error) {
+ r0, _, _ := syscall.Syscall(procRtlNtStatusToDosErrorNoTeb.Addr(), 1, uintptr(status), 0, 0)
+ if r0 != 0 {
+ winerr = syscall.Errno(r0)
+ }
+ return
+}
+
+func wsaGetOverlappedResult(h syscall.Handle, o *syscall.Overlapped, bytes *uint32, wait bool, flags *uint32) (err error) {
+ var _p0 uint32
+ if wait {
+ _p0 = 1
+ }
+ r1, _, e1 := syscall.Syscall6(procWSAGetOverlappedResult.Addr(), 5, uintptr(h), uintptr(unsafe.Pointer(o)), uintptr(unsafe.Pointer(bytes)), uintptr(_p0), uintptr(unsafe.Pointer(flags)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func bind(s syscall.Handle, name unsafe.Pointer, namelen int32) (err error) {
+ r1, _, e1 := syscall.Syscall(procbind.Addr(), 3, uintptr(s), uintptr(name), uintptr(namelen))
+ if r1 == socketError {
+ err = errnoErr(e1)
+ }
+ return
+}
diff --git a/vendor/github.com/NYTimes/gziphandler/go.mod b/vendor/github.com/NYTimes/gziphandler/go.mod
deleted file mode 100644
index 801901274..000000000
--- a/vendor/github.com/NYTimes/gziphandler/go.mod
+++ /dev/null
@@ -1,5 +0,0 @@
-module github.com/NYTimes/gziphandler
-
-go 1.11
-
-require github.com/stretchr/testify v1.3.0
diff --git a/vendor/github.com/NYTimes/gziphandler/go.sum b/vendor/github.com/NYTimes/gziphandler/go.sum
deleted file mode 100644
index 4347755af..000000000
--- a/vendor/github.com/NYTimes/gziphandler/go.sum
+++ /dev/null
@@ -1,7 +0,0 @@
-github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
-github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
-github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
-github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
-github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
diff --git a/vendor/github.com/alessio/shellescape/.gitignore b/vendor/github.com/alessio/shellescape/.gitignore
new file mode 100644
index 000000000..4ba7c2d13
--- /dev/null
+++ b/vendor/github.com/alessio/shellescape/.gitignore
@@ -0,0 +1,28 @@
+# Compiled Object files, Static and Dynamic libs (Shared Objects)
+*.o
+*.a
+*.so
+
+# Folders
+_obj
+_test
+
+# Architecture specific extensions/prefixes
+*.[568vq]
+[568vq].out
+
+*.cgo1.go
+*.cgo2.c
+_cgo_defun.c
+_cgo_gotypes.go
+_cgo_export.*
+
+_testmain.go
+
+*.exe
+*.test
+*.prof
+
+.idea/
+
+escargs
diff --git a/vendor/github.com/alessio/shellescape/.golangci.yml b/vendor/github.com/alessio/shellescape/.golangci.yml
new file mode 100644
index 000000000..cd4a17e44
--- /dev/null
+++ b/vendor/github.com/alessio/shellescape/.golangci.yml
@@ -0,0 +1,64 @@
+# run:
+# # timeout for analysis, e.g. 30s, 5m, default is 1m
+# timeout: 5m
+
+linters:
+ disable-all: true
+ enable:
+ - bodyclose
+ - deadcode
+ - depguard
+ - dogsled
+ - goconst
+ - gocritic
+ - gofmt
+ - goimports
+ - golint
+ - gosec
+ - gosimple
+ - govet
+ - ineffassign
+ - interfacer
+ - maligned
+ - misspell
+ - prealloc
+ - scopelint
+ - staticcheck
+ - structcheck
+ - stylecheck
+ - typecheck
+ - unconvert
+ - unparam
+ - unused
+ - misspell
+ - wsl
+
+issues:
+ exclude-rules:
+ - text: "Use of weak random number generator"
+ linters:
+ - gosec
+ - text: "comment on exported var"
+ linters:
+ - golint
+ - text: "don't use an underscore in package name"
+ linters:
+ - golint
+ - text: "ST1003:"
+ linters:
+ - stylecheck
+ # FIXME: Disabled until golangci-lint updates stylecheck with this fix:
+ # https://github.com/dominikh/go-tools/issues/389
+ - text: "ST1016:"
+ linters:
+ - stylecheck
+
+linters-settings:
+ dogsled:
+ max-blank-identifiers: 3
+ maligned:
+ # print struct with more effective memory layout or not, false by default
+ suggest-new: true
+
+run:
+ tests: false
diff --git a/vendor/github.com/alessio/shellescape/.goreleaser.yml b/vendor/github.com/alessio/shellescape/.goreleaser.yml
new file mode 100644
index 000000000..064c9374d
--- /dev/null
+++ b/vendor/github.com/alessio/shellescape/.goreleaser.yml
@@ -0,0 +1,33 @@
+# This is an example goreleaser.yaml file with some sane defaults.
+# Make sure to check the documentation at http://goreleaser.com
+before:
+ hooks:
+ # You may remove this if you don't use go modules.
+ - go mod download
+ # you may remove this if you don't need go generate
+ - go generate ./...
+builds:
+ - env:
+ - CGO_ENABLED=0
+ main: ./cmd/escargs
+ goos:
+ - linux
+ - windows
+ - darwin
+archives:
+ - replacements:
+ darwin: Darwin
+ linux: Linux
+ windows: Windows
+ 386: i386
+ amd64: x86_64
+checksum:
+ name_template: 'checksums.txt'
+snapshot:
+ name_template: "{{ .Tag }}-next"
+changelog:
+ sort: asc
+ filters:
+ exclude:
+ - '^docs:'
+ - '^test:'
diff --git a/vendor/github.com/alessio/shellescape/AUTHORS b/vendor/github.com/alessio/shellescape/AUTHORS
new file mode 100644
index 000000000..4a647a6f4
--- /dev/null
+++ b/vendor/github.com/alessio/shellescape/AUTHORS
@@ -0,0 +1 @@
+Alessio Treglia
diff --git a/vendor/github.com/alessio/shellescape/CODE_OF_CONDUCT.md b/vendor/github.com/alessio/shellescape/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..e8eda6062
--- /dev/null
+++ b/vendor/github.com/alessio/shellescape/CODE_OF_CONDUCT.md
@@ -0,0 +1,76 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+In the interest of fostering an open and welcoming environment, we as
+contributors and maintainers pledge to making participation in our project and
+our community a harassment-free experience for everyone, regardless of age, body
+size, disability, ethnicity, sex characteristics, gender identity and expression,
+level of experience, education, socio-economic status, nationality, personal
+appearance, race, religion, or sexual identity and orientation.
+
+## Our Standards
+
+Examples of behavior that contributes to creating a positive environment
+include:
+
+* Using welcoming and inclusive language
+* Being respectful of differing viewpoints and experiences
+* Gracefully accepting constructive criticism
+* Focusing on what is best for the community
+* Showing empathy towards other community members
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or
+ advances
+* Trolling, insulting/derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or electronic
+ address, without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a
+ professional setting
+
+## Our Responsibilities
+
+Project maintainers are responsible for clarifying the standards of acceptable
+behavior and are expected to take appropriate and fair corrective action in
+response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or
+reject comments, commits, code, wiki edits, issues, and other contributions
+that are not aligned to this Code of Conduct, or to ban temporarily or
+permanently any contributor for other behaviors that they deem inappropriate,
+threatening, offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community. Examples of
+representing a project or community include using an official project e-mail
+address, posting via an official social media account, or acting as an appointed
+representative at an online or offline event. Representation of a project may be
+further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting the project team at alessio@debian.org. All
+complaints will be reviewed and investigated and will result in a response that
+is deemed necessary and appropriate to the circumstances. The project team is
+obligated to maintain confidentiality with regard to the reporter of an incident.
+Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good
+faith may face temporary or permanent repercussions as determined by other
+members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+
+[homepage]: https://www.contributor-covenant.org
+
+For answers to common questions about this code of conduct, see
+https://www.contributor-covenant.org/faq
diff --git a/vendor/github.com/alessio/shellescape/LICENSE b/vendor/github.com/alessio/shellescape/LICENSE
new file mode 100644
index 000000000..9f760679f
--- /dev/null
+++ b/vendor/github.com/alessio/shellescape/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2016 Alessio Treglia
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/vendor/github.com/alessio/shellescape/README.md b/vendor/github.com/alessio/shellescape/README.md
new file mode 100644
index 000000000..910bb253b
--- /dev/null
+++ b/vendor/github.com/alessio/shellescape/README.md
@@ -0,0 +1,61 @@
+![Build](https://github.com/alessio/shellescape/workflows/Build/badge.svg)
+[![GoDoc](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/alessio/shellescape?tab=overview)
+[![sourcegraph](https://sourcegraph.com/github.com/alessio/shellescape/-/badge.svg)](https://sourcegraph.com/github.com/alessio/shellescape)
+[![codecov](https://codecov.io/gh/alessio/shellescape/branch/master/graph/badge.svg)](https://codecov.io/gh/alessio/shellescape)
+[![Coverage](https://gocover.io/_badge/github.com/alessio/shellescape)](https://gocover.io/github.com/alessio/shellescape)
+[![Go Report Card](https://goreportcard.com/badge/github.com/alessio/shellescape)](https://goreportcard.com/report/github.com/alessio/shellescape)
+
+# shellescape
+Escape arbitrary strings for safe use as command line arguments.
+## Contents of the package
+
+This package provides the `shellescape.Quote()` function that returns a
+shell-escaped copy of a string. This functionality is helpful when
+the output of a Go program will be appended to, or used in the context
+of, shell programs' command line arguments.
+
+This work was inspired by the Python original package
+[shellescape](https://pypi.python.org/pypi/shellescape).
+
+## Usage
+
+The following snippet shows a typical unsafe idiom:
+
+```go
+package main
+
+import (
+ "fmt"
+ "os"
+)
+
+func main() {
+ fmt.Printf("ls -l %s\n", os.Args[1])
+}
+```
+_[See in Go Playground](https://play.golang.org/p/Wj2WoUfH_d)_
+
+Especially when creating pipelines of commands that might end up being
+executed by a shell interpreter, it is particularly unsafe not to
+escape arguments.
+
+`shellescape.Quote()` comes in handy to safely escape strings:
+
+```go
+package main
+
+import (
+ "fmt"
+ "os"
+
+ "gopkg.in/alessio/shellescape.v1"
+)
+
+func main() {
+ fmt.Printf("ls -l %s\n", shellescape.Quote(os.Args[1]))
+}
+```
+_[See in Go Playground](https://play.golang.org/p/HJ_CXgSrmp)_
+
+## The escargs utility
+__escargs__ reads lines from the standard input and prints shell-escaped versions. Unlike __xargs__, blank lines on the standard input are not discarded.
diff --git a/vendor/github.com/alessio/shellescape/shellescape.go b/vendor/github.com/alessio/shellescape/shellescape.go
new file mode 100644
index 000000000..dc34a556a
--- /dev/null
+++ b/vendor/github.com/alessio/shellescape/shellescape.go
@@ -0,0 +1,66 @@
+/*
+Package shellescape provides shellescape.Quote, which escapes arbitrary
+strings for safe use as command line arguments in the most common
+POSIX shells.
+
+The original Python package which this work was inspired by can be found
+at https://pypi.python.org/pypi/shellescape.
+*/
+package shellescape // import "gopkg.in/alessio/shellescape.v1"
+
+/*
+The functionality provided by shellescape.Quote could be helpful
+in those cases where it is known that the output of a Go program will
+be appended to/used in the context of shell programs' command line arguments.
+*/
+
+import (
+ "regexp"
+ "strings"
+ "unicode"
+)
+
+var pattern *regexp.Regexp
+
+func init() {
+ pattern = regexp.MustCompile(`[^\w@%+=:,./-]`)
+}
+
+// Quote returns a shell-escaped version of the string s. The returned value
+// is a string that can safely be used as one token in a shell command line.
+func Quote(s string) string {
+ if len(s) == 0 {
+ return "''"
+ }
+
+ if pattern.MatchString(s) {
+ return "'" + strings.ReplaceAll(s, "'", "'\"'\"'") + "'"
+ }
+
+ return s
+}
+
+// QuoteCommand returns a shell-escaped version of the slice of strings.
+// The returned value is a string that can safely be used as shell command arguments.
+func QuoteCommand(args []string) string {
+ l := make([]string, len(args))
+
+ for i, s := range args {
+ l[i] = Quote(s)
+ }
+
+ return strings.Join(l, " ")
+}
+
+// StripUnsafe removes non-printable runes, e.g. control characters, from
+// a string that is meant for consumption by terminals that support
+// control characters.
+func StripUnsafe(s string) string {
+ return strings.Map(func(r rune) rune {
+ if unicode.IsPrint(r) {
+ return r
+ }
+
+ return -1
+ }, s)
+}
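The quoting rule in `Quote` above can be illustrated with a small self-contained sketch. It reimplements the same regexp-and-replace pattern rather than importing the vendored package, so the names `quote` and `unsafePattern` are illustrative only:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// unsafePattern matches any character outside the shell-safe set,
// mirroring the character class used by shellescape.Quote.
var unsafePattern = regexp.MustCompile(`[^\w@%+=:,./-]`)

// quote mirrors the vendored Quote: empty strings become '',
// strings containing an unsafe character are wrapped in single
// quotes, with any embedded single quote rewritten as '"'"'.
func quote(s string) string {
	if len(s) == 0 {
		return "''"
	}
	if unsafePattern.MatchString(s) {
		return "'" + strings.ReplaceAll(s, "'", `'"'"'`) + "'"
	}
	return s
}

func main() {
	fmt.Println(quote("backup.tar.gz")) // only safe characters: unchanged
	fmt.Println(quote("two words"))     // contains a space: single-quoted
	fmt.Println(quote("it's"))          // embedded quote: rewritten safely
}
```

The `'"'"'` dance closes the single-quoted region, emits a double-quoted `'`, and reopens single quoting, which is the standard POSIX-shell idiom for embedding an apostrophe.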
diff --git a/vendor/github.com/cespare/xxhash/v2/go.mod b/vendor/github.com/cespare/xxhash/v2/go.mod
deleted file mode 100644
index 49f67608b..000000000
--- a/vendor/github.com/cespare/xxhash/v2/go.mod
+++ /dev/null
@@ -1,3 +0,0 @@
-module github.com/cespare/xxhash/v2
-
-go 1.11
diff --git a/vendor/github.com/containerd/containerd/LICENSE b/vendor/github.com/containerd/containerd/LICENSE
new file mode 100644
index 000000000..584149b6e
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/LICENSE
@@ -0,0 +1,191 @@
+
+ Apache License
+ Version 2.0, January 2004
+ https://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ Copyright The containerd Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ https://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/containerd/containerd/NOTICE b/vendor/github.com/containerd/containerd/NOTICE
new file mode 100644
index 000000000..8915f0277
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/NOTICE
@@ -0,0 +1,16 @@
+Docker
+Copyright 2012-2015 Docker, Inc.
+
+This product includes software developed at Docker, Inc. (https://www.docker.com).
+
+The following is courtesy of our legal counsel:
+
+
+Use and transfer of Docker may be subject to certain restrictions by the
+United States and other governments.
+It is your responsibility to ensure that your use and/or transfer does not
+violate applicable laws.
+
+For more information, please see https://www.bis.doc.gov
+
+See also https://www.apache.org/dev/crypto.html and/or seek legal counsel.
diff --git a/vendor/github.com/containerd/containerd/errdefs/errors.go b/vendor/github.com/containerd/containerd/errdefs/errors.go
new file mode 100644
index 000000000..05a35228c
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/errdefs/errors.go
@@ -0,0 +1,93 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+// Package errdefs defines the common errors used throughout containerd
+// packages.
+//
+// Use with errors.Wrap and errors.Wrapf to add context to an error.
+//
+// To detect an error class, use the IsXXX functions to tell whether an error
+// is of a certain type.
+//
+// The functions ToGRPC and FromGRPC can be used to map server-side and
+// client-side errors to the correct types.
+package errdefs
+
+import (
+ "context"
+
+ "github.com/pkg/errors"
+)
+
+// Definitions of common error types used throughout containerd. All containerd
+// errors returned by most packages will map into one of these error classes.
+// Packages should return errors of these types when they want to instruct a
+// client to take a particular action.
+//
+// For the most part, we just try to provide local grpc errors. Most conditions
+// map very well to those defined by grpc.
+var (
+ ErrUnknown = errors.New("unknown") // used internally to represent a missed mapping.
+ ErrInvalidArgument = errors.New("invalid argument")
+ ErrNotFound = errors.New("not found")
+ ErrAlreadyExists = errors.New("already exists")
+ ErrFailedPrecondition = errors.New("failed precondition")
+ ErrUnavailable = errors.New("unavailable")
+ ErrNotImplemented = errors.New("not implemented") // represents not supported and unimplemented
+)
+
+// IsInvalidArgument returns true if the error is due to an invalid argument
+func IsInvalidArgument(err error) bool {
+ return errors.Is(err, ErrInvalidArgument)
+}
+
+// IsNotFound returns true if the error is due to a missing object
+func IsNotFound(err error) bool {
+ return errors.Is(err, ErrNotFound)
+}
+
+// IsAlreadyExists returns true if the error is due to an already existing
+// metadata item
+func IsAlreadyExists(err error) bool {
+ return errors.Is(err, ErrAlreadyExists)
+}
+
+// IsFailedPrecondition returns true if an operation could not proceed due to
+// the lack of a particular condition
+func IsFailedPrecondition(err error) bool {
+ return errors.Is(err, ErrFailedPrecondition)
+}
+
+// IsUnavailable returns true if the error is due to a resource being unavailable
+func IsUnavailable(err error) bool {
+ return errors.Is(err, ErrUnavailable)
+}
+
+// IsNotImplemented returns true if the error is due to not being implemented
+func IsNotImplemented(err error) bool {
+ return errors.Is(err, ErrNotImplemented)
+}
+
+// IsCanceled returns true if the error is due to `context.Canceled`.
+func IsCanceled(err error) bool {
+ return errors.Is(err, context.Canceled)
+}
+
+// IsDeadlineExceeded returns true if the error is due to
+// `context.DeadlineExceeded`.
+func IsDeadlineExceeded(err error) bool {
+ return errors.Is(err, context.DeadlineExceeded)
+}
diff --git a/vendor/github.com/containerd/containerd/errdefs/grpc.go b/vendor/github.com/containerd/containerd/errdefs/grpc.go
new file mode 100644
index 000000000..209f63bd0
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/errdefs/grpc.go
@@ -0,0 +1,147 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package errdefs
+
+import (
+ "context"
+ "strings"
+
+ "github.com/pkg/errors"
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
+)
+
+// ToGRPC will attempt to map the backend containerd error into a grpc error,
+// using the original error message as a description.
+//
+// Further information may be extracted from certain errors depending on their
+// type.
+//
+// If the error is unmapped, the original error will be returned to be handled
+// by the regular grpc error handling stack.
+func ToGRPC(err error) error {
+ if err == nil {
+ return nil
+ }
+
+ if isGRPCError(err) {
+ // error has already been mapped to grpc
+ return err
+ }
+
+ switch {
+ case IsInvalidArgument(err):
+ return status.Errorf(codes.InvalidArgument, err.Error())
+ case IsNotFound(err):
+ return status.Errorf(codes.NotFound, err.Error())
+ case IsAlreadyExists(err):
+ return status.Errorf(codes.AlreadyExists, err.Error())
+ case IsFailedPrecondition(err):
+ return status.Errorf(codes.FailedPrecondition, err.Error())
+ case IsUnavailable(err):
+ return status.Errorf(codes.Unavailable, err.Error())
+ case IsNotImplemented(err):
+ return status.Errorf(codes.Unimplemented, err.Error())
+ case IsCanceled(err):
+ return status.Errorf(codes.Canceled, err.Error())
+ case IsDeadlineExceeded(err):
+ return status.Errorf(codes.DeadlineExceeded, err.Error())
+ }
+
+ return err
+}
+
+// ToGRPCf maps the error to grpc error codes, assembling the formatting string
+// and combining it with the target error string.
+//
+// This is equivalent to errdefs.ToGRPC(errors.Wrapf(err, format, args...))
+func ToGRPCf(err error, format string, args ...interface{}) error {
+ return ToGRPC(errors.Wrapf(err, format, args...))
+}
+
+// FromGRPC returns the underlying error from a grpc service based on the grpc error code
+func FromGRPC(err error) error {
+ if err == nil {
+ return nil
+ }
+
+ var cls error // divide these into error classes, becomes the cause
+
+ switch code(err) {
+ case codes.InvalidArgument:
+ cls = ErrInvalidArgument
+ case codes.AlreadyExists:
+ cls = ErrAlreadyExists
+ case codes.NotFound:
+ cls = ErrNotFound
+ case codes.Unavailable:
+ cls = ErrUnavailable
+ case codes.FailedPrecondition:
+ cls = ErrFailedPrecondition
+ case codes.Unimplemented:
+ cls = ErrNotImplemented
+ case codes.Canceled:
+ cls = context.Canceled
+ case codes.DeadlineExceeded:
+ cls = context.DeadlineExceeded
+ default:
+ cls = ErrUnknown
+ }
+
+ msg := rebaseMessage(cls, err)
+ if msg != "" {
+ err = errors.Wrap(cls, msg)
+ } else {
+ err = errors.WithStack(cls)
+ }
+
+ return err
+}
+
+// rebaseMessage removes the repeated error class from the end of an error
+// string. This happens when an error is sent over grpc and then remapped.
+//
+// Effectively, we just remove the string of cls from the end of err if it
+// appears there.
+func rebaseMessage(cls error, err error) string {
+ desc := errDesc(err)
+ clss := cls.Error()
+ if desc == clss {
+ return ""
+ }
+
+ return strings.TrimSuffix(desc, ": "+clss)
+}
+
+func isGRPCError(err error) bool {
+ _, ok := status.FromError(err)
+ return ok
+}
+
+func code(err error) codes.Code {
+ if s, ok := status.FromError(err); ok {
+ return s.Code()
+ }
+ return codes.Unknown
+}
+
+func errDesc(err error) string {
+ if s, ok := status.FromError(err); ok {
+ return s.Message()
+ }
+ return err.Error()
+}
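The de-duplication that `FromGRPC` performs via `rebaseMessage` is a plain suffix trim: once an error has crossed gRPC, its class string has been appended to the description, so it is stripped before re-wrapping. A standalone sketch of that rule (function signature simplified to strings for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// rebaseMessage trims a trailing ": <class>" repetition from desc, the same
// de-duplication FromGRPC applies after it picks an error class.
func rebaseMessage(class, desc string) string {
	if desc == class {
		// The description carries no extra context; drop it entirely.
		return ""
	}
	return strings.TrimSuffix(desc, ": "+class)
}

func main() {
	fmt.Println(rebaseMessage("not found", "image busybox: not found")) // image busybox
	fmt.Println(rebaseMessage("not found", "not found") == "")         // true
}
```

Without this trim, an error bounced through several gRPC hops would accumulate one `": not found"` suffix per hop.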
diff --git a/vendor/github.com/containerd/containerd/log/context.go b/vendor/github.com/containerd/containerd/log/context.go
new file mode 100644
index 000000000..37b6a7d1c
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/log/context.go
@@ -0,0 +1,68 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package log
+
+import (
+ "context"
+
+ "github.com/sirupsen/logrus"
+)
+
+var (
+ // G is an alias for GetLogger.
+ //
+ // We may want to define this locally to a package to get package tagged log
+ // messages.
+ G = GetLogger
+
+ // L is an alias for the standard logger.
+ L = logrus.NewEntry(logrus.StandardLogger())
+)
+
+type (
+ loggerKey struct{}
+)
+
+const (
+ // RFC3339NanoFixed is time.RFC3339Nano with nanoseconds padded using zeros to
+ // ensure the formatted time is always the same number of characters.
+ RFC3339NanoFixed = "2006-01-02T15:04:05.000000000Z07:00"
+
+ // TextFormat represents the text logging format
+ TextFormat = "text"
+
+ // JSONFormat represents the JSON logging format
+ JSONFormat = "json"
+)
+
+// WithLogger returns a new context with the provided logger. Use in
+// combination with logger.WithField(s) for great effect.
+func WithLogger(ctx context.Context, logger *logrus.Entry) context.Context {
+ return context.WithValue(ctx, loggerKey{}, logger)
+}
+
+// GetLogger retrieves the current logger from the context. If no logger is
+// available, the default logger is returned.
+func GetLogger(ctx context.Context) *logrus.Entry {
+ logger := ctx.Value(loggerKey{})
+
+ if logger == nil {
+ return L
+ }
+
+ return logger.(*logrus.Entry)
+}
diff --git a/vendor/github.com/containerd/containerd/platforms/compare.go b/vendor/github.com/containerd/containerd/platforms/compare.go
new file mode 100644
index 000000000..c7657e186
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/platforms/compare.go
@@ -0,0 +1,193 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "strconv"
+ "strings"
+
+ specs "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// MatchComparer is able to match and compare platforms to
+// filter and sort platforms.
+type MatchComparer interface {
+ Matcher
+
+ Less(specs.Platform, specs.Platform) bool
+}
+
+// platformVector returns an (ordered) vector of appropriate specs.Platform
+// objects to try matching for the given platform object (see platforms.Only).
+func platformVector(platform specs.Platform) []specs.Platform {
+ vector := []specs.Platform{platform}
+
+ switch platform.Architecture {
+ case "amd64":
+ vector = append(vector, specs.Platform{
+ Architecture: "386",
+ OS: platform.OS,
+ OSVersion: platform.OSVersion,
+ OSFeatures: platform.OSFeatures,
+ Variant: platform.Variant,
+ })
+ case "arm":
+ if armVersion, err := strconv.Atoi(strings.TrimPrefix(platform.Variant, "v")); err == nil && armVersion > 5 {
+ for armVersion--; armVersion >= 5; armVersion-- {
+ vector = append(vector, specs.Platform{
+ Architecture: platform.Architecture,
+ OS: platform.OS,
+ OSVersion: platform.OSVersion,
+ OSFeatures: platform.OSFeatures,
+ Variant: "v" + strconv.Itoa(armVersion),
+ })
+ }
+ }
+ case "arm64":
+ variant := platform.Variant
+ if variant == "" {
+ variant = "v8"
+ }
+ vector = append(vector, platformVector(specs.Platform{
+ Architecture: "arm",
+ OS: platform.OS,
+ OSVersion: platform.OSVersion,
+ OSFeatures: platform.OSFeatures,
+ Variant: variant,
+ })...)
+ }
+
+ return vector
+}
+
+// Only returns a match comparer for a single platform
+// using default resolution logic for the platform.
+//
+// For arm/v8, will also match arm/v7, arm/v6 and arm/v5
+// For arm/v7, will also match arm/v6 and arm/v5
+// For arm/v6, will also match arm/v5
+// For amd64, will also match 386
+func Only(platform specs.Platform) MatchComparer {
+ return Ordered(platformVector(Normalize(platform))...)
+}
+
+// OnlyStrict returns a match comparer for a single platform.
+//
+// Unlike Only, OnlyStrict does not match sub platforms.
+// So, "arm/vN" will not match "arm/vM" where M < N,
+// and "amd64" will not also match "386".
+//
+// OnlyStrict matches non-canonical forms.
+// So, "arm64" matches "arm64/v8".
+func OnlyStrict(platform specs.Platform) MatchComparer {
+ return Ordered(Normalize(platform))
+}
+
+// Ordered returns a platform MatchComparer which matches any of the platforms
+// but orders them in the order they are provided.
+func Ordered(platforms ...specs.Platform) MatchComparer {
+ matchers := make([]Matcher, len(platforms))
+ for i := range platforms {
+ matchers[i] = NewMatcher(platforms[i])
+ }
+ return orderedPlatformComparer{
+ matchers: matchers,
+ }
+}
+
+// Any returns a platform MatchComparer which matches any of the platforms
+// with no preference for ordering.
+func Any(platforms ...specs.Platform) MatchComparer {
+ matchers := make([]Matcher, len(platforms))
+ for i := range platforms {
+ matchers[i] = NewMatcher(platforms[i])
+ }
+ return anyPlatformComparer{
+ matchers: matchers,
+ }
+}
+
+// All is a platform MatchComparer which matches all platforms
+// with preference for ordering.
+var All MatchComparer = allPlatformComparer{}
+
+type orderedPlatformComparer struct {
+ matchers []Matcher
+}
+
+func (c orderedPlatformComparer) Match(platform specs.Platform) bool {
+ for _, m := range c.matchers {
+ if m.Match(platform) {
+ return true
+ }
+ }
+ return false
+}
+
+func (c orderedPlatformComparer) Less(p1 specs.Platform, p2 specs.Platform) bool {
+ for _, m := range c.matchers {
+ p1m := m.Match(p1)
+ p2m := m.Match(p2)
+ if p1m && !p2m {
+ return true
+ }
+ if p1m || p2m {
+ return false
+ }
+ }
+ return false
+}
+
+type anyPlatformComparer struct {
+ matchers []Matcher
+}
+
+func (c anyPlatformComparer) Match(platform specs.Platform) bool {
+ for _, m := range c.matchers {
+ if m.Match(platform) {
+ return true
+ }
+ }
+ return false
+}
+
+func (c anyPlatformComparer) Less(p1, p2 specs.Platform) bool {
+ var p1m, p2m bool
+ for _, m := range c.matchers {
+ if !p1m && m.Match(p1) {
+ p1m = true
+ }
+ if !p2m && m.Match(p2) {
+ p2m = true
+ }
+ if p1m && p2m {
+ return false
+ }
+ }
+ // If one matches and the other does not, sort the match first
+ return p1m && !p2m
+}
+
+type allPlatformComparer struct{}
+
+func (allPlatformComparer) Match(specs.Platform) bool {
+ return true
+}
+
+func (allPlatformComparer) Less(specs.Platform, specs.Platform) bool {
+ return false
+}
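The arm branch of `platformVector` is the part worth tracing: a request for `arm/vN` expands to every variant down to `v5`, highest first, which is what lets `Only("arm/v8")` also match `v7`, `v6`, and `v5` images. A stdlib-only sketch of just that expansion (the helper name `armFallbacks` is illustrative):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// armFallbacks lists the variants an arm/<variant> request may fall back to,
// highest first, mirroring what platformVector computes for "arm".
func armFallbacks(variant string) []string {
	out := []string{variant}
	v, err := strconv.Atoi(strings.TrimPrefix(variant, "v"))
	if err != nil || v <= 5 {
		// Unparseable or already at the floor: no fallbacks.
		return out
	}
	for v--; v >= 5; v-- {
		out = append(out, "v"+strconv.Itoa(v))
	}
	return out
}

func main() {
	fmt.Println(armFallbacks("v8")) // [v8 v7 v6 v5]
	fmt.Println(armFallbacks("v5")) // [v5]
}
```

Feeding the resulting vector to `Ordered` preserves this preference: an exact-variant image wins, then each successively older variant.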
diff --git a/vendor/github.com/containerd/containerd/platforms/cpuinfo.go b/vendor/github.com/containerd/containerd/platforms/cpuinfo.go
new file mode 100644
index 000000000..4a7177e31
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/platforms/cpuinfo.go
@@ -0,0 +1,131 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "bufio"
+ "os"
+ "runtime"
+ "strings"
+ "sync"
+
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/log"
+ "github.com/pkg/errors"
+)
+
+// The ARM instruction set architecture variant, e.g. v7 or v8.
+// Don't use this value directly; call cpuVariant() instead.
+var cpuVariantValue string
+
+var cpuVariantOnce sync.Once
+
+func cpuVariant() string {
+ cpuVariantOnce.Do(func() {
+ if isArmArch(runtime.GOARCH) {
+ cpuVariantValue = getCPUVariant()
+ }
+ })
+ return cpuVariantValue
+}
+
+// For Linux, the kernel has already detected the ABI, ISA and Features.
+// So we don't need to access the ARM registers to detect platform information
+// by ourselves. We can just parse this information from /proc/cpuinfo.
+func getCPUInfo(pattern string) (info string, err error) {
+ if !isLinuxOS(runtime.GOOS) {
+ return "", errors.Wrapf(errdefs.ErrNotImplemented, "getCPUInfo for OS %s", runtime.GOOS)
+ }
+
+ cpuinfo, err := os.Open("/proc/cpuinfo")
+ if err != nil {
+ return "", err
+ }
+ defer cpuinfo.Close()
+
+ // Parse /proc/cpuinfo line by line. For SMP SoCs, parsing
+ // the first core is enough.
+ scanner := bufio.NewScanner(cpuinfo)
+ for scanner.Scan() {
+ newline := scanner.Text()
+ list := strings.Split(newline, ":")
+
+ if len(list) > 1 && strings.EqualFold(strings.TrimSpace(list[0]), pattern) {
+ return strings.TrimSpace(list[1]), nil
+ }
+ }
+
+ // Check whether the scanner encountered errors
+ err = scanner.Err()
+ if err != nil {
+ return "", err
+ }
+
+ return "", errors.Wrapf(errdefs.ErrNotFound, "getCPUInfo for pattern: %s", pattern)
+}
+
+func getCPUVariant() string {
+ if runtime.GOOS == "windows" || runtime.GOOS == "darwin" {
+ // Windows/Darwin only supports v7 for ARM32 and v8 for ARM64 and so we can use
+ // runtime.GOARCH to determine the variants
+ var variant string
+ switch runtime.GOARCH {
+ case "arm64":
+ variant = "v8"
+ case "arm":
+ variant = "v7"
+ default:
+ variant = "unknown"
+ }
+
+ return variant
+ }
+
+ variant, err := getCPUInfo("Cpu architecture")
+ if err != nil {
+ log.L.WithError(err).Error("failure getting variant")
+ return ""
+ }
+
+ // handle edge case for Raspberry Pi ARMv6 devices (which due to a kernel quirk, report "CPU architecture: 7")
+ // https://www.raspberrypi.org/forums/viewtopic.php?t=12614
+ if runtime.GOARCH == "arm" && variant == "7" {
+ model, err := getCPUInfo("model name")
+ if err == nil && strings.HasPrefix(strings.ToLower(model), "armv6-compatible") {
+ variant = "6"
+ }
+ }
+
+ switch strings.ToLower(variant) {
+ case "8", "aarch64":
+ variant = "v8"
+ case "7", "7m", "?(12)", "?(13)", "?(14)", "?(15)", "?(16)", "?(17)":
+ variant = "v7"
+ case "6", "6tej":
+ variant = "v6"
+ case "5", "5t", "5te", "5tej":
+ variant = "v5"
+ case "4", "4t":
+ variant = "v4"
+ case "3":
+ variant = "v3"
+ default:
+ variant = "unknown"
+ }
+
+ return variant
+}
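The `/proc/cpuinfo` lookup in `getCPUInfo` is a line scan for the first `key : value` pair whose key matches the pattern case-insensitively. A portable sketch that reads from a string instead of the real file, so it runs on any OS (the sample text and `findCPUInfo` name are illustrative; `SplitN` is used here where the original uses `Split`):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// findCPUInfo scans cpuinfo-style "key : value" text for the first line
// whose key equals pattern (case-insensitive), as getCPUInfo does with
// /proc/cpuinfo.
func findCPUInfo(cpuinfo, pattern string) (string, bool) {
	s := bufio.NewScanner(strings.NewReader(cpuinfo))
	for s.Scan() {
		list := strings.SplitN(s.Text(), ":", 2)
		if len(list) == 2 && strings.EqualFold(strings.TrimSpace(list[0]), pattern) {
			return strings.TrimSpace(list[1]), true
		}
	}
	return "", false
}

func main() {
	sample := "processor\t: 0\nCPU architecture: 8\nmodel name\t: ARMv8 Processor"
	v, _ := findCPUInfo(sample, "cpu architecture")
	fmt.Println(v) // 8
}
```

The raw value (`"8"`, `"aarch64"`, ...) then goes through the switch in `getCPUVariant` to become a canonical `v8`-style variant.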
diff --git a/vendor/github.com/containerd/containerd/platforms/database.go b/vendor/github.com/containerd/containerd/platforms/database.go
new file mode 100644
index 000000000..6ede94061
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/platforms/database.go
@@ -0,0 +1,114 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "runtime"
+ "strings"
+)
+
+// isLinuxOS returns true if the operating system is Linux.
+//
+// The OS value should be normalized before calling this function.
+func isLinuxOS(os string) bool {
+ return os == "linux"
+}
+
+// These functions are generated from https://golang.org/src/go/build/syslist.go.
+//
+// We use switch statements because they are slightly faster than map lookups
+// and use a little less memory.
+
+// isKnownOS returns true if we know about the operating system.
+//
+// The OS value should be normalized before calling this function.
+func isKnownOS(os string) bool {
+ switch os {
+ case "aix", "android", "darwin", "dragonfly", "freebsd", "hurd", "illumos", "js", "linux", "nacl", "netbsd", "openbsd", "plan9", "solaris", "windows", "zos":
+ return true
+ }
+ return false
+}
+
+// isArmArch returns true if the architecture is ARM.
+//
+// The arch value should be normalized before being passed to this function.
+func isArmArch(arch string) bool {
+ switch arch {
+ case "arm", "arm64":
+ return true
+ }
+ return false
+}
+
+// isKnownArch returns true if we know about the architecture.
+//
+// The arch value should be normalized before being passed to this function.
+func isKnownArch(arch string) bool {
+ switch arch {
+ case "386", "amd64", "amd64p32", "arm", "armbe", "arm64", "arm64be", "ppc64", "ppc64le", "mips", "mipsle", "mips64", "mips64le", "mips64p32", "mips64p32le", "ppc", "riscv", "riscv64", "s390", "s390x", "sparc", "sparc64", "wasm":
+ return true
+ }
+ return false
+}
+
+func normalizeOS(os string) string {
+ if os == "" {
+ return runtime.GOOS
+ }
+ os = strings.ToLower(os)
+
+ switch os {
+ case "macos":
+ os = "darwin"
+ }
+ return os
+}
+
+// normalizeArch normalizes the architecture.
+func normalizeArch(arch, variant string) (string, string) {
+ arch, variant = strings.ToLower(arch), strings.ToLower(variant)
+ switch arch {
+ case "i386":
+ arch = "386"
+ variant = ""
+ case "x86_64", "x86-64":
+ arch = "amd64"
+ variant = ""
+ case "aarch64", "arm64":
+ arch = "arm64"
+ switch variant {
+ case "8", "v8":
+ variant = ""
+ }
+ case "armhf":
+ arch = "arm"
+ variant = "v7"
+ case "armel":
+ arch = "arm"
+ variant = "v6"
+ case "arm":
+ switch variant {
+ case "", "7":
+ variant = "v7"
+ case "5", "6", "8":
+ variant = "v" + variant
+ }
+ }
+
+ return arch, variant
+}
diff --git a/vendor/github.com/containerd/containerd/platforms/defaults.go b/vendor/github.com/containerd/containerd/platforms/defaults.go
new file mode 100644
index 000000000..cb77fbc9f
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/platforms/defaults.go
@@ -0,0 +1,43 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "runtime"
+
+ specs "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// DefaultString returns the default string specifier for the platform.
+func DefaultString() string {
+ return Format(DefaultSpec())
+}
+
+// DefaultSpec returns the current platform's default platform specification.
+func DefaultSpec() specs.Platform {
+ return specs.Platform{
+ OS: runtime.GOOS,
+ Architecture: runtime.GOARCH,
+ // The Variant field will be empty if arch != ARM.
+ Variant: cpuVariant(),
+ }
+}
+
+// DefaultStrict returns strict form of Default.
+func DefaultStrict() MatchComparer {
+ return OnlyStrict(DefaultSpec())
+}
diff --git a/vendor/github.com/containerd/containerd/platforms/defaults_unix.go b/vendor/github.com/containerd/containerd/platforms/defaults_unix.go
new file mode 100644
index 000000000..e8a7d5ffa
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/platforms/defaults_unix.go
@@ -0,0 +1,24 @@
+// +build !windows
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+// Default returns the default matcher for the platform.
+func Default() MatchComparer {
+ return Only(DefaultSpec())
+}
diff --git a/vendor/github.com/containerd/containerd/platforms/defaults_windows.go b/vendor/github.com/containerd/containerd/platforms/defaults_windows.go
new file mode 100644
index 000000000..0c380e3b7
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/platforms/defaults_windows.go
@@ -0,0 +1,81 @@
+// +build windows
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package platforms
+
+import (
+ "fmt"
+ "runtime"
+ "strconv"
+ "strings"
+
+ imagespec "github.com/opencontainers/image-spec/specs-go/v1"
+ specs "github.com/opencontainers/image-spec/specs-go/v1"
+ "golang.org/x/sys/windows"
+)
+
+type matchComparer struct {
+ defaults Matcher
+ osVersionPrefix string
+}
+
+// Match matches platform with the same windows major, minor
+// and build version.
+func (m matchComparer) Match(p imagespec.Platform) bool {
+ if m.defaults.Match(p) {
+ // TODO(windows): Figure out whether OSVersion is deprecated.
+ return strings.HasPrefix(p.OSVersion, m.osVersionPrefix)
+ }
+ return false
+}
+
+// Less sorts matched platforms in front of other platforms.
+// For matched platforms, it puts platforms with larger revision
+// number in front.
+func (m matchComparer) Less(p1, p2 imagespec.Platform) bool {
+ m1, m2 := m.Match(p1), m.Match(p2)
+ if m1 && m2 {
+ r1, r2 := revision(p1.OSVersion), revision(p2.OSVersion)
+ return r1 > r2
+ }
+ return m1 && !m2
+}
+
+func revision(v string) int {
+ parts := strings.Split(v, ".")
+ if len(parts) < 4 {
+ return 0
+ }
+ r, err := strconv.Atoi(parts[3])
+ if err != nil {
+ return 0
+ }
+ return r
+}
+
+// Default returns the current platform's default platform specification.
+func Default() MatchComparer {
+ major, minor, build := windows.RtlGetNtVersionNumbers()
+ return matchComparer{
+ defaults: Ordered(DefaultSpec(), specs.Platform{
+ OS: "linux",
+ Architecture: runtime.GOARCH,
+ }),
+ osVersionPrefix: fmt.Sprintf("%d.%d.%d", major, minor, build),
+ }
+}
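The `revision` helper above drives the Windows ordering: among platforms that match the host's `major.minor.build` prefix, the one with the larger fourth (revision) component sorts first, and anything malformed or too short counts as revision 0. A self-contained restatement with sample version strings (the example build numbers are illustrative):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// revision extracts the 4th component of a Windows version string
// ("major.minor.build.revision"), returning 0 when it is absent or
// malformed, the same rule matchComparer.Less uses to order platforms.
func revision(v string) int {
	parts := strings.Split(v, ".")
	if len(parts) < 4 {
		return 0
	}
	r, err := strconv.Atoi(parts[3])
	if err != nil {
		return 0
	}
	return r
}

func main() {
	fmt.Println(revision("10.0.17763.973")) // 973
	fmt.Println(revision("10.0.17763"))     // 0
	fmt.Println(revision("10.0.17763.x"))   // 0
}
```

Treating bad input as 0 rather than an error keeps `Less` total: unparsable OSVersions simply sort behind any image with a real revision.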
diff --git a/vendor/github.com/containerd/containerd/platforms/platforms.go b/vendor/github.com/containerd/containerd/platforms/platforms.go
new file mode 100644
index 000000000..088bdea05
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/platforms/platforms.go
@@ -0,0 +1,278 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+// Package platforms provides a toolkit for normalizing, matching and
+// specifying container platforms.
+//
+// Centered around OCI platform specifications, we define a string-based
+// specifier syntax that can be used for user input. With a specifier, users
+// only need to specify the parts of the platform that are relevant to their
+// context, providing an operating system or architecture or both.
+//
+// How do I use this package?
+//
+// The vast majority of use cases should simply use the match function with
+// user input. The first step is to parse a specifier into a matcher:
+//
+// m, err := Parse("linux")
+// if err != nil { ... }
+//
+// Once you have a matcher, use it to match against the platform declared by a
+// component, typically from an image or runtime. Since extracting an images
+// platform is a little more involved, we'll use an example against the
+// platform default:
+//
+// if ok := m.Match(Default()); !ok { /* doesn't match */ }
+//
+// This can be composed in loops for resolving runtimes or used as a filter for
+// fetching and selecting images.
+//
+// More details of the specifier syntax and platform spec follow.
+//
+// Declaring Platform Support
+//
+// Components that have strict platform requirements should use the OCI
+// platform specification to declare their support. Typically, this will be
+// images and runtimes, which should declare specifically which platforms
+// they support. This looks roughly as follows:
+//
+// type Platform struct {
+// Architecture string
+// OS string
+// Variant string
+// }
+//
+// Most images and runtimes should at least set Architecture and OS, according
+// to their GOARCH and GOOS values, respectively (follow the OCI image
+// specification when in doubt). ARM should set the variant in certain
+// cases, which are outlined below.
+//
+// Platform Specifiers
+//
+// While the OCI platform specifications provide a tool for components to
+// specify structured information, user input typically doesn't need the full
+// context and much can be inferred. To solve this problem, we introduced
+// "specifiers". A specifier has the format
+// `<os>|<arch>|<os>/<arch>[/<variant>]`. The user can provide either the
+// operating system or the architecture or both.
+//
+// An example of a common specifier is `linux/amd64`. If the host has a default
+// runtime that matches this, the user can simply provide the component that
+// matters. For example, if an image provides amd64 and arm64 support, the
+// operating system, `linux` can be inferred, so they only have to provide
+// `arm64` or `amd64`. Similar behavior is implemented for operating systems,
+// where the architecture may be known but a runtime may support images from
+// different operating systems.
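This inference can be sketched with a stand-alone parser. `knownOS` and `parse` below are hypothetical stand-ins for the package's internal `isKnownOS` check and `Parse` (a simplified sketch that skips variants and validation):

```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// knownOS is a tiny stand-in for the package's known-OS check
// (an illustrative subset, not the full list).
var knownOS = map[string]bool{"linux": true, "darwin": true, "windows": true}

// parse resolves a specifier into os/arch, inferring the missing half
// from the local environment as described above.
func parse(specifier string) (os, arch string) {
	parts := strings.Split(specifier, "/")
	switch len(parts) {
	case 1:
		if knownOS[parts[0]] {
			return parts[0], runtime.GOARCH // architecture inferred from host
		}
		return runtime.GOOS, parts[0] // operating system inferred from host
	default:
		return parts[0], parts[1]
	}
}

func main() {
	os, arch := parse("linux/amd64")
	fmt.Println(os, arch)
}
```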
+//
+// Normalization
+//
+// Because not all users are familiar with the way the Go runtime represents
+// platforms, several normalizations have been provided to make this package
+// easier to use.
+//
+// The following are performed for architectures:
+//
+// Value Normalized
+// aarch64 arm64
+// armhf arm
+// armel arm/v6
+// i386 386
+// x86_64 amd64
+// x86-64 amd64
+//
+// We also normalize the operating system `macos` to `darwin`.
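The table above can be expressed as a small lookup map. `archAliases` and `normalizeArch` are hypothetical names for this sketch; the package's real normalization also returns a variant (e.g. `armel` becomes `arm` with variant `v6`):

```go
package main

import "fmt"

// archAliases reproduces the normalization table above. Variants are
// simplified away in this sketch.
var archAliases = map[string]string{
	"aarch64": "arm64",
	"armhf":   "arm",
	"armel":   "arm", // the real package also sets Variant to "v6"
	"i386":    "386",
	"x86_64":  "amd64",
	"x86-64":  "amd64",
}

// normalizeArch maps a user-supplied architecture to its canonical
// Go name, passing unknown values through unchanged.
func normalizeArch(arch string) string {
	if n, ok := archAliases[arch]; ok {
		return n
	}
	return arch
}

func main() {
	fmt.Println(normalizeArch("x86_64")) // amd64
	fmt.Println(normalizeArch("arm64"))  // already canonical
}
```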
+//
+// ARM Support
+//
+// For ARM architectures, the Variant field is used to qualify the arm
+// version. The most common arm version, v7, is represented without the variant
+// unless it is explicitly provided. This is treated as equivalent to armhf. A
+// previous architecture, armel, will be normalized to arm/v6.
+//
+// While these normalizations are provided, their support on arm platforms has
+// not yet been fully implemented and tested.
+package platforms
+
+import (
+ "regexp"
+ "runtime"
+ "strconv"
+ "strings"
+
+ "github.com/containerd/containerd/errdefs"
+ specs "github.com/opencontainers/image-spec/specs-go/v1"
+ "github.com/pkg/errors"
+)
+
+var (
+ specifierRe = regexp.MustCompile(`^[A-Za-z0-9_-]+$`)
+)
+
+// Matcher matches platforms specifications, provided by an image or runtime.
+type Matcher interface {
+ Match(platform specs.Platform) bool
+}
+
+// NewMatcher returns a simple matcher based on the provided platform
+// specification. The returned matcher only looks for equality based on os,
+// architecture and variant.
+//
+// One may implement their own matcher if this doesn't provide the required
+// functionality.
+//
+// Applications should opt to use `Match` over directly parsing specifiers.
+func NewMatcher(platform specs.Platform) Matcher {
+ return &matcher{
+ Platform: Normalize(platform),
+ }
+}
+
+type matcher struct {
+ specs.Platform
+}
+
+func (m *matcher) Match(platform specs.Platform) bool {
+ normalized := Normalize(platform)
+ return m.OS == normalized.OS &&
+ m.Architecture == normalized.Architecture &&
+ m.Variant == normalized.Variant
+}
+
+func (m *matcher) String() string {
+ return Format(m.Platform)
+}
+
+// Parse parses the platform specifier syntax into a platform declaration.
+//
+// Platform specifiers are in the format `<os>|<arch>|<os>/<arch>[/<variant>]`.
+// The minimum required information for a platform specifier is the operating
+// system or architecture. If there is only a single string (no slashes), the
+// value will be matched against the known set of operating systems, then fall
+// back to the known set of architectures. The missing component will be
+// inferred based on the local environment.
+func Parse(specifier string) (specs.Platform, error) {
+ if strings.Contains(specifier, "*") {
+ // TODO(stevvooe): need to work out exact wildcard handling
+ return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q: wildcards not yet supported", specifier)
+ }
+
+ parts := strings.Split(specifier, "/")
+
+ for _, part := range parts {
+ if !specifierRe.MatchString(part) {
+ return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q is an invalid component of %q: platform specifier component must match %q", part, specifier, specifierRe.String())
+ }
+ }
+
+ var p specs.Platform
+ switch len(parts) {
+ case 1:
+ // in this case, we will test that the value might be an OS, then look
+ // it up. If it is not known, we'll treat it as an architecture. Since
+ // we have very little information about the platform here, we are
+ // going to be a little more strict if we don't know about the argument
+ // value.
+ p.OS = normalizeOS(parts[0])
+ if isKnownOS(p.OS) {
+ // picks a default architecture
+ p.Architecture = runtime.GOARCH
+ if p.Architecture == "arm" && cpuVariant() != "v7" {
+ p.Variant = cpuVariant()
+ }
+
+ return p, nil
+ }
+
+ p.Architecture, p.Variant = normalizeArch(parts[0], "")
+ if p.Architecture == "arm" && p.Variant == "v7" {
+ p.Variant = ""
+ }
+ if isKnownArch(p.Architecture) {
+ p.OS = runtime.GOOS
+ return p, nil
+ }
+
+ return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q: unknown operating system or architecture", specifier)
+ case 2:
+ // In this case, we treat as a regular os/arch pair. We don't care
+ // about whether or not we know of the platform.
+ p.OS = normalizeOS(parts[0])
+ p.Architecture, p.Variant = normalizeArch(parts[1], "")
+ if p.Architecture == "arm" && p.Variant == "v7" {
+ p.Variant = ""
+ }
+
+ return p, nil
+ case 3:
+ // we have a fully specified variant, this is rare
+ p.OS = normalizeOS(parts[0])
+ p.Architecture, p.Variant = normalizeArch(parts[1], parts[2])
+ if p.Architecture == "arm64" && p.Variant == "" {
+ p.Variant = "v8"
+ }
+
+ return p, nil
+ }
+
+ return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q: cannot parse platform specifier", specifier)
+}
+
+// MustParse is like Parse but panics if the specifier cannot be parsed.
+// Simplifies initialization of global variables.
+func MustParse(specifier string) specs.Platform {
+ p, err := Parse(specifier)
+ if err != nil {
+ panic("platform: Parse(" + strconv.Quote(specifier) + "): " + err.Error())
+ }
+ return p
+}
+
+// Format returns a string specifier from the provided platform specification.
+func Format(platform specs.Platform) string {
+ if platform.OS == "" {
+ return "unknown"
+ }
+
+ return joinNotEmpty(platform.OS, platform.Architecture, platform.Variant)
+}
+
+func joinNotEmpty(s ...string) string {
+ var ss []string
+ for _, s := range s {
+ if s == "" {
+ continue
+ }
+
+ ss = append(ss, s)
+ }
+
+ return strings.Join(ss, "/")
+}
+
+// Normalize validates and translates the platform to the canonical value.
+//
+// For example, if "Aarch64" is encountered, we change it to "arm64" or if
+// "x86_64" is encountered, it becomes "amd64".
+func Normalize(platform specs.Platform) specs.Platform {
+ platform.OS = normalizeOS(platform.OS)
+ platform.Architecture, platform.Variant = normalizeArch(platform.Architecture, platform.Variant)
+
+ // these fields are deprecated, remove them
+ platform.OSFeatures = nil
+ platform.OSVersion = ""
+
+ return platform
+}
diff --git a/vendor/github.com/coredns/caddy/LICENSE.txt b/vendor/github.com/coredns/caddy/LICENSE.txt
new file mode 100644
index 000000000..8dada3eda
--- /dev/null
+++ b/vendor/github.com/coredns/caddy/LICENSE.txt
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/coredns/caddy/caddyfile/dispenser.go b/vendor/github.com/coredns/caddy/caddyfile/dispenser.go
new file mode 100644
index 000000000..c7b3f4c12
--- /dev/null
+++ b/vendor/github.com/coredns/caddy/caddyfile/dispenser.go
@@ -0,0 +1,260 @@
+// Copyright 2015 Light Code Labs, LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package caddyfile
+
+import (
+ "errors"
+ "fmt"
+ "io"
+ "strings"
+)
+
+// Dispenser is a type that dispenses tokens, similarly to a lexer,
+// except that it can do so with some notion of structure and has
+// some really convenient methods.
+type Dispenser struct {
+ filename string
+ tokens []Token
+ cursor int
+ nesting int
+}
+
+// NewDispenser returns a Dispenser, ready to use for parsing the given input.
+func NewDispenser(filename string, input io.Reader) Dispenser {
+ tokens, _ := allTokens(input) // ignoring error because nothing to do with it
+ return Dispenser{
+ filename: filename,
+ tokens: tokens,
+ cursor: -1,
+ }
+}
+
+// NewDispenserTokens returns a Dispenser filled with the given tokens.
+func NewDispenserTokens(filename string, tokens []Token) Dispenser {
+ return Dispenser{
+ filename: filename,
+ tokens: tokens,
+ cursor: -1,
+ }
+}
+
+// Next loads the next token. Returns true if a token
+// was loaded; false otherwise. If false, all tokens
+// have been consumed.
+func (d *Dispenser) Next() bool {
+ if d.cursor < len(d.tokens)-1 {
+ d.cursor++
+ return true
+ }
+ return false
+}
+
+// NextArg loads the next token if it is on the same
+// line. Returns true if a token was loaded; false
+// otherwise. If false, all tokens on the line have
+// been consumed. It handles imported tokens correctly.
+func (d *Dispenser) NextArg() bool {
+ if d.cursor < 0 {
+ d.cursor++
+ return true
+ }
+ if d.cursor >= len(d.tokens) {
+ return false
+ }
+ if d.cursor < len(d.tokens)-1 &&
+ d.tokens[d.cursor].File == d.tokens[d.cursor+1].File &&
+ d.tokens[d.cursor].Line+d.numLineBreaks(d.cursor) == d.tokens[d.cursor+1].Line {
+ d.cursor++
+ return true
+ }
+ return false
+}
+
+// NextLine loads the next token only if it is not on the same
+// line as the current token, and returns true if a token was
+// loaded; false otherwise. If false, there is not another token
+// or it is on the same line. It handles imported tokens correctly.
+func (d *Dispenser) NextLine() bool {
+ if d.cursor < 0 {
+ d.cursor++
+ return true
+ }
+ if d.cursor >= len(d.tokens) {
+ return false
+ }
+ if d.cursor < len(d.tokens)-1 &&
+ (d.tokens[d.cursor].File != d.tokens[d.cursor+1].File ||
+ d.tokens[d.cursor].Line+d.numLineBreaks(d.cursor) < d.tokens[d.cursor+1].Line) {
+ d.cursor++
+ return true
+ }
+ return false
+}
+
+// NextBlock can be used as the condition of a for loop
+// to load the next token as long as it opens a block or
+// is already in a block. It returns true if a token was
+// loaded, or false when the block's closing curly brace
+// was loaded and thus the block ended. Nested blocks are
+// not supported.
+func (d *Dispenser) NextBlock() bool {
+ if d.nesting > 0 {
+ d.Next()
+ if d.Val() == "}" {
+ d.nesting--
+ return false
+ }
+ return true
+ }
+ if !d.NextArg() { // block must open on same line
+ return false
+ }
+ if d.Val() != "{" {
+ d.cursor-- // roll back if not opening brace
+ return false
+ }
+ d.Next()
+ if d.Val() == "}" {
+ // Open and then closed right away
+ return false
+ }
+ d.nesting++
+ return true
+}
+
+// Val gets the text of the current token. If there is no token
+// loaded, it returns empty string.
+func (d *Dispenser) Val() string {
+ if d.cursor < 0 || d.cursor >= len(d.tokens) {
+ return ""
+ }
+ return d.tokens[d.cursor].Text
+}
+
+// Line gets the line number of the current token. If there is no token
+// loaded, it returns 0.
+func (d *Dispenser) Line() int {
+ if d.cursor < 0 || d.cursor >= len(d.tokens) {
+ return 0
+ }
+ return d.tokens[d.cursor].Line
+}
+
+// File gets the filename of the current token. If there is no token loaded,
+// it returns the filename originally given when parsing started.
+func (d *Dispenser) File() string {
+ if d.cursor < 0 || d.cursor >= len(d.tokens) {
+ return d.filename
+ }
+ if tokenFilename := d.tokens[d.cursor].File; tokenFilename != "" {
+ return tokenFilename
+ }
+ return d.filename
+}
+
+// Args is a convenience function that loads the next arguments
+// (tokens on the same line) into an arbitrary number of strings
+// pointed to in targets. If there are fewer tokens available
+// than string pointers, the remaining strings will not be changed
+// and false will be returned. If there were enough tokens available
+// to fill the arguments, then true will be returned.
+func (d *Dispenser) Args(targets ...*string) bool {
+ enough := true
+ for i := 0; i < len(targets); i++ {
+ if !d.NextArg() {
+ enough = false
+ break
+ }
+ *targets[i] = d.Val()
+ }
+ return enough
+}
+
+// RemainingArgs loads any more arguments (tokens on the same line)
+// into a slice and returns them. Open curly brace tokens also indicate
+// the end of arguments, and the curly brace is not included in
+// the return value nor is it loaded.
+func (d *Dispenser) RemainingArgs() []string {
+ var args []string
+
+ for d.NextArg() {
+ if d.Val() == "{" {
+ d.cursor--
+ break
+ }
+ args = append(args, d.Val())
+ }
+
+ return args
+}
+
+// ArgErr returns an argument error, meaning that another
+// argument was expected but not found. In other words,
+// a line break or open curly brace was encountered instead of
+// an argument.
+func (d *Dispenser) ArgErr() error {
+ if d.Val() == "{" {
+ return d.Err("Unexpected token '{', expecting argument")
+ }
+ return d.Errf("Wrong argument count or unexpected line ending after '%s'", d.Val())
+}
+
+// SyntaxErr creates a generic syntax error which explains what was
+// found and what was expected.
+func (d *Dispenser) SyntaxErr(expected string) error {
+ msg := fmt.Sprintf("%s:%d - Syntax error: Unexpected token '%s', expecting '%s'", d.File(), d.Line(), d.Val(), expected)
+ return errors.New(msg)
+}
+
+// EOFErr returns an error indicating that the dispenser reached
+// the end of the input when searching for the next token.
+func (d *Dispenser) EOFErr() error {
+ return d.Errf("Unexpected EOF")
+}
+
+// Err generates a custom parse-time error with a message of msg.
+func (d *Dispenser) Err(msg string) error {
+ msg = fmt.Sprintf("%s:%d - Error during parsing: %s", d.File(), d.Line(), msg)
+ return errors.New(msg)
+}
+
+// Errf is like Err, but for formatted error messages
+func (d *Dispenser) Errf(format string, args ...interface{}) error {
+ return d.Err(fmt.Sprintf(format, args...))
+}
+
+// numLineBreaks counts how many line breaks are in the token
+// value given by the token index tknIdx. It returns 0 if the
+// token does not exist or there are no line breaks.
+func (d *Dispenser) numLineBreaks(tknIdx int) int {
+ if tknIdx < 0 || tknIdx >= len(d.tokens) {
+ return 0
+ }
+ return strings.Count(d.tokens[tknIdx].Text, "\n")
+}
+
+// isNewLine determines whether the current token is on a different
+// line (higher line number) than the previous token. It handles imported
+// tokens correctly. If there isn't a previous token, it returns true.
+func (d *Dispenser) isNewLine() bool {
+ if d.cursor < 1 {
+ return true
+ }
+ if d.cursor > len(d.tokens)-1 {
+ return false
+ }
+ return d.tokens[d.cursor-1].File != d.tokens[d.cursor].File ||
+ d.tokens[d.cursor-1].Line+d.numLineBreaks(d.cursor-1) < d.tokens[d.cursor].Line
+}
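The cursor pattern used throughout this file can be reduced to a minimal stand-alone sketch. The `dispenser` type below uses plain string tokens (a hypothetical simplification, not the real caddy API) to show the core `Next`/`Val` contract:

```go
package main

import "fmt"

// dispenser models the Dispenser's core iteration: Next advances the
// cursor and reports whether a token was loaded; Val returns the
// current token, or "" when the cursor is out of range.
type dispenser struct {
	tokens []string
	cursor int
}

func (d *dispenser) Next() bool {
	if d.cursor < len(d.tokens)-1 {
		d.cursor++
		return true
	}
	return false
}

func (d *dispenser) Val() string {
	if d.cursor < 0 || d.cursor >= len(d.tokens) {
		return ""
	}
	return d.tokens[d.cursor]
}

func main() {
	// cursor starts at -1, before the first token, as in NewDispenser.
	d := &dispenser{tokens: []string{"proxy", "/", "localhost:8080"}, cursor: -1}
	for d.Next() {
		fmt.Println(d.Val())
	}
}
```

Starting the cursor at -1 is what lets the first `Next` land on token 0, the same convention `NewDispenser` uses above.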
diff --git a/vendor/github.com/coredns/caddy/caddyfile/json.go b/vendor/github.com/coredns/caddy/caddyfile/json.go
new file mode 100644
index 000000000..0d37e8e98
--- /dev/null
+++ b/vendor/github.com/coredns/caddy/caddyfile/json.go
@@ -0,0 +1,198 @@
+// Copyright 2015 Light Code Labs, LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package caddyfile
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "sort"
+ "strconv"
+ "strings"
+)
+
+const filename = "Caddyfile"
+
+// ToJSON converts caddyfile to its JSON representation.
+func ToJSON(caddyfile []byte) ([]byte, error) {
+ var j EncodedCaddyfile
+
+ serverBlocks, err := Parse(filename, bytes.NewReader(caddyfile), nil)
+ if err != nil {
+ return nil, err
+ }
+
+ for _, sb := range serverBlocks {
+ block := EncodedServerBlock{
+ Keys: sb.Keys,
+ Body: [][]interface{}{},
+ }
+
+ // Extract directives deterministically by sorting them
+		var directives = make([]string, 0, len(sb.Tokens))
+ for dir := range sb.Tokens {
+ directives = append(directives, dir)
+ }
+ sort.Strings(directives)
+
+ // Convert each directive's tokens into our JSON structure
+ for _, dir := range directives {
+ disp := NewDispenserTokens(filename, sb.Tokens[dir])
+ for disp.Next() {
+ block.Body = append(block.Body, constructLine(&disp))
+ }
+ }
+
+ // tack this block onto the end of the list
+ j = append(j, block)
+ }
+
+ result, err := json.Marshal(j)
+ if err != nil {
+ return nil, err
+ }
+
+ return result, nil
+}
+
+// constructLine transforms tokens into a JSON-encodable structure;
+// but only one line at a time, to be used at the top-level of
+// a server block only (where the first token on each line is a
+// directive) - not to be used at any other nesting level.
+func constructLine(d *Dispenser) []interface{} {
+ var args []interface{}
+
+ args = append(args, d.Val())
+
+ for d.NextArg() {
+ if d.Val() == "{" {
+ args = append(args, constructBlock(d))
+ continue
+ }
+ args = append(args, d.Val())
+ }
+
+ return args
+}
+
+// constructBlock recursively processes tokens into a
+// JSON-encodable structure. To be used in a directive's
+// block. Goes to end of block.
+func constructBlock(d *Dispenser) [][]interface{} {
+ block := [][]interface{}{}
+
+ for d.Next() {
+ if d.Val() == "}" {
+ break
+ }
+ block = append(block, constructLine(d))
+ }
+
+ return block
+}
+
+// FromJSON converts JSON-encoded jsonBytes to Caddyfile text
+func FromJSON(jsonBytes []byte) ([]byte, error) {
+ var j EncodedCaddyfile
+ var result string
+
+ err := json.Unmarshal(jsonBytes, &j)
+ if err != nil {
+ return nil, err
+ }
+
+ for sbPos, sb := range j {
+ if sbPos > 0 {
+ result += "\n\n"
+ }
+ for i, key := range sb.Keys {
+ if i > 0 {
+ result += ", "
+ }
+ //result += standardizeScheme(key)
+ result += key
+ }
+ result += jsonToText(sb.Body, 1)
+ }
+
+ return []byte(result), nil
+}
+
+// jsonToText recursively transforms a scope of JSON into plain
+// Caddyfile text.
+func jsonToText(scope interface{}, depth int) string {
+ var result string
+
+ switch val := scope.(type) {
+ case string:
+ if strings.ContainsAny(val, "\" \n\t\r") {
+ result += `"` + strings.Replace(val, "\"", "\\\"", -1) + `"`
+ } else {
+ result += val
+ }
+ case int:
+ result += strconv.Itoa(val)
+ case float64:
+ result += fmt.Sprintf("%v", val)
+ case bool:
+ result += fmt.Sprintf("%t", val)
+ case [][]interface{}:
+ result += " {\n"
+ for _, arg := range val {
+ result += strings.Repeat("\t", depth) + jsonToText(arg, depth+1) + "\n"
+ }
+ result += strings.Repeat("\t", depth-1) + "}"
+ case []interface{}:
+ for i, v := range val {
+ if block, ok := v.([]interface{}); ok {
+ result += "{\n"
+ for _, arg := range block {
+ result += strings.Repeat("\t", depth) + jsonToText(arg, depth+1) + "\n"
+ }
+ result += strings.Repeat("\t", depth-1) + "}"
+ continue
+ }
+ result += jsonToText(v, depth)
+ if i < len(val)-1 {
+ result += " "
+ }
+ }
+ }
+
+ return result
+}
+
+// TODO: Will this function come in handy somewhere else?
+/*
+// standardizeScheme turns an address like host:https into https://host,
+// or "host:" into "host".
+func standardizeScheme(addr string) string {
+ if hostname, port, err := net.SplitHostPort(addr); err == nil {
+ if port == "http" || port == "https" {
+ addr = port + "://" + hostname
+ }
+ }
+ return strings.TrimSuffix(addr, ":")
+}
+*/
+
+// EncodedCaddyfile encapsulates a slice of EncodedServerBlocks.
+type EncodedCaddyfile []EncodedServerBlock
+
+// EncodedServerBlock represents a server block ripe for encoding.
+type EncodedServerBlock struct {
+ Keys []string `json:"keys"`
+ Body [][]interface{} `json:"body"`
+}
diff --git a/vendor/github.com/coredns/caddy/caddyfile/lexer.go b/vendor/github.com/coredns/caddy/caddyfile/lexer.go
new file mode 100644
index 000000000..f928772e1
--- /dev/null
+++ b/vendor/github.com/coredns/caddy/caddyfile/lexer.go
@@ -0,0 +1,153 @@
+// Copyright 2015 Light Code Labs, LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package caddyfile
+
+import (
+ "bufio"
+ "io"
+ "unicode"
+)
+
+type (
+ // lexer is a utility which can get values, token by
+ // token, from a Reader. A token is a word, and tokens
+ // are separated by whitespace. A word can be enclosed
+ // in quotes if it contains whitespace.
+ lexer struct {
+ reader *bufio.Reader
+ token Token
+ line int
+ }
+
+ // Token represents a single parsable unit.
+ Token struct {
+ File string
+ Line int
+ Text string
+ }
+)
+
+// load prepares the lexer to scan an input for tokens.
+// It discards any leading byte order mark.
+func (l *lexer) load(input io.Reader) error {
+ l.reader = bufio.NewReader(input)
+ l.line = 1
+
+ // discard byte order mark, if present
+ firstCh, _, err := l.reader.ReadRune()
+ if err != nil {
+ if err == io.EOF {
+ return nil
+ }
+ return err
+ }
+ if firstCh != 0xFEFF {
+ err := l.reader.UnreadRune()
+ if err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+// next loads the next token into the lexer.
+// A token is delimited by whitespace, unless
+// the token starts with a quote character (")
+// in which case the token goes until the closing
+// quote (the enclosing quotes are not included).
+// Inside quoted strings, quotes may be escaped
+// with a preceding \ character. No other chars
+// may be escaped. The rest of the line is skipped
+// if a "#" character is read in. Returns true if
+// a token was loaded; false otherwise.
+func (l *lexer) next() bool {
+ var val []rune
+ var comment, quoted, escaped bool
+
+ makeToken := func() bool {
+ l.token.Text = string(val)
+ return true
+ }
+
+ for {
+ ch, _, err := l.reader.ReadRune()
+ if err != nil {
+ if len(val) > 0 {
+ return makeToken()
+ }
+ if err == io.EOF {
+ return false
+ }
+ panic(err)
+ }
+
+ if quoted {
+ if !escaped {
+ if ch == '\\' {
+ escaped = true
+ continue
+ } else if ch == '"' {
+ quoted = false
+ return makeToken()
+ }
+ }
+ if ch == '\n' {
+ l.line++
+ }
+ if escaped {
+ // only escape quotes
+ if ch != '"' {
+ val = append(val, '\\')
+ }
+ }
+ val = append(val, ch)
+ escaped = false
+ continue
+ }
+
+ if unicode.IsSpace(ch) {
+ if ch == '\r' {
+ continue
+ }
+ if ch == '\n' {
+ l.line++
+ comment = false
+ }
+ if len(val) > 0 {
+ return makeToken()
+ }
+ continue
+ }
+
+ if ch == '#' {
+ comment = true
+ }
+
+ if comment {
+ continue
+ }
+
+ if len(val) == 0 {
+ l.token = Token{Line: l.line}
+ if ch == '"' {
+ quoted = true
+ continue
+ }
+ }
+
+ val = append(val, ch)
+ }
+}
diff --git a/vendor/github.com/coredns/caddy/caddyfile/parse.go b/vendor/github.com/coredns/caddy/caddyfile/parse.go
new file mode 100644
index 000000000..32d7a2b5f
--- /dev/null
+++ b/vendor/github.com/coredns/caddy/caddyfile/parse.go
@@ -0,0 +1,490 @@
+// Copyright 2015 Light Code Labs, LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package caddyfile
+
+import (
+ "io"
+ "log"
+ "os"
+ "path/filepath"
+ "strings"
+)
+
+// Parse parses the input just enough to group tokens, in
+// order, by server block. No further parsing is performed.
+// Server blocks are returned in the order in which they appear.
+// Directives that do not appear in validDirectives will cause
+// an error. If you do not want to check for valid directives,
+// pass in nil instead.
+func Parse(filename string, input io.Reader, validDirectives []string) ([]ServerBlock, error) {
+ p := parser{Dispenser: NewDispenser(filename, input), validDirectives: validDirectives}
+ return p.parseAll()
+}
+
+// allTokens lexes the entire input, but does not parse it.
+// It returns all the tokens from the input, unstructured
+// and in order.
+func allTokens(input io.Reader) ([]Token, error) {
+ l := new(lexer)
+ err := l.load(input)
+ if err != nil {
+ return nil, err
+ }
+ var tokens []Token
+ for l.next() {
+ tokens = append(tokens, l.token)
+ }
+ return tokens, nil
+}
+
+type parser struct {
+ Dispenser
+ block ServerBlock // current server block being parsed
+ validDirectives []string // a directive must be valid or it's an error
+ eof bool // if we encounter a valid EOF in a hard place
+ definedSnippets map[string][]Token
+}
+
+func (p *parser) parseAll() ([]ServerBlock, error) {
+ var blocks []ServerBlock
+
+ for p.Next() {
+ err := p.parseOne()
+ if err != nil {
+ return blocks, err
+ }
+ if len(p.block.Keys) > 0 {
+ blocks = append(blocks, p.block)
+ }
+ }
+
+ return blocks, nil
+}
+
+func (p *parser) parseOne() error {
+ p.block = ServerBlock{Tokens: make(map[string][]Token)}
+
+ return p.begin()
+}
+
+func (p *parser) begin() error {
+ if len(p.tokens) == 0 {
+ return nil
+ }
+
+ err := p.addresses()
+
+ if err != nil {
+ return err
+ }
+
+ if p.eof {
+ // this happens if the Caddyfile consists of only
+ // a line of addresses and nothing else
+ return nil
+ }
+
+ if ok, name := p.isSnippet(); ok {
+ if p.definedSnippets == nil {
+ p.definedSnippets = map[string][]Token{}
+ }
+ if _, found := p.definedSnippets[name]; found {
+ return p.Errf("redeclaration of previously declared snippet %s", name)
+ }
+		// consume all tokens until the matched close brace
+ tokens, err := p.snippetTokens()
+ if err != nil {
+ return err
+ }
+ p.definedSnippets[name] = tokens
+ // empty block keys so we don't save this block as a real server.
+ p.block.Keys = nil
+ return nil
+ }
+
+ return p.blockContents()
+}
+
+func (p *parser) addresses() error {
+ var expectingAnother bool
+
+ for {
+ tkn := replaceEnvVars(p.Val())
+
+ // special case: import directive replaces tokens during parse-time
+ if tkn == "import" && p.isNewLine() {
+ err := p.doImport()
+ if err != nil {
+ return err
+ }
+ continue
+ }
+
+ // Open brace definitely indicates end of addresses
+ if tkn == "{" {
+ if expectingAnother {
+ return p.Errf("Expected another address but had '%s' - check for extra comma", tkn)
+ }
+ break
+ }
+
+ if tkn != "" { // empty token possible if user typed ""
+ // Trailing comma indicates another address will follow, which
+ // may possibly be on the next line
+ if tkn[len(tkn)-1] == ',' {
+ tkn = tkn[:len(tkn)-1]
+ expectingAnother = true
+ } else {
+ expectingAnother = false // but we may still see another one on this line
+ }
+
+ p.block.Keys = append(p.block.Keys, tkn)
+ }
+
+ // Advance token and possibly break out of loop or return error
+ hasNext := p.Next()
+ if expectingAnother && !hasNext {
+ return p.EOFErr()
+ }
+ if !hasNext {
+ p.eof = true
+ break // EOF
+ }
+ if !expectingAnother && p.isNewLine() {
+ break
+ }
+ }
+
+ return nil
+}
+
+func (p *parser) blockContents() error {
+ errOpenCurlyBrace := p.openCurlyBrace()
+ if errOpenCurlyBrace != nil {
+ // single-server configs don't need curly braces
+ p.cursor--
+ }
+
+ err := p.directives()
+ if err != nil {
+ return err
+ }
+
+ // Only look for close curly brace if there was an opening
+ if errOpenCurlyBrace == nil {
+ err = p.closeCurlyBrace()
+ if err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+// directives parses through all the lines for directives
+// and it expects the next token to be the first
+// directive. It goes until EOF or closing curly brace
+// which ends the server block.
+func (p *parser) directives() error {
+ for p.Next() {
+ // end of server block
+ if p.Val() == "}" {
+ break
+ }
+
+ // special case: import directive replaces tokens during parse-time
+ if p.Val() == "import" {
+ err := p.doImport()
+ if err != nil {
+ return err
+ }
+ p.cursor-- // cursor is advanced when we continue, so roll back one more
+ continue
+ }
+
+ // normal case: parse a directive on this line
+ if err := p.directive(); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+// doImport swaps out the import directive and its argument
+// (a total of 2 tokens) with the tokens in the specified file
+// or globbing pattern. When the function returns, the cursor
+// is on the token before where the import directive was. In
+// other words, call Next() to access the first token that was
+// imported.
+func (p *parser) doImport() error {
+ // syntax checks
+ if !p.NextArg() {
+ return p.ArgErr()
+ }
+ importPattern := replaceEnvVars(p.Val())
+ if importPattern == "" {
+ return p.Err("Import requires a non-empty filepath")
+ }
+ if p.NextArg() {
+ return p.Err("Import takes only one argument (glob pattern or file)")
+ }
+ // splice out the import directive and its argument (2 tokens total)
+ tokensBefore := p.tokens[:p.cursor-1]
+ tokensAfter := p.tokens[p.cursor+1:]
+ var importedTokens []Token
+
+ // first check snippets. That is a simple, non-recursive replacement
+ if p.definedSnippets != nil && p.definedSnippets[importPattern] != nil {
+ importedTokens = p.definedSnippets[importPattern]
+ } else {
+ // make path relative to the file of the _token_ being processed rather
+ // than current working directory (issue #867) and then use glob to get
+ // list of matching filenames
+ absFile, err := filepath.Abs(p.Dispenser.File())
+ if err != nil {
+ return p.Errf("Failed to get absolute path of file: %s: %v", p.Dispenser.filename, err)
+ }
+
+ var matches []string
+ var globPattern string
+ if !filepath.IsAbs(importPattern) {
+ globPattern = filepath.Join(filepath.Dir(absFile), importPattern)
+ } else {
+ globPattern = importPattern
+ }
+ if strings.Count(globPattern, "*") > 1 || strings.Count(globPattern, "?") > 1 ||
+ (strings.Contains(globPattern, "[") && strings.Contains(globPattern, "]")) {
+ // See issue #2096 - a pattern with many glob expansions can hang for too long
+ return p.Errf("Glob pattern may only contain one wildcard (*), but has others: %s", globPattern)
+ }
+ matches, err = filepath.Glob(globPattern)
+
+ if err != nil {
+ return p.Errf("Failed to use import pattern %s: %v", importPattern, err)
+ }
+ if len(matches) == 0 {
+ if strings.ContainsAny(globPattern, "*?[]") {
+ log.Printf("[WARNING] No files matching import glob pattern: %s", importPattern)
+ } else {
+ return p.Errf("File to import not found: %s", importPattern)
+ }
+ }
+
+ // collect all the imported tokens
+
+ for _, importFile := range matches {
+ newTokens, err := p.doSingleImport(importFile)
+ if err != nil {
+ return err
+ }
+ importedTokens = append(importedTokens, newTokens...)
+ }
+ }
+
+ // splice the imported tokens in the place of the import statement
+ // and rewind cursor so Next() will land on first imported token
+ p.tokens = append(tokensBefore, append(importedTokens, tokensAfter...)...)
+ p.cursor--
+
+ return nil
+}
+
+// doSingleImport lexes the individual file at importFile and returns
+// its tokens or an error, if any.
+func (p *parser) doSingleImport(importFile string) ([]Token, error) {
+ file, err := os.Open(importFile)
+ if err != nil {
+ return nil, p.Errf("Could not import %s: %v", importFile, err)
+ }
+ defer file.Close()
+
+ if info, err := file.Stat(); err != nil {
+ return nil, p.Errf("Could not import %s: %v", importFile, err)
+ } else if info.IsDir() {
+ return nil, p.Errf("Could not import %s: is a directory", importFile)
+ }
+
+ importedTokens, err := allTokens(file)
+ if err != nil {
+ return nil, p.Errf("Could not read tokens while importing %s: %v", importFile, err)
+ }
+
+ // Tack the file path onto these tokens so errors show the imported file's name
+ // (we use full, absolute path to avoid bugs: issue #1892)
+ filename, err := filepath.Abs(importFile)
+ if err != nil {
+ return nil, p.Errf("Failed to get absolute path of file: %s: %v", p.Dispenser.filename, err)
+ }
+ for i := 0; i < len(importedTokens); i++ {
+ importedTokens[i].File = filename
+ }
+
+ return importedTokens, nil
+}
+
+// directive collects tokens until the directive's scope
+// closes (either end of line or end of curly brace block).
+// It expects the currently-loaded token to be a directive
+// (or } that ends a server block). The collected tokens
+// are loaded into the current server block for later use
+// by directive setup functions.
+func (p *parser) directive() error {
+ dir := replaceEnvVars(p.Val())
+ nesting := 0
+
+ // TODO: More helpful error message ("did you mean..." or "maybe you need to install its server type")
+ if !p.validDirective(dir) {
+ return p.Errf("Unknown directive '%s'", dir)
+ }
+
+ // The directive itself is appended as a relevant token
+ p.block.Tokens[dir] = append(p.block.Tokens[dir], p.tokens[p.cursor])
+
+ for p.Next() {
+ if p.Val() == "{" {
+ nesting++
+ } else if p.isNewLine() && nesting == 0 {
+ p.cursor-- // read too far
+ break
+ } else if p.Val() == "}" && nesting > 0 {
+ nesting--
+ } else if p.Val() == "}" && nesting == 0 {
+ return p.Err("Unexpected '}' because no matching opening brace")
+ } else if p.Val() == "import" && p.isNewLine() {
+ if err := p.doImport(); err != nil {
+ return err
+ }
+ p.cursor-- // cursor is advanced when we continue, so roll back one more
+ continue
+ }
+ p.tokens[p.cursor].Text = replaceEnvVars(p.tokens[p.cursor].Text)
+ p.block.Tokens[dir] = append(p.block.Tokens[dir], p.tokens[p.cursor])
+ }
+
+ if nesting > 0 {
+ return p.EOFErr()
+ }
+ return nil
+}
+
+// openCurlyBrace expects the current token to be an
+// opening curly brace. This acts like an assertion
+// because it returns an error if the token is not
+// an opening curly brace. It does NOT advance the token.
+func (p *parser) openCurlyBrace() error {
+ if p.Val() != "{" {
+ return p.SyntaxErr("{")
+ }
+ return nil
+}
+
+// closeCurlyBrace expects the current token to be
+// a closing curly brace. This acts like an assertion
+// because it returns an error if the token is not
+// a closing curly brace. It does NOT advance the token.
+func (p *parser) closeCurlyBrace() error {
+ if p.Val() != "}" {
+ return p.SyntaxErr("}")
+ }
+ return nil
+}
+
+// validDirective returns true if dir is in p.validDirectives.
+func (p *parser) validDirective(dir string) bool {
+ if p.validDirectives == nil {
+ return true
+ }
+ for _, d := range p.validDirectives {
+ if d == dir {
+ return true
+ }
+ }
+ return false
+}
+
+// replaceEnvVars replaces environment variables that appear in the token
+// and understands both the $UNIX and %WINDOWS% syntaxes.
+func replaceEnvVars(s string) string {
+ s = replaceEnvReferences(s, "{%", "%}")
+ s = replaceEnvReferences(s, "{$", "}")
+ return s
+}
+
+// replaceEnvReferences performs the actual replacement of env variables
+// in s, given the placeholder start and placeholder end strings.
+func replaceEnvReferences(s, refStart, refEnd string) string {
+ index := strings.Index(s, refStart)
+ for index != -1 {
+ endIndex := strings.Index(s[index:], refEnd)
+ if endIndex == -1 {
+ break
+ }
+
+ endIndex += index
+ if endIndex > index+len(refStart) {
+ ref := s[index : endIndex+len(refEnd)]
+ s = strings.Replace(s, ref, os.Getenv(ref[len(refStart):len(ref)-len(refEnd)]), -1)
+ } else {
+ return s
+ }
+ index = strings.Index(s, refStart)
+ }
+ return s
+}
+
+// ServerBlock associates any number of keys (usually addresses
+// of some sort) with tokens (grouped by directive name).
+type ServerBlock struct {
+ Keys []string
+ Tokens map[string][]Token
+}
+
+func (p *parser) isSnippet() (bool, string) {
+ keys := p.block.Keys
+ // A snippet block is a single key with parens. Nothing else qualifies.
+ if len(keys) == 1 && strings.HasPrefix(keys[0], "(") && strings.HasSuffix(keys[0], ")") {
+ return true, strings.TrimSuffix(keys[0][1:], ")")
+ }
+ return false, ""
+}
+
+// read and store everything in a block for later replay.
+func (p *parser) snippetTokens() ([]Token, error) {
+ // TODO: disallow imports in snippets for simplicity at import time
+ // snippet must have curlies.
+ err := p.openCurlyBrace()
+ if err != nil {
+ return nil, err
+ }
+ count := 1
+ tokens := []Token{}
+ for p.Next() {
+ if p.Val() == "}" {
+ count--
+ if count == 0 {
+ break
+ }
+ }
+ if p.Val() == "{" {
+ count++
+ }
+ tokens = append(tokens, p.tokens[p.cursor])
+ }
+ // make sure we're matched up
+ if count != 0 {
+ return nil, p.SyntaxErr("}")
+ }
+ return tokens, nil
+}
diff --git a/vendor/github.com/coredns/corefile-migration/LICENSE b/vendor/github.com/coredns/corefile-migration/LICENSE
new file mode 100644
index 000000000..261eeb9e9
--- /dev/null
+++ b/vendor/github.com/coredns/corefile-migration/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/coredns/corefile-migration/migration/corefile/corefile.go b/vendor/github.com/coredns/corefile-migration/migration/corefile/corefile.go
new file mode 100644
index 000000000..15d2eea16
--- /dev/null
+++ b/vendor/github.com/coredns/corefile-migration/migration/corefile/corefile.go
@@ -0,0 +1,195 @@
+package corefile
+
+import (
+ "strings"
+
+ "github.com/coredns/caddy/caddyfile"
+)
+
+type Corefile struct {
+ Servers []*Server
+}
+
+type Server struct {
+ DomPorts []string
+ Plugins []*Plugin
+}
+
+type Plugin struct {
+ Name string
+ Args []string
+ Options []*Option
+}
+
+type Option struct {
+ Name string
+ Args []string
+}
+
+func New(s string) (*Corefile, error) {
+ c := Corefile{}
+ cc := caddyfile.NewDispenser("migration", strings.NewReader(s))
+ depth := 0
+ var cSvr *Server
+ var cPlg *Plugin
+ for cc.Next() {
+ if cc.Val() == "{" {
+ depth += 1
+ continue
+ } else if cc.Val() == "}" {
+ depth -= 1
+ continue
+ }
+ val := cc.Val()
+ args := cc.RemainingArgs()
+ switch depth {
+ case 0:
+ c.Servers = append(c.Servers,
+ &Server{
+ DomPorts: append([]string{val}, args...),
+ })
+ cSvr = c.Servers[len(c.Servers)-1]
+ case 1:
+ cSvr.Plugins = append(cSvr.Plugins,
+ &Plugin{
+ Name: val,
+ Args: args,
+ })
+ cPlg = cSvr.Plugins[len(cSvr.Plugins)-1]
+ case 2:
+ cPlg.Options = append(cPlg.Options,
+ &Option{
+ Name: val,
+ Args: args,
+ })
+ }
+ }
+ return &c, nil
+}
+
+func (c *Corefile) ToString() (out string) {
+ strs := []string{}
+ for _, s := range c.Servers {
+ strs = append(strs, s.ToString())
+ }
+ return strings.Join(strs, "\n")
+}
+
+func (s *Server) ToString() (out string) {
+ str := strings.Join(escapeArgs(s.DomPorts), " ")
+ strs := []string{}
+ for _, p := range s.Plugins {
+ strs = append(strs, strings.Repeat(" ", indent)+p.ToString())
+ }
+ if len(strs) > 0 {
+ str += " {\n" + strings.Join(strs, "\n") + "\n}\n"
+ }
+ return str
+}
+
+func (p *Plugin) ToString() (out string) {
+ str := strings.Join(append([]string{p.Name}, escapeArgs(p.Args)...), " ")
+ strs := []string{}
+ for _, o := range p.Options {
+ strs = append(strs, strings.Repeat(" ", indent*2)+o.ToString())
+ }
+ if len(strs) > 0 {
+ str += " {\n" + strings.Join(strs, "\n") + "\n" + strings.Repeat(" ", indent*1) + "}"
+ }
+ return str
+}
+
+func (o *Option) ToString() (out string) {
+ str := strings.Join(append([]string{o.Name}, escapeArgs(o.Args)...), " ")
+ return str
+}
+
+// escapeArgs returns the arguments list escaping and wrapping any argument containing whitespace in quotes
+func escapeArgs(args []string) []string {
+ var escapedArgs []string
+ for _, a := range args {
+ // if there is white space, wrap argument with quotes
+ if len(strings.Fields(a)) > 1 {
+ // escape quotes
+ a = strings.Replace(a, "\"", "\\\"", -1)
+ // wrap with quotes
+ a = "\"" + a + "\""
+ }
+ escapedArgs = append(escapedArgs, a)
+ }
+ return escapedArgs
+}
+
+func (s *Server) FindMatch(def []*Server) (*Server, bool) {
+NextServer:
+ for _, sDef := range def {
+ for i, dp := range sDef.DomPorts {
+ if dp == "*" {
+ continue
+ }
+ if dp == "***" {
+ return sDef, true
+ }
+ if i >= len(s.DomPorts) || dp != s.DomPorts[i] {
+ continue NextServer
+ }
+ }
+ if len(sDef.DomPorts) != len(s.DomPorts) {
+ continue
+ }
+ return sDef, true
+ }
+ return nil, false
+}
+
+func (p *Plugin) FindMatch(def []*Plugin) (*Plugin, bool) {
+NextPlugin:
+ for _, pDef := range def {
+ if pDef.Name != p.Name {
+ continue
+ }
+ for i, arg := range pDef.Args {
+ if arg == "*" {
+ continue
+ }
+ if arg == "***" {
+ return pDef, true
+ }
+ if i >= len(p.Args) || arg != p.Args[i] {
+ continue NextPlugin
+ }
+ }
+ if len(pDef.Args) != len(p.Args) {
+ continue
+ }
+ return pDef, true
+ }
+ return nil, false
+}
+
+func (o *Option) FindMatch(def []*Option) (*Option, bool) {
+NextOption:
+ for _, oDef := range def {
+ if oDef.Name != o.Name {
+ continue
+ }
+ for i, arg := range oDef.Args {
+ if arg == "*" {
+ continue
+ }
+ if arg == "***" {
+ return oDef, true
+ }
+ if i >= len(o.Args) || arg != o.Args[i] {
+ continue NextOption
+ }
+ }
+ if len(oDef.Args) != len(o.Args) {
+ continue
+ }
+ return oDef, true
+ }
+ return nil, false
+}
+
+const indent = 4
diff --git a/vendor/github.com/coredns/corefile-migration/migration/migrate.go b/vendor/github.com/coredns/corefile-migration/migration/migrate.go
new file mode 100644
index 000000000..4ba637555
--- /dev/null
+++ b/vendor/github.com/coredns/corefile-migration/migration/migrate.go
@@ -0,0 +1,483 @@
+package migration
+
+// This package provides a set of functions to help handle migrations of CoreDNS Corefiles to be compatible with new
+// versions of CoreDNS. The task of upgrading CoreDNS is the responsibility of a variety of Kubernetes management tools
+// (e.g. kubeadm and others), and the precise behavior may be different for each one. This library abstracts some basic
+// helper functions that make this easier to implement.
+
+import (
+ "errors"
+ "fmt"
+ "regexp"
+ "sort"
+
+ "github.com/coredns/corefile-migration/migration/corefile"
+)
+
+// Deprecated returns a list of deprecation notifications affecting the given Corefile. Notifications are returned for
+// any deprecated, removed, or ignored plugins/directives present in the Corefile. Notifications are also returned for
+// any new default plugins that would be added in a migration.
+func Deprecated(fromCoreDNSVersion, toCoreDNSVersion, corefileStr string) ([]Notice, error) {
+ return getStatus(fromCoreDNSVersion, toCoreDNSVersion, corefileStr, SevAll)
+}
+
+// Unsupported returns a list of notifications of plugins/options that are not supported by this migration tool,
+// but may still be valid in CoreDNS.
+func Unsupported(fromCoreDNSVersion, toCoreDNSVersion, corefileStr string) ([]Notice, error) {
+ return getStatus(fromCoreDNSVersion, toCoreDNSVersion, corefileStr, SevUnsupported)
+}
+
+func getStatus(fromCoreDNSVersion, toCoreDNSVersion, corefileStr, status string) ([]Notice, error) {
+ err := ValidUpMigration(fromCoreDNSVersion, toCoreDNSVersion)
+ if err != nil {
+ return nil, err
+ }
+ cf, err := corefile.New(corefileStr)
+ if err != nil {
+ return nil, err
+ }
+ notices := []Notice{}
+ v := fromCoreDNSVersion
+ for {
+ if fromCoreDNSVersion != toCoreDNSVersion {
+ v = Versions[v].nextVersion
+ }
+ for _, s := range cf.Servers {
+ for _, p := range s.Plugins {
+ vp, present := Versions[v].plugins[p.Name]
+ if status == SevUnsupported && !present {
+ notices = append(notices, Notice{Plugin: p.Name, Severity: status, Version: v})
+ continue
+ }
+ if !present {
+ continue
+ }
+ if vp.status != "" && vp.status != SevNewDefault && status != SevUnsupported {
+ notices = append(notices, Notice{
+ Plugin: p.Name,
+ Severity: vp.status,
+ Version: v,
+ ReplacedBy: vp.replacedBy,
+ Additional: vp.additional,
+ })
+ continue
+ }
+ for _, o := range p.Options {
+ vo, present := matchOption(o.Name, Versions[v].plugins[p.Name])
+ if status == SevUnsupported {
+ if present {
+ continue
+ }
+ notices = append(notices, Notice{
+ Plugin: p.Name,
+ Option: o.Name,
+ Severity: status,
+ Version: v,
+ })
+ continue
+ }
+ if !present {
+ continue
+ }
+ if vo.status != "" && vo.status != SevNewDefault {
+ notices = append(notices, Notice{Plugin: p.Name, Option: o.Name, Severity: vo.status, Version: v})
+ continue
+ }
+ }
+ if status != SevUnsupported {
+ CheckForNewOptions:
+ for name, vo := range Versions[v].plugins[p.Name].namedOptions {
+ if vo.status != SevNewDefault {
+ continue
+ }
+ for _, o := range p.Options {
+ if name == o.Name {
+ continue CheckForNewOptions
+ }
+ }
+ notices = append(notices, Notice{Plugin: p.Name, Option: name, Severity: SevNewDefault, Version: v})
+ }
+ }
+ }
+ if status != SevUnsupported {
+ CheckForNewPlugins:
+ for name, vp := range Versions[v].plugins {
+ if vp.status != SevNewDefault {
+ continue
+ }
+ for _, p := range s.Plugins {
+ if name == p.Name {
+ continue CheckForNewPlugins
+ }
+ }
+ notices = append(notices, Notice{Plugin: name, Option: "", Severity: SevNewDefault, Version: v})
+ }
+ }
+ }
+ if v == toCoreDNSVersion {
+ break
+ }
+ }
+ return notices, nil
+}
+
+// Migrate returns the Corefile converted to toCoreDNSVersion, or an error if it cannot. This function only accepts
+// a forward migration, where the destination version is >= the start version.
+// If deprecations is true, deprecated plugins/options will be migrated as soon as they are deprecated.
+// If deprecations is false, deprecated plugins/options will be migrated only once they become removed or ignored.
+func Migrate(fromCoreDNSVersion, toCoreDNSVersion, corefileStr string, deprecations bool) (string, error) {
+ if fromCoreDNSVersion == toCoreDNSVersion {
+ return corefileStr, nil
+ }
+ err := ValidUpMigration(fromCoreDNSVersion, toCoreDNSVersion)
+ if err != nil {
+ return "", err
+ }
+ cf, err := corefile.New(corefileStr)
+ if err != nil {
+ return "", err
+ }
+ v := fromCoreDNSVersion
+ for {
+ v = Versions[v].nextVersion
+
+ // apply any global corefile level pre-processing
+ if Versions[v].preProcess != nil {
+ cf, err = Versions[v].preProcess(cf)
+ if err != nil {
+ return "", err
+ }
+ }
+
+ newSrvs := []*corefile.Server{}
+ for _, s := range cf.Servers {
+ newPlugs := []*corefile.Plugin{}
+ for _, p := range s.Plugins {
+ vp, present := Versions[v].plugins[p.Name]
+ if !present {
+ newPlugs = append(newPlugs, p)
+ continue
+ }
+ if !deprecations && vp.status == SevDeprecated {
+ newPlugs = append(newPlugs, p)
+ continue
+ }
+ newOpts := []*corefile.Option{}
+ for _, o := range p.Options {
+ vo, present := matchOption(o.Name, Versions[v].plugins[p.Name])
+ if !present {
+ newOpts = append(newOpts, o)
+ continue
+ }
+ if !deprecations && vo.status == SevDeprecated {
+ newOpts = append(newOpts, o)
+ continue
+ }
+ if vo.action == nil {
+ newOpts = append(newOpts, o)
+ continue
+ }
+ o, err := vo.action(o)
+ if err != nil {
+ return "", err
+ }
+ if o == nil {
+ // remove option
+ continue
+ }
+ newOpts = append(newOpts, o)
+ }
+ if vp.action != nil {
+ p, err := vp.action(p)
+ if err != nil {
+ return "", err
+ }
+ if p == nil {
+ // remove plugin, skip options processing
+ continue
+ }
+ }
+ newPlug := &corefile.Plugin{
+ Name: p.Name,
+ Args: p.Args,
+ Options: newOpts,
+ }
+ CheckForNewOptions:
+ for name, vo := range Versions[v].plugins[p.Name].namedOptions {
+ if vo.status != SevNewDefault {
+ continue
+ }
+ for _, o := range p.Options {
+ if name == o.Name {
+ continue CheckForNewOptions
+ }
+ }
+ newPlug, err = vo.add(newPlug)
+ if err != nil {
+ return "", err
+ }
+ }
+
+ newPlugs = append(newPlugs, newPlug)
+ }
+ newSrv := &corefile.Server{
+ DomPorts: s.DomPorts,
+ Plugins: newPlugs,
+ }
+ CheckForNewPlugins:
+ for name, vp := range Versions[v].plugins {
+ if vp.status != SevNewDefault {
+ continue
+ }
+ for _, p := range s.Plugins {
+ if name == p.Name {
+ continue CheckForNewPlugins
+ }
+ }
+ newSrv, err = vp.add(newSrv)
+ if err != nil {
+ return "", err
+ }
+ }
+
+ newSrvs = append(newSrvs, newSrv)
+ }
+
+ cf = &corefile.Corefile{Servers: newSrvs}
+
+ // apply any global corefile level post processing
+ if Versions[v].postProcess != nil {
+ cf, err = Versions[v].postProcess(cf)
+ if err != nil {
+ return "", err
+ }
+ }
+
+ if v == toCoreDNSVersion {
+ break
+ }
+ }
+ return cf.ToString(), nil
+}
+
+// MigrateDown returns the Corefile converted to toCoreDNSVersion, or an error if it cannot. This function only accepts
+// a downward migration, where the destination version is <= the start version.
+func MigrateDown(fromCoreDNSVersion, toCoreDNSVersion, corefileStr string) (string, error) {
+ if fromCoreDNSVersion == toCoreDNSVersion {
+ return corefileStr, nil
+ }
+ err := validDownMigration(fromCoreDNSVersion, toCoreDNSVersion)
+ if err != nil {
+ return "", err
+ }
+ cf, err := corefile.New(corefileStr)
+ if err != nil {
+ return "", err
+ }
+ v := fromCoreDNSVersion
+ for {
+ newSrvs := []*corefile.Server{}
+ for _, s := range cf.Servers {
+ newPlugs := []*corefile.Plugin{}
+ for _, p := range s.Plugins {
+ vp, present := Versions[v].plugins[p.Name]
+ if !present {
+ newPlugs = append(newPlugs, p)
+ continue
+ }
+ if vp.downAction == nil {
+ newPlugs = append(newPlugs, p)
+ continue
+ }
+ p, err := vp.downAction(p)
+ if err != nil {
+ return "", err
+ }
+ if p == nil {
+ // remove plugin, skip options processing
+ continue
+ }
+
+ newOpts := []*corefile.Option{}
+ for _, o := range p.Options {
+ vo, present := matchOption(o.Name, Versions[v].plugins[p.Name])
+ if !present {
+ newOpts = append(newOpts, o)
+ continue
+ }
+ if vo.downAction == nil {
+ newOpts = append(newOpts, o)
+ continue
+ }
+ o, err := vo.downAction(o)
+ if err != nil {
+ return "", err
+ }
+ if o == nil {
+ // remove option
+ continue
+ }
+ newOpts = append(newOpts, o)
+ }
+ newPlug := &corefile.Plugin{
+ Name: p.Name,
+ Args: p.Args,
+ Options: newOpts,
+ }
+ newPlugs = append(newPlugs, newPlug)
+ }
+ newSrv := &corefile.Server{
+ DomPorts: s.DomPorts,
+ Plugins: newPlugs,
+ }
+ newSrvs = append(newSrvs, newSrv)
+ }
+
+ cf = &corefile.Corefile{Servers: newSrvs}
+
+ if v == toCoreDNSVersion {
+ break
+ }
+ v = Versions[v].priorVersion
+ }
+ return cf.ToString(), nil
+}
+
+// Default returns true if the Corefile is the default for a given version of Kubernetes.
+// Or, if k8sVersion is empty, Default returns true if the Corefile is the default for any version of Kubernetes.
+func Default(k8sVersion, corefileStr string) bool {
+ cf, err := corefile.New(corefileStr)
+ if err != nil {
+ return false
+ }
+NextVersion:
+ for _, v := range Versions {
+ // skip versions that are not deployed by the requested k8s release
+ if k8sVersion != "" {
+ matched := false
+ for _, release := range v.k8sReleases {
+ matched = matched || k8sVersion == release
+ }
+ if !matched {
+ continue
+ }
+ }
+ defCf, err := corefile.New(v.defaultConf)
+ if err != nil {
+ continue
+ }
+ // check corefile against k8s release default
+ if len(cf.Servers) != len(defCf.Servers) {
+ continue NextVersion
+ }
+ for _, s := range cf.Servers {
+ defS, found := s.FindMatch(defCf.Servers)
+ if !found {
+ continue NextVersion
+ }
+ if len(s.Plugins) != len(defS.Plugins) {
+ continue NextVersion
+ }
+ for _, p := range s.Plugins {
+ defP, found := p.FindMatch(defS.Plugins)
+ if !found {
+ continue NextVersion
+ }
+ if len(p.Options) != len(defP.Options) {
+ continue NextVersion
+ }
+ for _, o := range p.Options {
+ _, found := o.FindMatch(defP.Options)
+ if !found {
+ continue NextVersion
+ }
+ }
+ }
+ }
+ return true
+ }
+ return false
+}
+
+// Released returns true if dockerImageSHA matches any released image of CoreDNS.
+func Released(dockerImageSHA string) bool {
+ for _, v := range Versions {
+ if v.dockerImageSHA == dockerImageSHA {
+ return true
+ }
+ }
+ return false
+}
+
+// VersionFromSHA returns the version string matching the dockerImageSHA.
+func VersionFromSHA(dockerImageSHA string) (string, error) {
+ for vStr, v := range Versions {
+ if v.dockerImageSHA == dockerImageSHA {
+ return vStr, nil
+ }
+ }
+ return "", errors.New("sha unsupported")
+}
+
+// ValidVersions returns a list of all versions defined
+func ValidVersions() []string {
+ var vStrs []string
+ for vStr := range Versions {
+ vStrs = append(vStrs, vStr)
+ }
+ sort.Strings(vStrs)
+ return vStrs
+}
+
+func ValidUpMigration(fromCoreDNSVersion, toCoreDNSVersion string) error {
+ err := validateVersion(fromCoreDNSVersion)
+ if err != nil {
+ return err
+ }
+ if fromCoreDNSVersion == toCoreDNSVersion {
+ return nil
+ }
+ for next := Versions[fromCoreDNSVersion].nextVersion; next != ""; next = Versions[next].nextVersion {
+ if next != toCoreDNSVersion {
+ continue
+ }
+ return nil
+ }
+ return fmt.Errorf("cannot migrate up to '%v' from '%v'", toCoreDNSVersion, fromCoreDNSVersion)
+}
+
+func validateVersion(fromCoreDNSVersion string) error {
+ if _, ok := Versions[fromCoreDNSVersion]; !ok {
+ return fmt.Errorf("start version '%v' not supported", fromCoreDNSVersion)
+ }
+ return nil
+}
+
+func validDownMigration(fromCoreDNSVersion, toCoreDNSVersion string) error {
+ err := validateVersion(fromCoreDNSVersion)
+ if err != nil {
+ return err
+ }
+ for prior := Versions[fromCoreDNSVersion].priorVersion; prior != ""; prior = Versions[prior].priorVersion {
+ if prior != toCoreDNSVersion {
+ continue
+ }
+ return nil
+ }
+ return fmt.Errorf("cannot migrate down to '%v' from '%v'", toCoreDNSVersion, fromCoreDNSVersion)
+}
+
+func matchOption(oName string, p plugin) (*option, bool) {
+ o, exists := p.namedOptions[oName]
+ if exists {
+ o.name = oName
+ return &o, exists
+ }
+ for pattern, o := range p.patternOptions {
+ matched, err := regexp.MatchString(pattern, oName)
+ if err != nil {
+ continue
+ }
+ if matched {
+ o.name = oName
+ return &o, true
+ }
+ }
+ return nil, false
+}
diff --git a/vendor/github.com/coredns/corefile-migration/migration/notice.go b/vendor/github.com/coredns/corefile-migration/migration/notice.go
new file mode 100644
index 000000000..68fb9c52b
--- /dev/null
+++ b/vendor/github.com/coredns/corefile-migration/migration/notice.go
@@ -0,0 +1,48 @@
+package migration
+
+import "fmt"
+
+// Notice is a migration warning
+type Notice struct {
+ Plugin string
+ Option string
+ Severity string // 'deprecated', 'removed', or 'unsupported'
+ ReplacedBy string
+ Additional string
+ Version string
+}
+
+func (n *Notice) ToString() string {
+ s := ""
+ if n.Option == "" {
+ s += fmt.Sprintf(`Plugin "%v" `, n.Plugin)
+ } else {
+ s += fmt.Sprintf(`Option "%v" in plugin "%v" `, n.Option, n.Plugin)
+ }
+ if n.Severity == SevUnsupported {
+ s += "is unsupported by this migration tool in " + n.Version + "."
+ } else if n.Severity == SevNewDefault {
+ s += "is added as a default in " + n.Version + "."
+ } else {
+ s += "is " + n.Severity + " in " + n.Version + "."
+ }
+ if n.ReplacedBy != "" {
+ s += fmt.Sprintf(` It is replaced by "%v".`, n.ReplacedBy)
+ }
+ if n.Additional != "" {
+ s += " " + n.Additional
+ }
+ return s
+}
+
+const (
+ // The following statuses are used to indicate the state of support/deprecation in a given release.
+ SevDeprecated = "deprecated" // deprecated, but still completely functional
+ SevIgnored = "ignored" // if included in the corefile, it will be ignored by CoreDNS
+ SevRemoved = "removed" // completely removed from CoreDNS, and would cause CoreDNS to exit if present in the Corefile
+ SevNewDefault = "newdefault" // added to the default corefile. CoreDNS may not function properly if it is not present in the corefile.
+ SevUnsupported = "unsupported" // the plugin/option is not supported by the migration tool
+
+ // The following statuses are used for selecting/filtering notifications
+ SevAll = "all" // show all statuses
+)
diff --git a/vendor/github.com/coredns/corefile-migration/migration/plugins.go b/vendor/github.com/coredns/corefile-migration/migration/plugins.go
new file mode 100644
index 000000000..258d2b80d
--- /dev/null
+++ b/vendor/github.com/coredns/corefile-migration/migration/plugins.go
@@ -0,0 +1,614 @@
+package migration
+
+import (
+ "errors"
+
+ "github.com/coredns/corefile-migration/migration/corefile"
+)
+
+type plugin struct {
+ status string
+ replacedBy string
+ additional string
+ namedOptions map[string]option
+ patternOptions map[string]option
+ action pluginActionFn // action affecting this plugin only
+ add serverActionFn // action to add a new plugin to the server block
+ downAction pluginActionFn // downgrade action affecting this plugin only
+}
+
+type option struct {
+ name string
+ status string
+ replacedBy string
+ additional string
+ action optionActionFn // action affecting this option only
+ add pluginActionFn // action to add the option to the plugin
+ downAction optionActionFn // downgrade action affecting this option only
+}
+
+type corefileAction func(*corefile.Corefile) (*corefile.Corefile, error)
+type serverActionFn func(*corefile.Server) (*corefile.Server, error)
+type pluginActionFn func(*corefile.Plugin) (*corefile.Plugin, error)
+type optionActionFn func(*corefile.Option) (*corefile.Option, error)
+
+// plugins holds a map of plugin names and their migration rules per "version". "Version" here is meaningless outside
+// of the context of this code. Each change in options or migration actions for a plugin requires a new "version"
+// containing those new/removed options and migration actions. Plugins in CoreDNS are not versioned.
+var plugins = map[string]map[string]plugin{
+ "kubernetes": {
+ "v1": plugin{
+ namedOptions: map[string]option{
+ "resyncperiod": {},
+ "endpoint": {},
+ "tls": {},
+ "namespaces": {},
+ "labels": {},
+ "pods": {},
+ "endpoint_pod_names": {},
+ "upstream": {},
+ "ttl": {},
+ "noendpoints": {},
+ "transfer": {},
+ "fallthrough": {},
+ "ignore": {},
+ },
+ },
+ "v2": plugin{
+ namedOptions: map[string]option{
+ "resyncperiod": {},
+ "endpoint": {},
+ "tls": {},
+ "namespaces": {},
+ "labels": {},
+ "pods": {},
+ "endpoint_pod_names": {},
+ "upstream": {},
+ "ttl": {},
+ "noendpoints": {},
+ "transfer": {},
+ "fallthrough": {},
+ "ignore": {},
+ "kubeconfig": {}, // new option
+ },
+ },
+ "v3": plugin{
+ namedOptions: map[string]option{
+ "resyncperiod": {},
+ "endpoint": { // new deprecation
+ status: SevDeprecated,
+ action: useFirstArgumentOnly,
+ },
+ "tls": {},
+ "kubeconfig": {},
+ "namespaces": {},
+ "labels": {},
+ "pods": {},
+ "endpoint_pod_names": {},
+ "upstream": {},
+ "ttl": {},
+ "noendpoints": {},
+ "transfer": {},
+ "fallthrough": {},
+ "ignore": {},
+ },
+ },
+ "v4": plugin{
+ namedOptions: map[string]option{
+ "resyncperiod": {},
+ "endpoint": {
+ status: SevIgnored,
+ action: useFirstArgumentOnly,
+ },
+ "tls": {},
+ "kubeconfig": {},
+ "namespaces": {},
+ "labels": {},
+ "pods": {},
+ "endpoint_pod_names": {},
+ "upstream": { // new deprecation
+ status: SevDeprecated,
+ action: removeOption,
+ },
+ "ttl": {},
+ "noendpoints": {},
+ "transfer": {},
+ "fallthrough": {},
+ "ignore": {},
+ },
+ },
+ "v5": plugin{
+ namedOptions: map[string]option{
+ "resyncperiod": { // new deprecation
+ status: SevDeprecated,
+ action: removeOption,
+ },
+ "endpoint": {
+ status: SevIgnored,
+ action: useFirstArgumentOnly,
+ },
+ "tls": {},
+ "kubeconfig": {},
+ "namespaces": {},
+ "labels": {},
+ "pods": {},
+ "endpoint_pod_names": {},
+ "upstream": {
+ status: SevIgnored,
+ action: removeOption,
+ },
+ "ttl": {},
+ "noendpoints": {},
+ "transfer": {},
+ "fallthrough": {},
+ "ignore": {},
+ },
+ },
+ "v6": plugin{
+ namedOptions: map[string]option{
+ "resyncperiod": { // now ignored
+ status: SevIgnored,
+ action: removeOption,
+ },
+ "endpoint": {
+ status: SevIgnored,
+ action: useFirstArgumentOnly,
+ },
+ "tls": {},
+ "kubeconfig": {},
+ "namespaces": {},
+ "labels": {},
+ "pods": {},
+ "endpoint_pod_names": {},
+ "upstream": {
+ status: SevIgnored,
+ action: removeOption,
+ },
+ "ttl": {},
+ "noendpoints": {},
+ "transfer": {},
+ "fallthrough": {},
+ "ignore": {},
+ },
+ },
+ "v7": plugin{
+ namedOptions: map[string]option{
+ "resyncperiod": { // new removal
+ status: SevRemoved,
+ action: removeOption,
+ },
+ "endpoint": {
+ status: SevIgnored,
+ action: useFirstArgumentOnly,
+ },
+ "tls": {},
+ "kubeconfig": {},
+ "namespaces": {},
+ "labels": {},
+ "pods": {},
+ "endpoint_pod_names": {},
+ "upstream": { // new removal
+ status: SevRemoved,
+ action: removeOption,
+ },
+ "ttl": {},
+ "noendpoints": {},
+ "transfer": {},
+ "fallthrough": {},
+ "ignore": {},
+ },
+ },
+ "v8 remove transfer option": plugin{
+ namedOptions: map[string]option{
+ "endpoint": {
+ status: SevIgnored,
+ action: useFirstArgumentOnly,
+ },
+ "tls": {},
+ "kubeconfig": {},
+ "namespaces": {},
+ "labels": {},
+ "pods": {},
+ "endpoint_pod_names": {},
+ "ttl": {},
+ "noendpoints": {},
+ "transfer": {
+ status: SevRemoved,
+ action: removeOption,
+ },
+ "fallthrough": {},
+ "ignore": {},
+ },
+ },
+ "v8": plugin{
+ namedOptions: map[string]option{
+ "endpoint": {
+ status: SevIgnored,
+ action: useFirstArgumentOnly,
+ },
+ "tls": {},
+ "kubeconfig": {},
+ "namespaces": {},
+ "labels": {},
+ "pods": {},
+ "endpoint_pod_names": {},
+ "ttl": {},
+ "noendpoints": {},
+ "fallthrough": {},
+ "ignore": {},
+ },
+ },
+ },
+
+ "errors": {
+ "v1": plugin{},
+ "v2": plugin{
+ namedOptions: map[string]option{
+ "consolidate": {},
+ },
+ },
+ },
+
+ "health": {
+ "v1": plugin{
+ namedOptions: map[string]option{
+ "lameduck": {},
+ },
+ },
+ "v1 add lameduck": plugin{
+ namedOptions: map[string]option{
+ "lameduck": {
+ status: SevNewDefault,
+ add: func(c *corefile.Plugin) (*corefile.Plugin, error) {
+ return addOptionToPlugin(c, &corefile.Option{Name: "lameduck 5s"})
+ },
+ downAction: removeOption,
+ },
+ },
+ },
+ },
+
+ "hosts": {
+ "v1": plugin{
+ namedOptions: map[string]option{
+ "ttl": {},
+ "no_reverse": {},
+ "reload": {},
+ "fallthrough": {},
+ },
+ patternOptions: map[string]option{
+ `\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}`: {}, // close enough
+ `[0-9A-Fa-f]{1,4}:[:0-9A-Fa-f]+:[0-9A-Fa-f]{1,4}`: {}, // less close, but still close enough
+ },
+ },
+ },
+
+ "rewrite": {
+ "v1": plugin{
+ namedOptions: map[string]option{
+ "type": {},
+ "class": {},
+ "name": {},
+ "answer name": {},
+ "edns0": {},
+ },
+ },
+ "v2": plugin{
+ namedOptions: map[string]option{
+ "type": {},
+ "class": {},
+ "name": {},
+ "answer name": {},
+ "edns0": {},
+ "ttl": {}, // new option
+ },
+ },
+ },
+
+ "log": {
+ "v1": plugin{
+ namedOptions: map[string]option{
+ "class": {},
+ },
+ },
+ },
+
+ "cache": {
+ "v1": plugin{
+ namedOptions: map[string]option{
+ "success": {},
+ "denial": {},
+ "prefetch": {},
+ },
+ },
+ "v2": plugin{
+ namedOptions: map[string]option{
+ "success": {},
+ "denial": {},
+ "prefetch": {},
+ "serve_stale": {}, // new option
+ },
+ },
+ },
+
+ "forward": {
+ "v1": plugin{
+ namedOptions: map[string]option{
+ "except": {},
+ "force_tcp": {},
+ "expire": {},
+ "max_fails": {},
+ "tls": {},
+ "tls_servername": {},
+ "policy": {},
+ "health_check": {},
+ },
+ },
+ "v2": plugin{
+ namedOptions: map[string]option{
+ "except": {},
+ "force_tcp": {},
+ "prefer_udp": {},
+ "expire": {},
+ "max_fails": {},
+ "tls": {},
+ "tls_servername": {},
+ "policy": {},
+ "health_check": {},
+ },
+ },
+ "v3": plugin{
+ namedOptions: map[string]option{
+ "except": {},
+ "force_tcp": {},
+ "prefer_udp": {},
+ "expire": {},
+ "max_fails": {},
+ "tls": {},
+ "tls_servername": {},
+ "policy": {},
+ "health_check": {},
+ "max_concurrent": {},
+ },
+ },
+ "v3 add max_concurrent": plugin{
+ namedOptions: map[string]option{
+ "except": {},
+ "force_tcp": {},
+ "prefer_udp": {},
+ "expire": {},
+ "max_fails": {},
+ "tls": {},
+ "tls_servername": {},
+ "policy": {},
+ "health_check": {},
+ "max_concurrent": { // new option
+ status: SevNewDefault,
+ add: func(c *corefile.Plugin) (*corefile.Plugin, error) {
+ return addOptionToPlugin(c, &corefile.Option{Name: "max_concurrent 1000"})
+ },
+ downAction: removeOption,
+ },
+ },
+ },
+ },
+
+ "k8s_external": {
+ "v1": plugin{
+ namedOptions: map[string]option{
+ "apex": {},
+ "ttl": {},
+ },
+ },
+ },
+
+ "proxy": {
+ "v1": plugin{
+ namedOptions: map[string]option{
+ "policy": {},
+ "fail_timeout": {},
+ "max_fails": {},
+ "health_check": {},
+ "except": {},
+ "spray": {},
+ "protocol": { // https_google option ignored
+ status: SevIgnored,
+ action: proxyRemoveHttpsGoogleProtocol,
+ },
+ },
+ },
+ "v2": plugin{
+ namedOptions: map[string]option{
+ "policy": {},
+ "fail_timeout": {},
+ "max_fails": {},
+ "health_check": {},
+ "except": {},
+ "spray": {},
+ "protocol": { // https_google option removed
+ status: SevRemoved,
+ action: proxyRemoveHttpsGoogleProtocol,
+ },
+ },
+ },
+ "deprecation": plugin{ // proxy -> forward deprecation migration
+ status: SevDeprecated,
+ replacedBy: "forward",
+ action: proxyToForwardPluginAction,
+ namedOptions: proxyToForwardOptionsMigrations,
+ },
+ "removal": plugin{ // proxy -> forward forced migration
+ status: SevRemoved,
+ replacedBy: "forward",
+ action: proxyToForwardPluginAction,
+ namedOptions: proxyToForwardOptionsMigrations,
+ },
+ },
+
+ "transfer": {
+ "v1": plugin{
+ namedOptions: map[string]option{
+ "to": {},
+ },
+ },
+ },
+}
+
+func removePlugin(*corefile.Plugin) (*corefile.Plugin, error) { return nil, nil }
+func removeOption(*corefile.Option) (*corefile.Option, error) { return nil, nil }
+
+func renamePlugin(p *corefile.Plugin, to string) (*corefile.Plugin, error) {
+ p.Name = to
+ return p, nil
+}
+
+func addToServerBlockWithPlugins(sb *corefile.Server, newPlugin *corefile.Plugin, with []string) (*corefile.Server, error) {
+ if len(with) == 0 {
+ // add to all blocks
+ sb.Plugins = append(sb.Plugins, newPlugin)
+ return sb, nil
+ }
+ for _, p := range sb.Plugins {
+ for _, w := range with {
+ if w == p.Name {
+ // add to this block
+ sb.Plugins = append(sb.Plugins, newPlugin)
+ return sb, nil
+ }
+ }
+ }
+ return sb, nil
+}
+
+func copyKubernetesTransferOptToPlugin(cf *corefile.Corefile) (*corefile.Corefile, error) {
+ for _, s := range cf.Servers {
+ var (
+ to []string
+ zone string
+ )
+ for _, p := range s.Plugins {
+ if p.Name != "kubernetes" {
+ continue
+ }
+ zone = p.Args[0]
+ for _, o := range p.Options {
+ if o.Name != "transfer" {
+ continue
+ }
+ to = o.Args
+ }
+ }
+ if len(to) < 2 {
+ continue
+ }
+ s.Plugins = append(s.Plugins, &corefile.Plugin{
+ Name: "transfer",
+ Args: []string{zone},
+ Options: []*corefile.Option{{Name: "to", Args: to[1:]}},
+ })
+ }
+ return cf, nil
+}
+
+func addToKubernetesServerBlocks(sb *corefile.Server, newPlugin *corefile.Plugin) (*corefile.Server, error) {
+ return addToServerBlockWithPlugins(sb, newPlugin, []string{"kubernetes"})
+}
+
+func addToForwardingServerBlocks(sb *corefile.Server, newPlugin *corefile.Plugin) (*corefile.Server, error) {
+ return addToServerBlockWithPlugins(sb, newPlugin, []string{"forward", "proxy"})
+}
+
+func addToAllServerBlocks(sb *corefile.Server, newPlugin *corefile.Plugin) (*corefile.Server, error) {
+ return addToServerBlockWithPlugins(sb, newPlugin, []string{})
+}
+
+func addOptionToPlugin(pl *corefile.Plugin, newOption *corefile.Option) (*corefile.Plugin, error) {
+ pl.Options = append(pl.Options, newOption)
+ return pl, nil
+}
+
+var proxyToForwardOptionsMigrations = map[string]option{
+ "policy": {
+ action: func(o *corefile.Option) (*corefile.Option, error) {
+ if len(o.Args) == 1 && o.Args[0] == "least_conn" {
+ o.Name = "force_tcp"
+ o.Args = nil
+ }
+ return o, nil
+ },
+ },
+ "except": {},
+ "fail_timeout": {action: removeOption},
+ "max_fails": {action: removeOption},
+ "health_check": {action: removeOption},
+ "spray": {action: removeOption},
+ "protocol": {
+ action: func(o *corefile.Option) (*corefile.Option, error) {
+ if len(o.Args) >= 2 && o.Args[0] == "force_tcp" {
+ o.Name = "force_tcp"
+ o.Args = nil
+ return o, nil
+ }
+ return nil, nil
+ },
+ },
+}
+
+var proxyToForwardPluginAction = func(p *corefile.Plugin) (*corefile.Plugin, error) {
+ return renamePlugin(p, "forward")
+}
+
+var useFirstArgumentOnly = func(o *corefile.Option) (*corefile.Option, error) {
+ if len(o.Args) < 1 {
+ return o, nil
+ }
+ o.Args = o.Args[:1]
+ return o, nil
+}
+
+var proxyRemoveHttpsGoogleProtocol = func(o *corefile.Option) (*corefile.Option, error) {
+ if len(o.Args) > 0 && o.Args[0] == "https_google" {
+ return nil, nil
+ }
+ return o, nil
+}
+
+func breakForwardStubDomainsIntoServerBlocks(cf *corefile.Corefile) (*corefile.Corefile, error) {
+ for _, sb := range cf.Servers {
+ for j, fwd := range sb.Plugins {
+ if fwd.Name != "forward" {
+ continue
+ }
+ if len(fwd.Args) == 0 {
+ return nil, errors.New("found invalid forward plugin declaration")
+ }
+ if fwd.Args[0] == "." {
+ // don't move the default upstream
+ continue
+ }
+ if len(sb.DomPorts) != 1 {
+ return cf, errors.New("unhandled migration of multi-domain/port server block")
+ }
+ if sb.DomPorts[0] != "." && sb.DomPorts[0] != ".:53" {
+ return cf, errors.New("unhandled migration of non-default domain/port server block")
+ }
+
+ newSb := &corefile.Server{} // create a new server block
+ newSb.DomPorts = []string{fwd.Args[0]} // copy the forward zone to the server block domain
+ fwd.Args[0] = "." // the plugin's zone changes to "." for brevity
+ newSb.Plugins = append(newSb.Plugins, fwd) // add the plugin to its new server block
+
+ // Add appropriate additional plugins to the new server block
+ newSb.Plugins = append(newSb.Plugins, &corefile.Plugin{Name: "loop"})
+ newSb.Plugins = append(newSb.Plugins, &corefile.Plugin{Name: "errors"})
+ newSb.Plugins = append(newSb.Plugins, &corefile.Plugin{Name: "cache", Args: []string{"30"}})
+
+ // add the new server block to the corefile
+ cf.Servers = append(cf.Servers, newSb)
+
+ // remove the forward plugin from the original server block
+ sb.Plugins = append(sb.Plugins[:j], sb.Plugins[j+1:]...)
+ }
+ }
+ return cf, nil
+}
diff --git a/vendor/github.com/coredns/corefile-migration/migration/versions.go b/vendor/github.com/coredns/corefile-migration/migration/versions.go
new file mode 100644
index 000000000..b9c7a1391
--- /dev/null
+++ b/vendor/github.com/coredns/corefile-migration/migration/versions.go
@@ -0,0 +1,800 @@
+package migration
+
+import (
+ "github.com/coredns/corefile-migration/migration/corefile"
+)
+
+// release holds information pertaining to a single CoreDNS release
+type release struct {
+ k8sReleases []string // a list of K8s versions that deploy this CoreDNS release by default
+ nextVersion string // the next CoreDNS version
+ priorVersion string // the prior CoreDNS version
+ dockerImageSHA string // the docker image SHA for this release
+ plugins map[string]plugin // map of plugins with deprecation status and migration actions for this release
+
+ // pre/postProcess are processing actions to take on the corefile as a whole. Used for complex migration
+ // tasks that don't fit well into the modular plugin/option migration framework, e.g. when the
+ // action on a plugin needs to extend beyond the scope of that plugin (affecting other plugins,
+ // server blocks, etc.), such as splitting plugins out into separate server blocks.
+ preProcess corefileAction
+ postProcess corefileAction
+
+ // defaultConf holds the default Corefile template packaged with the corresponding k8sReleases.
+ // Wildcards are used for fuzzy matching:
+ // "*" matches exactly one token
+ // "***" matches 0 all remaining tokens on the line
+ // Order of server blocks, plugins, and namedOptions does not matter.
+ // Order of arguments does matter.
+ defaultConf string
+}
+
+// Versions holds a map of plugin/option migrations per CoreDNS release (since 1.1.4)
+var Versions = map[string]release{
+ "1.8.4": {
+ priorVersion: "1.8.3",
+ dockerImageSHA: "6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v8"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v3"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ "transfer": plugins["transfer"]["v1"],
+ },
+ },
+ "1.8.3": {
+ nextVersion: "1.8.4",
+ priorVersion: "1.8.0", // CoreDNS 1.8.2 is not a valid version and 1.8.1 docker images were never released.
+ dockerImageSHA: "642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v8"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v3"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ "transfer": plugins["transfer"]["v1"],
+ },
+ },
+ "1.8.0": {
+ nextVersion: "1.8.3", // CoreDNS 1.8.2 is not a valid version and 1.8.1 docker images were never released.
+ priorVersion: "1.7.1",
+ k8sReleases: []string{"1.21"},
+ dockerImageSHA: "cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v8 remove transfer option"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v3"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ "transfer": plugins["transfer"]["v1"],
+ },
+ preProcess: copyKubernetesTransferOptToPlugin,
+ },
+ "1.7.1": {
+ nextVersion: "1.8.0",
+ priorVersion: "1.7.0",
+ dockerImageSHA: "4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v7"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v3"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.7.0": {
+ nextVersion: "1.7.1",
+ priorVersion: "1.6.9",
+ k8sReleases: []string{"1.19", "1.20"},
+ dockerImageSHA: "73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c",
+ defaultConf: `.:53 {
+ errors
+ health {
+ lameduck 5s
+ }
+ ready
+ kubernetes * *** {
+ pods insecure
+ fallthrough in-addr.arpa ip6.arpa
+ ttl 30
+ }
+ prometheus :9153
+ forward . * {
+ max_concurrent 1000
+ }
+ cache 30
+ loop
+ reload
+ loadbalance
+}`,
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v7"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v3 add max_concurrent"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.6.9": {
+ nextVersion: "1.7.0",
+ priorVersion: "1.6.7",
+ dockerImageSHA: "40ee1b708e20e3a6b8e04ccd8b6b3dd8fd25343eab27c37154946f232649ae21",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v6"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v3"],
+ "cache": plugins["cache"]["v2"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.6.7": {
+ nextVersion: "1.6.9",
+ priorVersion: "1.6.6",
+ k8sReleases: []string{"1.18"},
+ dockerImageSHA: "2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800",
+ defaultConf: `.:53 {
+ errors
+ health {
+ lameduck 5s
+ }
+ ready
+ kubernetes * *** {
+ pods insecure
+ fallthrough in-addr.arpa ip6.arpa
+ ttl 30
+ }
+ prometheus :9153
+ forward . *
+ cache 30
+ loop
+ reload
+ loadbalance
+}`,
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v6"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v2"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.6.6": {
+ nextVersion: "1.6.7",
+ priorVersion: "1.6.5",
+ dockerImageSHA: "41bee6992c2ed0f4628fcef75751048927bcd6b1cee89c79f6acb63ca5474d5a",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v6"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v2"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.6.5": {
+ nextVersion: "1.6.6",
+ priorVersion: "1.6.4",
+ k8sReleases: []string{"1.17"},
+ dockerImageSHA: "7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7",
+ defaultConf: `.:53 {
+ errors
+ health {
+ lameduck 5s
+ }
+ ready
+ kubernetes * *** {
+ pods insecure
+ fallthrough in-addr.arpa ip6.arpa
+ ttl 30
+ }
+ prometheus :9153
+ forward . *
+ cache 30
+ loop
+ reload
+ loadbalance
+}`,
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1 add lameduck"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v6"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.6.4": {
+ nextVersion: "1.6.5",
+ priorVersion: "1.6.3",
+ dockerImageSHA: "493ee88e1a92abebac67cbd4b5658b4730e0f33512461442d8d9214ea6734a9b",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v6"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.6.3": {
+ nextVersion: "1.6.4",
+ priorVersion: "1.6.2",
+ dockerImageSHA: "cfa7236dab4e3860881fdf755880ff8361e42f6cba2e3775ae48e2d46d22f7ba",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v6"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.6.2": {
+ nextVersion: "1.6.3",
+ priorVersion: "1.6.1",
+ k8sReleases: []string{"1.16"},
+ dockerImageSHA: "12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5",
+ defaultConf: `.:53 {
+ errors
+ health
+ ready
+ kubernetes * *** {
+ pods insecure
+ fallthrough in-addr.arpa ip6.arpa
+ ttl 30
+ }
+ prometheus :9153
+ forward . *
+ cache 30
+ loop
+ reload
+ loadbalance
+}`,
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v6"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.6.1": {
+ nextVersion: "1.6.2",
+ priorVersion: "1.6.0",
+ dockerImageSHA: "9ae3b6fcac4ee821362277de6bd8fd2236fa7d3e19af2ef0406d80b595620a7a",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v6"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.6.0": {
+ nextVersion: "1.6.1",
+ priorVersion: "1.5.2",
+ dockerImageSHA: "263d03f2b889a75a0b91e035c2a14d45d7c1559c53444c5f7abf3a76014b779d",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v6"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.5.2": {
+ nextVersion: "1.6.0",
+ priorVersion: "1.5.1",
+ dockerImageSHA: "586d15ec14911ee680ac9c5af20ff24b9d1412fbbf0e05862ee1f5c37baa65b2",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v5"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.5.1": {
+ nextVersion: "1.5.2",
+ priorVersion: "1.5.0",
+ dockerImageSHA: "451817637035535ae1fc8639753b453fa4b781d0dea557d5da5cb3c131e62ef5",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {},
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v5"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.5.0": {
+ nextVersion: "1.5.1",
+ priorVersion: "1.4.0",
+ dockerImageSHA: "e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "ready": {
+ status: SevNewDefault,
+ add: func(c *corefile.Server) (*corefile.Server, error) {
+ return addToKubernetesServerBlocks(c, &corefile.Plugin{Name: "ready"})
+ },
+ downAction: removePlugin,
+ },
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v5"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "proxy": plugins["proxy"]["removal"],
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ postProcess: breakForwardStubDomainsIntoServerBlocks,
+ },
+ "1.4.0": {
+ nextVersion: "1.5.0",
+ priorVersion: "1.3.1",
+ dockerImageSHA: "70a92e9f6fc604f9b629ca331b6135287244a86612f550941193ec7e12759417",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v4"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "proxy": plugins["proxy"]["deprecation"],
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ postProcess: breakForwardStubDomainsIntoServerBlocks,
+ },
+ "1.3.1": {
+ nextVersion: "1.4.0",
+ priorVersion: "1.3.0",
+ k8sReleases: []string{"1.15", "1.14"},
+ dockerImageSHA: "02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4",
+ defaultConf: `.:53 {
+ errors
+ health
+ kubernetes * *** {
+ pods insecure
+ upstream
+ fallthrough in-addr.arpa ip6.arpa
+ ttl 30
+ }
+ prometheus :9153
+ forward . *
+ cache 30
+ loop
+ reload
+ loadbalance
+}`,
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v3"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "proxy": plugins["proxy"]["v2"],
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.3.0": {
+ nextVersion: "1.3.1",
+ priorVersion: "1.2.6",
+ dockerImageSHA: "e030773c7fee285435ed7fc7623532ee54c4c1c4911fb24d95cd0170a8a768bc",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v2"],
+ "k8s_external": plugins["k8s_external"]["v1"],
+ "prometheus": {},
+ "proxy": plugins["proxy"]["v2"],
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.2.6": {
+ nextVersion: "1.3.0",
+ priorVersion: "1.2.5",
+ k8sReleases: []string{"1.13"},
+ dockerImageSHA: "81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51",
+ defaultConf: `.:53 {
+ errors
+ health
+ kubernetes * *** {
+ pods insecure
+ upstream
+ fallthrough in-addr.arpa ip6.arpa
+ }
+ prometheus :9153
+ proxy . *
+ cache 30
+ loop
+ reload
+ loadbalance
+}`,
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v2"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v2"],
+ "prometheus": {},
+ "proxy": plugins["proxy"]["v2"],
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.2.5": {
+ nextVersion: "1.2.6",
+ priorVersion: "1.2.4",
+ dockerImageSHA: "33c8da20b887ae12433ec5c40bfddefbbfa233d5ce11fb067122e68af30291d6",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v1"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v2"],
+ "prometheus": {},
+ "proxy": plugins["proxy"]["v2"],
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.2.4": {
+ nextVersion: "1.2.5",
+ priorVersion: "1.2.3",
+ dockerImageSHA: "a0d40ad961a714c699ee7b61b77441d165f6252f9fb84ac625d04a8d8554c0ec",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v1"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v2"],
+ "prometheus": {},
+ "proxy": plugins["proxy"]["v2"],
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.2.3": {
+ nextVersion: "1.2.4",
+ priorVersion: "1.2.2",
+ dockerImageSHA: "12f3cab301c826978fac736fd40aca21ac023102fd7f4aa6b4341ae9ba89e90e",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v1"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v2"],
+ "prometheus": {},
+ "proxy": plugins["proxy"]["v2"],
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v2"],
+ },
+ },
+ "1.2.2": {
+ nextVersion: "1.2.3",
+ priorVersion: "1.2.1",
+ k8sReleases: []string{"1.12"},
+ dockerImageSHA: "3e2be1cec87aca0b74b7668bbe8c02964a95a402e45ceb51b2252629d608d03a",
+ defaultConf: `.:53 {
+ errors
+ health
+ kubernetes * *** {
+ pods insecure
+ upstream
+ fallthrough in-addr.arpa ip6.arpa
+ }
+ prometheus :9153
+ proxy . *
+ cache 30
+ loop
+ reload
+ loadbalance
+}`,
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v1"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v1"],
+ "prometheus": {},
+ "proxy": plugins["proxy"]["v2"],
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {},
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v1"],
+ },
+ },
+ "1.2.1": {
+ nextVersion: "1.2.2",
+ priorVersion: "1.2.0",
+ dockerImageSHA: "fb129c6a7c8912bc6d9cc4505e1f9007c5565ceb1aa6369750e60cc79771a244",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v1"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v1"],
+ "prometheus": {},
+ "proxy": plugins["proxy"]["v2"],
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "loop": {
+ status: SevNewDefault,
+ add: func(s *corefile.Server) (*corefile.Server, error) {
+ return addToForwardingServerBlocks(s, &corefile.Plugin{Name: "loop"})
+ },
+ downAction: removePlugin,
+ },
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v1"],
+ },
+ },
+ "1.2.0": {
+ nextVersion: "1.2.1",
+ priorVersion: "1.1.4",
+ dockerImageSHA: "ae69a32f8cc29a3e2af9628b6473f24d3e977950a2cb62ce8911478a61215471",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v1"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v1"],
+ "prometheus": {},
+ "proxy": plugins["proxy"]["v2"],
+ "forward": plugins["forward"]["v2"],
+ "cache": plugins["cache"]["v1"],
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v1"],
+ },
+ },
+ "1.1.4": {
+ nextVersion: "1.2.0",
+ priorVersion: "1.1.3",
+ dockerImageSHA: "463c7021141dd3bfd4a75812f4b735ef6aadc0253a128f15ffe16422abe56e50",
+ plugins: map[string]plugin{
+ "errors": plugins["errors"]["v1"],
+ "log": plugins["log"]["v1"],
+ "health": plugins["health"]["v1"],
+ "autopath": {},
+ "kubernetes": plugins["kubernetes"]["v1"],
+ "prometheus": {},
+ "proxy": plugins["proxy"]["v1"],
+ "forward": plugins["forward"]["v1"],
+ "cache": plugins["cache"]["v1"],
+ "reload": {},
+ "loadbalance": {},
+ "hosts": plugins["hosts"]["v1"],
+ "rewrite": plugins["rewrite"]["v1"],
+ },
+ },
+ "1.1.3": {
+ nextVersion: "1.1.4",
+ k8sReleases: []string{"1.11"},
+ dockerImageSHA: "a5dd18e048983c7401e15648b55c3ef950601a86dd22370ef5dfc3e72a108aaa",
+ defaultConf: `.:53 {
+ errors
+ health
+ kubernetes * *** {
+ pods insecure
+ upstream
+ fallthrough in-addr.arpa ip6.arpa
+ }
+ prometheus :9153
+ proxy . *
+ cache 30
+ reload
+}`},
+}
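The table above links each release to its `nextVersion`/`priorVersion`, forming a chain that a step-wise migration can walk. A minimal stdlib-only sketch of that walk, using a hypothetical `release` stand-in for the table entries (field names mirror the table; the type and function names are illustrative, not part of the vendored code):

```go
package main

import "fmt"

// release is a minimal stand-in for the entries in the versions table.
type release struct {
	nextVersion  string
	priorVersion string
}

// upgradePath follows nextVersion links from a starting release until
// the chain leaves the table, mirroring a step-wise upgrade walk.
func upgradePath(table map[string]release, from string) []string {
	path := []string{from}
	for v, ok := table[from]; ok && v.nextVersion != ""; {
		path = append(path, v.nextVersion)
		v, ok = table[v.nextVersion]
	}
	return path
}

func main() {
	table := map[string]release{
		"1.6.7": {nextVersion: "1.6.9", priorVersion: "1.6.6"},
		"1.6.9": {nextVersion: "1.7.0", priorVersion: "1.6.7"},
		"1.7.0": {nextVersion: "1.7.1", priorVersion: "1.6.9"},
		"1.7.1": {nextVersion: "1.8.0", priorVersion: "1.7.0"},
	}
	fmt.Println(upgradePath(table, "1.6.7"))
}
```

Note that the chain can skip versions (e.g. `1.8.0` points at `1.8.3`), so the walk must follow the links rather than assume consecutive version numbers.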
diff --git a/vendor/github.com/docker/distribution/LICENSE b/vendor/github.com/docker/distribution/LICENSE
new file mode 100644
index 000000000..e06d20818
--- /dev/null
+++ b/vendor/github.com/docker/distribution/LICENSE
@@ -0,0 +1,202 @@
+Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
diff --git a/vendor/github.com/docker/distribution/digestset/set.go b/vendor/github.com/docker/distribution/digestset/set.go
new file mode 100644
index 000000000..71327dca7
--- /dev/null
+++ b/vendor/github.com/docker/distribution/digestset/set.go
@@ -0,0 +1,247 @@
+package digestset
+
+import (
+ "errors"
+ "sort"
+ "strings"
+ "sync"
+
+ digest "github.com/opencontainers/go-digest"
+)
+
+var (
+ // ErrDigestNotFound is used when a matching digest
+ // could not be found in a set.
+ ErrDigestNotFound = errors.New("digest not found")
+
+ // ErrDigestAmbiguous is used when multiple digests
+ // are found in a set. None of the matching digests
+ // should be considered valid matches.
+ ErrDigestAmbiguous = errors.New("ambiguous digest string")
+)
+
+// Set is used to hold a unique set of digests which
+// may be easily referenced by a string representation
+// of the digest as well as a short representation.
+// The uniqueness of the short representation is based on other
+// digests in the set. If digests are omitted from this set,
+// collisions in a larger set may not be detected, therefore it
+// is important to always do short representation lookups on
+// the complete set of digests. To mitigate collisions, an
+// appropriately long short code should be used.
+type Set struct {
+ mutex sync.RWMutex
+ entries digestEntries
+}
+
+// NewSet creates an empty set of digests
+// which may have digests added.
+func NewSet() *Set {
+ return &Set{
+ entries: digestEntries{},
+ }
+}
+
+// checkShortMatch checks whether two digests match as either whole
+// values or short values. This function does not test equality,
+// rather whether the second value could match against the first
+// value.
+func checkShortMatch(alg digest.Algorithm, hex, shortAlg, shortHex string) bool {
+ if len(hex) == len(shortHex) {
+ if hex != shortHex {
+ return false
+ }
+ if len(shortAlg) > 0 && string(alg) != shortAlg {
+ return false
+ }
+ } else if !strings.HasPrefix(hex, shortHex) {
+ return false
+ } else if len(shortAlg) > 0 && string(alg) != shortAlg {
+ return false
+ }
+ return true
+}
+
+// Lookup looks for a digest matching the given string representation.
+// If no digests could be found ErrDigestNotFound will be returned
+// with an empty digest value. If multiple matches are found
+// ErrDigestAmbiguous will be returned with an empty digest value.
+func (dst *Set) Lookup(d string) (digest.Digest, error) {
+ dst.mutex.RLock()
+ defer dst.mutex.RUnlock()
+ if len(dst.entries) == 0 {
+ return "", ErrDigestNotFound
+ }
+ var (
+ searchFunc func(int) bool
+ alg digest.Algorithm
+ hex string
+ )
+ dgst, err := digest.Parse(d)
+ if err == digest.ErrDigestInvalidFormat {
+ hex = d
+ searchFunc = func(i int) bool {
+ return dst.entries[i].val >= d
+ }
+ } else {
+ hex = dgst.Hex()
+ alg = dgst.Algorithm()
+ searchFunc = func(i int) bool {
+ if dst.entries[i].val == hex {
+ return dst.entries[i].alg >= alg
+ }
+ return dst.entries[i].val >= hex
+ }
+ }
+ idx := sort.Search(len(dst.entries), searchFunc)
+ if idx == len(dst.entries) || !checkShortMatch(dst.entries[idx].alg, dst.entries[idx].val, string(alg), hex) {
+ return "", ErrDigestNotFound
+ }
+ if dst.entries[idx].alg == alg && dst.entries[idx].val == hex {
+ return dst.entries[idx].digest, nil
+ }
+ if idx+1 < len(dst.entries) && checkShortMatch(dst.entries[idx+1].alg, dst.entries[idx+1].val, string(alg), hex) {
+ return "", ErrDigestAmbiguous
+ }
+
+ return dst.entries[idx].digest, nil
+}
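`Lookup` relies on the entries being kept sorted: `sort.Search` finds the first entry that is greater than or equal to the short string, which is the only possible prefix match, and a second prefix match at the next index signals ambiguity. A stdlib-only sketch of that pattern over plain hex strings (names and sample digests are illustrative, not from the vendored package):

```go
package main

import (
	"errors"
	"fmt"
	"sort"
	"strings"
)

var (
	errNotFound  = errors.New("digest not found")
	errAmbiguous = errors.New("ambiguous digest string")
)

// lookupShort resolves a short hex prefix against a sorted slice of
// full hex strings: the first entry >= the prefix is the only
// candidate, and a prefix match at idx+1 means the short form is
// ambiguous.
func lookupShort(sorted []string, short string) (string, error) {
	idx := sort.Search(len(sorted), func(i int) bool { return sorted[i] >= short })
	if idx == len(sorted) || !strings.HasPrefix(sorted[idx], short) {
		return "", errNotFound
	}
	if idx+1 < len(sorted) && strings.HasPrefix(sorted[idx+1], short) {
		return "", errAmbiguous
	}
	return sorted[idx], nil
}

func main() {
	hexes := []string{"1a7c33", "9f04d1", "9f8e22"} // must stay sorted
	full, err := lookupShort(hexes, "1a")
	fmt.Println(full, err) // unique prefix resolves to the full value
	_, err = lookupShort(hexes, "9f")
	fmt.Println(err) // two entries share "9f", so the lookup is ambiguous
}
```

This is why the package's doc comment stresses doing short-form lookups against the complete set: a prefix that is unique in a subset may collide in the full set.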
+
+// Add adds the given digest to the set. An error will be returned
+// if the given digest is invalid. If the digest already exists in the
+// set, this operation will be a no-op.
+func (dst *Set) Add(d digest.Digest) error {
+ if err := d.Validate(); err != nil {
+ return err
+ }
+ dst.mutex.Lock()
+ defer dst.mutex.Unlock()
+ entry := &digestEntry{alg: d.Algorithm(), val: d.Hex(), digest: d}
+ searchFunc := func(i int) bool {
+ if dst.entries[i].val == entry.val {
+ return dst.entries[i].alg >= entry.alg
+ }
+ return dst.entries[i].val >= entry.val
+ }
+ idx := sort.Search(len(dst.entries), searchFunc)
+ if idx == len(dst.entries) {
+ dst.entries = append(dst.entries, entry)
+ return nil
+ } else if dst.entries[idx].digest == d {
+ return nil
+ }
+
+ entries := append(dst.entries, nil)
+ copy(entries[idx+1:], entries[idx:len(entries)-1])
+ entries[idx] = entry
+ dst.entries = entries
+ return nil
+}
+
+// Remove removes the given digest from the set. An error will be
+// returned if the given digest is invalid. If the digest does
+// not exist in the set, this operation will be a no-op.
+func (dst *Set) Remove(d digest.Digest) error {
+ if err := d.Validate(); err != nil {
+ return err
+ }
+ dst.mutex.Lock()
+ defer dst.mutex.Unlock()
+ entry := &digestEntry{alg: d.Algorithm(), val: d.Hex(), digest: d}
+ searchFunc := func(i int) bool {
+ if dst.entries[i].val == entry.val {
+ return dst.entries[i].alg >= entry.alg
+ }
+ return dst.entries[i].val >= entry.val
+ }
+ idx := sort.Search(len(dst.entries), searchFunc)
+ // Not found if idx is past the end or the entry at idx is not the digest
+ if idx == len(dst.entries) || dst.entries[idx].digest != d {
+ return nil
+ }
+
+ entries := dst.entries
+ copy(entries[idx:], entries[idx+1:])
+ entries = entries[:len(entries)-1]
+ dst.entries = entries
+
+ return nil
+}
+
+// All returns all the digests in the set
+func (dst *Set) All() []digest.Digest {
+ dst.mutex.RLock()
+ defer dst.mutex.RUnlock()
+ retValues := make([]digest.Digest, len(dst.entries))
+ for i := range dst.entries {
+ retValues[i] = dst.entries[i].digest
+ }
+
+ return retValues
+}
+
+// ShortCodeTable returns a map of Digest to unique short codes. The
+// length parameter is the minimum length; the maximum length may be
+// the entire digest value if uniqueness cannot be achieved without
+// the full value. This function attempts to make short codes as
+// short as possible while remaining unique.
+func ShortCodeTable(dst *Set, length int) map[digest.Digest]string {
+ dst.mutex.RLock()
+ defer dst.mutex.RUnlock()
+ m := make(map[digest.Digest]string, len(dst.entries))
+ l := length
+ resetIdx := 0
+ for i := 0; i < len(dst.entries); i++ {
+ var short string
+ extended := true
+ for extended {
+ extended = false
+ if len(dst.entries[i].val) <= l {
+ short = dst.entries[i].digest.String()
+ } else {
+ short = dst.entries[i].val[:l]
+ for j := i + 1; j < len(dst.entries); j++ {
+ if checkShortMatch(dst.entries[j].alg, dst.entries[j].val, "", short) {
+ if j > resetIdx {
+ resetIdx = j
+ }
+ extended = true
+ } else {
+ break
+ }
+ }
+ if extended {
+ l++
+ }
+ }
+ }
+ m[dst.entries[i].digest] = short
+ if i >= resetIdx {
+ l = length
+ }
+ }
+ return m
+}
+
+type digestEntry struct {
+ alg digest.Algorithm
+ val string
+ digest digest.Digest
+}
+
+type digestEntries []*digestEntry
+
+func (d digestEntries) Len() int {
+ return len(d)
+}
+
+func (d digestEntries) Less(i, j int) bool {
+ if d[i].val != d[j].val {
+ return d[i].val < d[j].val
+ }
+ return d[i].alg < d[j].alg
+}
+
+func (d digestEntries) Swap(i, j int) {
+ d[i], d[j] = d[j], d[i]
+}
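The short-digest prefix rule that `Lookup` and `ShortCodeTable` rely on can be sketched outside the package. The snippet below reimplements that matching logic with only the standard library; the function name and sample hex string are illustrative, not part of the vendored API.

```go
package main

import (
	"fmt"
	"strings"
)

// shortMatch mirrors the rule in checkShortMatch: a short hex value
// matches a full one when it is equal to it or a prefix of it, and any
// supplied algorithm name must agree with the entry's algorithm.
func shortMatch(alg, hex, shortAlg, shortHex string) bool {
	if len(hex) == len(shortHex) && hex != shortHex {
		return false
	}
	if len(hex) != len(shortHex) && !strings.HasPrefix(hex, shortHex) {
		return false
	}
	if shortAlg != "" && alg != shortAlg {
		return false
	}
	return true
}

func main() {
	full := "6c3c624b58dbbcd3c0dd82b4c53f04194d1247c6eebdaab7c610cf7d66709b3b"
	fmt.Println(shortMatch("sha256", full, "", "6c3c62"))       // true: prefix matches, no algorithm constraint
	fmt.Println(shortMatch("sha256", full, "sha512", "6c3c62")) // false: algorithm differs
}
```

Because entries are kept sorted by hex value, a binary search for the short value followed by this prefix check on the found entry and its successor is enough to detect both "not found" and "ambiguous" cases.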
diff --git a/vendor/github.com/docker/distribution/reference/helpers.go b/vendor/github.com/docker/distribution/reference/helpers.go
new file mode 100644
index 000000000..978df7eab
--- /dev/null
+++ b/vendor/github.com/docker/distribution/reference/helpers.go
@@ -0,0 +1,42 @@
+package reference
+
+import "path"
+
+// IsNameOnly returns true if reference only contains a repo name.
+func IsNameOnly(ref Named) bool {
+ if _, ok := ref.(NamedTagged); ok {
+ return false
+ }
+ if _, ok := ref.(Canonical); ok {
+ return false
+ }
+ return true
+}
+
+// FamiliarName returns the familiar name string
+// for the given named reference, familiarizing if needed.
+func FamiliarName(ref Named) string {
+ if nn, ok := ref.(normalizedNamed); ok {
+ return nn.Familiar().Name()
+ }
+ return ref.Name()
+}
+
+// FamiliarString returns the familiar string representation
+// for the given reference, familiarizing if needed.
+func FamiliarString(ref Reference) string {
+ if nn, ok := ref.(normalizedNamed); ok {
+ return nn.Familiar().String()
+ }
+ return ref.String()
+}
+
+// FamiliarMatch reports whether ref matches the specified pattern.
+// See https://godoc.org/path#Match for supported patterns.
+func FamiliarMatch(pattern string, ref Reference) (bool, error) {
+ matched, err := path.Match(pattern, FamiliarString(ref))
+ if namedRef, isNamed := ref.(Named); isNamed && !matched {
+ matched, _ = path.Match(pattern, FamiliarName(namedRef))
+ }
+ return matched, err
+}
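`FamiliarMatch` delegates to `path.Match`, whose `*` wildcard does not cross `/` boundaries. The self-contained sketch below (the helper name is illustrative) shows the consequence: matching a multi-component repository name requires one pattern component per path component.

```go
package main

import (
	"fmt"
	"path"
)

// familiarGlob applies path.Match semantics to a repository name,
// ignoring the error (which only occurs for malformed patterns).
func familiarGlob(pattern, name string) bool {
	ok, _ := path.Match(pattern, name)
	return ok
}

func main() {
	fmt.Println(familiarGlob("library/*", "library/ubuntu")) // true
	fmt.Println(familiarGlob("*", "library/ubuntu"))         // false: '*' stops at '/'
}
```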
diff --git a/vendor/github.com/docker/distribution/reference/normalize.go b/vendor/github.com/docker/distribution/reference/normalize.go
new file mode 100644
index 000000000..2d71fc5e9
--- /dev/null
+++ b/vendor/github.com/docker/distribution/reference/normalize.go
@@ -0,0 +1,170 @@
+package reference
+
+import (
+ "errors"
+ "fmt"
+ "strings"
+
+ "github.com/docker/distribution/digestset"
+ "github.com/opencontainers/go-digest"
+)
+
+var (
+ legacyDefaultDomain = "index.docker.io"
+ defaultDomain = "docker.io"
+ officialRepoName = "library"
+ defaultTag = "latest"
+)
+
+// normalizedNamed represents a name which has been
+// normalized and has a familiar form. A familiar name
+// is what is used in Docker UI. An example normalized
+// name is "docker.io/library/ubuntu" and corresponding
+// familiar name of "ubuntu".
+type normalizedNamed interface {
+ Named
+ Familiar() Named
+}
+
+// ParseNormalizedNamed parses a string into a named reference
+// transforming a familiar name from Docker UI to a fully
+// qualified reference. If the value may be an identifier,
+// use ParseAnyReference.
+func ParseNormalizedNamed(s string) (Named, error) {
+ if ok := anchoredIdentifierRegexp.MatchString(s); ok {
+ return nil, fmt.Errorf("invalid repository name (%s), cannot specify 64-byte hexadecimal strings", s)
+ }
+ domain, remainder := splitDockerDomain(s)
+ var remoteName string
+ if tagSep := strings.IndexRune(remainder, ':'); tagSep > -1 {
+ remoteName = remainder[:tagSep]
+ } else {
+ remoteName = remainder
+ }
+ if strings.ToLower(remoteName) != remoteName {
+ return nil, errors.New("invalid reference format: repository name must be lowercase")
+ }
+
+ ref, err := Parse(domain + "/" + remainder)
+ if err != nil {
+ return nil, err
+ }
+ named, isNamed := ref.(Named)
+ if !isNamed {
+ return nil, fmt.Errorf("reference %s has no name", ref.String())
+ }
+ return named, nil
+}
+
+// splitDockerDomain splits a repository name into domain and remote-name
+// strings. If no valid domain is found, the default domain is used. The
+// repository name must already be validated before calling this function.
+func splitDockerDomain(name string) (domain, remainder string) {
+ i := strings.IndexRune(name, '/')
+ if i == -1 || (!strings.ContainsAny(name[:i], ".:") && name[:i] != "localhost") {
+ domain, remainder = defaultDomain, name
+ } else {
+ domain, remainder = name[:i], name[i+1:]
+ }
+ if domain == legacyDefaultDomain {
+ domain = defaultDomain
+ }
+ if domain == defaultDomain && !strings.ContainsRune(remainder, '/') {
+ remainder = officialRepoName + "/" + remainder
+ }
+ return
+}
+
+// familiarizeName returns a shortened version of the name familiar
+// to the Docker UI. Familiar names have the default domain
+// "docker.io" and "library/" repository prefix removed.
+// For example, "docker.io/library/redis" will have the familiar
+// name "redis" and "docker.io/dmcgowan/myapp" will be "dmcgowan/myapp".
+// Returns a familiarized named only reference.
+func familiarizeName(named namedRepository) repository {
+ repo := repository{
+ domain: named.Domain(),
+ path: named.Path(),
+ }
+
+ if repo.domain == defaultDomain {
+ repo.domain = ""
+ // Handle official repositories which have the pattern "library/"
+ if split := strings.Split(repo.path, "/"); len(split) == 2 && split[0] == officialRepoName {
+ repo.path = split[1]
+ }
+ }
+ return repo
+}
+
+func (r reference) Familiar() Named {
+ return reference{
+ namedRepository: familiarizeName(r.namedRepository),
+ tag: r.tag,
+ digest: r.digest,
+ }
+}
+
+func (r repository) Familiar() Named {
+ return familiarizeName(r)
+}
+
+func (t taggedReference) Familiar() Named {
+ return taggedReference{
+ namedRepository: familiarizeName(t.namedRepository),
+ tag: t.tag,
+ }
+}
+
+func (c canonicalReference) Familiar() Named {
+ return canonicalReference{
+ namedRepository: familiarizeName(c.namedRepository),
+ digest: c.digest,
+ }
+}
+
+// TagNameOnly adds the default tag "latest" to a reference if it only has
+// a repo name.
+func TagNameOnly(ref Named) Named {
+ if IsNameOnly(ref) {
+ namedTagged, err := WithTag(ref, defaultTag)
+ if err != nil {
+ // Default tag must be valid, to create a NamedTagged
+ // type with non-validated input the WithTag function
+ // should be used instead
+ panic(err)
+ }
+ return namedTagged
+ }
+ return ref
+}
+
+// ParseAnyReference parses a reference string as a possible identifier,
+// full digest, or familiar name.
+func ParseAnyReference(ref string) (Reference, error) {
+ if ok := anchoredIdentifierRegexp.MatchString(ref); ok {
+ return digestReference("sha256:" + ref), nil
+ }
+ if dgst, err := digest.Parse(ref); err == nil {
+ return digestReference(dgst), nil
+ }
+
+ return ParseNormalizedNamed(ref)
+}
+
+// ParseAnyReferenceWithSet parses a reference string as a possible short
+// identifier to be matched in a digest set, a full digest, or familiar name.
+func ParseAnyReferenceWithSet(ref string, ds *digestset.Set) (Reference, error) {
+ if ok := anchoredShortIdentifierRegexp.MatchString(ref); ok {
+ dgst, err := ds.Lookup(ref)
+ if err == nil {
+ return digestReference(dgst), nil
+ }
+ } else {
+ if dgst, err := digest.Parse(ref); err == nil {
+ return digestReference(dgst), nil
+ }
+ }
+
+ return ParseNormalizedNamed(ref)
+}
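The domain-splitting rules that `ParseNormalizedNamed` applies via `splitDockerDomain` can be illustrated standalone: a leading component is only treated as a registry domain when it contains `.` or `:` or equals `localhost`; otherwise the default domain and `library/` prefix are filled in. This is a simplified sketch (it omits the legacy `index.docker.io` mapping and all validation), not the vendored implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeName sketches the familiar-to-fully-qualified name expansion.
func normalizeName(name string) string {
	domain, remainder := "docker.io", name
	if i := strings.IndexRune(name, '/'); i != -1 &&
		(strings.ContainsAny(name[:i], ".:") || name[:i] == "localhost") {
		domain, remainder = name[:i], name[i+1:]
	}
	// Official images on the default domain live under "library/".
	if domain == "docker.io" && !strings.ContainsRune(remainder, '/') {
		remainder = "library/" + remainder
	}
	return domain + "/" + remainder
}

func main() {
	fmt.Println(normalizeName("ubuntu"))                 // docker.io/library/ubuntu
	fmt.Println(normalizeName("dmcgowan/myapp"))         // docker.io/dmcgowan/myapp
	fmt.Println(normalizeName("myregistry.io:5000/app")) // myregistry.io:5000/app
}
```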
diff --git a/vendor/github.com/docker/distribution/reference/reference.go b/vendor/github.com/docker/distribution/reference/reference.go
new file mode 100644
index 000000000..2f66cca87
--- /dev/null
+++ b/vendor/github.com/docker/distribution/reference/reference.go
@@ -0,0 +1,433 @@
+// Package reference provides a general type to represent any way of referencing images within the registry.
+// Its main purpose is to abstract tags and digests (content-addressable hash).
+//
+// Grammar
+//
+// reference := name [ ":" tag ] [ "@" digest ]
+// name := [domain '/'] path-component ['/' path-component]*
+// domain := domain-component ['.' domain-component]* [':' port-number]
+// domain-component := /([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])/
+// port-number := /[0-9]+/
+// path-component := alpha-numeric [separator alpha-numeric]*
+// alpha-numeric := /[a-z0-9]+/
+// separator := /[_.]|__|[-]*/
+//
+// tag := /[\w][\w.-]{0,127}/
+//
+// digest := digest-algorithm ":" digest-hex
+// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]*
+// digest-algorithm-separator := /[+.-_]/
+// digest-algorithm-component := /[A-Za-z][A-Za-z0-9]*/
+// digest-hex := /[0-9a-fA-F]{32,}/ ; At least 128 bit digest value
+//
+// identifier := /[a-f0-9]{64}/
+// short-identifier := /[a-f0-9]{6,64}/
+package reference
+
+import (
+ "errors"
+ "fmt"
+ "strings"
+
+ "github.com/opencontainers/go-digest"
+)
+
+const (
+ // NameTotalLengthMax is the maximum total number of characters in a repository name.
+ NameTotalLengthMax = 255
+)
+
+var (
+ // ErrReferenceInvalidFormat represents an error while trying to parse a string as a reference.
+ ErrReferenceInvalidFormat = errors.New("invalid reference format")
+
+ // ErrTagInvalidFormat represents an error while trying to parse a string as a tag.
+ ErrTagInvalidFormat = errors.New("invalid tag format")
+
+ // ErrDigestInvalidFormat represents an error while trying to parse a string as a digest.
+ ErrDigestInvalidFormat = errors.New("invalid digest format")
+
+ // ErrNameContainsUppercase is returned for invalid repository names that contain uppercase characters.
+ ErrNameContainsUppercase = errors.New("repository name must be lowercase")
+
+ // ErrNameEmpty is returned for empty, invalid repository names.
+ ErrNameEmpty = errors.New("repository name must have at least one component")
+
+ // ErrNameTooLong is returned when a repository name is longer than NameTotalLengthMax.
+ ErrNameTooLong = fmt.Errorf("repository name must not be more than %v characters", NameTotalLengthMax)
+
+ // ErrNameNotCanonical is returned when a name is not canonical.
+ ErrNameNotCanonical = errors.New("repository name must be canonical")
+)
+
+// Reference is an opaque object reference identifier that may include
+// modifiers such as a hostname, name, tag, and digest.
+type Reference interface {
+ // String returns the full reference
+ String() string
+}
+
+// Field provides a wrapper type for resolving correct reference types when
+// working with encoding.
+type Field struct {
+ reference Reference
+}
+
+// AsField wraps a reference in a Field for encoding.
+func AsField(reference Reference) Field {
+ return Field{reference}
+}
+
+// Reference unwraps the reference type from the field to
+// return the Reference object. This object should be
+// of the appropriate type to further check for different
+// reference types.
+func (f Field) Reference() Reference {
+ return f.reference
+}
+
+// MarshalText serializes the field to byte text which
+// is the string of the reference.
+func (f Field) MarshalText() (p []byte, err error) {
+ return []byte(f.reference.String()), nil
+}
+
+// UnmarshalText parses text bytes by invoking the
+// reference parser to ensure the appropriately
+// typed reference object is wrapped by field.
+func (f *Field) UnmarshalText(p []byte) error {
+ r, err := Parse(string(p))
+ if err != nil {
+ return err
+ }
+
+ f.reference = r
+ return nil
+}
+
+// Named is an object with a full name
+type Named interface {
+ Reference
+ Name() string
+}
+
+// Tagged is an object which has a tag
+type Tagged interface {
+ Reference
+ Tag() string
+}
+
+// NamedTagged is an object including a name and tag.
+type NamedTagged interface {
+ Named
+ Tag() string
+}
+
+// Digested is an object which has a digest
+// by which it can be referenced
+type Digested interface {
+ Reference
+ Digest() digest.Digest
+}
+
+// Canonical reference is an object with a fully unique
+// name, including a domain and digest
+type Canonical interface {
+ Named
+ Digest() digest.Digest
+}
+
+// namedRepository is a reference to a repository with a name.
+// A namedRepository has both domain and path components.
+type namedRepository interface {
+ Named
+ Domain() string
+ Path() string
+}
+
+// Domain returns the domain part of the Named reference
+func Domain(named Named) string {
+ if r, ok := named.(namedRepository); ok {
+ return r.Domain()
+ }
+ domain, _ := splitDomain(named.Name())
+ return domain
+}
+
+// Path returns the name without the domain part of the Named reference
+func Path(named Named) (name string) {
+ if r, ok := named.(namedRepository); ok {
+ return r.Path()
+ }
+ _, path := splitDomain(named.Name())
+ return path
+}
+
+func splitDomain(name string) (string, string) {
+ match := anchoredNameRegexp.FindStringSubmatch(name)
+ if len(match) != 3 {
+ return "", name
+ }
+ return match[1], match[2]
+}
+
+// SplitHostname splits a named reference into a
+// hostname and name string. If no valid hostname is
+// found, the hostname is empty and the full value
+// is returned as the name.
+//
+// Deprecated: Use Domain or Path instead.
+func SplitHostname(named Named) (string, string) {
+ if r, ok := named.(namedRepository); ok {
+ return r.Domain(), r.Path()
+ }
+ return splitDomain(named.Name())
+}
+
+// Parse parses s and returns a syntactically valid Reference.
+// If an error was encountered it is returned, along with a nil Reference.
+// NOTE: Parse will not handle short digests.
+func Parse(s string) (Reference, error) {
+ matches := ReferenceRegexp.FindStringSubmatch(s)
+ if matches == nil {
+ if s == "" {
+ return nil, ErrNameEmpty
+ }
+ if ReferenceRegexp.FindStringSubmatch(strings.ToLower(s)) != nil {
+ return nil, ErrNameContainsUppercase
+ }
+ return nil, ErrReferenceInvalidFormat
+ }
+
+ if len(matches[1]) > NameTotalLengthMax {
+ return nil, ErrNameTooLong
+ }
+
+ var repo repository
+
+ nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])
+ if nameMatch != nil && len(nameMatch) == 3 {
+ repo.domain = nameMatch[1]
+ repo.path = nameMatch[2]
+ } else {
+ repo.domain = ""
+ repo.path = matches[1]
+ }
+
+ ref := reference{
+ namedRepository: repo,
+ tag: matches[2],
+ }
+ if matches[3] != "" {
+ var err error
+ ref.digest, err = digest.Parse(matches[3])
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ r := getBestReferenceType(ref)
+ if r == nil {
+ return nil, ErrNameEmpty
+ }
+
+ return r, nil
+}
+
+// ParseNamed parses s and returns a syntactically valid reference implementing
+// the Named interface. The reference must have a name and be in the canonical
+// form, otherwise an error is returned.
+// If an error was encountered it is returned, along with a nil Reference.
+// NOTE: ParseNamed will not handle short digests.
+func ParseNamed(s string) (Named, error) {
+ named, err := ParseNormalizedNamed(s)
+ if err != nil {
+ return nil, err
+ }
+ if named.String() != s {
+ return nil, ErrNameNotCanonical
+ }
+ return named, nil
+}
+
+// WithName returns a named object representing the given string. If the input
+// is invalid ErrReferenceInvalidFormat will be returned.
+func WithName(name string) (Named, error) {
+ if len(name) > NameTotalLengthMax {
+ return nil, ErrNameTooLong
+ }
+
+ match := anchoredNameRegexp.FindStringSubmatch(name)
+ if match == nil || len(match) != 3 {
+ return nil, ErrReferenceInvalidFormat
+ }
+ return repository{
+ domain: match[1],
+ path: match[2],
+ }, nil
+}
+
+// WithTag combines the name from "name" and the tag from "tag" to form a
+// reference incorporating both the name and the tag.
+func WithTag(name Named, tag string) (NamedTagged, error) {
+ if !anchoredTagRegexp.MatchString(tag) {
+ return nil, ErrTagInvalidFormat
+ }
+ var repo repository
+ if r, ok := name.(namedRepository); ok {
+ repo.domain = r.Domain()
+ repo.path = r.Path()
+ } else {
+ repo.path = name.Name()
+ }
+ if canonical, ok := name.(Canonical); ok {
+ return reference{
+ namedRepository: repo,
+ tag: tag,
+ digest: canonical.Digest(),
+ }, nil
+ }
+ return taggedReference{
+ namedRepository: repo,
+ tag: tag,
+ }, nil
+}
+
+// WithDigest combines the name from "name" and the digest from "digest" to form
+// a reference incorporating both the name and the digest.
+func WithDigest(name Named, digest digest.Digest) (Canonical, error) {
+ if !anchoredDigestRegexp.MatchString(digest.String()) {
+ return nil, ErrDigestInvalidFormat
+ }
+ var repo repository
+ if r, ok := name.(namedRepository); ok {
+ repo.domain = r.Domain()
+ repo.path = r.Path()
+ } else {
+ repo.path = name.Name()
+ }
+ if tagged, ok := name.(Tagged); ok {
+ return reference{
+ namedRepository: repo,
+ tag: tagged.Tag(),
+ digest: digest,
+ }, nil
+ }
+ return canonicalReference{
+ namedRepository: repo,
+ digest: digest,
+ }, nil
+}
+
+// TrimNamed removes any tag or digest from the named reference.
+func TrimNamed(ref Named) Named {
+ domain, path := SplitHostname(ref)
+ return repository{
+ domain: domain,
+ path: path,
+ }
+}
+
+func getBestReferenceType(ref reference) Reference {
+ if ref.Name() == "" {
+ // Allow digest only references
+ if ref.digest != "" {
+ return digestReference(ref.digest)
+ }
+ return nil
+ }
+ if ref.tag == "" {
+ if ref.digest != "" {
+ return canonicalReference{
+ namedRepository: ref.namedRepository,
+ digest: ref.digest,
+ }
+ }
+ return ref.namedRepository
+ }
+ if ref.digest == "" {
+ return taggedReference{
+ namedRepository: ref.namedRepository,
+ tag: ref.tag,
+ }
+ }
+
+ return ref
+}
+
+type reference struct {
+ namedRepository
+ tag string
+ digest digest.Digest
+}
+
+func (r reference) String() string {
+ return r.Name() + ":" + r.tag + "@" + r.digest.String()
+}
+
+func (r reference) Tag() string {
+ return r.tag
+}
+
+func (r reference) Digest() digest.Digest {
+ return r.digest
+}
+
+type repository struct {
+ domain string
+ path string
+}
+
+func (r repository) String() string {
+ return r.Name()
+}
+
+func (r repository) Name() string {
+ if r.domain == "" {
+ return r.path
+ }
+ return r.domain + "/" + r.path
+}
+
+func (r repository) Domain() string {
+ return r.domain
+}
+
+func (r repository) Path() string {
+ return r.path
+}
+
+type digestReference digest.Digest
+
+func (d digestReference) String() string {
+ return digest.Digest(d).String()
+}
+
+func (d digestReference) Digest() digest.Digest {
+ return digest.Digest(d)
+}
+
+type taggedReference struct {
+ namedRepository
+ tag string
+}
+
+func (t taggedReference) String() string {
+ return t.Name() + ":" + t.tag
+}
+
+func (t taggedReference) Tag() string {
+ return t.tag
+}
+
+type canonicalReference struct {
+ namedRepository
+ digest digest.Digest
+}
+
+func (c canonicalReference) String() string {
+ return c.Name() + "@" + c.digest.String()
+}
+
+func (c canonicalReference) Digest() digest.Digest {
+ return c.digest
+}
diff --git a/vendor/github.com/docker/distribution/reference/regexp.go b/vendor/github.com/docker/distribution/reference/regexp.go
new file mode 100644
index 000000000..786034932
--- /dev/null
+++ b/vendor/github.com/docker/distribution/reference/regexp.go
@@ -0,0 +1,143 @@
+package reference
+
+import "regexp"
+
+var (
+ // alphaNumericRegexp defines the alpha numeric atom, typically a
+ // component of names. This only allows lower case characters and digits.
+ alphaNumericRegexp = match(`[a-z0-9]+`)
+
+ // separatorRegexp defines the separators allowed to be embedded in name
+ // components. This allows one period, one or two underscores,
+ // and multiple dashes.
+ separatorRegexp = match(`(?:[._]|__|[-]*)`)
+
+ // nameComponentRegexp restricts registry path component names to start
+ // with at least one letter or number, with following parts able to be
+ // separated by one period, one or two underscores, and multiple dashes.
+ nameComponentRegexp = expression(
+ alphaNumericRegexp,
+ optional(repeated(separatorRegexp, alphaNumericRegexp)))
+
+ // domainComponentRegexp restricts the registry domain component of a
+ // repository name to start with a component as defined by DomainRegexp
+ // and followed by an optional port.
+ domainComponentRegexp = match(`(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])`)
+
+ // DomainRegexp defines the structure of potential domain components
+ // that may be part of image names. This is purposely a subset of what is
+ // allowed by DNS to ensure backwards compatibility with Docker image
+ // names.
+ DomainRegexp = expression(
+ domainComponentRegexp,
+ optional(repeated(literal(`.`), domainComponentRegexp)),
+ optional(literal(`:`), match(`[0-9]+`)))
+
+ // TagRegexp matches valid tag names. From docker/docker:graph/tags.go.
+ TagRegexp = match(`[\w][\w.-]{0,127}`)
+
+ // anchoredTagRegexp matches valid tag names, anchored at the start and
+ // end of the matched string.
+ anchoredTagRegexp = anchored(TagRegexp)
+
+ // DigestRegexp matches valid digests.
+ DigestRegexp = match(`[A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][[:xdigit:]]{32,}`)
+
+ // anchoredDigestRegexp matches valid digests, anchored at the start and
+ // end of the matched string.
+ anchoredDigestRegexp = anchored(DigestRegexp)
+
+ // NameRegexp is the format for the name component of references. The
+ // regexp has capturing groups for the domain and name part omitting
+ // the separating forward slash from either.
+ NameRegexp = expression(
+ optional(DomainRegexp, literal(`/`)),
+ nameComponentRegexp,
+ optional(repeated(literal(`/`), nameComponentRegexp)))
+
+ // anchoredNameRegexp is used to parse a name value, capturing the
+ // domain and trailing components.
+ anchoredNameRegexp = anchored(
+ optional(capture(DomainRegexp), literal(`/`)),
+ capture(nameComponentRegexp,
+ optional(repeated(literal(`/`), nameComponentRegexp))))
+
+ // ReferenceRegexp is the full supported format of a reference. The regexp
+ // is anchored and has capturing groups for name, tag, and digest
+ // components.
+ ReferenceRegexp = anchored(capture(NameRegexp),
+ optional(literal(":"), capture(TagRegexp)),
+ optional(literal("@"), capture(DigestRegexp)))
+
+ // IdentifierRegexp is the format for a string identifier used as a
+ // content-addressable identifier using sha256. These identifiers
+ // are like digests without the algorithm, since sha256 is used.
+ IdentifierRegexp = match(`([a-f0-9]{64})`)
+
+ // ShortIdentifierRegexp is the format used to represent a prefix
+ // of an identifier. A prefix may be used to match a sha256 identifier
+ // within a list of trusted identifiers.
+ ShortIdentifierRegexp = match(`([a-f0-9]{6,64})`)
+
+ // anchoredIdentifierRegexp is used to check or match an
+ // identifier value, anchored at start and end of string.
+ anchoredIdentifierRegexp = anchored(IdentifierRegexp)
+
+ // anchoredShortIdentifierRegexp is used to check if a value
+ // is a possible identifier prefix, anchored at start and end
+ // of string.
+ anchoredShortIdentifierRegexp = anchored(ShortIdentifierRegexp)
+)
+
+// match compiles the string to a regular expression.
+var match = regexp.MustCompile
+
+// literal compiles s into a literal regular expression, escaping any regexp
+// reserved characters.
+func literal(s string) *regexp.Regexp {
+ re := match(regexp.QuoteMeta(s))
+
+ if _, complete := re.LiteralPrefix(); !complete {
+ panic("must be a literal")
+ }
+
+ return re
+}
+
+// expression defines a full expression, where each regular expression must
+// follow the previous.
+func expression(res ...*regexp.Regexp) *regexp.Regexp {
+ var s string
+ for _, re := range res {
+ s += re.String()
+ }
+
+ return match(s)
+}
+
+// optional wraps the expression in a non-capturing group and makes the
+// production optional.
+func optional(res ...*regexp.Regexp) *regexp.Regexp {
+ return match(group(expression(res...)).String() + `?`)
+}
+
+// repeated wraps the regexp in a non-capturing group to get one or more
+// matches.
+func repeated(res ...*regexp.Regexp) *regexp.Regexp {
+ return match(group(expression(res...)).String() + `+`)
+}
+
+// group wraps the regexp in a non-capturing group.
+func group(res ...*regexp.Regexp) *regexp.Regexp {
+ return match(`(?:` + expression(res...).String() + `)`)
+}
+
+// capture wraps the expression in a capturing group.
+func capture(res ...*regexp.Regexp) *regexp.Regexp {
+ return match(`(` + expression(res...).String() + `)`)
+}
+
+// anchored anchors the regular expression by adding start and end delimiters.
+func anchored(res ...*regexp.Regexp) *regexp.Regexp {
+ return match(`^` + expression(res...).String() + `$`)
+}
diff --git a/vendor/github.com/docker/distribution/registry/api/errcode/errors.go b/vendor/github.com/docker/distribution/registry/api/errcode/errors.go
new file mode 100644
index 000000000..6d9bb4b62
--- /dev/null
+++ b/vendor/github.com/docker/distribution/registry/api/errcode/errors.go
@@ -0,0 +1,267 @@
+package errcode
+
+import (
+ "encoding/json"
+ "fmt"
+ "strings"
+)
+
+// ErrorCoder is the base interface for ErrorCode and Error, allowing
+// users of either type to call ErrorCode to get the real ID of each
+type ErrorCoder interface {
+ ErrorCode() ErrorCode
+}
+
+// ErrorCode represents the error type. The errors are serialized via strings
+// and the integer format may change and should *never* be exported.
+type ErrorCode int
+
+var _ error = ErrorCode(0)
+
+// ErrorCode just returns itself
+func (ec ErrorCode) ErrorCode() ErrorCode {
+ return ec
+}
+
+// Error returns the ID/Value
+func (ec ErrorCode) Error() string {
+ // NOTE(stevvooe): Cannot use message here since it may have unpopulated args.
+ return strings.ToLower(strings.Replace(ec.String(), "_", " ", -1))
+}
+
+// Descriptor returns the descriptor for the error code.
+func (ec ErrorCode) Descriptor() ErrorDescriptor {
+ d, ok := errorCodeToDescriptors[ec]
+
+ if !ok {
+ return ErrorCodeUnknown.Descriptor()
+ }
+
+ return d
+}
+
+// String returns the canonical identifier for this error code.
+func (ec ErrorCode) String() string {
+ return ec.Descriptor().Value
+}
+
+// Message returns the human-readable error message for this error code.
+func (ec ErrorCode) Message() string {
+ return ec.Descriptor().Message
+}
+
+// MarshalText encodes the receiver into UTF-8-encoded text and returns the
+// result.
+func (ec ErrorCode) MarshalText() (text []byte, err error) {
+ return []byte(ec.String()), nil
+}
+
+// UnmarshalText decodes the form generated by MarshalText.
+func (ec *ErrorCode) UnmarshalText(text []byte) error {
+ desc, ok := idToDescriptors[string(text)]
+
+ if !ok {
+ desc = ErrorCodeUnknown.Descriptor()
+ }
+
+ *ec = desc.Code
+
+ return nil
+}
+
+// WithMessage creates a new Error struct based on the passed-in info and
+// overrides the Message property.
+func (ec ErrorCode) WithMessage(message string) Error {
+ return Error{
+ Code: ec,
+ Message: message,
+ }
+}
+
+// WithDetail creates a new Error struct based on the passed-in info and
+// sets the Detail property appropriately
+func (ec ErrorCode) WithDetail(detail interface{}) Error {
+ return Error{
+ Code: ec,
+ Message: ec.Message(),
+ }.WithDetail(detail)
+}
+
+// WithArgs creates a new Error struct and sets the Args slice
+func (ec ErrorCode) WithArgs(args ...interface{}) Error {
+ return Error{
+ Code: ec,
+ Message: ec.Message(),
+ }.WithArgs(args...)
+}
+
+// Error provides a wrapper around ErrorCode with extra Details provided.
+type Error struct {
+ Code ErrorCode `json:"code"`
+ Message string `json:"message"`
+ Detail interface{} `json:"detail,omitempty"`
+
+ // TODO(duglin): See if we need an "args" property so we can do the
+ // variable substitution right before showing the message to the user
+}
+
+var _ error = Error{}
+
+// ErrorCode returns the ID/Value of this Error
+func (e Error) ErrorCode() ErrorCode {
+ return e.Code
+}
+
+// Error returns a human readable representation of the error.
+func (e Error) Error() string {
+ return fmt.Sprintf("%s: %s", e.Code.Error(), e.Message)
+}
+
+// WithDetail will return a new Error, based on the current one, but with
+// some Detail info added
+func (e Error) WithDetail(detail interface{}) Error {
+ return Error{
+ Code: e.Code,
+ Message: e.Message,
+ Detail: detail,
+ }
+}
+
+// WithArgs uses the passed-in list of interface{} as the substitution
+// variables in the Error's Message string, but returns a new Error
+func (e Error) WithArgs(args ...interface{}) Error {
+ return Error{
+ Code: e.Code,
+ Message: fmt.Sprintf(e.Code.Message(), args...),
+ Detail: e.Detail,
+ }
+}
+
+// ErrorDescriptor provides relevant information about a given error code.
+type ErrorDescriptor struct {
+ // Code is the error code that this descriptor describes.
+ Code ErrorCode
+
+ // Value provides a unique, string key, often capitalized with
+ // underscores, to identify the error code. This value is used as the
+ // keyed value when serializing API errors.
+ Value string
+
+ // Message is a short, human-readable description of the error condition
+ // included in API responses.
+ Message string
+
+ // Description provides a complete account of the error's purpose, suitable
+ // for use in documentation.
+ Description string
+
+ // HTTPStatusCode provides the HTTP status code that is associated with
+ // this error condition.
+ HTTPStatusCode int
+}
+
+// ParseErrorCode returns the ErrorCode for the given string value.
+// `ErrorCodeUnknown` will be returned if the error is not known.
+func ParseErrorCode(value string) ErrorCode {
+ ed, ok := idToDescriptors[value]
+ if ok {
+ return ed.Code
+ }
+
+ return ErrorCodeUnknown
+}
+
+// Errors provides the envelope for multiple errors and a few sugar methods
+// for use within the application.
+type Errors []error
+
+var _ error = Errors{}
+
+func (errs Errors) Error() string {
+ switch len(errs) {
+ case 0:
+ return ""
+ case 1:
+ return errs[0].Error()
+ default:
+ msg := "errors:\n"
+ for _, err := range errs {
+ msg += err.Error() + "\n"
+ }
+ return msg
+ }
+}
+
+// Len returns the current number of errors.
+func (errs Errors) Len() int {
+ return len(errs)
+}
+
+// MarshalJSON converts a slice of error, ErrorCode or Error into a
+// slice of Error - then serializes it
+func (errs Errors) MarshalJSON() ([]byte, error) {
+ var tmpErrs struct {
+ Errors []Error `json:"errors,omitempty"`
+ }
+
+ for _, daErr := range errs {
+ var err Error
+
+ switch daErr.(type) {
+ case ErrorCode:
+ err = daErr.(ErrorCode).WithDetail(nil)
+ case Error:
+ err = daErr.(Error)
+ default:
+ err = ErrorCodeUnknown.WithDetail(daErr)
+
+ }
+
+ // If the Error struct was set up and they forgot to set the
+ // Message field (meaning it's "") then grab it from the ErrCode
+ msg := err.Message
+ if msg == "" {
+ msg = err.Code.Message()
+ }
+
+ tmpErrs.Errors = append(tmpErrs.Errors, Error{
+ Code: err.Code,
+ Message: msg,
+ Detail: err.Detail,
+ })
+ }
+
+ return json.Marshal(tmpErrs)
+}
+
+// UnmarshalJSON deserializes []Error and then converts it into a slice of
+// Error or ErrorCode
+func (errs *Errors) UnmarshalJSON(data []byte) error {
+ var tmpErrs struct {
+ Errors []Error
+ }
+
+ if err := json.Unmarshal(data, &tmpErrs); err != nil {
+ return err
+ }
+
+ var newErrs Errors
+ for _, daErr := range tmpErrs.Errors {
+ // If Message is empty or exactly matches the Code's message string
+ // then just use the Code, no need for a full Error struct
+ if daErr.Detail == nil && (daErr.Message == "" || daErr.Message == daErr.Code.Message()) {
+ // Errors w/o details get converted to ErrorCode
+ newErrs = append(newErrs, daErr.Code)
+ } else {
+ // Errors w/ details are untouched
+ newErrs = append(newErrs, Error{
+ Code: daErr.Code,
+ Message: daErr.Message,
+ Detail: daErr.Detail,
+ })
+ }
+ }
+
+ *errs = newErrs
+ return nil
+}
diff --git a/vendor/github.com/docker/distribution/registry/api/errcode/handler.go b/vendor/github.com/docker/distribution/registry/api/errcode/handler.go
new file mode 100644
index 000000000..d77e70473
--- /dev/null
+++ b/vendor/github.com/docker/distribution/registry/api/errcode/handler.go
@@ -0,0 +1,40 @@
+package errcode
+
+import (
+ "encoding/json"
+ "net/http"
+)
+
+// ServeJSON attempts to serve the errcode in a JSON envelope. It marshals err
+// and sets the content-type header to 'application/json'. It will handle
+// ErrorCoder and Errors, and if necessary will create an envelope.
+func ServeJSON(w http.ResponseWriter, err error) error {
+ w.Header().Set("Content-Type", "application/json; charset=utf-8")
+ var sc int
+
+ switch errs := err.(type) {
+ case Errors:
+ if len(errs) < 1 {
+ break
+ }
+
+ if err, ok := errs[0].(ErrorCoder); ok {
+ sc = err.ErrorCode().Descriptor().HTTPStatusCode
+ }
+ case ErrorCoder:
+ sc = errs.ErrorCode().Descriptor().HTTPStatusCode
+ err = Errors{err} // create an envelope.
+ default:
+ // We just have an unhandled error type, so just place in an envelope
+ // and move along.
+ err = Errors{err}
+ }
+
+ if sc == 0 {
+ sc = http.StatusInternalServerError
+ }
+
+ w.WriteHeader(sc)
+
+ return json.NewEncoder(w).Encode(err)
+}
diff --git a/vendor/github.com/docker/distribution/registry/api/errcode/register.go b/vendor/github.com/docker/distribution/registry/api/errcode/register.go
new file mode 100644
index 000000000..d1e8826c6
--- /dev/null
+++ b/vendor/github.com/docker/distribution/registry/api/errcode/register.go
@@ -0,0 +1,138 @@
+package errcode
+
+import (
+ "fmt"
+ "net/http"
+ "sort"
+ "sync"
+)
+
+var (
+ errorCodeToDescriptors = map[ErrorCode]ErrorDescriptor{}
+ idToDescriptors = map[string]ErrorDescriptor{}
+ groupToDescriptors = map[string][]ErrorDescriptor{}
+)
+
+var (
+ // ErrorCodeUnknown is a generic error that can be used as a last
+ // resort if there is no situation-specific error message that can be used
+ ErrorCodeUnknown = Register("errcode", ErrorDescriptor{
+ Value: "UNKNOWN",
+ Message: "unknown error",
+ Description: `Generic error returned when the error does not have an
+ API classification.`,
+ HTTPStatusCode: http.StatusInternalServerError,
+ })
+
+ // ErrorCodeUnsupported is returned when an operation is not supported.
+ ErrorCodeUnsupported = Register("errcode", ErrorDescriptor{
+ Value: "UNSUPPORTED",
+ Message: "The operation is unsupported.",
+ Description: `The operation was unsupported due to a missing
+ implementation or invalid set of parameters.`,
+ HTTPStatusCode: http.StatusMethodNotAllowed,
+ })
+
+ // ErrorCodeUnauthorized is returned if a request requires
+ // authentication.
+ ErrorCodeUnauthorized = Register("errcode", ErrorDescriptor{
+ Value: "UNAUTHORIZED",
+ Message: "authentication required",
+ Description: `The access controller was unable to authenticate
+ the client. Often this will be accompanied by a
+ Www-Authenticate HTTP response header indicating how to
+ authenticate.`,
+ HTTPStatusCode: http.StatusUnauthorized,
+ })
+
+ // ErrorCodeDenied is returned if a client does not have sufficient
+ // permission to perform an action.
+ ErrorCodeDenied = Register("errcode", ErrorDescriptor{
+ Value: "DENIED",
+ Message: "requested access to the resource is denied",
+ Description: `The access controller denied access for the
+ operation on a resource.`,
+ HTTPStatusCode: http.StatusForbidden,
+ })
+
+ // ErrorCodeUnavailable provides a common error to report unavailability
+ // of a service or endpoint.
+ ErrorCodeUnavailable = Register("errcode", ErrorDescriptor{
+ Value: "UNAVAILABLE",
+ Message: "service unavailable",
+ Description: "Returned when a service is not available",
+ HTTPStatusCode: http.StatusServiceUnavailable,
+ })
+
+ // ErrorCodeTooManyRequests is returned if a client attempts too many
+ // times to contact a service endpoint.
+ ErrorCodeTooManyRequests = Register("errcode", ErrorDescriptor{
+ Value: "TOOMANYREQUESTS",
+ Message: "too many requests",
+ Description: `Returned when a client attempts to contact a
+ service too many times`,
+ HTTPStatusCode: http.StatusTooManyRequests,
+ })
+)
+
+var nextCode = 1000
+var registerLock sync.Mutex
+
+// Register will make the passed-in error known to the environment and
+// return a new ErrorCode
+func Register(group string, descriptor ErrorDescriptor) ErrorCode {
+ registerLock.Lock()
+ defer registerLock.Unlock()
+
+ descriptor.Code = ErrorCode(nextCode)
+
+ if _, ok := idToDescriptors[descriptor.Value]; ok {
+ panic(fmt.Sprintf("ErrorValue %q is already registered", descriptor.Value))
+ }
+ if _, ok := errorCodeToDescriptors[descriptor.Code]; ok {
+ panic(fmt.Sprintf("ErrorCode %v is already registered", descriptor.Code))
+ }
+
+ groupToDescriptors[group] = append(groupToDescriptors[group], descriptor)
+ errorCodeToDescriptors[descriptor.Code] = descriptor
+ idToDescriptors[descriptor.Value] = descriptor
+
+ nextCode++
+ return descriptor.Code
+}
+
+type byValue []ErrorDescriptor
+
+func (a byValue) Len() int { return len(a) }
+func (a byValue) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
+func (a byValue) Less(i, j int) bool { return a[i].Value < a[j].Value }
+
+// GetGroupNames returns the list of Error group names that are registered
+func GetGroupNames() []string {
+ keys := []string{}
+
+ for k := range groupToDescriptors {
+ keys = append(keys, k)
+ }
+ sort.Strings(keys)
+ return keys
+}
+
+// GetErrorCodeGroup returns the named group of error descriptors
+func GetErrorCodeGroup(name string) []ErrorDescriptor {
+ desc := groupToDescriptors[name]
+ sort.Sort(byValue(desc))
+ return desc
+}
+
+// GetErrorAllDescriptors returns a slice of all ErrorDescriptors that are
+// registered, irrespective of what group they're in
+func GetErrorAllDescriptors() []ErrorDescriptor {
+ result := []ErrorDescriptor{}
+
+ for _, group := range GetGroupNames() {
+ result = append(result, GetErrorCodeGroup(group)...)
+ }
+ sort.Sort(byValue(result))
+ return result
+}
diff --git a/vendor/github.com/docker/docker/AUTHORS b/vendor/github.com/docker/docker/AUTHORS
new file mode 100644
index 000000000..dffacff11
--- /dev/null
+++ b/vendor/github.com/docker/docker/AUTHORS
@@ -0,0 +1,2175 @@
+# This file lists all individuals having contributed content to the repository.
+# For how it is generated, see `hack/generate-authors.sh`.
+
+Aanand Prasad
+Aaron Davidson
+Aaron Feng
+Aaron Hnatiw
+Aaron Huslage
+Aaron L. Xu
+Aaron Lehmann
+Aaron Welch
+Aaron.L.Xu
+Abel Muiño
+Abhijeet Kasurde
+Abhinandan Prativadi
+Abhinav Ajgaonkar
+Abhishek Chanda
+Abhishek Sharma
+Abin Shahab
+Adam Avilla
+Adam Dobrawy
+Adam Eijdenberg
+Adam Kunk
+Adam Miller
+Adam Mills
+Adam Pointer
+Adam Singer
+Adam Walz
+Addam Hardy
+Aditi Rajagopal
+Aditya
+Adnan Khan
+Adolfo Ochagavía
+Adria Casas
+Adrian Moisey
+Adrian Mouat
+Adrian Oprea
+Adrien Folie
+Adrien Gallouët
+Ahmed Kamal
+Ahmet Alp Balkan
+Aidan Feldman
+Aidan Hobson Sayers
+AJ Bowen
+Ajey Charantimath
+ajneu
+Akash Gupta
+Akhil Mohan
+Akihiro Matsushima
+Akihiro Suda
+Akim Demaille
+Akira Koyasu
+Akshay Karle
+Al Tobey
+alambike
+Alan Hoyle
+Alan Scherger
+Alan Thompson
+Albert Callarisa
+Albert Zhang
+Albin Kerouanton
+Alejandro González Hevia
+Aleksa Sarai
+Aleksandrs Fadins
+Alena Prokharchyk
+Alessandro Boch
+Alessio Biancalana
+Alex Chan
+Alex Chen
+Alex Coventry
+Alex Crawford
+Alex Ellis
+Alex Gaynor
+Alex Goodman
+Alex Olshansky
+Alex Samorukov
+Alex Warhawk
+Alexander Artemenko
+Alexander Boyd
+Alexander Larsson
+Alexander Midlash
+Alexander Morozov
+Alexander Shopov
+Alexandre Beslic
+Alexandre Garnier
+Alexandre González
+Alexandre Jomin
+Alexandru Sfirlogea
+Alexei Margasov
+Alexey Guskov
+Alexey Kotlyarov
+Alexey Shamrin
+Alexis THOMAS
+Alfred Landrum
+Ali Dehghani
+Alicia Lauerman
+Alihan Demir
+Allen Madsen
+Allen Sun
+almoehi
+Alvaro Saurin
+Alvin Deng
+Alvin Richards
+amangoel
+Amen Belayneh
+Amir Goldstein
+Amit Bakshi
+Amit Krishnan
+Amit Shukla
+Amr Gawish
+Amy Lindburg
+Anand Patil
+AnandkumarPatel
+Anatoly Borodin
+Anca Iordache
+Anchal Agrawal
+Anda Xu
+Anders Janmyr
+Andre Dublin <81dublin@gmail.com>
+Andre Granovsky
+Andrea Denisse Gómez
+Andrea Luzzardi
+Andrea Turli
+Andreas Elvers
+Andreas Köhler
+Andreas Savvides
+Andreas Tiefenthaler
+Andrei Gherzan
+Andrei Vagin
+Andrew C. Bodine
+Andrew Clay Shafer
+Andrew Duckworth
+Andrew France
+Andrew Gerrand
+Andrew Guenther
+Andrew He
+Andrew Hsu
+Andrew Kuklewicz
+Andrew Macgregor
+Andrew Macpherson
+Andrew Martin
+Andrew McDonnell
+Andrew Munsell
+Andrew Pennebaker
+Andrew Po
+Andrew Weiss
+Andrew Williams
+Andrews Medina
+Andrey Kolomentsev
+Andrey Petrov
+Andrey Stolbovsky
+André Martins
+andy
+Andy Chambers
+andy diller
+Andy Goldstein
+Andy Kipp
+Andy Rothfusz
+Andy Smith
+Andy Wilson
+Anes Hasicic
+Anil Belur
+Anil Madhavapeddy
+Ankit Jain
+Ankush Agarwal
+Anonmily
+Anran Qiao
+Anshul Pundir
+Anthon van der Neut
+Anthony Baire
+Anthony Bishopric
+Anthony Dahanne
+Anthony Sottile
+Anton Löfgren
+Anton Nikitin
+Anton Polonskiy
+Anton Tiurin
+Antonio Murdaca
+Antonis Kalipetis
+Antony Messerli
+Anuj Bahuguna
+Anusha Ragunathan
+apocas
+Arash Deshmeh
+ArikaChen
+Arko Dasgupta
+Arnaud Lefebvre
+Arnaud Porterie
+Arnaud Rebillout
+Arthur Barr
+Arthur Gautier
+Artur Meyster
+Arun Gupta
+Asad Saeeduddin
+Asbjørn Enge
+averagehuman
+Avi Das
+Avi Kivity
+Avi Miller
+Avi Vaid
+ayoshitake
+Azat Khuyiyakhmetov
+Bardia Keyoumarsi
+Barnaby Gray
+Barry Allard
+Bartłomiej Piotrowski
+Bastiaan Bakker
+bdevloed
+Ben Bonnefoy
+Ben Firshman
+Ben Golub
+Ben Gould
+Ben Hall
+Ben Sargent
+Ben Severson
+Ben Toews
+Ben Wiklund
+Benjamin Atkin
+Benjamin Baker
+Benjamin Boudreau
+Benjamin Yolken
+Benny Ng