
Update chart and tests to Helm 3 #146

Closed
wants to merge 2 commits
10 changes: 6 additions & 4 deletions CHANGELOG.md
@@ -9,10 +9,12 @@ Improvements:

* Moved `global.image` to `server.image`
* Changed UI service template to route pods that aren't ready via `publishNotReadyAddresses: true`
* Updated chart to support Helm 3

Bugs:

* Fixed upgrade bug by removing chart label which contained the version
* Fixed unit test failures due to Helm 3 CLI changes

## 0.2.1 (November 12th, 2019)

@@ -52,21 +54,21 @@ Features:

* Added `extraSecretEnvironmentVars` to allow users to mount secrets as
environment variables
* Added `tlsDisable` configurable to change HTTP protocols from HTTP/HTTPS
* Added `tlsDisable` configurable to change HTTP protocols from HTTP/HTTPS
depending on the value
* Added `serviceNodePort` to configure a NodePort value when setting `serviceType`
* Added `serviceNodePort` to configure a NodePort value when setting `serviceType`
to "NodePort"

Improvements:

* Changed UI port to 8200 for better HTTP protocol support
* Added `path` to `extraVolumes` to define where the volume should be
* Added `path` to `extraVolumes` to define where the volume should be
mounted. Defaults to `/vault/userconfig`
* Upgraded Vault to 1.2.2

Bugs:

* Fixed bug where upgrade would fail because immutable labels were being
* Fixed bug where upgrade would fail because immutable labels were being
changed (Helm Version label)
* Fixed bug where UI service used wrong selector after updating helm labels
* Added `VAULT_API_ADDR` env to Vault pod to fix a bug where Vault thinks
4 changes: 2 additions & 2 deletions Chart.yaml
@@ -1,6 +1,6 @@
apiVersion: v1
apiVersion: v2
Review comment (Member):

Please leave this as v1 for now.

Suggested change
apiVersion: v2
apiVersion: v1

name: vault
version: 0.2.1
version: 0.3.0
description: Install and configure Vault on Kubernetes.
home: https://www.vaultproject.io
icon: https://github.com/hashicorp/vault/raw/f22d202cde2018f9455dec755118a9b84586e082/Vault_PrimaryLogo_Black.png
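For readers unfamiliar with the distinction the reviewer is drawing: `apiVersion: v1` charts are installable by both Helm 2 and Helm 3, while `apiVersion: v2` charts are Helm 3 only and move chart dependencies from `requirements.yaml` into `Chart.yaml` itself. A rough sketch of what a v2 manifest for this chart might look like — the `type`, `appVersion`, and `dependencies` fields are illustrative additions, not taken from this PR:

```yaml
# Helm 3 only; Helm 2 rejects apiVersion v2 charts outright.
apiVersion: v2
name: vault
version: 0.3.0
description: Install and configure Vault on Kubernetes.
# Hypothetical fields a v2 chart commonly carries:
type: application          # v2-only field: "application" or "library"
appVersion: "1.2.2"        # the Vault version the chart ships
dependencies: []           # v2 moves these here from requirements.yaml
```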
16 changes: 8 additions & 8 deletions README.md
@@ -16,7 +16,7 @@ of this README. Please refer to the Kubernetes and Helm documentation.

The versions required are:

* **Helm 2.10+** - This is the earliest version of Helm tested. It is possible
* **Helm 3.0+** - This is the earliest version of Helm tested. It is possible
it works with earlier versions but this chart is untested for those versions.
* **Kubernetes 1.9+** - This is the earliest version of Kubernetes tested.
It is possible that this chart works with earlier versions but it is
@@ -56,7 +56,7 @@ The acceptance tests require a Kubernetes cluster with a configured `kubectl`.
```
* [helm](https://helm.sh)
```bash
brew install kubernetes-helm
brew install helm
```

### Running The Tests
@@ -180,13 +180,13 @@ Here are some examples of common test patterns:
```
@test "syncCatalog/Deployment: disabled by default" {
cd `chart_dir`
local actual=$(helm template \
run helm template \
-x templates/server-statefulset.yaml \
--set 'global.enabled=false' \
. | tee /dev/stderr |
yq 'length > 0' | tee /dev/stderr)
[ "${actual}" = "false" ]
.
[ "$status" -eq 1 ]
}
```
Here we check the length of the command output to see if anything is rendered.
This style can easily be switched to check that a file is rendered instead.
Here we are checking to ensure that the template isn't rendered by verifying that `helm template` exits with an error code.

*Note:* This test will not work correctly if you put any comments before the conditional statement.
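The exit-code pattern above relies on bats' `run` helper capturing `$status`. A minimal, self-contained sketch of the mechanics — with stubs standing in for the real `helm` binary and the bats helper, since running them requires a chart and the bats framework:

```shell
#!/bin/sh
# Stub standing in for `helm template --show-only <file>` against a template
# whose guard condition is false: Helm 3 exits non-zero when nothing renders.
helm() { return 1; }

# Minimal stand-in for the bats `run` helper: execute and capture exit status.
run() {
  "$@"
  status=$?
}

run helm template --show-only templates/server-statefulset.yaml .
[ "$status" -eq 1 ] && echo "not rendered, as expected"
```

This is why the comment caveat matters: a leading comment outside the conditional still renders, `helm template` then exits 0, and the `[ "$status" -eq 1 ]` assertion fails.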
4 changes: 2 additions & 2 deletions templates/server-disruptionbudget.yaml
@@ -1,7 +1,7 @@
# PodDisruptionBudget to prevent degrading the server cluster through
# voluntary cluster changes.
{{ template "vault.mode" . }}
{{- if and (and (eq (.Values.global.enabled | toString) "true") (eq .mode "ha")) (eq (.Values.server.ha.disruptionBudget.enabled | toString) "true") -}}
# PodDisruptionBudget to prevent degrading the server cluster through
# voluntary cluster changes.
Review comment (Member):

Thanks for cleaning up the generated yaml templates!

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
2 changes: 1 addition & 1 deletion templates/server-service.yaml
@@ -1,5 +1,5 @@
# Service for Vault cluster
{{- if and (eq (.Values.server.service.enabled | toString) "true" ) (eq (.Values.global.enabled | toString) "true") }}
# Service for Vault cluster
apiVersion: v1
kind: Service
metadata:
4 changes: 2 additions & 2 deletions templates/server-statefulset.yaml
@@ -1,6 +1,6 @@
# StatefulSet to run the actual vault server cluster.
{{ template "vault.mode" . }}
{{- if and (ne .mode "") (eq (.Values.global.enabled | toString) "true") }}
# StatefulSet to run the actual vault server cluster.
apiVersion: apps/v1
kind: StatefulSet
metadata:
@@ -120,7 +120,7 @@ spec:
{{- end }}
lifecycle:
# Vault container doesn't receive SIGTERM from Kubernetes
# and after the grace period ends, Kube sends SIGKILL. This
# and after the grace period ends, Kube sends SIGKILL. This
# causes issues with graceful shutdowns such as deregistering itself
# from Consul (zombie services).
preStop:
2 changes: 1 addition & 1 deletion templates/ui-service.yaml
@@ -1,11 +1,11 @@
{{ template "vault.mode" . }}
{{- if and (ne .mode "") (eq (.Values.global.enabled | toString) "true") }}
{{- if eq (.Values.ui.enabled | toString) "true" }}
# Headless service for Vault server DNS entries. This service should only
# point to Vault servers. For access to an agent, one should assume that
# the agent is installed locally on the node and the NODE_IP should be used.
# If the node can't run a Vault agent, then this service can be used to
# communicate directly to a server agent.
{{- if eq (.Values.ui.enabled | toString) "true" }}
apiVersion: v1
kind: Service
metadata:
2 changes: 1 addition & 1 deletion test/docker/Test.dockerfile
@@ -37,7 +37,7 @@ RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s
mv ./kubectl /usr/local/bin/kubectl

# helm
RUN curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
RUN curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 | bash
Review comment (Member):

Looks like this lives under the helm org now, per https://helm.sh/docs/intro/install/#from-script:

Suggested change
RUN curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 | bash
RUN curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash


# bats
RUN curl -sSL https://github.com/bats-core/bats-core/archive/v${BATS_VERSION}.tar.gz -o /tmp/bats.tgz \
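If reproducible test images matter, the installer script can also pin a release rather than tracking whatever is latest. A hedged sketch — the `DESIRED_VERSION` variable is supported by the get-helm-3 script at the time of writing, and `v3.0.2` is just an example; verify both against the current script before relying on them:

```dockerfile
# Pin the Helm release the test image installs instead of taking the latest.
RUN curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 \
      | DESIRED_VERSION=v3.0.2 bash
```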
20 changes: 0 additions & 20 deletions test/terraform/main.tf
@@ -1,7 +1,3 @@
locals {
service_account_path = "${path.module}/service-account.yaml"
}

provider "google" {
project = "${var.project}"
region = "us-central1"
@@ -91,19 +87,3 @@ resource "null_resource" "kubectl" {
command = "kubectl config get-contexts | grep ${google_container_cluster.cluster.name} | xargs -n1 kubectl config delete-context"
}
}

resource "null_resource" "helm" {
count = "${var.init_cli ? 1 : 0 }"
depends_on = ["null_resource.kubectl"]

triggers = {
cluster = "${google_container_cluster.cluster.id}"
}

provisioner "local-exec" {
command = <<EOF
kubectl apply -f '${local.service_account_path}'
helm init --service-account helm
EOF
}
}
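The deleted provisioner above was Helm 2's Tiller bootstrap. Helm 3 removed Tiller entirely, which is why both the `helm init` step and the Tiller service account manifest can go away. A trivial sketch of the before/after, with the Helm 2 commands kept only as documentation:

```shell
#!/bin/sh
# Helm 2 bootstrap (what the deleted Terraform provisioner ran), shown for
# reference only; this variable is illustrative and never executed here:
helm2_setup="kubectl apply -f service-account.yaml; helm init --service-account helm"
: "$helm2_setup"

# Helm 3 bootstrap: Tiller is gone and the client talks to the Kubernetes API
# directly with the caller's kubeconfig, so there is nothing to run before
# `helm install`.
helm3_setup=""
[ -z "$helm3_setup" ] && echo "no cluster-side init needed for Helm 3"
```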
18 changes: 0 additions & 18 deletions test/terraform/service-account.yaml

This file was deleted.

2 changes: 1 addition & 1 deletion test/terraform/variables.tf
@@ -15,7 +15,7 @@ variable "zone" {

variable "init_cli" {
default = true
description = "Whether to init the CLI tools kubectl, helm, etc. or not."
description = "Whether to init kubectl or not."
}

variable "gcp_service_account" {
42 changes: 19 additions & 23 deletions test/unit/server-clusterrolebinding.bats
@@ -4,56 +4,52 @@ load _helpers

@test "server/ClusterRoleBinding: disabled by default" {
cd `chart_dir`
local actual=$(helm template \
-x templates/server-clusterrolebinding.yaml \
run helm template \
--show-only templates/server-clusterrolebinding.yaml \
--set 'server.dev.enabled=true' \
. | tee /dev/stderr |
yq 'length > 0' | tee /dev/stderr)
[ "${actual}" = "false" ]
.
[ "$status" -eq 1 ]

local actual=$(helm template \
-x templates/server-clusterrolebinding.yaml \
run helm template \
--show-only templates/server-clusterrolebinding.yaml \
--set 'server.ha.enabled=true' \
. | tee /dev/stderr |
yq 'length > 0' | tee /dev/stderr)
[ "${actual}" = "false" ]
.
[ "$status" -eq 1 ]

local actual=$(helm template \
-x templates/server-clusterrolebinding.yaml \
. | tee /dev/stderr |
yq 'length > 0' | tee /dev/stderr)
[ "${actual}" = "false" ]
run helm template \
--show-only templates/server-clusterrolebinding.yaml \
.
[ "$status" -eq 1 ]
}

@test "server/ClusterRoleBinding: disable with global.enabled" {
cd `chart_dir`
local actual=$(helm template \
-x templates/server-clusterrolebinding.yaml \
run helm template \
--show-only templates/server-clusterrolebinding.yaml \
--set 'global.enabled=false' \
. | tee /dev/stderr |
yq 'length > 0' | tee /dev/stderr)
[ "${actual}" = "false" ]
.
[ "$status" -eq 1 ]
Review comment (Member), on lines +27 to +31:

This is an interesting idea, since the clusterrolebinding template will be empty, though I would suggest keeping the existing style for consistency between tests. Something like this?

  local actual=$( (helm template \
      --show-only templates/server-clusterrolebinding.yaml  \
      --set 'global.enabled=false' \
      . || echo "---") | tee /dev/stderr |
      yq 'length > 0' | tee /dev/stderr)
  [ "${actual}" = "false" ]

}
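The reviewer's `|| echo "---"` fallback works because an empty YAML document stream still parses, so the downstream `yq 'length > 0'` evaluates to `false` instead of choking on empty input. A toy sketch of just the control flow, with a stub in place of the failing `helm template` call:

```shell
#!/bin/sh
# Stub for a `helm template` invocation that fails and prints nothing,
# as Helm 3 does when the selected template renders no manifest.
render() { return 1; }

# When the render fails, the subshell falls back to emitting an empty YAML
# document, so the rest of the pipeline still receives valid input.
actual=$( render || echo "---" )
[ "$actual" = "---" ] && echo "fallback emitted an empty document"
```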

@test "server/ClusterRoleBinding: can enable with server.authDelegator" {
cd `chart_dir`
local actual=$(helm template \
-x templates/server-clusterrolebinding.yaml \
--show-only templates/server-clusterrolebinding.yaml \
--set 'server.authDelegator.enabled=true' \
. | tee /dev/stderr |
yq 'length > 0' | tee /dev/stderr)
[ "${actual}" = "true" ]

local actual=$(helm template \
-x templates/server-clusterrolebinding.yaml \
--show-only templates/server-clusterrolebinding.yaml \
--set 'server.authDelegator.enabled=true' \
--set 'server.ha.enabled=true' \
. | tee /dev/stderr |
yq 'length > 0' | tee /dev/stderr)
[ "${actual}" = "true" ]

local actual=$(helm template \
-x templates/server-clusterrolebinding.yaml \
--show-only templates/server-clusterrolebinding.yaml \
--set 'server.authDelegator.enabled=true' \
--set 'server.dev.enabled=true' \
. | tee /dev/stderr |
32 changes: 15 additions & 17 deletions test/unit/server-configmap.bats
@@ -5,20 +5,20 @@ load _helpers
@test "server/ConfigMap: enabled by default" {
cd `chart_dir`
local actual=$(helm template \
-x templates/server-config-configmap.yaml \
--show-only templates/server-config-configmap.yaml \
. | tee /dev/stderr |
yq 'length > 0' | tee /dev/stderr)
[ "${actual}" = "true" ]

local actual=$(helm template \
-x templates/server-config-configmap.yaml \
--show-only templates/server-config-configmap.yaml \
--set 'server.ha.enabled=true' \
. | tee /dev/stderr |
yq 'length > 0' | tee /dev/stderr)
[ "${actual}" = "true" ]

local actual=$(helm template \
-x templates/server-config-configmap.yaml \
--show-only templates/server-config-configmap.yaml \
--set 'server.standalone.enabled=true' \
. | tee /dev/stderr |
yq 'length > 0' | tee /dev/stderr)
@@ -27,36 +27,34 @@ load _helpers

@test "server/ConfigMap: disabled by server.dev.enabled true" {
cd `chart_dir`
local actual=$(helm template \
-x templates/server-config-configmap.yaml \
run helm template \
--show-only templates/server-config-configmap.yaml \
--set 'server.dev.enabled=true' \
. | tee /dev/stderr |
yq 'length > 0' | tee /dev/stderr)
[ "${actual}" = "false" ]
.
[ "$status" -eq 1 ]
}

@test "server/ConfigMap: disable with global.enabled" {
cd `chart_dir`
local actual=$(helm template \
-x templates/server-config-configmap.yaml \
run helm template \
--show-only templates/server-config-configmap.yaml \
--set 'global.enabled=false' \
. | tee /dev/stderr |
yq 'length > 0' | tee /dev/stderr)
[ "${actual}" = "false" ]
.
[ "$status" -eq 1 ]
}

@test "server/ConfigMap: standalone extraConfig is set" {
cd `chart_dir`
local actual=$(helm template \
-x templates/server-config-configmap.yaml \
--show-only templates/server-config-configmap.yaml \
--set 'server.standalone.enabled=true' \
--set 'server.standalone.config="{\"hello\": \"world\"}"' \
. | tee /dev/stderr |
yq '.data["extraconfig-from-values.hcl"] | match("world") | length' | tee /dev/stderr)
[ ! -z "${actual}" ]

local actual=$(helm template \
-x templates/server-config-configmap.yaml \
--show-only templates/server-config-configmap.yaml \
--set 'server.standalone.enabled=true' \
--set 'server.standalone.config="{\"foo\": \"bar\"}"' \
. | tee /dev/stderr |
@@ -67,15 +65,15 @@ load _helpers
@test "server/ConfigMap: ha extraConfig is set" {
cd `chart_dir`
local actual=$(helm template \
-x templates/server-config-configmap.yaml \
--show-only templates/server-config-configmap.yaml \
--set 'server.ha.enabled=true' \
--set 'server.ha.config="{\"hello\": \"world\"}"' \
. | tee /dev/stderr |
yq '.data["extraconfig-from-values.hcl"] | match("world") | length' | tee /dev/stderr)
[ ! -z "${actual}" ]

local actual=$(helm template \
-x templates/server-config-configmap.yaml \
--show-only templates/server-config-configmap.yaml \
--set 'server.ha.enabled=true' \
--set 'server.ha.config="{\"foo\": \"bar\"}"' \
. | tee /dev/stderr |