Comparing changes

Choose two branches to see what’s changed or to start a new pull request.

base repository: Praqma/helmsman (base: master)
head repository: dreamteam-gg/helmsman (compare: master)

Can’t automatically merge. Don’t worry, you can still create the pull request.

Commits on Nov 5, 2017

  1. (commit message not shown) (5706568; verified: created on GitHub.com and signed with GitHub’s verified signature; the key has expired)
  2. adding the desired state specification doc (Sami Alajrami, 1c92106)
  3. fixing links in the desired state specification doc (Sami Alajrami, 919398e)

Commits on Nov 9, 2017

  1. added a dockerfile fro helmsman. (Sami Alajrami, 9cbdb3a)
  2. fixed #6 (Sami Alajrami, 0cf4bf1)
  3. updated README to include the docker image usage. Closes #4 (Sami Alajrami, ad6888b)
  4. (commit message not shown) (6006fdc)
  5. closing #8 (Sami Alajrami, 91bae03)
  6. closing #7 (Sami Alajrami, 046e8f6)
  7. added aws to dockerfile (Sami Alajrami, 9f8566b)

Commits on Nov 15, 2017

  1. added feature for apllying changes in values.yaml when the release is deployed. #9 (Sami Alajrami, f2948bf)
  2. added circleci config and some tests. (Sami Alajrami, 56245d8)
  3. enabled goreleaser in circleci build (Sami Alajrami, b8178cf)
  4. #10 updated the circleci build script and config (Sami Alajrami, 3a3cb5d)
  5. #10 fixed a typo in the build script. (Sami Alajrami, 21fba9c)

Commits on Nov 16, 2017

  1. #2 adding unit tests. (Sami Alajrami, 063ff0d)
  2. removing the context setup from init to allow tests to run in a non-configured env. (Sami Alajrami, e346400)
  3. cleaned up dockerfile for the helmsman test (Sami Alajrami, d63daf1)
  4. (commit message not shown) (8c13f47)
  5. (commit message not shown) (7b28439)
  6. #11 updated plan_test (Sami Alajrami, 4550850)
  7. #10 fixed the working directory in the testing docker image (Sami Alajrami, a8c641d)
  8. #11 changed the test to create empty files and inspect them. (Sami Alajrami, f7d9679)
  9. #10 fixed a typo in the circleci script. (Sami Alajrami, f5c26f5)
  10. #10 updated the circleci script (Sami Alajrami, 47ab329)

Commits on Nov 18, 2017

  1. (commit message not shown) (079d69d)
  2. added helm init and updated error messages processing. (Sami Alajrami, 2c3e82b)
  3. updated logs for running commands with debug option. (Sami Alajrami, 48db801)
  4. (commit message not shown) (35f66b2)
  5. fixed a bug in getReleaseChart() (Sami Alajrami, 1cf06f0)
  6. enabled purge deleting a failed release. (Sami Alajrami, 94b7c13)

Commits on Nov 19, 2017

  1. fixed the rollback revision number param. (Sami Alajrami, 963d008)
  2. Create LICENSE (sami-alajrami, f357f91; verified: created on GitHub.com and signed with GitHub’s verified signature; the key has expired)
  3. fixed install/upgrade bugs (Sami Alajrami, 99e6f95)
  4. close #5 updated docs. (Sami Alajrami, 572af4c)
  5. Merge branch 'master' of https://github.com/Praqma/Helmsman (Sami Alajrami, e8b51bb)

Commits on Nov 22, 2017

  1. removed a broken link (Sami Alajrami, 2e7083c)

Commits on Nov 25, 2017

  1. #15 added support for reading user input from env vars. (Sami Alajrami, a673853)
  2. added a step to cleanup repo before releasing. (Sami Alajrami, ae8ebc0)
  3. improved error reporting. (Sami Alajrami, 3f0e5ac)
  4. updated the k8s password env var validation (Sami Alajrami, 1e2a6de)
  5. updated example.toml (Sami Alajrami, efd4ed6)
  6. close #16 allowed certs to be used from local file system. (Sami Alajrami, c2a92b5)
  7. updated tests. (Sami Alajrami, d9547d5)
  8. fixed cluster password reading bug. (Sami Alajrami, 3232ca0)
  9. fixed a code typo. (Sami Alajrami, d8321e9)
  10. disabled running tests on upgrade and rollback. (Sami Alajrami, ef78050)
  11. adding version. (Sami Alajrami, 419471b)
  12. added version print flag (Sami Alajrami, 24b0920)
  13. added release-notes.md (Sami Alajrami, eb59aab)
Showing with 8,543 additions and 584 deletions.
  1. +77 −0 .circleci/config.yml
  2. +5 −1 .gitignore
  3. +3 −0 .goreleaser.yml
  4. +25 −0 CONTRIBUTION.md
  5. +373 −0 Gopkg.lock
  6. +74 −0 Gopkg.toml
  7. +21 −0 LICENSE
  8. +100 −0 Makefile
  9. +63 −65 README.md
  10. +69 −0 aws/aws.go
  11. +83 −0 azure/azblob.go
  12. +237 −0 bindata.go
  13. +15 −10 command.go
  14. +111 −0 command_test.go
  15. +11 −0 data/role.yaml
  16. +389 −116 decision_maker.go
  17. +78 −0 decision_maker_test.go
  18. +17 −0 dockerfile/README.md
  19. +54 −0 dockerfile/dockerfile
  20. +20 −0 docs/best_practice.md
  21. +71 −0 docs/cmd_reference.md
  22. +144 −0 docs/deployment_strategies.md
  23. +386 −0 docs/desired_state_specification.md
  24. +49 −0 docs/how_to/README.md
  25. +124 −0 docs/how_to/apps/basic.md
  26. +14 −0 docs/how_to/apps/destroy.md
  27. +45 −0 docs/how_to/apps/helm_tests.md
  28. +166 −0 docs/how_to/apps/moving_across_namespaces.md
  29. +70 −0 docs/how_to/apps/multiple_values_files.md
  30. +84 −0 docs/how_to/apps/order.md
  31. +109 −0 docs/how_to/apps/override_namespaces.md
  32. +31 −0 docs/how_to/apps/protection.md
  33. +77 −0 docs/how_to/apps/secrets.md
  34. +33 −0 docs/how_to/deployments/ci.md
  35. +51 −0 docs/how_to/deployments/inside_k8s.md
  36. +27 −0 docs/how_to/helm_repos/basic_auth.md
  37. +45 −0 docs/how_to/helm_repos/default.md
  38. +30 −0 docs/how_to/helm_repos/gcs.md
  39. +42 −0 docs/how_to/helm_repos/local.md
  40. +21 −0 docs/how_to/helm_repos/pre_configured.md
  41. +29 −0 docs/how_to/helm_repos/s3.md
  42. +29 −0 docs/how_to/misc/auth_to_storage_providers.md
  43. +19 −0 docs/how_to/misc/helmsman_on_windows10.md
  44. +47 −0 docs/how_to/misc/limit-deployment-to-specific-apps.md
  45. +44 −0 docs/how_to/misc/merge_desired_state_files.md
  46. +164 −0 docs/how_to/misc/multitenant_clusters_guide.md
  47. +68 −0 docs/how_to/misc/protect_namespaces_and_releases.md
  48. +23 −0 docs/how_to/misc/send_slack_notifications_from_helmsman.md
  49. +27 −0 docs/how_to/namespaces/create.md
  50. +36 −0 docs/how_to/namespaces/labels_and_annotations.md
  51. +52 −0 docs/how_to/namespaces/limits.md
  52. +32 −0 docs/how_to/namespaces/protection.md
  53. +42 −0 docs/how_to/settings/creating_kube_context_with_certs.md
  54. +29 −0 docs/how_to/settings/creating_kube_context_with_token.md
  55. +10 −0 docs/how_to/settings/current_kube_context.md
  56. +19 −0 docs/how_to/settings/existing_kube_context.md
  57. +36 −0 docs/how_to/tiller/deploy_apps_with_specific_tiller.md
  58. +21 −0 docs/how_to/tiller/existing.md
  59. +56 −0 docs/how_to/tiller/multitenancy.md
  60. +18 −0 docs/how_to/tiller/prevent_tiller_in_kube_system.md
  61. +83 −0 docs/how_to/tiller/shared.md
  62. BIN docs/images/helmsman.png
  63. BIN docs/images/multi-DSF.png
  64. +8 −0 docs/migrating_to_v1.4.0-rc.md
  65. +18 −14 docs/why_helmsman.md
  66. +92 −33 example.toml
  67. +112 −0 example.yaml
  68. +81 −0 gcs/gcs.go
  69. +487 −65 helm_helpers.go
  70. +107 −0 helm_helpers_test.go
  71. +159 −136 init.go
  72. +134 −0 init_test.go
  73. +500 −0 kube_helpers.go
  74. +158 −5 main.go
  75. +20 −0 minimal-example.toml
  76. +19 −0 minimal-example.yaml
  77. +65 −0 namespace.go
  78. +110 −19 plan.go
  79. +210 −0 plan_test.go
  80. +8 −0 release-notes.md
  81. +128 −22 release.go
  82. +296 −0 release_test.go
  83. +161 −73 state.go
  84. +438 −0 state_test.go
  85. +33 −0 test_files/dockerfile
  86. +19 −0 test_files/invalid_example.toml
  87. +19 −0 test_files/invalid_example.yaml
  88. 0 test_files/values.xml
  89. 0 test_files/values.yaml
  90. 0 test_files/values2.yaml
  91. +450 −25 utils.go
  92. +383 −0 utils_test.go
77 changes: 77 additions & 0 deletions .circleci/config.yml
@@ -0,0 +1,77 @@
# Golang CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-go/ for more details
version: 2
jobs:
  build:
    working_directory: "/go/src/helmsman"
    docker:
      - image: praqma/helmsman-test
    steps:
      - checkout
      - run:
          name: Build helmsman
          command: |
            echo "building ..."
            make build
  test:
    working_directory: "/go/src/helmsman"
    docker:
      - image: praqma/helmsman-test
    steps:
      - checkout
      - run:
          name: Unit test helmsman
          command: |
            echo "running tests ..."
            make test
  release:
    working_directory: "/go/src/helmsman"
    docker:
      - image: praqma/helmsman-test
    steps:
      - checkout
      - run:
          name: Release helmsman
          command: |
            TAG_SHA=$(git rev-parse $(git describe --abbrev=0 --tags))
            LAST_COMMIT=$(git rev-parse HEAD)
            if [ "${TAG_SHA}" == "${LAST_COMMIT}" ]; then
              echo "releasing ..."
              make release
            else
              echo "No release is needed yet."
              exit 0
            fi
      # - setup_remote_docker
      # - run:
      #     name: build docker images and push them to dockerhub
      #     command: |
      #       TAG=$(git describe --abbrev=0 --tags)
      #       docker login -u $DOCKER_USER -p $DOCKERHUB
      #       docker build -t praqma/helmsman:$TAG-helm-v2.8.1 --build-arg HELM_VERSION=v2.8.1 dockerfile/.
      #       docker push praqma/helmsman:$TAG-helm-v2.8.1
      #       docker build -t praqma/helmsman:$TAG-helm-v2.8.0 --build-arg HELM_VERSION=v2.8.0 dockerfile/.
      #       docker push praqma/helmsman:$TAG-helm-v2.8.0
      #       docker build -t praqma/helmsman:$TAG-helm-v2.7.2 --build-arg HELM_VERSION=v2.7.2 dockerfile/.
      #       docker push praqma/helmsman:$TAG-helm-v2.7.2

workflows:
  version: 2
  build-test-push-release:
    jobs:
      - build
      - test:
          requires:
            - build
      - release:
          requires:
            - test
          filters:
            branches:
              only: master
            tags:
              only: /^v.*/
6 changes: 5 additions & 1 deletion .gitignore
@@ -1,4 +1,8 @@
*.passwd
*.key
*.crt
/dist
/dist
/vendor/
*.world
*.world1
helmsman
3 changes: 3 additions & 0 deletions .goreleaser.yml
@@ -2,6 +2,9 @@
# Build customization
builds:
  - binary: helmsman
    ldflags: -s -w -X main.build={{.Version}} -extldflags "-static"
    env:
      - CGO_ENABLED=0
    goos:
      - darwin
      - linux
25 changes: 25 additions & 0 deletions CONTRIBUTION.md
@@ -0,0 +1,25 @@
# Contribution Guide

Pull requests and feedback/feature requests are all welcome. This guide will be updated over time.

## Build helmsman from source

To build helmsman from source, you need go:1.9+. Follow the steps below:

```
git clone https://github.com/Praqma/helmsman.git
cd helmsman
make build
```

## Submitting pull requests

Please make sure you state the purpose of the pull request and that the code you submit is documented. If in doubt, [this guide](https://blog.github.com/2015-01-21-how-to-write-the-perfect-pull-request/) offers some good tips on writing a PR.

## Contribution to documentation

Contribution to the documentation can be done via pull requests or by opening an issue.

## Reporting issues/feature requests

Please provide details of the issue, the versions of helmsman, helm and kubernetes in use, and all possible logs.

373 changes: 373 additions & 0 deletions Gopkg.lock
74 changes: 74 additions & 0 deletions Gopkg.toml
@@ -0,0 +1,74 @@
# Gopkg.toml example
#
# Refer to https://golang.github.io/dep/docs/Gopkg.toml.html
# for detailed Gopkg.toml documentation.
#
# required = ["github.com/user/thing/cmd/thing"]
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
#
# [[constraint]]
# name = "github.com/user/project"
# version = "1.0.0"
#
# [[constraint]]
# name = "github.com/user/project2"
# branch = "dev"
# source = "github.com/myfork/project2"
#
# [[override]]
# name = "github.com/x/y"
# version = "2.4.0"
#
# [prune]
# non-go = false
# go-tests = true
# unused-packages = true

[[constraint]]
version = "0.3.0"
name = "github.com/Azure/azure-storage-blob-go"

[[constraint]]
name = "cloud.google.com/go"
version = "0.28.0"

[[constraint]]
name = "github.com/BurntSushi/toml"
version = "0.3.1"

[[constraint]]
name = "github.com/Praqma/helmsman"
#version = "1.7.4"
branch = "master"

[[constraint]]
name = "github.com/aws/aws-sdk-go"
version = "1.15.43"

[[constraint]]
name = "github.com/hashicorp/go-version"
version = "1.1.0"

[[constraint]]
name = "github.com/imdario/mergo"
version = "0.3.6"

[[constraint]]
name = "github.com/joho/godotenv"
version = "1.3.0"

[[constraint]]
branch = "master"
name = "github.com/logrusorgru/aurora"

[[constraint]]
branch = "master"
name = "golang.org/x/net"

[[constraint]]
name = "gopkg.in/yaml.v2"
version = "2.2.2"

[prune]
go-tests = true
unused-packages = true
21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2017 Praqma

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
100 changes: 100 additions & 0 deletions Makefile
@@ -0,0 +1,100 @@
.DEFAULT_GOAL := help

PKGS := $(shell go list ./... | grep -v /vendor/)
TAG := $(shell git describe --always --tags --abbrev=0 HEAD)
LAST := $(shell git describe --always --tags --abbrev=0 HEAD^)
BODY := "`git log ${LAST}..HEAD --oneline --decorate` `printf '\n\#\#\# [Build Info](${BUILD_URL})'`"
DATE := $(shell date +'%d%m%y')

# Ensure we have an unambiguous GOPATH.
GOPATH := $(shell go env GOPATH)

ifneq "$(or $(findstring :,$(GOPATH)),$(findstring ;,$(GOPATH)))" ""
  $(error GOPATHs with multiple entries are not supported)
endif

GOPATH := $(realpath $(GOPATH))
ifeq ($(strip $(GOPATH)),)
  $(error GOPATH is not set and could not be automatically determined)
endif

SRCDIR := $(GOPATH)/src/

ifeq ($(filter $(GOPATH)%,$(CURDIR)),)
  GOPATH := $(shell mktemp -d "/tmp/dep.XXXXXXXX")
  SRCDIR := $(GOPATH)/src/
endif

ifneq ($(OS),Windows_NT)
  # Before we start test that we have the mandatory executables available
  EXECUTABLES = go
  OK := $(foreach exec,$(EXECUTABLES),\
    $(if $(shell which $(exec)),some string,$(error "No $(exec) in PATH, please install $(exec)")))
endif

help:
	@echo "Available options:"
	@grep -E '^[/1-9a-zA-Z._%-]+:.*?## .*$$' $(MAKEFILE_LIST) \
		| sort \
		| awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-45s\033[0m %s\n", $$1, $$2}'
.PHONY: help

clean: ## Remove build artifacts
	@git clean -fdX
.PHONY: clean

fmt: ## Reformat package sources
	@go fmt
.PHONY: fmt

dependencies: ## Ensure all the necessary dependencies
	@go get -t -d -v ./...
.PHONY: dependencies

$(SRCDIR):
	@mkdir -p $(SRCDIR)
	@ln -s $(CURDIR) $(SRCDIR)

dep: $(SRCDIR) ## Ensure vendors with dep
	@cd $(SRCDIR)helmsman && \
		dep ensure
.PHONY: dep

dep-update: $(SRCDIR) ## Ensure vendors with dep
	@cd $(SRCDIR)helmsman && \
		dep ensure --update
.PHONY: dep-update

build: dep ## Build the package
	@cd $(SRCDIR)helmsman && \
		go build -ldflags '-X main.version="${TAG}-${DATE}" -extldflags "-static"'

generate:
	@go generate #${PKGS}
.PHONY: generate

check: $(SRCDIR)
	@cd $(SRCDIR)helmsman && \
		dep check && \
		go vet #${PKGS}
.PHONY: check

test: dep ## Run unit tests
	@cd $(SRCDIR)helmsman && \
		go test -v -cover -p=1 -args -f example.toml
.PHONY: test

cross: dep ## Create binaries for all OSs
	@cd $(SRCDIR)helmsman && \
		env CGO_ENABLED=0 gox -os '!freebsd !netbsd' -arch '!arm' -output "dist/{{.Dir}}_{{.OS}}_{{.Arch}}" -ldflags '-X main.Version=${TAG}-${DATE}'
.PHONY: cross

release: dep ## Generate a new release
	@cd $(SRCDIR)helmsman && \
		goreleaser --release-notes release-notes.md

tools: ## Get extra tools used by this makefile
	@go get -u github.com/golang/dep/cmd/dep
	@go get -u github.com/mitchellh/gox
	@go get -u github.com/goreleaser/goreleaser
.PHONY: tools
128 changes: 63 additions & 65 deletions README.md
@@ -1,97 +1,95 @@
[![GitHub version](https://d25lcipzij17d.cloudfront.net/badge.svg?id=gh&type=6&v=v1.9.1&x2=0)](https://github.com/Praqma/helmsman/releases) [![CircleCI](https://circleci.com/gh/Praqma/helmsman/tree/master.svg?style=svg)](https://circleci.com/gh/Praqma/helmsman/tree/master)

![helmsman-logo](docs/images/helmsman.png)

# What is Helmsman?

Helmsman is a Helm Charts as Code tool which adds another layer of abstraction on top of [Helm](https://helm.sh) (the [Kubernetes](https://kubernetes.io/) package manager). It allows you to automate the deployment/management of your Helm charts (k8s packaged applications).
Helmsman is a Helm Charts (k8s applications) as Code tool which allows you to automate the deployment/management of your Helm charts from version controlled code.

# Why Helmsman?
# How does it work?

Helmsman was created to ease continuous deployment of Helm charts. When you want to configure a continuous deployment pipeline to manage multiple charts deployed on your k8s cluster(s), a CI script will quickly become complex and difficult to maintain. That's where Helmsman comes to the rescue. Read more about [how Helmsman can save you time and effort in the docs](docs/why_helmsman.md).
Helmsman uses a simple declarative [TOML](https://github.com/toml-lang/toml) file to allow you to describe a desired state for your k8s applications as in the [example toml file](https://github.com/Praqma/helmsman/blob/master/example.toml).
Alternatively, a YAML declaration is also accepted: see the [example yaml file](https://github.com/Praqma/helmsman/blob/master/example.yaml).

The desired state file (DSF) follows the [desired state specification](https://github.com/Praqma/helmsman/blob/master/docs/desired_state_specification.md).

# How does it work?
Helmsman sees what you desire, validates that your desire makes sense (e.g. that the charts you desire are available in the repos you defined), compares it with the current state of Helm and figures out what to do to make your desire come true.

Helmsman uses a simple declarative [TOML](https://github.com/toml-lang/toml) file to allow you to describe a desired state for your k8s applications as in the [example file](example.toml).
To plan without executing:
``` $ helmsman -f example.toml ```

The desired state file follows the [desired state specification](docs/desired_state_specification.md).
To plan and execute the plan:
``` $ helmsman --apply -f example.toml ```

Helmsman sees what you desire, validates that your desire makes sense (e.g. that the charts you desire are available in the repos you defined), compares it with the current state of Helm and figures out what to do to make your desire come true. Below is the result of executing the [example.toml](example.toml)
To show debugging details:
``` $ helmsman --debug --apply -f example.toml ```

```
$ helmsman -f example.toml -apply
2017/11/04 17:23:34 Parsed [[ example.toml ]] successfully and found [2] apps
2017/11/04 17:23:49 WARN: I could not create namespace [staging ]. It already exists. I am skipping this.
2017/11/04 17:23:49 WARN: I could not create namespace [default ]. It already exists. I am skipping this.
---------------
Ok, I have generated a plan for you at: 2017-11-04 17:23:49.649943386 +0100 CET m=+14.976742294
DECISION: release [ jenkins ] is currently deleted and is desired to be rolledback to namespace [[ staging ]] . No problem!
DECISION: release [ jenkins ] is required to be tested when installed/upgraded/rolledback. Got it!
DECISION: release [ vault ] is not present in the current k8s context. Will install it in namespace [[ staging ]]
DECISION: release [ vault ] is required to be tested when installed/upgraded/rolledback. Got it!
```
To run a dry-run:
``` $ helmsman --debug --dry-run -f example.toml ```

```
$ helm list
NAME REVISION UPDATED STATUS CHART NAMESPACE
jenkins 1 Thu Nov 4 17:24:05 2017 DEPLOYED jenkins-0.9.0 staging
vault 1 Thu Nov 4 17:24:55 2017 DEPLOYED vault-0.1.0 staging
```
To limit execution to a specific application:
``` $ helmsman --debug --dry-run --target artifactory -f example.toml ```

You can then change your desire, for example to disable the Jenkins release that was created above by setting `enabled = false`:
# Features

```
...
[apps.jenkins]
name = "jenkins" # should be unique across all apps
description = "jenkins"
env = "staging" # maps to the namespace as defined in environmetns above
enabled = false # change to false if you want to delete this app release [empty = flase]
chart = "stable/jenkins" # changing the chart name means delete and recreate this chart
version = "0.9.0"
valuesFile = "" # leaving it empty uses the default chart values
purge = false # will only be considered when there is a delete operation
test = true # run the tests whenever this release is installed/upgraded/rolledback
...
- **Built for CD**: Helmsman can be used as a docker image or a binary.
- **Applications as code**: describe your desired applications and manage them from a single version-controlled declarative file.
- **Suitable for Multitenant Clusters**: deploy Tiller in different namespaces with service accounts and TLS.
- **Easy to use**: deep knowledge of Helm CLI and Kubectl is NOT mandatory to use Helmsman.
- **Plan, View, apply**: you can run Helmsman to generate and view a plan with/without executing it.
- **Portable**: Helmsman can be used to manage charts deployments on any k8s cluster.
- **Protect Namespaces/Releases**: you can define certain namespaces/releases to be protected against accidental human mistakes.
- **Define the order of managing releases**: you can define the priorities at which releases are managed by helmsman (useful for dependencies).
- **Idempotency**: As long as your desired state file does not change, you can execute Helmsman several times and get the same result.
- **Continue from failures**: In the case of partial deployment due to a specific chart deployment failure, fix your helm chart and execute Helmsman again without needing to rollback the partial successes first.

```
# Install

Then run Helmsman again and it will detect that you want to delete Jenkins:
## From binary

```
$ helmsman -f example.toml -apply
2017/11/04 17:25:29 Parsed [[ example.toml ]] successfully and found [2] apps
2017/11/04 17:25:44 WARN: I could not create namespace [staging ]. It already exists. I am skipping this.
2017/11/04 17:25:44 WARN: I could not create namespace [default ]. It already exists. I am skipping this.
---------------
Ok, I have generated a plan for you at: 2017-11-04 17:23:44.649947467 +0100 CET m=+14.976746752
DECISION: release [ jenkins ] is desired to be deleted and purged!. Planing this for you!
```
Please make sure the following are installed prior to using `helmsman` as a binary (the docker image contains all of them):

- [kubectl](https://github.com/kubernetes/kubectl)
- [helm](https://github.com/helm/helm) (for `helmsman` >= 1.6.0, use helm >= 2.10.0; this is due to a dependency bug, #87)
- [helm-diff](https://github.com/databus23/helm-diff) (`helmsman` >= 1.6.0)

If you use private helm repos, you will need either the `helm-gcs` or `helm-s3` plugin, or you can use basic auth to authenticate to your repos. See the [docs](https://github.com/Praqma/helmsman/blob/master/docs/how_to/use_private_helm_charts.md) for details.


Check the [releases page](https://github.com/Praqma/Helmsman/releases) for the different versions.
```
$ helm list
NAME REVISION UPDATED STATUS CHART NAMESPACE
vault 1 Thu Nov 4 17:24:55 2017 DEPLOYED vault-0.1.0 staging
# on Linux
curl -L https://github.com/Praqma/helmsman/releases/download/v1.9.1/helmsman_1.9.1_linux_amd64.tar.gz | tar zx
# on MacOS
curl -L https://github.com/Praqma/helmsman/releases/download/v1.9.1/helmsman_1.9.1_darwin_amd64.tar.gz | tar zx
mv helmsman /usr/local/bin/helmsman
```

Similarly, if you change `enabled` back to `true`, it will figure out that you would like to roll it back. You can also change the chart or chart version and specify a values.yaml file to override the default chart values.
## As a docker image
Check the images on [dockerhub](https://hub.docker.com/r/praqma/helmsman/tags/)

# Usage
## As a package
Helmsman has been packaged in Archlinux under `helmsman-bin` for the latest binary release, and `helmsman-git` for master.

Helmsman can be used in two ways:
# Documentation

1. In a continuous deployment pipeline. In this case Helmsman can be used in a docker container run by your CI system to maintain your desired state (which you can store in a version control repository). The docker image will be available soon.
- [How-Tos](https://github.com/Praqma/helmsman/blob/master/docs/how_to/).

[//]: # (docker run -it praqma/helmsman ```)
- [Desired state specification](https://github.com/Praqma/helmsman/blob/master/docs/desired_state_specification.md).

[//]: # (The docker image is built from a [dockerfile](dockerfile/dockerfile).)
- [CMD reference](https://github.com/Praqma/helmsman/blob/master/docs/cmd_reference.md)

2. As a binary application. Helmsman depends on [Helm](https://helm.sh) and [Kubectl](https://kubernetes.io/docs/user-guide/kubectl/) being installed. See below for installation.

# Installation
## Usage

Install Helmsman for your OS from the [releases page](https://github.com/Praqma/Helmsman/releases). Currently available for Linux and MacOS only.
Helmsman can be used in three different settings:

# Documentation
- [As a binary with a hosted cluster](https://github.com/Praqma/helmsman/blob/master/docs/how_to/settings).
- [As a docker image in a CI system or local machine](https://github.com/Praqma/helmsman/blob/master/docs/how_to/deployments/ci.md). Always use a tagged docker image from [dockerhub](https://hub.docker.com/r/praqma/helmsman/) as the `latest` image can (at times) be unstable.
- [As a docker image inside a k8s cluster](https://github.com/Praqma/helmsman/blob/master/docs/how_to/deployments/inside_k8s.md)

Documentation can be found under the [docs](/docs/) directory.

# Contributing
Contribution and feedback/feature requests are welcome. Please check the [Contribution Guide](CONTRIBUTING.md).

Pull requests and feedback/feature requests are welcome. Please check our [contribution guide](CONTRIBUTION.md).
69 changes: 69 additions & 0 deletions aws/aws.go
@@ -0,0 +1,69 @@
package aws

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
	"github.com/logrusorgru/aurora"
)

// colorizer
var style aurora.Aurora

func checkCredentialsEnvVar() bool {

	if os.Getenv("AWS_ACCESS_KEY_ID") == "" || os.Getenv("AWS_SECRET_ACCESS_KEY") == "" {

		return false

	} else if os.Getenv("AWS_REGION") == "" {

		if os.Getenv("AWS_DEFAULT_REGION") == "" {
			return false
		}
		os.Setenv("AWS_REGION", os.Getenv("AWS_DEFAULT_REGION"))

	}
	return true
}

// ReadFile reads a file from S3 bucket and saves it in a desired location.
func ReadFile(bucketName string, filename string, outFile string, noColors bool) {
	style = aurora.NewAurora(!noColors)
	// Checking env vars are set to configure AWS
	if !checkCredentialsEnvVar() {
		log.Println("WARN: Failed to find the AWS env vars needed to configure AWS. Please make sure they are set in the environment.")
	}

	// Create Session -- use config (credentials + region) from env vars or aws profile
	sess, err := session.NewSession()

	if err != nil {
		log.Fatal(style.Bold(style.Red("ERROR: Can't create AWS session: " + err.Error())))
	}
	// create S3 download manager
	downloader := s3manager.NewDownloader(sess)

	file, err := os.Create(outFile)
	if err != nil {
		log.Fatal(style.Bold(style.Red("ERROR: Failed to open file " + outFile + ": " + err.Error())))
	}

	defer file.Close()

	_, err = downloader.Download(file,
		&s3.GetObjectInput{
			Bucket: aws.String(bucketName),
			Key:    aws.String(filename),
		})
	if err != nil {
		log.Fatal(style.Bold(style.Red("ERROR: Failed to download file " + filename + " from S3: " + err.Error())))
	}

	log.Println("INFO: Successfully downloaded " + filename + " from S3 as " + outFile)

}
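
For orientation, calling this helper from another package looks roughly like the sketch below. Only the `ReadFile` signature is taken from the code above; the bucket and file names are hypothetical.

```go
package main

import "github.com/Praqma/helmsman/aws"

func main() {
	// Hypothetical usage: download s3://my-charts-bucket/values.yaml
	// to ./values.yaml, with colored log output enabled (noColors=false).
	aws.ReadFile("my-charts-bucket", "values.yaml", "values.yaml", false)
}
```

Note that `ReadFile` calls `log.Fatal` on failure, so a caller gets either the output file or process termination; there is no error value to handle.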
83 changes: 83 additions & 0 deletions azure/azblob.go
@@ -0,0 +1,83 @@
package azure

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"log"
	"net/url"
	"os"

	"github.com/Azure/azure-pipeline-go/pipeline"
	"github.com/Azure/azure-storage-blob-go/azblob"
	"github.com/logrusorgru/aurora"
)

// colorizer
var style aurora.Aurora
var accountName string
var accountKey string
var p pipeline.Pipeline

// auth checks for AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_ACCESS_KEY in the environment
// if env vars are set, it will authenticate and create an azblob request pipeline
// returns false and error message if credentials are not set or are invalid
func auth() (bool, string) {
	accountName, accountKey = os.Getenv("AZURE_STORAGE_ACCOUNT"), os.Getenv("AZURE_STORAGE_ACCESS_KEY")
	if len(accountName) != 0 && len(accountKey) != 0 {
		log.Println("AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_ACCESS_KEY are set in the environment. They will be used to connect to Azure storage.")
		// Create a default request pipeline
		credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
		if err == nil {
			p = azblob.NewPipeline(credential, azblob.PipelineOptions{})
			return true, ""
		}
		return false, err.Error()

	}
	return false, "either the AZURE_STORAGE_ACCOUNT or AZURE_STORAGE_ACCESS_KEY environment variable is not set"
}

// ReadFile reads a file from storage container and saves it in a desired location.
func ReadFile(containerName string, filename string, outFile string, noColors bool) {
	style = aurora.NewAurora(!noColors)
	if ok, err := auth(); !ok {
		log.Fatal(style.Bold(style.Red("ERROR: " + err)))
	}

	URL, _ := url.Parse(
		fmt.Sprintf("https://%s.blob.core.windows.net/%s", accountName, containerName))

	containerURL := azblob.NewContainerURL(*URL, p)

	ctx := context.Background()

	blobURL := containerURL.NewBlockBlobURL(filename)
	downloadResponse, err := blobURL.Download(ctx, 0, azblob.CountToEnd, azblob.BlobAccessConditions{}, false)
	if err != nil {
		log.Fatal(style.Bold(style.Red("ERROR: failed to download file " + filename + " with error: " + err.Error())))
	}
	bodyStream := downloadResponse.Body(azblob.RetryReaderOptions{MaxRetryRequests: 20})

	// read the body into a buffer
	downloadedData := bytes.Buffer{}
	if _, err = downloadedData.ReadFrom(bodyStream); err != nil {
		log.Fatal(style.Bold(style.Red("ERROR: failed to download file " + filename + " with error: " + err.Error())))
	}

	// create output file and write to it
	var writers []io.Writer
	file, err := os.Create(outFile)
	if err != nil {
		log.Fatal(style.Bold(style.Red("ERROR: Failed to create an output file: " + err.Error())))
	}
	writers = append(writers, file)
	defer file.Close()

	dest := io.MultiWriter(writers...)
	if _, err := downloadedData.WriteTo(dest); err != nil {
		log.Fatal(style.Bold(style.Red("ERROR: Failed to read object content: " + err.Error())))
	}
	log.Println("INFO: Successfully downloaded " + filename + " from Azure storage as " + outFile)
}
237 changes: 237 additions & 0 deletions bindata.go
25 changes: 15 additions & 10 deletions command.go
@@ -5,11 +5,12 @@ import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"syscall"
)

// command type representing all executable commands Helmsman needs
// to execute in order to inspect the environement/ releases/ charts etc.
// to execute in order to inspect the environment/ releases/ charts etc.
type command struct {
	Cmd string
	Args []string
@@ -29,29 +30,33 @@ func (c command) printFullCommand() {
}

// exec executes the executable command and returns the exit code and execution result
func (c command) exec(debug bool) (int, string) {
func (c command) exec(debug bool, verbose bool) (int, string) {

	if debug {
		log.Println("INFO: executing command: " + c.Description)
		log.Println("INFO: " + c.Description)
	}
	if verbose {
		log.Println("VERBOSE: " + strings.Join(c.Args[1:], " "))
	}

	cmd := exec.Command(c.Cmd, c.Args...)
	var out bytes.Buffer
	cmd.Stdout = &out
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	if err := cmd.Start(); err != nil {
		log.Fatalf("ERROR: cmd.Start: %v", err)
		logError("ERROR: cmd.Start: " + err.Error())
	}

	if err := cmd.Wait(); err != nil {
		if exiterr, ok := err.(*exec.ExitError); ok {
			if status, ok := exiterr.Sys().(syscall.WaitStatus); ok {
				//log.Printf("Exit Status: %d", status.ExitStatus())
				return status.ExitStatus(), out.String()
				return status.ExitStatus(), stderr.String()
			}
		} else {
			log.Fatalf("ERROR: cmd.Wait: %v", err)
			logError("ERROR: cmd.Wait: " + err.Error())
		}
	}

	return 0, out.String()
	return 0, stdout.String()
}
111 changes: 111 additions & 0 deletions command_test.go
@@ -0,0 +1,111 @@
package main

import (
	"strings"
	"testing"
)

// func Test_command_printDescription(t *testing.T) {
// 	type fields struct {
// 		Cmd         string
// 		Args        []string
// 		Description string
// 	}
// 	tests := []struct {
// 		name   string
// 		fields fields
// 	}{
// 		// TODO: Add test cases.
// 	}
// 	for _, tt := range tests {
// 		t.Run(tt.name, func(t *testing.T) {
// 			c := command{
// 				Cmd:         tt.fields.Cmd,
// 				Args:        tt.fields.Args,
// 				Description: tt.fields.Description,
// 			}
// 			c.printDescription()
// 		})
// 	}
// }

// func Test_command_printFullCommand(t *testing.T) {
// 	type fields struct {
// 		Cmd         string
// 		Args        []string
// 		Description string
// 	}
// 	tests := []struct {
// 		name   string
// 		fields fields
// 	}{
// 		// TODO: Add test cases.
// 	}
// 	for _, tt := range tests {
// 		t.Run(tt.name, func(t *testing.T) {
// 			c := command{
// 				Cmd:         tt.fields.Cmd,
// 				Args:        tt.fields.Args,
// 				Description: tt.fields.Description,
// 			}
// 			c.printFullCommand()
// 		})
// 	}
// }

func Test_command_exec(t *testing.T) {
	type fields struct {
		Cmd         string
		Args        []string
		Description string
	}
	type args struct {
		debug   bool
		verbose bool
	}
	tests := []struct {
		name   string
		fields fields
		args   args
		want   int
		want1  string
	}{
		{
			name: "echo",
			fields: fields{
				Cmd:         "bash",
				Args:        []string{"-c", "echo this is fun"},
				Description: "A bash command execution test with echo.",
			},
			args:  args{debug: false, verbose: false},
			want:  0,
			want1: "this is fun",
		}, {
			name: "exitCode",
			fields: fields{
				Cmd:         "bash",
				Args:        []string{"-c", "echo $?"},
				Description: "A bash command execution test with exitCode.",
			},
			args:  args{debug: false},
			want:  0,
			want1: "0",
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			c := command{
				Cmd:         tt.fields.Cmd,
				Args:        tt.fields.Args,
				Description: tt.fields.Description,
			}
			got, got1 := c.exec(tt.args.debug, tt.args.verbose)
			if got != tt.want {
				t.Errorf("command.exec() got = %v, want %v", got, tt.want)
			}
			if strings.TrimSpace(got1) != tt.want1 {
				t.Errorf("command.exec() got1 = %v, want %v", got1, tt.want1)
			}
		})
	}
}
11 changes: 11 additions & 0 deletions data/role.yaml
@@ -0,0 +1,11 @@
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: <<role-name>>
  namespace: <<namespace>>
  labels:
    CREATED-BY: HELMSMAN
rules:
  - apiGroups: ["", "batch", "extensions", "apps", "autoscaling", "rbac.authorization.k8s.io"]
    resources: ["*"]
    verbs: ["*"]
505 changes: 389 additions & 116 deletions decision_maker.go

Large diffs are not rendered by default.

78 changes: 78 additions & 0 deletions decision_maker_test.go
@@ -0,0 +1,78 @@
package main

import (
	"testing"
)

func Test_getValuesFiles(t *testing.T) {
	type args struct {
		r *release
	}
	tests := []struct {
		name string
		args args
		want string
	}{
		{
			name: "test case 1",
			args: args{
				r: &release{
					Name:        "release1",
					Description: "",
					Namespace:   "namespace",
					Enabled:     true,
					Chart:       "repo/chartX",
					Version:     "1.0",
					ValuesFile:  "test_files/values.yaml",
					Purge:       true,
					Test:        true,
				},
				//s: st,
			},
			want: " -f test_files/values.yaml",
		},
		{
			name: "test case 2",
			args: args{
				r: &release{
					Name:        "release1",
					Description: "",
					Namespace:   "namespace",
					Enabled:     true,
					Chart:       "repo/chartX",
					Version:     "1.0",
					ValuesFiles: []string{"test_files/values.yaml"},
					Purge:       true,
					Test:        true,
				},
				//s: st,
			},
			want: " -f test_files/values.yaml",
		},
		{
			name: "test case 3",
			args: args{
				r: &release{
					Name:        "release1",
					Description: "",
					Namespace:   "namespace",
					Enabled:     true,
					Chart:       "repo/chartX",
					Version:     "1.0",
					ValuesFiles: []string{"test_files/values.yaml", "test_files/values2.yaml"},
					Purge:       true,
					Test:        true,
				},
				//s: st,
			},
			want: " -f test_files/values.yaml -f test_files/values2.yaml",
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := getValuesFiles(tt.args.r); got != tt.want {
				t.Errorf("getValuesFiles() = %v, want %v", got, tt.want)
			}
		})
	}
}
17 changes: 17 additions & 0 deletions dockerfile/README.md
@@ -0,0 +1,17 @@
---
version: v0.1.2
---

# Usage

```
docker run -v $(pwd):/tmp --rm -it \
-e KUBECTL_PASSWORD=<k8s_password> \
-e AWS_ACCESS_KEY_ID=<aws_key_id> \
-e AWS_DEFAULT_REGION=<aws_region> \
-e AWS_SECRET_ACCESS_KEY=<access_key> \
praqma/helmsman:v0.1.2 \
helmsman -debug -apply -f <your_desired_state_file>.<toml|yaml>
```

Check the different image tags on [Dockerhub](https://hub.docker.com/r/praqma/helmsman/)
54 changes: 54 additions & 0 deletions dockerfile/dockerfile
@@ -0,0 +1,54 @@
# This is a docker image for helmsman


FROM golang:1.10-alpine3.7 as builder

WORKDIR /go/src/

RUN apk --no-cache add make git
RUN git clone https://github.com/Praqma/helmsman.git

# build a statically linked binary so that it works on stripped linux images such as alpine/busybox.
RUN cd helmsman \
&& LastTag=$(git describe --abbrev=0 --tags) \
&& TAG=$LastTag-$(date +"%d%m%y") \
&& LT_SHA=$(git rev-parse ${LastTag}^{}) \
&& LC_SHA=$(git rev-parse HEAD) \
&& if [ ${LT_SHA} != ${LC_SHA} ]; then TAG=latest-$(date +"%d%m%y"); fi \
&& make dependencies \
&& CGO_ENABLED=0 GOOS=linux go install -a -ldflags '-X main.version='$TAG' -extldflags "-static"' .


# The image to keep
FROM alpine:3.7

ARG KUBE_VERSION
ARG HELM_VERSION

ENV KUBE_VERSION ${KUBE_VERSION:-v1.11.3}
ENV HELM_VERSION ${HELM_VERSION:-v2.11.0}
ENV HELM_DIFF_VERSION ${HELM_DIFF_VERSION:-v2.11.0+3}

RUN apk --no-cache update \
&& apk add --update --no-cache ca-certificates git openssh \
&& apk add --update -t deps curl tar gzip make bash \
&& rm -rf /var/cache/apk/* \
&& curl -L https://storage.googleapis.com/kubernetes-release/release/${KUBE_VERSION}/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl \
&& chmod +x /usr/local/bin/kubectl \
&& curl -L http://storage.googleapis.com/kubernetes-helm/helm-${HELM_VERSION}-linux-amd64.tar.gz | tar zxv -C /tmp \
&& mv /tmp/linux-amd64/helm /usr/local/bin/helm \
&& rm -rf /tmp/linux-amd64 \
&& chmod +x /usr/local/bin/helm

COPY --from=builder /go/bin/helmsman /bin/helmsman

RUN mkdir -p ~/.helm/plugins \
&& helm plugin install https://github.com/hypnoglow/helm-s3.git \
&& helm plugin install https://github.com/nouney/helm-gcs \
&& helm plugin install https://github.com/databus23/helm-diff --version ${HELM_DIFF_VERSION} \
&& helm plugin install https://github.com/futuresimple/helm-secrets \
&& rm -r /tmp/helm-diff /tmp/helm-diff.tgz

WORKDIR /tmp
# ENTRYPOINT ["/bin/helmsman"]

20 changes: 20 additions & 0 deletions docs/best_practice.md
@@ -0,0 +1,20 @@
---
version: v1.0.0
---

# Best Practice

When using Helmsman, we recommend the following best practices:

- Add useful metadata in your desired state files (DSFs) so that others (who have access to them) can understand what your DSF is for. We recommend the following metadata: organization, maintainer (name and email), and description/purpose.

- Use environment variables to pass K8S connection secrets (password, certificates paths on the local system or AWS/GCS bucket urls, and the API URI). This keeps all sensitive information out of your version-controlled source code; see the sketch after this list.

- Define certain namespaces (e.g., production) as protected namespaces (supported in v1.0.0+) and deploy your production-ready releases there.

- If you use multiple desired state files (DSF) with the same cluster, make sure your namespace protection definitions are identical across all DSFs.

- Don't maintain the same release in multiple DSFs.

- While the decision on how many DSFs to use and what each can contain is up to you and depends on your case, we recommend coming up with your own rule for how to split them.
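
As an illustration of the environment-variable recommendation above, connection secrets can be referenced indirectly in a DSF. This is a minimal sketch; the variable names are hypothetical, and the expansion behavior is the env-var support described in the desired state specification:

```toml
[settings]
kubeContext = "production"
username = "admin"
# Helmsman reads these values from the environment at runtime,
# so no secret ever needs to be committed to version control.
password = "$K8S_PASSWORD"
clusterURI = "$CLUSTER_URI"
```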

71 changes: 71 additions & 0 deletions docs/cmd_reference.md
@@ -0,0 +1,71 @@
---
version: v1.9.0
---

# CMD reference

This is the list of the available CMD options in Helmsman:

> you can find the CMD options for the version you are using by typing: `helmsman -h` or `helmsman --help`

`--apply`
apply the plan directly.

`--apply-labels`
apply Helmsman labels to Helm state for all defined apps.

`--debug`
show the execution logs.

`--destroy`
delete all deployed releases. Purge delete is used if the purge option is set to true for the releases.

`--dry-run`
apply the dry-run option for helm commands.

`-e value`
file(s) to load environment variables from (default .env), may be supplied more than once.

`-f value`
desired state file name(s), may be supplied more than once to merge state files.

`--keep-untracked-releases`
keep releases that are managed by Helmsman and are no longer tracked in your desired state.

`--no-banner`
don't show the banner.

`--no-color`
don't use colors.

`--no-fancy`
don't display the banner and don't use colors.

`--no-ns`
don't create namespaces.

`--ns-override string`
override defined namespaces with this one.

`--show-diff`
show helm diff results. Can expose sensitive information.

`--skip-validation`
skip desired state validation.

`--suppress-diff-secrets`
don't show secrets in helm diff output.

`-v` show the version.

`--verbose`
show verbose execution logs.

`--kubeconfig`
path to the kubeconfig file to use for CLI requests.

`--target`
limit execution to specific app.

`--no-env-subst`
turn off environment substitution globally.
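
To tie several of these flags together, a typical invocation might look like the following sketch (the desired state file names are hypothetical):

```
helmsman --apply --debug --no-banner -f base.toml -f production.toml
```

This merges the two desired state files, prints execution logs, suppresses the banner, and applies the resulting plan.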
144 changes: 144 additions & 0 deletions docs/deployment_strategies.md
@@ -0,0 +1,144 @@
---
version: v1.1.0
---

# Deployment Strategies

This document describes the different strategies for using Helmsman to maintain your helm chart deployments on k8s clusters.

## Deploying 3rd party charts (apps) in a production cluster

Suppose you are deploying 3rd party charts (e.g. Jenkins, Jira, etc.) in your cluster. These applications can be deployed with Helmsman using a single desired state file. The desired state tells helmsman to deploy these apps into certain namespaces in a production cluster.

You can test 3rd party charts in designated namespaces (e.g., staging) within the same production cluster. This can also be defined in the same desired state file. Below is an example of a desired state file for deploying 3rd party apps in production and staging namespaces:

```toml
[metadata]
org = "example"

# using a minikube cluster
[settings]
kubeContext = "minikube"

[namespaces]
[namespaces.staging]
protected = false
[namespaces.production]
protected = true

[helmRepos]
stable = "https://kubernetes-charts.storage.googleapis.com"
incubator = "http://storage.googleapis.com/kubernetes-charts-incubator"

[apps]

[apps.jenkins]
name = "jenkins-prod" # should be unique across all apps
description = "production jenkins"
namespace = "production"
enabled = true
chart = "stable/jenkins"
version = "0.9.1" # chart version
valuesFiles = [ "../my-jenkins-common-values.yaml", "../my-jenkins-production-values.yaml" ]


[apps.artifactory]
name = "artifactory-prod" # should be unique across all apps
description = "production artifactory"
namespace = "production"
enabled = true
chart = "stable/artifactory"
version = "6.2.0" # chart version
valuesFile = "../my-artificatory-production-values.yaml"


# the jenkins release below is being tested in the staging namespace
[apps.jenkins-test]
name = "jenkins-test" # should be unique across all apps
description = "test release of jenkins, testing xyz feature"
namespace = "staging"
enabled = true
chart = "stable/jenkins"
version = "0.9.1" # chart version
valuesFiles = [ "../my-jenkins-common-values.yaml", "../my-jenkins-testing-values.yaml" ]
```

```yaml
metadata:
  org: "example"

# using a minikube cluster
settings:
  kubeContext: "minikube"

namespaces:
  staging:
    protected: false
  production:
    protected: true

helmRepos:
  stable: "https://kubernetes-charts.storage.googleapis.com"
  incubator: "http://storage.googleapis.com/kubernetes-charts-incubator"

apps:
  jenkins:
    name: "jenkins-prod" # should be unique across all apps
    description: "production jenkins"
    namespace: "production"
    enabled: true
    chart: "stable/jenkins"
    version: "0.9.1" # chart version
    valuesFile: "../my-jenkins-production-values.yaml"

  artifactory:
    name: "artifactory-prod" # should be unique across all apps
    description: "production artifactory"
    namespace: "production"
    enabled: true
    chart: "stable/artifactory"
    version: "6.2.0" # chart version
    valuesFile: "../my-artifactory-production-values.yaml"

  # the jenkins release below is being tested in the staging namespace
  jenkins-test:
    name: "jenkins-test" # should be unique across all apps
    description: "test release of jenkins, testing xyz feature"
    namespace: "staging"
    enabled: true
    chart: "stable/jenkins"
    version: "0.9.1" # chart version
    valuesFile: "../my-jenkins-testing-values.yaml"
```

You can split the desired state file into multiple files if your deployment pipeline requires that, but it is important to read the notes below on using multiple desired state files with one cluster.
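
For example, using the repeatable `-f` flag (see the notes below on merging desired state files), a split state could be applied in a single run; the file names here are hypothetical:

```
helmsman --apply -f common-infra.toml -f team-apps.toml
```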

## Working with multiple clusters

If you use multiple clusters for multiple purposes, you need at least one Helmsman desired state file for each cluster.


## Deploying your dev charts

If you are developing your own applications/services and packaging them in helm charts, it makes sense to automatically deploy these charts to a staging namespace or a dev cluster on every source code commit.

Often, you would have multiple apps developed in separate source code repositories but would like to test their deployment in the same cluster/namespace. In that case, Helmsman can be used [as part of your CI pipeline](how_to/deployments/ci.md) as described in the diagram below:

> as of v1.1.0, you can use the `ns-override` flag to force helmsman to deploy/move all apps into a given namespace. For example, you could use this flag in a CI job that gets triggered on commits to the dev branch to deploy all apps into the `staging` namespace; see the example below the diagram.

![multi-DSF](images/multi-DSF.png)
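
As a concrete sketch of that flag (the DSF name is hypothetical), such a CI job could run:

```
helmsman --apply -f dsf.toml --ns-override staging
```

All apps in `dsf.toml` would then be deployed into the `staging` namespace, regardless of the namespaces defined in the file.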

Each repository will have a Helmsman desired state file (DSF). But it is important to consider the notes below on using multiple desired state files with one cluster.

If you need supporting applications (charts) for your application (e.g., reverse proxies, DBs, the k8s dashboard, etc.), you can describe the desired state for these in a separate file which can live in another repository. Adding such a file in the pipeline where you create your cluster from code makes total "DevOps" sense.

## Notes on using multiple Helmsman desired state files with the same cluster

Helmsman works with a single desired state file at a time (starting from v1.5.0, you can pass multiple desired state files which get merged at runtime; see the [docs](how_to/misc/merge_desired_state_files.md)) and does not maintain any state anywhere, i.e. it has no context awareness about other desired state files used with the same cluster. For this reason, it is the user's responsibility to make sure that:

- no releases have the same name in different desired state files pointing to the same cluster. If such conflict exists, Helmsman will not raise any errors but that release would be subject to unexpected behavior.

- protected namespaces are defined protected in all the desired state files. Otherwise, namespace protection can be accidentally compromised if the same release name is used across multiple desired state files.

Also please refer to the [best practice](best_practice.md) document.
386 changes: 386 additions & 0 deletions docs/desired_state_specification.md

Large diffs are not rendered by default.

49 changes: 49 additions & 0 deletions docs/how_to/README.md
@@ -0,0 +1,49 @@

# How To Guides

This page contains a list of guides on how to use Helmsman.

- Connecting to Kubernetes clusters
- [Using an existing kube context](settings/existing_kube_context.md)
- [Using the current kube context](settings/current_kube_context.md)
- [Connecting with certificates](settings/creating_kube_context_with_certs.md)
- [Connecting with bearer token](settings/creating_kube_context_with_token.md)
- Defining Namespaces
- [Create namespaces](namespaces/create.md)
- [Label namespaces](namespaces/labels_and_annotations.md)
- [Set resource limits for namespaces](namespaces/limits.md)
- [Protecting namespaces](namespaces/protection.md)
- Deploying Helm Tiller
- [Using existing Tillers](tiller/existing.md)
- [Deploy shared Tiller in kube-system](tiller/shared.md)
- [Prevent Deploying Tiller in kube-system](tiller/prevent_tiller_in_kube_system.md)
- [Deploy Multiple Tillers with custom setup for each](tiller/multitenancy.md)
- [Deploy apps with specific Tillers](tiller/deploy_apps_with_specific_tiller.md)
- Defining Helm repositories
- [Using default helm repos](helm_repos/default.md)
- [Using private repos in Google GCS](helm_repos/gcs.md)
- [Using private repos in AWS S3](helm_repos/s3.md)
- [Using private repos with basic auth](helm_repos/basic_auth.md)
- [Using pre-configured repos](helm_repos/pre_configured.md)
- [Using local charts](helm_repos/local.md)
- Manipulating Apps
- [Basic operations](apps/basic.md)
- [Passing secrets from env vars](apps/secrets.md)
- [Use multiple values files for apps](apps/multiple_values_files.md)
- [Protect releases (apps)](apps/protection.md)
- [Moving releases (apps) across namespaces](apps/moving_across_namespaces.md)
- [Override defined namespaces](apps/override_namespaces.md)
- [Run helm tests for deployed releases (apps)](apps/helm_tests.md)
- [Define the order of apps operations](apps/order.md)
- [Delete all releases (apps)](apps/destroy.md)
- Running Helmsman in different environments
  - [Running Helmsman in CI](deployments/ci.md)
  - [Running Helmsman inside your k8s cluster](deployments/inside_k8s.md)
- Misc
- [Authenticating to cloud storage providers](misc/auth_to_storage_providers.md)
- [Protecting namespaces and releases](misc/protect_namespaces_and_releases.md)
- [Send slack notifications from Helmsman](misc/send_slack_notifications_from_helmsman.md)
- [Merge multiple desired state files](misc/merge_desired_state_files.md)
- [Limit Helmsman deployment to specific apps](misc/limit-deployment-to-specific-apps.md)
- [Multitenant clusters guide](misc/multitenant_clusters_guide.md)
- [Helmsman on Windows 10](misc/helmsman_on_windows10.md)
124 changes: 124 additions & 0 deletions docs/how_to/apps/basic.md
@@ -0,0 +1,124 @@
---
version: v1.5.0
---

# install releases

You can run helmsman with the [example.toml](https://github.com/Praqma/helmsman/blob/master/example.toml) or [example.yaml](https://github.com/Praqma/helmsman/blob/master/example.yaml) file.

```
$ helmsman --apply -f example.toml
2017/11/19 18:17:57 Parsed [[ example.toml ]] successfully and found [ 2 ] apps.
2017/11/19 18:17:59 WARN: I could not create namespace [staging ]. It already exists. I am skipping this.
2017/11/19 18:17:59 WARN: I could not create namespace [default ]. It already exists. I am skipping this.
2017/11/19 18:18:02 INFO: Executing the following plan ...
---------------
Ok, I have generated a plan for you at: 2017-11-19 18:17:59.347859706 +0100 CET m=+2.255430021
DECISION: release [ jenkins ] is not present in the current k8s context. Will install it in namespace [[ staging ]]
DECISION: release [ artifactory ] is not present in the current k8s context. Will install it in namespace [[ staging ]]
2017/11/19 18:18:02 INFO: attempting: -- installing release [ jenkins ] in namespace [[ staging ]]
2017/11/19 18:18:05 INFO: attempting: -- installing release [ artifactory ] in namespace [[ staging ]]
```

```
$ helm list --namespace staging
NAME REVISION UPDATED STATUS CHART NAMESPACE
artifactory 1 Sun Nov 19 18:18:06 2017 DEPLOYED artifactory-6.2.0 staging
jenkins 1 Sun Nov 19 18:18:03 2017 DEPLOYED jenkins-0.9.1 staging
```

# delete releases

You can then change your desired state, for example to disable the Jenkins release that was created above by setting `enabled = false` for it.
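A minimal sketch of that change (all other fields stay as in example.toml):

```toml
[apps.jenkins]
# ...other fields unchanged...
enabled = false # tells helmsman to delete this release
```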

Then run Helmsman again and it will detect that you want to delete Jenkins:

> Note: As of v1.4.0-rc, deleting the jenkins app entry in the desired state file WILL result in deleting the jenkins release. To prevent this, use the `--keep-untracked-releases` flag with your Helmsman command.
```
$ helmsman --apply -f example.toml
2017/11/19 18:28:27 Parsed [[ example.toml ]] successfully and found [ 2 ] apps.
2017/11/19 18:28:29 WARN: I could not create namespace [staging ]. It already exists. I am skipping this.
2017/11/19 18:28:29 WARN: I could not create namespace [default ]. It already exists. I am skipping this.
2017/11/19 18:29:01 INFO: Executing the following plan ...
---------------
Ok, I have generated a plan for you at: 2017-11-19 18:28:29.437061909 +0100 CET m=+1.987623555
DECISION: release [ jenkins ] is desired to be deleted . Planning this for you!
DECISION: release [ artifactory ] is desired to be upgraded. Planning this for you!
2017/11/19 18:29:01 INFO: attempting: -- deleting release [ jenkins ]
2017/11/19 18:29:11 INFO: attempting: -- upgrading release [ artifactory ]
```

```
$ helm list --namespace staging
NAME REVISION UPDATED STATUS CHART NAMESPACE
artifactory 2 Sun Nov 19 18:29:11 2017 DEPLOYED artifactory-6.2.0 staging
```

If you would like the release to be deleted along with its history, you can use the `purge` flag in your desired state file as follows:

> NOTE: purge-deleting a release means you can't roll it back.
```toml
...
[apps]

[apps.jenkins]
name = "jenkins"
description = "jenkins"
namespace = "staging"
enabled = false # this tells helmsman to delete it
chart = "stable/jenkins"
version = "0.9.1"
valuesFile = ""
purge = true # this means purge delete this release whenever it is required to be deleted
test = false

...
```

```yaml
...
apps:
jenkins:
name: "jenkins"
description: "jenkins"
namespace: "staging"
enabled: false # this tells helmsman to delete it
chart: "stable/jenkins"
version: "0.9.1"
valuesFile: ""
purge: true # this means purge delete this release whenever it is required to be deleted
test: false

...
```

# rollback releases

> Rollbacks in helm versions 2.8.2 and higher may not work due to a [bug](https://github.com/helm/helm/issues/3722).
Similarly, if you change `enabled` back to `true`, it will figure out that you would like to roll it back.

```
$ helmsman --apply -f example.toml
2017/11/19 18:30:41 Parsed [[ example.toml ]] successfully and found [ 2 ] apps.
2017/11/19 18:30:42 WARN: I could not create namespace [staging ]. It already exists. I am skipping this.
2017/11/19 18:30:43 WARN: I could not create namespace [default ]. It already exists. I am skipping this.
2017/11/19 18:30:49 INFO: Executing the following plan ...
---------------
Ok, I have generated a plan for you at: 2017-11-19 18:30:43.108693039 +0100 CET m=+1.978435517
DECISION: release [ jenkins ] is currently deleted and is desired to be rolledback to namespace [[ staging ]] . No problem!
DECISION: release [ artifactory ] is desired to be upgraded. Planning this for you!
2017/11/19 18:30:49 INFO: attempting: -- rolling back release [ jenkins ]
2017/11/19 18:30:50 INFO: attempting: -- upgrading release [ artifactory ]
```

# upgrade releases

Every time you run Helmsman (unless the release is [protected or deployed in a protected namespace](../misc/protect_namespaces_and_releases.md)), it will upgrade existing deployed releases to the version you specified in the desired state file. It also applies the `values.yaml` file you specify with each install/upgrade. This means that even when you don't change anything for a specific release, Helmsman will still upgrade it with the `values.yaml` file you provide, in case the file is new or its content has changed.

If you change the chart, the existing release will be deleted and a new one with the same name will be created using the new chart.


14 changes: 14 additions & 0 deletions docs/how_to/apps/destroy.md
@@ -0,0 +1,14 @@
---
version: v1.6.2
---

# delete all deployed releases

Helmsman allows you to delete all the helm releases that were deployed by Helmsman from a given desired state.

The `--destroy` flag will remove all deployed releases from a given desired state file (DSF). Note that this does not currently delete the namespaces nor the Kubernetes contexts created.

The deletion of releases respects the `purge` option in the desired state file, i.e. the destruction of release A is a purge delete only if `purge` is set to true for A.
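For example (a sketch, assuming a desired state file named example.toml):

```bash
# delete all releases tracked by example.toml
helmsman --destroy -f example.toml
```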

This was originally requested in issue [#88](https://github.com/Praqma/helmsman/issues/88).

45 changes: 45 additions & 0 deletions docs/how_to/apps/helm_tests.md
@@ -0,0 +1,45 @@
---
version: v1.3.0-rc
---

# test charts

You can specify that you would like a chart to be tested whenever it is installed for the first time using the `test` key as follows:

```toml
...
[apps]

[apps.jenkins]
name = "jenkins"
description = "jenkins"
namespace = "staging"
enabled = true
chart = "stable/jenkins"
version = "0.9.1"
valuesFile = ""
purge = false
test = true # setting this to true means you want the chart's tests to be run on this release when it is installed.

...

```

```yaml
...
apps:

jenkins:
name: "jenkins"
description: "jenkins"
namespace: "staging"
enabled: true
chart: "stable/jenkins"
version: "0.9.1"
valuesFile: ""
purge: false
test: true # setting this to true means you want the chart's tests to be run on this release when it is installed.

...

```
166 changes: 166 additions & 0 deletions docs/how_to/apps/moving_across_namespaces.md
@@ -0,0 +1,166 @@
---
version: v1.3.0-rc
---

# move charts across namespaces

If you have a workflow for testing a release first in the `staging` namespace then move it to the `production` namespace, Helmsman can help you.

> NOTE: If your chart uses a persistent volume, then you have to read the note on PVs below first.
```toml
...

[namespaces]
[namespaces.staging]
[namespaces.production]


[apps]

[apps.jenkins]
name = "jenkins"
description = "jenkins"
namespace = "staging" # this is where it is deployed
enabled = true
chart = "stable/jenkins"
version = "0.9.1"
valuesFile = ""
purge = false
test = true

...

```

```yaml
...

namespaces:
staging:
production:

apps:
jenkins:
name: "jenkins"
description: "jenkins"
namespace: "staging" # this is where it is deployed
enabled: true
chart: "stable/jenkins"
version: "0.9.1"
valuesFile: ""
purge: false
test: true

...

```

Then if you change the namespace key for jenkins:

```toml
...

[namespaces]
[namespaces.staging]
[namespaces.production]

[apps]

[apps.jenkins]
name = "jenkins"
description = "jenkins"
namespace = "production" # we want to move it to production
enabled = true
chart = "stable/jenkins"
version = "0.9.1"
valuesFile = ""
purge = false
test = true

...

```

```yaml
...

namespaces:
staging:
production:

apps:
jenkins:
name: "jenkins"
description: "jenkins"
namespace: "production" # we want to move it to production
enabled: true
chart: "stable/jenkins"
version: "0.9.1"
valuesFile: ""
purge: false
test: true

...

```

Helmsman will delete the jenkins release from the `staging` namespace and install it in the `production` namespace (as defined in the setup above).

## Note on Persistent Volumes

Helmsman does not automatically move PVCs across namespaces. You have to follow the steps below to retain your data when moving an app to a different namespace.

Persistent Volumes (PV) are accessed through Persistent Volume Claims (PVC). But **PVCs are namespaced objects** which means moving an application from one namespace to another will result in a new PVC created in the new namespace. The old PV -which possibly contains your application data- will still be mounted to the old PVC (the one in the old namespace) even if you have deleted your application helm release.

Now, the newly created PVC (in the new namespace) will not be able to mount to the old PV and instead it will mount to any other available one or (in the case of dynamic provisioning) will provision a new PV. This means the application in the new namespace does not have the old data. Don't panic, the old PV is still there and contains your old data.

### Mounting the old PV to the new PVC (in the new namespace)

1. You have to make sure the _Reclaim Policy_ of the old PV is set to **Retain**. For dynamically provisioned PVs, the default is Delete. To change it:

```
kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

2. Once your old helm release is deleted, the old PVC and PV are still there. Go ahead and delete the PVC

```
kubectl delete pvc <your-pvc-name> --namespace <the-old-namespace>
```
Since we changed the Reclaim Policy to Retain, the PV will stay around (with all your data).

3. The PV is now in the **Released** state but not yet available for mounting.

```
kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
...
pvc-f791ef92-01ab-11e8-8a7e-02412acf5adc 20Gi RWO Retain Released staging/myapp-persistent-storage-test-old-0 gp2 5m
```
Now you need to make it Available. For that, remove the `PV.Spec.ClaimRef` from the PV spec:

```
kubectl edit pv <pv-name>
# edit the file and save it
```
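Alternatively (a sketch; replace the PV name with your own), the `claimRef` can be cleared non-interactively with a patch:

```
kubectl patch pv <your-pv-name> -p '{"spec":{"claimRef": null}}'
```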

Now the PV should be in the **Available** state:

```
kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
...
pvc-f791ef92-01ab-11e8-8a7e-02412acf5adc 20Gi RWO Retain Available gp2 7m
```
4. Delete the new PVC (and its mounted PV if necessary), then delete your application pod(s) in the new namespace. Assuming you have a deployment/replication controller in place, the pod will be recreated in the new namespace and this time will mount to the old volume and your data will be once again available to your application.

> NOTE: if there are multiple PVs in the Available state that match the capacity and read access required by your application, then your application (in the new namespace) might mount to any of them. In this case, either ensure only the right PV is in the Available state, or make the PV available only to a specific PVC by pre-filling `PV.Spec.ClaimRef` with a pointer to that PVC. Leave the `PV.Spec.ClaimRef.UID` empty, as the PVC does not need to exist at this point and you don't know the PVC's UID. Such a PV can be bound only to the specified PVC.
Further details:
https://github.com/kubernetes/kubernetes/issues/48609
https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/
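A sketch of pre-filling the `claimRef` as described in the note above (the namespace and PVC name are placeholders; `uid` is intentionally left out):

```yaml
# partial PV spec: binds this PV to one specific (possibly not-yet-existing) PVC
spec:
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    namespace: production
    name: myapp-persistent-storage
```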

70 changes: 70 additions & 0 deletions docs/how_to/apps/multiple_values_files.md
@@ -0,0 +1,70 @@
---
version: v1.3.0-rc
---

# multiple value files

You can include multiple yaml value files to separate configuration for different environments.

```toml
...
[apps]

[apps.jenkins]
name = "jenkins-prod" # should be unique across all apps
description = "production jenkins"
namespace = "production"
enabled = true
chart = "stable/jenkins"
version = "0.9.1" # chart version
valuesFiles = [
"../my-jenkins-common-values.yaml",
"../my-jenkins-production-values.yaml"
]

# the jenkins release below is being tested in the staging namespace
[apps.jenkins-test]
name = "jenkins-test" # should be unique across all apps
description = "test release of jenkins, testing xyz feature"
namespace = "staging"
enabled = true
chart = "stable/jenkins"
version = "0.9.1" # chart version
valuesFiles = [
"../my-jenkins-common-values.yaml",
"../my-jenkins-testing-values.yaml"
]

...

```

```yaml
...
apps:

jenkins:
name: "jenkins-prod" # should be unique across all apps
description: "production jenkins"
namespace: "production"
enabled: true
chart: "stable/jenkins"
version: "0.9.1" # chart version
valuesFiles:
- "../my-jenkins-common-values.yaml"
- "../my-jenkins-production-values.yaml"

# the jenkins release below is being tested in the staging namespace
jenkins-test:
name: "jenkins-test" # should be unique across all apps
description: "test release of jenkins, testing xyz feature"
namespace: "staging"
enabled: true
chart: "stable/jenkins"
version: "0.9.1" # chart version
valuesFiles:
- "../my-jenkins-common-values.yaml"
- "../my-jenkins-testing-values.yaml"
...

```
84 changes: 84 additions & 0 deletions docs/how_to/apps/order.md
@@ -0,0 +1,84 @@
---
version: v1.3.0-rc
---

# Using the priority key for Apps

The `priority` key in an app's definition allows you to define the order in which app operations will be applied. This is useful if you have dependencies between your apps/services.

Priority is an optional field with a default value of 0 (zero). If set, it must be a negative value. The lower the value, the higher the priority.

If you want your apps to be deleted in the reverse of the order in which they were created, you can also set the optional `settings` flag `reverseDelete` to `true`.

## Example

```toml
[metadata]
org = "example.com"
description = "example Desired State File for demo purposes."

[settings]
kubeContext = "minikube"
reverseDelete = false # Optional flag to reverse the priorities when deleting

[namespaces]
[namespaces.staging]
protected = false
[namespaces.production]
protected = true

[helmRepos]
stable = "https://kubernetes-charts.storage.googleapis.com"
incubator = "http://storage.googleapis.com/kubernetes-charts-incubator"

[apps]
[apps.jenkins]
name = "jenkins" # should be unique across all apps
description = "jenkins"
namespace = "staging" # maps to the namespace as defined in environments above
enabled = true # change to false if you want to delete this app release [empty = false]
chart = "stable/jenkins" # changing the chart name means delete and recreate this chart
version = "0.14.3" # chart version
valuesFile = "" # leaving it empty uses the default chart values
priority= -2

[apps.jenkins1]
name = "jenkins1" # should be unique across all apps
description = "jenkins"
namespace = "staging" # maps to the namespace as defined in environments above
enabled = true # change to false if you want to delete this app release [empty = false]
chart = "stable/jenkins" # changing the chart name means delete and recreate this chart
version = "0.14.3" # chart version
valuesFile = "" # leaving it empty uses the default chart values


[apps.jenkins2]
name = "jenkins2" # should be unique across all apps
description = "jenkins"
namespace = "production" # maps to the namespace as defined in environments above
enabled = true # change to false if you want to delete this app release [empty = false]
chart = "stable/jenkins" # changing the chart name means delete and recreate this chart
version = "0.14.3" # chart version
valuesFile = "" # leaving it empty uses the default chart values
priority= -3

[apps.artifactory]
name = "artifactory" # should be unique across all apps
description = "artifactory"
namespace = "staging" # maps to the namespace as defined in environments above
enabled = true # change to false if you want to delete this app release [empty = false]
chart = "stable/artifactory" # changing the chart name means delete and recreate this chart
version = "7.0.6" # chart version
valuesFile = "" # leaving it empty uses the default chart values
priority= -2
```

The above example will generate the following plan:

```
DECISION: release [ jenkins2 ] is not present in the current k8s context. Will install it in namespace [[ production ]] -- priority: -3
DECISION: release [ jenkins ] is not present in the current k8s context. Will install it in namespace [[ staging ]] -- priority: -2
DECISION: release [ artifactory ] is not present in the current k8s context. Will install it in namespace [[ staging ]] -- priority: -2
DECISION: release [ jenkins1 ] is not present in the current k8s context. Will install it in namespace [[ staging ]] -- priority: 0
```
109 changes: 109 additions & 0 deletions docs/how_to/apps/override_namespaces.md
@@ -0,0 +1,109 @@
---
version: v1.3.0-rc
---

# Override defined namespaces from command line

If you use different release branches for releasing/managing your applications in your k8s clusters, you might want to use the same desired state but with different namespaces on each branch. Instead of duplicating the DSF across multiple branches and adjusting it, you can use the `--ns-override` command line flag when running helmsman.

This flag overrides all namespaces defined in your DSF with the single one you pass from command line.

# Example

dsf.toml
```toml
[metadata]
org = "example.com"
description = "example Desired State File for demo purposes."


[settings]
kubeContext = "minikube"

[namespaces]
[namespaces.staging]
protected = false
[namespaces.production]
protected = true

[helmRepos]
stable = "https://kubernetes-charts.storage.googleapis.com"
incubator = "http://storage.googleapis.com/kubernetes-charts-incubator"


[apps]

[apps.jenkins]
name = "jenkins" # should be unique across all apps
description = "jenkins"
namespace = "production" # maps to the namespace as defined in environments above
enabled = true # change to false if you want to delete this app release [empty = false]
chart = "stable/jenkins" # changing the chart name means delete and recreate this chart
version = "0.14.3" # chart version
valuesFile = "" # leaving it empty uses the default chart values

[apps.artifactory]
name = "artifactory" # should be unique across all apps
description = "artifactory"
namespace = "staging" # maps to the namespace as defined in environments above
enabled = true # change to false if you want to delete this app release [empty = false]
chart = "stable/artifactory" # changing the chart name means delete and recreate this chart
version = "7.0.6" # chart version
valuesFile = "" # leaving it empty uses the default chart values
```

dsf.yaml
```yaml
metadata:
org: "example.com"
description: "example Desired State File for demo purposes."


settings:
kubeContext: "minikube"

namespaces:
staging:
protected: false
production:
protected: true

helmRepos:
stable: "https://kubernetes-charts.storage.googleapis.com"
incubator: "http://storage.googleapis.com/kubernetes-charts-incubator"


apps:

jenkins:
name: "jenkins" # should be unique across all apps
description: "jenkins"
namespace: "production" # maps to the namespace as defined in environments above
enabled: true # change to false if you want to delete this app release [empty: false]
chart: "stable/jenkins" # changing the chart name means delete and recreate this chart
version: "0.14.3" # chart version
valuesFile: "" # leaving it empty uses the default chart values

artifactory:
name: "artifactory" # should be unique across all apps
description: "artifactory"
namespace: "staging" # maps to the namespace as defined in environments above
enabled: true # change to false if you want to delete this app release [empty: false]
chart: "stable/artifactory" # changing the chart name means delete and recreate this chart
version: "7.0.6" # chart version
valuesFile: "" # leaving it empty uses the default chart values
```
On the command line, we run:
```
helmsman -f dsf.toml --debug --ns-override testing
```

This will override the `staging` and `production` namespaces defined in `dsf.toml`:

```
2018/03/31 17:38:12 INFO: Plan generated at: Sat Mar 31 2018 17:37:57
DECISION: release [ jenkins ] is not present in the current k8s context. Will install it in namespace [[ testing ]] -- priority: 0
DECISION: release [ artifactory ] is not present in the current k8s context. Will install it in namespace [[ testing ]] -- priority: 0
```
31 changes: 31 additions & 0 deletions docs/how_to/apps/protection.md
@@ -0,0 +1,31 @@
---
version: v1.8.0
---

# Protecting apps (releases)

You can define apps to be protected using the `protected` field. Please check [this doc](../misc/protect_namespaces_and_releases.md) for details about what protection means and the difference between namespace-level and release-level protection.

Here is an example of a protected app:

```toml
[apps]

[apps.jenkins]
namespace = "staging"
enabled = true
chart = "stable/jenkins"
version = "0.9.1"
protected = true # defining this release to be protected.
```

```yaml
apps:

jenkins:
namespace: "staging"
enabled: true
chart: "stable/jenkins"
version: "0.9.1"
protected: true # defining this release to be protected.
```
77 changes: 77 additions & 0 deletions docs/how_to/apps/secrets.md
@@ -0,0 +1,77 @@
---
version: v1.6.0
---

# passing secrets from env variables

Starting from v0.1.3, Helmsman allows you to pass secrets and other user input to helm charts from environment variables as follows:

```toml
# ...
[apps]

[apps.jira]
name = "jira"
description = "jira"
namespace = "staging"
enabled = true
chart = "myrepo/jira"
version = "0.1.5"
valuesFile = "applications/jira-values.yaml"
purge = false
test = true
[apps.jira.set] # the format is [apps.<<release_name (as defined above)>>.set]
db_username= "$JIRA_DB_USERNAME" # pass any number of key/value pairs where the key is the input expected by the helm charts and the value is an env variable name starting with $
db_password= "$JIRA_DB_PASSWORD"
# ...

```

```yaml
# ...
apps:

jira:
name: "jira"
description: "jira"
namespace: "staging"
enabled: true
chart: "myrepo/jira"
version: "0.1.5"
valuesFile: "applications/jira-values.yaml"
purge: false
test: true
set:
db_username: "$JIRA_DB_USERNAME" # pass any number of key/value pairs where the key is the input expected by the helm charts and the value is an env variable name starting with $
db_password: "$JIRA_DB_PASSWORD"
# ...

```

These input variables will be passed to the chart when it is deployed/upgraded using helm's `--set <<var_name>>=<<var_value_read_from_env_var>>` option.
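Conceptually, the resulting helm invocation for the example above looks like the sketch below (illustrative only; Helmsman constructs the actual command internally):

```bash
# illustrative only -- Helmsman builds and runs the equivalent call for you
helm upgrade --install jira myrepo/jira --version 0.1.5 \
  -f applications/jira-values.yaml \
  --set db_username="$JIRA_DB_USERNAME" \
  --set db_password="$JIRA_DB_PASSWORD"
```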

# passing secrets from env files
You can also keep these environment variables in files. By default, Helmsman loads variables from a `.env` file, but you can also specify other files using the `-e` option:

```bash
helmsman -e myVars
```

Below are some examples of valid env files

```bash
# I am a comment and that is OK
SOME_VAR=someval
FOO=BAR # comments at line end are OK too
export BAR=BAZ
```
Or you can use a YAML(ish) style:

```yaml
FOO: bar
BAR: baz
```
# passing secrets using helm secrets plugin
You can also use the [helm secrets plugin](https://github.com/futuresimple/helm-secrets) to pass your secrets.
33 changes: 33 additions & 0 deletions docs/how_to/deployments/ci.md
@@ -0,0 +1,33 @@
---
version: v1.3.0-rc
---

# Run Helmsman in CI

You can run Helmsman as a job in your CI system using the [helmsman docker image](https://hub.docker.com/r/praqma/helmsman/).
The following example is a `config.yml` file for CircleCI but can be replicated for other CI systems.

```
version: 2
jobs:
deploy-apps:
docker:
- image: praqma/helmsman:v1.8.0
steps:
- checkout
- run:
name: Deploy Helm Packages using helmsman
command: helmsman --debug --apply -f helmsman-deployments.toml
workflows:
version: 2
build:
jobs:
- deploy-apps
```
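A roughly equivalent job for GitLab CI might look like the sketch below (the stage name and image tag are assumptions):

```yaml
# a sketch for GitLab CI; adjust the image tag and DSF name to your setup
deploy-apps:
  stage: deploy
  image:
    name: praqma/helmsman:v1.8.0
    entrypoint: [""]
  script:
    - helmsman --debug --apply -f helmsman-deployments.toml
```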

> IMPORTANT: If your CI build logs are publicly readable, don't use the `--verbose` flag as it logs any secrets being passed from env vars to the helm charts.
The `helmsman-deployments.toml` file is your desired state file, which should be version controlled in your git repo.
51 changes: 51 additions & 0 deletions docs/how_to/deployments/inside_k8s.md
@@ -0,0 +1,51 @@
---
version: v1.8.0
---

# Running Helmsman inside your k8s cluster

Helmsman can be deployed inside your k8s cluster and can talk to the k8s API using a `bearer token`.

See [connecting to your cluster with a bearer token](../settings/creating_kube_context_with_token.md) for more details.


Your desired state will look like:

```toml
[settings]
kubeContext = "test" # the name of the context to be created
bearerToken = true
clusterURI = "https://kubernetes.default"
```

```yaml
settings:
kubeContext: "test" # the name of the context to be created
bearerToken: true
clusterURI: "https://kubernetes.default"
```
To deploy Helmsman into a k8s cluster, a few steps are needed:
> The steps below assume the default namespace.
1. Create a k8s service account
```bash
$ kubectl create sa helmsman
```

2. Create a clusterrolebinding

```bash
$ kubectl create clusterrolebinding helmsman-cluster-admin --clusterrole=cluster-admin --serviceaccount=default:helmsman
```

3. Deploy helmsman

This command runs a long-lived Helmsman pod that you can `kubectl exec` into for an interactive session:

```bash
$ kubectl run helmsman --restart Never --image praqma/helmsman --serviceaccount=helmsman -- sleep 3600
$ kubectl exec -it helmsman -- sh
```
Alternatively, you can create a proper Kubernetes Deployment and mount a volume to it containing your desired state file(s), as sketched below.
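A minimal sketch of such a setup (the Deployment name, ConfigMap name, and DSF file name are assumptions; a ConfigMap named helmsman-config is assumed to hold your dsf.toml):

```yaml
# a sketch, not a definitive setup: uses the service account created in step 1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helmsman
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helmsman
  template:
    metadata:
      labels:
        app: helmsman
    spec:
      serviceAccountName: helmsman
      containers:
        - name: helmsman
          image: praqma/helmsman:v1.8.0
          command: ["/bin/sh", "-c"]
          # apply the desired state, then keep the pod alive for inspection
          args: ["helmsman --apply -f /config/dsf.toml && sleep 3600"]
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          configMap:
            name: helmsman-config
```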
27 changes: 27 additions & 0 deletions docs/how_to/helm_repos/basic_auth.md
@@ -0,0 +1,27 @@
---
version: v1.8.0
---

# Using private helm repos with basic auth

Helmsman allows you to use any private helm repo host which supports basic auth (e.g. Artifactory).

For such repos, you need to add the basic auth information in the repo URL as in the example below:

> Be aware that some special characters in the username or password can make the URL invalid.
```toml

[helmRepos]
# PASS is an env var containing the password
myPrivateRepo = "https://user:$PASS@myprivaterepo.org"

```

```yaml

helmRepos:
# PASS is an env var containing the password
myPrivateRepo: "https://user:$PASS@myprivaterepo.org"

```
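For example (a sketch; the password value and DSF name are placeholders):

```bash
# PASS is the env var referenced in the repo URL above
export PASS="my-artifactory-password"
helmsman --apply -f dsf.toml
```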
45 changes: 45 additions & 0 deletions docs/how_to/helm_repos/default.md
@@ -0,0 +1,45 @@
---
version: v1.8.0
---

# Default helm repos

By default, helm comes with two default repos: `stable` and `incubator`. These two DO NOT need to be defined explicitly in your desired state file (DSF). However, if you would like to configure a repo with the name `stable`, for example, you can override the default repo.

This example would have `stable` and `incubator` added by default and another `custom` repo defined explicitly:

```toml


[helmRepos]
custom = "https://mycustomrepo.org"

```

```yaml

helmRepos:
custom: "https://mycustomrepo.org"


```

This example would have `stable` overridden with a custom repo:

```toml
...

[helmRepos]
stable = "https://mycustomstablerepo.com"
...

```

```yaml
...

helmRepos:
stable: "https://mycustomstablerepo.com"
...

```
30 changes: 30 additions & 0 deletions docs/how_to/helm_repos/gcs.md
@@ -0,0 +1,30 @@
---
version: v1.8.0
---

# Using private helm repos in GCS

Helmsman allows you to use private charts from private repos. Currently only repos hosted in S3 or GCS buckets are supported for private repos.

You need to provide one of the following env variables:

- `GOOGLE_APPLICATION_CREDENTIALS` environment variable to contain the absolute path to your Google cloud credentials.json file.
- Or, `GCLOUD_CREDENTIALS` environment variable to contain the content of the credentials.json file.

Helmsman uses the [helm GCS](https://github.com/nouney/helm-gcs) plugin to work with GCS helm repos.

```toml


[helmRepos]
gcsRepo = "gs://myrepobucket/charts"

```

```yaml

helmRepos:
gcsRepo: "gs://myrepobucket/charts"


```
42 changes: 42 additions & 0 deletions docs/how_to/helm_repos/local.md
@@ -0,0 +1,42 @@
---
version: v1.3.0-rc
---

# use local helm charts

You can use your locally developed charts.

## From file system

If you use a file path (relative to the DSF, or absolute) for the `chart` attribute, helmsman will try to resolve that chart from the local file system. The chart on the local file system must have a version matching the version specified in the DSF.
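For example (a sketch; the path and version are assumptions):

```toml
[apps.myapp]
name = "myapp"
namespace = "staging"
enabled = true
chart = "./charts/myapp" # a local path, relative to the DSF
version = "0.1.0"        # must match the version in the chart's Chart.yaml
```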

## Served by Helm

You can serve them on localhost using helm's `serve` option.

```toml
...

[helmRepos]
stable = "https://kubernetes-charts.storage.googleapis.com"
incubator = "http://storage.googleapis.com/kubernetes-charts-incubator"
local = "http://127.0.0.1:8879"

...

```

```yaml
...

helmRepos:
stable: "https://kubernetes-charts.storage.googleapis.com"
incubator: "http://storage.googleapis.com/kubernetes-charts-incubator"
local: "http://127.0.0.1:8879"

...

```

21 changes: 21 additions & 0 deletions docs/how_to/helm_repos/pre_configured.md
@@ -0,0 +1,21 @@
---
version: v1.8.0
---

# Using pre-configured helm repos

The primary use-case is if you have some helm repositories that require HTTP basic authentication and you don't want to store the password in the desired state file or as an environment variable. In this case you can execute the following sequence to have those repositories configured:

Set up the helmsman configuration:

```toml
preconfiguredHelmRepos = [ "myrepo1", "myrepo2" ]
```

```yaml
preconfiguredHelmRepos:
- myrepo1
- myrepo2
```
> In this case you will need to manually execute `helm repo add myrepo1 <URL> --username= --password=` for each pre-configured repo before running Helmsman.
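For example (a sketch; URLs and credentials are placeholders):

```bash
helm repo add myrepo1 https://charts.example.com/repo1 --username "$REPO_USER" --password "$REPO_PASS"
helm repo add myrepo2 https://charts.example.com/repo2 --username "$REPO_USER" --password "$REPO_PASS"
```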
29 changes: 29 additions & 0 deletions docs/how_to/helm_repos/s3.md
@@ -0,0 +1,29 @@
---
version: v1.8.0
---

# Using private helm repos in S3

Helmsman allows you to use private charts from private repos. Currently only repos hosted in S3 or GCS buckets are supported for private repos.

You need to provide ALL of the following env variables:

- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`

Helmsman uses the [helm s3](https://github.com/hypnoglow/helm-s3) plugin to work with S3 helm repos.

```toml

[helmRepos]
myPrivateRepo = "s3://this-is-a-private-repo/charts"

```

```yaml

helmRepos:
myPrivateRepo: "s3://this-is-a-private-repo/charts"

```
29 changes: 29 additions & 0 deletions docs/how_to/misc/auth_to_storage_providers.md
@@ -0,0 +1,29 @@
---
version: v1.8.0
---

# Authenticating to cloud storage providers

Helmsman can read files like certificates for connecting to the cluster or TLS certificates for communicating with Tiller from some cloud storage providers, namely GCS, S3 and Azure blob storage. Below are the authentication requirements for each provider:

## AWS S3

You need to provide ALL of the following AWS env variables:

- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
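For example (placeholder values):

```bash
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_DEFAULT_REGION="eu-west-1"
```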

## Google GCS

You need to provide ONE of the following env variables:

- `GOOGLE_APPLICATION_CREDENTIALS` the absolute path to your Google cloud credentials.json file.
- Or, `GCLOUD_CREDENTIALS` the content of the credentials.json file.

## Microsoft Azure

You need to provide ALL of the following env variables:

- `AZURE_STORAGE_ACCOUNT`
- `AZURE_STORAGE_ACCESS_KEY`
19 changes: 19 additions & 0 deletions docs/how_to/misc/helmsman_on_windows10.md
@@ -0,0 +1,19 @@
---
version: v1.1.0
---

> This guide has not been thoroughly tested.
# Using Helmsman from a docker image on Windows 10

If you have Windows 10 with Docker installed, you **might** be able to run Helmsman in a linux container on Windows.

1. Switch to the Linux containers from the docker tray icon.
2. Configure your local kubectl on Windows to connect to your cluster.
3. Configure your desired state file to use the kubeContext only. i.e. no cluster connection settings.
4. Run the following command:

```
docker run --rm -it -v <your kubectl config location>:/root/.kube -v <your dsf.toml directory>:/tmp praqma/helmsman:v1.0.2 helmsman -f dsf.toml --debug --apply
```

47 changes: 47 additions & 0 deletions docs/how_to/misc/limit-deployment-to-specific-apps.md
@@ -0,0 +1,47 @@
---
version: v1.9.0
---

# limit execution to explicitly defined apps

Starting from v1.9.0, Helmsman allows you to pass the `-target` flag multiple times to specify multiple apps, limiting the apps Helmsman considers during that specific execution.
This makes it possible to deploy only specific applications among all those defined for an environment.

## An example

Given an environment defined with the following apps:

* example.yaml:
```yaml
...
apps:
jenkins:
namespace: "staging" # maps to the namespace as defined in namespaces above
enabled: true # change to false if you want to delete this app release empty: false:
chart: "stable/jenkins" # changing the chart name means delete and recreate this chart
version: "0.14.3" # chart version

artifactory:
namespace: "production" # maps to the namespace as defined in namespaces above
enabled: true # change to false if you want to delete this app release empty: false:
chart: "stable/artifactory" # changing the chart name means delete and recreate this chart
version: "7.0.6" # chart version
...
```

Running Helmsman with `-f example.yaml` would check the state of, and invoke deployment for, both the jenkins and artifactory applications.

With the `-target` flag in a command like:

```bash
$ helmsman -f example.yaml -target artifactory ...
```

one can run Helmsman against the environment defined in example.yaml limited to the `artifactory` app only. All other apps are ignored for that execution.

Multiple applications can be targeted by repeating `-target`, like:

```bash
$ helmsman -f example.yaml -target artifactory -target jenkins ...
```
44 changes: 44 additions & 0 deletions docs/how_to/misc/merge_desired_state_files.md
@@ -0,0 +1,44 @@
---
version: v1.5.0
---

# supply multiple desired state files

Starting from v1.5.0, Helmsman allows you to pass the `-f` flag multiple times to specify multiple desired state files
that should be merged. This allows us to do things like specify our non-environment-specific config in a `common.toml` file
and environment specific info in a `nonprod.toml` or `prod.toml` file. This process uses [this library](https://github.com/imdario/mergo)
to do the merging, and is subject to the limitations described there.

For example:

* common.toml:
```toml
[metadata]
org = "Organization Name"
maintainer = "project-owners@example.com"
description = "Project charts"

[settings]
serviceAccount = "tiller"
storageBackend = "secret"
...
```

* nonprod.toml:
```toml
[settings]
kubeContext = "cluster-nonprod"

[apps]
[apps.external-dns]
valuesFiles = ["./external-dns/values.yaml", "./external-dns/nonprod.yaml"]

[apps.cert-issuer]
valuesFile = "./cert-issuer/nonprod.yaml"
...
```

One can then run the following to use the merged config of the above files, with later files overriding values from earlier ones:
```bash
$ helmsman -f common.toml -f nonprod.toml ...
```
164 changes: 164 additions & 0 deletions docs/how_to/misc/multitenant_clusters_guide.md
@@ -0,0 +1,164 @@
---
version: v1.5.0
---

# Multitenant Clusters Guide

This guide helps you use Helmsman to secure your Helm deployment with service accounts and TLS.

> Check out Helm's [security guide](https://github.com/kubernetes/helm/blob/master/docs/securing_installation.md).
> These features are available starting from v1.2.0-rc.
## Deploying Tiller in multiple namespaces

In a multitenant cluster, it is a good idea to separate the Helm work of different users. You can achieve that by deploying Tiller in multiple namespaces. This is done in the `namespaces` section using the `installTiller` flag:

```toml

[namespaces]
[namespaces.staging]
installTiller = true
[namespaces.production]
installTiller = true
[namespaces.developer1]
installTiller = true
[namespaces.developer2]
installTiller = true

```

```yaml

namespaces:
staging:
installTiller: true
production:
installTiller: true
developer1:
installTiller: true
developer2:
installTiller: true

```

By default, Tiller will be deployed into `kube-system` even if you don't define kube-system in the namespaces section. To prevent deploying Tiller into `kube-system`, you need to explicitly add `kube-system` to your defined namespaces. See [preventing Tiller deployment in kube-system](../tiller/prevent_tiller_in_kube_system.md) for an example.

## Deploying Tiller with a service account

For K8S clusters with RBAC enabled, you will need to initialize Helm with a service account. Check [Helm's RBAC guide](https://github.com/kubernetes/helm/blob/master/docs/rbac.md).

Helmsman lets you deploy each of the Tillers with a different k8s service account, or with a default service account of your choice.

```toml

[settings]
# other options
serviceAccount = "default-tiller-sa"

[namespaces]
[namespaces.staging]
installTiller = true
tillerServiceAccount = "custom-sa"

[namespaces.production]
installTiller = true

[namespaces.developer1]
installTiller = true
tillerServiceAccount = "dev1-sa"

[namespaces.developer2]
installTiller = true
tillerServiceAccount = "dev2-sa"

```

```yaml

settings:
# other options
serviceAccount: "default-tiller-sa"

namespaces:
staging:
installTiller: true
tillerServiceAccount: "custom-sa"

production:
installTiller: true

developer1:
installTiller: true
tillerServiceAccount: "dev1-sa"

developer2:
installTiller: true
tillerServiceAccount: "dev2-sa"

```

If `tillerServiceAccount` is not defined, the following options are considered:

1. If the `serviceAccount` defined in the `settings` section exists in the namespace you want to deploy Tiller in, it will be used, else
2. Helmsman creates the service account in that namespace and binds it to a role. If the namespace is kube-system, the service account is bound to `cluster-admin` clusterrole. Otherwise, a new role called `helmsman-tiller` is created in that namespace and only gives access to that namespace.


In the example above, the namespaces `staging`, `developer1` and `developer2` will have Tiller deployed with different service accounts.
The `production` namespace, however, will be deployed using the `default-tiller-sa` service account defined in the `settings` section (assuming it exists in the production namespace). If it does not exist, Helmsman creates a new service account and binds it to a new role that only gives access to the `production` namespace.

## Deploying Tiller with TLS enabled

In a multitenant setting, it is also recommended to deploy Tiller with TLS enabled. This is also done in the `namespaces` section:

```toml

[namespaces]
[namespaces.kube-system]
installTiller = true
caCert = "secrets/kube-system/ca.cert.pem"
tillerCert = "secrets/kube-system/tiller.cert.pem"
tillerKey = "$TILLER_KEY" # where TILLER_KEY=secrets/kube-system/tiller.key.pem
clientCert = "gs://mybucket/mydir/helm.cert.pem"
clientKey = "s3://mybucket/mydir/helm.key.pem"

[namespaces.staging]
installTiller = true

[namespaces.production]
installTiller = true
tillerServiceAccount = "tiller-production"
caCert = "secrets/ca.cert.pem"
tillerCert = "secrets/tiller.cert.pem"
tillerKey = "$TILLER_KEY" # where TILLER_KEY=secrets/tiller.key.pem
clientCert = "gs://mybucket/mydir/helm.cert.pem"
clientKey = "s3://mybucket/mydir/helm.key.pem"

```

```yaml

namespaces:
kube-system:
installTiller: true
caCert: "secrets/kube-system/ca.cert.pem"
tillerCert: "secrets/kube-system/tiller.cert.pem"
tillerKey: "$TILLER_KEY" # where TILLER_KEY=secrets/kube-system/tiller.key.pem
clientCert: "gs://mybucket/mydir/helm.cert.pem"
clientKey: "s3://mybucket/mydir/helm.key.pem"

staging:
installTiller: true

production:
installTiller: true
tillerServiceAccount: "tiller-production"
caCert: "secrets/ca.cert.pem"
tillerCert: "secrets/tiller.cert.pem"
tillerKey: "$TILLER_KEY" # where TILLER_KEY=secrets/tiller.key.pem
clientCert: "gs://mybucket/mydir/helm.cert.pem"
clientKey: "s3://mybucket/mydir/helm.key.pem"

```


68 changes: 68 additions & 0 deletions docs/how_to/misc/protect_namespaces_and_releases.md
@@ -0,0 +1,68 @@
---
version: v1.3.0-rc
---

# Namespace and Release Protection

Since helmsman is used with version-controlled code and is often triggered as part of a CI pipeline, accidental mistakes can happen (e.g., disabling a production application and taking it out of service as a result of a mistaken change in the desired state file).

As of version v1.0.0, helmsman provides a fine-grained mechanism to protect releases/namespaces from accidental desired state file changes.

## Protection definition

- When a release (application) is protected, it CANNOT be:
    - deleted
    - upgraded
    - moved to another namespace

- A release CAN be moved into protection from a non-protected state.
- If a protected release needs to be updated/changed or even deleted, this is possible, but the protection has to be removed first (i.e. remove the namespace/release from the protected state). This is explained further below.

> A release is an instance (installation) of an application which has been packaged as a helm chart.
## Protection mechanism
Protection is supported in two forms:

- **Namespace-level Protection**: is defined at the namespace level. A namespace can be declaratively defined to be protected in the desired state file as in the example below:

```toml
[namespaces]
[namespaces.staging]
protected = false
[namespaces.production]
protected = true

```

- **Release-level Protection** is defined at the release level as in the example below:

```toml
[apps]

[apps.jenkins]
namespace = "staging"
enabled = true
chart = "stable/jenkins"
version = "0.9.1"
protected = true # defining this release to be protected.
```

```yaml
apps:

jenkins:
namespace: "staging"
enabled: true
chart: "stable/jenkins"
version: "0.9.1"
protected: true # defining this release to be protected.
```
> All releases in a protected namespace are automatically protected. Namespace protection has a higher priority than release-level protection.
## Important Notes
- You can combine both types of protection in your desired state file. The namespace-level protection always has a higher priority.
- Removing the protection from a namespace means all releases deployed in that namespace are no longer protected.
- We recommend using namespace-level protection for production namespace(s) and release-level protection for releases deployed in other namespaces.
- Release/namespace protection is only applied within a single desired state file. It is your responsibility to make sure that multiple desired state files (if used) do not conflict with each other (e.g., one defines a particular namespace as protected and another defines it as unprotected). If you use multiple desired state files with the same cluster, please refer to the [deployment strategies](../deployment_strategies.md) and [best practice](../best_practice.md) documentation.
23 changes: 23 additions & 0 deletions docs/how_to/misc/send_slack_notifications_from_helmsman.md
@@ -0,0 +1,23 @@
---
version: v1.5.0
---

# Slack notifications from Helmsman

Starting from v1.4.0-rc, Helmsman can send Slack notifications to a channel of your choice. To enable notifications, add a Slack webhook in the `settings` section of your desired state file. The webhook URL can be passed directly or via an environment variable.

```toml
[settings]
...
slackWebhook = "$MY_SLACK_WEBHOOK"
```

```yaml
settings:
...
slackWebhook: "$MY_SLACK_WEBHOOK"
```

## Getting a Slack Webhook URL

Follow the [slack guide](https://api.slack.com/incoming-webhooks) for generating a webhook URL.
27 changes: 27 additions & 0 deletions docs/how_to/namespaces/create.md
@@ -0,0 +1,27 @@
---
version: v1.8.0
---

# Create namespaces

You can define namespaces to be used in your cluster. If they don't exist, Helmsman will create them for you.

```toml
#...

[namespaces]
[namespaces.staging]
[namespaces.production]

#...
```

```yaml

namespaces:
staging:
production:

```

The example above will create two namespaces: staging and production.
36 changes: 36 additions & 0 deletions docs/how_to/namespaces/labels_and_annotations.md
@@ -0,0 +1,36 @@
---
version: v1.8.0
---

# Label & annotate namespaces

You can define namespaces to be used in your cluster. If they don't exist, Helmsman will create them for you. You can also set labels and annotations to apply to those namespaces.

```toml
#...

[namespaces]
[namespaces.staging]
[namespaces.staging.labels]
env = "staging"
[namespaces.production]
[namespaces.production.annotations]
"iam.amazonaws.com/role" = "dynamodb-reader"


#...
```

```yaml

namespaces:
staging:
labels:
env: "staging"
production:
annotations:
iam.amazonaws.com/role: "dynamodb-reader"

```

The above examples create two namespaces: staging and production. The staging namespace has one label, `env=staging`, while the production namespace has one annotation, `iam.amazonaws.com/role=dynamodb-reader`.
52 changes: 52 additions & 0 deletions docs/how_to/namespaces/limits.md
@@ -0,0 +1,52 @@
---
version: v1.8.0
---

# Define resource limits for namespaces

You can define namespaces to be used in your cluster. If they don't exist, Helmsman will create them for you. You can also define resource limits for each namespace.

You can read more about the `LimitRange` specification [here](https://docs.openshift.com/container-platform/3.11/dev_guide/compute_resources.html#dev-limit-ranges).

```toml
#...

[namespaces]
[namespaces.staging]
[[namespaces.staging.limits]]
type = "Container"
[namespaces.staging.limits.default]
cpu = "300m"
memory = "200Mi"
[namespaces.staging.limits.defaultRequest]
cpu = "200m"
memory = "100Mi"
[[namespaces.staging.limits]]
type = "Pod"
[namespaces.staging.limits.max]
memory = "300Mi"
[namespaces.production]

#...
```

```yaml

namespaces:
staging:
limits:
- type: Container
default:
cpu: "300m"
memory: "200Mi"
defaultRequest:
cpu: "200m"
memory: "100Mi"
- type: Pod
max:
memory: "300Mi"
production:

```

The example above will create two namespaces, staging and production, with resource limits defined for the staging namespace.
32 changes: 32 additions & 0 deletions docs/how_to/namespaces/protection.md
@@ -0,0 +1,32 @@
---
version: v1.8.0
---

# Protecting namespaces

You can define namespaces to be used in your cluster. If they don't exist, Helmsman will create them for you.

You can also define certain namespaces to be protected using the `protected` field. Please check [this doc](../misc/protect_namespaces_and_releases.md) for details about what protection means and the difference between namespace-level and release-level protection.


```toml
#...

[namespaces]
[namespaces.staging]
[namespaces.production]
protected = true

#...
```

```yaml

namespaces:
staging:
production:
protected: true

```

The example above will create two namespaces: staging and production. Helmsman treats the production namespace as a protected namespace.
42 changes: 42 additions & 0 deletions docs/how_to/settings/creating_kube_context_with_certs.md
@@ -0,0 +1,42 @@
---
version: v1.8.0
---

# Cluster connection -- creating the kube context with certificates

Helmsman can create the kube context for you (i.e. establish a connection to your cluster). This guide describes how it's done with certificates. If you want to use bearer tokens, check [this guide](creating_kube_context_with_token.md).

Creating the context with certs requires both the `settings` and `certificates` stanzas.

> If you use GCS, S3, or Azure blob storage for your certificates, you will need to provide means to authenticate to the respective cloud provider in the environment. See [authenticating to cloud storage providers](../misc/auth_to_storage_providers.md) for details.
```toml
[settings]
kubeContext = "mycontext" # the name of the context to be created
username = "admin" # the cluster user name
password = "$K8S_PASSWORD" # the name of an environment variable containing the k8s password
clusterURI = "${CLUSTER_URI}" # the name of an environment variable containing the cluster API endpoint
#clusterURI = "https://192.168.99.100:8443" # equivalent to the above

[certificates]
caClient = "gs://mybucket/client.crt" # GCS bucket path
caCrt = "s3://mybucket/ca.crt" # S3 bucket path
# caCrt = "az://myblobcontainer/ca.crt" # Azure blob object
caKey = "../ca.key" # valid local file relative path to the DSF file
```

```yaml
settings:
kubeContext: "mycontext" # the name of the context to be created
username: "admin" # the cluster user name
password: "$K8S_PASSWORD" # the name of an environment variable containing the k8s password
clusterURI: "${CLUSTER_URI}" # the name of an environment variable containing the cluster API endpoint
#clusterURI: "https://192.168.99.100:8443" # equivalent to the above

certificates:
caClient: "gs://mybucket/client.crt" # GCS bucket path
caCrt: "s3://mybucket/ca.crt" # S3 bucket path
#caCrt: "az://myblobcontainer/ca.crt" # Azure blob object
caKey: "../ca.key" # valid local file relative path to the DSF file

```
29 changes: 29 additions & 0 deletions docs/how_to/settings/creating_kube_context_with_token.md
@@ -0,0 +1,29 @@
---
version: v1.8.0
---

# Cluster connection -- creating the kube context with bearer tokens

Helmsman can create the kube context for you (i.e. establish a connection to your cluster). This guide describes how it's done with bearer tokens. If you want to use certificates, check [this guide](creating_kube_context_with_certs.md).

All you need to do is set `bearerToken` to true and set the `clusterURI` to point to your cluster API endpoint in the `settings` stanza.

> Note: Helmsman, and therefore helm, will only be able to do what the kubernetes service account (from which the token is taken) allows.
By default, Helmsman will look for a token in `/var/run/secrets/kubernetes.io/serviceaccount/token`. If you keep the token elsewhere, you can specify its path with `bearerTokenPath`.

```toml
[settings]
kubeContext = "test" # the name of the context to be created
bearerToken = true
clusterURI = "https://kubernetes.default"
# bearerTokenPath = "/path/to/custom/bearer/token/file"
```

```yaml
settings:
kubeContext: "test" # the name of the context to be created
bearerToken: true
clusterURI: "https://kubernetes.default"
# bearerTokenPath: "/path/to/custom/bearer/token/file"
```
10 changes: 10 additions & 0 deletions docs/how_to/settings/current_kube_context.md
@@ -0,0 +1,10 @@
---
version: v1.8.0
---

# Cluster connection -- Using the current kube context

Helmsman can use the currently configured kube context. In this case, the `kubeContext` field in the `settings` stanza needs to be left empty. If no other `settings` fields are needed, you can delete the whole `settings` stanza.


If you want Helmsman to create the kube context for you, see [this guide](creating_kube_context_with_certs.md) for more details on creating a context with certs or [here](creating_kube_context_with_token.md) for details on creating context with bearer token.
19 changes: 19 additions & 0 deletions docs/how_to/settings/existing_kube_context.md
@@ -0,0 +1,19 @@
---
version: v1.8.0
---

# Cluster connection -- Using an existing kube context

Helmsman can use any predefined kube context in the environment. All you need to do is set the context name in the `settings` stanza.

```toml
[settings]
kubeContext = "minikube"
```

```yaml
settings:
kubeContext: "minikube"
```
In the examples above, Helmsman tries to set the kube context to `minikube`. If that fails, it will attempt to create that kube context. Creating a kube context requires more information to be provided. See [this guide](creating_kube_context_with_certs.md) for more details on creating a context with certs, or [here](creating_kube_context_with_token.md) for details on creating a context with a bearer token.
36 changes: 36 additions & 0 deletions docs/how_to/tiller/deploy_apps_with_specific_tiller.md
@@ -0,0 +1,36 @@
---
version: v1.8.0
---

# Deploying apps (releases) with specific Tillers
You can tell Helmsman to deploy specific releases in a specific namespace; each release then uses the Tiller configured for that namespace:

```toml
#...
[apps]

[apps.jenkins]
namespace = "production" # pointing to the namespace defined above
enabled = true
chart = "stable/jenkins"
version = "0.9.1"


#...

```

```yaml
#...
apps:
jenkins:
namespace: "production" # pointing to the namespace defined above
enabled: true
chart: "stable/jenkins"
version: "0.9.1"

#...

```

In the above example, `Jenkins` will be deployed in the production namespace using the Tiller deployed there. If the production namespace was not configured to have its own Tiller, Jenkins will be deployed using the Tiller in `kube-system`.
21 changes: 21 additions & 0 deletions docs/how_to/tiller/existing.md
@@ -0,0 +1,21 @@
---
version: v1.8.0
---

## Using your existing Tillers (available from v1.6.0)

If you would like to use custom configuration when deploying your Tiller, you can do that before using Helmsman and then use the `useTiller` option in your namespace definition.

This will allow Helmsman to use your existing Tiller as it is. Note that you can't set both `useTiller` and `installTiller` to true at the same time.

```toml
[namespaces]
[namespaces.production]
useTiller = true
```

```yaml
namespaces:
production:
useTiller: true
```
56 changes: 56 additions & 0 deletions docs/how_to/tiller/multitenancy.md
@@ -0,0 +1,56 @@
---
version: v1.8.0
---

# Deploying multiple Tillers

You can deploy multiple Tillers in the cluster (max. one per namespace). In each namespace definition you can configure how Tiller is installed. The following options are available:
- with/without RBAC
- with/without TLS
- with a cluster-admin clusterrole, a namespace-limited role, or a pre-configured role.

> If you use GCS, S3, or Azure blob storage for your certificates, you will need to provide means to authenticate to the respective cloud provider in the environment. See [authenticating to cloud storage providers](../misc/auth_to_storage_providers.md) for details.

> More details about using Helmsman in a multitenant cluster can be found [here](../misc/multitenant_clusters_guide.md)
You can also use pre-configured Tillers in specific namespaces. In the example below, the desired state is to deploy Tiller in the `production` namespace with TLS and RBAC, and to use a pre-configured Tiller in the `dev` namespace. The `staging` namespace has no Tiller to be deployed or used, and Tiller is not deployed in `kube-system`.


```toml
[namespaces]
# to prevent deploying Tiller into kube-system, use the two lines below
[namespaces.kube-system]
installTiller = false # this line can be omitted since installTiller defaults to false
[namespaces.staging]
[namespaces.dev]
useTiller = true # use a Tiller which has been deployed in dev namespace
[namespaces.production]
installTiller = true
tillerServiceAccount = "tiller-production"
tillerRole = "cluster-admin"
caCert = "secrets/ca.cert.pem"
tillerCert = "az://myblobcontainer/tiller.cert.pem"
tillerKey = "$TILLER_KEY" # where TILLER_KEY=secrets/tiller.key.pem
clientCert = "gs://mybucket/mydir/helm.cert.pem"
clientKey = "s3://mybucket/mydir/helm.key.pem"
```

```yaml
namespaces:
# to prevent deploying Tiller into kube-system, use the two lines below
kube-system:
installTiller: false # this line can be omitted since installTiller defaults to false
staging: # no Tiller deployed or used here
dev:
useTiller: true # use a Tiller which has been deployed in dev namespace
production:
installTiller: true
tillerServiceAccount: "tiller-production"
tillerRole: "cluster-admin"
caCert: "secrets/ca.cert.pem"
tillerCert: "az://myblobcontainer/tiller.cert.pem"
tillerKey: "$TILLER_KEY" # where TILLER_KEY=secrets/tiller.key.pem
clientCert: "gs://mybucket/mydir/helm.cert.pem"
clientKey: "s3://mybucket/mydir/helm.key.pem"
```
18 changes: 18 additions & 0 deletions docs/how_to/tiller/prevent_tiller_in_kube_system.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,18 @@
---
version: v1.8.0
---

# Prevent Tiller Deployment in kube-system

By default, Tiller is deployed into `kube-system` even if you don't define `kube-system` in the namespaces section. To prevent this, simply add `kube-system` to your namespaces section. Since `installTiller` defaults to false, Helmsman will not deploy Tiller into `kube-system`.

```toml
[namespaces]
[namespaces.kube-system]
# installTiller = false # this line is not needed since the default is false, but can be added for human readability.
```
```yaml
namespaces:
kube-system:
#installTiller: false # this line is not needed since the default is false, but can be added for human readability.
```
83 changes: 83 additions & 0 deletions docs/how_to/tiller/shared.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,83 @@
---
version: v1.8.0
---

# Deploying a shared Tiller (available from v1.2.0)

You can instruct Helmsman to deploy Tiller into specific namespaces (with or without TLS).

> By default, Tiller is deployed into `kube-system` even if you don't define `kube-system` in the namespaces section. To prevent deploying Tiller into `kube-system`, see [preventing Tiller deployment in kube-system](prevent_tiller_in_kube_system.md).

## Without TLS

```toml
[namespaces]
[namespaces.production]
installTiller = true
```

```yaml
namespaces:
production:
installTiller: true
```
## With RBAC service account

You can specify an existing service account to be used for deploying Tiller. If that service account does not exist, Helmsman will attempt to create it. If `tillerRole` (e.g. cluster-admin) is specified, it will be bound to the newly created service account.

By default, Tiller deployed in kube-system is given the cluster-admin clusterrole. Tiller in other namespaces is given a custom role that grants access to that namespace only. The custom role is created using [this template](../../../data/role.yaml).

```toml
[namespaces]
[namespaces.production]
installTiller = true
tillerServiceAccount = "tiller-production"
tillerRole = "cluster-admin"
[namespaces.staging]
installTiller = true
  tillerServiceAccount = "tiller-staging"
```

```yaml
namespaces:
production:
installTiller: true
tillerServiceAccount: "tiller-production"
tillerRole: "cluster-admin"
staging:
installTiller: true
tillerServiceAccount: "tiller-staging"
```

The above example will create two service accounts: `tiller-production` and `tiller-staging`. Service account `tiller-production` will be bound to the cluster-admin clusterrole, while `tiller-staging` will be bound to a newly created role with access to the staging namespace only.

## With RBAC and TLS

You have to provide the TLS certificates as shown below. Certificates can be located either locally or in Google GCS, AWS S3, or Azure blob storage.

> If you use GCS, S3, or Azure blob storage for your certificates, you will need to provide means to authenticate to the respective cloud provider in the environment. See [authenticating to cloud storage providers](../auth_to_storage_providers.md) for details.

```toml
[namespaces]
[namespaces.production]
installTiller = true
tillerServiceAccount = "tiller-production"
caCert = "secrets/ca.cert.pem"
tillerCert = "secrets/tiller.cert.pem"
tillerKey = "$TILLER_KEY" # where TILLER_KEY=secrets/tiller.key.pem
clientCert = "gs://mybucket/mydir/helm.cert.pem"
clientKey = "s3://mybucket/mydir/helm.key.pem"
```

```yaml
namespaces:
production:
installTiller: true
tillerServiceAccount: "tiller-production"
caCert: "secrets/ca.cert.pem"
tillerCert: "secrets/tiller.cert.pem"
tillerKey: "$TILLER_KEY" # where TILLER_KEY=secrets/tiller.key.pem
clientCert: "gs://mybucket/mydir/helm.cert.pem"
clientKey: "s3://mybucket/mydir/helm.key.pem"
```
Binary file added docs/images/helmsman.png
Binary file added docs/images/multi-DSF.png
8 changes: 8 additions & 0 deletions docs/migrating_to_v1.4.0-rc.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,8 @@
# Migrating to Helmsman v1.4.0-rc or higher

This document highlights the main changes between Helmsman v1.4.0-rc and the older versions. While the changes are still backward-compatible, the behavior and the internals have changed. The list below highlights those changes:

- Helmsman v1.4.0-rc tracks the releases it manages by applying specific labels to their Helm state (stored in Helm's configured backend storage). For a smooth transition when upgrading to v1.4.0-rc, you should run `helmsman -f <your desired state file> --apply-labels` once. This will label all releases from your desired state with a `MANAGED-BY=Helmsman` label. The `--apply-labels` flag is safe to run multiple times.

- After each run, Helmsman v1.4.0-rc looks for, and deletes, any releases with the `MANAGED-BY=Helmsman` label which no longer exist in your desired state. This means that **deleting/commenting out an app from your desired state file will result in its deletion**. You can disable this cleanup by adding the `--keep-untracked-releases` flag to your Helmsman commands.

32 changes: 18 additions & 14 deletions docs/why_helmsman.md
Original file line number Diff line number Diff line change
@@ -1,38 +1,42 @@
# Why Helmsman?
---
version: v0.1.2
---

This document describes the reasoning and need behind the inception of Helmsman.
# Why Helmsman?

This document describes the reasoning and need behind the inception of Helmsman.

## Before Helm

Helmsman was created with continous deployment automation in mind.
When we started using k8s, we deployed applications on our cluster directly from k8s manifest files. Initially, we had a custom shell script added to our CI to deploy the k8s resources on the cluster. That script could only create the k8s resources from the manifest files. Soon we needed to have a more flexible way to dynamically create/delete those resources. We structured our git repo and used custom file names (adding enabled or disabled into file names) and updated the shell script accordingly. It did not take long before we realized that this does not scale and is difficult to maintain.
Helmsman was created with continuous deployment in mind.
When we started using k8s, we deployed applications on our cluster directly from k8s manifest files. Initially, we had a custom shell script added to our CI system to deploy the k8s resources on the cluster. That script could only create the k8s resources from the manifest files. Soon we needed to have a more flexible way to dynamically create/delete those resources. We structured our git repo and used custom file names (adding enabled or disabled into file names) and updated the shell script accordingly. It did not take long before we realized that this does not scale and is difficult to maintain.

![CI-pipeline-before-helm](https://github.com/praqma/helmsman/docs/images/CI-pipeline-before-helm.jpg )
![CI-pipeline-before-helm](images/CI-pipeline-before-helm.jpg)

## Helm to the rescue?

While looking for solutions for managing the growing number of k8s manifest files from a CI pipeline, we came to know about Helm and quickly releaized its potential. By creating Helm charts, we packaged related k8s manifests together into a single entity "a chart". This reduced the amount of files the CI script has to deal with. However, all the CI shell script could do is package a chart and install/upgrade it in our k8s cluster whenever a new commit is done into the chart's files in git.
While looking for solutions for managing the growing number of k8s manifest files from a CI pipeline, we came to know about Helm and quickly realized its potential. By creating Helm charts, we packaged related k8s manifests together into a single entity: "a chart". This reduced the number of files the CI script has to deal with. However, all the CI shell script could do is package a chart and install/upgrade it in our k8s cluster whenever a new commit is made to the chart's files in git.

![CI-pipeline-after-helm](https://github.com/praqma/helmsman/docs/images/CI-pipeline-after-helm.jpg)
![CI-pipeline-after-helm](images/CI-pipeline-after-helm.jpg)

But there were a few issues here:
1. Helm has more to it than package and install. Operations such as rollback, running chart tests, etc. are only doable from Helm's CLI client.
2. You have to keep updating your CI script everytime you add a chart to k8s.
2. You have to keep updating your CI script every time you add a chart to k8s.
3. What if you want to do the same on another cluster? You will have to replicate your CI pipeline and possibly change your CI script accordingly.

We have also decided to split the Helm charts development from the git repositories where they are used. This is simply to let us develop the charts independently from the projects where we used them and to allow us to reuse them in different projects.
We have also decided to split the Helm charts development from the git repositories where they are used. This is simply to let us develop the charts independently from the projects where we used them and to allow us to reuse them in different projects.

With all this in mind, we needed a flexible and dynamic solution that would let us deploy and manage Helm charts into multiple k8s clusters independently and with minimum human intervention. Such a solution should be generic enough to be reusable for many different projects/clusters. And this is where Helmsman was born!

## The Helmsman way

In English, [Helmsman](https://www.merriam-webster.com/dictionary/helmsman) is the person at the helm (in a ship). In k8s and Helm context, Helmsman sets at the Helm and maintains your Helm charts' lifecycle in your k8s cluster(s). Helmsman gets its directions to navigate from a [declarative file](desired_state_specification.md) maintained by the user (k8s admin).
In English, [Helmsman](https://www.merriam-webster.com/dictionary/helmsman) is the person at the helm (in a ship). In k8s and Helm context, Helmsman holds the Helm and maintains your Helm charts' lifecycle in your k8s cluster(s). Helmsman gets its directions to navigate from a [declarative file](desired_state_specification.md) maintained by the user (k8s admin).

> The Helmsman user does not need to know much about Helm and possibly even about k8s.
> Although knowledge about Helm and K8S is highly beneficial, such knowledge is NOT required to use Helmsman.
As the diagram below shows, we recommend having a_ desired state file_ for each k8s cluster you are managing. Along with that file, you would need to have any custom [values yaml files](https://docs.helm.sh/chart_template_guide/#values-files) for the Helm chart's you deploy on your k8s. Then you could configure your CI pipeline to use Helmsman docker container to process your desired state file whenever a commit is made to it.
As the diagram below shows, we recommend having a _desired state file_ for each k8s cluster you are managing. Along with that file, you would need to have any custom [values yaml files](https://docs.helm.sh/chart_template_guide/#values-files) for the Helm charts you deploy on your k8s. Then you could configure your CI pipeline to use the Helmsman docker image to process your desired state file whenever a commit is made to it.

![CI-pipeline-helmsman](https://github.com/praqma/helmsman/docs/images/CI-pipeline-helmsman.jpg)
![CI-pipeline-helmsman](images/CI-pipeline-helmsman.jpg)


> Helmsman can also be used manually as a binary tool on a machine which has Helm and Kubectl installed.
> Helmsman can also be used manually as a binary tool on a machine which has Helm and Kubectl installed.
125 changes: 92 additions & 33 deletions example.toml
Original file line number Diff line number Diff line change
@@ -1,58 +1,117 @@
# version: v1.6.2
# metadata -- add as many key/value pairs as you want
[metadata]
org = "orgX"
maintainer = "k8s-admin"
org = "example.com/${ORG_PATH}/"
maintainer = "k8s-admin (me@example.com)"
description = "example Desired State File for demo purposes."


# paths to the certificate for connecting to the cluster
# You can skip this if you use Helmsman on a machine with kubectl already connected to your k8s cluster.
[certifications]
caCrt = "ca.crt" # s3 bucket path
caKey = "ca.key" # Or, a path to the file location
# You can skip this if you use Helmsman on a machine with kubectl already connected to your k8s cluster.
# you have to use the exact key names here: 'caCrt' for the certificate, 'caKey' for the key, and 'caClient' for the client certificate
[certificates]
# caClient = "gs://mybucket/client.crt" # GCS bucket path
# caCrt = "s3://mybucket/ca.crt" # S3 bucket path
# caKey = "../ca.key" # valid local file relative path


[settings]
kubeContext = "minikube" # will try connect to this context first, if it does not exist, it will be created using the details below
# username = "admin"
# password = "passwd.passwd" # read it from a .passwd file which you should make it ignored by git.
# clusterURI = "https://192.168.99.100:8443" # cluster API
kubeContext = "minikube" # will try to connect to this context first; if it does not exist, it will be created using the details below
# username = "admin"
# password = "$K8S_PASSWORD" # the name of an environment variable containing the k8s password
clusterURI = "${SET_URI}" # the name of an environment variable containing the cluster API
# #clusterURI = "https://192.168.99.100:8443" # equivalent to the above
# serviceAccount = "tiller" # k8s serviceaccount. If it does not exist, it will be created.
# storageBackend = "secret" # default is configMap
# slackWebhook = "$slack" # or "your slack webhook url"
# reverseDelete = false # reverse the priorities on delete
#### to use bearer token:
# bearerToken = true
# clusterURI = "https://kubernetes.default"



# define your environments and thier k8s namespaces
# syntax: environment_name = "k8s_namespace"
# define your environments and their k8s namespaces
# syntax:
# [namespaces.<your namespace>] -- whitespace before this entry does not matter, use whatever indentation style you like
# protected = <true or false> -- default to false
[namespaces]
staging = "staging"
production = "default"
[namespaces.production]
protected = true
[[namespaces.production.limits]]
type = "Container"
[namespaces.production.limits.default]
cpu = "300m"
memory = "200Mi"
[namespaces.production.limits.defaultRequest]
cpu = "200m"
memory = "100Mi"
[[namespaces.production.limits]]
type = "Pod"
[namespaces.production.limits.max]
memory = "300Mi"
[namespaces.staging]
protected = false
installTiller = true
# tillerServiceAccount = "tiller-staging" # should already exist in the staging namespace
# tillerRole = "cluster-admin" # Give tiller full access to the cluster
# caCert = "secrets/ca.cert.pem" # or an env var, e.g. "$CA_CERT_PATH"
# tillerCert = "secrets/tiller.cert.pem" # or S3 bucket s3://mybucket/tiller.crt
# tillerKey = "secrets/tiller.key.pem" # or GCS bucket gs://mybucket/tiller.key
# clientCert = "secrets/helm.cert.pem"
# clientKey = "secrets/helm.key.pem"
[namespaces.staging.labels]
env = "staging"


# define any private/public helm charts repos you would like to get charts from
# syntax: repo_name = "repo_url"
# private repos hosted in s3 or GCS buckets are supported
[helmRepos]
stable = "https://kubernetes-charts.storage.googleapis.com"
incubator = "http://storage.googleapis.com/kubernetes-charts-incubator"
stable = "https://kubernetes-charts.storage.googleapis.com"
incubator = "http://storage.googleapis.com/kubernetes-charts-incubator"
# myS3repo = "s3://my-S3-private-repo/charts"
# myGCSrepo = "gs://my-GCS-private-repo/charts"
# custom = "https://user:pass@mycustomrepo.org"


# define the desired state of your applications' Helm charts
# each app entry contains the options shown below

[apps]

[apps.jenkins]
name = "jenkins" # should be unique across all apps
description = "jenkins"
env = "staging" # maps to the namespace as defined in environmetns above
enabled = true # change to false if you want to delete this app release [empty = flase]
chart = "stable/jenkins" # changing the chart name means delete and recreate this chart
version = "0.9.0"
# jenkins will be deployed using the Tiller in the staging namespace
[apps.jenkins]
namespace = "staging" # maps to the namespace as defined in namespaces above
enabled = true # change to false if you want to delete this app release [default = false]
chart = "stable/jenkins" # changing the chart name means delete and recreate this release
version = "0.14.3" # chart version
### Optional values below
name = "jenkins" # should be unique across all apps which are managed by the same Tiller
valuesFile = "" # leaving it empty uses the default chart values
#tillerNamespace = "kube-system" # which Tiller to use to deploy this release
purge = false # will only be considered when there is a delete operation
test = true # run the tests whenever this release is installed/upgraded/rolledback
test = false # run the tests when this release is installed for the first time only
protected = true
priority= -3
wait = true
# [apps.jenkins.setString] # values to override values from values.yaml with values from env vars or directly entered -- useful for passing secrets to charts
# AdminPassword="$JENKINS_PASSWORD" # $JENKINS_PASSWORD must exist in the environment
# MyLongIntVar="1234567890"
[apps.jenkins.set]
AdminUser="admin"


[apps.vault]
name = "vault" # should be unique across all apps
description = "vault"
env = "staging" # maps to the namespace as defined in environmetns above
enabled = true # change to false if you want to delete this app release [empty = flase]
chart = "incubator/vault" # don't change the chart name, create a new release instead
version = "0.1.0"
# artifactory will be deployed using the Tiller in the kube-system namespace
[apps.artifactory]
namespace = "production" # maps to the namespace as defined in namespaces above
enabled = true # change to false if you want to delete this app release [default = false]
chart = "stable/artifactory" # changing the chart name means delete and recreate this release
version = "7.0.6" # chart version
### Optional values below
name = "artifactory" # should be unique across all apps which are managed by the same Tiller
valuesFile = "" # leaving it empty uses the default chart values
purge = false # will only be considered when there is a delete operation
test = true # run the tests whenever this release is installed/upgraded/rolledback
test = false # run the tests when this release is installed for the first time only
priority= -2

# See https://github.com/Praqma/helmsman/blob/master/docs/desired_state_specification.md#apps for more apps options
112 changes: 112 additions & 0 deletions example.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,112 @@
# version: v1.5.0
# metadata -- add as many key/value pairs as you want
metadata:
org: "example.com/$ORG_PATH/"
maintainer: "k8s-admin (me@example.com)"
description: "example Desired State File for demo purposes."

# paths to the certificate for connecting to the cluster
# You can skip this if you use Helmsman on a machine with kubectl already connected to your k8s cluster.
# you have to use the exact key names here: 'caCrt' for the certificate, 'caKey' for the key, and 'caClient' for the client certificate
certificates:
#caClient: "gs://mybucket/client.crt" # GCS bucket path
#caCrt: "s3://mybucket/ca.crt" # S3 bucket path
#caKey: "../ca.key" # valid local file relative path

settings:
  kubeContext: "minikube" # will try to connect to this context first; if it does not exist, it will be created using the details below
#username: "admin"
#password: "$K8S_PASSWORD" # the name of an environment variable containing the k8s password
clusterURI: "$SET_URI" # the name of an environment variable containing the cluster API
#clusterURI: "https://192.168.99.100:8443" # equivalent to the above
  #serviceAccount: "foo" # k8s serviceaccount; must already be defined, otherwise a validation error will be thrown
storageBackend: "secret" # default is configMap
#slackWebhook: "$slack" # or your slack webhook url
#reverseDelete: false # reverse the priorities on delete
#### to use bearer token:
# bearerToken: true
# clusterURI: "https://kubernetes.default"

# define your environments and their k8s namespaces
namespaces:
production:
protected: true
limits:
- type: Container
default:
cpu: "300m"
memory: "200Mi"
defaultRequest:
cpu: "200m"
memory: "100Mi"
- type: Pod
max:
memory: "300Mi"
staging:
protected: false
installTiller: true
#tillerServiceAccount: "tiller-staging" # should already exist in the staging namespace
#tillerRole: "cluster-admin" # Give tiller full access to the cluster
#caCert: "secrets/ca.cert.pem" # or an env var, e.g. "$CA_CERT_PATH"
#tillerCert: "secrets/tiller.cert.pem" # or S3 bucket s3://mybucket/tiller.crt
#tillerKey: "secrets/tiller.key.pem" # or GCS bucket gs://mybucket/tiller.key
#clientCert: "secrets/helm.cert.pem"
#clientKey: "secrets/helm.key.pem"
labels:
env: "staging"


# define any private/public helm charts repos you would like to get charts from
# syntax: repo_name: "repo_url"
# private repos hosted in s3 or GCS buckets are supported
helmRepos:
stable: "https://kubernetes-charts.storage.googleapis.com"
incubator: "http://storage.googleapis.com/kubernetes-charts-incubator"
#myS3repo: "s3://my-S3-private-repo/charts"
#myGCSrepo: "gs://my-GCS-private-repo/charts"
#custom: "https://user:pass@mycustomrepo.org"

# define the desired state of your applications' Helm charts
# each app entry contains the options shown below


apps:

# jenkins will be deployed using the Tiller in the staging namespace
jenkins:
namespace: "staging" # maps to the namespace as defined in namespaces above
    enabled: true # change to false if you want to delete this app release [default = false]
chart: "stable/jenkins" # changing the chart name means delete and recreate this chart
version: "0.14.3" # chart version
### Optional values below
name: "jenkins" # should be unique across all apps
description: "jenkins"
valuesFile: "" # leaving it empty uses the default chart values
purge: false # will only be considered when there is a delete operation
test: false # run the tests when this release is installed for the first time only
protected: true
priority: -3
wait: true
#tillerNamespace: "kube-system" # which Tiller to use to deploy this release
    set: # values to override values from values.yaml with values from env vars -- useful for passing secrets to charts
AdminPassword: "$JENKINS_PASSWORD" # $JENKINS_PASSWORD must exist in the environment
AdminUser: "admin"
setString:
MyLongIntVar: "1234567890"


# artifactory will be deployed using the Tiller in the kube-system namespace
artifactory:
namespace: "production" # maps to the namespace as defined in namespaces above
    enabled: true # change to false if you want to delete this app release [default = false]
chart: "stable/artifactory" # changing the chart name means delete and recreate this chart
version: "7.0.6" # chart version
### Optional values below
name: "artifactory" # should be unique across all apps
description: "artifactory"
valuesFile: "" # leaving it empty uses the default chart values
purge: false # will only be considered when there is a delete operation
test: false # run the tests when this release is installed for the first time only
priority: -2

# See https://github.com/Praqma/helmsman/blob/master/docs/desired_state_specification.md#apps for more apps options
81 changes: 81 additions & 0 deletions gcs/gcs.go
Original file line number Diff line number Diff line change
@@ -0,0 +1,81 @@
package gcs

import (
"io"
"io/ioutil"
"log"
"os"

// Imports the Google Cloud Storage client package.
"cloud.google.com/go/storage"
"github.com/logrusorgru/aurora"
"golang.org/x/net/context"
)

// colorizer
var style aurora.Aurora

// Auth checks for GCLOUD_CREDENTIALS in the environment
// returns true if they exist and creates a json credentials file and sets the GOOGLE_APPLICATION_CREDENTIALS env var
// returns false if credentials are not found
func Auth() bool {
if os.Getenv("GOOGLE_APPLICATION_CREDENTIALS") != "" {
log.Println("INFO: GOOGLE_APPLICATION_CREDENTIALS is already set in the environment.")
return true
}

if os.Getenv("GCLOUD_CREDENTIALS") != "" {
credFile := "/tmp/gcloud_credentials.json"
// write the credentials content into a json file
d := []byte(os.Getenv("GCLOUD_CREDENTIALS"))
err := ioutil.WriteFile(credFile, d, 0644)

if err != nil {
log.Fatal(style.Bold(style.Red("ERROR: Cannot create credentials file: " + err.Error())))
}

os.Setenv("GOOGLE_APPLICATION_CREDENTIALS", credFile)
return true
}
return false
}
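
// Illustrative usage (hypothetical values): exporting the service account
// JSON before a run is enough for Auth to authenticate the GCS client, e.g.
//
//   export GCLOUD_CREDENTIALS='{"type": "service_account", ...}'
//
// Auth then writes that content to /tmp/gcloud_credentials.json and points
// GOOGLE_APPLICATION_CREDENTIALS at it, which the storage client picks up.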

// ReadFile reads a file from a storage bucket and saves it to a desired location.
func ReadFile(bucketName string, filename string, outFile string, noColors bool) {
style = aurora.NewAurora(!noColors)
if !Auth() {
log.Fatal(style.Bold(style.Red("ERROR: Failed to find the GCLOUD_CREDENTIALS env var. Please make sure it is set in the environment.")))
}

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
log.Fatal(style.Bold(style.Red("ERROR: Failed to configure Storage bucket: " + err.Error())))
}
storageBucket := client.Bucket(bucketName)

// Creates an Object handler for our file
obj := storageBucket.Object(filename)

// Read the object.
r, err := obj.NewReader(ctx)
if err != nil {
log.Fatal(style.Bold(style.Red("ERROR: Failed to create object reader: " + err.Error())))
}
defer r.Close()

// create output file and write to it
var writers []io.Writer
file, err := os.Create(outFile)
if err != nil {
log.Fatal(style.Bold(style.Red("ERROR: Failed to create an output file: " + err.Error())))
}
writers = append(writers, file)
defer file.Close()

dest := io.MultiWriter(writers...)
if _, err := io.Copy(dest, r); err != nil {
log.Fatal(style.Bold(style.Red("ERROR: Failed to read object content: " + err.Error())))
}
log.Println("INFO: Successfully downloaded " + filename + " from GCS as " + outFile)
}
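
// Usage sketch (hypothetical, not part of this file): a caller that resolved
// a certificate URI such as gs://mybucket/mydir/helm.cert.pem into bucket and
// object names could fetch it locally with:
//
//   gcs.ReadFile("mybucket", "mydir/helm.cert.pem", "/tmp/helm.cert.pem", false)
//
// ReadFile calls Auth internally, so GCLOUD_CREDENTIALS (or
// GOOGLE_APPLICATION_CREDENTIALS) must be set in the environment beforehand.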
552 changes: 487 additions & 65 deletions helm_helpers.go

Large diffs are not rendered by default.

107 changes: 107 additions & 0 deletions helm_helpers_test.go
Original file line number Diff line number Diff line change
@@ -0,0 +1,107 @@
package main

import (
"testing"
"time"
)

func Test_getReleaseChartVersion(t *testing.T) {
// version string = the trailing semver-valid substring after a hyphen in the chart string (pre-release and build metadata included)

type args struct {
r releaseState
}
tests := []struct {
name string
args args
want string
}{
{
name: "test case 1: there is a pre-release version",
args: args{
r: releaseState{
Revision: 0,
Updated: time.Now(),
Status: "",
Chart: "elasticsearch-1.3.0-1",
Namespace: "",
TillerNamespace: "",
},
},
want: "1.3.0-1",
}, {
name: "test case 2: normal case",
args: args{
r: releaseState{
Revision: 0,
Updated: time.Now(),
Status: "",
Chart: "elasticsearch-1.3.0",
Namespace: "",
TillerNamespace: "",
},
},
want: "1.3.0",
}, {
name: "test case 3: there is a hypen in the name",
args: args{
r: releaseState{
Revision: 0,
Updated: time.Now(),
Status: "",
Chart: "elastic-search-1.3.0",
Namespace: "",
TillerNamespace: "",
},
},
want: "1.3.0",
}, {
name: "test case 4: there is meta information",
args: args{
r: releaseState{
Revision: 0,
Updated: time.Now(),
Status: "",
Chart: "elastic-search-1.3.0+meta.info",
Namespace: "",
TillerNamespace: "",
},
},
want: "1.3.0+meta.info",
}, {
name: "test case 5: an invalid string",
args: args{
r: releaseState{
Revision: 0,
Updated: time.Now(),
Status: "",
Chart: "foo",
Namespace: "",
TillerNamespace: "",
},
},
want: "",
}, {
name: "test case 6: version includes v",
args: args{
r: releaseState{
Revision: 0,
Updated: time.Now(),
Status: "",
Chart: "cert-manager-v0.5.2",
Namespace: "",
TillerNamespace: "",
},
},
want: "v0.5.2",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Log(tt.want)
if got := getReleaseChartVersion(tt.args.r); got != tt.want {
				t.Errorf("getReleaseChartVersion() = %v, want %v", got, tt.want)
}
})
}
}
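
// A plausible sketch of the function under test, consistent with the cases
// above (hypothetical -- the real getReleaseChartVersion lives in
// helm_helpers.go, whose diff is not rendered here). It would need the
// `regexp` package imported:
//
// var chartVersionPattern = regexp.MustCompile(`-(v?\d+\.\d+\.\d+(?:[-+][0-9A-Za-z.+-]+)?)$`)
//
// func getReleaseChartVersionSketch(r releaseState) string {
// 	if m := chartVersionPattern.FindStringSubmatch(r.Chart); m != nil {
// 		return m[1]
// 	}
// 	return ""
// }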
295 changes: 159 additions & 136 deletions init.go
Original file line number Diff line number Diff line change
@@ -2,212 +2,235 @@ package main

import (
"flag"
"fmt"
"log"
"os"
"strings"

"github.com/imdario/mergo"
"github.com/joho/godotenv"
"github.com/logrusorgru/aurora"
)

// colorizer
var style aurora.Aurora

const (
banner = " _ _ \n" +
"| | | | \n" +
"| |__ ___| |_ __ ___ ___ _ __ ___ __ _ _ __\n" +
"| '_ \\ / _ \\ | '_ ` _ \\/ __| '_ ` _ \\ / _` | '_ \\ \n" +
"| | | | __/ | | | | | \\__ \\ | | | | | (_| | | | | \n" +
"|_| |_|\\___|_|_| |_| |_|___/_| |_| |_|\\__,_|_| |_|"
slogan = "A Helm-Charts-as-Code tool.\n\n"
)

func printUsage() {
log.Println(banner + "\n")
log.Println("Helmsman version: " + appVersion)
log.Println("Helmsman is a Helm Charts as Code tool which allows you to automate the deployment/management of your Helm charts.")
log.Println()
log.Println("Usage: helmsman [options]")
flag.PrintDefaults()
}

// init is executed after all package vars are initialized [before the main() func in this case].
// It checks if Helm and Kubectl exist and configures: the connection to the k8s cluster, helm repos, namespaces, etc.
func init() {
// parsing command line flags
flag.StringVar(&file, "f", "", "desired state file name")
//parsing command line flags
flag.Var(&files, "f", "desired state file name(s), may be supplied more than once to merge state files")
flag.Var(&envFiles, "e", "file(s) to load environment variables from (default .env), may be supplied more than once")
flag.StringVar(&kubeconfig, "kubeconfig", "", "path to the kubeconfig file to use for CLI requests")
flag.BoolVar(&apply, "apply", false, "apply the plan directly")
flag.BoolVar(&debug, "debug", false, "show the execution logs")
flag.BoolVar(&help, "help", false, "show Helmsman help")

flag.BoolVar(&dryRun, "dry-run", false, "apply the dry-run option for helm commands.")
flag.Var(&target, "target", "limit execution to specific app.")
flag.BoolVar(&destroy, "destroy", false, "delete all deployed releases. Purge delete is used if the purge option is set to true for the releases.")
flag.BoolVar(&v, "v", false, "show the version")
flag.BoolVar(&verbose, "verbose", false, "show verbose execution logs")
flag.BoolVar(&noBanner, "no-banner", false, "don't show the banner")
flag.BoolVar(&noColors, "no-color", false, "don't use colors")
flag.BoolVar(&noFancy, "no-fancy", false, "don't display the banner and don't use colors")
flag.BoolVar(&noNs, "no-ns", false, "don't create namespaces")
flag.StringVar(&nsOverride, "ns-override", "", "override defined namespaces with this one")
flag.BoolVar(&skipValidation, "skip-validation", false, "skip desired state validation")
flag.BoolVar(&applyLabels, "apply-labels", false, "apply Helmsman labels to Helm state for all defined apps.")
flag.BoolVar(&keepUntrackedReleases, "keep-untracked-releases", false, "keep releases that are managed by Helmsman and are no longer tracked in your desired state.")
flag.BoolVar(&showDiff, "show-diff", false, "show helm diff results. Can expose sensitive information.")
flag.BoolVar(&suppressDiffSecrets, "suppress-diff-secrets", false, "don't show secrets in helm diff output.")
flag.BoolVar(&noEnvSubst, "no-env-subst", false, "turn off environment substitution globally")

log.SetOutput(os.Stdout)

flag.Usage = printUsage
flag.Parse()

if help {
printHelp()
os.Exit(0)
if noFancy {
noColors = true
noBanner = true
}

if !toolExists("helm") {
log.Fatal("ERROR: helm is not installed/configured correctly. Aborting!")
os.Exit(1)
style = aurora.NewAurora(!noColors)

if !noBanner {
fmt.Println(banner + " version: " + appVersion + "\n" + slogan)
}

if !toolExists("kubectl") {
log.Fatal("ERROR: kubectl is not installed/configured correctly. Aborting!")
os.Exit(1)
if dryRun && apply {
logError("ERROR: --apply and --dry-run can't be used together.")
}

// after the init() func is run, read the TOML desired state file
fromTOML(file, &s)
if destroy && apply {
logError("ERROR: --destroy and --apply can't be used together.")
}

// validate the desired state content
s.validate() // syntax validation
helmVersion = strings.TrimSpace(strings.SplitN(getHelmClientVersion(), ": ", 2)[1])
kubectlVersion = strings.TrimSpace(strings.SplitN(getKubectlClientVersion(), ": ", 2)[1])

// set the kubecontext to be used Or create it if it does not exist
if !setKubeContext(s.Settings["kubeContext"]) {
if !createContext() {
os.Exit(1)
}
if verbose {
logVersions()
}

// add repos -- fails if they are not valid
if !addHelmRepos(s.HelmRepos) {
os.Exit(1)
if v {
fmt.Println("Helmsman version: " + appVersion)
os.Exit(0)
}

// validate charts-versions exist in supllied repos
if !validateReleaseCharts(s.Apps) {
os.Exit(1)
if len(files) == 0 {
log.Println("INFO: No desired state files provided.")
os.Exit(0)
}

// add/validate namespaces
addNamespaces(s.Namespaces)

}

// toolExists returns true if the tool is present in the environment and false otherwise.
// It takes as input the tool's command to check if it is recognizable or not. e.g. helm or kubectl
func toolExists(tool string) bool {
cmd := command{
Cmd: "bash",
Args: []string{"-c", tool},
Description: "validating that " + tool + " is installed.",
if kubeconfig != "" {
os.Setenv("KUBECONFIG", kubeconfig)
}

exitCode, _ := cmd.exec(debug)
if !toolExists("kubectl") {
logError("ERROR: kubectl is not installed/configured correctly. Aborting!")
}

if exitCode != 0 {
return false
if !toolExists("helm") {
logError("ERROR: helm is not installed/configured correctly. Aborting!")
}

return true
}
if !helmPluginExists("diff") {
logError("ERROR: helm diff plugin is not installed/configured correctly. Aborting!")
}

// addNamespaces creates a set of namespaces in your k8s cluster.
// If a namespace with the same name exsts, it will skip it.
func addNamespaces(namespaces map[string]string) {
for _, namespace := range namespaces {
cmd := command{
Cmd: "bash",
Args: []string{"-c", "kubectl create namespace " + namespace},
Description: "creating namespace " + namespace,
// read the env file
if len(envFiles) == 0 {
if _, err := os.Stat(".env"); err == nil {
err = godotenv.Load()
if err != nil {
logError("Error loading .env file")
}
}
}

exitCode, _ := cmd.exec(debug)

if exitCode != 0 {
log.Println("WARN: I could not create namespace [" +
namespace + " ]. It already exists. I am skipping this.")
for _, e := range envFiles {
err := godotenv.Load(e)
if err != nil {
logError("Error loading " + e + " env file")
}
}
}

// validateReleaseCharts validates if the charts defined in a release are valid.
// Valid charts are the ones that can be found in the defined repos.
// This function uses Helm search to verify if the chart can be found or not.
func validateReleaseCharts(apps map[string]release) bool {
// wipe & create a temporary directory
os.RemoveAll(tempFilesDir)
_ = os.MkdirAll(tempFilesDir, 0755)

for app, r := range apps {
cmd := command{
Cmd: "bash",
Args: []string{"-c", "helm search " + r.Chart + " --version " + r.Version},
Description: "validating chart " + r.Chart + "-" + r.Version + " is available in the used repos.",
// read the TOML/YAML desired state file
var fileState state
for _, f := range files {
result, msg := fromFile(f, &fileState)
if result {
log.Printf(msg)
} else {
logError(msg)
}

exitCode, _ := cmd.exec(debug)
// Merge Apps that already existed in the state
for appName, app := range fileState.Apps {
if _, ok := s.Apps[appName]; ok {
if err := mergo.Merge(s.Apps[appName], app, mergo.WithAppendSlice, mergo.WithOverride); err != nil {
logError("Failed to merge " + appName + " from desired state file" + f)
}
}
}

if exitCode != 0 {
log.Fatal("ERROR: chart "+r.Chart+"-"+r.Version+" is specified for ",
"app ["+app+"] but is not found in the provided repos.")
return false
// Merge the remaining Apps
if err := mergo.Merge(&s.Apps, &fileState.Apps); err != nil {
logError("Failed to merge desired state file" + f)
}
// All the apps are already merged, make fileState.Apps empty to avoid conflicts in the final merge
fileState.Apps = make(map[string]*release)

if err := mergo.Merge(&s, &fileState, mergo.WithAppendSlice, mergo.WithOverride); err != nil {
logError("Failed to merge desired state file" + f)
}
}
return true
}

// addHelmRepos adds repositories to Helm if they don't exist already.
// Helm does not mind if a repo with the same name exists. It treats it as an update.
func addHelmRepos(repos map[string]string) bool {
if debug {
s.print()
}

for repoName, url := range repos {
cmd := command{
Cmd: "bash",
Args: []string{"-c", "helm repo add " + repoName + " " + url},
Description: "adding repo " + repoName,
if !skipValidation {
// validate the desired state content
if len(files) > 0 {
if result, msg := s.validate(); !result { // syntax validation
logError(msg)
}
}
} else {
log.Println("INFO: desired state validation is skipped.")
}

exitCode, _ := cmd.exec(debug)

if exitCode != 0 {
log.Fatal("ERROR: there has been a problem while adding repo [" +
repoName + "].")
return false
if applyLabels {
for _, r := range s.Apps {
labelResource(r)
}
}

if len(target) > 0 {
targetMap = map[string]bool{}
for _, v := range target {
targetMap[v] = true
}
}

return true
}

// setKubeContext sets your kubectl context to the one specified in the desired state file.
// It returns false if it fails to set the context. This means the context deos not exist.
func setKubeContext(context string) bool {
// toolExists returns true if the tool is present in the environment and false otherwise.
// It takes as input the tool's command to check if it is recognizable or not. e.g. helm or kubectl
func toolExists(tool string) bool {
cmd := command{
Cmd: "bash",
Args: []string{"-c", "kubectl config use-context " + context},
Description: "setting kubectl context to [ " + context + " ]",
Args: []string{"-c", tool},
Description: "validating that " + tool + " is installed.",
}

exitCode, _ := cmd.exec(debug)
exitCode, _ := cmd.exec(debug, false)

if exitCode != 0 {
log.Println("INFO: KubeContext: " + context + " does not exist. I will try to create it.")
return false
}

return true
}

// createContext creates a context -connecting to a k8s cluster- in kubectl config.
// It returns true if successful, false otherwise
func createContext() bool {

// helmPluginExists returns true if the plugin is present in the environment and false otherwise.
// It takes as input the plugin's name to check if it is recognizable or not. e.g. diff
func helmPluginExists(plugin string) bool {
cmd := command{
Cmd: "bash",
Args: []string{"-c", "kubectl config set-credentials " + s.Settings["username"] + " --username=" + s.Settings["username"] +
" --password=" + readFile(s.Settings["password"]) + " --client-key=" + s.Certifications["caKey"]},
Description: "creating kubectl context - part 1",
}

exitCode, _ := cmd.exec(debug)

if exitCode != 0 {
log.Fatal("ERROR: failed to create context [ " + s.Settings["kubeContext"] + " ].")
return false
}

cmd = command{
Cmd: "bash",
Args: []string{"-c", "kubectl config set-cluster " + s.Settings["kubeContext"] + " --server=" + s.Settings["clusterURI"] +
" --certificate-authority=" + s.Certifications["caCrt"]},
Description: "creating kubectl context - part 2",
}

exitCode, _ = cmd.exec(debug)

if exitCode != 0 {
log.Fatal("ERROR: failed to create context [ " + s.Settings["kubeContext"] + " ].")
return false
}

cmd = command{
Cmd: "bash",
Args: []string{"-c", "kubectl config set-context " + s.Settings["kubeContext"] + " --cluster=" + s.Settings["kubeContext"] +
" --user=" + s.Settings["username"] + " --password=" + readFile(s.Settings["password"])},
Description: "creating kubectl context - part 3",
Cmd: "bash",
Args: []string{"-c", "helm plugin list"},
Description: "validating that " + plugin + " is installed.",
}

exitCode, _ = cmd.exec(debug)
exitCode, result := cmd.exec(debug, false)

if exitCode != 0 {
log.Fatal("ERROR: failed to create context [ " + s.Settings["kubeContext"] + " ].")
return false
}

return setKubeContext(s.Settings["kubeContext"])
return strings.Contains(result, plugin)
}
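
// Illustrative note: `helm plugin list` prints a table of installed plugins
// (name, version, description), so the substring check above matches output
// such as (version shown is hypothetical):
//
//   NAME    VERSION  DESCRIPTION
//   diff    2.11.0   Preview helm upgrade changes as a diff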
134 changes: 134 additions & 0 deletions init_test.go
Original file line number Diff line number Diff line change
@@ -0,0 +1,134 @@
package main

import "testing"

func Test_toolExists(t *testing.T) {
type args struct {
tool string
}
tests := []struct {
name string
args args
want bool
}{
{
name: "test case 1 -- checking helm exists.",
args: args{
tool: "helm",
},
want: true,
}, {
name: "test case 2 -- checking kubectl exists.",
args: args{
tool: "kubectl",
},
want: true,
}, {
name: "test case 3 -- checking ipconfig exists.",
args: args{
tool: "ipconfig",
},
want: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := toolExists(tt.args.tool); got != tt.want {
t.Errorf("toolExists() = %v, want %v", got, tt.want)
}
})
}
}

// func Test_addNamespaces(t *testing.T) {
// type args struct {
// namespaces map[string]string
// }
// tests := []struct {
// name string
// args args
// }{
// // TODO: Add test cases.
// }
// for _, tt := range tests {
// t.Run(tt.name, func(t *testing.T) {
// addNamespaces(tt.args.namespaces)
// })
// }
// }

// func Test_validateReleaseCharts(t *testing.T) {
// type args struct {
// apps map[string]release
// }
// tests := []struct {
// name string
// args args
// want bool
// }{
// // TODO: Add test cases.
// }
// for _, tt := range tests {
// t.Run(tt.name, func(t *testing.T) {
// if got := validateReleaseCharts(tt.args.apps); got != tt.want {
// t.Errorf("validateReleaseCharts() = %v, want %v", got, tt.want)
// }
// })
// }
// }

// func Test_addHelmRepos(t *testing.T) {
// type args struct {
// repos map[string]string
// }
// tests := []struct {
// name string
// args args
// want bool
// }{
// // TODO: Add test cases.
// }
// for _, tt := range tests {
// t.Run(tt.name, func(t *testing.T) {
// if got := addHelmRepos(tt.args.repos); got != tt.want {
// t.Errorf("addHelmRepos() = %v, want %v", got, tt.want)
// }
// })
// }
// }

// func Test_setKubeContext(t *testing.T) {
// type args struct {
// context string
// }
// tests := []struct {
// name string
// args args
// want bool
// }{
// // TODO: Add test cases.
// }
// for _, tt := range tests {
// t.Run(tt.name, func(t *testing.T) {
// if got := setKubeContext(tt.args.context); got != tt.want {
// t.Errorf("setKubeContext() = %v, want %v", got, tt.want)
// }
// })
// }
// }

// func Test_createContext(t *testing.T) {
// tests := []struct {
// name string
// want bool
// }{
// // TODO: Add test cases.
// }
// for _, tt := range tests {
// t.Run(tt.name, func(t *testing.T) {
// if got := createContext(); got != tt.want {
// t.Errorf("createContext() = %v, want %v", got, tt.want)
// }
// })
// }
// }
500 changes: 500 additions & 0 deletions kube_helpers.go

Large diffs are not rendered by default.

163 changes: 158 additions & 5 deletions main.go
Original file line number Diff line number Diff line change
@@ -1,19 +1,172 @@
package main

import (
"log"
"os"
)

// Allow parsing of multiple string command line options into an array of strings
type stringArray []string

func (i *stringArray) String() string {
return "my string representation"
}

func (i *stringArray) Set(value string) error {
*i = append(*i, value)
return nil
}
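
// Illustrative snippet (hypothetical, mirrors the flag wiring in init.go):
// because stringArray implements flag.Value, a repeated flag accumulates all
// of its values into one slice:
//
//   var files stringArray
//   flag.Var(&files, "f", "desired state file, may be supplied more than once")
//   // `helmsman -f base.toml -f prod.toml` yields files == ["base.toml", "prod.toml"]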

var s state
var debug bool
var file string
var files stringArray
var envFiles stringArray
var kubeconfig string
var apply bool
var help bool
var v bool
var verbose bool
var noBanner bool
var noColors bool
var noFancy bool
var noNs bool
var nsOverride string
var skipValidation bool
var applyLabels bool
var keepUntrackedReleases bool
var appVersion = "v1.9.1"
var helmVersion string
var kubectlVersion string
var dryRun bool
var target stringArray
var targetMap map[string]bool
var destroy bool
var showDiff bool
var suppressDiffSecrets bool
var noEnvSubst bool

const tempFilesDir = ".helmsman-tmp"
const stableHelmRepo = "https://kubernetes-charts.storage.googleapis.com"
const incubatorHelmRepo = "http://storage.googleapis.com/kubernetes-charts-incubator"

func main() {
// delete temp files with substituted env vars when the program terminates
defer os.RemoveAll(tempFilesDir)
defer cleanup()

p := makePlan(&s)
// set the kubecontext to be used Or create it if it does not exist
if !setKubeContext(s.Settings.KubeContext) {
if r, msg := createContext(); !r {
logError(msg)
}
}

if apply || dryRun || destroy {
// add/validate namespaces
if !noNs {
addNamespaces(s.Namespaces)
}

if !apply {
p.printPlan()
if r, msg := initHelm(); !r {
logError(msg)
}

// check if helm Tiller is ready
for k, ns := range s.Namespaces {
if ns.InstallTiller || ns.UseTiller {
waitForTiller(k)
}
}

if _, ok := s.Namespaces["kube-system"]; !ok {
waitForTiller("kube-system")
}
} else {
initHelmClientOnly()
}

// add repos -- fails if they are not valid
if r, msg := addHelmRepos(s.HelmRepos); !r {
logError(msg)
}

if !skipValidation {
// validate charts-versions exist in defined repos
if r, msg := validateReleaseCharts(s.Apps); !r {
logError(msg)
}
} else {
log.Println("INFO: charts validation is skipped.")
}

log.Println("INFO: checking what I need to do for your charts ... ")
if destroy {
log.Println("WARN: --destroy is enabled. Your releases will be deleted!")
}

p := makePlan(&s)
if !keepUntrackedReleases {
cleanUntrackedReleases()
}

p.sortPlan()
p.printPlan()
p.sendPlanToSlack()

if apply || dryRun || destroy {
p.execPlan()
}

log.Println("INFO: completed successfully!")
}

// cleanup deletes the k8s certificates and keys files
// It also deletes any Tiller TLS certs and keys
// and secret files
func cleanup() {
log.Println("INFO: cleaning up sensitive and temp files")
if _, err := os.Stat("ca.crt"); err == nil {
deleteFile("ca.crt")
}

if _, err := os.Stat("ca.key"); err == nil {
deleteFile("ca.key")
}

if _, err := os.Stat("client.crt"); err == nil {
deleteFile("client.crt")
}

if _, err := os.Stat("bearer.token"); err == nil {
deleteFile("bearer.token")
}

for k := range s.Namespaces {
if _, err := os.Stat(k + "-tiller.cert"); err == nil {
deleteFile(k + "-tiller.cert")
}
if _, err := os.Stat(k + "-tiller.key"); err == nil {
deleteFile(k + "-tiller.key")
}
if _, err := os.Stat(k + "-ca.cert"); err == nil {
deleteFile(k + "-ca.cert")
}
if _, err := os.Stat(k + "-client.cert"); err == nil {
deleteFile(k + "-client.cert")
}
if _, err := os.Stat(k + "-client.key"); err == nil {
deleteFile(k + "-client.key")
}
}

for _, app := range s.Apps {
if _, err := os.Stat(app.SecretsFile + ".dec"); err == nil {
deleteFile(app.SecretsFile + ".dec")
}
for _, secret := range app.SecretsFiles {
if _, err := os.Stat(secret + ".dec"); err == nil {
deleteFile(secret + ".dec")
}
}
}

}
20 changes: 20 additions & 0 deletions minimal-example.toml
Original file line number Diff line number Diff line change
@@ -0,0 +1,20 @@
## This is a minimal example.
## It will use your current kube context and will deploy Tiller without RBAC service account.
## For the full config spec and options, check https://github.com/Praqma/helmsman/blob/master/docs/desired_state_specification.md

[namespaces]
[namespaces.staging]

[apps]

[apps.jenkins]
namespace = "staging"
enabled = true
chart = "stable/jenkins"
version = "0.14.3"

[apps.artifactory]
namespace = "staging"
enabled = true
chart = "stable/artifactory"
version = "7.0.6"
19 changes: 19 additions & 0 deletions minimal-example.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,19 @@
## This is a minimal example.
## It will use your current kube context and will deploy Tiller without RBAC service account.
## For the full config spec and options, check https://github.com/Praqma/helmsman/blob/master/docs/desired_state_specification.md

namespaces:
staging:

apps:
jenkins:
namespace: staging
enabled: true
chart: stable/jenkins
version: 0.14.3

artifactory:
namespace: staging
enabled: true
chart: stable/artifactory
version: 7.0.6
65 changes: 65 additions & 0 deletions namespace.go
Original file line number Diff line number Diff line change
@@ -0,0 +1,65 @@
package main

import (
"fmt"
)

// resources type
type resources struct {
CPU string `yaml:"cpu,omitempty"`
Memory string `yaml:"memory,omitempty"`
}

// limits type
type limits []struct {
Max resources `yaml:"max,omitempty"`
Min resources `yaml:"min,omitempty"`
Default resources `yaml:"default,omitempty"`
DefaultRequest resources `yaml:"defaultRequest,omitempty"`
MaxLimitRequestRatio resources `yaml:"maxLimitRequestRatio,omitempty"`
LimitType string `yaml:"type"`
}
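
// Illustrative note (assumption -- the generating code lives in
// kube_helpers.go, whose diff is not rendered here): each limits entry is
// expected to map onto one item of a k8s LimitRange spec, e.g.
// `type = "Container"` with default/defaultRequest values becoming a
// LimitRangeItem of type Container in the created LimitRange.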

// namespace type represents the fields of a namespace
type namespace struct {
Protected bool `yaml:"protected"`
InstallTiller bool `yaml:"installTiller"`
UseTiller bool `yaml:"useTiller"`
TillerServiceAccount string `yaml:"tillerServiceAccount"`
TillerRole string `yaml:"tillerRole"`
CaCert string `yaml:"caCert"`
TillerCert string `yaml:"tillerCert"`
TillerKey string `yaml:"tillerKey"`
ClientCert string `yaml:"clientCert"`
ClientKey string `yaml:"clientKey"`
Limits limits `yaml:"limits,omitempty"`
Labels map[string]string `yaml:"labels"`
Annotations map[string]string `yaml:"annotations"`
}

// checkNamespaceDefined checks if a given namespace is defined in the namespaces section of the desired state file
func checkNamespaceDefined(ns string, s state) bool {
	_, ok := s.Namespaces[ns]
	return ok
}

// print prints the namespace
func (n namespace) print() {
fmt.Println("")
fmt.Println("\tprotected : ", n.Protected)
fmt.Println("\tinstallTiller : ", n.InstallTiller)
fmt.Println("\tuseTiller : ", n.UseTiller)
fmt.Println("\ttillerServiceAccount : ", n.TillerServiceAccount)
fmt.Println("\ttillerRole: ", n.TillerRole)
fmt.Println("\tcaCert : ", n.CaCert)
fmt.Println("\ttillerCert : ", n.TillerCert)
fmt.Println("\ttillerKey : ", n.TillerKey)
fmt.Println("\tclientCert : ", n.ClientCert)
fmt.Println("\tclientKey : ", n.ClientKey)
fmt.Println("\tlabels : ")
printMap(n.Labels, 2)
fmt.Println("------------------- ")
}
129 changes: 110 additions & 19 deletions plan.go
Original file line number Diff line number Diff line change
@@ -3,62 +3,153 @@ package main
import (
"fmt"
"log"
"net/url"
"sort"
"strconv"
"strings"
"time"

"github.com/logrusorgru/aurora"
)

// decisionType type representing the type of a Decision for console output
type decisionType int

const (
create decisionType = iota + 1
change
delete
noop
)

var decisionColor = map[decisionType]aurora.Color{
create: aurora.BlueFg,
change: aurora.BrownFg,
delete: aurora.RedFg,
noop: aurora.GreenFg,
}

// orderedDecision type representing a Decision and its priority weight
type orderedDecision struct {
Description string
Priority int
Type decisionType
}

// orderedCommand type representing a Command, its priority weight, and the targeted release from the desired state
type orderedCommand struct {
Command command
Priority int
targetRelease *release
}

// plan type representing the plan of actions to make the desired state come true.
type plan struct {
Commands []command
Decisions []string
Commands []orderedCommand
Decisions []orderedDecision
Created time.Time
}

// createPlan initializes an empty plan
func createPlan() plan {

p := plan{
Commands: []command{},
Decisions: []string{},
Created: time.Now(),
Commands: []orderedCommand{},
Decisions: []orderedDecision{},
Created: time.Now().UTC(),
}
return p
}

// addCommand adds a command type to the plan
func (p *plan) addCommand(c command) {
func (p *plan) addCommand(cmd command, priority int, r *release) {
oc := orderedCommand{
Command: cmd,
Priority: priority,
targetRelease: r,
}

p.Commands = append(p.Commands, c)
p.Commands = append(p.Commands, oc)
}

// addDecision adds a decision type to the plan
func (p *plan) addDecision(decision string) {

p.Decisions = append(p.Decisions, decision)
func (p *plan) addDecision(decision string, priority int, decisionType decisionType) {
od := orderedDecision{
Description: decision,
Priority: priority,
Type: decisionType,
}
p.Decisions = append(p.Decisions, od)
}

// execPlan executes the commands (actions) which were added to the plan.
func (p plan) execPlan() {
log.Println("INFO: Executing the following plan ... ")
p.printPlan()
p.sortPlan()
if len(p.Commands) > 0 {
log.Println("INFO: Executing the plan ... ")
} else {
log.Println("INFO: Nothing to execute ... ")
}

for _, cmd := range p.Commands {
log.Println("INFO: attempting: -- ", cmd.Description)
cmd.exec(debug)
if exitCode, msg := cmd.Command.exec(debug, verbose); exitCode != 0 {
var errorMsg string
if errorMsg = msg; !verbose {
errorMsg = strings.Split(msg, "---")[0]
}
logError("Command returned with exit code: " + string(exitCode) + ". And error message: " + errorMsg)
} else {
log.Println(style.Cyan(msg))
if cmd.targetRelease != nil && !dryRun {
labelResource(cmd.targetRelease)
}
if _, err := url.ParseRequestURI(s.Settings.SlackWebhook); err == nil {
notifySlack(cmd.Command.Description+" ... SUCCESS!", s.Settings.SlackWebhook, false, true)
}
}
}
}

// printPlanCmds prints the actual commands that will be executed as part of a plan.
func (p plan) printPlanCmds() {
fmt.Println("Printing the commands of the current plan ...")
for _, Cmd := range p.Commands {
fmt.Println(Cmd.Description)
for _, cmd := range p.Commands {
fmt.Println(cmd.Command.Args[1])
}
}

// printPlan prints the decisions made in a plan.
func (p plan) printPlan() {
fmt.Println("---------------")
fmt.Printf("Ok, I have generated a plan for you at: %s \n", p.Created)
log.Println("----------------------")
log.Println(style.Bold(style.Green("INFO: Plan generated at: " + p.Created.Format("Mon Jan _2 2006 15:04:05"))))
for _, decision := range p.Decisions {
fmt.Println(decision)
log.Println(style.Colorize(decision.Description+" -- priority: "+strconv.Itoa(decision.Priority), decisionColor[decision.Type]))
}
}

// sendPlanToSlack sends the description of plan commands to slack if a webhook is provided.
func (p plan) sendPlanToSlack() {
if _, err := url.ParseRequestURI(s.Settings.SlackWebhook); err == nil {
str := ""
for _, c := range p.Commands {
str = str + c.Command.Description + "\n"
}

notifySlack(strings.TrimRight(str, "\n"), s.Settings.SlackWebhook, false, false)
}

}

// sortPlan sorts the slices of commands and decisions based on priorities
// the lower the priority value the earlier a command should be attempted
func (p plan) sortPlan() {
log.Println("INFO: sorting the commands in the plan based on priorities (order flags) ... ")

sort.SliceStable(p.Commands, func(i, j int) bool {
return p.Commands[i].Priority < p.Commands[j].Priority
})

sort.SliceStable(p.Decisions, func(i, j int) bool {
return p.Decisions[i].Priority < p.Decisions[j].Priority
})
}
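
// Illustrative example: with decisions at priorities -3, -2 and 0 (as in the
// example.toml apps earlier in this diff), sortPlan orders them -3, -2, 0;
// sort.SliceStable additionally preserves the original relative order of
// entries that share the same priority.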
210 changes: 210 additions & 0 deletions plan_test.go
Original file line number Diff line number Diff line change
@@ -0,0 +1,210 @@
package main

import (
"reflect"
"testing"
"time"
)

func Test_createPlan(t *testing.T) {
tests := []struct {
name string
want plan
}{
{
name: "test creating a plan",
want: createPlan(),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := createPlan(); reflect.DeepEqual(got, tt.want) {
t.Errorf("createPlan() = %v, want %v", got, tt.want)
}
})
}
}

func Test_plan_addCommand(t *testing.T) {
type fields struct {
Commands []orderedCommand
Decisions []orderedDecision
Created time.Time
}
type args struct {
c command
}
tests := []struct {
name string
fields fields
args args
}{
{
name: "testing command 1",
fields: fields{
Commands: []orderedCommand{},
Decisions: []orderedDecision{},
Created: time.Now(),
},
args: args{
c: command{
Cmd: "bash",
Args: []string{"-c", "echo this is fun"},
Description: "A bash command execution test with echo.",
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
p := &plan{
Commands: tt.fields.Commands,
Decisions: tt.fields.Decisions,
Created: tt.fields.Created,
}
r := &release{}
p.addCommand(tt.args.c, 0, r)
if got := len(p.Commands); got != 1 {
t.Errorf("addCommand(): got %v, want 1", got)
}
})
}
}

func Test_plan_addDecision(t *testing.T) {
type fields struct {
Commands []orderedCommand
Decisions []orderedDecision
Created time.Time
}
type args struct {
decision string
}
tests := []struct {
name string
fields fields
args args
}{
{
name: "testing decision adding",
fields: fields{
Commands: []orderedCommand{},
Decisions: []orderedDecision{},
Created: time.Now(),
},
args: args{
decision: "This is a test decision.",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
p := &plan{
Commands: tt.fields.Commands,
Decisions: tt.fields.Decisions,
Created: tt.fields.Created,
}
p.addDecision(tt.args.decision, 0, noop)
if got := len(p.Decisions); got != 1 {
t.Errorf("addDecision(): got %v, want 1", got)
}
})
}
}

// func Test_plan_execPlan(t *testing.T) {
// type fields struct {
// Commands []command
// Decisions []string
// Created time.Time
// }
// tests := []struct {
// name string
// fields fields
// }{
// {
// name: "testing executing a plan",
// fields: fields{
// Commands: []command{
// {
// Cmd: "bash",
// Args: []string{"-c", "touch hello.world"},
// Description: "Creating hello.world file.",
// }, {
// Cmd: "bash",
// Args: []string{"-c", "touch hello.world1"},
// Description: "Creating hello.world1 file.",
// },
// },
// Decisions: []string{"Create hello.world.", "Create hello.world1."},
// Created: time.Now(),
// },
// },
// }
// for _, tt := range tests {
// t.Run(tt.name, func(t *testing.T) {
// p := plan{
// Commands: tt.fields.Commands,
// Decisions: tt.fields.Decisions,
// Created: tt.fields.Created,
// }
// p.execPlan()
// c := command{
// Cmd: "bash",
// Args: []string{"-c", "ls | grep hello.world | wc -l"},
// Description: "",
// }
// if _, got := c.exec(false, false); strings.TrimSpace(got) != "2" {
// t.Errorf("execPlan(): got %v, want hello world, again!", got)
// }
// })
// }
// }

// func Test_plan_printPlanCmds(t *testing.T) {
// type fields struct {
// Commands []command
// Decisions []string
// Created time.Time
// }
// tests := []struct {
// name string
// fields fields
// }{
// // TODO: Add test cases.
// }
// for _, tt := range tests {
// t.Run(tt.name, func(t *testing.T) {
// p := plan{
// Commands: tt.fields.Commands,
// Decisions: tt.fields.Decisions,
// Created: tt.fields.Created,
// }
// p.printPlanCmds()
// })
// }
// }

// func Test_plan_printPlan(t *testing.T) {
// type fields struct {
// Commands []command
// Decisions []string
// Created time.Time
// }
// tests := []struct {
// name string
// fields fields
// }{
// // TODO: Add test cases.
// }
// for _, tt := range tests {
// t.Run(tt.name, func(t *testing.T) {
// p := plan{
// Commands: tt.fields.Commands,
// Decisions: tt.fields.Decisions,
// Created: tt.fields.Created,
// }
// p.printPlan()
// })
// }
// }
8 changes: 8 additions & 0 deletions release-notes.md
@@ -0,0 +1,8 @@
# v1.9.1

> If you are using a version of Helmsman older than v1.4.0-rc, please read the changes below carefully and follow the upgrade guide [here](docs/migrating_to_v1.4.0-rc.md)

## Fixes and improvements:
- Improved temporary file handling. PR #242 (thanks to @brndnmtthws)


150 changes: 128 additions & 22 deletions release.go
@@ -2,54 +2,160 @@ package main

import (
"fmt"
"log"
"os"
"strings"

"github.com/hashicorp/go-version"
)

// release type representing Helm releases which are described in the desired state
type release struct {
Name string `yaml:"name"`
Description string `yaml:"description"`
Namespace string `yaml:"namespace"`
Enabled bool `yaml:"enabled"`
Chart string `yaml:"chart"`
Version string `yaml:"version"`
ValuesFile string `yaml:"valuesFile"`
ValuesFiles []string `yaml:"valuesFiles"`
SecretsFile string `yaml:"secretsFile"`
SecretsFiles []string `yaml:"secretsFiles"`
Purge bool `yaml:"purge"`
Test bool `yaml:"test"`
Protected bool `yaml:"protected"`
Wait bool `yaml:"wait"`
Priority int `yaml:"priority"`
TillerNamespace string `yaml:"tillerNamespace"`
Set map[string]string `yaml:"set"`
SetString map[string]string `yaml:"setString"`
HelmFlags []string `yaml:"helmFlags"`
NoHooks bool `yaml:"noHooks"`
Timeout int `yaml:"timeout"`
}
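// An illustrative desired-state entry for this struct (made-up values;
// keys follow the yaml tags above):
//
//  apps:
//    jenkins:
//      namespace: "ci"
//      enabled: true
//      chart: "stable/jenkins"
//      version: "0.14.3"
//      valuesFile: "values/jenkins.yaml"
//      priority: -3
//      wait: true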

// validateRelease validates whether a release inside a desired state meets the specification.
// check the full specification @ https://github.com/Praqma/helmsman/docs/desired_state_spec.md
func validateRelease(appLabel string, r *release, names map[string]map[string]bool, s state) (bool, string) {
if r.Name == "" {
r.Name = appLabel
}
if r.TillerNamespace != "" {
if ns, ok := s.Namespaces[r.TillerNamespace]; !ok {
return false, "tillerNamespace specified, but the namespace specified does not exist!"
} else if !ns.InstallTiller && !ns.UseTiller {
return false, "tillerNamespace specified, but that namespace does not have neither installTiller nor useTiller set to true."
}
} else if getDesiredTillerNamespace(r) == "kube-system" {
if ns, ok := s.Namespaces["kube-system"]; ok && !ns.InstallTiller && !ns.UseTiller {
return false, "app is desired to be deployed using Tiller from [[ kube-system ]] but kube-system is not desired to have a Tiller installed nor use an existing Tiller. You can use another Tiller with the 'tillerNamespace' option or deploy Tiller in kube-system. "
}
}
if names[r.Name][getDesiredTillerNamespace(r)] {
return false, "release name must be unique within a given Tiller."
}

if nsOverride == "" && r.Namespace == "" {
return false, "release targeted namespace can't be empty."
} else if nsOverride == "" && r.Namespace != "" && r.Namespace != "kube-system" && !checkNamespaceDefined(r.Namespace, s) {
return false, "release " + r.Name + " is using namespace [ " + r.Namespace + " ] which is not defined in the Namespaces section of your desired state file." +
" Release [ " + r.Name + " ] can't be installed in that Namespace until its defined."
}
if r.Chart == "" || !strings.ContainsAny(r.Chart, "/") {
return false, "chart can't be empty and must be of the format: repo/chart."
}
if r.Version == "" {
return false, "version can't be empty."
}

_, err := os.Stat(r.ValuesFile)
if r.ValuesFile != "" && (!isOfType(r.ValuesFile, []string{".yaml", ".yml", ".json"}) || err != nil) {
return false, fmt.Sprintf("valuesFile must be a valid relative (from dsf file) file path for a yaml file, or can be left empty (provided path resolved to %q).", r.ValuesFile)
} else if r.ValuesFile != "" && len(r.ValuesFiles) > 0 {
return false, "valuesFile and valuesFiles should not be used together."
} else if len(r.ValuesFiles) > 0 {
for i, filePath := range r.ValuesFiles {
if _, pathErr := os.Stat(filePath); !isOfType(filePath, []string{".yaml", ".yml", ".json"}) || pathErr != nil {
return false, fmt.Sprintf("valuesFiles must be valid relative (from dsf file) file paths for a yaml file; path at index %d provided path resolved to %q.", i, filePath)
}
}
}

_, err = os.Stat(r.SecretsFile)
if r.SecretsFile != "" && (!isOfType(r.SecretsFile, []string{".yaml", ".yml", ".json"}) || err != nil) {
return false, fmt.Sprintf("secretsFile must be a valid relative (from dsf file) file path for a yaml file, or can be left empty (provided path resolved to %q).", r.SecretsFile)
} else if r.SecretsFile != "" && len(r.SecretsFiles) > 0 {
return false, "secretsFile and secretsFiles should not be used together."
} else if len(r.SecretsFiles) > 0 {
for i, filePath := range r.SecretsFiles {
if _, pathErr := os.Stat(filePath); !isOfType(filePath, []string{".yaml", ".yml", ".json"}) || pathErr != nil {
return false, fmt.Sprintf("secretsFiles must be valid relative (from dsf file) file paths for a yaml file; path at index %d resolved to %q.", i, filePath)
}
}
}

if r.Priority > 0 {
return false, "priority can only be 0 or a negative value; positive values are not allowed."
}

if names[r.Name] == nil {
names[r.Name] = make(map[string]bool)
}
if r.TillerNamespace != "" {
names[r.Name][r.TillerNamespace] = true
} else if s.Namespaces[r.Namespace].InstallTiller {
names[r.Name][r.Namespace] = true
} else {
names[r.Name]["kube-system"] = true
}
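// the name is now recorded against its Tiller namespace, so any later app
// reusing this release name under the same Tiller fails the uniqueness check above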

if len(r.SetString) > 0 {
v1, _ := version.NewVersion(helmVersion)
setStringConstraint, _ := version.NewConstraint(">=2.9.0")
if !setStringConstraint.Check(v1) {
return false, "you are using setString in your desired state, but your helm client does not support it. You need helm v2.9.0 or above for this feature."
}
}
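// e.g. a helm client reporting v2.8.2 fails the ">=2.9.0" constraint,
// while v2.9.1 satisfies it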

// support escaping dollar signs: "$$" in a value is treated as a literal "$"
os.Setenv("HELMSMAN_DOLLAR", "$")
for k, v := range r.Set {
if strings.Contains(v, "$") {
if os.ExpandEnv(strings.Replace(v, "$$", "${HELMSMAN_DOLLAR}", -1)) == "" {
return false, "env var [ " + v + " ] is not set, but is wanted to be passed for [ " + k + " ] in [[ " + r.Name + " ]]"
}
}
}

return true, ""
}

// overrideNamespace overrides a release defined namespace with a new given one
func overrideNamespace(r *release, newNs string) {
log.Println("INFO: overriding namespace for app: " + r.Name)
r.Namespace = newNs
}

// print prints the details of the release
func (r release) print() {
fmt.Println("")
fmt.Println("\tname : ", r.Name)
fmt.Println("\tdescription : ", r.Description)
fmt.Println("\tenv : ", r.Env)
fmt.Println("\tnamespace : ", r.Namespace)
fmt.Println("\tenabled : ", r.Enabled)
fmt.Println("\tchart : ", r.Chart)
fmt.Println("\tversion : ", r.Version)
fmt.Println("\tvaluesFile : ", r.ValuesFile)
fmt.Println("\tvaluesFiles : ", strings.Join(r.ValuesFiles, ","))
fmt.Println("\tpurge : ", r.Purge)
fmt.Println("\ttest : ", r.Test)
fmt.Println("\tprotected : ", r.Protected)
fmt.Println("\twait : ", r.Wait)
fmt.Println("\tpriority : ", r.Priority)
fmt.Println("\ttiller namespace : ", r.TillerNamespace)
fmt.Println("\tno-hooks : ", r.NoHooks)
fmt.Println("\ttimeout : ", r.Timeout)
fmt.Println("\tvalues to override from env:")
printMap(r.Set, 2)
fmt.Println("------------------- ")
}
296 changes: 296 additions & 0 deletions release_test.go
@@ -0,0 +1,296 @@
package main

import (
"strings"
"testing"
)

func Test_validateRelease(t *testing.T) {
st := state{
Metadata: make(map[string]string),
Certificates: make(map[string]string),
Settings: config{},
Namespaces: map[string]namespace{"namespace": namespace{false, false, false, "", "", "", "", "", "", "", limits{}, make(map[string]string), make(map[string]string)}},
HelmRepos: make(map[string]string),
Apps: make(map[string]*release),
}

type args struct {
s state
r *release
}
tests := []struct {
name string
args args
want bool
want1 string
}{
{
name: "test case 1",
args: args{
r: &release{
Name: "release1",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "repo/chartX",
Version: "1.0",
ValuesFile: "test_files/values.yaml",
Purge: true,
Test: true,
},
s: st,
},
want: true,
want1: "",
}, {
name: "test case 2",
args: args{
r: &release{
Name: "release2",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "repo/chartX",
Version: "1.0",
ValuesFile: "xyz.yaml",
Purge: true,
Test: true,
},
s: st,
},
want: false,
want1: "valuesFile must be a valid relative (from dsf file) file path for a yaml file, or can be left empty (provided path resolved to \"xyz.yaml\").",
}, {
name: "test case 3",
args: args{
r: &release{
Name: "release3",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "repo/chartX",
Version: "1.0",
ValuesFile: "test_files/values.xml",
Purge: true,
Test: true,
},
s: st,
},
want: false,
want1: "valuesFile must be a valid relative (from dsf file) file path for a yaml file, or can be left empty (provided path resolved to \"test_files/values.xml\").",
}, {
name: "test case 4",
args: args{
r: &release{
Name: "release1",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "repo/chartX",
Version: "1.0",
ValuesFile: "test_files/values.yaml",
Purge: true,
Test: true,
},
s: st,
},
want: false,
want1: "release name must be unique within a given Tiller.",
}, {
name: "test case 5",
args: args{
r: &release{
Name: "",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "repo/chartX",
Version: "1.0",
ValuesFile: "test_files/values.yaml",
Purge: true,
Test: true,
},
s: st,
},
want: true,
want1: "",
}, {
name: "test case 6",
args: args{
r: &release{
Name: "release6",
Description: "",
Namespace: "",
Enabled: true,
Chart: "repo/chartX",
Version: "1.0",
ValuesFile: "test_files/values.yaml",
Purge: true,
Test: true,
},
s: st,
},
want: false,
want1: "release targeted namespace can't be empty.",
}, {
name: "test case 7",
args: args{
r: &release{
Name: "release7",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "chartX",
Version: "1.0",
ValuesFile: "test_files/values.yaml",
Purge: true,
Test: true,
},
s: st,
},
want: false,
want1: "chart can't be empty and must be of the format: repo/chart.",
}, {
name: "test case 8",
args: args{
r: &release{
Name: "release8",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "",
Version: "1.0",
ValuesFile: "test_files/values.yaml",
Purge: true,
Test: true,
},
s: st,
},
want: false,
want1: "chart can't be empty and must be of the format: repo/chart.",
}, {
name: "test case 9",
args: args{
r: &release{
Name: "release9",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "repo/chartX",
Version: "",
ValuesFile: "test_files/values.yaml",
Purge: true,
Test: true,
},
s: st,
},
want: false,
want1: "version can't be empty.",
}, {
name: "test case 10",
args: args{
r: &release{
Name: "release10",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "repo/chartX",
Version: "1.0",
ValuesFile: "test_files/values.yaml",
Purge: true,
Test: true,
},
s: st,
},
want: true,
want1: "",
}, {
name: "test case 11",
args: args{
r: &release{
Name: "release11",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "repo/chartX",
Version: "1.0",
ValuesFile: "test_files/values.yaml",
ValuesFiles: []string{"xyz.yaml"},
Purge: true,
Test: true,
},
s: st,
},
want: false,
want1: "valuesFile and valuesFiles should not be used together.",
}, {
name: "test case 12",
args: args{
r: &release{
Name: "release12",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "repo/chartX",
Version: "1.0",
ValuesFiles: []string{"xyz.yaml"},
Purge: true,
Test: true,
},
s: st,
},
want: false,
want1: "valuesFiles must be valid relative (from dsf file) file paths for a yaml file; path at index 0 provided path resolved to \"xyz.yaml\".",
}, {
name: "test case 13",
args: args{
r: &release{
Name: "release13",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "repo/chartX",
Version: "1.0",
ValuesFiles: []string{"./test_files/values.yaml", "test_files/values2.yaml"},
Purge: true,
Test: true,
},
s: st,
},
want: true,
want1: "",
}, {
name: "test case 14",
args: args{
r: &release{
Name: "release14",
Description: "",
Namespace: "namespace",
Enabled: true,
Chart: "repo/chartX",
Version: "1.0",
ValuesFiles: []string{"./test_files/values.yaml", "test_files/values2.yaml"},
Purge: true,
Test: true,
Set: map[string]string{"some_var": "$SOME_VAR"},
},
s: st,
},
want: false,
want1: "env var [ $SOME_VAR ] is not set, but is wanted to be passed for [ some_var ] in [[ release14 ]]",
},
}
names := make(map[string]map[string]bool)
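// names is shared across all subtests on purpose: test case 4 reuses the
// release name from test case 1 and must be rejected as a duplicate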
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, got1 := validateRelease("testApp", tt.args.r, names, tt.args.s)
if got != tt.want {
t.Errorf("validateRelease() got = %v, want %v", got, tt.want)
}
if strings.TrimSpace(got1) != tt.want1 {
t.Errorf("validateRelease() got1 = %v, want %v", got1, tt.want1)
}
})
}
}
234 changes: 161 additions & 73 deletions state.go
438 changes: 438 additions & 0 deletions state_test.go
33 changes: 33 additions & 0 deletions test_files/dockerfile
19 changes: 19 additions & 0 deletions test_files/invalid_example.toml
19 changes: 19 additions & 0 deletions test_files/invalid_example.yaml
Empty file added test_files/values.xml
Empty file added test_files/values.yaml
Empty file added test_files/values2.yaml
475 changes: 450 additions & 25 deletions utils.go
383 changes: 383 additions & 0 deletions utils_test.go