Initial commit of the Portworx helm chart #1

Status: Closed · wants to merge 5 commits
4 changes: 4 additions & 0 deletions charts/px/Chart.yaml
@@ -0,0 +1,4 @@
apiVersion: v1
description: A Helm chart for installing Portworx on Kubernetes
name: portworx-helm
version: 0.0.1
88 changes: 88 additions & 0 deletions charts/px/README.md
@@ -0,0 +1,88 @@
# Portworx

[Portworx](https://portworx.com/) is a software-defined persistent storage solution designed and purpose-built for applications deployed as containers, via container orchestrators such as Kubernetes, Marathon and Swarm. It is a clustered block storage solution that provides a cloud-native layer from which containerized stateful applications programmatically consume block, file and object storage services directly through the scheduler.

## Introduction

This chart deploys Portworx to all nodes in your cluster via a DaemonSet.

## Prerequisites

- Kubernetes 1.7+
- All [prerequisites](https://docs.portworx.com/#minimum-requirements) for Portworx fulfilled.

## Installing the Chart

#### IMPORTANT NOTE:

Grant Tiller the required RBAC permissions. The Portworx delete hooks run under the service account named `tiller`.
```bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl edit deploy --namespace kube-system tiller-deploy
```
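
The `kubectl edit` step above exists to point the Tiller deployment at the new service account. A minimal sketch of the fields to set in the pod spec (assuming the stock `tiller-deploy` layout; excerpt only, not a complete manifest):

```yaml
# Hypothetical excerpt of the tiller-deploy Deployment after editing.
# The serviceAccount fields are the only lines that matter here.
spec:
  template:
    spec:
      serviceAccount: tiller
      serviceAccountName: tiller
```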

To install the chart with the release name `my-release`:

1. Clone the repository.
2. `cd` into the root directory.
3. Run:

```bash
$ helm install --name my-release \
    --set imageVersion=1.2.12.0 .
```

## Uninstalling the Chart

To uninstall/delete the `my-release` deployment:

```bash
$ helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the Portworx chart and their default values.

| Parameter | Description | Default |
|-----------------------------|------------------------------------|-------------------------------------------|
| `deploymentType` | The deployment type. Can be either `docker` or `oci` | `oci` |
| `imageVersion` | The image tag to pull | `latest` |
| `openshiftInstall` | Installing on OpenShift? | `false` |
| `isTargetOSCoreOS` | Is the target OS CoreOS? | `false` |
| `etcdEndpoint` | (REQUIRED) etcd endpoint for Portworx to function properly, in the form `etcd:http://<your-etcd-endpoint>:2379` | `etcd:http://<your-etcd-endpoint>:2379` |
| `clusterName` | Portworx cluster name | `mycluster` |
| `runOnMaster` | Run Portworx on the Kubernetes master? | `false` |
| `zeroStorage` | Run Portworx on the master with zero storage? | `false` |
| `usefileSystemDrive` | Should Portworx use an unmounted drive even with a filesystem? | `false` |
| `usedrivesAndPartitions` | Should Portworx use the drives as well as partitions on the disk? | `false` |
| `secretType` | Secrets store to be used. Can be AWS/KVDB/Vault | `none` |
| `drives` | Comma-separated list of drives to be used for storage | `none` |
| `dataInterface` | Name of the data interface (`<ethX>`) | `none` |
| `managementInterface` | Name of the management interface (`<ethX>`) | `none` |
| `envVars` | Colon-separated list of environment variables exported to Portworx (example: `API_SERVER=http://lighthouse-new.portworx.com:MYENV1=val1:MYENV2=val2`) | `none` |
| `lighthouse.token` | Portworx Lighthouse token for the cluster (example: `token-a980f3a8-5091-4d9c-871a-cbbeb030d1e6`) | `none` |
| `etcd.credentials` | Username and password for etcd authentication, in the form `user:password` | `none:none` |
| `etcd.ca` | Location of the CA file for etcd authentication, e.g. `/path/to/server.ca` | `none` |
| `etcd.cert` | Location of the certificate for etcd authentication, e.g. `/path/to/server.crt` | `none` |
| `etcd.key` | Location of the certificate key for etcd authentication, e.g. `/path/to/server.key` | `none` |
| `etcd.acl` | ACL token value used for Consul authentication (example: `398073a8-5091-4d9c-871a-bbbeb030d1f6`) | `none` |
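
Since `etcdEndpoint` is the one required parameter, a typical install sets it alongside any other overrides. The endpoint value below is illustrative only:

```bash
$ helm install --name my-release \
    --set etcdEndpoint=etcd:http://192.168.70.90:2379,clusterName=mycluster .
```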


Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```bash
$ helm install --name my-release \
--set deploymentType=docker,imageVersion=1.2.12.0 \
.
```

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

```bash
$ helm install --name my-release -f values.yaml .
```
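
For reference, a hedged sketch of such a `values.yaml`, using keys from the table above (all values illustrative, not defaults you must use):

```yaml
# Illustrative overrides only; adjust for your environment.
deploymentType: oci
imageVersion: 1.2.12.0
etcdEndpoint: etcd:http://192.168.70.90:2379   # required
clusterName: mycluster
drives: /dev/sdb,/dev/sdc
```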

> **Tip**: You can use the default [values.yaml](values.yaml)
21 changes: 21 additions & 0 deletions charts/px/templates/NOTES.txt
@@ -0,0 +1,21 @@

Your release is named {{ .Release.Name | quote }}.
Portworx pods should be running on each node in your cluster.

Portworx creates a unified pool of the disks attached to your Kubernetes nodes.
No further action should be required, and you are ready to consume Portworx volumes as part of your application data requirements.

For further information on using Portworx to create volumes, please refer to
https://docs.portworx.com/scheduler/kubernetes/preprovisioned-volumes.html

For dynamically provisioning volumes for your stateful applications as they run on Kubernetes, please refer to
https://docs.portworx.com/scheduler/kubernetes/dynamic-provisioning.html

Want to use storage orchestration for hyperconvergence? Please look at Stork. (NOTE: this isn't currently deployed as part of the Helm chart.)
https://docs.portworx.com/scheduler/kubernetes/stork.html

Refer to application solutions such as Cassandra, Kafka, et cetera:
https://docs.portworx.com/scheduler/kubernetes/cassandra-k8s.html
https://docs.portworx.com/scheduler/kubernetes/kafka-k8s.html

For options that you can provide while installing Portworx on your cluster, head over to the README.md.
22 changes: 22 additions & 0 deletions charts/px/templates/_helpers.tpl
@@ -0,0 +1,22 @@
{{/* Gets the correct API Version based on the version of the cluster
*/}}

{{- define "rbac.apiVersion" -}}
{{- if ge .Capabilities.KubeVersion.Minor "8" -}}
"rbac.authorization.k8s.io/v1"
{{- else -}}
"rbac.authorization.k8s.io/v1beta1"
{{- end -}}
{{- end -}}


{{- define "px.labels" -}}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
{{- end -}}

{{- define "driveOpts" }}
{{ $v := .Values.installOptions.drives | split "," }}
{{$v._0}}
{{- end -}}
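
As an illustration of the `driveOpts` helper above: Sprig's `split` returns a map keyed `_0`, `_1`, …, so the template emits only the first drive. Assuming `installOptions.drives` is set to `/dev/sdb,/dev/sdc` (a hypothetical value), rendering would go roughly like:

```
input:   installOptions.drives = "/dev/sdb,/dev/sdc"
split:   $v = { _0: "/dev/sdb", _1: "/dev/sdc" }
output:  {{ $v._0 }}  =>  /dev/sdb
```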
@@ -0,0 +1,33 @@
apiVersion: batch/v1
kind: Job
metadata:
  namespace: kube-system
  name: px-postdelete-unlabelnode
  labels:
    heritage: {{.Release.Service | quote }}
    release: {{.Release.Name | quote }}
    chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 1
  template:
    spec:
      serviceAccountName: tiller
      restartPolicy: OnFailure
      volumes:
        - name: kubectl
          hostPath:
            path: /usr/bin/kubectl
      containers:
        - name: post-delete-job
          image: "lachlanevenson/k8s-kubectl:{{ .Capabilities.KubeVersion.GitVersion }}"
          volumeMounts:
            - name: kubectl
              mountPath: /kubectl
          command:
            - sh
            - -c
            - "kubectl label nodes --all px/enabled-"
33 changes: 33 additions & 0 deletions charts/px/templates/hooks/pre-delete/px-predelete-nodelabel.yaml
@@ -0,0 +1,33 @@
apiVersion: batch/v1
kind: Job
metadata:
  namespace: kube-system
  name: px-predelete-nodelabel
  labels:
    heritage: {{.Release.Service | quote }}
    release: {{.Release.Name | quote }}
    chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 1
  template:
    spec:
      serviceAccountName: tiller
      restartPolicy: OnFailure
      volumes:
        - name: kubectl
          hostPath:
            path: /usr/bin/kubectl
      containers:
        - name: pre-delete-job
          image: "lachlanevenson/k8s-kubectl:{{ .Capabilities.KubeVersion.GitVersion }}"
          volumeMounts:
            - name: kubectl
              mountPath: /kubectl
          command:
            - sh
            - -c
            - "kubectl label nodes --all px/enabled=remove --overwrite"
26 changes: 26 additions & 0 deletions charts/px/templates/hooks/pre-install/etcd-preinstall-hook.yaml
@@ -0,0 +1,26 @@
# apiVersion: batch/v1
# kind: Job
# metadata:
#   namespace: kube-system
#   name: px-etcd-preinstall-hook
#   labels:
#     heritage: {{.Release.Service | quote }}
#     release: {{.Release.Name | quote }}
#     chart: "{{.Chart.Name}}-{{.Chart.Version}}"
#   annotations:
#     "helm.sh/hook": pre-install
#     "helm.sh/hook-weight": "-5"
#     "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
# spec:
#   backoffLimit: 1
#   template:
#     spec:
#       restartPolicy: Never
#       containers:
#         - name: pre-install-job
#           terminationMessagePath: '/dev/termination-log'
#           terminationMessagePolicy: 'FallbackToLogsOnError'
#           imagePullPolicy: Always
#           image: "hrishi/px-etcd-preinstall-hook:v1"
#           command: ['/bin/sh']
#           args: ['/usr/bin/etcdStatus.sh',"{{ .Values.etcdEndPoint }}"]