**Describe the feature**

Currently, both the `pre-delete-job.yaml` and `post-install-job.yaml` Jobs are pinned to the `bitnami/kubectl:1.18` image:
`pre-delete-job.yaml`:
```yaml
{{- $cmd := printf "kubectl scale deployment -n $NAMESPACE %s --replicas 0 &&" (include "capsule.deploymentName" .) -}}
{{- $cmd = printf "%s kubectl delete secret -n $NAMESPACE %s %s --ignore-not-found &&" $cmd (include "capsule.secretTlsName" .) (include "capsule.secretCaName" .) -}}
{{- $cmd = printf "%s kubectl delete clusterroles.rbac.authorization.k8s.io capsule-namespace-deleter capsule-namespace-provisioner --ignore-not-found &&" $cmd -}}
{{- $cmd = printf "%s kubectl delete clusterrolebindings.rbac.authorization.k8s.io capsule-namespace-provisioner --ignore-not-found" $cmd -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}"
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
        - name: pre-delete-job
          image: "bitnami/kubectl:1.18"
          command: ["sh", "-c", "{{ $cmd }}"]
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      serviceAccountName: {{ include "capsule.serviceAccountName" . }}
```
{{- $cmd := "while [ -z $$(kubectl -n $NAMESPACE get secret capsule-tls -o jsonpath='{.data.tls\\\\.crt}') ];" -}} {{- $cmd = printf "%s do echo 'waiting Capsule to be up and running...' && sleep 5;" $cmd -}} {{- $cmd = printf "%s done" $cmd -}} apiVersion: batch/v1 kind: Job metadata: name: "{{ .Release.Name }}" labels: app.kubernetes.io/managed-by: {{ .Release.Service | quote }} app.kubernetes.io/instance: {{ .Release.Name | quote }} app.kubernetes.io/version: {{ .Chart.AppVersion }} helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" annotations: # This is what defines this resource as a hook. Without this line, the # job is considered part of the release. "helm.sh/hook": post-install "helm.sh/hook-weight": "-5" "helm.sh/hook-delete-policy": hook-succeeded spec: template: metadata: name: "{{ .Release.Name }}" labels: app.kubernetes.io/managed-by: {{ .Release.Service | quote }} app.kubernetes.io/instance: {{ .Release.Name | quote }} helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" spec: restartPolicy: Never containers: - name: post-install-job image: "bitnami/kubectl:1.18" command: ["sh", "-c", "{{ $cmd }}"] env: - name: NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace serviceAccountName: {{ include "capsule.serviceAccountName" . }}
It would be great to be able to set these Jobs' image `repository`, as is already possible for the manager and proxy ingress images.
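For illustration only, here is a minimal sketch of what such a parameterization could look like. The `jobs.image` values key and its field names are hypothetical, not part of the current chart:

```yaml
# values.yaml — hypothetical keys, shown only to illustrate the request
jobs:
  image:
    repository: bitnami/kubectl
    tag: "1.18"
    pullPolicy: IfNotPresent
```

The two hook templates would then render the image from these values instead of the hard-coded string:

```yaml
# pre-delete-job.yaml / post-install-job.yaml (container spec excerpt)
          image: "{{ .Values.jobs.image.repository }}:{{ .Values.jobs.image.tag }}"
          imagePullPolicy: {{ .Values.jobs.image.pullPolicy }}
```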
**unai-ttxu** commented: I've opened the following PR to implement the requested feature 😄:
#209