Helm 2 has been deprecated since November 2019 and will receive bug fixes until August 13, 2020. To transition smoothly, we need to keep track of both helm 2's latest version and helm 3's latest version for a while.
Since our guide contains documentation on the usage of helm, we need to figure out practices that make sense for both v2 and v3 of helm.
Docs on using helm3
Switching to deploy with v3 from v2
I found the official helm docs to require quite a bit of comprehension about how helm works in v2 and v3, so I want to summarize the changes a bit instead of only referencing helm's main docs.
Helm's main documentation: https://helm.sh/docs/topics/v2_v3_migration/
Linked blog post: https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/
Linked plugin: https://github.com/helm/helm-2to3

Manual namespace creation
Namespaces must be manually created in helm 3.

Flags for helm delete
In helm v2, you needed to write helm delete --purge to get the same result that helm delete gives you in helm v3.
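As a sketch, the two equivalent deletions look like this; &lt;release-name&gt; is a placeholder.

```shell
# helm v2: a plain delete keeps release history around,
# so --purge is needed for a full removal.
helm delete --purge <release-name>

# helm v3: delete purges by default.
helm delete <release-name>
```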
The --timeout flag
In helm v3, --timeout requires a duration such as --timeout 5m0s rather than a plain number of seconds like --timeout 300.
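As a sketch, a small shell helper (hypothetical, not part of helm) can convert a v2-style seconds value into the duration string v3 expects:

```shell
# Convert a helm v2 style timeout in whole seconds into the
# duration string helm v3 expects, e.g. 300 -> 5m0s.
to_duration() {
  local secs=$1
  printf '%dm%ds' $((secs / 60)) $((secs % 60))
}

# helm v2: helm upgrade ... --timeout 300
# helm v3: helm upgrade ... --timeout "$(to_duration 300)"
to_duration 300   # → 5m0s
```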
Flags for helm upgrade
When it comes to helm upgrade, there is plenty of complexity in how to run it. I've tried to figure out the details.
options: -n and --namespace
In helm 2, the -n flag was used to set the name of a release for helm install. In helm 3, -n means --namespace instead. If our docs transition to never use helm install but instead use helm upgrade --install, we will always have a positional argument providing the release name, and -n will always mean namespace.
So, in our docs, we should use helm upgrade --install instead of helm install followed by future helm upgrade commands, so we don't need to distinguish between helm v2 and v3.
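A sketch of the resulting idempotent command, which works for both a first install and subsequent upgrades in helm v2 and v3 alike; &lt;release&gt;, &lt;chart&gt;, and &lt;namespace&gt; are placeholders:

```shell
# Same command every time, whether the release exists yet or not;
# the release name is positional, so -n/--namespace is unambiguous.
helm upgrade <release> <chart> \
  --install \
  --namespace <namespace> \
  --values config.yaml
```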
options: --cleanup-on-fail, --wait, and --atomic
I concluded in helm/helm#7811 that it makes sense to not use --wait or --atomic, to avoid issues. If we want to await something, we should do it explicitly after the helm upgrade command itself.
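A sketch of awaiting the rollout ourselves instead of relying on --wait; it assumes a deployment named &lt;deployment&gt; in &lt;namespace&gt;, both placeholders:

```shell
# Upgrade without --wait / --atomic, then block until the
# deployment's rollout completes (or the timeout is hit).
helm upgrade <release> <chart> --install --cleanup-on-fail
kubectl rollout status deployment/<deployment> \
  --namespace <namespace> --timeout 300s
```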
options: --force and --recreate
Before --cleanup-on-fail was around, it was very important in helm 2 to use --force in some situations. Otherwise helm could lose track of resources, which would later cause upgrades to fail, and recovering from that state could be very troublesome: it requires insight into both the helm chart itself and the specific change that was made. Those states are very hard to resolve for everyone.
Between helm v2 and v3, there was a breaking change in what --force does. The two behaviors map to different interactions with the underlying kubernetes API server, which accepts POST, PUT, PATCH, DELETE, and GET.
# helm 2 - DELETE/POST
--force Force resource update through delete/recreate if needed
# helm 3 - PUT
--force Force resource updates through a replacement strategy
From helm v2 to v3, --force transitions from being kubectl delete (DELETE) + kubectl create (POST) to being kubectl replace (PUT). Sometimes, though, the kubectl replace (PUT) operation in helm v3 can fail where the v2 behavior wouldn't. The v2 behavior is more forceful and maps to the operation of kubectl replace --force.
Given all of this, here is my current idea of what to recommend, assuming we always use --cleanup-on-fail.
Helm 2 commands run by humans could run without --force and add it if they fail, or always use it.
Helm 2 commands run in CI/CD systems should use --force by default, as it doesn't really hurt.
Helm 3 commands run by humans could run without --force and add it if they fail.
Helm 3 commands run in CI/CD systems could use --force if they prefer, but then every resource will be kubectl replaced instead of kubectl applied, even when it isn't needed.
Helm 3 commands run with --force that fail require manually deleting the resources that failed to upgrade, as there is currently no more forceful option representing kubectl replace --force, which is what helm v2's --force did: delete and then recreate the resource.
Reference discussion: helm/helm#7082 (comment)
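A sketch of that manual recovery; &lt;kind&gt;/&lt;name&gt; is a placeholder for whichever resource got stuck:

```shell
# There is no built-in helm v3 equivalent of kubectl replace --force,
# so delete the stuck resource by hand, then re-run the upgrade.
kubectl delete <kind>/<name> --namespace <namespace>
helm upgrade <release> <chart> --install --cleanup-on-fail
```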
Converting chart definitions to helm3
When we opt to make our chart helm v3 native, without support for helm 2, upgrading Chart.yaml is very simple: we embed requirements.yaml into Chart.yaml and bump the apiVersion in Chart.yaml. If we have CRDs, they need to be provided in a crds directory next to templates.
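A hypothetical Chart.yaml after such a conversion (the chart name, version, and subchart entry are made-up illustrations):

```yaml
# apiVersion bumped from v1 to v2, making the chart helm v3 native.
apiVersion: v2
name: example-chart
version: 1.0.0
# These entries previously lived in requirements.yaml.
dependencies:
  - name: some-subchart
    version: 0.5.0
    repository: https://example.org/charts
```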
I don't think there is any need to make this transition for a long time, as it provides little benefit as far as I know.