
Feature Request: helm delete --wait #2378

Closed
ahawkins opened this issue May 3, 2017 · 31 comments · Fixed by #9702

@ahawkins
Contributor

ahawkins commented May 3, 2017

helm install --wait can wait for everything to be ready. Can a new helm delete --wait do something similar? I'm using helm delete --purge in my build pipeline followed by this script (to do something similar to --wait).

#!/usr/bin/env bash

set -euo pipefail

main() {
	local kinds=( Service Deployment Pod Secret ConfigMap )
	local counter=0 attempts=30

	for release in "$@"; do
		echo "--> Deleting release ${release}"

		if helm ls --kube-context "${KUBE_CONTEXT}" | grep -qF "${release}"; then
			echo "==> Found helm release; deleting with --purge"
			helm delete "${release}" --purge  --kube-context "${KUBE_CONTEXT}"
		else
			echo "==> No release found; deleting manually"

			for kind in "${kinds[@]}"; do
				echo "==> Deleting any dangling ${kind}"

				kubectl delete "${kind}" \
					-l "release=${release}" \
					-n "${KUBE_NAMESPACE}" \
					--force \
					--grace-period 0 \
					--context "${KUBE_CONTEXT}" 2>/dev/null
			done
		fi

		echo "--> Awaiting resource deletion confirmation"
		for kind in "${kinds[@]}"; do
			counter=0

			while [ $counter -lt $attempts ]; do
				pending_resources="$(kubectl get "${kind}" \
					-o wide \
					-l "release=${release}" \
					-n "${KUBE_NAMESPACE}" \
					--context "${KUBE_CONTEXT}" 2>/dev/null
				)"

				if [ -n "${pending_resources}" ]; then
					echo "${release} ${kind} still running. ${counter}/${attempts} tests completed; retrying."
					echo "${pending_resources}" 1>&2
					echo 1>&2

					# NOTE: The pre-increment usage. This makes the arithmetic expression
					# always exit 0. The post-increment form exits non-zero when counter
					# is zero. More information here: http://wiki.bash-hackers.org/syntax/arith_expr#arithmetic_expressions_and_return_codes
					((++counter))
					sleep 10
				else
					break
				fi
			done

			if [ $counter -eq $attempts ]; then
				echo "${release} ${kind} failed to delete in time.";
				return 1
			fi
		done

		echo "--> Awaiting helm confirmation"
		counter=0

		while [ $counter -lt $attempts ]; do
			if helm ls --all --kube-context "${KUBE_CONTEXT}" | grep -qF "${release}"; then
				echo "${release} still in tiller. ${counter}/${attempts} checks completed; retrying."

				# NOTE: The pre-increment usage. This makes the arithmetic expression
				# always exit 0. The post-increment form exits non-zero when counter
				# is zero. More information here: http://wiki.bash-hackers.org/syntax/arith_expr#arithmetic_expressions_and_return_codes
				((++counter))
				sleep 10
			else
				break
			fi
		done

		if [ $counter -eq $attempts ]; then
			echo "${release} failed to purge from tiller in time.";
			return 1
		fi
	done
}

main "$@"
@michelleN
Member

There needs to be some refactoring of the --wait logic before this can happen, but yeah, it makes sense.

@alisondy

/remove-lifecycle rotten

@alisondy

/remove-lifecycle stale

jsravn pushed a commit to lightbend/console-charts that referenced this issue Oct 15, 2018
Without helm/helm#2378 there isn't a way to
deterministically know when all resources are finished being removed. A
simple thing we can do is warn the user on a ES_FORCE_INSTALL=true.
jsravn added a commit to lightbend/console-charts that referenced this issue Oct 17, 2018
Without helm/helm#2378 there isn't a way to
deterministically know when all resources are finished being removed. A
simple thing we can do is warn the user on a ES_FORCE_INSTALL=true.
@umomany

umomany commented May 8, 2019

/remove-lifecycle rotten

@umomany

umomany commented May 8, 2019

/remove-lifecycle stale

@helm helm deleted a comment from fejta-bot May 8, 2019
@bacongobbler
Member

We removed the stale bot from helm/helm after the move to the CNCF. No need to try and remove the stale labels; it's all been removed. :)

@YingjunHu

Any luck on this?

@bacongobbler
Member

Nope. Are you working on an implementation?

@YingjunHu

Not yet since I wasn't sure if there's any ongoing fix. I'll try to implement it if it's not planned for both helm2 and 3

@bacongobbler
Member

AFAIK nobody's working on this. Feel free to work on an implementation!

@bacongobbler
Member

bacongobbler commented Aug 19, 2020

Is the intention here that order would be respected too?

I would assume so, yes. We'd need to delete objects in reverse order. Otherwise we may delete its parent and report a false positive that the object no longer exists (e.g. by deleting a ClusterRole before deleting a ClusterRoleBinding).
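The reverse-order idea above can be sketched in shell. This is only an illustration of the ordering, not Helm's implementation; the resource kinds and the commented-out kubectl call are placeholders.

```shell
#!/usr/bin/env sh
# Illustrative sketch: "delete" resource kinds in reverse install order, so
# dependents (ClusterRoleBinding) go before what they reference (ClusterRole).
delete_in_reverse() {
	# Build the argument list in reverse, then walk it.
	reversed=""
	for kind in "$@"; do
		reversed="$kind $reversed"
	done
	for kind in $reversed; do
		echo "deleting $kind"
		# A real version would do something like:
		# kubectl delete "$kind" -l "release=${RELEASE}" --ignore-not-found
	done
}

delete_in_reverse ClusterRole ClusterRoleBinding
# prints:
# deleting ClusterRoleBinding
# deleting ClusterRole
```

The real feature would walk the rendered manifest rather than a fixed kind list, but the ordering concern is the same.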

@bacongobbler
Member

Re-marking this as a feature; this has been a highly requested feature for some time. If someone from the community wishes to implement this, feel free.

@github-actions

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

@github-actions github-actions bot added the Stale label Nov 22, 2020
@Bessonov

Activity :/

@github-actions github-actions bot removed the Stale label Nov 26, 2020
@bacongobbler
Member

In the interest of preventing spam to users subscribing to this thread, instead of commenting "+1" or "waiting for it", please add a 👍 to the OP or consider hitting the "subscribe" button on the top-right, under "Notifications". That way we can keep the conversation relevant to the discussion at hand. I've gone ahead and deleted those comments. Thanks!

@helm helm deleted a comment from mickdewald Dec 16, 2020
@helm helm deleted a comment from arocki7 Dec 16, 2020
@helm helm deleted a comment from omerfsen Dec 16, 2020
@bacongobbler
Member

bacongobbler commented Dec 16, 2020

That being said, I'm interested to hear who from the community is willing to work on this feature. OP proposed this idea over three and a half years ago.

There has to be a point where we say the community has not taken enough initiative on this feature request, and we have to close it out as stale/unimplemented.

@omerfsen @arocki7 @micktg any takers?

@Bessonov

@bacongobbler I don't think that closing this issue is the right way to handle it. If you close it, you'll just get new issues about this topic. Just remove the stale bot; then everyone gets less spam. A bot against "+1" spammers would do a better job.

@jerriais

Hi @bacongobbler
I am working on this now.
When I try to delete a deployment, I want to wait for the pods controlled by the deployment until they have all stopped.
However, those pods don't appear in the chart manifest, so I have no idea which label or selector I can use.
Any suggestions?

When I'm doing this manually, I often find that even when all pods are gone, some services are still being deleted (watch kubectl get svc). Maybe it could work by checking whether all services are gone; then the delete can be considered complete...
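In the spirit of the OP's script, that "check until gone" idea can be expressed as a generic polling helper that retries an arbitrary list command until it prints nothing. A sketch only; the commented kubectl example assumes the chart labels its resources with release=<name>, which varies per chart.

```shell
#!/usr/bin/env sh
# Sketch: poll a "list" command until it produces no output, i.e. until the
# matching resources are gone. Returns 1 if they remain after all attempts.
wait_for_gone() {
	attempts=$1; shift
	i=0
	while [ "$i" -lt "$attempts" ]; do
		# Empty stdout means nothing matched: the resources are gone.
		if [ -z "$("$@" 2>/dev/null)" ]; then
			return 0
		fi
		i=$((i + 1))
		sleep "${SLEEP_SECONDS:-10}"
	done
	return 1
}

# Example (labels are an assumption about the chart):
# wait_for_gone 30 kubectl get svc -l "release=${RELEASE}" -n "${KUBE_NAMESPACE}" --no-headers
```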

@bacongobbler
Member

bacongobbler commented Jan 19, 2021

I don't think that closing this issue is the right way to handle it. If you close it, you get new issues about this topic.

The Helm project receives about 3-4 new tickets per day. Nearly all of them are support issues or requests for clarification. The stale issue bot has been incredibly helpful for managing these, as many of them are left open or fall off our radar because there's little engagement from the OP or the community. On the off chance that there is a ticket that needs to stay open, we can add "keep open" labels to the ticket.

We are very keen to review contributions from community members who are willing to take the time to implement this feature. But if the ticket goes stale due to 3 months of inactivity (and in this case, it's been left open for nearly 4 years), then it's clearly not a priority ticket item, or community members have found a workaround (as the OP demonstrated in his first comment) and there's little incentive to implement the requested feature.

@archerbj

archerbj commented Mar 9, 2021

looking forward to seeing this

@Ornias1993

Ornias1993 commented Mar 13, 2021

I don't think that closing this issue is the right way to handle it. If you close it, you get new issues about this topic.

The Helm project receives about 3-4 new tickets per day. Nearly all of them are support issues or requests for clarification. The stale issue bot has been incredibly helpful for managing these, as many of them are left open or fall off our radar because there's little engagement from the OP or the community. On the off chance that there is a ticket that needs to stay open, we can add "keep open" labels to the ticket.

We are very keen to review contributions from community members who are willing to take the time to implement this feature. But if the ticket goes stale due to 3 months of inactivity (and in this case, it's been left open for nearly 4 years), then it's clearly not a priority ticket item, or community members have found a workaround (as the OP demonstrated in his first comment) and there's little incentive to implement the requested feature.

Just because no one submits a fix doesn't mean no one would be willing to. It only means what it says on the tin: no one submitted a fix.
Assuming willingness is the issue is a rhetorical mistake. It could just as well be that the users who need this feature aren't comfortable or knowledgeable enough to fix it, don't have the time required to put into it, etc.

For example:
I personally would love this feature, but I:
A. Am not comfortable or knowledgeable with your codebase at all
B. Don't find myself having the required development skillset for this kind of PR
C. Don't have the time to put into it besides 40+ hours a week on open source already

I would love to put it in if I had the hundred+ hours available to learn your codebase and figure out a way to put it in.

@ishaan-rd

Any updates?

@eranelbaz

Got this issue...
I uninstalled a helm chart but all the resources were still up...
I had to remove them using the kubectl CLI and search for all the resources by hand...

@patsevanton

patsevanton commented Jul 13, 2021

I use werf (https://werf.io/, https://github.com/werf/werf):
werf helm uninstall application

zak905 pushed a commit to zak905/helm that referenced this issue Jan 19, 2023
If set, 'uninstall' command will wait until all the resources are deleted before returning.
It will wait for as long as --timeout

closes helm#2378

Signed-off-by: Mike Ng <ming@redhat.com>
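Per the commit message above, usage on a Helm version that includes the linked fix (#9702) would look along these lines; the release name is a placeholder.

```shell
# Hedged usage sketch: block until the release's resources are actually
# deleted, waiting at most --timeout ("myrelease" is a placeholder name).
helm uninstall myrelease --wait --timeout 5m
```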