
Flags "delete-local-data" & "ignore-daemonsets" #20

Closed
rstreics opened this issue Nov 18, 2019 · 10 comments
Assignees: bwagner5
Labels: Type: Enhancement (New feature or request)

Comments

@rstreics

Could you please add the ability to configure these flags (example usage below):

  • --ignore-daemonsets=false: Ignore DaemonSet-managed pods
  • --delete-local-data=false: Continue even if there are pods using emptyDir (local data that will be deleted when the node is drained)
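
For reference, both flags map directly onto kubectl drain options. A manual drain with the settings I'm asking for would look roughly like this (the node name and grace period are just placeholders):

# manual equivalent of the drain this handler performs on an interruption notice
kubectl drain ip-10-0-0-42.ec2.internal \
    --ignore-daemonsets=false \
    --delete-local-data=true \
    --grace-period=120
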
@bwagner5 bwagner5 self-assigned this Nov 18, 2019
@bwagner5
Contributor

Yeah, I think those two args to drain make sense to be configurable.

I'm curious why you would want --delete-local-data=false. Is the idea that you want anything using emptyDir to run as long as possible, even though it's about to receive a SIGKILL? That's the idea behind --ignore-daemonsets=true (our default), so that the node can continue to operate while the other pods are gracefully shutting down. Just curious about the delete-local-data=false use case for spot interruptions.

@rstreics
Author

Hi, for my personal use case I need delete-local-data=true and ignore-daemonsets=false.
In the feature suggestion I listed the default values from kubectl. For spot instances, delete-local-data=true and ignore-daemonsets=false make sense as the default settings, because the nodes will shut down anyway.
I assumed those flags could be made configurable.

@kivagant-ba

Hello. I'm also trying to use the node-termination-handler. This is what I got in the logs:

2019/11/15 07:18:56 cannot delete Pods with local storage (use --delete-local-data to override): ...//a long list of pods//...

Do I understand correctly that it does not really help to drain the pods in my case?

@rstreics
Author

@kivagant-ba In short, this feature request would solve that. If you are using emptyDir, the current release of this tool won't drain the nodes.
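
If it helps, a rough way to see which pods would block the drain (i.e. which ones mount an emptyDir volume) is something like the snippet below; this is just an illustration and assumes jq is available:

# list namespace/name of pods that use an emptyDir volume
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[] | select(any(.spec.volumes[]?; .emptyDir != null)) | .metadata.namespace + "/" + .metadata.name'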

@bwagner5
Contributor

Yep, hoping to have a PR open this week.

@bwagner5
Contributor

I was slightly late on the PR, but it's open now ^^

@bwagner5
Contributor

Also, @kivagant-ba and @rstreics, I have defaulted delete-local-data to true. I think that makes the most sense for spot interruptions, since the whole node is about to go down anyway. You could make the case that you're using an EBS volume that can persist, but I think the spirit of emptyDir is that it lives with the node, and you should be using a PersistentVolume if you want data to persist across nodes. Let me know if you disagree with that default.

@kivagant-ba

Thank you, @bwagner5! I will test the change later.

@kivagant-ba

@bwagner5 Are there any newer images available somewhere? I'd like to test the changes, but it seems they have not been released yet.

@bwagner5
Contributor

The latest image is published as v1.0.0. If you're caching a local version pulled in the past, you can pull this specific digest: amazon/aws-node-termination-handler:v1.0.0@sha256:b8feb9e33b1dd02961496eb32f473cedd8d33cfd8741a39c412783d603fa30de

v1.0.0 is now pinned at the GitHub release, so it will not change from this point on.

I tested the following to make sure the configuration options are in that image:

docker run -it -e DRY_RUN=true -e NODE_NAME=test amazon/aws-node-termination-handler:v1.0.0@sha256:b8feb9e33b1dd02961496eb32f473cedd8d33cfd8741a39c412783d603fa30de
aws-node-termination-handler arguments:
	dry-run: true,
	node-name: test,
	metadata-url: http://169.254.169.254,
	kubernetes-service-host: ,
	kubernetes-service-port: ,
	delete-local-data: true,
	ignore-daemon-sets: true
	grace-period: -1
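
If you want to override the new defaults, a dry-run along these lines should work. I'm assuming here that the environment variable names mirror the flag names in the same way DRY_RUN and NODE_NAME do, so please double-check against the README:

docker run -it \
    -e DRY_RUN=true \
    -e NODE_NAME=test \
    -e DELETE_LOCAL_DATA=false \
    -e IGNORE_DAEMON_SETS=true \
    amazon/aws-node-termination-handler:v1.0.0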
