
k3s cleaning after itself #74

Closed
bechampion opened this issue Feb 27, 2019 · 5 comments
Labels
kind/feature A large new piece of functionality

Comments

@bechampion


Describe the bug
k3s doesn't kill running containerd containers, nor does it clean up the veth, cni, and flannel interfaces.

To Reproduce
Just run k3s server and then stop it.

Expected behavior
Containers should go away, as well as the interfaces, etc.


Additional context
Maybe a k3s server stop command would be nice, something along those lines.

@bechampion
Author

bechampion commented Feb 27, 2019

This is what I currently do:

# kill leftover containerd-shim processes
pkill containerd-shim
# delete the CNI veth pairs, then the bridge and flannel VXLAN interfaces
ip link show | grep veth | awk '{ print $2 }' | cut -d\@ -f1 | xargs -I{} ip link delete {}
ip link delete cni0
ip link delete flannel.1
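The manual steps above could be wrapped in a small script. A minimal sketch, assuming a default k3s install (so the interface names cni0 and flannel.1 apply); the cleanup function name and the DRY_RUN variable are hypothetical, not part of k3s:

```shell
#!/bin/sh
# Hypothetical cleanup sketch wrapping the manual steps above.
# DRY_RUN=1 prints each command instead of executing it, so the
# effect can be previewed before running as root.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$@"
  else
    "$@"
  fi
}

cleanup() {
  # Stop the containerd-shim processes left behind by k3s
  run pkill containerd-shim
  # Delete every veth pair created by CNI
  for link in $(ip link show 2>/dev/null | awk -F': ' '/veth/ {print $2}' | cut -d@ -f1); do
    run ip link delete "$link"
  done
  # Remove the bridge and VXLAN interfaces set up by flannel
  run ip link delete cni0
  run ip link delete flannel.1
}
```

Running DRY_RUN=1 cleanup previews the commands; running cleanup as root applies them.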

@aaliddell
Contributor

Having those continue after the server is stopped is a sensible default. For example, if you are upgrading the k3s binary and restart the server process, you don't necessarily want all the pods etc. to come down too (particularly when multi-node, as killing the master process definitely shouldn't tear everything down on every node). Essentially, when you stop the process you are running without a master (or a kubelet in the agent-only case), so new changes can't be made, but all existing resources continue until the master returns.

As you mentioned, perhaps an option to dismantle everything set up by the local agent would be useful, without uninstalling k3s, but I don't think that should be the default. The uninstall script appears to do this cleanup properly, except for a few leftover /pause processes, which is a different issue.

@ibuildthecloud
Contributor

This is a general containerd issue. I honestly don't have a great solution for it yet, but it does bother me quite a bit too. Basically, I'd like to see a k3s cleanup command.

@bechampion
Author

Yep, something like a cleanup ... I'll try my luck this weekend. The good thing is that all the containers are within the same containerd namespace, so they would be fairly easy to pinpoint.
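Since everything lives in one containerd namespace (k8s.io by default for k3s), the containers can be enumerated and killed together with ctr. A sketch, assuming ctr is on the PATH (k3s also exposes it as k3s ctr); the function names and the CTR/NS variables are hypothetical:

```shell
#!/bin/sh
# Hypothetical sketch: enumerate and kill all tasks in the k3s
# containerd namespace. CTR and NS are overridable assumptions,
# defaulting to "ctr" and the k8s.io namespace k3s uses.
CTR="${CTR:-ctr}"
NS="${NS:-k8s.io}"

list_k3s_tasks() {
  # Print task IDs in the namespace, skipping the header line
  "$CTR" -n "$NS" tasks ls 2>/dev/null | awk 'NR > 1 {print $1}'
}

kill_k3s_tasks() {
  for id in $(list_k3s_tasks); do
    "$CTR" -n "$NS" tasks kill "$id"
  done
}
```

For example, list_k3s_tasks on a running node would print one task ID per line, and kill_k3s_tasks would send SIGTERM to each.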

@erikwilson
Contributor

As a temporary solution, if you are using systemd the new install script we are testing for /issues/65 should provide a better uninstall script for cleaning up.

5 participants