Experimental support for running kubelet in container #6936
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed, please reply here.
So far this PR:
It builds; tests are going to be broken. Next up: make an image that has this kubelet binary and nsenter, and test.
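Since the image will ship nsenter alongside the kubelet, mount operations can be wrapped in nsenter so they take effect in the host's mount namespace rather than the container's. A minimal sketch of how such a wrapper might build the command, assuming the host's root is bind-mounted into the kubelet container at a hypothetical /rootfs (the function name and paths are illustrative, not the PR's actual code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// hostMountCmd builds (but does not run) a mount command wrapped in
// nsenter so that, when executed, it runs inside the host's mount
// namespace. The /rootfs/proc/1/ns/mnt path assumes the host rootfs is
// bind-mounted into the kubelet container at /rootfs; this is a sketch,
// not the implementation in this PR.
func hostMountCmd(source, target, fstype string) *exec.Cmd {
	args := []string{
		"--mount=/rootfs/proc/1/ns/mnt", // enter the host's mount namespace
		"--",
		"mount", "-t", fstype, source, target,
	}
	return exec.Command("nsenter", args...)
}

func main() {
	cmd := hostMountCmd("tmpfs", "/var/lib/kubelet/pods/p/volumes/v", "tmpfs")
	fmt.Println(cmd.Args)
}
```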
How could I forget @yifan-gu @dchen1107
This hack LGTM.
@vishh Yep, this is 99% hack for POC purposes. The mounter injection would be the 1% that we can use. |
Force-pushed c7f857c to 32256b0.
Now with more hacks. The tmpfs e2e runs locally with the kubelet running in a container; secrets doesn't. Next step will be to investigate secrets.
Force-pushed 2c6986c to 46db2ef.
Code in this branch works with Docker 1.6. Currently there's a dependency on 1.6 because it switches the mount propagation mode of bind mounts. One nit that I found: mounts for volumes are not currently cleaned up when the kubelet is run in a container (i.e., umount fails). With that said, I'm curious how folks feel about some of this code going in so that we can experiment further without carrying the patch. I would want to:

Unit and integration tests are still broken; will fix those up next. Thanks to @eparis and @vbatts again for support on getting this working. Any thoughts @thockin @vishh @smarterclayton?
@@ -204,6 +205,9 @@ func (s *KubeletServer) AddFlags(fs *pflag.FlagSet) {
 	// Flags intended for testing, not recommended used in production environments.
 	fs.BoolVar(&s.ReallyCrashForTesting, "really_crash_for_testing", s.ReallyCrashForTesting, "If true, when panics occur crash. Intended for testing.")
 	fs.Float64Var(&s.ChaosChance, "chaos_chance", s.ChaosChance, "If > 0.0, introduce random client errors and latency. Intended for testing. [default=0.0]")
+
+	// HACK: are you containerized?
+	fs.BoolVar(&s.Containerized, "containerized", s.Containerized, "Indicates whether kubelet is running in a container")
Would it be better to check for /.dockerenv or /.dockerinit instead of having to make the user remember on the command line?
@eparis Eventually that seems sane.
Force-pushed 9dd48a2 to 31f9f88.
Rebased, will test later. Next up for this is fixing tests. Still waiting on #6400.
I think I've got a handle on getting unmount to work correctly: the unmount needs to happen first in the kubelet container's mount namespace, and then on the host.
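That two-step teardown could be sketched as a pair of commands: a plain umount in the container's own mount namespace, then a host-side umount via nsenter. The helper name and the /rootfs nsenter target are assumptions for illustration; the sketch builds the commands without running them:

```go
package main

import (
	"fmt"
	"os/exec"
)

// unmountSteps returns, in order, the commands for the two-step teardown
// described above: first unmount inside the kubelet container's mount
// namespace, then unmount the corresponding host-side mount. The nsenter
// target path assumes the host rootfs is mounted at /rootfs; this is a
// sketch, not this PR's implementation.
func unmountSteps(target string) []*exec.Cmd {
	return []*exec.Cmd{
		// step 1: unmount in the container's own mount namespace
		exec.Command("umount", target),
		// step 2: unmount on the host via nsenter
		exec.Command("nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--", "umount", target),
	}
}

func main() {
	for _, c := range unmountSteps("/var/lib/kubelet/pods/p/volumes/v") {
		fmt.Println(c.Args)
	}
}
```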
Force-pushed b4a01cb to b0eef85.
Sweet, more rebasing later.
Force-pushed b0eef85 to e09b7d3.
I ran the emptyDir and secrets E2Es locally with SELinux enforcing against the containerized kubelet; both passed.
Force-pushed dff50a2 to 7c88a6b.
@vmarmol My rebase party is complete; I think this is ready for final review.
Force-pushed 449895d to 3d25e40.
I'm blocked at the moment from doing the E2E build because of the bug this commit fixes. They're cutting a new 1.6 package for Fedora, but it will probably be a couple of days before it shows up in the repos. :-/
@vmarmol If you are running docker 1.5 locally and could find it in your heart to kick off an e2e run, I would definitely owe you 🍻
Spoke to @pmorie on IRC and we're gonna split the PR into 4:
@pmorie the default Docker on GCE is still 1.5, so I'd be happy to test :D I'll kick off an e2e with this branch
Force-pushed c79c936 to f45cbb7.
Closing this out since we're splitting out separate PRs.
WIP for #6848