
Experimental support for running kubelet in container #6936

Closed
wants to merge 1 commit

Conversation


@pmorie pmorie commented Apr 16, 2015

WIP for #6848


@googlebot googlebot commented Apr 16, 2015

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed, please reply here (e.g. I signed it!) and we'll verify. Thanks.


  • If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check your existing CLA data and verify that your email is set on your git commits.
  • If you signed the CLA as a corporation, please let us know the company's name.

@pmorie pmorie commented Apr 16, 2015

So far this PR:

  1. Injects mounters into volume plugins
  2. Adds a new mounter that knows how to nsenter the host's mount namespace from a container
  3. Adds a --containerized flag to the kubelet
  4. Skips having the kubelet join the devices cgroup (this will be handled by docker's --privileged option) when the kubelet is containerized
  5. Injects an nsentering mounter into plugins when the kubelet is containerized

It builds; tests are going to be broken.

Next up: make an image that has this kubelet binary and nsenter and test.


@pmorie pmorie commented Apr 17, 2015

How could I forget @yifan-gu @dchen1107


@vishh vishh commented Apr 17, 2015

This hack LGTM.


@pmorie pmorie commented Apr 17, 2015

@vishh Yep, this is 99% hack for POC purposes. The mounter injection would be the 1% that we can use.

@pmorie pmorie force-pushed the pmorie:run-in-container branch from c7f857c to 32256b0 Apr 17, 2015

@pmorie pmorie commented Apr 17, 2015

Now with more hacks. Tmpfs e2e runs locally with kubelet running in container; secrets doesn't. Next step will be to investigate secrets.

@pmorie pmorie force-pushed the pmorie:run-in-container branch 2 times, most recently from 2c6986c to 46db2ef Apr 20, 2015

@pmorie pmorie commented Apr 20, 2015

Code in this branch works with docker 1.6. There's currently a hard dependency on 1.6 because it switches the mount propagation mode of bind mounts from MOUNT_PRIVATE to MOUNT_SLAVE; with slave propagation, mounts made to host-fs volumes (made from the host's mount ns) that are bind-mounted into containers are propagated into those bind mounts. Docker 1.5 uses MOUNT_PRIVATE. I've tested by doing an e2e run locally. Some cases don't work, but they appear to be the usual suspects, and most of them shouldn't be run against local anyway (will open PRs to skip these).

One nit I found: mounts for volumes are not currently cleaned up when the kubelet runs in a container (i.e., umount fails with "device or resource busy"). Those mount points may show as busy because they're under the bind mount of the kubelet root dir. Need to do more digging on this and follow up.
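For anyone reproducing this, the propagation mode a mount actually got can be read out of /proc/self/mountinfo without root: the optional fields carry `shared:N` for shared mounts and `master:N` for slaves, and their absence means private (Docker 1.5's MOUNT_PRIVATE behavior). A small read-only check for the root mount, as a sketch:

```shell
# Print the propagation mode of the root mount (read-only, no root needed).
# mountinfo fields: id parent maj:min root mountpoint options [optional...] - fstype ...
# Optional fields run from field 7 up to the "-" separator.
awk '$5 == "/" { prop = "private"
  for (i = 7; $i != "-"; i++) {
    if ($i ~ /^shared:/) prop = "shared"
    else if ($i ~ /^master:/) prop = "slave"
  }
  print prop; exit }' /proc/self/mountinfo
```

Swap the `$5 == "/"` test for any other mountpoint (e.g. the kubelet root dir) to check a specific volume mount.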

With that said, I'm curious about how folks feel about some of this code going in so that we can experiment further without carrying the patch. I would want to:

  1. Rebase on top of #6400 when it goes in
  2. Promote the mounter into a field of the kubelet that the kubelet's creator can supply (instead of having a getMounter method that would introduce a dependency on `NsenterMounter`)
  3. Pull the NsenterMounter out for now or use some other special casing around it which is clearly marked as experimental

Unit and integration tests still broken; will fix those up next.

Thanks to @eparis and @vbatts again for support on getting this working.

Any thoughts @thockin @vishh @smarterclayton?

```go
@@ -204,6 +205,9 @@ func (s *KubeletServer) AddFlags(fs *pflag.FlagSet) {
	// Flags intended for testing, not recommended for use in production environments.
	fs.BoolVar(&s.ReallyCrashForTesting, "really_crash_for_testing", s.ReallyCrashForTesting, "If true, when panics occur crash. Intended for testing.")
	fs.Float64Var(&s.ChaosChance, "chaos_chance", s.ChaosChance, "If > 0.0, introduce random client errors and latency. Intended for testing. [default=0.0]")

	// HACK: are you containerized?
	fs.BoolVar(&s.Containerized, "containerized", s.Containerized, "Indicates whether kubelet is running in a container")
```

@eparis eparis commented Apr 20, 2015

Would it be better to check for /.dockerenv or /.dockerinit instead of having to make the user remember on the command line?
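The auto-detection @eparis suggests is a file-existence heuristic: Docker creates /.dockerenv (and, in older versions, /.dockerinit) at the container root. A sketch of that check, hedged as a heuristic rather than the approach this PR takes (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"os"
)

// anyExists reports whether any of the given marker files exist.
// Checking /.dockerenv and /.dockerinit is a heuristic for "am I inside
// a Docker container?"; it only covers Docker and can be fooled, which
// is why a --containerized flag is the explicit alternative.
func anyExists(paths ...string) bool {
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("containerized:", anyExists("/.dockerenv", "/.dockerinit"))
}
```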

@pmorie pmorie commented Apr 22, 2015

@eparis Eventually that seems sane.

@pmorie pmorie force-pushed the pmorie:run-in-container branch 3 times, most recently from 9dd48a2 to 31f9f88 Apr 23, 2015

@pmorie pmorie commented Apr 27, 2015

Rebased, will test later. Next up for this is fixing tests. Still waiting on #6400.


@pmorie pmorie commented Apr 27, 2015

I think I've got a handle on getting unmount to work correctly: I need to perform the unmount first in the kubelet container's mount ns, and then on the host.
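The ordering described above can be sketched as a pair of commands, container namespace first, then the host's via nsenter. The /rootfs/proc path and function name are hypothetical, matching the bind-mount assumption used elsewhere in this thread, not the PR's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// twoStepUnmountCmds returns the commands for the proposed unmount fix:
// first unmount in the kubelet container's own mount namespace, then in
// the host's mount namespace (entered via nsenter through the hypothetical
// /rootfs/proc bind mount). Running the host-side umount alone can fail
// with EBUSY while the container-side mount still pins the target.
func twoStepUnmountCmds(target string) [][]string {
	return [][]string{
		{"umount", target},                                                   // 1: container's mount ns
		{"nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--", "umount", target}, // 2: host's mount ns
	}
}

func main() {
	for _, cmd := range twoStepUnmountCmds("/var/lib/kubelet/pods/x/volumes/v") {
		fmt.Println(strings.Join(cmd, " "))
	}
}
```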

@pmorie pmorie force-pushed the pmorie:run-in-container branch 9 times, most recently from b4a01cb to b0eef85 Apr 28, 2015

@pmorie pmorie commented Apr 29, 2015

Sweet, more rebasing later.

@pmorie pmorie force-pushed the pmorie:run-in-container branch from b0eef85 to e09b7d3 Apr 29, 2015
@pmorie pmorie force-pushed the pmorie:run-in-container branch from 339e429 to 8f5eae4 May 1, 2015

@pmorie pmorie commented May 1, 2015

I ran emptyDir and secrets E2Es locally with SELinux enforcing against containerized kubelet, both passed.

@pmorie pmorie force-pushed the pmorie:run-in-container branch 5 times, most recently from dff50a2 to 7c88a6b May 1, 2015
@pmorie pmorie changed the title WIP: Run kubelet in container Experimental support for running kubelet in container May 1, 2015

@pmorie pmorie commented May 1, 2015

@vmarmol My rebase party is complete, I think this is ready for final review.

@pmorie pmorie force-pushed the pmorie:run-in-container branch 5 times, most recently from 449895d to 3d25e40 May 1, 2015

@pmorie pmorie commented May 1, 2015

I'm blocked at the moment from doing the E2E build because of the bug this commit fixes:

rhatdan/moby1@350a636

They're cutting a new 1.6 package for Fedora, but it will probably be a couple of days before it shows up in the repos. :-/


@pmorie pmorie commented May 1, 2015

@vmarmol If you are running docker 1.5 locally and could find it in your heart to kick off an e2e run, I would definitely owe you 🍻


@vmarmol vmarmol commented May 1, 2015

Spoke to @pmorie on IRC and we're gonna split the PR into 4:

  • Injecting mounter
  • nsenter mounter
  • Local Dockerized Kubelet
  • Building Dockerized Kubelet

@vmarmol vmarmol commented May 1, 2015

@pmorie the default Docker on GCE is still 1.5, so I'd be happy to test :D I'll kick off an e2e with this branch.

@pmorie pmorie force-pushed the pmorie:run-in-container branch 3 times, most recently from c79c936 to f45cbb7 May 4, 2015
@pmorie pmorie force-pushed the pmorie:run-in-container branch from f45cbb7 to 0d64418 May 4, 2015

@pmorie pmorie commented May 4, 2015

Closing this out since we're splitting out separate PRs.


8 participants