proposals: add proposal for a chrooted kubelet #131
Conversation
LGTM
hostPath volumes are managed entirely by the docker daemon process, including SELinux context applying), so Kubelet makes no operations at those paths). This will likely change in the future, at which point a shared bindmount of `/` will be made available at a known path in the Kubelet chroot. This change will …
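The passage above refers to exposing the host's `/` inside the kubelet chroot as a shared bind mount. Below is a minimal sketch of how such a mount could be set up, assuming an invented chroot path of `/var/lib/kubelet-chroot/rootfs` (the proposal does not fix one):

```go
// Sketch only: recursively bind-mount the host's "/" into a hypothetical
// kubelet chroot and mark it shared, so mounts made later on the host
// propagate into the chroot.
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Invented path; the real location would be chosen by the implementation.
	target := "/var/lib/kubelet-chroot/rootfs"

	if err := os.MkdirAll(target, 0755); err != nil {
		log.Fatal(err)
	}
	// Equivalent of `mount --rbind / $target`.
	if err := unix.Mount("/", target, "", unix.MS_BIND|unix.MS_REC, ""); err != nil {
		log.Fatal(err)
	}
	// Equivalent of `mount --make-rshared $target`.
	if err := unix.Mount("", target, "", unix.MS_SHARED|unix.MS_REC, ""); err != nil {
		log.Fatal(err)
	}
}
```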
Why do you think such a change will be intrusive?
It will require special-casing in kubelet code which isn't needed until then. I've tried to clarify this a little more.
#### Waiting for Flexv2 + port-forwarding changes
The CRI effort plans to change how [port-forward](https://github.com/kubernetes/kubernetes/issues/29579) works, towards a method which will not depend explicitly on socat or other networking utilities.
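Today the kubelet shells out to socat (via nsenter into the pod's network namespace) for port-forward. Purely as an illustration of the in-process direction, and not the actual CRI design, the forwarding itself can be done with a dial and `io.Copy`; namespace handling is omitted and the addresses are placeholders:

```go
// Illustration only: a port-forward style proxy done in-process with io.Copy
// instead of exec'ing socat. Real port-forwarding also has to reach the
// pod's network namespace, which is not shown here.
package main

import (
	"io"
	"log"
	"net"
)

// forward copies bytes in both directions between the client connection and
// a new connection to targetAddr; each connection is closed once the copy
// writing to it finishes.
func forward(client net.Conn, targetAddr string) {
	backend, err := net.Dial("tcp", targetAddr)
	if err != nil {
		client.Close()
		return
	}
	go func() {
		io.Copy(backend, client) // client -> pod
		backend.Close()
	}()
	io.Copy(client, backend) // pod -> client
	client.Close()
}

func main() {
	// Placeholder addresses: listen locally and forward to a pod port.
	ln, err := net.Listen("tcp", "127.0.0.1:9000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go forward(client, "10.0.0.5:8080")
	}
}
```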
Doesn't the kubelet depend on other tools in addition to socat? ebtables is an example, I think.
Yes, as I call out in the downsides to this one with:

> Finally, it's likely there are dependencies that neither of these proposals cover.

I've worded the downside more strongly and referenced issue 26093.
#### Timeframe
1.6?
We can get started with testing right away. We can decide on a release based on the testing results.
@euank can you finish this up?
cc @lucab @jonboulle
Addressed @vishh's comments. xref kubernetes/kubernetes#35249, which is one of the prerequisite steps for this. I'll pick that PR up again soon.
@@ -0,0 +1,199 @@
<!-- BEGIN MUNGE: UNVERSIONED_WARNING --> |
I don't think we need these warnings in this repo anymore.
please - I keep sending PRs to remove it.
#### Hyperkube Image Packaging
The Hyperkube image is distributed as part of an official release to the `gcr.io/google_containers` registry, but is not included along with the `kube-up` artifacts used for deployment. |
See kubernetes/kubernetes#16508. Some of the current thinking here is to split the hyperkube container into a control-plane version and a node version. I'm thinking that this would be the node version.
Indeed, the node version is relevant here, but I don't think there's any need to codify this observation here; it reads clearly enough either way and hyperkube has not been split yet.
This is different than running the Kubelet as a pod. Rather than using namespaces, it uses only a chroot and shared bind mounts.
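To make the contrast concrete, here is a hypothetical sketch of that model: only the root directory changes via chroot(2), and no new namespaces are created. The chroot path and kubelet flags are invented for illustration:

```go
// Hypothetical sketch: confine the kubelet with chroot(2) only. Unlike a pod,
// no new PID, network, or mount namespaces are created; the process keeps the
// host's namespaces and sees the host through shared bind mounts set up beforehand.
package main

import (
	"log"
	"os/exec"

	"golang.org/x/sys/unix"
)

func main() {
	// Invented path for a directory containing the kubelet and its helper binaries.
	newRoot := "/var/lib/kubelet-chroot"

	// Change only the filesystem root; nothing is unshared.
	if err := unix.Chroot(newRoot); err != nil {
		log.Fatal(err)
	}
	if err := unix.Chdir("/"); err != nil {
		log.Fatal(err)
	}

	// Start the kubelet from inside the chroot; the flag is illustrative only.
	cmd := exec.Command("/kubelet", "--v=2")
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```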
## Alternatives |
Another option/enhancement is to set up some simple resource cgroups so that we can be sure the kubelet can keep running. On a heavily loaded system it would be nice to carve out some guaranteed elbow room for the kubelet. Full containerization (namespace + cgroup) wouldn't be necessary.
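A rough sketch of that idea, assuming cgroup v1 mounted at `/sys/fs/cgroup` and using arbitrary example values (this is not part of the proposal): create dedicated cpu and memory cgroups for the kubelet and move its PID into them so its resource usage is weighted and accounted separately from workloads.

```go
// Rough sketch, not part of the proposal: give the kubelet its own cpu and
// memory cgroups (cgroup v1 assumed mounted at /sys/fs/cgroup). Values are arbitrary.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	pid := strconv.Itoa(os.Getpid()) // in practice, the kubelet's PID

	cgroups := map[string]map[string]string{
		"cpu":    {"cpu.shares": "1024"},                  // relative CPU weight
		"memory": {"memory.limit_in_bytes": "1073741824"}, // 1 GiB, example only
	}

	for subsystem, settings := range cgroups {
		dir := filepath.Join("/sys/fs/cgroup", subsystem, "kubelet")
		if err := os.MkdirAll(dir, 0755); err != nil {
			log.Fatal(err)
		}
		for file, value := range settings {
			if err := os.WriteFile(filepath.Join(dir, file), []byte(value), 0644); err != nil {
				log.Fatal(err)
			}
		}
		// Move the process into the new cgroup.
		if err := os.WriteFile(filepath.Join(dir, "cgroup.procs"), []byte(pid), 0644); err != nil {
			log.Fatal(err)
		}
	}
}
```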
This proposal is primarily about handling kubelet dependencies, not resource management.
The kubelet should have appropriately configured cgroups, oom_score, etc., but that's out of scope for this proposal.
+1 for resource isolation. I would leave it as an implementation detail though, since systemd is still not the default means to deploy the kubelet.
On Wed, Dec 21, 2016 at 9:52 PM, Joe Beda commented on this pull request:
In contributors/design-proposals/kubelet-rootfs-distribution.md (#131 review):
> +
+## Current Use
+
+This method of running the Kubelet is already in use by users of CoreOS Linux. The details of this implementation are found in the [kubelet wrapper documentation](https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html).
+
+## Implementation
+
+### Target Distros
+
+The two distros which benefit the most from this change are GCI and CoreOS. Initially, these changes will only be implemented for those distros.
+
+This work will also only initially target the GCE provider and `kube-up` method of deployment.
+
+#### Hyperkube Image Packaging
+
+The Hyperkube image is distributed as part of an official release to the `gcr.io/google_containers` registry, but is not included along with the `kube-up` artifacts used for deployment.
See kubernetes/kubernetes#16508. Some of the current thinking here is to split the hyperkube container into a control-plane version and a node version. I'm thinking that this would be the node version.
In contributors/design-proposals/kubelet-rootfs-distribution.md (#131 review):
> +
+
+## FAQ
+
+#### Will this replace or break other installation options?
+
+Other installation options include using RPMs, DEBs, and simply running the statically compiled Kubelet binary.
+
+All of these methods will continue working as they do now. In the future they may choose to also run the kubelet in this manner, but they don't necessarily have to.
+
+
+#### Is this running the kubelet as a pod?
+
+This is different than running the Kubelet as a pod. Rather than using namespaces, it uses only a chroot and shared bind mounts.
+
+## Alternatives
Another option/enhancement is to set up some simple resource cgroups so that we can be sure the kubelet can keep running. On a heavily loaded system it would be nice to carve out some guaranteed elbow room for the kubelet. Full containerization (namespace + cgroup) wouldn't be necessary.
@vishh any open issues on this from your side?
Nope. Let's get this done!! Thanks for spearheading, @euank!
LGTM
proposals: add proposal for a chrooted kubelet
After discussion with @philips and @vishh, we settled on running the Kubelet in a chroot as a fairly sane way to solve the problem of mount utility availability (among a couple of other problems).
This proposal outlines a few more details of that idea and provides some alternatives.
Related issues: https://issues.k8s.io/19765, https://issues.k8s.io/16508, https://issues.k8s.io/35224, https://issues.k8s.io/35249, and others I'm sure.
cc @vishh @philips
Moved from kubernetes/kubernetes#35328 due to repo reorganization; some context remains there.