Enable privileged containers for apiserver and controller #57561
Conversation
/assign @luxas |
c2bb2f2 to db028f9
/retest |
db028f9 to 475d5c3
/open |
/reopen |
@dims: failed to re-open PR: state cannot be changed. There are no new commits on the dims:enable-privileged-container-for-apiserver-and-controller branch. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
/reopen |
Two questions: 1. unit tests. 2. can it be done by allowing a few needed caps instead of fully privileged? |
@kad we need CAP_SYS_ADMIN as we are mounting/unmounting file systems (similar to http://ceph.com/planet/no-more-privileged-containers-for-ceph-osds/). Any suggestions on the best way to do this? Thanks for the quick review! |
In case of mounts, probably no suggestions from me at the moment. If CAP_SYS_ADMIN is enough, maybe that would be the best way. |
@kad so leave the current implementation as-is? |
cc @kubernetes/sig-cluster-lifecycle-pr-reviews |
I'd also like if we could reduce the permission scope if possible. If it's only possible with privileged, fine with me |
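The two options debated above (fully privileged vs. granting only CAP_SYS_ADMIN) can be sketched roughly as below. This uses simplified local stand-ins for the `k8s.io/api/core/v1` types rather than the real API, and whether CAP_SYS_ADMIN alone actually suffices for the config-drive mount is exactly the open question in this thread.

```go
package main

import "fmt"

// Simplified local stand-ins for the k8s.io/api/core/v1 types discussed in
// this thread; field names mirror the real API but the types are local.
type Capabilities struct {
	Add []string
}

type SecurityContext struct {
	Privileged   *bool
	Capabilities *Capabilities
}

func boolPtr(b bool) *bool { return &b }

// fullyPrivileged grants every capability and disables most isolation.
func fullyPrivileged() SecurityContext {
	return SecurityContext{Privileged: boolPtr(true)}
}

// scopedToSysAdmin adds only CAP_SYS_ADMIN, the capability needed for the
// mount/unmount calls mentioned above.
func scopedToSysAdmin() SecurityContext {
	return SecurityContext{Capabilities: &Capabilities{Add: []string{"SYS_ADMIN"}}}
}

func main() {
	fmt.Println(*fullyPrivileged().Privileged)
	fmt.Println(scopedToSysAdmin().Capabilities.Add[0])
}
```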
@@ -104,6 +105,15 @@ func GetStaticPodSpecs(cfg *kubeadmapi.MasterConfiguration, k8sVersion *version.
		}, mounts.GetVolumes(kubeadmconstants.KubeScheduler)),
	}

	if cfg.CloudProvider == "openstack" {
jamiehannaford
Jan 12, 2018
Contributor
might be worth making openstack a constant and also referencing different providers in the unit tests
dims
Jan 12, 2018
Author
Member
@jamiehannaford i see "aws" and "gce" as well. i'll do it in another PR after this one merges
jamiehannaford
Jan 12, 2018
Contributor
okay sgtm
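The follow-up Jamie suggests (making the provider names constants and testing against each) might look roughly like the sketch below. The constant names and the helper function are hypothetical, not the actual kubeadm constants package.

```go
package main

import "fmt"

// Hypothetical constants for the in-tree provider names mentioned in the
// thread; the real names/locations in kubeadm's constants package may differ.
const (
	CloudProviderOpenStack = "openstack"
	CloudProviderAWS       = "aws"
	CloudProviderGCE       = "gce"
)

// needsPrivilegedControlPlane reports whether the control-plane static pods
// need to run privileged; per this PR, only the OpenStack config-drive case does.
func needsPrivilegedControlPlane(provider string) bool {
	return provider == CloudProviderOpenStack
}

func main() {
	fmt.Println(needsPrivilegedControlPlane(CloudProviderOpenStack))
	fmt.Println(needsPrivilegedControlPlane(CloudProviderGCE))
}
```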
timothysc
Jan 17, 2018
Member
I'd rather this be more generic enablement, b/c this isn't openstack specific. This is a specific os-deployment issue.
dims
Jan 17, 2018
Author
Member
Should i remove the if condition for openstack?
jamiehannaford
Jan 17, 2018
Contributor
Do AWS and GCE also use the config drive for node metadata?
timothysc
Jan 17, 2018
Member
@dims I'd prefer an opt-in knob to enable privileged (default false), b/c again, this is very specific and not necessarily the common case.
jbeda
Jan 17, 2018
Contributor
Agree with a specific opt-in knob. I'd consider it acceptable to have documentation for that knob state that often times cloud providers need it to be set and name openstack specifically.
Privileged: utilpointer.BoolPtr(true),
}
staticPodSpecs[kubeadmconstants.KubeControllerManager].Spec.Containers[0].SecurityContext = &v1.SecurityContext{
	Privileged: utilpointer.BoolPtr(true),
timothysc
Jan 17, 2018
Member
Why does the controller manager need to talk with the meta-data service?
dims
Jan 17, 2018
Author
Member
when the cloud provider is looking for "which node am i" info, in this specific scenario there is no OpenStack metadata service, so the code looks in the config drive; to be able to mount the config drive it needs privileges. we have a search order (https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/), but in this case there is simply no metadata service and the only choice is the config drive, where it fails because the kube-controller-manager does not have enough super powers
kargakis
Jan 17, 2018
Member
It would be nice to put this down into a code comment.
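The code comment kargakis asks for could capture dims' explanation roughly as below. The types and function name here are simplified stand-ins for illustration, not the actual kubeadm `GetStaticPodSpecs` code.

```go
package main

import "fmt"

// Simplified stand-ins for the static-pod spec types used in kubeadm.
type SecurityContext struct{ Privileged *bool }
type Container struct{ SecurityContext *SecurityContext }

func boolPtr(b bool) *bool { return &b }

// applyOpenStackPrivileges records the rationale as a code comment:
// on OpenStack deployments without a metadata service, the in-tree cloud
// provider falls back to reading the config drive to answer "which node
// am I?", and mounting the config drive (e.g. running blkid) requires
// privileges, so the API server and controller manager must run privileged.
// TODO: remove once cloud-provider calls are excised from these binaries.
func applyOpenStackPrivileges(cloudProvider string, apiServer, controllerManager *Container) {
	if cloudProvider == "openstack" {
		apiServer.SecurityContext = &SecurityContext{Privileged: boolPtr(true)}
		controllerManager.SecurityContext = &SecurityContext{Privileged: boolPtr(true)}
	}
}

func main() {
	api, cm := &Container{}, &Container{}
	applyOpenStackPrivileges("openstack", api, cm)
	fmt.Println(*api.SecurityContext.Privileged, *cm.SecurityContext.Privileged)
}
```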
timothysc
Jan 17, 2018
Member
Once the cloud-controller code is excised from the controller manager, is there any reason that we would still need this?
dims
Jan 17, 2018
Author
Member
@timothysc correct, when we excise code that calls out to openstack cloud provider from these 2 containers we can yank the privilege flag.
timothysc
Jan 17, 2018
Member
What is the ETA on that excision? 1.10, 1.11? "Hand-wavy" future?
dims
Jan 18, 2018
Author
Member
@timothysc we don't currently expose --cloud-provider from the kubeadm command line, so that won't help. i've updated the PR with a new field in MasterConfiguration and updated the unit test. hopefully that combination will work for everyone.
dims
Jan 18, 2018
Author
Member
@timothysc For 1.10, i do not expect to move at least the openstack cloud provider out of tree. at best we will have an external openstack controller manager that will work for some scenarios with the in-tree one still being the primary one. 1.11 is a distinct possibility.
da66c8f to 6baf7c4
/test pull-kubernetes-unit |
/retest |
6baf7c4 to 042a8c4
minor nit on extraneous shufflings, then lgtm |
NodeName           string
AuthorizationModes []string

// Name of the cloud provider
CloudProvider string
timothysc
Jan 18, 2018
Member
The CloudProvider strings don't really need to be shuffled around, it makes it look like you're changing it for some reason.
dims
Jan 18, 2018
Author
Member
Fixed. thanks!
042a8c4 to 0b421fd
/lgtm |
In an OpenStack environment, when there is no metadata service, we look at the config drive to figure out the metadata. Since we need to run commands like blkid, we need to ensure that the API server and kube-controller-manager are running in privileged mode. So add a new field in MasterConfiguration for specifying that the API server and controller manager(s) need extra privileges. Added a TODO to remove this code when we fully yank out the cloud-provider-specific calls from these processes.
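The opt-in knob described in this commit message might look roughly like the sketch below. The field name `PrivilegedPods` and the helper are illustrative assumptions, not necessarily what was merged.

```go
package main

import "fmt"

// Hypothetical sketch of the opt-in knob described above. The field name
// PrivilegedPods is illustrative; the actual merged field may differ.
type MasterConfiguration struct {
	CloudProvider  string
	PrivilegedPods bool // opt-in, defaults to false
}

// controlPlanePrivileged gates the privileged SecurityContext on the
// explicit knob rather than on a hard-coded provider check, per the
// review feedback from timothysc and jbeda.
func controlPlanePrivileged(cfg MasterConfiguration) bool {
	return cfg.PrivilegedPods
}

func main() {
	fmt.Println(controlPlanePrivileged(MasterConfiguration{CloudProvider: "openstack", PrivilegedPods: true}))
	fmt.Println(controlPlanePrivileged(MasterConfiguration{CloudProvider: "openstack"}))
}
```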
0b421fd to 658a27c
@timothysc apologies, looks like i left one test broken, fixed it now. will ping when everything goes green. |
/lgtm |
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: dims, timothysc Associated issue: #47392 The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files:
You can indicate your approval by writing |
/test all [submit-queue is verifying that this PR is safe to merge] |
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here. |
afd01c0 into kubernetes:master
Will this be in a 1.9.x release? |
@bputt I think this will land in 1.10 and won't get cherry-picked into 1.9. |
@errordeveloper @bputt Ilya is correct, this will fall into the new feature category (not bug fix) |
What this PR does / why we need it:
In an OpenStack environment, when there is no metadata service, we look at the config drive to figure out the metadata. Since we need to run commands like blkid, we need to ensure that the API server and kube-controller-manager are running in privileged mode.
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes #47392
Fixes kubernetes/kubeadm#588
Special notes for your reviewer:
Release note: