Limit the IAM EC2 policy for the master nodes #3186
Conversation
Hi @KashifSaadat. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with one of the commands I understand (listed here). Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @justinsb
Force-pushed from d3d9051 to e846d66
Effect    IAMStatementEffect
Action    stringorslice.StringOrSlice
Resource  stringorslice.StringOrSlice
Condition Condition `json:",omitempty"`
Why, this makes writing IAM statements almost pleasurable ;-)
pkg/model/iam/iam_builder.go (Outdated)
Resource: wildcard,
Condition: Condition{
	"StringEquals": map[string]string{
		"ec2:ResourceTag/KubernetesCluster": b.Cluster.GetName(),
So we're also trying to phase out this tag in favor of the tag that allows sharing, `kubernetes.io/cluster/<clustername>`. The value there can be `shared` or `owned`. Is it possible to match on the presence of a tag? (e.g. does `StringNotEquals ""` work?)
Yeah, I believe we could match on the presence of a tag; I'll see which condition operator fits best.
I noticed that only some resources have the new tag that allows sharing. It only seems to be present on some shared resources, such as VPCs and Subnets. Unless we add the new tag to everything managed by kops, we wouldn't be able to control access to EC2 resources.
That is a good point... I guess we should continue to honor the existing label for now. We can either just have two IAM grants (I believe), one on the old and one on the new tags, both with `ec2:*`, or we can try to be granular now. I'm inclined to do the former, or to do particular rules for subnets & VPCs (which will soon only have the new tags). I think technically we tag with both for now, even on VPCs & subnets, so it is OK to do this as-is for now, but we'll have to fix this going forward.
This is going to force VPCs to be tagged. I know that in a perfect world the VPC should be tagged, but we are not in a perfect world. I also know that we have people that do not have their VPCs tagged. Let me chew on this a bit. I am wondering if we want to make this optional.
pkg/model/iam/iam_builder.go (Outdated)
Effect: IAMStatementEffectAllow,
Action: stringorslice.Slice([]string{
	"ec2:ModifyInstanceAttribute",
	"ec2:CreateRoute",
I am worried that there might be more permissions in this list, but I don't know what they would be so...
So generally this looks good, but we really want to figure out a path for users that might be sharing the existing permissions.
I would prefer kube2iam, because option 2 just feels like punting a decision to the user that really we should be making for them - if we can secure the nodes, we should. At some stage in the future we would make kube2iam the default as well, so everyone would get the more secure setup. If you agree, tactically then, doing a feature flag for now would let us do this until we could get kube2iam integrated...
/ok-to-test
Yup, I agree, all sounds sensible and it would be really good to get kube2iam in place! I could use the work in PR #3210 to wrap this up under the cluster spec flag, which should give us some flexibility on tweaking and getting the permissions right?
Force-pushed from e846d66 to 9d87b47
/retest
Another thing I was pondering... when we create "things" we typically can't tag them at creation time. The canonical example of that is EBS volumes, but when I checked, there is actually a TagSpecification parameter there, so it looks like we could create a volume with tags at the same time... but we're not using that currently. I think we should fix that in kubernetes/kubernetes (opened kubernetes/kubernetes#50898). But we need to be mindful of this problem. I was thinking about the semantics of how to allow it; I do think we could have a field in the API. The meaning would be that you're not relying on reusing the IAM policies, and thus kops should set the IAM policies to the minimal set we can derive. Perhaps:
But yes, the TL;DR is that what you've done in #3210 is spot-on, we just need to decide naming & nesting :-) The semantics (I propose) are that if we set strict to true, kops won't guarantee any IAM permissions beyond what k8s needs, and you're expected to manage your own permissions (whether directly or with kube2iam) rather than reuse the node / master permissions (which is best practice anyway!)
(And one more thought: if we make strict default to false, we can merge this even when it is sort of alpha/experimental...)
Yeah, I was concerned about that case too; I wasn't sure what resources the master nodes may attempt to create themselves, as that would currently fail with the policies I've defined here. We can add specific policies for API calls that create resources. The spec definition you've given looks good; I'll tweak #3210 with that change and then we can later rebase this to wrap under the API flag, defaulting to false. Thanks for the feedback :)
/test pull-kops-e2e-kubernetes-aws
@KashifSaadat is this PR blocked by the other?
Yep, if the other PR is okay and goes through, I was planning to wrap the API flag around this code (or potentially leave this and go straight to your PR, which is much more thorough).
@KashifSaadat marked this as WIP, so it does not accidentally get merged. |
…licies Automatic merge from submit-queue: Allow the strict IAM policies to be optional. The stricter IAM policies could potentially cause regression for some edge cases, or may rely on nodeup image changes that haven't yet been deployed / tagged officially (currently the case on the master branch since PR #3158 was merged in). This PR just wraps the new IAM policy rules under a cluster spec flag, `EnableStrictIAM`, so it will default to the original behaviour (where the S3 policies were completely open). It could also be used to wrap PR #3186 if it progresses any further. - Or we could reject this and have the policies always strict! :)
Force-pushed from ccf224a to 289f294
Force-pushed from 289f294 to c3a698e
Some notes on implementation (taken out of main PR description):
Force-pushed from c3a698e to b0e70d7
…egacyIAM' API flag.
Force-pushed from b0e70d7 to d6e5a62
/test pull-kops-e2e-kubernetes-aws
/retest
The Opaque Resources failures seem to be an upstream flake (though I haven't looked into it yet).
/lgtm
Aha, thank you. I was just trying to debug the logs but couldn't spot any EC2 API unauthorized failures.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: KashifSaadat, justinsb. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files:
You can indicate your approval by writing
/test all [submit-queue is verifying that this PR is safe to merge]
/retest
Test failure filed as kubernetes/kubernetes#51429
PR got merged in, will check with a retest:
/test all [submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue
I am curious: why is `ec2:ResourceTag/KubernetesCluster` used?
Related to: #3158
The EC2 policy for the master nodes is quite open currently, allowing them to create/delete/modify resources that are not associated with the cluster the node originates from. I've come up with a potential solution, using condition keys to validate that the `ec2:ResourceTag/KubernetesCluster` tag matches the cluster name.