
[EKS] [IAM]: Pods running on EKS should not have access to underlying IAM Node roles by default #1109

Open
aaronmell opened this issue Oct 8, 2020 · 7 comments
Labels
EKS Managed Nodes · EKS (Amazon Elastic Kubernetes Service) · Proposed (Community submitted issue)

Comments

@aaronmell

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

According to this document, pods have all of the permissions assigned to both the service account and the node IAM role.
https://docs.aws.amazon.com/eks/latest/userguide/restrict-ec2-credential-access.

Furthermore, the solution proposed in the documentation, using iptables rules to block access, is not tenable in any environment that actually wants to use IRSA. The alternative, using something like Calico, is not a great solution either, since the cluster is still not secure by default.

By default, clusters not using IRSA, and pods whose service accounts have no IRSA role defined, should be treated as DENY ALL.
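
To illustrate the exposure (a sketch only; the pod name and image are arbitrary), any pod on the default pod network can pull the node role's temporary credentials straight from the metadata endpoint:

apiVersion: v1
kind: Pod
metadata:
  name: imds-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl     # any image with curl will do
    command: ["sh", "-c"]
    args:
    - |
      # List the node IAM role name, then dump its temporary credentials.
      ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
      curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE}"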

Which service(s) is this request for?
EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
Clusters should be secure by default.

@aaronmell aaronmell added the Proposed Community submitted issue label Oct 8, 2020
@aaronmell aaronmell changed the title [EKS] [IAM]: Pods running on EKS should be secure by default. [EKS] [IAM]: Pods running on EKS should not have access to underlying IAM Node roles by default Oct 8, 2020
@mikestef9 mikestef9 added the EKS Amazon Elastic Kubernetes Service label Oct 8, 2020
@karlskewes

Agree. The iptables mechanism is a pretty big ask for those using EKS (a managed service).

Encoding the iptables steps into our Terraform is possible, but with Calico and other CNIs mangling the iptables rules, it would be easy for this to be bypassed, per the warning in the linked docs.

Our workaround is a 'default' NetworkPolicy per namespace that blocks egress to *:80 (and other traffic) with Calico, plus a separate allow policy per application. Typical firewall administration.
In practice, however, this makes egress to services on port 80 unavailable to our applications, as our only remaining hammer is an IP CIDR allow list for port 80, which is unmanageable due to dynamic IP addressing.
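
The default-deny half of that pattern looks roughly like this (the namespace name is just an example; it needs a NetworkPolicy-enforcing CNI such as Calico to take effect):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: example-namespace   # one of these per namespace
spec:
  podSelector: {}                # every pod in the namespace
  policyTypes:
  - Egress
  egress: []                     # nothing allowed; per-application allow policies are layered on top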

@mikestef9
Contributor

mikestef9 commented Oct 19, 2020

We are working on some doc updates here that will be published soon. For a preview, take a look at this eksctl PR

https://github.com/weaveworks/eksctl/pull/2634/files

Set IMDSv2 to required and the hop limit to 1. You can also do this today with managed node groups by using a launch template, although we plan to add an explicit API option to managed node groups as well.

This will be the new recommendation, replacing the iptables rule, for preventing non-host-networking pods from accessing the node IAM role by default.
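
For anyone doing this via a launch template today, the relevant settings look roughly like the following (a CloudFormation fragment for illustration; the resource name is made up):

NodeGroupLaunchTemplate:            # hypothetical resource name
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      MetadataOptions:
        HttpTokens: required        # IMDSv2 only
        HttpPutResponseHopLimit: 1  # token responses cannot cross the extra hop into a pod's network namespace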

@aaronmell
Author

How does that work in a mixed environment? I have some pods that need to use IAM roles, and others that do not.

@mikestef9
Contributor

For pods that do need IAM roles, you can use IAM Roles for Service Accounts.
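
For reference, IRSA boils down to annotating the pod's service account with a role ARN (the names and account ID below are placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: example-namespace
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-irsa-role   # placeholder role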

@backjo

backjo commented Oct 20, 2020

We've been leveraging Calico's GlobalNetworkPolicy to deny IMDS access across all namespaces, and forcing namespaces to use IAM Roles for Service Accounts.

apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
  name: block-aws-metadata-access
spec:
  types:
  - Egress
  egress:
  # Allow all egress except to the instance metadata IP, which is implicitly denied.
  - action: Allow
    destination:
      notNets:
      - 169.254.169.254/32

@dahu33

dahu33 commented Oct 29, 2020

Some optional components, such as aws-ebs-csi-driver or amazon-cloudwatch-container-insights, require access to the IMDS to work. Therefore, pod access to the IMDS cannot simply be restricted by default.

I see only two viable solutions:

  1. Remove the IMDS requirement from aws-ebs-csi-driver, amazon-cloudwatch-agent and other projects (seems hard to do).
  2. Add a new EC2 instance-metadata option to limit the number of hops for the security credentials endpoint (iam/security-credentials/role-name).

Personally, I would love to see option 2 implemented, as it would solve the issue while keeping compatibility with existing software that legitimately requires IMDS but can obtain its credentials another way (such as IAM Roles for Service Accounts). The biggest mistake was making both the metadata and the security credentials available on the same endpoint/IP...

@orirawlings

Set IMDSv2 to be required, and the hop limit to 1. You can also do this today by using a launch template with managed node groups, although we plan to add a similar explicit API option to managed node groups as well.

I want to eliminate our custom launch templates on managed node groups. It seems that requiring IMDSv2 and lowering the hop limit to 1 are our last remaining dependencies on custom launch templates.

@mikestef9 as far as I can tell API options are not yet exposed for configuring IMDS settings. Is that still planned? Is there a better issue I can track on the public roadmap for that?
