
Authorization issues on AWS EKS #68

Open · stromvirvel opened this issue Oct 14, 2018 · 3 comments

@stromvirvel (Contributor)

Summary

When k8s-snapshots is deployed on an AWS EKS Kubernetes cluster, it cannot create snapshots because of missing AWS permissions.

I know you suggest running the controller on the master nodes, but since AWS EKS is a managed Kubernetes service, I don't have access to the master nodes for custom workloads.

Therefore I have some questions:

  • How does k8s-snapshots authenticate against the AWS API? (Where does it get the credentials?)
  • Can I override the credentials somehow, as you can on Google Cloud?

Steps to reproduce

  1. Deploy a PVC with the k8s-snapshots backup annotation:
cat << EOF | kubectl apply -f -
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jrebel
  annotations:
    "backup.kubernetes.io/deltas": "PT1M PT5M PT1H"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ssd-general
EOF
  2. Deploy the k8s-snapshots deployment and RBAC resources as stated in the README.

  3. Wait for the k8s-snapshots pod to be created.

Expected result

After one minute, a new snapshot of the backing EBS volume appears in the AWS console.

Actual result

No EBS snapshot is created. The k8s-snapshots pod status is first Error, then CrashLoopBackOff. The pod's logs show EC2ResponseError: 403 Forbidden; see:
https://gist.github.com/moepot/09ece52f86fe6724c63f2e17779ded2a

@miracle2k (Owner)

I've never used EKS. Does it create regular EC2 instances? An EC2 instance can be assigned an IAM role. kops sets it up such that the IAM role for the master has a wildcard permission for all EC2 APIs:

{
    "Effect": "Allow",
    "Action": [
        "ec2:*"
    ],
    "Resource": [
        "*"
    ]
},

One option would be to make sure that whatever IAM role your machines have includes a permission like this, or possibly a more restrictive one (just enough to manage snapshots); see the sketch below. If you create a specialized role for this, let me know, and we can add it to the docs.
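
As an untested sketch, a narrower statement might look like the following. The exact set of actions k8s-snapshots needs is an assumption here; presumably it has to describe volumes and create, tag, describe, and delete snapshots:

{
    "Effect": "Allow",
    "Action": [
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:DescribeSnapshots",
        "ec2:DescribeVolumes",
        "ec2:CreateTags"
    ],
    "Resource": [
        "*"
    ]
}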

Another option would be to set up environment variables. This might deserve some documentation as well. We are basically just creating a boto3 session, which resolves credentials as described here:

https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#configuring-credentials

Skipping options 1 and 2 (we don't pass any credentials explicitly), option 3 (environment variables) might be the most suitable for you. Note that the last option, 8, is what I talked about above.

The environment variables themselves are documented here:

https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#environment-variables
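
For example, here is a minimal sketch of wiring those variables into the k8s-snapshots Deployment via a Secret. The Secret name (aws-snapshot-credentials) and its keys are hypothetical placeholders, not something the README defines:

# Container spec excerpt; Secret name and keys are hypothetical placeholders.
containers:
  - name: k8s-snapshots
    image: elsdoerfer/k8s-snapshots:v2.0   # image/tag as in the README (verify)
    env:
      - name: AWS_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: aws-snapshot-credentials
            key: access-key-id
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: aws-snapshot-credentials
            key: secret-access-key
      - name: AWS_DEFAULT_REGION            # boto3 reads this for the region
        value: us-east-1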

@stromvirvel (Contributor, Author)

Thank you.

Yes, EKS creates regular EC2 instances for the worker nodes, while the master nodes are fully managed: you don't see them anywhere, not even in kubectl get nodes.

The boto3 environment variables look like an appropriate solution to me. I'll test it and send you a PR with an updated README, if that's fine with you.

@miracle2k (Owner)

Sure, thanks!
