
error: You must be logged in to the server (Unauthorized) #105

Closed
cdenneen opened this Issue Jun 21, 2018 · 32 comments


cdenneen commented Jun 21, 2018

Error:

~/bin » kubectl get svc
error: the server doesn't have a resource type "svc"
~/bin » kubectl get nodes
error: You must be logged in to the server (Unauthorized)
~/bin » kubectl get secrets
error: You must be logged in to the server (Unauthorized)

KUBECONFIG

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.us-east-1.eks.amazonaws.com
  name: eksdemo
- cluster:
    certificate-authority-data: REDACTED
    server: https://api.k8s.domain.com
  name: k8s
contexts:
- context:
    cluster: eksdemo
    user: aws
  name: aws
- context:
    cluster: k8s
    user: k8s
  name: devcap
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
- context:
    cluster: k8s
    user: k8s
  name: k8s
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - eksdemo
      - -r
      - arn:aws:iam::REDACTED:role/eks_role
      command: heptio-authenticator-aws
      env:
      - name: AWS_PROFILE
        value: k8sdev
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: k8s
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    password: REDACTED
    username: admin
- name: k8s-basic-auth
  user:
    password: REDACTED
    username: admin
- name: k8s-token
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: REDACTED

STS Assume

AWS_PROFILE=k8sdev aws sts assume-role --role-arn arn:aws:iam::REDACTED:role/eks_role --role-session-name test
{
    "Credentials": {
        "AccessKeyId": "REDACTED",
        "SecretAccessKey": "REDACTED",
        "SessionToken": "REDACTED",
        "Expiration": "2018-06-21T21:15:58Z"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "REDACTED:test",
        "Arn": "arn:aws:sts::REDACTED:assumed-role/eks_role/test"
    }
}
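For reference, the temporary credentials returned above can be exported for subsequent CLI calls. A minimal sketch (`export_assumed_creds` is a hypothetical helper, not part of the tooling discussed here, and `jq` is assumed to be installed):

```shell
# Hypothetical helper: read the `aws sts assume-role` JSON (as above) on
# stdin and print export statements for the temporary credentials.
# Assumes jq is installed.
export_assumed_creds() {
  jq -r '.Credentials
    | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)",
      "export AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)",
      "export AWS_SESSION_TOKEN=\(.SessionToken)"'
}

# Usage (role ARN redacted as above):
#   eval "$(AWS_PROFILE=k8sdev aws sts assume-role \
#     --role-arn arn:aws:iam::REDACTED:role/eks_role \
#     --role-session-name test | export_assumed_creds)"
```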

exports

~/bin » echo $AWS_PROFILE
k8sdev
~/bin » echo $DEFAULT_ROLE
arn:aws:iam::REDACTED:role/eks_role

heptio-authenticator-aws token

~/bin » heptio-authenticator-aws token -i eksdemo
{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1alpha1","spec":{},"status":{"token":"k8s-aws-v1.REDACTED"}}

christopherhein commented Jun 23, 2018

Is arn:aws:iam::REDACTED:role/eks_role registered with your cluster? Or did you do the cluster creation with that role assumed?

If not, use the credentials of the user that created the cluster and remove that role from your KUBECONFIG; then you can register that role with the cluster.


cdenneen commented Jun 24, 2018


christopherhein commented Jun 25, 2018

So, the user that created the cluster: do you have direct access to that user's credentials? You will need to use those credentials to do the initial setup; then you can add additional roles for access.


nckturner commented Jun 26, 2018

@cdenneen The user that was used to create the cluster will have access to it through kubectl. This user can add role mappings to allow other users/roles access.


cdenneen commented Jun 26, 2018

@christopherhein @nckturner I'm not sure I follow.
We use SAML Federated Logins for AWS.
So in the console there are roles for developer, operator, sysadmin, etc that our Federated logins get mapped to.
[screenshot: AWS console showing the federated roles]
If that "user" is the only one that can access kubectl, that doesn't seem correct, and I believe this is the exact reason the -r option was added to the KUBECONFIG (if I used the user that created the cluster, I wouldn't need to assume the role with -r):

      - -r
      - arn:aws:iam::REDACTED:role/eks_role

So this would allow any user that can assume the eks_role to access the cluster because in fact eks_role is the ONLY role we have that has EKS permissions.

When I create the cluster eks_role is the ONLY one available since it's the only one with eks_perms:

[screenshot: role selection when creating the cluster]

So now anyone with a trust relationship on eks_role should be able to assume that role and access the cluster through kubectl.

Using the aws cli I can access the cluster when I assume the role, so I know this behavior works... it's just the heptio-authenticator-aws piece that's having the issue.


christopherhein commented Jun 26, 2018

The Role ARN you are inputting when creating the cluster isn't for user authentication. It's meant to give the control plane access to create AWS resources in your account for the cloud controller manager.

What you need to do: the user or assumed role that you used when clicking "Create Cluster" in the GUI is what should be put into the KUBECONFIG. Does that make more sense?


cdenneen commented Jun 26, 2018

@christopherhein yes but as I mentioned when all our users login they will get different roles in their Federated logins...
Some will get Developer, Operator, Sysadmin, etc.
So creating the cluster through the console while assuming one of these roles would mean that a developer and a sysadmin couldn't kubectl to the same cluster.
What is the -r role used for in the KUBECONFIG?

I could use eks_role for the cluster control plane.
I could create another role, say eks_admin, for users to assume with -r (what perms does this eks_admin role need?).
I could set up a trust relationship on eks_admin for Developer, Operator, and Sysadmin.
Then everyone could control the cluster via kubectl?


nckturner commented Jun 28, 2018

I believe you have hit one of the more confusing aspects of the user experience, but let me try to explain. Whatever role/user you used to create the cluster is automatically granted system:masters. You should then use that user / assume that role to modify the authenticator configmap and add any additional mappings for admins, devs, etc. Right now you cannot modify that admin user, but this is on our roadmap.

What is the -r role used for in the KUBECONFIG?

This is the role that the authenticator will assume before crafting the token, i.e. it is the identity of the user. This role ARN should also be present in the configmap, where it should be mapped to a Kubernetes user and groups.

This blog post might help, specifically section 3 (Configure Role Permissions via Kubernetes RBAC). The authenticator configmap and RBAC setup are similar regardless of what type of AWS identity you have, i.e. user, role, federated user, ...

I could create another role say eks_admin for users to assume with -r? (what perms does this eks_admin role need?)

You can have as many roles as you want for users to assume; they all just need to be present in the authenticator configmap.


cdenneen commented Jun 28, 2018


bilby91 commented Jun 28, 2018

Hello,

I think I'm experiencing a similar issue here. Maybe it's just that I don't get the user experience as @nckturner said.

When I try to interact with the cluster I can't manage to get the authentication right.

users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "dev"
        - "-r"
        - "arn:aws:iam::XXX:user/ZZZ" # This user is the one I used to create the cluster
      env:
        - name: "AWS_PROFILE"
          value: "my-profile"

When I run KUBECONFIG=~/.kube/config kubectl version I get:

Unable to connect to the server: getting token: exec: exit status 1

If I try AWS_PROFILE=my-profile heptio-authenticator-aws token -r arn:aws:iam::XXX:user/ZZZ -i dev I get:

could not get token: AccessDenied: Roles may not be assumed by root accounts.
	status code: 403, request id: 99cb4031-7a70-11e8-9791-0fe6c964a840

If I try AWS_PROFILE=my-profile heptio-authenticator-aws token -i dev I get a valid token. Now, if I try the kube config without the -r flag I get the following response from the kubernetes cluster.

➜  ~ KUBECONFIG=~/.kube/config kubectl version dump
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-22T05:40:33Z", GoVersion:"go1.9.7", Compiler:"gc", Platform:"darwin/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)

I also added the IAM user that created the cluster to the trusted relationships of the ArnRole that is assigned to the cluster.

Any ideas ?


bilby91 commented Jun 29, 2018

@nckturner Any ideas ?


nckturner commented Jul 3, 2018

Sorry for the late response @bilby91, did you manage to get past this? Did you use the root account to create the cluster?


bilby91 commented Jul 3, 2018

@nckturner Thanks for the reply!

I couldn't solve the issue yet. I'm 100% sure that the cluster creator is a plain IAM user and not the root one.

Any ideas on what to try ?


nckturner commented Jul 4, 2018

@bilby91 First, make sure that if you call aws sts get-caller-identity using the same user's credentials, it returns the same user ARN as the one in the auth mapping.


bilby91 commented Jul 4, 2018

@nckturner Confirmed that aws sts get-caller-identity --profile my-profile returns the same user ARN as the one in the auth mapping. In the case of the example, arn:aws:iam::XXX:user/ZZZ, the cluster creator.


bilby91 commented Jul 4, 2018

@nckturner I also tried changing the default configuration so that I don't have to use the profile option and I'm in the same situation.

So, what I'm a little bit confused about is the following: my user, in this case XXX:user/ZZZ, will need to assume the role of itself? I'm asking because that is my understanding of the -r flag in heptio-authenticator-aws. The error seems reasonable:

could not get token: AccessDenied: User arn:aws:iam::XXX:user/ZZZ is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXX:user/ZZZ

bilby91 commented Jul 4, 2018

Okay, I think I have solved the issue. It seems I had some hidden AWS env vars that were overriding the AWS_PROFILE option. Would AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY take precedence over the AWS_PROFILE env var?


nckturner commented Jul 4, 2018

@bilby91 nice! Yeah I'm not sure of the precedence, but that sounds right. (For others: unset AWS_ACCESS_KEY_ID, unset AWS_SECRET_ACCESS_KEY and unset AWS_SESSION_TOKEN if you don't want to use them).
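For anyone hitting this later: in the AWS CLI/SDK credential chain, static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN env vars are consulted before the profile named by AWS_PROFILE, so stale exports silently win. A small defensive sketch (`clear_static_aws_creds` is a hypothetical helper name; the indirect `${!v}` expansion requires bash):

```shell
# Hypothetical helper: clear any static credential env vars so that
# AWS_PROFILE actually takes effect. Reports each variable it removes.
clear_static_aws_creds() {
  local v
  for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN; do
    if [ -n "${!v:-}" ]; then
      echo "clearing $v (it would override AWS_PROFILE)"
      unset "$v"
    fi
  done
}

# Usage:
#   clear_static_aws_creds
#   AWS_PROFILE=my-profile kubectl get nodes
```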

nckturner closed this Jul 4, 2018


jmakanjuola commented Aug 1, 2018

@bilby91 Were you able to use AWS federation to resolve the issue with KubeConfig?


jmakanjuola commented Aug 6, 2018

Got it working now. Thanks!


sarslans commented Aug 7, 2018

@jmakanjuola Were you able to use AWS federation to resolve the issue with KubeConfig?


sarslans commented Aug 7, 2018

@cdenneen did you get answer from aws for federation login?


cdenneen commented Aug 7, 2018

@sarslans Yes, I was able to get this working. AWS's response was to use sts assume-role-with-saml, which I was able to accomplish using https://github.com/dtjohnson/aws-azure-login since we are using Azure AD as our SAML provider. This allows me to use the role I federate as and generate temporary access/secret keys with it.


BlackBsd commented Aug 22, 2018

Sorry to comment on a closed ticket, but I am having this same issue, also using aws-azure-login. I cannot seem to figure out how to get a role to authorize. I added a given role under the mapRoles section of the ConfigMap.

  mapRoles: |
    - rolearn: arn:aws:iam::XXX:role/WorkerInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::XXX:role/MyCo/Administrator
      username: myco_admin
      groups:
        - system:masters

When creating the cluster, my role was

# aws --profile dev sts get-caller-identity
{
    "Arn": "arn:aws:sts::XXX:assumed-role/Administrator/blackb3"
}

Lastly my cluster config user section

users:
- name: black-dev-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "EKSCluster--dev"
      env:
        - name: AWS_PROFILE
          value: "dev"

@cdenneen Can you elaborate more on how you got this working?


cdenneen commented Aug 22, 2018

@BlackBsd

That's exactly what I have:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::XXXXXXXXXX:role/ekscluster-workers1-NodeInstanceRole-1VY13IIH9VLKW
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::XXXXXXXXXX:role/AWS_Root_Role
      username: role_root
      groups:
        - system:masters
    - rolearn: arn:aws:iam::XXXXXXXXXX:role/AWS_SysAdmin_Role
      username: role_sysadmin
      groups:
        - system:masters

Your kubeconfig is missing the arn to match though:

- name: black-dev-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - ekscluster
      - -r
      - arn:aws:iam::XXXXXXXXXX:role/AWS_SysAdmin_Role
      command: aws-iam-authenticator
      env:
      - name: AWS_PROFILE
        value: dev_nonbaa

And whichever role you assumed when creating the cluster is the one you'll want to assume in aws-azure-login and inside your kubeconfig... once you add all the additional roles to the configMap you can use the others, but initially you have to use the one that created the cluster.


dreampuf commented Aug 30, 2018

FYI, you can't get the token as a user that is forced to use MFA. It will throw a simple error:

An error occurred (AccessDenied) when calling the GetSessionToken operation: Cannot call GetSessionToken with session credentials

cdenneen commented Aug 30, 2018

@dreampuf Can you explain what you mean? With the configuration above I am able to get this to work without an issue. So I believe @BlackBsd's issue was just the missing -r ARN, and it should work once added.


dreampuf commented Aug 30, 2018

For example, if you have a user that is forced to authenticate with multi-factor by a policy like this:

// AWS policy
{
    "Condition": {
        "BoolIfExists": {
            "aws:MultiFactorAuthPresent": "false"
        }
    },
    "Resource": "*",
    "Effect": "Deny",
    "NotAction": [
        "iam:*"
    ],
    "Sid": "ForceToUseMFA"
}

You can't use that account to call the AWS Security Token Service (aws sts get-session-token), which is how aws-iam-authenticator communicates with AWS.

I'm just posting my findings here in case anyone else has the same problem.


StevenACoffman commented Sep 14, 2018

@dreampuf
Can you edit your ~/.aws/config to add the role_arn and MFA serial number into a new profile:

[profile read-only]
region=us-east-1

[profile admin]
source_profile = read-only
role_arn = arn:aws:iam::123456789012:role/admin-access
mfa_serial = arn:aws:iam::123456789012:mfa/dreampuf

Then if you specify the kubeconfig file like this (top part omitted):

  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        # - "-r"
        # - "<role-arn>"
      env:
        - name: AWS_PROFILE
          value: "admin"

This will actually use a different code path through the SDK and you will be prompted for an MFA.


c4urself commented Oct 18, 2018

I'll post this here for anyone else running into this; I realise the ticket is closed but hope to help someone.

Even after reading this whole thread, it still took me a good hour to figure out why my cluster was returning error: the server doesn't have a resource type "svc" when calling kubectl get svc. Context is everything; if your setup is different, the following probably won't work. This is for folks who use SAML federated logins via a script that does sts assume-role-with-saml under the hood (Azure AD, OneLogin, Okta). That results in temporary credentials, which are typically stored in the ~/.aws/credentials file. In many cases multiple accounts are used, so there may be a dev, prod, or other profile section in that file.

Steps to take:

  1. Log in as you normally would to generate temporary credentials in ~/.aws/credentials.
  2. Verify AWS_PROFILE=<profile> aws-iam-authenticator token -i <cluster_name> works. In my case, it returned a token without errors.
  3. Make sure to unset any lingering AWS_* variables; this bit me, since I didn't think it mattered and didn't realise it was messing things up.
  4. Generate the kubeconfig with aws --profile=<profile> eks update-kubeconfig --name <cluster_name>; this creates a file at something like ~/.kube/config.
  5. Edit ~/.kube/config to add the AWS_PROFILE env variable; this should be the same profile you used to launch the cluster.
  6. kubectl get svc should work. \o/
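The steps above can be sketched as a single shell function; `eks_login` is a hypothetical name, <profile>/<cluster_name> become parameters, and the aws CLI, aws-iam-authenticator, and kubectl are assumed to be installed (step 1, the SAML login itself, is provider-specific and left out):

```shell
# Hypothetical sketch of steps 2-6 above; illustrative, not an official workflow.
eks_login() {
  local profile="$1" cluster="$2"

  # Step 3: unset lingering static credentials so AWS_PROFILE wins.
  unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

  # Step 2: confirm the authenticator can mint a token for this profile.
  AWS_PROFILE="$profile" aws-iam-authenticator token -i "$cluster" >/dev/null || return 1

  # Step 4: (re)generate ~/.kube/config for this cluster.
  aws --profile="$profile" eks update-kubeconfig --name "$cluster" || return 1

  # Step 5 still has to be done by hand if the generated user entry lacks
  # the AWS_PROFILE env var; step 6 is the smoke test.
  AWS_PROFILE="$profile" kubectl get svc
}

# Usage:
#   eks_login <profile> <cluster_name>
```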

henshitou commented Nov 19, 2018

Issue: on an OpenShift OKD cluster, I can't run any command (oc status, oc get svc, ...).
Error logs:

oc status
error: You must be logged in to the server (Unauthorized)

(The same Unauthorized error is repeated for every command.)

Workaround:
You need to log in to your cluster first, though I don't know why:

$ oc login

Then enter the account/password you used when creating the cluster, and everything works.
