
IRSA Support for Bare Metal #4361

Closed
evdevr opened this issue Dec 9, 2022 · 8 comments · Fixed by #5249
Labels: area/providers/tinkerbell, kind/enhancement, priority/p0, team/packages

Comments

evdevr commented Dec 9, 2022

What would you like to be added: IRSA Support for Bare Metal clusters

Why is this needed: Users with Bare Metal clusters want to send metrics back to AWS using ADOT w/IRSA.

Today, if a user tries to add a serviceAccountIssuer in their cluster spec, it's not passed through to the kube-apiserver configuration, so the pod-identity-webhook does not work.
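
For context, the setting being referred to is the serviceAccountIssuer entry on the EKS Anywhere Cluster object. A minimal sketch of what a user would put in the spec (the issuer URL is a placeholder, and the exact field placement should be checked against the current EKS Anywhere docs):

  apiVersion: anywhere.eks.amazonaws.com/v1alpha1
  kind: Cluster
  metadata:
    name: my-cluster                      # placeholder
  spec:
    podIamConfig:
      # URL of the OIDC provider that hosts the cluster's public signing keys
      serviceAccountIssuer: https://s3.amazonaws.com/my-oidc-bucket   # placeholder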

TerryHowe (Contributor) commented:

We definitely have plans to build, support, and probably use this, but we're not sure what that is going to look like yet.

TerryHowe (Contributor) commented:

I also don't know what Bare Metal changes, if any, might be needed to support this.

panktishah26 added the kind/enhancement, area/providers/tinkerbell, area/cli, and team/packages labels and removed the area/cli label (Dec 9, 2022)

evdevr commented Dec 12, 2022

Currently using this workaround:

  • During cluster creation, after the control plane node is created but before PXEing the other nodes, edit /etc/kubernetes/manifests/kube-apiserver.yaml to add another --service-account-issuer argument for your OIDC provider BEFORE the existing default one (sketched below). Then proceed with the cluster build.

See the IRSA docs for a description of the kube-apiserver flags required.

So far it seems to work, as long as the rest of the IRSA setup is correct (you have a service role that's properly annotated, you've got the correct keys in your OIDC provider bucket, etc.).
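
The manifest edit described in the bullet above ends up looking roughly like this (a minimal sketch of the relevant part of /etc/kubernetes/manifests/kube-apiserver.yaml; the issuer URL is a placeholder and all unrelated flags are elided):

  apiVersion: v1
  kind: Pod
  metadata:
    name: kube-apiserver
    namespace: kube-system
  spec:
    containers:
    - name: kube-apiserver
      command:
      - kube-apiserver
      # new issuer FIRST: used to mint new service account tokens
      - --service-account-issuer=https://s3.amazonaws.com/my-oidc-bucket   # placeholder
      # default kubeadm issuer SECOND: existing tokens remain accepted
      - --service-account-issuer=https://kubernetes.default.svc.cluster.local
      # ...all other existing flags left unchanged...

Because kubelet watches the static pod manifest directory, saving this file should cause the kube-apiserver pod to restart with the new flags, which matches what the later comments describe.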

g-gaston modified the milestone: backlog (Dec 16, 2022)
balusarakesh commented:

@evdevr
Can you please expand on what you mean by "add another --service-account-issuer argument for your OIDC provider BEFORE the existing default one"? Do you mean to keep the old one as is and just add the new issuer argument, even though it will be a duplicate?

balusarakesh commented:

currently using this workaround:

  • during cluster creation, after control plane node is created but before PXEing other nodes, edit /etc/kubernetes/manifests/kube-apiserver.yaml to add another --service-account-issuer argument for your OIDC provider BEFORE the existing default one. Then proceed with cluster build.

See IRSA docs here for a description of the kube-api server flags required.

So far it seems to work, as long as the rest of the IRSA setup is correct. (you have a service role that's properly annotated and you've got correct keys in your OIDC provider bucket, etc)

I tried to update the /etc/kubernetes/manifests/kube-apiserver.yaml file with the above-mentioned arguments, and the kube-apiserver does not restart no matter what I try:

  • I restarted the Docker container.
  • I tried to run the eksctl anywhere upgrade command, which is not going to work as the kube-apiserver pod is down.

The only way I can start kube-apiserver is by manually running the command /usr/local/bin/kube-apiserver --advertise-address=172.18.0.4 ......

FYI: I'm running the eksctl anywhere cluster in an EC2 instance.

One thing that seems to partially work is editing the control plane object with kubectl edit KubeadmControlPlane --namespace eksa-system CLUSTER_NAME -o yaml and adding the OIDC config, but I'm seeing a ton of "Unable to authenticate the request due to an error: invalid bearer token" errors in the API server.
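
For reference, the KubeadmControlPlane route mentioned here amounts to adding the issuer under the API server's extraArgs, roughly as sketched below (object names and the issuer URL are placeholders). One caveat worth noting: in this API, extraArgs is a plain string map, so only a single service-account-issuer value can be set this way; the default issuer is replaced rather than kept as a second accepted issuer, which could explain why tokens minted under the old issuer start failing with "invalid bearer token".

  apiVersion: controlplane.cluster.x-k8s.io/v1beta1
  kind: KubeadmControlPlane
  metadata:
    name: my-cluster            # placeholder
    namespace: eksa-system
  spec:
    kubeadmConfigSpec:
      clusterConfiguration:
        apiServer:
          extraArgs:
            # only ONE issuer fits here, because extraArgs is a map of flag name to value
            service-account-issuer: https://s3.amazonaws.com/my-oidc-bucket   # placeholder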


evdevr commented Jan 9, 2023

@balusarakesh
You're exactly right: I add the new argument before the one that's already in the file. I don't have the file in front of me, but you'll end up with something like this in the command spec:

  --service-account-issuer=<your $ISSUER_HOSTPATH value>
  --service-account-issuer=<original cluster.local value>

The important part is to make sure the new one you add is first in the list. The kube-apiserver docs state "When this flag is specified multiple times, the first is used to generate tokens and all are used to determine which issuers are accepted." My understanding is: you need to put it first so that your kube-apiserver will create tokens using the OIDC provider, but you need to leave the second one there so that tokens for accounts in your cluster not using IRSA will still be accepted by the API.

After updating the file, it looked like kube-apiserver picked up the change and restarted itself after a few minutes. In addition, I ran the following commands to restart services on the control plane node before proceeding with the cluster build.

systemctl daemon-reload
systemctl restart kubelet
systemctl restart containerd
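
As an aside, the "rest of the IRSA setup" that the workaround assumes is already in place is essentially a service account annotated with the IAM role to assume, along these lines (a minimal sketch; the names and role ARN are placeholders):

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: adot-collector              # placeholder
    namespace: observability          # placeholder
    annotations:
      # the pod-identity-webhook watches for this annotation and injects the
      # projected web identity token and AWS env vars into matching pods
      eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-irsa-role   # placeholder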

balusarakesh commented:

@evdevr
I updated the config and ran the above systemctl commands, and the kube-apiserver process still does not start (I also waited for more than 5 minutes).

FYI: I can start kube-apiserver by manually running the command /usr/local/bin/kube-apiserver --advertise-address=172.18.0.4 --allow-privileged=true --audit-log-maxage=30 --audit-log-maxbackup=10.........

Let me know if there is a way I can debug why it did not start.

Thank you!

jiayiwang7 (Member) commented:

Resolved in #5249. Closing this.
