kops requires --ssh-public-key even if I want to use an existing one #4728
Comments
Hi @mludvig, I've been developing kops for a while, and as far as I know, kops already allows us to use an existing key pair via the `--ssh-public-key` option. I'm not sure if I've made it clear; maybe you could provide more details about the situations in which kops doesn't work as expected. |
Hi @ihac, with `--ssh-public-key` kops imports the supplied key as a new EC2 key pair. Our organisation rules don't allow that: we've got a pre-created "official" EC2 keypair in each AWS account, named e.g. aws-sandpit, and we are supposed to use that for all EC2 instances that we create. Hence I need a switch like `--ssh-key-name=aws-sandpit`. I noticed that the cluster_spec.yaml already supports that (https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#sshkeyname) - all I need now is to expose the same functionality in a command line switch like `--ssh-key-name`. Do you think it's possible? |
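For reference, the relevant fragment of a cluster spec using that field would look roughly like this (a sketch; the cluster and keypair names are illustrative):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local
spec:
  # Name of an EC2 key pair that already exists in the account;
  # when set, kops uses it instead of creating/importing a key.
  sshKeyName: aws-sandpit
```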
Hi again @ihac |
Seems that you want to use the SSH key pre-created on AWS, rather than one on your local computer, is that right? As far as I know, kops currently does not support setting `sshKeyName` from the command line. Back to your problem, I think there are two solutions:
|
We do have https://github.com/kubernetes/kops/blob/master/docs/secrets.md. The problem is that the CLI at times still expects an SSH key locally. We need to do some tweaks so that if you are re-using a key we do not force a local one. |
Reuse one == reuse an existing key in aws. |
Hi @chrislovecnm, this is what works without creating a new AWS key:
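A sketch of that sequence (the keypair name `aws-sandpit`, cluster name, and zone are illustrative; depending on the kops version, step 1 may still demand `--ssh-public-key`, which is this issue):

```sh
# 1. Generate the cluster spec only; nothing is created in AWS yet
kops create cluster --name=example.k8s.local \
    --zones=ap-southeast-2a --dry-run -o yaml > cluster.yaml

# 2. Edit cluster.yaml and point it at the existing EC2 keypair:
#        spec:
#          sshKeyName: aws-sandpit

# 3. Register the spec and build the cluster
kops create -f cluster.yaml
kops update cluster example.k8s.local --yes
```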
This sequence will create the cluster with all instances using the pre-existing keypair. All I'm after now is the ability to combine all these steps together into a single `kops create cluster` run.
My half-baked patch is attached above, but I don't know how to turn the command line parameter into a spec file attribute. Hope someone can help with that... |
Just wanted to leave a note as I'm experiencing something similar: we are generating a YAML through another process to feed to kops. For now I am just going to extract the key via awscli or boto to make it happy, but it would be nice if we could just specify the AWS key name in the spec attribute and let kops handle things from there. |
Until this issue is fixed, you can extract the public key from the AWS keypair and use it with `--ssh-public-key`. |
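A minimal sketch of that extraction, assuming you hold the private half (`.pem`) of the AWS keypair locally; file, cluster, and zone names are illustrative:

```sh
# Regenerate the public key from the locally held private key
ssh-keygen -y -f ~/.ssh/aws-sandpit.pem > ~/.ssh/aws-sandpit.pub

# Pass it to kops; the key material then matches the existing AWS keypair
kops create cluster --name=example.k8s.local \
    --zones=ap-southeast-2a \
    --ssh-public-key ~/.ssh/aws-sandpit.pub
```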
I have got an implementation of this now, which is only tested on AWS; I plan to make a PR after cleanup. |
Hi,
|
Yes @maver1ck, it seems to be the case when you run create with --dry-run: it does not create the secrets it needs and complains... Try running kops create without dry-run/saving to file, and then get the file after it was created:
But I'm creating the cluster using `kops create -f` from a YAML file. |
First create, then export the YAML, edit, and replace. Replace by itself does not seem to initialize all resources, like the necessary secrets. |
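For the secrets part specifically, kops has an explicit command that can be run alongside the replace; a sketch (cluster name and key path are illustrative; `admin` is the name kops uses for the node SSH key secret):

```sh
# Register the SSH public key secret that the cluster expects
kops create secret sshpublickey admin \
    -i ~/.ssh/id_rsa.pub \
    --name example.k8s.local
```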
@rjanovski |
@maver1ck did you try this: |
See my way to manage a cluster from a file:
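A sketch of such a file-based flow (cluster name illustrative):

```sh
# Export the live cluster spec to a file
kops get cluster example.k8s.local -o yaml > cluster.yaml

# ...edit cluster.yaml as needed (e.g. set spec.sshKeyName)...

# Push the edited spec back and apply it
kops replace -f cluster.yaml
kops update cluster example.k8s.local --yes
```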
|
@maver1ck I believe it needs your public key for accessing the bastion server. It adds your public key to the bastion server's authorized_keys file, thus enabling you to SSH to it. |
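For illustration, that is what makes the bastion hop work; a sketch assuming a Debian-based image (login user `admin`) and kops' default bastion DNS record:

```sh
# With agent forwarding, hop through the bastion to a private node
ssh -A admin@bastion.example.k8s.local
ssh admin@<node-private-ip>
```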
We don't even use SSH keys. Don't have any in the AWS account. AMIs come with AD set up. Is there a way around having to 1) specify a `--ssh-public-key` and 2) have kops create a key pair at all? |
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten |
/remove-lifecycle rotten |
Work has been done on this. #7096 |
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
/remove-lifecycle stale
This would be really beneficial when working with AWS. |
99% sure this bug is no longer relevant, and has been addressed since... a few versions ago (not 100% sure when... I want to say 1.12.x, but I might be a bit off) |
Is there some way to specify the AWS EC2 key to use besides adding a tag to the cluster config? I'm on 1.15 and ran into this yesterday. The only way to get around it was to follow the suggestion by @mludvig above. |
Not sure. My workflow is to always generate and feed the YAML to kops, so regarding the CLI-flag method I can't really speak to that. |
This is a feature we really need! |
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten |
/remove-lifecycle rotten
|
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close |
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Trying to figure out what is left of this issue. Is it being able to create a cluster without SSH keys using `kops create cluster`? |
@michaelajr, as of today this is what I have to do:

```sh
kops create cluster \
  --name $KOPS_CLUSTER_NAME \
  --cloud aws \
  --cloud-labels 'Project=kops-admin-area' \
  --master-count 1 \
  --master-size m5a.large \
  --master-zones eu-west-1a \
  --node-count 1 \
  --node-size m5a.large \
  --node-volume-size 64 \
  --zones eu-west-1a \
  --networking calico \
  --topology private \
  --ssh-public-key <pub file> \
  --bastion \
  --dry-run \
  -o yaml > $CLUSTER_CONFIG

kops create -f $CLUSTER_CONFIG
kops update cluster --yes
```

The private and pub files were generated with `ssh-keygen`. What is the current solution to be able to use either an existing EC2 key pair or no SSH key at all? |
Between the `--dry-run` and the `kops create -f`, the SSH key is not carried over. We could probably define an sshkey resource for the dry-run to emit and the create to consume, but that's not this issue. |
This might not be the proper etiquette to post without adding value. Let me apologize pre-emptively, if so. But I must tell you... thank you! Random strangers on the internet saving my life. You are a legend, @mludvig. |
For AWS deployments kops v1.9.0-alpha1 requires `--ssh-public-key` even if we already have an existing key pair that we would like to use. All our EC2 instances must use the same SSH key and I am actually not permitted to create a new key pair at all in the prod account (I could change my IAM permissions but that's beside the point).

I propose to add a new option `--ssh-key-name={existing-keypair-name}`; if that is supplied, don't require `--ssh-public-key` and don't attempt to create a new key pair.

I believe this should be a relatively simple change but I'm not a Go programmer so I can't do it myself, sorry.
Hope someone can help with that! :)
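To illustrate the proposal, usage would look roughly like this (hypothetical syntax as of this report; cluster name and zone are illustrative):

```sh
# Proposed: reference the pre-created EC2 keypair by name and
# skip both --ssh-public-key and key pair creation
kops create cluster --name=example.k8s.local \
    --zones=ap-southeast-2a \
    --ssh-key-name=aws-sandpit
```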