
kops requires --ssh-public-key even if I want to use an existing one #4728

Closed
mludvig opened this issue Mar 20, 2018 · 38 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@mludvig

mludvig commented Mar 20, 2018

For AWS deployments, kops v1.9.0-alpha1 requires --ssh-public-key even if we already have an existing key pair that we would like to use. All our EC2 instances must use the same SSH key, and I am not actually permitted to create a new key pair in the prod account (I could change my IAM permissions, but that's beside the point).

I propose adding a new option --ssh-key-name={existing-keypair-name}; if it is supplied, don't require --ssh-public-key and don't attempt to create a new key pair.

I believe this should be a relatively simple change, but I'm not a Go programmer so I can't do it myself, sorry.

Hope someone can help with that! :)

@ihac
Contributor

ihac commented Mar 24, 2018

Hi @mludvig, I've been developing kops for a while, and as far as I know, kops already lets you use an existing key pair via the option --ssh-public-key <public_key_file>. By default, kops uses ~/.ssh/id_rsa.pub as the key and publishes it to your instances automatically.
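For illustration, a minimal invocation that passes an existing local key explicitly might look something like this (a sketch; the cluster name and zone are placeholders):

    # Sketch: pass an existing local public key explicitly
    # (assumes KOPS_STATE_STORE is set; name and zone are placeholders).
    kops create cluster \
      --name example.k8s.local \
      --cloud aws \
      --zones eu-west-1a \
      --ssh-public-key ~/.ssh/id_rsa.pub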

I'm not sure if I've made that clear; maybe you could provide more details about the situations in which kops doesn't work as expected.

@mludvig
Author

mludvig commented Mar 24, 2018

Hi @ihac, with --ssh-public-key kops creates a new EC2 key pair in AWS. Even though it takes your existing locally created key and uploads it to AWS, it's still a "new" key in AWS.

Our organisation's rules don't allow that: we've got a pre-created "official" EC2 key pair in each AWS account, named e.g. aws-sandpit, and we are supposed to use it for all EC2 instances we create.

Hence I need a switch like --ssh-key-name aws-sandpit that will make kops skip EC2 key creation and instead launch the instances with the existing aws-sandpit key pair.

I noticed that the cluster spec already supports this (https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#sshkeyname); all I need now is the same functionality exposed as a command-line switch like --ssh-key-name.

Do you think it's possible?

@mludvig
Author

mludvig commented Mar 24, 2018

Hi again @ihac
I tried to do it myself a few days ago, but as I know neither Go nor the kops internals I didn't get very far. Here's my attempt so far; hope you can take it from there :)
ssh-key-name-attempt.diff.txt

@ihac
Contributor

ihac commented Mar 24, 2018

It seems you want to use the SSH key pre-created on AWS, rather than one on your local computer, is that right?

From what I know, kops currently does not let users set the SSHKeyName field in the cluster spec; it uploads the local SSH key to AWS and names it kubernetes.<cluster_name>-<publickey_fingerprint>. This might be the "new" key you mentioned?

Back to your problem, I think there are two solutions:

  1. Download the pre-created key pair (e.g. aws-sandpit) to your computer and pass it via --ssh-public-key when creating a new cluster. In this way kops would still create a "new" key pair, but the contents should be the same (see the sketch after this list).
  2. Add a new option --ssh-key-name to kops. I'll try to work on this in a fork but can't make a promise, since I'm developing and testing on Alibaba Cloud in China, not AWS. Anyway, I'll ping you with any useful updates.
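A minimal sketch of option 1, assuming you have the key pair's private key file on hand (aws-sandpit.pem here is a placeholder): derive the public half with ssh-keygen and hand it to kops.

    # Derive the public key from the existing private key.
    ssh-keygen -y -f aws-sandpit.pem > aws-sandpit.pub
    # Pass it explicitly when creating the cluster; the other flags are placeholders.
    kops create cluster --name example.k8s.local --cloud aws --zones eu-west-1a \
      --ssh-public-key aws-sandpit.pub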

@chrislovecnm
Contributor

We do have https://github.com/kubernetes/kops/blob/master/docs/secrets.md,
which shows how to add a key; I am also fairly certain we can reuse one.

The problem is that the CLI at times still expects an SSH key locally. We need to make some tweaks so that if you are reusing a key we do not force a local one.
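For reference, the command for registering an SSH public key in the state store (the same one quoted in an error message later in this thread) looks roughly like this; the cluster name and key path are placeholders:

    # Register an SSH public key as a kops secret for an existing cluster definition.
    kops create secret --name cluster_name.example.com sshpublickey admin -i ~/.ssh/id_rsa.pub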

@chrislovecnm
Contributor

Reuse one == reuse an existing key in AWS.

@mludvig
Author

mludvig commented Mar 26, 2018

Hi @chrislovecnm, this is what works without creating a new AWS key:

  1. First create the cluster config in S3, but don't specify --yes

    ~ $ kops-1.9.0-alpha.2 create cluster --cloud aws [..all the options..] \
             --name k8keyname.example.com
    
  2. Next edit the cluster and add sshKeyName: aws-sandpit under the .spec section, where aws-sandpit is an already existing EC2 SSH key pair.

    ~ $ kops-1.9.0-alpha.2 edit cluster k8keyname.example.com
    
  3. Finally, call update cluster with --yes to actually create it

     ~ $ kops-1.9.0-alpha.2 update cluster k8keyname.example.com --yes
    

This sequence creates the cluster with all instances using the aws-sandpit key pair instead of creating a new one from ~/.ssh/id_rsa.pub. Exactly what I wanted! :)

All I'm after now is the ability to combine these steps into a single create cluster command with --ssh-key-name specified, instead of going through the unintuitive three-step process. Something like:

kops create cluster [...] --ssh-key-name aws-sandpit --yes

My half-baked patch is attached above, but I don't know how to turn the command-line parameter into a spec-file attribute. Hope someone can help with that...
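Until such a flag exists, the three steps can be scripted without the interactive editor along these lines (a sketch; assumes yq v4 is installed and that the names and options are adjusted to your setup):

    # 1. Create the cluster config in the state store (no --yes yet).
    kops create cluster --cloud aws [..all the options..] --name k8keyname.example.com
    # 2. Export the spec, set sshKeyName to the existing EC2 key pair, and push it back.
    kops get cluster k8keyname.example.com -o yaml > cluster.yaml
    yq -i '.spec.sshKeyName = "aws-sandpit"' cluster.yaml
    kops replace -f cluster.yaml
    # 3. Apply the changes.
    kops update cluster k8keyname.example.com --yes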

@jhohertz
Contributor

Just wanted to leave a note as I'm experiencing something similar, where we generate a YAML through another process to feed to kops create cluster. In that case it is not sufficient to specify sshKeyName in the spec section: when running the update, it complains that I have not yet created a secret.

For now I am just going to extract the key via the AWS CLI or boto to make it happy, but it would be nice if we could just specify the AWS key name in that attribute and let kops handle things from there.
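For what it's worth, newer AWS CLI versions can return the stored public key material directly; this is a sketch and assumes a reasonably recent CLI and EC2 API that support --include-public-key, with aws-sandpit as the key pair name:

    # Fetch the public key of an existing EC2 key pair and save it for --ssh-public-key.
    aws ec2 describe-key-pairs --key-names aws-sandpit --include-public-key \
      --query 'KeyPairs[0].PublicKey' --output text > aws-sandpit.pub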

@xydinesh

xydinesh commented Jul 7, 2018

Until this issue is fixed, you can extract the public key from the AWS key pair and use it with the --ssh-public-key option.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#retrieving-the-public-key
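One of the methods on that page retrieves the key from instance metadata on an instance that was launched with the key pair; a minimal sketch (IMDSv1 shown):

    # On an EC2 instance launched with the key pair, read the public key from instance metadata,
    # then hand the resulting file to kops via --ssh-public-key.
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > aws-keypair.pub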

@amitsaha

I now have an implementation of this, tested only on AWS; I plan to open a PR after cleanup.

@maver1ck

Hi,
I've got a question regarding this topic.
I made kops use an existing key pair by specifying sshKeyName in the cluster spec.
But why does kops also want to create an sshpublickey secret?

SSH public key must be specified when running with AWS (create with `kops create secret --name cluster_name.com sshpublickey admin -i ~/.ssh/id_rsa.pub`)

@rjanovski

Yes @maver1ck, that seems to be the case when you run create with --dry-run: it does not create the secrets it needs and then complains...

Try running kops create without dry-run/saving to a file, and then export the config after the cluster has been created:
kops get --name=$NAME -oyaml > cluster.yaml
Then the secrets are created in S3 and kops update no longer complains :)
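Put together, the sequence described above looks roughly like this (a sketch; $NAME and the other flags are placeholders):

    # Create the cluster config for real (no --dry-run), so kops also creates its secrets in the state store.
    kops create cluster --name $NAME --cloud aws --zones eu-west-1a
    # Export the generated spec, edit it as needed (e.g. set spec.sshKeyName), and push it back.
    kops get --name=$NAME -o yaml > cluster.yaml
    kops replace -f cluster.yaml
    kops update cluster --name $NAME --yes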

@maver1ck

maver1ck commented Jan 8, 2019

But I'm creating the cluster using kops replace -f cluster.yaml --force,
and that is not working.

@rjanovski

First create, then export yaml, edit and replace. Replace by itself does not seem to initialize all resources, like the necessary secrets.

@maver1ck

maver1ck commented Jan 8, 2019

@rjanovski
I'm using it to automate Kubernetes cluster creation.
I have cluster.yaml in a git repo and use replace to create the cluster.
I think this is a bug in replace (it should work similarly to create).

@rjanovski

@maver1ck did you try this:
kops create -f my-cluster.yaml

@tomaszkiewicz

Here's my way of managing a cluster from a file:

kops get cluster --name $KOPS_CLUSTER_NAME

if [ $? -ne 0 ]; then
  echo "Cluster not found, creating a new one"
  # we don't care about the details, only the minimum required to create a key file in S3
  kops create cluster --name=$KOPS_CLUSTER_NAME --ssh-public-key key.pub --cloud=aws --zones eu-west-1a
fi

kops replace -f cluster.yaml --force

# then kops update etc.

@slashr

slashr commented Feb 5, 2019

But why does kops also want to create an sshpublickey secret?

@maver1ck I believe it needs your public key for accessing the bastion server. It adds your public key to the bastion server's authorized_keys file, thus enabling you to SSH to it.

@michaelajr

We don't even use SSH keys and don't have any in the AWS account; our AMIs come with AD set up. Is there a way around having to 1) specify an SSH key in the YAML, and 2) upload a dummy one to the state store just to get past the check on kops update? This has been an issue for years.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 24, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 23, 2019
@michaelajr

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Aug 26, 2019
@michaelajr

Work has been done on this. #7096

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 24, 2019
@ghost

ghost commented Dec 5, 2019

/remove-lifecycle stale

This would be really beneficial when working with AWS.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 5, 2019
@jhohertz
Contributor

jhohertz commented Dec 5, 2019

99% sure this bug is no longer relevant, and has been addressed since... a few versions ago (not 100% sure when... I want to say 1.12.x, but I might be a bit off)

@ghost

ghost commented Dec 5, 2019

99% sure this bug is no longer relevant, and has been addressed since... a few versions ago (not 100% sure when... I want to say 1.12.x, but I might be a bit off)

Is there some way to specify the AWS EC2 key to use besides setting it in the cluster config? I'm on 1.15 and ran into this yesterday. The only way to get around it was to follow the suggestion from @mludvig above.

@jhohertz
Contributor

Not sure. My workflow is to always generate the YAML and feed it to kops, so I can't really speak to the CLI-flag methods.

@xunliu

xunliu commented Jan 14, 2020

This is a feature we really need!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 13, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 13, 2020
@ghost

ghost commented May 13, 2020 via email

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels May 13, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@olemarkus
Member

Trying to figure out what is left of this issue. Is it being able to create a cluster without SSH keys using kops create cluster?
If so, I would perhaps keep it closed, as one typically wants to use kops create -f when creating a proper production cluster. kops create cluster is more of a convenience method to quickly test kops, and will never give you all the flags you need to fully configure your cluster.

@damianobarbati

damianobarbati commented Dec 27, 2022

@michaelajr as of today, the --ssh-public-key <path> option in kops v1.25.3 is not working with the following combination of parameters:

kops create cluster \
  --name $KOPS_CLUSTER_NAME \
  --cloud aws \
  --cloud-labels 'Project=kops-admin-area' \
  --master-count 1 \
  --master-size m5a.large \
  --master-zones eu-west-1a \
  --node-count 1 \
  --node-size m5a.large \
  --node-volume-size 64 \
  --zones eu-west-1a \
  --networking calico \
  --topology private \
  --ssh-public-key <pub file> \
  --bastion \
  --dry-run \
  -o yaml > $CLUSTER_CONFIG
  
kops create -f $CLUSTER_CONFIG
kops update cluster --yes

The private and public key files were generated with ssh-keygen -t rsa. In the generated config YAML there is nothing about SSH, and no key pair is created in AWS from the generated keys.
After applying the config, login to the bastion is denied (I tried both the specified .pub and my local public key).

What is the current solution for using either:
A) an existing key pair, by name, or
B) a locally created key pair?

@johngmyers
Member

Between the kops create -f and the kops update cluster you'd need to add kops create sshpublickey -i <pub file>
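In other words, the sequence would look roughly like this (a sketch based on the comment above; $CLUSTER_CONFIG and <pub file> are placeholders, and the cluster name is assumed to come from KOPS_CLUSTER_NAME):

    kops create -f $CLUSTER_CONFIG
    # Register the SSH public key before updating, as described above.
    kops create sshpublickey -i <pub file>
    kops update cluster --yes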

We could probably define an sshkey resource for the dryrun to emit and the create to consume, but that's not this issue.

@esbc-disciple
Contributor

(quoting @mludvig's three-step workaround from earlier in the thread)

This might not be the proper etiquette to post without adding value. Let me apologize pre-emptively, if so. But I must tell you... thank you! Random strangers on the internet saving my life. You are a legend, @mludvig.
