Can't ssh-connect to newly created instance with a key_name #14348

Closed
MarounMaroun opened this issue May 10, 2017 · 20 comments

Comments

@MarounMaroun

Terraform Version

v0.9.2

Affected Resource(s)


  • aws_instance
  • aws_key_pair

Terraform Configuration Files

resource "aws_instance" "web" {
  subnet_id = "${aws_subnet.main.id}"
  vpc_security_group_ids = ["${aws_security_group.allow_all.id}"]
  ami = "${var.aws_ami}"
  instance_type = "t2.nano"
  key_name = "${aws_key_pair.auth.id}"
  tags {
    "Name" = "instance1"
  }
}

resource "aws_key_pair" "auth" {
  key_name   = "Terraform key"
  public_key = "${file("/tmp/tf_temp/key.pub")}"
}

Debug Output

`terraform apply` returns success; nothing interesting appears in the debug log.

Expected Behavior

I expect to be able to successfully SSH into the instance.

Actual Behavior

I can't SSH into the newly created instance; I'm asked to enter a password instead:

```
ssh username@address -i key
address's password:
```

After running ssh with `-v`, I get:

```
debug1: Skipping ssh-dss key example@com - not in PubkeyAcceptedKeyTypes
```

I tried adding `PubkeyAcceptedKeyTypes=+ssh-dss` to the ssh config file, but that didn't help.
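Worth noting: the `Skipping ssh-dss key` line suggests the key pair was generated with the DSA algorithm, which OpenSSH 7.0 and later disables by default. If that is the cause here, regenerating the key pair as RSA and re-applying should avoid it entirely; a minimal sketch, assuming the paths from the config above:

```sh
# Regenerate the key pair as RSA -- OpenSSH rejects ssh-dss keys by default
# since 7.0. Paths match the Terraform config above; adjust as needed.
ssh-keygen -t rsa -b 4096 -N "" -f /tmp/tf_temp/key

# Re-apply so aws_key_pair.auth picks up the new public key.
terraform apply
```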

@wzalazar

wzalazar commented Dec 1, 2017

I have the same issue.

@chenyosef

Same issue for me too )-:

@AndresPineros

AndresPineros commented Jan 14, 2018

It seems like Terraform is completely broken here. I've bent over backwards trying to get this to work, but the only success I've had was with a connection {} block with a private key defined, which sucks. I've seen so many complaints go totally unanswered for such important functionality.
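For reference, the workaround described here looks roughly like the following. This is a sketch, not the poster's exact config; the user name and key path are assumptions that depend on the AMI and on where the private key lives:

```hcl
resource "aws_instance" "web" {
  ami           = "${var.aws_ami}"
  instance_type = "t2.nano"
  key_name      = "${aws_key_pair.auth.id}"

  # Explicit SSH credentials for provisioners; without them Terraform has
  # nothing to authenticate with and fails with "attempted methods [none]".
  connection {
    type        = "ssh"
    user        = "ubuntu"                      # assumption: varies by AMI
    private_key = "${file("/tmp/tf_temp/key")}" # private half of the key pair
  }

  provisioner "remote-exec" {
    inline = ["echo connected"]
  }
}
```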

@MarounMaroun (Author)

Can someone please give an update on this?

@leriel

leriel commented Jan 23, 2018

I just had this issue, but in my case it was because of the AMI I was using. Depending on the AMI, you may be required to use a different username. I was using a CoreOS AMI as per https://github.com/terraform-providers/terraform-provider-aws/blob/master/examples/ecs-alb/main.tf#L65 and trying to log in as root, which failed.

It's similar with the ECS-optimized AMIs, which for example tell you to log in as ec2-user.

When I used the username matching the AMI, I was able to SSH into the instance; see the examples below.

Please note: the EC2 console's connect dialog will always say root@ipaddress, regardless of whether you can actually log in as root.
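To illustrate, these are the commonly documented default users for the image families mentioned in this thread (verify against your specific AMI; the key path and address are placeholders):

```sh
# The wrong default user looks exactly like a key problem, even when the
# key itself is fine. Typical defaults per AMI family:
ssh -i key core@<instance-ip>      # CoreOS AMIs
ssh -i key ec2-user@<instance-ip>  # Amazon Linux / ECS-optimized AMIs
ssh -i key ubuntu@<instance-ip>    # Ubuntu AMIs
```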

@abarax

abarax commented Jan 30, 2018

I am also having this issue. I get "Too many authentication failures Authentication failed."

But I see no reason this should not be working.
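One thing worth ruling out with that particular message: a running ssh-agent can offer all of its loaded keys before the one passed with `-i`, exhausting the server's attempt limit. Forcing a single identity tests for that (a diagnostic suggestion, not a confirmed fix for this report; user, key path, and address are placeholders):

```sh
# "Too many authentication failures" often means the agent offered several
# identities before the right one; restrict ssh to the given key only.
ssh -o IdentitiesOnly=yes -i key ubuntu@<instance-ip>
```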

@leriel

leriel commented Jan 30, 2018

@abarax what AMI ID are you using? Maybe I'll be able to help. Also, under what username are you trying to log in?

@daniilyar

daniilyar commented Mar 6, 2018

Reproduces for me on the latest Terraform, 0.11.3. I create 5 instances with the same key pair name and security groups, all in the same subnet, and one or two of those instances will not allow SSH access with the proper key after creation. If I drop the instances and create them again, a different one or two become inaccessible via SSH, so on each re-creation I can't get all instances to be accessible.

I am absolutely sure that it is not an SSH key issue and not an AWS networking or security group issue. It is something with Terraform.

In my case, I use Terraform to create instances with a key pair that was created manually and is not managed by Terraform. I will try making the key pair managed by Terraform and see what happens.

@leriel

leriel commented Mar 6, 2018

@daniilyar sounds like they are all also using the same AMI and the same user_data, but I'll ask just to make sure.

@daniilyar

@leriel, yes: same AMI and empty (default) user_data for all.

@daniilyar

I will try to put together a minimal example for reproducing this.

@oniabifo

oniabifo commented Mar 9, 2018

Has anyone been able to fix this issue, please? I ran `bash ~/terraform.sh apply` and everything worked fine until it tried to SSH into the system.

It returned with an error:

```
1 error(s) occurred:

* aws_instance.web: 1 error(s) occurred:

* ssh: handshake failed: ssh: unable to authenticate, attempted methods [none], no supported methods remain
```

I also tried to SSH directly into the instance, and it returned a broken pipe error.

@robax

robax commented May 15, 2018

Ran into this issue recently, but in my case it wasn't related to Terraform or the assigned keys. After provisioning my image with Ansible and running `apt-get update`, the instance stopped responding to SSH (not Terraform's fault). Switching my base AMI from Ubuntu 18 to Ubuntu 16 fixed the issue.

@aug70

aug70 commented May 16, 2018

Came across the same problem. Switching from Ubuntu 18 (ami-14c5486b) to Ubuntu 16 (ami-13be557e) worked for me too.

@sudhishkr

Hitting the same issue with ami-43a15f3e, which is Ubuntu 16.

Looking through the AWS Console, I see the key pair associated with the EC2 instance, but I can't get into that instance.

When I launch a new instance from the same AMI with the same key pair, I am able to SSH in.

@sudhishkr

@aug70 / @robax - with ami-13be557e I seem to be hitting the same problem still. Is there any other image that you have tried that works?

@robax - Also, what was the root cause that makes you say it was not Terraform?

@robax

robax commented May 22, 2018

@sudhishkr We're having success with ami-f3211396 and ami-f11a2894 (both Ubuntu 18; I can't find the IDs you mentioned). Maybe there is an issue with Terraform's aws_key_pair (I never got it to work), but using a connection block on a null_resource works for our use case, so we are happy.
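For anyone looking for the null_resource approach mentioned above, it looks roughly like this. A sketch under assumed names (the user, key path, and inline command are placeholders; 0.11-era syntax to match the rest of the thread):

```hcl
# Run provisioners over an explicit SSH connection on a null_resource,
# instead of relying on the instance's key_name alone.
resource "null_resource" "provision" {
  # Re-run the provisioner whenever the instance is replaced.
  triggers {
    instance_id = "${aws_instance.web.id}"
  }

  connection {
    type        = "ssh"
    host        = "${aws_instance.web.public_ip}"
    user        = "ubuntu"                      # assumption: varies by AMI
    private_key = "${file("/tmp/tf_temp/key")}"
  }

  provisioner "remote-exec" {
    inline = ["echo connected"]
  }
}
```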

@saucec0de

saucec0de commented Feb 23, 2019

The following has worked for me. I added this line to my client's ~/.ssh/config:

```
IPQoS throughput
```

Before changing the config file, you can test the setting like this:

```
ssh -o IPQoS=throughput ...
```

I also noticed that PuTTY worked fine out of the box, but ssh did not work without that config.

NOTE: the original reference was taken from https://communities.vmware.com/thread/590825
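If the setting helps, it can also be scoped to a single host rather than applied globally; a minimal sketch of a ~/.ssh/config entry (the host alias, address, user, and key path are all placeholders):

```
# Hypothetical host entry; adjust Host, HostName, User, and IdentityFile.
Host my-ec2-instance
    HostName 203.0.113.10
    User ubuntu
    IdentityFile ~/.ssh/key
    IPQoS throughput
```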

@ghost

ghost commented Jun 6, 2019

This issue has been automatically migrated to hashicorp/terraform-provider-aws#8897 because it looks like an issue with that provider. If you believe this is not an issue with the provider, please reply to hashicorp/terraform-provider-aws#8897.

@ghost ghost closed this as completed Jun 6, 2019
@ghost

ghost commented Jul 25, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Jul 25, 2019