
cluster ssh not working for instance in VPC #236

Closed
gwilton opened this Issue · 0 comments

@gwilton

With version 4.7.2 I am now able to launch instances inside a VPC and automatically associate an EIP with the instance. However, knife "cluster launch" gets stuck in the final phase of the launch while trying to ssh...

sandbox-raid-0: setting termination flag false
sandbox-raid-0: associating Elastic IP 107.23.163.84
sandbox-raid-0: syncing EBS volumes
sandbox-raid-0: launching
sandbox-raid-0: waiting for ready
sandbox-raid-0: trying ssh

After waiting a few minutes and seeing that the instance is online with the proper EIP, I CTRL+C out of the "cluster launch".

I then try "cluster ssh" against the available instance and get the following...

[homebase (master)]$ knife cluster ssh sandbox uptime
Inventorying servers in sandbox cluster, all facets, all servers
sandbox: Loading chef
sandbox: Loading ec2
sandbox: Reconciling DSL and provider information
FATAL: No nodes returned from search!

It seems the issue lies in lib/chef/knife/cluster_ssh.rb:54, where c.machine.public_hostname is used to build the list of ssh targets. The AWS forums explain that EIPs attached to instances in a VPC don't get a public_hostname set (https://forums.aws.amazon.com/message.jspa?messageID=406306), so the address list below comes back empty.

        # Drop bogus servers, then collect each machine's public hostname.
        target = target.select {|t| not t.bogus? }
        addresses = target.map {|c| c.machine.public_hostname }.compact

        # In a VPC, public_hostname is nil, so compact yields an empty list
        # and the fatal error below fires even though the instance is reachable.
        (ui.fatal("No nodes returned from search!"); exit 10) if addresses.nil? || addresses.length == 0
@gwilton gwilton referenced this issue from a commit
Wilton Marranzini Fixing issue #236
cluster ssh was broken for VPC instances, this will fix a few bugs.
f89b88e
@temujin9 temujin9 closed this