vagrant ssh only possible after restart network #391

Closed · sebastian-alfers opened this issue Jun 15, 2011 · 124 comments
@sebastian-alfers

Hey,

I can not log into my VM after "vagrant up".

I have to start it in GUI mode, then restart my network adapter with "sudo /etc/init.d/networking restart".
After this, my VM gets an IPv4 address and my Mac is able to SSH into the VM and do the provisioning.

Any idea on this?

Same issue as here: http://groups.google.com/group/vagrant-up/browse_frm/thread/e951417f59e74b9c

The box is about 5 days old!

Thank you!
Seb

@mitchellh

Ah, so we tried to fix this in the thread. I'm not entirely sure what the cause of this is, although it has something to do with the setup of the box. I've put a sleep in the bootup process. Please verify you have a pre-up sleep 2 in your /etc/network/interfaces file.

Otherwise, any other hints would be helpful :-\

@Benedict

I too am having this problem. I've tried both lucid32 & lucid64, which I downloaded today.

Before running sudo /etc/init.d/networking restart, /etc/network/interfaces looks like

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
pre-up sleep 2

After restarting the networking and running vagrant reload, the file looks like

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
pre-up sleep 2
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant.
# Please do not modify any of these contents.
auto eth1
iface eth1 inet static
      address 33.33.33.10
      netmask 255.255.255.0
#VAGRANT-END

Any ideas?

@hedgehog commented Jul 1, 2011

ssh doesn't like two hosts at the one address.
I've seen this with two VMs getting the same address and SSH showing the same behavior (below).

Now it turns out SSH doesn't like two redirected port connections to the same port.

Symptom:

$ ssh vagrant@127.0.0.1 -p 2222 -i /path/to/private/key/vagrant -vvv
OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009
debug1: Reading configuration data /home/hedge/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 127.0.0.1 [127.0.0.1] port 2222.
debug1: Connection established.
debug3: Not a RSA1 key file /path/to/private/key/vagrant.
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug3: key_read: missing keytype
debug3: key_read: missing whitespace
[previous line repeated 24 more times in the original log]
debug2: key_type_from_name: unknown key type '-----END'
debug3: key_read: missing keytype
debug1: identity file /path/to/private/key/vagrant type 1
debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
^C

Now I see two connections to 127.0.0.1:2222

$ lsof -i :2222
ruby    9851 hedge   12u  IPv4 13467394      0t0  TCP localhost:55035->localhost:2222 (ESTABLISHED)
ruby    9851 hedge   13u  IPv4 13469354      0t0  TCP localhost:55098->localhost:2222 (ESTABLISHED)

Confirm that this is vagrant:

$ ps uax|grep 9851
hedge     9851  6.4  0.2 256080 47836 pts/4    Sl+  12:38   0:16 ruby /home/hedge/.rvm/gems/ruby-1.9.2-p180@thinit/bin/vagrant up

Confirm there is only one vm running:

$ ps aux|grep startvm
hedge     9873  4.9  2.6 706800 441432 ?       Sl   12:39   0:29 /usr/lib/virtualbox/VBoxHeadless --comment www --startvm 82cb3255-940b-48f6-b2c7-8ec50ae6500d --vrde config

So it seems the problem is that somewhere in vagrant two connections are being established to port 2222.

Correct?

@judev commented Jul 5, 2011

Could this be some sort of timing issue, with the Linux networking trying to start (or get an IP) before VirtualBox has finished setting up the interface? Must admit that I don't know the internals, so not sure if this is even likely.
When I enable the VirtualBox GUI and log in (while vagrant is still trying to connect via ssh), ifconfig reports no IPv4 address. If I then run sudo dhclient, vagrant successfully connects within a couple of seconds.

@mitchellh

@judev

If this was the case then switching VirtualBox versions back would fix the issue, which I'm not sure is the case (it may be, I don't know). I say this because previous versions of Vagrant worked just fine. This is still an isolated issue but annoying enough that I'd like to really figure it out, but haven't been able to yet.

@hedgehog commented Jul 5, 2011

@mitchellh, in my case switching VB back to 4.0.4 seems to have eliminated the issue. VB 4.0.10 was a problem. From memory I upgraded from 4.0.6 because I was hitting some issues. At the time I had 4.0.6 I wasn't using Vagrant much.

Anyway, stepping back to VB 4.0.4 is definitely a fix for this issue in my case.
We also can't rule out the Host OS. I say this simply because the packaged OSE versions of VB on lucid seem to be 4.0.4.

@hedgehog commented Jul 5, 2011

@judev, what happens if you vagrant reload that VM after you have connected to it via ssh?
Are you able to ssh to it again? Run lsof -i :2222 and note the connection details of your established ssh connection. In my case I'd see two established connections to localhost:2222 after the reload, one of them being the connection from before the reload.

@hedgehog commented Jul 5, 2011

@judev, please add your failing and passing configuration details to this page:
https://github.com/jedi4ever/veewee/wiki/vagrant-(veewee)-+-virtualbox-versions-test-matrix

The page has an example script that makes it easy to test (change the Ruby and gem versions to what you have).
It shouldn't pollute your system if you have rvm installed.

@judev commented Jul 11, 2011

Sorry for the delay. I've tried with each version of VirtualBox from 4.0.4 to 4.0.10; same problem when using the latest lucid32 box, but everything works fine using "ubuntu 11.04 server i386" from http://vagrantbox.es

@hedgehog, when I did sudo dhclient, connected over ssh, then did vagrant reload, I still could not connect until doing another sudo dhclient. The previous connection did not show using lsof.

Thanks for your help, am happy to say things are working really well with ubuntu 11.04.

@hedgehog

@judev, do I understand correctly: lsof -i :2222 returned nothing after vagrant reload, and then there was one connection after running sudo dhclient?
Or: does lsof -i :2222 show two connections after vagrant reload, which then falls to one connection after sudo dhclient? It might help if you gave the actual commands and their outputs.
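For anyone following along, the check being discussed can be sketched like this. This is a hypothetical helper, shown against canned netstat output rather than a live VM; two ESTABLISHED rows on the forwarded port is the failure signature described above:

```shell
# Count ESTABLISHED connections on Vagrant's default forwarded SSH port.
# The canned sample below stands in for live `netstat -an` output.
sample='TCP 127.0.0.1:2222 127.0.0.1:54618 ESTABLISHED
TCP 127.0.0.1:2222 127.0.0.1:54624 ESTABLISHED'
count=$(printf '%s\n' "$sample" | grep ':2222' | grep -c 'ESTABLISHED')
echo "established connections on :2222: $count"
```

On a healthy reload you would expect this count to be at most one.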

@mabroor commented Jul 13, 2011

I get the same issue: latest version of Vagrant, VBox on Win7 x64 using JRuby (as mentioned in the docs). Running sudo dhclient in the GUI was able to get my puppet manifest running.
Strange thing is that I had another machine with the exact same setup where I encountered this issue only once in the last week. This machine has this problem constantly...

@hedgehog

@mabroor, could you give the additional command output, in sequence, requested above?

@mabroor commented Jul 14, 2011

@hedgehog

I tried after a vagrant halt.
Problem returns. Below is the output from netstat while vagrant is waiting for the vbox to boot (it is already booted):

netstat -an
 TCP    0.0.0.0:2222           0.0.0.0:0              LISTENING
 TCP    127.0.0.1:2222         127.0.0.1:54436        TIME_WAIT
 TCP    127.0.0.1:2222         127.0.0.1:54612        FIN_WAIT_2
 TCP    127.0.0.1:2222         127.0.0.1:54618        ESTABLISHED
 TCP    127.0.0.1:2222         127.0.0.1:54624        ESTABLISHED

I then log in to the vbox and run sudo dhclient and it works fine. When vagrant has done its thing, connections are shown as established using netstat. I am using Windows so can't use the native ssh to show verbose output.

@grimen commented Jul 27, 2011

Same issue, but sudo /etc/init.d/networking restart didn't solve it for me. I'm trying another box now, let's hope it works.

@mabroor commented Jul 29, 2011

@grimen: try sudo dhclient
Always works for me now.

@hedgehog

@mabroor, is it the case that, according to netstat, there are always two established connections when you cannot connect and only one when you can connect?

@mabroor commented Jul 29, 2011

That's correct.

@grimen commented Jul 29, 2011

@mabroor Do you maybe know the corresponding solution for OS X?

@mabroor commented Jul 29, 2011

@grimen the command I mentioned has to be run in the VM. I didn't know the problem existed on OS X; I had the issue on Windows 7 x64.

@grimen commented Jul 29, 2011

@mabroor Ouch, yes of course, then it even makes sense. :) Problem though is that I cannot get into the VM. How did you do that?

@mabroor commented Jul 29, 2011

Put config.vm.boot_mode = :gui in your Vagrantfile to run the VM in GUI mode.

@grimen commented Jul 29, 2011

@mabroor Thanks - will try that!

@grimen commented Aug 1, 2011

I got the GUI now, but none of the proposals in this thread works for me (for "lucid32" and "lucid64", that is; those seem to be flawed, as 'talifun' works). :(

@mrolli commented Aug 26, 2011

My combo shows the same issue: Mac OS X 10.7.1, Vagrant 0.8.5, VirtualBox 4.1.0, lucid64 with correct guest additions.

After first boot, vagrant could not connect to the VM. In the VM (GUI) there was no IP address set. Did a sudo dhclient while vagrant was hanging, and vagrant connected instantly after the guest finally had an IP.

Meanwhile I did vagrant reload twice and never had to do a sudo dhclient.

@vasko commented Sep 8, 2011

I'm using Mac OS X 10.7.1, Vagrant 0.8.6, VirtualBox 4.1.2, lucid32 with the 4.1.0 guest additions.

I've added the following line to my Vagrant::Config and it boots up and works fine now.
config.vm.provision :shell, :inline => "/etc/init.d/networking restart"

It's not the ideal situation, but it works without needing to go into the GUI.

UPDATE: Okay. I've run this a few times and it doesn't always work. Especially when I'm connected to the internal network without an internet connection it seems.

@mikhailov

That works for me:

1) log in with :gui using login/pass vagrant/vagrant
2) modify the /etc/rc.local file to include the line sh /etc/init.d/networking restart just before exit 0
3) disable :gui
4) vagrant reload
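For reference, the resulting /etc/rc.local would look something like this (a sketch; the rest of the file is whatever your box already ships with):

```shell
#!/bin/sh
# /etc/rc.local: sketch of the edit from the steps above.
# (existing contents of the file stay in place)

# Restart networking so the guest picks up its DHCP lease before
# Vagrant's SSH retries give up.
sh /etc/init.d/networking restart

exit 0
```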

@shingara

Is there no technique for this that doesn't require hacking around in GUI mode?

@vasko commented Sep 14, 2011

I've repeated the below process at least 5 times now for all scenarios.

Running vagrant up after I've started the VirtualBox application works every time.

Running vagrant up without starting the VirtualBox application fails every time, with or without the ":gui" option.

From my simple testing it seems to be an issue with running headless.

UPDATE: I've just found this article http://serverfault.com/questions/91665/virtualbox-headless-server-on-ubuntu-missing-vrdp-options. I've just installed the Extensions pack and I've had no issues since. VRDP was removed from VirtualBox 4.0 and moved into the extension pack. I believe this might also be related to this issue #455.

UPDATE: I jumped the gun on this I think. I'm having trouble with lucid32 and lucid64 running without the ":gui" option.

hedgehog added a commit to hedgehog/vagrant that referenced this issue Oct 21, 2011
…others too

This should help the ssh connections refused errors.
It seems that it might also make redundant the new ssh session
caching code, but I really couldn't follow what was trying to be achieved
there.
@hedgehog

Can people with this issue confirm that the following pull request fixes this issue for them?

#534

@ghost commented Dec 26, 2013

Is there an agreed-upon solution for this? I'm running VirtualBox 4.3.6 and Vagrant 1.4.1 on RHEL 6.2 and am unable to run vagrant ssh. I see the Wiki page, but since I am accessing the host machine through SSH, I don't have access to the VirtualBox GUI.

@shimondoodkin

I had a problem where Vagrant was freezing after restore from hibernate.
On Windows 7, after unchecking "allow the computer to turn off this device to save power" in the WiFi card driver settings (Network and Sharing Center > Change adapter settings > right-click an adapter > Properties > Configure > Power Management), the problem seems gone.

Probably the problem is something like a 'broken pipe': something with the network device, because the network device is disconnected before hibernate and on startup.

@cstewart87

Seeing this issue running Vagrant 1.4.3 with VirtualBox 4.3.6r91406 on Ubuntu 12.04. Are there specific host network settings that are required for Vagrant to work correctly?

@kikitux commented Feb 25, 2014

Vagrant sets up local port 2222 and forwards it to port 22 on the first NIC.

Are you setting config.ssh?

Can you paste your Vagrantfile?


@cstewart87

I'm using test-kitchen and this is the generated Vagrantfile:

Vagrant.configure("2") do |c|
  c.vm.box = "opscode-ubuntu-12.04"
  c.vm.box_url = "https://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-12.04_chef-provisionerless.box"
  c.vm.hostname = "host-ubuntu-1204.vagrantup.com"
  c.vm.synced_folder ".", "/vagrant", disabled: true
  c.vm.provider :virtualbox do |p|
    p.customize ["modifyvm", :id, "--memory", "512"]
  end
end

@kikitux commented Feb 27, 2014

@cstewart87 Worked for me with your Vagrantfile, no issues at all.

@jean commented Mar 10, 2014

I added a public network to my VM. This booted fine and worked great. Then I tried to restart. Subsequently:

19:16 jean@klippie:~/vagrant/geonode$ VAGRANT_LOG=DEBUG vagrant halt
 INFO global: Vagrant version: 1.2.2
[...]
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG virtualbox_4_2:   - [1, "ssh", 2222, 22]
DEBUG ssh: Checking key permissions: /home/jean/.vagrant.d/insecure_private_key
 INFO ssh: Attempting SSH. Retries: 100. Timeout: 30
 INFO ssh: Attempting to connect to SSH...
 INFO ssh:   - Host: 127.0.0.1
 INFO ssh:   - Port: 2222
 INFO ssh:   - Username: vagrant
 INFO ssh:   - Key Path: /home/jean/.vagrant.d/insecure_private_key
DEBUG ssh: == Net-SSH connection debug-level log START ==
DEBUG ssh: D, [2014-03-10T19:16:53.664503 #12855] DEBUG -- net.ssh.transport.session[4caaa54]: establishing connection to 127.0.0.1:2222
D, [2014-03-10T19:16:53.665283 #12855] DEBUG -- net.ssh.transport.session[4caaa54]: connection established
I, [2014-03-10T19:16:53.665407 #12855]  INFO -- net.ssh.transport.server_version[4caa09a]: negotiating protocol version

DEBUG ssh: == Net-SSH connection debug-level log END ==
 INFO retryable: Retryable exception raised: #<Timeout::Error: execution expired>
 INFO ssh: Attempting to connect to SSH...
 INFO ssh:   - Host: 127.0.0.1
 INFO ssh:   - Port: 2222
[...] # repeats endlessly

@kikitux commented Mar 10, 2014

@jean I see you are posting in a bug that is closed; perhaps you want to try the mailing list.

I can tell you that I have seen issues when the Vagrantfile has some errors in the logic, or the base box had issues.

You can send an email to the mailing list with the Vagrantfile and we can take it from there.

@jean commented Mar 11, 2014

@kikitux thanks for your answer, posting to the list 🙇

@appsol commented Mar 12, 2014

@axsuul I am getting this issue with a CentOS box running on an Ubuntu 12.04 host. The issue appeared after a kernel update in Ubuntu which caused the DKMS entry for VirtualBox to be corrupted. This may be related or may be coincidence.
I tried several of the fixes here, but only adding /etc/init.d/networking restart to /etc/rc.local has let me get the box up and running again.

@stenver commented Apr 3, 2014

https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/797544

This is pretty much the root of the issue.

@trkoch commented Jun 15, 2014

I ran into a similar issue with the Ubuntu 14.04 Cloud Image. Turns out Vagrant adds

post-up route del default dev $IFACE

to the interface configuration. Manually removing this before rebooting fixes the issue (i.e. I can immediately vagrant ssh and don't get any timeouts). However, Vagrant adds this snippet back when running the auto configuration on bootup.
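If anyone wants to script that removal, here is a sketch, shown against a scratch copy of the file (point it at /etc/network/interfaces inside the real VM, and note, as described above, that Vagrant puts the line back on the next boot):

```shell
# Build a scratch copy of the interfaces file for illustration.
cat > /tmp/interfaces <<'EOF'
auto eth0
iface eth0 inet dhcp
post-up route del default dev $IFACE
EOF

# Delete the post-up line Vagrant injected (GNU sed in-place edit).
sed -i '/post-up route del default dev/d' /tmp/interfaces
cat /tmp/interfaces
```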

@simonmorley

Impressed there are people, including myself, still struggling with this.

@mrhassell

Thank you @mikhailov! Adding sh /etc/init.d/networking restart to /etc/rc.local just before exit 0 worked a charm!

@donmccurdy

Is there an equivalent of adding sh /etc/init.d/networking restart to /etc/rc.local that will work without destroying and re-provisioning the Vagrant VM? My current solution on Ubuntu 14.04 has involved destroying the VM whenever the issue happens to show up.

@Gowiem commented Oct 21, 2014

I'm experiencing this issue at least once a week and it's driving me nuts. Doing this from the wiki page fixes the issue but it is really time consuming and takes up a good chunk of my morning. I've tried a number of the fixes listed here and they haven't helped.

Is there no way to get a fix for this into Vagrant and skip this runaround of workarounds?

@matiasepalacios

I came up with a fix that works fine so far for me.

Disclaimer: this is more a hack than a fix, as it does not solve the real problem, which I have no clue how to fix; however, it works...

Ok, so first, you have to make sure your /etc/network/interfaces file is 'clean'.
Mine looks like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface 
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet dhcp 
pre-up sleep 2

Then, once you make sure your file is ready, you just create a simple bash script, put it on /etc/init.d and add it to the startup of the VM:

#! /bin/sh
# /etc/init.d/interfaces.sh
#

### BEGIN INIT INFO
# Provides: interfaces
# Required-Start: 
# Required-Stop:
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Does Vagrant related crap
# Description: More crap
### END INIT INFO


cp /etc/network/interfaces /etc/network/interfaces.bak

And to add it, you just run this command:

sudo /usr/sbin/update-rc.d interfaces.sh defaults

That will make a backup copy of your /etc/network/interfaces file, that you will copy over again when you shutdown the machine. The script that does it is as simple as the first one:

#! /bin/sh
# /etc/init.d/interfaces-at-shutdown.sh
#

### BEGIN INIT INFO
# Provides: interfaces-at-shutdown
# Required-Start:
# Required-Stop:
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Does Vagrant related crap
# Description: More crap
### END INIT INFO


cp /etc/network/interfaces.bak /etc/network/interfaces

and then you create a symbolic link to it in /etc/rc0.d like this:

cd /etc/rc0.d && sudo ln -s ../init.d/interfaces-at-shutdown.sh K01interfaces

And that's it. Every time the machine starts, it will back up the file, and every time it shuts down it will restore the file to its original state, thus allowing Vagrant to do its SSH magic.

@StyxOfDynamite

I think shoe-horning all these issues into one has actually had a negative effect. That said, here seems to be the only viable place to talk through the issues I'm facing.

VirtualBox fails to configure virtual network adapters when launched via vagrant up.

I've literally read everything I can find on the various SSH timeout issues, and there seem to be a few different "causes", each with their own "fixes".

None of these identify the actual issue I'm having.

I'm working on a Linux host with a VirtualBox provider, and I can't get a single box to launch successfully. Many prepackaged boxes won't launch, as my physical chipset doesn't support 64-bit virtualization (despite supporting a 64-bit OS).

I've built a Vagrant box from scratch. I needed an easily reproducible, distributable Ghost blog installation for theme development. To rule out issues with the created box, I've also tried with several 32-bit boxes that are already in existence.

If I launch the box with vagrant up, at a high level the following things happen:

1. The box is imported from my list of boxes to the current directory
2. Network adapters are initialised
3. The VM boots
4. The VM hangs long enough that vagrant gives up trying to SSH
5. The VM continues to boot: after waiting 2 minutes, it waits 60 seconds, then an additional 60 seconds, and then launches without configured virtual network adapters

At this point I cannot ssh into the box using either 'vagrant ssh' or 'ssh vagrant@127.0.0.1'

Now that the box has been imported and the ports configured, I can use the VirtualBox GUI to send the shutdown signal.

If I then use the VirtualBox GUI to power on the machine, the following things happen:

1. The VM boots
2. It uses the saved network configuration from the earlier failed attempt at booting
3. It does not hang
4. I can now use vagrant ssh / vagrant suspend without issues

This to me suggests the issue I'm facing is with how vagrant up tells VirtualBox to launch the VM.

The Work-around (No Hacks vs No Instant Portability)

The workaround of killing the booted VM and then relaunching it via the GUI is not the end of the world, and doesn't involve any hacks to any Vagrant files. It does however mean I can't rely on vagrant to provision the machine (not a problem, as I purpose-built the base image to avoid provisioning machines the same way over and over). I do have to manually add shared folders once the machine has been imported. I can get by with this just fine for the time being, but will look more closely at the difference between how vagrant up launches the machine and how VirtualBox launches it when I can find time.

@nryoung commented Sep 23, 2015

I am going to put my fix here in case it helps somebody who had the same issue as me.

I received this error when trying to vagrant ssh into my VM after provisioning:

ssh_exchange_identification: Connection closed by remote host

after changing my synced_folder setting to:

config.vm.synced_folder ".", "/var", type: "nfs"

Turns out /var is owned by root, so creating this folder would fail on vagrant up and cause the networking portion of the VM to not be configured correctly. When I changed my synced_folder to:

config.vm.synced_folder ".", "/var/<dir not owned by root>", type: "nfs"

where <dir not owned by root> is a custom path. This allowed vagrant to provision the VM correctly, the networking to come up correctly, and vagrant ssh to work as expected.

@Xitsa commented Nov 5, 2015

I have the same problem with VirtualBox 5.0.2 and Vagrant 1.7.4 on Ubuntu 14.04 (all x86).
For some reason eth0/eth1 are bound to IPv6.
I've changed the NIC type to Am79C973 in the VM's settings. After that, vagrant connects to the VM successfully.
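The same change can be made from the host with VBoxManage instead of the GUI. A sketch follows; the VM name is a placeholder, and the command is only echoed here so you can review it first:

```shell
VM_NAME="my_vm"   # placeholder: your VirtualBox VM name or UUID
# --nictype1 selects the emulated hardware for adapter 1; Am79C973 is the
# PCnet-FAST III chipset mentioned above. Drop the echo to actually apply
# the change (the VM must be powered off first).
cmd="VBoxManage modifyvm $VM_NAME --nictype1 Am79C973"
echo "$cmd"
```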

@alexkart

Thanks @Xitsa, it works for me too, but not for all VMs. For the others, adding this to /etc/rc.local works:

ifdown eth0
sleep 1
ifup eth0
exit 0

Update: I also tried enabling Intel virtualization in the BIOS, and it helped as well; it seems virtualization is required even for 32-bit operating systems (ubuntu/trusty32).

@timmackinnon commented May 31, 2016

Thought I would add some details around the solution that I employed to work around this issue. On my end this appeared to be bound to the existence of a private network, i.e. something like this in my Vagrantfile:

config.vm.network "private_network", ip: "192.168.42.2"

The end result was mixed-up eth0 and eth1 interfaces that, when running with the VBox GUI enabled and logging in, looked something like the following:

[root@host0 ~]# nmcli connection show
NAME     UUID                                  TYPE            DEVICE  
docker0  df9293c9-051e-4005-81a1-f08a4e9fdccf  bridge          docker0 
eth0     5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  802-3-ethernet  eth1    
[root@host0 ~]# nmcli dev status
DEVICE   TYPE      STATE        CONNECTION 
docker0  bridge    connected    docker0    
eth1     ethernet  connected    eth0       
eth0     ethernet  disconnected --         
lo       loopback  unmanaged    --

My solution, and it is kind of hacky, was to employ a systemd service to clean this up at boot time. It looks like the following:

[Unit]
Description=Service to clean up eth0 and eth1
Wants=network-online.target
After=network.target network-online.target

[Service]
Type=simple
ExecStartPre=-/usr/bin/echo "Network clean up started"
ExecStartPre=-/usr/bin/sleep 5
ExecStartPre=-/usr/bin/nmcli connection delete eth0
ExecStart=/usr/bin/nmcli connection add type ethernet ifname eth0 con-name eth0
ExecStartPost=-/usr/bin/echo "Network clean up completed"

[Install]
WantedBy=multi-user.target

and to push my private network config up to the Vagrant layer:

config.vm.network "private_network", ip: "192.168.42.2", auto_config: false
config.vm.provision "shell",
  run: "always",
  inline: "ifconfig eth1 192.168.42.2 netmask 255.255.255.0 up"

so it runs after vagrant can ssh into the VM.

When running the following:

#!/bin/bash
for i in {1..100}
do
   echo "#### Iteration $i started ####"
   vagrant up
   vagrant destroy -f
   echo "#### Iteration $i completed ####"
done

I was able to get my VM to come up 100/100 times.
