
Private network comes up with wrong IP address #2968

Closed
ahodgkinson opened this issue Feb 13, 2014 · 13 comments

Comments

@ahodgkinson

My VM comes up with the wrong IP address, the very first time it is started.

I have the following in my Vagrantfile:

  config.vm.network :private_network, ip: "172.16.2.3", netmask: "255.240.0.0"
  1. When I 'vagrant up' the system for the very first time, the VM comes up with the IP address 172.16.2.2 instead of 172.16.2.3.
  2. When I subsequently do a 'vagrant halt' and 'vagrant up', the correct IP address is assigned.
  3. If I then do a 'vagrant destroy' and 'vagrant up', the incorrect IP address 172.16.2.2 is assigned to the VM.

This behavior appears to be repeatable.
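As an aside, the netmask in the config above (255.240.0.0) is a /12 prefix, which is why the same network shows up as 172.16.2.x/12 in the ip output later in this thread. A quick bash sketch to double-check (mask_to_prefix is just an illustrative helper, not part of Vagrant):

```shell
# Count the set bits in a dotted-quad netmask to get its CIDR
# prefix length, e.g. 255.240.0.0 -> 12.
mask_to_prefix() {
    local IFS=. octet bits=0
    for octet in $1; do
        while [ "$octet" -gt 0 ]; do
            bits=$(( bits + (octet & 1) ))
            octet=$(( octet >> 1 ))
        done
    done
    echo "$bits"
}

mask_to_prefix 255.240.0.0   # prints 12
```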

Curiously, when the incorrect IP address 172.16.2.2 is assigned, it appears that if I ssh to 172.16.2.3, it actually connects to my VM! (and ssh to 172.16.2.2 also works).

Perhaps the IP 172.16.2.3 is somehow not released by the host system when the VM is destroyed?

This issue may be related to #1014

Software versions:

Host machine: Ubuntu 12.04.3 LTS
VMs: Ubuntu 12.04.3 LTS
Vagrant 1.3.5
VirtualBox 4.3.0r89960

@kikitux
Contributor

kikitux commented Feb 13, 2014

Can you paste the output of ifconfig -a for both cases in the guest?


@ahodgkinson
Author

From the first boot (i.e. after 'vagrant destroy' and 'vagrant up'):

root@vmt3:~# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 08:00:27:88:0c:a6  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe88:ca6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1361 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1032 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:144271 (144.2 KB)  TX bytes:114601 (114.6 KB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:5d:4e:fe  
          inet addr:172.16.2.2  Bcast:172.31.255.255  Mask:255.240.0.0
          inet6 addr: fe80::a00:27ff:fe5d:4efe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:60 errors:0 dropped:0 overruns:0 frame:0
          TX packets:73 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:9882 (9.8 KB)  TX bytes:10142 (10.1 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:143 errors:0 dropped:0 overruns:0 frame:0
          TX packets:143 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:18580 (18.5 KB)  TX bytes:18580 (18.5 KB)

After a 'vagrant halt' and 'vagrant up':

root@vmt3:~# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 08:00:27:88:0c:a6  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe88:ca6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:385 errors:0 dropped:0 overruns:0 frame:0
          TX packets:259 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:44799 (44.7 KB)  TX bytes:34028 (34.0 KB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:5c:83:a5  
          inet addr:172.16.2.3  Bcast:172.31.255.255  Mask:255.240.0.0
          inet6 addr: fe80::a00:27ff:fe5c:83a5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:60 (60.0 B)  TX bytes:468 (468.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:256 errors:0 dropped:0 overruns:0 frame:0
          TX packets:256 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:20400 (20.4 KB)  TX bytes:20400 (20.4 KB)

@tmatilai
Contributor

@ahodgkinson I couldn't reproduce with Vagrant 1.4.3. Could you please run VAGRANT_LOG=debug vagrant up 2>&1 | tee vagrant.log (on the first up, to capture the error) and gist the full log.

@ahodgkinson
Author

Here you are: vagrant-20140214-0952-issue-2968.log

Let me know if you need any more information.

@tmatilai
Contributor

Thanks for the log. But no luck so far.
Could you also gist the contents of /etc/network/interfaces after the failing run?

The difference between the first and second up is that the host name is set only on the first one, and it stops and starts the network interfaces before the private_network is configured. It shouldn't matter, but you could also test whether you can reproduce the issue with the vm.hostname configuration commented out in your Vagrantfile.

Is the box you're using publicly available? Or even better, the build templates?

@ahodgkinson
Author

@tmatilai: Here is the copy of /etc/network/interfaces taken after a failing run:

vagrant@vmt3:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
auto eth1
iface eth1 inet static
      address 172.16.2.3
      netmask 255.240.0.0
#VAGRANT-END

Note the IP address 172.16.2.3, which looks good to me. Meanwhile, ifconfig reports:

eth1      Link encap:Ethernet  HWaddr 08:00:27:5d:da:3e  
          inet addr:172.16.2.2  Bcast:172.31.255.255  Mask:255.240.0.0
          ...

...weird!
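This mismatch check can be scripted; a small shell sketch comparing the address Vagrant wrote into /etc/network/interfaces with the primary address the interface actually carries (check_ip is a hypothetical helper; the sample data is taken from the listings above):

```shell
# Sketch: compare the statically configured address from an
# /etc/network/interfaces snippet with the primary address the
# interface reports (output of `ip -o -4 addr show dev eth1`).
check_ip() {  # $1 = interfaces file contents, $2 = ip output
    local configured actual
    configured=$(printf '%s\n' "$1" | awk '$1 == "address" { print $2; exit }')
    actual=$(printf '%s\n' "$2" | awk '{ split($4, a, "/"); print a[1]; exit }')
    if [ "$configured" = "$actual" ]; then
        echo "ok: $actual"
    else
        echo "mismatch: configured=$configured actual=$actual"
    fi
}

# With the data from this comment:
interfaces='iface eth1 inet static
      address 172.16.2.3
      netmask 255.240.0.0'
ipout='3: eth1    inet 172.16.2.2/12 brd 172.31.255.255 scope global eth1'
check_ip "$interfaces" "$ipout"   # prints: mismatch: configured=172.16.2.3 actual=172.16.2.2
```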

@ahodgkinson
Author

Solved!

The problem was that the eth1 interface was actually being assigned two IP addresses: one was the address specified in the Vagrantfile, and the second was an incorrect address. (It's not completely clear where it came from; we think the DHCP server on the host machine could be causing this.)

The presence of the two IP addresses could be confirmed as follows:

$ ip addr show
...
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:6f:02:63 brd ff:ff:ff:ff:ff:ff
    inet 172.16.2.2/12 brd 172.31.255.255 scope global eth1
    inet 172.16.2.3/12 brd 172.31.255.255 scope global secondary eth1
    inet6 fe80::a00:27ff:fe6f:263/64 scope link
    valid_lft forever preferred_lft forever
...

Note that the secondary address, 172.16.2.3, was the address defined in the Vagrantfile.
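A check like the one above can be automated; this sketch flags any interface carrying more than one global IPv4 address by parsing `ip -o -4 addr show` style output (one line per address; find_dupes is an illustrative helper, and the sample lines reuse the addresses from this issue):

```shell
# Sketch: report interfaces with more than one global IPv4 address.
find_dupes() {
    awk '$3 == "inet" && /scope global/ { n[$2]++; a[$2] = a[$2] " " $4 }
         END { for (dev in n) if (n[dev] > 1) print dev ":" a[dev] }'
}

# Feeding it the addresses reported in this issue:
printf '%s\n' \
  '2: eth0    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0' \
  '3: eth1    inet 172.16.2.2/12 brd 172.31.255.255 scope global eth1' \
  '3: eth1    inet 172.16.2.3/12 brd 172.31.255.255 scope global secondary eth1' \
  | find_dupes   # prints: eth1: 172.16.2.2/12 172.16.2.3/12
```

On a live guest you would pipe the real output instead: ip -o -4 addr show | find_dupes.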

There are two possible fixes:

  1. As stated in #2539 ("Sometimes, ifdown is not enough"), add the following to the Vagrantfile to bring the eth1 interface down and then back up (which fixed the problem on my system):

      config.vm.provision :shell, inline: "sudo /sbin/ifdown eth1 && sudo /sbin/ifup eth1"
    
  2. Upgrade Vagrant to 1.4.x. The problem described in #2539 led to a bug fix; upgrading from Vagrant 1.3.5 to 1.4.3 also fixed the problem on my system. This was the solution I preferred.

Many, many thanks go to @tmatilai, who diagnosed the problem and suggested the fixes described here.

@kikitux
Contributor

kikitux commented Feb 17, 2014

Try this:

find /etc | grep -i eth

Sometimes there are some weird profiles, and you may end up with more than one ifcfg-eth1.


@JoelPM

JoelPM commented Oct 15, 2014

For what it's worth, I see this same problem when using the coreos-vagrant Vagrantfile. Starting a cluster of 3 machines (using the vmware_fusion provider) results in the machines getting IPs 172.17.8.130, 172.17.8.131, 172.17.8.132 rather than .100, .101, and .102 as they should. I'm using vagrant version 1.6.5.

https://github.com/coreos/coreos-vagrant

@brianm

brianm commented Jan 31, 2015

This still happens to me in 1.7.2 when using fedora-21 and vmware_fusion. Bouncing the interface still works around it.

@pwm

pwm commented Jun 24, 2015

Same problem here, using 1.7.2 with vmware_fusion and centos 7.1.

@cgbaker

cgbaker commented Jun 26, 2015

I've also been struggling with this on 1.7.2 with virtualbox and centos 7.1. Bouncing the interface works, but only for a moment; then it loses the IP.

@pwm, it looks like, for centos 7.1 anyway, this is apparently related to #5590; they reference a fix in #5709 that solves the issue for me on centos 7.1.

@mgiaccone

I had the same issue; the problem seems to be related to the VirtualBox DHCP server.
I solved it by disabling the DHCP server for the host-only interface:

VBoxManage dhcpserver remove --ifname vboxnet1

before running vagrant up.

@ghost ghost locked and limited conversation to collaborators Apr 7, 2020