Private network comes up with no IPv4 address without manual restart #8096

Closed
rupert-madden-abbott opened this issue Dec 8, 2016 · 51 comments

Comments

@rupert-madden-abbott

rupert-madden-abbott commented Dec 8, 2016

Vagrant version

I'm running Vagrant 1.8.6. This is not the latest version, but I can't run 1.8.7 due to #8024, nor 1.9.0 due to #8088.

Host operating system

Windows 7

Guest operating system

CentOS 7

Vagrantfile

Vagrant.configure(2) do |config|
  config.vm.box = 'puppetlabs/centos-7.2-64-puppet-enterprise'

  config.vm.define :master do |master|
    master.vm.network :private_network, ip: '10.20.1.10'
  end

  config.vm.define :node do |node|
    node.vm.network :private_network, ip: '10.20.1.11'
  end
end

Expected behavior

Each VM should be assigned the IP address specified in the Vagrantfile.

Actual behavior

No IP address is assigned. However, an IP address is assigned after halting and restarting the VM.

Steps to reproduce

  1. Log into one of the nodes
    vagrant ssh master

  2. Verify that the IP address is actually being configured

cat /etc/sysconfig/network-scripts/ifcfg-enp0s8
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.20.1.10
NETMASK=255.255.255.0
DEVICE=enp0s8
PEERDNS=no
#VAGRANT-END
  3. But no IP address is actually assigned (note there is no inet value):
ifconfig -a
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::a00:27ff:fe8b:416d  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:8b:41:6d  txqueuelen 1000  (Ethernet)
        RX packets 66  bytes 21226 (20.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 65  bytes 11106 (10.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  4. Now exit and restart:
exit
vagrant halt master
vagrant up master
  5. Check the IP again and it's working (note the inet value is now present and correct):
vagrant ssh master
ifconfig -a
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.20.1.10  netmask 255.255.255.0  broadcast 10.20.1.255
        inet6 fe80::a00:27ff:fe8b:416d  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:8b:41:6d  txqueuelen 1000  (Ethernet)
        RX packets 9  bytes 3078 (3.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21  bytes 1586 (1.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

References

This seems very similar to #2968. However, that issue was fixed in a previous version.

@danwelcome

I am seeing very similar behavior on a CentOS VM (running on macOS). This happens in Vagrant 1.9.1 and NOT 1.9.0. It manifests as a failure to mount our defined synced folders between host and VM, because the hostonly interface doesn't come up.

Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
    config.vm.define "consumerservices" do |consumerservices|
        consumerservices.vm.box = "bento/centos-7.1"
        consumerservices.vm.synced_folder "./code", "/vagrant/code", type: "nfs"
        consumerservices.vm.synced_folder "./deployments", "/vagrant/deployments", type: "nfs"
        consumerservices.vm.synced_folder "./logs", "/vagrant/logs", type: "nfs"
        consumerservices.vm.synced_folder "./backup", "/vagrant/backup", type: "nfs"
        consumerservices.vm.synced_folder "./setup", "/vagrant/setup", type: "nfs"

        consumerservices.vm.provider "virtualbox" do |v|
            v.memory = (`sysctl -n hw.memsize`.to_i / 1024) / 1024 / 4
            v.cpus = 2
            v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
            v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
            v.customize ["modifyvm", :id, "--ioapic", "on"]
        end

        consumerservices.vm.network "private_network", ip: "192.168.50.10"
        consumerservices.vm.host_name = "ConsumerServiceBundler"
        consumerservices.vm.provision "shell", inline: "timedatectl set-timezone America/New_York"
        consumerservices.vm.provision "shell", inline: "cd /vagrant/setup && ./setup.sh"
    end
end

Startup output

Daniel-Welcomes-MacBook-Pro:consumer-service-bundler danielwelcome$ vagrant up
Bringing machine 'consumerservices' up with 'virtualbox' provider...
==> consumerservices: Checking if box 'bento/centos-7.1' is up to date...
==> consumerservices: Clearing any previously set forwarded ports...
==> consumerservices: Clearing any previously set network interfaces...
==> consumerservices: Preparing network interfaces based on configuration...
    consumerservices: Adapter 1: nat
    consumerservices: Adapter 2: hostonly
==> consumerservices: Forwarding ports...
    consumerservices: 22 (guest) => 2222 (host) (adapter 1)
==> consumerservices: Running 'pre-boot' VM customizations...
==> consumerservices: Booting VM...
==> consumerservices: Waiting for machine to boot. This may take a few minutes...
    consumerservices: SSH address: 127.0.0.1:2222
    consumerservices: SSH username: vagrant
    consumerservices: SSH auth method: private key
    consumerservices: Warning: Remote connection disconnect. Retrying...
==> consumerservices: Machine booted and ready!
==> consumerservices: Checking for guest additions in VM...
==> consumerservices: Setting hostname...
==> consumerservices: Configuring and enabling network interfaces...
==> consumerservices: Exporting NFS shared folders...
==> consumerservices: Preparing to edit /etc/exports. Administrator privileges will be required...
==> consumerservices: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o vers=3,udp 192.168.50.1:/Users/danielwelcome/Development/IdeaProjects/consumer-service-bundler/code /vagrant/code
result=$?
if test $result -eq 0; then
if test -x /sbin/initctl && command -v /sbin/init && /sbin/init 2>/dev/null --version | grep upstart; then
/sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/vagrant/code
fi
else
exit $result
fi


Stdout from the command:



Stderr from the command:

mount.nfs: access denied by server while mounting 192.168.50.1:/Users/danielwelcome/Development/IdeaProjects/consumer-service-bundler/code

If I SSH into the machine, the hostonly interface doesn't have an IP assigned:

[vagrant@ConsumerServiceBundler ~]$ ifconfig -a
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fef6:b007  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:f6:b0:07  txqueuelen 1000  (Ethernet)
        RX packets 643  bytes 71930 (70.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 421  bytes 60809 (59.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 08:00:27:e3:f8:fa  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11  bytes 818 (818.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 16  bytes 1172 (1.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 1172 (1.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The IP address is configured:

[vagrant@ConsumerServiceBundler ~]$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s8
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.50.10
NETMASK=255.255.255.0
DEVICE=enp0s8
PEERDNS=no
#VAGRANT-END

My issue differs from @rupert654's in that halting and starting the VM doesn't fix it. However, I can restart networking and the interface will come back up. If I SSH into the VM while the shared folders are being mounted and restart networking, the mount will succeed.

[vagrant@ConsumerServiceBundler ~]$ ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fef6:b007  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:f6:b0:07  txqueuelen 1000  (Ethernet)
        RX packets 669  bytes 73868 (72.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 436  bytes 62823 (61.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 16  bytes 1172 (1.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 1172 (1.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[vagrant@ConsumerServiceBundler ~]$ sudo /etc/init.d/network restart
Restarting network (via systemctl):                        [  OK  ]
[vagrant@ConsumerServiceBundler ~]$ ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fef6:b007  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:f6:b0:07  txqueuelen 1000  (Ethernet)
        RX packets 755  bytes 80758 (78.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 490  bytes 68029 (66.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.50.10  netmask 255.255.255.0  broadcast 192.168.50.255
        inet6 fe80::a00:27ff:fee3:f8fa  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:e3:f8:fa  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21  bytes 1566 (1.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 16  bytes 1172 (1.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 1172 (1.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

@joelpittet

I was having this issue too from 1.8.6, here's the cross report geerlingguy/drupal-vm#1040

@andyshinn

I think this is a regression introduced in #8052. I reverted those changes in my local 1.9.1 installation and it is working again. Can anyone else confirm?
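
For anyone who wants to try the same check, a rough sketch of reverse-applying a locally saved copy of the #8052 diff to a packaged install (the gem path below assumes the Linux layout referenced later in this thread):

# 8052.patch is assumed to be a saved copy of the PR's diff
cd /opt/vagrant/embedded/gems/gems/vagrant-1.9.1
sudo patch -p1 -R < 8052.patch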

@joelpittet

@andyshinn I tried this out (built Vagrant and reverse-patched it on master) and all went well. I'm going to try without the reverse patch too.

@joelpittet

Yup, you've found the regression for sure. Undoing the patch brings that error back.

@geerlingguy
Contributor

geerlingguy commented Dec 9, 2016

I've hit this too—confirmed when I ran my automated build/test cycle for my geerlingguy/centos7 Packer/Vagrant box: https://github.com/geerlingguy/packer-centos-7

I just ran:

$ packer build --only=virtualbox-iso centos7.json
$ vagrant up virtualbox

And I get the error when it tries mounting the NFS share:

==> virtualbox: Exporting NFS shared folders...
==> virtualbox: Preparing to edit /etc/exports. Administrator privileges will be required...
==> virtualbox: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o vers=3,udp 172.16.3.1:/Users/jeff.geerling/Dropbox/VMs/packer/centos7 /vagrant
result=$?
if test $result -eq 0; then
if test -x /sbin/initctl && command -v /sbin/init && /sbin/init 2>/dev/null --version | grep upstart; then
/sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/vagrant
fi
else
exit $result
fi


Stdout from the command:



Stderr from the command:

mount.nfs: access denied by server while mounting 172.16.3.1:/Users/jeff.geerling/Dropbox/VMs/packer/centos7

This was on macOS Sierra 10.12.1, using Vagrant 1.9.1 and Packer 0.12.0.

@kikitux
Contributor

kikitux commented Dec 10, 2016

@chrisroberts for this one, the command being run on RHEL 7/CentOS 7 is:

if service NetworkManager status 2>&1 | grep -q running; then
  service NetworkManager restart
else
  service network restart
fi

The problem is on this line:

NM_CONTROLLED=no

Since service network restart on RHEL 7/CentOS 7 is redirected to NetworkManager, it does nothing.

So, when RHEL 7/CentOS 7 is used, it should be:

NM_CONTROLLED=yes

So this template should take an argument for NM_CONTROLLED

embedded/gems/gems/vagrant-1.9.1/templates/guests/redhat/network_static.erb
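
A quick way to confirm this diagnosis from inside the guest (device name enp0s8 assumed from the reports above): with NM_CONTROLLED=no, NetworkManager reports the interface as unmanaged, so restarting NetworkManager alone won't (re)configure it.

grep NM_CONTROLLED /etc/sysconfig/network-scripts/ifcfg-enp0s8
# an "unmanaged" state here means a NetworkManager restart will not touch this device
nmcli -t -f DEVICE,STATE device | grep enp0s8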

@4m3ndy

4m3ndy commented Dec 12, 2016

@kikitux Thanks a lot. I confirm that your fix sets everything back on track; it's working fine.
Vagrant version: 1.9.1

@kikitux
Contributor

kikitux commented Dec 14, 2016

As a workaround until this gets fixed, you can use the following (replace IFACE with eth1/ens34/the name of your interface):

config.vm.provision "shell", inline: "ifup IFACE", run: "always"

@karlkfi

karlkfi commented Dec 14, 2016

I'm not sure that changing NM_CONTROLLED=no to NM_CONTROLLED=yes is always the right answer. It's definitely one option that would make the #8052 code work, but I'm not sure whether it's the correct fix.

It's not yet clear why #8052 was made, because the PR doesn't actually say precisely what problem it's fixing. It says "service network restart might fail" on RHEL 7 / Fedora, but not how or why. And #8120 reports that that solution causes the same problem reported here on Fedora. So #8052 doesn't seem to have been tested with a static IP (host-only interface).

I think it comes down to this:

  • If we change a NetworkManager managed interface, we need to restart NetworkManager
  • If we change a non-NetworkManager managed interface, we need to restart the network service.

What are the pros and cons of each approach? Why choose one over the other?
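
A minimal sketch of that decision in shell, keyed off the NM_CONTROLLED flag Vagrant already writes (device name and ifcfg path are assumptions, not Vagrant's actual implementation):

DEVICE=enp0s8
IFCFG=/etc/sysconfig/network-scripts/ifcfg-$DEVICE

if grep -q '^NM_CONTROLLED=no' "$IFCFG"; then
  # interface is excluded from NetworkManager, so the legacy network service owns it
  service network restart
else
  # interface is (or may be) managed by NetworkManager
  service NetworkManager restart
fi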

@hdeadman

Is it a problem that Vagrant 1.9.1 is not setting the correct labels on the ifcfg-* file it creates for the private NIC it adds? I am seeing errors from audit2allow, and the SELinux labels (as well as file owner and permissions) on the ifcfg-eth1 file are wrong. CentOS 7.3.

I am defining the NIC in my Vagrantfile like so:
config.vm.network "private_network", ip: "192.168.33.10", nic_type: "virtio"

Notice the ifcfg-eth1 labels, owner, and permissions:
-rw-r--r--. root root system_u:object_r:net_conf_t:s0 /etc/sysconfig/network-scripts/ifcfg-eth0
-rw-rw-r--. vagrant vagrant unconfined_u:object_r:user_tmp_t:s0 /etc/sysconfig/network-scripts/ifcfg-eth1
-rw-r--r--. root root system_u:object_r:net_conf_t:s0 /etc/sysconfig/network-scripts/ifcfg-lo

I can fix the permissions and ownership and run chcon to fix the labels, but they get changed back on restart. Restarting the network service seems to bring up the interface, but Vagrant might as well set the labels correctly.
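
For reference, a sketch of the manual fix-up described above (file name taken from the listing; as noted, Vagrant will undo it the next time it rewrites the file):

sudo chown root:root /etc/sysconfig/network-scripts/ifcfg-eth1
sudo chmod 644 /etc/sysconfig/network-scripts/ifcfg-eth1
# restore the default SELinux context (net_conf_t) from policy instead of an ad-hoc chcon
sudo restorecon -v /etc/sysconfig/network-scripts/ifcfg-eth1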

@tonylambiris

tonylambiris commented Dec 20, 2016

@karlkfi maybe just enable NM_CONTROLLED inline?

# Restart network (through NetworkManager if running)
if service NetworkManager status 2>&1 | grep -q running; then
  sed -i -e 's/^NM_CONTROLLED=no/NM_CONTROLLED=yes/g' /etc/sysconfig/network-scripts/ifcfg-*
  service NetworkManager restart
else
  service network restart
fi

I would guess that most RHEL-like Vagrant machines will be running NetworkManager out of the box; if users wish to change that, it should be up to them to implement any non-standard behavior, either in a provision script or by creating a file via the kickstart config like so:

# ifcfg-eth0
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<-EOT
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
EOT

FWIW, in our Packer builds we also include NetworkManager-config-server in our kickstart:

Summary     : NetworkManager config file for "server-like" defaults
Description :
This adds a NetworkManager configuration file to make it behave more
like the old "network" service. In particular, it stops NetworkManager
from automatically running DHCP on unconfigured ethernet devices, and
allows connections with static IP addresses to be brought up even on
ethernet devices with no carrier.

This package is intended to be installed by default for server
deployments.
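
For anyone building their own boxes, a drop-in roughly equivalent to what that package provides would look like this (exact path and keys may vary by release; this is an assumption, not the package's literal contents):

cat > /etc/NetworkManager/conf.d/00-server.conf <<-EOT
[main]
no-auto-default=*
ignore-carrier=*
EOT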

@karlkfi

karlkfi commented Dec 20, 2016

Good to know about NetworkManager-config-server, thanks.

As for enabling NM_CONTROLLED inline, that just feels like such a hack. Vagrant already generates the ifcfg file. The generation should really be updated to optionally support NetworkManager, based on some sort of overridable config with an intelligent auto-detected default.

@tonylambiris

tonylambiris commented Dec 21, 2016

@karlkfi I believe the default/only solution should just be to restart the network service and let the system take the appropriate action based on how it's configured, i.e. whether it should restart NetworkManager.service or something else like systemd-networkd.service.

@karlkfi

karlkfi commented Dec 21, 2016

@tonylambiris: That's what it used to do. If that worked everywhere, it wouldn't have been changed in the first place.

@Artistan

Just for the sake of more information: same issue for me on CentOS 7 with Vagrant 1.9.1.
details here: puphpet/puphpet#2533

@tonylambiris

@karlkfi For me, running ifup eth1 on CentOS 7.3 and Vagrant 1.9.1 configures the interface without having to restart any services.

@jerrywardlow

I've been using
config.vm.provision "shell", inline: "systemctl restart network.service", run: "always"
as a workaround until this gets sorted out; hopefully someone can make use of this.

@pgporada

If you're like me and using @jerrywardlow's workaround but get errors during the systemctl restart network.service step, you can use this:
config.vm.provision "shell", inline: "sudo systemctl restart network 2>/dev/null || true", run: "always"

@tonylambiris

tonylambiris commented Jan 25, 2017

Restarting the entire network subsystem just feels super heavy-handed, especially considering some interfaces could be configured manually/externally (e.g. using flanneld or creating a bridge interface). Vagrant should only operate on the interfaces it defines and not make assumptions globally.

Could one of the project admins please explain why the ifup command isn't deemed sufficient for this task?

20:22:22 builder ~ # bash -x /usr/sbin/ifup eth1
[...TRIM...]
+ '[' -f ../network ']'
+ . ../network
++ NETWORKING=yes
++ HOSTNAME=builder
+ CONFIG=eth1
+ '[' -z eth1 ']'
+ need_config eth1
+ local nconfig
+ CONFIG=ifcfg-eth1
+ '[' -f ifcfg-eth1 ']'
+ return
+ '[' -f ifcfg-eth1 ']'
+ '[' 0 '!=' 0 ']'
+ source_config
+ CONFIG=ifcfg-eth1
+ DEVNAME=eth1
+ . /etc/sysconfig/network-scripts/ifcfg-eth1
++ NM_CONTROLLED=no
++ BOOTPROTO=none
++ ONBOOT=yes
++ IPADDR=172.27.1.10
++ NETMASK=255.255.255.0
++ DEVICE=eth1
++ HWADDR=52:54:00:0d:95:6a
++ PEERDNS=no
+ '[' -r keys-eth1 ']'
+ case "$TYPE" in
+ '[' -n 52:54:00:0d:95:6a ']'
++ echo 52:54:00:0d:95:6a
++ awk '{ print toupper($0) }'
+ HWADDR=52:54:00:0D:95:6A
+ '[' -n '' ']'
+ '[' -z eth1 -a -n 52:54:00:0D:95:6A ']'
+ '[' -z '' ']'
++ echo eth1
++ sed 's/[0-9]*$//'
+ DEVICETYPE=eth
+ '[' -z '' -a -n '' ']'
+ '[' -z '' ']'
+ REALDEVICE=eth1
+ '[' -z '' ']'
+ SYSCTLDEVICE=eth1
+ '[' eth1 '!=' eth1 ']'
+ ISALIAS=no
+ is_nm_running
++ LANG=C
++ nmcli -t --fields running general status
+ '[' running = running ']'
+ '[' eth1 '!=' lo ']'
+ nmcli con load /etc/sysconfig/network-scripts/ifcfg-eth1
+ is_false no
+ case "$1" in
+ return 0
+ '[' foo = fooboot ']'
+ '[' -n '' ']'
+ '[' -n '' -a '' = Bridge ']'
+ '[' '' = true -a -n '' -a eth1 '!=' lo ']'
+ '[' '' = yes ']'
+ '[' none = bootp -o none = dhcp ']'
+ '[' -x /sbin/ifup-pre-local ']'
+ OTHERSCRIPT=/etc/sysconfig/network-scripts/ifup-eth
+ '[' '!' -x /etc/sysconfig/network-scripts/ifup-eth ']'
+ '[' '!' -x /etc/sysconfig/network-scripts/ifup-eth ']'
+ exec /etc/sysconfig/network-scripts/ifup-eth ifcfg-eth1
RTNETLINK answers: File exists

@mikefaille
Contributor

@tonylambiris good point. I think this is the best way to go.

@adrianovieira

I don't think you can use service network restart, because the firewall will be reloaded and you'll probably need to set it up again.

I believe the best thing to do is to bring up only the interface that needs to be up: ifup <newethdevice>.
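
A minimal sketch of that single-interface approach from inside the guest (interface name assumed; the firewall-cmd call is only there to show the firewall state isn't touched):

# bring up only the newly configured interface instead of restarting all networking
sudo ifup enp0s8
# confirm the address is now assigned
ip -4 addr show dev enp0s8
# firewalld should still report "running" with no rules reloaded
sudo firewall-cmd --state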

I downgraded to 1.9.0, which doesn't have this 1.9.1 behavior.

@andresvia

My workaround, until there's a permanent fix:

# 1.9.1 workaround for centos/7
if Vagrant::VERSION == "1.9.1" && config.vm.box == "centos/7"
  config.vm.provision "shell", inline: "service network restart", run: "always"
end

@chrisroberts
Member

This is fixed in the 1.9.2 release via PR #8148. Thanks!

@tonylambiris

So what happens when distros start migrating to systemd-networkd as their network manager?

@ianmiell

I am seeing this behaviour in 1.9.5

@ruibinghao

Seeing this behavior as well with Vagrant 1.9.3 and Fedora 25.

@nezaboravi

Same for me on 1.9.5, Scientific Linux 6.1, host macOS Sierra.
config.vm.network "private_network", ip: "192.168.10.10"

I'm getting eth0 and eth2 both assigned 192.168.10.10.

@mikefaille
Contributor

mikefaille commented Jun 26, 2017

@ruibinghao @ianmiell @nezaboravi Can you test Vagrant v1.9.2? https://releases.hashicorp.com/vagrant/1.9.2/

@tonylambiris
> So what happens when distros start migrating to systemd-networkd as their network manager?
I can't answer for versions >= 1.9.3, but with v1.9.2 you should be OK, since I use the daemon script named network to (re)start all interfaces regardless of which interface is used.

@nezaboravi

Let me first figure out how to uninstall the current one, and I'll give 1.9.2 a try.

@rwlaschin

This problem was discussed in #8115, which was closed as a duplicate of this issue.

This issue was diagnosed as being related to the fix in #8052.

Manually reverting #8052, as mentioned in the comment above, in a local installation makes everything work again.

@peichman-umd

FWIW, this is still happening for me on Vagrant 1.9.8, using a box that has CentOS 7.0 (base box: https://app.vagrantup.com/peichman-umd/boxes/ruby/versions/1.0.0).

@ecray

ecray commented Sep 11, 2017

I ran into a few of the bugs regarding CentOS 7, systemd and Vagrant 1.9+.

#8115
puphpet/puphpet#2533

I had an issue where setting up private networking would bring up some random subnet, remove the Vagrant host IP (10.0.2.15), and leave the machine completely unavailable on the network. When checking the interfaces, I would see lo, eth0, eth1, and enp0s8. I resolved this issue by rebuilding my images to use the following in the kickstart:

bootloader --append="net.ifnames=0 biosdevname=0 crashkernel=auto" --location=mbr --boot-drive=sda

I also added the following network configuration package:
NetworkManager-config-server

You can also manually modify /etc/default/grub and rebuild the initrd.
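
A sketch of that manual route on CentOS 7, using the same kernel arguments as the kickstart line above (the grub2-mkconfig output path assumes a BIOS install):

# append net.ifnames=0 biosdevname=0 to the kernel command line
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&net.ifnames=0 biosdevname=0 /' /etc/default/grub
# regenerate the GRUB config and rebuild the initrd, then reboot
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo dracut -f
sudo reboot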

I tested this and found it working on Vagrant 2.0.

@peichman-umd

@ecray Is your assessment of this bug that the problem is in the base box and not the Vagrant code?

@ecray

ecray commented Sep 11, 2017

I always blame systemd. But it seems like Vagrant is not detecting whether it should use persistent device naming, as it creates eth0 but also enp0s8 when a private network is specified.

@ghost

ghost commented Mar 31, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Mar 31, 2020