
When using atomic host as guest, second iface won't come up automatically #117

Closed

pschiffe opened this issue Aug 24, 2015 · 11 comments

@pschiffe (Contributor)

I'm on Fedora 21:

$ rpm -q vagrant vagrant-libvirt oh-my-vagrant
vagrant-1.7.2-9.fc21.1.noarch
vagrant-libvirt-0.0.24-5.fc21.noarch
oh-my-vagrant-1.0.0-1.noarch

$ vagrant plugin list
vagrant-hostmanager (1.5.0)
  - Version Constraint: 1.5.0
vagrant-libvirt (0.0.30, system)

$ vagrant box list
atomic-rhel-7.1 (libvirt, 0)

In guest:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:59:93:22 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.183/24 brd 192.168.121.255 scope global dynamic eth0
       valid_lft 2937sec preferred_lft 2937sec
    inet6 fe80::5054:ff:fe59:9322/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 52:54:00:05:24:1c brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
PERSISTENT_DHCLIENT="yes"

$ cat /etc/sysconfig/network-scripts/ifcfg-eth1
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.127.100
NETMASK=255.255.255.0
DEVICE=eth1
PEERDNS=no
#VAGRANT-END
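As a quick sanity check, the generated file can be grepped for ONBOOT before rebooting. A minimal sketch, using a stand-in copy of the file shown above so it runs anywhere; on a real guest, point `cfg` at /etc/sysconfig/network-scripts/ifcfg-eth1 instead:

```shell
# Stand-in copy of the Vagrant-generated ifcfg-eth1 from above, so this
# sketch is self-contained; on the guest set
#   cfg=/etc/sysconfig/network-scripts/ifcfg-eth1
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.127.100
NETMASK=255.255.255.0
DEVICE=eth1
PEERDNS=no
EOF
# Extract the ONBOOT value; "yes" means the interface is supposed to be
# activated at boot, so a down eth1 points at whatever service does the
# activating, not at the file itself.
onboot=$(sed -n 's/^ONBOOT=//p' "$cfg")
echo "ONBOOT is $onboot"   # prints: ONBOOT is yes
rm -f "$cfg"
```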

$ systemctl status NetworkManager
NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled)
   Active: active (running) since Mon 2015-08-24 12:02:41 UTC; 15min ago
 Main PID: 606 (NetworkManager)
   CGroup: /system.slice/NetworkManager.service
           ├─606 /usr/sbin/NetworkManager --no-daemon
           └─677 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0.pid ...

$ systemctl status network
network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network)
   Active: inactive (dead)

After reboot it still stays down. Bringing it up manually works:

# ifup eth1

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:59:93:22 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.183/24 brd 192.168.121.255 scope global dynamic eth0
       valid_lft 2562sec preferred_lft 2562sec
    inet6 fe80::5054:ff:fe59:9322/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:05:24:1c brd ff:ff:ff:ff:ff:ff
    inet 192.168.127.100/24 brd 192.168.127.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe05:241c/64 scope link 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever

Any tips for more info, how to debug, or how to fix?
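One way to narrow this down (a sketch, not taken from the thread): with NM_CONTROLLED=no in ifcfg-eth1, NetworkManager ignores eth1, and bringing it up at boot is the job of the legacy network initscript, which the status output above shows as inactive. A guarded check, safe to run on machines without that unit:

```shell
# Sketch: check whether the legacy "network" initscript, which handles
# NM_CONTROLLED=no interfaces at boot, is enabled. Guarded so the
# commands are skipped where systemctl or the unit is absent.
if command -v systemctl >/dev/null 2>&1; then
    status=$(systemctl is-enabled network.service 2>/dev/null)
    [ -n "$status" ] || status="unknown"
else
    status="no-systemctl"
fi
echo "network.service boot status: $status"
```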

@purpleidea (Owner)


The first thing to fix is to remove the plugin version of vagrant-libvirt; you've already got the system version installed!
Try again after fixing that.
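A sketch of that cleanup, guarded so it is safe to run on a machine without Vagrant; `vagrant plugin uninstall` is the standard CLI route, and the `~/.vagrant.d/gems` path mentioned in the comment is the conventional plugin location (an assumption, not confirmed by this thread):

```shell
# Remove the user-installed copy of vagrant-libvirt so only the system
# package remains. Guarded: skipped entirely if vagrant is absent.
if command -v vagrant >/dev/null 2>&1; then
    # Standard CLI route; if it refuses, the plugin's gem and gemspec
    # can be deleted by hand under ~/.vagrant.d/gems (assumed path).
    vagrant plugin uninstall vagrant-libvirt || true
    vagrant plugin list   # should now show only the system copy
    ran="uninstall attempted"
else
    ran="vagrant not installed; skipping"
fi
echo "$ran"
```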

@pschiffe (Contributor, Author)

I think that plugin was auto-updated at some point. I had to remove the user-installed plugin manually by deleting its gem and gemspec. Now:

$ vagrant plugin list
vagrant-hostmanager (1.5.0)
  - Version Constraint: 1.5.0
vagrant-libvirt (0.0.24, system)

But no success: a new VM created with the atomic host still had eth1 turned off.

@pschiffe (Contributor, Author)

Any other idea?

@purpleidea (Owner)

Run vlog up ... and see what it says. Also look at /etc/sysconfig/network-scripts/ifcfg-eth1 and paste that file here after the interface fails to come up.

@pschiffe (Contributor, Author)

pschiffe commented Sep 2, 2015

Thanks for the tip; I have some updates:
I found out that there is a new atomic guest plugin in the latest Vagrant. I'm on Fedora 21, and the Vagrant packaged there is old (1.7.2), so I've updated to the latest upstream Vagrant rpm (1.7.4). Now Vagrant detects the atomic guest and correctly configures the second network interface, so after provisioning it's up. But after a reboot, it stays down :-( I've checked a rhel-7.1 box and the second iface there stays up after a reboot. The network configuration looks the same in the vagrant.log file from the vlog up command. Maybe there's some issue in the atomic OS, not sure.

@purpleidea (Owner)


Aha! Well, now I can help a little bit... There was a bug in Vagrant where, if the machine had a docker0 interface, things would break.
E.g.: https://bugzilla.redhat.com/show_bug.cgi?id=1221006. Maybe it's related? I thought it was present in F21...

Does this help your debugging a bit?

@pschiffe (Contributor, Author)

pschiffe commented Sep 2, 2015

Unfortunately no. I was a victim of that bug, but its result was that the content of the /etc/sysconfig/network-scripts/ifcfg-eth1 file was placed in /etc/sysconfig/network-scripts/ifcfg-eth0. Here, the content of these files is correct:

[vagrant@master ~]$ cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
PERSISTENT_DHCLIENT="yes"

[vagrant@master ~]$ cat /etc/sysconfig/network-scripts/ifcfg-eth1 
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.91.100
NETMASK=255.255.255.0
DEVICE=eth1
PEERDNS=no
#VAGRANT-END

@purpleidea (Owner)

Does this mean you figured out the issue?


@pschiffe (Contributor, Author)

pschiffe commented Sep 2, 2015

No, it just means that the content of the ifcfg files is correct and that right after provisioning the networking works OK; but after a reboot, eth1 is down. This is on the atomic host. RHEL works fine.

@purpleidea (Owner)


Can you compare the contents of ifcfg-eth1 before and after the reboot?

Also look in the logs. Use vlog to do so...
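That comparison can be scripted. A self-contained demo below uses two stand-in files (identical, mirroring what is later reported); on the guest you would instead copy the real file to /tmp before rebooting and diff afterwards, as the comments show:

```shell
# Self-contained demo of the before/after check. On the guest:
#   sudo cp /etc/sysconfig/network-scripts/ifcfg-eth1 /tmp/ifcfg-eth1.before
#   (reboot)
#   diff -u /tmp/ifcfg-eth1.before /etc/sysconfig/network-scripts/ifcfg-eth1
before=$(mktemp); after=$(mktemp)
printf 'ONBOOT=yes\nDEVICE=eth1\n' > "$before"
printf 'ONBOOT=yes\nDEVICE=eth1\n' > "$after"
if diff -u "$before" "$after" >/dev/null; then
    result="unchanged"   # identical files: the config survived the reboot
else
    result="changed"     # diff output would show what the reboot rewrote
fi
echo "ifcfg-eth1 after reboot: $result"   # prints: ifcfg-eth1 after reboot: unchanged
rm -f "$before" "$after"
```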

@pschiffe (Contributor, Author)

pschiffe commented Sep 7, 2015

The content of the ifcfg-eth1 file seems the same after the reboot, and there is nothing strange in the vagrant log. But I suspect the 7.1.0 version of the atomic OS, because in some cases configuration can be lost while doing an upgrade. This problem is fixed in newer versions, but I don't have a newer vagrant box. So I'm using a workaround, which works pretty well:

:shell:
- script: chkconfig network on   # re-enable the legacy network initscript at boot
- script: ifup eth1              # bring eth1 up immediately

I'm closing this for now. I'll post an update if I find something relevant. Thanks for the help.

@pschiffe pschiffe closed this as completed Sep 7, 2015