Openstack Networking in Pebbles


This page assumes a basic level of familiarity with Openstack Networking.

To start with, the Chef cookbooks for Openstack Networking that Pebbles uses will support operating in three modes:

  • Flat mode
  • GRE tunneling mode
  • Tenant VLAN mode

All three modes will run on top of the OVS plugin for Openstack Networking -- we may also support the LinuxBridge plugin, but we prefer OVS for its performance and flexibility.

Crowbar Networking vs. Openstack Networking

The Network Barclamp in Crowbar is responsible for creating and managing network interfaces on the physical machines that are members of the Crowbar cluster. It inventories the physical network interfaces on the system, builds up the local network interfaces to match the desired conduit definitions, and then binds any requested IP addresses to the resultant set of bridges, bonds, VLAN interfaces, and raw Ethernet devices.
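
As a sketch, the end result on a node is roughly equivalent to running iproute2 commands along these lines by hand (the interface names, VLAN ID, and address are taken from the examples on this page; the barclamp itself derives them from the conduit definitions):

# Build a bond from the two physical NICs named in the conduit
ip link add bond0 type bond
ip link set eth1 down
ip link set eth1 master bond0
ip link set eth2 down
ip link set eth2 master bond0
ip link set bond0 up

# Add a tagged VLAN interface on top of the bond for a Crowbar network
ip link add link bond0 name bond0.500 type vlan id 500
ip link set bond0.500 up

# Bind the requested IP address to the resulting interface
ip addr add 192.168.124.83/24 dev bond0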

The Openstack Networking Barclamp in Crowbar is responsible for creating the OVS bridges that Openstack Networking will need on top of the appropriate device.
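
For the flat mode example later on this page, that work boils down to something roughly like the following ovs-vsctl calls (the bridge and device names follow the conventions used below):

# Create the bridge for the nova_fixed network and attach the
# underlying device to it (bond0.500 in the flat mode example below)
ovs-vsctl --may-exist add-br br-fixed
ovs-vsctl --may-exist add-port br-fixed bond0.500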

For the discussion of the modes below, assume that the network barclamp has been given its usual configuration, and that a node which will become an OpenStack Compute node starts out with the following network configuration:

root@hypo:~# ip addr show 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    inet 192.168.124.83/24 scope global eth0
    inet6 fe80::5054:ff:fe12:340d/64 scope link 
    link/ether 52:54:00:12:34:0d brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 52:54:00:12:34:0f brd ff:ff:ff:ff:ff:ff

Note that the above configuration is shown before the network barclamp applies its configuration to the node, so no bond has been created yet.

By default, none of the networks that Crowbar creates for Openstack Networking have connectivity to the outside world. Instead, we allocate a public network IP address to any node that runs the quantum server service, and we create a virtual router that allows the usual floating IP address mechanism to work. Under any of the modes, if you want to talk to an instance from anything other than another instance, you must add a floating IP address to that instance.
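
As an illustration of that workflow, allocating and attaching a floating IP with the quantum and nova CLIs looks roughly like this (the external network name "floating", the instance name, and the address are assumptions; use the names from your own deployment):

# Allocate a floating IP from the external network; "floating" is an
# assumed name for that network
quantum floatingip-create floating

# Attach the allocated address to an instance; nova resolves the
# matching quantum port for the instance's fixed IP
nova add-floating-ip my-instance 192.168.126.129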

Flat mode

Flat mode is the simplest useful mode that Quantum can operate in. It assumes that all the VM instances that Openstack Compute creates will share a single flat network, which is usually bound to a dedicated physical network. Flat mode does not allow for tenant segregation at the network level. In flat mode, the network barclamp will transform the starting config into:

root@hypo:~# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 52:54:00:12:34:0d brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 52:54:00:12:34:0f brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.83/24 scope global bond0
    inet6 fe80::5054:ff:fe12:340e/64 scope link 
       valid_lft forever preferred_lft forever
7: bond0.500@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe12:340e/64 scope link 
       valid_lft forever preferred_lft forever

All traffic between the instances will travel over bond0.500, which was created for the nova_fixed network. When the nova-compute role is applied to this node, it will transform the network configuration into:

root@hypo:~# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 52:54:00:12:34:0d brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 52:54:00:12:34:0f brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.83/24 scope global bond0
    inet6 fe80::5054:ff:fe12:340e/64 scope link 
       valid_lft forever preferred_lft forever
7: bond0.500@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe12:340e/64 scope link 
       valid_lft forever preferred_lft forever
8: br-int: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 2a:3c:6f:17:77:45 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::283c:6fff:fe17:7745/64 scope link 
       valid_lft forever preferred_lft forever
10: phy-br-fixed: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether e6:ee:ae:23:3e:2c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e4ee:aeff:fe23:3e2c/64 scope link 
       valid_lft forever preferred_lft forever
11: int-br-fixed: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether f6:f9:7f:6c:7c:84 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f4f9:7fff:fe6c:7c84/64 scope link 
       valid_lft forever preferred_lft forever
12: br-fixed: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether c6:a5:3c:84:c1:4d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c4a5:3cff:fe84:c14d/64 scope link 
       valid_lft forever preferred_lft forever

with bond0.500 bound to br-fixed to provide connectivity between the instances and the physical network that the tenant traffic should run on. This binding can be seen by running "sudo ovs-vsctl show" on the compute node. In our usual configuration the tenant traffic runs on VLAN 500, but it can also run over a dedicated physical network if the conduit mapping allows it.
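
A quick way to confirm the binding is to list the ports on br-fixed; the exact output may differ, but bond0.500 and the phy-br-fixed end of the veth pair to br-int should both appear:

root@hypo:~# ovs-vsctl list-ports br-fixed
bond0.500
phy-br-fixed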

You may also note that br-fixed does not have an IP address attached to it -- we do this deliberately in an effort to segregate tenant traffic from everything else.

GRE tunneling mode

Of all the modes of operation that Quantum supports, GRE is perhaps the most flexible -- since it uses GRE tunnels to connect the OVS bridges on different hosts, it is completely independent of the underlying physical network topology and switch configuration, and can therefore offer per-tenant network segregation without having to worry about VLAN mapping on the physical switches or double-VLAN issues.
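
Under the hood, the OVS agent wires the hosts together by adding GRE ports to br-tunnel. Creating one such port by hand would look roughly like the following (the port name and remote IP are illustrative; the agent creates and manages these automatically):

ovs-vsctl add-port br-tunnel gre-1 -- set interface gre-1 \
    type=gre options:remote_ip=192.168.130.10 \
    options:in_key=flow options:out_key=flow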

On a hypothetical nova-compute node with Quantum running in GRE mode, the starting configuration will be transformed by the network barclamp into:

root@hypo:~# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 52:54:00:12:34:0d brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 52:54:00:12:34:0f brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.83/24 scope global bond0
    inet6 fe80::5054:ff:fe12:340e/64 scope link 
       valid_lft forever preferred_lft forever
7: bond0.700@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
    inet 192.168.130.12/24 scope global bond0.700
    inet6 fe80::5054:ff:fe12:340e/64 scope link 
       valid_lft forever preferred_lft forever

In this example, the 192.168.124.0/24 subnet on bond0 is the Crowbar administrative network, and the 192.168.130.0/24 subnet on bond0.700 will carry the GRE tunnel traffic between the nodes. Once the nova-compute role is applied to the node, it will transform the network config into:

root@hypo:~# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 52:54:00:12:34:0d brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 52:54:00:12:34:0f brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.83/24 scope global bond0
    inet6 fe80::5054:ff:fe12:340e/64 scope link 
       valid_lft forever preferred_lft forever
7: bond0.700@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 52:54:00:12:34:0e brd ff:ff:ff:ff:ff:ff
    inet 192.168.130.12/24 scope global bond0.700
    inet6 fe80::5054:ff:fe12:340e/64 scope link 
       valid_lft forever preferred_lft forever
8: br-int: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 2a:3c:6f:17:77:45 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::283c:6fff:fe17:7745/64 scope link 
       valid_lft forever preferred_lft forever
10: phy-br-tunnel: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether e6:ee:ae:23:3e:2c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e4ee:aeff:fe23:3e2c/64 scope link 
       valid_lft forever preferred_lft forever
11: int-br-tunnel: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether f6:f9:7f:6c:7c:84 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f4f9:7fff:fe6c:7c84/64 scope link 
       valid_lft forever preferred_lft forever
12: br-tunnel: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 02:62:00:b9:81:49 brd ff:ff:ff:ff:ff:ff

In this configuration, bond0.700 is not bound to br-tunnel -- it exists only to provide a convenient IP endpoint over which all the GRE traffic is carried on VLAN 700. Instead, br-tunnel is bound to its peers on other nodes via GRE tunnels:

root@hypo:~# ovs-vsctl show
7ee972d4-bace-4d15-a44b-c8034fc3e118
    Bridge br-int
        Port int-br-tunnel
            Interface int-br-tunnel
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tunnel
        Port br-tunnel
            Interface br-tunnel
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="192.168.130.11"}
        Port "gre-4"
            Interface "gre-4"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="192.168.130.13"}
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="192.168.130.10"}
    ovs_version: "1.4.0+build0"

As you can see, br-tunnel is bound to its peers at 192.168.130.11, 192.168.130.13, and 192.168.130.10.
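
If cross-host instance traffic misbehaves in this mode, two quick sanity checks are whether the peer tunnel endpoints are reachable over VLAN 700 and whether the agent has installed flows on br-tunnel, for example:

root@hypo:~# ping -c 1 192.168.130.11
root@hypo:~# ovs-ofctl dump-flows br-tunnel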

Tenant VLAN mode

In order to operate in tenant VLAN mode, you must:

  1. Define a conduit that does not share its underlying physical NICs with anything else.
  2. Configure the nova_fixed network to use that conduit, and make sure that the use_vlan and add_bridge settings in nova_fixed are set to false.
  3. Configure the physical switches so that VLAN-tagged traffic in the desired range can travel over the switch ports attached to the NICs bound in step 1.

Once that is finished, tenant VLAN mode should be operable; the quick check sketched below can confirm that tagged tenant traffic is actually leaving the node.
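
One way to confirm the setup is to watch one of the dedicated NICs for 802.1Q-tagged traffic while pinging between instances on different compute nodes; eth3 below is an assumed interface name for the conduit defined in step 1:

root@hypo:~# tcpdump -e -n -i eth3 vlan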