Neutron ONOS Integration for CORD VTN
This document is outdated. See https://wiki.onosproject.org/display/ONOS/CORD+VTN
- OpenStack controller: 1 VM with 4GB RAM, runs OpenStack services
- OpenStack compute: 2 VMs with 4GB RAM, run nova-compute
- OpenStack network: 1 VM with 1GB RAM, runs neutron-l3-agent
- ONOS cluster: 3 VMs with 1GB RAM
- Each machine has two network interfaces, one public and one private; the private interfaces are in the same subnet.
- All machines run on DigitalOcean
Install networking-onos (the Neutron ML2 plugin for ONOS) first.
hyunsun@openstack-controller master /opt/stack
$ git clone https://github.com/openstack/networking-onos.git
hyunsun@openstack-controller master /opt/stack/networking-onos
$ sudo python setup.py install
Set the ONOS access information in the networking-onos/etc/conf_onos.ini file.
# Configuration options for ONOS ML2 Mechanism driver
[onos]
# (StrOpt) ONOS ReST interface URL. This is a mandatory field.
url_path = http://onos.instance.ip.addr:8181/onos/openstackswitching
# (StrOpt) Username for authentication. This is a mandatory field.
username = onos
# (StrOpt) Password for authentication. This is a mandatory field.
password = rocks
Relevant Neutron/Nova configs
/etc/neutron/neutron.conf
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
tenant_network_types = vxlan
type_drivers = vxlan
mechanism_drivers = onos_ml2
[ml2_type_vxlan]
vni_ranges = 1001:2000
/etc/nova/nova.conf
[DEFAULT]
force_config_drive = always
You can set the same configs easily with Devstack
[[local|localrc]]
HOST_IP=10.134.231.28
SERVICE_HOST=10.134.231.28
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
Q_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql
# Log
SCREEN_LOGDIR=/opt/stack/logs/screen
# Images
IMAGE_URLS="http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64.tar.gz"
FORCE_CONFIG_DRIVE=always
NEUTRON_CREATE_INITIAL_NETWORKS=False
Q_ML2_PLUGIN_MECHANISM_DRIVERS=onos_ml2
Q_PLUGIN_EXTRA_CONF_PATH=/opt/stack/networking-onos/etc
Q_PLUGIN_EXTRA_CONF_FILES=(conf_onos.ini)
# Services
enable_service q-svc
disable_service n-net
disable_service n-cpu
disable_service tempest
disable_service c-sch
disable_service c-api
disable_service c-vol
Now run the Neutron server with Devstack, or if you launch it directly, don't forget to pass conf_onos.ini as a config file along with ml2_conf.ini and neutron.conf.
/usr/bin/python /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /opt/stack/networking-onos/etc/conf_onos.ini
Run Devstack with the following config.
[[local|localrc]]
HOST_IP=10.134.231.30 <-- local IP
SERVICE_HOST=162.243.x.x <-- controller IP, must be reachable from your test browser for console access from Horizon
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql
NOVA_VNC_ENABLED=True
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
VNCSERVER_LISTEN=$HOST_IP
# Log
SCREEN_LOGDIR=/opt/stack/logs/screen
# Images
IMAGE_URLS="http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64.tar.gz"
LIBVIRT_TYPE=kvm
# Services
ENABLED_SERVICES=n-cpu,neutron
If your compute node is a VM, try nested KVM (http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html) first, or set LIBVIRT_TYPE=qemu. Nested KVM is much faster than QEMU, so use it if possible.
Set the ovsdb listening mode so that ONOS can control it.
Add the following in the /usr/share/openvswitch/scripts/ovs-ctl
script, right after the set ovsdb-server "$DB_FILE"
line.
set "$@" --remote=ptcp:6640
And then restart openvswitch.
$ sudo service openvswitch-switch restart
openvswitch-switch stop/waiting
openvswitch-switch start/running
$ netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:6640 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
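If you want to verify from a script rather than netstat that the OVSDB port accepts connections, a quick TCP probe is enough. This is a hypothetical helper, not part of the official setup; it assumes the default OVSDB port 6640 used throughout this document.

```python
import socket

def ovsdb_listening(host="127.0.0.1", port=6640, timeout=2.0):
    """Return True if something accepts TCP connections on the OVSDB port."""
    try:
        # create_connection performs the full TCP handshake, so a True
        # result means ovsdb-server (or something else) is really listening.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it on each compute and network node after restarting Open vSwitch; it should return True once the --remote=ptcp:6640 option is in place.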
Comment out the following lines in the Devstack scripts to prevent unwanted OVS or bridge updates. Ideally there should be a plugin script for ONOS such as lib/neutron_plugins/onos; for now, I commented out the unnecessary parts.
stack.sh
#if is_service_enabled neutron; then
# install_neutron_agent_packages
#fi
lib/neutron-legacy
#if is_neutron_ovs_base_plugin; then
# neutron_ovs_base_cleanup
#fi
Set the ovsdb listening mode so that ONOS can control it.
Add the following in the /usr/share/openvswitch/scripts/ovs-ctl
script, right after the set ovsdb-server "$DB_FILE"
line.
set "$@" --remote=ptcp:6640
And then restart openvswitch.
$ sudo service openvswitch-switch restart
openvswitch-switch stop/waiting
openvswitch-switch start/running
$ netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:6640 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
Run Devstack with the following config.
[[local|localrc]]
HOST_IP=10.134.231.30 <-- local IP
SERVICE_HOST=10.134.231.28 <-- controller IP
RABBIT_HOST=10.134.231.28
DATABASE_HOST=10.134.231.28
ADMIN_PASSWORD=nova
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_TYPE=mysql
NOVA_VNC_ENABLED=True
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
VNCSERVER_LISTEN=$HOST_IP
# Log
SCREEN_LOGDIR=/opt/stack/logs/screen
# Services
ENABLED_SERVICES=q-l3
Once DevStack is done, the br-ex bridge should have been created.
Prepare a network-cfg.json file with the following information, which specifies the OpenStack nodes and their OVSDB servers.
{
"apps" : {
"org.onosproject.cordvtn" : {
"cordvtn" : {
"nodes" : [
{
"hostname" : "compute-01",
"ovsdbIp" : "compute.01.ip.addr",
"ovsdbPort" : "6640",
"bridgeId" : "of:0000000000000001"
},
{
"hostname" : "compute-02",
"ovsdbIp" : "compute.02.ip.addr",
"ovsdbPort" : "6640",
"bridgeId" : "of:0000000000000002"
},
{
"hostname" : "network",
"ovsdbIp" : "network.node.ip.addr",
"ovsdbPort" : "6640",
"bridgeId" : "of:0000000000000003"
}
]
}
},
"org.onosproject.openstackswitching" : {
"openstackswitching" : {
"do_not_push_flows" : "true",
"neutron_server" : "http://neutron.node.ip.addr:9696/v2.0/",
"keystone_server" : "http://keystone.node.ip.addr:5000/v2.0/",
"user_name" : "admin",
"password" : "passwd"
}
}
}
}
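Each bridgeId above is an OpenFlow datapath ID: the of: prefix followed by 16 hexadecimal digits. When adding more nodes, it is easy to get the padding wrong, so here is a small hypothetical helper (not part of ONOS or the plugin) to generate them consistently:

```python
def bridge_id(n):
    """Format an integer as an OpenFlow datapath ID ("of:" + 16 hex digits)."""
    if not 0 <= n < 1 << 64:
        raise ValueError("datapath ID must fit in 64 bits")
    return "of:%016x" % n

# The bridge IDs used in network-cfg.json above
print(bridge_id(1))  # of:0000000000000001
print(bridge_id(3))  # of:0000000000000003
```

The IDs must be unique across all nodes, since ONOS uses them to identify each br-int device.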
Make sure to activate onos-app-cordvtn. Start ONOS with network-cfg.json in your environment, or push network-cfg.json to the system through the REST API:
curl --user onos:rocks -X POST -H "Content-Type: application/json" http://onos-01:8181/onos/v1/network/configuration/ -d @network-cfg.json
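The same push can be scripted with only the standard library; a minimal sketch, assuming the onos-01 endpoint and onos:rocks credentials from the curl command above (the actual send is commented out since it needs a live ONOS instance, and a stub payload stands in for the real network-cfg.json):

```python
import base64
import json
import urllib.request

# In practice, load the network-cfg.json prepared above;
# a stub payload keeps this sketch self-contained.
cfg = {"apps": {"org.onosproject.cordvtn": {"cordvtn": {"nodes": []}}}}

# ONOS uses HTTP basic auth (onos:rocks by default)
auth = base64.b64encode(b"onos:rocks").decode()
req = urllib.request.Request(
    "http://onos-01:8181/onos/v1/network/configuration/",
    data=json.dumps(cfg).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Basic " + auth},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment against a live ONOS instance
```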
Check that the OpenStack compute nodes are visible in ONOS.
onos> cordvtn-nodes
hostname=compute-01, ovsdb=com.01.ip.addr:6640, br-int=of:0000000000000001, init=COMPLETE
hostname=compute-02, ovsdb=com.02.ip.addr:6640, br-int=of:0000000000000002, init=COMPLETE
hostname=network, ovsdb=net.node.ip.addr:6640, br-int=of:0000000000000003, init=COMPLETE
Total 3 nodes
onos> devices
id=of:0000000000000001, available=true, role=MASTER, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.0.2, serial=None, managementAddress=compute.01.ip.addr, protocol=OF_13, channelId=compute.01.ip.addr:39031
id=of:0000000000000002, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.0.2, serial=None, managementAddress=compute.02.ip.addr, protocol=OF_13, channelId=compute.02.ip.addr:44920
Check the ovsdb status on the OpenStack compute node.
hyunsun@compute-01 master ~/devstack
$ sudo ovs-vsctl show
cedbbc0a-f9a4-4d30-a3ff-ef9afa813efb
Bridge br-int
Controller "tcp:onos.01.ip.addr:6653"
is_connected: true
Controller "tcp:onos.02.ip.addr:6653"
is_connected: true
Controller "tcp:onos.03.ip.addr:6653"
is_connected: true
fail_mode: secure
Port vxlan
Interface vxlan
type: vxlan
options: {key=flow, remote_ip=flow}
Port br-int
Interface br-int
ovs_version: "2.3.2"
Create a tenant network and subnet with Neutron.
hyunsun@openstack-controller master ~/devstack
$ neutron net-create net-01
hyunsun@openstack-controller master ~/devstack
$ neutron subnet-create --name subnet-01 --gateway 192.168.0.1 net-01 192.168.0.0/24
hyunsun@openstack-controller master ~/devstack
$ neutron router-create router-01
hyunsun@openstack-controller master ~/devstack
$ neutron router-interface-add router-01 subnet-01
Create a VM
hyunsun@openstack-controller master ~/devstack
$ nova keypair-add admin
(Don't forget to save the private key for later use to access the VM)
hyunsun@openstack-controller master ~/devstack
$ nova boot --image f04ed5f1-3784-4f80-aee0-6bf83912c4d0 --flavor 1 --nic net-id=aaaf70a4-f2b2-488e-bffe-63654f7b8a82 --key-name admin server-01
hyunsun@openstack-controller master ~/devstack
$ nova list
+--------------------------------------+-----------+--------+------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+--------+------------+-------------+------------------+
| a1b5e9ab-41a9-47d3-a070-11805a0ca627 | server-01 | ACTIVE | - | Running | net-01=192.168.0.2 |
+--------------------------------------+-----------+--------+------------+-------------+------------------+
Try pinging the gateway interface 192.168.0.1 from the VM; it should work.
Create external network and set gateway to the router
If you have an actual public network available, you have to set provider:physical_network (for example, --provider:network_type flat --provider:physical_network physnet1) when creating the external network.
hyunsun@openstack-controller master ~/devstack
$ neutron net-create ext-net --router:external
hyunsun@openstack-controller master ~/devstack
$ neutron subnet-create --name ext-subnet ext-net 172.27.0.0/24 --disable-dhcp
hyunsun@openstack-controller master ~/devstack
$ neutron router-gateway-set router-01 ext-net
Since I didn't create a physical network for public access, the following manual steps are needed on the network node. As a result, a route to br-ex for 172.27.0.0/24
should be added.
hyunsun@openstack-network master ~/devstack
$ ip addr add 172.27.0.1/24 dev br-ex
hyunsun@openstack-network master ~/devstack
$ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 159.203.240.1 0.0.0.0 UG 0 0 0 eth0
10.134.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1
159.203.240.0 0.0.0.0 255.255.240.0 U 0 0 0 eth0
172.27.0.0 0.0.0.0 255.255.255.0 U 0 0 0 br-ex
Set up NAT in the network node
If the external network is not actually connected to a physical external network, NAT is required for the VM to access the Internet. In my case, the eth0
interface is connected to the external network.
hyunsun@openstack-network master ~/devstack
$ sudo sysctl net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
hyunsun@openstack-network master ~/devstack
$ sudo iptables -A FORWARD -d 172.27.0.0/24 -j ACCEPT
hyunsun@openstack-network master ~/devstack
$ sudo iptables -A FORWARD -s 172.27.0.0/24 -j ACCEPT
hyunsun@openstack-network master ~/devstack
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Add a floating IP to access the VM outside of the namespace
By associating a floating IP with the VM, you can access the VM from outside the router's namespace.
hyunsun@openstack-network master ~/devstack
$ neutron floatingip-create ext-net
hyunsun@openstack-network master ~/devstack
$ neutron floatingip-associate FLOATINGIP_ID PORT_ID
Now try to SSH to the VM's floating IP from the network node with the private key you created before. Before accessing the VM, set the MTU to 1450 (see http://docs.openstack.org/juno/config-reference/content/networking-options-plugins-ml2.html).
hyunsun@openstack-network master ~/devstack
$ sudo ip li set mtu 1450 dev br-ex
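The 1450 value is not arbitrary: with a standard 1500-byte MTU on the underlay, VXLAN encapsulation consumes roughly 50 bytes of headers. A quick sketch of the arithmetic, assuming an IPv4 underlay:

```python
# VXLAN encapsulation overhead on an IPv4 underlay, in bytes
INNER_ETHERNET = 14  # inner Ethernet frame header carried inside VXLAN
VXLAN_HEADER = 8
OUTER_UDP = 8
OUTER_IPV4 = 20

overhead = INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4
tenant_mtu = 1500 - overhead
print(tenant_mtu)  # 1450
```

An IPv6 underlay would add another 20 bytes, which is why some deployments drop the tenant MTU further.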
hyunsun@openstack-network master ~/devstack
$ ssh -i admin.pem ubuntu@172.27.0.4
Welcome to Ubuntu 12.04.5 LTS (GNU/Linux 3.2.0-89-virtual x86_64)
Try pinging the Internet from inside the VM.
ubuntu@server-02:~$ ping www.google.com
PING www.google.com (74.125.239.48) 56(84) bytes of data.
64 bytes from nuq04s19-in-f16.1e100.net (74.125.239.48): icmp_req=1 ttl=57 time=5.93 ms
64 bytes from nuq04s19-in-f16.1e100.net (74.125.239.48): icmp_req=2 ttl=57 time=2.83 ms