
DevStack Quick Start


DevStack is a set of scripts that quickly installs OpenStack in a single VM - which is perfect for evaluation and testing. OpenStack is an open-source cloud implementation (it attempts to mimic AWS EC2 and other services where possible).

WARNING!

DevStack is very demanding - it installs everything from source, so you need a reliable and fast Internet connection to install it.

But on the PLUS side it does not use containers!!! (which are the plague of this century).

Also, if the stack.sh script crashes (typically on network configuration) there is no short way to fix the problem and resume the installation from that point. You can only run stack.sh again, which will once more install everything from source and recreate and configure the database - which is very time consuming...
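If you end up in such a state, the usual way out is to tear everything down and start over - DevStack ships unstack.sh and clean.sh for exactly this (still slow, but at least it gives you a clean slate):

cd ~/projects/devstack   # the clone location used later in this guide
./unstack.sh             # stop all services started by stack.sh
./clean.sh               # additionally remove installed packages, databases and config
./stack.sh               # and reinstall from scratch...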

Setup

In this world everything is in flux, so we can simply try the latest and greatest by following: https://docs.openstack.org/devstack/latest/

Because OpenDev (which backs OpenStack) mostly uses Ubuntu for its infrastructure and development, I strongly recommend using Ubuntu for DevStack as well. If you are adventurous you can look for supported distros in the stack.sh script using:

$ git branch -v

* master 0ff62728 Run chown for egg-info only if the directory exists

$ fgrep SUPPORTED_DISTROS= stack.sh

# was: SUPPORTED_DISTROS="bullseye|focal|jammy|f36|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9|openEuler-22.03"
# now:
SUPPORTED_DISTROS="bookworm|jammy|noble|rhel9"

I was right to recommend Ubuntu - it is now the only distribution for which more than one version is supported (Jammy = 22.04 LTS, Noble = 24.04 LTS, see https://wiki.ubuntu.com/Releases).

Please be aware that RHEL 9 and its clones now require the infamous x86-64-v2 CPU microarchitecture level - not the case for my old Opteron Generation 2 CPU. While Fedora (or even Arch Linux) still works on such pre-x86-64-v2 CPUs, I'm unable to understand this decision in the case of RHEL 9...
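If you are unsure whether your CPU implements x86-64-v2, a quick check (my suggestion, not from the RHEL docs) works on recent glibc - version 2.33 and newer prints the supported microarchitecture levels:

/lib64/ld-linux-x86-64.so.2 --help | grep 'x86-64-v'
# example output on a v3-capable CPU:
#   x86-64-v3 (supported, searched)
#   x86-64-v2 (supported, searched)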

Because of the many downloads and the CPU demand I strongly recommend testing DevStack in a disposable Public Cloud VM, where network ingress is (mostly) free - unless you are caught by an "improper" Ubuntu setup in AWS (where the package repository is often in a different AZ than your VM, and thus Jeff B. will happily bill your package downloads as cross-AZ network traffic).

Here is an example script create_vm_for_devstack.sh that I use to spin up an Azure VM from the Azure Cloud Shell (Bash). You need to customize at least:

  • subnet=.... - replace with your Virtual Network and Subnet (setting them up is beyond the scope of this article)
  • ssh_key_path=... - path to your SSH public key
  • shutdown_email=... - e-mail address for the automatic-shutdown notification
  • NOTE: the VM definitely should support nested virtualisation (the /dev/kvm device must exist inside the VM).
#!/bin/bash
set -euo pipefail
# Your SubNet ID
subnet=/subscriptions/xxxx/resourceGroups/VpnGatewayRG101/providers/Microsoft.Network/virtualNetworks/VNet101/subnets/FrontEnd
ssh_key_path=$(pwd)/hp_vm2.pub
shutdown_email='your-email@example.com'

rg=DevStackRG
loc=germanywestcentral
vm=hp-devstack
IP=$vm-ip
opts="-o table"
# URN from command:
# az vm image list --all -l germanywestcentral -f 0001-com-ubuntu-server-jammy -p canonical -s 22_04-lts-gen2 -o table
image=Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest

set -x
az group create -l $loc -n $rg $opts
az network public-ip create -g $rg -l $loc --name $IP --sku Basic $opts

# 2023: must have: --security-type Standard
# to support nested virtualization
# see https://learn.microsoft.com/en-us/answers/questions/1195862/azure-d4s-v3-vm-virtualization-not-enabled-wsl-una

az vm create -g $rg -l $loc \
    --image $image  \
    --nsg-rule NONE \
    --subnet $subnet \
    --public-ip-address "$IP" --public-ip-sku Basic \
    --storage-sku Premium_LRS \
    --size Standard_D8s_v3 \
    --os-disk-size-gb 128 \
    --security-type Standard \
    --ssh-key-values $ssh_key_path \
    --admin-username azureuser \
    -n $vm $opts
az vm auto-shutdown -g $rg -n $vm --time 2100 --email "$shutdown_email" $opts
set +x
cat <<EOF
You may access this VM in 2 ways:
1. using an Azure VPN Gateway
2. using the Public IP - in that case you need to add an appropriate
   SSH allow rule to the NSG of the created VM
EOF
exit 0

Once you spin up the above VM you can log in as SSH user azureuser with the corresponding private key. A quick sketch, reusing the resource names from the script above (adjust the private-key path to match your key pair):
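# read back the public IP assigned by the script above
PUB_IP=$(az network public-ip show -g DevStackRG -n hp-devstack-ip --query ipAddress -o tsv)
ssh -i ~/hp_vm2 azureuser@$PUB_IP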

Here is a brief overview of the DevStack setup:

  • first install git (should already be there) and tmux (to avoid an installation failure in case of a broken network connection):

    sudo apt-get install git tmux
  • clone the DevStack project:

    mkdir ~/projects
    cd ~/projects
    git clone https://opendev.org/openstack/devstack.git
    cd devstack
  • Optional: to get Antelope (alias 2023.1, see https://releases.openstack.org/), that is, the same version as in OpenStack from Scratch, you can switch to this branch:

    # list all branches
    git branch -a
    # checkout Antelope
    git checkout stable/2023.1
    git branch -v
    
      master        2211c778 Allow devstack to set cache driver for glance
    * stable/2023.1 146bf0d1 Merge "git: git checkout for a commit hash combinated with depth argument" into stable/2023.1
  • here is the version I tested:

    $ git branch -v
    
    * master b10c0602 Merge "Add config options for cinder nfs backend"
  • now create a custom local.conf (in the ~/projects/devstack directory) with these contents:

    [[local|localrc]]
    ADMIN_PASSWORD=Secret123
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
  • If you have an old CPU (older than "Nehalem") you need to override the CPU type, otherwise Nova will refuse to use this host for virtualisation and the DevStack setup will fail. Append to local.conf:

    LIBVIRT_CPU_MODE=custom
    LIBVIRT_CPU_MODEL=kvm64

    See https://wiki.openstack.org/wiki/LibvirtXMLCPUModel for details. You can find the required CPU capabilities under /usr/share/libvirt/cpu_map/ (on an Ubuntu 22.04 host). Look into /usr/share/libvirt/cpu_map/x86_Nehalem.xml for the default Nehalem CPU model.
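    A quick heuristic to test your own CPU (my suggestion - Nehalem introduced the sse4_2 and popcnt flags, which the Nehalem libvirt model requires):

    grep -m1 -o -w -e sse4_2 -e popcnt /proc/cpuinfo \
        || echo 'pre-Nehalem CPU: keep the kvm64 override above'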

    That requirement for a Nehalem or better CPU comes from this commit:

  • Using "shared guest interface": If you have only one NIC available and still want to access your VMs from your physical network you can used so called "shared guest interface", from: https://docs.openstack.org/devstack/latest/networking.html#shared-guest-interface

    Unfortunately I experienced clashes with the existing IPv6 configuration in that case, so I can't recommend it...

  • optional: speed up the installation of Debian packages by avoiding the expensive fsync(2) calls of DPKG and other tools:

    sudo apt-get install eatmydata
    • and apply this patch:
      git diff
      diff --git a/functions-common b/functions-common
      index 4eed5d84..9bc6e062 100644
      --- a/functions-common
      +++ b/functions-common
      @@ -1224,7 +1224,7 @@ function apt_get {
           $sudo DEBIAN_FRONTEND=noninteractive \
               http_proxy=${http_proxy:-} https_proxy=${https_proxy:-} \
               no_proxy=${no_proxy:-} \
      -        apt-get --option "Dpkg::Options::=--force-confold" --assume-yes "$@" < /dev/null
      +        eatmydata apt-get --option "Dpkg::Options::=--force-confold" --assume-yes "$@" < /dev/null
           result=$?
       
           # stop the clock
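      If you prefer not to edit functions-common by hand, the same one-line change can be applied with sed (my sketch - it simply prefixes that apt-get call with eatmydata):

      sed -i 's/^\(\s*\)apt-get --option/\1eatmydata apt-get --option/' functions-common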
  • now I strongly recommend running a tmux session so the installation will not be interrupted if your SSH connection to the VM suddenly drops... For example:
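    tmux new -s devstack     # create a named session and work inside it
    # if the SSH connection drops, log in again and re-attach:
    tmux attach -t devstack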

  • here is my complete local.conf:

    [[local|localrc]]
    ADMIN_PASSWORD=Secret123
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    LIBVIRT_CPU_MODE=custom
    LIBVIRT_CPU_MODEL=kvm64
    DEST=/opt/stack
    LOGFILE=$DEST/logs/stack.sh.log
    LOG_COLOR=False
  • and finally run:

    ./stack.sh
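    To also record how long the installation took (as in the timing shown below), you can wrap it in time(1):

    time ./stack.sh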

In my case (both the Azure VM and an old bare-metal 2-core AMD machine) everything went smoothly and I saw output like:

This is your host IP address: 192.168.0.51
This is your host IPv6 address: 2a02:d8a0:a:1c29:6250:40ff:fe30:2010
Horizon is now available at http://192.168.0.51/dashboard
Keystone is serving at http://192.168.0.51/identity/
The default users are: admin and demo
The password: Secret123

Services are running under systemd unit files.
For more information see:
https://docs.openstack.org/devstack/latest/systemd.html

DevStack Version: 2023.2
Change: b10c06027273d125f2b8cd14d4b19737dfb94b94 Merge "Add config options for cinder nfs backend" 2023-03-27 14:20:04 +0000
OS Version: Ubuntu 22.04 jammy

2023-04-01 08:20:47.487 | stack.sh completed in 3411 seconds.

# This time is for old AMD 2 core machine on Btrfs:
real    56m51.918s

To test it - try the admin user:

$ source openrc admin

WARNING: setting legacy OS_TENANT_NAME to support cli tools.

$ openstack project list

+----------------------------------+--------------------+
| ID                               | Name               |
+----------------------------------+--------------------+
| 2659ce1726e5421590678823c176d9d6 | service            |
| 399bbd9b3faa48a7946077cc03b9dd44 | demo               |
| 510ba5460e7e46cda47eb8129defc4da | admin              |
| 69e652e5921f4ca4a80ce9235bc1957b | alt_demo           |
| 6ac85cd5bf964dd99e282a9203355d1b | invisible_to_admin |
+----------------------------------+--------------------+

$ openstack user list

+----------------------------------+-----------------+
| ID                               | Name            |
+----------------------------------+-----------------+
| 3753443c8620420b8581606c11800906 | admin           |
| 9102b5cabe3e47958eccf96fa27c2008 | demo            |
| 1a2fc3dbd71641a381ac30115a3bc820 | demo_reader     |
| 853e3c5a6d79431387a57b0667f7318b | alt_demo        |
| 45c2ea8ef2224a558ba216a3d1ac93f5 | alt_demo_member |
| c3ba3adcb4a84c4589973e995e80352a | alt_demo_reader |
| 0cf42a048a054d20a75e346d0dd1b3f9 | system_member   |
| dee0578f456a4090b03a799303b74bb4 | system_reader   |
| c2b26c086a884e72aceb2aa37c519b99 | nova            |
| 8bf6a427691d42b598acda57b13c7139 | glance          |
| 5cc6464d32904cfc904d8a96fbe6ce73 | cinder          |
| 1a9314b2874f4b6797425e964bf766ab | neutron         |
| f5248f0f39d944d1a0024baa3e8b4a2d | placement       |
+----------------------------------+-----------------+

$ openstack hypervisor list

+--------------------------------------+-------------------------+-----------------+--------------+-------+
| ID                                   | Hypervisor Hostname     | Hypervisor Type | Host IP      | State |
+--------------------------------------+-------------------------+-----------------+--------------+-------+
| eedee772-e592-430a-bca3-f427460263bd | x2-DEVSTACK.example.com | QEMU            | 192.168.0.51 | up    |
+--------------------------------------+-------------------------+-----------------+--------------+-------+

To see the network list with details, try

  • command:

    openstack network list --long -f yaml
  • output:

    - Availability Zones: []
      ID: 957a08d8-023d-4a82-a69a-f4c54e6b0b95
      Name: private
      Network Type: geneve
      Project: 399bbd9b3faa48a7946077cc03b9dd44
      Router Type: false
      Shared: false
      State: true
      Status: ACTIVE
      Subnets:
      - 31f77e76-4aa9-4046-bfcd-568f95948028
      - f9e0647b-00b3-448e-98db-6e1da71ca691
      Tags: []
    - Availability Zones: []
      ID: a7cadd8b-a0de-43db-a92a-e85fb2e36865
      Name: public
      Network Type: flat
      Project: 510ba5460e7e46cda47eb8129defc4da
      Router Type: true
      Shared: false
      State: true
      Status: ACTIVE
      Subnets:
      - c0943a8e-4df4-4384-a2ed-df2b2bf7e9fb
      - fbbaab90-e12a-4730-883d-b754bc47e04b
      Tags: []
    - Availability Zones: []
      ID: c9debcc0-60ec-4e9d-b75b-1eef7b4ca542
      Name: shared
      Network Type: geneve
      Project: 510ba5460e7e46cda47eb8129defc4da
      Router Type: false
      Shared: true
      State: true
      Status: ACTIVE
      Subnets:
      - 087a08b1-0d88-4aa4-b817-a4836168d7bf
      Tags: []
  • we can see 3 networks of 2 types:

    1. name private, type geneve (an overlay network for the VMs; they see one flat network across all hypervisors, even when those hypervisors run on different nodes). NOTE: the private network should be accessed by VMs only! If you want to access a VM from an external network you should assign it a Floating IP from the public network
    2. name public, type flat (normally a "real" network, though not in my case). This network is normally accessible from the world, and public Floating IPs used to access VMs are assigned from it.
    3. name shared, type geneve (again an overlay for VMs) - I found no description :-O
  • normally you can assign only geneve networks to VMs (trying to assign the public network will cause an error on VM creation):

    aborted: Failed to allocate the network(s), not rescheduling
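    To see the reason for such a failure you can inspect the instance's fault field (a generic openstack CLI query - a sketch with a placeholder VM name):

    openstack server show <vm-name> -f value -c fault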
    

How to run 1st VM

To run a VM we need at least:

  • Disk Image
  • Flavor
  • SSH key-pair (recommended)
  • Network (recommended to be useful)
  • Network Security Group (recommended)

For consistency we will selectively follow:

  • https://docs.openstack.org/neutron/latest/contributor/testing/ml2_ovn_devstack.html#environment-variables

  • first set the environment for user and tenant (project) demo:

    source ~/projects/devstack/openrc
  • verify that you became the right user using:

    $ env | egrep '^OS_(REGION_NAME|USERNAME|PROJECT_NAME)='
    
    OS_REGION_NAME=RegionOne
    OS_USERNAME=demo
    OS_PROJECT_NAME=demo
  • list visible networks:

    $ openstack network list --fit-width
    
    +--------------------------------------+---------+-------------------------------------------------------------------+
    | ID                                   | Name    | Subnets                                                           |
    +--------------------------------------+---------+-------------------------------------------------------------------+
    | 094d8c6a-cd8a-4dec-a885-156c9769f25a | private | 31c535e9-4c8c-48f2-88ef-c58322b9f416,                             |
    |                                      |         | ff8772de-8c71-49ad-85c6-64fafe08390c                              |
    | 0f015751-63f9-4b25-9933-1cd97362075b | shared  | b77f84e1-4ca5-426d-9636-8fa3731286bf                              |
    | c2738425-88af-4cd8-aaf8-961e4360f588 | public  | a96a6349-5e40-43b3-8c24-03c1857727d9, bbb552ec-                   |
    |                                      |         | df7c-44e9-a3a9-934d07048399                                       |
    +--------------------------------------+---------+-------------------------------------------------------------------+
  • please note that unlike the guide at https://docs.openstack.org/neutron/latest/contributor/testing/ml2_ovn_devstack.html#environment-variables we additionally see the network shared

  • now list Logical Switches (LS):

    $ sudo ovn-nbctl ls-list
    
    0f8b746c-4fa7-4d85-9f2a-6ca3cfcd5da0 (neutron-094d8c6a-cd8a-4dec-a885-156c9769f25a)
    fbbeb3d6-e31c-45e5-b47b-a27a63142f9e (neutron-0f015751-63f9-4b25-9933-1cd97362075b)
    d4e30bfa-347a-4961-a1a6-7c7d21801233 (neutron-c2738425-88af-4cd8-aaf8-961e4360f588)
  • to see logical switches, routers and ports, simply run:

    $ sudo ovn-nbctl show
    
    # Logical switch for 'private' network:
    switch 0f8b746c-4fa7-4d85-9f2a-6ca3cfcd5da0 (neutron-094d8c6a-cd8a-4dec-a885-156c9769f25a) (aka private)
        # ports of this logical switch:
        port 7e2d4ef3-a4ab-4d71-a610-d59a13061f54
            type: router
            router-port: lrp-7e2d4ef3-a4ab-4d71-a610-d59a13061f54
        port cf744d1b-5983-46d2-be19-591090c6ff36
            type: router
            router-port: lrp-cf744d1b-5983-46d2-be19-591090c6ff36
        port d7470f7b-d60b-449e-8aa2-f99c4abd92fa
            type: localport
            addresses: ["fa:16:3e:2f:5d:9d 10.0.0.2 fd6a:1832:d33d:0:f816:3eff:fe2f:5d9d"]
    
    # logical switch for 'public' network:
    switch d4e30bfa-347a-4961-a1a6-7c7d21801233 (neutron-c2738425-88af-4cd8-aaf8-961e4360f588) (aka public)
        port 5f96ab2e-fdc3-4371-aa1a-1d37d305b35a
            type: router
            router-port: lrp-5f96ab2e-fdc3-4371-aa1a-1d37d305b35a
        port provnet-d43f3d9a-6eeb-42b9-8de0-5f64d620be1f
            type: localnet
            addresses: ["unknown"]
        port 1f7b90b1-9ea7-4205-8b11-6ef7f3094290
            type: localport
            addresses: ["fa:16:3e:31:02:74"]
    
    # logical switch for 'shared' network
    switch fbbeb3d6-e31c-45e5-b47b-a27a63142f9e (neutron-0f015751-63f9-4b25-9933-1cd97362075b) (aka shared)
        port 63a954c8-6381-4306-ae81-85d23b576fe6
            type: localport
            addresses: ["fa:16:3e:e6:92:81 192.168.233.2"]
    
    # logical Router 'router1'
    router 837be815-83e5-4546-9202-cdf1d1b0bd1f (neutron-22154c66-0372-4bbf-b4e6-45932d257c6e) (aka router1)
        # logical Router port:
        port lrp-5f96ab2e-fdc3-4371-aa1a-1d37d305b35a
            mac: "fa:16:3e:ba:91:24"
            networks: ["172.24.4.159/24", "2001:db8::1/64"]
            gateway chassis: [ca2663dc-2529-4b31-a0a2-3d2c169f5097]
        port lrp-7e2d4ef3-a4ab-4d71-a610-d59a13061f54
            mac: "fa:16:3e:03:bb:12"
            networks: ["fd6a:1832:d33d::1/64"]
        port lrp-cf744d1b-5983-46d2-be19-591090c6ff36
            mac: "fa:16:3e:a7:86:a0"
            networks: ["10.0.0.1/26"]
        nat aaaae326-2bda-43e0-993d-e150829de199
            external ip: "172.24.4.159"
            logical ip: "10.0.0.0/26"
            type: "snat"

There are 2 important networks here:

  • private: 10.0.0.0/26 - the overlay network for VMs - they get their IP addresses here

  • public: 172.24.4.0/24 - normally an "external" network accessible from the world. However, in our case it is assigned to the br-ex bridge and accessible from the Host machine only. This is because in the recommended OpenStack setup there should be 2 LAN cards on the Host:

    1. one LAN used for Management only
    2. the other LAN configured as the public network, to access VMs via Floating IPs

    Both LANs have to have a static IP/Netmask/Gateway assignment, and one has to specify all the necessary parameters in the local.conf setup, as described at https://docs.openstack.org/devstack/latest/networking.html

Now we will mostly follow https://docs.openstack.org/neutron/latest/contributor/testing/ml2_ovn_devstack.html#booting-vms to boot the 1st VM:

  • prepare a key pair for SSH login:

    $ openstack keypair create demo > ~/id_rsa_demo
    
    $ chmod 600 ~/id_rsa_demo
    
    $ file ~/id_rsa_demo
    
    ~/id_rsa_demo: PEM RSA private key
  • list available flavors and ensure that m1.nano is defined:

    $ openstack flavor list
    +----+-----------+-------+------+-----------+-------+-----------+
    | ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
    +----+-----------+-------+------+-----------+-------+-----------+
    | 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
    | 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
    | 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
    | 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
    | 42 | m1.nano   |   128 |    1 |         0 |     1 | True      |
    | 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
    | 84 | m1.micro  |   192 |    1 |         0 |     1 | True      |
    | c1 | cirros256 |   256 |    1 |         0 |     1 | True      |
    | d1 | ds512M    |   512 |    5 |         0 |     1 | True      |
    | d2 | ds1G      |  1024 |   10 |         0 |     1 | True      |
    | d3 | ds2G      |  2048 |   10 |         0 |     2 | True      |
    | d4 | ds4G      |  4096 |   20 |         0 |     4 | True      |
    +----+-----------+-------+------+-----------+-------+-----------+
  • now list available images - there should be only 1 CirrOS image:

    $ openstack image list
    
    +--------------------------------------+--------------------------+--------+
    | ID                                   | Name                     | Status |
    +--------------------------------------+--------------------------+--------+
    | 9f1a5e95-9617-40c6-9916-5f50d9da8d01 | cirros-0.5.2-x86_64-disk | active |
    +--------------------------------------+--------------------------+--------+
  • create a variable holding the image ID - note that this will not work if there is more than 1 image:

    $ IMAGE_ID=$(openstack image list -c ID -f value)
    
    $ echo $IMAGE_ID
    
    9f1a5e95-9617-40c6-9916-5f50d9da8d01
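    If you already have more than one image, you can filter by name instead (a sketch, assuming the image name from the listing above):

    IMAGE_ID=$(openstack image list --name cirros-0.5.2-x86_64-disk -c ID -f value)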
  • now we will relax the default security group to allow all ICMP and SSH access (of course you should never do that on a production system, but DevStack is not meant for production anyway):

    $ openstack security group rule create --ingress --ethertype IPv4 --dst-port 22 --protocol tcp default
    
    $ openstack security group rule create --ingress --ethertype IPv4 --protocol ICMP default
    
    # list our rules - with filter:
    $ openstack security group rule list | egrep '(22:22|icmp)'
    
    | 4cb07a05-8204-46f9-8b08-1848153bb0f0 | icmp        | IPv4      | 0.0.0.0/0 |            | ingress   | None                                 | None                 | 6e3df21c-4701-4459-ae7d-b95324f05234 |
    | ac2bf385-100d-4ddf-a506-be695edc0de1 | tcp         | IPv4      | 0.0.0.0/0 | 22:22      | ingress   | None                                 | None                 | 6e3df21c-4701-4459-ae7d-b95324f05234 |

Now the moment of truth - creating the 1st VM:

$ openstack server create --network private --flavor m1.nano --image $IMAGE_ID --key-name demo test1

Now poll the openstack server list command until our instance reaches the ACTIVE state:

$ openstack server list --fit-width

+----------------------+-------+--------+----------------------+----------------------+---------+
| ID                   | Name  | Status | Networks             | Image                | Flavor  |
+----------------------+-------+--------+----------------------+----------------------+---------+
| c8245647-970f-4767-  | test1 | ACTIVE | private=10.0.0.20, f | cirros-0.5.2-x86_64- | m1.nano |
| bc86-fdfabe338102    |       |        | d6a:1832:d33d:0:f816 | disk                 |         |
|                      |       |        | :3eff:feec:f81       |                      |         |
+----------------------+-------+--------+----------------------+----------------------+---------+
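Instead of polling by hand you can loop until the state changes - a simple shell sketch:

until [ "$(openstack server show test1 -f value -c status)" = "ACTIVE" ]; do
    sleep 5
done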

Please be aware that the private network at 10.0.0.0/26 should be kept solely for VM-to-VM communication. To access a VM from a public IP address we should assign it a Floating IP (from the public network at 172.24.4.0/24) using:

# get port for our VM called `test1`

$ TEST1_PORT_ID=$(openstack port list --server test1 -c id -f value)
$ echo $TEST1_PORT_ID

01df943f-010e-4a6e-8fb8-7e07b0e13d46

Now create a Floating (public) IP address associated with our VM (using its logical port connected to the Logical Switch):

$ openstack floating ip create --port $TEST1_PORT_ID public --fit-width

+---------------------+---------------------------------------------------------------------------------------------------------+
| Field               | Value                                                                                                   |
+---------------------+---------------------------------------------------------------------------------------------------------+
| created_at          | 2023-04-02T13:29:01Z                                                                                    |
| description         |                                                                                                         |
| dns_domain          |                                                                                                         |
| dns_name            |                                                                                                         |
| fixed_ip_address    | 10.0.0.20                                                                                               |
| floating_ip_address | 172.24.4.116                                                                                            |
| floating_network_id | c2738425-88af-4cd8-aaf8-961e4360f588                                                                    |
| id                  | 13d35be2-885b-4fa5-b141-622908641e0a                                                                    |
| name                | 172.24.4.116                                                                                            |
| port_details        | {'name': '', 'network_id': '094d8c6a-cd8a-4dec-a885-156c9769f25a', 'mac_address': 'fa:16:3e:ec:0f:81',  |
|                     | 'admin_state_up': True, 'status': 'ACTIVE', 'device_id': 'c8245647-970f-4767-bc86-fdfabe338102',        |
|                     | 'device_owner': 'compute:nova'}                                                                         |
| port_id             | 01df943f-010e-4a6e-8fb8-7e07b0e13d46                                                                    |
| project_id          | c7a60681836147248e12413b20f0b161                                                                        |
| qos_policy_id       | None                                                                                                    |
| revision_number     | 0                                                                                                       |
| router_id           | 22154c66-0372-4bbf-b4e6-45932d257c6e                                                                    |
| status              | DOWN                                                                                                    |
| subnet_id           | None                                                                                                    |
| tags                | []                                                                                                      |
| updated_at          | 2023-04-02T13:29:01Z                                                                                    |
+---------------------+---------------------------------------------------------------------------------------------------------+

In our example it is this IP:

| floating_ip_address | 172.24.4.116 |

However, as described in the manual, we first need to configure br-ex (it has no IP configuration because it is not bound to any real LAN card in the default setup):

$ sudo ip link set br-ex up

$ sudo ip route add 172.24.4.0/24 dev br-ex

$ sudo ip addr add 172.24.4.1/24 dev br-ex

$ ip -br a s dev br-ex

br-ex            UNKNOWN        172.24.4.1/24 fe80::f815:4cff:fe57:7b48/64

# notice that we have a link-local IPv6 address ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# (the fe80:: prefix is for link-local, autoconfigured, non-routable IPv6 addresses)

$ ip r | fgrep br-ex

172.24.4.0/24 dev br-ex scope link
172.24.4.0/24 dev br-ex proto kernel scope link src 172.24.4.1

WARNING! This br-ex configuration will not persist across reboots...
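If you want it to survive reboots, one possible approach (my sketch, not an official DevStack mechanism) is a small systemd oneshot unit. Note that br-ex is created by Open vSwitch during DevStack startup, so you may need to tune the unit's ordering on your system:

# /etc/systemd/system/br-ex-addr.service  (hypothetical unit name)
[Unit]
Description=Assign IP address to br-ex for DevStack floating IPs
After=network-online.target devstack@q-svc.service

[Service]
Type=oneshot
# the '-' prefix tells systemd to ignore failures (e.g. address already set)
ExecStart=-/usr/sbin/ip link set br-ex up
ExecStart=-/usr/sbin/ip addr add 172.24.4.1/24 dev br-ex

[Install]
WantedBy=multi-user.target

# enable it with:
#   sudo systemctl daemon-reload && sudo systemctl enable br-ex-addr.service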

Finally we can try to log in to the VM using the floating IP address:

# verify that VM is alive:

$ ping -c 1  172.24.4.116

PING 172.24.4.116 (172.24.4.116) 56(84) bytes of data.
64 bytes from 172.24.4.116: icmp_seq=1 ttl=63 time=10.8 ms

# NOTE: Recent SSH clients refuse RSA ssh keys by default, but we have one here :-)
#       Hence the '-o ...' option...
$ ssh -o 'PubkeyAcceptedKeyTypes=+ssh-rsa'  -i ~/id_rsa_demo cirros@172.24.4.116 uname -a

Linux test1 5.3.0-26-generic #28~18.04.1-Ubuntu SMP Wed Dec 18 16:40:14 UTC 2019 x86_64 GNU/Linux

# or try:

$ ssh -o 'PubkeyAcceptedKeyTypes=+ssh-rsa'  -i ~/id_rsa_demo cirros@172.24.4.116 ip -4 a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
    inet 10.0.0.20/26 brd 10.0.0.63 scope global eth0
       valid_lft forever preferred_lft forever

Direct console access:

  • you can see the latest console log of the VM 'test1' using this command:

    $ openstack console log show test1
    
    ...
    === datasource: ec2 net ===
    instance-id: i-00000001
    name: N/A
    availability-zone: nova
    local-hostname: test1.novalocal
    launch-index: 0
    === cirros: current=0.5.2 uptime=18.64 ===
      ____               ____  ____
     / __/ __ ____ ____ / __ \/ __/
    / /__ / // __// __// /_/ /\ \
    \___//_//_/  /_/   \____/___/
       http://cirros-cloud.net
    
    
    login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
    test1 login:
  • you can also show the URL for connecting to the noVNC console via a regular browser using a command like:

    $ openstack  console url show test1
    +----------+---------------------------------------------------------------------------------------------+
    | Field    | Value                                                                                       |
    +----------+---------------------------------------------------------------------------------------------+
    | protocol | vnc                                                                                         |
    | type     | novnc                                                                                       |
    | url      | http://192.168.0.51:6080/vnc_lite.html?path=%3Ftoken%3D49b8fd50-0eb9-49d0-b556-1b91fb7367e4 |
    +----------+---------------------------------------------------------------------------------------------+

Again, visit the original manual at:

Access from a VM to the Internet: in the default setup (where the bridge br-ex is not assigned to a real LAN card) you have to manually enable masquerading as described here:

  • https://rahulait.wordpress.com/2016/06/27/manually-routing-traffic-from-br-ex-to-internet-devstack/

  • note that current DevStack uses 172.24.4.0/24 instead of 172.24.0.0/24, so we have to use these commands instead:

    $ sudo iptables -t nat -I POSTROUTING -o eth0 -s 172.24.4.0/24 -j MASQUERADE
    $ sudo iptables -I FORWARD -s 172.24.4.0/24 -j ACCEPT
  • now from the VM you can try commands like these (where 1.1.1.1 is Cloudflare's public DNS):

    $ hostname
    
    test1
    
    $ nslookup www.cnn.com 1.1.1.1
    Server:		1.1.1.1
    Address:	1.1.1.1:53
    
    Non-authoritative answer:
    www.cnn.com	canonical name = cnn-tls.map.fastly.net
    Name:	cnn-tls.map.fastly.net
    ...
  • please note that the above NAT rules will not persist across a Host reboot.
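    If you need them to persist, one common option (standard Ubuntu tooling, not DevStack-specific) is the iptables-persistent package:

    sudo apt-get install iptables-persistent
    sudo netfilter-persistent save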

Trying to understand OVN

OpenStack networking is a subject of constant change, so it is difficult to learn:

  • at first there was only a flat network provided directly by the Nova (hypervisor) layer, later known as "Nova/legacy networking"
  • later the Quantum project was created, soon renamed to Neutron, with its own network services and agents
  • Neutron started to use OVS (Open vSwitch)
  • and now Neutron has been redesigned to use OVN as a layer on top of OVS.

Please see the dedicated wiki page OVN. Here are just a few random commands:


So start as user demo in tenant (project) demo (created by the DevStack setup):

source ~/projects/devstack/openrc

List networks with:

$ openstack network list --fit-width

+--------------------------------------+---------+----------------------------------------------+
| ID                                   | Name    | Subnets                                      |
+--------------------------------------+---------+----------------------------------------------+
| 957a08d8-023d-4a82-a69a-f4c54e6b0b95 | private | 31f77e76-4aa9-4046-bfcd-568f95948028,        |
|                                      |         | f9e0647b-00b3-448e-98db-6e1da71ca691         |
| a7cadd8b-a0de-43db-a92a-e85fb2e36865 | public  | c0943a8e-4df4-4384-a2ed-df2b2bf7e9fb,        |
|                                      |         | fbbaab90-e12a-4730-883d-b754bc47e04b         |
| c9debcc0-60ec-4e9d-b75b-1eef7b4ca542 | shared  | 087a08b1-0d88-4aa4-b817-a4836168d7bf         |
+--------------------------------------+---------+----------------------------------------------+

NOTE: unlike the above guide, there is an additional shared network.

Now list the logical (= software) switches (LS) using:

$ sudo ovn-nbctl ls-list

847ee08c-7eac-48ff-b954-e1019fd34c52 (neutron-957a08d8-023d-4a82-a69a-f4c54e6b0b95)
2bfd05a1-2977-45fe-953d-924fec142f9e (neutron-a7cadd8b-a0de-43db-a92a-e85fb2e36865)
5ea7de91-eacd-45b1-978b-b26494ea4d27 (neutron-c9debcc0-60ec-4e9d-b75b-1eef7b4ca542)

Now list the ports of a specific switch:

  • for the private switch:

    $ sudo ovn-nbctl show 847ee08c-7eac-48ff-b954-e1019fd34c52
    
    switch 847ee08c-7eac-48ff-b954-e1019fd34c52 (neutron-957a08d8-023d-4a82-a69a-f4c54e6b0b95) (aka private)
        port 94d25465-e565-42a1-b460-f6f76e25ac5d
            type: router
            router-port: lrp-94d25465-e565-42a1-b460-f6f76e25ac5d
        port 90b1e1e7-97ba-4f25-b409-86859f01d55f
            type: localport
            addresses: ["fa:16:3e:7e:40:f8 10.0.0.2 fd4f:31da:92a0:0:f816:3eff:fe7e:40f8"]
        port 6791c631-89a6-41e0-b1d0-446fc53b58ce
            type: router
            router-port: lrp-6791c631-89a6-41e0-b1d0-446fc53b58ce
        port 41d70e44-9cef-4d6c-add3-8d27a9ad783d
            addresses: ["fa:16:3e:ce:ec:1a 10.0.0.13 fd4f:31da:92a0:0:f816:3eff:fece:ec1a"]
  • for the public switch:

    $ sudo ovn-nbctl show 2bfd05a1-2977-45fe-953d-924fec142f9e
    
    switch 2bfd05a1-2977-45fe-953d-924fec142f9e (neutron-a7cadd8b-a0de-43db-a92a-e85fb2e36865) (aka public)
        port provnet-e24e5763-afaf-4bd4-ac95-33cadac75623
            type: localnet
            addresses: ["unknown"]
        port 73882136-9ead-473d-82f9-392937ef72e9
            type: router
            router-port: lrp-73882136-9ead-473d-82f9-392937ef72e9
        port 8bf6518c-c3bb-4289-978b-830c9a60d844
            type: localport
            addresses: ["fa:16:3e:2f:04:c0"]
  • for the shared switch:

    $ sudo ovn-nbctl show 5ea7de91-eacd-45b1-978b-b26494ea4d27
    
    switch 5ea7de91-eacd-45b1-978b-b26494ea4d27 (neutron-c9debcc0-60ec-4e9d-b75b-1eef7b4ca542) (aka shared)
        port 12f86248-e2cf-4ac8-9dbb-b0a34120a60c
            addresses: ["fa:16:3e:fd:62:00 192.168.233.161"]
        port 7b6880b6-8bb4-4022-b4d7-ecd6b43335c5
            type: localport
            addresses: ["fa:16:3e:08:1c:17 192.168.233.2"]

Now list Logical Routers (LR):

$ sudo ovn-nbctl lr-list

95aef87d-9352-4452-9b71-3601c72068c4 (neutron-d4593f5b-d70b-4ad6-8620-3aa6925e24e5)
  • to see the details of the above logical router you can try:

    $ sudo ovn-nbctl show 95aef87d-9352-4452-9b71-3601c72068c4
    
    router 95aef87d-9352-4452-9b71-3601c72068c4 (neutron-d4593f5b-d70b-4ad6-8620-3aa6925e24e5) (aka router1)
        port lrp-94d25465-e565-42a1-b460-f6f76e25ac5d
            mac: "fa:16:3e:bb:00:95"
            networks: ["10.0.0.1/26"]
        port lrp-6791c631-89a6-41e0-b1d0-446fc53b58ce
            mac: "fa:16:3e:b3:36:59"
            networks: ["fd4f:31da:92a0::1/64"]
        port lrp-73882136-9ead-473d-82f9-392937ef72e9
            mac: "fa:16:3e:f3:bb:04"
            networks: ["172.24.4.26/24", "2001:db8::1/64"]
            gateway chassis: [fd24ec5d-c0dd-4bd7-a342-14d2ef021cae]
        nat c134523d-3d34-4aef-8860-ec8093c539f7
            external ip: "172.24.4.26"
            logical ip: "10.0.0.0/26"
            type: "snat"

Seeing the ports of a started VM:

  • now start some VM; in my case I can simply start an existing one:

    $ openstack server list  --fit-width
    
    +---------------------+---------------+---------+---------------------+---------------------+-----------+
    | ID                  | Name          | Status  | Networks            | Image               | Flavor    |
    +---------------------+---------------+---------+---------------------+---------------------+-----------+
    | cd025a9c-5066-4878- | cirros-public | ERROR   |                     | cirros-0.5.2-       | cirros256 |
    | b699-1ae46e057d52   |               |         |                     | x86_64-disk         |           |
    | 8abc8093-ab39-48aa- | cirros-shared | SHUTOFF | shared=192.168.233. | cirros-0.5.2-       | cirros256 |
    | 8206-cb7427d8e38b   |               |         | 161                 | x86_64-disk         |           |
    | e2aa2e8b-d7c2-49cf- | cirros-1      | SHUTOFF | private=10.0.0.13,  | cirros-0.5.2-       | cirros256 |
    | a159-311a44e397a3   |               |         | fd4f:31da:92a0:0:f8 | x86_64-disk         |           |
    |                     |               |         | 16:3eff:fece:ec1a   |                     |           |
    +---------------------+---------------+---------+---------------------+---------------------+-----------+
    
    $ openstack server start cirros-1
  • wait until the VM starts (for example by polling the openstack server list command)

  • and let's look - there should be a new port in the private switch:

    $ sudo ovn-nbctl show 847ee08c-7eac-48ff-b954-e1019fd34c52
    
    ...
    switch 847ee08c-7eac-48ff-b954-e1019fd34c52 (neutron-957a08d8-023d-4a82-a69a-f4c54e6b0b95) (aka private)
       port 41d70e44-9cef-4d6c-add3-8d27a9ad783d
            addresses: ["fa:16:3e:ce:ec:1a 10.0.0.13 fd4f:31da:92a0:0:f816:3eff:fece:ec1a"]
  • notice the new port with an assigned MAC address and both an IPv4 and an IPv6 address.

Experimental: using MacVTap plugin

MacVTap is the most straightforward network interface available in OpenStack. It is a direct connection from the VM to a specified network interface (typically a bridge, to allow more than one VM on a single network interface). This way it should be possible to use OpenStack with a single network interface through a bridge (the same way as in Proxmox VE).

A MacVTap network is always "flat" and of "provider" type.

Please note that MacVTap does NOT support:

  • metadata service
  • internal DHCP
  • private and virtual networks

It should be possible to configure DevStack with a simple macvtap interface shared via a plain Linux bridge with the main network interface (similar to a standard Proxmox VE setup) as described on:

Here is what I'm currently testing on a nested VM (Ubuntu 22.04 LTS as usual):

  • /etc/netplan/99-openstack.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
  bridges:
    br-ex:
      interfaces: [eth0]
      dhcp4: no
      dhcp6: no
      addresses: [192.168.0.7/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [1.1.1.1]
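Apply the configuration with the standard netplan command:

    sudo netplan apply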
  • /etc/hostname
devstack2
  • /etc/hosts
127.0.0.1 localhost
192.168.0.7 devstack2

# omitted IPv6 nonsense definitions
# ...

My preliminary local.conf:

[[local|localrc]]
ADMIN_PASSWORD=Secret123
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
LIBVIRT_CPU_MODE=custom
LIBVIRT_CPU_MODEL=kvm64
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
LOG_COLOR=False

# Disable: cinder (managed persistent volumes) tempest (test suite) etcd3 dstat ovn ovs
disable_service c-sch c-api c-vol tempest etcd3 dstat ovn-controller ovn-northd ovs-vswitchd ovsdb-server

HOST_IP=192.168.0.7
SERVICE_HOST=192.168.0.7
MYSQL_HOST=192.168.0.7
RABBIT_HOST=192.168.0.7

Q_ML2_PLUGIN_MECHANISM_DRIVERS=macvtap
Q_USE_PROVIDER_NETWORKING=True

## MacVTap agent options
Q_AGENT=macvtap
PHYSICAL_NETWORK=default

IPV4_ADDRS_SAFE_TO_USE="192.168.0.128/25"
NETWORK_GATEWAY=192.168.0.1
PROVIDER_SUBNET_NAME="provider_net"
PROVIDER_NETWORK_TYPE="flat"
SEGMENTATION_ID=2010
USE_SUBNETPOOL=False

[[post-config|/$Q_PLUGIN_CONF_FILE]]
[macvtap]
physical_interface_mappings = $PHYSICAL_NETWORK:br-ex

[[post-config|$NOVA_CONF]]
force_config_drive = True

Resources

Not sure why, but the official docs are missing a quick-start:

Some outdated info on Neutron:
