Automated installation scripts for deploying OpenStack Yoga on a 3-node architecture:
- Controller Node: Identity (Keystone), Image (Glance), Compute API (Nova), Network Server (Neutron), Dashboard (Horizon)
- Network/Neutron Node: OVS Agent, L3 Agent, DHCP Agent, Metadata Agent
- Compute Node: Nova Compute, Neutron OVS Agent, KVM/QEMU/Libvirt
openstack-yoga-3node/
├── configs/
│ └── cluster.env # Main configuration file (EDIT THIS FIRST!)
├── scripts/
│ ├── controller.sh # Controller node installation
│ ├── neutron.sh # Network node installation
│ └── compute.sh # Compute node installation
├── templates/ # Configuration templates
│ ├── keystone.conf.template
│ ├── glance-api.conf.template
│ ├── nova-controller.conf.template
│ ├── nova-compute.conf.template
│ ├── neutron.conf.template
│ ├── ml2_conf.ini.template
│ ├── openvswitch_agent.ini.template
│ ├── l3_agent.ini.template
│ ├── dhcp_agent.ini.template
│ ├── metadata_agent.ini.template
│ └── ...
├── logs/ # Installation logs (auto-created)
└── README.md # This file
- CPU: 2 cores (with VT-x/AMD-V for compute)
- RAM: 4 GB (8 GB recommended for controller)
- Storage: 40 GB primary disk
- Network: Based on your environment (see Networking section)
- Ubuntu 20.04 LTS or Ubuntu 22.04 LTS
- Fresh installation recommended
- All nodes must be able to reach each other
- Controller node must be accessible by compute/network nodes
- Internet access for package installation
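Since the scripts and the later `scp` commands refer to the nodes by hostname, it helps to have name resolution in place on every node first. A minimal `/etc/hosts` sketch, assuming the example hostnames and management IPs used throughout this README (substitute your own):

```text
# /etc/hosts — add on every node (values are this README's examples)
100.100.100.1   controller01 controller
100.100.100.2   compute01    compute
100.100.100.3   neutron01    neutron
```

The short aliases match the `root@controller` / `root@compute` / `root@neutron` targets used in the copy step below.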
The installation manuals describe a 2-NIC setup:
- Management Network: Internal OpenStack communication (API, RabbitMQ, Database)
- Provider/Public Network: External access, floating IPs, tenant networks
┌─────────────────────────────────────────────────────────────────────────────┐
│ Management Network (100.100.100.0/24) │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Controller │ │ Neutron │ │ Compute │ │
│ │ .1 │ │ .3 │ │ .2 │ │
│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ │
│ │ │ │ │
└─────────┴─────────────────────────┴────────────────────────┴────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ Provider Network (192.168.116.0/24) │
│ │ │ │ │
│ ┌──────┴──────┐ ┌───────┴──────┐ ┌──────┴──────┐ │
│ │ Controller │ │ Neutron │ │ Compute │ │
│ │ .130 │ │ .134 │ │ .132 │ │
│ │ │ │ (br-ex) │ │ │ │
│ └─────────────┘ └──────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
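On Ubuntu 20.04/22.04 the two NICs are usually configured with netplan. A sketch for the controller node only, assuming the example addresses and interface names from this README (the file name, interface names, and gateway are illustrative — adjust for your environment):

```yaml
# /etc/netplan/01-openstack.yaml (example name) — controller node
network:
  version: 2
  ethernets:
    ens33:                          # management NIC, no default route
      addresses: [100.100.100.1/24]
    ens34:                          # provider/public NIC
      addresses: [192.168.116.130/24]
      routes:
        - to: default
          via: 192.168.116.1
      nameservers:
        addresses: [8.8.8.8]
```

Apply with `sudo netplan apply` and re-check `ip addr show` before running any of the installation scripts.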
Edit configs/cluster.env with your actual interface names. To find them:
# List network interfaces
ip link show
# or
nmcli device status

Common interface naming conventions:
- Legacy: eth0, eth1
- Predictable: ens33, ens34, enp0s3, enp0s8
Copy the entire openstack-yoga-3node folder to each node:
# From your workstation
scp -r openstack-yoga-3node/ root@controller:/root/
scp -r openstack-yoga-3node/ root@neutron:/root/
scp -r openstack-yoga-3node/ root@compute:/root/

On each node, edit configs/cluster.env:
cd /root/openstack-yoga-3node
vim configs/cluster.env

REQUIRED CHANGES:
| Variable | Description | Example |
|---|---|---|
| HOST_CONTROLLER | Controller hostname | controller01 |
| HOST_COMPUTE | Compute hostname | compute01 |
| HOST_NEUTRON | Network hostname | neutron01 |
| IP_CONTROLLER_MGMT | Controller management IP | 100.100.100.1 |
| IP_CONTROLLER_PUBLIC | Controller public IP | 192.168.116.130 |
| IP_COMPUTE_MGMT | Compute management IP | 100.100.100.2 |
| IP_COMPUTE_PUBLIC | Compute public IP | 192.168.116.132 |
| IP_NEUTRON_MGMT | Network management IP | 100.100.100.3 |
| IP_NEUTRON_PUBLIC | Network public IP | 192.168.116.134 |
| IFACE_*_MGMT | Management interface per node | ens33 |
| IFACE_*_PUBLIC | Public interface per node | ens34 |
| KEYSTONE_ADMIN_PASS | Admin password | Change from default! |
| DB_ROOT_PASS | MariaDB root password | Change from default! |
Order is important! Run scripts in this sequence:
cd /root/openstack-yoga-3node
chmod +x scripts/*.sh
sudo ./scripts/controller.sh

Wait for completion. Verify with:
source /root/admin-openrc
openstack service list
openstack endpoint list

cd /root/openstack-yoga-3node
chmod +x scripts/*.sh
sudo ./scripts/neutron.sh

Wait for completion. Verify from controller:
source /root/admin-openrc
openstack network agent list

cd /root/openstack-yoga-3node
chmod +x scripts/*.sh
sudo ./scripts/compute.sh

After compute node installation:
# On controller node
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
# Verify
source /root/admin-openrc
openstack compute service list
openstack hypervisor list

# On Controller - Source credentials first
source /root/admin-openrc
# Check Keystone
openstack token issue
# List all services
openstack service list
# List all endpoints
openstack endpoint list
# Check compute services
openstack compute service list
# Check network agents
openstack network agent list
# Check hypervisors
openstack hypervisor list
# Check images
openstack image list

Open in browser:
http://<CONTROLLER_PUBLIC_IP>/horizon
Login credentials:
- Domain: default
- User: admin
- Password: (your KEYSTONE_ADMIN_PASS from cluster.env)
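The launch example below uses an image named `cirros`. If `openstack image list` came back empty during verification, a small CirrOS test image can be uploaded first (with admin credentials sourced). The download URL and version here are illustrative, not part of the scripts:

```shell
# Download a small CirrOS cloud image (version is an example)
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img

# Register it in Glance under the name the launch example expects
openstack image create --disk-format qcow2 --container-format bare \
  --public --file cirros-0.5.2-x86_64-disk.img cirros
```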
# Source credentials
source /root/admin-openrc
# Create external network (on network node's br-ex)
openstack network create --external --provider-network-type flat \
--provider-physical-network physnet1 external
# Create external subnet
openstack subnet create --network external \
--subnet-range 192.168.116.0/24 \
--allocation-pool start=192.168.116.200,end=192.168.116.250 \
--gateway 192.168.116.1 \
--dns-nameserver 8.8.8.8 \
external-subnet
# Create internal network
openstack network create internal
# Create internal subnet
openstack subnet create --network internal \
--subnet-range 10.0.0.0/24 \
--gateway 10.0.0.1 \
--dns-nameserver 8.8.8.8 \
internal-subnet
# Create router
openstack router create router1
openstack router set --external-gateway external router1
openstack router add subnet router1 internal-subnet
# Create security group rules (allow ping and SSH)
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default
# Create keypair
openstack keypair create mykey > mykey.pem
chmod 600 mykey.pem
# Create a flavor (if not exists)
openstack flavor create --ram 512 --disk 1 --vcpus 1 m1.tiny
# Launch instance
openstack server create --flavor m1.tiny \
--image cirros \
--network internal \
--key-name mykey \
testvm
# Check instance status
openstack server list
# Create and attach floating IP
openstack floating ip create external
FIP=$(openstack floating ip list -f value -c "Floating IP Address" | head -1)
openstack server add floating ip testvm $FIP
# Test connectivity
ping $FIP

Symptom: Services fail to start with "Can't connect to MySQL"
Fix:
# On controller, verify MariaDB is listening
ss -tlnp | grep 3306
# Check MariaDB bind address
grep bind-address /etc/mysql/mariadb.conf.d/99-openstack.cnf
# Test connectivity from other nodes
mysql -h <CONTROLLER_MGMT_IP> -u root -p -e "SELECT 1;"

Symptom: Services timeout connecting to message queue
Fix:
# On controller, check RabbitMQ status
rabbitmqctl status
# Verify user exists
rabbitmqctl list_users
# Check permissions
rabbitmqctl list_permissions
# Test connection from compute/network
python3 -c "import pika; pika.BlockingConnection(pika.ConnectionParameters('CONTROLLER_IP'))"

Symptom: openstack network agent list doesn't show the agent
Fix:
# On the affected node, check service status
systemctl status neutron-openvswitch-agent
journalctl -u neutron-openvswitch-agent -f
# Common causes:
# - Wrong tunnel IP in openvswitch_agent.ini
# - RabbitMQ connection issues
# - Wrong Keystone credentials

Symptom: Hypervisor not showing in openstack hypervisor list
Fix:
# On compute node, check nova-compute status
systemctl status nova-compute
journalctl -u nova-compute -f
# On controller, manually discover hosts
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
# Check cells
nova-manage cell_v2 list_cells

Symptom: Can't access instance console from Horizon
Fix:
# Verify novncproxy is running on controller
systemctl status nova-novncproxy
# Check VNC settings in nova.conf on compute
grep -A5 "\[vnc\]" /etc/nova/nova.conf
# Verify novncproxy_base_url points to accessible controller IP
# It must be the PUBLIC IP that your browser can reach

Symptom: Instance gets no IP or stuck at booting
Fix:
# On neutron node, check DHCP agent
systemctl status neutron-dhcp-agent
journalctl -u neutron-dhcp-agent -f
# Check metadata agent
systemctl status neutron-metadata-agent
# Verify OVS bridges
ovs-vsctl show
# Check if namespace exists
ip netns list

Symptom: Various connectivity issues
Fix:
# Check firewall status (Ubuntu uses ufw)
ufw status
# Disable for testing (NOT for production!)
ufw disable
# For production, open required ports:
# Controller: 3306, 5672, 11211, 5000, 8774, 8775, 8778, 9292, 9696, 6080
# Network: 4789 (VXLAN)
# Compute: 4789 (VXLAN)

Symptom: Tunnel failures, no connectivity between nodes
Fix:
# Verify local_ip in OVS agent config matches actual interface IP
ip addr show
# Check OVS agent config
cat /etc/neutron/plugins/ml2/openvswitch_agent.ini
# The local_ip must be reachable from other nodes

| Service | Port | Node |
|---|---|---|
| MariaDB | 3306 | Controller |
| RabbitMQ | 5672 | Controller |
| Memcached | 11211 | Controller |
| Keystone | 5000 | Controller |
| Glance | 9292 | Controller |
| Nova API | 8774 | Controller |
| Nova Metadata | 8775 | Controller |
| Placement | 8778 | Controller |
| Neutron | 9696 | Controller |
| NoVNC Proxy | 6080 | Controller |
| Horizon | 80/443 | Controller |
| VXLAN | 4789/UDP | All |
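If you keep ufw enabled, the port table above can be turned into firewall rules. A sketch that builds and prints the ufw commands for the controller so they can be reviewed before being piped to a root shell (the port list comes from the table; the helper itself is not part of the scripts):

```shell
# Build the ufw rule list for the controller from the port table above
CONTROLLER_TCP_PORTS="3306 5672 11211 5000 9292 8774 8775 8778 9696 6080 80 443"
RULES=""
for port in $CONTROLLER_TCP_PORTS; do
    RULES="${RULES}ufw allow ${port}/tcp\n"
done
RULES="${RULES}ufw allow 4789/udp\n"   # VXLAN, needed on all nodes

# Print for review; apply with:  printf "$RULES" | sudo sh
printf "$RULES"
```

The network and compute nodes need only the VXLAN rule plus whatever you use for SSH access.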
These scripts are provided as-is for educational and lab purposes.
These scripts are derived from installation manuals and are intended for lab/learning environments. For production deployments, consider:
- Using official deployment tools (Kolla-Ansible, OpenStack-Ansible, TripleO)
- Implementing proper security hardening
- Setting up high availability
- Configuring proper SSL/TLS
- Using secrets management solutions