
OpenStack

This is the two-node OpenStack configuration I run in my homelab.

Nodes

The two nodes are connected via a 100Gbit point-to-point link, and share the same networks (via a switch) on the port used by br-ex and on the management port br-mgmt. See etc/NetworkManager to review the network configuration.

Node 1: palma

This node handles control, storage and compute. It runs Rocky Linux 9.

Hardware

  • AMD Ryzen 5700X
  • 64GB ECC RAM
  • 2x1Gbit Ethernet NIC onboard (connected to management and service network via switch)
  • 4x Kioxia CD6 SSDs, connected to the board via an x16-to-4x-x4 adapter
  • Mainboard Gigabyte MC12-LE0
  • Bluefield MBF345A-VENOT_ES in x4 slot (downgraded speed, ~56Gbit measured with iperf3)

Node 2: campos

This node handles compute. It runs Rocky Linux 9.

Hardware

  • Intel Core i5 14500
  • 32GB ECC RAM
  • 2x2.5Gbit Ethernet NIC onboard (connected to management and service network via switch)
  • Mellanox MCX515-CCAT in x16 slot, running at PCIe Gen4 x4 (crossflashed to MXC516-CDAT)
  • Tesla P4 in x4 slot

Network

As previously mentioned, the cluster uses multiple networks. The part of the network not documented here consists of a network switch and a firewall, arenal.

Management 10.0.7.0/24

Both nodes are connected to this network for SSH access. The nodes connect to it through a virtual bridge because the kolla_external_vip is also served on this network.

  • palma: 10.0.7.20 and 10.0.7.10 (OpenStack external VIP)
  • campos: 10.0.7.21
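
For reference, the corresponding settings in globals.yml look roughly like the sketch below. This is a hypothetical excerpt, not a copy of this repo's file; br-mgmt is the management bridge mentioned above.

```yaml
# Hypothetical globals.yml excerpt for the management network
network_interface: "br-mgmt"              # management bridge on 10.0.7.0/24
kolla_external_vip_address: "10.0.7.10"   # OpenStack external VIP
kolla_external_vip_interface: "br-mgmt"   # VIP is served on the same bridge
```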

Service 10.0.20.0/24

This network is connected to the second onboard interface of each node and later serves as the OpenStack provider network. In globals.yml it is referred to as physnet1 and is mapped to Neutron's br-ex.

  • palma: br-ex managed by Neutron
  • campos: br-ex managed by Neutron
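
The provider-network side of this lives in globals.yml; a minimal sketch is shown below, assuming the second onboard NIC is called eno2 (the real interface names differ per node).

```yaml
# Hypothetical globals.yml excerpt for the service/provider network
neutron_external_interface: "eno2"        # second onboard NIC on 10.0.20.0/24
neutron_bridge_name: "br-ex"              # Neutron attaches this bridge to physnet1
enable_neutron_provider_networks: "yes"   # expose physnet1 as a provider network
```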

Cluster Backbone

The cluster backbone is the point-to-point link between the two nodes. The link is split into two networks using VLANs to separate storage traffic from API calls. This is not strictly necessary, but it may come in handy when adding a switch, where traffic shaping could be needed to keep RoCEv2 working correctly. Unfortunately, the Bluefield card does not support splitting its 200Gbit port into two ports with a breakout cable; if that were required, a NIC with two physical ports would be needed. As it stands, adding a third node would require a switch.

Storage network 10.33.0.0/16

This is the VLAN tag 33.

  • palma: 10.33.0.20
  • campos: 10.33.0.21
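
The VLAN itself is defined in NetworkManager. The keyfile below is a minimal sketch for palma, assuming the 100Gbit port is named enp1s0np0; the actual connection profiles live in etc/NetworkManager.

```ini
# Hypothetical keyfile: VLAN 33 (storage) on top of the point-to-point link
[connection]
id=vlan33
type=vlan
interface-name=vlan33

[vlan]
# parent is a placeholder name for the 100Gbit interface
parent=enp1s0np0
id=33

[ipv4]
method=manual
address1=10.33.0.20/16

[ipv6]
method=disabled
```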

Stack network 10.25.0.0/16

This is the VLAN tag 25.

  • palma: 10.25.0.20
  • campos: 10.25.0.21
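
In globals.yml the two VLANs are then assigned to the different traffic classes, roughly like the hedged sketch below; vlan25 and vlan33 stand in for the actual interface names, and placing the tunnel traffic on the stack VLAN is an assumption.

```yaml
# Hypothetical globals.yml excerpt for the backbone VLANs
api_interface: "vlan25"       # OpenStack API traffic on 10.25.0.0/16
tunnel_interface: "vlan25"    # tenant overlay (VXLAN) traffic
storage_interface: "vlan33"   # storage traffic on 10.33.0.0/16
```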

TODOs

  • Fix: campos NIC downgraded to x4 (CPU or mainboard PCIe slot broken)
  • Add: A third node as a second compute host
  • Add: Traffic shaping how-to
  • Add: Replace palma with a more capable device that supports more PCIe lanes
  • Add: Backup server llucmajor, 4 HDDs in RAID5, provisioned via Ironic; boots once a day and copies all important volumes
  • Add: Raspberry Pi sant-jordi, provisioned via Ironic (will be hard because it has no UEFI)
  • Add: 100Gbit network switch
  • Doc: Post-deploy Cluster configuration, connection to the internet etc.
  • Doc: Gateway server sant-joan, VPS
  • Doc: Reverse proxy magaluf that connects to the gateway
  • Add: SSL/TLS with letsencrypt
  • Config: Move kolla_external_vip to 10.0.20.10/24 so it is also available on the service network
