This is the OpenStack two-node configuration I run in my homelab.
The two nodes are connected via a 100Gbit point-to-point link, and they share the same network (via a switch) for the port used by br-ex and for the management port br-mgmt.
See etc/NetworkManager to review the network configuration.
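As a rough sketch of what such a configuration can look like, the 100Gbit point-to-point parent interface could be defined with a NetworkManager keyfile like the one below. This is an illustration, not a copy of the actual files: the interface name `enp1s0np0` is an assumption, and the parent stays unnumbered because the addressed VLAN sub-interfaces described later ride on top of it.

```ini
# Hypothetical keyfile for the 100Gbit point-to-point link
# (interface name enp1s0np0 is an assumption; see etc/NetworkManager
# for the real configuration). The parent carries no IP itself.
[connection]
id=backbone
type=ethernet
interface-name=enp1s0np0
autoconnect=true

[ipv4]
method=disabled

[ipv6]
method=disabled
```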
This node (palma) handles control, storage, and compute.
This node runs Rocky Linux 9.
- AMD Ryzen 5700X
- 64GB ECC RAM
- 2x 1Gbit onboard Ethernet NICs (connected to the management and service network via a switch)
- 4x Kioxia CD6 SSDs, connected to the board via an x16-to-4x-x4 adapter
- Mainboard: Gigabyte MC12-LE0
- BlueField MBF345A-VENOT_ES in an x4 slot (downgraded speed, ~56Gbit measured with iperf3)
This node (campos) handles compute.
This node runs Rocky Linux 9.
- Intel Core i5-14500
- 32GB ECC RAM
- 2x 2.5Gbit onboard Ethernet NICs (connected to the management and service network via a switch)
- Mellanox MCX515-CCAT in an x16 slot running at PCIe Gen4 x4 (cross-flashed to MCX516-CDAT)
- Tesla P4 in an x4 slot
As previously mentioned, multiple networks are used by the cluster. The part of the network not documented here consists of a network switch and a firewall, arenal.
Both nodes are connected to this network so they can be reached via SSH. The nodes use a virtual bridge to connect to it, because the kolla_external_vip is also served on that network.

- palma: 10.0.7.20 and 10.0.7.10 (OpenStack external VIP)
- campos: 10.0.7.21
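In Kolla Ansible terms, placing the external VIP on this network corresponds to a globals.yml fragment roughly like the following. This is a sketch using the addresses above; the variable names are standard Kolla Ansible options, but check your actual globals.yml.

```yaml
# Sketch of the relevant globals.yml entries (values from the
# addressing above; adapt to the real deployment).
kolla_external_vip_address: "10.0.7.10"
network_interface: "br-mgmt"
```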
This network is connected to the second onboard interface of each node and later serves as the OpenStack provider network. In globals.yml it is referred to as physnet1 and gets mapped to Neutron's br-ex.

- palma: br-ex, managed by Neutron
- campos: br-ex, managed by Neutron
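The physnet1-to-br-ex mapping is Kolla Ansible's default behaviour: globals.yml only needs to name the interface that gets plugged into br-ex, and Kolla generates the Neutron OVS bridge mapping (`bridge_mappings = physnet1:br-ex`) from it. A sketch, where the interface name `eno2` is an assumption for the second onboard NIC:

```yaml
# globals.yml (Kolla Ansible): the physical interface attached to br-ex.
# "eno2" is a placeholder for the node's second onboard interface.
neutron_external_interface: "eno2"
```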
The cluster backbone network is the point-to-point link between the two nodes. The link is split into two networks using VLANs to separate storage traffic from API calls. This is not strictly necessary, but it could become handy when adding a switch, where it may be necessary to shape traffic to ensure RoCEv2 works correctly. Unfortunately, the BlueField card does not support splitting the 200Gbit port into two ports using a breakout cable; if that is required, a NIC with two physical ports is needed. At the moment, adding a third node would require a switch.
This network uses VLAN tag 33.

- palma: 10.33.0.20
- campos: 10.33.0.21
This network uses VLAN tag 25.

- palma: 10.25.0.20
- campos: 10.25.0.21
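Each VLAN sub-interface on the point-to-point link can be expressed as its own NetworkManager keyfile. The sketch below shows the VLAN 33 interface on palma, using the address listed above; the parent interface name `enp1s0np0` is an assumption. The VLAN 25 interface is analogous, with `id=25` and `10.25.0.20/24`.

```ini
# Hypothetical keyfile: VLAN 33 sub-interface on the backbone link
# (palma side). Parent name enp1s0np0 is a placeholder.
[connection]
id=backbone.33
type=vlan
interface-name=backbone.33

[vlan]
parent=enp1s0np0
id=33

[ipv4]
method=manual
address1=10.33.0.20/24

[ipv6]
method=disabled
```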
- Fix: `campos` NIC downgraded to x4 (CPU or mainboard PCIe slot broken)
- Add: a third node, 2nd compute
- Add: traffic shaping how-to
- Add: replace `palma` with a more capable device that supports more PCIe lanes
- Add: backup server `llucmajor`, 4 HDDs in RAID5, provisioned via Ironic; boots once a day and copies all important volumes
- Add: Raspberry Pi `sant-jordi`, provisioned via Ironic (will be hard because there is no UEFI)
- Add: 100Gbit network switch
- Doc: post-deploy cluster configuration, connection to the internet, etc.
- Doc: gateway server `sant-joan`, a VPS
- Doc: reverse proxy `magaluf` that connects to the gateway
- Add: SSL/TLS with Let's Encrypt
- Config: `kolla_external_vip` to 10.0.20.10/24, to also have it in the service network