Data Center Fabric
This project is a sister of my other project,
Service Provider Fabric. At some point in the future they will be merged into a single one, but as of today they are separate. The primary focus of this project is to make sure a freshly shipped network function is automatically provisioned (zero-touch) up to the desired state, including infrastructure and customer services, and is integrated into the data center operation.
Currently used network operating systems:
- Arista EOS 126.96.36.199F
- Cisco IOS XR 6.5.1
- Nokia SR OS 16.0.R7
- Cumulus Linux 3.7.6
Zero Touch Provisioning (ZTP)
A full stack of infrastructure enablers (DHCP, DNS, FTP and HTTP) is deployed as Docker containers. Once the stack is launched after NetBox is up and running, the following happens:
- The container with DHCP is launched and `dhcpd.conf` is automatically populated with entries from NetBox for the OOB management subnet. If IP/MAC pairs are present in the device configuration, static entries are created as well. For Arista, corresponding entries with a link to the ZTP script are added.
- The container with DNS is launched and `named.conf` is automatically populated with zone names from NetBox for the OOB management subnet. The files for forward and reverse (both IPv4 and IPv6) zones are automatically created and filled in with the IPv4/IPv6 addresses of the OOB interfaces matched to the hostnames.
- The container with the FTP server is launched and automatically populated with the content that must be shared (currently, only a test file).
- The container with the HTTP server is launched and the ZTP scripts are automatically generated based on information from NetBox.
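As an illustration, the entries generated into `dhcpd.conf` might look like the sketch below; the subnet, hostname, MAC address and ZTP URL are placeholders, not values taken from the repository:

```
# OOB management subnet taken from NetBox (values are placeholders)
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.199;
  option domain-name-servers 192.168.1.1;
}

# Static entry created from an IP/MAC pair documented in NetBox
host leaf1 {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.1.21;
  # For Arista EOS, option 67 points the device to the generated ZTP script
  option bootfile-name "http://192.168.1.1/ztp/leaf1.script";
}
```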
In the `containers` folder you can find the source files of the infrastructure Docker containers, built on top of Alpine Linux to reduce disk space usage.
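For reference, a minimal Alpine-based Dockerfile for one of the enablers (here the DNS server, in the spirit of `containers\dns`) could look like the sketch below; the package name, paths and Alpine tag are assumptions, so check the actual Dockerfiles in the repository:

```dockerfile
# Assumed Alpine tag; the repository may pin a different one
FROM alpine:3.9

# BIND9 is packaged as "bind" in Alpine
RUN apk add --no-cache bind

# named.conf and the zone files are generated by Ansible beforehand
COPY named.conf /etc/bind/named.conf
COPY zones/ /etc/bind/zones/

EXPOSE 53/udp 53/tcp

# Run named in the foreground so Docker can supervise it
CMD ["named", "-c", "/etc/bind/named.conf", "-g"]
```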
The current version of this repository is
It is assumed you have NetBox (https://github.com/digitalocean/netbox) installed to document your infrastructure, as it is used as the source of truth and for modelling the data center infrastructure and services.
- Initial release
- Integration of Ansible with NetBox over the REST API to retrieve the information needed to create the configuration for Cumulus Linux.
- Automatic provisioning of Cumulus VX using the information extracted in the previous point.
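The REST retrieval step can be sketched in Python; the payload shape (`results`, `name`, `primary_ip.address`) follows the NetBox v2 API, and the device names and addresses below are illustrative only:

```python
import json

# Minimal sketch: turn a NetBox-style /api/dcim/devices/ response into
# hostname -> management IP pairs, similar to what the Ansible role
# retrieves over the REST API. Field names assume the NetBox v2 API.

def devices_to_mgmt_map(payload: dict) -> dict:
    """Map each device name to its primary IP (without prefix length)."""
    mgmt = {}
    for device in payload.get("results", []):
        primary = device.get("primary_ip")
        if not primary:
            continue  # skip devices without a documented primary IP
        ip = primary["address"].split("/")[0]  # strip the /24, /64, etc.
        mgmt[device["name"]] = ip
    return mgmt

if __name__ == "__main__":
    sample = json.loads("""
    {"count": 2, "results": [
        {"name": "leaf1", "primary_ip": {"address": "192.168.1.21/24"}},
        {"name": "leaf2", "primary_ip": null}
    ]}
    """)
    print(devices_to_mgmt_map(sample))  # {'leaf1': '192.168.1.21'}
```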
- Topology is added within
- Automatic provisioning of Arista EOS for underlay IP Fabric.
- Automatic provisioning of Cisco IOS XR for underlay IP Fabric.
- Added the folder `containers` with Dockerfiles for the infrastructure enablers.
  - `containers\dns` contains the Dockerfile for the DNS server based on BIND9 and the Alpine Linux base image.
  - `containers\ftp` contains the Dockerfile for the FTP server based on VSFTP and the Alpine Linux base image.
  - `containers\http` contains the Dockerfile for the HTTP server based on NGINX and the Alpine Linux base image.
  - `containers\dhcp` contains the Dockerfile for the DHCP server based on ISC DHCP and the Alpine Linux base image.
- The role `cloud_docker` is copied from the Service Provider Fabric to set up Docker on CentOS.
- A new role `cloud_enabler` is created to bring the DHCP, DNS, FTP and HTTP services to life automatically. More details in
- The DHCP config file `dhcpd.conf` is automatically populated with data from NetBox over the REST API using Ansible. The information is populated for the data centre
- The DNS config file `named.conf` is automatically populated with data from NetBox over the REST API using Ansible. The information is populated for the data centre
- The filenames for the forward and reverse zones (both IPv4 and IPv6) are automatically generated based on the domain given in the Ansible variables `group_vars/linux` and the OOB subnet extracted from NetBox.
- The DNS forward zone is automatically filled with info from NetBox over REST API using Ansible.
- The DNS reverse zone for IPv4 is automatically filled with info from NetBox over REST API using Ansible.
- The DNS reverse zone for IPv6 is automatically filled with info from NetBox over REST API using Ansible.
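The zone-name generation described above can be sketched with the Python standard library; the subnets below are illustrative, while the real role derives them from NetBox and `group_vars/linux`:

```python
import ipaddress

# Sketch: derive reverse zone names from an OOB subnet, the way the
# role might when creating the BIND zone files. Subnets are placeholders.

def reverse_zone_v4(subnet: str) -> str:
    """Reverse zone name for an IPv4 subnet on an octet boundary."""
    net = ipaddress.ip_network(subnet)
    octets = str(net.network_address).split(".")
    kept = net.prefixlen // 8          # full octets covered by the prefix
    return ".".join(reversed(octets[:kept])) + ".in-addr.arpa"

def reverse_zone_v6(subnet: str) -> str:
    """Reverse zone name for an IPv6 subnet on a nibble boundary."""
    net = ipaddress.ip_network(subnet)
    nibbles = net.network_address.exploded.replace(":", "")
    kept = net.prefixlen // 4          # full nibbles covered by the prefix
    return ".".join(reversed(nibbles[:kept])) + ".ip6.arpa"

print(reverse_zone_v4("192.168.1.0/24"))   # 1.168.192.in-addr.arpa
print(reverse_zone_v6("fc00:de:1::/64"))
```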
- ZTP for Arista is added.
- Some text updates.
- Updated management topology with the Docker cloud on the management host.
- Some minor updates to templates and tasks.
- Fixed a problem with the Docker installation in the role `cloud_docker` caused by improper `pip` behavior and a missing folder.
- Information about Telegraf, InfluxDB and Grafana is added in the `containers` folder.
- There are two containers with Telegraf (one of them `telegraf_syslog`), as they collect different information.
- The role `cloud_enabler` is extended with monitoring capabilities based on Telegraf, InfluxDB and Grafana, using the approach from the Service Provider Fabric.
- Information from the data centre switches is collected using SNMP and Syslog; from the Docker containers, using Syslog.
- Configuration of SNMP on Cumulus is added to the `dc_underlay` role. Arista/Cisco are to be added.
- SNMPv3 data with authentication and encryption is collected over IPv6.
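Assuming the stock Telegraf SNMP input plugin, collecting SNMPv3 with authentication and encryption over IPv6 might be configured roughly as below; the address, user name and secrets are placeholders:

```toml
[[inputs.snmp]]
  # IPv6 OOB address of a switch (placeholder)
  agents = ["udp6://[fc00:de:1::21]:161"]
  version = 3
  sec_name = "monitoring"
  sec_level = "authPriv"
  auth_protocol = "SHA"
  auth_password = "auth_secret"
  priv_protocol = "AES"
  priv_password = "priv_secret"

  [[inputs.snmp.field]]
    name = "hostname"
    oid = "RFC1213-MIB::sysName.0"
    is_tag = true
```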
- Dashboards are added to
- The role `dc_underlay` is updated so that the rsyslog/snmpd services on Cumulus Linux are restarted.
- Syslog configuration is pushed to the Cumulus devices during provisioning of the
- The management topology is updated with four additional containers.
- All the containers are moved to a user-defined Docker bridge running both IPv4 and IPv6.
- As a quick and dirty lab setup with 6x Cumulus VX, there are two shell scripts in the `cumulus_kvm` folder. Refer there for more details.
- A problem with NTP in a VRF on the Cumulus switches is fixed.
- The Docker container with Kapacitor to process the data in real time is added, but its configuration is still ongoing.
- The Docker container with Kapacitor is automatically deployed together with other
- It connects to `dcf_influxdb` over HTTPS and subscribes to the `dcf_syslog` database to get all the syslog messages.
- There is currently one TICK script, which looks for DHCPACK in the messages from `dcf_influxdb` and triggers an alert action (a shell script) upon getting it.
- Kapacitor triggers automatic full provisioning of the `dc_underlay` role for the host with the specific management IP address.
- Kapacitor uses the existing role `dc_underlay` with an additional ad-hoc variable.
- Provisioning is done only for devices documented in NetBox with the status `Planned`. A device with any other status isn't provisioned. This applies both to manual execution and to automatic execution by Kapacitor.
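The DHCPACK watcher described in this release could be sketched as a TICKscript along these lines; the database name comes from the text above, while the measurement name and script path are placeholders:

```
// Subscribe to syslog data stored in InfluxDB and react to DHCPACK
stream
    |from()
        .database('dcf_syslog')
        .measurement('syslog')
    |where(lambda: "message" =~ /DHCPACK/)
    |alert()
        .crit(lambda: TRUE)
        // Hand the event to a shell script that runs the Ansible role
        .exec('/opt/kapacitor/provision.sh')
```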
- Some minor modification of the text in
- Provisioning of the 3 described Grafana dashboards is done automatically during the deployment of the Grafana container.
Do you want to automate your network like a pro?
Join the network automation course: http://training.karneliuk.com
(c) 2016-2019 karneliuk.com