osvbng (Open Source Virtual Broadband Network Gateway) is a high-performance, scalable, open source BNG for ISPs, built to scale to multi-hundred-gigabit throughput on standard x86 COTS hardware.
- 400+Gbps throughput with Intel DPDK (Up to 100+Gbps without DPDK)
- 20,000+ Subscriber Sessions
- Plugin-based architecture
- IPoE/DHCPv4
- Modern monitoring stack
- Core implementation is fully open source
- Docker and KVM support
For maximum performance, run osvbng with DPDK and PCI passthrough. This requires a dedicated server with the following (quick checks are shown after the list):
- KVM/libvirt installed
- IOMMU enabled (Intel VT-d or AMD-Vi)
- At least 2 physical NICs for PCI passthrough (access and core)
- 4GB+ RAM and 4+ CPU cores recommended
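A few quick ways to verify these prerequisites on the host (a sketch; adjust for your hardware and distribution):
# libvirt's built-in host checker covers KVM and IOMMU readiness
sudo virt-host-validate
# Non-empty output here means the IOMMU is active and grouping devices
ls /sys/kernel/iommu_groups/
# Confirm the kernel was booted with intel_iommu=on (or amd_iommu=on for AMD)
cat /proc/cmdline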
Quick Install:
curl -sL https://v6n.io/osvbng | sudo bash
Install Dependencies (Debian/Ubuntu):
sudo apt install -y libvirt-daemon-system qemu-kvm virtinst curl whiptail gzip bridge-utils
Create Management Bridge:
The VM requires a management bridge for out-of-band access (SSH, monitoring, etc.). This bridge connects the VM's virtio management interface to your network.
# Create bridge
sudo brctl addbr br-mgmt
# Attach your management NIC (replace eno1 with your interface)
sudo brctl addif br-mgmt eno1
# Bring up the bridge
sudo ip link set br-mgmt up
# Optional: Configure IP on the bridge instead of the physical interface
sudo ip addr add 192.168.1.10/24 dev br-mgmt
To make this persistent, add to /etc/network/interfaces:
auto br-mgmt
iface br-mgmt inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0
Or with Netplan (/etc/netplan/01-bridge.yaml):
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
  bridges:
    br-mgmt:
      interfaces: [eno1]
      dhcp4: true
      # Or static:
      # addresses: [192.168.1.10/24]
      # routes:
      #   - to: default
      #     via: 192.168.1.1
Manual Install:
# Download the deployment script
curl -sLO https://raw.githubusercontent.com/veesix-networks/osvbng/main/scripts/qemu/deploy-vm.sh
chmod +x deploy-vm.sh
# Run the interactive installer
sudo ./deploy-vm.sh
The installer will:
- Check prerequisites (IOMMU, vfio-pci, etc.)
- Let you select NICs for PCI passthrough (access and core)
- Download and deploy the osvbng VM image
- Configure the VM with your selected interfaces
Manual Image Download (Optional):
If you prefer to obtain the QEMU image separately, download the qcow2 images manually from the Releases page or run the following:
# Download latest release
curl -fLO https://github.com/veesix-networks/osvbng/releases/latest/download/osvbng-debian12.qcow2.gz
# Or download a specific version
curl -fLO https://github.com/veesix-networks/osvbng/releases/download/v1.0.0/osvbng-debian12.qcow2.gz
# Extract the image
gunzip osvbng-debian12.qcow2.gz
# Move to libvirt images directory
sudo mv osvbng-debian12.qcow2 /var/lib/libvirt/images/osvbng.qcow2
Start and Connect:
# Start the VM
sudo virsh start osvbng
# Connect to console
sudo virsh console osvbng
# Default login: root / osvbng
Access the CLI:
# Inside the VM
osvbngcli
osvbng> show subscriber sessions
The VM auto-generates a default configuration on first boot. To customize, edit /etc/osvbng/osvbng.yaml and restart the osvbng service.
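For example (a minimal sketch; this assumes the service inside the VM is named osvbng):
# Inside the VM: edit the configuration, then restart the service to apply it
vi /etc/osvbng/osvbng.yaml
systemctl restart osvbng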
- Ubuntu 22.04, 24.04
- Debian 12 (Bookworm)
Prerequisites:
- Docker installed
- Minimum of 2 physical network interfaces (access and core) if deploying in a non-test scenario
Step 1: Start the container
docker run -d --name osvbng \
--privileged \
--network none \
-e OSVBNG_WAIT_FOR_INTERFACES=true \
-e OSVBNG_ACCESS_INTERFACE=eth0 \
-e OSVBNG_CORE_INTERFACE=eth1 \
veesixnetworks/osvbng:latest
Step 2: Attach network interfaces
For production with physical NICs (replace enp0s1 and enp1s1 with your interface names):
curl -sLO https://raw.githubusercontent.com/veesix-networks/osvbng/main/docker/setup-interfaces.sh
chmod +x setup-interfaces.sh
./setup-interfaces.sh osvbng eth0:enp0s1 eth1:enp1s1
For testing without physical hardware:
./setup-interfaces.sh osvbng eth0 eth1
!!! tip
    Container network interfaces must be recreated after each restart. The `setup-interfaces.sh` script creates veth pairs (virtual ethernet pairs) that connect your host's physical interfaces to the container. A veth pair acts like a virtual cable with two ends - one end stays on the host and is bridged to your physical NIC, while the other end is moved into the container's network namespace.
    Network namespaces are tied to process IDs, which are allocated by the kernel on container start and cannot be predicted. When a container restarts, it gets a new PID and new namespace, breaking the connection to the old veth pair. This is why the veth pairs must be recreated after every restart.
    For "production" deployments, use the systemd service (or equivalent) to automatically handle interface setup on container restart.
Step 3: Verify it's running
docker logs -f osvbng
Step 4: Access the CLI
docker exec -it osvbng osvbngcli
For "production" deployments, you need to ensure the container and its network interfaces are automatically set up on system boot. Below is an example using systemd (adjust for your init system if using something else):
# Download the service file and setup script
curl -sLO https://raw.githubusercontent.com/veesix-networks/osvbng/main/docker/osvbng.service
curl -sLO https://raw.githubusercontent.com/veesix-networks/osvbng/main/docker/setup-interfaces.sh
# Create working directory
sudo mkdir -p /opt/osvbng
sudo mv setup-interfaces.sh /opt/osvbng/
sudo chmod +x /opt/osvbng/setup-interfaces.sh
# Edit the service file and update the interface names on line 25
# Change eth0:ens19 eth1:ens20 to match your host interface names
sudo nano osvbng.service
# Install and enable the service
sudo mv osvbng.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable osvbng.service
sudo systemctl start osvbng.service
Check service status:
sudo systemctl status osvbng.service
Generate and customize the config file:
docker run --rm veesixnetworks/osvbng:latest config > osvbng.yaml
sudo mv osvbng.yaml /etc/osvbng/
Mount it into the container:
docker run -d --name osvbng \
--privileged \
--network none \
-v /opt/osvbng/osvbng.yaml:/etc/osvbng/osvbng.yaml:ro \
-e OSVBNG_WAIT_FOR_INTERFACES=true \
-e OSVBNG_ACCESS_INTERFACE=eth0 \
-e OSVBNG_CORE_INTERFACE=eth1 \
veesixnetworks/osvbng:latest
Or update the systemd service file to include the volume mount.
What can you expect from the open source version of this project? Below are some key points we want to always achieve in every major release:
- Minimum of 100Gbps out-of-the-box support
- IPoE access technology with DHCPv4 support
- Authenticate customers via DHCPv4 Option 82 (Sub-options 1 and 2, Circuit ID and/or Remote ID)
- BGP, IS-IS and OSPF support
- Default VRF implementation only
- No QoS/HQoS support in the initial v1.0.0 release
- Modern monitoring solution with Prometheus
