A Rust implementation of the MetalBond route distribution protocol.
MetalBond distributes virtual network routes across hypervisors. Connections between clients and servers use TCP (typically over IPv6 for an IPv6-only fabric), while the overlay supports both IPv4 and IPv6 routes (IPv4/6-in-IPv6 tunneling).
Features:

- Async client/server with Tokio
- Automatic reconnection
- VNI-based subscriptions
- Multi-server HA with ECMP support
- Standard, NAT, and LoadBalancer route types
- Interoperable with Go MetalBond
- Optional netlink integration for kernel route installation
MetalBond uses a simple message-based protocol over TCP:
- Handshake: Client and server exchange `HELLO` messages to negotiate keepalive intervals
- Keepalive: Both sides send periodic `KEEPALIVE` messages to detect connection loss
- Subscribe: Clients subscribe to VNIs to receive routes for specific virtual networks
- Update: Route announcements and withdrawals are distributed via `UPDATE` messages
When a client disconnects, the server automatically withdraws all routes announced by that client.
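As a rough illustration of the message kinds described above, the protocol can be modeled along the following lines. This is a schematic sketch only; the field names and the actual wire encoding are assumptions, not the crate's types.

```rust
use std::net::{IpAddr, Ipv6Addr};

/// Schematic view of the MetalBond message kinds described above.
/// Illustrative only; the real types and wire format may differ.
enum Message {
    /// Exchanged on connect to negotiate the keepalive interval.
    Hello { keepalive_interval_secs: u32 },
    /// Sent periodically by both sides to detect connection loss.
    Keepalive,
    /// Ask the server to stream routes for a virtual network.
    Subscribe { vni: u32 },
    /// Announce or withdraw a route within a VNI.
    Update {
        action: Action,
        vni: u32,
        /// Overlay destination prefix (IPv4 or IPv6) and its length.
        prefix: (IpAddr, u8),
        /// Underlay next hop is an IPv6 address.
        next_hop: Ipv6Addr,
    },
}

enum Action {
    Announce,
    Withdraw,
}
```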
Start a server:

```rust
use rustbond::{MetalBondServer, ServerConfig};

let server = MetalBondServer::start("[::]:4711", ServerConfig::default()).await?;
// ... server.shutdown().await?;
```

Implement a `RouteHandler` and connect a client:

```rust
use rustbond::{MetalBondClient, RouteHandler, Route, Vni};

struct MyHandler;
impl RouteHandler for MyHandler {
    fn add_route(&self, vni: Vni, route: Route) { /* handle add */ }
    fn remove_route(&self, vni: Vni, route: Route) { /* handle remove */ }
}
let client = MetalBondClient::connect(&["[::1]:4711"], MyHandler);
client.wait_established().await?;
client.subscribe(Vni(100)).await?;
```

For high availability, connect to multiple servers simultaneously:

```rust
let client = MetalBondClient::connect(&["[::1]:4711", "[::1]:4712"], MyHandler);
client.wait_any_established().await?;
// Routes are deduplicated across servers; ECMP supported
```

Run the bundled examples:

```bash
# Run server (default: [::1]:4711)
cargo run --example server
# Run server on custom address
cargo run --example server -- -l [::]:4711
# Connect client to VNI 100
cargo run --example client -- -s [::1]:4711 -v 100
# Announce a route (VNI#prefix@nexthop)
cargo run --example client -- -s [::1]:4711 -v 100 -a "100#10.0.1.0/24@2001:db8::1"
# Multi-server HA (just add more -s flags)
cargo run --example client -- -s [::1]:4711 -s [::1]:4712 -v 100
```

Announcement format: `VNI#prefix@nexthop[#type[#fromPort#toPort]]`

```bash
# Standard route (default) on VNI 100
"100#10.0.1.0/24@2001:db8::1"
# NAT route with port range on VNI 200
"200#10.0.2.0/24@2001:db8::2#nat#1024#2048"
# Load balancer target on VNI 100
"100#10.0.3.0/24@2001:db8::3#lb"Enable the netlink feature to install routes directly into the Linux kernel routing tables:
Enable the `netlink` feature to install routes directly into the Linux kernel routing tables:

```bash
# Install routes from VNI 100 to kernel routing table 100
cargo run --example client --features netlink -- \
  -s [::1]:4711 -v 100 \
  --install-routes 100#100 \
  --tun ip6tnl0
```

Options:
- `--install-routes VNI#TABLE` - Map a VNI to a kernel routing table (can be repeated)
- `--tun DEVICE` - Tunnel device name for encapsulated traffic (default: `ip6tnl0`)
How it works:
- Routes received for a VNI are installed into the corresponding kernel routing table
- Each route points to the tunnel device with the next-hop as the gateway
- Routes are marked with protocol 254 (`RTPROT_METALBOND`) for identification
- On startup, stale routes from previous runs (same protocol marker) are automatically cleaned up
- When a route is withdrawn by the server, it's removed from the kernel table
Note: Requires root/CAP_NET_ADMIN to modify kernel routing tables.
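Assuming the `--install-routes 100#100` mapping from the example above, the resulting kernel state can be inspected (or manually cleared) with standard iproute2 commands; these are shown for illustration only:

```bash
# Show routes installed in table 100 carrying the protocol marker 254
ip -4 route show table 100 proto 254
ip -6 route show table 100 proto 254

# Manually remove everything with that marker from table 100 if needed
ip route flush table 100 proto 254
```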
The library uses a single `Error` type for all operations. Common errors include:
- `Error::NotEstablished` - Operation attempted before connection is ready
- `Error::Timeout` / `Error::ConnectionTimeout` - Connection or operation timed out
- `Error::Closed` - Connection was closed
- `Error::RouteAlreadyAnnounced` - Attempted to announce a duplicate route
- `Error::RouteNotFound` - Attempted to withdraw a non-existent route
- `Error::Io(...)` - Underlying I/O error
- `Error::Protocol(...)` - Protocol violation or malformed message
For multi-server setups, operations like subscribe() and announce() succeed if at least one server accepts them, logging warnings for servers that fail.
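As a sketch of how calling code might react to these errors (assuming `client` is a connected `MetalBondClient` as in the examples above, and that the variants are shaped as listed):

```rust
use rustbond::{Error, Vni};

match client.subscribe(Vni(100)).await {
    Ok(_) => { /* now receiving routes for VNI 100 */ }
    // No server connection is ready yet; wait for wait_established()
    // (or wait_any_established() in multi-server setups) and retry.
    Err(Error::NotEstablished) => { /* back off and retry */ }
    Err(e) => eprintln!("subscribe to VNI 100 failed: {e}"),
}
```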
Run the tests with:

```bash
cargo test                     # All tests
cargo test --lib               # Unit tests only
cargo test --test integration  # Integration tests only
```