Adaptive mesh networking library with DHT-based discovery, QUIC transport, and comprehensive NAT traversal.
Quicmesh provides a complete mesh networking stack built on QUIC. It combines a Kademlia-style DHT for discovery with ICE/STUN/TURN-based NAT traversal, latency-aware routing, and seamless connection migration. The library is transport-agnostic at the core and ships with a quinn-based network implementation.
- QUIC transport via quinn with Ed25519 TLS certificates
- Connection management with parallel path probing and automatic path selection
- Smart connections that seamlessly switch between direct and relayed paths
- QUIC connection migration for seamless path switching without reconnection (saves 1-2 RTTs)
- Topic-based messaging with mesh-structured networks
- Epidemic broadcast achieving O(log n) hop delivery
- Deduplication via LRU message cache with configurable TTL
- Mesh maintenance with automatic graft/prune for optimal connectivity
- Lazy push gossip via IHave/IWant for reliable delivery
- ICE-lite implementation with candidate gathering and connectivity checks
- STUN-like address discovery via `WhatIsMyAddr` protocol
- TURN-style relay with session management for symmetric NAT/CGNAT
- Hole punching coordination with synchronized connection attempts
- NAT type detection (None, FullCone, RestrictedCone, PortRestrictedCone, Symmetric)
- Kademlia-style routing with 256 buckets and XOR distance metric
- Adaptive k parameter (10-30) that adjusts based on network churn
- Adaptive α parameter (2-5) that adjusts based on lookup success rate
- Parallel lookups querying α nodes concurrently per round
- Endpoint record publishing for address resolution via DHT
- Ed25519 identity with NodeId derived from public key via BLAKE3
- Sybil protection via NodeId-to-TLS certificate binding
- Signed endpoint records for verifiable address announcements
- Storage exhaustion protection with quotas and rate limiting
- Latency-based tiering using k-means clustering for prioritization
- Backpressure controls with O(1) LRU eviction and pressure monitoring
- TTL expiration with 24-hour data lifetime (per Kademlia spec)
- Content-addressed storage using BLAKE3 hashing
- Telemetry snapshots for monitoring and debugging
Add to your `Cargo.toml`:

```toml
[dependencies]
quicmesh = "0.1"
```

```rust
use std::net::SocketAddr;
use anyhow::Result;
use quinn::Endpoint;
use quicmesh::{
create_client_config, create_server_config, generate_ed25519_cert,
Contact, MeshNode, DiscoveryNode, Keypair, QuinnNetwork,
};
async fn launch() -> Result<()> {
// Generate Ed25519 keypair for node identity
let keypair = Keypair::generate();
let node_id = keypair.node_id();
// Generate self-signed Ed25519 certificate for QUIC
let (certs, key) = generate_ed25519_cert(&keypair)?;
let server_config = create_server_config(certs, key)?;
let client_config = create_client_config()?;
// Bind to a random available port
let bind_addr: SocketAddr = "0.0.0.0:0".parse()?;
let endpoint = Endpoint::server(server_config, bind_addr)?;
let local_addr = endpoint.local_addr()?;
// Build contact info
let self_contact = Contact {
id: node_id,
addr: local_addr.to_string(),
};
// Create network layer
let network = QuinnNetwork::new(endpoint.clone(), self_contact.clone(), client_config);
// Create discovery node with k=20, alpha=3
let dht = DiscoveryNode::new(node_id, self_contact, network, 20, 3);
// Start the mesh node
let node = MeshNode::new(dht.clone());
let _node_handle = node.spawn(endpoint);
// Use the DHT
// dht.observe_contact(peer_contact).await;
// let nodes = dht.iterative_find_node(target_id).await?;
// let snapshot = dht.telemetry_snapshot().await;
Ok(())
}
```

Quicmesh is organized as a layered networking stack:

```text
┌─────────────────────────────────────────────────────────────┐
│ Application Layer │
│ DiscoveryNode, publish_address, resolve_peer │
├─────────────────────────────────────────────────────────────┤
│ Discovery Layer │
│ Kademlia DHT, RoutingTable, iterative_find_node │
├─────────────────────────────────────────────────────────────┤
│ Connection Layer │
│ SmartConnection, PathProber, ConnectionManager │
├─────────────────────────────────────────────────────────────┤
│ NAT Traversal Layer │
│ ICE Agent, STUN, TURN Relay, Hole Punching │
├─────────────────────────────────────────────────────────────┤
│ Transport Layer │
│ QUIC (quinn), Ed25519 TLS │
├─────────────────────────────────────────────────────────────┤
│ Identity Layer │
│ Keypair, Identity, EndpointRecord, NodeId │
└─────────────────────────────────────────────────────────────┘
```

| Module | Description |
|---|---|
| `core` | Kademlia DHT logic: routing table, storage, tiering, adaptive parameters |
| `identity` | Ed25519 keypair, `Identity` type, and signed endpoint records |
| `net` | QUIC network layer with path discovery, connection management, and smart connections |
| `node` | Mesh node for accepting incoming QUIC connections and RPC dispatch |
| `protocol` | RPC message definitions for DHT operations and NAT traversal |
| `pubsub` | GossipSub-style publish/subscribe for topic-based message distribution |
| `relay` | ICE/STUN/TURN implementation for comprehensive NAT traversal |

```rust
// Identity
pub use identity::{
Keypair, // Ed25519 keypair for node identity
Identity, // 32-byte Ed25519 public key (peer address)
EndpointRecord, // Signed record of peer's network addresses
RelayEndpoint, // Relay node info for NAT traversal
node_id_from_public_key, // Derive NodeId from public key
verify_node_id, // Verify NodeId matches public key
};
// Core types and helpers
pub use core::{
derive_node_id, // Hash bytes to NodeId using BLAKE3
hash_content, // Hash content to Key using BLAKE3
verify_key_value_pair, // Verify key matches content hash
Contact, // Peer identity + address
DhtNetwork, // Network abstraction trait
DiscoveryNode, // Main DHT node handle
Key, // 32-byte content key
NodeId, // 32-byte node identifier
RoutingTable, // Kademlia routing table
};
// Network layer
pub use net::{
create_client_config, // Create QUIC client config
create_server_config, // Create QUIC server config
generate_ed25519_cert, // Generate Ed25519 TLS cert from keypair
extract_public_key_from_cert, // Extract public key from peer cert
verify_peer_identity, // Verify peer's cert matches NodeId
QuinnNetwork, // quinn-based network implementation
SmartConnection, // Direct or relayed connection
ConnectionManager, // Parallel path probing for connections
ConnectionStats, // Connection statistics
PathProber, PathCandidate, // Path probing state
PathProbe, PathReply, ReachMe, PathMessage, // Path discovery protocol
DHT_ALPN, // ALPN identifier
};
// Relay and NAT traversal
pub use relay::{
RelayServer, RelayClient, RelayInfo, // Relay node implementation
NatType, NatReport, detect_nat_type, // NAT type detection
ConnectionStrategy, // Direct vs relay connection choice
IceCandidate, IceAgent, IceRole, // ICE types
CandidatePair, CandidateType, CheckState, // ICE connectivity checks
TurnAllocation, // TURN-style allocation
DIRECT_CONNECT_TIMEOUT, STUN_TIMEOUT, // Timeouts
};
// Publish/Subscribe
pub use pubsub::{
GossipSub, GossipConfig, // GossipSub router and config
PubSubMessage, // Protocol messages (Graft, Prune, Publish, etc.)
ReceivedMessage, MessageId, // Received message and unique ID
DEFAULT_MESH_DEGREE, // Default mesh size per topic
};
pub use node::MeshNode; // Mesh node for accepting connections
```

Each node has a cryptographic identity based on an Ed25519 keypair:

```rust
use quicmesh::{Keypair, verify_node_id};
// Generate a new keypair
let keypair = Keypair::generate();
// The NodeId is the BLAKE3 hash of the public key
let node_id = keypair.node_id();
let public_key = keypair.public_key_bytes();
// Verify that a NodeId matches a public key
assert!(verify_node_id(&node_id, &public_key));
// Keypairs can be restored from the secret key
let secret = keypair.secret_key_bytes();
let restored = Keypair::from_secret_key_bytes(&secret);
assert_eq!(keypair.node_id(), restored.node_id());
```

When connecting to a peer, you can verify their identity:

```rust
use quicmesh::{extract_public_key_from_cert, verify_peer_identity, NodeId};
// After receiving a peer's certificate
let cert_der: &[u8] = /* from TLS handshake */;
let claimed_node_id: NodeId = /* from Contact */;
// Verify the peer owns their claimed NodeId
if verify_peer_identity(cert_der, &claimed_node_id) {
// Peer verified!
}
```

The DHT uses 256-bit identifiers:

```rust
type NodeId = [u8; 32]; // Node identifier (BLAKE3 hash of Ed25519 public key)
type Key = [u8; 32];    // Content key (BLAKE3 hash of value)
```

Kademlia-style routing (see the sketch after this list) with:
- 256 buckets for 256-bit XOR distance space
- Adaptive k (10-30 contacts per bucket) based on churn rate
- LRU-like eviction preferring long-lived nodes
- Ping-before-evict rule for bucket maintenance
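For illustration, here is what the XOR metric and bucket assignment look like as standalone Rust (a sketch of the standard Kademlia scheme, not the library's internal code):

```rust
/// 32-byte identifier, matching the library's `NodeId = [u8; 32]`.
type NodeId = [u8; 32];

/// XOR distance between two 256-bit identifiers.
fn xor_distance(a: &NodeId, b: &NodeId) -> NodeId {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = a[i] ^ b[i];
    }
    d
}

/// Bucket index = position of the highest set bit of the distance,
/// giving 256 buckets (255 = most distant). `None` means `a == b`.
fn bucket_index(a: &NodeId, b: &NodeId) -> Option<usize> {
    let d = xor_distance(a, b);
    for (byte_idx, byte) in d.iter().enumerate() {
        if *byte != 0 {
            let high_bit = 7 - byte.leading_zeros() as usize;
            return Some((31 - byte_idx) * 8 + high_bit);
        }
    }
    None // a node never inserts itself into its own table
}
```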
Contacts are dynamically assigned to latency tiers (a simplified assignment step is sketched after this list):
- K-means clustering on RTT samples (1-7 tiers, dynamically determined)
- Periodic recomputation every 5 minutes
- Tier-aware lookups prioritizing fast peers
- Spill offloading to slower tiers under pressure
- Stale data cleanup removing nodes not seen in 24 hours
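As a rough illustration of the tiering idea, here is the assignment step of 1-D k-means over RTT samples (a simplified stand-in; the library's clustering internals may differ):

```rust
/// Assign an RTT sample to the nearest tier centroid (tier 0 = fastest,
/// assuming centroids are sorted ascending). This is one assignment step
/// of 1-D k-means over per-contact RTT measurements.
fn assign_tier(rtt_ms: f32, centroids_ms: &[f32]) -> usize {
    let mut best_tier = 0;
    let mut best_dist = f32::INFINITY;
    for (tier, centroid) in centroids_ms.iter().enumerate() {
        let dist = (centroid - rtt_ms).abs();
        if dist < best_dist {
            best_dist = dist;
            best_tier = tier;
        }
    }
    best_tier
}

// Example: with centroids [12.0, 48.0, 190.0] ms, a 55 ms peer lands in
// tier 1 and is deprioritized behind tier-0 peers during lookups.
```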
- LRU cache with O(1) operations (get, put, eviction)
- TTL expiration - entries expire after 24 hours (per Kademlia spec)
- Pressure monitoring based on memory, disk, and request rate
- Automatic eviction when pressure exceeds threshold (0.75)
- Content verification using BLAKE3 hash (sketched below)
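Content addressing makes the verification check conceptually simple; a sketch using the `blake3` crate (already a dependency), which is the kind of check the exported `verify_key_value_pair` performs:

```rust
/// A stored value is valid only if its key equals the BLAKE3 hash of
/// its content: anyone can re-derive and check the key locally.
fn key_matches_content(key: &[u8; 32], value: &[u8]) -> bool {
    blake3::hash(value).as_bytes() == key
}
```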
| Parameter | Range | Adaptation |
|---|---|---|
| k (bucket size) | 10-30 | Increases with churn rate |
| α (parallelism) | 2-5 | Adjusts based on lookup success |
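The exact update rules are internal to `core`, but the shape of the adaptation is roughly the following; the linear formulas and the 0.0-1.0 inputs here are hypothetical, for illustration only:

```rust
/// Higher churn -> larger k for replication headroom, clamped to 10..=30.
/// `churn_rate` is a hypothetical 0.0-1.0 estimate of peer turnover.
fn adapt_k(churn_rate: f32) -> usize {
    (10.0 + churn_rate * 20.0).clamp(10.0, 30.0) as usize
}

/// Lower lookup success -> more parallel queries, clamped to 2..=5.
fn adapt_alpha(lookup_success_rate: f32) -> usize {
    (5.0 - lookup_success_rate * 3.0).clamp(2.0, 5.0) as usize
}
```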
| Constant | Default | Description |
|---|---|---|
| `K_DEFAULT` | 20 | Default bucket size |
| `ALPHA_DEFAULT` | 3 | Default lookup parallelism |
| `MIN_LATENCY_TIERS` | 1 | Minimum number of latency tiers |
| `MAX_LATENCY_TIERS` | 7 | Maximum number of latency tiers |
| `TIERING_RECOMPUTE_INTERVAL` | 5 min | Tier recomputation frequency |
| `TIERING_STALE_THRESHOLD` | 24 h | When to remove stale tiering data |
| `PRESSURE_THRESHOLD` | 0.75 | Eviction trigger threshold |
| `LOCAL_STORE_MAX_ENTRIES` | 100,000 | Maximum LRU cache entries |
| `DEFAULT_TTL` | 24 h | Data expiration time (per Kademlia spec) |
| `EXPIRATION_CHECK_INTERVAL` | 60 s | How often expired entries are cleaned up |
| `MAX_VALUE_SIZE` | 64 KB | Maximum size per stored value |
| `PER_PEER_STORAGE_QUOTA` | 1 MB | Maximum storage per remote peer |
| `PER_PEER_ENTRY_LIMIT` | 100 | Maximum entries per remote peer |
| `PER_PEER_RATE_LIMIT` | 20/min | Maximum store requests per peer per minute |
```rust
pub const DHT_ALPN: &[u8] = b"quicmesh/dht/1";
```

DHT Operations:
| Request | Response | Description |
|---|---|---|
| `FindNode` | `Nodes` | Find k closest nodes to target |
| `FindValue` | `Value` | Get value or closer nodes |
| `Store` | `Ack` | Store key-value pair |
| `Ping` | `Ack` | Check node responsiveness |
NAT Traversal:
| Request | Response | Description |
|---|---|---|
| `WhatIsMyAddr` | `YourAddr` | STUN-like address discovery |
| `RelayConnect` | `RelayAccepted`/`Connected`/`Rejected` | Establish relay session |
| `RelayForward` | `RelayForwarded` | Forward encrypted packet |
| `RelayClose` | `RelayClosed` | Close relay session |
| `HolePunchRegister` | `HolePunchWaiting`/`Ready` | Coordinate hole punch |
| `HolePunchStart` | `HolePunchReady`/`Failed` | Trigger hole punch |
Publish/Subscribe (GossipSub):
| Message Type | Description |
|---|---|
| `Subscribe` | Join a topic mesh |
| `Unsubscribe` | Leave a topic mesh |
| `Graft` | Request to join peer's mesh for a topic |
| `Prune` | Leave peer's mesh, optionally suggest alternatives |
| `Publish` | Broadcast message to topic |
| `IHave` | Gossip: announce available messages |
| `IWant` | Request specific messages by ID |
Quicmesh implements comprehensive NAT traversal using ICE-like techniques with QUIC-native transport:

```text
┌──────────┐ ┌──────────┐
│ Peer A │ │ Peer B │
│ (NAT'd) │ │ (NAT'd) │
└────┬─────┘ └────┬─────┘
│ │
│ 1. WhatIsMyAddr (STUN-like) │
├──────────────────────────────►│
│◄──────────────────────────────┤
│ YourAddr: 203.0.113.5:4433 │
│ │
│ 2. Publish EndpointRecord │
├───────────► DHT ◄─────────────┤
│ │
│ 3. Resolve peer addresses │
├───────────► DHT ◄─────────────┤
│ │
│ 4. Parallel path probing │
│◄─────── PathProbe ───────────►│
│◄─────── PathReply ───────────►│
│ │
│ 5a. Direct connection (if NAT allows)
│◄═════════════════════════════►│
│ OR │
│ 5b. Relay through TURN node │
│◄────► Relay ◄────────────────►│
│ │
│ 6. Upgrade: relay → direct │
│ (when NAT conditions allow) │
└────┴───────────────────────────────┴────┘
```

- Direct first: Always attempt direct UDP connection with timeout (5s)
- NAT detection: Query multiple STUN-like endpoints to classify NAT type
- Relay fallback: When blocked by symmetric NAT/CGNAT, connect via relay
- Parallel probing: Continuously probe all paths to find the best route
- Path selection: Prefer direct over relay unless relay is >50 ms faster (see the sketch after this list)
- QUIC migration: Seamlessly switch paths without reconnecting (saves 1-2 RTTs)
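The path-selection rule can be stated in a few lines; a sketch of the rule as listed above:

```rust
/// "Prefer direct over relay unless relay is >50 ms faster."
fn prefer_direct(direct_rtt_ms: f32, relay_rtt_ms: f32) -> bool {
    direct_rtt_ms - relay_rtt_ms <= 50.0
}

// prefer_direct(80.0, 40.0)  == true:  direct is 40 ms slower, still preferred.
// prefer_direct(120.0, 40.0) == false: relay is >50 ms faster, switch to it.
```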
When relay is needed, Quicmesh dynamically selects the best relay node using a scoring algorithm that combines:
| Factor | Weight | Description |
|---|---|---|
| RTT latency | High | Measured round-trip time to relay (default: 200ms if unknown) |
| Load | Medium | Relay's current session load (0-100ms penalty) |
| Tier level | Low | DHT tiering level (20ms penalty per tier) |
```rust
use quicmesh::{RelayClient, RelayInfo, Identity};
// Relay selection score (lower is better)
// score = rtt_ms + (load * 100) + (tier * 20)
let client = RelayClient::new();
// Update known relays - automatically sorted by score
client.update_relays(discovered_relays).await;
// Update relay metrics from DHT tiering system
client.update_relay_rtt(&relay_identity, rtt_ms, tier_level).await;
// Get best relay for connection
if let Some(best_relay) = client.best_relay().await {
println!("Best relay: {} (score: {:.1})",
best_relay.relay_addrs[0],
best_relay.selection_score());
}
```

Relay nodes publish their capabilities to the DHT:

```rust
pub struct RelayInfo {
pub relay_peer: Identity, // Relay's peer ID
pub relay_addrs: Vec<String>, // Relay addresses
pub load: f32, // Current load (0.0-1.0)
pub accepting: bool, // Accepting new sessions?
pub rtt_ms: Option<f32>, // Observed RTT (from tiering)
pub tier: Option<u8>, // Tiering level (0 = fastest)
pub capabilities: RelayCapabilities,
}
pub struct RelayCapabilities {
pub stun: bool, // Supports STUN binding
pub turn: bool, // Supports TURN relay
pub ice_lite: bool, // Supports ICE-lite
pub max_bandwidth_kbps: u32, // Bandwidth limit (0 = unlimited)
pub region: Option<String>, // Geographic region hint
}
```

| NAT Type | Description | Hole Punch |
|---|---|---|
| `None` | Public IP, no NAT | ✓ Direct |
| `FullCone` | Any external host can reach mapped address | ✓ Works |
| `RestrictedCone` | Only hosts we've sent to can reply | ✓ Works |
| `PortRestrictedCone` | Only (host, port) we've sent to can reply | ✓ Works |
| `Symmetric` | Different mapping per destination (CGNAT) | ✗ Needs relay |
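The core test behind this classification is whether the external mapping stays stable across destinations. A simplified sketch (real detection, as in `detect_nat_type`, also needs filtered inbound probes to tell the cone variants apart):

```rust
use std::net::SocketAddr;

/// Simplified mapping test: query two different STUN-like servers from
/// the same local socket and compare the external addresses they report.
fn is_symmetric_nat(seen_by_server_a: SocketAddr, seen_by_server_b: SocketAddr) -> bool {
    // A symmetric NAT allocates a new external mapping per destination,
    // so the two servers observe different (ip, port) pairs. Cone NATs
    // reuse one mapping, so both observations match.
    seen_by_server_a != seen_by_server_b
}
```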
```rust
use quicmesh::{QuinnNetwork, SmartConnection, PathProber};
// Automatically chooses direct or relay based on NAT
let connection = network.smart_connect(&endpoint_record).await?;
match &connection {
SmartConnection::Direct(conn) => {
println!("Direct connection to {}", conn.remote_address());
}
SmartConnection::Relayed { session_id, direct_addrs, .. } => {
println!("Relayed connection, session: {:?}", hex::encode(session_id));
println!("Will probe {} direct addresses for upgrade", direct_addrs.len());
}
SmartConnection::RelayPending { .. } => {
println!("Waiting for peer to connect to relay");
}
}
// Check connection state
if connection.is_direct() {
println!("Using direct path");
} else if connection.can_attempt_upgrade() {
println!("Can attempt upgrade to direct");
}
```

Probing alternate paths on an existing connection:

```rust
use quicmesh::{PathProber, PathState};
// Create prober for existing connection
let mut prober = PathProber::new(connection, initial_addr, is_relay);
// Add alternate paths to probe
prober.add_direct_candidates(&endpoint_record.addrs);
// Generate probes for all paths needing measurement
let probes = prober.generate_probes();
for (addr, probe) in probes {
// Send probe to addr...
}
// Get path statistics
for stats in prober.path_stats() {
println!("{}: {:?} rtt={:?}ms relay={}",
stats.addr, stats.state, stats.rtt_ms, stats.is_relay);
}Quicmesh includes a GossipSub-style publish/subscribe system for topic-based message distribution. Messages propagate through the network via epidemic broadcast, achieving O(log n) hop delivery with high reliability.
┌─────────────────────────────────────────────────────────────┐
│ Application │
│ subscribe(), publish(), on_message() │
├─────────────────────────────────────────────────────────────┤
│ GossipSub │
│ Topic meshes, message routing, deduplication │
├─────────────────────────────────────────────────────────────┤
│ DiscoveryNode │
│ Peer discovery, DHT, connections │
└─────────────────────────────────────────────────────────────┘
```

| Concept | Description |
|---|---|
| Topic | A named channel for messages (e.g., "chat/lobby", "sensors/temp") |
| Mesh | Per-topic connection to mesh_degree peers (default: 6) for full message push |
| Fanout | Cached peers for topics we publish to but aren't subscribed to |
| Gossip | Periodic IHave/IWant exchanges to repair mesh gaps |
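A sketch of the maintenance decision implied by these bounds (illustrative only; the actual logic runs inside the heartbeat, and the `low`/`high` defaults appear in `GossipConfig` below):

```rust
/// How many peers to graft (positive) or prune (negative) for a topic,
/// given the current mesh size and the configured degree bounds.
fn mesh_delta(current: usize, degree: usize, low: usize, high: usize) -> isize {
    if current < low {
        (degree - current) as isize // too few: graft back up to the target
    } else if current > high {
        -((current - degree) as isize) // too many: prune down to the target
    } else {
        0 // within [low, high]: leave the mesh alone
    }
}

// With the defaults (degree=6, low=4, high=12):
// mesh_delta(3, 6, 4, 12)  ==  3  (graft three peers)
// mesh_delta(14, 6, 4, 12) == -8  (prune eight peers)
```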
Basic usage:

```rust
use quicmesh::pubsub::{GossipSub, GossipConfig};
// Create pubsub layer on top of discovery node
let config = GossipConfig::default();
let mut pubsub = GossipSub::new(dht.clone(), keypair.identity(), config);
// Take the message receiver for incoming messages
let mut receiver = pubsub.take_message_receiver().unwrap();
// Spawn heartbeat task for mesh maintenance
let pubsub_clone = pubsub.clone();
tokio::spawn(async move {
pubsub_clone.run_heartbeat().await;
});
// Subscribe to a topic
pubsub.subscribe("chat/lobby").await?;
// Publish a message
let msg_id = pubsub.publish("chat/lobby", b"Hello, world!".to_vec()).await?;
// Handle incoming messages
while let Some(msg) = receiver.recv().await {
println!("[{}] {}: {:?}", msg.topic, msg.source, msg.data);
}
```

```rust
pub struct GossipConfig {
pub mesh_degree: usize, // Target peers per topic (default: 6)
pub mesh_degree_low: usize, // Min before seeking more (default: 4)
pub mesh_degree_high: usize, // Max before pruning (default: 12)
pub message_cache_size: usize, // Dedup cache size (default: 10,000)
pub message_cache_ttl: Duration, // Cache TTL (default: 2 min)
pub gossip_interval: Duration, // IHave interval (default: 1s)
pub heartbeat_interval: Duration, // Mesh maintenance (default: 1s)
}- Publish: Message sent to all mesh peers (full push)
- Forward: Each peer forwards to their mesh peers (except sender)
- Deduplicate: Message ID cache prevents redundant delivery (sketched after this list)
- Gossip: Periodically exchange IHave to discover missed messages
- Repair: IWant requests fill gaps in message delivery
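A minimal sketch of the deduplication cache, assuming 32-byte message IDs (the library's `MessageId` may be derived differently), sized and aged per `message_cache_size` and `message_cache_ttl`:

```rust
use std::collections::{HashMap, VecDeque};
use std::time::{Duration, Instant};

/// Bounded seen-message cache: size-capped with TTL expiry.
struct SeenCache {
    max: usize,
    ttl: Duration,
    order: VecDeque<[u8; 32]>,        // insertion order for eviction
    seen: HashMap<[u8; 32], Instant>, // id -> first-seen time
}

impl SeenCache {
    fn new(max: usize, ttl: Duration) -> Self {
        Self { max, ttl, order: VecDeque::new(), seen: HashMap::new() }
    }

    /// Returns true if `id` is new (forward the message), false if duplicate.
    fn check_and_insert(&mut self, id: [u8; 32]) -> bool {
        let now = Instant::now();
        // Evict expired entries and enforce the size cap from the front.
        while let Some(front) = self.order.front().copied() {
            let expired = self.seen[&front] + self.ttl < now;
            if expired || self.order.len() >= self.max {
                self.order.pop_front();
                self.seen.remove(&front);
            } else {
                break;
            }
        }
        if self.seen.contains_key(&id) {
            return false; // already delivered: drop, do not re-forward
        }
        self.seen.insert(id, now);
        self.order.push_back(id);
        true
    }
}
```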
```rust
#[derive(Clone, Debug, Default)]
pub struct TelemetrySnapshot {
pub tier_centroids: Vec<f32>, // Latency tier centers (ms)
pub tier_counts: Vec<usize>, // Nodes per tier
pub pressure: f32, // Current pressure (0.0-1.0)
pub stored_keys: usize, // Keys in local storage
pub replication_factor: usize, // Current k
pub concurrency: usize, // Current alpha
}
// Get a snapshot
let snapshot = dht.telemetry_snapshot().await;
```

The included binary demonstrates a complete mesh node:

```bash
# Run with default (info) logging
cargo run
# Run with debug logging
RUST_LOG=debug cargo run
# Run with trace logging (very verbose)
RUST_LOG=trace cargo run
# Filter to specific modules
RUST_LOG=quicmesh=debug cargo run
```

This starts a node with:
- Ed25519 keypair for cryptographic identity
- Self-signed TLS certificate for QUIC
- Periodic telemetry logging (every 5 minutes)
- Structured logging via `tracing`
Example output:

```text
2024-12-02T10:00:00Z INFO quicmesh: DHT node started
2024-12-02T10:00:00Z INFO quicmesh: NodeId node_id="a1b2c3d4..."
2024-12-02T10:00:00Z INFO quicmesh: Listening address addr="0.0.0.0:54321"
2024-12-02T10:05:00Z INFO quicmesh: telemetry snapshot pressure="0.00" stored_keys=0 k=20 alpha=3
```

A simple chatroom demonstrating mesh networking:

```bash
# Start first node
cargo run --example chatroom -- --name alice --room lobby --port 4433
# Connect another node (in a different terminal)
cargo run --example chatroom -- --name bob --room lobby --peer 127.0.0.1:4433
```

The chatroom example shows:
- Peer discovery via DHT
- Direct message routing
- Room-based message filtering
- JSON message serialization over QUIC streams
```rust
use quicmesh::{Keypair, DiscoveryNode, Identity};
// Publish our address to the DHT
let keypair = Keypair::generate();
let addresses = vec!["192.168.1.100:4433".to_string()];
node.publish_address(&keypair, addresses).await?;
// Resolve a peer's addresses from the DHT
let peer_identity: Identity = /* ... */;
if let Some(record) = node.resolve_peer(&peer_identity).await? {
println!("Peer addresses: {:?}", record.addrs);
println!("Relay endpoints: {:?}", record.relays);
}
// Republish when network changes (e.g., WiFi → cellular)
let new_addrs = vec!["10.0.0.50:4433".to_string()];
node.republish_on_network_change(&keypair, new_addrs, vec![]).await?;
```

Quicmesh prevents Sybil attacks by cryptographically binding each peer's NodeId to their TLS certificate:
- NodeId derivation: `NodeId = BLAKE3(Ed25519_public_key)`
- Certificate binding: The Ed25519 public key is embedded in the TLS certificate's Common Name
- Connection verification: On each incoming connection, the server extracts the public key from the peer's certificate and derives the expected NodeId
- Request validation: Every RPC request's `from` Contact must match the verified NodeId from the TLS handshake
This prevents attackers from claiming arbitrary NodeIds—they can only use NodeIds derived from keys they control.
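The derivation itself is easy to mirror with the `blake3` crate; conceptually this is what the exported `node_id_from_public_key` helper computes:

```rust
/// NodeId = BLAKE3(Ed25519 public key).
fn expected_node_id(ed25519_public_key_bytes: &[u8; 32]) -> [u8; 32] {
    *blake3::hash(ed25519_public_key_bytes).as_bytes()
}

// An attacker who wants to squat on a specific NodeId would need to find
// an Ed25519 key whose BLAKE3 hash equals it: a preimage attack.
```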
The DHT implements multiple layers of protection against storage exhaustion attacks:
| Protection | Limit | Description |
|---|---|---|
| Value size | 64 KB | Rejects oversized values |
| Per-peer quota | 1 MB / 100 entries | Limits total storage per peer |
| Rate limiting | 20 stores/minute | Prevents rapid-fire STORE attacks |
| Popularity-based eviction | - | Frequently accessed data survives longer |
| Pressure-based eviction | >75% pressure | Automatic eviction under resource pressure |
```rust
// Storage rejection reasons
pub enum StoreRejection {
ValueTooLarge, // Value exceeds 64 KB
QuotaExceeded, // Peer exceeded 1 MB / 100 entries
RateLimited, // Peer sending >20 stores/minute
}
```

```bash
# Run all tests
cargo test
# Run with output
cargo test -- --nocapture
# Run specific test
cargo test iterative_find_node
```

| Crate | Version | Purpose |
|---|---|---|
| `quinn` | 0.11 | QUIC transport |
| `rustls` | 0.23 | TLS implementation |
| `rcgen` | 0.13 | Self-signed certificate generation |
| `ed25519-dalek` | 2.1 | Ed25519 keypair and signatures |
| `blake3` | 1.5 | BLAKE3 hashing |
| `x509-parser` | 0.16 | Certificate parsing for peer verification |
| `bincode` | 1.3 | Binary serialization for RPC |
| `tokio` | 1.x | Async runtime |
| `lru` | 0.12 | O(1) LRU cache |
| `tracing` | 0.1 | Structured logging |
| `tracing-subscriber` | 0.3 | Log output formatting |
| `async-trait` | 0.1 | Async trait support |
| `futures` | 0.3 | Async utilities |
| `getrandom` | 0.2 | Cryptographically secure random bytes |
Quicmesh's design is informed by the following research:
- Liang, J., Xu, W., Wang, T., Yang, Q., & Zhang, S. (2024). Implementing NAT Hole Punching with QUIC. VTC2024-Fall. arXiv:2408.01791
This paper demonstrates that QUIC-based hole punching effectively reduces hole punching time compared to TCP, with pronounced advantages in weak network environments. It also shows that QUIC connection migration for connection restoration saves 2 RTTs compared to re-punching, which Quicmesh leverages for seamless path switching.
- Freedman, M. J., Freudenthal, E., & Mazières, D. (2004). Democratizing Content Publication with Coral. NSDI '04.
Coral introduced the concept of a "sloppy" DHT (DSHT) with hierarchical clustering based on latency. Quicmesh adopts similar ideas with its latency-based tiering system, which uses k-means clustering to organize peers by RTT and prioritize fast peers for lookups while offloading storage pressure to slower tiers.
Contributions are welcome! Please feel free to submit a Pull Request.