5G NR Radar Sensing + Computer Vision | Powered by NVIDIA AI Aerial SDK
Production-grade ISAC (Integrated Sensing and Communications) system that fuses 5G NR FR1 radar sensing with YOLOv8 computer vision, replicating the architecture demonstrated by NVIDIA and Booz Allen Hamilton at GTC Washington D.C. 2025.
Live Dashboard: Camera Feed (left), Range-Doppler Heatmap (center), Fusion Panel (right)
- Overview
- Key Demo: ISAC Value Proposition
- Architecture
- Screenshots & Recordings
- Tech Stack
- Project Structure
- Quick Start
- NVIDIA AI Aerial SDK Setup
- Configuration
- API Reference
- Testing
- Docker Deployment
- 3GPP Validation
- Contributing
- License
- Acknowledgments
SenseForge is a full-stack multimodal sensor fusion system that demonstrates how 5G NR OFDM waveforms can serve a dual purpose (communications and radar sensing) within a single integrated pipeline. This is the core concept behind ISAC (Integrated Sensing and Communications), a key feature of 5G-Advanced and 6G networks.
Traditional surveillance relies entirely on cameras, which fail in:
- Fog: visibility drops to near zero
- Night: insufficient lighting for visual detection
- Occlusion: physical obstructions block line of sight
- Rain: lens distortion and reduced contrast
5G NR radar operates at 3.5 GHz (n78 band) and is unaffected by these conditions. SenseForge fuses both modalities: when cameras degrade, the RF pipeline maintains full situational awareness.
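The echo simulator's Friis path-loss term (listed in the table below) can be sanity-checked directly; a minimal sketch, where the 1 km example range is arbitrary:

```python
import math

def friis_path_loss_db(distance_m, freq_hz):
    """One-way free-space path loss (Friis), in dB."""
    c = 3e8                              # speed of light, m/s
    wavelength = c / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength)

# n78 carrier at 3.5 GHz, target at 1 km
print(f"{friis_path_loss_db(1_000, 3.5e9):.1f} dB")   # 103.3 dB
```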
| Capability | Implementation |
|---|---|
| OFDM Waveform Generation | 5G NR FR1 n78, μ=1, 30 kHz SCS, 272 subcarriers |
| Radar Echo Simulation | Sionna CDL channel + Friis path loss + Rician fading |
| Range-Doppler Processing | LS channel estimation → ECA clutter removal → 2D CA-CFAR |
| Computer Vision | YOLOv8 person detection + ByteTrack-style tracking |
| Depth Estimation | MiDaS monocular depth (with heuristic fallback) |
| Weather Degradation | Fog, Night, Occlusion, Rain simulation at variable intensity |
| AI Fusion | 14-input MLP with source-aware weighted averaging |
| Real-time Dashboard | 3-panel military-grade UI with WebSocket streaming |
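The fusion model's 14→64→32→4 layout amounts to a plain MLP forward pass. A minimal sketch in NumPy with random weights for illustration (the project implements this in PyTorch, and the ReLU activation is an assumption):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def fusion_mlp_forward(features, weights):
    """Forward pass through a 14 -> 64 -> 32 -> 4 MLP."""
    h = features
    for W, b in weights[:-1]:
        h = relu(h @ W + b)        # hidden layers
    W, b = weights[-1]
    return h @ W + b               # raw logits for the 4 outputs

# Random weights for illustration only (the trained weights live in
# models/fusion_model.pt)
rng = np.random.default_rng(0)
sizes = [14, 64, 32, 4]
weights = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
           for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(14)        # one 14-dim fused feature vector
logits = fusion_mlp_forward(x, weights)
print(logits.shape)                # (4,)
```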
The critical demonstration shows SenseForge's resilience when camera feeds degrade. The RF radar continues to detect and track targets, proving the value of integrated sensing.
```
     CLEAR MODE                       FOG/NIGHT MODE
┌──────────────────┐            ┌──────────────────┐
│ Camera: 100%     │            │ Camera: 37%      │  ← Degraded
│ RF:     100%     │            │ RF:     100%     │  ← Unaffected!
│                  │            │                  │
│ Vision: █████    │            │ Vision: █        │  ← Drops
│ RF:     █████    │            │ RF:     █████    │  ← Stable
│ Fused:  █████    │            │ Fused:  █████    │  ← RF compensates
│                  │            │                  │
│ Source: FUSED    │            │ Source: RF_ONLY  │  ← Label changes
└──────────────────┘            └──────────────────┘
```
The ISAC argument: When you click FOG or NIGHT, the camera confidence drops (visible in the gauge bar), vision detections disappear, but blue RF ONLY labels appear on all tracked targets. The 5G NR radar maintains full track continuity.
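The source-aware weighting behind those labels can be sketched in a few lines; a minimal illustration, where the confidence floor and weighting scheme are assumptions, not the project's trained fusion logic:

```python
def fuse_confidence(vision_conf, rf_conf, camera_health, vision_floor=0.3):
    """Source-aware weighted average: down-weight vision as the camera
    degrades, and fall back to RF-only when vision becomes unusable.

    The 0.3 floor and linear weighting are illustrative choices only.
    """
    if camera_health < vision_floor:
        return rf_conf, "RF_ONLY"          # vision too degraded to trust
    w_vision = camera_health               # weight vision by camera health
    w_rf = 1.0                             # RF is weather-independent
    fused = (w_vision * vision_conf + w_rf * rf_conf) / (w_vision + w_rf)
    return fused, "FUSED"

print(fuse_confidence(0.9, 0.95, camera_health=1.0))   # clear: both fused
print(fuse_confidence(0.1, 0.95, camera_health=0.2))   # fog: RF carries it
```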
```
┌───────────────────────────────────────────────────────────────────────┐
│                        SenseForge ISAC System                         │
├─────────────┬─────────────┬──────────────┬──────────────┬─────────────┤
│   LAYER 1   │   LAYER 2   │   LAYER 3    │   LAYER 4    │   LAYER 5   │
│  RF Radar   │  Vision     │  Fusion      │  Backend     │  Frontend   │
│             │             │              │              │             │
│ ┌─────────┐ │ ┌─────────┐ │ ┌──────────┐ │ ┌──────────┐ │ ┌─────────┐ │
│ │Waveform │ │ │ YOLOv8  │ │ │ Feature  │ │ │ FastAPI  │ │ │ React   │ │
│ │  Gen    │ │ │Detector │ │ │ Vector   │ │ │ REST +   │ │ │Dashboard│ │
│ ├─────────┤ │ ├─────────┤ │ │  Build   │ │ │WebSocket │ │ ├─────────┤ │
│ │  Echo   │ │ │ByteTrack│ │ ├──────────┤ │ ├──────────┤ │ │ Camera  │ │
│ │Simulator│ │ │ Tracker │ │ │   MLP    │ │ │/ws/video │ │ │  Feed   │ │
│ ├─────────┤ │ ├─────────┤ │ │14→64→32→4│ │ │/ws/radar │ │ ├─────────┤ │
│ │ Range-  │ │ │  MiDaS  │ │ ├──────────┤ │ │ /ws/det  │ │ │ Radar   │ │
│ │ Doppler │ │ │  Depth  │ │ │ Weighted │ │ ├──────────┤ │ │ Heatmap │ │
│ │  Map    │ │ ├─────────┤ │ │  Fuse    │ │ │ /health  │ │ ├─────────┤ │
│ ├─────────┤ │ │Degrader │ │ │ (source  │ │ │/scenario │ │ │ Fusion  │ │
│ │ Kalman  │ │ │Fog/Night│ │ │  aware)  │ │ │ /degrade │ │ │ Panel   │ │
│ │ Tracker │ │ │Occ/Rain │ │ │          │ │ │          │ │ │         │ │
│ └─────────┘ │ └─────────┘ │ └──────────┘ │ └──────────┘ │ └─────────┘ │
├─────────────┴─────────────┴──────────────┴──────────────┴─────────────┤
│               NVIDIA AI Aerial SDK (pyAerial + Sionna)                │
│               cuPHY PUSCH Decoder · CDL Channel Model                 │
└───────────────────────────────────────────────────────────────────────┘
```
```
5G NR Waveform ──► Echo Simulation ──► Range-Doppler ──► CFAR ──► RF Tracks ──┐
                                                                              ├──► Fusion MLP ──► Dashboard
Camera Frame ──► YOLOv8 ──► ByteTrack ──► Depth ──► Vision Tracks ────────────┘
      ▲
      │
Degradation (Fog/Night/Occlusion/Rain)
```
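The degradation stage feeds the vision path before detection. A minimal sketch of one such effect, modeling fog as alpha-blending toward a bright veil; the project applies its effects with OpenCV, and this linear-blend model is an illustrative stand-in:

```python
import numpy as np

def apply_fog(frame, intensity):
    """Blend a frame toward a flat, bright veil.

    frame:     HxWx3 uint8 image
    intensity: 0.0 (clear) .. 1.0 (whiteout)
    """
    veil = np.full_like(frame, 220)                 # bright fog color
    out = (1.0 - intensity) * frame.astype(np.float32) + intensity * veil
    return out.clip(0, 255).astype(np.uint8)

frame = np.zeros((4, 4, 3), dtype=np.uint8)          # a black test frame
foggy = apply_fog(frame, 0.8)
print(foggy[0, 0])                                   # [176 176 176]
```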
| Layer | Technology | Purpose |
|---|---|---|
| RF Pipeline | NVIDIA AI Aerial SDK (pyAerial) | cuPHY PUSCH decoding, GPU-accelerated PHY |
| | Sionna (≥ 0.18) | CDL channel models, ResourceGrid, OFDM |
| | NumPy / SciPy | Range-Doppler FFT, CA-CFAR, Kalman filter |
| Vision Pipeline | YOLOv8 (Ultralytics) | Real-time person detection |
| | MiDaS (Intel ISL) | Monocular depth estimation |
| | OpenCV | Frame processing, degradation effects |
| Fusion | PyTorch | 14-input MLP with source-aware fusion |
| Backend | FastAPI + Uvicorn | REST API + 3 WebSocket streams |
| Frontend | React 18 + Canvas API | Military-grade dashboard with IBM Plex Mono |
| Deployment | Docker + NVIDIA Container Toolkit | GPU-accelerated containerised deployment |
```
SenseForge/
├── rf/                         # Layer 1: RF Radar Pipeline
│   ├── __init__.py
│   ├── waveform_gen.py         # 5G NR FR1 n78 OFDM waveform generator
│   ├── echo_simulator.py       # Sionna CDL channel + Friis path loss
│   ├── range_doppler.py        # LS estimation → ECA → CA-CFAR → NMS
│   └── rf_tracker.py           # Kalman filter with NN association
│
├── vision/                     # Layer 2: Computer Vision Pipeline
│   ├── __init__.py
│   ├── detector.py             # YOLOv8 person detector (+ synthetic fallback)
│   ├── tracker.py              # ByteTrack-style Hungarian matching
│   ├── depth.py                # MiDaS monocular depth estimation
│   └── degradation.py          # Fog / Night / Occlusion / Rain simulator
│
├── fusion/                     # Layer 3: AI Fusion Engine
│   ├── __init__.py
│   ├── model.py                # FusionMLP (14→64→32→4) + weighted fuse()
│   └── train.py                # Synthetic data generator + training loop
│
├── backend/                    # Layer 4: API & Streaming
│   ├── __init__.py
│   └── main.py                 # FastAPI + 3 WebSockets + background threads
│
├── frontend/                   # Layer 5: React Dashboard
│   ├── public/
│   │   └── index.html          # IBM Plex Mono shell
│   ├── src/
│   │   ├── App.js              # Main app with WebSocket state management
│   │   ├── App.css             # Defence-grade design system
│   │   ├── CameraFeed.js       # Canvas camera with corner-bracket boxes
│   │   ├── RadarHeatmap.js     # INFERNO colormap heatmap
│   │   ├── FusionPanel.js      # Gauges, counts, degradation controls
│   │   ├── index.js            # React entry point
│   │   └── index.css           # Global styles
│   ├── package.json
│   └── .env
│
├── scripts/
│   ├── demo_generator.py       # Synthetic video generator (OpenCV)
│   └── record_demo.py          # WebSocket demo recorder
│
├── tests/                      # 95 total tests
│   ├── test_rf_pipeline.py     # 32 RF tests
│   ├── test_vision_pipeline.py # 35 vision tests
│   └── test_fusion.py          # 28 fusion tests
│
├── docs/
│   └── images/                 # Screenshots and recordings
│
├── models/                     # Trained model weights
│   └── fusion_model.pt
│
├── run_pipeline_test.py        # 15 E2E checks with ANSI output
├── aerial_validate.py          # 3GPP TS 38.211 constraint validation
├── aerial_setup.sh             # NVIDIA Aerial SDK setup automation
├── build.sh                    # Build + train script
│
├── docker-compose.yml          # GPU backend + nginx frontend
├── Dockerfile.backend          # Aerial container-based backend
├── Dockerfile.frontend         # Multi-stage React + nginx
├── nginx.conf                  # SPA routing + WS proxy
│
├── requirements.txt            # Python dependencies
├── pytest.ini                  # Test configuration
├── render.yaml                 # Render.com deployment
├── Procfile                    # Heroku/Railway deployment
└── .gitignore
```
- Python 3.11+
- Node.js 18+
- (Optional) NVIDIA GPU + CUDA 12.x for SDK mode
- (Optional) NVIDIA AI Aerial SDK + Sionna for full RF pipeline
Clone the repository:

```bash
git clone https://github.com/yourusername/SenseForge.git
cd SenseForge
```

Install dependencies:

```bash
# Python dependencies
pip install -r requirements.txt

# Frontend dependencies
cd frontend && npm install && cd ..
```

Train the fusion model:

```bash
python -m fusion.train
# Output: models/fusion_model.pt (trains in ~7 seconds)
```

Start the backend:

```bash
python -m uvicorn backend.main:app --host 0.0.0.0 --port 8000
```

Start the frontend:

```bash
cd frontend
npm start
# Dashboard available at http://localhost:3000
```

Navigate to http://localhost:3000 and you'll see:

- Live camera feed with animated detection boxes
- INFERNO-colormap range-Doppler heatmap
- Real-time fusion gauges and detection log
For full GPU-accelerated RF processing with cuPHY PUSCH decoding:
```bash
# Automated setup (requires Docker + NVIDIA drivers)
bash aerial_setup.sh
```

This script will:

- Check prerequisites (Docker, Git LFS, nvidia-smi)
- Verify the NVIDIA Docker runtime
- Clone the `aerial-cuda-accelerated-ran` repository
- Pull the NGC Aerial container image
- Start the container with the project and pyAerial mounted
- Install dependencies, train the model, and start the backend

To set up manually instead:

```bash
# 1. Clone Aerial SDK
git clone --recurse-submodules https://github.com/NVIDIA/aerial-cuda-accelerated-ran.git ~/aerial

# 2. Install pyAerial
pip install -e ~/aerial/pyaerial/

# 3. Install Sionna
pip install "sionna>=0.18.0"

# 4. Validate
python aerial_validate.py
```

Note: Without the Aerial SDK, SenseForge runs in Synthetic Mode: all pipeline stages use simulated data, and the dashboard and fusion logic work identically.
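The Synthetic Mode fallback amounts to a guarded import at startup. A minimal sketch of the pattern (the function name and dispatch below are illustrative, not SenseForge's actual code):

```python
import numpy as np

try:
    import sionna  # noqa: F401   # full SDK pipeline available
    SDK_MODE = "aerial"
except ImportError:
    SDK_MODE = "synthetic"

def radar_samples(n=256, seed=0):
    """Return one slot of complex baseband samples.

    In synthetic mode these are random stand-ins; with the SDK the same
    call would be backed by the Sionna CDL channel.
    """
    rng = np.random.default_rng(seed)
    return rng.standard_normal(n) + 1j * rng.standard_normal(n)

print(SDK_MODE, radar_samples().shape)
```

Because the rest of the pipeline only sees sample arrays, the dashboard and fusion logic behave identically in both modes.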
| Parameter | Value | 3GPP Reference |
|---|---|---|
| Band | n78 (3.3–3.8 GHz) | TS 38.104 |
| Carrier Frequency | 3.5 GHz | |
| Subcarrier Spacing | 30 kHz (μ=1) | TS 38.211 |
| Subcarriers | 272 | |
| OFDM Symbols/Slot | 14 (Normal CP) | TS 38.211 §5.2.1 |
| FFT Size | 512 | |
| Bandwidth | 8.16 MHz | |
| MCS Index | 16 (64-QAM, R=0.48) | TS 38.214 Table 5.1.3.1-1 |
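The derived radar performance follows directly from these waveform parameters; a quick sanity check, where the 5 ms snapshot spacing and 14-point Doppler FFT are assumptions chosen to reproduce the performance figures below:

```python
C = 3e8                       # speed of light, m/s
FC = 3.5e9                    # n78 carrier frequency, Hz
SCS = 30e3                    # subcarrier spacing, Hz
NUM_SC = 272                  # occupied subcarriers

bandwidth = NUM_SC * SCS                 # 8.16 MHz occupied bandwidth
range_res = C / (2 * bandwidth)          # ~18.38 m
max_range = C / (2 * SCS)                # 5000 m (unambiguous delay)

wavelength = C / FC
# Assumed coherent processing: 14 Doppler snapshots spaced 5 ms apart
N_DOPPLER, T_SNAP = 14, 5e-3
vel_res = wavelength / (2 * N_DOPPLER * T_SNAP)   # ~0.61 m/s
max_vel = wavelength / (4 * T_SNAP)               # ~4.29 m/s

print(f"{bandwidth/1e6:.2f} MHz, {range_res:.2f} m, {max_range:.0f} m, "
      f"{vel_res:.2f} m/s, {max_vel:.2f} m/s")
```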
| Parameter | Value |
|---|---|
| Range Resolution | ~18.4 m |
| Max Range | ~5,000 m |
| Velocity Resolution | ~0.61 m/s |
| Max Velocity | ~4.29 m/s |
| CFAR false alarm rate | 10⁻⁴ |
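The false-alarm rate sets the CFAR detection threshold. A minimal 1-D cell-averaging CFAR sketch; SenseForge uses a 2-D variant, and the window sizes here are illustrative:

```python
import numpy as np

def ca_cfar_1d(power, num_train=16, num_guard=2, pfa=1e-4):
    """1-D cell-averaging CFAR over a power profile.

    The threshold factor for N training cells and a desired false-alarm
    rate is alpha = N * (pfa**(-1/N) - 1). Returns detected cell indices.
    """
    n = len(power)
    N = 2 * num_train
    alpha = N * (pfa ** (-1.0 / N) - 1.0)
    hits = []
    for i in range(num_train + num_guard, n - num_train - num_guard):
        lead = power[i - num_guard - num_train : i - num_guard]
        lag = power[i + num_guard + 1 : i + num_guard + num_train + 1]
        noise = (lead.sum() + lag.sum()) / N     # local noise estimate
        if power[i] > alpha * noise:
            hits.append(i)
    return hits

rng = np.random.default_rng(1)
profile = rng.exponential(1.0, size=200)   # square-law noise floor
profile[100] += 60.0                       # inject one strong target
print(ca_cfar_1d(profile))                 # the injected cell is detected
```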
| Variable | Default | Description |
|---|---|---|
| `FRONTEND_URL` | `http://localhost:3000` | CORS origin for backend |
| `REACT_APP_BACKEND_URL` | `http://localhost:8000` | Backend URL for frontend |
| `REACT_APP_WS_URL` | `ws://localhost:8000` | WebSocket URL |
| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/health` | System status, uptime, pipeline state |
| `POST` | `/scenario` | Set target count and scenario seed |
| `POST` | `/degrade` | Set camera degradation mode and intensity |
| Endpoint | Rate | Payload |
|---|---|---|
| `/ws/video` | ~15 Hz | `{ frame: <base64 JPEG> }` |
| `/ws/radar` | ~10 Hz | `{ rd_matrix: [[...]], detections: [...] }` |
| `/ws/detections` | ~5 Hz | `{ detections: [...], mode, camera_confidence, rf_confidence }` |
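A minimal sketch of consuming `/ws/detections` payloads: the field names follow the table above, the sample values are illustrative, and the commented client loop (using the third-party `websockets` package) is not executed here:

```python
import json

def parse_detections(message: str) -> dict:
    """Decode one /ws/detections message into the fields the
    dashboard cares about (names per the payload table)."""
    payload = json.loads(message)
    return {
        "count": len(payload.get("detections", [])),
        "mode": payload.get("mode", "clear"),
        "camera_confidence": payload.get("camera_confidence", 1.0),
        "rf_confidence": payload.get("rf_confidence", 1.0),
    }

# A representative message (values are illustrative)
sample = json.dumps({
    "detections": [{"id": 1}, {"id": 2}],
    "mode": "fog",
    "camera_confidence": 0.24,
    "rf_confidence": 1.0,
})
print(parse_detections(sample))

# Streaming it live would look roughly like (not run here):
#   import asyncio, websockets
#   async def watch():
#       async with websockets.connect("ws://localhost:8000/ws/detections") as ws:
#           async for msg in ws:
#               print(parse_detections(msg))
#   asyncio.run(watch())
```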
```bash
curl -X POST http://localhost:8000/degrade \
  -H "Content-Type: application/json" \
  -d '{"mode": "fog", "intensity": 0.8}'
```

Response:

```json
{
  "mode": "fog",
  "intensity": 0.8,
  "camera_confidence": 0.24
}
```

Run the unit test suite:

```bash
pytest tests/ -v
```

Run the end-to-end pipeline check:

```bash
python run_pipeline_test.py
```

```
══════════════════════════════════════════════════════
     SenseForge · End-to-End Pipeline Validation
══════════════════════════════════════════════════════

RF Pipeline
──────────────────────────────────────────────
  ✓ PASS  WaveformConfig parameters
  ✓ PASS  WaveformConfig derived properties
  ✓ PASS  Target physics (RCS, Doppler, delay)
  ✓ PASS  Scenario generator
  ✓ PASS  Channel estimation
  ✓ PASS  Range-Doppler map computation
  ✓ PASS  CFAR detection
  ✓ PASS  RF Kalman tracker

Vision Pipeline
──────────────────────────────────────────────
  ✓ PASS  Degradation modes (all 5)
  ✓ PASS  YOLO detector (synthetic)
  ✓ PASS  Vision tracker + IoU
  ✓ PASS  Depth estimation (heuristic)

Fusion Layer
──────────────────────────────────────────────
  ✓ PASS  Feature vector normalisation
  ✓ PASS  Fusion source labelling (all 4 branches)
  ✓ PASS  Training data generator

══════════════════════════════════════════════════════
  15 PASSED / 0 FAILED
  ALL CHECKS PASSED ✓
══════════════════════════════════════════════════════
```
Validate the waveform against 3GPP constraints:

```bash
python aerial_validate.py
```

To deploy with Docker:

```bash
# Set environment
export AERIAL_SDK_PATH=~/aerial-cuda-accelerated-ran
export AERIAL_IMAGE=nvcr.io/nvidia/aerial/aerial-cuda-accelerated-ran:25-3-cubb

# Launch
docker-compose up -d
```

This starts:

- Backend on port `8000` (GPU-enabled Aerial container)
- Frontend on port `3000` (nginx serving React build)

To build and run just the frontend:

```bash
docker build -f Dockerfile.frontend -t senseforge-frontend .
docker run -p 3000:3000 senseforge-frontend
```

SenseForge validates all waveform parameters against 3GPP TS 38.211 constraints:
```
═══════════════════════════════════════════════════════
       SenseForge · 3GPP TS 38.211 Validation
═══════════════════════════════════════════════════════
  ✓ SCS valid for FR1: 30000 (expected: one of [15000, 30000, 60000])
  ✓ Carrier frequency in FR1: 3500000000.0 (expected: 410e6 <= fc <= 7.125e9)
  ✓ Carrier in n78 band: 3500000000.0 (expected: 3.3e9 <= fc <= 3.8e9)
  ✓ Numerology μ: 1 (expected: 1)
  ✓ OFDM symbols per slot: 14 (expected: 14)
  ✓ FFT size is power of 2: 512 (expected: power of 2)
  ✓ FFT size >= num_subcarriers: 512 (expected: >= 272)
  ✓ MCS index valid: 16 (expected: 0 <= mcs <= 28)
  ✓ Range resolution > 0: 18.38 m
  ✓ Max range > 100m: 4997 m

  Result: 12/12 passed ✓
```
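These checks amount to simple assertions over the waveform configuration. A condensed sketch of that style of check; the set below is illustrative, not `aerial_validate.py`'s exact list:

```python
def validate_waveform(scs, fc, mu, fft_size, num_sc, mcs):
    """Return (name, passed) pairs for a few TS 38.211-style checks."""
    checks = [
        ("SCS valid for FR1", scs in (15_000, 30_000, 60_000)),
        ("Carrier in FR1", 410e6 <= fc <= 7.125e9),
        ("Carrier in n78", 3.3e9 <= fc <= 3.8e9),
        ("Numerology mu is 1", mu == 1),
        ("FFT size power of 2", fft_size & (fft_size - 1) == 0),
        ("FFT covers subcarriers", fft_size >= num_sc),
        ("MCS index valid", 0 <= mcs <= 28),
    ]
    return checks

results = validate_waveform(scs=30_000, fc=3.5e9, mu=1,
                            fft_size=512, num_sc=272, mcs=16)
for name, ok in results:
    print(("PASS" if ok else "FAIL"), name)
print(f"{sum(ok for _, ok in results)}/{len(results)} passed")
```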
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License; see the LICENSE file for details.
- NVIDIA AI Aerial SDK: GPU-accelerated 5G PHY layer processing
- Sionna: open-source link-level simulator by NVIDIA
- NVIDIA & Booz Allen Hamilton: original ISAC demonstration at GTC Washington D.C. 2025
- Ultralytics YOLOv8: state-of-the-art object detection
- MiDaS: Intel ISL monocular depth estimation