
πŸ›‘οΈ SenseForge

Multimodal ISAC Sensor Fusion System

5G NR Radar Sensing + Computer Vision | Powered by NVIDIA AI Aerial SDK

Python NVIDIA Sionna React FastAPI License

Production-grade ISAC (Integrated Sensing and Communications) system that fuses 5G NR FR1 radar sensing with YOLOv8 computer vision — replicating the architecture demonstrated by NVIDIA and Booz Allen Hamilton at GTC Washington D.C. 2025.


SenseForge Live Dashboard

▲ Live Dashboard — Camera Feed (left), Range-Doppler Heatmap (center), Fusion Panel (right)


🔭 Overview

SenseForge is a full-stack multimodal sensor fusion system that demonstrates how 5G NR OFDM waveforms can serve a dual purpose — communications and radar sensing — within a single integrated pipeline. This is the core concept behind ISAC (Integrated Sensing and Communications), a key feature of 5G-Advanced and 6G networks.

Why ISAC?

Traditional surveillance relies entirely on cameras, which fail in:

  • 🌫️ Fog — visibility drops to near zero
  • 🌙 Night — insufficient lighting for visual detection
  • 🧱 Occlusion — physical obstructions block line of sight
  • 🌧️ Rain — lens distortion and reduced contrast

5G NR radar operates at 3.5 GHz (n78 band) and is unaffected by these conditions. SenseForge fuses both modalities — when cameras degrade, the RF pipeline maintains full situational awareness.

Core Capabilities

| Capability | Implementation |
| --- | --- |
| OFDM Waveform Generation | 5G NR FR1 n78, μ=1, 30 kHz SCS, 272 subcarriers |
| Radar Echo Simulation | Sionna CDL channel + Friis path loss + Rician fading |
| Range-Doppler Processing | LS channel estimation → ECA clutter removal → 2D CA-CFAR |
| Computer Vision | YOLOv8 person detection + ByteTrack-style tracking |
| Depth Estimation | MiDaS monocular depth (with heuristic fallback) |
| Weather Degradation | Fog, Night, Occlusion, Rain simulation at variable intensity |
| AI Fusion | 14-input MLP with source-aware weighted averaging |
| Real-time Dashboard | 3-panel military-grade UI with WebSocket streaming |
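The 14→64→32→4 fusion MLP listed above can be pictured as a plain forward pass. The following is a NumPy sketch with random, untrained weights to illustrate the layer shapes only; the real network lives in fusion/model.py and its activations and output semantics may differ:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

class FusionMLPSketch:
    """Shape-only sketch of a 14 -> 64 -> 32 -> 4 MLP (weights are random)."""

    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.W1, self.b1 = rng.normal(0, 0.1, (14, 64)), np.zeros(64)
        self.W2, self.b2 = rng.normal(0, 0.1, (64, 32)), np.zeros(32)
        self.W3, self.b3 = rng.normal(0, 0.1, (32, 4)), np.zeros(4)

    def forward(self, x):
        h = relu(x @ self.W1 + self.b1)
        h = relu(h @ self.W2 + self.b2)
        logits = h @ self.W3 + self.b3
        e = np.exp(logits - logits.max())  # softmax over the 4 outputs
        return e / e.sum()

probs = FusionMLPSketch().forward(np.zeros(14))  # one 14-feature input vector
```

The 14 inputs would carry the normalised vision and RF features; the 4 outputs correspond to the fusion decision classes.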

🎯 Key Demo: ISAC Value Proposition

The critical demonstration shows SenseForge's resilience when camera feeds degrade. The RF radar continues to detect and track targets, proving the value of integrated sensing.

Clear Mode → Fog Mode → Night Mode

| Clear Mode (100% Camera) | Fog Mode (37% Camera) | Night Mode (34% Camera) |
| --- | --- | --- |
| Camera + RF both active | Camera degraded, RF persists | Near-blind camera, RF unaffected |
| Green FUS + Yellow VIS + Blue RF | Green FUS dominant, Blue RF backup | Green FUS + Blue RF ONLY |

What Happens During Degradation

CLEAR MODE                     FOG/NIGHT MODE
┌────────────────┐            ┌────────────────┐
│ 📷 Camera: 100% │           │ 📷 Camera: 37%  │  ← Degraded
│ 📡 RF:     100% │           │ 📡 RF:     100% │  ← Unaffected!
│                │            │                │
│ Vision: ██████ │            │ Vision: ██     │  ← Drops
│ RF:     ██████ │            │ RF:     ██████ │  ← Stable
│ Fused:  ██████ │            │ Fused:  ██████ │  ← RF compensates
│                │            │                │
│ Source: FUSED  │            │ Source: RF_ONLY │  ← Label changes
└────────────────┘            └────────────────┘

The ISAC argument: When you click FOG or NIGHT, the camera confidence drops (visible in the gauge bar), vision detections disappear, but blue RF ONLY labels appear on all tracked targets. The 5G NR radar maintains full track continuity.
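The degradation behaviour described above amounts to a confidence-gated source switch. Below is a hypothetical sketch of the four-branch labelling and a source-aware weighted average; the threshold and weighting are illustrative, and the project's actual logic lives in fusion/model.py:

```python
def label_source(cam_conf, rf_conf, threshold=0.5):
    """Hypothetical four-branch labelling: FUSED / RF_ONLY / VISION_ONLY / NONE."""
    cam_ok, rf_ok = cam_conf >= threshold, rf_conf >= threshold
    if cam_ok and rf_ok:
        return "FUSED"
    if rf_ok:
        return "RF_ONLY"
    if cam_ok:
        return "VISION_ONLY"
    return "NONE"

def fuse_estimate(vis_value, rf_value, cam_conf, rf_conf):
    """Confidence-weighted average of a vision and an RF estimate (e.g. range)."""
    total = cam_conf + rf_conf
    return (cam_conf * vis_value + rf_conf * rf_value) / total if total else None
```

With the fog-mode numbers from above (camera 0.37, RF 1.0), `label_source` falls into the RF_ONLY branch, which is exactly the label change shown in the diagram.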

🎬 Live Demo Recording

Degradation Cycle Demo

▲ Automated degradation cycle: Clear → Fog → Night → Clear


πŸ—οΈ Architecture

┌────────────────────────────────────────────────────────────────────────┐
│                        SenseForge ISAC System                          │
├─────────────┬─────────────┬──────────────┬──────────────┬─────────────┤
│  LAYER 1    │  LAYER 2    │   LAYER 3    │   LAYER 4    │   LAYER 5   │
│  RF Radar   │  Vision     │   Fusion     │   Backend    │   Frontend  │
│             │             │              │              │             │
│ ┌─────────┐ │ ┌─────────┐ │ ┌──────────┐ │ ┌──────────┐ │ ┌─────────┐ │
│ │Waveform │ │ │ YOLOv8  │ │ │ Feature  │ │ │ FastAPI  │ │ │  React  │ │
│ │  Gen    │ │ │Detector │ │ │  Vector  │ │ │  REST +  │ │ │Dashboard│ │
│ ├─────────┤ │ ├─────────┤ │ │  Build   │ │ │WebSocket │ │ ├─────────┤ │
│ │  Echo   │ │ │ByteTrack│ │ ├──────────┤ │ ├──────────┤ │ │ Camera  │ │
│ │Simulator│ │ │ Tracker │ │ │   MLP    │ │ │ /ws/video│ │ │  Feed   │ │
│ ├─────────┤ │ ├─────────┤ │ │14→64→32→4│ │ │ /ws/radar│ │ ├─────────┤ │
│ │Range-   │ │ │  MiDaS  │ │ ├──────────┤ │ │ /ws/det  │ │ │ Radar   │ │
│ │Doppler  │ │ │  Depth  │ │ │ Weighted │ │ ├──────────┤ │ │Heatmap  │ │
│ │  Map    │ │ ├─────────┤ │ │  Fuse    │ │ │ /health  │ │ ├─────────┤ │
│ ├─────────┤ │ │Degrader │ │ │ (source  │ │ │ /scenario│ │ │ Fusion  │ │
│ │ Kalman  │ │ │Fog/Night│ │ │  aware)  │ │ │ /degrade │ │ │  Panel  │ │
│ │ Tracker │ │ │Occ/Rain │ │ │          │ │ │          │ │ │         │ │
│ └─────────┘ │ └─────────┘ │ └──────────┘ │ └──────────┘ │ └─────────┘ │
├─────────────┴─────────────┴──────────────┴──────────────┴─────────────┤
│                   NVIDIA AI Aerial SDK (pyAerial + Sionna)             │
│                   cuPHY PUSCH Decoder · CDL Channel Model              │
└────────────────────────────────────────────────────────────────────────┘

Data Flow

5G NR Waveform ──► Echo Simulation ──► Range-Doppler ──► CFAR ──► RF Tracks ──┐
                                                                              ├──► Fusion MLP ──► Dashboard
Camera Frame ──► YOLOv8 ──► ByteTrack ──► Depth ──► Vision Tracks ────────────┘
                    │
                    ▼
              Degradation (Fog/Night/Occlusion/Rain)
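The Range-Doppler → CFAR step in the RF branch is a 2D cell-averaging CFAR: each cell is compared against a threshold derived from the mean power of a surrounding training ring, with guard cells excluded. A minimal NumPy sketch (window sizes and threshold factor are illustrative, not the project's actual parameters in rf/range_doppler.py):

```python
import numpy as np

def ca_cfar_2d(power, guard=2, train=4, alpha=5.0):
    """2D cell-averaging CFAR: detect cells exceeding alpha * local noise mean."""
    det = np.zeros_like(power, dtype=bool)
    g, t = guard, guard + train
    for i in range(t, power.shape[0] - t):
        for j in range(t, power.shape[1] - t):
            window = power[i - t:i + t + 1, j - t:j + t + 1].copy()
            window[t - g:t + g + 1, t - g:t + g + 1] = 0.0  # drop CUT + guard cells
            n_train = window.size - (2 * g + 1) ** 2
            noise = window.sum() / n_train                  # mean of training ring
            det[i, j] = power[i, j] > alpha * noise
    return det

rng = np.random.default_rng(42)
rd_map = rng.exponential(1.0, size=(64, 64))  # noise-only range-Doppler power map
rd_map[32, 20] = 200.0                        # inject one strong target
hits = ca_cfar_2d(rd_map)
```

Because the threshold adapts to local noise, the same detector works across clutter levels, which is why CFAR follows the clutter-removal (ECA) stage rather than a fixed threshold.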

📸 Screenshots & Recordings

Dashboard — Initial State (Backend Offline)

Dashboard Offline

▲ Dashboard before backend connection — showing UI structure with "Awaiting" placeholders

Dashboard — Live with Backend

Dashboard Live

▲ Live dashboard receiving streaming data via 3 WebSocket connections

Backend Health Endpoint

Backend Health

▲ /health endpoint showing pipeline status, target count, and uptime

Dashboard Preview Recording

Dashboard Preview Recording

▲ Browser recording of initial dashboard load and interaction


πŸ› οΈ Tech Stack

Layer Technology Purpose
RF Pipeline NVIDIA AI Aerial SDK (pyAerial) cuPHY PUSCH decoding, GPU-accelerated PHY
Sionna (β‰₯ 0.18) CDL channel models, ResourceGrid, OFDM
NumPy / SciPy Range-Doppler FFT, CA-CFAR, Kalman filter
Vision Pipeline YOLOv8 (Ultralytics) Real-time person detection
MiDaS (Intel ISL) Monocular depth estimation
OpenCV Frame processing, degradation effects
Fusion PyTorch 14-input MLP with source-aware fusion
Backend FastAPI + Uvicorn REST API + 3 WebSocket streams
Frontend React 18 + Canvas API Military-grade dashboard with IBM Plex Mono
Deployment Docker + NVIDIA Container Toolkit GPU-accelerated containerised deployment

πŸ“ Project Structure

SenseForge/
β”œβ”€β”€ rf/                          # Layer 1: RF Radar Pipeline
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ waveform_gen.py          # 5G NR FR1 n78 OFDM waveform generator
β”‚   β”œβ”€β”€ echo_simulator.py        # Sionna CDL channel + Friis path loss
β”‚   β”œβ”€β”€ range_doppler.py         # LS estimation β†’ ECA β†’ CA-CFAR β†’ NMS
β”‚   └── rf_tracker.py            # Kalman filter with NN association
β”‚
β”œβ”€β”€ vision/                      # Layer 2: Computer Vision Pipeline
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ detector.py              # YOLOv8 person detector (+ synthetic fallback)
β”‚   β”œβ”€β”€ tracker.py               # ByteTrack-style Hungarian matching
β”‚   β”œβ”€β”€ depth.py                 # MiDaS monocular depth estimation
β”‚   └── degradation.py           # Fog / Night / Occlusion / Rain simulator
β”‚
β”œβ”€β”€ fusion/                      # Layer 3: AI Fusion Engine
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ model.py                 # FusionMLP (14β†’64β†’32β†’4) + weighted fuse()
β”‚   └── train.py                 # Synthetic data generator + training loop
β”‚
β”œβ”€β”€ backend/                     # Layer 4: API & Streaming
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── main.py                  # FastAPI + 3 WebSockets + background threads
β”‚
β”œβ”€β”€ frontend/                    # Layer 5: React Dashboard
β”‚   β”œβ”€β”€ public/
β”‚   β”‚   └── index.html           # IBM Plex Mono shell
β”‚   β”œβ”€β”€ src/
β”‚   β”‚   β”œβ”€β”€ App.js               # Main app with WebSocket state management
β”‚   β”‚   β”œβ”€β”€ App.css              # Defence-grade design system
β”‚   β”‚   β”œβ”€β”€ CameraFeed.js        # Canvas camera with corner-bracket boxes
β”‚   β”‚   β”œβ”€β”€ RadarHeatmap.js      # INFERNO colormap heatmap
β”‚   β”‚   β”œβ”€β”€ FusionPanel.js       # Gauges, counts, degradation controls
β”‚   β”‚   β”œβ”€β”€ index.js             # React entry point
β”‚   β”‚   └── index.css            # Global styles
β”‚   β”œβ”€β”€ package.json
β”‚   └── .env
β”‚
β”œβ”€β”€ scripts/
β”‚   β”œβ”€β”€ demo_generator.py        # Synthetic video generator (OpenCV)
β”‚   └── record_demo.py           # WebSocket demo recorder
β”‚
β”œβ”€β”€ tests/                       # 95 total tests
β”‚   β”œβ”€β”€ test_rf_pipeline.py      # 32 RF tests
β”‚   β”œβ”€β”€ test_vision_pipeline.py  # 35 vision tests
β”‚   └── test_fusion.py           # 28 fusion tests
β”‚
β”œβ”€β”€ docs/
β”‚   └── images/                  # Screenshots and recordings
β”‚
β”œβ”€β”€ models/                      # Trained model weights
β”‚   └── fusion_model.pt
β”‚
β”œβ”€β”€ run_pipeline_test.py         # 15 E2E checks with ANSI output
β”œβ”€β”€ aerial_validate.py           # 3GPP TS 38.211 constraint validation
β”œβ”€β”€ aerial_setup.sh              # NVIDIA Aerial SDK setup automation
β”œβ”€β”€ build.sh                     # Build + train script
β”‚
β”œβ”€β”€ docker-compose.yml           # GPU backend + nginx frontend
β”œβ”€β”€ Dockerfile.backend           # Aerial container-based backend
β”œβ”€β”€ Dockerfile.frontend          # Multi-stage React + nginx
β”œβ”€β”€ nginx.conf                   # SPA routing + WS proxy
β”‚
β”œβ”€β”€ requirements.txt             # Python dependencies
β”œβ”€β”€ pytest.ini                   # Test configuration
β”œβ”€β”€ render.yaml                  # Render.com deployment
β”œβ”€β”€ Procfile                     # Heroku/Railway deployment
└── .gitignore
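rf/rf_tracker.py above pairs a Kalman filter with nearest-neighbour association. A minimal constant-velocity sketch over a [range, radial velocity] state; dt, the noise covariances, and the measurement model are illustrative, not the project's tuned values:

```python
import numpy as np

# State x = [range (m), radial velocity (m/s)]; constant-velocity motion model.
dt = 0.1                                 # illustrative radar update interval
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
H = np.eye(2)                            # assume both range and velocity measured
Q = np.eye(2) * 1e-3                     # process noise (illustrative)
R = np.eye(2) * 0.5                      # measurement noise (illustrative)

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a target closing at 2 m/s from 100 m, fed noiseless measurements.
x, P = np.array([100.0, -2.0]), np.eye(2)
for k in range(50):
    z = np.array([100.0 - 2.0 * dt * (k + 1), -2.0])
    x, P = kalman_step(x, P, z)
```

In the full tracker, each CFAR detection would be associated to the nearest predicted track before the update step runs.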

🚀 Quick Start

Prerequisites

  • Python 3.11+
  • Node.js 18+
  • (Optional) NVIDIA GPU + CUDA 12.x for SDK mode
  • (Optional) NVIDIA AI Aerial SDK + Sionna for full RF pipeline

1. Clone & Install

git clone https://github.com/yourusername/SenseForge.git
cd SenseForge

# Python dependencies
pip install -r requirements.txt

# Frontend dependencies
cd frontend && npm install && cd ..

2. Train the Fusion Model

python -m fusion.train
# Output: models/fusion_model.pt (trains in ~7 seconds)

3. Start the Backend

python -m uvicorn backend.main:app --host 0.0.0.0 --port 8000

4. Start the Frontend

cd frontend
npm start
# Dashboard available at http://localhost:3000

5. Open the Dashboard

Navigate to http://localhost:3000 — you'll see:

  • 📷 Live camera feed with animated detection boxes
  • 📡 INFERNO-colormap range-Doppler heatmap
  • 🧠 Real-time fusion gauges and detection log

🔧 NVIDIA AI Aerial SDK Setup

For full GPU-accelerated RF processing with cuPHY PUSCH decoding:

# Automated setup (requires Docker + NVIDIA drivers)
bash aerial_setup.sh

This script will:

  1. ✅ Check prerequisites (Docker, Git LFS, nvidia-smi)
  2. ✅ Verify NVIDIA Docker runtime
  3. ✅ Clone aerial-cuda-accelerated-ran repository
  4. ✅ Pull NGC Aerial container image
  5. ✅ Start container with project + pyAerial mounted
  6. ✅ Install dependencies, train model, and start backend

Manual Setup

# 1. Clone Aerial SDK
git clone --recurse-submodules https://github.com/NVIDIA/aerial-cuda-accelerated-ran.git ~/aerial

# 2. Install pyAerial
pip install -e ~/aerial/pyaerial/

# 3. Install Sionna
pip install "sionna>=0.18.0"

# 4. Validate
python aerial_validate.py

Note: Without the Aerial SDK, SenseForge runs in Synthetic Mode — all pipeline stages use simulated data. The dashboard and fusion logic work identically.
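The Synthetic Mode fallback can be implemented as a simple import guard. This is a hypothetical sketch of the pattern; the real switch lives in the rf/ and vision/ modules and may use different names:

```python
def select_rf_backend():
    """Return which RF backend is available: 'sdk' or 'synthetic'.

    Hypothetical sketch of an SDK-availability guard: if Sionna (and by
    extension the Aerial stack) imports cleanly, the full RF pipeline is
    used; otherwise every stage falls back to simulated data.
    """
    try:
        import sionna  # noqa: F401  -- presence check only
        return "sdk"
    except ImportError:
        return "synthetic"

mode = select_rf_backend()
```

Keeping the fallback at import time means the rest of the pipeline never needs to branch on SDK availability.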


βš™οΈ Configuration

Waveform Parameters (5G NR FR1 n78)

Parameter Value 3GPP Reference
Band n78 (3.3 – 3.8 GHz) TS 38.104
Carrier Frequency 3.5 GHz
Subcarrier Spacing 30 kHz (ΞΌ=1) TS 38.211
Subcarriers 272
OFDM Symbols/Slot 14 (Normal CP) TS 38.211 Β§5.2.1
FFT Size 512
Bandwidth 8.16 MHz
MCS Index 16 (64-QAM, R=0.48) TS 38.214 Table 5.1.3.1-1
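The derived entries in the table follow directly from the configured numerology; a quick arithmetic check:

```python
scs = 30e3        # subcarrier spacing for numerology mu = 1
n_sc = 272        # occupied subcarriers
fft_size = 512

bandwidth = n_sc * scs        # occupied bandwidth: 272 * 30 kHz = 8.16 MHz
sample_rate = fft_size * scs  # baseband sample rate implied by the 512-point FFT
symbol_time = 1.0 / scs       # useful OFDM symbol duration (cyclic prefix excluded)
```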

Radar Performance

| Parameter | Value |
| --- | --- |
| Range Resolution | ~18.4 m |
| Max Range | ~5,000 m |
| Velocity Resolution | ~0.61 m/s |
| Max Velocity | ~4.29 m/s |
| CFAR False Alarm Rate | 10⁻⁴ |
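These figures are consistent with the standard OFDM radar relations. Note that the timing values below (pulse repetition interval and pulses per coherent interval) are assumptions inferred to reproduce the table, not parameters stated by the project:

```python
c = 3.0e8                      # speed of light, m/s
fc = 3.5e9                     # n78 carrier frequency
bandwidth = 272 * 30e3         # 8.16 MHz occupied bandwidth
wavelength = c / fc            # ~8.57 cm at 3.5 GHz

range_res = c / (2 * bandwidth)   # ~18.4 m
max_range = range_res * 272       # ~5,000 m with one range bin per subcarrier

pri = 5e-3                        # ASSUMED pulse repetition interval
n_pulses = 14                     # ASSUMED pulses per coherent processing interval
max_velocity = wavelength / (4 * pri)              # ~4.29 m/s unambiguous
velocity_res = wavelength / (2 * n_pulses * pri)   # ~0.61 m/s
```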

Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| FRONTEND_URL | http://localhost:3000 | CORS origin for backend |
| REACT_APP_BACKEND_URL | http://localhost:8000 | Backend URL for frontend |
| REACT_APP_WS_URL | ws://localhost:8000 | WebSocket URL |

📡 API Reference

REST Endpoints

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /health | System status, uptime, pipeline state |
| POST | /scenario | Set target count and scenario seed |
| POST | /degrade | Set camera degradation mode and intensity |

WebSocket Streams

| Endpoint | Rate | Payload |
| --- | --- | --- |
| /ws/video | ~15 Hz | `{ frame: <base64 JPEG> }` |
| /ws/radar | ~10 Hz | `{ rd_matrix: [[...]], detections: [...] }` |
| /ws/detections | ~5 Hz | `{ detections: [...], mode, camera_confidence, rf_confidence }` |

Example: POST /degrade

curl -X POST http://localhost:8000/degrade \
  -H "Content-Type: application/json" \
  -d '{"mode": "fog", "intensity": 0.8}'

Response:

{
  "mode": "fog",
  "intensity": 0.8,
  "camera_confidence": 0.24
}
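The sample response is consistent with a linear confidence roll-off of roughly 1 - 0.95·intensity. A hypothetical sketch of that mapping (the backend's actual curve may differ and may be mode-dependent):

```python
def camera_confidence(intensity):
    """Hypothetical linear degradation mapping, chosen only to reproduce the
    sample /degrade response above (intensity 0.8 -> confidence 0.24)."""
    return round(1.0 - 0.95 * intensity, 2)
```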

🧪 Testing

Run All Tests (95 tests)

pytest tests/ -v

Run End-to-End Pipeline Validation (15 checks)

python run_pipeline_test.py
══════════════════════════════════════════════════════
  SenseForge — End-to-End Pipeline Validation
══════════════════════════════════════════════════════

  RF Pipeline
  ──────────────────────────────────────────────
  ✓ PASS  WaveformConfig parameters
  ✓ PASS  WaveformConfig derived properties
  ✓ PASS  Target physics (RCS, Doppler, delay)
  ✓ PASS  Scenario generator
  ✓ PASS  Channel estimation
  ✓ PASS  Range-Doppler map computation
  ✓ PASS  CFAR detection
  ✓ PASS  RF Kalman tracker

  Vision Pipeline
  ──────────────────────────────────────────────
  ✓ PASS  Degradation modes (all 5)
  ✓ PASS  YOLO detector (synthetic)
  ✓ PASS  Vision tracker + IoU
  ✓ PASS  Depth estimation (heuristic)

  Fusion Layer
  ──────────────────────────────────────────────
  ✓ PASS  Feature vector normalisation
  ✓ PASS  Fusion source labelling (all 4 branches)
  ✓ PASS  Training data generator

══════════════════════════════════════════════════════
  15 PASSED  /  0 FAILED
  ALL CHECKS PASSED ✓
══════════════════════════════════════════════════════

Run 3GPP Validation

python aerial_validate.py

🐳 Docker Deployment

Full GPU Deployment

# Set environment
export AERIAL_SDK_PATH=~/aerial-cuda-accelerated-ran
export AERIAL_IMAGE=nvcr.io/nvidia/aerial/aerial-cuda-accelerated-ran:25-3-cubb

# Launch
docker-compose up -d

This starts:

  • Backend on port 8000 (GPU-enabled Aerial container)
  • Frontend on port 3000 (nginx serving React build)

Frontend Only (for development)

docker build -f Dockerfile.frontend -t senseforge-frontend .
docker run -p 3000:3000 senseforge-frontend

πŸ“ 3GPP Validation

SenseForge validates all waveform parameters against 3GPP TS 38.211 constraints:

═══════════════════════════════════════════════════════
  SenseForge β€” 3GPP TS 38.211 Validation
═══════════════════════════════════════════════════════

  βœ“ SCS valid for FR1: 30000 (expected: one of [15000, 30000, 60000])
  βœ“ Carrier frequency in FR1: 3500000000.0 (expected: 410e6 <= fc <= 7.125e9)
  βœ“ Carrier in n78 band: 3500000000.0 (expected: 3.3e9 <= fc <= 3.8e9)
  βœ“ Numerology ΞΌ: 1 (expected: 1)
  βœ“ OFDM symbols per slot: 14 (expected: 14)
  βœ“ FFT size is power of 2: 512 (expected: power of 2)
  βœ“ FFT size >= num_subcarriers: 512 (expected: >= 272)
  βœ“ MCS index valid: 16 (expected: 0 <= mcs <= 28)
  βœ“ Range resolution > 0: 18.38 m
  βœ“ Max range > 100m: 4997 m

  Result: 12/12 passed βœ“
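Most of these checks reduce to simple predicates over the waveform configuration. A condensed sketch of a few of them (function and key names are hypothetical; the real checks live in aerial_validate.py):

```python
FR1_SCS = {15_000, 30_000, 60_000}  # valid FR1 subcarrier spacings, Hz

def validate_waveform(scs, fc, mu, fft_size, n_sc, mcs):
    """Return a dict of named pass/fail predicates over the waveform config."""
    return {
        "scs_valid_fr1": scs in FR1_SCS,
        "fc_in_fr1": 410e6 <= fc <= 7.125e9,
        "fc_in_n78": 3.3e9 <= fc <= 3.8e9,
        "numerology_mu": mu == 1,
        "fft_power_of_2": fft_size > 0 and fft_size & (fft_size - 1) == 0,
        "fft_fits_subcarriers": fft_size >= n_sc,
        "mcs_valid": 0 <= mcs <= 28,
    }

# SenseForge's configuration passes every predicate:
results = validate_waveform(30_000, 3.5e9, 1, 512, 272, 16)
```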

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License — see the LICENSE file for details.


πŸ™ Acknowledgments


Built with ❀️ by the Kousar Saeed

5G NR FR1 n78 Β· ΞΌ=1 Β· 30 kHz SCS Β· Sionna + cuPHY + YOLOv8

