Complete end-to-end pipeline for reconstructing Radio-Frequency Radiance Fields from 3D scenes using 3D Gaussian Splatting (3DGS). This project demonstrates how to create custom 3D scenes, generate multi-modal datasets (visual + RF), and train neural radiance fields that capture both visual and radio-frequency properties.
This pipeline reconstructs Radio-Frequency Radiance Fields (RRF) for indoor environments by:
- Creating custom 3D room models with furniture
- Generating synthetic visual datasets using Blender
- Simulating RF propagation using Sionna RT ray-tracing
- Training 3D Gaussian Splatting models on visual data
- Fine-tuning on RF data to learn radio propagation patterns
- Evaluating and visualizing results in interactive 3D viewers
Key Achievement: Successfully trained a 3DGS model that learns both geometric and radio-frequency properties of a custom indoor scene, enabling RF prediction from novel viewpoints.
- Requirements
- Installation
- Pipeline Overview
- Step-by-Step Workflow
- Results
- Troubleshooting
- References
- Python 3.8+
- CUDA 11.8+ (for GPU acceleration)
- Blender 3.6+ (for visual dataset generation)
- Conda (for environment management)
- Sionna 0.18+ - RT ray-tracing for RF simulation
- TensorFlow 2.15+ with GPU support
- PyTorch 2.0+ with CUDA
- Open3D - Point cloud processing
- NumPy, SciPy, Matplotlib - Scientific computing
See `requirements.txt` for the complete list.
```bash
# Clone the repository
cd Project_1
git clone https://github.com/Wangmz-1203/RF-3DGS.git
cd RF-3DGS

# Create and activate the conda environment
conda create -n rf-3dgs python=3.8
conda activate rf-3dgs

# Install PyTorch with CUDA
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia

# Install other requirements
pip install -r requirements.txt

# Install Sionna for RF simulation
pip install sionna

# Install submodules (diff-surfel-rasterization, simple-knn)
cd submodules
pip install ./diff-surfel-rasterization
pip install ./simple-knn
cd ..

# Download Blender 3.6+ from https://www.blender.org/download/
# Or install via snap on Ubuntu:
sudo snap install blender --classic
```

```
┌─────────────────────┐
│ 1. Scene Creation   │
│ create_scene_5x3x3  │
└──────────┬──────────┘
           │ PLY meshes
           ▼
┌─────────────────────┐
│ 2. Visual Dataset   │
│ generate_visual_    │
│ dataset.py          │
└──────────┬──────────┘
           │ RGB images + poses
           ▼
┌─────────────────────┐
│ 3. RF Dataset       │
│ generate_dataset_   │
│ ideal_mpc.py        │
└──────────┬──────────┘
           │ RF heatmaps + COLMAP
           ▼
┌─────────────────────┐
│ 4a. Visual Training │
│ train.py            │
└──────────┬──────────┘
           │ Visual checkpoint
           ▼
┌─────────────────────┐
│ 4b. RF Fine-tuning  │
│ train.py --rf       │
└──────────┬──────────┘
           │ RRF model
           ▼
┌─────────────────────┐
│ 5. Evaluation       │
│ render.py + metrics │
└──────────┬──────────┘
           │
           ▼
┌─────────────────────┐
│ 6. Visualization    │
│ WebGL Viewer        │
└─────────────────────┘
```
Create a custom 7m × 5m × 3m room with furniture using parametric mesh generation:
```bash
python create_scene_5x3x3_multi.py
```

What it does:
- Generates separate PLY files for each object (walls, floor, ceiling, furniture)
- Creates material-specific meshes for RF simulation:
  - `meshes/concrete_floor.ply` - Concrete floor
  - `meshes/concrete_walls.ply` - Concrete walls with window/door cutouts
  - `meshes/glass_window.ply` - Glass window
  - `meshes/wood_door.ply` - Wooden door
  - `meshes/wood_furniture.ply` - Tables, chairs, sofa
  - `meshes/metal_tv.ply` - LED TV
- Generates a combined PLY, `room_5x3x3_combined.ply`, for visualization
Key Features:
- Parametric room dimensions (configurable X, Y, Z)
- Realistic furniture placement (3 tables + chairs, sofa, TV)
- Material-based mesh separation for RF propagation modeling
- Window (1.5m × 1.5m) and door (2m × 1m) cutouts
Output Structure:
```
meshes/
├── concrete_floor.ply
├── concrete_walls.ply
├── concrete_ceiling.ply
├── glass_window.ply
├── wood_door.ply
├── wood_furniture.ply
└── metal_tv.ply
room_5x3x3_combined.ply
```
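The per-material meshes are just triangulated boxes. A minimal sketch of writing one as an ASCII PLY (the helper `write_box_ply` is illustrative, not the actual script's API; note that Sionna expects binary little-endian PLY, so a conversion step, e.g. via Open3D, would follow):

```python
import numpy as np

def write_box_ply(path, origin, size):
    """Write an axis-aligned box as an ASCII PLY triangle mesh."""
    ox, oy, oz = origin
    sx, sy, sz = size
    # 8 corners of the box
    verts = np.array([[ox + dx * sx, oy + dy * sy, oz + dz * sz]
                      for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)])
    # 12 triangles (2 per face), indexing the corners above
    faces = [(0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5),
             (0, 4, 5), (0, 5, 1), (2, 3, 7), (2, 7, 6),
             (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)]
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(verts)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write(f"element face {len(faces)}\n")
        f.write("property list uchar int vertex_indices\n")
        f.write("end_header\n")
        for v in verts:
            f.write(f"{v[0]} {v[1]} {v[2]}\n")
        for a, b, c in faces:
            f.write(f"3 {a} {b} {c}\n")

# Example: a 7m x 5m x 0.05m floor slab (dimensions from this scene)
write_box_ply("concrete_floor_demo.ply", origin=(0, 0, -0.05), size=(7, 5, 0.05))
```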
Ensure proper coordinate system and dimensions:
```bash
python check_scene_scale.py
```

Expected Output:

```
Total Width (X):  7.0000 m
Total Depth (Y):  5.0000 m
Total Height (Z): 3.0000 m
```
Create photorealistic training images using Cycles renderer:
```bash
blender --background --python generate_visual_dataset.py
```

Configuration (in `generate_visual_dataset.py`):

```python
NUM_IMAGES = 300   # Number of camera poses
RESOLUTION = 800   # Image resolution (800×800)
ROOM_MIN = (0.5, 0.5, 0.0)
ROOM_MAX = (6.5, 4.5, 3.0)
```

What it does:
- Scene Setup: Imports all meshes from the `meshes/` folder
- Material Assignment: Creates PBR materials with realistic properties:
- Concrete: Rough diffuse surfaces
- Glass: Semi-transparent with transmission
- Wood: Textured diffuse with normal mapping
- Metal: Reflective surfaces
- Camera Sampling: Generates diverse camera poses:
- Positions: Random within room bounds
- Orientations: Looking toward room center with perturbations
- High overlap (90% train, 10% test split)
- Rendering: Uses Cycles GPU rendering with:
- 96 samples per pixel
- OptiX denoising
- Neutral color grading (Standard view transform)
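The camera sampling above can be sketched with NumPy: sample a position inside the room bounds, then build a look-at pose toward a perturbed room center (a sketch assuming Blender's convention of camera −Z forward, +Z world up; names are illustrative):

```python
import numpy as np

ROOM_MIN = np.array([0.5, 0.5, 0.0])
ROOM_MAX = np.array([6.5, 4.5, 3.0])
CENTER = np.array([3.5, 2.5, 1.5])  # room center

def look_at_pose(position, target, up=(0.0, 0.0, 1.0)):
    """4x4 camera-to-world matrix, Blender/OpenGL convention (-Z forward)."""
    forward = target - position
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    c2w = np.eye(4)
    c2w[:3, 0] = right
    c2w[:3, 1] = true_up
    c2w[:3, 2] = -forward  # camera looks down its local -Z axis
    c2w[:3, 3] = position
    return c2w

rng = np.random.default_rng(0)
pos = rng.uniform(ROOM_MIN, ROOM_MAX)          # random position in bounds
target = CENTER + rng.normal(scale=0.3, size=3)  # perturbed look-at target
pose = look_at_pose(pos, target)
```

Each `pose` is what gets written as `transform_matrix` in the `transforms_*.json` files.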
Output Structure:
```
dataset_visual_v2/
├── transforms_train.json    # Camera poses (270 images)
├── transforms_test.json     # Camera poses (30 images)
└── images/
    ├── frame_0000.png
    ├── frame_0001.png
    └── ...
```
`transforms_train.json` format:

```json
{
  "camera_angle_x": 0.8575560450553894,
  "frames": [
    {
      "file_path": "images/frame_0000.png",
      "transform_matrix": [
        [0.9848, -0.1736, 0.0000, 3.5],
        [0.1736, 0.9848, 0.0000, 2.5],
        [0.0000, 0.0000, 1.0000, 1.5],
        [0.0, 0.0, 0.0, 1.0]
      ]
    },
    ...
  ]
}
```

Tips:
- Requires ~10GB GPU memory for rendering
- Takes ~2-3 hours for 300 images at 800px resolution
- Ensure Blender has GPU rendering enabled in preferences
Generate RF heatmaps using ray-tracing simulation:
```bash
python generate_dataset_ideal_mpc.py
```

Configuration:

```python
# RF Parameters
FREQUENCY = 28e9        # 28 GHz (mmWave)
BANDWIDTH = 1e9         # 1 GHz bandwidth
NUM_TX = 1              # Single transmitter
TX_POWER = 20           # dBm

# Camera/Receiver Parameters
NUM_IMAGES = 300        # Match visual dataset
RESOLUTION = 800        # Match visual dataset
FOCAL_LENGTH = 1164.69  # Calculated from camera_angle_x
```

What it does:
1. Sionna Scene Setup:
   - Loads all meshes from the `meshes/` folder
   - Assigns radio materials based on filenames:
     - `concrete_*` → `"itu_concrete"`
     - `glass_*` → `"itu_glass"`
     - `wood_*` → `"itu_wood"`
     - `metal_*` → `"itu_metal"`
2. Transmitter Placement:
   - Position: `(6.0, 2.5, 2.5)` (wall-mounted, centered)
   - Antenna: Isotropic pattern
   - Power: 20 dBm
3. Camera Pose Generation:
   - Uses the same camera poses as the visual dataset
   - Converts Blender transforms to Sionna camera format
   - Euler angles → quaternions (COLMAP format)
4. RF Ray-Tracing, for each camera pose:
   - Renders a 360° panorama (equirectangular)
   - Computes path gains, delays, angles
   - Projects the panorama to a perspective view (pinhole camera)
   - Saves the RF heatmap as a grayscale PNG
   - Path features: gains, delays, AoA, AoD, Doppler
5. COLMAP Format Export:
   - Saves `cameras.txt` (intrinsics)
   - Saves `images.txt` (extrinsics)
   - Creates the `sparse/0/` structure for 3DGS
Output Structure:
```
dataset_custom_scene_ideal_mpc/
├── cameras.txt              # COLMAP camera intrinsics
├── images.txt               # COLMAP camera extrinsics
├── train_index.txt          # Training image list
├── test_index.txt           # Test image list
├── spectrum/                # RF heatmaps
│   ├── frame_0000.png       # Grayscale power map
│   ├── frame_0001.png
│   └── ...
└── sparse/
    └── 0/
        ├── cameras.txt      # Copy of intrinsics
        ├── images.txt       # Copy of extrinsics
        └── points3D.txt     # Dummy file (required by 3DGS)
```
`cameras.txt` format:

```
# Camera list with one line of data per camera:
#   CAMERA_ID, MODEL, WIDTH, HEIGHT, PARAMS[]
1 PINHOLE 800 800 1164.69 1164.69 400.0 400.0
```

`images.txt` format:

```
# Image list with two lines of data per image:
#   IMAGE_ID, QW, QX, QY, QZ, TX, TY, TZ, CAMERA_ID, NAME
#   POINTS2D[] (empty for our case)
1 0.9848 0.0 0.0 0.1736 3.5 2.5 1.5 1 frame_0000.png
2 0.9659 0.0 0.0 0.2588 4.2 3.1 1.8 1 frame_0001.png
...
```
Organize RF dataset into expected structure:
```bash
cd RF-3DGS
python prepare_rf_data.py
```

What it does:
- Creates the `sparse/0/` directory structure
- Copies COLMAP files to the correct locations
- Generates `train_index.txt` and `test_index.txt`
- Creates a dummy `points3D.txt` (required but not used for RF)
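The reorganization is plain file plumbing; the steps above can be sketched as follows (a sketch with assumed paths and a 90/10 split, not the actual script):

```python
import shutil
from pathlib import Path

def prepare_rf_data(root):
    """Arrange an RF dataset directory into the layout 3DGS expects."""
    root = Path(root)
    sparse0 = root / "sparse" / "0"
    sparse0.mkdir(parents=True, exist_ok=True)
    # Copy COLMAP files into the location 3DGS reads them from
    for name in ("cameras.txt", "images.txt"):
        shutil.copy(root / name, sparse0 / name)
    # 3DGS requires points3D.txt to exist, even if empty
    (sparse0 / "points3D.txt").touch()
    # 90/10 train/test split over the rendered RF heatmaps
    frames = sorted(p.name for p in (root / "spectrum").glob("frame_*.png"))
    n_train = int(0.9 * len(frames))
    (root / "train_index.txt").write_text("\n".join(frames[:n_train]))
    (root / "test_index.txt").write_text("\n".join(frames[n_train:]))
```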
First, train on visual RGB images to learn scene geometry:
```bash
cd RF-3DGS
conda activate rf-3dgs

python train.py \
    -s /home/ved/Ved/Project_1/dataset_visual_v2 \
    -m output/visual_model \
    --iterations 30000 \
    --save_iterations 7000 15000 30000
```

Training Parameters:
- Iterations: 30,000 (standard for 3DGS)
- Densification: Every 100 iterations until iteration 15,000
- Opacity reset: Every 3,000 iterations
- Loss: L1 + SSIM (structural similarity)
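The photometric objective in the 3DGS paper is L = (1 − λ)·L1 + λ·D-SSIM with λ = 0.2. A simplified NumPy sketch: the real implementation uses an 11×11 Gaussian-windowed SSIM, while this version uses a single global window for brevity:

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM for images in [0, 1].
    (Actual 3DGS uses an 11x11 Gaussian-windowed SSIM.)"""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def gs_loss(pred, gt, lam=0.2):
    """(1 - lam) * L1 + lam * D-SSIM, as in the 3DGS paper."""
    l1 = np.abs(pred - gt).mean()
    d_ssim = (1.0 - ssim_global(pred, gt)) / 2.0
    return (1 - lam) * l1 + lam * d_ssim
```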
What it does:
- Initialization: Randomly initialize Gaussians in scene bounds
- Optimization: Iteratively optimize:
- Gaussian positions (xyz)
- Gaussian scales (scale)
- Gaussian rotations (quaternions)
- Gaussian opacities (alpha)
- Spherical harmonic coefficients (color)
- Densification: Add/split Gaussians in high-gradient regions
- Pruning: Remove low-opacity Gaussians
Expected Output:
```
output/visual_model/
├── cameras.json
├── cfg_args
├── input.ply                    # Initial point cloud
├── point_cloud/
│   ├── iteration_7000/
│   │   └── point_cloud.ply      # 7K iteration Gaussians
│   ├── iteration_15000/
│   └── iteration_30000/
└── chkpnt30000.pth              # Checkpoint for fine-tuning
```
Monitoring Training:
- Loss should decrease steadily
- PSNR should increase (target: >25 dB for indoor scenes)
- Check `output/visual_model/` for intermediate checkpoints
Fine-tune visual model on RF heatmaps:
```bash
python train.py \
    -s /home/ved/Ved/Project_1/dataset_custom_scene_ideal_mpc \
    -m output/rf_model \
    --images spectrum \
    --start_checkpoint output/visual_model/chkpnt30000.pth \
    --iterations 10000 \
    --save_iterations 3000 7000 10000
```

Key Parameters:
- `--images spectrum`: Use RF heatmaps from the `spectrum/` folder
- `--start_checkpoint`: Initialize from the visual model (transfer learning)
- Fewer iterations (10K) since the geometry is already learned
What it does:
- Load Visual Checkpoint: Initialize Gaussians from Stage 1
- RF Feature Learning: Add RF-specific attributes:
- RF absorption coefficients
- RF scattering properties
- Material-dependent propagation
- Fine-tuning: Optimize for RF prediction:
- Keep geometry mostly fixed
- Learn RF-specific features
- Minimize L1 loss between predicted and true RF heatmaps
Expected Output:
```
output/rf_model/
├── cameras.json
├── cfg_args
├── point_cloud/
│   ├── iteration_3000/
│   ├── iteration_7000/
│   └── iteration_10000/
│       └── point_cloud.ply      # Final RRF model
└── chkpnt10000.pth
```
One-Step Script:
For convenience, use the provided bash script:
```bash
cd RF-3DGS
bash run_rf_reconstruction.sh
```

This script runs both stages sequentially.
Generate predictions for test set:
```bash
# Render visual test views
python render.py \
    -m output/visual_model \
    --iteration 30000

# Render RF test views
python render.py \
    -m output/rf_model \
    --iteration 10000
```

Output Structure:
```
output/visual_model/test/ours_30000/
├── renders/                 # Predicted images
│   ├── 00000.png
│   └── ...
└── gt/                      # Ground truth images
    ├── 00000.png
    └── ...

output/rf_model/test/ours_10000/
├── renders/                 # Predicted RF heatmaps
└── gt/                      # Ground truth RF heatmaps
```
Evaluate reconstruction quality:
```bash
# Visual metrics
python metrics.py -m output/visual_model

# RF metrics
python metrics.py -m output/rf_model
```

Reported Metrics:
- PSNR (Peak Signal-to-Noise Ratio): Higher is better (dB)
- SSIM (Structural Similarity Index): Higher is better (0-1)
- LPIPS (Learned Perceptual Image Patch Similarity): Lower is better
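For reference, PSNR over images scaled to [0, 1] reduces to a one-liner:

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB for images in [0, max_val]."""
    mse = np.mean((pred - gt) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val**2 / mse)
```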
Expected Results:
| Model | PSNR (dB) | SSIM | LPIPS |
|---|---|---|---|
| Visual | 28-32 | 0.92-0.96 | 0.05-0.10 |
| RF | 25-30 | 0.88-0.93 | 0.10-0.20 |
Test RF-based localization using fingerprinting:
```bash
python evaluate_localization.py
```

What it does:
- Loads the RF fingerprint dataset (`rf_dataset.pkl`)
- Extracts features: path gains, delays, power
- Trains a k-NN classifier (k=5)
- Predicts user positions from RF measurements
- Computes localization error (RMSE)

Output:
- Console: Mean/median localization error
- `localization_results.png`: Scatter plot of true vs. predicted positions
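The fingerprinting step can be sketched as k-NN regression over RF feature vectors (pure NumPy; the actual script's feature extraction from `rf_dataset.pkl` is omitted and the names are illustrative):

```python
import numpy as np

def knn_localize(train_feats, train_pos, query_feats, k=5):
    """Predict positions as the mean of the k nearest fingerprints (L2)."""
    # Pairwise distances, shape (n_query, n_train)
    d = np.linalg.norm(query_feats[:, None, :] - train_feats[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]   # indices of the k nearest fingerprints
    return train_pos[idx].mean(axis=1)   # average their known positions

def rmse(pred_pos, true_pos):
    """Root-mean-square localization error in meters."""
    return np.sqrt(np.mean(np.sum((pred_pos - true_pos) ** 2, axis=1)))
```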
View reconstructed RRF in WebGL viewer:
```bash
cd RF-3DGS/SIBR_viewers

# Build viewer (first time only)
cmake -B build -S . -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j

# Launch viewer
./build/bin/SIBR_gaussianViewer_app \
    -m ../output/rf_model \
    --iteration 10000
```

Viewer Controls:
- Mouse: Rotate view
- WASD: Move camera
- Q/E: Up/down
- Scroll: Zoom
- Tab: Toggle UI
- Space: Screenshot
Create flythrough video:
```bash
python make_video.py \
    --input output/rf_model/test/ours_10000/renders \
    --output rf_reconstruction.mp4 \
    --fps 30
```

Options:
- `--method opencv`: Use OpenCV (faster)
- `--method ffmpeg`: Use FFmpeg (better quality)
- Scene: 7m × 5m × 3m room with furniture
- Training: 270 images, 800×800 resolution
- Quality: PSNR ~30 dB, SSIM ~0.94
- Frequency: 28 GHz (mmWave 5G)
- Transmitter: Wall-mounted at (6.0, 2.5, 2.5)
- Coverage: Successfully predicts RF heatmaps at novel viewpoints
- Localization: ~0.5m average error using RF fingerprinting
- Visual pre-training is crucial: Starting from random initialization fails for RF
- Material modeling matters: Concrete vs glass vs metal have distinct RF signatures
- Multi-path propagation: Model captures reflections, diffractions around furniture
- Generalization: RRF generalizes to unseen camera positions
```
RuntimeError: CUDA out of memory
```

Solution:
- Reduce `RESOLUTION` to 512 or 640
- Reduce `NUM_IMAGES` to 200
- Use `--densify_grad_threshold 0.0003` (more aggressive pruning)
```
AttributeError: 'Scene' object has no attribute 'mi_scene'
```
Solution:
- Ensure Sionna 0.18+ is installed
- Check PLY file format (must be binary little-endian)
- Verify mesh normals are consistent
Blender hangs or renders very slowly
Solution:
- Enable GPU in Blender preferences: Edit → Preferences → System → CUDA/OptiX
- Reduce `scene.cycles.samples` to 64
- Disable denoising: `scene.cycles.use_denoising = False`
Loss increases or NaN loss
Solution:
- Check camera poses (visualize with `debug_scene.py`)
- Ensure the proper camera coordinate system (OpenGL convention)
- Reduce the learning rate: `--position_lr_init 0.00008`
```
RuntimeError: Could not find cameras.txt
```
Solution:
- Run `prepare_rf_data.py` to create the `sparse/0/` structure
- Check file paths in `cameras.txt` and `images.txt`
- Ensure `points3D.txt` exists (even if empty/dummy)
- 3D Gaussian Splatting - Kerbl et al. (2023)
  - Paper
  - Original 3DGS implementation
- RF-3DGS - Wang et al. (2024)
  - GitHub
  - Radio-frequency extension of 3DGS
- Sionna RT - Hoydis et al. (2023)
  - Documentation
  - Differentiable ray-tracing for wireless
- Blender - https://www.blender.org/
- COLMAP - https://colmap.github.io/
- PyTorch - https://pytorch.org/
- TensorFlow - https://www.tensorflow.org/
Contributions welcome! Please open issues for bugs or feature requests.
This project uses code from:
- RF-3DGS: BSD 3-Clause License
- Sionna: Apache 2.0 License
- 3D Gaussian Splatting: Original license (Inria)
See LICENSE files in respective directories.
Ved - RF-RRF Reconstruction Pipeline
- RF-3DGS authors for the RRF framework
- NVIDIA Sionna team for RT ray-tracing
- Inria for original 3D Gaussian Splatting
- Blender Foundation for rendering tools
Last Updated: February 2026