A comprehensive Python-based LIDAR point cloud processing and visualization system for analyzing roadside vehicle scans. IRIS processes E57, PCD, and ROS bag files to detect vehicles, extract interior spaces, perform sensor fusion with camera data, and enable interactive 3D analysis.
- Multi-Format Point Cloud Loading: E57, PCD, and ROS bag file support
- Vehicle Detection & Analysis: Automated vehicle identification using DBSCAN clustering
- Interior Extraction: 3D occupancy grid-based interior space detection
- Interactive 3D Visualization: PyVista/VTK-based real-time rendering
- Sensor Fusion: Camera-LIDAR fusion with YOLO object detection
- Vehicle Tracking: Multi-frame tracking with trajectory analysis
- Human Model Positioning: Interactive human model placement in point cloud scenes
- Ground plane separation (RANSAC-based)
- Vehicle clustering and identification
- Cockpit and dashboard detection
- Interior point extraction
- Spatial analysis with voxel grids
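The ground-separation and clustering stages above can be sketched in a few lines. This is an illustrative stand-in, not IRIS's implementation: IRIS uses RANSAC plane fitting for ground separation, while the sketch below uses a simple height threshold; the DBSCAN parameters match the documented defaults (`eps=0.3`, `min_samples=50`):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def separate_ground(points, height_threshold=0.5):
    """Split ground from non-ground by height.
    (A stand-in for the RANSAC plane fit IRIS actually uses.)"""
    ground = points[:, 2] < height_threshold
    return points[ground], points[~ground]

def cluster_vehicles(points, eps=0.3, min_samples=50):
    """Group non-ground points into candidate vehicles with DBSCAN;
    label -1 marks noise and is discarded."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return {int(l): points[labels == l] for l in set(labels) if l != -1}
```

Each returned cluster is a candidate vehicle; downstream stages then filter by extent (e.g. a maximum height) and extract interior points.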
- GUI Mode: Full-featured Tkinter interface
- CLI Mode: Interactive command-line interface
- Headless Mode: Automated batch processing
- Installation
- Quick Start
- Usage
- Application Modes
- Sensor Fusion & Tracking
- Demo Applications
- Project Structure
- Configuration
- Datasets
- Development
- Troubleshooting
- License
- Python 3.11 (required - Python 3.12+ not yet supported)
- Poetry package manager
- macOS, Linux, or Windows (macOS requires special VTK handling)
Poetry is used for dependency management. Install it using one of these methods:
```bash
# macOS/Linux
curl -sSL https://install.python-poetry.org | python3 -
```

```powershell
# Windows (PowerShell)
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | py -
```

```bash
# Any platform, via pip
pip install poetry
```

Verify installation:

```bash
poetry --version
```

Clone the repository and install the core dependencies:

```bash
git clone https://github.com/yourusername/iris.git
cd iris

# Install core dependencies
poetry install
```

Optional extras:

```bash
# ROS bag file support
poetry install -E ros

# PointTransformerV3 deep learning support
poetry install -E ptv3

# MMDetection3D support (not compatible with Apple Silicon)
poetry install -E mmdet3d

# Install all optional dependencies
poetry install -E all
```

Activate the Poetry virtual environment:

```bash
poetry shell
```

Or run commands directly:

```bash
poetry run python src/launcher.py
```

Launch the application:

```bash
python src/launcher.py
```

This opens the main graphical interface where you can:
- Load point cloud files (E57, PCD, ROS bag)
- Run vehicle analysis
- Visualize results in 3D
- Perform interactive cube selection
- Position human models
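Cube selection amounts to an axis-aligned crop of the point cloud. A minimal non-interactive equivalent (illustrative only, not IRIS's actual implementation):

```python
import numpy as np

def crop_cube(points, center, size):
    """Keep only points inside an axis-aligned cube of edge length `size`
    centred at `center` -- the kind of region the Cube Selection tab
    selects interactively (hypothetical helper, not IRIS's code)."""
    half = size / 2.0
    lo = np.asarray(center) - half
    hi = np.asarray(center) + half
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]
```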
Run a headless analysis of a single file:

```bash
python src/launcher.py --analysis-only data/your_file.e57
```

Start the interactive command-line interface:

```bash
python src/launcher.py --mode cli
```

For optimal VTK stability on macOS:

```bash
python src/macos_launcher.py
```

The GUI mode provides a comprehensive graphical interface with tabs for:
- File Operations: Load and preview point cloud files
- Vehicle Analysis: Automated processing pipeline
- Cube Selection: Interactive 3D region selection
- Human Positioning: Position human models in scenes
- Results: View and export analysis results
```bash
# Launch GUI (all equivalent)
python src/launcher.py
python src/launcher.py --mode gui
poetry run python src/launcher.py
```

Interactive command-line interface for terminal-based workflows:

```bash
python src/launcher.py --mode cli
```

Features:
- File selection and loading
- Analysis execution
- Parameter configuration
- Result inspection
Automated processing without GUI (ideal for batch processing):
```bash
# Process a single file
python src/launcher.py --analysis-only path/to/file.e57

# Full headless mode with custom parameters
python src/launcher.py --mode headless --input data/scan.e57 --output results/
```

Launcher usage:

```
python src/launcher.py [OPTIONS]

Options:
  --mode {gui,cli,headless}  Launch mode (default: gui)
  --cli                      Launch CLI mode (shortcut)
  --analysis-only FILE       Headless analysis of a single file
  --input PATH               Input file path
  --output PATH              Output directory
  --cube-editor              Launch cube selection tool
  --human-positioner         Launch human positioning tool
  --help                     Show help message
```

IRIS includes advanced sensor fusion capabilities for camera-LIDAR integration.
Combines YOLO object detection with LIDAR point clouds:
```bash
# Basic sensor fusion
python src/sensor_fusion.py --sequence seq-53 --device 105

# With visualization
python src/sensor_fusion.py --sequence seq-53 --device 105 --visualize

# Process all frames
python src/run_all_sensor_fusion.py --sequence seq-53
```

Features:
- YOLO-based car detection in camera images
- Camera-LIDAR calibration and projection
- 3D bounding box generation
- Point cloud association with detections
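The core of camera-LIDAR fusion is projecting 3D points into the image plane and testing them against detection boxes. A minimal pinhole-model sketch (the extrinsic `T_cam_from_lidar` and intrinsic `K` come from calibration; the values in the comments are hypothetical, and this is not IRIS's actual code):

```python
import numpy as np

def project_to_image(points_lidar, T_cam_from_lidar, K):
    """Project LIDAR points to pixel coordinates with a pinhole model.
    T_cam_from_lidar: 4x4 extrinsic transform; K: 3x3 camera intrinsics."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0          # drop points behind the camera
    uv = (K @ pts_cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide
    return uv, in_front

def points_in_bbox(uv, bbox):
    """Boolean mask of projected points inside a YOLO box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = bbox
    return (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
```

Points whose projections fall inside a detection box are associated with that detection, and their 3D extent yields the bounding box.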
Multi-frame tracking with trajectory analysis:
```bash
# Basic tracking
python src/car_tracking.py --sequence seq-53 --device 105

# Enhanced tracking with 3D models
python src/enhanced_car_tracking.py --sequence seq-53 --max_frames 10

# Temporal analysis
python src/demo_temporal_analysis.py
```

Features:
- Multi-frame vehicle tracking
- Trajectory analysis
- Speed estimation
- 3D vehicle model construction
- Movement pattern analysis
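At its simplest, multi-frame tracking associates each existing track with the nearest detection centroid in the next frame, and speed follows from centroid displacement over the frame interval. A greedy sketch (a simplification; production trackers typically add Hungarian matching and motion prediction, and this is not IRIS's implementation):

```python
import numpy as np

def associate(prev_centroids, detections, max_dist=2.0):
    """Greedy nearest-centroid association between consecutive frames.
    prev_centroids: {track_id: centroid}; detections: list of centroids."""
    matches, used = {}, set()
    for tid, prev in prev_centroids.items():
        dists = [np.inf if i in used else np.linalg.norm(det - prev)
                 for i, det in enumerate(detections)]
        if dists and min(dists) <= max_dist:
            best = int(np.argmin(dists))
            matches[tid] = best
            used.add(best)
    return matches

def speed_mps(prev, curr, dt):
    """Speed estimate from two centroids observed dt seconds apart."""
    return float(np.linalg.norm(np.asarray(curr) - np.asarray(prev)) / dt)
```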
```bash
# Quick sensor fusion test
python src/quick_sensor_fusion_test.py

# Batch processing
python src/simple_batch_fusion.py
```

Demonstrates interior analysis and human detection:
```bash
# Interactive demo launcher
python src/launch_passenger_demo.py

# Standalone demo
python src/passenger_demo_standalone.py

# Animated demonstration
python src/passenger_detection_animation.py
```

Generate professional visualizations for presentations:

```bash
python src/sales_deck_launcher.py --sequence seq-53 --device 105 --max_frames 8
```

Creates:
- LIDAR scene overviews
- Vehicle detection visualizations
- Tracking trajectory plots
- 3D model reconstructions
Analyze temporal sequences of LIDAR scans:
```bash
python src/sequence_lidar_analysis.py --sequence seq-53
python src/sequence_fusion.py --sequence seq-53 --device 105
```

```
iris/
├── src/
│   ├── launcher.py                 # Universal launcher (entry point)
│   ├── macos_launcher.py           # macOS-optimized launcher
│   ├── lidar_gui_app.py            # Legacy GUI application
│   │
│   ├── core/                       # Core launcher modules
│   │   ├── gui_launcher.py         # GUI mode implementation
│   │   ├── cli_launcher.py         # CLI mode implementation
│   │   ├── headless_launcher.py    # Headless mode implementation
│   │   ├── launcher_base.py        # Base launcher interface
│   │   ├── dependency_checker.py
│   │   └── environment_setup.py
│   │
│   ├── point_cloud/                # Point cloud processing
│   │   ├── loaders/                # File format loaders
│   │   ├── processors/             # Processing stages
│   │   ├── pipeline/               # Analysis pipeline
│   │   ├── services/               # High-level services
│   │   ├── config.py               # Configuration constants
│   │   ├── vtk_utils.py            # VTK safety management
│   │   └── error_handling.py       # Error handling
│   │
│   ├── platforms/                  # Platform-specific code
│   │
│   ├── sensor_fusion.py            # Camera-LIDAR fusion
│   ├── car_tracking.py             # Vehicle tracking
│   ├── enhanced_car_tracking.py    # Advanced tracking
│   └── [demo scripts...]           # Various demo applications
│
├── data/                           # Data files and datasets
│   └── README.md                   # Dataset references
│
├── pyproject.toml                  # Poetry dependencies
├── workspace.dsl                   # Architecture documentation (Structurizr)
└── yolov8n.pt                      # YOLO model weights
```
Edit src/point_cloud/config.py to adjust:
```python
class AnalysisConfig:
    # Ground separation
    GROUND_HEIGHT_THRESHOLD = 0.5   # meters
    GROUND_PLANE_TOLERANCE = 0.1    # meters

    # Vehicle identification
    DBSCAN_EPS = 0.3                # clustering radius
    DBSCAN_MIN_SAMPLES = 50         # minimum cluster size
    VEHICLE_MAX_HEIGHT = 3.0        # meters

    # Interior detection
    GRID_RESOLUTION = 0.1           # voxel size (10 cm)
    INTERIOR_THRESHOLD_3D = 2       # distance threshold


class RenderConfig:
    POINT_SIZE = 2.0
    BACKGROUND_COLOR = (0.1, 0.1, 0.1)
    WINDOW_SIZE = (1920, 1080)
    CAMERA_POSITION = 'xy'
```

IRIS supports various public roadside LIDAR datasets. See data/README.md for:
- DAIR-V2X (China): Vehicle-infrastructure cooperative dataset
- TAMU (USA): Roadside LIDAR dataset
- A9 Intersection (Germany): Providentia dataset
- DAIR-RCooper (China): Roadside cooperative dataset
- FLIR Thermal: Thermal camera datasets
Place your dataset files in the data/ directory.
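As a quick sanity check on a freshly downloaded dataset, a minimal ASCII `.pcd` reader can be sketched as follows. This is a hypothetical helper for inspection only; IRIS's real loaders in src/point_cloud/loaders/ also handle E57, binary PCD, and ROS bags:

```python
import numpy as np

def load_ascii_pcd(path):
    """Read the x/y/z columns of an ASCII .pcd file into an (N, 3) array.
    Illustrative only -- binary PCDs require a full parser."""
    with open(path) as f:
        lines = f.readlines()
    # The header ends at the DATA line; points follow, one per line.
    data_idx = next(i for i, line in enumerate(lines) if line.startswith("DATA"))
    if "ascii" not in lines[data_idx]:
        raise ValueError("binary PCD -- use a full loader instead")
    return np.loadtxt(lines[data_idx + 1:])
```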
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=src --cov-report=html

# Run specific test file
pytest src/point_cloud/test_service_separation.py

# View coverage report
open htmlcov/index.html      # macOS
xdg-open htmlcov/index.html  # Linux
```

```bash
# Format code
black src/

# Lint code
flake8 src/

# Type checking
mypy src/

# Run pre-commit hooks
pre-commit install
pre-commit run --all-files
```

The project includes comprehensive C4 model architecture diagrams in Structurizr DSL format:

- workspace.dsl - Complete system architecture
- point_cloud_architecture.dsl - Point cloud module details
- point_cloud_processing_pipeline.dsl - Processing pipeline

View these using Structurizr Lite or the online viewer.
Symptom: Application crashes with segmentation fault during visualization
Solutions:

- Use the macOS launcher: `python src/macos_launcher.py`
- Ensure the VTK environment is properly initialized
- Check that all VTK resources are cleaned up properly
Symptom: Installation fails or dependencies conflict
Solution: Ensure you're using Python 3.11:
```bash
python --version  # Should show 3.11.x
poetry env use python3.11
poetry install
```

Symptom: Import errors for optional packages

Solution: Install the appropriate extras:

```bash
poetry install -E all  # Install all optional dependencies
```

Symptom: OpenCV errors or crashes

Solution: The project pins OpenCV to <4.10 for compatibility. If you have a newer version globally, ensure you're using the Poetry environment:

```bash
poetry shell
python -c "import cv2; print(cv2.__version__)"
```

Symptom: Out of memory errors
Solutions:
- Increase downsampling in preprocessing
- Process files in headless mode
- Adjust GRID_RESOLUTION in config for coarser voxels

macOS:

- Use src/macos_launcher.py for best stability
- VTK requires special environment variables (handled automatically)
- Some deep learning features may not work on Apple Silicon

Linux:

- Ensure graphics drivers are up to date for VTK rendering
- May need to install system packages for VTK/OpenGL

Windows:

- VTK rendering may require additional DirectX configuration
- Use WSL2 for better compatibility if issues arise
- Check the CLAUDE.md developer documentation
- Review architecture diagrams in the .dsl files
- Examine test files for usage examples
- Open an issue on GitHub
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes following the code style (Black formatting)
- Add tests for new functionality
- Ensure all tests pass (`pytest`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with PyVista for 3D visualization
- Uses scikit-learn for clustering algorithms
- YOLO object detection via Ultralytics
- Point cloud processing with NumPy and SciPy
- E57 file support via pye57
If you use IRIS in your research, please cite:
```bibtex
@software{iris2025,
  title  = {IRIS: Integrated Roadside Intelligence System},
  author = {Nenad Cetic and Dimitrije Stojanovic and Sladjana Simic and Teodora Mijovic},
  year   = {2025},
  url    = {https://github.com/chedasdp/iris}
}
```

Status: Active Development | Version: 0.1.0 | Updated: 2025