A comprehensive system for generating and optimizing cinematographic trajectories for autonomous quadrotor drones. This repository integrates EGO-Planner with cinematographic constraints to produce smooth, visually appealing flight paths that maintain proper framing, focus distance, and lighting considerations. Environment sensing incorporates image perception models and gyroscopic input.
CINE extends the EGO-Planner trajectory planning framework with cinematographic constraints to enable autonomous drone cinematography. The system modifies B-spline trajectories generated by EGO-Planner to incorporate:
- Focus Distance Control: Maintains optimal camera-to-subject distance for proper focus
- Framing Constraints: Ensures subjects remain properly framed using rule-of-thirds composition
- Lighting Awareness: Adjusts trajectories to avoid poor lighting conditions
- Smooth Motion: Applies velocity and acceleration limits for cinematic motion quality
- Dynamic Waypoint Generation: Generates waypoints that optimize cinematographic cost functions
- Baseline Trajectories: Standard EGO-Planner trajectories without cinematographic constraints
- Constrained Trajectories: EGO-Planner trajectories modified with cinematographic constraints
- B-spline Optimization: Real-time trajectory adjustment using B-spline control point manipulation
- Dynamic Time Scaling: Adjusts trajectory timing to respect velocity and acceleration limits
- Focus Distance: Maintains target focus distance (default: 1.5m) with configurable tolerance
- Framing: Rule-of-thirds composition with configurable anchor points
- Lighting: Avoids poor lighting conditions by adjusting yaw and trajectory
- Smoothness: Penalizes abrupt changes in viewing direction and motion
- Automated Data Collection: Scripts for collecting baseline and constrained trajectory data
- Trajectory Statistics: Analysis tools for computing dataset-level statistics
- Trajectory Averaging: Tools for resampling and averaging multiple trajectory runs
- CSV Export: Structured data format for post-processing and analysis
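As a concrete illustration of the cost terms above, here is a minimal sketch of a combined cinematographic cost. The function name, weights, and the normalized-image-coordinate convention are illustrative assumptions, not the repository's actual API:

```python
import numpy as np

def cinematographic_cost(subject_px, depth_m, accel,
                         anchor_px=(1.0 / 3.0, 1.0 / 3.0),
                         focus_m=1.5,
                         weights=(1.0, 1.0, 0.1)):
    """Toy combined cost: framing + focus distance + smoothness.

    subject_px: subject position in normalized image coordinates [0, 1]^2
    depth_m:    camera-to-subject distance (m)
    accel:      drone acceleration vector (m/s^2)
    """
    w_frame, w_focus, w_smooth = weights
    # Framing: squared offset from a rule-of-thirds anchor point
    frame = np.sum((np.asarray(subject_px) - np.asarray(anchor_px)) ** 2)
    # Focus: squared deviation from the desired focus distance (default 1.5 m)
    focus = (depth_m - focus_m) ** 2
    # Smoothness: penalize large accelerations (abrupt motion)
    smooth = np.sum(np.asarray(accel) ** 2)
    return w_frame * frame + w_focus * focus + w_smooth * smooth

# A subject sitting exactly on the thirds anchor at the focus distance,
# flying with zero acceleration, incurs zero cost.
print(cinematographic_cost((1 / 3, 1 / 3), 1.5, (0.0, 0.0, 0.0)))  # → 0.0
```

Any deviation from the anchor, the focus distance, or smooth motion raises the cost, which is what the waypoint generator then minimizes.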
- EGO-Planner Integration (`ego-planner-swarm/`)
  - Base trajectory planning using ESDF-free gradient-based local planning
  - B-spline trajectory representation and optimization
  - Path searching and collision avoidance
- Cinematographic Adjustment Agent (`cinematographic_adjustment_agent/`)
  - Cinematic B-spline Filter: Modifies EGO-Planner trajectories with cinematographic constraints
  - Cost Calculator: Computes framing, depth, lighting, and smoothness costs
  - Waypoint Generator: Generates cinematographic waypoints using cost optimization
  - State Tracker: Tracks drone state and target information
- Depth Estimation (`depth_publisher/`)
  - Depth image processing using Depth-Anything with CUDA acceleration
  - Provides depth information for focus distance calculations
- Object Detection (`cinematographic_adjustment_agent/object_detection/`)
  - Simulated object detector for target tracking
  - Simulated focus/light publishers for testing without real sensors
- Data Collection (`cinematographic_adjustment_agent/data_collection/`)
  - Trajectory data logger for recording baseline and constrained trajectories
  - CSV export with position, velocity, acceleration, and timing data
```
EGO-Planner → B-spline Trajectory → Cinematic B-spline Filter → Constrained Trajectory
                                            ↓
                  Cost Calculator (framing, depth, lighting, smoothness)
                                            ↓
                      Waypoint Generator → Optimized Waypoints
```
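The dynamic time scaling step can be sketched briefly: stretching the time axis by a factor s divides finite-difference velocities by s and accelerations by s², so one closed-form factor enforces both limits at once. The function and limit values below are an illustrative sketch, not the planner's actual implementation:

```python
import numpy as np

def time_scale(times, positions, v_max=2.0, a_max=1.0):
    """Uniformly stretch a trajectory's time axis until finite-difference
    velocity and acceleration stay within the given limits
    (v_max/a_max values here are illustrative)."""
    t = np.asarray(times, dtype=float)
    p = np.asarray(positions, dtype=float)
    vel = np.gradient(p, t, axis=0)          # finite-difference velocity
    acc = np.gradient(vel, t, axis=0)        # finite-difference acceleration
    v_peak = np.max(np.linalg.norm(vel, axis=1))
    a_peak = np.max(np.linalg.norm(acc, axis=1))
    # Under t -> s*t, velocity scales as 1/s and acceleration as 1/s^2,
    # so the larger of the two required factors satisfies both limits.
    s = max(1.0, v_peak / v_max, np.sqrt(a_peak / a_max))
    return t * s
```

For example, a straight-line segment flown at 4 m/s against a 2 m/s limit gets its timestamps stretched by exactly 2, halving the commanded speed without changing the geometric path.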
```
CINE/
├── trajectory-optimization_ws/              # ROS 2 workspace
│   ├── src/
│   │   ├── ego-planner-swarm/               # EGO-Planner trajectory planner
│   │   ├── cinematographic_adjustment_agent/ # Cinematographic constraints
│   │   ├── depth_publisher/                 # Depth estimation
│   │   └── cinematography_msgs/             # Custom ROS 2 messages
│   ├── collect_baseline_data.sh             # Baseline data collection script
│   ├── collect_constrained_data.sh          # Constrained data collection script
│   └── start_simulation.sh                  # Simulation launcher
├── scripts/                                 # Analysis scripts
│   ├── trajectory_stats.py                  # Compute trajectory statistics
│   └── trajectory_mean.py                   # Average multiple trajectories
├── trajectory_data/                         # Collected trajectory data
│   ├── baseline/                            # Baseline trajectory CSVs
│   └── cinematic/                           # Constrained trajectory CSVs
├── configs/                                 # Configuration files
│   └── user_params/                         # User-configurable parameters
└── reference.bib                            # Bibliography of referenced works
```
- Ubuntu 24.04 (or a compatible Linux distribution)
- ROS 2 Jazzy (or a compatible release)
- Python 3.8+
- CUDA-capable GPU (for depth estimation)
- PCL (Point Cloud Library): For 3D point cloud processing
- VTK: Visualization toolkit (PCL dependency)
- Eigen3: Linear algebra library
- OpenCV: Computer vision library
- CycloneDDS: DDS middleware (recommended over FastDDS for performance)
- `rclcpp`, `rclpy`: ROS 2 C++ and Python client libraries
- `geometry_msgs`, `nav_msgs`, `visualization_msgs`: Standard ROS 2 message types
- `cv_bridge`: OpenCV-ROS bridge
- `pcl_conversions`: PCL-ROS conversions
Follow the official ROS 2 installation guide for your distribution:

```bash
# For Ubuntu 24.04 (Jazzy)
sudo apt install ros-jazzy-desktop
sudo apt install ros-jazzy-rmw-cyclonedds-cpp
echo "export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp" >> ~/.bashrc
source ~/.bashrc
```

Install the system dependencies:

```bash
sudo apt install \
  libpcl-dev \
  libvtk9-dev \
  libeigen3-dev \
  libopencv-dev \
  python3-pip
```

Build the workspace:

```bash
cd trajectory-optimization_ws
source /opt/ros/jazzy/setup.bash
colcon build
source install/setup.bash
```

To collect baseline data (standard EGO-Planner trajectories without cinematographic constraints):

```bash
cd trajectory-optimization_ws
./collect_baseline_data.sh
```

Data is saved to `trajectory_data/baseline_*.csv`.
To collect constrained data (EGO-Planner trajectories modified with cinematographic constraints):

```bash
cd trajectory-optimization_ws
# Optionally configure parameters in collect_constrained_data.env
./collect_constrained_data.sh
```

Data is saved to `trajectory_data/cinematic_*.csv`.

Configuration: Edit `collect_constrained_data.env` to adjust:

- `USE_SIMULATED_FOCUS`: Use simulated focus (default: 1)
- `USE_SIMULATED_LIGHT`: Use simulated lighting (default: 1)
- `TARGET_OFFSET_XYZ`: Target position offset
- `FOCUS_DEPTH_M`: Desired focus distance in meters
- `DRONE_ID`: Drone identifier
Generate summary statistics for trajectory datasets:
```bash
python scripts/trajectory_stats.py \
  --trajectory-root trajectory_data \
  --datasets baseline cinematic \
  --output-dir trajectory_data
```

Outputs: `baseline_stats.csv`, `cinematic_stats.csv`
Resample trajectories onto a uniform time grid and compute mean trajectories:
```bash
python scripts/trajectory_mean.py \
  --trajectory-root trajectory_data \
  --datasets baseline cinematic \
  --time-step 0.005 \
  --output-dir trajectory_data
```

Outputs: `baseline_mean_std_trajectory.csv`, `cinematic_mean_std_trajectory.csv`
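The resample-and-average step can be sketched with numpy as follows; `mean_trajectory` and its `(time, xyz)` input format are illustrative assumptions, so see `trajectory_mean.py` for the actual implementation:

```python
import numpy as np

def mean_trajectory(runs, time_step=0.005):
    """Resample several (time, positions) runs onto a shared uniform time
    grid and return per-sample mean and standard deviation.
    runs: list of (t, p) pairs, t of shape (N,), p of shape (N, 3)."""
    t_end = min(t[-1] for t, _ in runs)               # common time span
    grid = np.arange(0.0, t_end + 1e-9, time_step)    # uniform grid
    # Linearly interpolate each run's x/y/z onto the grid, then stack.
    resampled = np.stack([
        np.column_stack([np.interp(grid, t, p[:, k]) for k in range(3)])
        for t, p in runs
    ])
    return grid, resampled.mean(axis=0), resampled.std(axis=0)
```

Truncating to the shortest run before resampling avoids extrapolating beyond recorded data; runs of different lengths therefore only contribute over their shared time span.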
Baseline simulation:

```bash
cd trajectory-optimization_ws
./start_simulation.sh baseline
```

Constrained simulation:

```bash
cd trajectory-optimization_ws
./start_simulation.sh constrained
```

Edit `configs/user_params/constants.py` to adjust:
- Focus Constraints: `DESIRED_FOCUS_DISTANCE_M`, `FOCUS_DISTANCE_MIN_M`
- Framing: `THIRDS_ANCHOR` (rule-of-thirds anchor points)
- Image Parameters: `IMAGE_WIDTH`, `IMAGE_HEIGHT`
Edit `cinematographic_adjustment_agent/config/constants.py` to adjust:

- Velocity Limits: `FOCUS_MAX_VELOCITY_MPS`, `DEFAULT_MAX_VELOCITY_MPS`
- Acceleration Limits: `FOCUS_MAX_ACCEL_MPS2`, `DEFAULT_MAX_ACCEL_MPS2`
- Yaw Constraints: `YAWDOT_MAX`, `YAWDOT_MAX_FRAMING`
- Lighting: `LIGHT_AVOID_GAIN`, `LIGHT_INFLUENCE_DISTANCE_M`
Trajectory CSV files contain the following columns:
- `time`: Timestamp (seconds)
- `pos_x`, `pos_y`, `pos_z`: Position (meters)
- `vel_x`, `vel_y`, `vel_z`: Velocity (m/s)
- `acc_x`, `acc_y`, `acc_z`: Acceleration (m/s²)
- `yaw`: Yaw angle (radians)
- Additional metadata columns as needed
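A quick way to sanity-check a recorded run against this format is to load it with pandas; `summarize_run` is an illustrative helper, not part of the repository:

```python
import numpy as np
import pandas as pd

def summarize_run(df: pd.DataFrame) -> dict:
    """Summary statistics for one trajectory CSV
    (column names follow the data format listed above)."""
    speed = np.linalg.norm(df[["vel_x", "vel_y", "vel_z"]].to_numpy(), axis=1)
    return {
        "samples": len(df),
        "duration_s": float(df["time"].iloc[-1] - df["time"].iloc[0]),
        "peak_speed_mps": float(speed.max()),
    }

# e.g. summarize_run(pd.read_csv("trajectory_data/baseline/baseline_001.csv"))
# (the filename here is illustrative)
```

Comparing `peak_speed_mps` between baseline and cinematic runs is a quick check that the velocity limits of the constrained trajectories are actually being respected.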
- EGO-Planner: ESDF-free gradient-based local planner for quadrotors
- ROS 2: Robot Operating System framework
- B-splines: Trajectory representation using uniform B-splines
- L-BFGS: Limited-memory BFGS optimization algorithm
- PCL: Point Cloud Library for 3D processing
- OpenCV: Computer vision and image processing
- Eigen: C++ linear algebra library
See reference.bib for complete bibliography of referenced works.
See LICENSE file for details.
If you use this work, please cite:
- EGO-Planner: Zhou et al., "EGO-Planner: An ESDF-free Gradient-based Local Planner for Quadrotors," IEEE Robotics and Automation Letters, 2021
- See `reference.bib` for additional citations
This repository is part of research on autonomous cinematography for drones. For questions or contributions, please refer to the project maintainers.