Source code for the Science Advances paper "Motion Hologram: Jointly optimized hologram generation and motion planning for photorealistic 3D displays via reinforcement learning".
Zhenxing Dong, Yuye Ling, Yan Li, and Yikai Su
In the paper, we generate the focal stack from RGB images using DepthAnything and DeepFocus. The mathematical model in Holographic Parallax can also be used to synthesize the focal stack. We sincerely thank the authors for sharing their code.
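If you only need a quick approximation for testing, a focal stack can also be sketched directly from an RGB image and a normalized depth map via depth-dependent blurring. The snippet below is a minimal illustration, not the DeepFocus pipeline; the file names, number of focal planes, and blur scale are placeholder assumptions.

# Minimal sketch: approximate a focal stack via depth-dependent blur.
# File names, num_planes, and max_sigma are illustrative assumptions.
import numpy as np
import imageio.v2 as imageio
from scipy.ndimage import gaussian_filter

rgb = imageio.imread("example_rgb.png").astype(np.float32) / 255.0   # H x W x 3
depth = imageio.imread("example_depth.png").astype(np.float32)       # H x W
depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)   # -> [0, 1]

num_planes = 8     # assumed number of focal planes
max_sigma = 5.0    # assumed maximum defocus blur in pixels
levels = np.linspace(0.0, max_sigma, 6)
blurred = np.stack([gaussian_filter(rgb, sigma=(s, s, 0)) for s in levels])

focal_stack = []
for k in range(num_planes):
    focus_depth = k / (num_planes - 1)
    sigma_map = max_sigma * np.abs(depth - focus_depth)   # blur grows with defocus
    idx = np.rint(sigma_map / max_sigma * (len(levels) - 1)).astype(int)
    plane = np.take_along_axis(blurred, idx[None, ..., None], axis=0)[0]
    focal_stack.append(plane)
focal_stack = np.stack(focal_stack)   # num_planes x H x W x 3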
Replace the parameters according to your holographic setup, then run:
python ./codes/test_stage2/main.py --channel=0 --data_dir=./data/example_input --hologram_dir=./hologram
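After inference, you can sanity-check a saved phase pattern by numerically propagating it with the angular spectrum method, as in neural-3d-holography. A minimal sketch, assuming a single-channel 8-bit phase image; the file name, wavelength, pixel pitch, and distance are placeholders for your setup.

# Minimal sketch: preview a phase-only hologram via angular spectrum
# propagation. All optical parameters below are placeholder assumptions.
import torch
import imageio.v2 as imageio

wavelength = 638e-9   # red (--channel=0), assumed
pitch = 8e-6          # SLM pixel pitch, assumed
z = 10e-3             # propagation distance, assumed

phase = torch.tensor(imageio.imread("./hologram/red.png"), dtype=torch.float32)
phase = phase / 255.0 * 2 * torch.pi            # 8-bit image -> [0, 2*pi)
field = torch.exp(1j * phase)

H, W = field.shape
fy = torch.fft.fftfreq(H, d=pitch)
fx = torch.fft.fftfreq(W, d=pitch)
FY, FX = torch.meshgrid(fy, fx, indexing="ij")
arg = 1.0 / wavelength**2 - FX**2 - FY**2       # squared spatial-frequency budget
kz = 2 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))
Hz = torch.exp(1j * z * kz) * (arg > 0).to(torch.complex64)  # drop evanescent waves

recon = torch.fft.ifft2(torch.fft.fft2(field) * Hz)
intensity = recon.abs() ** 2
imageio.imwrite("recon_preview.png",
                (255 * intensity / intensity.max()).to(torch.uint8).numpy())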
You can jointly optimize motion hologram generation (or another environment) and motion planning (or other values) for your own task. Replace the parameters according to your setup, then run:
python ./codes/train_stage1/train.py --channel=0 --image_dir=./data/focal_stack --depth_dir=./data/depth
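Conceptually, training couples a planner (which proposes the motion) with a holographic environment (which renders the motion hologram and scores the reconstruction). The actual optimization uses PPO (see PPO-PyTorch below); the toy loop here only illustrates the environment/policy split, with hypothetical names (PolicyNet, step_env) and a dummy reward.

# Toy illustration of the environment/policy split; not the PPO
# training used in the paper. PolicyNet, step_env, and the reward
# are hypothetical stand-ins.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, state_dim=4, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                 nn.Linear(64, action_dim))
    def forward(self, state):
        return torch.tanh(self.net(state))       # motion step in [-1, 1]

def step_env(state, action):
    # Placeholder environment: in the real code this is the hologram
    # generation pipeline, and the reward is the reconstruction quality.
    next_state = state + torch.cat([action, torch.zeros(2)])
    reward = -torch.sum(action ** 2)             # dummy quality score
    return next_state, reward

policy = PolicyNet()
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
state = torch.zeros(4)
for _ in range(100):
    action = policy(state)
    next_state, reward = step_env(state, action)
    loss = -reward                               # maximize the reward
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = next_state.detach()                  # cut the graph between steps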
The original camera-in-the-loop (CITL) code can be found in neural-3d-holography. Here, we also provide the CITL code used in our paper to calibrate the holographic system. Replace the system parameters according to your holographic setup.
python ./codes/citl/system_captured/main.py
python ./codes/citl/system_captured/cali.py
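A typical calibration step (as in neural-3d-holography) displays a dot grid, detects it in the captured image, and fits a homography that maps the capture into SLM coordinates. The sketch below shows one possible version; the file name, grid size, spacing, and SLM resolution are assumed values.

# Possible calibration sketch: detect a displayed circle grid and fit
# a homography to SLM coordinates. Grid size, spacing, file name, and
# SLM resolution are assumptions.
import cv2
import numpy as np

captured = cv2.imread("captured_grid.png", cv2.IMREAD_GRAYSCALE)
grid = (22, 13)                  # assumed (cols, rows) of the dot grid
found, centers = cv2.findCirclesGrid(captured, grid,
                                     flags=cv2.CALIB_CB_SYMMETRIC_GRID)
assert found, "calibration grid not detected"

spacing = 80                     # assumed dot spacing in SLM pixels
ideal = np.array([[c * spacing, r * spacing]
                  for r in range(grid[1]) for c in range(grid[0])],
                 dtype=np.float32)

H, _ = cv2.findHomography(centers.reshape(-1, 2), ideal, cv2.RANSAC)
# Warp later captures into SLM coordinates before comparing them with
# the simulated reconstructions.
warped = cv2.warpPerspective(captured, H, (2048, 1024))  # assumed SLM size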
python ./codes/citl/train.py --arch Multi_CNNpropCNN --train_dir ./data/citl/train_data/red --val_dir ./data/citl/val_data/red
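At a high level, CITL training displays each hologram, captures the physical reconstruction, and updates the parameterized propagation model (here Multi_CNNpropCNN) so that simulation matches capture. A schematic single training step, with hypothetical slm/camera interfaces:

# Schematic CITL step; slm.show(), camera.grab(), and model are
# hypothetical interfaces standing in for the hardware and the
# learned propagation model.
import torch
import torch.nn.functional as F

def citl_step(model, optimizer, phase, slm, camera):
    slm.show(phase)                   # display the phase pattern
    captured = camera.grab()          # capture the physical reconstruction
    simulated = model(phase)          # simulate with the learned model
    loss = F.mse_loss(simulated, captured)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()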
The code is built on neural-3d-holography and PPO-PyTorch. We sincerely thank the authors for sharing their code.
If you have any questions, please do not hesitate to contact us at d_zhenxing@sjtu.edu.cn.