This is the PyTorch implementation of the paper “ViComp: Video Compensation for Projector-Camera Systems”.
@ARTICLE{Wang2024TVCG,
  author={Wang, Yuxi and Ling, Haibin and Huang, Bingyao},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  title={ViComp: Video Compensation for Projector-Camera Systems},
  year={2024},
  pages={1-10}
}
The datasets used in the paper are collected from two open-source Blender films, "Big Buck Bunny" and "Spring". Shot boundaries are detected with TransNet V2 (Shot Boundary Detection Neural Network). The image named "textImg.png", downloaded from this website, is used for displacement estimation; you can replace it with any texture-rich image.
- Clone this repo:
git clone https://github.com/cyxwang/ViComp
- Download the videos and extract the frames to "DATANAME", then generate the shot index file named "DATANAME_shot_index.txt". Put these files into "data".
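The exact layout of the shot index file is defined by this repo's data loader, so check "online.yaml" and the loader code first. As a rough sketch only, assuming one shot per line as a "start end" frame-index pair (that format is an assumption, not confirmed by the repo), shot boundaries detected by TransNet V2 could be written out like this:

```python
# Hypothetical sketch: dump shot boundaries to a DATANAME_shot_index.txt file.
# The one-shot-per-line "start end" format is an assumption; adapt it to
# whatever format the repo's data loader actually expects.

def write_shot_index(scenes, path):
    """scenes: iterable of (start_frame, end_frame) pairs, one per shot."""
    with open(path, "w") as f:
        for start, end in scenes:
            f.write(f"{start} {end}\n")

# Example: three detected shots from a hypothetical detector run
write_shot_index([(0, 120), (121, 305), (306, 540)], "bbb_shot_index.txt")
```

Frame extraction itself can be done with any tool (e.g. ffmpeg) as long as the frame numbering matches the indices in the shot index file.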
- Download the pre-trained CompenNeSt model to "pretrain".
- Download the FlowFormer code to "src/python/compensation".
- Download the pre-trained FlowFormer model to "pretrain".
- cd to "src/python", edit the data path and hyper-parameters in "online.yaml", then run "testing.sh" to start the system:
cd src/python
sh testing.sh
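The actual keys of "online.yaml" are defined by this repo; the fragment below is only a hypothetical illustration of the kind of entries to edit before running, and every key name in it is an assumption:

```yaml
# Hypothetical illustration only -- all key names are assumptions;
# consult the online.yaml shipped in src/python for the real schema.
data_root: data/DATANAME                    # folder holding the extracted frames
shot_index: data/DATANAME_shot_index.txt    # shot boundary file generated above
pretrain_dir: pretrain                      # CompenNeSt / FlowFormer checkpoints
```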