ViComp

Introduction

This is the PyTorch implementation of the paper “ViComp: Video Compensation for Projector-Camera Systems”.

@ARTICLE{Wang2024TVCG,
    author={Wang, Yuxi and Ling, Haibin and Huang, Bingyao},
    journal={IEEE Transactions on Visualization and Computer Graphics},
    title={ViComp: Video Compensation for Projector-Camera Systems},
    year={2024},
    pages={1-10},
}

Datasets

The datasets used in the paper are collected from two open-source Blender films, "Big Buck Bunny" and "Spring". Shots are detected with TransNet V2: Shot Boundary Detection Neural Network. The image named "textImg.png", used for displacement estimation, is downloaded from this website; you can replace it with any texture-rich image.
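TransNet V2 outputs a per-frame shot-boundary probability for a video; grouping those probabilities into shot ranges can be sketched as below. This is a simplified, self-contained version of the grouping logic (the function name `predictions_to_shots` and the 0.5 threshold are illustrative; TransNet V2 ships its own helper for this):

```python
def predictions_to_shots(preds, threshold=0.5):
    """Turn per-frame boundary probabilities into (start, end) shot ranges.

    A simplified sketch of the kind of post-processing TransNet V2 applies;
    `threshold` and the exact boundary convention are assumptions here.
    """
    flags = [p >= threshold for p in preds]
    shots, start, prev = [], 0, False
    for i, flag in enumerate(flags):
        if prev and not flag:            # boundary run ended: next shot starts here
            start = i
        if flag and not prev and i != 0: # boundary begins: close the current shot
            shots.append((start, i))
        prev = flag
    if not prev:                         # close the trailing shot
        shots.append((start, len(preds) - 1))
    return shots

# One boundary detected at frame 2 splits five frames into two shots.
print(predictions_to_shots([0.1, 0.1, 0.9, 0.1, 0.1]))  # [(0, 2), (3, 4)]
```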

Usage

  1. Clone this repo:

     git clone https://github.com/cyxwang/ViComp

  2. Download the videos and extract the frames to "DATANAME", then generate the shot index file named "DATANAME_shot_index.txt". Put these files into "data".

  3. Download the pre-trained model of CompenNeSt to "pretrain".

  4. Download the code of FlowFormer to "src/python/compensation".

  5. Download the pre-trained model of FlowFormer to "pretrain".

  6. cd to "src/python", edit the data path and hyper-parameters in "online.yaml", then run testing.sh to start the system:

     cd src/python
     sh testing.sh
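Step 2 above asks you to generate "DATANAME_shot_index.txt" yourself. A minimal sketch of writing such a file is below, assuming one shot-start frame index per line; the exact layout is not documented here, so check how the file is parsed in "src/python" before relying on this format (the dataset name "bunny" is just a placeholder for your "DATANAME"):

```python
from pathlib import Path

def write_shot_index(shot_starts, out_path):
    # One shot-start frame index per line -- an assumed layout;
    # verify against the loader code in src/python.
    path = Path(out_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("\n".join(str(i) for i in sorted(shot_starts)) + "\n")
    return path

# e.g. shots starting at frames 0, 240 and 611 for a dataset called "bunny"
write_shot_index([0, 240, 611], "data/bunny_shot_index.txt")
```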
