IPCV/VideoInp

Requirements

conda create --name VideoInp python=3.7.10
conda activate VideoInp
conda install pytorch=1.7.1 torchvision=0.8.2 cudatoolkit=11.0 -c pytorch
pip install -r requirements.txt
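
As a quick sanity check, you can verify from Python that the pinned versions and the CUDA runtime were picked up:

import torch
import torchvision

print(torch.__version__)          # expected: 1.7.1
print(torchvision.__version__)    # expected: 0.8.2
print(torch.cuda.is_available())  # True if the CUDA 11.0 runtime is usable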

Preparing datasets

Datasets are created from a set of image frames. If you have a video, it must first be converted into a sequence of frame images. This can be done with the ffmpeg utility:

ffmpeg -i video_a.mp4 ./video_frames/tennis/frames/%05d.jpg -hide_banner
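
If you prefer extracting frames from Python, here is a minimal OpenCV sketch (opencv-python is an assumed extra dependency, not necessarily listed in requirements.txt):

import os
import cv2

# Extract frames into numbered JPEGs, mirroring the ffmpeg command above
# (ffmpeg's %05d numbering starts at 1, so we do the same here).
video_path = "video_a.mp4"
out_dir = "./video_frames/tennis/frames"
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
idx = 1
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    cv2.imwrite(os.path.join(out_dir, "%05d.jpg" % idx), frame)
    idx += 1
cap.release()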

A mask image must be created manually. A sample mask is provided in ./video_frames/mask.png.
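
If you would rather generate a mask programmatically than draw one by hand, here is a minimal Pillow sketch; the rectangle position and the white-inside-the-hole convention are assumptions, so compare with the provided sample mask to confirm the convention this repo expects:

from PIL import Image, ImageDraw

# Binary mask: white (255) marks the region to inpaint (assumed convention),
# black (0) everywhere else. The size matches the --H/--W values used below.
W, H = 480, 256
mask = Image.new("L", (W, H), 0)               # single-channel, all black
draw = ImageDraw.Draw(mask)
draw.rectangle([180, 90, 300, 170], fill=255)  # hypothetical hole location
mask.save("./video_frames/mask.png")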

The datasets are composed of the inputs to the network and the ground truth (a small sketch of the flow masking follows this list):

  • Inputs
    • The masked optical flow (forward and backward)
    • The masks
  • Ground truth
    • The ground-truth optical flow (forward and backward)
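
For intuition, "masked optical flow" presumably means the flow field with the hole region zeroed out. Here is a minimal numpy sketch of that composition; the exact masking performed by create_dataset.py may differ (e.g. depending on --apply_mask_before):

import numpy as np

# Hypothetical shapes: flow is (H, W, 2) holding (dx, dy) per pixel,
# mask is (H, W) with 1 inside the hole and 0 outside (assumed convention).
H, W = 256, 480
flow_fwd = np.random.randn(H, W, 2).astype(np.float32)  # stand-in for a real flow field
mask = np.zeros((H, W), dtype=np.float32)
mask[90:170, 180:300] = 1.0  # hypothetical hole

# Zero out the flow inside the masked region; the backward flow is treated the same way.
masked_flow_fwd = flow_fwd * (1.0 - mask)[..., None]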

The script create_dataset.py also creates additional files that are not needed (yet):

  • The Ground Truth frames (the non-masked frames)

To create the dataset from a set of image frames (the repository already includes a sample set of images in video_frames/tennis), run the following script:

cd ingestion
python create_dataset.py --in_root_dir ../video_frames --out_dir ../dataset  --masking_mode same_template --template_mask ../masks/mask.png --apply_mask_before --H 256 --W 480 --nLevels 2

Another example of creating a dataset: download the raw dataset davis_no_mask from Notion and uncompress the file into ./raw/.

Now create the dataset

python create_dataset.py --in_root_dir ../raw/davis_no_mask/ --out_dir ../built/davis_no_mask_multiscale  --masking_mode same_template --template_mask ../masks/no_mask.png   --H 256 --W 480 --nLevels 3

Running the overfitting training on the data in ./datasets

To run training on the "dataset" dataset:

cd ..
python main.py ./configs/example_1.json
python main.py ./configs/example_2.json 

To run training on the "davis_no_mask" dataset:

cd ..
python main.py ./configs/example_3.json
python main.py ./configs/example_4.json 

After each epoch, see results in ./verbose/training_out

Streamlit App

To compare two different models, use the Streamlit app:

cd UI_verbose
streamlit run streamlit_app.py ../verbose/training_out/

streamlit_app.py reads the training outputs from ../verbose/training_out/ and displays them in a web interface.
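
For reference, streamlit run forwards trailing command-line arguments to the script's sys.argv, so a sketch of how an app like this could pick up the results directory (the actual streamlit_app.py may read it differently):

import sys
import streamlit as st

# The directory passed after the script name on the command line arrives in sys.argv.
results_dir = sys.argv[1] if len(sys.argv) > 1 else "../verbose/training_out/"
st.title("Training comparison")
st.write("Reading training outputs from: " + results_dir)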
