roozbehm/newtonian

N3: Newtonian Image Understanding: Unfolding the Dynamics of Objects in Static Images

This is the source code for Newtonian Neural Networks (N3), which predict the dynamics of objects in static scenes.

Citation

If you find N3 useful in your research, please consider citing:

@inproceedings{mottaghiCVPR16N3,
    Author = {Roozbeh Mottaghi and Hessam Bagherinezhad and Mohammad Rastegari and Ali Farhadi},
    Title = {Newtonian Image Understanding: Unfolding the Dynamics of Objects in Static Images},
    Booktitle = {CVPR},
    Year = {2016}
}

Requirements

This code is written in Lua and is based on Torch. If you are on Ubuntu 14.04+, you can follow these instructions to install Torch.

You need the VIND dataset. Extract it into the current directory and rename the extracted folder to VIND. Alternatively, put it somewhere else and update config.DataRootPath in setting_options.lua accordingly.
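If the dataset lives outside the repository, the change in setting_options.lua might look like the following (the path shown is only an example, not a required location):

```lua
-- setting_options.lua: point the data loader at wherever VIND was extracted.
-- '/data/VIND' is a placeholder; substitute your own path.
config.DataRootPath = '/data/VIND'
```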

Training

To run the training:

th main.lua train

This trains the model on the training data and, once every 10 iterations, evaluates on one val_images batch. If you want to validate on val_videos instead, go to setting_options.lua and change the line valmeta = imvalmeta to valmeta = vidvalmeta.
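Concretely, the one-line change in setting_options.lua to switch the validation source would be:

```lua
-- setting_options.lua: validate on video batches instead of image batches.
-- Original line:
--   valmeta = imvalmeta
valmeta = vidvalmeta
```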

Test

You need to get the trained weights. Extract them into the current directory and rename the extracted folder to weights. To run the test:

th main.lua test
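Setting up the weights folder might look like the sketch below. The folder name N3_weights is hypothetical (the README does not give the archive's actual name); the only requirement is that the final folder is called weights in the current directory.

```shell
# Sketch only: N3_weights stands in for whatever folder the
# downloaded archive actually extracts to.
mkdir -p N3_weights          # placeholder for the extracted archive
mv N3_weights weights        # main.lua expects the folder ./weights
```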

License

This code is released under the MIT License.
