The goal of this project was to reconstruct 3D models from a sequence of images of a given object. We implemented two solutions: one using a classical method, Structure from Motion (SfM), and the other using a deep neural network, Pix2Vox.
Before starting the experiments, make sure your data folder contains all the necessary data. The data folder includes a file called where_to_download_data.txt with all the required links and instructions on how to prepare the folder. In summary, the data folder should look like this:
├── data
│ ├── mvs_dataset <- MVS data.
│ │ ├── images <- All images of the MVS dataset. Included in cleaned_images.zip
│ │ ├── point_clouds <- Original and manually corrected ground truths
│ │ ├── results <- Optional downloadable folder with reconstructed models (contains originals and models after correction)
│ │ └── processed_voxels_pix2vox <- Processed voxels required by the Pix2Vox model. Stored in processed_voxels_pix2vox.zip
│ └── ShapeNet <- ShapeNet data.
│ │ ├── ShapeNetRendering <- Images for the ShapeNet dataset. Included in ShapeNetRendering.tgz
│ │ └── ShapeNetVox32 <- Voxels for objects in the ShapeNet dataset. Included in ShapeNetVox32.tgz
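Before running anything, it can help to verify the layout above is in place. Below is a small sketch of such a check; the `check_data` helper is ours (not part of the repo), and the paths simply mirror the tree above:

```shell
# check_data: print any expected data folders missing under the given root.
# The helper name is hypothetical; the paths mirror the data tree above.
check_data() {
    root="$1"
    for d in \
        "$root/mvs_dataset/images" \
        "$root/mvs_dataset/point_clouds" \
        "$root/mvs_dataset/processed_voxels_pix2vox" \
        "$root/ShapeNet/ShapeNetRendering" \
        "$root/ShapeNet/ShapeNetVox32"
    do
        [ -d "$d" ] || echo "missing: $d"
    done
}

# Run it against the default data folder in the project root.
check_data data
```

An empty output means all the expected folders are present.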
PYTHONPATH must be set to the root of the project as well as to ./src/models/sfm. Docker is also required (only for reconstruction). To run all experiments and generate results for SfM on the MVS dataset, simply run the following command from the root of the project:
python .\src\models\sfm\all_runner.py 1 128 -r False -c False
This command assumes that you use the already reconstructed and corrected models. If you also want to run the reconstruction yourself, change False to True in the -r option. The same applies to correction (the -c option), but you are additionally required to pass the path to CloudCompare.exe (the -p option); CloudCompare is the program used for semi-automatic alignment and cleaning of the ground-truth and resulting point clouds. We recommend downloading the already prepared models to see how the correction should be carried out.
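On a Unix-like shell, the PYTHONPATH setup plus the run can be sketched as follows (the runner path and flag values are taken from the command above; the existence check is just a convenience we added, not part of the repo):

```shell
# From the project root: expose both the repo root and the SfM module on PYTHONPATH.
export PYTHONPATH="$PWD:$PWD/src/models/sfm"

# Run using the already reconstructed and corrected models. The guard is only
# a friendly check that we are in the project root; it is not part of the repo.
if [ -f src/models/sfm/all_runner.py ]; then
    python src/models/sfm/all_runner.py 1 128 -r False -c False
else
    echo "all_runner.py not found; run this from the project root"
fi
```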
Make sure you have the 4 pretrained Pix2Vox models in the models directory. That directory contains a .txt file with links to those models. For convenience, the links are also listed here:
- https://gateway.infinitescript.com/?fileName=Pix2Vox-A-ShapeNet.pth - Pix2Vox-A
- https://gateway.infinitescript.com/?fileName=Pix2Vox-F-ShapeNet.pth - Pix2Vox-F
- https://gateway.infinitescript.com/?fileName=Pix2Vox%2B%2B-A-ShapeNet.pth - Pix2Vox++ A
- https://gateway.infinitescript.com/?fileName=Pix2Vox%2B%2B-F-ShapeNet.pth - Pix2Vox++ F
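One possible way to fetch all four checkpoints into the models directory is sketched below. The URLs are the ones listed above; the filename is recovered from the fileName= query parameter (%2B decodes to "+"). The wget line is commented out so the loop only previews the target names; uncomment it to actually download:

```shell
# Download helper sketch: save the four Pix2Vox checkpoints into ./models.
mkdir -p models
for url in \
    "https://gateway.infinitescript.com/?fileName=Pix2Vox-A-ShapeNet.pth" \
    "https://gateway.infinitescript.com/?fileName=Pix2Vox-F-ShapeNet.pth" \
    "https://gateway.infinitescript.com/?fileName=Pix2Vox%2B%2B-A-ShapeNet.pth" \
    "https://gateway.infinitescript.com/?fileName=Pix2Vox%2B%2B-F-ShapeNet.pth"
do
    # Recover the checkpoint filename from the query string; %2B decodes to "+".
    fname=$(printf '%s' "$url" | sed -e 's/.*fileName=//' -e 's/%2B/+/g')
    echo "would save as models/$fname"
    # wget -O "models/$fname" "$url"
done
```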
To train new models, run the train_pix2vox_models_and_test.sh shell script. This script trains the new models and also tests them.
To use the trained models and run only the tests, run the test_pix2vox_models.sh shell script.
To visualize the results generated by the testing script, run the visualize_pix2vox_results.sh shell script.