MIT License

All contributions from “Recovering 3D Planes from a Single Image via Convolutional Neural Networks”:
Copyright (c) 2018 Fengting Yang

All contributions from “Unsupervised Learning of Depth and Ego-Motion from Video”:
Copyright (c) 2017 Tinghui Zhou

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# Plane-Recover

This codebase is a TensorFlow implementation of our ECCV-2018 paper:

[Recovering 3D Planes from a Single Image via Convolutional Neural Networks](https://faculty.ist.psu.edu/zzhou/paper/ECCV18-plane.pdf)

[Fengting Yang](https://scholar.google.at/citations?user=0vWtvs0AAAAJ&hl=en), [Zihan Zhou](https://faculty.ist.psu.edu/zzhou/Home.html)

Please contact Fengting Yang (fuy34@psu.edu) if you have any questions.

## Prerequisites
This codebase was developed and tested with Python 2.7, TensorFlow 1.4.1, CUDA 8.0.61, and Ubuntu 16.04.

## Preparing training data
[Here](https://psu.box.com/s/6ds04a85xqf3ud3uljjxnedmux169ebf) we provide our training and testing data for the [SYNTHIA](http://synthia-dataset.net/) dataset. Once you download the training data, set the training data path as <SYNTHIA_DUMP_DIR> in the training command and start training the network.

If you wish to generate the training data yourself, follow these steps.

First, download the four-season sequences (Spring, Summer, Fall, Winter) of SEQS-02, SEQS-04, and SEQS-05, and save them in one folder ```<SYNTHIA_DIR>```. Then run the following commands to filter out the static frames and generate the training data:
```
python data_pre_processing/SYNTHIA/SYNTHIA_frame_filter.py --dataset_dir=<SYNTHIA_DIR> --dump_root=<SYNTHIA_DUMP_Filtered_DIR>
python data_pre_processing/SYNTHIA/SYNTHIA_pre_processing.py --filtered_dataset=<SYNTHIA_DUMP_Filtered_DIR> --dump_root=<SYNTHIA_DUMP_DIR>
```
The code will generate two "*.txt*" files for training and testing. We recommend replacing the generated ```tst_100.txt``` with the one in the ```data_pre_processing/SYNTHIA``` folder, for which ground truth is available. The "train_8000.txt" file in the same folder records the training data we used. Please note that the depth unit of SYNTHIA is centimeters, so we divide the depth map by 100.0 during data loading.
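The centimeter-to-meter conversion mentioned above amounts to a single division; a minimal sketch (the function name and array handling here are illustrative, not the repo's actual data loader):

```python
import numpy as np

def synthia_depth_to_meters(depth_cm, scale=100.0):
    """Convert a SYNTHIA depth map from centimeters to meters.

    `depth_cm` is assumed to be an array of raw depth values in centimeters;
    the data loader in this repo applies the same divide-by-100.0 step.
    """
    return depth_cm.astype(np.float32) / scale

# Example: 250 cm becomes 2.5 m.
depth_cm = np.array([[250.0, 100.0]])
depth_m = synthia_depth_to_meters(depth_cm)
```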
## Training
Once the data is prepared, you should be able to train the model by running
```
python train.py --dataset_dir=<SYNTHIA_DUMP_DIR> --log_dir=<CKPT_LOG_DIR>
```

If you want to continue training or fine-tune from a pre-trained model, run
```
python train.py --dataset_dir=<SYNTHIA_DUMP_DIR> --log_dir=<CKPT_LOG_DIR> --init_checkpoint_file=<PATH_TO_THE_CKPT> --continue_train=True
```

You can then start a `tensorboard` session with
```
tensorboard --logdir=<DIR_CONTAINS_THE_EVENT_FILE> --port=6006
```
and monitor the training progress by opening port 6006 in your browser. If everything is set up properly, reasonable segmentation should be observed after around 200k steps. The number of recovered planes will keep increasing until it reaches the maximum number set in the code (default = 5).

A pre-trained model is included in the folder named "pre_trained_model", and the ground truth segmentation is in "eval/labels/".

## Testing
We provide test code to generate: 1) the plane segmentation (and its visualization) and 2) the depth prediction (planar regions only). The depth prediction accuracy is reported right after the test process finishes. Please run
```
python test_SYNTHIA.py --dataset=<SYNTHIA_DUMP_Filtered_DIR> --output_dir=<TEST_OUTPUT_DIR> --test_list=<Tst_100.txt in SYNTHIA_DUMP_DIR> --ckpt_file=<TRAINED_MODEL>
```
Note: we use the ```filtered``` data as input instead of the ```pre-processed``` data (to preserve the resolution of the ground truth depth). If you do not want to run the pre-processing and have already downloaded our data, you can simply modify the dataset-related paths in ```test_SYNTHIA.py```. The final result may not be exactly the same as ours, but should be similar.
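For intuition, depth accuracy in this line of work is usually reported with the standard monocular-depth error metrics (absolute relative error, RMSE, threshold accuracy) restricted to the planar pixels. A minimal sketch under that assumption — the function and mask handling are ours and are not necessarily identical to what `test_SYNTHIA.py` computes:

```python
import numpy as np

def depth_errors(gt, pred, mask):
    """Standard monocular depth metrics, restricted to the pixels in `mask`."""
    gt, pred = gt[mask], pred[mask]
    abs_rel = np.mean(np.abs(gt - pred) / gt)          # mean |error| relative to ground truth
    rmse = np.sqrt(np.mean((gt - pred) ** 2))          # root mean squared error
    ratio = np.maximum(gt / pred, pred / gt)
    a1 = np.mean(ratio < 1.25)                         # fraction of pixels within 25% of GT
    return abs_rel, rmse, a1

gt = np.array([[2.0, 4.0]])
pred = np.array([[2.0, 5.0]])
mask = np.ones_like(gt, dtype=bool)                    # e.g. the planar-region mask
abs_rel, rmse, a1 = depth_errors(gt, pred, mask)
```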

## Evaluation
We also provide MATLAB code for evaluating the plane segmentation accuracy:

(1) Open ```eval/eval_planes.m```;
(2) Set ```pred_path``` to the ```plane_sgmts``` folder generated in the test step, and check that ```label_path``` points to ```eval/labels/SYN_GT_sgmt```;
(3) Run the program; the evaluation results will be printed in the command window.
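To illustrate the idea behind such an evaluation (this is only a sketch of per-pixel accuracy with greedy label matching, not a port of ```eval_planes.m```):

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """For each ground-truth plane label, find the predicted label that
    overlaps it most, then report the fraction of pixels so matched."""
    correct = 0
    for g in np.unique(gt):
        region = gt == g
        _, counts = np.unique(pred[region], return_counts=True)
        correct += counts.max()  # pixels covered by the best-overlapping predicted label
    return correct / gt.size

pred = np.array([[1, 1, 2, 2]])
gt = np.array([[1, 1, 1, 2]])
acc = pixel_accuracy(pred, gt)  # 3 of 4 pixels matched
```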

## Acknowledgement
Our code is developed based on the training framework provided by [SfMLearner](https://github.com/tinghuiz/SfMLearner).