
3D-Scanning-and-Motion-Capture

Mainly for Structure from Motion (SfM) and Multiview Stereo (MVS).
paper  

Debug Edition

The debug edition contains our original test code, covering the full workflow of building the SfM computation from scratch ("making the wheel").
However, after comparison, we ultimately chose the OpenCV library to build the release edition.
Even so, we believe this part still has educational value:

  1. Fundamental matrix calculation
  2. Essential matrix calculation
  3. RANSAC
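As an illustration of the first item, here is a minimal normalized eight-point estimate of the fundamental matrix in NumPy. This is a hedged sketch for exposition, not the debug edition's actual code (which is C++); the function name and interface are our own.

```python
import numpy as np

def eight_point_fundamental(pts1, pts2):
    """Normalized eight-point estimate of the fundamental matrix F,
    so that x2^T F x1 = 0 for corresponding points (x1, x2)."""
    def normalize(pts):
        # Hartley normalization: center points, scale mean distance to sqrt(2)
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        ph = np.column_stack([pts, np.ones(len(pts))])
        return (T @ ph.T).T, T

    p1, T1 = normalize(pts1)
    p2, T2 = normalize(pts2)
    # Each correspondence contributes one row of the homogeneous system A f = 0
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    # Enforce the rank-2 constraint on F
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    F = T2.T @ F @ T1                            # undo the normalization
    return F / np.linalg.norm(F)
```

In practice this estimator sits inside a RANSAC loop (item 3): sample 8 matches, fit F, count inliers by epipolar distance, and keep the best model.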

Release Edition

Our codebase depends on Ceres, OpenCV 4, and Eigen3. To build and run the code, you need to install these three dependencies correctly on your local machine. To use SURF, we also configured OpenCV's xfeatures2d module.

Here we clarify our workflow and some tricky components in the code. As to the workflow:

  1. Find the right sequence of image pairs, as the DTU dataset only provides PAIR information rather than STEREO-SEQUENCE information. For each scan we therefore use a while-loop to search for the sequence, so we cannot give a universal answer for how many images or pairs take part in bundle adjustment; we can, however, guarantee that more than 10 images participate in the BA workflow. The related code is here;
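The pair-chaining search can be sketched as follows. This is a hypothetical Python mirror of the idea, not the repository's C++ code; the function name and pair representation are our own assumptions.

```python
def chain_pairs(pairs, start):
    """Greedily chain unordered (left, right) image pairs into one stereo
    sequence, mirroring the while-loop search described above."""
    sequence = [start]
    remaining = set(pairs)
    extended = True
    while extended:
        extended = False
        for a, b in list(remaining):
            # Extend the sequence if either end of the pair matches its tail
            if a == sequence[-1]:
                sequence.append(b)
            elif b == sequence[-1]:
                sequence.append(a)
            else:
                continue
            remaining.discard((a, b))
            extended = True
            break
    return sequence
```

The loop stops when no remaining pair touches the tail of the sequence, which is why the number of chained images varies from scan to scan.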

  2. Initialization. We first initialize from the first two images to obtain 3D points, R, T, and the corresponding index structures that connect the 2D keypoints with the 3D points, as in the RTresponse function here;

  3. In the RTresponse function, the step common to initialization and later fusion is to compute the keypoints and find the matches, as in the following code. After initialization, each new image is compared with the previous right image. From the function get_objpoints_and_imgpoints (here), we obtain the 3D points generated by the previous stereo pair, together with 2D image points taken from keypoints in the current right image. In other words, we keep only those matches whose left keypoints coincide with right keypoints from the previous matches (tricky!);
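The correspondence selection just described can be sketched like this. It is a hypothetical Python mirror of get_objpoints_and_imgpoints under assumed data layouts (index lists and tuples), not the repository's C++ code:

```python
def get_objpoints_and_imgpoints(matches, struct_indices, structure, keypoints):
    """Collect 3D-2D correspondences for PnP.

    matches        : list of (prev_right_kp_idx, current_kp_idx) pairs
    struct_indices : struct_indices[i] is the index into `structure` for
                     keypoint i of the previous right image, or -1 if that
                     keypoint has no 3D point yet
    structure      : list of reconstructed 3D points
    keypoints      : 2D keypoints of the current right image
    """
    obj_points, img_points = [], []
    for query_idx, train_idx in matches:
        s = struct_indices[query_idx]
        if s < 0:
            # The left keypoint of this match was never triangulated: skip it
            continue
        obj_points.append(structure[s])
        img_points.append(keypoints[train_idx])
    return obj_points, img_points
```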

  4. We then use the PnP algorithm to get R and T of the current right image, since we now have its keypoints and the corresponding 3D points. Next, the fusion_structure function (see here) integrates the new 3D points and, more importantly, the index structure, which can be used to query the relationship between 3D points and 2D keypoints;
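The index bookkeeping performed by fusion_structure can be sketched as follows. Again a hypothetical Python rendering of the described logic, with assumed names and data layouts, not the repository's code:

```python
def fusion_structure(matches, si_left, si_right, structure, next_points):
    """Merge newly triangulated points into the global structure while
    keeping the keypoint-to-3D-point index bookkeeping consistent.

    si_left / si_right : per-keypoint indices into `structure` (-1 = none)
    next_points        : points triangulated from the current matches
    """
    for k, (query_idx, train_idx) in enumerate(matches):
        if si_left[query_idx] >= 0:
            # The left keypoint already has a 3D point: share its index
            # instead of duplicating the point.
            si_right[train_idx] = si_left[query_idx]
        else:
            # Genuinely new point: append it and record its index on both sides.
            structure.append(next_points[k])
            si_left[query_idx] = si_right[train_idx] = len(structure) - 1
    return structure
```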

  5. Bundle adjustment. We load the 3D points, keypoints, and the intrinsic and extrinsic matrices to compute the loss. For a detailed explanation of the implemented bundle adjustment, refer to the attached Bundle Adjustment explained.pdf.
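The loss in this step is built from reprojection residuals. A NumPy sketch of that quantity, for exposition only; the release edition constructs the equivalent cost with Ceres:

```python
import numpy as np

def reprojection_residuals(points3d, points2d, K, R, T):
    """Residuals that bundle adjustment minimizes: the difference between
    each observed 2D keypoint and the projection of its 3D point
    through K [R | T]."""
    cam = R @ points3d.T + T.reshape(3, 1)   # world -> camera frame
    proj = (K @ cam).T
    proj = proj[:, :2] / proj[:, 2:]         # perspective division
    return (proj - points2d).ravel()
```

Stacking these residuals over all observations, cameras, and points gives the nonlinear least-squares problem that the solver iterates on.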

Python Edition (evaluation)

The Python edition contains another pipeline, consisting of SfM and MVS implemented in Python (releaseEdition/eval/stereo.py). In addition, the evaluation of various feature extractors, matching methods, and stereo dense-matching algorithms is included in releaseEdition/eval/evaluate.py. The evaluation results in the report come from running releaseEdition/eval/evaluate.py.

1. To install dependencies:

pip install opencv-python numpy tqdm stereo-mideval

To use SURF:

pip install opencv-contrib-python

2. To download the Middlebury 2021 dataset: download "all.zip" from this page and extract the zip file in your working directory.

3. To run evaluate.py, set the variable DATASET_FOLDER to the actual path of the downloaded and extracted Middlebury dataset, then execute the following commands:

cd releaseEdition/eval
python evaluate.py

The evaluation metric is printed out in the terminal and the point clouds will be generated in the path specified in the file.

To evaluate keypoint extraction and matching, set EVALUATE_KEYPOINT to True in the code.

To evaluate MVS methods, set EVALUATE_DENSE_MATCHING to True in the code.

4. To run stereo.py, set the variable DATASET_FOLDER to the actual path of the downloaded and extracted Middlebury dataset, then execute the following commands:

cd releaseEdition/eval
python stereo.py

The generated point cloud will be saved to the path specified in stereo.py.

Dataset

DTU dataset: https://roboimagedata.compute.dtu.dk/
Middlebury: https://vision.middlebury.edu/stereo/data/

Contribution

SfM and MVS C++: Zhang, Jiongyan
SfM and MVS python: Barry Shichen Hu, Ran Ding
SfM and MVS eval: Barry Shichen Hu
Project Report: Ran Ding, Jiongyan, Barry

About

Final project for 3D scanning and Motion capture
