OpenSfM is a Structure from Motion library written in Python on top of OpenCV. The library serves as a processing pipeline for reconstructing camera poses and 3D scenes from multiple images. It consists of basic modules for Structure from Motion (feature detection/matching, minimal solvers) with a focus on building a robust and scalable reconstruction pipeline. It also integrates external sensor (e.g. GPS, accelerometer) measurements for geographical alignment and robustness. A JavaScript viewer is provided to preview the models and debug the pipeline.
Check out this blog post for more demos.
OpenSfM depends on the following libraries:

- OpenCV
- OpenGV
- Ceres Solver
- Boost Python
- NumPy, SciPy, NetworkX, PyYAML, exifread
Install OpenCV, Ceres Solver and Boost Python with Homebrew, then the Python requirements with pip:

```
brew tap homebrew/science
brew install opencv
brew install homebrew/science/ceres-solver
brew install boost-python
sudo pip install -r requirements.txt
```
And install OpenGV from source:

```
git clone https://github.com/paulinus/opengv.git
cd opengv
mkdir build
cd build
cmake .. -DBUILD_TESTS=OFF -DBUILD_PYTHON=ON
make install
```
Be sure to update your PYTHONPATH to include /usr/local/lib/python2.7/site-packages, where OpenCV and OpenGV have been installed. For example:

```
export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH
```
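To confirm the bindings are importable from that path, a quick check can be run in Python. This is a sketch, not part of the repository; it assumes the default Homebrew site-packages location from the export above, and assumes pyopengv is the module name exposed by the OpenGV Python bindings (adjust if your build differs):

```python
import sys

# Path where Homebrew installs Python packages (assumed default location).
SITE_PACKAGES = "/usr/local/lib/python2.7/site-packages"

# Make sure the directory is on the import path,
# mimicking the PYTHONPATH export above.
if SITE_PACKAGES not in sys.path:
    sys.path.insert(0, SITE_PACKAGES)

def check_import(module_name):
    """Return True if module_name can be imported, False otherwise."""
    try:
        __import__(module_name)
        return True
    except ImportError:
        return False

# cv2 is OpenCV's Python module; pyopengv is the assumed name
# of OpenGV's Python bindings.
for name in ("cv2", "pyopengv"):
    print("%s: %s" % (name, "OK" if check_import(name) else "missing"))
```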
See the Dockerfile for the commands to install all dependencies on Ubuntu 14.04. The steps are:

- Install OpenCV, Boost Python, NumPy and SciPy using apt-get
- Install the Python requirements using pip
- Clone, build and install OpenGV following the recipe in the Dockerfile
- Build and install the Ceres Solver from source using the -fPIC compilation flag
When running OpenSfM on top of OpenCV 3.0, the OpenCV Contrib modules are required for extracting SIFT or SURF features.
Build the library with

```
python setup.py build
```
An example dataset is available at data/berlin.
- Put some images in data/DATASET_NAME/images/
- Put config.yaml in data/DATASET_NAME/config.yaml
- Go to the root of the project and run bin/run_all data/DATASET_NAME
- Start an http server from the root with python -m SimpleHTTPServer
- Browse http://localhost:8000/viewer/reconstruction.html#file=/data/DATASET_NAME/reconstruction.meshed.json
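The viewer loads the reconstruction JSON produced by the pipeline. As a sketch of how one might inspect that output programmatically, assuming the usual OpenSfM format (a list of reconstructions, each with "cameras", "shots" and "points" dictionaries — verify against your own output file):

```python
import json

# A minimal reconstruction in the format OpenSfM is assumed to write.
# Real files come from data/DATASET_NAME/reconstruction.meshed.json;
# all names and values below are illustrative.
SAMPLE = json.dumps([{
    "cameras": {"cam1": {"projection_type": "perspective", "focal": 0.85}},
    "shots": {
        "img1.jpg": {"camera": "cam1", "rotation": [0, 0, 0], "translation": [0, 0, 0]},
        "img2.jpg": {"camera": "cam1", "rotation": [0, 0, 0.1], "translation": [1, 0, 0]},
    },
    "points": {"0": {"coordinates": [1.0, 2.0, 3.0], "color": [120, 130, 140]}},
}])

def summarize(reconstruction_json):
    """Return (num_shots, num_points) for each reconstruction in the file."""
    reconstructions = json.loads(reconstruction_json)
    return [(len(r["shots"]), len(r["points"])) for r in reconstructions]

print(summarize(SAMPLE))  # one reconstruction with 2 shots and 1 point
```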
Things you can do from there:
- Use datasets with more images
- Click twice on an image to see it. Then use the arrows to move between images.
- Run bin/mesh data/berlin to build a reconstruction with a sparse mesh that produces smoother transitions between images
- Thank you JetBrains for supporting the project with free licenses for IntelliJ Ultimate. Contact peter at mapillary dot com if you are a contributor and need one. Apply for your own project here
I have edited OpenSfM/opensfm/reconstruction.py to show the location of the gaze cursor over time in the reconstruction (this project is meant to be used in the context of eye tracking). I have also added OpenSfM/bin/just_reconstruction, which only calls OpenSfM/opensfm/commands/reconstruct.py and OpenSfM/opensfm/commands/mesh.py, as a faster way to test the edits than running the entire pipeline. For this to work as desired, each dataset needs a text file (gaze_coordinates.txt) containing the coordinates of the gaze cursor, ordered to match the video frames in the images folder. Each line should have the format x y, where x is the number of pixels from the left edge of the frame and y is the number of pixels from the bottom edge of the frame.
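The gaze_coordinates.txt format described above can be parsed with a few lines of Python. This is a hypothetical helper, not code from the repository; note the y flip that would be needed if downstream code expects a top-left image origin (frame_height is an assumed parameter, since the format stores y from the bottom edge):

```python
def load_gaze_coordinates(path):
    """Parse gaze_coordinates.txt: one 'x y' pair per line, where x is
    pixels from the left edge and y is pixels from the bottom edge."""
    coords = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue  # skip blank or malformed lines
            coords.append((float(parts[0]), float(parts[1])))
    return coords

def to_top_left_origin(coords, frame_height):
    """Convert bottom-edge y values to a top-left origin, which most
    image libraries (e.g. OpenCV) use."""
    return [(x, frame_height - y) for x, y in coords]
```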