MVE Users Guide
Building MVE and UMVE
Download and Building: To build the libraries, type make in the base path. Note that the ogl library requires OpenGL (it is only used by UMVE); most applications do not need it. Building this library fails on systems without OpenGL, which is fine as long as ogl is not required.
$ git clone https://github.com/simonfuhrmann/mve.git
$ cd mve
$ make -j8
User Interface UMVE: MVE can be operated without UMVE using the command line tools. However, UMVE is useful for inspecting the results of the reconstruction steps. UMVE is a Qt-based application that uses qmake as its build system. To build and execute it, run:
$ cd apps/umve/
$ qmake && make -j8
$ ./umve
API Documentation: Optional API level documentation can be generated using Doxygen:
$ make doc
$ open-browser docs/doxygen.html
System requirements to compile and run MVE or UMVE are:
- libjpeg (for MVE, http://www.ijg.org/)
- libpng (for MVE, http://www.libpng.org/pub/png/libpng.html)
- libtiff (for MVE, http://www.libtiff.org/)
- OpenGL (for libogl in MVE and UMVE)
- Qt 5 (for UMVE, http://qt.nokia.com)
Currently, there is no install procedure. MVE apps do not depend on any external files. UMVE only requires access to the shaders and expects these files in the shader/ directory located next to the binary. If the shaders cannot be loaded from the file system, built-in fallback shaders are used.
- If UMVE does not show icons, SVG support for Qt is missing. Search your distribution's package repository for the Qt SVG module and install it.
The Reconstruction Pipeline
The MVE image-based reconstruction pipeline is composed of the following components:
- Creating a dataset, by converting the input photos into the MVE File Format.
- Structure from Motion, which reconstructs the camera parameters of the input photos.
- Multi-View Stereo, which reconstructs dense depth maps for each image.
- Surface Reconstruction, which reconstructs a surface mesh from the depth maps.
All steps of the pipeline are available as console applications and can be executed on systems without a graphical user interface. Only Multi-View Stereo is currently accessible directly from within UMVE.
The following commands are a typical invocation of the pipeline. Read on for more information.
$ makescene -i <image-dir> <scene-dir>
$ sfmrecon <scene-dir>
$ dmrecon -s2 <scene-dir>
$ scene2pset -F2 <scene-dir> <scene-dir>/pset-L2.ply
$ fssrecon <scene-dir>/pset-L2.ply <scene-dir>/surface-L2.ply
$ meshclean -t10 <scene-dir>/surface-L2.ply <scene-dir>/surface-L2-clean.ply
Note: Call any application without arguments to see the documentation.
Creating a Dataset
The MVE libraries as well as UMVE are designed to work on MVE datasets. An MVE dataset is simply a directory that contains a views/ directory. A bundle file synth_0.out as well as other files may be placed in the dataset directory during the reconstruction process.
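To illustrate, a dataset might look like the following sketch. The view directory names are hypothetical (they are chosen by the tools and may differ); synth_0.out only appears after Structure from Motion has run.

```
scene-dir/
    views/
        view_0000.mve/
        view_0001.mve/
        ...
    synth_0.out
```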
The makescene command line application is used to convert input photos into an MVE scene. Don't worry, your original photos remain untouched. makescene also supports importing from a few third-party Structure from Motion applications (see Third Party Bundles for details). Another way to create a new scene and import photos is to use UMVE.
There are more advanced ways to create MVE datasets using the MVE API. This involves creating the dataset directory and the views/ directory, and implementing a program that creates the views with the help of the mve::View class. You may want to look at the makescene application code and the API level documentation.
Structure from Motion
If makescene has been used to import from an existing Structure from Motion (SfM) reconstruction, this step can be omitted. The sfmrecon command line application runs the SfM reconstruction on the input images. In some cases, sfmrecon selects an unsuitable initial pair and then fails to triangulate any tracks. In these cases, try manually selecting an initial pair with the --initial-pair command line option. See Structure from Motion for more details.
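A session recovering from a failed initial pair might look like the sketch below. The view IDs and the exact argument syntax of --initial-pair are assumptions for illustration; run sfmrecon without arguments to see the option's actual form.

```shell
# First attempt: let sfmrecon choose the initial pair automatically.
$ sfmrecon <scene-dir>
# If no tracks were triangulated, retry with a hand-picked pair of views.
# The IDs (10 and 27) and the "ID1,ID2" syntax are illustrative guesses;
# check the built-in help for the exact format.
$ sfmrecon --initial-pair=10,27 <scene-dir>
```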
Multi-View Stereo
The dmrecon application runs Multi-View Stereo (MVS) to reconstruct a depth map for every input image. MVS automatically chooses a resolution for the depth maps. It is rarely useful to reconstruct at full resolution, as this produces less complete depth maps with more noise at a much higher processing cost. This behavior can be changed using the --max-pixels option. See Multi-View Stereo for more details.
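Two ways of controlling the depth map resolution are sketched below. The -s2 invocation is taken from the pipeline above; the --max-pixels value is only an illustrative guess, and the interpretation of the scale parameter given in the comment is an assumption.

```shell
# Reconstruct at a fixed reduced scale, as in the pipeline above
# (each scale level is assumed to halve the image resolution):
$ dmrecon -s2 <scene-dir>
# Alternatively, let dmrecon choose the scale but cap the depth map
# size; the value (1.5 megapixels) is only an illustrative guess:
$ dmrecon --max-pixels=1500000 <scene-dir>
```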
Point Cloud Export
The scene2pset application is used to create an extremely dense point cloud as the union of all samples from all depth maps. Use the -F option to generate output that can be used by Floating Scale Surface Reconstruction and Poisson Surface Reconstruction. See the project websites for more information.
scene2pset tool has the ability to mask out points from each image. Note that each image is stored in its own
.mve/ directory within the
views/ directory. Make one mask for each image within its
.mve/ directory, each with the same name e.g.
mask.png. Then run
scene2pset with the flag
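Placing the masks could look like the following sketch. The view directory names and the mask file locations under masks/ are hypothetical; only the requirement that every mask uses the same file name (here mask.png, as suggested above) comes from the text.

```shell
# Copy one mask per view into that view's .mve/ directory, all with the
# same file name. View directory names here are illustrative guesses.
$ cp masks/view0.png <scene-dir>/views/view_0000.mve/mask.png
$ cp masks/view1.png <scene-dir>/views/view_0001.mve/mask.png
```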
Surface Reconstruction
To reconstruct a final surface from the dense point cloud, the fssrecon tool can be used. You might also want to consider Poisson Surface Reconstruction, which often creates more complete and smoother geometry, but doesn't work as well with varying surface detail and doesn't produce colored output. After running fssrecon, the output mesh should be cleaned with meshclean. This eliminates many unnecessary faces as well as unreliable geometry and disconnected components. Tune the -t and -c parameters to your needs.
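A cleaning pass might look like the sketch below. The -t10 value is taken from the pipeline above; the -c value and the interpretation in the comment (minimum size of connected components to keep) are assumptions, so consult meshclean's built-in help (run it without arguments) for the exact semantics.

```shell
# Remove unreliable geometry (-t10, as in the pipeline above) and small
# disconnected components (-c value and meaning are illustrative guesses):
$ meshclean -t10 -c1000 <scene-dir>/surface-L2.ply <scene-dir>/surface-L2-clean.ply
```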