11 changes: 10 additions & 1 deletion _sources/cameras.rst.txt
@@ -24,6 +24,15 @@ model that is complex enough to model the distortion effects:
for fisheye lenses and note that all other models are not really capable of
modeling the distortion effects of fisheye lenses. The ``FOV`` model is used by
Google Project Tango (make sure to not initialize ``omega`` to zero).
- ``SIMPLE_FISHEYE``, ``FISHEYE``: Use these camera models for fisheye
lenses whose distortion can be ignored or has been pre-corrected. Both
models use the equidistant projection (theta = atan(r)) without any
distortion parameters. ``SIMPLE_FISHEYE`` has a single focal length (f),
while ``FISHEYE`` has two (fx, fy).
- ``SIMPLE_DIVISION``, ``DIVISION``: Use these camera models if you know the
calibration parameters a priori. Similar to the ``SIMPLE_RADIAL`` and
``RADIAL`` models, they can model simple radial distortion effects. The two
model families have first-order local equivalence for small distortions.
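As an aside, the equidistant mapping used by the fisheye models above can be sketched in a few lines of Python. This is an illustrative sketch of the theta = atan(r) relation, not COLMAP's actual implementation; the function name and its arguments are hypothetical:

```python
import math

def project_simple_fisheye(x, y, z, f, cx, cy):
    # Illustrative sketch of the equidistant fisheye projection described
    # above (theta = atan(r), no distortion parameters). Assumes z > 0.
    u, v = x / z, y / z              # normalized image coordinates
    r = math.hypot(u, v)             # radius from the principal axis
    if r > 0:
        theta = math.atan(r)         # angle from the optical axis
        scale = theta / r            # equidistant: image radius is f * theta
    else:
        scale = 1.0                  # point on the optical axis
    return cx + f * scale * u, cy + f * scale * v
```

A point on the optical axis projects to the principal point (cx, cy); with ``FISHEYE``, the single f would simply be split into fx and fy.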

You can inspect the estimated intrinsic parameters by double-clicking specific
images in the model viewer or by exporting the model and opening the
@@ -44,4 +53,4 @@ fix the intrinsic parameters during the reconstruction

Please refer to the camera models header file for information on the parameters
of the different camera models:
https://github.com/colmap/colmap/blob/main/src/colmap/sensor/models.h
65 changes: 63 additions & 2 deletions _sources/faq.rst.txt
@@ -175,7 +175,10 @@ Alternatively, you can also produce a dense model without a sparse model as::
Since the sparse point cloud is used to automatically select neighboring images
during the dense stereo stage, you have to manually specify the source images,
as described :ref:`here <faq-dense-manual-source>`. The dense stereo stage
now also requires a manual specification of the depth range.

Finally, in this case, fusion will fail to match points successfully if
``min_num_pixels`` is left at its default value (greater than 1), so also set
that parameter, as below::

colmap patch_match_stereo \
--workspace_path path/to/dense/workspace \
@@ -184,6 +187,7 @@

colmap stereo_fusion \
--workspace_path path/to/dense/workspace \
--StereoFusion.min_num_pixels 1 \
--output_path path/to/dense/workspace/fused.ply


@@ -371,7 +375,7 @@ If you encounter the following error message::
or the following:

ERROR: Feature matching failed. This probably caused by insufficient GPU
memory. Consider reducing the maximum number of features.

during feature matching, your GPU runs out of memory. Try decreasing the option
``--FeatureMatching.max_num_matches`` until the error disappears. Note that this
@@ -387,6 +391,63 @@ required GPU memory will be around 400MB, which are only allocated if one of
your images actually has that many features.
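As a sketch, the option can be passed to any of the matchers; the database path and the value below are illustrative placeholders, not recommendations::

    colmap exhaustive_matcher \
        --database_path path/to/database.db \
        --FeatureMatching.max_num_matches 16384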


Speed up bundle adjustment
--------------------------

The following describes practical ways to reduce bundle adjustment runtime.

- **Reduce the problem size**

Limit the number of correspondences so that BA solves a smaller problem:

- Reduce features by decreasing ``--SiftExtraction.max_image_size`` and/or
``--SiftExtraction.max_num_features``.
- Reduce matching pairs (and avoid ``exhaustive_matcher`` when possible) by
decreasing ``--SequentialMatching.overlap``,
``--SpatialMatching.max_num_neighbors``, or ``--VocabTreeMatching.num_images``.
- Reduce matches by decreasing ``--FeatureMatching.max_num_matches``.
- Enable experimental landmark pruning to drop redundant 3D points using
``--Mapper.ba_global_ignore_redundant_points3D 1``.
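As a sketch, several of the options above can be combined in one pipeline run; the paths and values are illustrative placeholders, not tuned recommendations::

    colmap feature_extractor \
        --database_path path/to/database.db \
        --image_path path/to/images \
        --SiftExtraction.max_image_size 2000 \
        --SiftExtraction.max_num_features 4096

    colmap sequential_matcher \
        --database_path path/to/database.db \
        --SequentialMatching.overlap 5 \
        --FeatureMatching.max_num_matches 16384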

- **Utilize GPU acceleration**

Enable GPU-based Ceres solvers for bundle adjustment by setting
``--Mapper.ba_use_gpu 1`` for the ``mapper`` and ``--BundleAdjustmentCeres.use_gpu 1``
for the standalone ``bundle_adjuster``. Several parameters control when and which
GPU solver is used:

- The GPU solver is activated only when the number of images exceeds
``--BundleAdjustmentCeres.min_num_images_gpu_solver``.
- Select between the direct dense, direct sparse, and iterative sparse GPU solvers
using ``--BundleAdjustmentCeres.max_num_images_direct_dense_gpu_solver`` and
``--BundleAdjustmentCeres.max_num_images_direct_sparse_gpu_solver``.

.. Attention:: COLMAP's official CUDA-enabled binaries are not distributed with
   ceres[cuda] until Ceres 2.3 is officially released. To use the GPU solvers,
   you must compile Ceres with CUDA/cuDSS support and link that build to COLMAP.

**Note:** Low GPU utilization for the Schur-based sparse solver (cuDSS) can occur
when the Schur-complement matrix becomes less sparse (i.e., exhibits more fill-in).
Typical causes include:

- High image covisibility.
- Shared camera intrinsics.
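The threshold logic above can be sketched as follows. The parameter names mirror the COLMAP options, but the default values and the function itself are illustrative assumptions, not COLMAP's actual defaults or code:

```python
def choose_ba_solver(num_images,
                     min_num_images_gpu_solver=6,
                     max_num_images_direct_dense=50,
                     max_num_images_direct_sparse=1000):
    # Hypothetical sketch: GPU solvers are only used once the problem is
    # large enough, and the solver type escalates with the image count.
    if num_images <= min_num_images_gpu_solver:
        return "cpu"                   # small problems stay on the CPU
    if num_images <= max_num_images_direct_dense:
        return "gpu_direct_dense"      # dense direct solver on the GPU
    if num_images <= max_num_images_direct_sparse:
        return "gpu_direct_sparse"     # sparse direct solver (cuDSS)
    return "gpu_iterative_sparse"      # iterative sparse solver for huge problems
```

The escalation reflects the usual trade-off: dense factorization is fastest for small Schur complements, while sparse and iterative solvers scale better as the problem grows.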

- **Additional practical tips**

- Improve initial conditions by tuning observation-filtering parameters so BA
receives more inliers and fewer outliers, or by supplying accurate priors
(e.g., intrinsics, poses).
- Fix or restrict refinement of parameters when possible (e.g., hold intrinsics
fixed if they are known) to reduce the number of optimized variables.
- Reduce LM iterations or relax convergence tolerances to trade a small amount of
accuracy for runtime: ``--Mapper.ba_global_max_num_iterations``,
``--Mapper.ba_global_function_tolerance``.
- Reduce the frequency of expensive global BA passes with mapper options:
``--Mapper.ba_global_frames_freq``, ``--Mapper.ba_global_points_freq``,
``--Mapper.ba_global_frames_ratio`` and ``--Mapper.ba_global_points_ratio``.
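For example, with known intrinsics one might hold them fixed and relax the global BA stopping criteria in a single ``mapper`` invocation; the paths and values are illustrative placeholders::

    colmap mapper \
        --database_path path/to/database.db \
        --image_path path/to/images \
        --output_path path/to/sparse \
        --Mapper.ba_refine_focal_length 0 \
        --Mapper.ba_refine_extra_params 0 \
        --Mapper.ba_global_max_num_iterations 30 \
        --Mapper.ba_global_function_tolerance 1e-4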


Trading off completeness and accuracy in dense reconstruction
-------------------------------------------------------------

64 changes: 39 additions & 25 deletions _sources/index.rst.txt
@@ -21,31 +21,7 @@ About
COLMAP is a general-purpose Structure-from-Motion (SfM) and Multi-View Stereo
(MVS) pipeline with a graphical and command-line interface. It offers a wide
range of features for reconstruction of ordered and unordered image collections.
The software is licensed under the new BSD license. If you use this project for
your research, please cite::

@inproceedings{schoenberger2016sfm,
author={Sch\"{o}nberger, Johannes Lutz and Frahm, Jan-Michael},
title={Structure-from-Motion Revisited},
booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2016},
}

@inproceedings{schoenberger2016mvs,
author={Sch\"{o}nberger, Johannes Lutz and Zheng, Enliang and Pollefeys, Marc and Frahm, Jan-Michael},
title={Pixelwise View Selection for Unstructured Multi-View Stereo},
booktitle={European Conference on Computer Vision (ECCV)},
year={2016},
}

If you use the image retrieval / vocabulary tree engine, please also cite::

@inproceedings{schoenberger2016vote,
author={Sch\"{o}nberger, Johannes Lutz and Price, True and Sattler, Torsten and Frahm, Jan-Michael and Pollefeys, Marc},
title={A Vote-and-Verify Strategy for Fast Spatial Verification in Image Retrieval},
booktitle={Asian Conference on Computer Vision (ACCV)},
year={2016},
}
The software is licensed under the new BSD license.

The latest source code is available at `GitHub
<https://github.com/colmap/colmap>`_. COLMAP builds on top of existing works and
@@ -79,6 +55,44 @@ for questions and the `GitHub issue tracker <https://github.com/colmap/colmap>`_
for bug reports, feature requests/additions, etc.


Citation
--------

If you use this project for your research, please cite::

@inproceedings{schoenberger2016sfm,
author={Sch\"{o}nberger, Johannes Lutz and Frahm, Jan-Michael},
title={Structure-from-Motion Revisited},
booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2016},
}

@inproceedings{schoenberger2016mvs,
author={Sch\"{o}nberger, Johannes Lutz and Zheng, Enliang and Pollefeys, Marc and Frahm, Jan-Michael},
title={Pixelwise View Selection for Unstructured Multi-View Stereo},
booktitle={European Conference on Computer Vision (ECCV)},
year={2016},
}

If you use the global SfM pipeline (GLOMAP), please cite::

@inproceedings{pan2024glomap,
author={Pan, Linfei and Barath, Daniel and Pollefeys, Marc and Sch\"{o}nberger, Johannes Lutz},
title={{Global Structure-from-Motion Revisited}},
booktitle={European Conference on Computer Vision (ECCV)},
year={2024},
}

If you use the image retrieval / vocabulary tree engine, please cite::

@inproceedings{schoenberger2016vote,
author={Sch\"{o}nberger, Johannes Lutz and Price, True and Sattler, Torsten and Frahm, Jan-Michael and Pollefeys, Marc},
title={A Vote-and-Verify Strategy for Fast Spatial Verification in Image Retrieval},
booktitle={Asian Conference on Computer Vision (ACCV)},
year={2016},
}


Acknowledgments
---------------

39 changes: 27 additions & 12 deletions _sources/install.rst.txt
Original file line number Diff line number Diff line change
Expand Up @@ -81,7 +81,8 @@ Dependencies from the default Ubuntu repositories::
libboost-graph-dev \
libboost-system-dev \
libeigen3-dev \
libfreeimage-dev \
libopenimageio-dev \
openimageio-tools \
libmetis-dev \
libgoogle-glog-dev \
libgtest-dev \
@@ -93,9 +94,14 @@ Dependencies from the default Ubuntu repositories::
libqt6openglwidgets6 \
libcgal-dev \
libceres-dev \
libsuitesparse-dev \
libcurl4-openssl-dev \
libssl-dev \
libmkl-full-dev
# Fix issue in Ubuntu's openimageio CMake config.
# We don't depend on any of openimageio's OpenCV functionality,
# but it still requires the OpenCV include directory to exist.
sudo mkdir -p /usr/include/opencv4

Alternatively, you can also build against Qt 5 instead of Qt 6 using::

@@ -151,13 +157,14 @@ Dependencies from `Homebrew <http://brew.sh/>`__::
ninja \
boost \
eigen \
freeimage \
openimageio \
curl \
libomp \
metis \
glog \
googletest \
ceres-solver \
suitesparse \
qt \
glew \
cgal \
@@ -170,7 +177,7 @@ Configure and compile COLMAP::
cd colmap
mkdir build
cd build
cmake -GNinja
cmake .. -GNinja
ninja
sudo ninja install

@@ -259,6 +266,7 @@ Install miniconda and run the following commands::
glog \
gtest \
ceres-solver \
suitesparse \
qt \
glew \
sqlite \
@@ -359,15 +367,22 @@ meaningful traces for reported issues.
Documentation
-------------

You need Python and Sphinx to build the HTML documentation::
1. Install the latest pycolmap for up-to-date pycolmap API documentation.
2. Build the documentation::

cd path/to/colmap/doc
pip install -r requirements.txt
make html
open _build/html/index.html # preview results

cd path/to/colmap/doc
sudo apt-get install python
pip install sphinx
make html
open _build/html/index.html
Alternatively, you can build the documentation as PDF, EPUB, etc.::

    make latexpdf
    open _build/pdf/COLMAP.pdf
3. Clone the website repository `colmap/colmap.github.io <https://github.com/colmap/colmap.github.io>`__.
4. Copy the generated files in ``_build/html`` to the cloned repository root.
5. Create a pull request to the `colmap/colmap.github.io <https://github.com/colmap/colmap.github.io>`__
   repository with the updated files.
6. (Optional, for a main release) Copy the previous release into the "legacy" folder,
   under a sub-folder named after the release number (`see here <https://github.com/colmap/colmap.github.io/tree/master/legacy>`__).
2 changes: 1 addition & 1 deletion _sources/pycolmap/index.rst.txt
@@ -29,7 +29,7 @@ To build PyCOLMAP from source, follow these steps:
* On Windows, after installing COLMAP via VCPKG, run in powershell::

python -m pip install . `
--cmake.define.CMAKE_TOOLCHAIN_FILE="$VCPKG_INSTALLATION_ROOT/scripts/buildsystems/vcpkg.cmake" `
--cmake.define.CMAKE_TOOLCHAIN_FILE="$VCPKG_ROOT/scripts/buildsystems/vcpkg.cmake" `
--cmake.define.VCPKG_TARGET_TRIPLET="x64-windows"

Some features, such as cost functions, require that `PyCeres
15 changes: 7 additions & 8 deletions _sources/tutorial.rst.txt
@@ -163,14 +163,13 @@ Data Structure

COLMAP assumes that all input images are in one input directory with potentially
nested sub-directories. It recursively considers all images stored in this
directory, and it supports various different image formats (see `FreeImage
<http://freeimage.sourceforge.net/documentation.html>`_). Other files are
automatically ignored. If high performance is a requirement, then you should
separate any files that are not images. Images are identified uniquely by their
relative file path. For later processing, such as image undistortion or dense
reconstruction, the relative folder structure should be preserved. COLMAP does
not modify the input images or directory and all extracted data is stored in a
single, self-contained SQLite database file (see :doc:`database`).
directory, and it supports various image formats via OpenImageIO. Other
files are automatically ignored. If high performance is a requirement, then you
should separate any files that are not images. Images are identified uniquely by
their relative file path. For later processing, such as image undistortion or
dense reconstruction, the relative folder structure should be preserved. COLMAP
does not modify the input images or directory and all extracted data is stored
in a single, self-contained SQLite database file (see :doc:`database`).

The first step is to start the graphical user interface of COLMAP by running the
pre-built binaries (Windows: ``COLMAP.bat``, Mac: ``COLMAP.app``) or by executing