OpenPano is a panorama stitching program written in C++ from scratch. It mainly follows the routine described in the paper Automatic Panoramic Image Stitching using Invariant Features, which is also the method used by AutoStitch.
- gcc >= 4.7 (Or VS2015)
- FLANN (already included in the repository, slightly modified)
- CImg (optional. already included in the repository)
- libjpeg (optional if you only work with png files)
- cmake or make
Eigen, CImg and FLANN are header-only, which simplifies compilation across platforms. CImg and libjpeg are only used to read and write images, so you can easily get rid of them.
On ArchLinux, install dependencies by:
sudo pacman -S gcc sed cmake make libjpeg eigen
On Ubuntu, install dependencies by:
sudo apt install build-essential sed cmake libjpeg-dev libeigen3-dev
Set up a Docker container (optional):
If you're having trouble getting OpenPano to run natively, you may find it easier to run it inside a container. Docker makes it easy to set up such a lightweight virtual machine, or 'container'.
The settings for your Docker container are described in the Dockerfile, which is just a file named 'Dockerfile' in the root directory of the project.
To build a docker image run the following command from the same directory as the Dockerfile:
$ docker build -t open_pano .
This will take a while to run, so give it some time. It builds the image and tags it with the name open_pano. Next you can run your image.
In the following command, replace <path_to_shared_directory> with the absolute path to the directory you want to share between your workspace and the container:

$ docker run -it --name open_pano -v <path_to_shared_directory>:/shared_folder open_pano /bin/bash
For example, to make your Desktop accessible from the container, use the following:

$ docker run -it --name open_pano -v ~/Desktop:/Desktop open_pano /bin/bash
When you run your docker image you'll be dropped right into a bash terminal. From there you can compile and run OpenPano. Images added to shared_folder will be accessible from both the container and your workspace.
To exit from the docker container use the command exit; this will stop the container. You can start the container again with
~$ docker start open_pano
and then attach to it with
~$ docker attach open_pano
(then hit [Enter]).
Some other commands that might be helpful:

Get a list of all docker images: ~$ docker images
See all running containers: ~$ docker ps
Once the docker image is running, you'll need to install the necessary Python packages:

$ pip install setuptools
$ pip install pillow
At that point you should be all set as the Dockerfile will have installed everything else you need including downloading this repository into your workspace.
Now you can run the entire OpenPanoThermo pipeline.
$ cd OpenPanoThermo
$ python createThermoPano.py -o /path/to/output.jpg /path/to/input/images
Linux / OSX / WSL (bash on Windows)

$ make -C src

or:

$ mkdir build && cd build && cmake .. && make
The default clang on OSX doesn't contain OpenMP support; you may need gcc or a different clang. See #16.
- Install cmake
- Set environment variable
- Open the Visual Studio Developer Command Prompt.
- Open the VS2015 project and compile it.
Copy config.cfg to the directory containing the executable. Three modes are available (set/unset the options in config.cfg):
cylinder mode. Gives better results if:
- You only turn the camera left (or right) between shots (as is usually done); no translations or other types of rotation are allowed.
- Images are taken with the same camera, with a known FOCAL_LENGTH set in config.
- Images are given in the left-to-right order. (I might fix this in the future)
camera estimation mode. The only requirement is that the camera does not translate. It usually works well as long as you don't have too few images, but it's slower because it needs to perform pairwise matches.
translation mode. Simply stitch images together by affine transformation. It works when camera performs pure translation and scene points are roughly at the same depth. It also requires ordered input.
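As an illustration, a config.cfg for cylinder mode might look roughly like the fragment below. This is only a sketch: CYLINDER and TRANS are named in the options that follow, but ESTIMATE_CAMERA and the exact key spellings are assumptions — verify them against the config.cfg shipped in the repository.

```
# pick exactly one mode (assumed option names; check the shipped config.cfg)
CYLINDER 1
TRANS 0
ESTIMATE_CAMERA 0

FOCAL_LENGTH 24      # 35mm-equivalent focal length; only used in cylinder mode
ORDERED_INPUT 1      # required in CYLINDER and TRANS modes
CROP 1               # crop irregular white borders from the result
LAZY_READ 1          # trade a minor slowdown for lower peak memory
```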
Some options you may care about:
- FOCAL_LENGTH: focal length of your camera in 35mm equivalent. Only useful in cylinder mode.
- ORDERED_INPUT: whether input images are ordered sequentially. Has to be 1 in CYLINDER and TRANS modes.
- CROP: whether to crop the final image to avoid irregular white border.
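For FOCAL_LENGTH, the 35mm-equivalent value can be computed from the lens's real focal length and the sensor width. A quick sketch with hypothetical numbers (check your camera's spec sheet for the real values):

```python
# 35mm-equivalent focal length = real focal length * (36 mm / sensor width)
real_focal_mm = 4.15      # hypothetical phone lens
sensor_width_mm = 4.8     # hypothetical sensor
focal_35mm = real_focal_mm * 36.0 / sensor_width_mm
print(f"FOCAL_LENGTH {focal_35mm:.1f}")   # prints "FOCAL_LENGTH 31.1"
```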
Other parameters are quality-related. The default values are generally good for images with more than 0.7 megapixels. If your images are too small and cannot produce satisfactory results, it might be better to resize your images rather than tune the parameters.
$ ./image-stitching <file1> <file2> ...

The output file is out.jpg. You can play with the example data to start with.
Before dealing with very large images (4 megapixels or more), it's better to resize them. (I might add this feature in the future)
In cylinder/translation mode, the input file names need to be given in the correct order.
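One pitfall when ordering input: shells and plain string sorts order names lexically, so img10.jpg comes before img2.jpg. A natural sort puts frames back in shooting order (a sketch with hypothetical file names):

```python
import re

def natural_key(name):
    # split "img10.jpg" into ["img", 10, ".jpg"] so numbers compare numerically
    return [int(t) if t.isdigit() else t for t in re.split(r"(\d+)", name)]

files = ["img10.jpg", "img1.jpg", "img2.jpg"]   # hypothetical names
print(sorted(files))                    # ['img1.jpg', 'img10.jpg', 'img2.jpg']
print(sorted(files, key=natural_key))   # ['img1.jpg', 'img2.jpg', 'img10.jpg']
```

Pass the naturally-sorted list to ./image-stitching, or simply zero-pad the file names (img01.jpg, img02.jpg, ...) so a plain glob already sorts correctly.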
For more examples, see results.
Speed & Memory:
Tested on an Intel Core i7-6700HQ, with:
- 11 ordered images of size 600x400: 3.2s.
- 13 ordered images of size 1500x1112: 6s.
- 38 unordered images of size 1300x867 (high vertical FOV): 51s.
Memory consumption is known to be huge with the default libc allocator.
Simply using a modern allocator (e.g. tcmalloc, jemalloc) can help a lot.
Setting LAZY_READ to 1 can also save memory, at the cost of a minor slowdown.
Peak memory in bytes (assume each input has the same w & h):
Without the LAZY_READ option: max(finalw * finalh * 12, #photos * w * h * 12 + #photos * #matched_pairs * 96 + #keypoints * 520)
With the LAZY_READ option: max(finalw * finalh * 16, #threads * w * h * 12, #photos * #matched_pairs * 96 + #keypoints * 520)
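As a rough illustration, plugging hypothetical numbers into the peak-memory formulas (the photo count and size match the 13-image benchmark above; the output size, match and keypoint counts are assumptions):

```python
# hypothetical inputs: 13 photos of 1500x1112
photos, w, h = 13, 1500, 1112
final_w, final_h = 8000, 1600         # assumed size of the stitched result
matched_pairs, keypoints = 12, 40000  # assumed matched pairs / total keypoints
threads = 8                           # assumed thread count

without_lazy = max(final_w * final_h * 12,
                   photos * w * h * 12
                   + photos * matched_pairs * 96 + keypoints * 520)
with_lazy = max(final_w * final_h * 16,
                threads * w * h * 12,
                photos * matched_pairs * 96 + keypoints * 520)
print(f"without LAZY_READ: {without_lazy / 1e6:.0f} MB")  # prints "without LAZY_READ: 281 MB"
print(f"with LAZY_READ:    {with_lazy / 1e6:.0f} MB")     # prints "with LAZY_READ:    205 MB"
```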
- Features: SIFT
- Transformation: use RANSAC to estimate a homography or affine transformation.
- Optimization: focal estimation, bundle adjustment, and some straightening tricks.
For details, see my blog post.
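The RANSAC step can be illustrated with a minimal sketch for the affine case. This is illustrative only — it is not OpenPano's actual code, and it uses numpy rather than the project's own linear algebra:

```python
# minimal RANSAC sketch: estimate a 2D affine transform from point matches
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of dst ~= [src | 1] @ P, where P is a 3x2 matrix."""
    M = np.hstack([src, np.ones((len(src), 1))])
    P, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return P

def ransac_affine(src, dst, iters=200, thresh=3.0, seed=0):
    rng = np.random.default_rng(seed)
    best_P, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)   # minimal affine sample
        P = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ P
        inliers = int(np.sum(np.linalg.norm(pred - dst, axis=1) < thresh))
        if inliers > best_inliers:                     # keep the best consensus
            best_P, best_inliers = P, inliers
    return best_P, best_inliers
```

A homography needs 4 point pairs per sample and a direct linear transform in place of the affine least squares, but the consensus loop is the same.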
To get the best stitching quality:
- While rotating the camera for different shots, try to keep the position of the camera lens static.
- Keep the exposure parameters unchanged.
- Avoid moving objects in the scene.
- Objects far away will stitch better.
- The algorithm doesn't work well with wide-angle cameras whose images are heavily distorted. Camera parameters are needed to undistort the images.
- run bundle adjustment on sphere lens instead of perspective lens
- use LAZY_READ & 1-byte images in both blenders to reduce peak memory
- clean up use of copies of
- faster gaussian blur kernel
- port some hotspots (e.g. dist.cc) to NEON
- support reading/writing EXIF metadata, to:
- get focal length, distortion, etc
- allow pano to be viewed on Facebook
- python bindings