BLAM is a camera calibration add-on for Blender and is meant to facilitate modeling based on photos or synthetic images rendered with a perspective projection camera. For some examples of what can be accomplished using BLAM, have a look at this introduction video and this tutorial video.
With BLAM, you can
- easily compute the orientation and focal length of the Blender camera based on a photo, so that 3D space as seen through the Blender camera matches that of the photo. This makes it easy to composite 3D objects into photos.
- automatically reconstruct 3D geometry with rectangular faces from photos of such geometry.
- project images onto meshes with a single mouse click, which is useful for example when doing camera projection mapping.
The test images used below can be downloaded here.
Static camera calibration
Static camera calibration is the computation of focal length and camera orientation based on a single photo or movie frame (note that a calibrated camera can be moved freely as long as it is not rotated). This functionality is located in the tools panel of the Movie Clip Editor. BLAM needs some user input in order to perform calibration. This input is given as grease pencil line segment strokes (ctrl + D + left mouse button drag), drawn on top of the movie clip. Each of these line segments indicates the direction of a given coordinate axis (x, y or z). All line segments in a grease pencil layer must correspond to the same axis. From now on, such a collection of two or more segments corresponding to a given axis will be referred to as a line segment bundle. As an example, the green line segments in the image below form a line segment bundle.
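To illustrate what a line segment bundle encodes: each segment, extended to an infinite line, should pass through the bundle's common vanishing point. The sketch below (plain Python for illustration, not BLAM's actual code) intersects two segments using homogeneous coordinates, where the line through two points and the intersection of two lines are both cross products:

```python
def cross(a, b):
    """Cross product of two 3-vectors (homogeneous 2D points or lines)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def vanishing_point(seg1, seg2):
    """Intersect two segments (each a pair of 2D points) as infinite lines."""
    l1 = cross((*seg1[0], 1.0), (*seg1[1], 1.0))
    l2 = cross((*seg2[0], 1.0), (*seg2[1], 1.0))
    x, y, w = cross(l1, l2)
    return (x / w, y / w)  # w == 0 means the segments are parallel

# Two segments along parallel horizontal edges receding to the right:
vp = vanishing_point(((0, 0), (4, 1)), ((0, 3), (4, 2)))
```

With more than two segments in a bundle, the vanishing point is overdetermined and has to be estimated, which is why segment accuracy matters more than segment count.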
Calibration using two vanishing points
To enable calibration using two vanishing points, select this option in the "Method" drop down box in the Static Camera Calibration panel. Select 'Image midpoint' as the optical center for now (the other two options are described in the next section):
This calibration method computes the focal length and orientation of the active camera based on two line segment bundles (one bundle per grease pencil layer) corresponding to perpendicular axis directions, as in the image below.
Once the line segment bundles have been drawn, you can choose which bundle should correspond to which axis using the two "Parallel to the" drop down boxes. To perform calibration, make sure there is an active camera and press the "Calibrate active camera" button. If the "Set background image" checkbox is checked, the movie clip will be set as the background for the 3D view camera when performing calibration. The image below shows the Blender grid floor seen through a camera calibrated using the two line segment bundles shown above.
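The math behind this method can be sketched as follows: if v1 and v2 are the vanishing points of two perpendicular axis directions and p is the optical center, the rays back-projected through v1 and v2 must be orthogonal in 3D, which fixes the focal length. A minimal illustration in pixel units (an assumption for this sketch, not BLAM's actual code):

```python
import math

def focal_from_two_vps(v1, v2, p):
    """Focal length (pixels) from two vanishing points of perpendicular
    directions and the optical center p. Orthogonality of the two
    back-projected directions gives (v1 - p) . (v2 - p) + f**2 = 0."""
    d = (v1[0] - p[0]) * (v2[0] - p[0]) + (v1[1] - p[1]) * (v2[1] - p[1])
    if d >= 0:
        # The vanishing points lie on the same side of p; no real focal
        # length satisfies the orthogonality constraint.
        raise ValueError("vanishing points incompatible with this optical center")
    return math.sqrt(-d)

f = focal_from_two_vps((800, -200), (-600, -200), (0, 0))
```

Note that the constraint only has a solution when the two vanishing points lie on opposite sides of the optical center, which is one reason a wrong optical center assumption degrades the result.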
Defining the optical center
The optical center, or principal point, is where the optical axis intersects the image plane, i.e. where a line through the middle of the lens would hit the image. In most cases, the optical center is in the middle of the image, but it might not be if, for example, the image has been asymmetrically cropped. This can be hard to spot with the naked eye; only the first of the two images below has the optical center in the middle.
If calibration using the image midpoint as the optical center does not give accurate results, chances are the optical center is somewhere else. BLAM supports two methods of defining the location of the optical center in this case: either the location can be entered manually or computed based on a third line segment bundle.
To use a known optical center location, select 'From camera data' in the 'Optical center' menu, set the location using the camera data panel in the properties panel of the movie clip editor and press 'Calibrate active camera'.
To compute the optical center, first select 'From 3rd vanishing point' in the 'Optical center' menu, then draw a line segment bundle corresponding to the third principal axis in the third grease pencil layer and finally hit 'Calibrate active camera'. The image below shows a third line segment bundle in blue.
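For reference, given the vanishing points of three mutually perpendicular directions, the optical center is the orthocenter of the triangle they form. A small sketch of that computation (plain Python for illustration, not BLAM's actual code):

```python
def orthocenter(v1, v2, v3):
    """Orthocenter of the triangle formed by three vanishing points of
    mutually perpendicular directions; for a pinhole camera this is the
    optical center. Solves the two altitude equations:
        (x - v1) . (v3 - v2) = 0  and  (x - v2) . (v3 - v1) = 0
    as a 2x2 linear system."""
    ax, ay = v1
    bx, by = v2
    cx, cy = v3
    a1, b1 = cx - bx, cy - by          # altitude through v1, normal (v3 - v2)
    c1 = a1 * ax + b1 * ay
    a2, b2 = cx - ax, cy - ay          # altitude through v2, normal (v3 - v1)
    c2 = a2 * bx + b2 * by
    det = a1 * b2 - a2 * b1            # zero if the points are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

This is also why the third bundle improves accuracy: it overdetermines the calibration and lets the optical center be solved for instead of assumed.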
Single vanishing point calibration
First of all, note that for single vanishing point calibration to work, the focal length and camera type (or sensor size) must be known in advance and specified in the Camera Data panel of the movie clip editor.
To enable calibration using a single vanishing point, select this option in the "Method" drop down box in the 'Static Camera Calibration' panel:
This calibration method computes the orientation of the active camera based on a known focal length, one line segment bundle (from the first grease pencil layer) and an optional single line segment defining the horizon angle. This single line segment is taken from the second grease pencil layer if "Compute from grease pencil stroke" is checked; otherwise the horizon tilt angle is set to zero. Single vanishing point calibration is useful for images where only one vanishing point can be established, like the image below, where lines in the directions of two out of three coordinate axes are parallel. The single line segment bundle is drawn in red and the horizon line in green.
The final camera orientation can be controlled by choosing an axis corresponding to the single line segment bundle (using the "Parallel to the" drop down box) and an axis defining the up direction, perpendicular to the horizon line (using the "Up axis" drop down box). If the "Set background image" checkbox is checked, the movie clip will be set as the background for the 3D view camera when performing calibration. The following image shows the grid floor seen through a camera calibrated using the line segment bundle and horizon line in the image above, along with a known focal length of 35 mm.
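To illustrate why the focal length must be known here: a single vanishing point pins down the camera-space direction of one world axis, and recovering that direction requires f. A sketch, assuming pixel coordinates relative to the optical center and a camera looking down -Z (sign conventions vary between implementations; this is not BLAM's actual code):

```python
import math

def axis_direction_in_camera_space(vp, optical_center, f):
    """Unit direction, in camera coordinates, of the world axis whose
    vanishing point is vp, for a camera with focal length f (pixels)
    looking down the negative Z axis."""
    dx = vp[0] - optical_center[0]
    dy = vp[1] - optical_center[1]
    n = math.sqrt(dx * dx + dy * dy + f * f)
    return (dx / n, dy / n, -f / n)
```

The optional horizon segment then fixes the remaining degree of freedom, the roll around that axis, which is why it can be omitted when the horizon is level.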
Problematic input images
BLAM will not work well for all kinds of input images. More specifically, an input image is problematic if
- the vanishing lines (i.e. the grease pencil strokes) along any axis are parallel or near parallel.
- the lens distortion is too severe.
- perspective correction or other kinds of warping have been applied to it.
Here are some things to keep in mind that could improve the accuracy of the calibration:
- Using more line segments doesn't necessarily improve accuracy. Two line segments that match two vanishing lines well are better than many less accurate segments. Also, adding a less accurate segment to a collection of accurate ones will always have a negative impact.
- Lowering the opacity and thickness of the grease pencil strokes makes it easier to see if they line up properly.
- The grease pencil strokes can be drawn with sub-pixel accuracy if zooming in far enough.
- If your image exhibits lens distortion, you will have to either undistort it prior to using it with BLAM, or accept a loss of accuracy, since lines that should be straight appear curved because of the distortion.
- The less parallel the line segments are, the more accurate the calibration result will be. If you have a choice, try to make the difference in slopes of your line segments as large as possible.
3D mesh reconstruction
Reconstructing a mesh
The mesh reconstruction UI is in the Photo Modeling Tools panel in the tools section of the 3D view.
The starting point when doing mesh reconstruction is the creation of a mesh with faces lining up with the faces to reconstruct when seen through the camera. What this mesh looks like from other angles doesn't matter. The following images illustrate what such an input mesh could look like.
The input mesh must consist only of quad faces, and all faces must be connected. Once a valid input mesh has been created, 3D reconstruction can be performed by pressing "Reconstruct 3D geometry". If "Separate faces" is checked, each face in the reconstructed mesh will have its own set of vertices. This can be useful to isolate and eliminate faces with large reconstruction errors. The image below shows the input mesh (selected) along with the 3D reconstruction.
Getting accurate results
3D mesh reconstruction will not work properly unless
- all faces of the resulting mesh are rectangular. There are no limitations on the angles between faces, but each face has to have only right angles.
- the focal length of the Blender camera has been calibrated to match the photo.
- the photo satisfies the requirements mentioned in the Problematic Input Images section above.
Photo based modeling
The tools referred to below can be found in the Photo Modeling Tools panel in the tools section of the 3D view.
Moving vertices along the line of sight
When creating 3D geometry to match geometry in a photo, it is usually a requirement that the vertices in the 3D mesh coincide with some features in the image, like corners for example. This means that when changing the 3D geometry, the vertices should not move when seen through the camera. The only way of moving a vertex in this fashion is along the line of sight. The line of sight of a vertex is the line passing through the camera origin, i.e. the "eye", and the vertex.
By pressing "Set line of sight scale pivot", the pivot for rotation and, more importantly, scaling is set to the camera origin. Selecting one or more vertices and scaling them using this pivot point guarantees that they move along their respective line of sight. This is a convenient way of altering geometry without moving vertices away from their corresponding image features.
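Why this works can be seen directly from the pinhole projection: a uniform scale about the camera origin cancels out in the perspective division, so the projected position of the vertex is unchanged. A minimal sketch (plain Python, not Blender's actual transform code):

```python
def project(p, f):
    """Pinhole projection of a camera-space point p = (x, y, z), z < 0,
    onto the image plane at distance f."""
    x, y, z = p
    return (-f * x / z, -f * y / z)

# Scaling a point about the camera origin moves it along its line of
# sight; the scale factor cancels in x/z and y/z, so the projection
# stays the same:
p = (1.0, 2.0, -4.0)
s = 2.5
scaled = tuple(s * c for c in p)
```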
Projecting an image onto a mesh
To project the current background image onto the active mesh, just press "Project background image onto mesh". The "Method" drop down box lets you choose one of two currently supported methods to do this.
The simple projection method
The simple projection method works by generating UV coordinates projected from the camera view and using the image as the texture. While sufficient in many cases, this method may cause visible warping of the texture on large faces.
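Conceptually, the UV coordinate generated for each vertex is just its camera projection remapped to the [0, 1] texture range; the warping on large faces comes from texture interpolation being linear between vertices while the projection is not. A sketch under assumed conventions (camera-space vertex, focal length and image size in pixels, optical center in the middle; not BLAM's actual code):

```python
def camera_projected_uv(p, f, width, height):
    """UV coordinate for a camera-space vertex p = (x, y, z), z < 0,
    projected from the camera view: pinhole projection, then a remap of
    pixel coordinates to the [0, 1] x [0, 1] texture space."""
    x, y, z = p
    u = (-f * x / z) / width + 0.5
    v = (-f * y / z) / height + 0.5
    return (u, v)
```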
The high quality projection method
The high quality projection method works by first applying a simple subdivision modifier to the mesh and then using a UV project modifier to project the image onto the mesh from the camera view. The amount of texture warping can be controlled by adjusting the number of subdivision levels of the simple subdivision modifier.
Note that for high quality projection, a projector object is created and placed at the position of the camera. If this object is moved relative to the mesh, the texture will change, so either move the active camera or the mesh and projector object together.