Camera lens calibration and undistortion pipeline with a focus on visual interpretability.
DistortionLab provides tools for:
- Intrinsic calibration - Estimate camera intrinsics (focal length, principal point, distortion) from a chessboard video
- Undistortion - Remove lens distortion from videos and images
```sh
uv pip install -r requirements.txt
```

Additionally, FFmpeg must be installed and available in your PATH.
Estimate camera intrinsics (focal length, principal point, distortion coefficients) from a calibration video.
- Input: Video of a chessboard pattern moved through the frame
- Output: Calibration YAML + distortion plot
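The exact schema of the calibration YAML is defined by the script; a hypothetical sketch of its shape (field names are illustrative, assuming the standard pinhole camera matrix plus distortion coefficients):

```yaml
# Hypothetical layout -- check the generated file for the actual schema.
image_width: 1920
image_height: 1080
camera_matrix:                # 3x3 intrinsic matrix K
  - [1450.0,    0.0, 960.0]
  - [   0.0, 1450.0, 540.0]
  - [   0.0,    0.0,   1.0]
distortion_coeffs: [-0.21, 0.05, 0.0, 0.0, 0.01]   # k1, k2, p1, p2, k3
```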
```sh
uv run python step1_intrinsics.py
```

Place calibration videos or image folders in `input/step1_intrinsics/`, named `camXX_*.mp4` (e.g., `cam01_intrinsics.mp4`) or `camXX_*/` image folders (e.g., `cam01_images/`).
Options:
- `--board_shape 5 8` - Chessboard size in squares (default: 5x8)
- `--desired_frames 200` - Number of frames to process (default: 200)
- `--no_distortion` - Assume zero distortion (only estimate focal length and principal point)
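The distortion coefficients estimated here follow the Brown-Conrady model (the standard choice for OpenCV-style chessboard calibration; assumed here, not confirmed by the source). A minimal sketch of applying radial and tangential distortion to normalized image coordinates:

```python
import numpy as np

def distort(xy, k1=0.0, k2=0.0, p1=0.0, p2=0.0, k3=0.0):
    """Apply Brown-Conrady distortion to normalized image points (N, 2)."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x * x + y * y                                 # squared radius
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3     # radial term
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([x_d, y_d], axis=1)

pts = np.array([[0.0, 0.0], [0.5, 0.5]])
# Barrel distortion (k1 < 0) pulls off-center points toward the center;
# the principal point stays fixed.
print(distort(pts, k1=-0.2))
```

This also explains why distortion is strongest near the edges: the radial term grows with r², so coverage far from the principal point constrains k1-k3 the most.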
During calibration, aim to respect the following principles:
- Get close — but keep it sharp!
  - Move the board as close to the camera as you can, but stop as soon as you see any blur. A sharp image from further away beats a blurry close-up: blur degrades corner detection and corrupts the calibration.
  - All corners must remain visible and sharp in the frame. (In a few frames, don't be afraid to go near the edges — the code automatically filters out incomplete detections, so it's safe and encouraged for good coverage.)
- Keep the board tilted!
  - In most frames, keep the board tilted at roughly 45 degrees to the lens, ideally between 30 and 70 degrees.
  - The least useful frames are those where the board is far away and parallel to the lens.
Other tips:
- Move the calibration board slowly; motion blur reduces the quality of the calibration.
- Cover all areas of the image, especially corners and edges where distortion is strongest.
- Ensure the board is well-lit and clearly readable. If you use additional lights, position them to avoid reflections or glare on the board surface.
- A monitor can be used to display the board instead of a printed one, but verify that no aspect ratio scaling is applied — the squares must be truly square on screen.
Step 1 generates a distortion plot showing the estimated lens distortion model:
It also generates a coverage plot showing which areas of the image were covered during calibration. Aim for high coverage (green = covered, red = missing) to get accurate distortion estimates, especially at the edges:
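Coverage can be quantified by binning detected corner positions into a grid of cells; a sketch of the idea (not the project's actual implementation, grid size chosen arbitrarily):

```python
import numpy as np

def coverage_fraction(corners, width, height, grid=(8, 8)):
    """Fraction of grid cells containing at least one detected corner.

    corners: (N, 2) array of pixel coordinates (x, y).
    """
    cols, rows = grid
    cx = np.clip((corners[:, 0] / width * cols).astype(int), 0, cols - 1)
    cy = np.clip((corners[:, 1] / height * rows).astype(int), 0, rows - 1)
    covered = np.zeros((rows, cols), dtype=bool)
    covered[cy, cx] = True
    return covered.mean()

rng = np.random.default_rng(0)
# Corners clustered in the image center leave edge cells uncovered...
centered = rng.uniform([800, 400], [1120, 680], size=(500, 2))
print(coverage_fraction(centered, 1920, 1080))  # low
# ...while corners spread over the whole frame cover nearly every cell.
spread = rng.uniform([0, 0], [1920, 1080], size=(500, 2))
print(coverage_fraction(spread, 1920, 1080))    # close to 1.0
```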
Remove lens distortion from videos or image sequences using the calibration from Step 1.
- Input: Videos or image folders
- Output: Undistorted videos (.mp4)
```sh
uv run python step2_undistortion.py
```

Place inputs in `input/step2_undistortion/`:

- Videos: `camXX_*.mp4`, `camXX_*.mov`, etc.
- Image folders: `camXX_folder/` containing PNG/JPG images
The script automatically:
- Detects the best encoder for your platform (VideoToolbox on macOS, NVENC on NVIDIA, CPU fallback)
- Skips cameras with zero distortion coefficients
- Scales intrinsics if input resolution differs from calibration resolution
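The intrinsics-scaling step works because the camera matrix entries are expressed in pixels, while the distortion coefficients apply to normalized coordinates and are therefore resolution-independent. A sketch, assuming a standard 3x3 camera matrix K and the same aspect ratio at both resolutions:

```python
import numpy as np

def scale_intrinsics(K, calib_size, input_size):
    """Rescale a 3x3 camera matrix from calibration to input resolution.

    Distortion coefficients need no change: they act on normalized
    coordinates, which do not depend on the image resolution.
    """
    sx = input_size[0] / calib_size[0]
    sy = input_size[1] / calib_size[1]
    S = np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)
    return S @ K

K = np.array([[1450.0, 0.0, 960.0],
              [0.0, 1450.0, 540.0],
              [0.0, 0.0, 1.0]])
K2 = scale_intrinsics(K, (1920, 1080), (3840, 2160))
print(K2[0, 0], K2[1, 1], K2[0, 2], K2[1, 2])  # 2900.0 2900.0 1920.0 1080.0
```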
```
DistortionLab/
├── input/
│   ├── step1_intrinsics/        # Calibration videos or image folders
│   │   ├── cam01_intrinsics.mp4
│   │   ├── cam02_intrinsics.mp4
│   │   └── cam03_images/
│   │       ├── frame_001.png
│   │       └── frame_002.png
│   └── step2_undistortion/      # Videos/images to undistort
│       ├── cam01_take1.mp4
│       ├── cam01_take2.mp4
│       ├── cam01_take3.mp4
│       └── cam02_images/
│           ├── frame_001.png
│           └── frame_002.png
├── output/
│   ├── step1_intrinsics/        # Calibration results
│   │   ├── cam01_calibration/
│   │   │   ├── calibration.yaml
│   │   │   ├── distortion_plot.png
│   │   │   └── coverage.png
│   │   ├── cam02_calibration/
│   │   │   └── ...
│   │   └── all_cams.py
│   └── step2_undistortion/      # Undistorted outputs
│       ├── cam01_take1.mp4
│       ├── cam01_take2.mp4
│       ├── cam01_take3.mp4
│       ├── cam02_images.mp4
│       └── all_cams_undistorted.py
├── step1_intrinsics.py
├── step2_undistortion.py
└── requirements.txt
```
Both steps generate a Python file with camera settings that can be imported directly:

```python
from output.step1_intrinsics.all_cams import cam01, dict_sdk

print(cam01.focal_length_x)
print(cam01.distortion_coeffs)
```
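The generated `all_cams.py` presumably defines one intrinsics object per camera plus a combined lookup; a hypothetical sketch of its shape (the `focal_length_x` and `distortion_coeffs` attribute names come from the usage example above, everything else is illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class CameraIntrinsics:
    # Hypothetical structure -- inspect the generated file for the real one.
    focal_length_x: float
    focal_length_y: float
    principal_point_x: float
    principal_point_y: float
    distortion_coeffs: list = field(default_factory=list)

cam01 = CameraIntrinsics(
    focal_length_x=1450.0, focal_length_y=1450.0,
    principal_point_x=960.0, principal_point_y=540.0,
    distortion_coeffs=[-0.21, 0.05, 0.0, 0.0, 0.01],
)
dict_sdk = {"cam01": cam01}  # name taken from the import example above

print(cam01.focal_length_x)  # 1450.0
```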


