This repository provides padded versions of OpenCV's warpAffine() and warpPerspective() functions.
A completed Python implementation is also available here.
Read my Stack Overflow answer which inspired this repository.
When OpenCV warps an image, any pixels that land outside the bounds of dsize are excluded from the resulting warped image. Passing a larger dsize can retain pixels that map to larger coordinates, but since dsize only extends in the positive direction, pixels that map to negative coordinates are still lost.
The solution requires three steps:
- Calculate the warped pixel locations manually
- Add a translation to the transformation, by however many pixels are necessary, to shift all warped pixel locations to non-negative coordinates
- Pad the destination image to account for the shift and add padding to the edge of the warp
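The three steps above can be sketched with NumPy alone. This is a minimal illustration, not the repository's implementation: the helper name padded_warp_params and the exact padding policy (covering both the warped extent and the original image size) are assumptions for the sketch. The returned matrix and size would then be passed to cv2.warpPerspective().

```python
import numpy as np

def padded_warp_params(H, src_w, src_h):
    """Hypothetical helper: given a 3x3 homography H and the source image
    size, return a shifted homography and a destination size that keep
    every warped pixel in view."""
    # Step 1: warp the four source corners manually (homogeneous coordinates)
    corners = np.array([[0, 0, 1],
                        [src_w, 0, 1],
                        [src_w, src_h, 1],
                        [0, src_h, 1]], dtype=float).T
    warped = H @ corners
    warped /= warped[2]  # normalize homogeneous coordinates
    min_x, min_y = warped[0].min(), warped[1].min()
    max_x, max_y = warped[0].max(), warped[1].max()

    # Step 2: translate so all warped pixel locations become non-negative
    tx = -min(min_x, 0.0)
    ty = -min(min_y, 0.0)
    T = np.array([[1, 0, tx],
                  [0, 1, ty],
                  [0, 0, 1]], dtype=float)

    # Step 3: pad the destination size to cover the shifted warped extent
    # (here also covering the original image area, an assumed policy)
    dst_w = int(np.ceil(max(max_x, src_w) + tx))
    dst_h = int(np.ceil(max(max_y, src_h) + ty))
    return T @ H, (dst_w, dst_h)
```

For example, a homography that shifts the image by (-50, -30) would normally lose a 50x30 band of pixels; the helper above compensates by translating the transform by (+50, +30) and growing dsize accordingly.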
Both warpAffinePadded() and warpPerspectivePadded() complete this task with minimal overhead, requiring no input beyond what OpenCV's warpAffine() and warpPerspective() already take.
Please feel free to submit any suggestions you have or bugs you find via a GitHub issue.
The images and ground-truth homographies provided in test/ are from Oxford's Visual Geometry Group.