This project aims to develop and implement efficient algorithms for single disparity map inpainting in plain C.
- linearly addressed 8-bit grayscale image (unsigned char*)
- it is assumed that the pixels to be inpainted have the value 0
- all holes should be filled
- i.e. all masked pixels should be replaced by meaningful values
- e.g. according to the values of their closest boundaries
- no separate output buffer (in-place processing); a minimal interface sketch follows below
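For concreteness, the assumed memory layout and in-place interface could look like the following sketch (the names inpaint, get_pixel and set_pixel are mine, not fixed by the project):

```c
#include <stddef.h>

/* Sketch of the assumed interface: the image is a linearly addressed
 * 8-bit grayscale buffer, pixel (x, y) lives at img[y * width + x],
 * holes are marked by the value 0, and filling happens in place. */
#define HOLE 0

static inline unsigned char get_pixel(const unsigned char *img,
                                      size_t width, size_t x, size_t y)
{
    return img[y * width + x];
}

static inline void set_pixel(unsigned char *img,
                             size_t width, size_t x, size_t y,
                             unsigned char value)
{
    img[y * width + x] = value;
}

/* Fills all hole pixels (value 0) of a width x height image in place. */
void inpaint(unsigned char *img, size_t width, size_t height);
```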
- I first tried linear interpolation along horizontal and vertical scanlines, as suggested by Daniel Gurdan.
The horizontal interpolation alone already showed a major problem:
the edges of the input are noisy.
More concretely, they show a 1-pixel anti-aliased border.
This falsifies the start and end values of the hole line segments, which in turn leads to incorrectly interpolated stripes all over the place.
I also tried using different offsets beyond the initial start/end points, as well as different thresholds that decide whether a pixel gets overwritten or not. No big breakthrough there. The core of the horizontal pass is sketched below.
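For illustration, a horizontal pass that fills each run of hole pixels by linear interpolation between its two valid endpoints could look roughly like this (a sketch, not necessarily identical to the repo's fillLinear):

```c
#include <stddef.h>

#define HOLE 0

/* Sketch: fill horizontal runs of hole pixels (value 0) by linearly
 * interpolating between the valid pixels directly left and right of the run.
 * Runs touching the image border are skipped in this simplified version. */
static void fill_holes_horizontal(unsigned char *img, int width, int height)
{
    for (int y = 0; y < height; ++y) {
        unsigned char *row = img + (size_t)y * width;
        int x = 0;
        while (x < width) {
            if (row[x] != HOLE) { ++x; continue; }

            int start = x;                    /* first hole pixel of the run */
            while (x < width && row[x] == HOLE)
                ++x;
            int end = x;                      /* one past the last hole pixel */

            if (start == 0 || end == width)   /* run touches the border */
                continue;

            int left  = row[start - 1];       /* valid anchor values */
            int right = row[end];
            int gap   = end - start + 1;      /* distance between the anchors */
            for (int i = start; i < end; ++i) {
                int t = i - start + 1;        /* 1 .. gap-1 */
                row[i] = (unsigned char)(left + (right - left) * t / gap);
            }
        }
    }
}
```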
- I thought about a different approach and came up with layered flood filling.
The assumption was that measurement errors occur at consistent depth planes.
I.e. a hole that contains a bright foreground object could well be that very object itself.
Also note that, obviously, foreground overlays background (see the implementation).
This resulted in the following.
The problem was that this approach yields, as was to be expected, large constant areas, in contrast to the smooth interpolated regions we want (a rough sketch of the idea follows below). So, next try.
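Purely to illustrate why this produces constant patches (the actual layered implementation differs and is not reproduced here), a minimal flood fill that paints one connected hole region with a single seed value could look like this:

```c
#include <stdlib.h>

#define HOLE 0

/* Illustration only: paint the connected hole region containing (sx, sy)
 * with one constant value, e.g. the brightest disparity on its boundary.
 * Filling regions this way is exactly why the result consists of large
 * constant areas instead of smooth gradients. */
static void flood_fill_hole(unsigned char *img, int width, int height,
                            int sx, int sy, unsigned char fill)
{
    if (fill == HOLE || img[(size_t)sy * width + sx] != HOLE)
        return;

    /* Each hole pixel is filled exactly once before being pushed,
     * so a stack of width * height coordinate pairs is always enough. */
    int *stack = malloc(sizeof(int) * 2 * (size_t)width * height);
    if (!stack)
        return;
    int top = 0;

    img[(size_t)sy * width + sx] = fill;
    stack[top++] = sx;
    stack[top++] = sy;

    while (top > 0) {
        int y = stack[--top];
        int x = stack[--top];
        const int nx[4] = { x + 1, x - 1, x, x };
        const int ny[4] = { y, y, y + 1, y - 1 };
        for (int k = 0; k < 4; ++k) {
            if (nx[k] < 0 || ny[k] < 0 || nx[k] >= width || ny[k] >= height)
                continue;
            if (img[(size_t)ny[k] * width + nx[k]] != HOLE)
                continue;
            img[(size_t)ny[k] * width + nx[k]] = fill;
            stack[top++] = nx[k];
            stack[top++] = ny[k];
        }
    }
    free(stack);
}
```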
- I took a closer look at the input image and only then discovered the constant 1-pixel width of the anti-aliasing effect.
I thought that the vertical and horizontal scanlines could probably complement each other, which of course is not the case, but that wasn't clear to me at the time.
So, vertical interpolation implemented.
One gets the final image by merging the horizontal and vertical results.
I tried two versions, averaging (top) and maxing (bottom), the latter taking the brighter value for each pixel (a merge sketch follows below).
Better, but still the same problem.
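The merge step itself is simple; as a sketch (function names are mine), the two variants amount to:

```c
#include <stddef.h>

/* Merge two inpainting results of the same size, pixel by pixel.
 * avg_merge: arithmetic mean of both passes.
 * max_merge: take the brighter (i.e. closer / foreground) value. */
static void avg_merge(unsigned char *dst, const unsigned char *horizontal,
                      const unsigned char *vertical, size_t npixels)
{
    for (size_t i = 0; i < npixels; ++i)
        dst[i] = (unsigned char)(((unsigned)horizontal[i] + vertical[i]) / 2);
}

static void max_merge(unsigned char *dst, const unsigned char *horizontal,
                      const unsigned char *vertical, size_t npixels)
{
    for (size_t i = 0; i < npixels; ++i)
        dst[i] = horizontal[i] > vertical[i] ? horizontal[i] : vertical[i];
}
```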
- Ok, so to hell with those edges. Let's do something about them.
I implemented an edge sharpening method specifically for my scenario (sketched below).
It looks for pixels adjacent to holes and assigns them the brightest value in their neighbourhood, effectively eliminating the anti-aliasing. Nice.
While I first implemented only 4-way neighbourhoods, 8-way is necessary for the wanted results.
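A sketch of this idea (the repo's sharpenEdges may differ in details): every valid pixel that has a hole among its 8 neighbours gets the brightest value found in its 3x3 neighbourhood.

```c
#include <stdlib.h>
#include <string.h>

#define HOLE 0

/* Sketch of the edge-sharpening idea: every valid pixel that touches a hole
 * in its 8-neighbourhood is replaced by the brightest value in its 3x3
 * neighbourhood, which removes the 1-pixel anti-aliased border. */
static void sharpen_edges(unsigned char *img, int width, int height)
{
    size_t npixels = (size_t)width * height;
    unsigned char *src = malloc(npixels);
    if (!src)
        return;
    memcpy(src, img, npixels);              /* read from the unmodified copy */

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            unsigned char centre = src[(size_t)y * width + x];
            if (centre == HOLE)
                continue;                   /* holes themselves stay untouched */

            unsigned char brightest = centre;
            int touches_hole = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                        continue;
                    unsigned char v = src[(size_t)ny * width + nx];
                    if (v == HOLE)
                        touches_hole = 1;
                    else if (v > brightest)
                        brightest = v;
                }
            }
            if (touches_hole)
                img[(size_t)y * width + x] = brightest;
        }
    }
    free(src);
}
```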
- Back to the linear interpolation, but this time on the preprocessed input image.
Nice, but could be better. Looks like a pretty good case for...
- Median filters.
For now, I use the OpenCV implementation.
I thought this could be the most resource-consuming part; however, I found a paper introducing an O(1) (in kernel size) median filter, which looks promising.
Will implement it in C when I have some time.
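The constant-time approaches are histogram-based; purely to illustrate the median-from-histogram step (this naive version is not the O(1) algorithm), a brute-force 3x3 median for 8-bit images could look like this:

```c
#include <stdlib.h>
#include <string.h>

/* Illustration only: brute-force 3x3 median via a 256-bin histogram.
 * The constant-time algorithms keep such histograms and update them
 * incrementally instead of rebuilding them for every pixel, as done here. */
static void median3x3(unsigned char *img, int width, int height)
{
    size_t npixels = (size_t)width * height;
    unsigned char *src = malloc(npixels);
    if (!src)
        return;
    memcpy(src, img, npixels);

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int hist[256] = {0};
            int count = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                        continue;
                    ++hist[src[(size_t)ny * width + nx]];
                    ++count;
                }
            }
            /* Walk the histogram until half of the samples are covered. */
            int seen = 0, value = 0;
            while (value < 255 && seen + hist[value] <= count / 2)
                seen += hist[value++];
            img[(size_t)y * width + x] = (unsigned char)value;
        }
    }
    free(src);
}
```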
Here is the current final result:
- All tests were performed on a 320x240 greyscale image.
- All O-notations are given w.r.t. operations per pixel.
- All measurements were taken as the average over 10000 runs.
| Method | Asymptotic Runtime |
|---|---|
| sharpenEdges | O(1) |
| fillLinear | O(1) |
| medianBlur | O(1) |
| Pipeline | Median implementation | Runtime |
|---|---|---|
| sharpenEdges -> fillLinear -> median | OpenCV | 4.168 ms |
| sharpenEdges -> fillLinear -> median | ctmf | 12.56 ms |
Note that OpenCV makes use of hardware optimisations and/or OpenCL.
While we do have a constant-time (per pixel) inpainting algorithm, the constants involved can and must still be optimised.
Thanks to my bro for giving me something to think about ;).