Z CAM WonderStitch FAQ
Z CAM WonderStitch is the first commercially available optical-flow-based stitching software. It works closely with Z CAM hardware to provide an automatic, seamless stitching experience. Below is the FAQ for WonderStitch; "WS" is used as shorthand throughout.
**1. Is WS optical flow based?**
Yes, WS is optical flow based. The new release will offer three stitching modes: Optical Flow, Fast, and Dirty Fast. Optical Flow computes whole-picture optical flow, which is slow but gives the highest quality. Fast performs seam-based optical flow stitching, applying flow only to part of the overlap area. Dirty Fast uses a traditional stitching method (multiband) and is intended for preview only.
**2. What's the difference between WS and 3rd-party optical flow?**
Optical flow stitching has to be designed closely around the camera itself, so if a 3rd-party tool claims to do optical flow stitching with support for all cameras, it is most likely seam stitching, which corresponds to our Fast mode.
**3. What are the advantages of WS versus traditional stitching methods?**
Optical flow isn't magic; it has both advantages and disadvantages. Basically, optical flow handles scenes containing both very close and very distant objects better than traditional methods. But optical flow in WS is currently global, so you can't adjust it locally when you notice artifacts. That's why we introduced the Fast mode, which applies optical flow only around the seam. With Fast mode, if you keep movement away from the seam, stitching can work very well even when shooting objects very close to the seam, without artifacts. The trade-off is exposure and camera setup: full-pixel optical flow averages exposure differences over all pixels, so it has a larger exposure tolerance. In short, it's a new tool for users to try for better content.
Because of the whole-pixel calculation, WS also handles rolling shutter much better, and the image won't break along the seams.
There is also a balanced method for mono that applies optical flow within a limited area: this is the Fast mode in WS. Its advantage is that all optical flow artifacts are confined to the stitching area, so it works well for controlled scenes. When the whole camera is moving, or close objects cross the seam, you will still see problems in this mode. Apart from those artifacts, full optical flow handles such moving scenes better.
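The difference between full-frame and seam-limited stitching can be pictured as a blend-weight profile across the overlap region. Below is a minimal numpy sketch of that idea (the overlap width, seam position, and band width are made-up illustration numbers, not WS internals):

```python
import numpy as np

def blend_weights(width, seam_x, band=None):
    """Per-column blend weight for camera A across an overlap `width` pixels wide.

    band=None -> full-frame style: the weight ramps across the whole overlap,
                 so every column is a mix and exposure differences are
                 averaged over many pixels (larger exposure tolerance).
    band=N    -> seam-limited ("Fast"-style): columns more than N pixels from
                 the seam keep one camera's pixels verbatim (weight 1 or 0);
                 only the narrow band around the seam is interpolated.
    """
    x = np.arange(width)
    if band is None:
        return 1.0 - x / (width - 1)                    # linear ramp 1 -> 0
    return np.clip((seam_x + band - x) / (2 * band), 0.0, 1.0)

full = blend_weights(8, seam_x=4)           # every column is blended
fast = blend_weights(8, seam_x=4, band=1)   # only columns 3..5 are blended
# With band=1, columns far from the seam are untouched originals, which is
# why Fast mode confines artifacts to the stitching area.
```

This also illustrates why the FAQ suggests disabling exposure tolerance in Fast mode: with a narrow band, any brightness mismatch is squeezed into a few columns instead of being spread across the frame.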
**4. Why are there artifacts?**
There are several reasons for artifacts in WS compared to traditional stitching methods. In short, optical flow is a way of interpolating pixels: each output pixel is regenerated rather than copied from the originals. WS currently uses Facebook's optical flow for stitching, which is a state-of-the-art algorithm, and benchmarks show it is among the best optical flow methods. Even so, its accuracy isn't high enough to compute every pixel correctly. We have spent a lot of effort polishing the firmware and software to improve it. Some known limitations:
a. Over-exposed areas. These mislead the algorithm when it searches for the matching pixel in the other camera's image. When using optical-flow-based stitching, try not to have moving objects crossing over-exposed areas.
b. Very thin line pairs. Because of the accuracy limits of optical flow, very thin lines are difficult to stitch: it is hard to judge which pixel is the one we want. Since Z CAM 360 cameras use fisheye lenses, such patterns are most challenging at the zenith and nadir. This affects not only the pixels there but also nearby objects, because the error is averaged over the whole image.
c. Mirrors or reflective areas. These create confusion between cameras: the two cameras see different things in the mirror, so the algorithm doesn't know how to "stitch" them.
d. Very thin, close objects between cameras. These create an overlap problem. Make a circle with your hand and look through it with your left and right eye in turn: each eye sees different content, and the views are hard to stitch because they share no similarities. This contributes to the "halo" you see. Solving it completely would require infinitely many cameras; theoretically the more cameras the better, but cameras have physical size, so we are still searching for the best practical solution.
e. Moving objects crossing complicated areas. The edges of moving objects may not be distinguishable from the background, which confuses the algorithm. In theory, green screens or solid-color backgrounds are preferred.
f. Flares. When the camera points at the sun at certain angles, the incoming ray angle washes out the whole lens with diffuse reflections, which creates problems for optical flow.
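The point above that "each pixel is regenerated instead of using the original ones" can be illustrated with a toy 1-D example: given a per-pixel flow (displacement) estimate, a flow-based stitcher samples both cameras part-way along the flow and cross-fades the results, so the output is interpolated rather than copied. This is a pedagogical sketch only; the signals and the 1-pixel flow are invented, and WS's real pipeline is 2-D and far more involved:

```python
def lerp(signal, pos):
    """Sample a 1-D signal at a fractional position, clamped to its ends."""
    pos = max(0.0, min(pos, len(signal) - 1.0))
    i = int(pos)
    frac = pos - i
    j = min(i + 1, len(signal) - 1)
    return signal[i] * (1.0 - frac) + signal[j] * frac

def flow_stitch(cam_a, cam_b, flow, alpha):
    """Synthesize the blended view at cross-fade factor `alpha` (0=A, 1=B).

    `flow[x]` is the estimated displacement from A's pixel x to its match
    in B; each camera is warped part-way along it before blending, so every
    output pixel is a freshly interpolated value.
    """
    out = []
    for x in range(len(cam_a)):
        a = lerp(cam_a, x + alpha * flow[x])          # warp A forward
        b = lerp(cam_b, x - (1.0 - alpha) * flow[x])  # warp B backward
        out.append((1.0 - alpha) * a + alpha * b)
    return out

cam_a = [10, 20, 30, 40, 50]
cam_b = [20, 30, 40, 50, 60]   # the same ramp, shifted by one pixel
flow = [1.0] * 5               # a correct flow estimate for this shift
mid = flow_stitch(cam_a, cam_b, flow, alpha=0.5)
# With the correct flow, the two warped samples agree in the interior and
# blend cleanly; with a wrong flow they disagree, and the cross-fade
# produces the ghosting/halo artifacts described above.
```

This is also why over-exposed, reflective, or featureless regions are problematic: the flow estimate there is unreliable, and every misplaced flow vector becomes a regenerated (wrong) pixel.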
**5. Is there a plan for GPU support?**
The new version ships with GPU support.
**6. What's the speed of WS with GPU?**
With a GTX 1080, the current version takes 1 s per frame for full optical flow on the S1. The new version will be faster: with a GTX 1080 it can reach 2 fps for the S1 with full optical flow stitching.
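As a back-of-the-envelope check of what those throughputs mean for render time (the 1-minute clip length and 30 fps capture rate are assumed example values, not WS specifications):

```python
# Rough render-time estimate from the stated stitching throughputs.
clip_seconds = 60          # example: a 1-minute clip
source_fps = 30            # assumed capture frame rate
frames = clip_seconds * source_fps   # 1800 frames to stitch

old = frames / 1.0         # current version: 1 frame per second
new = frames / 2.0         # new version: 2 frames per second

print(old / 60, new / 60)  # -> 30.0 15.0 (minutes of processing)
```

So the 1 fps to 2 fps improvement halves the render time: about 30 minutes versus 15 minutes for this example clip.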
**7. What's Fast mode in WS?**
It's seam stitching. In the first version we apply optical flow within a fixed area. Because less area is covered by optical flow, it is faster than full-pixel optical flow. The other benefit is that areas outside the optical flow region are unaffected by stitching, so there are no artifacts there. Its weakness is brightness adjustment: full-pixel optical flow adjusts brightness better. So if you want to use Fast mode for stitching, we suggest setting white balance tolerance and exposure tolerance to none.
**8. What's Express mode in WS?**
It's multiband stitching, and it comes with WS for free. A coming release will add NVIDIA VRWorks integration. In many cases this mode works great; if you have used AVP for a long time, you can usually achieve a good result without working on templates.
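Multiband stitching blends the images' frequency bands separately: low frequencies are merged over a wide transition (hiding exposure differences), while fine detail keeps a sharp transition at the seam (avoiding ghosting). A minimal two-level 1-D numpy sketch of the idea, not WS's actual implementation:

```python
import numpy as np

def down(x):
    """2:1 box downsample (the low-frequency band)."""
    return x.reshape(-1, 2).mean(axis=1)

def up(x):
    """Nearest-neighbour 1:2 upsample back to full resolution."""
    return np.repeat(x, 2)

def multiband_blend(a, b, mask):
    """Blend 1-D signals `a` and `b` with a 0/1 mask (1 = take `a`).

    The detail (Laplacian) layer is blended with the hard mask, so fine
    structure switches cleanly at the seam; the low-frequency layer is
    blended with a downsampled (effectively smoothed) mask, so brightness
    transitions gradually across the seam.
    """
    a_low, b_low = down(a), down(b)
    a_hi, b_hi = a - up(a_low), b - up(b_low)   # detail layers
    m_low = down(mask)                          # smoothed mask
    low = m_low * a_low + (1 - m_low) * b_low
    hi = mask * a_hi + (1 - mask) * b_hi
    return up(low) + hi

a = np.full(8, 2.0)                              # flat bright strip (camera A)
b = np.zeros(8)                                  # flat dark strip (camera B)
mask = np.array([1, 1, 1, 1, 1, 0, 0, 0], dtype=float)
out = multiband_blend(a, b, mask)
# The brightness step is softened over several samples around the seam
# instead of jumping abruptly from 2.0 to 0.0.
```

Real implementations (and presumably WS) use full Laplacian pyramids with many levels and Gaussian-smoothed masks; two levels are enough to show why multiband handles exposure mismatch gracefully while remaining a traditional, non-flow method.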
**9. What's scene-based optimization?**
We identified some causes of artifacts, so we started working on analysis of the scene itself. By analyzing the video clips, we may reduce some of those artifacts with new firmware support. You can try this feature to achieve better stitching results.