
Questions concerning the depth post processing of the original Microsoft Driver #134

Closed
DamienLefloch opened this issue Jan 20, 2015 · 2 comments

@DamienLefloch

Hello all,

For my research, I am currently working with the Kinect time-of-flight camera, and I have a question I need answered.

I already tried to contact Joshua Blake, and he suggested that I ask here on GitHub since he was not 100% sure about it.

I would like to know if any of you have more information about what kind of processing filters are applied in the original driver to improve the raw depth data.
I read in one of the logs:
"first working version of ir/depth decoding; several post processing steps like depth disambiguation, bilateral filtering, edge-aware filtering, implemented in the official SDK are missing; the implemented CPU decoding runs at 10Hz or less;"

So I guess that a lib contributor wrote this and may have more information.

Joshua Blake answered me about this log:
"Oh, those notes were written by one of the other contributors. You could post a question about it addressing who wrote that. Others did a detailed analysis of the Microsoft GPU shader implementation so they might know more."

It would be a great help for me to have this information: whether a bilateral filter is really applied to smooth the data, and what exactly "edge-aware filtering" means (bilateral filtering is usually known as an edge-aware filter, or is it just the mixed-pixel removal filters?). I also guess that the Microsoft people do some multi-path detection and masking, but this does not really change the depth quality.

Thanks in advance for your time.

Damien

@christiankerl
Contributor

Hi Damien,

the current depth processing code in Cpu/OpenGL/OpenCLDepthPacketProcessor does the same things as the shader shipped with the K4W2 Preview SDK (this might have changed in the meantime). The bilateral filter is applied to the complex-valued images before computing the amplitude/phase (depth); it is only aware of intensity edges in these images. The "edge-aware" filter basically tries to filter out the flying pixels at object boundaries by calculating some statistics in a local neighborhood. Both filters can be disabled in libfreenect2.
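Neither filter is spelled out in the thread, so here is an illustrative sketch (not libfreenect2's actual code) of the two ideas described above: a bilateral-style weighting of the complex-valued ToF measurements, with the range term driven by the intensity (amplitude) difference, followed by a flying-pixel test based on local depth spread. All values, thresholds, and function names here are made up for illustration.

```python
import cmath
import math

# Toy 1D "image" of complex ToF measurements: magnitude plays the role of
# amplitude (intensity), argument the role of phase (proportional to depth).
# Hypothetical values: a flat near surface, a low-amplitude boundary pixel,
# and a flat far surface.
samples = [cmath.rect(1.0, 0.50), cmath.rect(1.0, 0.52), cmath.rect(0.2, 2.00),
           cmath.rect(1.0, 2.10), cmath.rect(1.0, 2.12)]

def bilateral_complex(img, sigma_s=1.0, sigma_r=0.5, radius=1):
    """Bilateral-style smoothing of complex samples: a spatial Gaussian weight
    times a range weight on the amplitude difference, so intensity edges are
    preserved while noise in flat regions is averaged out."""
    out = []
    for i, c in enumerate(img):
        acc, wsum = 0j, 0.0
        for j in range(max(0, i - radius), min(len(img), i + radius + 1)):
            ds = (i - j) ** 2                    # spatial distance term
            dr = (abs(img[j]) - abs(c)) ** 2     # intensity (amplitude) term
            w = math.exp(-ds / (2 * sigma_s**2) - dr / (2 * sigma_r**2))
            acc += w * img[j]
            wsum += w
        out.append(acc / wsum)
    return out

def flying_pixel_mask(depth, radius=1, max_spread=0.5):
    """Edge-aware flying-pixel test: mark a pixel invalid when the depth
    spread in its local neighborhood is too large, i.e. it straddles an
    object boundary."""
    mask = []
    for i in range(len(depth)):
        window = depth[max(0, i - radius):min(len(depth), i + radius + 1)]
        mask.append(max(window) - min(window) <= max_spread)
    return mask

filtered = bilateral_complex(samples)
depth = [cmath.phase(c) for c in filtered]  # phase stands in for depth here
valid = flying_pixel_mask(depth)
```

In this toy input, the pixels adjacent to the depth discontinuity get flagged as invalid, while the flat regions survive; that mirrors the effect Christian describes of filtering flying pixels at object boundaries rather than smoothing across them.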

@DamienLefloch
Author

Hello Christian,

Thank you for your fast answer.

Ok, that is now clear to me. I knew, thanks to the shader, that a joint bilateral filter was applied using intensity and that a flying-pixel removal was also applied later on the raw depth. But I thought that additional filters were applied afterwards on the raw depth itself.

Maybe this changes with the commercial Kinect ToF, but since I use the prototype, I do not really care.

Thanks again
