
How to get the depth picture #5

Open
sunjc0306 opened this issue Jan 29, 2021 · 5 comments

Comments

@sunjc0306

Hi! Thank you very much for your excellent work. I've got great reconstructions. However, I have a question: how do I get the depth map corresponding to the RGB picture?

@LizhenWangT
Owner

You should first calibrate your depth camera to get its intrinsic and extrinsic parameters. Then, please refer to the Registration example in pylibfreenect2 (https://github.com/r9y9/pylibfreenect2).
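To illustrate what registration does under the hood (this is a hedged NumPy sketch, not the author's code and not pylibfreenect2's optimized implementation; the function name and the parameters `K_d`, `K_c`, `R`, `t` are hypothetical), each depth pixel is back-projected to 3D with the depth intrinsics, moved into the color camera frame with the extrinsics, and re-projected with the color intrinsics:

```python
import numpy as np

def register_depth_to_color(depth, K_d, K_c, R, t):
    """Map each depth pixel into the color image frame.

    depth : (h, w) depth map (e.g. meters)
    K_d, K_c : 3x3 intrinsic matrices of the depth and color cameras
    R, t : rotation (3x3) and translation (3,) from depth to color frame

    Simplified sketch: assumes both images share one resolution and
    does no occlusion handling (colliding pixels simply overwrite).
    """
    h, w = depth.shape
    registered = np.zeros((h, w), dtype=depth.dtype)
    # Back-project every depth pixel to a 3D point in the depth camera frame.
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (us - K_d[0, 2]) * z / K_d[0, 0]
    y = (vs - K_d[1, 2]) * z / K_d[1, 1]
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Transform into the color camera frame, then project with K_c.
    pts_c = pts @ R.T + t
    valid = pts_c[:, 2] > 0
    uc = np.round(pts_c[valid, 0] / pts_c[valid, 2] * K_c[0, 0] + K_c[0, 2]).astype(int)
    vc = np.round(pts_c[valid, 1] / pts_c[valid, 2] * K_c[1, 1] + K_c[1, 2]).astype(int)
    zc = pts_c[valid, 2]
    inside = (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h)
    registered[vc[inside], uc[inside]] = zc[inside]
    return registered
```

With identical intrinsics and identity extrinsics, every pixel maps back to itself, which is a quick sanity check after calibration.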

@sunjc0306
Author

Thanks. I read your paper carefully. However, I didn't find the three discriminators (Fdb, Fcb, Fdf) in the code. Are there detailed descriptions of these? Also, I would like to ask whether there is a plan to open-source the training code.

@Luciano07

Luciano07 commented Feb 26, 2021

Hi,
I'm using pylibfreenect2 to capture RGB and depth images from a Kinect v2, which contain both background and body information. I tested NormalGAN with these images, but it fails in the erosion function.
I looked in datasets/testdata/ and those images contain only body pixels, so it seems I have to apply a body mask to use NormalGAN correctly.
Which method do you use to segment the body in the color and depth images?

Thanks for the amazing work!

@LizhenWangT
Owner

> Which method do you use to segment the body in the color and depth images?

Thank you! Actually, in our live demos we directly cut out the body using two thresholds on the depth map. This may lead to bad results around the feet (the depth map also performs poorly in that area). If you do not care about that area, this is sufficient.
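The two-threshold cut described above can be sketched in a few lines of NumPy (the threshold values and function name here are illustrative, not taken from the NormalGAN code; tune the bounds to where the subject stands relative to the camera):

```python
import numpy as np

def segment_body_by_depth(depth, near=500, far=2000):
    """Keep only pixels whose depth falls between two thresholds.

    depth : depth map in millimeters
    near, far : illustrative bounds; pixels outside (near, far),
                including zero-depth background, are zeroed out.
    Returns the masked depth map and the boolean body mask.
    """
    mask = (depth > near) & (depth < far)
    body_depth = np.where(mask, depth, 0)
    return body_depth, mask

# The same mask can also be applied to the registered color image,
# e.g. color_masked = color * mask[..., None]
```

This only works when the subject is the closest object in the chosen depth band, which is why it struggles near the feet, where body and floor depths overlap.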

@LizhenWangT
Owner

> I didn't find the three discriminators (Fdb, Fcb, Fdf) in the code. Are there detailed descriptions of these? Also, is there a plan to open-source the training code?

Since we cannot distribute our dataset for commercial reasons, and generating a similar dataset would be difficult (it requires hundreds of 3D human models), we have no plans to release the training code.
