This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

I have one question about densepose result! #74

Closed
yagee97 opened this issue Jul 20, 2018 · 21 comments

Comments

@yagee97

yagee97 commented Jul 20, 2018

I succeeded in running the DensePose test on video by following https://github.com/trrahul/densepose-video.
While testing, I came up with two questions:

1. When I detect an object using DensePose, where are the keypoints' coordinates stored?
(In which variable?)
For example, is it get_keypoints() in keypoints.py?
2. What information can I extract from the DensePose output?

Thank you! I would appreciate your detailed opinion.

@lushihan

Same question here

@fire17

fire17 commented Jul 20, 2018

me 3

@yagee97
Author

yagee97 commented Jul 22, 2018

@lushihan Where? Sorry, I don't follow.

@lushihan

@yagee97 Oh, I meant that I have the same questions as you.

@yagee97
Author

yagee97 commented Jul 23, 2018

@lushihan Oh, I see now. I'd like to get these questions answered too.

@lushihan

Still no response from the authors :(

@vkhalidov
Contributor

@yagee97 To get keypoints, you need to train the model with the keypoints head. Please have a look at #34, #39, and #48.

@yagee97
Author

yagee97 commented Jul 26, 2018

Thank you for your reply!
I have one more question.

What information can I use from the DensePose output?
I want to get coordinates such as All_Coords in vis.py,
because I'm going to use a person's 2D/3D coordinates from the DensePose result.

How do I get the 2D/3D coordinates, and where?

Thank you! :) Have a good day!

@vkhalidov
Contributor

The output of the DensePose head is generated here. You can see that for every detected person bounding box of size (H, W), you get an output of size (3, H, W). The first channel contains the part index; the other 2 channels contain the regressed inner coordinate values U and V for the corresponding part. Thus 2D image coordinates are obtained from the bounding box coordinates plus the pixel offset within the bounding box. To understand how to map the estimated IUV values onto the 3D SMPL model, please have a look at the DensePose-COCO-on-SMPL.ipynb notebook.
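A minimal sketch of that mapping, assuming you already have the (3, H, W) IUV array and the box coordinates (the function name and the toy data below are illustrative, not part of the DensePose codebase):

```python
import numpy as np

def part_pixels_to_image_coords(iuv, box_xywh, part_id):
    """Map pixels of one body part from box-local to image coordinates.

    iuv      : (3, H, W) array -- channel 0 = part index I,
               channels 1-2 = regressed inner coordinates U, V.
    box_xywh : (x, y, w, h) of the detected person box in the image.
    part_id  : index of the DensePose body part to extract.
    """
    x0, y0, _, _ = box_xywh
    I = iuv[0]
    ys, xs = np.nonzero(I == part_id)              # box-local pixel offsets
    img_xy = np.stack([xs + x0, ys + y0], axis=1)  # add box origin -> image coords
    uv = iuv[1:, ys, xs].T                         # (N, 2) inner coordinates U, V
    return img_xy, uv

# Toy example: a 4x4 box whose top-left corner sits at image pixel (10, 20)
iuv = np.zeros((3, 4, 4))
iuv[0, 1, 2] = 3                  # one pixel labelled as part 3
iuv[1:, 1, 2] = [0.25, 0.75]      # its regressed U, V values
xy, uv = part_pixels_to_image_coords(iuv, (10, 20, 4, 4), 3)
# xy -> [[12, 21]], uv -> [[0.25, 0.75]]
```

This is exactly the "bounding box coordinates + pixel offset" rule from the comment above: the box origin is added to the row/column indices of each labelled pixel.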

@yagee97
Author

yagee97 commented Jul 27, 2018

Oh, thank you very much! That solved my problem. So let me ask one last question.
I want to store the im_detect_body_uv result,
so I tried printing its output, but the printed result is [0, 0, 0, 0, ...]!

How can I print or store the results separately?
And when visualizing DensePose output, are there rules for how objects are colored?

Thank you very much for your kindness! :)

@vkhalidov
Contributor

There are many ways to store the results, for example pickle, numpy, or json.

For visualization, I suggest you check the visualization and texture transfer notebooks.
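As a concrete sketch of the pickle option (the dictionary layout here is an illustrative assumption, not the format DensePose itself uses):

```python
import pickle
import numpy as np

# Hypothetical per-image result: one (3, H, W) IUV array per detected box
results = {
    "boxes": np.array([[10.0, 20.0, 64.0, 128.0]]),   # x, y, w, h
    "iuv": [np.zeros((3, 128, 64), dtype=np.float32)],
}

# Write the results to disk...
with open("densepose_output.pkl", "wb") as f:
    pickle.dump(results, f)

# ...and read them back later for further processing
with open("densepose_output.pkl", "rb") as f:
    loaded = pickle.load(f)
```

Printing a large array shows mostly zeros because most pixels fall outside any body part; saving the full array and indexing it per part (as above) keeps all of the information.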

@anweiwei

I want to use my own image to generate I, U, V and visualize them on the SMPL model. I got the IUV output of shape [3, H, W], where output[0], output[1], and output[2] correspond to I, U, and V respectively. However, when I look at DensePose-COCO-on-SMPL.ipynb, in demo_dp_single_ann.pkl the I, U, V are vectors (length 125). So my question is: how can I use the [3, H, W] output to do the visualization, or can you provide the code that generates
demo_dp_single_ann.pkl?

@yagee97 yagee97 closed this as completed Jul 31, 2018
@ingramator

ingramator commented Aug 27, 2018

Same question as anweiwei: how do we generate that demo_dp_single_ann.pkl file, or do the visualization, given the IUV output for an arbitrary image?

@jaggernaut007

jaggernaut007 commented Sep 23, 2018

@ingramator @anweiwei Did you manage to get XYZ from the IUV output? Please share your code; maybe we can collaborate and find a fix. I have the IUV output from the model but can't make sense of it.

@ingramator

@jaggernaut007 I have this working now. Are you still interested in seeing it?

@kalyo-zjl

@ingramator Hi, how did you get it working? The IUV output from infer_simple.py doesn't seem to fit well when I map it to the SMPL model. Could you please share the script?

@ingramator

@kalyo-zjl @jaggernaut007 Check pull request #99; it provides an excellent sample notebook that shows how it's done! At this stage I am trying to work backwards: for instance, how do I map a specific vertex on the SMPL model back to the RGB input image? Does anyone have any ideas?

@kalyo-zjl

@ingramator Thank you!

@vkhalidov
Contributor

@ingramator This is not straightforward. What you're after is 3D reconstruction based on 2D manifold coordinates. This can be done through reprojection error minimization for the visible parts. You could look into bundle adjustment; ceres from Google can be a good starting point.
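A toy sketch of the reprojection-error idea, reduced to its simplest case: aligning projected model points to DensePose-derived image points under a scale + translation fit. This is a stand-in for real bundle adjustment (e.g. with ceres); the function and variable names are illustrative.

```python
import numpy as np

def fit_scale_translation(model_xy, image_xy):
    """Least-squares fit of image_xy ~= s * model_xy + t over visible points.

    Minimizing the squared reprojection error for this simple model has a
    closed-form solution: center both point sets, solve for the scale, then
    recover the translation from the means.
    """
    mc = model_xy - model_xy.mean(axis=0)
    ic = image_xy - image_xy.mean(axis=0)
    s = (mc * ic).sum() / (mc * mc).sum()                # optimal scale
    t = image_xy.mean(axis=0) - s * model_xy.mean(axis=0)  # optimal translation
    return s, t

# Synthetic check with a known ground truth: s = 2, t = (5, 7)
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
image = 2.0 * model + np.array([5.0, 7.0])
s, t = fit_scale_translation(model, image)
```

A real pipeline would replace this with a full camera model (rotation, perspective projection) and iterate with a nonlinear solver, which is where bundle adjustment libraries come in.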

@kekedan

kekedan commented Dec 21, 2018

I have the same problem. Does anyone have any ideas?

@wine3603

Hi guys, I am following the notebook from https://github.com/facebookresearch/DensePose/pull/99,
but I cannot show the points on the SMPL model; the points on the picked person always have shape (0, 3) with pick_idx=1.
Does it work in your cases?


10 participants