
Question about GT and training parameters #15

Closed
MooreManor opened this issue Mar 9, 2022 · 4 comments
MooreManor commented Mar 9, 2022

Hello Yujin,

Thank you for sharing the great work!

I'm confused about the generation of pseudo masks in ground truth and training.

  1. From this issue, some masks of samples in the GT are not quite accurate, and some are even completely black. How do you separate those out, and why are some of the segmented GT results unacceptable?
  2. I noticed that in your code, the textures have a shape like (faces.shape[0], faces.shape[1], texture_size, texture_size, texture_size, 3). What is the meaning of the three 'texture_size' dimensions? And would a bigger texture_size produce better rendered RGB images?

Thanks!

@TerenceCYJ
Owner

Hi.

  1. The mask here is the rendered silhouette of the estimated mesh, so if the estimated mesh is not accurate, neither is the mask.
  2. We use texture_size=1 for each mesh face. A more detailed per-face texture (texture_size > 1) might give better results, but I am not sure whether it helps in this case.
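For reference, the texture tensor layout discussed above follows the convention of neural-renderer-style differentiable renderers, where each face carries a small 3D grid of texture samples (hence texture_size appears three times). A minimal NumPy sketch, with illustrative sizes that are assumptions rather than the repo's actual values:

```python
import numpy as np

# Hypothetical mesh: batch of 1, 1538 triangular faces (typical of a MANO hand mesh).
batch_size, num_faces = 1, 1538
texture_size = 1  # one RGB sample per face, as used in the answer above

# Neural-renderer-style texture tensor: per face, a texture_size^3 grid of RGB values.
# With texture_size=1 this reduces to a single flat color per face.
textures = np.ones((batch_size, num_faces,
                    texture_size, texture_size, texture_size, 3),
                   dtype=np.float32)

print(textures.shape)  # → (1, 1538, 1, 1, 1, 3)
```

Increasing texture_size grows the per-face sample grid cubically, which allows color variation within a face at the cost of memory.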

@MooreManor
Author

MooreManor commented Mar 10, 2022

@TerenceCYJ
Thanks for your quick reply!

For Q1, why not use an off-the-shelf silhouette separator to preprocess the GT? Some samples in this issue look weird, and using the correct foreground image would provide good supervision for the hand shape and reduce the impact of the background.

For Q2, do you think a 1×1×1 texture_size is enough for generating the rendered RGB image?

@TerenceCYJ
Owner

Hi.

  1. Yes, I think if the off-the-shelf "silhouette separator" is accurate, it would be helpful, but I found it hard to find one. Also, the rendered silhouette is not necessarily inaccurate if the predicted 3D mesh is good (given the 2D keypoints as supervision).
  2. I think a 1×1×1 texture size for each mesh face is enough in our case.
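To make the mask-quality discussion above concrete, here is an illustrative sketch of how a rendered silhouette could be scored against a ground-truth mask with IoU. This is a hedged example of the general technique, not necessarily how the repo evaluates its masks; the function name and toy data are assumptions:

```python
import numpy as np

def silhouette_iou(pred_mask, gt_mask, thresh=0.5):
    """Illustrative IoU between a rendered silhouette and a GT mask.

    pred_mask: float array in [0, 1], e.g. from a differentiable renderer.
    gt_mask:   binary array of the same shape.
    """
    pred = pred_mask > thresh          # binarize the soft silhouette
    gt = gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

# Toy example: two slightly offset 4x4 squares on an 8x8 grid.
pred = np.zeros((8, 8)); pred[2:6, 2:6] = 1.0
gt = np.zeros((8, 8));   gt[3:7, 3:7] = 1.0
print(round(silhouette_iou(pred, gt), 3))  # → 0.391
```

A low IoU like this would flag the kind of inaccurate rendered masks mentioned in the question, including the all-black cases (empty prediction, IoU of 0 against a non-empty GT mask).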

@MooreManor
Author

MooreManor commented Mar 15, 2022

Thanks! It's all clear now.
