
Problem about SMPL refining loss. #74

Closed
SongYupei opened this issue Jun 15, 2022 · 7 comments
Labels
Discussion HPS Human-Pose-Shape

Comments

@SongYupei

Many thanks to the author for this work. I ran into a question while reading the paper and the code.
The Refining SMPL section of the paper explains that the SMPL fit can be iteratively optimized during inference. The loss function has two parts: the L1 difference between the rendered unclothed (body) normal map and the predicted clothed normal map, and the L1 difference between the SMPL silhouette mask and the mask derived from the original image. However, I could not find the corresponding implementation in the code.
What is the reason for this? Is the existing code implementation more efficient than the one described in the paper?

            # silhouette loss: compare the rendered SMPL mask against a mask
            # derived from the predicted clothed normal maps
            smpl_arr = torch.cat([T_mask_F, T_mask_B], dim=-1)[0]       # rendered SMPL masks (front|back)
            gt_arr = torch.cat(                                         # predicted clothed normal maps
                [in_tensor['normal_F'][0], in_tensor['normal_B'][0]],
                dim=2).permute(1, 2, 0)
            gt_arr = ((gt_arr + 1.0) * 0.5).to(device)                  # map [-1, 1] -> [0, 1]
            bg_color = torch.Tensor([0.5, 0.5,
                                     0.5]).unsqueeze(0).unsqueeze(0).to(device)
            gt_arr = ((gt_arr - bg_color).sum(dim=-1) != 0.0).float()   # binarize: non-background -> 1
            diff_S = torch.abs(smpl_arr - gt_arr)                       # per-pixel mask difference
            losses['silhouette']['value'] = diff_S.mean()
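For reference, the two refinement terms the paper describes (the normal-map L1 plus the silhouette L1) can be sketched as below. This is a minimal sketch for illustration only; the function and argument names are made up and do not come from the ICON repository:

```python
import torch

def refinement_losses(body_normal, cloth_normal, body_mask, cloth_mask):
    """Sketch of the two SMPL-refinement terms from the paper:
    an L1 difference between rendered body normals and predicted clothed
    normals (restricted to the overlapping silhouette), plus an L1
    silhouette difference. All names here are hypothetical:
    normals are (H, W, 3) tensors in [0, 1], masks are (H, W) in {0, 1}."""
    overlap = (body_mask * cloth_mask).unsqueeze(-1)   # compare normals only where both masks cover
    normal_diff = (torch.abs(body_normal - cloth_normal) * overlap).mean()
    silhouette_diff = torch.abs(body_mask - cloth_mask).mean()
    return normal_diff, silhouette_diff
```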
@YuliangXiu
Owner

The normal diff and silhouette diff are in L216-L234.

@SongYupei
Author

So, is this a revision error in the paper? Do you actually trust the predicted clothed normal maps and use them directly to optimize the SMPL fit?

@YuliangXiu
Owner

Yes, you can find the details in ICON's paper.

@SongYupei
Author

OK, I see. Would adding keypoint constraints be another good option? Although it may increase inference time, pose constraints could better optimize SMPL's pose parameters.

@SongYupei
Author

Maybe you could use OpenPose or related work as another module.

@YuliangXiu
Owner

Of course, given 2D keypoints (OpenPose, MediaPipe, AlphaPose) or even semantic parsing results, the refinement process would certainly improve further. If you are interested in adding a keypoint constraint to ICON, pull requests are welcome.
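A keypoint term like the one discussed above could be sketched roughly as follows. This is a hypothetical weak-perspective reprojection loss; none of these names or shapes come from the ICON codebase:

```python
import torch

def keypoint_loss(joints_3d, cam, keypoints_2d, conf):
    """Hypothetical 2D keypoint reprojection term: project SMPL joints
    with a weak-perspective camera and compare against detected keypoints,
    weighted by detection confidence.
    joints_3d: (J, 3), cam: (scale, tx, ty), keypoints_2d: (J, 2), conf: (J,)."""
    s, tx, ty = cam
    proj = s * (joints_3d[:, :2] + torch.stack([tx, ty]))  # project to image plane
    # confidence-weighted mean absolute reprojection error
    return (conf.unsqueeze(-1) * (proj - keypoints_2d).abs()).sum() / (conf.sum() + 1e-8)
```

Low-confidence detections contribute little to the loss, which keeps noisy OpenPose-style outputs from dragging the fit in the wrong direction.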

@YuliangXiu YuliangXiu added HPS Human-Pose-Shape Discussion labels Jun 17, 2022
@YuliangXiu
Owner

@SongYupei A new cloth-refinement module has been released. Use -loop_cloth 200 to refine ICON's reconstruction so that it matches the predicted clothing normal image.
[image: overlap]
