
Interpretations of One Ablation #2

Open
GloryyrolG opened this issue Jan 27, 2024 · 2 comments

Comments

GloryyrolG commented Jan 27, 2024

Hi @tiangexiang et al.,

Thanks for your great, continued contributions to this field. May I ask about Sec. 4.6 and the first row of Fig. 8, where it states:

... by inheriting the two-stage parameterization but optimizing the rendering of the foreground human on the visible parts only [46] while maintaining the same optimization objectives.

[screenshot of the quoted passage]

If this is the case, why does it lead to bad rendering results in Fig. 8? I am a bit confused and may be missing something. Is it because there are not enough frames in which the problematic part is visible? Many thanks in advance for any help. Regards,

@tiangexiang (Owner)

Good question! My interpretation is that:

  1. The human's appearance is conditioned on SMPL poses [vid2avatar], so the rendering can differ across poses, even for ray samples taken at the same position in canonical space.
  2. For this particular pose, only part of the human is visible, so only the rays in the corresponding visible area are cast and sampled during training.
  3. Based on the above two points, the rendering of the human's appearance depends on two factors: the pose and the ray sample positions. However, such combinations in the invisible area are never seen during training, which makes the rendering of the invisible parts look odd.
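The three points above can be sketched as a toy data-coverage check. This is a hypothetical illustration, not the actual Wild2Avatar or vid2avatar code: the visibility mask, the pose/point counts, and the `in_training_distribution` helper are all made up to show why an unseen (pose, position) combination is unconstrained at render time.

```python
import numpy as np

# Toy setup: which canonical point j is visible under pose i.
rng = np.random.default_rng(0)
n_poses, n_points = 4, 10
visible = rng.random((n_poses, n_points)) > 0.5
visible[0, 7] = True   # point 7 is seen under pose 0 ...
visible[2, 7] = False  # ... but occluded under pose 2

# Only visible (pose, position) pairs receive rendering supervision.
trained_pairs = {(i, j)
                 for i in range(n_poses)
                 for j in range(n_points)
                 if visible[i, j]}

def in_training_distribution(pose_id, point_id):
    """True iff this pose/position combination was supervised."""
    return (pose_id, point_id) in trained_pairs

# Point 7 was observed under *some* pose, yet the specific combination
# (pose 2, point 7) was never supervised, so its rendering is unconstrained.
print(in_training_distribution(2, 7))                               # False
print(any(in_training_distribution(i, 7) for i in range(n_poses)))  # True
```

The takeaway: because the appearance field is conditioned jointly on pose and sample position, seeing a point under one pose does not constrain its rendering under another pose where it was occluded.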

@GloryyrolG (Author)

Hi @tiangexiang,

Thanks for your quick reply. I still do not fully understand.

  1. For the 1st point, do you mean the view-dependent effect, which affects the alpha composition/integral along different rays?

  2. I understood the 2nd point :)

  3. Setting aside any logic I may be misunderstanding, do you mean that those invisible areas are not observed in any training frame? E.g., a part that is always occluded because the person is always holding the object. I do not think that is the case, though, because if it were, Wild2Avatar would not be able to recover that part's appearance either.

Thanks for the help.
