Hi,
In your paper, you "concatenate the source image Is, the source parsing map Ss, the generated parsing map Sg and the target pose Pt in depth (channel) dimension and extract its feature Fp", as shown in the figure.
However, in my opinion, Fp should aim to provide the target pose information. Why do you additionally use the source image Is and the source parsing map Ss as inputs? Have you tried extracting Fp from only Sg and Pt?
Thanks!
Hi,
We found that this pipeline can disentangle the shape and style information in the texture transfer model. I believe it is worth trying to take only Sg and Pt as the inputs of the image generator, and I have tried to do it. If you would like to try it as well, we recommend adopting more normalization blocks in the image generator.
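For anyone reproducing this step, the channel-wise concatenation quoted from the paper could be sketched roughly as follows. The channel counts here are illustrative assumptions (RGB image, one-hot parsing maps, keypoint heatmaps for the pose), not the paper's actual values:

```python
import numpy as np

# Assumed channel counts (hypothetical, for illustration only):
#   I_s: RGB source image        -> 3 channels
#   S_s: source parsing map      -> 8 one-hot label channels
#   S_g: generated parsing map   -> 8 one-hot label channels
#   P_t: target pose             -> 18 keypoint heatmap channels
H, W = 256, 256
I_s = np.zeros((3, H, W), dtype=np.float32)
S_s = np.zeros((8, H, W), dtype=np.float32)
S_g = np.zeros((8, H, W), dtype=np.float32)
P_t = np.zeros((18, H, W), dtype=np.float32)

# Concatenate in the depth (channel) dimension; the stacked tensor
# is what a feature extractor would consume to produce Fp.
x = np.concatenate([I_s, S_s, S_g, P_t], axis=0)
print(x.shape)  # (37, 256, 256)

# The variant discussed in this thread would drop I_s and S_s:
x_pose_only = np.concatenate([S_g, P_t], axis=0)
print(x_pose_only.shape)  # (26, 256, 256)
```

In a training framework this would be `torch.cat(..., dim=1)` over batched tensors; the shape arithmetic is the same.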