Thank you for your impressive work.
Also thank you for sharing the model and code.
We believe this work is one of the most important in the field of style transfer this year.
I studied the code you posted in detail, and the separation into a head translation model and a background translation model is confusing to me. From the paper, my understanding is that a single Texture translation network is enough to transfer the style of the whole image uniformly.
It is this feature that impresses me!
Looking forward to your reply.
Best,
Good question. Here we provide a more practical inference version in which the head and background are processed separately for time efficiency (especially for images with small faces, or with multiple faces at different scales). The model also supports direct full-image translation when the face is at a suitable scale: with resized images, directly use "cartoon_anime_bg.pb" for inference. Actually, cartoon_[style]_bg.pb and cartoon_[style]_h.pb are the same model with the same params. For the anime style, we apply an additional optimization to the head model, but the bg model also works well.
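For reference, a minimal full-image inference sketch against the frozen graph might look like the following. It assumes TensorFlow 1.x-style graph loading, the tensor names "input_image:0" / "output_image:0", [-1, 1] input normalization, and a 720 px working resolution; all of these are assumptions, so check the released graph for the actual names and preprocessing.

```python
import cv2
import numpy as np
import tensorflow as tf

def load_graph(pb_path):
    """Load a frozen TensorFlow graph (*.pb) into a new tf.Graph."""
    graph_def = tf.compat.v1.GraphDef()
    with open(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.compat.v1.import_graph_def(graph_def, name="")
    return graph

def cartoonize(image_bgr, graph, sess):
    # Resize so the face lands at a suitable scale (720 px on the long
    # side is an assumption; tune for your inputs), normalize to [-1, 1].
    h, w = image_bgr.shape[:2]
    scale = 720.0 / max(h, w)
    resized = cv2.resize(image_bgr, (int(w * scale), int(h * scale)))
    inp = resized.astype(np.float32) / 127.5 - 1.0
    out = sess.run(
        graph.get_tensor_by_name("output_image:0"),  # assumed tensor name
        feed_dict={graph.get_tensor_by_name("input_image:0"): inp[None, ...]},
    )
    # Map the network output back to uint8 pixels.
    return ((np.squeeze(out) + 1.0) * 127.5).clip(0, 255).astype(np.uint8)

graph = load_graph("cartoon_anime_bg.pb")
with tf.compat.v1.Session(graph=graph) as sess:
    result = cartoonize(cv2.imread("photo.jpg"), graph, sess)
    cv2.imwrite("cartoon.jpg", result)
```

For images with small faces or several faces at different scales, the separate cartoon_[style]_h.pb head pass (crop, translate, paste back) is the more efficient route, as described above.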