Thank you in advance; this is a very helpful article.
In your blog, you mentioned that:
"The model does not support direct modification of such features. I used illust2vec to extract character features such as hair color, eye color, etc. Then I got some ideas from Conditional generative adversarial nets and supplied those features as embeddings to Generator. Now when I generate an image, I add an additional Anime character as input, and the transferred result should look like that character, with the position and facial expression(TODO) of the human portrait kept untouched. The result is like this:"
I wonder, is there a pre-trained model with this feature yet?
Regards,
MinakoKojima
Thank you for your interest. Unfortunately the code base has changed quite a lot since I wrote the blog. That experimental idea is still applicable, but it was orthogonal to the main theme, which is to propose a new framework that works well in the presence of deformations, abstractions, and domain shifts during unsupervised image translation. I do not want to further complicate the code base, so I decided against adding that feature to this repository. Sorry!
If you are interested in contributing, I can send you the code I used directly. (It's quite messy and will require some work to make it compatible with this repo.)