Is it possible to easily translate a source image from one domain to another using latent codes rather than reference images?
I see a function `translate_using_latent(nets, args, x_src, y_trg_list, z_trg_list, psi, filename)` in utils.py (line 78), but it is never used and I am not sure what `y_trg_list` and `z_trg_list` are supposed to look like.
So `[torch.ones(src.x.size(0)).long().to('cuda')]` is the `y_trg_list`. Here it is all ones, so every source image will be translated to domain 1, which is 'male' in the case of the CelebA pretrained network.
`[torch.randn(src.x.size(0), 16).to('cuda')]` is the list of latent vectors. These are fed to the mapping network alongside `y_trg_list` to produce the style codes, which the generator then combines with the source images to generate the fake images.
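Putting the two lists together, here is a minimal sketch of how the arguments can be constructed before calling `translate_using_latent`. The batch size and latent dimension of 16 are assumptions taken from the snippet above (16 is the StarGAN v2 default `latent_dim`), and the tensors are kept on CPU here for illustration; add `.to('cuda')` as in the original code when running on GPU:

```python
import torch

batch_size = 4      # assumed number of source images in x_src
latent_dim = 16     # latent dimension used by the mapping network
target_domain = 1   # e.g. 'male' for the CelebA pretrained model

# One target-domain label per source image; all set to target_domain,
# so every source image is translated to that domain.
y_trg_list = [torch.full((batch_size,), target_domain, dtype=torch.long)]

# One random latent code per source image; the mapping network turns
# each (z, y) pair into a style code consumed by the generator.
z_trg_list = [torch.randn(batch_size, latent_dim)]

print(y_trg_list[0].shape, z_trg_list[0].shape)
```

With these lists built (and moved to the right device), `translate_using_latent(nets, args, x_src, y_trg_list, z_trg_list, psi, filename)` writes the translated images to `filename`.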
The cleaner approach would be to add a `sample_with_latent` method to Solver.py, but this was just to make it work quickly :)