
The generated image is quite different from the reference image #9

Closed
1273545169 opened this issue Mar 21, 2022 · 2 comments

@1273545169

I tested the model and found that the hairstyle of the generated image is quite different from that of the reference image. Here is my test script. The reference image is selected from the CelebAMask-HQ dataset. Is there a problem in my test process?

```shell
python scripts/inference.py \
  --exp_dir=../outputs/0321/ \
  --checkpoint_path=../pretrained_models/hairclip.pt \
  --latents_test_path=../pretrained_models/test_faces.pt \
  --editing_type=both \
  --input_type=image_image \
  --color_ref_img_test_path=../input/16 \
  --hairstyle_ref_img_test_path=../input/16 \
  --num_of_ref_img 1
```

[screenshot: generated result compared with the reference image]

@wty-ustc
Owner

As stated in the limitations section of our paper, the hairstyle transfer embedding is provided by the image encoder of CLIP, which may not be expressive enough to characterize the fine-grained structure of hairstyles, so the results can sometimes be unsatisfactory. You can try other reference images, train HairCLIP specifically for hairstyle transfer, or add optimization strategies.

@1273545169
Author

Thank you so much.
