
evaluate the fine-tuning textual inversion #2

Open
pribadihcr opened this issue Mar 22, 2023 · 3 comments

@pribadihcr
Hi,
how can I tell whether the fine-tuned textual inversion is good or not? Thanks

brandontrabucco self-assigned this Mar 25, 2023
@brandontrabucco (Owner) commented Mar 25, 2023

Hello pribadihcr,

Once you have performed textual inversion using this script (https://github.com/brandontrabucco/da-fusion/blob/main/fine_tune.py), you can use the utility we created to visually inspect the generations:

https://github.com/brandontrabucco/da-fusion/blob/main/generate_images.py

There are a few arguments that you will need to change (see the example invocation after this list):

--embed-path: this should point to the .bin file containing the textual inversion tokens you created.
--prompt: a prompt that should contain the name of the placeholder token from textual inversion.
--out: the path where the generated images will be saved.
--erasure-ckpt-name: set the default value of this parameter in the script to None. It is not fully supported yet, and will be used in the future to let you erase concepts from Stable Diffusion.
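
For reference, an invocation could look like the following (a sketch only: the embedding path, the placeholder token <my-token>, and the output directory are placeholders you should swap for your own):

python generate_images.py \
    --embed-path fine-tuned/learned_embeds.bin \
    --prompt "a photo of a <my-token>" \
    --out generations/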

Let me know if you have other questions I can help with!

-Brandon

@pribadihcr (Author)

Hello @brandontrabucco,
How can I continue training the fine-tuned textual inversion?
E.g., I want to resume training from the checkpoint learned_embeds-steps-5000.bin.
It looks like the resume_from_checkpoint parameter does not work with the currently saved embedding file. Thanks

@brandontrabucco (Owner)

Hello pribadihcr,

Looking at the script, we modified it to save only the learned_embeds.bin file, but the --resume_from_checkpoint parameter expects the saved state of the Accelerator object, not just learned_embeds.bin. The fix is to copy the relevant lines of code for saving the Accelerator state from the original script here:

https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py

Once this extra state is saved, you can pass the path to the saved Accelerator state to --resume_from_checkpoint.
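
A minimal sketch of that addition inside the training loop (assuming fine_tune.py keeps variable names similar to the upstream script, e.g. accelerator, global_step, args.save_steps, and args.output_dir; adjust these to match the actual code):

import os

# In addition to saving learned_embeds.bin, periodically checkpoint the full
# training state (model, optimizer, and RNG) so --resume_from_checkpoint can use it.
if accelerator.is_main_process and global_step % args.save_steps == 0:
    save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
    accelerator.save_state(save_path)

Resuming then amounts to loading that checkpoint directory with accelerator.load_state, which is what the upstream script does when --resume_from_checkpoint is set.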

If you want to avoid re-training and instead continue from the current embeddings, you can also modify this line of code to load the fine-tuned embedding in place of the initializer token's embedding:

# If --resume_from_checkpoint points at a saved learned_embeds.bin, initialize the
# placeholder token from the fine-tuned embedding; otherwise fall back to the
# embedding of the initializer token as before.
if args.resume_from_checkpoint is not None:
    token_embeds[placeholder_token_id] = torch.load(
        args.resume_from_checkpoint)[args.placeholder_token]
else:
    token_embeds[placeholder_token_id] = token_embeds[initializer_token_id]
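
As a sanity check before wiring this in, you can inspect the saved file directly (a sketch, assuming learned_embeds-steps-5000.bin follows the usual textual-inversion format of a {placeholder_token: tensor} dict):

import torch

state = torch.load("learned_embeds-steps-5000.bin", map_location="cpu")
print(list(state.keys()))                  # should contain your placeholder token
print(next(iter(state.values())).shape)    # the learned embedding vector, e.g. (768,) for SD 1.x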

Let me know if you have additional questions!

Best,
Brandon
