Problem about training #22

Open
Kangkang625 opened this issue Jul 31, 2023 · 4 comments

@Kangkang625 commented Jul 31, 2023

Hi, thank you for your great work!

I was trying to write training code and do some training, but I was confused by this sentence from the paper: "We first train the EMASC modules, the textual-inversion adapter, and the warping component. Then, we freeze all the weights of all modules except for the textual inversion adapter and train the proposed enhanced Stable Diffusion pipeline" (Sec. 4.2). Should I first freeze the other weights, including the unet, and train the textual inversion adapter, or should I freeze the other weights and train the textual inversion adapter and the unet together?

@snaiws commented Aug 7, 2023

I am wondering about this too.

@ABaldrati (Collaborator)

Hi @Kangkang625
Thanks for your interest in our work!!

should I first freeze the other weights, including the unet, and train the textual inversion adapter, or should I freeze the other weights and train the textual inversion adapter and the unet together?

First, you should pre-train the inversion adapter, keeping all the other weights (including the unet) frozen.
Then, keeping the EMASC modules and the warping module frozen, you should train the unet and the (pre-trained) inversion adapter together.
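
In code, the two-stage freezing scheme might look like the minimal PyTorch sketch below. The module names and the learning rate are placeholders standing in for the real components, so treat it as an illustration rather than the repo's actual training script:

```python
import itertools

import torch

# Stand-ins so the sketch runs on its own; in practice these would be the
# Stable Diffusion inpainting unet, the textual-inversion adapter, the EMASC
# modules, and the warping component.
unet = torch.nn.Linear(8, 8)
inversion_adapter = torch.nn.Linear(8, 8)
emasc = torch.nn.Linear(8, 8)
warping_module = torch.nn.Linear(8, 8)


def set_trainable(module: torch.nn.Module, flag: bool) -> None:
    for p in module.parameters():
        p.requires_grad = flag


# Stage 1: pre-train the inversion adapter; everything else stays frozen.
for frozen in (unet, emasc, warping_module):
    set_trainable(frozen, False)
set_trainable(inversion_adapter, True)
optimizer_stage1 = torch.optim.AdamW(inversion_adapter.parameters(), lr=1e-5)

# Stage 2: unfreeze the unet and train it together with the (pre-trained)
# adapter, while EMASC and the warping module remain frozen.
set_trainable(unet, True)
optimizer_stage2 = torch.optim.AdamW(
    itertools.chain(unet.parameters(), inversion_adapter.parameters()),
    lr=1e-5,
)
```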

I hope this clarifies your doubts.
Alberto

@Kangkang625 (Author)

Thanks for your answer @ABaldrati!
It's very helpful for my further study, but I still have a little confusion about the unet training.

According to my understanding, the unet should be extended based on the unet of the Stable Diffusion pipeline.
Should I extend the unet, initialize the new parts' weights randomly, and directly freeze it to pre-train the textual inversion adapter?

Thanks again for your great work and detailed answer!

@ABaldrati (Collaborator)

According to my understanding, the unet should be extended based on the unet of the Stable Diffusion pipeline.
Should I extend the unet, initialize the new parts' weights randomly, and directly freeze it to pre-train the textual inversion adapter?

When we pre-train the inversion adapter, we use the standard Stable Diffusion inpainting model. In this phase we do not extend the unet.
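
A minimal sketch of that setup with diffusers is below; the checkpoint id is just an example of a standard Stable Diffusion inpainting model, not necessarily the exact one used in the paper:

```python
from diffusers import UNet2DConditionModel

# Load an off-the-shelf Stable Diffusion inpainting unet (example checkpoint
# id; the authors' exact checkpoint may differ).
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", subfolder="unet"
)

# Keep the unet frozen (and in eval mode) while pre-training the adapter.
unet.requires_grad_(False)
unet.eval()
```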
