It seems like the entire latent space is shifted toward what you're training. And the longer you train, the more it is affected.
However, @nikopueringer has been figuring out what's missing in @XavierXiao's code compared to Google's implementation: regularization on the go.
To be reductive, the idea is:

1. Generate an image from the original ckpt.
2. Move the new ckpt toward your class / face.
3. Generate the same image from step 1 with the new ckpt.
4. If it has drifted too far, rewind and try again.
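The steps above can be sketched as a loop. This is only a toy illustration, not code from either repo: `generate` and `train_step` are hypothetical stand-ins for sampling an image from a checkpoint and applying one fine-tuning update, and the L2 drift metric and `max_drift` threshold are assumptions.

```python
import numpy as np

def l2_drift(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Mean squared pixel difference between two generated images."""
    return float(np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2))

def train_with_prior_check(generate, train_step, weights, prompt,
                           max_drift=0.05, steps=100):
    """Sketch of the rewind loop: reject updates that shift an
    unrelated prompt's output too far from the original checkpoint."""
    reference = generate(weights, prompt)          # 1. image from the original ckpt
    for _ in range(steps):
        candidate = train_step(weights)            # 2. move toward the class / face
        regenerated = generate(candidate, prompt)  # 3. same prompt, new ckpt
        if l2_drift(reference, regenerated) > max_drift:
            continue                               # 4. drifted too far: rewind, try again
        weights = candidate                        #    otherwise keep the update
    return weights
```

In a real run the drift check would use a handful of regularization prompts rather than one, which is roughly what the prior-preservation loss in Google's paper amortizes into the training objective instead of doing as a hard accept/reject step.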
As a test, I trained my face on the class word "brazilian". At 9K steps, here are outputs for some unrelated prompts (Euler, seed 1, CFG 15):
photo of an apple:
man:
brazilian:
annakendrick:
kit harington:
photo of a horse:
Some of the issues above might be ameliorated by removing "photo of" from the personalized.py file. Or by using more regularization images, perhaps as many images as there are steps? Or perhaps a much, much narrower class?
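For the personalized.py idea: assuming the file defines a prompt-template list along these lines (the actual contents of @XavierXiao's file may differ; this is a hypothetical shape), the suggested trim could look like:

```python
# Hypothetical prompt templates in the style of textual-inversion / Dreambooth
# forks; the real personalized.py may define a different list.
templates = ["photo of a {}"]

# Drop the "photo of" prefix so the class token carries less photographic bias.
trimmed = [t.replace("photo of ", "") for t in templates]
```

The training captions would then read "a brazilian" instead of "photo of a brazilian", which might keep the "photo of" phrase itself from being pulled toward the trained face.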