Using SDXL's Revision workflow with and without prompts
The Stability AI team released a Revision workflow, where images can be used as prompts to the generation pipeline. This is similar to Midjourney's image prompts or Stability's previously released unCLIP for SD 2.1.
In our experience, Revision was a little finicky, with a lot of randomness. In particular, it didn't seem to work well with text prompts (related blogpost).
After some experimentation, we found a mix of parameters that works relatively well with Revision, both with and without prompts. Essentially, the trick was to rescale the image and prompt embeddings, and not to use the prompt's pooled embeddings.
# From text2img.py
result: Image.Image = SDXL_MODEL(
    # Scale down the prompt embeddings
    prompt_embeds=prompt_embeds * 0.8,
    # Pass the scaled-up image embeddings in place of the pooled prompt embeddings
    pooled_prompt_embeds=image_embeddings * 2,
    guidance_scale=7.0,
    num_inference_steps=20,
    generator=None if seed is None else torch.manual_seed(seed),
).images[0]
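Substituting the image embeddings for the pooled prompt embeddings works because the two tensors have the same width in SDXL's default configuration. A minimal shape sketch of the rescaling, using dummy tensors (the dimensions are assumptions based on SDXL's defaults, not taken from the repo):

```python
import torch

# Assumed shapes for SDXL conditioning tensors:
# prompt_embeds:    (batch, 77, 2048) - concatenated output of both text encoders
# image_embeddings: (batch, 1280)     - CLIP vision projection, the same width as
#                                       the pooled text embedding it replaces
prompt_embeds = torch.randn(1, 77, 2048)
image_embeddings = torch.randn(1, 1280)

# The rescaling from the call above: damp the text conditioning slightly,
# boost the image conditioning
scaled_prompt = prompt_embeds * 0.8
scaled_image = image_embeddings * 2
```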
There's still a lot of randomness involved, but hopefully this helps anyone who is trying to get Revision to work!
See related Reddit post for more results.
Reference image from Your Name, directed by Makoto Shinkai, 2016
Using prompt "fish" with seeds 0 to 3 from left to right
Reference image from the Suikoden series by Utagawa Kuniyoshi, 1830
Using prompt "frog" with seeds 0 to 3 from left to right
Mixing the two reference images without a prompt, with seeds 0 to 3 from left to right
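The mixed result above combines the image embeddings of the two references before generation. One simple way to do this is a weighted average of the two embeddings; this sketch is an assumption for illustration, not the repo's actual mixing code:

```python
import torch

def mix_image_embeddings(emb_a: torch.Tensor, emb_b: torch.Tensor,
                         weight: float = 0.5) -> torch.Tensor:
    """Linearly interpolate two CLIP image embeddings (assumed mixing strategy)."""
    return weight * emb_a + (1 - weight) * emb_b

# Dummy embeddings standing in for two encoded reference images
emb_a = torch.randn(1, 1280)
emb_b = torch.randn(1, 1280)
mixed = mix_image_embeddings(emb_a, emb_b)
```

The mixed tensor would then be passed as `pooled_prompt_embeds` in the call shown earlier.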
The code uses ComfyUI's CLIP vision loader to load the CLIP vision model, and diffusers to load the SDXL model. There are also minor patches to diffusers' default Euler scheduler and a sharpness patch adapted from Fooocus.
You will need the CLIP safetensors and SDXL safetensors.
Then run the following to generate the first image above (reference: "your_name.jpg", prompt: "fish", seed 0):
$ python3 main.py -p "fish" -i "./samples/your_name.jpg" --seed 0 --clip_model "PATH_TO_CLIP_SAFETENSORS" --sdxl_model "PATH_TO_SDXL_SAFETENSORS"
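The CLI above can be reconstructed roughly as follows. The flag names are taken from the command; the argument destinations, defaults, and help text are assumptions, not main.py's actual code:

```python
import argparse

# Hypothetical sketch of main.py's argument parser
parser = argparse.ArgumentParser(description="SDXL Revision generation")
parser.add_argument("-p", "--prompt", default="", help="optional text prompt")
parser.add_argument("-i", "--image", required=True, help="path to reference image")
parser.add_argument("--seed", type=int, default=None, help="RNG seed (None = random)")
parser.add_argument("--clip_model", required=True, help="path to CLIP vision safetensors")
parser.add_argument("--sdxl_model", required=True, help="path to SDXL safetensors")

# Parse the example command's arguments
args = parser.parse_args([
    "-p", "fish", "-i", "./samples/your_name.jpg", "--seed", "0",
    "--clip_model", "clip.safetensors", "--sdxl_model", "sdxl.safetensors",
])
```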