# unCLIP

[Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) is by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. The unCLIP model in 🤗 Diffusers comes from kakaobrain's karlo.

The abstract from the paper is:

Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.

You can find lucidrains' DALL-E 2 recreation at [lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch).

Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.

## UnCLIPPipeline

[[autodoc]] UnCLIPPipeline
	- all
	- __call__
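
A minimal text-to-image sketch, assuming the `kakaobrain/karlo-v1-alpha` checkpoint and a CUDA device are available:

```py
import torch
from diffusers import UnCLIPPipeline

# Load the Karlo unCLIP checkpoint (prior, decoder, and super-resolution stages).
pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of a red panda eating bamboo, studio lighting"

# The pipeline returns an ImagePipelineOutput; `.images` is a list of PIL images.
image = pipe(prompt, num_images_per_prompt=1).images[0]
image.save("red_panda.png")
```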

## UnCLIPImageVariationPipeline

[[autodoc]] UnCLIPImageVariationPipeline
	- all
	- __call__
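
A minimal image-variation sketch, assuming the `kakaobrain/karlo-v1-alpha-image-variations` checkpoint; the input path is a placeholder for any RGB image:

```py
import torch
from diffusers import UnCLIPImageVariationPipeline
from diffusers.utils import load_image

pipe = UnCLIPImageVariationPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha-image-variations", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Replace the placeholder path with your own image file or URL.
init_image = load_image("path/to/input.png")

# Generate variations that preserve the semantics and style of the input image.
images = pipe(init_image, num_images_per_prompt=2).images
for i, image in enumerate(images):
    image.save(f"variation_{i}.png")
```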

## ImagePipelineOutput

[[autodoc]] pipelines.ImagePipelineOutput
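
Both pipelines above wrap their results in this output class. A small sketch of how the output is typically accessed, reusing the `pipe` object from the text-to-image example; the `return_dict=False` path is an assumption based on the common Diffusers pipeline convention:

```py
output = pipe("a cute corgi wearing sunglasses")
print(type(output))        # ImagePipelineOutput
print(len(output.images))  # `images` is a list of PIL.Image.Image objects

# Passing return_dict=False (assumed here) returns a plain tuple instead of the dataclass.
(images,) = pipe("a cute corgi wearing sunglasses", return_dict=False)
```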