
GLIDE

This is a fork of the official codebase for running the small, filtered-data GLIDE model from the paper GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models.

For details on the pre-trained models in this repository, see the Model Card.

Usage

To install this package, clone this repository and then run:

pip install -e .

For detailed usage examples, see the notebooks directory.

  • The text2im notebook shows how to use GLIDE (filtered) with classifier-free guidance to produce images conditioned on text prompts; a condensed sketch of this workflow appears after this list. The local version of this notebook is text2im.py.
  • The inpaint notebook shows how to use GLIDE (filtered) to fill in a masked region of an image, conditioned on a text prompt. The local version of this notebook is inpaint.py.
  • The clip_guided notebook shows how to use GLIDE (filtered) + a filtered noise-aware CLIP model to produce images conditioned on text prompts. The local version of this notebook is clip_guided.py.
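The core of the text2im workflow, condensed here for reference, looks roughly like the following. This is a minimal sketch of the base (64x64) sampling stage only; the prompt, batch size, and guidance scale are illustrative values, the upsampler stage is omitted, and the notebooks remain the authoritative examples.

    import torch as th

    from glide_text2im.download import load_checkpoint
    from glide_text2im.model_creation import (
        create_model_and_diffusion,
        model_and_diffusion_defaults,
    )

    device = th.device('cuda' if th.cuda.is_available() else 'cpu')

    # Create the 64x64 base model with a reduced number of sampling steps.
    options = model_and_diffusion_defaults()
    options['use_fp16'] = device.type == 'cuda'
    options['timestep_respacing'] = '100'  # fewer diffusion steps for faster sampling
    model, diffusion = create_model_and_diffusion(**options)
    model.eval()
    if options['use_fp16']:
        model.convert_to_fp16()
    model.to(device)
    model.load_state_dict(load_checkpoint('base', device))

    prompt = 'an oil painting of a corgi'  # illustrative prompt
    batch_size = 1
    guidance_scale = 3.0

    # Tokenize the prompt and an empty (unconditional) prompt, so the model
    # sees conditional and unconditional inputs in a single batch.
    tokens = model.tokenizer.encode(prompt)
    tokens, mask = model.tokenizer.padded_tokens_and_mask(tokens, options['text_ctx'])
    uncond_tokens, uncond_mask = model.tokenizer.padded_tokens_and_mask([], options['text_ctx'])

    model_kwargs = dict(
        tokens=th.tensor([tokens] * batch_size + [uncond_tokens] * batch_size, device=device),
        mask=th.tensor([mask] * batch_size + [uncond_mask] * batch_size,
                       dtype=th.bool, device=device),
    )

    def model_fn(x_t, ts, **kwargs):
        # Classifier-free guidance: push the conditional noise prediction away
        # from the unconditional one by guidance_scale.
        half = x_t[: len(x_t) // 2]
        combined = th.cat([half, half], dim=0)
        model_out = model(combined, ts, **kwargs)
        eps, rest = model_out[:, :3], model_out[:, 3:]
        cond_eps, uncond_eps = th.split(eps, len(eps) // 2, dim=0)
        half_eps = uncond_eps + guidance_scale * (cond_eps - uncond_eps)
        eps = th.cat([half_eps, half_eps], dim=0)
        return th.cat([eps, rest], dim=1)

    # Sample; the second half of the batch carries the unconditional inputs,
    # so only the first batch_size images are kept.
    samples = diffusion.p_sample_loop(
        model_fn,
        (batch_size * 2, 3, options['image_size'], options['image_size']),
        device=device,
        clip_denoised=True,
        progress=True,
        model_kwargs=model_kwargs,
    )[:batch_size]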

Local versions

The local versions are kept as close as possible to the original notebooks, which remain unchanged in this repository. Changes in the local versions include:

  • No call to the notebook "display" helper is needed; images are written to disk instead.
  • Individual images are saved in addition to the image strip (by default, only the upscaled images are saved); a sketch of this saving step follows this list.
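For reference, saving the individual files plus the strip can be done along these lines. This is a minimal sketch: save_images and the output file names are illustrative rather than the names used in the local scripts, and samples is assumed to be the [N, 3, H, W] tensor in [-1, 1] returned by the sampler.

    import torch as th
    from PIL import Image

    def save_images(samples: th.Tensor, prefix: str = 'output') -> None:
        """Save each sample as its own PNG, plus a horizontal strip of all samples."""
        # Map from [-1, 1] to [0, 255] uint8, as the original notebooks do.
        scaled = ((samples + 1) * 127.5).round().clamp(0, 255).to(th.uint8).cpu()
        # Individual images: one file per sample.
        for i, img in enumerate(scaled):
            Image.fromarray(img.permute(1, 2, 0).numpy()).save(f'{prefix}_{i}.png')
        # Image strip: all samples side by side, matching the notebook display.
        strip = scaled.permute(2, 0, 3, 1).reshape([scaled.shape[2], -1, 3])
        Image.fromarray(strip.numpy()).save(f'{prefix}_strip.png')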
