pyladiesams/personalization-with-text-to-image-diffusion-models-feb2024

🎨 Fine-tuning text-to-image diffusion models for personalization and subject-driven generation

📚 Workshop description

During the workshop you will get familiar with different fine-tuning techniques for text-to-image models, and learn how to easily teach a diffusion model a concept of your choosing (a special style, a pet, faces, etc.) with as few as 3 images depicting your concept.

🛠️ Requirements

Python >= 3.10, and some acquaintance with diffusion models and text-to-image models.

NOTE 💡 While we will briefly go over diffusion models and Stable Diffusion in particular, we will not go into detail, and we assume some familiarity with the diffusion process and the architecture of Stable Diffusion models.

TIP 💌 If you're not familiar with diffusion models but are interested in doing this workshop, check out this (free & open-source) introductory diffusion class 🤓

▶️ Usage

  • Clone the repository
  • Start JupyterLab and navigate to the workshop folder, or use Google Colab and import the Jupyter notebooks there
  • Open the first workshop notebook
  • Install the dependencies, either:
    • [Option 1] with pip install -r requirements.txt
    • [Option 2] by running the Setup cells in the notebook
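For a local setup, the steps above can be sketched as shell commands (the repository URL is assumed from the repo slug; the exact notebook folder name inside the repo may differ, so adjust the paths to what you see after cloning):

```shell
# Clone the workshop repository (URL assumed from the repo slug)
git clone https://github.com/pyladiesams/personalization-with-text-to-image-diffusion-models-feb2024.git
cd personalization-with-text-to-image-diffusion-models-feb2024

# Option 1: install the dependencies up front
pip install -r requirements.txt

# Start JupyterLab, then navigate to the workshop folder
# and open the first workshop notebook in the browser UI
jupyter lab
```

If you prefer Google Colab instead, skip the local install and upload the notebooks there; the Setup cells in each notebook (Option 2) install the same requirements inside the Colab runtime.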

🎬 Video record

Re-watch this YouTube stream

🤝 Credits

This workshop was set up by @pyladiesams and @linoytsaban
