✨Features | ⭳ Download | 🛠️Installation | 🎞️ Video | 🖼️Screenshots | 📖Wiki | 💬Discussion
Generate images from within Krita with minimal fuss: Select an area, push a button, and new content that matches your image will be generated. Or expand your canvas and fill new areas with generated content that blends right in. Text prompts are optional. No tweaking required!
This plugin seeks to provide what "Generative Fill/Expand" do in Photoshop - and go beyond. Adjust strength to refine existing content (img2img) or generate images from scratch. Powerful customization is available for advanced users.
Local. Open source. Free.
Features are designed to fit an interactive workflow where AI generation is used as just another tool while painting. They are meant to synergize with traditional tools and the layer stack.
- Inpaint: Use Krita's selection tools to mark an area and remove or replace existing content in the image. Simple text prompts can be used to steer generation.
- Outpaint: Extend your canvas, select a blank area and automatically fill it with content that seamlessly blends into the existing image.
- Generate: Create new images from scratch by describing them with words or with existing images. Supports SD1.5 and SDXL.
- Refine: Use the strength slider to refine existing image content instead of replacing it entirely. This also works great for adding new things to an image by painting a (crude) approximation and refining at high strength!
- Live Painting: Let AI interpret your canvas in real time for immediate feedback. Watch Video
- Control: Guide image creation directly with sketches or line art. Use depth or normal maps from existing images or 3D scenes. Transfer character pose from snapshots. Control composition with segmentation maps.
- Resolutions: Work efficiently at any resolution. The plugin will automatically use resolutions appropriate for the AI model, and scale them to fit your image region.
- Upscaling: Upscale and enrich images to 4k, 8k and beyond without running out of memory.
- Job Queue: Depending on hardware, image generation can take some time. The plugin allows you to queue and cancel jobs while working on your image.
- History: Not every image will turn out a masterpiece. Preview results and browse previous generations and prompts at any time.
- Strong Defaults: Versatile default style presets allow for a simple UI which covers many scenarios.
- Customization: Create your own presets - select a Stable Diffusion checkpoint, add LoRA, tweak samplers and more.
See the Plugin Installation Guide for instructions.
A concise (more technical) version is below:
- Windows, Linux, macOS
- On Linux/Mac: Python + venv must be installed
  - recommended version: 3.11 or 3.10
  - usually available via your package manager, e.g. `apt install python3-venv` (see the sketch below)
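To make the Linux/Mac requirement concrete, here is a rough sketch using Debian/Ubuntu package names. Treat the exact commands as examples rather than required steps, since package managers and available Python versions differ between systems.

```sh
# Check which Python is installed; 3.10 or 3.11 is recommended.
python3 --version
# Debian/Ubuntu example: make sure the venv module is available.
sudo apt install python3-venv
# Confirm that virtual environments can be created.
python3 -m venv --help
```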
To run locally, a powerful graphics card with at least 6 GB of VRAM is recommended. Otherwise, generating images will take a very long time!
| Hardware | Support |
|---|---|
| NVIDIA GPU | supported via CUDA |
| AMD GPU | supported via DirectML on Windows, ROCm on Linux (custom server only) |
| Apple M1/M2 | supported via MPS on macOS |
| CPU | supported, but very slow |
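If you are unsure how much VRAM your card has, the command below is one quick way to check on NVIDIA systems. It is only an illustration using standard `nvidia-smi` query options; the plugin does not require it.

```sh
# List the GPU name and total VRAM as reported by the NVIDIA driver.
nvidia-smi --query-gpu=name,memory.total --format=csv
```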
1. If you haven't yet, go and install Krita! Required version: 5.2.0 or newer
2. Download the plugin.
3. Start Krita and install the plugin via Tools ▸ Scripts ▸ Import Python Plugin from File...
   - Point it to the ZIP archive you downloaded in the previous step.
   - ⚠ This will delete any previous install of the plugin. If you are updating from version 1.14 or older, please read updating to a new version first.
   - Check Krita's official documentation for more options (an optional sanity check is sketched after this list).
4. Restart Krita and create a new document or open an existing image.
5. To show the plugin docker: Settings ‣ Dockers ‣ AI Image Generation.
6. In the plugin docker, click "Configure" to start the local server installation, or to connect to an existing server.
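If you want to verify that step 3 worked before restarting, the plugin files should end up in the `pykrita` folder inside Krita's resource directory. The path below is the typical Linux location and is only an example; on other systems, open the folder via Settings ▸ Manage Resources... ▸ Open Resource Folder.

```sh
# Optional check: after importing, the plugin should appear under "pykrita"
# in Krita's resource folder (typical Linux path shown; yours may differ).
ls ~/.local/share/krita/pykrita/
```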
Note: If you encounter problems, please check the FAQ / list of common issues for solutions.
The plugin uses ComfyUI as its backend. As an alternative to the automatic installation, you can install it manually or use an existing installation. If the server is already running locally before you start Krita, the plugin will automatically try to connect to it. Using a remote server is also possible this way.
Please check the list of required extensions and models to make sure your installation is compatible.
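For a manual or existing installation, launching the server might look like the sketch below. The checkout location and Python environment are placeholders for your own setup; `--listen` and `--port` are ComfyUI's standard launch flags, and 127.0.0.1:8188 is ComfyUI's default address.

```sh
# Sketch: start a manually installed ComfyUI server before launching Krita,
# so the plugin can detect the local server and connect to it.
cd /path/to/ComfyUI
python main.py
# Or expose it on the network for use as a remote server:
python main.py --listen 0.0.0.0 --port 8188
```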
If you're looking for a way to easily select objects in the image, there is a separate plugin which adds AI segmentation tools.
You can also rent a GPU instead of running locally. In that case, step 6 is not needed; instead, use the plugin to connect to the remote server.
There is a step-by-step guide on how to set up a cloud GPU on runpod.io, vast.ai or sailflow.ai.
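If the rented instance is only reachable over SSH, one possible approach (not the only one, and not necessarily what the guides above use) is to forward the remote ComfyUI port to your machine and point the plugin at the local end. The user and host below are placeholders for your own instance.

```sh
# Hypothetical example: tunnel a remote ComfyUI instance (port 8188) to
# localhost so the plugin can connect to http://127.0.0.1:8188.
ssh -N -L 8188:127.0.0.1:8188 user@rented-gpu-host
```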
Inpainting on a photo using a realistic model
Reworking and adding content to an AI generated image
Adding detail and iteratively refining small parts of the image
Using ControlNet to guide image generation with a crude scribble
Modifying the pose vector layer to control character stances (Click for video)
Upscaling to improve image quality and add details
Contributions are very welcome! Check the contributing guide to get started.
- Image generation: Stable Diffusion
- Diffusion backend: ComfyUI
- Inpainting: ControlNet, IP-Adapter