AI-powered multi-image editing using Qwen's Image Edit 2509 model with quantized INT4 transformers for 24GB VRAM GPUs.

## ⚠️ Critical Warnings

1. **❌ WRONG MODEL**: Do NOT use `Qwen/Qwen-Image-Edit-2509` (40GB full model)
   - ✅ USE: `nunchaku-tech/nunchaku-qwen-image-edit-2509` (12.7GB quantized)
   - The 40GB model **WILL NOT FIT** on RTX 4090 24GB VRAM
   - You will get OOM (Out of Memory) errors

2. **❌ WRONG NUNCHAKU**: Do NOT use `pip install nunchaku` (0.15.4 stats package)
   - ✅ USE: Install from source (see [`INSTALL_NUNCHAKU.md`](INSTALL_NUNCHAKU.md))
   - The PyPI package "nunchaku" is a completely different stats library
   - You need `nunchaku==1.0.1+torch2.5` for AI model quantization

3. **❌ NOT USING VENV**: Do NOT install in system Python
   - ✅ ALWAYS activate `.venv` before running ANY command
   - Check that your prompt shows `(.venv)` at the start
   - If you see errors, you're probably not in the venv

4. **❌ WRONG PARAMETERS**: Do NOT assume all models use the same parameters
   - ✅ ALWAYS check the HuggingFace model card for optimal settings
   - Lightning/Turbo/distilled models need different `true_cfg_scale` values
   - Example: standard models use `true_cfg_scale=4.0`, Lightning models use `1.0`
   - Using the wrong parameters = poor quality, blocky/pixelated output
   - See the "Model-Specific Parameters" section below
## 🎯 Overview

This project implements the Qwen Image Edit 2509 model using quantized INT4 transformers via nunchaku, enabling high-quality AI image editing on consumer GPUs like the RTX 4090 (24GB VRAM).

**NEW: Gradio Web UI** - Interactive web interface for easy image editing!
- `qwen_gradio_ui.py` - **Recommended!** Web interface with multi-model support

**NEW: REST API** - Production-ready FastAPI server with queue management!
- See [`api/README.md`](api/README.md) for complete API documentation
- Job queue system with concurrent processing
- Comprehensive test suite (14 tests, 100% coverage)
- API key authentication with automatic management
- Cloudflare Tunnel ready for remote access

**Command-line scripts:**
- `qwen_image_edit_nunchaku.py` - Standard 40-step model (best quality, ~2:45)
- `qwen_image_edit_lightning.py` - Lightning 8-step model (fast, ~21s)
- `qwen_image_edit_lightning_4step.py` - Lightning 4-step model (ultra-fast, ~10s)
- `qwen_instruction_edit.py` - Instruction-based single-image editing

## ✨ Features

- **🎨 Gradio Web UI**: Easy-to-use web interface with real-time preview and multi-model support
- **REST API**: Production-ready FastAPI server with job queue (see [`api/README.md`](api/README.md))
- **Multi-Model Support**: Switch between 4-step (~10s), 8-step (~21s), and 40-step (~3 min) models on the fly
- **Random Seeds**: Automatic random seed generation for variety
- **Face Preservation**: Strong automatic face identity preservation
- **Quantized Model Support**: Uses INT4 quantization (rank 128) to fit in 24GB VRAM
- **High Quality Output**: The ~12.7GB quantized model maintains excellent quality
- **Lightning Fast Option**: The 4-step model generates in ~10 seconds!
- **CUDA-Optimized**: Built for NVIDIA GPUs with Compute Capability 8.9 (RTX 4090)
- **Queue Management**: The API handles concurrent jobs with automatic queuing and overflow protection
- **Comprehensive Testing**: 14-test suite validates all functionality

## 🖼️ Example

**Prompt**: "The magician bear is on the left, the alchemist bear is on the right, facing each other in the central park square."

**Generated Image**: `output_image_edit_plus_r128.png`
- **Inference Time**: 2:44 (40 steps)
- **Model Size**: 12.7GB quantized
- **VRAM Usage**: ~23GB

## 🛠️ System Requirements

### Hardware
- **GPU**: NVIDIA GeForce RTX 4090 (24GB VRAM) or similar
- **VRAM**: 24GB minimum
- **Compute Capability**: 8.9 (sm_89)
- **RAM**: 32GB recommended
- **Disk Space**: ~50GB for models and dependencies

### Software
- **OS**: Windows 10/11
- **Python**: 3.10.6
- **CUDA**: 13.0 (or 12.1+)
- **Driver**: 581.29 or newer
- **Visual Studio Build Tools 2022**: With C++ components

## 📦 Installation

**Full installation guide:** [`INSTALL_NUNCHAKU.md`](INSTALL_NUNCHAKU.md)

Quick steps:
1. Create the venv: `python -m venv .venv`
2. Activate it: `.\.venv\Scripts\Activate.ps1`
3. Install requirements: `pip install -r requirements.txt`
4. Install nunchaku from source (see guide)

### Step 1: Create and Activate the Virtual Environment

```powershell
# Navigate to project directory
cd C:\Projects\qwen-image-edit

# Create virtual environment
python -m venv .venv

# Activate it (DO THIS EVERY TIME)
.\.venv\Scripts\Activate.ps1

# Verify activation - you should see (.venv) in your prompt
# Your prompt should look like: (.venv) PS C:\Projects\qwen-image-edit>
```

### Step 2: Install PyTorch with CUDA

```powershell
# MAKE SURE (.venv) is showing in your prompt!
pip install torch==2.5.1+cu121 torchvision==0.20.1+cu121 torchaudio==2.5.1+cu121 --index-url https://download.pytorch.org/whl/cu121
```

### Step 3: Install Diffusers from GitHub

**⚠️ Must be installed from GitHub for QwenImageEditPlusPipeline support**

```powershell
# MAKE SURE (.venv) is showing in your prompt!
pip install git+https://github.com/huggingface/diffusers.git
```

### Step 4: Install Other Dependencies

```powershell
# MAKE SURE (.venv) is showing in your prompt!
pip install -r requirements.txt
```
### Step 5: Install nunchaku from Source

**⚠️ DO NOT use `pip install nunchaku`** - that is a different package! See [`INSTALL_NUNCHAKU.md`](INSTALL_NUNCHAKU.md) for complete instructions.

```powershell
# MAKE SURE (.venv) is showing in your prompt!

# Install Visual Studio Build Tools 2022 first (if not already installed)
# Download from: https://visualstudio.microsoft.com/downloads/

# Clone and build nunchaku
cd $env:TEMP
git clone https://github.com/nunchaku-tech/nunchaku.git
cd nunchaku
git submodule update --init --recursive
$env:TORCH_CUDA_ARCH_LIST="8.9"
$env:DISTUTILS_USE_SDK="1"
pip install -e . --no-build-isolation

# Return to project
cd C:\Projects\qwen-image-edit
```

Verify the nunchaku installation:

```powershell
pip show nunchaku
```

### Step 6: Run Prerequisites Check

```powershell
.\check.ps1
```
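To confirm the PyTorch CUDA install from Step 2 by hand (independent of `check.ps1`), a quick manual check like the following can help; it is not part of the project scripts, just a sketch using standard PyTorch calls:

```python
import torch

print("PyTorch:", torch.__version__)                 # expect 2.5.1+cu121
print("CUDA available:", torch.cuda.is_available())  # expect True
print("CUDA build:", torch.version.cuda)             # expect 12.1
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))                       # expect RTX 4090
    print("Compute capability:", torch.cuda.get_device_capability(0))  # expect (8, 9)
```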
## 🚀 Quick Start

**⚠️ ALWAYS activate venv first!**

```powershell
# Navigate to project
cd C:\Projects\qwen-image-edit

# Activate virtual environment (DO THIS EVERY TIME!)
.\.venv\Scripts\Activate.ps1

# Verify you see (.venv) in your prompt
# Prompt should show: (.venv) PS C:\Projects\qwen-image-edit>
```

### Option 1: Gradio Web UI (Recommended)

Easy-to-use web interface with all features!

```powershell
.\launch.ps1              # Select option 2
# or start the web UI directly:
python qwen_gradio_ui.py
```

Then open your browser to http://127.0.0.1:7860.
Features:

- 🎨 Interactive web interface - no code editing needed
- Switch between 3 models (4-step, 8-step, 40-step) on the fly
- 🎲 Random seeds for variety, or note the seed for reproducibility
- Instruction-based editing with system prompts
- Automatic face preservation
- 💾 Clean filenames: `qwen04_0001.png`, `qwen08_0042.png`, `qwen40_0001.png`

To edit an image:

1. Run `.\launch.ps1` and select option 2
2. Upload an image
3. Enter an instruction, e.g. "Add Superman cape and suit"
4. Click Generate

### Option 2: REST API

```powershell
.\launch.ps1              # Select option 1
```

Swagger UI is available at http://localhost:8000/docs.

The API uses API key authentication; keys are auto-generated on first run. View your key:

```powershell
cd api
.\show-api-key.ps1
```

Example request from Python:

```python
import requests

headers = {"X-API-Key": "your-key"}
files = {"image": open("photo.jpg", "rb")}
data = {
    "instruction": "Transform into Superman",
    "model": "4-step"
}

response = requests.post(
    "http://localhost:8000/api/v1/edit",
    headers=headers,
    files=files,
    data=data
)

with open("output.png", "wb") as f:
    f.write(response.content)
```

See [`api/README.md`](api/README.md) for full API documentation, authentication details, and test scripts.
## 🖼️ Supported Image Formats

All interfaces (Gradio UI, REST API, CLI scripts) support:

- **PNG** (`.png`) - Recommended for images with transparency
- **JPEG** (`.jpg`, `.jpeg`) - Standard photo format

**API limitations** (the Gradio UI and CLI scripts have no file size limits; a pre-upload check is sketched below):

- Maximum file size: 10 MB
- Maximum dimensions: 2048 x 2048 pixels
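If you want to pre-check an image before sending it to the API, a minimal sketch along these lines works; the 10 MB and 2048 x 2048 limits are the ones listed above, and `check_api_limits` is just an illustrative helper name, not part of the project:

```python
import os
from PIL import Image  # pip install pillow

MAX_BYTES = 10 * 1024 * 1024   # 10 MB API limit
MAX_SIDE = 2048                # 2048 x 2048 pixel API limit

def check_api_limits(path: str) -> None:
    """Raise ValueError if the file exceeds the documented API limits."""
    size = os.path.getsize(path)
    if size > MAX_BYTES:
        raise ValueError(f"{path} is {size / 1e6:.1f} MB; the API limit is 10 MB")
    with Image.open(path) as img:
        width, height = img.size
    if width > MAX_SIDE or height > MAX_SIDE:
        raise ValueError(f"{path} is {width}x{height}; the API limit is 2048x2048")

check_api_limits("photo.jpg")
```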
**Command-line scripts:**

```powershell
# Multi-image composition (Sydney Harbour example)
python qwen_image_edit_nunchaku.py          # Standard 40-step (~2:45)
python qwen_image_edit_lightning.py         # Lightning 8-step (~21s)
python qwen_image_edit_lightning_4step.py   # Lightning 4-step (~10s)

# Single-image instruction editing
python qwen_instruction_edit.py             # Edit the script for custom instructions
```

## ⏱️ Performance

**Generation time:**
- 40-step: ~2:45 (best quality)
- 8-step: ~21s ⚡ (7.7x faster, very good quality)
- 4-step: ~10s ⚡⚡ (16x faster, good quality)
**Output files:** All generated images are saved in the `generated-images/` folder (the sequential scheme is sketched below):
- Gradio UI: `qwen04_0001.png`, `qwen08_0001.png`, `qwen40_0001.png` (sequential per model)
- CLI scripts: `output_r128_YYYYMMDD_HHMMSS.png` (timestamped, for batch processing)
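As a rough illustration of the sequential scheme (not the UI's actual code), the next free name for a given model prefix can be computed like this; the prefix and zero-padding follow the file names shown above:

```python
import os
import re

def next_sequential_name(folder: str, prefix: str) -> str:
    """Return the next free name such as qwen04_0001.png for the given prefix."""
    pattern = re.compile(rf"{re.escape(prefix)}_(\d{{4}})\.png$")
    existing = [int(m.group(1)) for name in os.listdir(folder)
                if (m := pattern.match(name))]
    return f"{prefix}_{max(existing, default=0) + 1:04d}.png"

# e.g. next_sequential_name("generated-images", "qwen04") -> "qwen04_0001.png"
```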
| Model | Steps | Time/Image | Quality |
|-------|-------|------------|---------|
| Lightning 4-step | 4 | ~10s (ultra-fast) | Good |
| Lightning 8-step | 8 | ~21s (fast) | Very good |
| Standard 40-step | 40 | ~2:45 (best quality) | Best |

- First run: ~5 minutes (model download + generation)
- Subsequent runs: ~2-3 minutes (generation only)
- Inference steps: 40 for standard, 8 or 4 for Lightning
- VRAM usage: ~23GB during inference (a quick free-VRAM check is sketched below)
- Model download size: 12.7GB per quantized model
- Model switching (Gradio UI/API): takes 2-3 minutes due to GPU memory cleanup and model loading
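Since inference uses ~23GB of the RTX 4090's 24GB, it can help to confirm enough VRAM is actually free before loading the pipeline. A minimal sketch (not part of the project scripts) using PyTorch's CUDA memory query:

```python
import torch

def assert_enough_vram(required_gb: float = 23.0) -> None:
    """Fail early if the GPU does not have enough free memory for inference."""
    if not torch.cuda.is_available():
        raise RuntimeError("No CUDA GPU available - check your PyTorch install")
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    free_gb = free_bytes / 1024**3
    if free_gb < required_gb:
        raise RuntimeError(
            f"Only {free_gb:.1f} GB of VRAM free (need ~{required_gb} GB). "
            "Close other GPU applications (browsers, other AI apps) and retry."
        )

assert_enough_vram()
```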
## 📁 Project Structure

```
qwen-image-edit/
├── .venv/                               # Virtual environment (do not commit)
├── api/                                 # REST API server
│   ├── main.py                          # FastAPI application
│   ├── models.py                        # Data models
│   ├── job_queue.py                     # Queue management
│   ├── pipeline_manager.py              # Model & queue management
│   ├── requirements.txt                 # API dependencies
│   ├── README.md                        # API documentation
│   ├── .api_key                         # Current API key (auto-generated)
│   ├── .api_key_history                 # API key history
│   ├── manage-key.ps1                   # Key management script
│   ├── new-api-key.ps1                  # Generate new key
│   └── show-api-key.ps1                 # Show current key
├── generated-images/                    # Generated images output folder
│   ├── api/                             # API-generated images
│   │   ├── qwen04_0001.png              # 4-step outputs
│   │   └── qwen40_0001.png              # 40-step outputs
│   ├── qwen04_0001.png                  # 4-step model outputs
│   ├── qwen08_0001.png                  # 8-step model outputs
│   └── qwen40_0001.png                  # 40-step model outputs
├── qwen_gradio_ui.py                    # ⭐ WEB UI (Recommended!)
├── qwen_instruction_edit.py             # Instruction-based editing script
├── qwen_image_edit_nunchaku.py          # Standard 40-step (best quality)
├── qwen_image_edit_lightning.py         # Lightning 8-step (fast)
├── qwen_image_edit_lightning_4step.py   # Lightning 4-step (ultra-fast)
├── launch.ps1                           # Launcher for API/Gradio
├── test-api-remote.ps1                  # Comprehensive API test suite (Windows)
├── test-api-remote.sh                   # API test suite (macOS/Linux)
├── system_prompt.txt.example            # System prompt examples
├── check.ps1                            # Prerequisites checker
├── install-nunchaku-patched.ps1         # Installation helper
├── requirements.txt                     # Python dependencies
├── README.md                            # This file
├── NAMING_CONVENTION.md                 # File naming guide
├── INSTRUCTION_EDITING.md               # Instruction editing docs
├── TODO.txt                             # TODO list and improvements
└── .gitignore                           # Git ignore rules
```

The test scripts validate all API functionality (14 tests):

```powershell
# Windows
.\test-api-remote.ps1 "your-api-key"
```

```bash
# macOS/Linux
./test-api-remote.sh "your-api-key"
```

See [`TEST_SCRIPTS_README.md`](TEST_SCRIPTS_README.md) for details.
## Model Variants

Available variants in [`nunchaku-tech/nunchaku-qwen-image-edit-2509`](https://huggingface.co/nunchaku-tech/nunchaku-qwen-image-edit-2509):

**Standard models (40 steps):**
- `svdq-int4_r32` (11.5 GB) - Good quality
- `svdq-int4_r128` (12.7 GB) - ⭐ Best quality - **WE USE THIS ONE**

**Lightning models (8 steps - faster):**
- `svdq-int4_r32-lightningv2.0-8steps` - Fast
- `svdq-int4_r128-lightningv2.0-8steps` - ⭐ Best balance of speed and quality

**Lightning models (4 steps - fastest):**
- `svdq-int4_r32-lightningv2.0-4steps` - Very fast
- `svdq-int4_r128-lightningv2.0-4steps` - Very fast + better quality

**Current script uses:** `nunchaku-tech/nunchaku-qwen-image-edit-2509/svdq-int4_r128`
## Model-Specific Parameters

### Standard model (40 steps)

```python
inputs = {
    "num_inference_steps": 40,
    "true_cfg_scale": 4.0,   # High guidance for standard models
    "guidance_scale": 1.0,
    "negative_prompt": " ",
}
```

- Quality: Best
- Speed: Slow (~2:45 for 40 steps)
- Use case: Final production quality
### Lightning model (8 steps)

```python
inputs = {
    "num_inference_steps": 8,
    "true_cfg_scale": 1.0,   # ⚠️ DIFFERENT! Lightning uses 1.0
    "guidance_scale": 1.0,
    "negative_prompt": " ",
}
```

- Quality: Very good
- Speed: Fast (~21 seconds for 8 steps)
- Use case: Quick iterations, testing prompts
### Lightning model (4 steps)

```python
inputs = {
    "num_inference_steps": 4,
    "true_cfg_scale": 1.0,   # Same as 8-step
    "guidance_scale": 1.0,
    "negative_prompt": " ",
}
```

- Quality: Good
- Speed: Very fast (~10 seconds for 4 steps)
- Use case: Rapid prototyping

(How these `inputs` feed the actual pipeline call is sketched below.)
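These dictionaries are passed straight into the pipeline call along with the prompt and input image(s). A minimal sketch of how the 8-step Lightning settings would be used - the `image`, `prompt`, and `generator` arguments follow standard diffusers conventions, and `pipeline` is assumed to be the `QwenImageEditPlusPipeline` loaded with the quantized transformer as shown further down:

```python
import torch
from PIL import Image

# Lightning 8-step settings (see above)
inputs = {
    "num_inference_steps": 8,
    "true_cfg_scale": 1.0,
    "guidance_scale": 1.0,
    "negative_prompt": " ",
}

source = Image.open("input.png").convert("RGB")
generator = torch.Generator(device="cuda").manual_seed(42)  # fixed seed for reproducibility

# `pipeline` = QwenImageEditPlusPipeline loaded with the nunchaku transformer
# (see the "CORRECT - USE THIS" loading example below).
result = pipeline(
    image=[source],   # the Plus pipeline accepts a list of input images
    prompt="The magician bear is on the left, the alchemist bear is on the right",
    generator=generator,
    **inputs,
).images[0]
result.save("generated-images/output.png")
```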
### Checking the HuggingFace model card

**ALWAYS check the HuggingFace model card before using a new model!** (A quick way to do this from a script is sketched after this list.)

1. Visit the model repository on HuggingFace
2. Look for example code:
   - Check the "Model card" tab
   - Look for usage examples with `DiffusionPipeline` or similar
3. Note the values for:
   - `num_inference_steps`
   - `true_cfg_scale` ⚠️ Most important!
   - `guidance_scale`
4. Common mistakes:
   - ❌ Using `true_cfg_scale=4.0` with Lightning → blocky, pixelated output
   - ❌ Using the wrong number of steps → poor quality or wasted time
   - ❌ Assuming all models use the same parameters → unpredictable results
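If you prefer to read the card from a script, the `huggingface_hub` client can fetch it directly; this is optional tooling, not something the project scripts do:

```python
from huggingface_hub import ModelCard  # pip install huggingface_hub

# Fetch the model card for the quantized repo and print the first part,
# which normally includes the usage example with the recommended parameters.
card = ModelCard.load("nunchaku-tech/nunchaku-qwen-image-edit-2509")
print(card.text[:2000])
```

The table below summarizes the recommended parameters for each variant.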
| Model Type | Steps | true_cfg_scale | Time | Quality | Script |
|---|---|---|---|---|---|
| Standard r128 | 40 | 4.0 | ~2:45 | Best | qwen_image_edit_nunchaku.py |
| Lightning 8-step r128 | 8 | 1.0 | ~21s | Very Good | qwen_image_edit_lightning.py |
| Lightning 4-step r128 | 4 | 1.0 | ~10s | Good | qwen_image_edit_lightning_4step.py |
| Standard r32 | 40 | 4.0 | ~2:00 | Good | (modify rank in script) |
| Lightning 8-step r32 | 8 | 1.0 | ~18s | Good | (modify rank in script) |
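If one script drives several variants, the table can be encoded directly as data; the dictionary below is just an illustrative sketch (the key names are made up for this example), with values taken from the table above:

```python
# Recommended parameters per model type (from the table above)
MODEL_PARAMS = {
    "standard-40": {"num_inference_steps": 40, "true_cfg_scale": 4.0},
    "lightning-8": {"num_inference_steps": 8,  "true_cfg_scale": 1.0},
    "lightning-4": {"num_inference_steps": 4,  "true_cfg_scale": 1.0},
}

params = MODEL_PARAMS["lightning-8"]
print(params)  # {'num_inference_steps': 8, 'true_cfg_scale': 1.0}
```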
### 🚫 `Qwen/Qwen-Image-Edit-2509` (~40GB) - DO NOT USE!

This is the WRONG model. It will:
- ❌ Cause OOM (Out of Memory) errors
- ❌ Crash your system
- ❌ NOT fit on RTX 4090 24GB VRAM
- ❌ Waste hours of your time downloading it

IF YOU SEE THIS IN YOUR CODE, YOU'RE USING THE WRONG MODEL:

```python
# ❌ WRONG - DO NOT USE
pipeline = QwenImageEditPlusPipeline.from_pretrained("Qwen/Qwen-Image-Edit-2509")
```

✅ CORRECT - USE THIS:

```python
# ✅ CORRECT - Load the quantized transformer first
transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(
    "nunchaku-tech/nunchaku-qwen-image-edit-2509/svdq-int4_r128-qwen-image-edit-2509.safetensors"
)
pipeline = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509",   # Pipeline config only
    transformer=transformer        # Use quantized transformer
)
```

## Verify Your Setup

**1. Check that the venv is active:**

```powershell
# Your prompt should look like this:
# (.venv) PS C:\Projects\qwen-image-edit>

# If it doesn't, activate the venv:
.\.venv\Scripts\Activate.ps1
```

**2. Check the nunchaku version:**

```powershell
# MAKE SURE (.venv) is showing in your prompt!
pip show nunchaku

# Should show:
# Name: nunchaku
# Version: 1.0.1+torch2.5
# Location: C:\Users\...\AppData\Local\Temp\nunchaku

# ❌ If it shows Version: 0.15.4 - YOU HAVE THE WRONG PACKAGE!
# Fix: pip uninstall nunchaku -y
# Then follow the nunchaku installation steps above
```

**3. Check that your script uses the quantized model:**

```powershell
Get-Content qwen_image_edit_nunchaku.py | Select-String "nunchaku-tech"

# Should show:
# "nunchaku-tech/nunchaku-qwen-image-edit-2509/svdq-..."

# ❌ If it shows only "Qwen/Qwen-Image-Edit-2509" without nunchaku-tech,
# YOU'RE USING THE WRONG 40GB MODEL!
```

**4. Run the prerequisites checker:**

```powershell
# MAKE SURE (.venv) is showing in your prompt!
.\check.ps1

# Should show all ✅ checks passing
```

See [`INSTALL_NUNCHAKU.md`](INSTALL_NUNCHAKU.md) for complete step-by-step instructions including:
- Visual Studio Build Tools setup
- PyTorch CUDA patch (if needed)
- Nunchaku compilation from source
- Troubleshooting common issues
## Known Warnings

- **Xet Storage Warning**: Install `hf_xet` for faster downloads: `pip install hf_xet`
- **CUDA Version Mismatch**: PyTorch compiled with CUDA 12.1 but the system has CUDA 13.0
  - Solution: Applied patch to skip the version check (see [`INSTALL_NUNCHAKU.md`](INSTALL_NUNCHAKU.md))
- **torch_dtype Deprecation**: Minor deprecation warning
  - Impact: None, will be fixed in a future update
- **Config Attributes Warning**: Benign warning about `pooled_projection_dim`
  - Impact: None, can be ignored
## 📚 Documentation

- **API Documentation**: [`api/README.md`](api/README.md)
- **Installation Guide**: [`INSTALL_NUNCHAKU.md`](INSTALL_NUNCHAKU.md)
- **Test Scripts**: [`TEST_SCRIPTS_README.md`](TEST_SCRIPTS_README.md)
- **Flutter Integration**: [`FLUTTERFLOW_GUIDE.md`](FLUTTERFLOW_GUIDE.md)
- **Changelog**: [`CHANGELOG.md`](CHANGELOG.md)

## 🙏 Acknowledgments

- **Qwen Team (Alibaba Cloud)**: For the Qwen Image Edit 2509 model
- **Nunchaku Team (nunchaku-tech)**: For INT4 quantization support
- **Hugging Face**: For model hosting and the diffusers library

## License

See the project license file. This project uses models and libraries with their respective licenses; please check individual component licenses before commercial use.
## Contributing

Contributions welcome! Please see `TODO.txt` for current improvement ideas, and ensure that:

- Code follows the existing style
- All tests pass
- Documentation is updated

Before committing changes, ensure no absolute paths exist:

```powershell
# Check all Python and PowerShell files for absolute paths
Get-ChildItem -Recurse -Include *.py,*.ps1,*.md | Select-String -Pattern "C:\\"
```

## Troubleshooting

### Import errors for nunchaku

**Cause**: One of two problems:
- Not in virtual environment
- Wrong nunchaku installed (stats package)
**Solution**:

```powershell
# 1. Activate venv (look for (.venv) in prompt)
.\.venv\Scripts\Activate.ps1

# 2. Check nunchaku version
pip show nunchaku

# If the version is 0.15.4 - WRONG PACKAGE!
pip uninstall nunchaku -y

# Install the correct nunchaku from source
# See INSTALL_NUNCHAKU.md for full instructions
```

### CUDA Out of Memory (OOM) errors

**Cause**: One of three problems:
- Using wrong 40GB full model
- Not enough VRAM
- Other apps using GPU
**Solution**:

```powershell
# 1. Check you're using the quantized model
Get-Content qwen_image_edit_nunchaku.py | Select-String "nunchaku-tech"
# Should show: nunchaku-tech/nunchaku-qwen-image-edit-2509

# 2. Close other GPU applications
# - Close Chrome/Edge (uses GPU)
# - Close other AI apps
# - Check GPU usage: nvidia-smi

# 3. If it is still failing, you may be using the wrong 40GB model!
```

### Build errors when compiling nunchaku

**Solution**: Install Visual Studio Build Tools 2022 with C++ components

```powershell
# Download from: https://visualstudio.microsoft.com/downloads/
# Select: "Desktop development with C++"
# See INSTALL_NUNCHAKU.md for detailed instructions
```

### Poor quality, blocky, or pixelated output

**Cause**: Using the wrong parameters for the model type!
This happens when:
- Using `true_cfg_scale=4.0` with Lightning models (should be 1.0)
- Using the wrong number of inference steps
- Not checking the HuggingFace model card for optimal parameters
**Solution**:

```powershell
# 1. Check which model you're using
Get-Content your_script.py | Select-String "lightning"

# 2. If using a Lightning model, check true_cfg_scale
Get-Content your_script.py | Select-String "true_cfg_scale"

# Should show:
# Lightning models: true_cfg_scale: 1.0 ✅
# Standard models:  true_cfg_scale: 4.0 ✅

# 3. Fix your script if needed:
# Lightning (8-step):  true_cfg_scale=1.0, num_inference_steps=8
# Lightning (4-step):  true_cfg_scale=1.0, num_inference_steps=4
# Standard (40-step):  true_cfg_scale=4.0, num_inference_steps=40

# 4. ALWAYS check the HuggingFace model card for new models:
# https://huggingface.co/<model_name>
# Look for example code with optimal parameters
```

**Prevention**: Before using any new model variant:
- Visit the HuggingFace model page
- Check the README for usage examples
- Copy the exact parameters shown in examples
- See "Model-Specific Parameters" section above
### Wrong nunchaku package installed (the 0.15.4 stats package)

This is the #1 most common mistake!
**Solution**:

```powershell
# MAKE SURE (.venv) is showing in your prompt!

# Remove the wrong package
pip uninstall nunchaku -y

# Install the correct version from source
# See INSTALL_NUNCHAKU.md for full instructions
cd $env:TEMP
git clone https://github.com/nunchaku-tech/nunchaku.git
cd nunchaku
git submodule update --init --recursive
pip install -e . --no-build-isolation
```

### If problems persist

**Cause**: Probably using the wrong 40GB model or the wrong nunchaku package
**Solution**:

```powershell
# Verify BOTH are correct:

# 1. Check the nunchaku version (should be 1.0.1+torch2.5)
pip show nunchaku

# 2. Check the model in code (should reference nunchaku-tech)
Get-Content qwen_image_edit_nunchaku.py | Select-String "nunchaku-tech"
```

### Commands or packages not found

**Cause**: Not in the virtual environment
**Solution**:

```powershell
# Activate the venv - look for (.venv) in the prompt
.\.venv\Scripts\Activate.ps1

# Verify activation worked
python --version
# Should show: Python 3.10.6
```

## Pre-Flight Checklist

Before running the script, verify ALL of these:
- ✅ Prompt shows `(.venv)` at the start
- ✅ `pip show nunchaku` shows version `1.0.1+torch2.5` (NOT 0.15.4)
- ✅ `Get-Content qwen_image_edit_nunchaku.py | Select-String "nunchaku-tech"` shows the quantized model path
- ✅ `.\check.ps1` passes all checks
- ✅ `nvidia-smi` shows the RTX 4090 with 24GB VRAM available
- ✅ You have ~50GB of free disk space for model downloads
- ✅ No other applications are using significant GPU memory

If ANY of these are ❌, fix them first before running the script!

**Quick command reference:**

```powershell
# Activate venv (do this EVERY time):
cd C:\Projects\qwen-image-edit
.\.venv\Scripts\Activate.ps1

# Run the script:
python qwen_image_edit_nunchaku.py

# Check the nunchaku version:
pip show nunchaku   # Should be 1.0.1+torch2.5

# Verify the model in code:
Get-Content qwen_image_edit_nunchaku.py | Select-String "nunchaku-tech"
```

Made with ❤️ for AI image editing on consumer GPUs