[Paper] | [Paper (ICCV)] | [Project Page]
We propose Stroke2Sketch, a framework that accurately transfers stroke attributes from a reference sketch to a content image while preserving structure and style fidelity. The top row shows reference sketches, the leftmost column displays content images, and the central and right columns illustrate our method’s precise content preservation and expressive stroke transfer.
```shell
conda env create -f environment.yaml
conda activate stroke2sketch
```
Download the Stable Diffusion weights from stable-diffusion-v1-5 on Hugging Face (download the sd-v1-5 file).
Download the u2net.pth from:
- Google Drive
- Baidu Pan (extraction code: chgd)
Place the model in the directory: ./U2Net_/saved_models/
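As a quick sanity check (assuming you run it from the repository root), the following snippet creates the expected directory and reports whether the checkpoint is in place:

```shell
# Create the expected checkpoint directory (no-op if it already exists)
mkdir -p ./U2Net_/saved_models/

# Report whether u2net.pth has been placed correctly
if [ -f ./U2Net_/saved_models/u2net.pth ]; then
    echo "u2net.pth found"
else
    echo "u2net.pth missing - download it first"
fi
```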
You can run the Gradio demo locally with:
```shell
python app.py
```
Stroke2Sketch builds heavily on Cross-Image-Attention, the diffusers library, and Edit-Friendly DDPM Inversion.
If you use the code and models, please cite:
@inproceedings{yang2025stroke2sketch,
title={Stroke2Sketch: Harnessing Stroke Attributes for Training-Free Sketch Generation},
author={Yang, Rui and Li, Huining and Long, Yiyi and Wu, Xiaojun and He, Shengfeng},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={16545--16554},
year={2025}
}
