Stroke2Sketch: Harnessing Stroke Attributes for Training-Free Sketch Generation - ICCV 2025

[Paper] | [Paper (ICCV)] | [Project Page]

We propose Stroke2Sketch, a training-free framework that accurately transfers stroke attributes from a reference sketch to a content image while preserving both structural and stylistic fidelity. In the teaser figure, the top row shows reference sketches, the leftmost column shows content images, and the remaining columns illustrate our method's precise content preservation and expressive stroke transfer.

Setup

Create a Conda Environment

conda env create -f environment.yaml
conda activate stroke2sketch

Download Pre-trained Weights

StableDiffusion

Download the Stable Diffusion weights from the stable-diffusion-v1-5 repository on Hugging Face (the sd-v1-5 checkpoint file).
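As an alternative to downloading through the browser, the weights can be fetched programmatically with the `huggingface_hub` library. This is a minimal sketch, not part of the official setup; the repository id `stable-diffusion-v1-5/stable-diffusion-v1-5` and the target directory `./models/sd-v1-5` are assumptions you may need to adjust.

```python
# Sketch: fetch the Stable Diffusion v1.5 weights via huggingface_hub.
# The repo id and local_dir below are assumptions -- adjust to your setup.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed mirror of sd-v1-5
    local_dir="./models/sd-v1-5",  # hypothetical local path
)
```

Note that the full checkpoint is several gigabytes, so the first run may take a while.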

U2NET

Download the u2net.pth from:

Place the model in the directory: ./U2Net_/saved_models/
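A small shell sketch of the placement step above, assuming `u2net.pth` was downloaded into the current directory (the directory path comes from the README; the checkpoint's starting location is an assumption):

```shell
# Create the directory expected by the code (path from the README).
mkdir -p ./U2Net_/saved_models
# Move the checkpoint into place, assuming it was downloaded to the cwd.
if [ -f u2net.pth ]; then
    mv u2net.pth ./U2Net_/saved_models/
fi
```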

Demo

You can run the Gradio demo locally with:

python app.py

Acknowledgements

Stroke2Sketch builds heavily on Cross-Image-Attention, the diffusers library, and Edit-Friendly DDPM Inversion.

Citation

If you use the code or models, please cite:

@inproceedings{yang2025stroke2sketch,
  title={Stroke2Sketch: Harnessing Stroke Attributes for Training-Free Sketch Generation},
  author={Yang, Rui and Li, Huining and Long, Yiyi and Wu, Xiaojun and He, Shengfeng},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={16545--16554},
  year={2025}
}
