This repository contains the datasets and the code for training the model. Please refer to our TIP 2022 paper for more information: "SceneSketcher-v2: Fine-Grained Scene-Level Sketch-Based Image Retrieval Using Adaptive GCNs."
We modified the existing sketch databases SketchyCOCO and SketchyScene for evaluation. The datasets consist of three parts:
- SketchyCOCO-SL (train 1015 + test 210)
- SketchyCOCO-SL Extended (train 5210 + test 210)
- SketchyScene (train 2472 + test 252)
The pretrained model is hosted on Google Drive.
To evaluate on the SketchyCOCO-SL dataset, run:

```shell
python evaluate_attention.py
```
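Sketch-based image retrieval is commonly scored by top-k retrieval accuracy: the fraction of query sketches whose ground-truth image appears among the k nearest retrieved images. The snippet below is our own minimal NumPy illustration of this metric, not the exact implementation inside `evaluate_attention.py`; it assumes sketch i matches image i.

```python
import numpy as np

def topk_accuracy(dist, k):
    """Top-k retrieval accuracy from a sketch-to-image distance matrix.

    dist[i, j] is the distance from query sketch i to candidate image j;
    the ground-truth match for sketch i is assumed to be image i.
    """
    ranks = np.argsort(dist, axis=1)                 # images sorted by distance, per sketch
    topk = ranks[:, :k]                              # indices of the k nearest images
    hits = (topk == np.arange(dist.shape[0])[:, None]).any(axis=1)
    return hits.mean()

# Example with random distances over 210 test sketch-image pairs
# (the SketchyCOCO-SL test-set size); real scores come from model embeddings.
dist = np.random.rand(210, 210)
print("acc@1 :", topk_accuracy(dist, 1))
print("acc@10:", topk_accuracy(dist, 10))
```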
You can train your own model by processing your training data into the same format as the data in the test folder. GraphFeatures stores the category and position of each instance in a scene; we adopt Inception-V3 trained on ImageNet to extract a 2048-d appearance feature for each instance.
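To make the per-instance node representation concrete, the sketch below assembles one node vector from a category one-hot, a normalized bounding-box position, and a 2048-d appearance feature. This is our own illustration of the idea: the category count, bounding-box encoding, and exact field layout of GraphFeatures are assumptions, not the repository's actual on-disk format.

```python
import numpy as np

NUM_CATEGORIES = 17       # hypothetical class count; set to your dataset's value
APPEARANCE_DIM = 2048     # Inception-V3 pooled feature size, as in the paper

def build_node_feature(category_id, bbox, appearance, img_w, img_h):
    """Concatenate category one-hot, normalized bbox, and appearance feature.

    bbox is (x, y, w, h) in pixels; appearance is a 2048-d Inception-V3 vector.
    """
    one_hot = np.zeros(NUM_CATEGORIES, dtype=np.float32)
    one_hot[category_id] = 1.0
    x, y, w, h = bbox
    # Normalize position and size to [0, 1] so the encoding is image-size independent.
    pos = np.array([x / img_w, y / img_h, w / img_w, h / img_h], dtype=np.float32)
    return np.concatenate([one_hot, pos, appearance.astype(np.float32)])

# Example: one instance in a 512x512 scene with a random appearance feature.
node = build_node_feature(3, (100, 50, 120, 80), np.random.rand(APPEARANCE_DIM), 512, 512)
print(node.shape)  # (2069,) = NUM_CATEGORIES + 4 + APPEARANCE_DIM
```

One such vector per instance forms the node matrix of the scene graph fed to the GCN.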
After these preparations, run:

```shell
python train_attention.py
```
If you find our work useful, please cite:

```bibtex
@article{9779565,
  author={Liu, Fang and Deng, Xiaoming and Zou, Changqing and Lai, Yu-Kun and Chen, Keqi and Zuo, Ran and Ma, Cuixia and Liu, Yong-Jin and Wang, Hongan},
  journal={IEEE Transactions on Image Processing},
  title={SceneSketcher-v2: Fine-Grained Scene-Level Sketch-Based Image Retrieval Using Adaptive GCNs},
  year={2022},
  volume={31},
  pages={3737-3751},
  doi={10.1109/TIP.2022.3175403}
}
```