Sorry for the late release... It's been roughly a year since acceptance, but I was so lazy.
This is the released version of our CVPR 2018 work.
Note that this is a simplified version of the model in the original paper: it takes pre-extracted features as input only, for generality.
One can attach any CNN (+ attention) to the bottom of the network.
- Python 3 (or Python 2 if you manually add the __future__ imports)
- TensorFlow (any up-to-date version)
- NumPy, scikit-learn, SciPy, six (not really required)
Modify try.py to fit your data file paths, and then run:

python try.py
We are using a lazy data format, i.e., .mat files.
Make sure that you have the following objects in the data file:
- feat: features of the images/sketches
- label: label (this is for evaluation)
- wv: word vector of the label of each image/sketch
One needs to save the image data and the sketch data separately into two files.
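The two data files can be prepared with scipy.io.savemat. Below is a minimal sketch of that step; the file names (image_data.mat, sketch_data.mat), array shapes, feature dimension, and word-vector dimension are illustrative assumptions only, not values from the paper, and the random arrays stand in for real CNN features and word vectors.

```python
# Sketch of preparing the two .mat data files expected by try.py.
# Only the key names (feat, label, wv) come from this README;
# everything else (file names, dimensions) is an assumption.
import numpy as np
from scipy.io import loadmat, savemat

n_images, n_sketches = 100, 80
feat_dim, wv_dim, n_classes = 512, 300, 10

rng = np.random.default_rng(0)
# One (placeholder) word vector per class, so samples with the
# same label share the same wv entry.
class_wv = rng.standard_normal((n_classes, wv_dim))

def make_split(n):
    labels = rng.integers(0, n_classes, size=n)
    return {
        "feat": rng.standard_normal((n, feat_dim)),  # features, one row per sample
        "label": labels.reshape(-1, 1),              # labels (used for evaluation)
        "wv": class_wv[labels],                      # word vector of each sample's label
    }

# Image data and sketch data go into two separate files.
savemat("image_data.mat", make_split(n_images))
savemat("sketch_data.mat", make_split(n_sketches))

# Sanity check: each file should contain all three objects.
for path in ("image_data.mat", "sketch_data.mat"):
    data = loadmat(path)
    assert {"feat", "label", "wv"} <= set(data)
```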
If you are interested in getting the original image and sketch data, please refer to this work.
If so, please contact the authors; note that I am no longer with the university...