
An End-to-End TextSpotter with Explicit Alignment and Attention

This is initially described in our CVPR 2018 paper.

Getting Started


  • Clone the code
git clone
cd textspotter
  • Install Caffe. You can follow this tutorial. If you have a build problem with std::allocator, please refer to issue #3
# make sure you set WITH_PYTHON_LAYER := 1
# change Makefile.config according to your library path
cp Makefile.config.example Makefile.config
make clean
make -j8
make pycaffe
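The two comments in the build steps above can be automated; the sketch below assumes GNU sed and that Makefile.config.example from the repo is present (adjust library paths in the resulting Makefile.config by hand afterwards):

```shell
# Start from the example config shipped with the repo
cp Makefile.config.example Makefile.config
# Enable the Python layer support that ./pylayer requires
sed -i 's/^# *WITH_PYTHON_LAYER := 1/WITH_PYTHON_LAYER := 1/' Makefile.config
# Verify the flag is now active
grep '^WITH_PYTHON_LAYER := 1' Makefile.config
```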


We provide part of the training code, but you cannot run it directly; explanatory comments are included in the code.
You have to write your own IoU-loss layer, which we cannot publish for IP reasons.
To be noted:


  • Install editdistance and pyclipper: pip install editdistance and pip install pyclipper
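editdistance is used for the dictionary-matching step; it computes the Levenshtein distance between two strings. For illustration, a minimal pure-Python equivalent of what `editdistance.eval(a, b)` returns:

```python
def levenshtein(a, b):
    """Minimum number of single-character edits turning a into b."""
    # prev[j] holds the distance between the processed prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                    # deletion
                           cur[j - 1] + 1,                 # insertion
                           prev[j - 1] + (ca != cb)))      # substitution
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # -> 3
```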

  • After Caffe is set up, you need to download a trained model (about 40M) from Google Drive. This model is trained with VGG800k and finetuned on ICDAR2015.

  • Run python --img=./imgs/img_105.jpg

  • Hyperparameters:
       --mean_val ==> mean value subtracted from inputs during testing.
       --max_len ==> maximum length of a text string (here 25, meaning a word can contain at most 25 characters).
       --recog_th ==> threshold used during recognition; the score of a word is the mean of its character scores.
       --word_score ==> threshold for words that contain numbers or symbols, since these are not in the dictionary.
       --weight ==> caffemodel weights file.
       --prototxt-iou ==> prototxt file for detection.
       --prototxt-lstm ==> prototxt file for recognition.
       --img ==> image file or folder for testing; supported formats can be extended in the is_image function in ./pylayer/tool.
       --scales-ms ==> multi-scale input sizes for testing.
       --thresholds-ms ==> corresponding text-region thresholds for the multi-scale inputs.
       --nms ==> NMS threshold for testing.
       --save-dir ==> directory in which results are saved, in the ICDAR2015 submission format.
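As a rough illustration of how --recog_th is applied (the word score being the mean of the per-character scores, per the description above), the filtering logic could look like the sketch below; the function and variable names are hypothetical, not the repo's actual API:

```python
def keep_word(char_scores, recog_th=0.8):
    """Accept a recognized word only if the mean of its per-character
    scores clears the recognition threshold --recog_th."""
    word_score = sum(char_scores) / len(char_scores)
    return word_score >= recog_th

# A confidently recognized word passes; a shaky one is filtered out.
print(keep_word([0.95, 0.9, 0.85]))  # -> True  (mean 0.9)
print(keep_word([0.9, 0.4, 0.5]))    # -> False (mean 0.6)
```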
One thing should be noted: the recognition results are obtained by comparing the direct output with words in a dictionary of about 90k entries.
This dictionary contains no numbers or symbols. You can delete the dictionary-matching part and output the raw recognition results directly.
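The dictionary-matching step described above can be sketched as follows: snap the raw output to the nearest lexicon word by edit distance, and keep the direct output for strings containing digits or symbols, which the dictionary does not cover. All names here are illustrative, not the repo's API:

```python
import re

def match_to_dictionary(raw, lexicon, max_dist=2):
    """Snap a raw recognition result to the closest dictionary word,
    unless it contains digits/symbols, which the dictionary lacks."""
    if re.search(r'[^a-zA-Z]', raw):
        return raw  # numbers/symbols: keep the direct output

    def edit_distance(a, b):
        # Standard Levenshtein distance, case-insensitive
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (ca.lower() != cb.lower())))
            prev = cur
        return prev[-1]

    best = min(lexicon, key=lambda w: edit_distance(raw, w))
    # Only snap when the match is close; otherwise keep the raw output
    return best if edit_distance(raw, best) <= max_dist else raw

lexicon = ["hello", "world", "text", "spotter"]
print(match_to_dictionary("texl", lexicon))   # -> "text" (one edit away)
print(match_to_dictionary("he1lo", lexicon))  # contains a digit -> "he1lo"
```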


If you use this code for your research, please cite our paper:

@inproceedings{he2018end,
  title={An End-to-End TextSpotter with Explicit Alignment and Attention},
  author={T. He and Z. Tian and W. Huang and C. Shen and Y. Qiao and C. Sun},
  booktitle={Computer Vision and Pattern Recognition (CVPR), 2018 IEEE Conference on},
  year={2018}
}


This code is for NON-COMMERCIAL purposes only. For commercial purposes, please contact Chunhua Shen. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License, version 3, as published by the Free Software Foundation. Please refer to the GNU GPLv3 for more details.