Attention-ocr-Chinese-Version

This project performs Chinese OCR based on Google's Attention OCR.

It modifies Google's attention model for Chinese text recognition.

More details can be found in the paper "Attention-based Extraction of Structured Information from Street View Imagery"; for a Chinese introduction to this project, click here.

This project runs on Windows 10 and Ubuntu 16.04, using a Python 3 environment; the network is built with TensorFlow.

Following the official FSNS format, I generated FSNS-format tfrecord files for Chinese text recognition together with a dictionary of 5,400 Chinese characters. The method for generating FSNS tfrecords is described here: https://github.com/A-bone1/FSNS-tfrecord-generate ; a minimal sketch of writing one such record is shown below.
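
The FSNS format stores each sample as a tf.Example with a fixed set of feature keys. The following is only a rough sketch of writing one record with the TensorFlow 1.x API; the output filename, character ids, null code and sequence length are illustrative assumptions, and the linked FSNS-tfrecord-generate repository is the authoritative reference.

# Minimal sketch of writing one FSNS-style record (TensorFlow 1.x API).
# The character ids, null code, sequence length and paths are illustrative only.
import tensorflow as tf

def make_example(png_bytes, width, text, char_ids_unpadded,
                 null_code=5400, max_seq_len=37):
    # Pad the label with the null code up to the maximum sequence length.
    char_ids_padded = char_ids_unpadded + [null_code] * (max_seq_len - len(char_ids_unpadded))
    feature = {
        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[png_bytes])),
        'image/format': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'png'])),
        'image/width': tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        'image/orig_width': tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        'image/class': tf.train.Feature(int64_list=tf.train.Int64List(value=char_ids_padded)),
        'image/unpadded_class': tf.train.Feature(int64_list=tf.train.Int64List(value=char_ids_unpadded)),
        'image/text': tf.train.Feature(bytes_list=tf.train.BytesList(value=[text.encode('utf-8')])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.python_io.TFRecordWriter('datasets/data/fsns/train/train-00000-of-00001') as writer:
    png = open('sample.png', 'rb').read()
    writer.write(make_example(png, 600, u'示例', [11, 12]).SerializeToString())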

Overall framework of the network (Attention-CRNN)

(figure: Attention-CRNN architecture)

Train your own model

1、Store your data in the same format as the FSNS dataset and put the tfrecord files and dic.txt under datasets/data/fsns/train/, then simply reuse the python/datasets/fsns.py module. E.g., create a file datasets/newtextdataset.py modeled on the provided newtextdataset.py, changing only a few parameters and paths; a minimal sketch follows.
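
As a rough sketch of what such a module can look like, it mostly wraps the FSNS reader with a new configuration. Every split size, file pattern, shape and code below is a placeholder that must match your own generated data.

# datasets/newtextdataset.py -- minimal sketch; all sizes, patterns and paths
# below are placeholders and must match the tfrecords you generated.
from datasets import fsns

DEFAULT_DATASET_DIR = 'datasets/data/fsns'

DEFAULT_CONFIG = {
    'name': 'newtextdataset',
    'splits': {
        'train': {'size': 1800000, 'pattern': 'train/train*'},
        'test': {'size': 20000, 'pattern': 'train/train_eval*'},
    },
    'charset_filename': 'dic.txt',   # the 5,400-character dictionary
    'image_shape': (150, 600, 3),    # (height, width, channels)
    'num_of_views': 1,               # FSNS itself stores 4 views per image
    'max_sequence_length': 37,
    'null_code': 5400,               # id used to pad labels shorter than the maximum
    'items_to_descriptions': {
        'image': 'A color image.',
        'label': 'Character codes.',
        'text': 'A unicode string.',
        'length': 'Length of the encoded text.',
        'num_of_views': 'Number of views stored within the image.',
    },
}

def get_split(split_name, dataset_dir=None, config=None):
    # Reuse the FSNS reader with this dataset's configuration.
    if not dataset_dir:
        dataset_dir = DEFAULT_DATASET_DIR
    if not config:
        config = DEFAULT_CONFIG
    return fsns.get_split(split_name, dataset_dir, config)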

2、You will also need to register it in datasets/__init__.py and specify the dataset name on the command line. If you modify my newtextdataset.py directly, you can skip this step; a sketch of the registration is shown below.
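
The registration itself is just an import, assuming the trainer looks the dataset module up by name inside the datasets package, as it does for the stock fsns module.

# datasets/__init__.py -- sketch; the module name must equal the value passed
# to --dataset_name on the command line.
from datasets import fsns
from datasets import newtextdataset

__all__ = [fsns, newtextdataset]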

3、Train your own model:

cd python
python train.py --dataset_name=newtextdataset

4、(Note) My GPU does not have enough memory to train this model, so I temporarily restricted training to the CPU. If you want to train on the GPU, comment out these two lines in train.py:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''  # hide all GPUs so training runs on the CPU

5、The files required by TensorBoard are stored under the logs/ directory and can be viewed using the command below.

tensorboard  --logdir=logs

Some suggestions for training

  1. You can use a curriculum learning strategy to accelerate convergence and improve the model's generalization ability: first train on samples with simple backgrounds, then gradually add real, complex natural-scene text images to increase sample complexity.
  2. The model needs a lot of GPU memory. If your GPU memory is insufficient, you can reduce the image size when the training samples are generated and then modify the image parameters in the training code ('image_shape' in python/datasets/newtextdataset.py); see the sketch after this list.
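
For instance, halving the rendered sample size roughly quarters the activation memory. A minimal Pillow sketch follows; the 300 x 75 target size is an assumption and must equal the width and height you put into 'image_shape'.

# Sketch of shrinking generated samples before they are written to tfrecords.
# The target size is an assumption; keep it consistent with 'image_shape' in
# python/datasets/newtextdataset.py.
from PIL import Image

TARGET_W, TARGET_H = 300, 75   # half of the default 600 x 150 FSNS size

def shrink(src_path, dst_path):
    img = Image.open(src_path).convert('RGB')
    img = img.resize((TARGET_W, TARGET_H), Image.BILINEAR)
    img.save(dst_path, format='PNG')

shrink('sample_full.png', 'sample_small.png')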

Loss Function

(figure)

Original Image

(figure)

Predicted text

(figure)

Verify your own model

1、Generate your validation FSNS tfrecord files, name them train_eval*, and place them under datasets/data/fsns/train/

2、Verify your own model

python eval.py

3、The results can be viewed with TensorBoard; the required files are stored under /tmp/attention_ocr/eval

tensorboard  --logdir=/tmp/attention_ocr/eval

Accuracy

Currently, the character accuracy on 1.8 million synthetic images is 92.96%, and the sequence accuracy is 80.18%.
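
To make the two numbers concrete: character accuracy averages per-position matches, while sequence accuracy counts only exact full-string matches. The snippet below is an illustrative simplification of these metrics, not the exact implementation used by eval.py.

# Illustrative definitions only (the real metrics live in the eval code and
# also handle padding); character accuracy is per-position, sequence accuracy
# counts exact matches of whole strings.
def character_accuracy(pred, truth):
    matches = sum(p == t for p, t in zip(pred, truth))
    return matches / max(len(truth), 1)

def sequence_accuracy(preds, truths):
    exact = sum(p == t for p, t in zip(preds, truths))
    return exact / max(len(truths), 1)

print(character_accuracy(u'深度学习', u'深度学刁'))    # 0.75
print(sequence_accuracy([u'深度学习'], [u'深度学刁']))  # 0.0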

How to use a trained model

python demo_inference.py --batch_size=32 \
  --checkpoint=model.ckpt-399731 \
  --image_path_pattern=./datasets/data/fsns/temp/fsns_train_%02d.png
