bidai541/lstm_ctc_ocr

 
 


The pipeline version of lstm_ctc_ocr.

How to use

  1. Run `python ./lib/utils/genImg.py` to generate the training images in `train/` and the validation set in `val/`. File names should have the format `00000001_name.png`; the number of processes is set to 16.
  2. Run `python ./lib/lstm/utils/tf_records.py` to generate the tf_records file, which includes both images and labels (change `img_path` to your own image path).
  3. Run `./train.sh` for training and `./test.sh` for testing.
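The filename convention from step 1 (an eight-digit index, an underscore, then the label text) can be sketched with a tiny parser. `parse_sample` and `FNAME_RE` are hypothetical helpers written for illustration, not part of the repository:

```python
import re

# Hypothetical helper: parses the 00000001_name.png convention from step 1.
FNAME_RE = re.compile(r"^(\d{8})_(.+)\.png$")

def parse_sample(fname):
    """Return (index, label) for a file named like 00000001_hello.png."""
    m = FNAME_RE.match(fname)
    if m is None:
        raise ValueError(f"unexpected filename: {fname}")
    index, label = m.groups()
    return int(index), label

print(parse_sample("00000001_hello.png"))  # (1, 'hello')
```

With this convention, the ground-truth label for each sample can be recovered directly from its filename when building the tf_records file.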

Notice that the pipeline version uses warpCTC by default: please install the warpCTC TensorFlow binding first.
If your machine does not support warpCTC, use the standard CTC version in the master branch.

  • standard CTC: uses `tf.nn.ctc_loss` to calculate the CTC loss
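Both warpCTC and `tf.nn.ctc_loss` compute the same objective: the negative log probability of all frame-level alignments that collapse to the target label. A minimal NumPy sketch of the CTC forward (alpha) recursion, written here for illustration only and not taken from the repository:

```python
import numpy as np

def ctc_forward_loss(probs, label, blank=0):
    """CTC negative log-likelihood via the forward (alpha) recursion.
    probs: (T, K) per-frame softmax outputs; label: list of non-blank ids."""
    T, K = probs.shape
    ext = [blank]
    for c in label:
        ext += [c, blank]            # interleave blanks: S = 2*len(label)+1
    S = len(ext)
    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, blank]    # start with blank or the first symbol
    if S > 1:
        alpha[0, 1] = probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]                       # stay
            if s > 0:
                a += alpha[t - 1, s - 1]              # advance one slot
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]              # skip a blank
            alpha[t, s] = a * probs[t, ext[s]]
    p = alpha[T - 1, S - 1] + (alpha[T - 1, S - 2] if S > 1 else 0.0)
    return -np.log(p)
```

A production implementation works in log space for numerical stability; this plain-probability version is only meant to show the recursion that both CTC backends share.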

Dependency

Some details

The training data:
[sample of generated training images]

Notice that a sufficient amount of data is a must; otherwise, the network cannot converge.
Parameters can be found in `./lstm.yml` (higher priority) and `lib/lstm/utils`.
Some parameters need to be fine-tuned:

  • learning rate
  • decay step & decay rate
  • image_width
  • image_height
  • optimizer?
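For the decay step and decay rate above, the usual TensorFlow pattern is staircase exponential decay (as in `tf.train.exponential_decay`). A plain-Python sketch of that schedule, with illustrative numbers rather than the repository's actual settings:

```python
def decayed_lr(base_lr, global_step, decay_steps, decay_rate):
    """Staircase exponential decay: the learning rate drops by a factor of
    `decay_rate` every `decay_steps` steps, mirroring
    tf.train.exponential_decay with staircase=True."""
    return base_lr * decay_rate ** (global_step // decay_steps)

# Example: base LR 0.01, decayed by 0.9 every 1000 steps.
print(decayed_lr(0.01, 0, 1000, 0.9))     # 0.01
print(decayed_lr(0.01, 2500, 1000, 0.9))  # 0.01 * 0.9**2 ~= 0.0081
```

Too large a decay step keeps the rate high for too long and the loss oscillates; too small a step freezes training early, which is why this pair is worth tuning together.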

In `./lib/lstm/utils/tf_records.py`, I resize the images to the same size. If you want to use your own data and the pipeline to read it, the heights of the images must be the same.
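The fixed-size requirement can be illustrated with a dependency-free nearest-neighbour resize. The repository's `tf_records.py` does the real resizing; `resize_fixed` below is only a sketch of the idea:

```python
import numpy as np

def resize_fixed(img, target_h=32, target_w=100):
    """Nearest-neighbour resize of a (H, W) grayscale array to a fixed
    shape, so every example fed to the pipeline has the same dimensions."""
    h, w = img.shape
    rows = np.arange(target_h) * h // target_h  # source row for each output row
    cols = np.arange(target_w) * w // target_w  # source col for each output col
    return img[rows[:, None], cols]

img = np.arange(24).reshape(4, 6)       # a fake 4x6 "image"
print(resize_fixed(img, 2, 3).shape)    # (2, 3)
```

In practice, keeping a fixed height while letting the width vary (padded per batch) is also common for OCR, but batching through a queue pipeline is simplest when both dimensions are fixed.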

Result

Update: notice that different optimizers may lead to different results.


The accuracy is about 85%~92%.
[accuracy curve]

Read this blog for more details, and this blog for how to use `tf.nn.ctc_loss` or warpCTC.
