Hi
In the recent update of EasyOCR, you provided a "How to train your custom model" tutorial. Thanks for that, but I found it a little ambiguous.
I know you said not to create issues about data generation and model training, but my question is about integrating these two with EasyOCR.
Suppose my data is ready and I have configured deep-text-recognition-benchmark so that it trains None-VGG-BiLSTM-CTC on my data and works fine. In the tutorial, you mentioned we need three files to create the custom model:
- a .pth file
- a .yml file
- a .py file
- Apparently, after training the model, the best_accuracy and best_norm_ED checkpoints are saved under ./saved_models, so we have the .pth file. I gather from the tutorial that, for the other two, copying and pasting the .yml and .py files from custom_example.zip will do the job and we don't have to change anything. Is that right?
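For reference, here is my current understanding of where the three files should end up. The directory and file names below are placeholders mirroring the custom_example from the tutorial; I have not verified this layout myself:

```shell
# Assumed layout for a custom model named 'custom_example' (name is a placeholder):
mkdir -p model user_network

# 1. The trained weights (.pth) from ./saved_models would go in model/:
#      model/custom_example.pth
# 2. The .yml and .py copied from custom_example.zip would go in user_network/:
#      user_network/custom_example.yml
#      user_network/custom_example.py

# EasyOCR would then load the custom recognizer by name, e.g. in Python:
#   import easyocr
#   reader = easyocr.Reader(['en'],
#                           model_storage_directory='model',
#                           user_network_directory='user_network',
#                           recog_network='custom_example')
```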
- My main question: somewhere you said that

> The network needs to be fully convolutional in order to predict flexible text length.
How can I make sure this happens? Should I change anything in deep-text-recognition-benchmark, or does this happen by default? (I searched the issues there but didn't find anything relevant.)
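To show what I mean by "flexible text length", here is a tiny sketch of the width arithmetic. The layer list is my own illustrative assumption, not the actual None-VGG-BiLSTM-CTC stack; the point is only that a conv/pool-only feature extractor maps any input width to a valid sequence length for the CTC decoder, whereas a flatten-plus-fixed-Linear head would pin the input to one exact width:

```python
# Width arithmetic for a conv/pool layer:
#   out = floor((W - kernel + 2*pad) / stride) + 1
# so any input width W yields a valid (variable) feature-sequence length.

def conv_out_width(w, kernel, stride=1, pad=0):
    """Output width of a single conv or pool layer."""
    return (w - kernel + 2 * pad) // stride + 1

def feature_seq_len(w):
    # A VGG-style stack of 3x3 convs (pad 1) and 2x2 max pools.
    # These layers are an illustrative assumption, not the real architecture.
    for kernel, stride, pad in [(3, 1, 1), (2, 2, 0), (3, 1, 1), (2, 2, 0)]:
        w = conv_out_width(w, kernel, stride, pad)
    return w

# Different input widths give different, but always valid, sequence lengths:
print(feature_seq_len(100))  # 25
print(feature_seq_len(200))  # 50
```

Is this property something the default None-VGG-BiLSTM-CTC config already guarantees, or does it depend on how I set imgW and the feature extractor?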