SPTS: Single-Point Text Spotting (ACM MM 2022 Oral)

The official implementation of SPTS: Single-Point Text Spotting (ACM MM 2022 Oral). SPTS tackles scene text spotting as an end-to-end sequence prediction task and requires only extremely low-cost single-point annotations. Below is the overall architecture of SPTS.

[Figure: Overall architecture of SPTS]
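
To make the sequence formulation concrete, here is a minimal illustrative sketch of how a single-point annotation (one point plus its transcription) might be serialized into discrete tokens. The bin count, vocabulary, and token layout below are illustrative assumptions, not the repository's actual tokenizer.

# Illustrative sketch (not the repository's actual tokenizer): serialize a text
# instance as [quantized x, quantized y, character tokens], following the
# sequence-prediction formulation described above.

NUM_BINS = 1000                       # assumed number of coordinate bins
CHARS = "abcdefghijklmnopqrstuvwxyz"  # assumed recognition vocabulary

def quantize(coord, size, num_bins=NUM_BINS):
    """Map an absolute coordinate to a discrete bin index."""
    return min(int(coord / size * num_bins), num_bins - 1)

def instance_to_tokens(point, text, img_w, img_h):
    x, y = point
    coord_tokens = [quantize(x, img_w), quantize(y, img_h)]
    # Character tokens are offset past the coordinate bins so the two
    # vocabularies do not collide.
    char_tokens = [NUM_BINS + CHARS.index(c) for c in text.lower() if c in CHARS]
    return coord_tokens + char_tokens

if __name__ == "__main__":
    # A single-point annotation: one (x, y) location plus the transcription.
    print(instance_to_tokens((412.0, 306.5), "Text", img_w=1280, img_h=720))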

News

  • SPTS v2 is available at link.
  • SPTS is included in MMOCR.
  • NPTS is available at link.

Environment

We recommend using Anaconda to manage environments. Run the following commands to install dependencies.

conda create -n spts python=3.7 -y
conda activate spts
conda install pytorch==1.8.1 torchvision==0.9.1 torchaudio==0.8.1 -c pytorch
git clone https://github.com/shannanyinxiang/SPTS
cd SPTS
pip install -r requirements.txt
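
Optionally, a quick check in Python can confirm that the pinned PyTorch build installed correctly and that CUDA is visible:

# Optional sanity check for the environment created above.
import torch
import torchvision

print("torch:", torch.__version__)              # expected 1.8.1
print("torchvision:", torchvision.__version__)  # expected 0.9.1
print("CUDA available:", torch.cuda.is_available())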

Dataset

Please download the required datasets and extract them into the data folder following the file structure below.

data
├─CTW1500
│  ├─annotations
│  │      test_ctw1500_maxlen25.json
│  │      train_ctw1500_maxlen25_v2.json
│  ├─ctwtest_text_image
│  └─ctwtrain_text_image
├─icdar2013
│  │  ic13_test.json
│  │  ic13_train.json
│  ├─test_images
│  └─train_images
├─icdar2015
│  │  ic15_test.json
│  │  ic15_train.json
│  ├─test_images
│  └─train_images
├─mlt2017
│  │  train.json
│  └─MLT_train_images
├─syntext1
│  │  train.json
│  └─syntext_word_eng
├─syntext2
│  │  train.json
│  └─emcs_imgs
└─totaltext
    │  test.json
    │  train.json
    ├─test_images
    └─train_images
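
Optionally, a small script such as the sketch below (not part of the repository) can verify that the annotation files listed above are in place before launching training:

# Hypothetical helper (not part of the repository): verify the dataset layout
# shown above exists under ./data before launching training.
from pathlib import Path

EXPECTED = [
    "CTW1500/annotations/train_ctw1500_maxlen25_v2.json",
    "CTW1500/annotations/test_ctw1500_maxlen25.json",
    "icdar2013/ic13_train.json",
    "icdar2015/ic15_train.json",
    "mlt2017/train.json",
    "syntext1/train.json",
    "syntext2/train.json",
    "totaltext/train.json",
    "totaltext/test.json",
]

data_root = Path("./data")
missing = [p for p in EXPECTED if not (data_root / p).exists()]
print("All expected files found." if not missing else f"Missing: {missing}")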

Train

The model training in the original paper uses 32 GPUs (4 nodes, 8 GPUs per node). Below are instructions for training on a single machine with 8 GPUs; they can be adapted to multi-node training by following the PyTorch Distributed Docs.

1. Pretrain

You can download our pretrained weight from Google Drive or BaiduNetDisk, or pretrain the model from scratch using the following command.

python -m torch.distributed.launch \
       --nproc_per_node=8 \
       main.py \
       --data_root ./data/ \
       --train_dataset totaltext_train mlt_train ic13_train ic15_train syntext1_train syntext2_train \
       --output_folder ./output/pretrain/ \
       --lr 0.0005 \
       --epochs 150 \
       --checkpoint_freq 2 \
       --freeze_bn \
       --batch_aug \
       --tfm_pre_norm

2. Finetune

For example, the command for finetuning on Total-Text is:

python -m torch.distributed.launch \
       --nproc_per_node=8 \
       main.py \
       --data_root ./data/ \
       --train_dataset totaltext_train \
       --output_folder ./output/totaltext/ \
       --resume ./output/pretrain/checkpoints/checkpoint_ep0149.pth \
       --lr 0.00001 \
       --epochs 350 \
       --checkpoint_freq 10 \
       --freeze_bn \
       --batch_aug \
       --tfm_pre_norm \
       --finetune 

Set "--train_dataset" to "ctw1500_train", "ic13_train" or "ic15_train" to finetune on other datasets.

Inference

The trained models can be obtained after finishing the above steps. You can also download the models for the Total-Text, SCUT-CTW1500, ICDAR2013, and ICDAR2015 datasets from GoogleDrive or BaiduNetDisk (password: b9u5).

For example, the command for the inference on Total-Text is:

python main.py \
       --eval \
       --data_root ./data/ \
       --val_dataset totaltext_val \
       --output_folder ./output/totaltext/ \
       --resume ./output/totaltext/checkpoints/checkpoint_ep0349.pth \
       --tfm_pre_norm \
       --visualize

Evaluation

First, download the ground-truth files (GoogleDrive, BaiduNetDisk password: 35tr) and lexicons (GoogleDrive, BaiduNetDisk password: 9eml), and extract them into the evaluation folder.

evaluation
│  eval.py
├─gt
│  ├─gt_ctw1500
│  ├─gt_ic13
│  ├─gt_ic15
│  └─gt_totaltext
└─lexicons
    ├─ctw1500
    ├─ic13
    ├─ic15
    └─totaltext

Then the command for evaluating the inference result of Total-Text is:

python evaluation/eval.py \
       --result_path ./output/totaltext/results/ep349/totaltext_val.json \
       # --with_lexicon \ # uncomment this line if you want to evaluate with lexicons.
       # --lexicon_type 0 # used for ICDAR2013 and ICDAR2015. 0: Generic; 1: Weak; 2: Strong.
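
To take a quick look at the inference output before running the evaluator, note that the result file is plain JSON; the sketch below makes no assumption about its internal schema.

# Minimal peek at the inference output consumed by evaluation/eval.py.
# No assumption is made about the JSON schema beyond it being valid JSON.
import json

with open("./output/totaltext/results/ep349/totaltext_val.json") as f:
    results = json.load(f)

print(type(results).__name__)
if isinstance(results, list) and results:
    print("entries:", len(results))
    print("first entry:", results[0])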

Performance

The end-to-end recognition performances of SPTS on four public benchmarks are:

Dataset        Strong   Weak   Generic
ICDAR 2013     93.3     91.7   88.5
ICDAR 2015     77.5     70.2   65.8

Dataset        None   Full
Total-Text     74.2   82.4
SCUT-CTW1500   63.6   83.8

Citation

@inproceedings{peng2022spts,
  title={SPTS: Single-Point Text Spotting},
  author={Peng, Dezhi and Wang, Xinyu and Liu, Yuliang and Zhang, Jiaxin and Huang, Mingxin and Lai, Songxuan and Zhu, Shenggao and Li, Jing and Lin, Dahua and Shen, Chunhua and Bai, Xiang and Jin, Lianwen},
  booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
  year={2022}
}

@article{liu2023spts,
  title={SPTS v2: Single-Point Scene Text Spotting},
  author={Liu, Yuliang and Zhang, Jiaxin and Peng, Dezhi and Huang, Mingxin and Wang, Xinyu and Tang, Jingqun and Huang, Can and Lin, Dahua and Shen, Chunhua and Bai, Xiang and Jin, Lianwen},
  journal={arXiv preprint arXiv:2301.01635},
  year={2023}
}

Copyright

This repository can only be used for non-commercial research purposes.

For commercial use, please contact Prof. Lianwen Jin (eelwjin@scut.edu.cn).

Copyright 2022, Deep Learning and Vision Computing Lab, South China University of Technology.

Acknowledgement

We sincerely thank Stable-Pix2Seq, Pix2Seq, DETR, and ABCNet for their excellent work.
