
High-Performance Transformers for Table Structure Recognition Need Early Convolutions


High-Performance Transformers for Table Structure Recognition Need Early Convolutions. ShengYun Peng, Seongmin Lee, Xiaojing Wang, Rajarajeswari Balasubramaniyan, Duen Horng Chau. In NeurIPS 2023 Second Table Representation Learning Workshop, 2023. (Oral)

📖 Research Paper      🚀 Project Page     


Table structure recognition (TSR) aims to convert tabular images into a machine-readable format, where a visual encoder extracts image features and a textual decoder generates table-representing tokens. Existing approaches use classic convolutional neural network (CNN) backbones for the visual encoder and transformers for the textual decoder. However, this hybrid CNN-Transformer architecture introduces a complex visual encoder that accounts for nearly half of the total model parameters, markedly reduces both training and inference speed, and hinders the potential for self-supervised learning in TSR. In this work, we design a lightweight visual encoder for TSR without sacrificing expressive power. We discover that a convolutional stem can match classic CNN backbone performance, with a much simpler model. The convolutional stem strikes an optimal balance between two crucial factors for high-performance TSR: a higher receptive field (RF) ratio and a longer sequence length. This allows it to "see" an appropriate portion of the table and "store" the complex table structure within sufficient context length for the subsequent transformer.
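The receptive field (RF) ratio and sequence length mentioned above can be made concrete with a small calculation. The sketch below is an illustration, not code from this repository: the four-layer 3×3 stride-2 stem and the 448×448 input size are hypothetical values chosen for the example. It computes the receptive field and total stride of a conv stack, and from that the token sequence length the transformer decoder would see.

```python
def receptive_field(layers):
    """Receptive field and total stride of a stack of (kernel, stride) convs.

    rf grows by (kernel - 1) * jump at each layer, where jump is the
    product of all earlier strides (the input-pixel spacing between
    neighboring outputs).
    """
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf, jump  # jump is the total downsampling stride

# Hypothetical conv stem: four 3x3 convs, each with stride 2 (total stride 16)
stem = [(3, 2)] * 4
rf, total_stride = receptive_field(stem)

# For a hypothetical 448x448 input, each token covers rf x rf pixels and the
# transformer sees (448 / total_stride)^2 tokens.
input_size = 448
seq_len = (input_size // total_stride) ** 2
rf_ratio = rf / input_size
```

With these assumed values the stem yields `rf = 31`, `total_stride = 16`, and `seq_len = 784` tokens, making the tradeoff explicit: kernel sizes and strides jointly set how much of the table each token "sees" versus how much context length the decoder gets.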

Our latest work UniTable has been fully released, achieving SOTA performance on four of the largest table recognition datasets! We have also released a first-of-its-kind Jupyter notebook covering the entire inference pipeline, which can fully digitize your tabular image into HTML!

News

Oct. 2023 - Paper accepted by NeurIPS'23 Table Representation Learning Workshop

Oct. 2023 - Paper selected as oral

Get Started

  1. Prepare the PubTabNet dataset, available here
  2. Change "pubtabnet_dir" in the Makefile to your path to PubTabNet
  3. Set up the venv:
make .venv_done

Training, Testing & Evaluation

  1. Train a visual encoder with a ResNet-18 backbone
make experiments/r18_e2_d4_adamw/.done_train_structure
  2. Test and compute the TEDS score
make experiments/r18_e2_d4_adamw/.done_teds_structure
  3. All ablation models are defined in the "Experiment Configurations" section of the Makefile. Replace "r18_e2_d4_adamw" with any other configuration for training and testing.
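For reference, TEDS (Tree-Edit-Distance-based Similarity) scores a predicted table as 1 − EditDist(T_pred, T_true) / max(|T_pred|, |T_true|), computed over HTML trees. The sketch below is a simplified stand-in, not the evaluation code this repo runs: it substitutes a plain token-level Levenshtein distance for the tree edit distance used by the real metric, purely to illustrate the normalization.

```python
def edit_distance(a, b):
    """Levenshtein distance between two token sequences (single-row DP)."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                        # deletion
                dp[j - 1] + 1,                    # insertion
                prev + (a[i - 1] != b[j - 1]),    # substitution / match
            )
            prev = cur
    return dp[n]

def simplified_teds(pred_tokens, true_tokens):
    """1 - EditDist / max(len); real TEDS uses tree edit distance on HTML trees."""
    if not pred_tokens and not true_tokens:
        return 1.0
    return 1 - edit_distance(pred_tokens, true_tokens) / max(
        len(pred_tokens), len(true_tokens)
    )
```

A perfect prediction scores 1.0; one wrong token in a four-token table drops the score to 0.75 under this simplified variant.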

Citation

@inproceedings{peng2023high,
  title={High-Performance Transformers for Table Structure Recognition Need Early Convolutions},
  author={Peng, Anthony and Lee, Seongmin and Wang, Xiaojing and Balasubramaniyan, Rajarajeswari Raji and Chau, Duen Horng},
  booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
  year={2023}
}

Contact

If you have any questions, feel free to open an issue or contact Anthony Peng (CS PhD @Georgia Tech).