Official PyTorch implementation of the paper DocSegTr: An Instance-Level End-to-End Document Image Segmentation Transformer. This model is implemented on top of the AdelaiDet and Detectron2 frameworks. The paper proposes a novel bottom-up instance segmentation strategy that uses Transformers to segment instances (document layout regions) in scientific document images from the PubLayNet benchmark.
DocSegTr applies a simple CNN feature extractor with an FPN to the input document image. The multi-scale feature maps (P2-P6) from the FPN are combined with positional embeddings and fed into transformer layers, which predict document instances and dynamically generate the corresponding segmentation kernels. A layer-wise feature aggregation module then combines the local FPN features with the global transformer features from P5 to segment the instances on the document image.
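To make the pipeline concrete, here is a minimal, hypothetical PyTorch sketch of the idea (not the actual DocSegTr code): positional embeddings are added to the coarsest FPN map (P5), a transformer encoder produces a global feature, and that global feature is fused with a local FPN map.

```python
# Illustrative sketch only; module and parameter names are assumptions,
# not the repo's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGlobalLocalFusion(nn.Module):
    def __init__(self, channels=256, num_layers=2, num_heads=8, max_hw=64):
        super().__init__()
        # learned positional embedding, one vector per P5 spatial location
        self.pos_embed = nn.Parameter(torch.zeros(1, max_hw * max_hw, channels))
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, p2, p5):
        b, c, h, w = p5.shape
        tokens = p5.flatten(2).transpose(1, 2)        # (B, H*W, C)
        tokens = tokens + self.pos_embed[:, : h * w]  # inject position info
        glob = self.encoder(tokens)                   # global self-attention
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        # upsample the global feature to the local (P2) resolution and fuse
        glob = F.interpolate(glob, size=p2.shape[-2:], mode="bilinear",
                             align_corners=False)
        return self.fuse(torch.cat([p2, glob], dim=1))

# dummy FPN maps: P2 at stride 4, P5 at stride 32 for a 512x512 page
head = ToyGlobalLocalFusion()
p2, p5 = torch.randn(1, 256, 128, 128), torch.randn(1, 256, 16, 16)
print(head(p2, p5).shape)  # torch.Size([1, 256, 128, 128])
```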
git clone https://github.com/biswassanket/DocSegTr.git
cd DocSegTr
conda env create -f environment.yml
conda activate instaseg
To build Detectron2 v0.2.1 from source, download it using the following link, then run:
cd detectron2-0.2.1
python setup.py build develop
To build AdelaiDet from source, run:
cd ..  # go back to the original working directory
python setup.py build develop
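A quick way to verify that both builds succeeded (an informal check, not part of the repo):

```python
# Confirm Detectron2, AdelaiDet, and CUDA are importable/visible.
import torch
import detectron2
import adet  # AdelaiDet installs as the "adet" package

print("detectron2", detectron2.__version__)
print("torch", torch.__version__, "cuda available:", torch.cuda.is_available())
```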
- To download the PubLayNet dataset:
curl -o <YOUR_TARGET_DIR>/publaynet.tar.gz https://dax-cdn.cdn.appdomain.cloud/dax-publaynet/1.0.0/publaynet.tar.gz
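After extracting publaynet.tar.gz, the dataset (which ships in COCO format) can be registered with Detectron2's standard helper. The dataset names and paths below are illustrative; the repo's training script may already handle registration:

```python
from detectron2.data.datasets import register_coco_instances

# Adjust these paths to wherever you extracted publaynet.tar.gz.
register_coco_instances("publaynet_train", {},
                        "publaynet/train.json", "publaynet/train")
register_coco_instances("publaynet_minival", {},
                        "publaynet/val.json", "publaynet/val")
```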
To evaluate a trained DocSegTr model:
python tools/train_net_custom.py \
--config-file configs/SOTR/R_101_DCN_doc.yaml \
--eval-only \
--num-gpus 1 \
MODEL.WEIGHTS work_dir/.../model_final.pth
To train DocSegTr (e.g., on PubLayNet) with multiple GPUs:
python tools/train_net_custom.py \
--config-file configs/SOTR/R_101_DCN_doc.yaml \
--num-gpus 2
To visualize model predictions on the PubLayNet validation set:
python tools/visualize_publaynet.py \
--input <path_to_JSON_created_by_trained_model> \
--output <path_to_output_dir> \
--dataset publaynet_minival \
--conf-threshold 0.6
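If you prefer to run inference and visualization on a single page programmatically, Detectron2's standard predictor and visualizer can be used. This is a sketch under assumptions (the AdelaiDet-style get_cfg and the paths shown are illustrative, not guaranteed by this repo):

```python
import cv2
from adet.config import get_cfg                  # AdelaiDet extends Detectron2's config
from detectron2.engine import DefaultPredictor
from detectron2.utils.visualizer import Visualizer

cfg = get_cfg()
cfg.merge_from_file("configs/SOTR/R_101_DCN_doc.yaml")
cfg.MODEL.WEIGHTS = "work_dir/model_final.pth"   # illustrative checkpoint path

predictor = DefaultPredictor(cfg)
image = cv2.imread("page.png")                   # any document page image
outputs = predictor(image)

# Visualizer expects RGB; OpenCV loads BGR, hence the channel flips.
vis = Visualizer(image[:, :, ::-1])
result = vis.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2.imwrite("page_pred.png", result.get_image()[:, :, ::-1])
```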
Qualitative analysis on the PubLayNet dataset by DocSegTr. The first, second, and third columns show the original image, the ground truth, and our proposed DocSegTr results, respectively.
In this section, we release the pre-trained weights for the best DocSegTr model variants trained on the benchmark datasets.
- PRIMA: Weights
- HJ: Weights
- Table: Weights
If you find this code useful in your research, please cite:
@article{biswas2022docsegtr,
title={DocSegTr: An Instance-Level End-to-End Document Image Segmentation Transformer},
author={Biswas, Sanket and Banerjee, Ayan and Llad{\'o}s, Josep and Pal, Umapada},
journal={arXiv preprint arXiv:2201.11438},
year={2022}
}
Our project has adapted and borrowed the code structure from SOTR; we thank the authors. This research has been partially supported by the Spanish projects RTI2018-095645-B-C21 and FCT-19-15244, the Catalan project 2017-SGR-1783, the CERCA Program / Generalitat de Catalunya, and a PhD scholarship from AGAUR (2021FIB-10010).
Thank you and sorry for the bugs!