This repository is forked and modified from the official Trans2Seg repository; the environment, installation, and usage are almost the same. However, there are several differences:

- The dataset is our own AutoBio dataset: place the data in the `data` folder in the repository root.
- For training and testing, use `train_autobio.py` in the `tools` folder; all other parameters (i.e., the command in the terminal) are the same.
- In the original repository, `demo.py` cannot be used, but here you can run `demo.py` for inference; its usage is the same as for training.
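Since a wrong `data` location is the most common setup mistake, here is a minimal sketch that checks the assumed layout before training. The function name and the split sub-folders (`train`, `test`) are illustrative assumptions, not names confirmed by this repository; only the top-level `data` folder is described above.

```python
import os

def check_autobio_layout(repo_root, splits=("train", "test")):
    """Return the expected sub-folders that are missing.

    Assumes the dataset lives in `<repo_root>/data`, as described above;
    the split names are hypothetical placeholders.
    """
    data_root = os.path.join(repo_root, "data")
    expected = [data_root] + [os.path.join(data_root, s) for s in splits]
    return [p for p in expected if not os.path.isdir(p)]

if __name__ == "__main__":
    import tempfile
    root = tempfile.mkdtemp()
    os.makedirs(os.path.join(root, "data", "train"))
    # Only "train" was created, so the missing list names the "test" folder.
    print(check_autobio_layout(root))
```

An empty return value means the assumed layout is in place.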
This repository contains the data and code for the IJCAI 2021 paper *Segmenting transparent object in the wild with transformer*.
- python 3
- torch = 1.4.0
- torchvision
- pyyaml
- Pillow
- numpy
```shell
python setup.py develop --user
```
The network pipeline code is in `segmentron/models/trans2seg.py`.
The Transformer encoder-decoder code is in `segmentron/modules/transformer.py`.
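For orientation, the core operation inside a Transformer encoder-decoder is scaled dot-product attention. The following NumPy sketch illustrates that operation only; it is not the code in `segmentron/modules/transformer.py`, and all names in it are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    """q: (n_q, d), k: (n_k, d), v: (n_k, d_v) -> (n_q, d_v)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)       # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ v                  # weighted average of the values

rng = np.random.default_rng(0)
q = rng.standard_normal((2, 4))
k = rng.standard_normal((3, 4))
v = rng.standard_normal((3, 5))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (2, 5)
```

In Trans2Seg this mechanism operates on image features rather than random vectors, with learned projections for the queries, keys, and values.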
Our experiments were run on one machine with 8 V100 GPUs (32 GB memory each); training takes about 1 hour.
To train:

```shell
bash tools/dist_train.sh $CONFIG-FILE $GPUS
```

For example:

```shell
bash tools/dist_train.sh configs/trans10kv2/trans2seg/trans2seg_medium.yaml 8
```
To test a trained model:

```shell
bash tools/dist_train.sh $CONFIG-FILE $GPUS --test TEST.TEST_MODEL_PATH $MODEL_PATH
```
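The trailing `TEST.TEST_MODEL_PATH $MODEL_PATH` pair follows the common dotted `KEY VALUE` override convention for YAML configs. The sketch below illustrates that convention in general; it is not this project's actual config code, and the checkpoint path in it is a placeholder.

```python
def apply_overrides(cfg, opts):
    """Merge ["KEY.SUBKEY", "value", ...] pairs into a nested config dict."""
    if len(opts) % 2 != 0:
        raise ValueError("overrides must come in KEY VALUE pairs")
    for key, value in zip(opts[::2], opts[1::2]):
        node = cfg
        *parents, leaf = key.split(".")
        for part in parents:
            node = node.setdefault(part, {})  # walk/create intermediate nodes
        node[leaf] = value
    return cfg

cfg = {"TEST": {"TEST_MODEL_PATH": ""}}
apply_overrides(cfg, ["TEST.TEST_MODEL_PATH", "path/to/checkpoint.pth"])
print(cfg["TEST"]["TEST_MODEL_PATH"])  # path/to/checkpoint.pth
```

This is why the extra arguments after `--test` come in pairs: each dotted key names a config entry and the next token is its new value.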
Please consider citing our paper in your publications if the project helps your research. BibTeX reference is as follows.
```
@article{xie2021segmenting,
  title={Segmenting transparent object in the wild with transformer},
  author={Xie, Enze and Wang, Wenjia and Wang, Wenhai and Sun, Peize and Xu, Hang and Liang, Ding and Luo, Ping},
  journal={arXiv preprint arXiv:2101.08461},
  year={2021}
}
```