This repository provides the official PyTorch implementation of our paper:
**Signing Outside the Studio: Benchmarking Background Robustness for Continuous Sign Language Recognition**
Youngjoon Jang, Youngtaek Oh, Jae Won Cho, Dong-Jin Kim, Joon Son Chung, In So Kweon
BMVC 2022
This code includes two functionalities: (1) an algorithm to automatically and deterministically generate the proposed Scene-PHOENIX benchmark dataset using the scene databases LSUN and SUN397, and (2) a PyTorch implementation for loading the PHOENIX-2014 dataset, including the Scene-PHOENIX benchmark, for evaluation.
- Please note that due to potential copyright issues, we do not re-distribute the existing benchmarks.
The goal of this work is background-robust continuous sign language recognition. Most existing Continuous Sign Language Recognition (CSLR) benchmarks are filmed in studios with fixed, static, monochromatic backgrounds. In the real world, however, signing is not limited to studios.

To analyze the robustness of CSLR models under background shifts, we first evaluate existing state-of-the-art CSLR models on diverse backgrounds. To synthesize sign videos with a variety of backgrounds, we propose a pipeline that automatically generates a benchmark dataset from existing CSLR benchmarks. Our newly constructed benchmark consists of diverse scenes that simulate real-world environments. We observe that even the most recent CSLR methods cannot recognize glosses well on our new dataset with changed backgrounds.

In this regard, we also propose a simple yet effective training scheme for CSLR models, consisting of (1) background randomization and (2) feature disentanglement. Experimental results on our dataset demonstrate that our method generalizes well to unseen background data with minimal additional training images.
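For intuition, both the benchmark synthesis and the background randomization step boil down to alpha-compositing a segmented signer onto a scene image. Below is a minimal sketch of this idea; it is illustrative only, not the repository's exact code, and the function name and argument conventions are made up for this example:

```python
# Minimal sketch of signer/background compositing (illustrative, not the repo's exact code).
import random

import numpy as np
from PIL import Image


def composite_on_random_scene(frame, person_mask, scene_paths):
    """frame: HxWx3 uint8 sign-video frame; person_mask: HxW float in [0, 1]
    (e.g., from a human segmentation model); scene_paths: candidate scene images."""
    bg = Image.open(random.choice(scene_paths)).convert('RGB')
    bg = np.asarray(bg.resize(frame.shape[1::-1]), dtype=np.float32)  # resize to (W, H)
    alpha = person_mask[..., None]  # broadcast the mask over the RGB channels
    out = alpha * frame.astype(np.float32) + (1.0 - alpha) * bg
    return out.astype(np.uint8)
```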
- **Human Segmentation Model**
  - Follow the instructions at https://github.com/thuyngch/Human-Segmentation-PyTorch to set up the segmentation model.
  - Download the pre-trained UNet_MobileNetV2 (alpha=1.0, expansion=6) checkpoint (see that repository) to `Human-Segmentation-PyTorch/pretrained`.
- **LSUN Database**
  - Follow the instructions at https://github.com/fyu/lsun to download the LSUN database to `{DATA_PATH}/lsun` (e.g., `data/lsun`).
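  - Roughly, using the helper scripts shipped with that repository (verify the interface against its README before use; the category below is only an example):

    ```bash
    # Download one LSUN category's LMDB archives and unpack them (per fyu/lsun).
    python3 download.py -o {DATA_PATH}/lsun -c church_outdoor
    unzip '{DATA_PATH}/lsun/*.zip' -d {DATA_PATH}/lsun
    ```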
- **SUN397**
  - Download the SUN397 Image Database and the Partitions from https://vision.princeton.edu/projects/2010/SUN/.
  - Unzip `Partitions.zip` to `{DATA_PATH}/SUN397/Partitions`.
- **PHOENIX-2014**
  - Download the RWTH-PHOENIX-Weather 2014: Continuous Sign Language Recognition Dataset from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX/ to `{DATA_PATH}/phoenix2014-release`.
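After these steps, your dataset root should look roughly like this (layout inferred from the paths above; the SUN397 images themselves also live under `{DATA_PATH}/SUN397`):

```
{DATA_PATH}/                      # e.g., data/
├── lsun/                         # LSUN scene database
├── SUN397/
│   └── Partitions/               # extracted from Partitions.zip
└── phoenix2014-release/          # RWTH-PHOENIX-Weather 2014
```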
Please refer to the files in the `generate_scene_phoenix` folder, then follow the steps below.
- Locate `bg_dataset.py` and `generate_scene_phoenix.py` in the `Human-Segmentation-PyTorch` folder.
- Copy the attached `lsun` and `SUN397` folders to your dataset path `{DATA_PATH}`.
  - The txt files specify the index of each sample to be used as a background for synthesizing Scene-PHOENIX.
- Run `generate_scene_phoenix.py` (see the example invocation after this list):

  ```bash
  python generate_scene_phoenix.py --sign_root {PATH_TO_PHOENIX}
  ```

  - Note that the variable `bg_root` in `generate_scene_phoenix.py` should be modified to point to your LSUN and SUN397 data.
- The Scene-PHOENIX benchmark datasets are created in place, under the PHOENIX-2014 dataset directories.
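For example, if everything lives under `data/` as suggested above, a run might look like this (the exact `bg_root` values depend on how you edited the script):

```bash
# Example invocation, assuming PHOENIX-2014 was extracted to data/phoenix2014-release.
python generate_scene_phoenix.py --sign_root data/phoenix2014-release
```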
- Please refer to the `examples` folder. It implements the base dataset class for the PHOENIX-2014 benchmark. You can test your CSLR model on our Scene-PHOENIX benchmark within your own training and evaluation pipeline by adapting the provided data-loading code (a minimal standalone sketch follows this list).
- Besides the standard `train`, `dev`, and `test` splits of PHOENIX-2014, it is organized to additionally load Scene-PHOENIX, which is generated in the previous step. `dataset/build.py` includes the code to load all the splits and to wrap the dataset classes into dataloader classes.
- Note that the `dev` and `test` splits of Scene-PHOENIX have one and three data partitions, respectively, and there are two possible types of background datasets for each split, as specified in the main paper.
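As a rough orientation before diving into `examples` and `dataset/build.py`, a frame-folder dataset for PHOENIX-style videos can be as simple as the sketch below. This is a self-contained illustration with made-up names, not the repository's actual class:

```python
# Minimal sketch of a PHOENIX-style video dataset (illustrative; see examples/ for the real one).
import os
from glob import glob

import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class PhoenixVideoDataset(Dataset):
    """Yields (T, C, H, W) frame tensors with their gloss annotations."""

    def __init__(self, samples, transform=None):
        # samples: list of (frame_dir, gloss_string) pairs,
        # e.g. parsed from the PHOENIX-2014 corpus annotation files.
        self.samples = samples
        self.transform = transform or transforms.ToTensor()

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        frame_dir, gloss = self.samples[idx]
        frame_paths = sorted(glob(os.path.join(frame_dir, '*.png')))
        frames = [self.transform(Image.open(p).convert('RGB')) for p in frame_paths]
        return torch.stack(frames), gloss
```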
If you find our work useful for your research, please cite it using the following BibTeX:
```bibtex
@inproceedings{jang2022signing,
  title     = {Signing Outside the Studio: Benchmarking Background Robustness for Continuous Sign Language Recognition},
  author    = {Jang, Youngjoon and Oh, Youngtaek and Cho, Jae Won and Kim, Dong-Jin and Chung, Joon Son and Kweon, In So},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2022}
}
```