Stochastic CSLR

This is the PyTorch implementation for the ECCV 2020 paper: Stochastic Fine-grained Labeling of Multi-state Sign Glosses for Continuous Sign Language Recognition.

Quick Start

1. Installation

pip install git+https://github.com/zheniu/stochastic-cslr

You also need to install sclite for evaluation; see step 2 for instructions.

2. Prepare the dataset

Download the RWTH-PHOENIX-Weather 2014 multi-signer dataset and note where its phoenix-2014-multisigner/ directory lives; the commands below assume that path. sclite, used for evaluation, is the scoring tool from the NIST SCTK toolkit.

3. Run a quick test

You can use the script quick_test.py for a quick test.

python3 quick_test.py --data-root your_path_to/phoenix-2014-multisigner

By specifying the model type (--model sfl/dfl), the data split (--split dev/test), and whether to use a language model (--use-lm), you can get the following results:

| Model    | WER (dev) | sub/del/ins (dev) | WER (test) | sub/del/ins (test) |
| -------- | --------- | ----------------- | ---------- | ------------------ |
| DFL      | 27.1      | 12.7/7.4/7.0      | 27.7       | 13.8/7.3/6.6       |
| SFL      | 26.2      | 12.7/6.9/6.7      | 26.6       | 13.7/6.5/6.4       |
| DFL + LM | 25.6      | 11.5/9.2/4.9      | 26.4       | 12.4/9.3/4.7       |
| SFL + LM | 24.3      | 11.4/8.5/4.4      | 25.3       | 12.4/8.5/4.3       |

Note that these results differ slightly from those reported in the paper because a different random seed was used.

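The official numbers above are scored with sclite; the sketch below is not the repo's evaluation code and only illustrates how WER and its substitution/deletion/insertion breakdown are computed from a reference and a hypothesis gloss sequence.

```python
# Levenshtein-style WER breakdown (illustrative only; the table above is
# scored with sclite, not this function).
def wer_breakdown(ref, hyp):
    """Return (wer, substitutions, deletions, insertions) for two token lists."""
    n, m = len(ref), len(hyp)
    # dist[i][j] = edit distance between ref[:i] and hyp[:j].
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dist[i][0] = i
    for j in range(1, m + 1):
        dist[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dele = dist[i - 1][j] + 1
            ins = dist[i][j - 1] + 1
            dist[i][j] = min(sub, dele, ins)
    # Walk back through the table to count each error type.
    i, j, subs, dels, inss = n, m, 0, 0, 0
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dist[i][j] == dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            subs += ref[i - 1] != hyp[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and dist[i][j] == dist[i - 1][j] + 1:
            dels += 1
            i -= 1
        else:
            inss += 1
            j -= 1
    wer = (subs + dels + inss) / max(n, 1)
    return wer, subs, dels, inss


# One substitution plus one deletion against a 4-gloss reference -> WER 0.5.
print(wer_breakdown(["ICH", "HEUTE", "NACHT", "REGEN"], ["ICH", "MORGEN", "REGEN"]))
```
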
You may also take a look at quick_test.py as it shows how to use the pretrained models.

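If you want a rough idea of what loading a trained checkpoint can look like outside that script, here is a generic PyTorch pattern, not this repo's confirmed API: the checkpoint path and key names below are placeholders, and quick_test.py remains the authoritative reference.

```python
# Generic PyTorch checkpoint-loading pattern (the path and key names are
# placeholders; quick_test.py shows the code this repo actually uses).
import torch

ckpt_path = "path/to/checkpoint.pth"  # placeholder path to a downloaded checkpoint
ckpt = torch.load(ckpt_path, map_location="cpu")
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
print(f"loaded {len(state_dict)} tensors from {ckpt_path}")
# After rebuilding the model with the same hyperparameters as training:
# model.load_state_dict(state_dict); model.eval()
```
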
4. Train your own model

The configuration files for deterministic (DFL) and stochastic (SFL) fine-grained labeling are located under config/. The training script is based on torchzq, a PyTorch experiment runner that automatically reads the hyperparameters from the YAML file and passes them to stochastic_cslr/runner.py.

Before training, change data_root in the YAML configurations to your phoenix-2014-multisigner/ path.

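You can either edit the YAML files by hand or patch them with a small script. Below is a hedged sketch (not part of the repo) that assumes each config is a flat YAML file with a top-level data_root key, as described above.

```python
# Point every YAML config under config/ at the dataset location.
# Assumption: each config has a top-level `data_root` key.
from pathlib import Path

import yaml  # pip install pyyaml

DATA_ROOT = "your_path_to/phoenix-2014-multisigner"

for cfg_path in sorted(Path("config").glob("*.yml")):
    cfg = yaml.safe_load(cfg_path.read_text())
    cfg["data_root"] = DATA_ROOT
    cfg_path.write_text(yaml.safe_dump(cfg, sort_keys=False))
    print(f"updated {cfg_path}")
```
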
Train (for instance, dfl):

tzq config/dfl-fp16.yml train

Test the trained model:

tzq config/dfl-fp16.yml test

Citation

You may cite this work as follows:

@inproceedings{niu2020stochastic,
  title={Stochastic Fine-Grained Labeling of Multi-state Sign Glosses for Continuous Sign Language Recognition},
  author={Niu, Zhe and Mak, Brian},
  booktitle={European Conference on Computer Vision},
  pages={172--186},
  year={2020},
  organization={Springer}
}