
CrossInfoNet: Multi-Task Information Sharing Based Hand Pose Estimation

This repository contains the implementation of the paper.

The project page can be found here.

~~I have graduated from the university with a master's degree, so this repository may not be updated anymore.~~

Requirements

Our code is tested on Ubuntu 14.04 and 16.04 with a GTX 1080 and an RTX 2080 Ti.

Make sure to install the cuDNN version that matches your CUDA toolkit; see the compatibility list on the NVIDIA cuDNN site.

Data Preprocessing

Download the datasets (ICVL, NYU, and MSRA).

Thanks to DeepPrior++ for providing the base data preprocessing and online data augmentation code.

We use the precomputed hand centers from V2V-PoseNet (@mks0601) when training on the ICVL and NYU datasets.

Please refer to `cache/${dataset-name}/readme.md` for more details.
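DeepPrior++-style preprocessing typically crops a fixed-size cube around the precomputed hand center and normalizes the depth values to [-1, 1]. A minimal sketch of that normalization step (the cube size and background handling here are assumptions for illustration, not this repository's exact code):

```python
import numpy as np

def normalize_depth_crop(depth_patch, center_z, cube_z=300.0):
    """Normalize a cropped depth patch (mm) to [-1, 1] around the hand center.

    depth_patch: 2-D array of raw depth values; zeros mark missing depth.
    center_z:    depth of the precomputed hand center.
    cube_z:      depth extent of the crop cube (assumed value).
    """
    patch = depth_patch.astype(np.float32).copy()
    # Treat missing depth (0) as background at the far clipping plane.
    patch[patch == 0] = center_z + cube_z / 2.0
    # Clip everything outside the cube, then scale to [-1, 1].
    patch = np.clip(patch, center_z - cube_z / 2.0, center_z + cube_z / 2.0)
    return (patch - center_z) / (cube_z / 2.0)
```

Pixels at the hand center map to 0, the near and far cube faces to -1 and +1.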

Training and Testing

Here we provide an example for NYU training.

```
cd $ROOT
cd network/NYU
python train_and_test.py
```

Here `$ROOT` is the root path where you put this project.

For testing, run the following command in `$ROOT/network/NYU/`:

```
python test_nyu_cross.py
```

For the MSRA dataset, cd to the `$ROOT/network/MSRA/` directory, then run the train or test script as follows:

```
# train
python train_and_test.py --test-sub ${sub-num}
# test
python test_msra.py --test-sub ${sub-num}
```

`${sub-num}` is the subject held out for testing during cross-validation.
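The leave-one-subject-out protocol above amounts to invoking the training script once per held-out subject. A hedged Python sketch of that loop (the `dry_run` flag is a hypothetical convenience for inspecting the commands; the script name comes from the commands above):

```python
import subprocess

def run_cross_validation(script="train_and_test.py", n_subjects=9, dry_run=False):
    """Build (and optionally run) one training command per held-out MSRA subject."""
    cmds = [["python", script, "--test-sub", str(sub)] for sub in range(n_subjects)]
    if dry_run:
        return cmds  # just return the command lines without executing them
    for cmd in cmds:
        subprocess.run(cmd, check=True)  # raise if any run fails
    return cmds
```

The MSRA dataset has nine subjects, so the default loop covers subjects 0 through 8.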

Finally, run `python combtxt.py` to combine the nine per-subject test results.
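A `combtxt.py`-style merge can be sketched as simple concatenation of the per-subject prediction files in subject order (the `result_<sub>.txt` file names here are assumptions, not the repository's actual naming scheme):

```python
from pathlib import Path

def combine_results(result_dir: str, out_file: str, n_subjects: int = 9) -> int:
    """Concatenate per-subject result files into one file; return the line count."""
    lines = []
    for sub in range(n_subjects):
        path = Path(result_dir) / f"result_{sub}.txt"  # hypothetical file name
        lines.extend(path.read_text().splitlines())
    Path(out_file).write_text("\n".join(lines) + "\n")
    return len(lines)
```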

Results

When testing, the model reports the mean joint error. To show qualitative results, set `visual=True`. We use awesome-hand-pose-estimation to evaluate the accuracy of the proposed CrossInfoNet on the ICVL, NYU, and MSRA datasets. The predicted labels are here.
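For reference, the mean joint error is the average Euclidean distance between predicted and ground-truth 3D joint positions; a minimal sketch:

```python
import numpy as np

def mean_joint_error(pred, gt):
    """Mean Euclidean distance (in mm) between predicted and ground-truth joints.

    pred, gt: arrays of shape (num_frames, num_joints, 3).
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```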

We also tested the performance on the HANDS 2017 frame-based hand pose estimation challenge dataset. Here is the result as of Feb. 2, 2019.

*(figure: HANDS 2017 challenge leaderboard result)*

Realtime demo

More details can be found in the realtime_demo directory.