Bringing Inputs to Shared Domains for 3D Interacting Hands Recovery in the Wild

Our new Re:InterHand dataset has been released; it provides much more diverse image appearances and more stable 3D GT. Check it out here!

Introduction

This repo is the official PyTorch implementation of Bringing Inputs to Shared Domains for 3D Interacting Hands Recovery in the Wild (CVPR 2023).

<p align="middle"> <img src="assets/teaser.png" width="1200" height="250"> </p> <p align="middle"> <img src="assets/demo1.png" width="250" height="150"><img src="assets/demo2.png" width="250" height="150"><img src="assets/demo3.png" width="250" height="150"><img src="assets/demo4.png" width="250" height="150"><img src="assets/demo5.png" width="250" height="150"><img src="assets/demo6.png" width="250" height="150"> </p>

Demo

  1. Prepare the human_model_files folder following the Directory section below and place it at common/utils/human_model_files.
  2. Move to the demo folder.
  3. Download the pre-trained InterWild model from here.
  4. Put input images at images. Each image should be cropped so that it contains a single person, for example with a human detector; see the sketch after this list. InterWild has its own hand detection network, so there is no need to worry about hand positions!
  5. Run python demo.py --gpu $GPU_ID
  6. Boxes, meshes, MANO parameters, and renderings are saved to boxes, meshes, params, and renders, respectively.
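
Below is a minimal preprocessing sketch for step 4; it is not part of this repo. It uses torchvision's pretrained Faster R-CNN to detect people and writes one crop per detection into images. The input file name and the 0.9 score threshold are assumptions; any detector that produces tight single-person crops works equally well.

```python
# Hypothetical preprocessing (not part of this repo): crop each detected person
# from a raw photo and save the crops to the demo's images folder.
import os

import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

img = Image.open("raw_photo.jpg").convert("RGB")  # your un-cropped input (assumed name)
with torch.no_grad():
    pred = detector([to_tensor(img)])[0]

os.makedirs("images", exist_ok=True)
person_idx = 0
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if label.item() == 1 and score.item() > 0.9:  # COCO category 1 is "person"
        x1, y1, x2, y2 = box.int().tolist()
        img.crop((x1, y1, x2, y2)).save(f"images/person_{person_idx}.jpg")
        person_idx += 1
```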

Directory

Root

The ${ROOT} directory is organized as below.

${ROOT}
|-- data
|-- demo
|-- common
|-- main
|-- output

Data

You need to follow the directory structure of the data folder as below.

${ROOT}
|-- data
|   |-- InterHand26M
|   |   |-- annotations
|   |   |   |-- train
|   |   |   |-- test
|   |   |-- images
|   |-- MSCOCO
|   |   |-- annotations
|   |   |   |-- coco_wholebody_train_v1.0.json
|   |   |   |-- coco_wholebody_val_v1.0.json
|   |   |   |-- MSCOCO_train_MANO_NeuralAnnot.json
|   |   |-- images
|   |   |   |-- train2017
|   |   |   |-- val2017
|   |-- HIC
|   |   |-- data
|   |   |   |-- HIC.json
|   |-- ReInterHand
|   |   |-- data
|   |   |   |-- m--*
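
If helpful, here is an optional sanity-check sketch (not part of the repo) that verifies the layout above before training. ${ROOT} is assumed to be the current directory; adjust as needed.

```python
# Optional sanity check (not part of the repo): report any missing dataset paths.
from pathlib import Path

ROOT = Path(".")  # set to your ${ROOT}
expected = [
    "data/InterHand26M/annotations/train",
    "data/InterHand26M/annotations/test",
    "data/InterHand26M/images",
    "data/MSCOCO/annotations/coco_wholebody_train_v1.0.json",
    "data/MSCOCO/annotations/coco_wholebody_val_v1.0.json",
    "data/MSCOCO/annotations/MSCOCO_train_MANO_NeuralAnnot.json",
    "data/MSCOCO/images/train2017",
    "data/MSCOCO/images/val2017",
    "data/HIC/data/HIC.json",
]
for rel in expected:
    if not (ROOT / rel).exists():
        print(f"missing: {rel}")

# ReInterHand sequence folders follow the m--* pattern in the tree above.
if not list((ROOT / "data/ReInterHand/data").glob("m--*")):
    print("missing: data/ReInterHand/data/m--* sequences")
```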

Output

You need to follow the directory structure of the output folder as below.

${ROOT}
|-- output
|   |-- log
|   |-- model_dump
|   |-- result
|   |-- vis

Running InterWild

Start

Train

In the main folder, run

python train.py --gpu 0-3

to train the network on GPUs 0,1,2,3. --gpu 0,1,2,3 can be used instead of --gpu 0-3. If you want to continue an experiment, use --continue, as shown below.
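
For example, to resume training with the same GPUs:

python train.py --gpu 0-3 --continue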

Test

In the main folder, run

python test.py --gpu 0-3 --test_epoch 6

to test the network on GPUs 0,1,2,3 with snapshot_6.pth. --gpu 0,1,2,3 can be used instead of --gpu 0-3.

Reference

@inproceedings{moon2023interwild,
  title     = {Bringing Inputs to Shared Domains for {3D} Interacting Hands Recovery in the Wild},
  author    = {Moon, Gyeongsik},
  booktitle = {CVPR},
  year      = {2023}
}

@inproceedings{moon2023reinterhand,
  title     = {A Dataset of Relighted {3D} Interacting Hands},
  author    = {Moon, Gyeongsik and Saito, Shunsuke and Xu, Weipeng and Joshi, Rohan and Buffalini, Julia and Bellan, Harley and Rosen, Nicholas and Richardson, Jesse and Mallorie, Mize and Bree, Philippe and Simon, Tomas and Peng, Bo and Garg, Shubham and McPhail, Kevyn and Shiratori, Takaaki},
  booktitle = {NeurIPS Track on Datasets and Benchmarks},
  year      = {2023},
}

License

This repo is CC-BY-NC 4.0 licensed, as found in the LICENSE file.