IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing

Paper | Project Page

<img src="assets/teaser.png" width="800"/>

This repository contains the implementation of our paper IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing.

You can find detailed usage instructions for installation, dataset preparation, training, and testing below.

If you find our code useful, please cite:

@inproceedings{WangCVPR2024,
  title   = {IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing},
  author  = {Shaofei Wang and Bo\v{z}idar Anti\'{c} and Andreas Geiger and Siyu Tang},
  booktitle = {IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2024}
}

Requirements

Install

Code and SMPL Setup

git clone --recursive https://github.com/taconite/IntrinsicAvatar.git
data
 └-- SMPLX
    └-- smpl
       ├-- SMPL_FEMALE.pkl
       ├-- SMPL_MALE.pkl
       └-- SMPL_NEUTRAL.pkl
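The SMPL model files are not bundled with the code; they can be downloaded from the SMPL website after registration. A minimal sketch for laying out the files as in the tree above, assuming the downloaded `.pkl` files have already been renamed to the names shown (the `~/Downloads/smpl` source path is hypothetical):

```shell
# Hypothetical source directory holding the downloaded/renamed SMPL models
SMPL_SRC=~/Downloads/smpl

# Create the directory layout expected by this repo
mkdir -p data/SMPLX/smpl

# Copy each model file if it is present at the assumed location
for f in SMPL_FEMALE.pkl SMPL_MALE.pkl SMPL_NEUTRAL.pkl; do
    if [ -f "${SMPL_SRC}/${f}" ]; then
        cp "${SMPL_SRC}/${f}" data/SMPLX/smpl/
    fi
done
```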

Environment Setup

Dataset Preparation

Please follow the steps in DATASET.md.

Training

PeopleSnapshot and RANA

Training and validation use wandb for logging. wandb is free to use but requires online registration; if you prefer not to use it, append logger.offline=true to your command.

To train on the male-3-casual sequence of PeopleSnapshot, use the following command:

python launch.py dataset=peoplesnapshot/male-3-casual tag=IA-male-3-casual

Checkpoints, a code snapshot, and visualizations will be saved under the directory exp/intrinsic-avatar-male-3-casual/male-3-casual@YYYYMMDD-HHMMSS.

ZJU-MoCap

Similarly, to train on the 377 sequence of ZJU-MoCap, use the following command:

python launch.py dataset=zju-mocap/377 sampler=balanced pose_correction.dataset_length=125 pose_correction.enable_pose_correction=true tag=IA-377

This default setting trains on the 377 sequence using 125 frames from a single camera. You can also train on longer sequences with 4 cameras (300 frames per camera) via the following command:

python launch.py --config-name config_long dataset=zju-mocap/377_4cam_long sampler=balanced pose_correction.dataset_length=300 pose_correction.enable_pose_correction=true tag=IA-377

Testing

To test on the male-3-casual sequence for relighting on within-distribution poses, use the following command:

python launch.py mode=test \
    resume=${PATH_TO_CKPT} \
    dataset=peoplesnapshot/male-3-casual \
    dataset.hdri_filepath=hdri_images/city.hdr \
    light=envlight_tensor \
    model.render_mode=light \
    model.global_illumination=false \
    model.samples_per_pixel=1024 \
    model.resample_light=false \
    tag=IA-male-3-casual \
    model.add_emitter=true

Here model.render_mode=light enables light importance sampling. For quantitative evaluation, set model.resample_light=true and model.add_emitter=false.

To test on the male-3-casual sequence for relighting on out-of-distribution poses, use the following command:

python launch.py mode=test \
    resume=${PATH_TO_CKPT} \
    dataset=animation/male-3-casual \
    dataset.hdri_filepath=hdri_images/city.hdr \
    light=envlight_tensor \
    model.render_mode=light \
    model.global_illumination=false \
    model.samples_per_pixel=1024 \
    model.resample_light=false \
    tag=IA-male-3-casual \
    model.add_emitter=true

NOTE: if you encounter the error mismatched input '=' expecting <EOF>, it is most likely because your checkpoint path contains = (which is part of the default checkpoint filename format of this repo). In that case, quote the value twice, e.g. use 'resume="${PATH_TO_CKPT}"'. For details, please check this Hydra issue.
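One way to build such a doubly quoted override in plain shell (the checkpoint filename below is hypothetical) is to wrap the value in double quotes, and wrap those double quotes in single quotes so the shell passes them through to Hydra:

```shell
# Hypothetical checkpoint path containing '=', which Hydra's override
# grammar rejects unless the value is wrapped in double quotes.
PATH_TO_CKPT='ckpt/epoch=99.ckpt'

# Single quotes protect the literal double quotes; the variable is
# expanded outside the single-quoted parts.
ARG='resume="'"${PATH_TO_CKPT}"'"'
echo "$ARG"   # resume="ckpt/epoch=99.ckpt"
```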

TODO

Acknowledgement

Our code structure is based on instant-nsr-pl. The importance sampling code (lib/nerfacc) follows the structure of NeRFAcc. The SMPL mesh visualization code (utils/smpl_renderer.py) is borrowed from NeuralBody. The LBS-based deformer code (models/deformers/fast-snarf) is borrowed from Fast-SNARF and InstantAvatar. We thank the authors of these works for their wonderful contributions, which greatly facilitated the development of our project.