
NeuralDome & HOIM3 Dataset Toolbox

Welcome to the repository for the NeuralDome & HOI-M3 Dataset Toolbox, which facilitates downloading, processing, and visualizing the HODome and HOI-M3 datasets. This toolbox supports our publications:

<table align="center">
  <tr>
    <td align="center"><h2>NeuralDome</h2></td>
    <td align="center"><h2>HOI-M3</h2></td>
  </tr>
  <tr>
    <td>NeuralDome: A Neural Modeling Pipeline on Multi-View Human-Object Interactions (CVPR 2023)</td>
    <td>HOI-M3: Capture Multiple Humans and Objects Interaction within Contextual Environment (CVPR 2024 Highlight)</td>
  </tr>
  <tr>
    <td>We construct a 76-view dome to acquire a complex human-object interaction dataset, named HODome.</td>
    <td>HOI-M3 is a large-scale dataset for modeling the interactions of multiple humans and multiple objects.</td>
  </tr>
  <tr>
    <td>[Paper] [Video] [Project Page]</td>
    <td>[Paper] [Video] [Project Page]</td>
  </tr>
  <tr>
    <td>[HODome Dataset]</td>
    <td>[HOIM3 Dataset]</td>
  </tr>
  <tr>
    <td><img src="assets/NeuralDome.png" alt="NeuralDome teaser" height="130"/></td>
    <td><img src="assets/HOIM3.jpg" alt="HOI-M3 teaser" height="130"/></td>
  </tr>
</table>

🚩 Updates

📖 Setup and Download

<details> <summary> Setting Up Your Environment </summary>

To get started, set up your environment as follows:

# Create a conda virtual environment
conda create -n NeuralDome python=3.8 pytorch=1.11 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate NeuralDome

# Install PyTorch3D
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"

# Install other requirements
pip install -r requirements.txt
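
After installation, a quick import check (a minimal sketch; it is not part of this toolbox) confirms that PyTorch, CUDA, and PyTorch3D are all wired up:

```python
# Sanity check for the freshly created environment
import torch
import pytorch3d

print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
print(f"PyTorch3D {pytorch3d.__version__}")
```
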
</details> <details> <summary> Preparing the Data </summary>

The complete dataset features 76-view RGB videos along with corresponding masks, mocap data, geometry, and scanned object templates. Download the dataset from this link, then extract the archives:

for file in *.tar; do tar -xf "$file"; done

Data Structure Overview

The dataset is organized as follows:

├─ HODome
    ├─ images
        ├─ Seq_Name
            ├─ 0
                ├─ 000000.jpg
                ├─ 000001.jpg
                ├─ 000002.jpg
                    ...
            ...
    ├─ videos
        ├─ Seq_Name
            ├─ data1.mp4
            ├─ data2.mp4
            ...
            ├─ data76.mp4
    ├─ mocap
        ├─ Seq_Name
            ├─ keypoints2d
            ├─ keypoints3d
            ├─ object
            ├─ smpl
    ├─ mask
        ├─ Seq_Name
            ├─ homask
            ├─ hmask
            ├─ omask
    ├─ calibration
        ├─ 20221018
        ...
    ├─ dataset_information.json
    ├─ startframe.json
    ...
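
As a quick orientation, the sketch below (paths are placeholders and the printed keys are illustrative; the exact JSON schemas may differ) loads the top-level metadata files and enumerates the captured sequences:

```python
import json
from pathlib import Path

root = Path("/path/to/HODome")  # placeholder dataset root from the tree above

# Top-level metadata shipped with the dataset
info = json.loads((root / "dataset_information.json").read_text())
start = json.loads((root / "startframe.json").read_text())
print("metadata keys:", sorted(info)[:10])

# Each captured sequence has a folder under mocap/
for seq in sorted((root / "mocap").iterdir()):
    # assumes startframe.json maps sequence name -> start frame
    print(seq.name, "starts at frame", start.get(seq.name))
```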

Extracting Images from Videos

Since the image files are extremely large, we have not uploaded them. Please run the following script to extract the image files from the provided videos:

python ./scripts/video2image.py
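
For reference, the core of that conversion looks roughly like this sketch (a minimal stand-in, not the actual scripts/video2image.py; the paths and the data{i}.mp4 → view-folder i-1 naming are assumptions based on the structure above):

```python
import cv2  # pip install opencv-python
from pathlib import Path

root = Path("/path/to/HODome")  # placeholder dataset root
seq = "subject01_baseball"      # placeholder sequence name

for cam in range(76):  # data1.mp4 ... data76.mp4 -> view folders 0 ... 75
    video = root / "videos" / seq / f"data{cam + 1}.mp4"
    out_dir = root / "images" / seq / str(cam)
    out_dir.mkdir(parents=True, exist_ok=True)

    cap = cv2.VideoCapture(str(video))
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        cv2.imwrite(str(out_dir / f"{frame_idx:06d}.jpg"), frame)
        frame_idx += 1
    cap.release()
```
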
</details>

👀 Visualization Toolkit

<details> <summary> Using Pytorch3D: </summary>

Our hodome_visualization.py and hoim3_visualization.py scripts showcase how to access the diverse annotations in our datasets. They accept the following command-line arguments: --root_path (root directory of the dataset), --seq_name (sequence to visualize), --resolution (rendering resolution), --output_path (directory for the rendered results), and, for HOI-M3, --vis_view (index of the camera view to render).

Ensure your environment and data are properly set up before executing the script. Here's an example command:

## HODome
python ./scripts/hodome_visualization.py --root_path "/path/to/your/data" --seq_name "subject01_baseball" --resolution 720 --output_path "/path/to/your/output"
## HOI-M3
python ./scripts/hoim3_visualization.py --root_path "/path/to/your/data" --seq_name "subject01_baseball" --resolution 720 --output_path "/path/to/your/output" --vis_view 0
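
Under the hood, rendering with PyTorch3D follows the usual mesh-rendering pattern. The sketch below is a minimal, generic example rather than an excerpt from our scripts; the OBJ path is a placeholder and the mesh is assumed to ship with textures:

```python
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, MeshRasterizer, MeshRenderer, PointLights,
    RasterizationSettings, SoftPhongShader, look_at_view_transform,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder: any textured scanned object template from the dataset
mesh = load_objs_as_meshes(["/path/to/object_template.obj"], device=device)

# Place a camera looking at the object
R, T = look_at_view_transform(dist=2.5, elev=10.0, azim=30.0)
cameras = FoVPerspectiveCameras(R=R, T=T, device=device)

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras,
        raster_settings=RasterizationSettings(image_size=720),
    ),
    shader=SoftPhongShader(
        device=device,
        cameras=cameras,
        lights=PointLights(device=device, location=[[0.0, 2.0, 2.0]]),
    ),
)

image = renderer(mesh)  # (1, 720, 720, 4) RGBA tensor
```
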
</details> <details> <summary> Using Blender:</summary>

Please refer to render.md

</details>

📖 Citation

If you find our toolbox or dataset useful for your research, please consider citing our paper:

@inproceedings{zhang2023neuraldome,
      title={NeuralDome: A Neural Modeling Pipeline on Multi-View Human-Object Interactions},
      author={Zhang, Juze and Luo, Haimin and Yang, Hongdi and Xu, Xinru and Wu, Qianyang and Shi, Ye and Yu, Jingyi and Xu, Lan and Wang, Jingya},
      booktitle={CVPR},
      year={2023},
}

@inproceedings{zhang2024hoi,
      title={HOI-M3: Capture Multiple Humans and Objects Interaction within Contextual Environment},
      author={Zhang, Juze and Zhang, Jingyan and Song, Zining and Shi, Zhanhe and Zhao, Chengfeng and Shi, Ye and Yu, Jingyi and Xu, Lan and Wang, Jingya},
      booktitle={CVPR},
      year={2024},
}