
LEMON: Learning 3D Human-Object Interaction Relation from 2D Images <font color=Red>(CVPR2024)</font>

PyTorch implementation of LEMON: Learning 3D Human-Object Interaction Relation from 2D Images. The repository will gradually release training, evaluation, inference codes, pre-trained models and 3DIR dataset.

📖 To Do List

    • release the pretrained LEMON.
    • release the inference, training and evaluation code.
    • release the 3DIR dataset.

📋 Table of content

  1. ❗ Overview
  2. 💡 Requirements
  3. 📖 Dataset
  4. ✏️ Usage
    1. Environment
    2. Demo
    3. Training
    4. Evaluation
  5. ✉️ Statement
  6. 🔍 Citation

❗ Overview <a name="1"></a>

LEMON seeks to parse 3D HOI elements from 2D images:

<p align="center"> <img src="./images/overview.png" width="750"/> <br /> <em> </em> </p>

💡 Requirements <a name="2"></a>

(1) Download the SMPL-H models used in the AMASS project and put them under the folder smpl_models/smplh/. <br> (2) Download smpl_neutral_geodesic_dist.npy and put it under the folder smpl_models/; it is used to compute the geo metric. <br> (3) Download the pre-trained HRNet and put the .pth file under the folder tools/models/hrnet/config/hrnet/. <br> (4) Download the pre-trained LEMON (DGCNN as backbone) and put the .pt files under the folder checkpoints/; we release checkpoints with and without curvatures.
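Assuming the paths above, the resulting layout would look like this (a sketch; the placeholder file names are illustrative, only smpl_neutral_geodesic_dist.npy is named explicitly):

```
lemon_3d/
├── smpl_models/
│   ├── smplh/                          # SMPL-H models from AMASS
│   └── smpl_neutral_geodesic_dist.npy  # for the geo metric
├── tools/models/hrnet/config/hrnet/
│   └── <hrnet_checkpoint>.pth          # pre-trained HRNet
└── checkpoints/
    ├── <lemon_with_curvature>.pt
    └── <lemon_without_curvature>.pt
```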

📖 Dataset <a name="3"></a>

<p align="center"> <img src="./images/dataset.png" width="750"/> <br /> <em> </em> </p> The 3DIR dataset includes the following data: <br> (1) HOI images with human and object masks. <br> (2) Dense 3D human contact annotations. <br> (3) Dense 3D object affordance annotations. <br> (4) Pseudo-SMPLH parameters. <br> (5) Annotations of the human-object spatial relation. <br>

Download the 3DIR dataset from Google Drive or Baidu Pan (key: 3DIR). Please refer to Data/DATA.md for more details of 3DIR.

✏️ Usage <a name="4"></a>

Environment <a name="41"></a>

First, clone this repository and create a conda environment as follows:

git clone https://github.com/yyvhang/lemon_3d.git
cd lemon_3d
conda create -n lemon python=3.9 -y
conda activate lemon
# install PyTorch 2.0.1
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia

Then, install the other dependencies:

pip install -r requirements.txt

Demo <a name="42"></a>

The following command runs LEMON on a HOI pair. To run inference without curvature, modify the parameters in config/infer.yaml: change the checkpoint path and set curvature to False.

python inference.py --outdir Demo/output
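For the curvature-free setting described above, the relevant part of config/infer.yaml might look like the fragment below (the key names and checkpoint file name are assumptions based on the description; check the actual file for the real keys):

```yaml
# Assumed keys -- verify against the shipped config/infer.yaml.
checkpoint: checkpoints/<lemon_without_curvature>.pt
curvature: False
```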

For the visualizations in the main paper, we use Blender to render the human and the proxy sphere, and refer to IAG-Net for the object visualization. <br> <font color=Red>Note</font>: If you use the model with curvature, you need to obtain curvatures for the human and object geometry. For convenience, we recommend using CloudCompare or trimesh.curvature. In our tests, LEMON works well with curvatures calculated through these methods.

Training <a name="43"></a>

To train LEMON, run the following command; the parameters can be modified in config/train.yaml.

bash train.sh

Evaluation <a name="44"></a>

Run the following command to evaluate the model; the settings are in config/eval.yaml.

python eval.py --yaml config/eval.yaml

If you take LEMON as a comparative baseline, please indicate whether curvature is used.

✉️ Statement <a name="5"></a>

This project is for research purposes only; please contact us for a commercial-use license. For any other questions, please contact yyuhang@mail.ustc.edu.cn.

🔍 Citation

@article{yang2023lemon,
  title={LEMON: Learning 3D Human-Object Interaction Relation from 2D Images},
  author={Yang, Yuhang and Zhai, Wei and Luo, Hongchen and Cao, Yang and Zha, Zheng-Jun},
  journal={arXiv preprint arXiv:2312.08963},
  year={2023}
}