
FLAME: Free-form Language-based Motion Synthesis & Editing

[Project Page] | [Paper] | [Video]

Official Implementation of the paper FLAME: Free-form Language-based Motion Synthesis & Editing (AAAI'23)

Generated Samples

<img src="https://user-images.githubusercontent.com/10102721/204811388-748bbe11-bb0f-489b-a532-c668023c22b4.gif" width="640" height="360"/>

Environment

This project was tested in the environment described below. Please set up the prerequisites in your own environment before running the code.

Prerequisites

Packages

You may need the following packages to run this repo.

apt install libboost-dev libglfw3-dev libgles2-mesa-dev freeglut3-dev libosmesa6-dev libgl1-mesa-glx 

Dataset

:exclamation: We cannot provide the original data files directly, in order to comply with the license.

<details> <summary>AMASS Dataset</summary>

Visit https://amass.is.tue.mpg.de/ to download the AMASS dataset. We used the SMPL+H G version of the following datasets in AMASS:

The downloaded data are compressed in bz2 format. All downloaded files should be placed under data/amass_download_smplhg.

</details> <details> <summary>BABEL</summary>

Visit https://babel.is.tue.mpg.de/ to download the BABEL dataset. At the time of our experiments, we used babel_v1.0_release. The BABEL dataset should be located at data/babel_v1.0_release. The file structure under data/babel_v1.0_release looks like:

.
├── extra_train.json
├── extra_val.json
├── test.json
├── train.json
└── val.json
</details> <details> <summary>HumanML3D</summary>

You can access the full HumanML3D dataset at HumanML3D. However, we used the original AMASS SMPL data instead of the customized rig. To run this repo, you will need to prepare the following:

./data/HumanML3D/
├── humanact12
├── HumanML3D.csv
├── test.txt
├── texts.zip
├── train.txt
└── val.txt

Note that the files above should be located at data/HumanML3D/. Please download humanact12 and HumanML3D.csv; the other files can be downloaded from the original repo.

</details>

SMPL & DMPL Models

You may need the SMPL and DMPL models to preprocess motion data. Please refer to AMASS for these. smpl_model and dmpl_model should be located in the project root directory.

<details> <summary>SMPL</summary>
smpl_model/
├── female
│   └── model.npz
├── info.txt
├── LICENSE.txt
├── male
│   └── model.npz
└── neutral
    └── model.npz
</details> <details> <summary>DMPL</summary>
dmpl_model/
├── female
│   └── model.npz
├── LICENSE.txt
├── male
│   └── model.npz
└── neutral
    └── model.npz
</details>

External Sources

You may need the following packages for visualization.

Installation

  1. Create a virtual environment and activate it.

    conda create -n flame python=3.8
    conda activate flame
    
  2. Install the required packages. We recommend installing matching versions of PyTorch and PyTorch3D first (a hedged example is sketched after this list).

    pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"  # PyTorch3D
    
  3. Install VPoser, PyOpenGL, and PyOpenGL_Accelerate by following their installation guides.

  4. Install other required packages.

    pip install -r requirements.txt
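
A minimal sketch of step 2, assuming a pip-based setup. The exact PyTorch build is an assumption here; pick the one matching your CUDA version and PyTorch3D's compatibility requirements (see pytorch.org and the PyTorch3D install notes).

    # Install a PyTorch build compatible with your CUDA setup first (version choice is up to you).
    pip install torch torchvision
    # Then build PyTorch3D against that PyTorch version.
    pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"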
    

Preprocessing

  1. Preprocess the AMASS dataset.

    ./scripts/unzip_dataset.sh
    

     This will unzip the downloaded AMASS data into data/amass_smplhg. You can also unzip the data manually (see the sketch after this list).

  2. Prepare the HumanML3D dataset.

    python scripts/prepare_humanml3d.py
    
  3. Prepare the BABEL dataset.

    python scripts/prepare_babel_dataset.py
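
If you prefer to unzip the AMASS archives manually (step 1), here is a minimal shell sketch. It assumes the downloads are .tar.bz2 archives placed under data/amass_download_smplhg, as described in the Dataset section.

    # Extract every downloaded AMASS archive into data/amass_smplhg (paths follow the layout above).
    mkdir -p data/amass_smplhg
    for f in data/amass_download_smplhg/*.tar.bz2; do tar -xjf "$f" -C data/amass_smplhg; done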
    

Training

You can train your own model by running the following command. Training configs can be set through the config files in configs/ or overridden via command-line arguments (Hydra format); a hedged example follows the command.

python train.py
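
For example, Hydra lets you override config values directly on the command line. The key names below are assumptions for illustration only; check the files in configs/ for the actual fields.

    # Hypothetical overrides; replace the keys with ones that actually exist in configs/.
    python train.py batch_size=32 max_epochs=500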

Testing

Testing takes a long time, since it needs to generate samples for the entire test set. Run test.py with the proper config settings in configs/test.yaml, then run eval_util.py to evaluate the results.
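
A minimal sketch of the evaluation flow (both scripts are driven by the config files; any extra command-line options are assumptions, so check each script before running):

    python test.py       # generate samples for the whole test set, following configs/test.yaml
    python eval_util.py  # compute evaluation metrics over the generated samples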

Sampling

Text-to-Motion Generation

Set your sampling config in configs/t2m_sample.yaml. Sampled results will be saved to outputs/. You can export JSON output for visualization in the Unity Engine; the exported JSON includes the root joint's position and the rotations of all other joints in quaternion format.

python t2m_sample.py
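
As with training, sampling options can be overridden in Hydra style. The key name below is an assumption for illustration; check configs/t2m_sample.yaml for the actual field names.

    # Hypothetical override of the text prompt; use the real key from configs/t2m_sample.yaml.
    python t2m_sample.py text_prompt="a person walks forward and waves"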

Text-to-Motion Editing

Set your text-to-motion editing config in configs/edit_motion.yaml. You can choose the motion to be edited, the joints to edit, and the text prompt. Sampled results will be saved to outputs/.

python edit_motion.py
<details> <summary>Joint Index</summary> </details>

Pretrained Weights

HumanML3D

BABEL

Citation

@article{kim2022flame,
  title={Flame: Free-form language-based motion synthesis \& editing},
  author={Kim, Jihoon and Kim, Jiseob and Choi, Sungjoon},
  journal={arXiv preprint arXiv:2209.00349},
  year={2022}
}

License

Copyright (c) 2022 Korea University and Kakao Brain Corp. All Rights Reserved. Licensed under the Apache License, Version 2.0. (see LICENSE for details)