# FLAME: Free-form Language-based Motion Synthesis & Editing
[Project Page] | [Paper] | [Video]
Official Implementation of the paper FLAME: Free-form Language-based Motion Synthesis & Editing (AAAI'23)
## Generated Samples

<img src="https://user-images.githubusercontent.com/10102721/204811388-748bbe11-bb0f-489b-a532-c668023c22b4.gif" width="640" height="360"/>

## Environment

This project was tested in the environment described below. Please install the prerequisites on your machine before running the code.
### Prerequisites

#### Packages

You may need the following system packages to run this repo:

```
apt install libboost-dev libglfw3-dev libgles2-mesa-dev freeglut3-dev libosmesa6-dev libgl1-mesa-glx
```
## Dataset

<details>
<summary>AMASS Dataset</summary>

:exclamation: We cannot directly provide the original data files, in order to abide by the license.

Visit https://amass.is.tue.mpg.de/ to download the AMASS dataset. We used the SMPL+H G data of the following datasets in AMASS:
- ACCAD
- BMLhandball
- BMLmovi
- BMLrub
- CMU
- DanceDB
- DFaust
- EKUT
- EyesJapanDataset
- HDM05
- Human4D
- HumanEva
- KIT
- Mosh
- PosePrior
- SFU
- SSM
- TCDHands
- TotalCapture
- Transitions
Downloaded data are compressed in `bz2` format. All downloaded files need to be placed in `data/amass_download_smplhg`.
Visit https://babel.is.tue.mpg.de/ to download the BABEL dataset. At the time of our experiments, we used `babel_v1.0_release`.

The BABEL dataset should be located at `data/babel_v1.0_release`. The file structure under `data/babel_v1.0_release` looks like:

```
.
├── extra_train.json
├── extra_val.json
├── test.json
├── train.json
└── val.json
```
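A minimal sanity check (a sketch, not part of the repo) that the BABEL files are in place and parse correctly; the count printed is just the number of top-level JSON entries per split:

```python
import json

# Verify each BABEL split file loads; paths follow the tree above.
for split in ("train", "val", "test", "extra_train", "extra_val"):
    with open(f"data/babel_v1.0_release/{split}.json") as f:
        data = json.load(f)
    print(f"{split}: {len(data)} top-level entries")
```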
</details>
<details>
<summary>HumanML3D</summary>
You can access the full HumanML3D dataset at HumanML3D. However, we used the original AMASS SMPL data instead of the customized rig. To run this repo, you will need to prepare:
```
./data/HumanML3D/
├── humanact12
├── HumanML3D.csv
├── test.txt
├── texts.zip
├── train.txt
└── val.txt
```

Note that the files above are located at `data/HumanML3D/`. Please download `humanact12` and `HumanML3D.csv`. You can download the other files from the original repo.
</details>

### SMPL & DMPL Models

You may need the SMPL and DMPL models to preprocess motion data. Please refer to AMASS for this. `smpl_model` and `dmpl_model` should be located in the project root directory.

<details>
<summary>SMPL</summary>
```
smpl_model/
├── female
│   └── model.npz
├── info.txt
├── LICENSE.txt
├── male
│   └── model.npz
└── neutral
    └── model.npz
```
</details>
<details>
<summary>DMPL</summary>
```
dmpl_model/
├── female
│   └── model.npz
├── LICENSE.txt
├── male
│   └── model.npz
└── neutral
    └── model.npz
```
</details>
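A quick sanity check, based purely on the directory trees above, that the model files sit where the repo expects them (a minimal sketch; `info.txt` and the license files are not checked):

```python
from pathlib import Path

# Verify the SMPL/DMPL model files match the trees shown above.
for base in ("smpl_model", "dmpl_model"):
    for gender in ("female", "male", "neutral"):
        p = Path(base) / gender / "model.npz"
        print(p, "OK" if p.exists() else "MISSING")
```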
## External Sources

You may also need external packages for visualization, such as VPoser and PyOpenGL (see Installation below).
## Installation

1. Create a virtual environment and activate it.

   ```
   conda create -n flame python=3.8
   conda activate flame
   ```

2. Install the required packages. We recommend installing the matching versions of PyTorch and PyTorch3D first.

   ```
   pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"  # PyTorch3D
   ```

3. Install VPoser, PyOpenGL, and PyOpenGL_Accelerate by following their installation guides.

4. Install the other required packages.

   ```
   pip install -r requirements.txt
   ```
## Preprocessing

1. Preprocess the AMASS dataset.

   ```
   ./scripts/unzip_dataset.sh
   ```

   This will unzip the downloaded AMASS data into `data/amass_smplhg`. You can also unzip the data manually.

2. Prepare the HumanML3D dataset.

   ```
   python scripts/prepare_humanml3d.py
   ```

3. Prepare the BABEL dataset.

   ```
   python scripts/prepare_babel_dataset.py
   ```
## Training

You can train your own model by running the following command. Training configs can be set via the config files in `configs/` or via command-line arguments (Hydra format).

```
python train.py
```
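Because the configs are Hydra-based, individual values can also be overridden on the command line. A hypothetical example (the key names here are illustrative; the actual keys depend on the files in `configs/`):

```
python train.py batch_size=32 max_epochs=100
```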
## Testing

Testing takes a long time, since it generates samples for the entire test set. Run `test.py` with proper config settings in `configs/test.yaml`. Then, run `eval_util.py` to evaluate the results.
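The two steps, assuming both scripts are invoked directly and read their settings from the configs above:

```
python test.py       # generate samples for the test set per configs/test.yaml
python eval_util.py  # evaluate the generated samples
```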
## Sampling

### Text-to-Motion Generation

Set your sampling config in `configs/t2m_sample.yaml`. Sampled results will be saved in `outputs/`. You can export `json` output to visualize in the Unity Engine; the exported `json` includes the root joint's position and the rotations of all other joints in quaternion format.

```
python t2m_sample.py
```
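A minimal sketch for inspecting an exported result; `outputs/sample.json` is a placeholder path, since actual filenames depend on your run:

```python
import json

# Load one exported sample and inspect its top-level structure.
with open("outputs/sample.json") as f:
    motion = json.load(f)
if isinstance(motion, dict):
    print(sorted(motion.keys()))
else:
    print(type(motion), len(motion))
```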
### Text-to-Motion Editing

Set your text-to-motion editing config in `configs/edit_motion.yaml`. You can choose a motion to be edited, the joints to edit (see the joint index list below and the sketch after it), and a text prompt. Sampled results will be saved in `outputs/`.

```
python edit_motion.py
```
<details>
<summary>Joint Index</summary>
- 00: Pelvis
- 01: L_Hip
- 02: R_Hip
- 03: Spine1
- 04: L_Knee
- 05: R_Knee
- 06: Spine2
- 07: L_Ankle
- 08: R_Ankle
- 09: Spine3
- 10: L_Foot
- 11: R_Foot
- 12: Neck
- 13: L_Collar
- 14: R_Collar
- 15: Head
- 16: L_Shoulder
- 17: R_Shoulder
- 18: L_Elbow
- 19: R_Elbow
- 20: L_Wrist
- 21: R_Wrist
- 22: L_Hand
- 23: R_Hand
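A small helper, assuming the SMPL joint ordering listed above, for translating joint names into the indices used when specifying editing joints (the exact config key in `configs/edit_motion.yaml` is not shown here and may differ):

```python
# SMPL joint names in the index order listed above.
SMPL_JOINTS = [
    "Pelvis", "L_Hip", "R_Hip", "Spine1", "L_Knee", "R_Knee",
    "Spine2", "L_Ankle", "R_Ankle", "Spine3", "L_Foot", "R_Foot",
    "Neck", "L_Collar", "R_Collar", "Head", "L_Shoulder", "R_Shoulder",
    "L_Elbow", "R_Elbow", "L_Wrist", "R_Wrist", "L_Hand", "R_Hand",
]

# Example: indices of the left-arm joints, e.g. to edit only that arm.
left_arm = [SMPL_JOINTS.index(n) for n in ("L_Shoulder", "L_Elbow", "L_Wrist", "L_Hand")]
print(left_arm)  # [16, 18, 20, 22]
```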
</details>

## Pretrained Weights

- HumanML3D
- BABEL
## Citation

```bibtex
@article{kim2022flame,
  title={Flame: Free-form language-based motion synthesis \& editing},
  author={Kim, Jihoon and Kim, Jiseob and Choi, Sungjoon},
  journal={arXiv preprint arXiv:2209.00349},
  year={2022}
}
```
## License

Copyright (c) 2022 Korea University and Kakao Brain Corp. All Rights Reserved. Licensed under the Apache License, Version 2.0 (see LICENSE for details).