# Scripts for BEAT Dataset
## Contents
- Train and inference scripts
  - CaMN (ours)
  - End2End (ours)
  - Motion AutoEncoder (for evaluation)
- Data preprocessing
  - load a specific number of joints at a predefined FPS from `.bvh` files
  - build the word2vec model
  - cache generation (`.lmdb`)
- Dataset examples in `beat.zip`
  - original files used to generate the cache for train/val/test
  - caches for `language_model` and `pretrained_vae`
## Train

Requires `python == 3.7`.
- Build the folder structure:
  - download the scripts to `codes/beat/`
  - extract `beat.zip` to `datasets/beat/`
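The layout above can be created up front; a minimal sketch (a temporary directory is used here so the snippet is self-contained — point `root` at your actual workspace instead):

```python
import os
import tempfile

# In practice, set root to your workspace; a temp dir keeps this sketch
# self-contained.
root = tempfile.mkdtemp()

# codes/beat/ holds the downloaded scripts; datasets/beat/ holds the
# extracted contents of beat.zip.
for d in ("codes/beat", "datasets/beat"):
    os.makedirs(os.path.join(root, d), exist_ok=True)

print(sorted(os.listdir(root)))  # ['codes', 'datasets']
```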
- Run `pip install -r requirements.txt` in `./codes/beat/`.
- Run `python train.py -c ./configs/camn.yaml` for training and inference.
- Load `./outputs/exp_name/119/res_000_008.bvh` into Blender to visualize the test results.
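Before loading a result into Blender, it can help to sanity-check the generated `.bvh`. The sketch below parses the joint count, frame count, and FPS from BVH text; the embedded string is a toy example — in practice, read the contents of the output file above:

```python
# Toy BVH example; substitute the text of ./outputs/exp_name/119/res_000_008.bvh.
bvh_text = """HIERARCHY
ROOT Hips
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Spine
    {
        OFFSET 0.0 1.0 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0.0 1.0 0.0
        }
    }
}
MOTION
Frames: 2
Frame Time: 0.066667
0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0
"""

def bvh_summary(text):
    """Return (num_joints, num_frames, fps) parsed from BVH text."""
    lines = [line.strip() for line in text.splitlines()]
    # Every skeleton node is declared as ROOT or JOINT (End Site excluded).
    joints = sum(1 for line in lines if line.startswith(("ROOT", "JOINT")))
    frames = next(int(line.split(":")[1]) for line in lines if line.startswith("Frames:"))
    frame_time = next(float(line.split(":")[1]) for line in lines if line.startswith("Frame Time:"))
    return joints, frames, round(1.0 / frame_time)

print(bvh_summary(bvh_text))  # (2, 2, 15)
```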
## Modification
- To train the End2End model, add `g_name: PoseGenerator` in `camn.yaml`.
- To generate the data cache from scratch, run `cd ./dataloaders && python bvh2anyjoints.py` for the motion data and `cd ./dataloaders && python build_vocab.py` for the language model.
- To remove a modality, e.g., facial expressions, set `facial_rep: None` and `facial_f: 0` in `camn.yaml`, then run `python train.py -c ./configs/camn.yaml`.
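One way a config flag like `facial_f: 0` can gate a modality branch is sketched below; the function and dictionary layout are hypothetical and do not mirror the actual CaMN code — only the `facial_f` field name comes from `camn.yaml`:

```python
# Hypothetical sketch: gate the facial branch on a config flag.
# The facial_f field follows camn.yaml; build_inputs itself is invented.
def build_inputs(cfg, audio, text, facial):
    inputs = {"audio": audio, "text": text}
    if cfg.get("facial_f", 0) > 0:  # facial_f: 0 -> facial branch disabled
        inputs["facial"] = facial
    return inputs

print(build_inputs({"facial_f": 0}, "a", "t", "f"))  # no 'facial' key
```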
- To train without the semantic-weighted loss, set `sem_weighted = False` in `camn_trainer.py`.
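Conceptually, a semantic-weighted loss scales the per-sample reconstruction error by a semantic relevance score. The sketch below is illustrative only (plain-Python L1, hypothetical names), not the actual `camn_trainer.py` implementation:

```python
def weighted_l1(pred, target, sem_score, sem_weighted=True):
    """Mean L1 error, optionally scaled per-sample by a semantic score."""
    if not sem_weighted:  # sem_weighted = False -> uniform weights
        sem_score = [1.0] * len(pred)
    total = sum(w * abs(p - t) for p, t, w in zip(pred, target, sem_score))
    return total / len(pred)

# Samples with higher semantic scores contribute more to the loss:
print(weighted_l1([1.0, 2.0], [0.0, 0.0], [0.5, 2.0]))  # (0.5*1 + 2*2)/2 = 2.25
```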
- Refer to `./utils/config.py` for the other parameters.
Updated: removed all personal information from the scripts.