
<p align="center"> <h1 align="center">TEACH: Temporal Action Compositions for 3D Humans <a href='https://arxiv.org/abs/2209.04066'> <img src='https://img.shields.io/badge/arxiv-report-red' alt='ArXiv PDF'> </a> <a href='https://teach.is.tue.mpg.de/' style='padding-left: 0.5rem;'> <img src='https://img.shields.io/badge/Project-Page-blue?style=flat&logo=Google%20chrome&logoColor=blue' alt='Project Page'> </h1> <p align="center"> <a href="https://ps.is.mpg.de/person/nathanasiou"><strong>Nikos Athanasiou</strong></a> · <a href="https://mathis.petrovich.fr"><strong>Mathis Petrovich</strong></a> · <a href="https://ps.is.tuebingen.mpg.de/person/black"><strong>Michael J. Black</strong></a> · <a href="https://imagine.enpc.fr/~varolg"><strong>G&#252;l Varol</strong></a> </p> <h2 align="center">3DV 2022</h2> <div align="center"> </div> </p> <p float="center"> <img src="assets/action2.gif" width="49%" /> <img src="assets/action3.gif" width="49%" /> </p>

Check our upcoming YouTube video for a quick overview and our paper for more details.

Video


Features

This implementation:

Updates

To be uploaded:

Getting Started

TEACH has been implemented and tested on Ubuntu 20.04 with Python >= 3.9.

Clone the repo:

git clone https://github.com/athn-nik/teach.git

After that, run the following to download DistilBERT:

cd deps/
git lfs install
git clone https://huggingface.co/distilbert-base-uncased
cd ..
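To verify the download, a minimal sanity check (assuming the transformers package installed by the requirements below) is to load the checkpoint from the local clone; it should succeed without any network access:

# optional sanity check, run from the repo root after installing the requirements
python -c "from transformers import AutoModel, AutoTokenizer; AutoTokenizer.from_pretrained('deps/distilbert-base-uncased'); AutoModel.from_pretrained('deps/distilbert-base-uncased'); print('DistilBERT loads OK')"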

Install the requirements using virtualenv:

# pip
source scripts/install.sh

You can do something equivalent with conda as well.
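For example, a rough conda equivalent could look like this (the environment name is illustrative, and the requirements file is assumed to be the one used by scripts/install.sh):

# rough conda equivalent (illustrative; check scripts/install.sh for the exact steps)
conda create -n teach python=3.9
conda activate teach
pip install -r requirements.txt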

Running the Demo

We have prepared demo code to run TEACH on arbitrary text prompts. First, you need to download the required data (i.e., our trained model) from our website. The path/to/experiment directory should look like this:

experiment
|-- .hydra
|   |-- config.yaml
|   |-- overrides.yaml
|   `-- hydra.yaml
|
`-- checkpoints
    `-- last.ckpt

Then, running the demo is as simple as:


python interact_teach.py folder=/path/to/experiment output=/path/to/yourfname texts='[text prompt1, text prompt2, text prompt3, <more prompts comma divided>]' durs='[dur1, dur2, dur3, ...]'
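For example (the paths are placeholders, and the prompts and durations, presumably in seconds, are purely illustrative):

python interact_teach.py folder=./experiments/teach-release output=./results/demo_walk_sit texts='[walk forward, sit down, wave with the right hand]' durs='[2.0, 3.0, 2.5]'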

Data

‼️⚠️ You can directly download the data from this link and use them!

Download the data from the AMASS website. Then, run this command to extract the AMASS sequences that are annotated in BABEL:

python scripts/process_amass.py --input-path /path/to/data --output-path path/of/choice/default_is_/babel/babel-smplh-30fps-male --use-betas --gender male

Download the data from the TEACH website, after signing in. The data TEACH was trained on is a processed version of BABEL; hence, we provide it directly to you via our website, where you will also find more relevant details. Finally, download the male SMPLH body model from the SMPLX website, specifically the AMASS version of the SMPLH model. Then, follow the instructions here to extract the SMPLH model in pickle format.

Then run this script (changing the paths inside it accordingly) to extract the different BABEL splits from AMASS:

python scripts/amass_splits_babel.py

Then create a directory named data and put the BABEL data and the processed AMASS data in it. You should end up with a data folder with a structure like this:

data
|-- amass
|   `-- your-processed-amass-data
|
|-- babel
|   |-- babel-teach
|   |   `-- ...
|   `-- babel-smplh-30fps-male
|       `-- ...
|
`-- smpl_models
    `-- smplh
        `-- SMPLH_MALE.pkl

Be careful not to push any data! Finally, softlink the data directory into this repo:

ln -s /path/to/data
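For example, from the root of the cloned repo (the source path is a placeholder; this creates a symlink named data pointing to it):

cd /path/to/teach
ln -s /home/user/datasets/teach_data data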

Training

To start training, activate your environment and run:

python train.py experiment=baseline logger=none

Explore configs/train.yaml to change basic settings, such as where your output is stored or which data to use, e.g. if you want to do a small experiment on a subset of the data. [TODO]: More on this coming soon.

Sampling & Evaluation

Here are some commands if you want to sample from the validation set and evaluate on the metrics reported in the paper:

python sample_seq.py folder=/path/to/experiment align=full slerp_ws=8

In general, the folder has the form <output_folder>/<project>/<dataname_config>/<experiment>/<run_id>. This folder should contain a checkpoints directory with a last.ckpt file inside, and a .hydra directory from which the configuration and the relevant checkpoint will be loaded. This folder is created during training in the output directory, and we provide it on our website for the experiments in the paper.
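For instance, a run folder produced by training might look like this (all names are illustrative):

outputs/teach/babel-amass/baseline/2xk91q7c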

Then for the evaluation you should do:

python eval.py folder=/path/to/experiment align=true slerp=true

The two extra parameters determine the samples on which the evaluation will be performed.

Transition distance

[TODO]: More on this coming soon.

Citation

@inproceedings{TEACH:3DV:2022,
  title={TEACH: Temporal Action Compositions for 3D Humans},
  author={Athanasiou, Nikos and Petrovich, Mathis and Black, Michael J. and Varol, G\"{u}l },
  booktitle = {International Conference on 3D Vision (3DV)},
  month = {September},
  year = {2022}
}

License

This code is available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using this code you agree to the terms in the LICENSE. Third-party datasets and software are subject to their respective licenses.

Acknowledgments

We thank Benjamin Pellkofer for his IT support.

References

Many parts of this code were based on the official implementation of TEMOS. Here are some great resources we benefited from:

Contact

This code repository was implemented mainly by Nikos Athanasiou with the help of Mathis Petrovich.

Give a ⭐ if you like it.

For commercial licensing (and all related questions for business applications), please contact ps-licensing@tue.mpg.de.