M3Act: Learning from Synthetic Human Group Activities
CVPR 2024 | Official Repository
This repository contains a Unity project with the core modules and assets for our synthetic data generator, M3Act. We also release the 3D group activity dataset, M3Act3D, along with essential tools for data processing, visualization, and evaluation.
Introduction
TLDR. M3Act is a synthetic data generator that produces multi-view, multi-group, multi-person atomic human actions and group activities. Designed to support multi-person and multi-group research, it features multiple semantic groups and generates highly diverse, photorealistic videos with a rich set of annotations suitable for human-centered tasks, including multi-person tracking, group activity recognition, and controllable human group activity generation. Please refer to our project page and paper for more details.
Synthetic Data Generator
Coming soon!
3D Group Activity Generation
Please refer to the gag folder for more details.
Citation
If you find our work useful, please cite the following works.
@inproceedings{chang2024learning,
  title={Learning from Synthetic Human Group Activities},
  author={Chang, Che-Jui and Li, Danrui and Patel, Deep and Goel, Parth and Zhou, Honglu and Moon, Seonghyeon and Sohn, Samuel S and Yoon, Sejong and Pavlovic, Vladimir and Kapadia, Mubbasir},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21922--21932},
  year={2024}
}

@article{chang2024equivalency,
  title={On the Equivalency, Substitutability, and Flexibility of Synthetic Data},
  author={Chang, Che-Jui and Li, Danrui and Moon, Seonghyeon and Kapadia, Mubbasir},
  journal={arXiv preprint arXiv:2403.16244},
  year={2024}
}
License
This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). See the LICENSE file for more details.