# HAKE-AVA
Fine-grained Spatio-Temporal Activity Understanding based on AVA videos. A part of the HAKE project.
## Annotation Diagram
<div align=center> <img src="figs/hake-ava.png" width="800" /> </div>

## HAKE-AVA-PaSta (Body part states in AVA)
HAKE-AVA contains the human body part states (PaSta) annotations upon AVA (v2.1 & 2.2) and covers all the labeled human instances. PaSta (Part State) describes the action states of 10 human body parts, i.e., head, arms, hands, hip, legs, and feet.
For the procedure of preparing the HAKE-AVA-PaSta dataset, please refer to DATASET.md.
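The concrete file layout is specified in DATASET.md; purely as a hedged sketch (the file name, record keys, and part naming below are assumptions, not the released schema), consuming the per-human PaSta labels could look like:

```python
import pickle

# Hypothetical file name and record layout -- DATASET.md defines the real one.
with open("hake_ava_pasta_train.pkl", "rb") as f:
    annotations = pickle.load(f)

# The 10 PaSta body parts: head, two arms, two hands, hip, two legs, two feet.
BODY_PARTS = ["head", "left_arm", "right_arm", "left_hand", "right_hand",
              "hip", "left_leg", "right_leg", "left_foot", "right_foot"]

for record in annotations:
    # One record per labeled human instance in an AVA keyframe.
    video_id, timestamp = record["video_id"], record["timestamp"]
    x1, y1, x2, y2 = record["human_box"]      # AVA-style normalized box
    for part in BODY_PARTS:
        states = record["pasta"][part]        # active state labels of this part
        if states:
            print(video_id, timestamp, part, states)
```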
**ST-Activity2Vec**: a PaSta-based activity understanding model. Its overall pipeline is the same as that of the image-based HAKE-Activity2Vec, except for the feature backbone (ResNet -> SlowFast). We also provide weights pretrained on Kinetics-600 and finetuned on HAKE-AVA.
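As a rough sketch of what the backbone swap means in practice (using a Kinetics-pretrained SlowFast from PyTorchVideo's model hub as a stand-in; this is not the released ST-Activity2Vec code or weights):

```python
import torch

# Stand-in video backbone: SlowFast-R50 from PyTorchVideo's torch.hub entry.
# The released ST-Activity2Vec weights are Kinetics-600-pretrained and
# finetuned on HAKE-AVA; this hub model only illustrates the interface.
backbone = torch.hub.load("facebookresearch/pytorchvideo",
                          "slowfast_r50", pretrained=True)
backbone.eval()

# SlowFast consumes two pathways: 8 slow frames and 32 fast frames (alpha=4),
# instead of the single image a ResNet backbone would take.
slow = torch.randn(1, 3, 8, 256, 256)    # dummy clip, [B, C, T, H, W]
fast = torch.randn(1, 3, 32, 256, 256)

with torch.no_grad():
    out = backbone([slow, fast])   # clip-level output for the downstream heads
print(out.shape)
```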
**CLIP-Activity2Vec**: we also release a CLIP-based human body part state recognizer in CLIP-Activity2Vec!
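To give the flavor of a CLIP-based part state recognizer, here is a minimal zero-shot sketch with the public CLIP API (the prompts and the pre-cropped part image are made up; the actual CLIP-Activity2Vec model is trained on HAKE PaSta labels rather than prompted like this):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical state prompts for the "hand" body part.
prompts = ["a photo of a hand holding something",
           "a photo of a hand waving",
           "a photo of a hand that is idle"]
text = clip.tokenize(prompts).to(device)

# Placeholder: a crop of the hand region of a detected human.
image = preprocess(Image.open("hand_crop.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)   # image-text similarity logits
    probs = logits_per_image.softmax(dim=-1)[0]
print(dict(zip(prompts, probs.tolist())))
```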
Besides, in our other work, we also annotate all the interactive objects in AVA 2.2 videos:
## HAKE-DIO (Object boxes in AVA)
HAKE-DIO contains bounding box (290K) and object class (1,000+) annotations for all the interacted objects in AVA (v2.2) videos, tied to the labeled humans in AVA v2.2 performing Human-Object Interactions (HOI, 51 classes).
For more details, please refer to this [branch] and [Paper].
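As an illustration only (the file name and keys below are hypothetical; the branch linked above defines the actual schema), the DIO labels pair each object box with the AVA human it is interacted with:

```python
import json

# Hypothetical JSON export of HAKE-DIO -- see the DIO branch for real files.
with open("hake_dio_ava22.json") as f:
    dio = json.load(f)

for ann in dio["annotations"]:
    human_box = ann["human_box"]        # the labeled AVA v2.2 person
    object_box = ann["object_box"]      # one of the ~290K interacted-object boxes
    obj_cls = ann["object_category"]    # from the 1,000+ object classes
    hoi = ann["hoi_category"]           # one of the 51 HOI classes
    print(hoi, obj_cls, human_box, object_box)
```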
## Joint version: HAKE-AVA-PaSta + HAKE-DIO
We also provide a joint version combining the human body part states and interactive object boxes in one file, as shown in the above figure. Please refer to [this file].
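A hedged sketch of traversing such a joint record (made-up keys again), where each human instance carries both its PaSta labels and its DIO objects:

```python
import pickle

# Hypothetical joint file combining PaSta and DIO annotations per human.
with open("hake_ava_pasta_dio_joint.pkl", "rb") as f:
    joint = pickle.load(f)

for human in joint:
    pasta = human["pasta"]        # per-part state labels (10 body parts)
    for obj in human["objects"]:  # interacted objects from HAKE-DIO
        print(human["video_id"], obj["object_category"], pasta["right_hand"])
```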
## Citation
If you find our works useful, please consider citing:
```bibtex
@article{li2023hake,
  title={HAKE: A Knowledge Engine Foundation for Human Activity Understanding},
  author={Li, Yong-Lu and Liu, Xinpeng and Wu, Xiaoqian and Li, Yizhuo and Qiu, Zuoyu and Xu, Liang and Xu, Yue and Fang, Hao-Shu and Lu, Cewu},
  journal={TPAMI},
  year={2023}
}

@article{li2022discovering,
  title={Discovering a Variety of Objects in Spatio-Temporal Human-Object Interactions},
  author={Li, Yong-Lu and Fan, Hongwei and Qiu, Zuoyu and Dou, Yiming and Xu, Liang and Fang, Hao-Shu and Guo, Peiyang and Su, Haisheng and Wang, Dongliang and Wu, Wei and Lu, Cewu},
  journal={arXiv preprint arXiv:2211.07501},
  year={2022}
}

@inproceedings{li2020pastanet,
  title={PaStaNet: Toward Human Activity Knowledge Engine},
  author={Li, Yong-Lu and Xu, Liang and Liu, Xinpeng and Huang, Xijie and Xu, Yue and Wang, Shiyi and Fang, Hao-Shu and Ma, Ze and Chen, Mingyang and Lu, Cewu},
  booktitle={CVPR},
  year={2020}
}
```
## Main Project: HAKE (Human Activity Knowledge Engine)
<p align='center'> <img src="https://github.com/DirtyHarryLYL/HAKE-Action-Torch/blob/Activity2Vec/demo/hake_history.jpg" height="300"> </p>

For more details, please refer to the HAKE website: http://hake-mvig.cn.
- HAKE-Reasoning (TPAMI): Neural-Symbolic reasoning engine. HAKE-Reasoning
- HAKE-Image (CVPR'18/20): Human body part state labels in images. HAKE-HICO, HAKE-HICO-DET, HAKE-Large, Extra-40-verbs.
- HAKE-AVA: Human body part state labels in videos from the AVA dataset. HAKE-AVA.
- HAKE-A2V (CVPR'20): Activity2Vec, a general activity feature extractor based on HAKE data, converting a human (box) to a fixed-size vector, PaSta, and action scores.
- HAKE-Action-TF, HAKE-Action-Torch (CVPR'19/20/22, NeurIPS'20, TPAMI'21/22, ECCV'22): SOTA action understanding methods and the corresponding HAKE-enhanced versions (TIN, IDN, IF, mPD, PartMap).
- HAKE-3D (CVPR'20): 3D human-object representation for action understanding (DJ-RN).
- HAKE-Object (CVPR'20, TPAMI'21): object knowledge learner to advance action understanding (SymNet).
- Halpe: a joint project under AlphaPose and HAKE: full-body human keypoints (body, face, hands; 136 points) for 50,000 HOI images.
- HOI Learning List: a list of recent HOI (Human-Object Interaction) papers, code, datasets, and leaderboards on widely-used benchmarks. We hope it helps everyone interested in HOI.
## News

(2022.12.19) HAKE 2.0 is accepted by TPAMI!

(2022.11.19) We release the interactive object bounding boxes & classes for the interactions in the AVA dataset (v2.1 & v2.2)! HAKE-AVA, [Paper].

(2022.07.29) Our new work PartMap is released! Paper, Code

(2022.04.23) Two new works on HOI learning have been released! Interactiveness Field (CVPR'22) and a new HOI metric, mPD (AAAI'22).

(2022.02.14) We release the human body part state labels based on AVA: HAKE-AVA, and the HAKE 2.0 paper.

(2021.10.06) Our extended version of SymNet is accepted by TPAMI! The paper and code are coming soon.

(2021.02.07) Upgraded HAKE-Activity2Vec is released! Images/Videos --> human box + ID + skeleton + part states + action + representation. [Description]

<p align='center'> <img src="https://github.com/DirtyHarryLYL/HAKE-Action-Torch/blob/Activity2Vec/demo/a2v-demo.gif" height="400"> </p>

(2021.01.15) Our extended version of TIN (Transferable Interactiveness Network) is accepted by TPAMI!

(2020.10.27) The code of IDN (Paper) in NeurIPS'20 is released!

(2020.06.16) Our larger version HAKE-Large (>122K images with activity and part state labels) and Extra-40-verbs (40 new actions) are released!
## TODO
- AVA 2.1 PaSta annotation download instructions (AVA data, our labels, basic structure)
- DIO annotation download instructions
- Fusing DIO and HAKE-AVA data and labels