HAKE: Human Activity Knowledge Engine
<p align='center'> <img src="https://github.com/DirtyHarryLYL/HAKE-Action-Torch/blob/Activity2Vec/demo/hake_history.jpg" height="300"> </p>

For more details, please refer to the HAKE website: http://hake-mvig.cn.
HAKE project:
- HAKE-Reasoning (TPAMI): Neural-Symbolic reasoning engine. HAKE-Reasoning
- HAKE-Image (CVPR'18/20): Human body part state labels in images. HAKE-HICO, HAKE-HICO-DET, HAKE-Large, Extra-40-verbs.
- HAKE-AVA: Human body part state labels in videos from AVA dataset.
- CLIP-A2V: CLIP-based part states & verb recognizer.
- HAKE-A2V (CVPR'20): Activity2Vec, a general activity feature extractor based on HAKE data, converting a human (box) to a fixed-size vector, PaSta and action scores.
- HAKE-Action-TF, HAKE-Action-Torch (CVPR'19/20/22, NeurIPS'20, TPAMI'21/22, ECCV'22): SOTA action understanding methods and the corresponding HAKE-enhanced versions (TIN, IDN, IF, mPD, PartMap).
- HAKE-3D (CVPR'20): 3D human-object representation for action understanding (DJ-RN).
- HAKE-Object (CVPR'20, TPAMI'21): object knowledge learner to advance action understanding (SymNet).
- Halpe: a joint project under AlphaPose and HAKE, full-body human keypoints (body, face, hand, 136 points) of 50,000 HOI images.
- HOI Learning List: a list of recent HOI (Human-Object Interaction) papers, code, datasets, and leaderboards on widely-used benchmarks. We hope it helps everyone interested in HOI.
News:

(2022.12.19) HAKE 2.0 is accepted by TPAMI!
(2022.11.19) We release the interactive object bounding boxes & classes for the interactions within the AVA dataset (2.1 & 2.2)! HAKE-AVA, [Paper]. BTW, we also release a CLIP-based human body part state recognizer in CLIP-Activity2Vec!
(2022.07.29) Our new work PartMap is released! Paper, Code
(2022.04.23) Two new works on HOI learning are released! Interactiveness Field (CVPR'22) and a new HOI metric, mPD (AAAI'22).
(2022.02.14) We release the human body part state labels based on AVA: HAKE-AVA and HAKE 2.0 paper.
(2021.10.06) Our extended version of SymNet is accepted by TPAMI! Paper and code are coming soon.
(2021.2.7) Upgraded HAKE-Activity2Vec is released! Images/Videos --> human box + ID + skeleton + part states + action + representation. [Description]
<p align='center'> <img src="https://github.com/DirtyHarryLYL/HAKE-Action-Torch/blob/Activity2Vec/demo/a2v-demo.gif" height="400"> </p>

<!-- ## Full demo: [[YouTube]](https://t.co/hXiAYPXEuL?amp=1), [[bilibili]](https://www.bilibili.com/video/BV1s54y1Y76s) -->

(2021.1.15) Our extended version of TIN (Transferable Interactiveness Network) is accepted by TPAMI!
(2020.10.27) The code of IDN (Paper) in NeurIPS'20 is released!
(2020.6.16) Our larger version HAKE-Large (>122K images, activity and part state labels) and Extra-40-verbs (40 new actions) are released!
The image-level and instance-level part state annotations upon HICO and HICO-DET are available!

- Paper: PaStaNet, HAKE 2.0 paper.
- Corresponding code and models (HAKE-Action): Image-level and Instance-level.
Note that:
- Image-level means annotating which Human-Object Interactions are included in an image; the corresponding task is HOI recognition (image-level multi-label classification from HICO).
- Instance-level means annotating which HOIs are performed by each person; the task is HOI detection (instance-level multi-label detection from HICO-DET). A label sketch for both granularities follows below.
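To make the two granularities concrete, here is a minimal sketch of what an image-level and an instance-level label could look like; the field names (image_id, hoi_categories, human_bbox, etc.) are hypothetical placeholders for illustration, not the released schema (see Dataset format for the actual keys).

```python
# Illustrative only: these field names are hypothetical, not the actual
# annotation schema released with HAKE (see "Dataset format" for the real keys).

# Image-level: which of the 600 HOI categories appear anywhere in the image.
image_level_example = {
    "image_id": "HICO_train2015_00000001.jpg",
    "hoi_categories": [153, 154, 245],          # multi-label over 600 HOI classes
}

# Instance-level: which HOIs each person performs, with human/object boxes.
instance_level_example = {
    "image_id": "HICO_train2015_00000001.jpg",
    "humans": [
        {
            "human_bbox": [48, 35, 210, 320],    # [x1, y1, x2, y2]
            "object_bbox": [190, 120, 400, 310],
            "object_category": "bicycle",
            "hoi_categories": [153],             # HOIs performed by this person
        }
    ],
}
```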
If you find HAKE useful, please cite our papers:
@article{li2023hake,
title={HAKE: A Knowledge Engine Foundation for Human Activity Understanding},
author={Li, Yong-Lu and Liu, Xinpeng and Wu, Xiaoqian and Li, Yizhuo and Qiu, Zuoyu and Xu, Liang and Xu, Yue and Fang, Hao-Shu and Lu, Cewu},
journal={TPAMI},
year={2023}
}
@inproceedings{li2020pastanet,
title={PaStaNet: Toward Human Activity Knowledge Engine},
author={Li, Yong-Lu and Xu, Liang and Liu, Xinpeng and Huang, Xijie and Xu, Yue and Wang, Shiyi and Fang, Hao-Shu and Ma, Ze and Chen, Mingyang and Lu, Cewu},
booktitle={CVPR},
year={2020}
}
@inproceedings{lu2018beyond,
title={Beyond holistic object recognition: Enriching image understanding with part states},
author={Lu, Cewu and Su, Hao and Li, Yonglu and Lu, Yongyi and Yi, Li and Tang, Chi-Keung and Guibas, Leonidas J},
booktitle={CVPR},
year={2018}
}
HAKE-HICO (For Image-level HOI Recognition)
We have released image-level part state annotations on HICO. The HOI recognition task can be modeled as a multi-label classification problem with 600 HOI categories: given a still image, the model should predict all HOI categories involved in the image.
All 38,116 images in the HICO train set are annotated with finer human body part states. For a better understanding of the HOI recognition task, you could refer to these works: HICO, Pair-wise, HAKE.
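As a minimal sketch of this setup (not the released HAKE-Action model), the task can be posed as a 600-way multi-label head trained with per-category binary cross-entropy; the 2048-d feature size, the batch size, and the example labels below are assumptions for illustration only.

```python
# Minimal multi-label HOI recognition sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn

NUM_HOI = 600                              # HOI categories defined by HICO

head = nn.Linear(2048, NUM_HOI)            # 2048-d image feature is an assumption
criterion = nn.BCEWithLogitsLoss()         # one independent sigmoid per category

features = torch.randn(8, 2048)            # a batch of 8 image features
targets = torch.zeros(8, NUM_HOI)          # multi-hot ground truth
targets[0, [153, 154]] = 1.0               # e.g. two HOIs present in image 0

logits = head(features)
loss = criterion(logits, targets)          # training objective
scores = torch.sigmoid(logits)             # per-category confidences at test time
```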
Dataset
The labels are packaged in Annotations/hico-image-level.tar.gz. You can use:

cd Annotations
tar zxvf hico-image-level.tar.gz

to unzip them and get hico-training-set-image-level.json for the HICO train set. More details about the format are given in Dataset format.
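Once unpacked, the JSON can be inspected directly; the snippet below is a minimal sketch that only assumes the file is valid JSON, since the exact keys are documented in Dataset format.

```python
# Load and inspect the unpacked image-level labels (structure not assumed here).
import json

with open("Annotations/hico-training-set-image-level.json") as f:
    annotations = json.load(f)

print(type(annotations))                   # check the top-level container first
# Iterate once the real keys (from "Dataset format") are known, e.g.:
# for ann in annotations:
#     print(ann["image_id"], ann["hoi_categories"])   # hypothetical field names
```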
The HICO dataset can be found here: HICO.
Code and Models
The corresponding code and models can be found here.
Results
We provide our current state-of-the-art result file on HICO.
| Method | Few@1 | Few@5 | Few@10 | mAP | result |
|---|---|---|---|---|---|
| Pairwise-Part+HAKE-ALL | 25.40 | 32.48 | 33.71 | 47.09 | hico_result_pairwise_hake_all.csv |
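For reported numbers you should use the official evaluation code linked in the next subsection; as a rough sanity check, mAP for multi-label HOI recognition can be sketched as the mean of per-category average precision, e.g. with scikit-learn (the array shapes below are assumptions).

```python
# Generic mAP sketch for 600-way multi-label recognition (not the official HICO eval).
import numpy as np
from sklearn.metrics import average_precision_score

def mean_ap(scores: np.ndarray, labels: np.ndarray) -> float:
    """scores, labels: (num_images, 600) arrays; labels are 0/1 multi-hot."""
    aps = []
    for c in range(labels.shape[1]):
        if labels[:, c].sum() > 0:         # skip categories with no positives
            aps.append(average_precision_score(labels[:, c], scores[:, c]))
    return float(np.mean(aps))
```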
Evaluation
After downloading the above result file, you can evaluate it with the following steps:

- Download the evaluation code here (it is a modification of this benchmark).
- Copy the result file to #/data/test-result.csv, where # denotes the folder of the evaluation code.
- Run `matlab -nodesktop -nodisplay`.
- Run `eval_default_run`.
HAKE-HICO-DET (For Instance-level HOI Detection)
Instance-level part state annotations on HICO-DET are also available.
Dataset
The labels are packaged in Annotations/hico-det-instance-level.tar.gz. You can use:

cd Annotations
tar zxvf hico-det-instance-level.tar.gz

to unzip them and get hico-det-training-set-instance-level.json for the HICO-DET train set. More details about the format are given in Dataset format.
The HICO-DET dataset can be found here: HICO-DET.
Code and Models
The corresponding code and models can be found here.
HAKE-Large (For Instance-level Action Understanding Pre-training)
Instance-level part state annotations on HAKE-Large are also available now!
Dataset
The labels are packaged in Annotations/hake_large_annotation.tar.gz. You can use:

cd Annotations
tar zxvf hake_large_annotation.tar.gz

to unzip them and get hake_large_annotation.json for the HAKE-Large train set. More details about the format are given in Dataset format.
Images
You can download the corresponding images by following this.
Extra 40 verb categories
We also provide the image set and part-state labels for the extra 40 verb categories (including both HOI and human-only actions). You can download them from Google Drive. The verb_list and part-state_list are included in the zip file. For these 40 verb categories, objects also come from the 80 COCO categories, but object bounding boxes and categories are optional (e.g., dance has no interactive objects).
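Since the object fields are optional for these 40 verbs, downstream code has to tolerate their absence; below is a small hedged sketch in which the field names and values are hypothetical, not the released schema.

```python
# Hypothetical handling of optional object fields for the extra 40 verbs:
# human-only actions such as "dance" carry no interactive object.
extra_verb_anns = [
    {"verb": "ride", "human_bbox": [10, 20, 110, 300],
     "object_category": "bicycle", "object_bbox": [60, 120, 260, 310]},
    {"verb": "dance", "human_bbox": [30, 15, 180, 330]},   # no object fields
]

for ann in extra_verb_anns:
    obj_box = ann.get("object_bbox")       # None for human-only verbs
    if obj_box is None:
        print(ann["verb"], "-> human-only action, no object annotation")
    else:
        print(ann["verb"], "-> interacts with a", ann["object_category"])
```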
HAKE-AVA (For Instance-level Action Detection from Videos)
HAKE-AVA provides fine-grained spatio-temporal activity understanding based on AVA videos. It contains the human body part state (PaSta) annotations upon AVA and covers all the labeled human instances. PaSta (Part State) describes the action states of 10 human body parts, i.e., head, arms, hands, hip, legs, and feet.
For details, please refer to this repo.
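To illustrate the PaSta idea, a single person at a given moment can be thought of as ten part-level state labels that jointly describe the whole-body action; the part names follow the text above, while the state strings below are illustrative examples rather than the exact released vocabulary.

```python
# Illustrative PaSta decomposition: one state per body part (values are examples only).
pasta_example = {
    "head":       "look_at",
    "right_arm":  "swing",
    "left_arm":   "no_activity",
    "right_hand": "hold_something",
    "left_hand":  "no_activity",
    "hip":        "no_activity",
    "right_leg":  "walk_with",
    "left_leg":   "walk_with",
    "right_foot": "walk_with",
    "left_foot":  "walk_with",
}

assert len(pasta_example) == 10            # 10 parts: head, arms, hands, hip, legs, feet
```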
TODOS
- Image-level label results on HICO
- Image-level code and models
- Instance-level label results on HICO-DET
- Instance-level code and models
- HAKE-Large data
- HAKE-A2V, pipeline, model
- HAKE-Action in PyTorch
- HAKE-AVA data