🏊VATL4Pose🎬

> **Note**
> This is an official implementation of the following two papers from IIM, TTI-J.

> **Warning**
> The use of code under this repository follows the MIT License. Please take a look at LICENSE for details.

<div align="center"> <img src=".github/overview.png" width="960"> </div>

☑️TODO

📑Abstract

Human Pose (HP) estimation is actively researched because of its wide range of applications. However, even estimators pre-trained on large datasets may not perform satisfactorily due to a domain gap between the training and test data. To address this issue, we present our approach combining Active Learning (AL) and Transfer Learning (TL) to adapt HP estimators to individual video domains efficiently. For efficient learning, our approach quantifies (i) the estimation uncertainty based on the temporal changes in the estimated heatmaps and (ii) the unnaturalness in the estimated full-body HPs. These quantified criteria are then effectively combined with the state-of-the-art representativeness criterion to select uncertain and diverse samples for efficient HP estimator learning. Furthermore, we reconsider the existing Active Transfer Learning (ATL) method to introduce novel ideas related to the retraining methods and Stopping Criteria (SC). Experimental results demonstrate that our method enhances learning efficiency and outperforms comparative methods.
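To make criterion (i) concrete, here is a minimal sketch of how uncertainty could be quantified from the temporal change of estimated heatmaps, assuming per-frame heatmaps are available as a NumPy array; the function name and scoring details are illustrative, not the repository's implementation:

```python
# Illustrative sketch of criterion (i): frames whose heatmaps change strongly
# from the previous frame are treated as more uncertain. Names and the exact
# scoring are assumptions, not the repository API.
import numpy as np

def temporal_heatmap_change(heatmaps: np.ndarray) -> np.ndarray:
    """heatmaps: (T, J, H, W) per-frame, per-joint heatmaps for one person."""
    diffs = np.abs(np.diff(heatmaps, axis=0))            # (T-1, J, H, W)
    per_frame = diffs.mean(axis=(1, 2, 3))               # average over joints/pixels
    return np.concatenate([[per_frame[0]], per_frame])   # pad the first frame

# Example: pick the most uncertain frames as annotation candidates
scores = temporal_heatmap_change(np.random.rand(100, 17, 64, 48))
query_ids = np.argsort(scores)[::-1][:10]                # top-10 uncertain frames
```

Frames with high scores are candidates for annotation in each AL cycle.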

⬇️Installation

> **Warning**
> Environment: Python 3.10.7, CUDA 11.3, PyTorch 1.12.1.
> Other versions have not been tested.
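For reference, a setup matching the environment above might look like the following; the clone URL is a placeholder, and the use of conda is an assumption, not a repository requirement:

```bash
# Hedged setup sketch for the tested environment (Python 3.10.7 / CUDA 11.3 / PyTorch 1.12.1).
# The repository URL below is a placeholder.
git clone <REPOSITORY_URL> && cd <REPOSITORY_DIR>
conda create -n vatl4pose python=3.10.7 -y && conda activate vatl4pose
# Official PyTorch 1.12.1 wheels built against CUDA 11.3:
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 \
    --extra-index-url https://download.pytorch.org/whl/cu113
```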

🌐Downloads

<details><summary>PoseTrack21</summary>

After downloading PoseTrack21, generate the annotations for ATL:

```bash
python ./data/PoseTrack21/make_new_annotation.py
python ./data/PoseTrack21/integrate_new_annotation.py
```
</details>

<details><summary>JRDB-Pose</summary>

After downloading JRDB-Pose, generate the annotations for ATL:

```bash
python ./data/jrdb-pose/make_new_annotation.py
python ./data/jrdb-pose/integrate_new_annotation.py
```
</details>
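As a quick check that the annotation scripts produced loadable files, something like the snippet below can be used; the directory and JSON keys are assumptions based on typical COCO-style annotations, not the repository's documented output:

```python
# Hypothetical sanity check: walk the dataset directory and report how many
# images/annotations each generated JSON contains (keys assume COCO-style files).
import json
from pathlib import Path

for path in Path("./data/PoseTrack21").rglob("*.json"):
    with open(path) as f:
        ann = json.load(f)
    print(path, len(ann.get("images", [])), "images,",
          len(ann.get("annotations", [])), "annotations")
```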

🚀Quick Start

<details><summary><b>VATL on PoseTrack21 using SimpleBaseline</b></summary>

1. (Optional) Train an initial pose estimator from scratch

   ```bash
   python ./scripts/posetrack_train.py --cfg ./configs/posetrack21/{CONFIG_FILE} --exp-id {EXP_ID}
   ```

2. (Optional) Evaluate the performance of the pre-trained model on the train/val/test splits

   ```bash
   python ./scripts/poseestimatoreval.py --cfg ./configs/posetrack21/{CONFIG_FILE} --exp-id {EXP_ID}
   ```

3. (Optional) Pre-train the AutoEncoder for WPU (Whole-body Pose Unnaturalness); a sketch of the WPU idea follows this list

   ```bash
   python ./scripts/wholebodyAE_train.py --dataset_type Posetrack21
   ```

4. Execute Video-specific Active Transfer Learning on test videos; an outline of the ATL cycle follows this list

   > **Warning**
   > Please specify the detailed settings in the shell script if needed.

   ```bash
   bash ./scripts/run_active_learning.sh ${GPU_ID}
   ```

5. Evaluate the results of video-specific ATL

   > **Warning**
   > Please specify the results to summarize in the Python script.

   ```bash
   python ./scripts/detailed_result.py
   ```

6. (Optional) Visualize the estimated poses on each ATL cycle

   > **Warning**
   > Please specify the results to summarize in the Python script.

   ```bash
   python ./scripts/visualize_result.py
   ```
</details>
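For step 3, WPU scores how unnatural an estimated whole-body pose looks via an AutoEncoder's reconstruction error: the AE is trained on natural poses, so poses it reconstructs poorly are flagged as unnatural. Below is a minimal PyTorch sketch of that idea; the architecture, joint count, and names are assumptions, not the repository's implementation:

```python
# Minimal sketch of the WPU idea: an AutoEncoder trained on natural poses,
# whose reconstruction error scores how unnatural an estimated pose looks.
# Architecture and sizes are illustrative assumptions, not the repository code.
import torch
import torch.nn as nn

class WholeBodyAE(nn.Module):
    def __init__(self, num_joints: int = 15, hidden: int = 32):
        super().__init__()
        dim = num_joints * 2                       # (x, y) per joint, flattened
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, pose: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(pose))

def wpu_score(model: WholeBodyAE, pose: torch.Tensor) -> torch.Tensor:
    """Higher reconstruction error = more unnatural whole-body pose."""
    with torch.no_grad():
        recon = model(pose)
    return ((recon - pose) ** 2).mean(dim=-1)

model = WholeBodyAE()
poses = torch.rand(4, 30)          # batch of 4 flattened (x, y) poses
print(wpu_score(model, poses))     # one unnaturalness score per pose
```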
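Step 4's shell script runs the full ATL procedure. Conceptually, each cycle scores the unlabeled frames, queries the most informative ones, fine-tunes the estimator, and checks a Stopping Criterion (SC). The outline below is a hedged, self-contained sketch in which every function is a placeholder for the repository's actual components:

```python
# Hedged outline of one video-specific ATL run: score -> query -> retrain,
# repeated until the stopping criterion fires. All names are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def combined_criterion(frame_ids):
    # Placeholder: the paper combines uncertainty (THC + WPU) with a
    # representativeness criterion; random scores stand in here.
    return rng.random(len(frame_ids))

def finetune(estimator, labeled_ids):
    return estimator  # placeholder for transfer learning on queried frames

def stopping_criterion(cycle, max_cycles=5):
    return cycle + 1 >= max_cycles  # placeholder SC

def run_atl(estimator, num_frames=100, k=10):
    unlabeled, labeled = list(range(num_frames)), []
    cycle = 0
    while unlabeled:
        scores = combined_criterion(unlabeled)
        top = np.argsort(scores)[::-1][:k]                 # most informative first
        queries = [unlabeled[i] for i in top]
        labeled += queries                                  # annotated by the oracle
        unlabeled = [f for f in unlabeled if f not in queries]
        estimator = finetune(estimator, labeled)
        if stopping_criterion(cycle):
            break
        cycle += 1
    return estimator

run_atl(estimator=None)
```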

✍️Citation

If you find this code useful, please consider citing our work :D

<details><summary>WACV2024</summary>

```bibtex
@InProceedings{VATL4Pose_WACV24,
  author    = {Taketsugu, Hiromu and Ukita, Norimichi},
  title     = {Active Transfer Learning for Efficient Video-Specific Human Pose Estimation},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year      = {2024}}
```
</details>

<details><summary>MVA2023</summary>

```bibtex
@InProceedings{VATL4Pose_MVA23,
  author    = {Taketsugu, Hiromu and Ukita, Norimichi},
  title     = {Uncertainty Criteria in Active Transfer Learning for Efficient Video-Specific Human Pose Estimation},
  booktitle = {2023 18th International Conference on Machine Vision and Applications (MVA)},
  year      = {2023}}
```
</details>

🤗Acknowledgement

This implementation is based on AlphaPose, ALiPy, DeepAL+, and VL4Pose. We deeply appreciate the authors for open-sourcing their code.

🤝Contributing

If you'd like to contribute, you can open an issue on this repository.

All contributions are welcome! All content in this repository is licensed under the MIT license.