ActionVOS: Actions as Prompts for Video Object Segmentation
Our paper has been accepted by ECCV 2024 as an oral presentation (2.3%)!
<div align=center> <img src="figures/ActionVOS.png" alt="ActionVOS" width="500" align="bottom" /> </div>Picture: Overview of the proposed ActionVOS setting.
<div align=center> <img src="./figures/method.png" alt="method" width="800" align="center" /> </div>Picture: The proposed method in our paper.
This repository contains the official PyTorch implementation of the following paper:
ActionVOS: Actions as Prompts for Video Object Segmentation<br> Liangyang Ouyang, Ruicong Liu, Yifei Huang, Ryosuke Furuta, and Yoichi Sato<br> <!-- > https://arxiv.org/abs/ -->
Abstract:
Delving into the realm of egocentric vision, the advancement of referring video object segmentation (RVOS) stands as pivotal in understanding human activities. However, the existing RVOS task primarily relies on static attributes such as object names to segment target objects, posing challenges in distinguishing target objects from background objects and in identifying objects undergoing state changes. To address these problems, this work proposes a novel action-aware RVOS setting called ActionVOS, aiming at segmenting only active objects in egocentric videos using human actions as a key language prompt. This is because human actions precisely describe the behavior of humans, thereby helping to identify the objects truly involved in the interaction and to understand possible state changes. We also build a method tailored to work under this specific setting. Specifically, we develop an action-aware labeling module with an efficient action-guided focal loss. Such designs enable the ActionVOS model to prioritize active objects with existing, readily available annotations. Experimental results on the VISOR dataset reveal that ActionVOS significantly reduces the mis-segmentation of inactive objects, confirming that actions help the ActionVOS model understand objects' involvement. Further evaluations on the VOST and VSCOS datasets show that the novel ActionVOS setting enhances segmentation performance when encountering challenging circumstances involving object state changes.
Resources
Material related to our paper is available via the following links:
Requirements
- Our experiments were tested with Python 3.8 and PyTorch 1.11.0.
- Our experiments with ReferFormer used 4 V100 GPUs and took 6-12 hours to train 6 epochs on VISOR.
- Check the Training instructions below for the packages required by ReferFormer.
Playing with ActionVOS
Data preparation (Pseudo-labeling and Weight-generation)
For the videos and masks, please download the VISOR-VOS, VSCOS, and VOST datasets from these links. We recommend downloading VISOR-VOS first, since we use VISOR-VOS for both training and testing.
Action narration annotations are obtained from EK-100. (We already put them in this repository, so you don't need to download them.)
Hand-object annotations are obtained from VISOR-HOS. (Please download them from Google Drive (link1, link2) and put them under /annotations.)
Then run data_prepare_visor.py to generate the data, annotations, action-aware pseudo-labels, and action-guided weights for ActionVOS.
python data_prepare_visor.py --VISOR_PATH your_visor_epick_path
Processing the data takes 1-2 hours. After that, the folder dataset_visor will have the following structure:
- dataset_visor
  - Annotations_Sparse
    - train
      - 00000001_xxx
        - obj_masks.png
      - 00000002_xxx
    - val
  - JPEGImages_Sparse
    - train
      - 00000001_xxx
        - rgb_frames.jpg
      - 00000002_xxx
    - val
  - Weights_Sparse
    - train
      - 00000001_xxx
        - action-guided-weights.png
      - 00000002_xxx
    - val (not used)
  - ImageSets
    - train.json
    - val.json
    - val_human.json
    - val_novel.json
There are 2 special files, val_human.json and val_novel.json. These files contain the splits used for the results in our experiments: val_human contains the actions annotated by humans, and val_novel contains the unseen (novel) actions in the validation set.
How to find action-aware pseudo labels
Check train.json. For each object name in each video, the JSON file contains a record such as {"name": "food container", "class_id": 21, "handbox": 0, "narration": 1, "positive": 1}.
handbox = 1 if the object mask intersects with the hand-object bounding boxes.
narration = 1 if the object name is mentioned in the action narration.
positive = 1 for a pseudo-positive object.
Note that the object masks under Annotations_Sparse cover all objects. We combine them with the class labels in our experiments.
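For example, here is a minimal sketch of reading the pseudo-labels, assuming train.json maps each video ID to a list of per-object records like the one above (check the generated file for its exact layout):

```python
import json

# Illustrative only: this assumes train.json maps each video ID to a list of
# per-object records such as {"name": ..., "class_id": ..., "handbox": ...,
# "narration": ..., "positive": ...}; check the generated file for the exact layout.
with open("dataset_visor/ImageSets/train.json") as f:
    meta = json.load(f)

for video_id, objects in meta.items():
    positives = [obj["name"] for obj in objects if obj.get("positive") == 1]
    print(video_id, "positive objects:", positives)
```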
How to find action-guided weights
Each image under Weights_Sparse is an action-guided weight map.
<div align=center> <img src="figures/weights.png" alt="weights" width="500" align="bottom" /> </div>Picture: Action-guided Weights
3 (yellow) for negative object masks.
2 (green) for hand | narration object masks.
4 (blue) for hand & narration object masks.
1 (red) for all other areas.
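As a hedged example, one way to read a weight map and turn the stored codes into a per-pixel loss weight tensor is sketched below. The numeric weights are placeholders only; the actual values used by the action-guided focal loss are set in the training code and configs.

```python
import numpy as np
import torch
from PIL import Image

# Minimal sketch, assuming the PNG stores the codes 1-4 directly (e.g., as a
# single-channel or palette image). The mapping below is a hypothetical example.
code_to_weight = {1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0}

# "00000001_xxx" is the placeholder folder name from the structure above.
weight_map = np.array(Image.open(
    "dataset_visor/Weights_Sparse/train/00000001_xxx/action-guided-weights.png"))

loss_weight = np.zeros(weight_map.shape, dtype=np.float32)
for code, w in code_to_weight.items():
    loss_weight[weight_map == code] = w

loss_weight = torch.from_numpy(loss_weight)  # multiply into the per-pixel loss
```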
Training
ActionVOS is an action-aware setting for RVOS, and any RVOS model with an extra classification head can be trained for ActionVOS. In our experiments, we use ReferFormer-ResNet101 as the base RVOS model.
Clone the ReferFormer repository and download their pretrained checkpoints.
git clone https://github.com/wjn922/ReferFormer.git
cd ReferFormer
mkdir pretrained_weights
# download the pretrained checkpoints from the link into pretrained_weights/
Install the necessary packages for ReferFormer.
cd ReferFormer
pip install -r requirements.txt
pip install 'git+https://github.com/facebookresearch/fvcore'
pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
cd models/ops
python setup.py build install
Copy the modified files into the ReferFormer folders.
python copy_rf_actionvos_files.py
Run the training scripts. If you want to change the training configs, check RF_ActionVOS/configs.md. The following example trains ActionVOS on a single GPU (GPU 0).
cd ReferFormer
bash scripts/train_actionvos.sh actionvos_dirs/r101 pretrained_weights/r101_refytvos_joint.pth 1 0 29500 --backbone resnet101 --expression_file train_meta_expressions_promptaction.json --use_weights --use_positive_cls --actionvos_path ../dataset_visor --epochs 6 --lr_drop 3 5 --save_interval 3
After the training process, the weights will be saved to actionvos_dirs/r101/checkpoint.pth.
Inference
For a quick start with ActionVOS models, we provide a trained RF-R101 checkpoint at this link.
Inference on VISOR
cd ReferFormer
bash scripts/test_actionvos.sh actionvos_dirs/r101 pretrained_weights/actionvos_rf_r101.pth 0 29500 --backbone resnet101 --expression_file val_meta_expressions_promptaction.json --use_positive_cls --pos_cls_thres 0.75 --actionvos_path ../dataset_visor
The output masks will be saved in ReferFormer/actionvos_dirs/r101/val.
Inference on your own videos and prompts
Organize your videos and prompts into an actionvos_path structure like:
- demo_path
  - JPEGImages_Sparse
    - val
      - video_name
        - rgb_frames.jpg
  - ImageSets
    - expression_file.json
Check the example JSON file for the prompt format.
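If it helps, here is a minimal sketch of writing such a file, assuming the Refer-YouTube-VOS-style meta-expressions layout that ReferFormer consumes; the ActionVOS prompt format may add fields, so rely on the example JSON file in this repository for the exact format.

```python
import json

# Hedged sketch of an expression file; field names follow the meta-expressions
# format used by ReferFormer and may differ from the ActionVOS example file.
expressions = {
    "videos": {
        "video_name": {  # must match the folder name under JPEGImages_Sparse/val
            "expressions": {
                "0": {"exp": "knife that I use to cut the carrot"}  # action-style prompt
            },
            "frames": ["00000001", "00000002"],  # frame names without the .jpg extension
        }
    }
}

with open("demo_path/ImageSets/expression_file.json", "w") as f:
    json.dump(expressions, f, indent=2)
```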
cd ReferFormer
bash scripts/test_actionvos.sh actionvos_dirs/demo pretrained_weights/actionvos_rf_r101.pth 0 29500 --backbone resnet101 --expression_file expression_file.json --use_positive_cls --pos_cls_thres 0.75 --actionvos_path ../demo_path
The output masks will be saved in ReferFormer/actionvos_dirs/demo/val.
Evaluation Metrics
We use 6 metrics (p-mIoU, n-mIoU, p-cIoU, n-cIoU, gIoU, and accuracy) to evaluate ActionVOS performance on the VISOR val_human split.
python actionvos_metrics.py --pred_path ReferFormer/actionvos_dirs/r101/val --gt_path dataset_visor/Annotations_Sparse/val --split_json dataset_visor/ImageSets/val_human.json
If you generated the object masks correctly with this checkpoint, you should get the results below:
Model | Split | p-mIoU | n-mIoU | p-cIoU | n-cIoU | gIoU | Acc |
---|---|---|---|---|---|---|---|
RF_R101 | val_human* | 66.1 | 18.6 | 72.7 | 32.2 | 71.2 | 83.0 |
* Note that val_human here only uses 294 videos. Check actionvos_metrics.py for details.
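For reference, the sketch below illustrates the mIoU/cIoU distinction, assuming cIoU accumulates intersections and unions over all objects before dividing (as in GRES) and mIoU averages per-object IoUs; actionvos_metrics.py holds the exact definitions used for the table above.

```python
import numpy as np

def miou_and_ciou(preds, gts):
    """preds, gts: lists of boolean masks, one (pred, gt) pair per object."""
    ious, inter_sum, union_sum = [], 0, 0
    for p, g in zip(preds, gts):
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        ious.append(inter / union if union > 0 else 1.0)
        inter_sum += inter
        union_sum += union
    miou = float(np.mean(ious))                             # average of per-object IoUs
    ciou = inter_sum / union_sum if union_sum > 0 else 1.0  # cumulative IoU
    return miou, ciou

# Toy example: a perfectly segmented small object and a missed large object.
pred = [np.ones((4, 4), bool), np.zeros((8, 8), bool)]
gt   = [np.ones((4, 4), bool), np.ones((8, 8), bool)]
print(miou_and_ciou(pred, gt))  # (0.5, 0.2)
```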
Citation
If this work or code is helpful in your research, please cite:
@inproceedings{ouyang2024actionvos,
title={ActionVOS: Actions as Prompts for Video Object Segmentation},
author={Ouyang, Liangyang and Liu, Ruicong and Huang, Yifei and Furuta, Ryosuke and Sato, Yoichi},
booktitle={European Conference on Computer Vision},
pages={216--235},
year={2024}
}
If you are using the data and annotations from VISOR, VSCOS, or VOST, please cite their original papers.
If you are using the training, inference, and evaluation code, please cite ReferFormer and GRES.
Contact
For any questions, including those about the algorithms and datasets, feel free to contact me by email: oyly(at)iis.u-tokyo.ac.jp