Generic Boundary Event Captioning Challenge at CVPR 2022 LOVEU workshop [paper]

Jaehyuk Heo, YongGi Jeong, Sunwoo Kim, Jaehee Kim, Pilsung Kang
School of Industrial & Management Engineering, Korea University
Seoul, Korea

We propose the Rich Encoder-decoder framework for Video Event Captioner (REVECA). Our model achieved 3rd place in the GEBC Challenge.

<p align='center'> <img width='800' src='https://github.com/TooTouch/REVECA/blob/main/assets/figure1.png'> </p>

Environments

  1. Build a Docker image and create a Docker container:

     cd docker
     bash docker_build.sh $image_name

  2. Install packages:

     pip install -r requirements.txt

Datasets

Download Kinetics-GEBC and its annotations here, and save the files in ./datasets:

datasets/
└── annotations
    ├── testset_highest_f1.json
    ├── trainset_highest_f1.json
    └── valset_highest_f1.json
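
Once the files are in place, each split loads as ordinary JSON. A minimal sketch follows; the internal field layout is not documented here, so it only inspects the top-level structure:

```python
import json

# Load one annotation split saved under ./datasets.
with open("datasets/annotations/trainset_highest_f1.json") as f:
    train_annotations = json.load(f)

# Peek at the top-level structure before building a dataset around it.
print(type(train_annotations))
print(len(train_annotations))
```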

Our model uses two additional video features: semantic segmentation masks and TSN features.

  1. We use semantic segmentation masks for training the model. The segmentation model is Mask2Former (see the mask-extraction sketch after this list).

  2. We use TSN features extracted by Temporal Segment Networks. The TSN features released in the GEBC Challenge can be downloaded here.
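
The exact mask-extraction pipeline is not shown here; the following is a minimal sketch using the Hugging Face transformers port of Mask2Former. The checkpoint name and the per-frame workflow are assumptions (the original facebookresearch implementation could be used instead):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# Example checkpoint -- an assumption, not necessarily the one used by REVECA.
ckpt = "facebook/mask2former-swin-large-ade-semantic"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = Mask2FormerForUniversalSegmentation.from_pretrained(ckpt).eval()

frame = Image.open("frame_0001.jpg")  # hypothetical extracted video frame
inputs = processor(images=frame, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Per-pixel class ids, resized back to the original frame resolution.
mask = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[frame.size[::-1]]
)[0]
```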

Methods

Our video understanding model, REVECA, is based on CoCa. We use three methods: (1) Temporal-based Pairwise Difference (TPD), (2) frame position embedding, and (3) LoRA. We use timm version 0.6.2.dev0 and loralib, and we modify vision_transformer.py to apply LoRA (a sketch follows below).
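
Since vision_transformer.py is patched to inject LoRA, the sketch below shows one common way to do this with loralib: swapping each attention qkv projection for a loralib.MergedLinear. The backbone name, rank r=8, and the choice to adapt only the query/value projections are illustrative assumptions, not the exact REVECA configuration.

```python
import timm
import loralib as lora

# Load a ViT backbone with timm (the README pins timm 0.6.2.dev0).
model = timm.create_model("vit_base_patch16_224", pretrained=True)

for block in model.blocks:
    qkv = block.attn.qkv  # fused nn.Linear(dim, 3 * dim)
    # MergedLinear adds low-rank adapters to selected slices of the fused
    # projection; enable_lora=[True, False, True] adapts q and v only
    # (a common LoRA choice, assumed here for illustration).
    lora_qkv = lora.MergedLinear(
        qkv.in_features,
        qkv.out_features,
        r=8,
        enable_lora=[True, False, True],
        bias=qkv.bias is not None,
    )
    # Carry over the pretrained weights, then swap the module in.
    lora_qkv.weight.data.copy_(qkv.weight.data)
    if qkv.bias is not None:
        lora_qkv.bias.data.copy_(qkv.bias.data)
    block.attn.qkv = lora_qkv

# Freeze the backbone; only the LoRA parameters remain trainable.
lora.mark_only_lora_as_trainable(model)
```

With this setup, lora.lora_state_dict(model) returns only the adapter weights, which keeps checkpoints small.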

Results

| Method | Avg. | CIDEr | SPICE | ROUGE-L |
| --- | --- | --- | --- | --- |
| CNN+LSTM | 29.94 | 49.73 | 13.62 | 26.46 |
| Robust Change Captioning | 34.16 | 58.56 | 16.34 | 27.57 |
| UniVL-revised | 36.64 | 65.74 | 18.06 | 26.12 |
| ActBERT-revised | 40.80 | 74.71 | 19.52 | 28.15 |
| REVECA (our model) | 50.97 | 93.91 | 24.66 | 34.34 |

Saved Model

Our final model weights can be downloaded here (a loading sketch follows below).
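
A minimal sketch of inspecting the downloaded checkpoint with PyTorch; the filename is hypothetical and the checkpoint layout may differ:

```python
import torch

# Hypothetical filename for the downloaded checkpoint.
state_dict = torch.load("reveca_final.pth", map_location="cpu")

# Inspect what the checkpoint contains before binding it to a model.
print(list(state_dict.keys())[:10])
```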

Citation