# Context-Aware Visual Policy Network for Sequence-Level Image Captioning
This repository contains the code for the following papers:
- Daqing Liu, Zheng-Jun Zha, Hanwang Zhang, Yongdong Zhang, Feng Wu. Context-Aware Visual Policy Network for Sequence-Level Image Captioning. In ACM MM, 2018. (PDF)
- Zheng-Jun Zha, Daqing Liu, Hanwang Zhang, Yongdong Zhang, Feng Wu. Context-Aware Visual Policy Network for Fine-Grained Image Captioning. In TPAMI, 2019. (Extended journal version. PDF)
## Installation

- Install PyTorch and torchvision:

```bash
pip3 install torch torchvision
```
- Clone with Git, and then enter the root directory:

```bash
git clone --recursive https://github.com/daqingliu/CAVP.git && cd CAVP
```
- Install requirements for evaluation metrics:
```bash
apt install default-jdk
cd coco-caption && bash get_stanford_models.sh && cd ..
```
## Download Data
- Download the image features (tsv files extracted by bottom-up-attention) into `data` and unzip them.
- Convert the tsv files into npz files that can be read by the dataloader (a sketch of this conversion is shown after this list):

```bash
python misc/convert_tsv_to_npz.py
```
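The actual conversion is implemented in `misc/convert_tsv_to_npz.py`; the snippet below is only a minimal sketch of what such a conversion typically looks like, assuming the standard bottom-up-attention tsv field layout. The output key names (`feat`, `box`) and file paths are illustrative assumptions, not this script's real interface.

```python
import base64
import csv
import os
import sys

import numpy as np

csv.field_size_limit(sys.maxsize)  # the base64-encoded feature fields are very long

# Field layout of the bottom-up-attention tsv files.
FIELDNAMES = ['image_id', 'image_w', 'image_h', 'num_boxes', 'boxes', 'features']

def convert(tsv_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    with open(tsv_path) as f:
        for item in csv.DictReader(f, delimiter='\t', fieldnames=FIELDNAMES):
            num_boxes = int(item['num_boxes'])
            # Decode the base64-encoded float32 arrays and restore their shapes.
            boxes = np.frombuffer(base64.b64decode(item['boxes']),
                                  dtype=np.float32).reshape(num_boxes, 4)
            feats = np.frombuffer(base64.b64decode(item['features']),
                                  dtype=np.float32).reshape(num_boxes, -1)
            # One npz per image; the key names here are illustrative only.
            np.savez_compressed(os.path.join(out_dir, item['image_id']),
                                feat=feats, box=boxes)

if __name__ == '__main__':
    convert('data/trainval_36.tsv', 'data/features')  # paths are placeholders
```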
## Training and Evaluation
Simply run:

```bash
bash run_train.sh
bash run_eval.sh
```
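As background for the sequence-level objective (this repository builds on self-critical.pytorch, see Acknowledgements), the snippet below sketches the generic self-critical policy-gradient loss: a sampled caption's reward is baselined by the greedy caption's reward. It illustrates the general technique only, not the CAVP training loop; `model.decode`, `cider_reward`, and the tensor shapes are assumptions.

```python
import torch

def self_critical_loss(model, feats, refs, cider_reward):
    """Generic self-critical loss sketch.

    feats: image features for a batch; refs: reference captions;
    cider_reward(captions, refs) -> 1-D tensor of per-image rewards.
    model.decode is an assumed interface returning (captions, log_probs).
    """
    # Greedy decoding gives the baseline reward; no gradients are needed here.
    with torch.no_grad():
        greedy_caps, _ = model.decode(feats, sample=False)
        baseline = cider_reward(greedy_caps, refs)

    # Sampled decoding gives exploratory captions and their token log-probs.
    sampled_caps, log_probs = model.decode(feats, sample=True)  # log_probs: (B, T)
    reward = cider_reward(sampled_caps, refs)                   # reward:    (B,)

    # Policy gradient: raise the likelihood of samples that beat the baseline.
    advantage = (reward - baseline).to(log_probs.device).unsqueeze(1)
    return -(advantage * log_probs).mean()
```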
## Citation
```bibtex
@article{zha2019context,
  title={Context-aware visual policy network for fine-grained image captioning},
  author={Zha, Zheng-Jun and Liu, Daqing and Zhang, Hanwang and Zhang, Yongdong and Wu, Feng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2019},
}
```
## Acknowledgements
Part of this repository is built upon self-critical.pytorch.