visDial.pytorch

Visual Dialog model in PyTorch

Introduction

This is the PyTorch implementation of our NIPS 2017 paper, "Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model".

Disclaimer

This is a reimplementation of our visual dialog model in PyTorch. The original code was written during the first author's internship, and all results presented in the paper were obtained with that code, which cannot be released due to company restrictions. This project is an attempt to reproduce the results in our paper.

Citation

If you find this code useful, please cite the following paper:

@article{lu2017best,
    title={Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model},
    author={Lu, Jiasen and Kannan, Anitha and Yang, Jianwei and Parikh, Devi and Batra, Dhruv},
    journal={NIPS},
    year={2017}
}

Dependencies

  1. PyTorch. Install PyTorch following the official installation instructions for your platform, and make sure torchvision is installed as well (a minimal example is shown below).
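
For example, on most platforms both packages can be installed with pip (the exact command depends on your Python and CUDA setup, so check the official PyTorch instructions first):

pip install torch torchvision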

Evaluation

To evaluate the pre-trained models on the validation set, first use the download script to fetch the features and pre-trained models.

python script/download.py --path [path_to_download]
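
For example, to place the downloads in a local data directory (the directory name here is purely illustrative):

python script/download.py --path data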

After downloading the features and pre-trained models, you can run the evaluation scripts with the following commands:

python eval/eval_D.py --data_dir [path_to_root] --model_path [path_to_root]/save/HCIAE-D-MLE.pth --cuda
python eval/eval_G.py --data_dir [path_to_root] --model_path [path_to_root]/save/HCIAE-G-MLE.pth --cuda
python eval/eval_G_DIS.py --data_dir [path_to_root] --model_path [path_to_root]/save/HCIAE-G-DIS.pth --cuda
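
For instance, assuming [path_to_root] is the same directory the features and models were downloaded to (data in the illustrative example above), the first command would read:

python eval/eval_D.py --data_dir data --model_path data/save/HCIAE-D-MLE.pth --cuda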

You should get results similar to those reported in the paper :)

Train a visual dialog model

Preparation

First download the features from here.

Training

python train/train_D.py --cuda
python train/train_G.py --cuda
python train/train_all.py --cuda --update LM