# Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval
Code to evaluate the models from "Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval" (Niluthpol C. Mithun, Juncheng Li, Florian Metze, and Amit K. Roy-Chowdhury, ICMR 2018).
## Dependencies
This code is written in Python. The following packages are required:
- Python 2.7
- PyTorch (>0.4)
- Tensorboard
- NLTK Punkt Sentence Tokenizer (requires a one-time data download; see the snippet after this list)
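
The Punkt model data ships separately from the NLTK package itself. Assuming the standard NLTK download API, it can be fetched once before running the evaluation:

```python
# One-time download of the Punkt sentence tokenizer data
# (standard NLTK download call).
import nltk
nltk.download('punkt')
```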
## Evaluate Models
- Download the data and models from https://drive.google.com/drive/folders/1t3MwiCR72HDo6XiPvWSZpenqv4CGjnKl
- To evaluate on the MSR-VTT dataset, run: `python test_weighted.py` (the sketch after this list shows how retrieval metrics of this kind are typically computed)
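
The evaluation reports standard cross-modal retrieval metrics such as Recall@K. As a point of reference, here is a minimal sketch of how Recall@K is typically computed from a query-by-video similarity matrix; the function name and the diagonal ground-truth convention are illustrative assumptions, not this repository's API:

```python
import numpy as np

def recall_at_k(sims, ks=(1, 5, 10)):
    """Recall@K for retrieval, given sims[i, j] = similarity between
    query i and video j; the ground truth for query i is video i."""
    n = sims.shape[0]
    # Indices of videos sorted by descending similarity for each query.
    order = np.argsort(-sims, axis=1)
    # Rank position (0 = best) of the ground-truth video per query.
    ranks = np.array([np.where(order[i] == i)[0][0] for i in range(n)])
    return {k: 100.0 * np.mean(ranks < k) for k in ks}

# Toy example with 5 query-video pairs and random similarities.
print(recall_at_k(np.random.rand(5, 5)))
```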
## Reference
If you use our code or models, please cite the following paper:
```
@inproceedings{mithun2018learning,
  title={Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval},
  author={Mithun, Niluthpol C and Li, Juncheng and Metze, Florian and Roy-Chowdhury, Amit K},
  booktitle={ICMR},
  year={2018},
  organization={ACM}
}
```
- The initial code borrows heavily from VSE++ (https://github.com/fartashf/vsepp)
- Contact: Niluthpol Chowdhury Mithun (nmith001@ucr.edu)