# Emergent Translation in Multi-Agent Communication
PyTorch implementation of the models described in the paper *Emergent Translation in Multi-Agent Communication*.
We present code for training and decoding both word- and sentence-level models and baselines, as well as preprocessed datasets.
## Dependencies
### Python
- Python 2.7
- PyTorch 0.2
- NumPy
### GPU
- CUDA (we recommend the latest version; version 8.0 was used in all our experiments)
### Related code
- For preprocessing, we used scripts from Moses and Subword-NMT.
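For reference, a typical pipeline with these tools looks roughly like the sketch below; the file names and the BPE vocabulary size are illustrative and not necessarily the exact settings used for our preprocessed data.

```
# Tokenize with the Moses tokenizer (illustrative file names)
$ perl tokenizer.perl -l en < train.en > train.tok.en

# Learn and apply a BPE vocabulary with Subword-NMT (10k merge operations shown as an example)
$ python learn_bpe.py -s 10000 < train.tok.en > bpe.codes
$ python apply_bpe.py -c bpe.codes < train.tok.en > train.bpe.en
```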
## Downloading Datasets
The original corpora can be downloaded from their respective sources (Bergsma500, Multi30k, MS COCO). The preprocessed corpora we used are listed below.
| Dataset | Preprocessed data |
| --- | --- |
| Bergsma500 | Data |
| Multi30k | Data |
| MS COCO | Data |
## Before you run the code
- Download the datasets and place them in `/data/word` (Bergsma500) and `/data/sentence` (Multi30k and MS COCO).
- Set the correct paths in `scr_path()` from `/src/word/util.py`, and in `scr_path()`, `multi30k_reorg_path()` and `coco_path()` from `/src/sentence/util.py`.
## Word-level Models
### Running nearest neighbour baselines
```
$ python word/bergsma_bli.py
```
### Running our models
```
$ python word/train_word_joint.py --l1 <L1> --l2 <L2>
```

where `<L1>` and `<L2>` are any of {`en`, `de`, `es`, `fr`, `it`, `nl`}.
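For example, an English–German word-level model can be trained with:

```
$ python word/train_word_joint.py --l1 en --l2 de
```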
## Sentence-level Models
### Baseline 1: Nearest neighbour

```
$ python sentence/baseline_nn.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG>
```
### Baseline 2: NMT with neighbouring sentence pairs

```
$ python sentence/nmt.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG> --nn_baseline
```
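As a concrete example, the two neighbour-based baselines could be run on Multi30k Task 1 as follows; the `en`/`de` language codes are an assumption here, mirroring the word-level models.

```
$ python sentence/baseline_nn.py --dataset multi30k --task 1 --src en --trg de
$ python sentence/nmt.py --dataset multi30k --task 1 --src en --trg de --nn_baseline
```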
### Baseline 3: Nakayama and Nishida, 2017

```
$ python sentence/train_naka_encdec.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG> --train_enc_how <ENC_HOW> --train_dec_how <DEC_HOW>
```

where `<ENC_HOW>` is either `two` or `three`, and `<DEC_HOW>` is either `img`, `des`, or `both`.
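For instance, with `<ENC_HOW>` set to `two` and `<DEC_HOW>` set to `both` on Multi30k Task 1 (again assuming `en`/`de` as the language codes):

```
$ python sentence/train_naka_encdec.py --dataset multi30k --task 1 --src en --trg de --train_enc_how two --train_dec_how both
```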
### Our models

```
$ python sentence/train_seq_joint.py --dataset <DATASET> --task <TASK>
```

### Aligned NMT

```
$ python sentence/nmt.py --dataset <DATASET> --task <TASK> --src <SRC> --trg <TRG>
```
where `<DATASET>` is `multi30k` or `coco`, and `<TASK>` is either 1 or 2 (only applicable for Multi30k).
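For example, to train our model on Multi30k Task 1, and the aligned NMT model on the same setting (language codes `en`/`de` assumed as above):

```
$ python sentence/train_seq_joint.py --dataset multi30k --task 1
$ python sentence/nmt.py --dataset multi30k --task 1 --src en --trg de
```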
## Dataset & Related Code Attribution
- Moses is licensed under LGPL, and Subword-NMT is licensed under MIT License.
- MS COCO and Multi30k are licensed under Creative Commons.
## Citation
If you find the resources in this repository useful, please consider citing:
```
@inproceedings{Lee:18,
  author    = {Jason Lee and Kyunghyun Cho and Jason Weston and Douwe Kiela},
  title     = {Emergent Translation in Multi-Agent Communication},
  year      = {2018},
  booktitle = {Proceedings of the International Conference on Learning Representations},
}
```