Deep Character-Level Neural Machine Translation

We implement Deep Character-Level Neural Machine Translation By Learning Morphology (DCNMT) based on Theano and Blocks. Please install the required packages according to the Blocks documentation before running our program. Note that you should use Python 3 instead of Python 2; the code does not work properly under Python 2.

It is an improved version of DCNMT. The architecture of DCNMT, which is a single, large neural network, is shown in the following figure.

[Figure: DCNMT architecture]

Please refer to the paper for the details.

Deep Character-Level Neural Machine Translation By Learning Morphology (OpenReview, submitted to ICLR 2017) by Shenjian Zhao and Zhihua Zhang

Training

If you want to train your own model, please prepare a parallel corpus, such as the corpora from WMT. A GPU with 12 GB of memory will be helpful. You can run bash train.sh or follow these steps.

  1. Download the required scripts (tokenizer.perl, multi-bleu.perl) and the nonbreaking_prefixes from mose_git (the Moses repository).
  2. Download the datasets, then tokenize and shuffle the corpus.
  3. Create the character list for both languages using create_vocab.py in the preprocess folder. Don't forget to pass the language setting, vocabulary size and file name to this script (see the sketch after this list).
  4. Create a data folder, and put vocab.*.*.pkl and the *.shuf files in the data folder.
  5. Prepare the tokenized test sets, and put them in the data folder.
  6. Edit configurations.py, and run python training_adam.py. It will take one to two weeks to train a good model.
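
As a reference for step 3, here is a minimal sketch of what a character-vocabulary builder along the lines of create_vocab.py could look like. The actual script in the preprocess folder is authoritative; the argument names (--lang, --vocab-size, --corpus, --output), the special symbols and the pickle layout below are assumptions for illustration only.

```python
# Hypothetical sketch of a character-vocabulary builder; the real
# create_vocab.py in the preprocess folder is the authoritative version.
import argparse
import collections
import pickle


def build_char_vocab(corpus_path, vocab_size):
    """Count characters in a tokenized corpus and keep the most frequent ones."""
    counter = collections.Counter()
    with open(corpus_path, encoding='utf-8') as f:
        for line in f:
            counter.update(line.rstrip('\n'))
    # Reserve a few ids for special symbols (assumed layout, not the project's).
    specials = ['<pad>', '<unk>', '<bow>', '<eow>']
    chars = [c for c, _ in counter.most_common(vocab_size - len(specials))]
    return {sym: idx for idx, sym in enumerate(specials + chars)}


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--lang', required=True)        # e.g. en or cs
    parser.add_argument('--vocab-size', type=int, default=200)
    parser.add_argument('--corpus', required=True)      # tokenized, shuffled corpus
    parser.add_argument('--output', required=True)      # e.g. vocab.en-cs.en.pkl
    args = parser.parse_args()

    vocab = build_char_vocab(args.corpus, args.vocab_size)
    with open(args.output, 'wb') as f:
        pickle.dump(vocab, f)
    print('Wrote %d symbols for %s to %s' % (len(vocab), args.lang, args.output))
```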

You need to decrease the learning rate during training, or set the learning rate to 1e-4, which may result in a longer training time. To save training time, you may want to perform validation on another machine, either manually or with a script. The model is dumped every 20,000 updates by default. For example, once the model has been trained for 800,000 updates, you can run python testing.py dcnmt_en2cs_800000 to validate its performance.
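
If you prefer to automate validation, a simple watcher along the following lines could run testing.py on every new checkpoint. The checkpoint prefix (dcnmt_en2cs_) follows the example above, but the glob pattern, directory and polling interval are assumptions you may need to adapt.

```python
# Hypothetical validation watcher: runs testing.py on each new checkpoint dump.
# Assumes checkpoints appear in the current directory as dcnmt_en2cs_<updates>;
# adjust the glob pattern to match your configurations.py.
import glob
import subprocess
import time

seen = set()
while True:
    for ckpt in sorted(glob.glob('dcnmt_en2cs_*')):
        if ckpt in seen:
            continue
        seen.add(ckpt)
        print('Validating', ckpt)
        subprocess.run(['python', 'testing.py', ckpt], check=False)
    time.sleep(600)  # poll every 10 minutes
```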

Testing

We have trained several models, which are listed in the following table. However, because of the limited availability of GPUs and the long training time (two weeks or more), we did not have enough time and resources to train on more language pairs. If you run into any trouble, please open an issue or email me directly at echo c3dvcmQueW9ya0BnbWFpbC5jb20K | base64 -d. Thanks!

| language pair | dataset | batch_size | updates | BLEU_dev | BLEU_test |
| --- | --- | --- | --- | --- | --- |
| en-cs | wmt15 | 56 | 800,000 | 17.89 | 16.96 |
| cs-en | wmt15 | 56 | ~1,270,000 | 23.15~23.24 | 22.33~22.48 |
| en-fr | same as RNNSearch | 72 | ~480,000 | 29.31 | 30.56 |

These models are evaluated on newstest2015 (BLEU_test) using the model that performs best on newstest2013 (BLEU_dev). You can download these models from Dropbox, then put them (dcnmt_*, data, configurations.py) in this directory. To perform testing, run python testing.py dcnmt_en2cs_800000, or the corresponding command for the other language pairs. It takes about an hour to translate 3000 sentences on a moderate GPU.
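
If you want to score the translations yourself, BLEU can be computed with the multi-bleu.perl script downloaded earlier. The helper below is only a sketch; the hypothesis and reference file names are placeholders, so substitute whatever your configuration actually produces.

```python
# Hypothetical scoring helper: pipes a hypothesis file into multi-bleu.perl.
# File names are placeholders; adapt them to your own setup.
import subprocess


def score_bleu(hypothesis_path, reference_path):
    """Return the raw output of multi-bleu.perl for the given files."""
    with open(hypothesis_path) as hyp:
        result = subprocess.run(
            ['perl', 'multi-bleu.perl', reference_path],
            stdin=hyp, capture_output=True, text=True, check=True)
    return result.stdout.strip()


if __name__ == '__main__':
    print(score_bleu('newstest2015.trans.cs', 'data/newstest2015.tok.cs'))
```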

Subword Detection

We apply our trained word encoder to Penn Treebank Line 1, and we find that the word encoder is able to detect the boundaries of subword units. As shown in the following figures, "consumers", "monday", "football" and "greatest" are segmented into "consum-er-s", "mon-day", "foot-ball" and "great-est" respectively. Since there is no explicit delimiter, detecting the subword units is more difficult.

[Figures: segmentation examples pt1, pt2, pt3]

Updating...