translatR

Lightweight tools for translation tasks with MXNet R.

Inspiration taken from the AWS Sockeye project.

Getting started

Prepare data from the WMT datasets using Preproc_wmt15_train.Rmd. This creates source and target matrices of word indices along with the associated dictionaries. Data preparation relies mainly on the data.table package.
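
As a minimal sketch of this indexing step (the function and token names below are illustrative, not those used in Preproc_wmt15_train.Rmd):

library(data.table)

# Illustrative only: build a word -> index dictionary from tokenised
# sentences and map a sentence to its vector of word indices.
build_dict <- function(sentences, min_count = 2) {
  tokens <- data.table(word = unlist(strsplit(sentences, " ", fixed = TRUE)))
  counts <- tokens[, .N, by = word][N >= min_count][order(-N)]
  # Reserve index 1 for the unknown-word token
  c("<unk>" = 1L, setNames(seq_len(nrow(counts)) + 1L, counts$word))
}

to_indices <- function(sentence, dict) {
  ids <- dict[strsplit(sentence, " ", fixed = TRUE)[[1]]]
  ids[is.na(ids)] <- dict["<unk>"]
  unname(ids)
}

dict <- build_dict(c("the cat sat", "the dog sat"), min_count = 1)
to_indices("the cat ran", dict)  # unseen "ran" maps to <unk>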

An RNN encoder-decoder training demo is provided in NMT_rnn_rnn.Rmd, and a CNN-RNN architecture is shown in NMT_cnn_rnn.Rmd.

Performance

Performance during training is tracked with the perplexity metric. Once training is complete, the above scripts show how to perform batch inference on test data, specifically the official WMT test set. sacreBLEU is then used to compute the BLEU score, providing a clear comparison point against the metric typically reported in publications.
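
Perplexity here is the exponential of the average per-token negative log-likelihood. A minimal base R illustration (not the metric implementation used in the scripts):

# Illustrative: perplexity from the model probabilities assigned to the
# reference tokens (lower is better; 1.0 would be a perfect prediction).
perplexity <- function(token_probs) exp(-mean(log(token_probs)))

perplexity(c(0.9, 0.8, 0.95))  # ~1.13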

For example, to score the generated translations against the WMT14 English-French references:

cat data/trans_test_wmt_en_fr_rnn_72_Capital.txt | sacrebleu -t wmt14 -l en-fr

A BLEU score of 28.2 was obtained with the CNN-RNN architecture.

Features:

Encoders:

- RNN (NMT_rnn_rnn.Rmd)
- CNN (NMT_cnn_rnn.Rmd)

Decoders:

- RNN

Attention:

Architectures are fully attention-based. All information is passed from the encoder to the decoder through the encoded sequences, weighted by an attention mechanism (RNN hidden states are not carried forward from the encoder to the decoder).

Attention modules are defined in attention.R. Three attention approaches are supported, all described in Luong et al., 2015:

- dot
- general
- concat

The decoder network takes an attention module as a parameter, along with an encoder graph. All attention modules are implemented using the query-key-value approach.
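
As a plain R sketch of the dot variant under the query-key-value view (illustrative only; the actual modules in attention.R are MXNet symbol graphs):

# Illustrative dot attention (Luong et al., 2015):
# query: decoder state of length d
# keys, values: encoder outputs, seq_len x d matrices
softmax <- function(x) { e <- exp(x - max(x)); e / sum(e) }

dot_attention <- function(query, keys, values) {
  scores  <- as.vector(keys %*% query)         # one score per source position
  weights <- softmax(scores)                   # attention distribution over source
  context <- as.vector(t(values) %*% weights)  # weighted sum of the values
  list(context = context, weights = weights)
}

# Example: 4 source positions, dimension 3
keys   <- matrix(rnorm(12), nrow = 4, ncol = 3)
values <- keys        # encoder outputs serve as both keys and values
query  <- rnorm(3)
att <- dot_attention(query, keys, values)
att$weights           # sums to 1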

To do:

- Add a tutorial to Examples of application of RNN.