MTData

MTData automates the collection and preparation of machine translation (MT) datasets. It provides CLI and Python APIs that can be used to prepare MT experiments.

This tool knows where to download many public MT datasets from, how to extract their various archive and file formats, and how to parse them into parallel sentences, so you can spend your time on experiments rather than on data wrangling.

Installation

# Option 1: from pypi
pip install -I mtdata
# To install a specific version, get version number from https://pypi.org/project/mtdata/#history
pip install mtdata==[version]

# Option 2: install from latest master branch
pip install -I git+https://github.com/thammegowda/mtdata


# Option 3: for development/editable mode
git clone https://github.com/thammegowda/mtdata
cd mtdata
pip install --editable .

Current Status:

We have added some commonly used datasets, and you are welcome to add more! Here is a summary of the datasets from various sources (updated: Feb 2022).

Source         Dataset Count
OPUS           151,753
Flores         51,714
Microsoft      8,128
Leipzig        5,893
Neulab         4,455
Statmt         1,784
Facebook       1,617
AllenAi        1,611
ELRC           1,575
EU             1,178
Tilde          519
LinguaTools    253
Anuvaad        196
AI4Bharath     192
ParaCrawl      127
Lindat         56
UN             30
JoshuaDec      29
StanfordNLP    15
ParIce         8
LangUk         5
Phontron       4
NRC_CA         4
KECL           3
IITB           3
WAT            3
Masakhane      2
Total          231,157

Use Cases

CLI Usage

mtdata list

Lists datasets that are known to this tool.

mtdata list -h
usage: __main__.py list [-h] [-l L1-L2] [-n [NAME ...]] [-nn [NAME ...]] [-f] [-o OUT]

optional arguments:
  -h, --help            show this help message and exit
  -l L1-L2, --langs L1-L2
                        Language pairs; e.g.: deu-eng (default: None)
  -n [NAME ...], --names [NAME ...]
                        Name of dataset set; eg europarl_v9. (default: None)
  -nn [NAME ...], --not-names [NAME ...]
                        Exclude these names (default: None)
  -f, --full            Show Full Citation (default: False)

# List everything; add | cut -f1 to see the ID column only
mtdata list | cut -f1

# List a lang pair 
mtdata list -l deu-eng 

# List a dataset by name(s)
mtdata list -n europarl
mtdata list -n europarl news_commentary

# list by both language pair and dataset name
 mtdata list -l deu-eng -n europarl news_commentary newstest_deen  | cut -f1
    Statmt-europarl-9-deu-eng
    Statmt-europarl-7-deu-eng
    Statmt-news_commentary-14-deu-eng
    Statmt-news_commentary-15-deu-eng
    Statmt-news_commentary-16-deu-eng
    Statmt-newstest_deen-2014-deu-eng
    Statmt-newstest_deen-2015-deu-eng
    Statmt-newstest_deen-2016-deu-eng
    Statmt-newstest_deen-2017-deu-eng
    Statmt-newstest_deen-2018-deu-eng
    Statmt-newstest_deen-2019-deu-eng
    Statmt-newstest_deen-2020-deu-eng
    Statmt-europarl-10-deu-eng
    OPUS-europarl-8-deu-eng

# get citation of a dataset (if available in index.py)
mtdata list -l deu-eng -n newstest_deen --full

Dataset ID

Dataset IDs are standardized to this format:
<Group>-<name>-<version>-<lang1>-<lang2>
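
For example, Statmt-newstest_deen-2018-deu-eng refers to the newstest_deen dataset, version 2018, for the deu-eng pair, indexed under the Statmt group. As a rough illustration (this is not an mtdata API; it assumes no part contains a hyphen, which holds for the IDs shown on this page), such an ID can be split like this:

# Hypothetical snippet, not part of mtdata: split a dataset ID into its parts.
# Assumes none of the parts contain a hyphen, which holds for the IDs shown here.
dataset_id = "Statmt-newstest_deen-2018-deu-eng"
group, name, version, lang1, lang2 = dataset_id.split("-")
print(group, name, version, lang1, lang2)
# -> Statmt newstest_deen 2018 deu eng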

mtdata get

This command downloads the specified datasets for a language pair into a directory. You will have to make an explicit choice of datasets for the --train and --test arguments.

mtdata get -h
python -m mtdata get -h
usage: __main__.py get [-h] -l L1-L2 [-tr [ID ...]] [-ts [ID ...]] [-dv ID] [--merge | --no-merge] [--compress] -o OUT_DIR

optional arguments:
  -h, --help            show this help message and exit
  -l L1-L2, --langs L1-L2
                        Language pairs; e.g.: deu-eng (default: None)
  -tr [ID ...], --train [ID ...]
                        Names of datasets separated by space, to be used for *training*.
                            e.g. -tr Statmt-news_commentary-16-deu-eng europarl_v9 .
                             To concatenate all these into a single train file, set --merge flag. (default: None)
  -ts [ID ...], --test [ID ...]
                        Names of datasets separated by space, to be used for *testing*.
                            e.g. "-ts Statmt-newstest_deen-2019-deu-eng Statmt-newstest_deen-2020-deu-eng ".
                            You may also use shell expansion if your shell supports it.
                            e.g. "-ts Statmt-newstest_deen-20{19,20}-deu-eng"  (default: None)
  -dv ID, --dev ID     Dataset to be used for development (aka validation).
                            e.g. "-dv Statmt-newstest_deen-2017-deu-eng" (default: None)
  --merge               Merge train into a single file (default: False)
  --no-merge            Do not Merge train into a single file (default: True)
  --compress            Keep the files compressed (default: False)
  -o OUT_DIR, --out OUT_DIR
                        Output directory name (default: None)

Quickstart / Example

See what datasets are available for deu-eng

$ mtdata list -l deu-eng | cut -f1  # see available datasets
    Statmt-commoncrawl_wmt13-1-deu-eng
    Statmt-europarl_wmt13-7-deu-eng
    Statmt-news_commentary_wmt18-13-deu-eng
    Statmt-europarl-9-deu-eng
    Statmt-europarl-7-deu-eng
    Statmt-news_commentary-14-deu-eng
    Statmt-news_commentary-15-deu-eng
    Statmt-news_commentary-16-deu-eng
    Statmt-wiki_titles-1-deu-eng
    Statmt-wiki_titles-2-deu-eng
    Statmt-newstest_deen-2014-deu-eng
    ....[truncated]

Get these datasets and store them under the directory data/deu-eng

 $ mtdata get -l deu-eng --out data/deu-eng --merge \
     --train Statmt-europarl-10-deu-eng Statmt-news_commentary-16-deu-eng \
     --dev Statmt-newstest_deen-2017-deu-eng  --test Statmt-newstest_deen-20{18,19,20}-deu-eng
    # ...[truncated]   
    INFO:root:Train stats:
    {
      "total": 2206240,
      "parts": {
        "Statmt-news_commentary-16-deu-eng": 388482,
        "Statmt-europarl-10-deu-eng": 1817758
      }
    }
    INFO:root:Dataset is ready at deu-eng

To reproduce this dataset in the future, or to let others reproduce it, refer to <out-dir>/mtdata.signature.txt:

$ cat deu-eng/mtdata.signature.txt
mtdata get -l deu-eng -tr Statmt-europarl-10-deu-eng Statmt-news_commentary-16-deu-eng \
   -ts Statmt-newstest_deen-2018-deu-eng Statmt-newstest_deen-2019-deu-eng Statmt-newstest_deen-2020-deu-eng \
   -dv Statmt-newstest_deen-2017-deu-eng --merge -o <out-dir>
mtdata version 0.3.0-dev

See what the above command has accomplished:

$ tree  data/deu-eng/
├── dev.deu -> tests/Statmt-newstest_deen-2017-deu-eng.deu
├── dev.eng -> tests/Statmt-newstest_deen-2017-deu-eng.eng
├── mtdata.signature.txt
├── test1.deu -> tests/Statmt-newstest_deen-2020-deu-eng.deu
├── test1.eng -> tests/Statmt-newstest_deen-2020-deu-eng.eng
├── test2.deu -> tests/Statmt-newstest_deen-2018-deu-eng.deu
├── test2.eng -> tests/Statmt-newstest_deen-2018-deu-eng.eng
├── test3.deu -> tests/Statmt-newstest_deen-2019-deu-eng.deu
├── test3.eng -> tests/Statmt-newstest_deen-2019-deu-eng.eng
├── tests
│   ├── Statmt-newstest_deen-2017-deu-eng.deu
│   ├── Statmt-newstest_deen-2017-deu-eng.eng
│   ├── Statmt-newstest_deen-2018-deu-eng.deu
│   ├── Statmt-newstest_deen-2018-deu-eng.eng
│   ├── Statmt-newstest_deen-2019-deu-eng.deu
│   ├── Statmt-newstest_deen-2019-deu-eng.eng
│   ├── Statmt-newstest_deen-2020-deu-eng.deu
│   └── Statmt-newstest_deen-2020-deu-eng.eng
├── train-parts
│   ├── Statmt-europarl-10-deu-eng.deu
│   ├── Statmt-europarl-10-deu-eng.eng
│   ├── Statmt-news_commentary-16-deu-eng.deu
│   └── Statmt-news_commentary-16-deu-eng.eng
├── train.deu
├── train.eng
├── train.meta.gz
└── train.stats.json

Recipes

Since v0.3.1

A recipe is a set of datasets nominated for train, dev, and test, meant to improve the reproducibility of experiments. Recipes are loaded from:

  1. Default: mtdata/recipe/recipes.yml from source code
  2. Cache dir: $MTDATA/mtdata.recipes.yml, where $MTDATA defaults to ~/.mtdata
  3. Current dir: All files matching the glob: $PWD/mtdata.recipes*.yml
    • If current dir is not preferred, export MTDATA_RECIPES=/path/to/dir
    • Alternatively, MTDATA_RECIPES=/path/to/dir mtdata list-recipe

See mtdata/recipe/recipes.yml for the format and examples.

mtdata list-recipe  # see all recipes
mtdata get-recipe -ri <recipe_id> -o <out_dir>  # get recipe, recreate dataset

Language Name Standardization

ISO 639-3

Internally, all language codes are mapped to ISO 639-3 codes. The mapping can be inspected with python -m mtdata.iso or mtdata-iso:

$  mtdata-iso -h
usage: python -m mtdata.iso [-h] [-b] [langs [langs ...]]

ISO 639-3 lookup tool

positional arguments:
  langs        Language code or name that needs to be looked up. When no
               language code is given, all languages are listed.

optional arguments:
  -h, --help   show this help message and exit
  -b, --brief  be brief; do crash on error inputs

# list all 7000+ languages and their 3 letter codes
$ mtdata-iso    # python -m mtdata.iso 
...

# lookup codes for some languages
$ mtdata-iso ka kn en de xx english german
Input   ISO639_3        Name
ka      kat     Georgian
kn      kan     Kannada
en      eng     English
de      deu     German
xx      -none-  -none-
english eng     English
german  deu     German

# Print no header, and crash on error; 
$ mtdata-iso xx -b
Exception: Unable to find ISO 639-3 code for 'xx'. Please run
python -m mtdata.iso | grep -i <name>
to know the 3 letter ISO code for the language.

To use the Python API:

from mtdata.iso import iso3_code
print(iso3_code('en', fail_error=True))
print(iso3_code('eNgLIsH', fail_error=True))  # case doesn't matter
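
The same lookup also works in a batch. A small sketch, using only the iso3_code call shown above and the inputs from the earlier CLI example:

from mtdata.iso import iso3_code

# Look up several codes and names, mirroring the mtdata-iso CLI example above
for query in ["ka", "kn", "english", "german"]:
    print(query, "->", iso3_code(query, fail_error=True))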

BCP-47

Since v0.3.0

We used ISO 639-3 codes from the beginning; however, we soon faced the limitation that ISO 639-3 cannot distinguish the script and region variants of a language. So we upgraded to BCP-47-like language tags in v0.3.0.

Our tags are of the form xxx_Yyyy_ZZ, where:

Pattern   Purpose    Standard     Length          Case        Required
xxx       Language   ISO 639-3    three letters   lowercase   mandatory
Yyyy      Script     ISO 15924    four letters    Titlecase   optional
ZZ        Region     ISO 3166-1   two letters     CAPITALS    optional

Notes:

  • The language part is mandatory; the script and region parts are optional.
  • When a script is the default one for a language, it is suppressed in the standardized tag (e.g. eng_Latn becomes eng); see the -s express option under Pipe Mode below to retain it.

Example: to inspect the parsing/mapping, use python -m mtdata.iso.bcp47 <args> or mtdata-bcp47 <args>:

mtdata-bcp47 eng English en-US en-GB eng-Latn kan Kannada-Deva hin-Deva kan-Latn kan-in kn-knda-in
INPUT         STD       LANG  SCRIPT  REGION
eng           eng       eng   None    None
English       eng       eng   None    None
en-US         eng_US    eng   None    US
en-GB         eng_GB    eng   None    GB
eng-Latn      eng       eng   None    None
kan           kan       kan   None    None
Kannada-Deva  kan_Deva  kan   Deva    None
hin-Deva      hin       hin   None    None
kan-Latn      kan_Latn  kan   Latn    None
kan-in        kan_IN    kan   None    IN
kn-knda-in    kan_IN    kan   None    IN

Pipe Mode

# --pipe/-p : maps stdin -> stdout  
# -s express : expresses scripts (unlike BCP47, which suppresses the default script)
$ echo -e "en\neng\nfr\nfra\nara\nkan\ntel\neng_Latn\nhin_deva"|  mtdata-bcp47 -p -s express
eng_Latn
eng_Latn
fra_Latn
fra_Latn
ara_Arab
kan_Knda
tel_Telu
eng_Latn
hin_Deva

Python API for BCP47 Mapping

from mtdata.iso.bcp47 import bcp47
tag = bcp47("en_US")
print(*tag)  # tag is a tuple
print(f"{tag}")  # str(tag) gets standardized string

How to Contribute:

Change Cache Directory:

The default cache directory is $HOME/.mtdata (the $MTDATA directory mentioned in the Recipes section). It can grow quite large when you download a lot of datasets with this tool.

To change it:

mv $HOME/.mtdata /path/to/new/place
ln -s /path/to/new/place $HOME/.mtdata

Performance Optimization Tips

Run tests

Tests are located in the tests/ directory. To run all the tests:

python -m pytest

Developers and Contributors:

See - https://github.com/thammegowda/mtdata/graphs/contributors

Citation

https://aclanthology.org/2021.acl-demo.37/

@inproceedings{gowda-etal-2021-many,
    title = "Many-to-{E}nglish Machine Translation Tools, Data, and Pretrained Models",
    author = "Gowda, Thamme  and
      Zhang, Zhao  and
      Mattmann, Chris  and
      May, Jonathan",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-demo.37",
    doi = "10.18653/v1/2021.acl-demo.37",
    pages = "306--316",
}

Disclaimer on Datasets

This tool downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or make any claims regarding licenses to use these datasets. It is your responsibility to determine whether you have permission to use a dataset under its license. We request all users of this tool to cite the original creators of the datasets; citations may be obtained from mtdata list -n <NAME> -l <L1-L2> --full.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!