Chimera

Environment

We recommend installing all dependencies in a separate Conda environment, or in Docker.

GPU Support

This code will run with or without CUDA, but we recommend using a machine with CUDA.

Installation

Execute setup.sh. This installs the pip dependencies as well as OpenNMT.

Demo

To run the demo, execute server/server.py, preferably on a machine with a GPU. Running it for the first time will process the data and train the models, then expose a server for you to play with.

Enriched Corpus

We enrich the corpus with data we create in pre-processing, and release it in JSON format.

Training and Development sets can be found in the git-assets/enriched/ directory.
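Since the enriched corpus is released as plain JSON, it can be inspected with a few lines of Python. The record below is a hypothetical illustration; the actual field names and schema of the released files may differ:

```python
import json

# Hypothetical example record; the real schema of the enriched corpus may differ.
sample = '''
{
  "rdf": [["John_Doe", "birthPlace", "London"]],
  "text": "John Doe was born in London.",
  "plan": "John_Doe > birthPlace > London"
}
'''

record = json.loads(sample)
print(record["text"])      # the reference sentence
print(len(record["rdf"]))  # number of RDF triples in this entry
```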


Process

For training, the main pipeline consists of these sub-pipelines:

  1. Preprocess Training (both train and dev sets)
    1. Load the data-set
    2. Convert RDFs to graphs
    3. Fix misspellings
    4. Locate entities in the text
    5. Match plans for each graph and reference
    6. Tokenize the plans and reference sentences
  2. Train Model
    1. Initialize model
    2. Pre-process training data
    3. Train Model
    4. Find best checkpoint, chart all checkpoints
  3. Learn Score
    1. Get good plans from training set
    2. Learn Relation-Direction Expert
    3. Learn Global-Direction Expert
    4. Learn Splitting-Tendencies Expert
    5. Learn Relation-Transitions Expert
    6. Create Product of Experts
  4. Preprocess Test Set
    1. Load the data-set
    2. Convert RDFs to graphs
    3. Fix misspellings
    4. Generate best plan
    5. Tokenize plans & sentences
  5. Translate
    1. Translate test plans into text
    2. Post-process translated texts
    3. Save Translations to file (for human reference)
  6. Evaluate model performance
    1. Evaluate test reader

When the main pipeline runs, every sub-pipeline result is cached. If part of the cache is removed, the pipeline resumes from the first un-cached step rather than starting over.
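The caching behavior can be pictured with a small sketch. The decorator, file names, and cache location here are illustrative, not the project's actual cache implementation:

```python
import os
import pickle

CACHE_DIR = "cache"  # hypothetical cache location, not the project's actual one

def cached(name):
    """Re-run a pipeline step only when its result is not already on disk."""
    def wrap(step):
        def run(*args):
            path = os.path.join(CACHE_DIR, name + ".pkl")
            if os.path.exists(path):  # cache hit: load the result, skip the step
                with open(path, "rb") as f:
                    return pickle.load(f)
            result = step(*args)      # cache miss: run the step and store it
            os.makedirs(CACHE_DIR, exist_ok=True)
            with open(path, "wb") as f:
                pickle.dump(result, f)
            return result
        return run
    return wrap

calls = {"n": 0}  # counts how often the underlying step actually runs

@cached("tokenize")
def tokenize(sentences):
    calls["n"] += 1
    return [s.split() for s in sentences]

first = tokenize(["a simple plan"])
second = tokenize(["a simple plan"])  # served from the cache; step not re-run
```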

Note: by default, all pipelines are muted, meaning their output is not printed to the screen.

Example

Let's define the planner to be:

naive_planner = NaivePlanner(WeightedProductOfExperts([
    RelationDirectionExpert,
    GlobalDirectionExpert,
    SplittingTendenciesExpert,
    RelationTransitionsExpert
]))
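Conceptually, a weighted product of experts scores a candidate plan by multiplying each expert's score raised to that expert's weight. The function below is an illustrative sketch of that combination rule, not the project's actual WeightedProductOfExperts class:

```python
import math

def weighted_product_of_experts(scores, weights):
    """Combine per-expert scores s_i with weights w_i as prod(s_i ** w_i).

    Summing in log space avoids numeric underflow when many experts
    (each scoring in (0, 1]) are multiplied together.
    """
    log_score = sum(w * math.log(s) for s, w in zip(scores, weights))
    return math.exp(log_score)

# Four experts (e.g. relation-direction, global-direction, splitting-tendencies,
# relation-transitions) scoring one candidate plan, all weighted equally:
score = weighted_product_of_experts([0.9, 0.8, 0.7, 0.6], [1.0, 1.0, 1.0, 1.0])
```

A plan that any single expert scores near zero is vetoed by the product, which is the usual motivation for a product (rather than a sum) of experts.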

WebNLG

Set the config parameter to Config(reader=WebNLGDataReader, planner=naive_planner).

Output when running for the first time: First Run Pipeline

Output when running for the second time (runs for just a few seconds, loading the caches): Second Run Pipeline

The expected result (shown on screen), as reported by multi-bleu.perl, is around:

Delexicalized WebNLG

This dataset does not use a heuristic for entity matching; instead, it was constructed manually. As a result it is of higher quality, and a correct plan-match is easier to find at train time.
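To illustrate what delexicalization means here, a toy sketch that replaces matched entity mentions with numbered placeholders (the placeholder format and function are assumptions for illustration only):

```python
def delexicalize(text, entities):
    """Replace each entity mention with a numbered ENT placeholder.

    In the actual dataset, delexicalization can also strip articles and
    other words surrounding the entity, not just the mention itself.
    """
    for i, entity in enumerate(entities):
        text = text.replace(entity, f"ENT_{i}")
    return text

out = delexicalize("John Doe was born in the city of London.",
                   ["John Doe", "London"])
# "ENT_0 was born in the city of ENT_1."
```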

Set the config parameter to Config(reader=DelexWebNLGDataReader, test_reader=WebNLGDataReader, planner=naive_planner).

The expected result is around:

We attribute the lower BLEU to the fact that the delexicalization also removes articles and other surrounding text. Without proper referring-expression generation, the texts have better structure but worse fluency.

Literature

This code is based on the following papers:

Citations

@inproceedings{step-by-step,
    title = "{S}tep-by-Step: {S}eparating Planning from Realization in Neural Data-to-Text Generation",
    author = "Amit Moryossef and Yoav Goldberg and Ido Dagan",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/N19-1236",
    pages = "2267--2277",
}

@inproceedings{step-by-step-improvements,
    title = "Improving Quality and Efficiency in Plan-based Neural Data-to-Text Generation",
    author = "Amit Moryossef and Ido Dagan and Yoav Goldberg",
    booktitle = "Proceedings of the 12th International Conference on Natural Language
               Generation, {INLG} 2019, Tokyo, Japan, October 29 - November 1, 2019",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/volumes/W19-86/",
    pages = "377--382",
}