# GRUEN for Evaluating Linguistic Quality of Generated Text
This repo implements the GRUEN metric from the paper *GRUEN for Evaluating Linguistic Quality of Generated Text* (Findings of EMNLP 2020).
## Table of Contents
- [Introduction](#introduction)
- [Code](#code)
- [Dataset](#dataset)
- [Related Papers](#related-papers)
- [Citation](#citation)
## Introduction

GRUEN evaluates the linguistic quality of text generated by machine learning models. Specifically, it captures the four linguistic dimensions shown in Table 1 (a rough sketch of how they combine appears after the task list below).

<p align="center"><img width="50%" src="linguistic_aspects.png"/></p>

GRUEN has been shown to correlate well with human judgments on 13 datasets across the five natural language generation tasks below:
- Abstractive Text Summarization
- Machine Translation
- Dialogue System
- Text Simplification
- Text Compression
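The exact scoring procedure is defined in the paper and implemented in this repo. As a rough conceptual sketch only (not the paper's exact formulation), the final score can be viewed as a grammaticality score in [0, 1] discounted by penalties for the other dimensions:

```python
def combine_gruen_sketch(grammaticality: float,
                         redundancy_penalty: float,
                         focus_penalty: float,
                         coherence_penalty: float) -> float:
    """Conceptual illustration only, not the exact GRUEN formula.

    Assumes a grammaticality score in [0, 1] and non-negative penalties
    for redundancy, focus, and structure/coherence; see the paper and
    main.py for the actual computation.
    """
    score = grammaticality - redundancy_penalty - focus_penalty - coherence_penalty
    return max(0.0, min(1.0, score))  # clip to [0, 1]

# Hypothetical sub-scores for a fluent but slightly repetitive output:
print(combine_gruen_sketch(0.9, 0.1, 0.0, 0.05))  # -> 0.75
```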
## Code
The code is based on Python 3.
- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```

  or use the conda environment file:

  ```bash
  conda env create --file environment.yml
  ```
- Run the shell script to download the CoLA models:

  ```bash
  chmod u+x install.sh && ./install.sh
  ```
- Run main.py for an example usage:

  ```bash
  python -m main
  ```
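For programmatic use, a minimal sketch is shown below. It assumes main.py exposes a get_gruen function that takes a list of candidate texts and returns one score per candidate; the name and signature are assumed for illustration, so check main.py for the actual interface:

```python
# Hypothetical usage sketch; the entry-point name is assumed -- see main.py.
from main import get_gruen

candidates = [
    "The cat sat on the mat. It was a sunny day.",
    "Sunny day mat the on sat cat the.",
]
scores = get_gruen(candidates)  # assumed: one quality score per candidate
for text, score in zip(candidates, scores):
    print(f"{score:.3f}\t{text}")
```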
## Dataset
Collecting human judgments (i.e., manual linguistic quality annotation scores) of system outputs is critically important for validating evaluation metrics.
To ease future research on proposing novel evaluation metrics, we summarize some benchmark datasets below. Due to licensing restrictions, we are unable to provide links for downloading the data and the human judgments; we do, however, point out how you can access them.
Abstractive Text Summarization:
- CNN/Daily Mail: The dataset is originally proposed by Hermann et al. (2015) and Nallapati et al. (2016). The human judgments are collected by Chaganty et al. (2018).
- TAC-2011: Please refer to the link here.
- DUC2005, DUC2006, DUC2007: Please refer to the link here.
Machine Translation:
- WMT16: Please refer to the link here. It has human-annotated datasets for six language pairs (i.e., cs-en, de-en, fi-en, ro-en, ru-en, tr-en).
Dialogue System:
- BAGEL: The dataset is originally proposed by Mairesse et al. (2010). The human judgments are collected by Novikova et al. (2017).
- SFHOTEL: The dataset is originally proposed by Wen et al. (2015). The human judgments are collected by Novikova et al. (2017).
- SFREST: The dataset is originally proposed by Wen et al. (2015). The human judgments are collected by Novikova et al. (2017).
Text Simplification:
- Xu et al. (2016): The dataset is available here. Please email the first author to ask for the human judgments.
Text Compression:
- Toutanova et al. (2016): Please refer to the paper.
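Whichever dataset you use, the standard way to validate a metric against these human judgments is to correlate the two sets of scores. A minimal sketch using scipy, with hypothetical numbers:

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical scores for the same four system outputs:
metric_scores = [0.71, 0.45, 0.88, 0.62]   # e.g., GRUEN scores
human_scores = [4.0, 2.5, 4.5, 3.0]        # e.g., 1-5 Likert annotations

pearson_r, _ = pearsonr(metric_scores, human_scores)
spearman_rho, _ = spearmanr(metric_scores, human_scores)
print(f"Pearson r = {pearson_r:.3f}, Spearman rho = {spearman_rho:.3f}")
```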
## Related Papers
- Dang (2006): Overview of DUC 2006 (Document Understanding Conference 2006)
- Hermann et al. (2015): Teaching machines to read and comprehend (NIPS 2015)
- Nallapati et al. (2016): Abstractive text summarization using sequence-to-sequence RNNs and beyond (CoNLL 2016)
- Chaganty et al. (2018): The price of debiasing automatic metrics in natural language evaluation (ACL 2018)
- Mairesse et al. (2010): Phrase-based statistical language generation using graphical models and active learning (ACL 2010)
- Wen et al. (2015): Semantically conditioned LSTM-based natural language generation for spoken dialogue systems (EMNLP 2015)
- Novikova et al. (2017): Why we need new evaluation metrics for NLG (EMNLP 2017)
- Xu et al. (2016): Optimizing statistical machine translation for text simplification (TACL 2016)
- Toutanova et al. (2016): A dataset and evaluation metrics for abstractive compression of sentences and short paragraphs (EMNLP 2016)
## Citation
If you find this repo useful, please cite:
```bibtex
@inproceedings{zhu2020gruen,
  title={GRUEN for Evaluating Linguistic Quality of Generated Text},
  author={Zhu, Wanzheng and Bhat, Suma},
  booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings},
  pages={94--108},
  year={2020}
}
```