QED: A Framework and Dataset for Explanations in Question Answering

This page contains the data and evaluation scripts associated with the paper:

https://arxiv.org/abs/2009.06354
QED: A Framework and Dataset for Explanations in Question Answering
by Matthew Lamm, Jennimaria Palomaki, Chris Alberti, Daniel Andor, Eunsol Choi, Livio Baldini Soares, Michael Collins

Here is the bibtex for citing this work:

@misc{lamm2020qed,
    title={QED: A Framework and Dataset for Explanations in Question Answering},
    author={Matthew Lamm and Jennimaria Palomaki and Chris Alberti and Daniel Andor and Eunsol Choi and Livio Baldini Soares and Michael Collins},
    year={2020},
    eprint={2009.06354},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Overview of QED

QED is a linguistically principled framework for explanations in question answering. As presented in the paper, given a question and a passage, QED represents an explanation of the answer as a combination of discrete, human-interpretable steps:

  1. sentence selection := identification of a sentence implying an answer to the question
  2. referential equality := identification of noun phrases in the question and the answer sentence that refer to the same thing
  3. predicate entailment := confirmation that the predicate in the sentence entails the predicate in the question once referential equalities are abstracted away.

Here's a simple example:<br>

<img src="images/example-figure.png" width="400"/>

Predicate entailment and referential equality are relations that must hold in order for a sentence to answer a question. Suppose the grammatical subject of the selected sentence above were not "Howl's Moving Castle" but something else. Then referential equality would be broken, and the sentence would not provide a sound answer to the question. Similarly, if the predicate in the sentence used the word "produced" rather than "written", answerhood again would not hold.

The annotation definition of QED gives careful treatment to bridging -- a significant but oft-ignored form of reference. In the following example, the question reference "the first game of the 2017 world series" is a bridged argument of the selected sentence. That is, it is not explicitly manifest within the sentence, but nevertheless is a necessary semantic argument for interpreting its meaning.

<img src="images/bridging-example.png" width="900"/>

Please see the paper for additional statistics and examples from the data.

Data Description

The QED dataset consists of a training set of 7,638 examples and a validation set of 1,355 examples. These are distributed in JSON Lines format, with one QED example per line. Two files are provided, one for each split.

A QED example consists of an example from the Natural Questions dataset (NQ) and, where appropriate, a QED-style explanation annotation. It is a dictionary with the following items (a minimal loading sketch follows the list):

  1. example_id := a unique integer identifier that matches up with NQ
  2. title_text := the title of the Wikipedia page containing the paragraph
  3. url := the URL of the Wikipedia page containing the paragraph
  4. question_text := a natural language question string from NQ
  5. paragraph_text := a paragraph string from a Wikipedia page containing the answer to the question
  6. sentence_starts := a list of integer character offsets indicating the start of sentences in the paragraph
  7. original_nq_answers := the original short answer spans from NQ
  8. annotation := the QED annotation, a dictionary with the following items, further elaborated upon below:
     8.1. referential_equalities := a list of dictionaries, one for each referential equality link annotated
     8.2. answer := a list of dictionaries, one for each short answer span
     8.3. selected_sentence := a dictionary representing the annotated sentence in the passage
     8.4. explanation_type := one of "single_sentence", "multi_sentence", or "none"
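
As a quick illustration of the format, the sketch below reads a QED file and prints a few of the fields listed above. The file name qed-train.jsonlines is an assumption; substitute whichever file you downloaded.

```python
import json

# Assumed file name -- substitute the training or validation file you downloaded.
path = "qed-train.jsonlines"

with open(path, "r", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        print(example["example_id"], example["question_text"])

        # Recover individual sentences from the paragraph using sentence_starts.
        paragraph = example["paragraph_text"]
        starts = example["sentence_starts"]
        ends = starts[1:] + [len(paragraph)]
        sentences = [paragraph[s:e] for s, e in zip(starts, ends)]
        print(sentences[0])
        break  # inspect only the first example
```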

Annotation Format

Each element of a QED annotation, excepting the explanation_type (see below), consists of one or more span dictionaries. At a minimum, these contain start and end (inclusive) character offsets, as well as the string associated with the span.

A selected_sentence annotation is a span dictionary representing a supporting sentence in the passage which implies an answer to the question.

Each annotation in referential_equalities is a pair of spans, the question_reference and the sentence_reference, corresponding to an entity mention in the question and the selected_sentence respectively. As described in the paper, sentence_references can be "bridged in", in which case they do not correspond to any actual span in the selected_sentence. Hence, sentence_reference spans contain an additional field, bridge, which is a prepositional phrase when the reference is bridged and False otherwise. The prepositional phrase serves to link a bridged reference to an anchoring phrase in the selected_sentence. When a sentence_reference is bridged, its start and end offsets, as well as its span string, map to that anchoring phrase in the selected_sentence.

An answer annotation is a pair of spans, a sentence_reference and a paragraph_reference. Most of the time these are identical, except when the answer is bridged into the sentence. In that case, the paragraph_reference corresponds to a span that falls outside of the selected_sentence span, and the sentence_reference contains the information needed to bridge the answer into the selected_sentence, as described above for referential equalities.
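
To make the span structure concrete, here is a minimal sketch that walks the annotation of one parsed example (as produced by the loading sketch above) and prints its spans. It assumes the key holding a span's text is named string; check a raw line of the data for the exact field names.

```python
def print_explanation(example):
    """Print the selected sentence, referential equalities, and answer spans."""
    annotation = example["annotation"]
    if annotation["explanation_type"] != "single_sentence":
        # For multi_sentence and none, the other annotation fields are empty.
        print("no single-sentence explanation annotated")
        return

    sentence = annotation["selected_sentence"]
    print("selected sentence:", sentence["string"])  # "string" key is assumed

    for ref in annotation["referential_equalities"]:
        q_ref = ref["question_reference"]
        s_ref = ref["sentence_reference"]
        if s_ref["bridge"]:
            # Bridged reference: the span points at an anchoring phrase, and
            # bridge holds the linking prepositional phrase.
            print(f'  {q_ref["string"]!r} bridged via {s_ref["bridge"]!r} '
                  f'to anchor {s_ref["string"]!r}')
        else:
            print(f'  {q_ref["string"]!r} == {s_ref["string"]!r}')

    for ans in annotation["answer"]:
        print("answer span in paragraph:", ans["paragraph_reference"]["string"])
```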

Explanation types

Each instance in QED is assigned one of three explanation_type labels: single_sentence, multi_sentence, or none.

In the latter two cases, the other annotation fields are left empty, but the original_nq_answers field is populated.

The breakdown of explanation types in the data is as follows:

| | Train | Dev |
| --- | --- | --- |
| single_sentence | 5,154 | 1,021 |
| multi_sentence | 1,702 | 183 |
| none | 782 | 151 |
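
The split above can be tallied directly from the data; a minimal sketch, under the same file-name assumption as before:

```python
import collections
import json


def explanation_type_counts(path):
    """Tally explanation_type labels in a QED JSON Lines file."""
    counts = collections.Counter()
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            example = json.loads(line)
            counts[example["annotation"]["explanation_type"]] += 1
    return counts


# e.g. explanation_type_counts("qed-train.jsonlines")  # should reproduce the counts in the table above
```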

Disclaimer

All of the input data for this task comes from Wikipedia, which is licensed for public use under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA) and the GNU Free Documentation License (GFDL).

Evaluation Scripts

We additionally provide an official evaluation script, qed_eval.py, for comparing results on the validation set. The evaluation script expects a prediction file in the same format as the input files described above, but with the annotation replaced by predicted values.
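
For concreteness, here is a sketch of writing one line of such a prediction file, mirroring the input format described above with placeholder values; the exact fields qed_eval.py reads are best confirmed against qed_eval_test.py.

```python
import json

# A hypothetical predicted example: same top-level fields as an input example,
# with the annotation filled in by model output. Values here are placeholders.
predicted = {
    "example_id": 1,
    "title_text": "...",
    "url": "...",
    "question_text": "...",
    "paragraph_text": "...",
    "sentence_starts": [0],
    "original_nq_answers": [],
    "annotation": {
        "explanation_type": "single_sentence",
        "selected_sentence": {"start": 0, "end": 0, "string": "..."},
        "referential_equalities": [],
        "answer": [],
    },
}

with open("predictions.jsonlines", "w", encoding="utf-8") as f:  # assumed output name
    f.write(json.dumps(predicted) + "\n")
```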

Example usage of the evaluation script can be seen in qed_eval_test.py. In order to run the test, you may need to install absl by running

pip install absl-py

Baseline Results

QED is a general framework for explanations that can be used to define a variety of tasks. In the paper we define four such tasks, and present baseline results for the first two of these.

Task 1: Recovery of explanations given short answer spans

We apply two models to Task 1. The zero-shot model is a SpanBERT model pretrained on coreference data, with no training on QED data. The fine-tuned model has additionally been fine-tuned on the QED data. We evaluate performance in terms of mention identification (a span-level metric) and mention alignment (a metric over pairs of spans).

| Mention Identification | P | R | F1 |
| --- | --- | --- | --- |
| zero-shot | 59.0 | 35.6 | 44.4 |
| fine-tuned | 76.8 | 68.8 | 72.6 |

| Mention Alignment | P | R | F1 |
| --- | --- | --- | --- |
| zero-shot | 47.7 | 28.8 | 35.9 |
| fine-tuned | 68.4 | 61.3 | 64.6 |

Task 2: Jointly predicting answers and explanations

We compare three models on Task 2, according to the metrics above and also in terms of answer accuracy. The QED-only model is trained only on QED data, without answers given as input as they are in Task 1. The QA-only model is trained on all of the NQ short answer data. The QED+QA model is a multi-task model trained on both QED data and NQ short answer data.

We find that QED+QA outperforms the QED-only model on the metrics of mention identification and mention alignment:

| Mention Identification | P | R | F1 |
| --- | --- | --- | --- |
| QED-only | 74.1 | 63.8 | 68.6 |
| QA+QED | 77.5 | 64.6 | 70.5 |

| Mention Alignment | P | R | F1 |
| --- | --- | --- | --- |
| QED-only | 63.6 | 54.9 | 58.9 |
| QA+QED | 68.6 | 57.3 | 62.4 |

We also find that the multitask model outperforms the QA-only model on answer accuracy, demonstrating that training with comparatively few additional QED explanations still leads to an improvement (+1.1%) on answer accuracy.

| | Answer Accuracy |
| --- | --- |
| QA-only | 73.4 |
| QA+QED | 74.5 |

Contact

Please use GitHub issues to ask questions and hold discussions.