# CommaQA: Communicating with Agents for QA
CommaQA is a QA benchmark for learning to communicate with agents. It consists of three datasets capturing three forms of multi-hop reasoning: explicit (CommaQA-E), implicit (CommaQA-I), and numeric (CommaQA-N).
Paper Link: Semantic Scholar
Citation:

    @article{Khot2021LearningTS,
      title={Learning to Solve Complex Tasks by Talking to Agents},
      author={Tushar Khot and Kyle Richardson and Daniel Khashabi and Ashish Sabharwal},
      journal={ArXiv},
      year={2021},
      volume={abs/2110.08542}
    }
## Dataset
### Download
Download the datasets:
### Formats
Each dataset contains three formats:
- `commaqa/`: This is the default CommaQA format that contains the raw facts, verbalized sentences, agent language specification, QA pairs, and associated theories. Each JSON file consists of a list of items, with each item containing a KB and a group of questions. The JSON file uses the following key structure:
  - `kb`: a map between predicate names and the facts associated with each predicate
  - `context`: verbalized context used by black-box models
  - `per_fact_context`: map between the facts in the `kb` and the corresponding verbalized sentence in `context`
  - `pred_lang_config`: map between the agent name and the questions answerable by this agent (see ModelQuestionConfig for more details)
  - `qa_pairs`: list of QA pairs associated with this context using the keys:
    - `id`: question id
    - `question`: question
    - `answer`: numeric or list answer
    - `config`: specific theory config used to construct this example (see TheoryConfig for more details)
    - `assignment`: assignment to the variables in the theory
    - `decomposition`: agent-based decomposition to answer this question
    - `facts_used`: facts needed to answer this question
- `drop/`: This contains the dataset in the default DROP dataset format, created by converting the `context` and `qa_pairs` from the `commaqa/` format.
- `seq2seq/`: This contains the dataset in a simple text format (one example per line), created by converting the `context` and `qa_pairs` from the `commaqa/` format into `<context> Q: <question> A: <answer>`.
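As a quick sketch tying these formats together, the snippet below reads a hand-constructed item that mirrors the `commaqa/` key structure and renders its QA pairs as `seq2seq/`-style lines. The item values, the `to_seq2seq_line` helper, and the comma-joining of list answers are all assumptions for illustration, not the released data or conversion script.

```python
import json

# Hand-made item mirroring the commaqa/ key structure; the predicates
# and facts are illustrative, not taken from the released dataset.
raw = """[{
  "kb": {"acted_in": [["ActorA", "MovieX"]]},
  "context": "ActorA acted in the movie MovieX.",
  "per_fact_context": {"acted_in(ActorA, MovieX)": "ActorA acted in the movie MovieX."},
  "pred_lang_config": {"text_model": ["Who all acted in the movie __?"]},
  "qa_pairs": [{"id": "q1",
                "question": "Who acted in the movie MovieX?",
                "answer": ["ActorA"]}]
}]"""
dataset = json.loads(raw)

def to_seq2seq_line(context, question, answer):
    """Render one QA pair as '<context> Q: <question> A: <answer>'."""
    if isinstance(answer, list):  # joining list answers with ", " is an assumption
        answer = ", ".join(str(a) for a in answer)
    return f"{context} Q: {question} A: {answer}"

item = dataset[0]
lines = [to_seq2seq_line(item["context"], qa["question"], qa["answer"])
         for qa in item["qa_pairs"]]
```

Here `lines[0]` becomes `ActorA acted in the movie MovieX. Q: Who acted in the movie MovieX? A: ActorA`.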
### Additional Datasets
- Decompositions: The training data for the decomposer can be downloaded from here. The data uses the JSONL format, with the `train_seqs` field containing the decompositions for each question. Each entry in the `train_seqs` array corresponds to one step in the decomposition chain. E.g., `QC: What awards have the actors of the Hallowcock winning movies received? QI: (select) [table] The award Hallowcock has been awarded to which movies? A: ["Clenestration"] QS: (project_values_flat_unique) [text] Who all acted in the movie #1?` can be used to train a model to generate the question `(project_values_flat_unique) [text] Who all acted in the movie #1?` (the string following `QS:`) given the previous questions and answers.
- Language: The language specification for the agents in each dataset can be downloaded from here. It contains two files:
  - `operations.txt`: file containing the list of operations for CommaQA
  - `model_questions.tsv`: a TSV file where the first field corresponds to the model name and all subsequent fields contain valid questions that can be asked of this model.
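A minimal sketch of consuming these two resources: splitting a `train_seqs` entry at its final `QS:` into a (source, target) training pair, and reading a `model_questions.tsv`-style table into a dict. The split-at-the-last-`QS:` logic and the sample TSV rows are assumptions based on the examples above, not guaranteed to match the released files exactly.

```python
import csv
import io

# 1) One train_seqs entry -> (source, target): the decomposer is trained to
#    generate the string after the final "QS:" given everything before it.
def split_train_seq(seq):
    prefix, sep, target = seq.rpartition("QS: ")
    return prefix + sep.strip(), target

seq = ('QC: What awards have the actors of the Hallowcock winning movies '
       'received? QI: (select) [table] The award Hallowcock has been awarded '
       'to which movies? A: ["Clenestration"] QS: (project_values_flat_unique) '
       '[text] Who all acted in the movie #1?')
source, target = split_train_seq(seq)

# 2) Read a model_questions.tsv-style table: first field is the model name,
#    remaining fields are the questions that model can answer. The rows
#    below are illustrative stand-ins for the real file.
tsv = ("table_model\tThe award __ has been awarded to which movies?\n"
       "text_model\tWho all acted in the movie __?\tWho directed the movie __?\n")
model_questions = {row[0]: row[1:]
                   for row in csv.reader(io.StringIO(tsv), delimiter="\t")}
```

With this split, `source` ends in `QS:` and `target` is the next sub-question, `(project_values_flat_unique) [text] Who all acted in the movie #1?`.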
## Models

We also provide the T5-Large models trained to produce the next sub-question based on the oracle decompositions. These models can be used to perform inference as described [here](commaqa/inference/README.md) to reproduce the TMN results.
## Code
Refer to the individual READMEs in each package for instructions on: