ReasoningNLP

A paper list on reasoning in NLP. Here we mainly collect papers on datasets and methods using PLMs (work in progress).

See our survey on natural language reasoning:

Here are another two related surveys on LLM prompting:

<h2>Table of Contents</h2>

<h2 id="1">Methodology</h2>

<h3 id="1.1">Reasoning Paradigm</h3>

<h4 id="1.1.1">End-to-End Reasoning</h4>

<h4 id="1.1.2">Forward Reasoning</h4>

<h4 id="1.1.3">Backward Reasoning</h4>

<h3 id="1.2">Learning Paradigm</h3>

<h4 id="1.2.1">Finetuning</h4>

<h4 id="1.2.2">In-Context Learning</h4>
  1. Show Your Work: Scratchpads for Intermediate Computation with Language Models arXiv (2021)

    Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, Augustus Odena [pdf]

  2. Chain of Thought Prompting Elicits Reasoning in Large Language Models arXiv (2022)

    Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou [pdf] [project]

  3. Self-Consistency Improves Chain of Thought Reasoning in Language Models arXiv (2022)

    Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou [pdf]

  4. STaR: Bootstrapping Reasoning With Reasoning arXiv (2022)

    Eric Zelikman, Yuhuai Wu, Jesse Mu, Noah D. Goodman [pdf]

  5. Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning arXiv (2022)

    Antonia Creswell, Murray Shanahan, Irina Higgins [pdf]

  6. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models arXiv (2022)

    Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, Ed Chi [pdf]

  7. Large Language Models are Zero-Shot Reasoners arXiv (2022)

    Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa [pdf] [project]

  8. Language models show human-like content effects on reasoning arXiv (2022)

    Ishita Dasgupta, Andrew K. Lampinen, Stephanie C. Y. Chan, Antonia Creswell, Dharshan Kumaran, James L. McClelland, Felix Hill [pdf]

  9. Language Model Cascades arXiv (2022)

    David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-dickstein, Kevin Murphy, Charles Sutton [pdf] [project]

  10. Faithful Reasoning Using Large Language Models arXiv (2022)

    Antonia Creswell, Murray Shanahan [pdf]

  11. Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought arXiv (2022)

    Abulhair Saparov, He He [pdf] [project]

  12. ThinkSum: Probabilistic reasoning over sets using large language models arXiv (2022)

    Batu Ozturkler, Nikolay Malkin, Zhen Wang, Nebojsa Jojic [pdf]

  13. Measuring and Narrowing the Compositionality Gap in Language Models arXiv (2022)

    Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis [pdf] [project]
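Several of the prompting methods above share the same mechanics: prepend worked exemplars whose answers spell out intermediate reasoning steps, then let the model continue from the new question. A minimal sketch of building such a chain-of-thought prompt (the exemplar is adapted from the arithmetic word-problem style of Wei et al. (2022), not copied verbatim, and no model call is made):

```python
# Minimal sketch of few-shot chain-of-thought prompt construction.
# The exemplar below is illustrative, adapted from the arithmetic
# word-problem style used in CoT papers.

EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend reasoning exemplars, then pose the new question; the model
    is expected to continue after 'A:' with its own step-by-step chain."""
    return EXEMPLAR + f"Q: {question}\nA:"

prompt = build_cot_prompt(
    "A juggler has 16 balls. Half of the balls are golf balls. "
    "How many golf balls are there?"
)
print(prompt.endswith("A:"))  # True: the model fills in reasoning + answer
```

Self-consistency (item 3 above) extends this by sampling several such continuations and taking a majority vote over the final answers.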

<h2 id="2">NLP Topics</h2>

<h3 id="2.1">Classical Logical Reasoning</h3>

Some datasets explicitly target philosophical reasoning types, e.g., deduction, abduction, and induction; we thus call them "classical logical reasoning" tasks. A key characteristic of this topic is that the tasks are mostly artificial, constructed specifically to study reasoning.
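Such artificial tasks typically state facts and if-then rules in (templated) natural language and ask whether a hypothesis follows. A toy forward-chaining sketch of that setup, with hypothetical facts and rules not taken from any dataset:

```python
# Toy sketch of a synthetic deductive-reasoning task: decide whether a
# hypothesis follows from facts under if-then rules via forward chaining.
# All propositions below are hypothetical examples.

def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until no new fact is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"Erin is young", "Erin is kind"}
rules = [
    ({"Erin is young", "Erin is kind"}, "Erin is nice"),  # depth-1 rule
    ({"Erin is nice"}, "Erin is big"),                    # depth-2 rule
]

closure = forward_chain(facts, rules)
print("Erin is big" in closure)    # True: derivable in two steps
print("Erin is rough" in closure)  # False: not derivable
```

The benchmarks below scale this idea up with deeper proofs, negation, and open- vs. closed-world assumptions.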

<h4 id="2.1.1">Datasets & Benchmarks</h4>

<table> <tr> <th colspan="2" align="center">Inference Type</th> <th align="center">Dataset</th> <th align="center">Size</th> <th align="center">Task</th> <th align="center">Link</th> <th align="center">Remark</th> </tr > <tr> <th rowspan="8" colspan="2" align="center" valign="middle">Deductive Reasoning</th> <td align="center">bAbI-15</td> <td align="center">-</td> <td align="center">extraction</td> <td align="center"> <a href="https://arxiv.org/pdf/1502.05698.pdf">paper</a> <br /> <a href="http://fb.ai/babi">project</a> </td> <td align="center">synthetic</td> </tr> <tr> <td align="center">RuleTaker</td> <td align="center">>500k</td> <td align="center">classification</td> <td align="center"> <a href="https://www.ijcai.org/proceedings/2020/0537.pdf">paper</a> <br /> <a href="https://allenai.org/data/ruletaker">project</a> </td> <td align="center">synthetic, the first large-scale benchmark. D*, Birds-Electricity, ParaRules</td> </tr> <tr> <td align="center">ProofWriter</td> <td align="center">>500k</td> <td align="center">classification</td> <td align="center"> <a href="https://aclanthology.org/2021.findings-acl.317.pdf">paper</a> <br /> <a href="https://allenai.org/data/proofwriter">project</a> </td> <td align="center">improvement on RuleTaker, + open-world assumption</td> </tr> <tr> <td align="center">PARARULE Plus</td> <td align="center">400k</td> <td align="center">classification</td> <td align="center"> <a href="https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf">paper</a> <br /> <a href="https://github.com/Strong-AI-Lab/PARARULE-Plus">project</a> </td> <td align="center">improvement on ParaRules, addresses the depth imbalance issue</td> </tr> <tr> <td align="center">AAC</td> <td align="center">710k</td> <td align="center">generation</td> <td align="center"> <a href="https://aclanthology.org/2021.iwcs-1.7.pdf">paper</a> <br /> <a href="https://github.com/debatelab/aacorpus">project</a> </td> <td 
align="center">synthetic, syllogistic arguments</td> </tr> <tr> <td align="center">LogicInference</td> <td align="center">200k</td> <td align="center">generation</td> <td align="center"> <a href="https://arxiv.org/pdf/2203.15099.pdf">paper</a> <br /> <a href="https://github.com/google-research/google-research/tree/master/logic_inference_dataset">project</a> </td> <td align="center">synthetic, more tasks</td> </tr> <tr> <td align="center">FOLIO</td> <td align="center">1.4k</td> <td align="center">classification, generation</td> <td align="center"> <a href="https://arxiv.org/pdf/2209.00840.pdf">paper</a> <br /> <a href="https://github.com/Yale-LILY/FOLIO">project</a> </td> <td align="center">expert-written, annotate with FOL</td> </tr> <tr> <td align="center">RobustLR</td> <td align="center">120k</td> <td align="center">classification</td> <td align="center"> <a href="https://aclanthology.org/2022.emnlp-main.653/">paper</a> <br /> <a href="https://github.com/INK-USC/RobustLR">project</a> </td> <td align="center">synthetic, robustness on logical semantics</td> </tr> <tr> <th rowspan="6" align="center" valign="middle">Defeasible Reasoning</th> <th rowspan="2" align="center" valign="middle"><i>Abductive Reasoning</i></th> <td align="center">AbductionRules</td> <td align="center">-</td> <td align="center">generation</td> <td align="center"> <a href="https://arxiv.org/pdf/2203.12186.pdf">paper</a> <br /> <a href="https://github.com/Strong-AI-Lab/AbductionRule">project</a> </td> <td align="center">variant of RuleTaker</td> </tr> <tr> <td align="center">ART <br /> (&alpha;NLI, &alpha;NLG)</td> <td align="center">17.8k</td> <td align="center">classification, generation</td> <td align="center"> <a href="https://openreview.net/pdf?id=Byg1v1HKDB">paper</a> <br /> <a href="http://abductivecommonsense.xyz/">project</a> </td> <td align="center">commonsense, based on ROCStories</td> </tr> <tr> <th rowspan="3" align="center" valign="middle"><i>Inductive Reasoning</i></th> <td 
align="center">bAbI-16</td> <td align="center">-</td> <td align="center">extraction</td> <td align="center"> <a href="https://arxiv.org/pdf/1502.05698.pdf">paper</a> <br /> <a href="http://fb.ai/babi">project</a> </td> <td align="center">synthetic, induce-then-deduce</td> </tr> <tr> <td align="center">CLUTRR</td> <td align="center">-</td> <td align="center">extraction</td> <td align="center"> <a href="https://aclanthology.org/D19-1458.pdf">paper</a> <br /> <a href="https://github.com/facebookresearch/clutrr">project</a> </td> <td align="center">synthetic, induce-then-deduce, kinship</td> </tr> <tr> <td align="center">DEER</td> <td align="center">1.2k</td> <td align="center">generation</td> <td align="center"> <a href="https://arxiv.org/pdf/2212.10923.pdf">paper</a> <br /> project </td> <td align="center">induce explicit natural language rules (human-authored) from natural language facts (web text)</td> </tr> <tr> <th align="center" valign="middle"><i>Others</i></th> <td align="center">defeasibleNLI</td> <td align="center">43.8k</td> <td align="center">classification, generation</td> <td align="center"> <a href="https://aclanthology.org/2020.findings-emnlp.418.pdf">paper</a> <br /> <a href="https://github.com/rudinger/defeasible-nli">project</a> </td> <td align="center">direction of evidence updates, based on existing datasets</td> </tr> </table>

Papers on dataset artifacts:

  1. On the Paradox of Learning to Reason from Data arXiv (2022)

    Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, Guy Van den Broeck [pdf] [project]

<h4 id="2.1.2">Related Works</h4>

Deductive reasoning:
  1. Transformers as Soft Reasoners over Language IJCAI (2020)

    Peter Clark, Oyvind Tafjord, Kyle Richardson [pdf] [project]

  2. PRover: Proof Generation for Interpretable Reasoning over Rules EMNLP (2020)

    Swarnadeep Saha, Sayan Ghosh, Shashank Srivastava, Mohit Bansal [pdf] [project]

  3. multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning NAACL (2021)

    Swarnadeep Saha, Prateek Yadav, Mohit Bansal [pdf] [project]

  4. Critical Thinking for Language Models IWCS (2021)

    Gregor Betz, Christian Voigt, Kyle Richardson [pdf] [project]

  5. Explainable Multi-hop Verbal Reasoning Through Internal Monologue NAACL (2021)

    Zhengzhong Liang, Steven Bethard, Mihai Surdeanu [pdf] [project]

  6. ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language ACL findings (2021)

    Oyvind Tafjord, Bhavana Dalvi, Peter Clark [pdf] [project]

  7. Flexible Generation of Natural Language Deductions EMNLP (2021)

    Kaj Bostrom, Xinyu Zhao, Swarat Chaudhuri, Greg Durrett [pdf] [project]

  8. FaiRR: Faithful and Robust Deductive Reasoning over Natural Language ACL (2022)

    Soumya Sanyal, Harman Singh, Xiang Ren [pdf] [project]

  9. Interpretable Proof Generation via Iterative Backward Reasoning NAACL (2022)

    Hanhao Qu, Yu Cao, Jun Gao, Liang Ding, Ruifeng Xu [pdf] [project]

  10. Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning arXiv (2022)

    Antonia Creswell, Murray Shanahan, Irina Higgins [pdf]

  11. Generating Natural Language Proofs with Verifier-Guided Search arXiv (2022)

    Kaiyu Yang, Jia Deng, Danqi Chen [pdf] [project]

  12. ROBUSTLR: A Diagnostic Benchmark for Evaluating Logical Robustness of Deductive Reasoners arXiv (2022)

    Soumya Sanyal, Zeyi Liao, Xiang Ren [pdf] [project]

  13. Language models show human-like content effects on reasoning arXiv (2022)

    Ishita Dasgupta, Andrew K. Lampinen, Stephanie C. Y. Chan, Antonia Creswell, Dharshan Kumaran, James L. McClelland, Felix Hill [pdf]

  14. Faithful Reasoning Using Large Language Models arXiv (2022)

    Antonia Creswell, Murray Shanahan [pdf]

  15. Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought arXiv (2022)

    Abulhair Saparov, He He [pdf] [project]

  16. LAMBADA: Backward Chaining for Automated Reasoning in Natural Language arXiv (2022)

    Seyed Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin Xu, Deepak Ramachandran [pdf]

Defeasible reasoning:

  1. Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision AAAI (2020)

    Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi [pdf] [project]

  2. Could you give me a hint? Generating inference graphs for defeasible reasoning ACL findings (2021)

    Aman Madaan, Dheeraj Rajagopal, Niket Tandon, Yiming Yang, Eduard H. Hovy [pdf] [project]

  3. Think about it! Improving defeasible reasoning by first modeling the question scenario EMNLP (2021)

    Aman Madaan, Niket Tandon, Dheeraj Rajagopal, Peter Clark, Yiming Yang, Eduard H. Hovy [pdf] [project]

  4. Language Models as Inductive Reasoners arXiv (2022)

    Zonglin Yang, Li Dong, Xinya Du, Hao Cheng, Erik Cambria, Xiaodong Liu, Jianfeng Gao, Furu Wei [pdf]

<h3 id="2.2">Natural Language Inference</h3>

Task: given a premise-hypothesis pair, classify it into one of three classes: entailment, contradiction, or neutral.

There are mainly three types of premise-hypothesis pairs in the NLI task: paraphrase, compound semantics understanding (CSU), and reasoning. Here we consider only the last.

<table> <tr> <th align="center"></th> <th align="center">Premise</th> <th align="center">Hypothesis</th> </tr > <tr> <th align="center">Paraphrase</th> <td align="center">Two doctors perform surgery on patient</td> <td align="center">Doctors are performing surgery</td> </tr> <tr> <th align="center">CSU</th> <td align="center">Two women are embracing while holding to go packages</td> <td align="center">Two women are holding packages <br /> (<i>Two women are embracing</i>)</td> </tr> <tr> <th align="center">Reasoning</th> <td align="center">A soccer game with multiple males playing <br /> (<i>Soccer is a sport</i>)</td> <td align="center">Some men are playing a sport</td> </tr> </table> <h4 id="2.2.1">Datasets & Benchmarks</h4> <table> <tr> <th align="center">Dataset</th> <th align="center">Size</th> <th align="center">Link</th> <th align="center">Remark</th> </tr > <tr> <th colspan=4 align="center">generic</th> </tr > <tr> <td align="center">SNLI</td> <td align="center">570k</td> <td align="center"> <a href="https://aclanthology.org/D15-1075.pdf">paper</a> <br /> <a href="nlp.stanford.edu/projects/snli/">project</a> </td> <td align="center">the first large-scale NLI dataset <br/> one of the most typical</td> </tr> <tr> <td align="center">e-SNLI</td> <td align="center">-</td> <td align="center"> <a href="https://proceedings.neurips.cc/paper/2018/file/4c7a167bb329bd92580a99ce422d6fa6-Paper.pdf">paper</a> <br /> <a href="https://github.com/OanaMariaCamburu/e-SNLI">project</a> </td> <td align="center">annotate natural language explanations for SNLI</td> </tr> <tr> <td align="center">MultiNLI</td> <td align="center">433k</td> <td align="center"> <a href="https://aclanthology.org/N18-1101.pdf">paper</a> <br /> <a href="https://cims.nyu.edu/~sbowman/multinli/">project</a> </td> <td align="center">cover more styles and topics than SNLI <br/> one of the most typical</td> </tr> <tr> <td align="center">DebiasedNLI</td> <td align="center">7.5k</td> <td align="center"> <a 
href="https://aclanthology.org/2022.acl-long.190.pdf">paper</a> <br /> <a href="https://github.com/jimmycode/gen-debiased-nli">project</a> </td> <td align="center">debiased versions of SNLI & MultiNLI</td> </tr> <tr> <td align="center">XNLI</td> <td align="center">7.5k</td> <td align="center"> <a href="https://aclanthology.org/D18-1269.pdf">paper</a> <br /> <a href="https://github.com/facebookresearch/XNLI/">project</a> </td> <td align="center">cross-lingual, based on MultiNLI</td> </tr> <tr> <td align="center">MPE</td> <td align="center">10k</td> <td align="center"> <a href="https://aclanthology.org/I17-1011.pdf">paper</a> <br /> <a href="https://github.com/aylai/MultiPremiseEntailment">project</a> </td> <td align="center">multiple premises</td> </tr> <tr> <th colspan=4 align="center">science</th> </tr > <tr> <td align="center">SciTail</td> <td align="center">27k</td> <td align="center"> <a href="http://ai2-website.s3.amazonaws.com/team/ashishs/scitail-aaai2018.pdf">paper</a> <br /> <a href="http://data.allenai.org/scitail">project</a> </td> <td align="center">the first NLI dataset with entirely existing text</td> </tr> <tr> <td align="center">SciNLI</td> <td align="center">107k</td> <td align="center"> <a href="https://aclanthology.org/2022.acl-long.511.pdf">paper</a> <br /> <a href="https://github.com/msadat3/SciNLI">project</a> </td> <td align="center">data from scholarly papers</td> </tr> </table>

Recently, some datasets have been proposed to model crowdworkers' differing subjective opinions about which class an example belongs to.

<table> <tr> <th colspan=5 align="center">Subjective Opinions</th> </tr > <tr> <th align="center">Dataset</th> <th align="center">Domain</th> <th align="center">Size</th> <th align="center">Link</th> <th align="center">Remark</th> </tr > <tr> <td align="center">UNLI</td> <td align="center">generic</td> <td align="center">61k</td> <td align="center"> <a href="https://aclanthology.org/2020.acl-main.774.pdf">paper</a> <br /> <a href="http://nlp.jhu.edu/unli">project</a> </td> <td align="center">subjective probability assessment (regression rather than binary), based on SNLI</td> </tr> <tr> <td align="center">ChaosNLI</td> <td align="center">generic</td> <td align="center">464k</td> <td align="center"> <a href="https://aclanthology.org/2020.emnlp-main.734.pdf">paper</a> <br /> <a href="https://github.com/easonnie/ChaosNLI">project</a> </td> <td align="center">human opinion distribution, based on SNLI, MultiNLI and &alpha;NLI</td> </tr> </table>
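Concretely, these datasets replace a single gold label with the full distribution of annotator judgments. A small sketch of that representation (the ten annotations below are hypothetical):

```python
# Sketch of the opinion-distribution view used by ChaosNLI-style data:
# keep all annotator labels per example instead of a single gold label.
# The annotations below are hypothetical.
from collections import Counter

annotations = [
    "entailment", "entailment", "neutral", "entailment", "neutral",
    "contradiction", "entailment", "neutral", "entailment", "neutral",
]

counts = Counter(annotations)
total = sum(counts.values())
distribution = {
    label: counts[label] / total
    for label in ("entailment", "neutral", "contradiction")
}
majority = counts.most_common(1)[0][0]

print(distribution)  # {'entailment': 0.5, 'neutral': 0.4, 'contradiction': 0.1}
print(majority)      # entailment -- the majority label, but far from unanimous
```

A model can then be evaluated against the full distribution (e.g., by divergence) instead of only against the majority label.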

Some datasets for other languages:

<table> <tr> <th colspan=4 align="center">Other Languages</th> </tr > <tr> <th align="center">Dataset</th> <th align="center">Language</th> <th align="center">Link</th> <th align="center">Remark</th> </tr > <tr> <td align="center">NLI-TR</td> <td align="center">Turkish</td> <td align="center"> <a href="https://aclanthology.org/2020.emnlp-main.662.pdf">paper</a> <br /> <a href="https://github.com/boun-tabi/NLI-TR">project</a> </td> <td align="center">translate SNLI and MultiNLI</td> </tr> <tr> <td align="center">IndoNLI</td> <td align="center">Indonesian</td> <td align="center"> <a href="https://aclanthology.org/2021.emnlp-main.821.pdf">paper</a> <br /> <a href="https://github.com/ir-nlp-csui/indonli">project</a> </td> <td align="center">data collection protocol from MultiNLI</td> </tr> </table>

Papers on dataset artifacts:

  1. Performance Impact Caused by Hidden Bias of Training Data for Recognizing Textual Entailment LREC (2018)

    Masatoshi Tsuchiya [pdf]

  2. Annotation Artifacts in Natural Language Inference Data NAACL (2018)

    Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, Noah A. Smith [pdf]

  3. Hypothesis Only Baselines in Natural Language Inference SEM (2019)

    Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme [pdf]
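The artifact these papers describe can be made concrete with a hypothesis-only baseline: predict the label without ever reading the premise. The cue-word lists below are hypothetical stand-ins, though negation words really were found to correlate with the contradiction label in SNLI:

```python
# Toy hypothesis-only baseline: the premise is deliberately ignored.
# If such a model beats chance, the dataset leaks label information
# through the hypothesis alone. Cue-word lists here are hypothetical.

NEGATION_CUES = {"not", "no", "nobody", "never", "nothing"}
GENERIC_CUES = {"some", "animal", "outdoors", "person"}

def hypothesis_only_baseline(hypothesis: str) -> str:
    tokens = set(hypothesis.lower().split())
    if tokens & NEGATION_CUES:
        return "contradiction"  # negation words: common contradiction artifact
    if tokens & GENERIC_CUES:
        return "entailment"     # vague/generic words: common entailment artifact
    return "neutral"

print(hypothesis_only_baseline("Nobody is playing a sport"))    # contradiction
print(hypothesis_only_baseline("Some men are playing a sport")) # entailment
print(hypothesis_only_baseline("The cat sat on the mat"))       # neutral
```

A dataset on which this kind of premise-blind heuristic scores well is rewarding pattern matching over annotation artifacts, not inference.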

<h4 id="2.2.2">Related Works</h4>
  1. NILE : Natural Language Inference with Faithful Natural Language Explanations ACL (2020)

    Sawan Kumar, Partha P. Talukdar [pdf] [project]

  2. Identifying inherent disagreement in natural language inference NAACL (2021)

    Xinliang Frederick Zhang, Marie-Catherine de Marneffe [pdf] [project]

  3. KACE: Generating Knowledge Aware Contrastive Explanations for Natural Language Inference ACL (2021)

    Qianglong Chen, Feng Ji, Xiangji Zeng, Feng-Lin Li, Ji Zhang, Haiqing Chen, Yin Zhang [pdf] [project]

  4. Investigating Transfer Learning in Multilingual Pre-trained Language Models through Chinese Natural Language Inference ACL findings (2021)

    Hai Hu, He Zhou, Zuoyu Tian, Yiwen Zhang, Yina Patterson, Yanting Li, Yixin Nie, Kyle Richardson [pdf] [project]

  5. Enhancing Cross-lingual Natural Language Inference by Prompt-learning from Cross-lingual Templates ACL (2022)

    Kunxun Qi, Hai Wan, Jianfeng Du, Haolan Chen [pdf] [project]

  6. Generating Intermediate Steps for NLI with Next-Step Supervision arXiv (2022)

    Deepanway Ghosal, Somak Aditya, Monojit Choudhury [pdf]

<h3 id="2.3">Multi-hop Question Answering</h3>

This topic studies answering complex questions that require reasoning over evidence scattered across different contexts. The term "hop" here refers to the number of contexts required for the reasoning. There are two settings regarding the required contexts: (1) all of them are provided, mixed with some distractors (the distractor setting), and (2) they need to be retrieved (the retrieval setting).
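A 2-hop question in the distractor setting can be sketched as follows: no single context contains the answer, so a reader must find a bridge entity in one context and use it to query another. The contexts are reduced here to hypothetical toy triples:

```python
# Toy 2-hop QA in the distractor setting: answer "Where was the director
# of Inception born?" by hopping through a bridge entity. Contexts are
# hypothetical toy triples, including one distractor.

contexts = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Christopher Nolan", "born_in", "London"),
    ("Paris", "capital_of", "France"),  # distractor: irrelevant to the question
]

def lookup(subject, relation):
    """Find the object of a (subject, relation) pair among the contexts."""
    for s, r, o in contexts:
        if s == subject and r == relation:
            return o
    return None

bridge = lookup("Inception", "directed_by")  # hop 1: find the director
answer = lookup(bridge, "born_in")           # hop 2: use the bridge entity
print(answer)  # London
```

In the retrieval setting, the model must additionally fetch each of these contexts from a large corpus rather than being handed them with distractors.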

<h4 id="2.3.1">Datasets & Benchmarks</h4>

Some datasets annotate the gold supporting evidence (paragraph-level, sentence-level, or triple-level), decomposed sub-questions (and their corresponding evidence), or reasoning paths.

<table> <tr> <th align="center">Dataset</th> <th align="center">Size</th> <th align="center">Knowledge Source</th> <th align="center">Setting</th> <th align="center">Answer Type</th> <th align="center">Evidence</th> <th align="center">Link</th> <th align="center">Remark</th> </tr > <tr> <th colspan=8 align="center">generic</th> </tr > <tr> <td align="center">WikiHop</td> <td align="center">51k</td> <td align="center">Wikipedia</td> <td align="center">distractor</td> <td align="center">choice</td> <td align="center">-</td> <td align="center"> <a href="https://aclanthology.org/Q18-1021.pdf">paper</a> <br /> <a href="http://qangaroo.cs.ucl.ac.uk/">project</a> </td> <td align="center">one of the most typical</td> </tr> <tr> <td align="center">HotpotQA</td> <td align="center">112k</td> <td align="center">Wikipedia</td> <td align="center">distractor, retrieval</td> <td align="center">span, yes/no</td> <td align="center">sentence</td> <td align="center"> <a href="https://aclanthology.org/D18-1259.pdf">paper</a> <br /> <a href="https://hotpotqa.github.io/">project</a> </td> <td align="center">the most popular one</td> </tr> <tr> <td align="center">R4C</td> <td align="center">4.6k</td> <td align="center">-</td> <td align="center">-</td> <td align="center">-</td> <td align="center">triple</td> <td align="center"> <a href="https://aclanthology.org/2020.acl-main.602.pdf">paper</a> <br /> <a href="https://naoya-i.github.io/r4c/">project</a> </td> <td align="center">annotate atomic facts for HotpotQA</td> </tr> <tr> <td align="center">BeerQA</td> <td align="center">530</td> <td align="center">Wikipedia</td> <td align="center">retrieval</td> <td align="center">span, yes/no</td> <td align="center">-</td> <td align="center"> <a href="https://aclanthology.org/2021.emnlp-main.292.pdf">paper</a> <br /> <a href="https://beerqa.github.io/">project</a> </td> <td align="center">more hops</td> </tr> <tr> <td align="center">2WikiMultiHopQA</td> <td align="center">192k</td> <td 
align="center">Wikipedia</td> <td align="center">distractor</td> <td align="center">span</td> <td align="center">sentence <br /> triple</td> <td align="center"> <a href="https://aclanthology.org/2020.coling-main.580.pdf">paper</a> <br /> <a href="https://github.com/Alab-NII/2wikimultihop">project</a> </td> <td align="center">similar to WikiHop</td> </tr> <tr> <td align="center">MuSiQue</td> <td align="center">25k</td> <td align="center">Wikipedia</td> <td align="center">distractor</td> <td align="center">span</td> <td align="center">paragraph <br /> sub-questions</td> <td align="center"> <a href="https://aclanthology.org/2022.tacl-1.31.pdf">paper</a> <br /> <a href="https://github.com/stonybrooknlp/musique">project</a> </td> <td align="center">more hops</td> </tr> <tr> <td align="center">StrategyQA</td> <td align="center">2.7k</td> <td align="center">Wikipedia</td> <td align="center">retrieval</td> <td align="center">yes/no</td> <td align="center">paragraph <br /> sub-questions</td> <td align="center"> <a href="https://aclanthology.org/2021.tacl-1.21.pdf">paper</a> <br /> <a href="https://allenai.org/data/strategyqa">project</a> </td> <td align="center">implicit multi-hop questions</td> </tr> <tr> <th colspan=8 align="center">specific domain</th> </tr > <tr> <td align="center">MedHop</td> <td align="center">2.5k</td> <td align="center">Medline</td> <td align="center">distractor</td> <td align="center">choice</td> <td align="center">-</td> <td align="center"> <a href="https://aclanthology.org/Q18-1021.pdf">paper</a> <br /> <a href="http://qangaroo.cs.ucl.ac.uk/">project</a> </td> <td align="center">medicine. 
similar to WikiHop</td> </tr> <tr> <td align="center">QASC</td> <td align="center">9.9k</td> <td align="center">WorldTree</td> <td align="center">retrieval</td> <td align="center">choice</td> <td align="center">sentence</td> <td align="center"> <a href="https://arxiv.org/pdf/1910.11473.pdf">paper</a> <br /> <a href="https://github.com/allenai/qasc">project</a> </td> <td align="center">science</td> </tr> <tr> <td align="center">eQASC</td> <td align="center">-</td> <td align="center">-</td> <td align="center">-</td> <td align="center">-</td> <td align="center">reasoning path</td> <td align="center"> <a href="http://aclanthology.lst.uni-saarland.de/2020.emnlp-main.10.pdf">paper</a> <br /> <a href="https://allenai.org/data/eqasc">project</a> </td> <td align="center">annotate reasoning paths for QASC</td> </tr> </table>

Papers on dataset artifacts:

  1. Understanding Dataset Design Choices for Multi-hop Reasoning NAACL (2019)

    Jifan Chen, Greg Durrett [pdf]

  2. Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA ACL (2019)

    Yichen Jiang, Mohit Bansal [pdf] [project]

  3. Compositional Questions Do Not Necessitate Multi-hop Reasoning ACL (2019)

    Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, Luke Zettlemoyer [pdf] [project]

  4. Is Multihop QA in DiRe Condition? Measuring and Reducing Disconnected Reasoning EMNLP (2020)

    Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal [pdf] [project]

<h4 id="2.3.2">Related Works</h4>
  1. Dynamically Fused Graph Network for Multi-hop Reasoning ACL (2019)

    Lin Qiu, Yunxuan Xiao, Yanru Qu, Hao Zhou, Lei Li, Weinan Zhang, Yong Yu [pdf] [project]

  2. Multi-hop Reading Comprehension across Multiple Documents by Reasoning over Heterogeneous Graphs ACL (2019)

    Ming Tu, Guangtao Wang, Jing Huang, Yun Tang, Xiaodong He, Bowen Zhou [pdf]

  3. Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction ACL (2019)

    Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita [pdf]

  4. Multi-hop Reading Comprehension through Question Decomposition and Rescoring ACL (2019)

    Sewon Min, Victor Zhong, Luke Zettlemoyer, Hannaneh Hajishirzi [pdf] [project]

  5. Differentiable Reasoning over a Virtual Knowledge Base ICLR (2020)

    Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, William W. Cohen [pdf] [project]

  6. Transformer-XH: Multi-Evidence Reasoning with eXtra Hop Attention ICLR Poster (2020)

    Chen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul N. Bennett, Saurabh Tiwary [pdf] [project]

  7. Low-Resource Generation of Multi-hop Reasoning Questions ACL (2020)

    Jianxing Yu, Wei Liu, Shuang Qiu, Qinliang Su, Kai Wang, Xiaojun Quan, Jian Yin [pdf]

  8. SRLGRN: Semantic Role Labeling Graph Reasoning Network EMNLP (2020)

    Chen Zheng, Parisa Kordjamshidi [pdf]

  9. Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering ICLR (2020)

    Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, Caiming Xiong [pdf] [project]

  10. Robustifying Multi-hop QA through Pseudo-Evidentiality Training ACL (2021)

    Kyungjae Lee, Seung-won Hwang, Sang-eun Han, Dohyeon Lee [pdf]

  11. Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension EMNLP (2021)

    Naoya Inoue, Harsh Trivedi, Steven Sinha, Niranjan Balasubramanian, Kentaro Inui [pdf] [project]

  12. Generative Context Pair Selection for Multi-hop Question Answering EMNLP (2021)

    Dheeru Dua, Cícero Nogueira dos Santos, Patrick Ng, Ben Athiwaratkun, Bing Xiang, Matt Gardner, Sameer Singh [pdf] [project]

  13. Breadth First Reasoning Graph for Multi-hop Question Answering NAACL (2021)

    Yongjie Huang, Meng Yang [pdf]

  14. Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval ICLR Poster (2021)

    Wenhan Xiong, Xiang Lorraine Li, Srini Iyer, Jingfei Du, Patrick S. H. Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, Barlas Oguz [pdf] [project]

  15. Multi-Step Reasoning Over Unstructured Text with Beam Dense Retrieval NAACL (2021)

    Chen Zhao, Chenyan Xiong, Jordan L. Boyd-Graber, Hal Daumé III [pdf] [project]

  16. Unsupervised Multi-hop Question Answering by Question Generation NAACL (2021)

    Liangming Pan, Wenhu Chen, Wenhan Xiong, Min-Yen Kan, William Yang Wang [pdf] [project]

  17. If You Want to Go Far Go Together: Unsupervised Joint Candidate Evidence Retrieval for Multi-hop Question Answering NAACL (2021)

    Vikas Yadav, Steven Bethard, Mihai Surdeanu [pdf] [project]

  18. Baleen: Robust Multi-Hop Reasoning at Scale via Condensed Retrieval NeurIPS (2021)

    Omar Khattab, Christopher Potts, Matei A. Zaharia [pdf] [project]

  19. Modeling Multi-hop Question Answering as Single Sequence Prediction ACL (2022)

    Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou, Nitish Shirish Keskar, Caiming Xiong [pdf]

  20. CQG: A Simple and Effective Controlled Generation Framework for Multi-hop Question Generation ACL (2022)

    Zichu Fei, Qi Zhang, Tao Gui, Di Liang, Sirui Wang, Wei Wu, Xuanjing Huang [pdf] [project]

<h3 id="2.4">Commonsense Reasoning</h3>

Commonsense reasoning deals with implicit commonsense knowledge, which can be non-trivial for machines since such knowledge is difficult to retrieve from the web due to reporting bias. Although the topic is named "reasoning", its common theme is commonsense knowledge rather than reasoning itself. Here, we only list reasoning datasets.

<h4 id="2.4.1">Datasets & Benchmarks</h4>

There are mainly three types of reasoning datasets, targeting "what" (i.e., assertions or events), "what if / why" (e.g., causal and temporal relations between events), and "how" (i.e., actions).

"What" commonsense reasoning requires combining multiple pieces of knowledge, some drawn from external knowledge sources and others being commonsense knowledge.

<table> <tr> <th colspan=6 align="center">"What" Commonsense Reasoning</th> </tr > <tr> <th align="center">Dataset</th> <th align="center">Size</th> <th align="center">Other Knowledge Type / Source</th> <th align="center">Task</th> <th align="center">Link</th> <th align="center">Rationale</th> </tr > <tr> <td align="center">OpenBookQA</td> <td align="center">6k</td> <td align="center">science <br /> / WorldTree</td> <td align="center">multi-choice QA</td> <td align="center"> <a href="http://aclanthology.lst.uni-saarland.de/D18-1260.pdf">paper</a> <br /> <a href="http://data.allenai.org/OpenBookQA">project</a> </td> <td align="center">ground science facts</td> </tr> <tr> <td align="center">OpenCSR</td> <td align="center">20k</td> <td align="center">science <br /> / WorldTree, ARC corpus</td> <td align="center">free-form QA</td> <td align="center"> <a href="https://aclanthology.org/2021.naacl-main.366.pdf">paper</a> <br /> <a href="https://open-csr.github.io/">project</a> </td> <td align="center">-</td> </tr> <tr> <td align="center">CREAK</td> <td align="center">13k</td> <td align="center">entity <br /> / Wikipedia</td> <td align="center">claim verification</td> <td align="center"> <a href="https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/5737c6ec2e0716f3d8a7a5c4e0de0d9a-Paper-round2.pdf">paper</a> <br /> <a href="https://www.cs.utexas.edu/~yasumasa/creak">project</a> </td> <td align="center">explanation</td> </tr> </table>

"What if / Why" commonsense reasoning often concerns causal and temporal relations between events. Causal relations come in two directions, causes and effects, which can be seen as backward causal reasoning and forward causal reasoning respectively.

<table> <tr> <th colspan=6 align="center">"What if / Why" Commonsense Reasoning</th> </tr > <tr> <th align="center">Dataset</th> <th align="center">Size</th> <th align="center">Direction</th> <th align="center">Task</th> <th align="center">Link</th> <th align="center">Remark</th> </tr > <tr> <td align="center">ROCStories</td> <td align="center">50k</td> <td align="center">temporal</td> <td align="center">2-choice QA</td> <td align="center"> <a href="https://aclanthology.org/N16-1098.pdf">paper</a> <br /> <a href="https://www.cs.rochester.edu/nlp/rocstories/">project</a> </td> <td align="center">-</td> </tr> <tr> <td align="center">SWAG</td> <td align="center">113k</td> <td align="center">temporal</td> <td align="center">multi-choice QA</td> <td align="center"> <a href="http://aclanthology.lst.uni-saarland.de/D18-1009.pdf">paper</a> <br /> <a href="https://rowanzellers.com/swag/">project</a> </td> <td align="center">-</td> </tr> <tr> <td align="center">HellaSwag</td> <td align="center">20k</td> <td align="center">temporal</td> <td align="center">multi-choice QA</td> <td align="center"> <a href="http://aclanthology.lst.uni-saarland.de/P19-1472.pdf">paper</a> <br /> <a href="https://rowanzellers.com/hellaswag/">project</a> </td> <td align="center">an upgraded SWAG</td> </tr> <tr> <td align="center">COPA</td> <td align="center">1k</td> <td align="center">both</td> <td align="center">2-choice QA</td> <td align="center"> <a href="https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF">paper</a> <br /> <a href="https://people.ict.usc.edu/~gordon/copa.html">project</a> </td> <td align="center">-</td> </tr> <tr> <td align="center">Social-IQA</td> <td align="center">38k</td> <td align="center">both</td> <td align="center">multi-choice QA</td> <td align="center"> <a href="https://aclanthology.org/D19-1454.pdf">paper</a> <br /> <a href="https://tinyurl.com/socialiqa">project</a> </td> <td align="center">social situations</td> </tr> <tr> <td 
align="center">e-CARE</td> <td align="center">21k</td> <td align="center">both</td> <td align="center">2-choice QA</td> <td align="center"> <a href="https://aclanthology.org/2022.acl-long.33.pdf">paper</a> <br /> <a href="https://github.com/Waste-Wood/e-CARE/">project</a> </td> <td align="center">with ground-truth supporting facts</td> </tr> <tr> <td align="center">WIQA</td> <td align="center">40k</td> <td align="center">forward</td> <td align="center">multi-choice QA</td> <td align="center"> <a href="https://aclanthology.org/D19-1629.pdf">paper</a> <br /> <a href="http://data.allenai.org/wiqa/">project</a> </td> <td align="center">about natural processes</td> </tr> <tr> <td align="center">TIMETRAVEL</td> <td align="center">29k</td> <td align="center">forward</td> <td align="center">generation</td> <td align="center"> <a href="http://aclanthology.lst.uni-saarland.de/D19-1509.pdf">paper</a> <br /> <a href="https://github.com/qkaren/Counterfactual-StoryRW">project</a> </td> <td align="center">counterfactual reasoning</td> </tr> <tr> <td align="center">ART</td> <td align="center">20k</td> <td align="center">backward</td> <td align="center">2-choice/generation</td> <td align="center"> <a href="https://openreview.net/pdf?id=Byg1v1HKDB">paper</a> <br /> <a href="http://abductivecommonsense.xyz/">project</a> </td> <td align="center">abductive commonsense reasoning</td> </tr> <tr> <td align="center">TellMeWhy</td> <td align="center">30k</td> <td align="center">backward</td> <td align="center">free-form QA</td> <td align="center"> <a href="https://aclanthology.org/2021.findings-acl.53v2.pdf">paper</a> <br /> <a href="http://lunr.cs.stonybrook.edu/tellmewhy">project</a> </td> <td align="center">each annotated with 3 possible answers</td> </tr> <tr> <td align="center">WikiWhy</td> <td align="center">9k</td> <td align="center">backward</td> <td align="center">free-form QA</td> <td align="center"> <a href="https://arxiv.org/pdf/2210.12152.pdf">paper</a> <br /> <a
href="https://github.com/matt-seb-ho/WikiWhy">project</a> </td> <td align="center">about Wikipedia entities / events</td> </tr> </table>
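The cause/effect distinction above can be made concrete with a COPA-style 2-choice instance. The sketch below is purely illustrative (the field names are hypothetical, not COPA's official release format); the example item itself is the well-known one from the COPA paper.

```python
# Illustrative COPA-style instance (hypothetical field names, not the
# dataset's official schema). The asks_for field selects the reasoning
# direction: "cause" = backward causal reasoning, "effect" = forward.
instance = {
    "premise": "The man broke his toe.",
    "asks_for": "cause",  # backward: find the more plausible cause
    "choice1": "He got a hole in his sock.",
    "choice2": "He dropped a hammer on his foot.",
    "label": 2,           # choice2 is the more plausible cause
}

def direction(asks_for: str) -> str:
    """Map a question type to its causal reasoning direction."""
    if asks_for == "cause":
        return "backward (why did it happen?)"
    return "forward (what happens next?)"

print(direction(instance["asks_for"]))  # backward (why did it happen?)
```

Forward-only datasets such as WIQA pose only "effect" questions, while backward-only datasets such as ART pose only "cause"-style (abductive) questions; COPA and Social-IQA cover both directions.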

"How" commonsense reasoning mainly concerns "how to do something" and is often related to problem-solving or decision-making.

<table> <tr> <th colspan=6 align="center">"How" Commonsense Reasoning</th> </tr > <tr> <th align="center">Dataset</th> <th align="center">Size</th> <th align="center">Source</th> <th align="center">Task</th> <th align="center">Link</th> <th align="center">Remark</th> </tr > <tr> <td align="center">WikiHow Goal-Step</td> <td align="center">1489k</td> <td align="center">WikiHow, generated</td> <td align="center">multi-choice</td> <td align="center"> <a href="https://aclanthology.org/2020.emnlp-main.374v2.pdf">paper</a> <br /> <a href="https://github.com/zharry29/wikihow-goal-step/">project</a> </td> <td align="center">goals, steps, and temporal ordering</td> </tr> <tr> <td align="center">PIQA</td> <td align="center">21k</td> <td align="center">human-authored</td> <td align="center">2-choice</td> <td align="center"> <a href="https://arxiv.org/pdf/1911.11641.pdf">paper</a> <br /> <a href="https://yonatanbisk.com/piqa/">project</a> </td> <td align="center">physical commonsense</td> </tr> </table>

Some datasets involve multiple types of reasoning.

<table> <tr> <th colspan=5 align="center">Hybrid Commonsense Reasoning</th> </tr > <tr> <th align="center">Dataset</th> <th align="center">Size</th> <th align="center">Task</th> <th align="center">Link</th> <th align="center">Remark</th> </tr > <tr> <td align="center">CSQA</td> <td align="center">12k</td> <td align="center">multi-choice QA</td> <td align="center"> <a href="https://aclanthology.org/N19-1421.pdf">paper</a> <br /> <a href="https://www.tau-nlp.org/commonsenseqa">project</a> </td> <td align="center">ConceptNet concepts</td> </tr> <tr> <td align="center">CoS-E</td> <td align="center">-</td> <td align="center">-</td> <td align="center"> <a href="https://aclanthology.org/P19-1487.pdf">paper</a> <br /> <a href="https://github.com/salesforce/cos-e">project</a> </td> <td align="center">annotates explanations for CSQA</td> </tr> <tr> <td align="center">ECQA</td> <td align="center">-</td> <td align="center">-</td> <td align="center"> <a href="https://aclanthology.org/2021.acl-long.238.pdf">paper</a> <br /> <a href="https://github.com/dair-iitd/ECQA-Dataset">project</a> </td> <td align="center">annotates commonsense facts for CSQA</td> </tr> <tr> <td align="center">CSQA2</td> <td align="center">14k</td> <td align="center">boolean QA</td> <td align="center"> <a href="https://openreview.net/pdf?id=qF7FlUT5dxa">paper</a> <br /> <a href="http://allenai.github.io/csqa2">project</a> </td> <td align="center">data construction via gamification</td> </tr> <tr> <td align="center">CosmosQA</td> <td align="center">35k</td> <td align="center">multi-choice QA</td> <td align="center"> <a href="http://aclanthology.lst.uni-saarland.de/D19-1243.pdf">paper</a> <br /> <a href="https://wilburone.github.io/cosmos">project</a> </td> <td align="center">reading comprehension on blogs</td> </tr> <tr> <td align="center">Moral Stories</td> <td align="center">12k</td> <td align="center">multi-choice QA</td> <td align="center"> <a href="https://aclanthology.org/2021.emnlp-main.54.pdf">paper</a> <br />
<a href="https://github.com/demelin/moral_stories">project</a> </td> <td align="center">situated reasoning with social norms</td> </tr> </table> <h4 id="2.4.2">Related Works</h4>
  1. Attention Is (not) All You Need for Commonsense Reasoning ACL (2019)

    Tassilo Klein, Moin Nabi [pdf]

  2. COMET: Commonsense Transformers for Automatic Knowledge Graph Construction ACL (2019)

    Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, Yejin Choi [pdf] [project]

  3. Explain Yourself! Leveraging Language Models for Commonsense Reasoning ACL (2019)

    Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, Richard Socher [pdf] [project]

  4. Commonsense Knowledge Mining from Pretrained Models EMNLP (2019)

    Joe Davison, Joshua Feldman, Alexander M. Rush [pdf]

  5. How Reasonable are Common-Sense Reasoning Tasks: A Case-Study on the Winograd Schema Challenge and SWAG EMNLP (2019)

    Paul Trichelair, Ali Emami, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung [pdf] [project]

  6. Guided Generation of Cause and Effect IJCAI (2020)

    Zhongyang Li, Xiao Ding, Ting Liu, J. Edward Hu, Benjamin Van Durme [pdf] [project]

  7. Contrastive Self-Supervised Learning for Commonsense Reasoning ACL (2020)

    Tassilo Klein, Moin Nabi [pdf] [project]

  8. Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning ACL (2020)

    Alexandre Tamborrino, Nicola Pellicanò, Baptiste Pannier, Pascal Voitot, Louise Naudin [pdf]

  9. Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder ACL (2020)

    Daya Guo, Duyu Tang, Nan Duan, Jian Yin, Daxin Jiang, Ming Zhou [pdf] [project]

  10. Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering EMNLP (2020)

    Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, Xiang Ren [pdf] [project]

  11. Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning EMNLP (2020)

    Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena D. Hwang, Ronan Le Bras, Antoine Bosselut, Yejin Choi [pdf] [project]

  12. Self-Supervised Knowledge Triplet Learning for Zero-Shot Question Answering EMNLP (2020)

    Pratyay Banerjee, Chitta Baral [pdf]

  13. Unsupervised Commonsense Question Answering with Self-Talk EMNLP (2020)

    Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi [pdf]

  14. Paragraph-level Commonsense Transformers with Recurrent Memory AAAI (2021)

    Saadia Gabriel, Chandra Bhagavatula, Vered Shwartz, Ronan Le Bras, Maxwell Forbes, Yejin Choi [pdf] [project]

  15. Knowledge-driven Data Construction for Zero-shot Evaluation in Commonsense Question Answering AAAI (2021)

    Kaixin Ma, Filip Ilievski, Jonathan Francis, Yonatan Bisk, Eric Nyberg, Alessandro Oltramari [pdf] [project]

  16. Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models ACL (2021)

    Peter West, Ximing Lu, Ari Holtzman, Chandra Bhagavatula, Jena D. Hwang, Yejin Choi [pdf] [project]

  17. Doing Good or Doing Right? Exploring the Weakness of Commonsense Causal Reasoning Models ACL (2021)

    Mingyue Han, Yinglin Wang [pdf]

  18. Learning Event Graph Knowledge for Abductive Reasoning ACL (2021)

    Li Du, Xiao Ding, Ting Liu, Bing Qin [pdf] [project]

  19. ExCAR: Event Graph Knowledge Enhanced Explainable Causal Reasoning ACL (2021)

    Li Du, Xiao Ding, Kai Xiong, Ting Liu, Bing Qin [pdf] [project]

  20. Differentiable Open-Ended Commonsense Reasoning NAACL (2021)

    Bill Yuchen Lin, Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Xiang Ren, William W. Cohen [pdf] [project]

  21. QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering NAACL (2021)

    Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, Jure Leskovec [pdf] [project]

  22. Towards Zero-shot Commonsense Reasoning with Self-supervised Refinement of Language Models EMNLP (2021)

    Tassilo Klein, Moin Nabi [pdf] [project]

  23. Exploring Strategies for Generalizable Commonsense Reasoning with Pre-trained Models EMNLP (2021)

    Kaixin Ma, Filip Ilievski, Jonathan Francis, Satoru Ozaki, Eric Nyberg, Alessandro Oltramari [pdf] [project]

  24. Shortcutted Commonsense: Data Spuriousness in Deep Learning of Commonsense Reasoning EMNLP (2021)

    Ruben Branco, António Branco, João António Rodrigues, João Ricardo Silva [pdf] [project]

  25. Improving Unsupervised Commonsense Reasoning Using Knowledge-Enabled Natural Language Inference EMNLP findings (2021)

    Canming Huang, Weinan He, Yongmei Liu [pdf] [project]

  26. SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning NIPS (2021)

    Aaron Chan, Jiashu Xu, Boyuan Long, Soumya Sanyal, Tanishq Gupta, Xiang Ren [pdf] [project]

  27. GreaseLM: Graph REASoning Enhanced Language Models ICLR Spotlight (2022)

    Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D. Manning, Jure Leskovec [pdf] [project]

  28. Generated Knowledge Prompting for Commonsense Reasoning ACL (2022)

    Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi [pdf] [project]

  29. JointLK: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering NAACL (2022)

    Yueqing Sun, Qi Shi, Le Qi, Yu Zhang [pdf] [project]

  30. Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning NAACL (2022)

    Yu Jin Kim, Beong-woo Kwak, Youngwook Kim, Reinald Kim Amplayo, Seung-won Hwang, Jinyoung Yeo [pdf]

  31. On Curriculum Learning for Commonsense Reasoning NAACL (2022)

    Yu Jin Kim, Beong-woo Kwak, Youngwook Kim, Reinald Kim Amplayo, Seung-won Hwang, Jinyoung Yeo [pdf] [project]

  32. Embarrassingly Simple Performance Prediction for Abductive Natural Language Inference NAACL (2022)

    Emils Kadikis, Vaibhav Srivastav, Roman Klinger [pdf] [project]

  33. Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge NAACL (2022)

    Ian Porada, Alessandro Sordoni, Jackie Chi Kit Cheung [pdf]

  34. ROCK: Causal Inference Principles for Reasoning about Commonsense Causality ICML (2022)

    Jiayao Zhang, Hongming Zhang, Weijie J. Su, Dan Roth [pdf] [project]

  35. ALERT: Adapting Language Models to Reasoning Tasks arXiv (2022)

    Ping Yu, Tianlu Wang, Olga Golovneva, Badr AlKhamissy, Gargi Ghosh, Mona T. Diab, Asli Celikyilmaz [pdf]

  36. Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations EMNLP (2022)

    Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi [pdf] [project]

  37. Using Commonsense Knowledge to Answer Why-Questions EMNLP (2022)

    Yash Kumar Lal, Niket Tandon, Tanvi Aggarwal, Horace Liu, Nathanael Chambers, Raymond J. Mooney, Niranjan Balasubramanian [pdf] [project]

<h4 id="2.4.3">Knowledge Bases</h4> <table> <tr> <th align="center">KB</th> <th align="center">Type of Knowledge</th> <th align="center">Format of Knowledge</th> <th align="center">Link</th> </tr > <tr> <td align="center">CYC</td> <td align="center">generic</td> <td align="center">LISP-style logic</td> <td align="center"> <a href="https://dl.acm.org/doi/pdf/10.1145/219717.219745">paper</a> <br /> project </td> </tr> <tr> <td align="center">ConceptNet</td> <td align="center">linguistics</td> <td align="center">triple</td> <td align="center"> <a href="https://agents.media.mit.edu/projects/commonsense/ConceptNet-BTTJ.pdf">paper</a> <br /> <a href="http://www.conceptnet.org/">project</a> </td> </tr> <tr> <td align="center">ConceptNet 5.5</td> <td align="center">linguistics</td> <td align="center">triple</td> <td align="center"> <a href="https://arxiv.org/pdf/1612.03975.pdf">paper</a> <br /> <a href="https://github.com/commonsense/conceptnet5">project</a> </td> </tr> <tr> <td align="center">GenericsKB</td> <td align="center">generic</td> <td align="center">statement</td> <td align="center"> <a href="https://arxiv.org/pdf/2005.00660.pdf">paper</a> <br /> <a href="https://allenai.org/data/genericskb">project</a> </td> </tr> <tr> <td align="center">Event2Mind</td> <td align="center">mental state</td> <td align="center">statement</td> <td align="center"> <a href="https://aclanthology.org/P18-1043.pdf">paper</a> <br /> <a href="https://tinyurl.com/event2mind">project</a> </td> </tr> <tr> <td align="center">ATOMIC</td> <td align="center">social causality</td> <td align="center">statement</td> <td align="center"> <a href="https://arxiv.org/pdf/1811.00146.pdf">paper</a> <br /> <a href="https://allenai.org/data/atomic">project</a> </td> </tr> <tr> <td align="center">ATOMIC 2020</td> <td align="center">+physical and eventive causality</td> <td align="center">statement</td> <td align="center"> <a href="https://arxiv.org/pdf/2010.05953.pdf">paper</a> <br /> <a 
href="https://github.com/allenai/comet-atomic-2020">project</a> </td> </tr> <tr> <td align="center">Social-Chem-101</td> <td align="center">rules-of-thumb</td> <td align="center">statement</td> <td align="center"> <a href="https://aclanthology.org/2020.emnlp-main.48.pdf">paper</a> <br /> <a href="https://github.com/mbforbes/social-chemistry-101">project</a> </td> </tr> </table> <h3 id="2.5">Complex Reasoning</h3>

Some datasets are collected from real-world examinations and tests, or are explicitly designed to challenge LLMs; they may require domain-specific knowledge and multiple types of reasoning skills.

<h4 id="2.5.1">Datasets & Benchmarks</h4> <table> <tr> <th colspan=7 align="center">Realistic Examinations</th> </tr > <tr> <th align="center">Dataset</th> <th align="center">Size</th> <th align="center">Domain</th> <th align="center">Source</th> <th align="center">Task</th> <th align="center">Link</th> <th align="center">Remark</th> </tr > <tr> <td align="center">AR-LSAT</td> <td align="center">2k</td> <td align="center">law</td> <td align="center">law school admission test</td> <td align="center">multi-choice QA</td> <td align="center"> <a href="https://aclanthology.org/2022.findings-naacl.177.pdf">paper</a> <br /> <a href="https://github.com/zhongwanjun/AR-LSAT">project</a> </td> <td align="center">-</td> </tr> <tr> <td align="center">HEAD-QA</td> <td align="center">6.7k</td> <td align="center">healthcare</td> <td align="center">specialized healthcare examination</td> <td align="center">multi-choice QA</td> <td align="center"> <a href="https://aclanthology.org/P19-1092.pdf">paper</a> <br /> <a href="http://aghie.github.io/head-qa/">project</a> </td> <td align="center">-</td> </tr> <tr> <td align="center">AI2-ARC</td> <td align="center">7.7k</td> <td align="center">science</td> <td align="center">grade-school standardized test</td> <td align="center">multi-choice QA</td> <td align="center"> <a href="https://arxiv.org/pdf/1803.05457.pdf">paper</a> <br /> <a href="http://data.allenai.org/arc">project</a> </td> <td align="center">-</td> </tr> <tr> <td align="center">EntailmentBank</td> <td align="center">2k</td> <td align="center">-</td> <td align="center">-</td> <td align="center">entailment tree generation</td> <td align="center"> <a href="https://aclanthology.org/2021.emnlp-main.585.pdf">paper</a> <br /> <a href="https://allenai.org/data/entailmentbank">project</a> </td> <td align="center">reasoning paths to hypotheses from AI2-ARC</td> </tr> <tr> <td align="center">ReClor</td> <td align="center">6k</td> <td align="center">generic</td> <td 
align="center">standardized graduate admission examination</td> <td align="center">RC + multi-choice QA</td> <td align="center"> <a href="https://openreview.net/pdf?id=HJgJtT4tvB">paper</a> <br /> <a href="http://whyu.me/reclor/">project</a> </td> <td align="center">-</td> </tr> <tr> <td align="center">MetaLogic</td> <td align="center">1k</td> <td align="center">-</td> <td align="center">-</td> <td align="center">logic metagraph generation</td> <td align="center"> <a href="https://arxiv.org/pdf/2210.12487.pdf">paper</a> <br /> <a href="https://github.com/tencent-ailab/MetaLogic">project</a> </td> <td align="center">reasoning graphs for passages in ReClor</td> </tr> <tr> <td align="center">LogiQA</td> <td align="center">8k</td> <td align="center">generic</td> <td align="center">national civil servants examination of China</td> <td align="center">RC + multi-choice QA</td> <td align="center"> <a href="https://www.ijcai.org/proceedings/2020/0501.pdf">paper</a> <br /> <a href="https://github.com/lgw863/LogiQA-dataset">project</a> </td> <td align="center">-</td> </tr> <tr> <td align="center">ConTRoL</td> <td align="center">8k</td> <td align="center">generic</td> <td align="center">competitive selection and recruitment test</td> <td align="center">NLI</td> <td align="center"> <a href="https://arxiv.org/pdf/2011.04864.pdf">paper</a> <br /> <a href="https://github.com/csitfun/ConTRoL-dataset">project</a> </td> <td align="center">passage-level</td> </tr> </table> <table> <tr> <th colspan=4 align="center">Diagnostic Benchmarks for LLMs</th> </tr > <tr> <th align="center">Benchmark</th> <th align="center">Tasks</th> <th align="center">Link</th> <th align="center">Remark</th> </tr > <tr> <td align="center">BIG-Bench</td> <td align="center">204</td> <td align="center"> <a href="https://arxiv.org/pdf/2206.04615.pdf">paper</a> <br /> <a href="https://github.com/google/BIG-bench">project</a> </td> <td align="center">believed to be beyond the capabilities of current PLMs</td> </tr> 
<tr> <td align="center">BBH</td> <td align="center">23</td> <td align="center"> <a href="https://arxiv.org/pdf/2210.09261.pdf">paper</a> <br /> <a href="https://github.com/suzgunmirac/BIG-Bench-Hard">project</a> </td> <td align="center">challenging BIG-Bench tasks</td> </tr> <tr> <td align="center">MMLU</td> <td align="center">57</td> <td align="center"> <a href="https://arxiv.org/pdf/2009.03300.pdf">paper</a> <br /> <a href="https://github.com/hendrycks/test">project</a> </td> <td align="center">across a diverse set of subjects that humans learn</td> </tr> </table>

<h3 id="2.6">Knowledge Graph Reasoning</h3>

Knowledge graph completion aims to infer missing links in a knowledge graph, while multi-hop reasoning over KGs answers complex queries on incomplete graphs; both require reasoning over the graph structure. Temporal knowledge graph reasoning predicts future links from past quadruples (facts with timestamps).
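The two query types can be sketched over a toy graph. This is a minimal illustration of the task formats only (the triples and function names are invented for this sketch), not any paper's method: completion answers a one-hop query (h, r, ?), while multi-hop reasoning chains a path of relations.

```python
# Toy knowledge graph as a set of (head, relation, tail) triples.
KG = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
}

def one_hop(head, relation):
    """Completion-style query (h, r, ?): all tails t with (h, r, t) in KG."""
    return {t for (h, r, t) in KG if h == head and r == relation}

def multi_hop(head, relations):
    """Follow a relation path, e.g. capital_of -> located_in."""
    frontier = {head}
    for rel in relations:
        frontier = {t for h in frontier for t in one_hop(h, rel)}
    return frontier

print(one_hop("Paris", "capital_of"))                    # {'France'}
print(multi_hop("Paris", ["capital_of", "located_in"]))  # {'Europe'}
```

Real methods replace the exact set lookup with learned scoring (embeddings, rules, or GNNs) so that queries can be answered even when intermediate edges are missing; temporal KG reasoning additionally attaches a timestamp to each triple, yielding quadruples (h, r, t, time).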

<h4 id="2.6.1">Knowledge Graph Completion</h4>

  1. Collaborative Policy Learning for Open Knowledge Graph Reasoning EMNLP (2019)

    Cong Fu, Tong Chen, Meng Qu, Woojeong Jin, Xiang Ren [pdf] [project]

  2. DIVINE: A Generative Adversarial Imitation Learning Framework for Knowledge Graph Reasoning EMNLP (2019)

    Ruiping Li, Xiang Cheng [pdf] [project]

  3. Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning EMNLP (2020)

    Deren Lei, Gangrong Jiang, Xiaotao Gu, Kexuan Sun, Yuning Mao, Xiang Ren [pdf] [project]

  4. Incorporating Graph Attention Mechanism into Knowledge Graph Reasoning Based on Deep Reinforcement Learning EMNLP (2019)

    Heng Wang, Shuangyin Li, Rong Pan, Mingzhi Mao [pdf] [project]

  5. Dynamically Pruned Message Passing Networks for Large-scale Knowledge Graph Reasoning ICLR Poster (2020)

    Xiaoran Xu, Wei Feng, Yunsheng Jiang, Xiaohui Xie, Zhiqing Sun, Zhi-Hong Deng [pdf] [project]

  6. Inductive Relation Prediction by Subgraph Reasoning ICML (2020)

    Komal K. Teru, Etienne G. Denis, William L. Hamilton [pdf] [project]

  7. Adapting Meta Knowledge Graph Information for Multi-Hop Reasoning over Few-Shot Relations EMNLP (2019)

    Xin Lv, Yuxian Gu, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu [pdf] [project]

  8. Dynamic Anticipation and Completion for Multi-Hop Reasoning over Sparse Knowledge Graph EMNLP (2020)

    Xin Lv, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Wei Zhang, Yichi Zhang, Hao Kong, Suhui Wu [pdf] [project]

  9. UniKER: A Unified Framework for Combining Embedding and Definite Horn Rule Reasoning for Knowledge Graph Inference EMNLP (2021)

    Kewei Cheng, Ziqing Yang, Ming Zhang, Yizhou Sun [pdf]

  10. Is Multi-Hop Reasoning Really Explainable? Towards Benchmarking Reasoning Interpretability EMNLP (2021)

    Xin Lv, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Yichi Zhang, Zelin Dai [pdf] [project]

  11. GMH: A General Multi-hop Reasoning Model for KG Completion EMNLP (2021)

    Yao Zhang, Hongru Liang, Adam Jatowt, Wenqiang Lei, Xin Wei, Ning Jiang, Zhenglu Yang [pdf]

  12. Neural-Symbolic Commonsense Reasoner with Relation Predictors ACL (2021)

    Farhad Moghimifar, Lizhen Qu, Yue Zhuo, Gholamreza Haffari, Mahsa Baktashmotlagh [pdf] [project]

<h4 id="2.6.2">Multi-Hop Reasoning over KG</h4>

  1. Query2box: Reasoning over Knowledge Graphs in Vector Space Using Box Embeddings ICLR Poster (2020)

    Hongyu Ren, Weihua Hu, Jure Leskovec [pdf] [project]

  2. Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs NIPS (2020)

    Hongyu Ren, Jure Leskovec [pdf] [project]

  3. Probabilistic Entity Representation Model for Reasoning over Knowledge Graphs NIPS (2021)

    Nurendra Choudhary, Nikhil Rao, Sumeet Katariya, Karthik Subbian, Chandan K. Reddy [pdf] [project]

  4. ConE: Cone Embeddings for Multi-Hop Reasoning over Knowledge Graphs NIPS (2021)

    Zhanqiu Zhang, Jie Wang, Jiajun Chen, Shuiwang Ji, Feng Wu [pdf] [project]

  5. Complex Query Answering with Neural Link Predictors ICLR Oral (2021)

    Erik Arakelyan, Daniel Daza, Pasquale Minervini, Michael Cochez [pdf] [project]

<h4 id="2.6.3">Temporal Knowledge Graph Reasoning</h4>

  1. Explainable Subgraph Reasoning for Forecasting on Temporal Knowledge Graphs ICLR Poster (2021)

    Zhen Han, Peng Chen, Yunpu Ma, Volker Tresp [pdf] [project]

  2. Search from History and Reason for Future: Two-stage Reasoning on Temporal Knowledge Graphs ACL (2021)

    Zixuan Li, Xiaolong Jin, Saiping Guan, Wei Li, Jiafeng Guo, Yuanzhuo Wang, Xueqi Cheng [pdf]

  3. Complex Evolutional Pattern Learning for Temporal Knowledge Graph Reasoning ACL (2022)

    Zixuan Li, Saiping Guan, Xiaolong Jin, Weihua Peng, Yajuan Lyu, Yong Zhu, Long Bai, Wei Li, Jiafeng Guo, Xueqi Cheng [pdf] [project]

<h4 id="2.6.4">Others</h4>

  1. Quantum Embedding of Knowledge for Reasoning NIPS (2019)

    Dinesh Garg, Shajith Ikbal, Santosh K. Srivastava, Harit Vishwakarma, Hima P. Karanam, L. Venkata Subramaniam [pdf] [project]

  2. Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base ICLR Poster (2020)

    William W. Cohen, Haitian Sun, R. Alex Hofer, Matthew Siegler [pdf]

  3. Probabilistic Logic Neural Networks for Reasoning NIPS (2019)

    Meng Qu, Jian Tang [pdf]

  4. RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs ICLR Poster (2021)

    Meng Qu, Junkun Chen, Louis-Pascal A. C. Xhonneux, Yoshua Bengio, Jian Tang [pdf] [project]

  5. Efficient Probabilistic Logic Reasoning with Graph Neural Networks ICLR Poster (2020)

    Yuyu Zhang, Xinshi Chen, Yuan Yang, Arun Ramamurthy, Bo Li, Yuan Qi, Le Song [pdf]

  6. Probabilistic Box Embeddings for Uncertain Knowledge Graph Reasoning NAACL (2021)

    Xuelu Chen, Michael Boratko, Muhao Chen, Shib Sankar Dasgupta, Xiang Lorraine Li, Andrew McCallum [pdf] [project]

  7. Multimodal Analogical Reasoning over Knowledge Graphs ICLR (2023)

    Ningyu Zhang, Lei Li, Xiang Chen, Xiaozhuan Liang, Shumin Deng, Huajun Chen [pdf] [project]

<h3 id="2.7">Mathematical Reasoning</h3>

<h4 id="2.7.1">Benchmarks & Datasets</h4>

  1. Analysing Mathematical Reasoning Abilities of Neural Models ICLR Poster (2019)

    David Saxton, Edward Grefenstette, Felix Hill, Pushmeet Kohli [pdf] [project]

  2. HOList: An Environment for Machine Learning of Higher-Order Theorem Proving ICML (2019)

    Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, Stewart Wilcox [pdf] [project]

  3. DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs EMNLP (2019)

    Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, Matt Gardner [pdf] [project]

  4. IsarStep: a Benchmark for High-level Mathematical Reasoning ICLR Poster (2021)

    Wenda Li, Lei Yu, Yuhuai Wu, Lawrence C. Paulson [pdf] [project]

  5. Towards Table-to-Text Generation with Numerical Reasoning ACL (2021)

    Lya Hulliyyatus Suadaa, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura, Hiroya Takamura [pdf] [project]

  6. Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning ACL (2021)

    Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, Song-Chun Zhu [pdf] [project]

  7. FINQA: A Dataset of Numerical Reasoning over Financial Data EMNLP (2021)

    Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan R. Routledge, William Yang Wang [pdf] [project]

  8. SciGen: a Dataset for Reasoning-Aware Text Generation from Scientific Tables NIPS (2021)

    Nafise Sadat Moosavi, Andreas Rücklé, Dan Roth, Iryna Gurevych [pdf] [project]

  9. MULTIHIERTT: Numerical Reasoning over Multi Hierarchical Tabular and Textual Data ACL (2022)

    Yilun Zhao, Yunxiang Li, Chenying Li, Rui Zhang [pdf] [project]

  10. NUMGLUE: A Suite of Fundamental yet Challenging Mathematical Reasoning Tasks ACL (2022)

    Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Singh Sachdeva, Peter Clark, Chitta Baral, Ashwin Kalyan [pdf] [project]

<h4 id="2.7.2">Papers</h4>

  1. Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems NAACL (2019)

    Ting-Rui Chiang, Yun-Nung Chen [pdf] [project]

  2. A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning EMNLP (2019)

    Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li [pdf] [project]

  3. NumNet: Machine Reading Comprehension with Numerical Reasoning EMNLP (2019)

    Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, Zhiyuan Liu [pdf] [project]

  4. Mathematical Reasoning in Latent Space ICLR Oral (2020)

    Dennis Lee, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, Kshitij Bansal [pdf]

  5. Neural Module Networks for Reasoning over Text ICLR Poster (2020)

    Nitish Gupta, Kevin Lin, Dan Roth, Sameer Singh, Matt Gardner [pdf] [project]

  6. Injecting Numerical Reasoning Skills into Language Models ACL (2020)

    Mor Geva, Ankit Gupta, Jonathan Berant [pdf] [project]

  7. Question Directed Graph Attention Network for Numerical Reasoning over Text EMNLP (2020)

    Kunlong Chen, Weidi Xu, Xingyi Cheng, Zou Xiaochuan, Yuyu Zhang, Le Song, Taifeng Wang, Yuan Qi, Wei Chu [pdf]

  8. Mathematical Reasoning via Self-supervised Skip-tree Training ICLR Spotlight (2021)

    Markus Norman Rabe, Dennis Lee, Kshitij Bansal, Christian Szegedy [pdf]

  9. Incorporating External Knowledge to Enhance Tabular Reasoning NAACL (2021)

    J. Neeraja, Vivek Gupta, Vivek Srikumar [pdf] [project]

  10. Measuring and Improving BERT's Mathematical Abilities by Predicting the Order of Reasoning ACL (2021)

    Piotr Piekos, Mateusz Malinowski, Henryk Michalewski [pdf]

  11. GraphMR: Graph Neural Network for Mathematical Reasoning ACL (2021)

    Weijie Feng, Binbin Liu, Dongpeng Xu, Qilong Zheng, Yun Xu [pdf] [project]

  12. LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning ICML (2021)

    Yuhuai Wu, Markus N. Rabe, Wenda Li, Jimmy Ba, Roger B. Grosse, Christian Szegedy [pdf] [project]

  13. Numerical reasoning in machine reading comprehension tasks: are we there yet? EMNLP (2021)

    Hadeel Al-Negheimish, Pranava Madhyastha, Alessandra Russo [pdf]

  14. Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction ACL (2022)

    Zhanming Jie, Jierui Li, Wei Lu [pdf] [project]

  15. FORTAP: Using Formulas for Numerical-Reasoning-Aware Table Pretraining ACL (2022)

    Zhoujun Cheng, Haoyu Dong, Ran Jia, Pengfei Wu, Shi Han, Fan Cheng, Dongmei Zhang [pdf] [project]

  16. Right for the Right Reason: Evidence Extraction for Trustworthy Tabular Reasoning ACL (2022)

    Vivek Gupta, Shuo Zhang, Alakananda Vempala, Yujie He, Temma Choji, Vivek Srikumar [pdf] [project]

  17. Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills ACL (2022)

    Ori Yoran, Alon Talmor, Jonathan Berant [pdf] [project]

  18. OPERA: Operation-Pivoted Discrete Reasoning over Text NAACL (2022)

    Yongwei Zhou, Junwei Bao, Chaoqun Duan, Haipeng Sun, Jiahui Liang, Yifan Wang, Jing Zhao, Youzheng Wu, Xiaodong He, Tiejun Zhao [pdf] [project]

<h2>Contributor</h2>

Fei YU

<h2>Reference</h2>

```bibtex
@article{yu2023natural,
  title={Natural Language Reasoning, A Survey},
  author={Yu, Fei and Zhang, Hongbo and Wang, Benyou},
  journal={arXiv preprint arXiv:2303.14725},
  year={2023}
}
```