LogiCoT: Logical Chain-of-Thought Instruction Tuning with GPT-4
For more information, please refer to our EMNLP 2023 Findings paper - LogiCoT: Logical Chain-of-Thought Instruction-tuning Data Collection with GPT-4
Updates: Our updated paper has been accepted to the Findings of EMNLP 2023.
The dataset is now hosted on the Hugging Face Datasets page datatune/LogiCoT, which is currently the only distribution channel we allow.
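If you use the Hugging Face datasets library, the data can be pulled straight from the Hub. The snippet below is a minimal sketch; the split name is an assumption, so check the dataset card for the exact splits and configurations.

```python
from datasets import load_dataset

# Load the LogiCoT instruction-tuning data from the Hugging Face Hub.
# The split name "train" is an assumption; see the dataset card for details.
dataset = load_dataset("datatune/LogiCoT", split="train")

# Inspect one instruction-tuning example.
print(dataset[0])
```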
This repository provides the instructions and demonstrations for building generative large language models with formal logical reasoning abilities.
Source Data
- LogicInference
- EntailmentBank
- FOLIO
- ReClor
- LogiQA
Instruction types
General inference task
- Translate the following inference to logic notation
- What can be inferred from the following premises in a single inference step (ignoring inferences that add new predicates or constants)? Name the inference rule being used
- What can be inferred from the following premises in a single inference step (ignoring inferences that add new predicates or constants)?
- Consider the following premises. <> Can we infer the following from them? <>
- Consider the following premises. <> Can we infer <> from them? If possible, name the inference rules being used at each step
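For illustration, a general inference training instance pairs one of the templates above with a set of premises and a target derivation. The sketch below is hypothetical; the field names and the sample content are illustrative, not the exact released schema.

```python
# Hypothetical general inference instance; field names and content are illustrative only.
general_inference_example = {
    "instruction": (
        "What can be inferred from the following premises in a single inference step "
        "(ignoring inferences that add new predicates or constants)? "
        "Name the inference rule being used."
    ),
    "input": "forall x: P(x) -> Q(x). P(a).",
    "output": "Q(a) can be inferred via universal modus ponens.",
}
```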
Multi-choice reading comprehension task
- Identify the claim that must be true or is required in order for the argument to work.
- Identify information that would strengthen an argument.
- Identify information that would weaken an argument.
- Identify information that would explain or resolve a situation.
- Identify a flaw in an argument’s reasoning.
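A multi-choice reading comprehension instance combines one of these instructions with a passage, a question, and answer options. The structure below is a hypothetical sketch; the field names and the sample content are illustrative only.

```python
# Hypothetical multi-choice reading comprehension instance; all content is illustrative.
multi_choice_example = {
    "instruction": "Identify the claim that must be true or is required in order for the argument to work.",
    "context": "All committee members attended the vote. Dana voted, so Dana is on the committee.",
    "options": [
        "A. Only committee members were allowed to vote.",
        "B. Dana attended every committee meeting.",
        "C. The vote required a majority to pass.",
        "D. Some committee members abstained.",
    ],
    "answer": "A",
}
```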
Models
We fine-tuned the LLaMA-7B model on two A100 GPUs. The tuned model can be downloaded here.
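The sketch below shows one way to run this kind of instruction tuning with Hugging Face Transformers. It is a minimal sketch, not our exact training script: the base checkpoint identifier, the field names (instruction/output), and all hyperparameters are assumptions.

```python
# Hypothetical fine-tuning sketch; checkpoint name, field names, and hyperparameters
# are assumptions, not the exact configuration used for the released model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

model_name = "huggyllama/llama-7b"  # assumed base checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("datatune/LogiCoT", split="train")

def tokenize(batch):
    # Concatenate instruction and response into a single training sequence.
    texts = [i + "\n" + o for i, o in zip(batch["instruction"], batch["output"])]
    enc = tokenizer(texts, truncation=True, max_length=1024, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()
    return enc

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="logicot-llama-7b",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=tokenized,
)
trainer.train()
```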
How to cite
@inproceedings{liu2023logicot,
  title={LogiCoT: Logical Chain-of-Thought Instruction Tuning},
  author={Liu, Hanmeng and Teng, Zhiyang and Cui, Leyang and Zhang, Chaoli and Zhou, Qiji and Zhang, Yue},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2023},
  pages={2908--2921},
  year={2023}
}