# GLoRE

A benchmark for evaluating the logical reasoning of LLMs.
For more information, please refer to our [arXiv preprint](https://arxiv.org/abs/2310.09107).
## Datasets included
- LogiQA
- ReClor
- FOLIO
- ConTRoL
- AR-LSAT
- FRACAS
- HELP (The HELP dataset exceeds the 25 MB file-size limit, so we host the full set on Google Drive. A 1,000-instance version of HELP is included in the repository.)
- ProofWriter
- RuleTaker
- TaxiNLI
- NaN-NLI
- RobustLR (First 1k pieces included. The full 45k version is available on Google Drive.)
- LogicInduction
- ConCeptualCombinations
We are working on incorporating more logical reasoning datasets!
## Setting Up

```bash
pip install evals
```
## Evaluating OpenAI Models
This repository is compatible with the OpenAI Evals library. Please download the Evals package first, then put the contents of this repository's `data` and `evals` directories into `evals/evals/registry/data/<name_of_your_eval>/` and `evals/evals/registry/evals/`, respectively.
e.g. `evals/evals/registry/data/logiqa/logiqa.jsonl` and `evals/evals/registry/evals/logiqa.yaml`.
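If you are writing your own registry entries, the snippets below sketch what these two files typically contain in the OpenAI Evals format. The files shipped with GLoRE may differ; in particular, the built-in `Match` eval class, the prompt wording, and the version suffix here are assumptions.

A sample line of the jsonl data file (the question text is a placeholder):

```jsonl
{"input": [{"role": "system", "content": "Answer with A, B, C, or D."}, {"role": "user", "content": "<question and options>"}], "ideal": "A"}
```

A sketch of the corresponding yaml registry entry:

```yaml
# Sketch of logiqa.yaml, assuming the built-in Match eval class.
logiqa:
  id: logiqa.dev.v0
  metrics: [accuracy]
logiqa.dev.v0:
  class: evals.elsuite.basic.match:Match
  args:
    samples_jsonl: logiqa/logiqa.jsonl
```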
- Export your OpenAI API key to the environment:

  ```bash
  export OPENAI_API_KEY=<your_key>
  ```
- Run the eval:

  ```bash
  oaieval <model_name> <data_name>
  ```

  e.g. `oaieval gpt-3.5-turbo logiqa`
## Evaluating Huggingface Models

Run the inference script:

```bash
python inference.py
```
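For orientation, here is a minimal sketch of what such an evaluation loop can look like with Hugging Face `transformers`. The model name, data path, prompt flattening, and generation settings below are assumptions for illustration, not the actual behavior of `inference.py`:

```python
# Minimal sketch of a Hugging Face inference loop over Evals-style jsonl data.
# Model name, data path, and prompt handling are assumptions; see inference.py
# in this repository for the actual script.
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # assumption: any causal LM
DATA_PATH = "data/logiqa/logiqa.jsonl"        # assumption: jsonl as sketched above

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

with open(DATA_PATH) as f:
    samples = [json.loads(line) for line in f]

for sample in samples:
    # Flatten the chat-style "input" messages into a single prompt string.
    prompt = "\n".join(m["content"] for m in sample["input"])
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=16)
    # Decode only the newly generated tokens, not the echoed prompt.
    answer = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                              skip_special_tokens=True)
    print(answer.strip(), "| ideal:", sample["ideal"])
```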
## Contribute Your Own Dataset
We welcome contributions of new datasets to GLoRE. Please feel free to open an issue in the repo or email us (address provided in the paper) to let us know. We recommend converting your dataset to the GLoRE format with the scripts provided in `example_scripts/`, which contains three example conversion scripts for the `.csv`, `.tsv`, and `.json` formats, respectively; a sketch of such a conversion follows below. We are also happy to help if you run into trouble during format conversion.
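As an illustration, here is a minimal sketch of a `.csv` conversion in the spirit of those scripts. The column names `question` and `answer` and the output schema are hypothetical, so adapt them to your data:

```python
# Sketch of a .csv -> GLoRE-style jsonl conversion (hypothetical column
# names "question" and "answer"; adapt to your dataset's actual schema).
import csv
import json

with open("your_dataset.csv", newline="") as fin, \
        open("your_dataset.jsonl", "w") as fout:
    for row in csv.DictReader(fin):
        record = {
            "input": [
                {"role": "system", "content": "Answer the question."},
                {"role": "user", "content": row["question"]},
            ],
            "ideal": row["answer"],
        }
        fout.write(json.dumps(record) + "\n")
```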
## How to Cite
```bibtex
@misc{liu2023glore,
      title={GLoRE: Evaluating Logical Reasoning of Large Language Models},
      author={Hanmeng Liu and Zhiyang Teng and Ruoxi Ning and Jian Liu and Qiji Zhou and Yue Zhang},
      year={2023},
      eprint={2310.09107},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```