
<h1 align="center"> [NeurIPS 2023 Datasets and Benchmarks Track] ChemLLMBench ⚛ </h1>

The official repository of "What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks". https://arxiv.org/abs/2305.18365


🆕 News

💡 Tasks Overview

*(Figure: overview of the eight chemistry tasks evaluated in ChemLLMBench.)*

📌 Prompt

The following are the prompts used in the paper. It is easy to try your own prompt design: simply change the prompt in the Jupyter notebook of each task and rerun it to see the results and performance. A minimal sketch of both prompt styles is given after the figures below.

Zero-shot Prompt

*(Figure: zero-shot prompt template used in the paper.)*

ICL Prompt

*(Figure: in-context learning (ICL) prompt template used in the paper.)*
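
The snippet below is a minimal sketch of how the two prompt styles can be built and sent to a model. It assumes the OpenAI Python client (>= 1.0) and uses a hypothetical reaction-prediction example with made-up SMILES; the exact instruction wording and API settings in the task notebooks may differ.

```python
# Minimal sketch of zero-shot vs. ICL prompting (assumed OpenAI client >= 1.0;
# the instruction text and SMILES below are illustrative, not the paper's exact prompts).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TASK_INSTRUCTION = (
    "You are an expert chemist. Given the reactants SMILES, "
    "predict the product SMILES. Answer with the SMILES string only."
)

def zero_shot_prompt(query_smiles: str) -> str:
    # Zero-shot: task instruction + query, no demonstrations.
    return f"{TASK_INSTRUCTION}\nReactants: {query_smiles}\nProduct:"

def icl_prompt(examples: list[tuple[str, str]], query_smiles: str) -> str:
    # ICL: task instruction + k demonstration pairs + query.
    demos = "\n".join(f"Reactants: {r}\nProduct: {p}" for r, p in examples)
    return f"{TASK_INSTRUCTION}\n{demos}\nReactants: {query_smiles}\nProduct:"

def ask_llm(prompt: str, model: str = "gpt-4") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    return response.choices[0].message.content.strip()

# Example usage (hypothetical demonstration pair and query):
# demos = [("CCO.CC(=O)O", "CC(=O)OCC")]
# print(ask_llm(icl_prompt(demos, "CCN.CC(=O)O")))
```

To switch prompt designs, only the prompt-building functions need to change; the rest of the notebook (API call and evaluation) stays the same.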

📊 Dataset

The datasets for some tasks are already included in this repository. Because of the size limit, the remaining datasets must be downloaded from the links below. After downloading, move each dataset to its corresponding folder; you can then run the Jupyter notebook for each task. A short loading sketch follows the table.

| Dataset | Link | Reference |
| --- | --- | --- |
| USPTO_Mixed | download | https://github.com/MolecularAI/Chemformer |
| USPTO-50k | download | https://github.com/MolecularAI/Chemformer |
| ChEBI-20 | download | https://github.com/blender-nlp/MolT5 |
| Suzuki-Miyaura | download | https://github.com/seokhokang/reaction_yield_nn |
| Buchwald-Hartwig | download | https://github.com/seokhokang/reaction_yield_nn |
| BBBP, BACE, HIV, Tox21, ClinTox | download | https://github.com/hwwang55/MolR |
| PubChem | download | https://github.com/ChemFoundationModels/ChemLLMBench/blob/main/data/name_prediction/llm_test.csv |
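
The snippet below is a minimal sketch of loading one of the datasets with pandas. The path used here is the PubChem name-prediction test file that ships with this repository (linked in the table above); the paths for the other datasets depend on the folder you move the downloads into.

```python
# Minimal loading sketch (assumed path: the PubChem test split bundled in this repo).
import pandas as pd

df = pd.read_csv("data/name_prediction/llm_test.csv")

# Quick sanity check before running a task notebook.
print(df.shape)
print(df.head())
```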

🤗 Cite us

@misc{guo2023gpt,
      title={What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks}, 
      author={Taicheng Guo and Kehan Guo and Bozhao Nan and Zhenwen Liang and Zhichun Guo and Nitesh V. Chawla and Olaf Wiest and Xiangliang Zhang},
      year={2023},
      eprint={2305.18365},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

🤗 Contact us

Taicheng Guo: tguo2@nd.edu

Xiangliang Zhang: xzhang33@nd.edu