<h1 align="center">GLBench: A Comprehensive Benchmark for Graphs with Large Language Models</h1> <h5 align="center">If you like our project, please give us a star ⭐ on GitHub for the latest updates.</h5>
<p align="center"><img width="75%" src="images/trend.png" /></p>
<p align="center"><em>Trend of Graph&LLM.</em></p>

This is the official implementation of the following paper:

GLBench: A Comprehensive Benchmark for Graphs with Large Language Models [Paper]
Yuhan Li, Peisong Wang, Xiao Zhu, Aochuan Chen, Haiyun Jiang, Deng Cai, Victor Wai Kin Chan, Jia Li
Environment Setup
Before you begin, ensure that you have Anaconda or Miniconda installed on your system. This guide assumes that you have a CUDA-enabled GPU. After creating your conda environment (we recommend Python 3.10), please run
pip install -r requirements.txt
to install the required Python packages.
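For reference, a minimal end-to-end setup could look like the following; the environment name glbench is our own choice, not something the repository mandates:

```bash
# create and activate a fresh environment (the name "glbench" is illustrative)
conda create -n glbench python=3.10 -y
conda activate glbench

# install the pinned dependencies from the repository root
pip install -r requirements.txt
```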
Datasets
All datasets in GLBench are available at this link.
Please place them in the datasets folder.
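The only layout assumption is one subfolder per dataset under datasets; for example (the exact file layout follows the downloaded archives):

```bash
# create the folder at the repository root and unpack the download into it,
# e.g. datasets/cora (the dataset used in the zero-shot example below)
mkdir -p datasets
```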
Benchmarking
Supervised
Classical (GNN)
Benchmark the classical GNNs (with grid-searched hyperparameters):
cd models/gnn
bash run.sh
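To give a sense of what the grid search inside run.sh does, here is a rough sketch; the script name train.py and all flag names below are hypothetical stand-ins for illustration, not the repository's actual interface:

```bash
# illustrative only: sweep a few common GNN hyperparameters, one log per run.
# train.py, --lr, --hidden, and --dropout are hypothetical names, not the repo's API.
for lr in 0.01 0.005 0.001; do
  for hidden in 64 128 256; do
    for dropout in 0.0 0.5; do
      python train.py --lr "$lr" --hidden "$hidden" --dropout "$dropout" \
        > "log_lr${lr}_h${hidden}_d${dropout}.txt"
    done
  done
done
```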
LLM
Benchmark the LLMs in supervised settings (Sent-BERT, BERT, RoBERTa)
cd models/llm/llm_supervised
bash roberta_search.sh
Benchmark the LLMs in zero-shot settings (gpt-4o, gpt-3.5-turbo, llama3-70b, deepseek-chat)
cd models/llm/llm_zeroshot
python inference.py --model gpt-4o --data cora
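The same entry point accepts the other model names listed above, so swapping models is a one-flag change, for example:

```bash
# any of the supported model names can be substituted here
python inference.py --model deepseek-chat --data cora
```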
Enhancer
- GIANT
Due to package conflicts and version constraints, we recommend running GIANT inside Docker. The Dockerfile is at
models/enhancer/giant-xrt/dockerfile
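A typical build-and-run sequence looks like the following; the image tag giant-xrt and the mount point /workspace are our own choices, not prescribed by the repository:

```bash
# build the image from the provided dockerfile (the tag name is illustrative)
docker build -t giant-xrt -f models/enhancer/giant-xrt/dockerfile .

# start an interactive container with GPU access and the repository mounted
docker run --gpus all -it -v "$PWD":/workspace -w /workspace giant-xrt bash
```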
After starting the Docker container, run
cd models/enhancer/giant-xrt/
bash run_all.sh
- TAPE
cd models/enhancer/TAPE/
bash run.sh
- OFA
cd models/enhancer/OneForAll/
bash run.sh
- ENGINE
Predictor
- InstructGLM
- GraphText
Due to package conflicts and version constraints, we recommend running GraphText inside Docker. The Dockerfile is at
models/predictor/GraphText/dockerfile
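Build and start the container the same way as for GIANT above; again, the image tag is our own illustrative choice:

```bash
docker build -t graphtext -f models/predictor/GraphText/dockerfile .
docker run --gpus all -it -v "$PWD":/workspace -w /workspace graphtext bash
```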
After starting the Docker container, run
cd models/predictor/GraphText
bash run.sh
- GraphAdapter
cd models/predictor/GraphAdapter
bash run.sh
- LLaGA
Alignment
- GLEM
cd models/alignment/GLEM
bash run.sh
- Patton
cd models/alignment/Patton
bash run_pretrain.sh
bash nc_class_train.sh
bash nc_class_test.sh
We also provide separate scripts for different datasets.
Zero-shot
LLM
Benchmark the LLMs (LLaMA3, GPT-3.5-turbo, GPT-4o, DeepSeek-chat)
cd models/llm
You can use your own API key for the OpenAI models.
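How the key is supplied depends on the scripts; a common pattern, assuming they read the standard OPENAI_API_KEY environment variable (check the scripts under models/llm for the exact mechanism), is:

```bash
# assumption: the scripts read OpenAI's standard environment variable
export OPENAI_API_KEY="sk-..."  # replace with your own key
```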
Enhancer
- OFA
cd models/enhancer/OneForAll/
bash run_zeroshot.sh
- ZeroG
cd models/enhancer/ZeroG/
bash run.sh
Predictor
- GraphGPT
cd models/predictor/GraphGPT
bash ./scripts/eval_script/graphgpt_eval.sh
FYI: our other works
<p align="center"><em>🔥 <strong>A Survey of Graph Meets Large Language Model: Progress and Future Directions (IJCAI'24) <img src="https://img.shields.io/github/stars/yhLeeee/Awesome-LLMs-in-Graph-tasks.svg" alt="GitHub stars" /></strong></em></p> <p align="center"><em><a href="https://github.com/yhLeeee/Awesome-LLMs-in-Graph-tasks">Github Repo</a> | <a href="https://arxiv.org/abs/2311.12399">Paper</a></em></p> <p align="center"><em>🔥 <strong>ZeroG: Investigating Cross-dataset Zero-shot Transferability in Graphs (KDD'24) <img src="https://img.shields.io/github/stars/NineAbyss/ZeroG.svg" alt="GitHub stars" /></strong></em></p> <p align="center"><em><a href="https://github.com/NineAbyss/ZeroG">Github Repo</a> | <a href="https://arxiv.org/abs/2402.11235">Paper</a></em></p>

Acknowledgement
We are grateful to the authors of all the works we cite for their solid contributions and clear code organization! The original versions of the GraphLLM methods are listed as follows:
Alignment:
GLEM:
- (2022.10) [ICLR' 2023] Learning on Large-scale Text-attributed Graphs via Variational Inference [Paper | Code]
Patton:
- (2023.05) [ACL' 2023] Patton: Language Model Pretraining on Text-Rich Networks
Enhancer:
ENGINE:
- (2024.01) [IJCAI' 2024] Efficient Tuning and Inference for Large Language Models on Textual Graphs [Paper]
GIANT:
- (2022.03) [ICLR' 2022] Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [Paper | Code]
OFA:
- (2023.09) [ICLR' 2024] One for All: Towards Training One Graph Model for All Classification Tasks [Paper | Code]
TAPE:
- (2023.05) [ICLR' 2024] Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning [Paper | Code]
ZeroG:
- (2024.02) [KDD' 2024] ZeroG: Investigating Cross-dataset Zero-shot Transferability in Graphs [Paper | Code]
Predictor:
GraphAdapter:
- (2024.02) [WWW' 2024] Can GNN be Good Adapter for LLMs? [Paper]
GraphGPT:
- (2023.10) [SIGIR' 2024] GraphGPT: Graph Instruction Tuning for Large Language Models
GraphText:
- (2023.10) GraphText: Graph Reasoning in Text Space
InstructGLM:
- (2023.08) Natural Language is All a Graph Needs
LLaGA:
- (2024.02) [ICML' 2024] LLaGA: Large Language and Graph Assistant
Code Base Structure
$CODE_DIR
├── datasets
└── models
├── alignment
│ ├── GLEM
│ └── Patton
├── enhancer
│ ├── ENGINE
│ ├── giant-xrt
│ ├── OneForAll
│ ├── TAPE
│ └── ZeroG
├── gnn
├── llm
│ ├── deepseek-chat
│ ├── gpt-3.5-turbo
│ ├── gpt-4o
│ └── llama3-70b
└── predictor
├── GraphAdapter
├── GraphGPT
├── GraphText
├── InstructGLM
└── LLaGA