<h1 align="center">GLBench: A Comprehensive Benchmark for Graphs with Large Language Models</h1>
<h5 align="center">If you like our project, please give us a star ⭐ on GitHub for the latest updates.</h5>

This is the official implementation of the following paper:

GLBench: A Comprehensive Benchmark for Graphs with Large Language Models [Paper]

Yuhan Li, Peisong Wang, Xiao Zhu, Aochuan Chen, Haiyun Jiang, Deng Cai, Victor Wai Kin Chan, Jia Li

<p align="center"><img width="75%" src="images/trend.png" /></p> <p align="center"><em>Trend of Graph&LLM.</em></p>

Environment Setup

Before you begin, ensure that you have Anaconda or Miniconda installed on your system. This guide assumes that you have a CUDA-enabled GPU. After creating your conda environment (we recommend Python 3.10), run

```bash
pip install -r requirements.txt
```

to install the required Python packages.
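As a quick sanity check after installation, a small script like the following can report which pinned packages failed to install. This is a convenience sketch using only the standard library; `missing_requirements` is a hypothetical helper, not part of GLBench.

```python
from importlib import metadata
from pathlib import Path

def missing_requirements(requirements_path):
    """Return requirement names from a requirements.txt that are not installed.

    Only the package name before any version specifier is checked; extras
    and environment markers are ignored for simplicity.
    """
    missing = []
    for line in Path(requirements_path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Drop environment markers, then version specifiers such as ==, >=, ~=
        name = line.split(";")[0]
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            name = name.split(sep)[0]
        name = name.split("[")[0].strip()
        try:
            metadata.version(name)
        except metadata.PackageNotFoundError:
            missing.append(name)
    return missing
```

Calling `missing_requirements("requirements.txt")` after `pip install` should return an empty list; any names it returns need to be installed manually.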

Datasets

All datasets in GLBench are available at this link. Please download them and place them in the `datasets` folder.
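Once downloaded, a short check can confirm the folders landed where the benchmark scripts expect them. The dataset names below are illustrative (only `cora` appears elsewhere in this README); adjust the list to the actual GLBench release.

```python
from pathlib import Path

# Illustrative names only; check the GLBench release for the full dataset list.
EXPECTED_DATASETS = ["cora", "citeseer", "pubmed"]

def check_datasets(root="datasets", expected=EXPECTED_DATASETS):
    """Return the subset of expected dataset folders missing under `root`."""
    root = Path(root)
    return [name for name in expected if not (root / name).exists()]
```

An empty return value means everything is in place; otherwise the returned names still need to be downloaded.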

Benchmarking

Supervised

Classical (GNN)

Benchmark the classical GNNs (with grid-searched hyperparameters):

```bash
cd models/gnn
bash run.sh
```
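Conceptually, the script above sweeps a hyperparameter grid and keeps the configuration with the best validation score. A minimal sketch of such a grid search, with a hypothetical grid and a caller-supplied `evaluate` function (the actual grid lives in `models/gnn/run.sh`):

```python
from itertools import product

# Hypothetical grid for illustration; not GLBench's actual search space.
GRID = {
    "lr": [1e-2, 1e-3],
    "hidden_dim": [64, 128],
    "dropout": [0.0, 0.5],
}

def grid_search(evaluate, grid=GRID):
    """Try every hyperparameter combination; return (best_score, best_config).

    `evaluate` takes a config dict and returns a scalar score (e.g. the
    validation accuracy of a GNN trained with that config).
    """
    best_score, best_config = float("-inf"), None
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        config = dict(zip(keys, values))
        score = evaluate(config)
        if score > best_score:
            best_score, best_config = score, config
    return best_score, best_config
```

In practice, `evaluate` would train a GNN on the chosen dataset with the given config and report validation accuracy.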

LLM

Benchmark the LLMs in supervised settings (Sent-BERT, BERT, RoBERTa):

```bash
cd models/llm/llm_supervised
bash roberta_search.sh
```

Benchmark the LLMs in zero-shot settings (gpt-4o, gpt-3.5-turbo, llama3-70b, deepseek-chat):

```bash
cd models/llm/llm_zeroshot
python inference.py --model gpt-4o --data cora
```
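Under the hood, zero-shot inference amounts to turning each node's text attributes into a classification prompt for the chat model. The format below is an illustrative assumption, not the exact prompt used by `inference.py`:

```python
def build_prompt(node_text, labels):
    """Compose a zero-shot node-classification prompt.

    This layout is a sketch; the prompt template actually used by
    inference.py may differ.
    """
    label_list = ", ".join(labels)
    return (
        "You are given the text attributes of a node in a citation graph.\n"
        f"Text: {node_text}\n"
        f"Classify the node into exactly one of: {label_list}.\n"
        "Answer with the category name only."
    )
```

The model's reply is then matched against the candidate label set to produce a prediction.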

Enhancer

Due to package conflicts and version limitations, we recommend using Docker to run GIANT. The Dockerfile is located at:

```
models/enhancer/giant-xrt/dockerfile
```

After starting the Docker container, run:

```bash
cd models/enhancer/giant-xrt/
bash run_all.sh
```

To benchmark the other enhancer methods, run each of the following from the repository root:

```bash
cd models/enhancer/TAPE/
bash run.sh
```

```bash
cd models/enhancer/OneForAll/
bash run.sh
```

Predictor

Due to package conflicts and version limitations, we recommend using Docker to run GraphText. The Dockerfile is located at:

```
models/predictor/GraphText/dockerfile
```

After starting the Docker container, run:

```bash
cd models/predictor/GraphText
bash run.sh
```

For GraphAdapter, run from the repository root:

```bash
cd models/predictor/GraphAdapter
bash run.sh
```

Alignment

```bash
cd models/alignment/GLEM
bash run.sh
bash run_pretrain.sh
bash nc_class_train.sh
bash nc_class_test.sh
```

We also provide separate scripts for different datasets.

Zero-shot

LLM

Benchmark the LLMs (LLaMA3, GPT-3.5-turbo, GPT-4o, DeepSeek-chat):

```bash
cd models/llm
```

You can use your own OpenAI API key.
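OpenAI's client libraries conventionally read the key from the `OPENAI_API_KEY` environment variable; how GLBench's scripts actually load it may differ, so treat this as a sketch:

```python
import os

def get_openai_key():
    """Read the OpenAI API key from the environment.

    OPENAI_API_KEY is the convention used by OpenAI's client libraries;
    check the scripts under models/llm for how GLBench loads the key.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY before running the LLM benchmarks.")
    return key
```

Export the variable (e.g. `export OPENAI_API_KEY=...` in your shell) before launching the benchmark scripts.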

Enhancer

Run each of the following from the repository root:

```bash
cd models/enhancer/OneForAll/
bash run_zeroshot.sh
```

```bash
cd models/enhancer/ZeroG/
bash run.sh
```

Predictor

```bash
cd models/predictor/GraphGPT
bash ./scripts/eval_script/graphgpt_eval.sh
```

FYI: our other works

<p align="center"><em>🔥 <strong>A Survey of Graph Meets Large Language Model: Progress and Future Directions (IJCAI'24) <img src="https://img.shields.io/github/stars/yhLeeee/Awesome-LLMs-in-Graph-tasks.svg" alt="GitHub stars" /></strong></em></p> <p align="center"><em><a href="https://github.com/yhLeeee/Awesome-LLMs-in-Graph-tasks">Github Repo</a> | <a href="https://arxiv.org/abs/2311.12399">Paper</a></em></p> <p align="center"><em>🔥 <strong>ZeroG: Investigating Cross-dataset Zero-shot Transferability in Graphs (KDD'24) <img src="https://img.shields.io/github/stars/NineAbyss/ZeroG.svg" alt="GitHub stars" /></strong></em></p> <p align="center"><em><a href="https://github.com/NineAbyss/ZeroG">Github Repo</a> | <a href="https://arxiv.org/abs/2402.11235">Paper</a></em></p>

Acknowledgement

We are grateful to all authors of the works we cite for their solid work and clear code organization! The original versions of the GraphLLM methods are listed as follows:

Alignment:

GLEM:

Patton:

Enhancer:

ENGINE:

GIANT:

OFA:

TAPE:

ZeroG:

Predictor:

GraphAdapter:

GraphGPT:

GraphText:

InstructGLM:

LLaGA:

Code Base Structure

```
$CODE_DIR
    ├── datasets
    └── models
        ├── alignment
        │   ├── GLEM
        │   └── Patton
        ├── enhancer
        │   ├── ENGINE
        │   ├── giant-xrt
        │   ├── OneForAll
        │   ├── TAPE
        │   └── ZeroG
        ├── gnn
        ├── llm
        │   ├── deepseek-chat
        │   ├── gpt-3.5-turbo
        │   ├── gpt-4o
        │   └── llama3-70b
        └── predictor
            ├── GraphAdapter
            ├── GraphGPT
            ├── GraphText
            ├── InstructGLM
            └── LLaGA
```