
<h1 align="center">Awesome-LLMs-in-Graph-tasks</h1>
<h5 align="center">If you like our project, please give us a star ⭐ on GitHub for the latest updates.</h5>

This is a collection of papers on leveraging Large Language Models for graph tasks. It is based on our survey paper: [A Survey of Graph Meets Large Language Model: Progress and Future Directions](https://arxiv.org/abs/2311.12399).

We aim to keep this list updated frequently. If you find any errors or missed papers, please don't hesitate to open an issue or submit a pull request.

Our survey has been accepted to the IJCAI 2024 Survey Track.

## How can LLMs help improve graph-related tasks?

With the help of LLMs, there has been a notable shift in the way we interact with graphs, particularly those whose nodes carry text attributes. Integrating LLMs with traditional GNNs can be mutually beneficial and enhance graph learning. While GNNs are proficient at capturing structural information, they primarily rely on semantically constrained embeddings as node features, which limits their ability to express the full complexity of the nodes. By incorporating LLMs, GNNs can be equipped with stronger node features that capture both structural and contextual aspects. Conversely, LLMs excel at encoding text but often struggle to capture the structural information present in graph data. Combining the two leverages the robust textual understanding of LLMs while harnessing the GNNs' ability to model structural relationships, leading to more comprehensive and powerful graph learning.

<p align="center"><img src="Figures/overview.png" width=75% height=75%></p> <p align="center"><em>Figure 1.</em> The overview of Graph Meets LLMs.</p>
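
To make the enhancer direction above concrete, here is a minimal sketch of an LLM encoding node texts into the feature matrix of a standard GNN. It is illustrative only, not a method prescribed by the survey: the encoder choice (sentence-transformers' `all-MiniLM-L6-v2`), the GNN backbone (PyTorch Geometric's `GCNConv`), and the toy graph are all our own assumptions.

```python
# "LLM as Enhancer" sketch: a language model encodes each node's text
# attribute into an embedding, which then serves as the node feature
# matrix of an ordinary GNN. Library and model choices are illustrative.
import torch
from sentence_transformers import SentenceTransformer
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy text-attributed graph: 3 nodes with text attributes, 2 undirected edges.
node_texts = [
    "A survey of graph neural networks.",
    "Large language models for text understanding.",
    "Combining GNNs and LLMs for node classification.",
]
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)

# Step 1: an LLM-style text encoder produces semantically rich node features.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
x = torch.tensor(encoder.encode(node_texts), dtype=torch.float)

# Step 2: a GNN consumes these features and propagates them over the structure.
class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, data):
        h = self.conv1(data.x, data.edge_index).relu()
        return self.conv2(h, data.edge_index)

data = Data(x=x, edge_index=edge_index)
model = GCN(in_dim=x.size(1), hidden_dim=64, num_classes=2)
logits = model(data)  # per-node class logits
print(logits.shape)   # torch.Size([3, 2])
```

The same pattern generalizes: any frozen or fine-tuned LLM can supply the feature matrix, while the choice of GNN and prediction head depends on whether the task is node-, link-, or graph-level.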

## Summarization based on the proposed taxonomy

<p align="center"><img src="Figures/summarization.png" width=100% height=75%></p> <p align="left"><em>Table 1.</em> A summary of models that leverage LLMs to assist graph-related tasks in the literature, ordered by release time. <b>Fine-tuning</b> denotes whether the LLM's parameters need to be fine-tuned; &hearts; indicates that the model employs parameter-efficient fine-tuning (PEFT) strategies, such as LoRA and prefix tuning. <b>Prompting</b> indicates the use of text-formatted prompts in LLMs, written manually or generated automatically. Acronyms in <b>Task</b>: Node refers to node-level tasks; Link refers to link-level tasks; Graph refers to graph-level tasks; Reasoning refers to Graph Reasoning; Retrieval refers to Graph-Text Retrieval; Captioning refers to Graph Captioning.</p>

## Table of Contents

- LLM as Enhancer
- LLM as Predictor
- GNN-LLM Alignment
- Benchmarks
- Others
  - LLM as Annotator
  - LLM as Controller
  - LLM as Sample Generator
  - LLM as Similarity Analyzer
  - LLM for Robustness
  - LLM for Task Planning

## Other Repos

We note that several other repos also summarize papers on the integration of LLMs and graphs. We differentiate ourselves by organizing these papers under a new, more granular taxonomy. We encourage researchers to explore these repositories as well for a comprehensive overview.

We also highly recommend a repository that summarizes work on Graph Prompt, a topic closely related to Graph-LLM.

## Contributing

If you have come across relevant resources, feel free to open an issue or submit a pull request using the template below:

```markdown
* (_time_) [conference] **paper_name** [[Paper](link) | [Code](link)]
   <details close>
   <summary>Model name</summary>
   <p align="center"><img width="75%" src="Figures/xxx.jpg" /></p>
   <p align="center"><em>The framework of model name.</em></p>
   </details>
```

## Cite Us

Feel free to cite our survey if you find it useful!

```bibtex
@article{li2023survey,
  title={A Survey of Graph Meets Large Language Model: Progress and Future Directions},
  author={Li, Yuhan and Li, Zhixun and Wang, Peisong and Li, Jia and Sun, Xiangguo and Cheng, Hong and Yu, Jeffrey Xu},
  journal={arXiv preprint arXiv:2311.12399},
  year={2023}
}
```