<h2 align="center">Internal Consistency and Self-Feedback in Large Language Models: A Survey</h2> <p align="center"> <i> Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning elevation🍓 and hallucination alleviation🍄. </i> <p> <p align="center"> <!-- arxiv badges --> <a href="https://arxiv.org/abs/2407.14507"> <img src="https://img.shields.io/badge/Paper-red?style=flat&logo=arxiv"> </a> <!-- Github --> <a href="https://github.com/IAAR-Shanghai/ICSFSurvey"> <img src="https://img.shields.io/badge/Code-black?style=flat&logo=github"> </a> <!-- HuggingFace --> <a href="https://huggingface.co/papers/2407.14507"> <img src="https://img.shields.io/badge/-%F0%9F%A4%97%20Page-orange?style=flat"/> </a> </p> <p align="center"> <a href="https://scholar.google.com/citations?user=d0E7YlcAAAAJ">Xun Liang</a><sup>1*</sup>, <a href="https://ki-seki.github.io/">Shichao Song</a><sup>1*</sup>, <a href="https://github.com/fan2goa1">Zifan Zheng</a><sup>2*</sup>, <a href="https://github.com/MarrytheToilet">Hanyu Wang</a><sup>1</sup>, <a href="https://github.com/Duguce">Qingchen Yu</a><sup>2</sup>, <a href="https://xkli-allen.github.io/">Xunkai Li</a><sup>3</sup>, <a href="https://ronghuali.github.io/index.html">Rong-Hua Li</a><sup>3</sup>, Yi Wang<sup>4</sup>, Zhonghao Wang<sup>4</sup>, <a href="https://scholar.google.com/citations?user=GOKgLdQAAAAJ">Feiyu Xiong</a><sup>2</sup>, <a href="https://www.semanticscholar.org/author/Zhiyu-Li/2268429641">Zhiyu Li</a><sup>2†</sup> </p> <p align="center"> <small> <sup>1</sup><a href="https://en.ruc.edu.cn/">RUC</a>, <sup>2</sup><a href="https://www.iaar.ac.cn/">IAAR</a>, <sup>3</sup><a href="https://english.bit.edu.cn/">BIT</a>, <sup>4</sup><a href="https://english.news.cn/">Xinhua</a> <br> <sup>*</sup>Equal contribution, <sup>†</sup>Corresponding author (lizy@iaar.ac.cn) </small> </p>

📰 News

🎉 Introduction

Welcome to the GitHub repository for our survey paper titled "Internal Consistency and Self-Feedback in Large Language Models: A Survey." The survey's goal is to provide a unified perspective on the self-evaluation and self-updating mechanisms in LLMs, encapsulated within the frameworks of Internal Consistency and Self-Feedback.
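
To make the framework concrete, here is a minimal, hypothetical sketch of a Self-Feedback loop: the model's expression is self-evaluated to obtain a consistency signal, and that signal drives a self-update. The function names (`generate`, `evaluate_consistency`, `update`), the threshold, and the round limit are placeholders for this sketch, not an API defined in the paper.

```python
# A minimal Self-Feedback loop sketch (assumed structure, not the paper's API):
# Self-Evaluation derives a consistency signal from the model's expression,
# and Self-Update revises the expression based on that signal.
from typing import Callable

def self_feedback_loop(
    prompt: str,
    generate: Callable[[str], str],                     # placeholder: LLM call
    evaluate_consistency: Callable[[str, str], float],  # placeholder: Self-Evaluation
    update: Callable[[str, str, float], str],           # placeholder: Self-Update
    max_rounds: int = 3,
    threshold: float = 0.9,
) -> str:
    """Iteratively refine a response until it appears internally consistent."""
    response = generate(prompt)
    for _ in range(max_rounds):
        signal = evaluate_consistency(prompt, response)  # Self-Evaluation
        if signal >= threshold:                          # consistent enough: stop
            break
        response = update(prompt, response, signal)      # Self-Update
    return response
```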

This repository includes three key resources:

<details><summary>Click Me to Show the Table of Contents</summary> </details>

📚 Paper List

Here we list the most important references cited in our survey, along with other papers we consider noteworthy. This list will be updated regularly.

Related Survey Papers

These are some of the most relevant surveys related to our paper.

Section IV: Consistency Signal Acquisition

From the various forms of expression an LLM produces, we can derive different kinds of consistency signals, which in turn help update those expressions more effectively.
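
As a toy illustration (not taken from the survey itself), one simple consistency signal is the agreement rate among several sampled answers to the same question; the hard-coded `samples` list below stands in for real LLM samples, and `agreement_score` is a name invented for this sketch.

```python
# Toy consistency-signal acquisition: sample several answers to one question
# and use their agreement rate as a confidence / consistency estimate.
from collections import Counter

def agreement_score(samples: list[str]) -> float:
    """Fraction of samples that match the most common (normalized) answer."""
    normalized = [s.strip().lower() for s in samples]
    top_count = Counter(normalized).most_common(1)[0][1]
    return top_count / len(normalized)

# Hypothetical samples for "What is the capital of France?"
samples = ["Paris", "paris", "Paris", "Lyon"]
print(agreement_score(samples))  # 0.75 -> a moderately strong consistency signal
```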

Confidence Estimation

Hallucination Detection

Uncertainty Estimation

Verbal Critiquing

Faithfulness Measurement

Consistency Estimation

Section V: Reasoning Elevation

Enhancing reasoning ability, i.e., improving LLM performance on QA tasks, through Self-Feedback strategies.
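
For instance, a self-consistency-style strategy (one common way of refining with responses) samples several reasoning paths and keeps the majority answer. The sketch below uses a stubbed sampler in place of a real temperature-sampled LLM call.

```python
# Self-consistency-style sketch: sample multiple reasoning paths for a question
# and return the majority final answer. `sample_answer` stands in for a
# temperature-sampled chain-of-thought LLM call.
import random
from collections import Counter
from typing import Callable

def majority_answer(question: str,
                    sample_answer: Callable[[str], str],
                    n_samples: int = 5) -> str:
    answers = [sample_answer(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Stub sampler so the sketch runs end to end; a real system would query an LLM.
def stub_sampler(question: str) -> str:
    return random.choice(["42", "42", "41"])  # noisy, but mostly agrees on "42"

print(majority_answer("What is 6 * 7?", stub_sampler))
```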

Reasoning Topologically

Refining with Responses

Multi-Agent Collaboration

Section VI: Hallucination Alleviation

Improving factual accuracy in open-ended generation and reducing hallucinations through Self-Feedback strategies.
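
As one hedged example of refining a response iteratively, the sketch below drafts an answer, asks the model to critique its own factuality, and rewrites until the critique reports no issues. The `llm` callable and the "NO ISSUES" stopping phrase are assumptions of this sketch, not an interface defined in the survey.

```python
# Iterative self-refinement sketch for hallucination alleviation:
# draft -> self-critique for unsupported claims -> rewrite, until clean.
from typing import Callable

def refine_until_factual(question: str,
                         llm: Callable[[str], str],  # placeholder completion function
                         max_rounds: int = 3) -> str:
    draft = llm(f"Answer the question:\n{question}")
    for _ in range(max_rounds):
        critique = llm(
            "List any unsupported or likely hallucinated claims in this answer, "
            f"or reply 'NO ISSUES'.\nQuestion: {question}\nAnswer: {draft}"
        )
        if "NO ISSUES" in critique.upper():  # assumed stopping convention
            break
        draft = llm(
            f"Rewrite the answer to fix these issues.\nQuestion: {question}\n"
            f"Answer: {draft}\nIssues: {critique}"
        )
    return draft
```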

Mitigating Hallucination while Generating

Refining the Response Iteratively

Activating Truthfulness

Decoding Truthfully

Section VII: Other Tasks

In addition to the tasks aimed at improving consistency (reasoning elevation and hallucination alleviation), a number of other tasks also make use of Self-Feedback strategies.

Preference Learning

Knowledge Distillation

Continuous Learning

Data Synthesis

Consistency Optimization

Decision Making

Event Argument Extraction

Inference Acceleration

Machine Translation

Negotiation Optimization

Retrieval Augmented Generation

Text Classification

Section VIII.A: Meta Evaluation

Some common evaluation benchmarks.

Consistency Evaluation

Self-Knowledge Evaluation

Uncertainty Evaluation

Feedback Ability Evaluation

Theoretical Perspectives

Some theoretical research on Internal Consistency and Self-Feedback strategies.

📝 Citation

```bibtex
@article{liang2024internal,
  title={Internal consistency and self-feedback in large language models: A survey},
  author={Liang, Xun and Song, Shichao and Zheng, Zifan and Wang, Hanyu and Yu, Qingchen and Li, Xunkai and Li, Rong-Hua and Wang, Yi and Wang, Zhonghao and Xiong, Feiyu and Li, Zhiyu},
  journal={arXiv preprint arXiv:2407.14507},
  year={2024}
}
```