<div align="center"> <img src="https://raw.githubusercontent.com/TrustLLMBenchmark/TrustLLM-Website/main/img/logo.png" width="100%">

Toolkit for "TrustLLM: Trustworthiness in Large Language Models"

Website Paper Dataset Data Map Leaderboard Toolkit Document

<img src="https://img.shields.io/github/last-commit/HowieHwong/TrustLLM?style=flat-square&color=5D6D7E" alt="git-last-commit" /> <img src="https://img.shields.io/github/commit-activity/m/HowieHwong/TrustLLM?style=flat-square&color=5D6D7E" alt="GitHub commit activity" /> <img src="https://img.shields.io/github/languages/top/HowieHwong/TrustLLM?style=flat-square&color=5D6D7E" alt="GitHub top language" /> </div> <div align="center"> </div>

Updates & News

👂 TL;DR

Table of Contents

🙋 About TrustLLM

We introduce TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight dimensions. Based on these principles, we establish a benchmark across six of them: truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs on TrustLLM, covering over 30 datasets. This document explains how to use the trustllm Python package to assess the trustworthiness of your LLM more quickly. For more details about TrustLLM, please refer to the project website.

<div align="center"> <img src="https://raw.githubusercontent.com/TrustLLMBenchmark/TrustLLM-Website/main/img/benchmark_arch_00.png" width="100%"> </div>

🧹 Before Evaluation

Installation

Installation via GitHub (recommended):

git clone git@github.com:HowieHwong/TrustLLM.git

Installation via pip:

pip install trustllm

Installation via conda:

conda install -c conda-forge trustllm

If you installed from GitHub, create a new environment:

conda create --name trustllm python=3.9

Then install the required packages from the cloned repository:

cd trustllm_pkg
pip install .

Dataset Download

Download the TrustLLM dataset:

from trustllm.dataset_download import download_dataset

download_dataset(save_path='save_path')  # replace 'save_path' with the directory where the dataset should be saved
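
After the call returns, the dataset files are written to the directory you passed as save_path. As a quick sanity check you can list that directory; this is a minimal sketch assuming save_path='save_path' as above (the exact file names depend on the dataset release):

import os

# List the downloaded dataset files to confirm the download succeeded.
for name in sorted(os.listdir('save_path')):
    print(name)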

Generation

A generation section has been included since version 0.2.0. Start your generation from this page. Here is an example:

from trustllm.generation.generation import LLMGeneration

llm_gen = LLMGeneration(
    model_path="your model name", 
    test_type="test section", 
    data_path="your dataset file path",
    model_name="", 
    online_model=False, 
    use_deepinfra=False,
    use_replicate=False,
    repetition_penalty=1.0,
    num_gpus=1, 
    max_new_tokens=512, 
    debug=False,
    device='cuda:0'
)

llm_gen.generation_results()
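
To generate responses for several benchmark sections in one run, you can construct one LLMGeneration per section and call generation_results() on each. This is a minimal sketch, not part of the toolkit itself; the section names and dataset paths below are placeholders, so substitute the test_type values and files you actually use:

from trustllm.generation.generation import LLMGeneration

# Hypothetical (section, dataset file) pairs -- replace with your own.
sections = [
    ("truthfulness", "dataset/truthfulness.json"),
    ("safety", "dataset/safety.json"),
]

for test_type, data_path in sections:
    llm_gen = LLMGeneration(
        model_path="your model name",
        test_type=test_type,
        data_path=data_path,
        model_name="",
        online_model=False,
        use_deepinfra=False,
        use_replicate=False,
        repetition_penalty=1.0,
        num_gpus=1,
        max_new_tokens=512,
        debug=False,
        device='cuda:0'
    )
    llm_gen.generation_results()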

🙌 Evaluation

We provide a toolkit that lets you assess the trustworthiness of large language models more conveniently. Please refer to the documentation for more details. Here is an example:

from trustllm.task.pipeline import run_truthfulness

truthfulness_results = run_truthfulness(  
    internal_path="path_to_internal_consistency_data.json",  
    external_path="path_to_external_consistency_data.json",  
    hallucination_path="path_to_hallucination_data.json",  
    sycophancy_path="path_to_sycophancy_data.json",
    advfact_path="path_to_advfact_data.json"
)
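
The pipeline returns the computed scores as a Python object, so you can inspect them directly or persist them for later comparison across models. A minimal sketch, assuming the returned results are a JSON-serializable dictionary of scores (the output file name is only an example):

import json

print(truthfulness_results)

# Save the scores so different models can be compared later.
with open("truthfulness_results.json", "w") as f:
    json.dump(truthfulness_results, f, indent=2)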

πŸ›ŽοΈ Dataset & Task

Dataset overview:

✓ means the dataset is from prior work, and ✗ means the dataset is first proposed in our benchmark.

| Dataset | Description | Num. | Exist? | Section |
|---------|-------------|------|--------|---------|
| SQuAD2.0 | It combines questions in SQuAD1.1 with over 50,000 unanswerable questions. | 100 | ✓ | Misinformation |
| CODAH | It contains 28,000 commonsense questions. | 100 | ✓ | Misinformation |
| HotpotQA | It contains 113k Wikipedia-based question-answer pairs for complex multi-hop reasoning. | 100 | ✓ | Misinformation |
| AdversarialQA | It contains 30,000 adversarial reading comprehension question-answer pairs. | 100 | ✓ | Misinformation |
| Climate-FEVER | It contains 7,675 climate change-related claims manually curated by human fact-checkers. | 100 | ✓ | Misinformation |
| SciFact | It contains 1,400 expert-written scientific claims paired with evidence abstracts. | 100 | ✓ | Misinformation |
| COVID-Fact | It contains 4,086 real-world COVID claims. | 100 | ✓ | Misinformation |
| HealthVer | It contains 14,330 health-related claims checked against scientific articles. | 100 | ✓ | Misinformation |
| TruthfulQA | Multiple-choice questions that evaluate whether a language model is truthful when answering questions. | 352 | ✓ | Hallucination |
| HaluEval | It contains 35,000 generated and human-annotated hallucinated samples. | 300 | ✓ | Hallucination |
| LM-exp-sycophancy | A dataset of human questions, each with one sycophantic and one non-sycophantic response example. | 179 | ✓ | Sycophancy |
| Opinion pairs | It contains 120 pairs of opposite opinions. | 240, 120 | ✗ | Sycophancy, Preference |
| WinoBias | It contains 3,160 sentences, split for development and testing, created by researchers familiar with the project. | 734 | ✓ | Stereotype |
| StereoSet | It contains sentences that measure model preferences across gender, race, religion, and profession. | 734 | ✓ | Stereotype |
| Adult | The dataset, containing attributes like sex, race, age, education, work hours, and work type, is used to predict salary levels for individuals. | 810 | ✓ | Disparagement |
| Jailbreak Trigger | The dataset contains prompts based on 13 jailbreak attacks. | 1300 | ✗ | Jailbreak, Toxicity |
| Misuse (additional) | This dataset contains prompts crafted to assess how LLMs react when confronted by attackers or malicious users seeking to exploit the model for harmful purposes. | 261 | ✗ | Misuse |
| Do-Not-Answer | It is curated and filtered to consist only of prompts to which responsible LLMs do not answer. | 344 + 95 | ✓ | Misuse, Stereotype |
| AdvGLUE | A multi-task dataset with different adversarial attacks. | 912 | ✓ | Natural Noise |
| AdvInstruction | 600 instructions generated by 11 perturbation methods. | 600 | ✗ | Natural Noise |
| ToolE | A dataset of user queries that may trigger LLMs to use external tools. | 241 | ✓ | Out of Domain (OOD) |
| Flipkart | A product review dataset, collected starting from December 2022. | 400 | ✓ | Out of Domain (OOD) |
| DDXPlus | A 2022 medical diagnosis dataset comprising synthetic data representing about 1.3 million patient cases. | 100 | ✓ | Out of Domain (OOD) |
| ETHICS | It contains numerous morally relevant scenario descriptions and their moral correctness. | 500 | ✓ | Implicit Ethics |
| Social Chemistry 101 | It contains various social norms, each consisting of an action and its label. | 500 | ✓ | Implicit Ethics |
| MoralChoice | It consists of different contexts with morally correct and wrong actions. | 668 | ✓ | Explicit Ethics |
| ConfAIde | It contains descriptions of how information is used. | 196 | ✓ | Privacy Awareness |
| Privacy Awareness | It includes different privacy information queries about various scenarios. | 280 | ✗ | Privacy Awareness |
| Enron Email | It contains approximately 500,000 emails generated by employees of the Enron Corporation. | 400 | ✓ | Privacy Leakage |
| Xstest | A test suite for identifying exaggerated safety behaviors in LLMs. | 200 | ✓ | Exaggerated Safety |

Task overview:

○ means evaluation through automatic scripts (e.g., keyword matching), ● means automatic evaluation by ChatGPT, GPT-4, or Longformer, and ◐ means a mixture of both.

More trustworthy LLMs are expected to score higher on metrics marked with ↑ and lower on metrics marked with ↓.

| Task Name | Metrics | Type | Eval | Section |
|-----------|---------|------|------|---------|
| Closed-book QA | Accuracy (↑) | Generation | ○ | Misinformation (Internal) |
| Fact-Checking | Macro F-1 (↑) | Classification | ● | Misinformation (External) |
| Multiple Choice QA | Accuracy (↑) | Classification | ● | Hallucination |
| Hallucination Classification | Accuracy (↑) | Classification | ● | Hallucination |
| Persona Sycophancy | Embedding similarity (↑) | Generation | ◐ | Sycophancy |
| Opinion Sycophancy | Percentage change (↓) | Generation | ○ | Sycophancy |
| Factuality Correction | Percentage change (↑) | Generation | ○ | Adversarial Factuality |
| Jailbreak Attack Evaluation | RtA (↑) | Generation | ○ | Jailbreak |
| Toxicity Measurement | Toxicity Value (↓) | Generation | ● | Toxicity |
| Misuse Evaluation | RtA (↑) | Generation | ○ | Misuse |
| Exaggerated Safety Evaluation | RtA (↓) | Generation | ○ | Exaggerated Safety |
| Agreement on Stereotypes | Accuracy (↑) | Generation | ◐ | Stereotype |
| Recognition of Stereotypes | Agreement Percentage (↓) | Classification | ◐ | Stereotype |
| Stereotype Query Test | RtA (↑) | Generation | ○ | Stereotype |
| Preference Selection | RtA (↑) | Generation | ○ | Preference |
| Salary Prediction | p-value (↑) | Generation | ● | Disparagement |
| Adversarial Perturbation in Downstream Tasks | ASR (↓), RS (↑) | Generation | ◐ | Natural Noise |
| Adversarial Perturbation in Open-Ended Tasks | Embedding similarity (↑) | Generation | ◐ | Natural Noise |
| OOD Detection | RtA (↑) | Generation | ○ | Out of Domain (OOD) |
| OOD Generalization | Micro F1 (↑) | Classification | ○ | Out of Domain (OOD) |
| Agreement on Privacy Information | Pearson's correlation (↑) | Classification | ● | Privacy Awareness |
| Privacy Scenario Test | RtA (↑) | Generation | ○ | Privacy Awareness |
| Probing Privacy Information Usage | RtA (↑), Accuracy (↓) | Generation | ◐ | Privacy Leakage |
| Moral Action Judgement | Accuracy (↑) | Classification | ◐ | Implicit Ethics |
| Moral Reaction Selection (Low-Ambiguity) | Accuracy (↑) | Classification | ◐ | Explicit Ethics |
| Moral Reaction Selection (High-Ambiguity) | RtA (↑) | Generation | ○ | Explicit Ethics |
| Emotion Classification | Accuracy (↑) | Classification | ● | Emotional Awareness |

πŸ† Leaderboard

If you want to view the performance of all models or upload the performance of your LLM, please refer to this link.

(Leaderboard rank card: images/rank_card_00.png)

📣 Contribution

We welcome contributions of all kinds.

If you intend to improve the toolkit, please fork the repository first, make your modifications to the code, and then open a pull request.

⏰ TODO in Coming Versions

Citation

@inproceedings{huang2024trustllm,
  title={TrustLLM: Trustworthiness in Large Language Models},
  author={Yue Huang and Lichao Sun and Haoran Wang and Siyuan Wu and Qihui Zhang and Yuan Li and Chujie Gao and Yixin Huang and Wenhan Lyu and Yixuan Zhang and Xiner Li and Hanchi Sun and Zhengliang Liu and Yixin Liu and Yijue Wang and Zhikun Zhang and Bertie Vidgen and Bhavya Kailkhura and Caiming Xiong and Chaowei Xiao and Chunyuan Li and Eric P. Xing and Furong Huang and Hao Liu and Heng Ji and Hongyi Wang and Huan Zhang and Huaxiu Yao and Manolis Kellis and Marinka Zitnik and Meng Jiang and Mohit Bansal and James Zou and Jian Pei and Jian Liu and Jianfeng Gao and Jiawei Han and Jieyu Zhao and Jiliang Tang and Jindong Wang and Joaquin Vanschoren and John Mitchell and Kai Shu and Kaidi Xu and Kai-Wei Chang and Lifang He and Lifu Huang and Michael Backes and Neil Zhenqiang Gong and Philip S. Yu and Pin-Yu Chen and Quanquan Gu and Ran Xu and Rex Ying and Shuiwang Ji and Suman Jana and Tianlong Chen and Tianming Liu and Tianyi Zhou and William Yang Wang and Xiang Li and Xiangliang Zhang and Xiao Wang and Xing Xie and Xun Chen and Xuyu Wang and Yan Liu and Yanfang Ye and Yinzhi Cao and Yong Chen and Yue Zhao},
  booktitle={Forty-first International Conference on Machine Learning},
  year={2024},
  url={https://openreview.net/forum?id=bWUU0LwwMp}
}

License

The code in this repository is open source under the MIT license.