<h1 align="center"> <img src="resources/prompt-icon.svg" alt="prompt-icon"> Prompt Fuzzer <img src="resources/prompt-icon.svg" alt="prompt-icon"> </h1> <h2 align="center"> The open-source tool to help you harden your GenAI applications <br> <br>

License: MIT

</h2> <div align="center"> <h4> Brought to you by Prompt Security, the Complete Platform for GenAI Security </h4> </div>

Table of Contents

<!-- vim-markdown-toc GFM -->

* [What is the Prompt Fuzzer](#what-is-prompt-fuzzer)
* [Installation](#installation)
* [Usage](#usage)
  * [Features](#features)
  * [Environment variables](#environment-variables)
  * [LLM providers](#llm-providers)
  * [Command line options](#options)
  * [Examples](#examples)
    * [Interactive mode](#interactive)
    * [Quick start single run](#singlerun)
* [Google Colab Notebook](#colab)
* [Demo video](#demovideo)
* [Simulated Attack Details](#attacks)
  * [Jailbreak](#jailbreak)
  * [Prompt Injection](#pi-injection)
  * [System prompt extraction](#systemleak)
* [What's next on the roadmap?](#roadmap)
* [Contributing](#contributing)

<br>

<a id="what-is-prompt-fuzzer"></a>

✨ What is the Prompt Fuzzer

  1. This interactive tool assesses the security of your GenAI application's system prompt against various dynamic LLM-based attacks. It provides a security evaluation based on the outcome of these attack simulations, enabling you to strengthen your system prompt as needed.
  2. The Prompt Fuzzer dynamically tailors its tests to your application's unique configuration and domain.
  3. The Fuzzer also includes a Playground chat interface, giving you the chance to iteratively improve your system prompt, hardening it against a wide spectrum of generative AI attacks.

:warning: Using the Prompt Fuzzer consumes tokens. :warning:

<br>

<a id="installation"></a>

🚀 Installation


  1. Install the Fuzzer package <a id="using-pip"></a>

    Using pip install

    pip install prompt-security-fuzzer
    

    <a id="using-pypi"></a>

    Using the package page on PyPI

    You can also visit the package page on PyPI.

    Or grab the latest release wheel file from the Releases page (a wheel-install sketch follows this list).

  2. Launch the Fuzzer

    export OPENAI_API_KEY=sk-123XXXXXXXXXXXX
    
    prompt-security-fuzzer
    
  3. Input your system prompt

  4. Start testing

  5. Test yourself with the Playground! Iterate as many times as you like until your system prompt is secure.
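
If you grabbed a wheel file from the Releases page instead of installing from PyPI (see step 1), you can point pip at the downloaded file directly. The filename below is only a placeholder; substitute the actual name of the wheel you downloaded.

    # Install the fuzzer from a locally downloaded wheel (placeholder filename)
    pip install ./prompt_security_fuzzer-<version>-py3-none-any.whl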

<a id="usage"></a>

:computer: Usage

<a id="features"></a>

Features

<b>The Prompt Fuzzer supports:</b><br> 🧞 16 LLM providers<br> 🔫 15 different attacks<br> 💬 Interactive mode<br> 🤖 CLI mode<br> 🧵 Multi-threaded testing<br>

<a id="environment-variables"></a>

Environment variables:

You need to set an environment variable that holds the API key of your preferred LLM provider. The default is OPENAI_API_KEY.

Example: set OPENAI_API_KEY to your API token to use the fuzzer with your OpenAI account.

Alternatively, create a file named .env in the current directory and set the OPENAI_API_KEY there. <a id="llm-providers"></a>
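
For example, a minimal .env file could look like the sketch below. The key value is the same placeholder used earlier in this README, not a real token; replace it with your own.

    # .env in the directory from which you launch the fuzzer
    OPENAI_API_KEY=sk-123XXXXXXXXXXXX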

<details><summary>We're fully LLM agnostic. (Click for the full configuration list of LLM providers)</summary>

| ENVIRONMENT KEY | Description |
| --- | --- |
| ANTHROPIC_API_KEY | Anthropic Chat large language models. |
| ANYSCALE_API_KEY | Anyscale Chat large language models. |
| AZURE_OPENAI_API_KEY | Azure OpenAI Chat Completion API. |
| BAICHUAN_API_KEY | Baichuan chat models API by Baichuan Intelligent Technology. |
| COHERE_API_KEY | Cohere chat large language models. |
| EVERLYAI_API_KEY | EverlyAI Chat large language models. |
| FIREWORKS_API_KEY | Fireworks Chat models. |
| GIGACHAT_CREDENTIALS | GigaChat large language models API. |
| GOOGLE_API_KEY | Google PaLM Chat models API. |
| JINA_API_TOKEN | Jina AI Chat models API. |
| KONKO_API_KEY | ChatKonko Chat large language models API. |
| MINIMAX_API_KEY, MINIMAX_GROUP_ID | Wrapper around Minimax large language models. |
| OPENAI_API_KEY | OpenAI Chat large language models API. |
| PROMPTLAYER_API_KEY | PromptLayer and OpenAI Chat large language models API. |
| QIANFAN_AK, QIANFAN_SK | Baidu Qianfan chat models. |
| YC_API_KEY | YandexGPT large language models. |
</details> <br/> <br/>

<a id="options"></a>

Command line Options
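
To see the full, up-to-date list of command-line options, you can ask the tool itself. This assumes the standard --help switch that Python CLIs conventionally expose; the exact output depends on the installed version.

    # Print every supported flag and its description (conventional help switch, assumed)
    prompt-security-fuzzer --help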

<br/>

<a id="examples"></a>

Examples

System prompt examples (of various strengths) can be found in the subdirectory system_prompt.examples in the sources.

<a id="interactive"></a>

Interactive mode (default mode)

Run tests against the system prompt

    prompt-security-fuzzer

<a id="singlerun"></a>

:speedboat: Quick start single run

Run tests against the system prompt (in non-interactive batch mode):

    prompt-security-fuzzer -b ./system_prompt.examples/medium_system_prompt.txt
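
You can point the -b flag at any prompt file, not just the bundled examples. The path below is hypothetical; substitute the file that contains your own system prompt.

    # Batch-test your own system prompt (hypothetical path)
    prompt-security-fuzzer -b ./my_system_prompt.txt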

📺 Custom Benchmark!

Run tests against the system prompt with a custom benchmark

    prompt-security-fuzzer -b ./system_prompt.examples/medium_system_prompt.txt --custom-benchmark=ps_fuzz/attack_data/custom_benchmark1.csv

🏹 Run only a subset of attacks!

Run tests against the system prompt with a subset of attacks

    prompt-security-fuzzer -b ./system_prompt.examples/medium_system_prompt.txt --custom-benchmark=ps_fuzz/attack_data/custom_benchmark1.csv --tests='["ucar","amnesia"]'
<br> <br> <br>

<a id="colab"></a>

📓 Google Colab Notebook

Refine and harden your system prompt in our Google Colab Notebook<br><br> <img src="./resources/PromptFuzzer.png" alt="Prompt Fuzzer Refinement Process"/> <br><br> <a id="demovideo"></a>

🎬 Demo video

Watch the video

<a id="attacks"></a>

:crossed_swords: Simulated Attack Details

We use a dynamic testing approach: the fuzzer derives the necessary context from your system prompt and adapts the fuzzing process accordingly.

<a id="jailbreak"></a>

Jailbreak

<a id="pi-injection"></a>

Prompt Injection

<a id="systemleak"></a>

System prompt extraction
Definitions
<br/> <br/>

<a id="roadmap"></a>

:rainbow: What's next on the roadmap?

Turn this into a community project! We want this to be useful to everyone building GenAI applications. If you have attacks of your own that you think should be part of this project, please contribute! Here's how: https://github.com/prompt-security/ps-fuzz/blob/main/CONTRIBUTING.md

<a id="contributing"></a>

🍻 Contributing

Interested in contributing to the development of our tools? Great! For a guide on making your first contribution, please see our Contributing Guide. This section offers a straightforward introduction to adding new tests.

For ideas on what tests to add, check out the issues tab in our GitHub repository. Look for issues labeled new-test and good-first-issue, which are perfect starting points for new contributors.