
# LLM Guard - The Security Toolkit for LLM Interactions

LLM Guard by Protect AI is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).

Documentation | Playground | Changelog

<a href="https://join.slack.com/t/laiyerai/shared_invite/zt-28jv3ci39-sVxXrLs3rQdaN3mIl9IT~w"><img src="https://github.com/protectai/llm-guard/blob/main/docs/assets/join-our-slack-community.png?raw=true" width="200" alt="Join Our Slack Community"></a>

## What is LLM Guard?


By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM Guard ensures that your interactions with LLMs remain safe and secure.

## Installation

Begin your journey with LLM Guard by installing the package:

```sh
pip install llm-guard
```

## Getting Started

Important Notes:

Examples:
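As a quick illustration, here is a minimal prompt-scanning sketch based on the `scan_prompt` API from the documentation; the specific scanners chosen and the sample prompt are illustrative assumptions and may need adjusting for your version of the package.

```python
from llm_guard import scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, TokenLimit, Toxicity
from llm_guard.vault import Vault

# The Vault stores the entities that Anonymize redacts, so they can be restored later.
vault = Vault()
input_scanners = [Anonymize(vault), Toxicity(), TokenLimit(), PromptInjection()]

# Illustrative prompt; replace with the user input you want to sanitize.
prompt = "Write an email to John Doe (john.doe@example.com) about the quarterly report."

# scan_prompt returns the sanitized prompt, per-scanner validity flags, and risk scores.
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)
if not all(results_valid.values()):
    raise ValueError(f"Prompt is not valid, scores: {results_score}")

print(sanitized_prompt)  # PII is replaced with placeholders before reaching the LLM
```

The output side works analogously; a matching sketch is shown under Output scanners below.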

## Supported scanners

### Prompt scanners

### Output scanners
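As a hedged sketch of how output scanners are applied with `scan_output` (the scanner selection and the response text below are illustrative assumptions, not the only supported configuration):

```python
from llm_guard import scan_output
from llm_guard.output_scanners import NoRefusal, Relevance, Sensitive

# Output scanners validate and sanitize the model's response in the context of the prompt.
output_scanners = [NoRefusal(), Relevance(), Sensitive()]

prompt = "Summarize the customer feedback from last quarter."
response_text = "Here is a short summary of the feedback: ..."  # response returned by your LLM

sanitized_response, results_valid, results_score = scan_output(
    output_scanners, prompt, response_text
)
if not all(results_valid.values()):
    raise ValueError(f"Output is not valid, scores: {results_score}")

print(sanitized_response)
```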

## Community, Contributing, Docs & Support

LLM Guard is an open source solution. We are committed to a transparent development process and highly appreciate any contributions. Whether you are helping us fix bugs, propose new features, improve our documentation or spread the word, we would love to have you as part of our community.

Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions, get help for package usage or contributions, or engage in discussions about LLM security!

<a href="https://join.slack.com/t/laiyerai/shared_invite/zt-28jv3ci39-sVxXrLs3rQdaN3mIl9IT~w"><img src="https://github.com/protectai/llm-guard/blob/main/docs/assets/join-our-slack-community.png?raw=true" width="200" alt="Join Our Slack Community"></a>

## Production Support

We're eager to provide personalized assistance when you deploy LLM Guard to a production environment.