# Awesome LLM Security and Privacy
A curated list of papers and tools covering LLM threats and vulnerabilities from both a security and a privacy standpoint. Summaries, key takeaways, and additional details for each paper can be found in the `paper-summaries` folder.

The `main.bib` file contains the latest citations for the papers listed here.
<p align="center">
  <img src="./images/taxonomy.png" alt="A taxonomy of security and privacy threats against deep learning models and consecutively LLMs" style="width:100%">
  <b>Overview Figure:</b> A taxonomy of current security and privacy threats against deep learning models and, consecutively, Large Language Models (LLMs).
</p>

## Table of Contents
- Papers
- Frameworks & Taxonomies
  - OWASP Top 10 for Large Language Model Applications
  - MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)
  - NIST AI 100-2 E2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations
- Tools
- News Articles, Blog Posts, and Talks
  - Is Generative AI Dangerous?
  - Adversarial examples in the age of ChatGPT
  - LLMs in Security: Demos vs Deployment?
  - Free AI Programs Prone to Security Risks, Researchers Say
  - Why 'Good AI' Is Likely The Antidote To The New Era Of AI Cybercrime
  - Meet PassGPT, the AI Trained on Millions of Leaked Passwords
## Contributing
If you are interested in contributing to this repository, please see CONTRIBUTING.md for the contribution guidelines.

A list of current contributors can be found HERE.
## Contact
For any questions regarding this repository and/or potential (research) collaborations, please contact Briland Hitaj.