<h1 align="center"> Awesome Reinforcement Learning <br>for Cyber Security </h1> <p align="center"> <img src="https://awesome.re/badge.svg"> <a href="https://github.com/Limmen/awesome-rl-for-cybersecurity"> <img src="https://img.shields.io/badge/Awesome-AwesomeRLForCyber-orange"> </a> <a href="https://github.com/Limmen/awesome-rl-for-cybersecurity/stargazers"> <img src="https://img.shields.io/github/stars/Limmen/awesome-rl-for-cybersecurity"> </a> <a href="https://github.com/Limmen/awesome-rl-for-cybersecurity/network/members"> <img src="https://img.shields.io/github/forks/Limmen/awesome-rl-for-cybersecurity"> </a> <a href="https://github.com/Limmen/awesome-rl-for-cybersecurity"> <img src="https://img.shields.io/github/issues/Limmen/awesome-rl-for-cybersecurity"> </a> <a href="https://github.com/Limmen/awesome-rl-for-cybersecurity#contributors-"><img src="https://img.shields.io/badge/all_contributors-3-orange.svg"></a> </p>

A curated list of resources dedicated to reinforcement learning applied to cyber security. Note that the list includes only work that uses reinforcement learning; general machine learning methods applied to cyber security are not included.

For other related curated lists, see:

<p align="center"> <img src="imgs/network_chess.png" width="50%" height="50%"> </p>

Table of Contents

Environments

CybORG++

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/cyborgplusplus.png' /> </td> <td width='50%'> <a href='https://arxiv.org/pdf/2410.16324v1'>CybORG++: An Enhanced Gym for the Development of Autonomous Cyber Agents</a> <ul> <li> CybORG++ is an advanced toolkit for reinforcement learning research focused on network defence. Building on the CAGE 2 CybORG environment, it introduces key improvements, including enhanced debugging capabilities, refined agent implementation support, and a streamlined environment that enables faster training and easier customization. Along with addressing several software bugs from its predecessor, CybORG++ introduces MiniCAGE, a lightweight version of CAGE 2. </li> <li> Paper: <a href="https://arxiv.org/pdf/2410.16324v1">(2024) CybORG++: An Enhanced Gym for the Development of Autonomous Cyber Agents</a><br/> </li> </ul> </td> </tr> </tbody> </table>

CyberShield

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/cybershield.png' /> </td> <td width='50%'> <a href='https://ieeexplore.ieee.org/document/10710208'>CYBERSHIELD: A Competitive Simulation Environment for Training AI in Cybersecurity</a> <ul> <li> CyberShield encompasses a comprehensive environment with multiple computers, each hosting various services with unique vulnerabilities. Within this environment, two opposing agents, a defender and an attacker, engage in a strategic battle, each equipped with distinct actions aimed at outsmarting the other. CyberShield is optimized for competitive multi-agent training using RL algorithms. </li> <li> Paper: <a href="https://ieeexplore.ieee.org/document/10710208">(2024) CYBERSHIELD: A Competitive Simulation Environment for Training AI in Cybersecurity</a><br/> </li> </ul> </td> </tr> </tbody> </table>

Cyberwheel

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/cyberwheel.png' /> </td> <td width='50%'> <a href='https://github.com/ORNL/cyberwheel'>Cyberwheel: A Reinforcement Learning Simulation Environment</a> <ul> <li> Cyberwheel is a Reinforcement Learning (RL) simulation environment built for training and evaluating autonomous cyber defense models on simulated networks. It was built with modularity in mind, allowing users to extend it to fit their needs, with configuration files for defining networks, services, host types, defensive agents, and more. Cyberwheel is being developed by Oak Ridge National Laboratory (ORNL). </li> <li> Paper: <a href="https://doi.org/10.1145/3675741.3675752">(2024) Towards a High Fidelity Training Environment for Autonomous Cyber Defense Agents</a><br/> </li> </ul> </td> </tr> </tbody> </table>

Pentesting Training Framework for Reinforcement Learning Agents (PenGym)

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/pengym.png' /> </td> <td width='50%'> <a href='https://github.com/cyb3rlab/PenGym'>PenGym: Pentesting Training Framework for Reinforcement Learning Agents</a> <ul> <li> PenGym is a framework for creating and managing realistic environments used for the training of Reinforcement Learning (RL) agents for penetration testing purposes. PenGym uses the same API as the Gymnasium fork of the OpenAI Gym library, making it possible to employ PenGym with any RL agent that follows those specifications. PenGym is being developed by the Japan Advanced Institute of Science and Technology (JAIST) in collaboration with KDDI Research, Inc. </li> <li> Paper: <a href="https://www.jaist.ac.jp/~razvan/publications/pengym_framework_rl_agents.pdf">(2024) PenGym: Pentesting Training Framework for Reinforcement Learning Agents</a><br/> </li> </ul> </td> </tr> </tbody> </table>
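
Since PenGym follows the Gymnasium API, the standard agent loop applies unchanged. A minimal sketch; the environment id below is a hypothetical placeholder, so consult the PenGym documentation for the actual registered scenario names.

```python
# Minimal Gymnasium-style agent loop, as PenGym follows the Gymnasium API.
# "PenGym-Tiny-v0" is a hypothetical placeholder id, not a verified scenario name.
import gymnasium as gym

env = gym.make("PenGym-Tiny-v0")
obs, info = env.reset(seed=0)
done = False
while not done:
    action = env.action_space.sample()  # random policy standing in for a trained RL agent
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```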

The ARCD Primary-level AI Training Environment (PrimAITE)

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/primaite.png' /> </td> <td width='50%'> <a href='https://github.com/Autonomous-Resilient-Cyber-Defence/PrimAITE'>The ARCD Primary-level AI Training Environment (PrimAITE)</a> <ul> <li> The ARCD Primary-level AI Training Environment (PrimAITE) provides an effective simulation capability for training and evaluating AI in a cyber-defensive role. </li> </ul> </td> </tr> </tbody> </table>

CSLE: The Cyber Security Learning Environment

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/csle_logo_cropped.png' /> </td> <td width='50%'> <a href='https://github.com/Limmen/csle'>CSLE: The Cyber Security Learning Environment</a> <ul> <li> CSLE is a platform for evaluating and developing reinforcement learning agents for control problems in cyber security. It can be considered a cyber range specifically designed for reinforcement learning agents. Everything from network emulation to simulation and implementation of network commands has been co-designed to provide an environment where it is possible to train and evaluate reinforcement learning agents on practical problems in cyber security. </li> <li> Paper: <a href="https://ieeexplore.ieee.org/document/9779345">(2022) Intrusion Prevention Through Optimal Stopping</a><br/> </li> </ul> </td> </tr> </tbody> </table>

AutoPentest-DRL

<table> <tbody> <tr> <td width='50%' align='center'> <img src='https://raw.githubusercontent.com/crond-jaist/AutoPentest-DRL/master/Figures/framework_overview.png' /> </td> <td width='50%'> <a href='https://github.com/crond-jaist/AutoPentest-DRL'>AutoPentest-DRL: Automated Penetration Testing Using Deep Reinforcement Learning</a> <ul> <li> AutoPentest-DRL is an automated penetration testing framework based on Deep Reinforcement Learning (DRL) techniques. AutoPentest-DRL can determine the most appropriate attack path for a given logical network, and can also be used to execute a penetration testing attack on a real network via tools such as Nmap and Metasploit. This framework is intended for educational purposes, so that users can study penetration testing attack mechanisms. AutoPentest-DRL is being developed by the Cyber Range Organization and Design (<a href="https://www.jaist.ac.jp/misc/crond/index-en.html">CROND</a>) NEC-endowed chair at the Japan Advanced Institute of Science and Technology (<a href="https://www.jaist.ac.jp/english/">JAIST</a>) in Ishikawa, Japan. </li> </ul> </td> </tr> </tbody> </table>

NASimEmu

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/nasimemu.svg' width=300 /> </td> <td width='50%'> <a href='https://github.com/jaromiru/NASimEmu'>NASimEmu</a> <ul> <li> NASimEmu is a framework for training deep RL agents in offensive penetration-testing scenarios. It includes both a simulator and an emulator so that a simulation-trained agent can be seamlessly deployed in emulation. Additionally, it includes a random generator that can create scenario instances varying in network configuration and size while fixing certain features, such as exploits and privilege escalations. Furthermore, agents can be trained and tested in multiple scenarios simultaneously.<br/><br/> Paper: <a href="https://arxiv.org/abs/2305.17246">(2023) NASimEmu: Network Attack Simulator & Emulator for Training Agents Generalizing to Novel Scenarios</a><br/> Framework: <a href="https://github.com/jaromiru/NASimEmu">NASimEmu</a><br/> Implemented agents: <a href="https://github.com/jaromiru/NASimEmu-agents">NASimEmu-agents</a> </li> </ul> </td> </tr> </tbody> </table>

gym-idsgame

<table> <tbody> <tr> <td width='50%' align='center'> <img src='gifs/gym_idsgame.gif' width=300 /> </td> <td width='50%'> <a href='https://github.com/Limmen/gym-idsgame'>gym-idsgame</a> <ul> <li> An Abstract Cyber Security Simulation and Markov Game for OpenAI Gym. Paper: <a href="https://arxiv.org/abs/2009.08120">(2020) Finding Effective Security Strategies through Reinforcement Learning and Self-Play</a> </li> </ul> </td> </tr> </tbody> </table>

CyberBattleSim (Microsoft)

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/cyberbattlesim_env.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/microsoft/CyberBattleSim'>CyberBattleSim</a> <ul> <li> CyberBattleSim is an experimentation research platform to investigate the interaction of automated agents operating in a simulated abstract enterprise network environment. The simulation provides a high-level abstraction of computer networks and cyber security concepts. Its Python-based OpenAI Gym interface allows for the training of automated agents using reinforcement learning algorithms. Blogpost: <a href="https://www.microsoft.com/security/blog/2021/04/08/gamifying-machine-learning-for-stronger-security-and-ai-models/">(2021) Gamifying machine learning for stronger security and AI models</a> </li> </ul> </td> </tr> </tbody> </table>
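
Because the simulator exposes an OpenAI Gym interface, an agent can interact with it in a few lines. A minimal sketch, assuming the `CyberBattleToyCtf-v0` scenario id used in the project's examples; ids and module layout may differ across versions.

```python
# A minimal sketch of interacting with CyberBattleSim via its OpenAI Gym interface.
# The scenario id and registration module follow the project's examples but may
# differ across versions; treat them as assumptions.
import gym
import cyberbattle._env.cyberbattle_env  # importing registers the CyberBattle* environments

env = gym.make("CyberBattleToyCtf-v0")
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # random action as a stand-in for a trained agent
    obs, reward, done, info = env.step(action)
    if done:
        break
env.close()
```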

gym-malware

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/malware_env.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/endgameinc/gym-malware'>gym-malware</a> <ul> <li> Malware Env for OpenAI Gym. Paper: <a href="https://arxiv.org/pdf/1801.08917.pdf">(2018) Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning</a> </li> </ul> </td> </tr> </tbody> </table>

malware-rl

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/malware_env.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/bfilar/malware_rl'>malware-rl</a> <ul> <li> An extended and updated `gym_malware` that supports recent LIEF versions and an enhanced collection of models (EMBER, MalConv, and SOREL-20M). Paper: <a href="https://arxiv.org/pdf/1801.08917.pdf">(2018) Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning</a> </li> </ul> </td> </tr> </tbody> </table>

gym-flipit

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/flipit_env.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/lisaoakley/gym-flipit'>gym-flipit</a> <ul> <li> Gym environment for FLIPIT: The Game of "Stealthy Takeover" invented by Marten van Dijk, Ari Juels, Alina Oprea, and Ronald L. Rivest. Paper: <a href="https://arxiv.org/abs/1906.11938">(2019) QFlip: An Adaptive Reinforcement Learning Strategy for the FlipIt Security Game</a> </li> </ul> </td> </tr> </tbody> </table>

gym-threat-defense

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/threat_defense_env.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/hampusramstrom/gym-threat-defense'>gym-threat-defense</a> <ul> <li> Gym environment for the model described in the paper: <a href="https://dl.acm.org/doi/10.1145/2808475.2808482">(2015) Optimal Defense Policies for Partially Observable Spreading Processes on Bayesian Attack Graphs</a> </li> </ul> </td> </tr> </tbody> </table>

gym-nasim

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/nasim_env.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/Jjschwartz/NetworkAttackSimulator'>gym-nasim</a> <ul> <li> Thesis: <a href="https://arxiv.org/pdf/1905.05965.pdf">(2018) Autonomous Penetration Testing using Reinforcement Learning</a> </li> </ul> </td> </tr> </tbody> </table>
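
NASim, the Network Attack Simulator developed alongside this thesis, can be driven directly from Python. A minimal sketch, assuming the documented `make_benchmark` entry point; the `reset`/`step` return signatures depend on whether the installed version targets Gym or Gymnasium, so adjust the unpacking accordingly.

```python
# A minimal sketch of running a random agent on a NASim benchmark scenario.
# make_benchmark("tiny") is the documented entry point; the reset/step return
# signatures vary between Gym- and Gymnasium-based versions of NASim.
import nasim

env = nasim.make_benchmark("tiny")  # small benchmark network
obs, info = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()  # random policy as a placeholder
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```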

gym-optimal-intrusion-response

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/intrusion_response_env.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/Limmen/gym-optimal-intrusion-response'>gym-optimal-intrusion-response</a> <ul> <li> An OpenAI Gym interface to an MDP/Markov game model for optimal intrusion response in a realistic infrastructure simulated using system traces. Paper: <a href="https://arxiv.org/pdf/2106.07160.pdf">(2021) Learning Intrusion Prevention Policies through Optimal Stopping</a> </li> </ul> </td> </tr> </tbody> </table>

sql_env

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/sql_env.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/manuel-delverme/sql_env'>sql_env</a> <ul> <li> Paper: <a href="https://link.springer.com/chapter/10.1007/978-3-030-91625-1_6">(2021) SQL Injections and Reinforcement Learning: An Empirical Evaluation of the Role of Action Structure</a> </li> </ul> </td> </tr> </tbody> </table>

cage-challenge

<table> <tbody> <tr> <td align='center'> <img src='imgs/cage_env.png' width=300 /> </td> </tr> <tr> <td> <a href='https://github.com/cage-challenge/cage-challenge-1'>cage-challenge-1</a> <ul> <li> The first Cyber Autonomous Gym for Experimentation (CAGE) challenge environment, released at the 1st International Workshop on Adaptive Cyber Defense held as part of the 2021 International Joint Conference on Artificial Intelligence (IJCAI). </li> </ul> </td> </tr> <tr> <td> <a href='https://github.com/cage-challenge/cage-challenge-2'>cage-challenge-2</a> <ul> <li> The second Cyber Autonomous Gym for Experimentation (CAGE) challenge environment, announced at the AAAI-22 Workshop on Artificial Intelligence for Cyber Security (AICS). Paper: <a href="https://arxiv.org/pdf/2309.07388">(2023) On Autonomous Agents in a Cyber Defence Environment</a> </li> </ul> </td> </tr> <tr> <td> <a href='https://github.com/cage-challenge/cage-challenge-3'>cage-challenge-3</a> <ul> <li> The third Cyber Autonomous Gym for Experimentation (CAGE) challenge environment. </li> </ul> </td> </tr> <tr> <td> <a href='https://github.com/cage-challenge/cage-challenge-4'>cage-challenge-4</a> <ul> <li> The fourth Cyber Autonomous Gym for Experimentation (CAGE) challenge environment. </li> </ul> </td> </tr> </tbody> </table>
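
The CAGE challenges build on the CybORG environment (described further below). A minimal sketch of loading the first challenge's scenario in simulation mode, following the pattern in the cage-challenge-1 README; the scenario path and API details differ between challenge versions, so treat them as assumptions.

```python
# A minimal sketch of instantiating CAGE Scenario 1b in CybORG's simulator,
# after the cage-challenge-1 README; paths and APIs vary between challenge versions.
import inspect
from CybORG import CybORG

path = str(inspect.getfile(CybORG))
path = path[:-10] + "/Shared/Scenarios/Scenario1b.yaml"  # scenario shipped with the package

cyborg = CybORG(path, "sim")         # 'sim' selects the simulation backend
results = cyborg.reset(agent="Red")  # Results object holding the red agent's view
print(results.observation)
```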

ATMoS

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/atmos.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/ATMoS-Waterloo/ATMoS'>ATMoS</a> <ul> <li> Paper: <a href="https://ieeexplore.ieee.org/document/9110426">(2020) ATMoS: Autonomous Threat Mitigation in SDN using Reinforcement Learning</a> </li> </ul> </td> </tr> </tbody> </table>

MAB-Malware

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/mab_malware.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/weisong-ucr/MAB-malware'>MAB-malware</a> <ul> <li> Paper: <a href="https://arxiv.org/pdf/2003.03100.pdf">(2022) MAB-Malware: A Reinforcement Learning Framework for Attacking Static Malware Classifiers</a> </li> </ul> </td> </tr> </tbody> </table>

ASAP

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/asap.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/ankur8931/asap'>Autonomous Security Analysis and Penetration Testing framework (ASAP)</a> <ul> <li> Paper: <a href="https://ieeexplore.ieee.org/document/9394285">(2020) Autonomous Security Analysis and Penetration Testing</a> </li> </ul> </td> </tr> </tbody> </table>

Yawning Titan

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/yawning_titan_env.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/dstl/YAWNING-TITAN'>Yawning Titan</a> <ul> <li> Yawning Titan is an abstract, highly flexible cyber security simulator capable of simulating a range of cyber security scenarios. <br/><br/> Paper: <a href="https://www.researchgate.net/publication/361638424_Developing_Optimal_Causal_Cyber-Defence_Agents_via_Cyber_Security_Simulation">(2022) Developing Optimal Causal Cyber-Defence Agents via Cyber Security Simulation</a> </li> </ul> </td> </tr> </tbody> </table>

CybORG

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/cyborg.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/cage-challenge/CybORG'>CybORG</a> <ul> <li> CybORG is a gym for autonomous cyber operations research, driven by the need to efficiently support reinforcement learning to train adversarial decision-making models through simulation and emulation. It is a variation of the environments used by the CAGE challenges above. <br/><br/> Paper: <a href="https://arxiv.org/abs/2108.09118">(2021) CybORG: A Gym for the Development of Autonomous Cyber Agents</a> </li> </ul> </td> </tr> </tbody> </table>

FARLAND

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/farland.png' width=300 /> </td> <td width='50%'> FARLAND (GitHub repository missing) <ul> <li> FARLAND is a framework for advanced reinforcement learning for autonomous network defense. It uniquely enables the design of network environments that gradually increase in complexity, providing a path for autonomous agents to improve from apprentice to superhuman level at the task of reconfiguring networks to mitigate cyberattacks. <br/><br/> Paper: <a href="https://arxiv.org/pdf/2103.07583.pdf">(2021) Network Environment Design for Autonomous Cyberdefense</a> </li> </ul> </td> </tr> </tbody> </table>

SecureAI

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/secureai.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/ansi-code/secureai-java'>SecureAI</a> <ul> <li> SecureAI: Deep Reinforcement Learning for Self-Protection in Non-Stationary Cloud Architectures. <br/><br/> Paper: <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9659882">(2021) An Intrusion Response Approach for Elastic Applications Based on Reinforcement Learning</a> </li> </ul> </td> </tr> </tbody> </table>

CYST

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/cyst.png' width=300 /> </td> <td width='50%'> <a href='https://muni.cz/go/cyst-user'>CYST</a> <ul> <li> CYST is a multi-agent discrete-event simulation framework tailored for the cybersecurity domain. Its goal is to enable high-throughput and realistic simulation of cybersecurity interactions in arbitrary infrastructures. <br/><br/> Paper: <a href="https://ieeexplore.ieee.org/abstract/document/9213690">(2020) Session-level Adversary Intent-Driven Cyberattack Simulator</a><br/> Code: <a href="https://gitlab.ics.muni.cz/cyst-public/cyst-core/">HERE</a> </li> </ul> </td> </tr> </tbody> </table>

CLAP

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/clap.jpg' width=300 /> </td> <td width='50%'> <a href='https://github.com/yyzpiero/RL4RedTeam'>CLAP: Curiosity-Driven Reinforcement Learning Automatic Penetration Testing Agent</a> <ul> <li> CLAP is a reinforcement learning (PPO) agent that performs penetration testing in a simulated computer network environment (the Network Attack Simulator, NASim). The agent is trained to scan for vulnerabilities in the network and exploit them to gain access to various network resources. <br/><br/> Paper: <a href="https://arxiv.org/abs/2202.10630">(2022) Behaviour-Diverse Automatic Penetration Testing: A Curiosity-Driven Multi-Objective Deep Reinforcement Learning Approach</a><br/> Code: <a href="https://github.com/yyzpiero/RL4RedTeam">HERE</a> </li> </ul> </td> </tr> </tbody> </table>

CyGIL

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/cygil.png' width=300 /> </td> <td width='50%'> <a href='https://arxiv.org/abs/2109.03331'>CyGIL: A Cyber Gym for Training Autonomous Agents over Emulated Network Systems</a> <ul> <li> CyGIL is an experimental testbed of an emulated RL training environment for network cyber operations. CyGIL uses a stateless environment architecture and incorporates the MITRE ATT&CK framework to establish a high-fidelity training environment, while presenting a sufficiently abstracted interface to enable RL training. Its comprehensive action space and flexible game design allow the agent training to focus on particular advanced persistent threat (APT) profiles, and to incorporate a broad range of potential threats and vulnerabilities. By striking a balance between fidelity and simplicity, it aims to leverage state-of-the-art RL algorithms for application to real-world cyber defence. <br/><br/> Paper: <a href="https://arxiv.org/abs/2109.03331">(2021) CyGIL: A Cyber Gym for Training Autonomous Agents over Emulated Network Systems</a><br/> </li> </ul> </td> </tr> </tbody> </table>

BRAWL

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/brawl.png' width=300 /> </td> <td width='50%'> <a href='https://github.com/mitre/brawl-public-game-001'>BRAWL</a> <ul> <li> BRAWL seeks to strike a compromise by providing a system that automatically creates an enterprise network inside a cloud environment. OpenStack is currently the only supported environment, but BRAWL is designed to easily support other cloud environments in the future. </li> </ul> </td> </tr> </tbody> </table>

DETERLAB

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/deter.jpeg' width=300 /> </td> <td width='50%'> <a href='https://ieeexplore.ieee.org/document/5655108'>DeterLab: Cyber-Defense Technology Experimental Research Laboratory</a> <ul> <li> Since 2004, the DETER Cybersecurity Testbed Project has worked to create the necessary infrastructure (facilities, tools, and processes) to provide a national resource for experimentation in cyber security. The next generation of DETER envisions several conceptual advances in testbed design and experimental research methodology, targeting improved experimental validity, enhanced usability, and increased size, complexity, and diversity of experiments. <br/><br/> Paper: <a href="https://ieeexplore.ieee.org/document/5655108">(2010) The DETER project: Advancing the science of cyber security experimentation and test</a><br/> </li> </ul> </td> </tr> </tbody> </table>

EmuLab

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/emulab.png' width=300 /> </td> <td width='50%'> <a href='https://www.usenix.org/legacy/event/usenix08/tech/full_papers/hibler/hibler.pdf'>EmuLab: Large-scale Virtualization in the Emulab Network Testbed</a> <ul> <li> The Emulab software is the management system for a network-rich PC cluster that provides a space- and time-shared public facility for studying networked and distributed systems. <br/><br/> Paper: <a href="https://www.usenix.org/legacy/event/usenix08/tech/full_papers/hibler/hibler.pdf">(2008) Large-scale Virtualization in the Emulab Network Testbed</a><br/> </li> </ul> </td> </tr> </tbody> </table>

Mininet

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/emulab.svg' width=300 /> </td> <td width='50%'> <a href='https://ieeexplore.ieee.org/document/7311238'>Mininet</a> <ul> <li> Mininet creates a realistic virtual network, running real kernel, switch, and application code, on a single machine (VM, cloud, or native) in seconds, with a single command. <br/><br/> Paper: <a href="https://ieeexplore.ieee.org/document/7311238">(2015) Emulation of Software Defined Networks Using Mininet in Different Simulation Environments</a><br/> </li> </ul> </td> </tr> </tbody> </table>
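
Besides the one-command CLI, Mininet exposes a Python API for scripting experiments. A minimal sketch that builds a one-switch, three-host network and checks connectivity (requires root privileges and a Mininet installation):

```python
# Build a one-switch, three-host virtual network with Mininet's Python API
# and verify connectivity with an all-pairs ping (run as root).
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo

net = Mininet(topo=SingleSwitchTopo(k=3))  # three hosts attached to one switch
net.start()
net.pingAll()  # ping between every pair of hosts
net.stop()
```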

Vine

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/vine.png' width=300 /> </td> <td width='50%'> <a href='https://dl.acm.org/doi/10.1145/2808475.2808486'>VINE: A Cyber Emulation Environment for MTD Experimentation</a> <ul> <li> Paper: <a href="https://dl.acm.org/doi/10.1145/2808475.2808486">(2015) VINE: A Cyber Emulation Environment for MTD Experimentation</a><br/> </li> </ul> </td> </tr> </tbody> </table>

CRATE

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/crate.png' width=300 /> </td> <td width='50%'> <a href='https://ieeexplore.ieee.org/document/9229649'>CRATE Exercise Control – A cyber defense exercise management and support tool</a> <ul> <li> Paper: <a href="https://ieeexplore.ieee.org/document/9229649">(2020) CRATE Exercise Control – A cyber defense exercise management and support tool</a><br/> </li> </ul> </td> </tr> </tbody> </table>

GALAXY

<table> <tbody> <tr> <td width='50%' align='center'> <img src='imgs/galaxy.png' width=300 /> </td> <td width='50%'> <a href='https://www.usenix.org/system/files/conference/cset18/cset18-paper-schoonover.pdf'>Galaxy: A Network Emulation Framework for Cybersecurity</a> <ul> <li> Paper: <a href="https://www.usenix.org/system/files/conference/cset18/cset18-paper-schoonover.pdf">(2018) Galaxy: A Network Emulation Framework for Cybersecurity</a><br/> </li> </ul> </td> </tr> </tbody> </table>

Papers

Surveys

Demonstration papers

Position papers

Regular Papers

PhD Theses

Master Theses

Bachelor Theses

Posters

Books

Blogposts

Talks

Miscellaneous

Contribute

Contributions are very welcome. Please use GitHub issues and pull requests.

List of Contributors

Thanks for all your contributions and for keeping this project up to date.

<a href="https://github.com/Limmen/awesome-rl-for-cybersecurity/graphs/contributors"> <img src="https://contrib.rocks/image?repo=Limmen/awesome-rl-for-cybersecurity" /> </a>

License

LICENSE

Creative Commons

(C) 2021-2024