Multi-Agent Particle Environment

A simple multi-agent particle world with a continuous observation and discrete action space, along with some basic simulated physics. Used in the paper Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments.

Getting started:
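A minimal usage sketch, not an official quickstart: it assumes the make_env.py helper at the repository root and the default discrete action space, in which each agent's action is a 5-dimensional one-hot vector (no-op, right, left, up, down).

```python
import numpy as np
import make_env  # helper at the repo root that wraps a scenario as a Gym-style env

# Build an environment from a scenario name (file name without the .py extension).
env = make_env.make_env('simple_spread')

obs_n = env.reset()  # list with one observation per agent
for _ in range(100):
    # Random one-hot action per agent (assumes the default 5-way discrete
    # action encoding: no-op, right, left, up, down).
    act_n = [np.eye(5)[np.random.randint(5)] for _ in range(env.n)]
    obs_n, reward_n, done_n, info_n = env.step(act_n)
    env.render()
```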

Code structure

Scenarios live in the multiagent/scenarios/ folder. Each scenario file defines several functions: make_world() (creates the world and the entities that inhabit it), reset_world() (resets the world before each episode), reward() (the reward function for a given agent), and observation() (the observation for a given agent).

Creating new environments

You can create new scenarios by implementing the four scenario functions above (make_world(), reset_world(), reward(), and observation()); a skeleton sketch follows.
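The skeleton below is a hedged sketch modeled on the bundled simple.py scenario; the World, Agent, Landmark, and BaseScenario classes and the state attributes used here are taken from multiagent/core.py and multiagent/scenario.py, so treat the exact names as assumptions about that code.

```python
import numpy as np
from multiagent.core import World, Agent, Landmark
from multiagent.scenario import BaseScenario

class Scenario(BaseScenario):
    def make_world(self):
        # Called once per training session: create entities and set capabilities.
        world = World()
        world.agents = [Agent() for _ in range(2)]
        for i, agent in enumerate(world.agents):
            agent.name = 'agent %d' % i
            agent.collide = False
            agent.silent = True  # no communication channel in this scenario
        world.landmarks = [Landmark() for _ in range(1)]
        for i, landmark in enumerate(world.landmarks):
            landmark.name = 'landmark %d' % i
            landmark.collide = False
            landmark.movable = False
        self.reset_world(world)
        return world

    def reset_world(self, world):
        # Called before every episode: randomize positions, zero velocities.
        for agent in world.agents:
            agent.state.p_pos = np.random.uniform(-1, +1, world.dim_p)
            agent.state.p_vel = np.zeros(world.dim_p)
            agent.state.c = np.zeros(world.dim_c)
        for landmark in world.landmarks:
            landmark.state.p_pos = np.random.uniform(-1, +1, world.dim_p)
            landmark.state.p_vel = np.zeros(world.dim_p)

    def reward(self, agent, world):
        # Negative squared distance to the single landmark.
        return -np.sum(np.square(agent.state.p_pos - world.landmarks[0].state.p_pos))

    def observation(self, agent, world):
        # Own velocity plus landmark positions in the agent's reference frame.
        entity_pos = [lm.state.p_pos - agent.state.p_pos for lm in world.landmarks]
        return np.concatenate([agent.state.p_vel] + entity_pos)
```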

List of environments

| Env name in code (name in paper) | Communication? | Competitive? | Notes |
| --- | --- | --- | --- |
| simple.py | N | N | Single agent sees landmark position; rewarded based on how close it gets to the landmark. Not a multi-agent environment -- used for debugging policies. |
| simple_adversary.py (Physical deception) | N | Y | 1 adversary (red), N good agents (green), N landmarks (usually N=2). All agents observe the positions of landmarks and other agents. One landmark is the ‘target landmark’ (colored green). Good agents are rewarded based on how close any one of them is to the target landmark, but negatively rewarded if the adversary is close to the target landmark. The adversary is rewarded based on how close it is to the target, but it doesn’t know which landmark is the target. So good agents have to learn to ‘split up’ and cover all landmarks to deceive the adversary. |
| simple_crypto.py (Covert communication) | Y | Y | Two good agents (Alice and Bob), one adversary (Eve). Alice must send a private message to Bob over a public channel. Alice and Bob are rewarded based on how well Bob reconstructs the message, but negatively rewarded if Eve can reconstruct it. Alice and Bob have a private key (randomly generated at the beginning of each episode), which they must learn to use to encrypt the message. |
| simple_push.py (Keep-away) | N | Y | 1 agent, 1 adversary, 1 landmark. The agent is rewarded based on distance to the landmark. The adversary is rewarded if it is close to the landmark and if the agent is far from the landmark. So the adversary learns to push the agent away from the landmark. |
| simple_reference.py | Y | N | 2 agents, 3 landmarks of different colors. Each agent wants to reach its target landmark, which is known only by the other agent. Reward is collective, so agents have to learn to communicate the other agent’s goal and navigate to their own landmark. This is the same as the simple_speaker_listener scenario where both agents are simultaneous speakers and listeners. |
| simple_speaker_listener.py (Cooperative communication) | Y | N | Same as simple_reference, except one agent is the ‘speaker’ (gray), which does not move but observes the goal of the other agent, and the other agent is the ‘listener’, which cannot speak but must navigate to the correct landmark. |
| simple_spread.py (Cooperative navigation) | N | N | N agents, N landmarks. Agents are rewarded based on how far any agent is from each landmark, and penalized if they collide with other agents. So agents have to learn to cover all the landmarks while avoiding collisions. |
| simple_tag.py (Predator-prey) | N | Y | Predator-prey environment. Good agents (green) are faster and want to avoid being hit by adversaries (red). Adversaries are slower and want to hit good agents. Obstacles (large black circles) block the way. |
| simple_world_comm.py | Y | Y | Environment seen in the video accompanying the paper. Same as simple_tag, except (1) there is food (small blue balls) that the good agents are rewarded for being near; (2) there are ‘forests’ that hide agents inside from being seen from outside; (3) there is a ‘leader adversary’ that can see the agents at all times and can communicate with the other adversaries to help coordinate the chase. |
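Each entry above is a file under multiagent/scenarios/. As a sketch of how the pieces fit together (the scenarios.load() helper and the MultiAgentEnv keyword arguments below are assumptions based on this repo's make_env.py and multiagent/environment.py), a scenario can also be wrapped manually:

```python
from multiagent.environment import MultiAgentEnv
import multiagent.scenarios as scenarios

# Load a scenario module by file name and build the world it defines.
scenario = scenarios.load('simple_tag.py').Scenario()
world = scenario.make_world()

# Wrap the world in an environment, wiring in the scenario's callbacks.
env = MultiAgentEnv(world,
                    reset_callback=scenario.reset_world,
                    reward_callback=scenario.reward,
                    observation_callback=scenario.observation)
print('number of agents:', env.n)
```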

Paper citation

If you used this environment for your experiments or found it helpful, consider citing the following papers:

Environments in this repo:

<pre>
@article{lowe2017multi,
  title={Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments},
  author={Lowe, Ryan and Wu, Yi and Tamar, Aviv and Harb, Jean and Abbeel, Pieter and Mordatch, Igor},
  journal={Neural Information Processing Systems (NIPS)},
  year={2017}
}
</pre>

Original particle world environment:

<pre>
@article{mordatch2017emergence,
  title={Emergence of Grounded Compositional Language in Multi-Agent Populations},
  author={Mordatch, Igor and Abbeel, Pieter},
  journal={arXiv preprint arXiv:1703.04908},
  year={2017}
}
</pre>