<h1 align="center"> 🤖 AgentVerse 🪐 </h1> <!-- <h3 align="center"> <p>A Framework for Multi-LLM Environment Simulation</p> </h3> -->
<p align="center"> <a href="https://github.com/OpenBMB/AgentVerse/blob/main/LICENSE"> <img alt="License: Apache2" src="https://img.shields.io/badge/License-Apache_2.0-green.svg"> </a> <a href="https://www.python.org/downloads/release/python-3916/"> <img alt="Python Version" src="https://img.shields.io/badge/python-3.9+-blue.svg"> </a> <a href="https://github.com/OpenBMB/AgentVerse/actions/"> <img alt="Build" src="https://img.shields.io/github/actions/workflow/status/OpenBMB/AgentVerse/test.yml"> </a> <a href="https://github.com/psf/black"> <img alt="Code Style: Black" src="https://img.shields.io/badge/code%20style-black-black"> <!-- </a> <a href="https://github.com/OpenBMB/AgentVerse/issues"> <img alt="Contributions: Welcome" src="https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat"> </a> --> <a href="https://huggingface.co/AgentVerse"> <img alt="HuggingFace" src="https://img.shields.io/badge/hugging_face-play-yellow"> </a> <a href="https://discord.gg/gDAXfjMw"> <img alt="Discord" src="https://img.shields.io/badge/AgentVerse-Discord-purple?style=flat"> </a> </p>
<p align="center"> <img src="./imgs/title.png" width="512"> </p>
<p align="center"> 【<a href="https://arxiv.org/abs/2308.10848">Paper</a>】 </p>
<p align="center"> 【English | <a href="README_zh.md">Chinese</a>】 </p>
AgentVerse is designed to facilitate the deployment of multiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: task-solving and simulation.
- Task-solving: This framework assembles multiple agents into an automatic multi-agent system (AgentVerse-Tasksolving, multi-agent as system) that collaboratively accomplishes the given tasks. Applications: software development systems, consulting systems, etc.
- Simulation: This framework allows users to set up custom environments to observe behaviors among, or interact with, multiple agents. ⚠️⚠️⚠️ We're refactoring the code. If you require a stable version that exclusively supports the simulation framework, you can use the `release-0.1` branch. Applications: games, social behavior research of LLM-based agents, etc.
📰 What's New
- [2024/3/17] AgentVerse was introduced in NVIDIA's blog, Building Your First LLM Agent Application.
- [2024/1/17] We're super excited to announce that our paper got accepted at ICLR 2024. More updates will be coming soon!
- [2023/10/17] We're super excited to share our open-source AI community on Hugging Face: AgentVerse. You can try out the two simulation applications, NLP Classroom and Prisoner's Dilemma, with your OpenAI API key and OpenAI organization code. Have fun!
- [2023/10/5] Re-factored our codebase to enable the deployment of both the simulation and task-solving frameworks! We have placed the code for the Minecraft example in the paper at the `minecraft` branch. Our tool-using example will soon be updated to the `main` branch. Stay tuned!
- [2023/8/22] We're excited to share our paper AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents, which illustrates the task-solving framework of AgentVerse in detail.
- [2023/6/5] We are thrilled to present an array of demos, including NLP Classroom, Prisoner's Dilemma, Software Design, Database Administrator, and a simple H5 Pokemon Game that enables interaction with the characters in Pokemon! Try out these demos and have fun!
- [2023/5/1] 🎉 AgentVerse is officially launched!
🗓 Coming Soon
- Code release of our paper
- Add support for local LLMs (LLaMA, Vicuna, etc.)
- Add documentation
- Support more sophisticated memory for conversation history
Contents
- 📰 What's New
- 🗓 Coming Soon
- Contents
- 🚀 Getting Started
- AgentVerse Showcases
- 🌟 Join Us!
- Star History
- Contact
🚀 Getting Started
Installation
Manually Install (Recommended!)
Make sure you have Python >= 3.9
git clone https://github.com/OpenBMB/AgentVerse.git --depth 1
cd AgentVerse
pip install -e .
If you want to use AgentVerse with local models such as LLaMA, you need to additionally install some other dependencies:
pip install -r requirements_local.txt
Install with pip
Alternatively, you can install AgentVerse through pip:
pip install -U agentverse
Environment Variables
You need to export your OpenAI API key as follows:
# Export your OpenAI API key
export OPENAI_API_KEY="your_api_key_here"
If you want to use Azure OpenAI services, please export your Azure OpenAI key and OpenAI API base as follows:
export AZURE_OPENAI_API_KEY="your_api_key_here"
export AZURE_OPENAI_API_BASE="your_api_base_here"
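As a quick sanity check before launching a run, you can verify these variables from Python. This is a minimal sketch, not part of AgentVerse; the helper name is hypothetical, and it assumes OpenAI is the default backend with Azure taking over when either Azure variable is set:

```python
import os

def missing_llm_env_vars() -> list:
    """Return the names of required credential variables that are unset.

    If either Azure variable is present, both Azure variables are required;
    otherwise only OPENAI_API_KEY is required.
    """
    azure_vars = ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_API_BASE"]
    if any(os.environ.get(v) for v in azure_vars):
        required = azure_vars
    else:
        required = ["OPENAI_API_KEY"]
    return [v for v in required if not os.environ.get(v)]
```

If the returned list is non-empty, export the named variables before running any `agentverse-*` command.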
Simulation
Framework Required Modules
- agentverse
  - agents
    - simulation_agent
  - environments
    - simulation_env
CLI Example
You can run the multi-agent environments provided by us. Take the classroom scenario as an example: there are nine agents, one playing the role of a professor and the other eight playing students.
agentverse-simulation --task simulation/nlp_classroom_9players
GUI Example
We also provide a local website demo for this environment. You can launch it with
agentverse-simulation-gui --task simulation/nlp_classroom_9players
After successfully launching the local server, you can visit http://127.0.0.1:7860/ to view the classroom environment.
If you want to run the simulation cases with tools (e.g., simulation/nlp_classroom_3players_withtool), you need to install BMTools as follows:
git clone https://github.com/OpenBMB/BMTools.git
cd BMTools
pip install -r requirements.txt
python setup.py develop
This is optional. If you do not install BMTools, the simulation cases without tools can still run normally.
Task-Solving
Framework Required Modules
- agentverse
  - agents
    - tasksolving_agent
  - environments
    - tasksolving_env
CLI Example
To run the experiments with the task-solving environment proposed in our paper, you can use the following command:
To run AgentVerse on a benchmark dataset, you can try
# Run the Humaneval benchmark using gpt-3.5-turbo (config file `agentverse/tasks/tasksolving/humaneval/gpt-3.5/config.yaml`)
agentverse-benchmark --task tasksolving/humaneval/gpt-3.5 --dataset_path data/humaneval/test.jsonl --overwrite
To run AgentVerse on a specific problem, you can try
# Run a single query (config file `agentverse/tasks/tasksolving/brainstorming/gpt-3.5/config.yaml`). The task is specified in the config file.
agentverse-tasksolving --task tasksolving/brainstorming
To run the tool-using cases presented in our paper, i.e., multiple agents using tools such as a web browser, Jupyter notebook, Bing search, etc., you first need to build the ToolServer provided by XAgent. You can follow their instructions to build and run the ToolServer.
After building and launching the ToolServer, you can use the following command to run the task-solving cases with tools:
agentverse-tasksolving --task tasksolving/tool_using/24point
We have provided more tasks in `agentverse/tasks/tasksolving/tool_using/` that show how multiple agents can use tools to solve problems. Also, you can take a look at `agentverse/tasks/tasksolving` for more experiments we have done in our paper.
Local Model Support
vLLM Support
If you want to use vLLM, follow the guide here to install and set up the vLLM server, which is used to handle larger inference workloads. Set the following environment variables to connect to the vLLM server:
export VLLM_API_KEY="your_api_key_here"
export VLLM_API_BASE="http://your_vllm_url_here"
Then modify the `model` field in the task config file so that it matches the model name in the vLLM server. For example:
model_type: vllm
model: llama-2-7b-chat-hf
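A vLLM server exposes an OpenAI-compatible HTTP API, so the two variables above are all that is needed to address it. The sketch below is not AgentVerse code; it is a stdlib-only illustration that assumes the standard OpenAI-compatible `/v1/chat/completions` endpoint and shows how a request to the server could be assembled:

```python
import json
import os
import urllib.request

def build_vllm_request(prompt: str, model: str = "llama-2-7b-chat-hf"):
    """Assemble a chat-completions request for an OpenAI-compatible vLLM server."""
    base = os.environ.get("VLLM_API_BASE", "http://localhost:8000").rstrip("/")
    url = base + "/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + os.environ.get("VLLM_API_KEY", ""),
    }
    # The Request object can be sent with urllib.request.urlopen(...)
    return urllib.request.Request(
        url, data=json.dumps(payload).encode(), headers=headers, method="POST"
    )
```

The `model` in the payload must match the name the vLLM server was launched with, which is why the task config's `model` field has to agree with the server.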
FSChat Support
This section provides a step-by-step guide to integrating FSChat into AgentVerse. FSChat is a framework that supports running local models such as LLaMA, Vicuna, etc. on your local machine.
1. Install the Additional Dependencies
If you want to use local models such as LLaMA, you need to additionally install some other dependencies:
pip install -r requirements_local.txt
2. Launch the Local Server
Then modify the `MODEL_PATH` and `MODEL_NAME` according to your needs and launch the local server with the following command:
bash scripts/run_local_model_server.sh
The script will launch a service for the Llama 2 7B chat model.
The `MODEL_NAME` in AgentVerse currently supports several models, including `llama-2-7b-chat-hf`, `llama-2-13b-chat-hf`, `llama-2-70b-chat-hf`, `vicuna-7b-v1.5`, and `vicuna-13b-v1.5`. If you wish to integrate additional models that are compatible with FastChat, you need to:
- Add the new `MODEL_NAME` into `LOCAL_LLMS` within `agentverse/llms/__init__.py`.
- Add the mapping from the new `MODEL_NAME` to its corresponding Hugging Face identifier in `LOCAL_LLMS_MAPPING` within the `agentverse/llms/__init__.py` file.
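The two additions above amount to extending a list and a dict. The following is a schematic sketch only: the exact shapes of `LOCAL_LLMS` and `LOCAL_LLMS_MAPPING` are assumptions for illustration, and `register_local_llm` is a hypothetical helper; check `agentverse/llms/__init__.py` for the actual structures.

```python
# Schematic sketch; the real structures live in agentverse/llms/__init__.py
# and may differ in shape.
LOCAL_LLMS = [
    "llama-2-7b-chat-hf",
    "llama-2-13b-chat-hf",
    "vicuna-7b-v1.5",
]
LOCAL_LLMS_MAPPING = {
    "llama-2-7b-chat-hf": "meta-llama/Llama-2-7b-chat-hf",
    "llama-2-13b-chat-hf": "meta-llama/Llama-2-13b-chat-hf",
    "vicuna-7b-v1.5": "lmsys/vicuna-7b-v1.5",
}

def register_local_llm(model_name: str, hf_identifier: str) -> None:
    """Register a new FastChat-compatible model name and its HF identifier."""
    if model_name not in LOCAL_LLMS:
        LOCAL_LLMS.append(model_name)
    LOCAL_LLMS_MAPPING[model_name] = hf_identifier
```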
3. Modify the Config File
In your config file, set the `llm_type` to `local` and `model` to the `MODEL_NAME`. For example:
llm:
llm_type: local
model: llama-2-7b-chat-hf
...
You can refer to `agentverse/tasks/tasksolving/commongen/llama-2-7b-chat-hf/config.yaml` for a more detailed example.
AgentVerse Showcases
Simulation Showcases
Refer to simulation showcases
Task-Solving Showcases
Refer to tasksolving showcases
<!-- ## ๐ก Philosophy ### Environment At the core of our framework is the environment, which plays a crucial role in enabling researchers to study the behavior of agents under different conditions. We believe that the environment should be flexible and extensible, allowing researchers to easily customize it to fit their needs. To achieve this, we have abstracted the environment into five rule components, and implementing different environments is actually implementing different rules: - **Describer**: This component provides a description of the environment at each turn for each agent. You can customize the describer to define the specific requirements of their environment, such as the agents with whom an agent can interact. - **Order**: This component defines the order in which agents take actions within the environment. You can customize the order to reflect the desired interaction between agents. We provide several basic order options, including `random`, `sequential`, and `concurrent` (in which all agents take an action in each turn). - **Selector**: This component selects the valid messages generated by agents. Sometimes agents may generate invalid responses, and the selector is used to filter out unexpected results. - **Updater**: This component updates the memory of each agent. In certain cases, the response generated by one agent should not be seen by all agents (e.g., if agents are in different rooms). For each response, the updater updates only the agents who can see it. - **Visibility**: This component maintains the list of agents that each agent can see throughout the environment's changes. For example, when an agent moves from one room to another, the list of visible agents of each agent should be updated by `visibility`. By abstracting the environment into these five components, we have created a highly flexible and extensible framework that enables researchers to easily build and customize their own multi-agent environments. 
### Agent Another fundamental component is the agent. Currently we provide two types of agents: **ConversationAgent** and **ToolAgent**. You can also customize your own agent by inheriting BaseAgent class (tutorial coming soon). --> <!-- ## โ๏ธ Customize Your Own Environment We have provided several examples in the `agentverse/tasks` directory. To customize your environment, you should 1. Create a task directory in `agentverse/tasks` 2. Write the configuration file 3. Write the output parser that parses the response of your agents. 4. Add your parser in `agentverse/tasks/__init__.py` We will use a simple example in `agentverse/tasks/nlp_classroom_3players` to illustrate the procedure. ### A Simple Example: Building a Classroom Environment To illustrate how to customize your environment, we'll use a simple example of building a classroom environment where one agent is the professor, one is the student, and one is the teaching assistant. ##### 1. Creating a Task Directory and Configuring the Environment First, we need to create a task directory and write our configuration file for the environment. In the `agentverse/tasks` directory, create a new directory called `nlp_classroom_3players`. Inside this directory, create a `config.yaml` file and write the following configuration: ```yaml # config.yaml environment: env_type: basic # Use the basic environment provided in AgentVerse max_turns: 10 # Specify the maximum number of dialogue turns rule: order: type: sequential # Use the sequential order visibility: type: all # Each message can be seen by all agents selector: type: basic # Basic selector (do not select) updater: type: basic # Basic updater (update the message to all agents) describer: type: basic # Basic describer (no description) ``` This configuration specifies that we will use the basic environment provided in AgentVerse, with a maximum of 10 dialogue turns. We'll use the sequential order, with all messages visible to all agents. 
We won't be using any selectors, our updater will update the messages to all the agents and our describer will provide no description. ##### 2. Configuring the Agents Next, we'll configure the agents. In the `config.yaml` file, we'll add the configuration for each agent. Here's an example configuration for the professor: ```yaml # config.yaml agents: - agent_type: conversation name: Professor Micheal # Name of the agent role_description: You are Prof. Micheal, ... # Description of the agent memory: memory_type: chat_history # Will store all the chat history prompt_template: *professor_prompt llm: llm_type: text-davinci-003 # Will use OpenAICompletion LLM model: text-davinci-003 # The arguments passed to the api call temperature: 0.7 max_tokens: 250 ``` In this example, we'll use the `conversation` agent type. We've given the agent a name and a description, and we'll store the chat history in memory. We've also provided a prompt template with placeholders marked as ${placeholder}. These will be instantiated by the `_fill_prompt_template` method of the agent. ##### 3. Writing an Output Parser The next step is to write a simple parser for your agent's response. Because you may have specified the output format in your prompt template, you need to provide a corresponding parser. In this example, we inform the model to output in the following format in our prompt template ``` Action: Speak Action Input: (the content) ``` We'll write a parser to extract the content from the agent's response. Refer to the code for more details. We've decorated our parser function with `@output_parser_registry.register('classroom_parser')` to register it with our framework. Finally, we import our parser in `agentverse/tasks/__init__.py`. With these steps, we've successfully built a simple classroom environment and customized it for our needs. 
### Customization Guide for More Complex Environments While we provide a basic framework for building environments with our five rule components, more complex environments may require further customization. A detailed documentation and tutorial is coming soon. Here we briefly introduce some steps you can take to customize your environment: 1. **Customize the five rule components**. Each rule component has an interface, allowing you to customize its behavior to suit your specific needs. It's important to note that these components are not necessarily independent and can interact through the `rule_params` dictionary in the environment. You can create your own rule components and integrate them with the existing ones to build more complex interactions between agents. 2. **Customize the environment itself**. Our `basic` environment provides a default execution order for the five rule components that is suitable for most cases, but you can inherit the `BaseEnvironment` class and write your own `run` method to implement a more sophisticated execution order. 3. **Customize the agent**. Depending on your specific use case, you may also need to inherit the `BaseAgent` class. For example, you may want to use your local LLM as your agents or create agents with specialized knowledge or skills. --> <!-- ## ๐ Examples Currently, we offer some simple examples in the `agentverse/tasks` directory, each demonstrating different possibilities of our framework. While the performance of these examples may not be optimal due to limited prompt engineering, they are intended to showcase the capabilities of our framework, such as allowing the use of tools. Here's a brief overview of each example: 1. `nlp_classroom_3players`: This example illustrates a simple case in which agents will speak in sequential order. 2. `nlp_classroom_9players`: This is an NLP class example. Here, students can raise their hand when they have a question, and the professor can call on the students to let them ask. 
Students are only allowed to speak after they are called on. 3. `nlp_classroom_9players_group`: This example showcases group discussions. The professor may initiate a group discussion when needed, and students can exclusively interact with fellow students within the same group during the discussion. 4. `nlp_classroom_3players_withtool`: Students in this classroom can use Bing search API when listening to the class. 5. `math_problem_2players_tools`: A simple example demonstrating how two agents can use the WolframAlpha API to play an arithmetic game. 6. `prisoner_dilema`: The Prisoner's Dilemma is a thought experiment involving two rational agents facing a choice between cooperating for mutual benefit or betraying their partner for individual gain. 7. `db_diag`: The Chief DBA monitors (agents) the database system for anomalies and alerts memory and CPU agents if any are detected. They (agents) analyze root causes and suggest optimization solutions. The Chief DBA (agent) provides a diagnosis summary to the user, who can give instructions or evaluate the proposed solutions' effectiveness. 8. `sde_team`: In the SDE team, code writer, code tester and code reviewer collaborate on the code generation problem. 9. `pokemon`: This example imitates the Pokemon game. -->
🌟 Join Us!
AgentVerse is on a mission to revolutionize the multi-agent environment for large language models, and we're eagerly looking for passionate collaborators to join us on this exciting journey.
Leaders
<a href="https://github.com/chenweize1998"><img src="https://avatars.githubusercontent.com/u/32613237?v=4" alt="Leader" style="width:5%; border-radius: 50%;"/></a> <a href="https://github.com/yushengsu-thu"><img src="https://avatars.githubusercontent.com/u/11704492?v=4" alt="Leader" style="width:5%; border-radius: 50%;"/></a>
Contributors
<a href="https://github.com/chanchimin"><img src="https://avatars.githubusercontent.com/u/75533759?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a> <a href="https://github.com/libowen2121"><img src="https://avatars.githubusercontent.com/u/19970308?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a> <a href="https://github.com/Xial-kotori"><img src="https://avatars.githubusercontent.com/u/53430452?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a> <a href="https://github.com/Dr-Left"><img src="https://avatars.githubusercontent.com/u/39025643?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a> <a href="https://github.com/minleminzui"><img src="https://avatars.githubusercontent.com/u/54172054?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a> <a href="https://github.com/Tsuruko04"><img src="https://avatars.githubusercontent.com/u/129145574?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a> <a href="https://github.com/kierangilliam"><img src="https://avatars.githubusercontent.com/u/18583882?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a> <a href="https://github.com/zhouxh19"><img src="https://avatars.githubusercontent.com/u/17394639?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a> <a href="https://github.com/tzw2698"><img src="https://avatars.githubusercontent.com/u/68865752?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a> <a href="https://github.com/JetSquirrel"><img src="https://avatars.githubusercontent.com/u/20291255?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a> <a href="https://github.com/Muiruriscode"><img src="https://avatars.githubusercontent.com/u/85177347?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a> <a href="https://github.com/eltociear"><img src="https://avatars.githubusercontent.com/u/22633385?v=4" alt="Contributor" style="width:5%; border-radius: 50%;"/></a>
How Can You Contribute?
- Issue and Pull Request: If you encounter any problems when using AgentVerse, you can open an issue in English. Besides, you can also ask us to assign an issue to you and send a PR (please follow the PULL_REQUEST_TEMPLATE) after you solve it.
- Code Development: If you're an engineer, help us refine, optimize, and expand the current framework. We're always looking for talented developers to enhance our existing features and develop new modules.
- Documentation and Tutorials: If you have a knack for writing, help us improve our documentation, create tutorials, or write blog posts to make AgentVerse more accessible to the broader community.
- Application Exploration: If you're intrigued by multi-agent applications and are eager to experiment using AgentVerse, we'd be thrilled to support your journey and see what you create!
- Feedback and Suggestions: Use AgentVerse and provide us with feedback. Your insights can lead to potential improvements and ensure that our framework remains top-notch.
Also, if you're passionate about advancing the frontiers of multi-agent applications, becoming a core AgentVerse team member, or diving deeper into agent research, please reach out to the AgentVerse Team, and CC Weize Chen and Yusheng Su. We're keen to welcome motivated individuals like you to our team!
Social Media and Community
- Twitter: https://twitter.com/Agentverse71134
- Discord: https://discord.gg/gDAXfjMw
- Hugging Face: https://huggingface.co/spaces/AgentVerse/agentVerse
Star History
Citation
If you find this repo helpful, feel free to cite us.
@article{chen2023agentverse,
title={Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents},
author={Chen, Weize and Su, Yusheng and Zuo, Jingwei and Yang, Cheng and Yuan, Chenfei and Qian, Chen and Chan, Chi-Min and Qin, Yujia and Lu, Yaxi and Xie, Ruobing and others},
journal={arXiv preprint arXiv:2308.10848},
year={2023}
}
Contact
AgentVerse Team: agentverse2@gmail.com
Project leaders:
- Weize Chen: chenweize1998@gmail.com