<div align="center"> <h1>💰 GAMA-Bench</h1> </div> <div align="center"> <img src="framework.png" width="750px"> </div>

Leaderboard

| Model | Overall | Guessing | Bar | Dollar | Goods | Diner | Auction | Battle | Pirate |
|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| 👑 Gemini-1.5-Pro | 68.1 | 95.4 | 37.2 | 93.8 | 100 | 35.9 | 13.3 | 81.3 | 87.9 |
| LLaMA-3.1-70B | 64.5 | 84.0 | 59.7 | 87.0 | 90.6 | 48.1 | 4.5 | 77.7 | 64.0 |
| Mixtral-8x22B | 61.4 | 83.6 | 39.3 | 79.0 | 83.7 | 79.9 | 5.2 | 36.0 | 84.3 |
| GPT-4-0125 | 60.5 | 91.6 | 23.0 | 98.1 | 89.2 | 0.9 | 8.7 | 86.8 | 85.4 |
| LLaMA-3.1-405B | 60.4 | 94.3 | 20.5 | 94.9 | 97.0 | 14.4 | 4.2 | 92.7 | 65.6 |
| Qwen-2-72B | 56.5 | 93.2 | 17.0 | 91.9 | 81.3 | 0.0 | 0.9 | 81.7 | 86.1 |
| LLaMA-3.1-8B | 53.4 | 85.5 | 75.7 | 56.4 | 19.6 | 59.3 | 16.9 | 35.9 | 78.3 |
| GPT-3.5-1106 | 44.6 | 68.5 | 64.3 | 70.3 | 43.5 | 1.4 | 3.4 | 35.7 | 69.5 |
| Gemini-1.0-Pro | 43.6 | 77.3 | 33.5 | 77.6 | 68.5 | 3.1 | 14.9 | 16.5 | 57.4 |
| GPT-3.5-0125 | 43.5 | 63.4 | 68.7 | 68.6 | 38.9 | 2.8 | 5.5 | 28.6 | 71.6 |
| Mixtral-8x7B | 43.1 | 91.8 | 66.8 | 1.2 | 27.6 | 76.4 | 0.8 | 12.6 | 67.3 |
| GPT-3.5-0613 | 41.9 | 41.4 | 74.8 | 42.4 | 17.7 | 67.0 | 4.2 | 19.5 | 68.4 |

Updates

[Aug 29, 2024] Added models: Gemini-1.5-Pro, LLaMA-3.1-{8, 70, 405}B, Mixtral-8x{7, 22}B, and Qwen-2-72B.

[Apr 25, 2024] Updated the scoring schemes of the Public Goods Game, Diner's Dilemma, and Sealed-Bid Auction to favor rational (self-interested) strategies; updated the leaderboard.

Execution Process

Create Utils File

To customize the model API, create a utils.py file in this directory:

openai_api_key = "<key>"     # Leave as an empty string if not used
deepinfra_api_key = "<key>"  # Leave as an empty string if not used
google_api_key = "<key>"     # Leave as an empty string if not used
temperature = <model temperature>        # a float, e.g., 1.0
delay_time = <seconds between requests>  # increase if requests are rate-limited
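
For example, a filled-in utils.py might look like the following sketch (all values are illustrative; use your own keys and tune the delay to your rate limits):

openai_api_key = "sk-..."  # your OpenAI key
deepinfra_api_key = ""     # empty: DeepInfra models not used here
google_api_key = ""        # empty: Google models not used here
temperature = 1.0          # sampling temperature passed to the models
delay_time = 5             # seconds to wait between consecutive requests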

Specify Test Cases

In main.py, specify the server parameters (a complete sketch combining the steps follows this list):

  1. Import the following modules:

    from server import *
    from global_functions import *
    
  2. Import the game:

    from games.guessing_game import *
    
  3. Create a simple game instance:

    game = GuessingGame(player_num=10, min=0, max=100, ratio=2/3, ratio_str='2/3', version='v1', models='gpt-3.5-turbo', name_exp='test')
    
    • player_num: An integer specifying the number of players
    • min, max, ratio, ratio_str: Game parameters; different games may take different parameters
    • version: A string specifying the prompt version (default: "v1")
    • models: A string or a list of model names; instructions for customizing models are given in the "Models Instruction" section (default: "gpt-3.5-turbo")
    • name_exp: A string specifying the name shared by all output files belonging to this game instance
  4. Run a game instance for 20 rounds:

    game.run(20)
    

    After a round completes, a JSON save file with the same name as name_exp is stored in the "save" directory, all model request and response records are stored in the "records" directory, and the visualized game results are stored in the "figures" directory.

  5. Load a game:

    game = load(filepath='save/test.json', object=GuessingGame)  # load the saved data 
    game = load(filepath='save/test.json', object=GuessingGame, newfile='test2')  # load and save as new file
    game.run(20)
    
    • filepath: A string specifying the path of the saved data
    • object: The game class to load
    • newfile: If None, the server continues updating the current file; otherwise, it copies the saved file to a new file and updates there
  6. Show the visualized and statistical analysis of the game instance:

    game.show()
    
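Putting the steps together, a minimal main.py might look like the following sketch (the model name and experiment name are illustrative):

from server import *
from global_functions import *
from games.guessing_game import *

# Create a Guessing Game with 10 GPT-3.5 players picking numbers in [0, 100]
game = GuessingGame(player_num=10, min=0, max=100, ratio=2/3, ratio_str='2/3',
                    version='v1', models='gpt-3.5-turbo', name_exp='test')
game.run(20)  # 20 rounds; outputs go to "save", "records", and "figures"

# Resume from the saved file, continuing in a copy named "test2"
game = load(filepath='save/test.json', object=GuessingGame, newfile='test2')
game.run(20)
game.show()   # visualized and statistical analysis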

Additional Operations

Models Instruction

Users can specify the models by passing a string or a list of model names when creating a game instance, for example:
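
The model names below are illustrative; availability depends on your API keys and the supported-models list:

models = "gpt-3.5-turbo"                                           # one string: every player uses the same model
models = ["gemini-1.0-pro"] + ["gpt-3.5-turbo" for _ in range(9)]  # a list: one model name per player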

Supported models

Special players

We also support specifying special players with fixed strategies (a combined sketch follows this list):

  1. specified=<response_1>/<response_2>/.../<response_n>: The player repeatedly responds with the specified answers, cycling through them in order

    models = ["specified=0"] + ["gpt-3.5-turbo" for _ in range(9)]  # first player always responses 0 in every round
    
    models = ["specified=0/100"] + ["gpt-3.5-turbo" for _ in range(9)]  # first player alternatively responses 0 and 100
    
  2. user: A human can participate in the game alongside the models by inserting "user" into the list and typing responses in the terminal

    models = ["user"] + ["gpt-3.5-turbo" for _ in range(9)]
    
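A sketch combining both kinds of special players in one game (game parameters follow the earlier example; the experiment name is illustrative):

# Player 1 is a human, player 2 alternates between 0 and 100,
# and the remaining eight players are GPT-3.5.
models = ["user", "specified=0/100"] + ["gpt-3.5-turbo" for _ in range(8)]
game = GuessingGame(player_num=10, min=0, max=100, ratio=2/3, ratio_str='2/3',
                    version='v1', models=models, name_exp='special_players')
game.run(5)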

Rephrase prompts

Games usually consist of three prompts: "description", "report", and "request" (the Bar Game has "explicit" and "implicit" versions of the report). Users can rephrase the prompts with GPT-4 using the provided implementation:

from prompt_template.prompt_rephrase import *
game_file = 'guessing_game'
rephrase_files = ['description', 'report', 'request']
# Rephrase each v1 prompt file and save it under the v2 suffix
rephrase([f"prompt_template/{game_file}/{filename}_v1.txt" for filename in rephrase_files], replace_suffix="v1", suffix="v2")
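
Assuming the rephrased prompts are written as ..._v2.txt files alongside the originals, a game can then be run against them by selecting the new prompt version:

# Use the rephrased (v2) prompts for the same game
game = GuessingGame(player_num=10, min=0, max=100, ratio=2/3, ratio_str='2/3',
                    version='v2', models='gpt-3.5-turbo', name_exp='test_v2')
game.run(20)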

Result Analysis

For GuessingGame, BarGame, DivideDollar, and DinerDilemma, we provide a tool that visualizes and merges multiple runs into a single figure, as shown in our paper. analysis_main.ipynb demonstrates sample usage of the tool.

Analyze Multiple Runs

In analysis_main.ipynb, specify the server and analysis parameters:

  1. Import the following modules:

    from server import *
    from global_functions import *
    from analysis import *
    
  2. Import the game:

    from games.guessing_game import *
    
  3. Create an Analysis instance:

    plane = Analysis(GuessingGame)
    
  4. Add a saved run with a label:

    plane.add('raw_results/guessing_game/guessing_game_v1_1.json', "T1")
    
  5. Plot the graph (a fuller sketch follows this list):

    plane.plot()
    
    • index: Figure index; some games provide more than one figure. For example, in DinerDilemma we provide one graph showing the percentage of players choosing "cheap" in each round and another showing the averaged accumulated percentage of players who chose "cheap" (default: 0)
    • title: Title of the graph (default: None)
    • xlabel: Label of the x-axis (default: "Round")
    • ylabel: Label of the y-axis (default: None)
    • ylim: Range of the y-axis (default: None)
    • loc: Location of the legend (default: "upper right")
    • format: Figure format (default: "png")
    • savename: Name of the output figure (default: "merge")
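
Putting it together, a sketch that merges several saved runs and customizes the figure (paths, labels, and keyword values are illustrative; the keywords follow the parameter list above):

from server import *
from global_functions import *
from analysis import *
from games.guessing_game import *

plane = Analysis(GuessingGame)
# Add three saved runs, each with a legend label
for i in range(1, 4):
    plane.add(f'raw_results/guessing_game/guessing_game_v1_{i}.json', f"T{i}")
# Plot the merged figure with custom title, y-range, and output settings
plane.plot(index=0, title="Guessing Game", ylim=(0, 100),
           loc="upper right", format="pdf", savename="guessing_merge")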

👉 Paper and Citation

For more details, please refer to our paper <a href="https://arxiv.org/abs/2403.11807">here</a>.

If you find our paper and tool interesting and useful, please feel free to give us a star and cite us via:

@article{huang2024how,
  author    = {Jen{-}tse Huang and
               Eric John Li and
               Man Ho Lam and
               Tian Liang and
               Wenxuan Wang and
               Youliang Yuan and
               Wenxiang Jiao and
               Xing Wang and
               Zhaopeng Tu and
               Michael R. Lyu},
  title     = {How Far Are We on the Decision-Making of LLMs? Evaluating LLMs' Gaming Ability in Multi-Agent Environments},
  journal   = {arXiv preprint arXiv:2403.11807},
  year      = {2024}
}