LevDoom

LevDoom is a benchmark with difficulty levels based on visual modifications, intended for research on the generalization of deep reinforcement learning agents. The benchmark is built upon ViZDoom, a platform for pixel-based learning in the FPS game domain.

For more details, please refer to our CoG 2022 paper. To reproduce the paper's results, follow the instructions in the RL module.

Installation

To install LevDoom from PyPI, just run:

$ pip install LevDoom

Alternatively, to install LevDoom from source:

  1. Clone the repository
$ git clone https://github.com/TTomilin/LevDoom
  2. Navigate into the repository
$ cd LevDoom
  3. Install the dependencies
$ pip install .

Environments

The benchmark consists of 4 scenarios, each with 5 levels of increasing difficulty. The full list of environments can be found in the LevDoom module.

| Scenario | Success Metric | Enemies | Weapon | Items | Max Steps | Actions | Stochasticity |
|---|---|---|---|---|---|---|---|
| Defend the Center | Frames Alive | ✓ | ✓ | ✗ | 2100 | 6 | Enemy behaviour |
| Health Gathering | Frames Alive | ✗ | ✗ | ✓ | 2100 | 6 | Health kit spawn locations |
| Seek and Slay | Kill Count | ✓ | ✓ | ✗ | 1250 | 12 | Enemy and agent spawn locations |
| Dodge Projectiles | Frames Alive | ✓ | ✗ | ✗ | 2100 | 6 | Enemy behaviour |
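
Each scenario and level pair is registered as a Gymnasium environment. As a quick way to list what is available, you can query the Gymnasium registry. This is a minimal sketch, not the library's documented API: it assumes LevDoom registers its environments on import and that every LevDoom ID contains the substring 'Level', as in the IDs used in the Quick Start section below.

import gymnasium as gym

import levdoom  # importing is assumed to register the LevDoom environments

# Collect IDs such as 'DefendTheCenterLevel0-v0' from the Gymnasium registry
levdoom_ids = sorted(env_id for env_id in gym.registry if 'Level' in env_id)
print(levdoom_ids)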

Environment Modifications

LevDoom imposes generalization difficulty by modifying the base environment of a scenario. Each modification increases the difficulty level of the generalization task. There are 8 types of modifications across all scenarios.

| Modification | Description |
|---|---|
| Textures | Varies the appearance of the walls, ceilings and floors |
| Obstacles | Adds impassable obstructions to the map that impede the agent's movement |
| Entity Size | Changes the size of enemies and obtainable items |
| Entity Type | Changes the type of enemies and obtainable items |
| Entity Rendering | Varies the rendering type of enemies and obtainable items |
| Entity Speed | Increases the speed of enemies |
| Agent Height | Vertically shifts the view point of the agent |

Difficulty Levels

The number of combined modifications determines the difficulty level.

| Scenario | Level 0 | Level 1 | Level 2 | Level 3 | Level 4 |
|---|---|---|---|---|---|
| Defend the Center | Default | Gore | Stone Wall + Flying Enemies | Resized Flying Enemies + Mossy Bricks | Complete |
| Health Gathering | Default | Resized Kits | Stone Wall + Flying Enemies | Lava + Supreme + Resized Agent | Complete |
| Seek and Slay | Default | Shadows | Obstacles + Resized Enemies | Red + Obstacles + Invulnerable | Complete |
| Dodge Projectiles | Default | Barons | Revenants | Flames + Flaming Skulls + Mancubus | Complete |
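
To see how environments accumulate modifications across levels, you can instantiate every level of a single scenario. This is a hedged sketch: Scenario.DEFEND_THE_CENTER and the env.unwrapped.name attribute are assumed to follow the patterns used elsewhere on this page.

import levdoom
from levdoom.utils.enums import Scenario

# Levels 0 to 4 bundle progressively more simultaneous modifications
for level in range(5):
    envs = levdoom.make_level(Scenario.DEFEND_THE_CENTER, level=level)
    print(f"Level {level}: {[env.unwrapped.name for env in envs]}")
    for env in envs:
        env.close()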

Quick Start

LevDoom follows the Gymnasium interface. You can create an environment using the make function:

import levdoom

env = levdoom.make('DefendTheCenterLevel0-v0')
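
Since levdoom.make returns a standard Gymnasium environment, the usual Gymnasium calls apply. Below is a minimal sketch using only the generic API; the exact observation shape depends on the scenario configuration.

import levdoom

env = levdoom.make('DefendTheCenterLevel0-v0')
# Standard Gymnasium API: inspect the spaces, reset, and take one random step
print(env.observation_space)
print(env.action_space)
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
env.close()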

You can also directly create all environments of a level using the make_level function:

import levdoom
from levdoom.utils.enums import Scenario

level_envs = levdoom.make_level(Scenario.DODGE_PROJECTILES, level=3)

Examples

Find examples of using LevDoom environments in the examples folder.

Single Environment

import levdoom

env = levdoom.make('HealthGatheringLevel3_1-v0')
env.reset()
done = False
steps = 0
total_reward = 0
while not done:
    action = env.action_space.sample()
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated  # stop on either termination or truncation
    env.render()
    steps += 1
    total_reward += reward
print(f"Episode finished in {steps} steps. Reward: {total_reward:.2f}")
env.close()

Single Level

import levdoom
from levdoom.utils.enums import Scenario

max_steps = 100
level_envs = levdoom.make_level(Scenario.SEEK_AND_SLAY, level=1, max_steps=max_steps)
for env in level_envs:
    env.reset()
    total_reward = 0
    for i in range(max_steps):
        action = env.action_space.sample()
        state, reward, terminated, truncated, info = env.step(action)
        env.render()
        total_reward += reward
        if terminated or truncated:
            break
    print(f"{env.unwrapped.name} finished in {i + 1} steps. Reward: {total_reward:.2f}")
    env.close()

Citation

If you use our work in your research, please cite it as follows:

@inproceedings{tomilin2022levdoom,
  title     = {LevDoom: A Benchmark for Generalization on Level Difficulty in Reinforcement Learning},
  author    = {Tristan Tomilin and Tianhong Dai and Meng Fang and Mykola Pechenizkiy},
  booktitle = {Proceedings of the IEEE Conference on Games},
  year      = {2022}
}