DECKARD Minecraft Agent
The DECKARD Minecraft agent uses knowledge from large language models to guide exploration for reinforcement learning agents. This repository includes our implementation of the DECKARD agent for collecting and crafting arbitrary items in Minecraft. For additional details about our approach, please see our website and paper, Do Embodied Agents Dream of Pixelated Sheep?.
Installation
We use MineDojo for agent training and evaluation in Minecraft. Before installing the Python dependencies for MineDojo, you will need openjdk-8-jdk and python>=3.9. This guide contains additional installation details for the MineDojo simulator.
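For example, on an Ubuntu machine with apt (assumed here; adjust for your package manager), you can install the Java prerequisite and check your Python version with:
sudo apt-get update && sudo apt-get install -y openjdk-8-jdk   # JDK 8 required by the Minecraft simulator
python --version                                               # should report 3.9 or newer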
If you haven't cloned the VPT submodule yet, run:
git submodule update --init --recursive
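Alternatively, if you have not cloned the repository yet, the submodule can be fetched at clone time (the repository URL is shown as a placeholder):
git clone --recurse-submodules <repository-url>   # pulls the VPT submodule along with the main repo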
Next, install the Python packages:
pip install -r requirements.txt
We finetune our agent from OpenAI's VPT Minecraft agent. Download their pretrained weights using our script:
bash download_model.sh
Finally, we use MineCLIP for reward shaping. Download the weights here and place them at weights/mineclip_attn.pth.
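For example, assuming the weights file was downloaded to the current directory, it can be moved into place with:
mkdir -p weights                  # create the weights directory if it does not exist
mv mineclip_attn.pth weights/     # place the MineCLIP checkpoint where the config expects it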
Usage
By default, DECKARD occasionally pauses exploration to train subtasks. Run this mode using:
python main.py
Alternatively, you can pretrain policies for subtasks by running:
python subtask.py --task base_task --target_item log
Then, add the trained subtask checkpoint to your yaml config under techtree_specs.tasks:
my_config:
  task_id: creative
  sim: minedojo
  fast_reset: 0
  terminal_specs:
    max_steps: 10000
  techtree_specs:
    guide_path: data/codex_techtree.json
    target_item: wooden_pickaxe
    tasks:
      log: log_checkpoint.zip
and run DECKARD to build the Minecraft technology tree using:
python techtree.py --config my_config
Note that Minecraft requires xvfb-run to render on a virtual display when running on a headless machine.
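For example, on a headless server you can wrap the commands above with xvfb-run (the -a flag automatically selects a free display number):
xvfb-run -a python main.py
xvfb-run -a python techtree.py --config my_config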