LLM-TAMP

This is the official repository of the paper:

LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning.

$\text{LLM}^3$ is an LLM-powered Task and Motion Planning (TAMP) framework that leverages a pre-trained LLM (GPT-4) as the task planner, parameter sampler, and motion failure reasoner. We evaluate the framework on a series of tabletop box-packing tasks in PyBullet.

Demo

https://github.com/AssassinWS/LLM-TAMP/assets/144423427/74566b14-a62e-401d-a8d9-2f27f3a7ede3

Prerequisites

Install dependencies

git clone git@github.com:AssassinWS/LLM-TAMP.git
cd LLM-TAMP
pip install -r requirements.txt

Project structure

We use hydra-core to configure the project.
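The snippet below is a minimal sketch of a Hydra entry point, assuming configs live in a configs/ directory and that llm_tamp is one of the config names (matching the --config-name flag used in the commands below); it is illustrative and not the repository's actual main.py.

# Illustrative Hydra entry point (not the repo's main.py).
# Assumes a "configs/" directory and a "llm_tamp" config, mirroring --config-name below.
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(version_base=None, config_path="configs", config_name="llm_tamp")
def main(cfg: DictConfig) -> None:
    # Print the fully composed configuration (env, planner, max_llm_calls, ...),
    # including any key=value overrides passed on the command line.
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()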

Usage

Before Running

First, create a folder openai_keys under the project directory. Then, create a file openai_key.json inside that folder and fill it with your OpenAI API key:

{
    "key": "",
    "org": "",
    "proxy" : ""
}
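For reference, here is a minimal sketch of how such a key file could be read and passed to the OpenAI client; this is illustrative only, since the repository loads the file internally.

# Illustrative only: read openai_keys/openai_key.json and configure the OpenAI client.
import json
import openai

with open("openai_keys/openai_key.json") as f:
    creds = json.load(f)

openai.api_key = creds["key"]
if creds.get("org"):  # the "org" and "proxy" fields may be left empty
    openai.organization = creds["org"]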

Run TAMP planning

This corresponds to the ablation study in the $\text{LLM}^3$ paper.

Full example with various options:

python main.py --config-name=llm_tamp env=easy_box_small_basket planner=llm_backtrack max_llm_calls=10 overwrite_instances=true play_traj=true use_gui=true

Run parameter sampling

This corresponds to the action parameter selection experiment in the $\text{LLM}^3$ paper.

Run with the LLM sampler:

python main.py --config-name=llm_tamp env=easy_box_small_basket planner=llm_sample_params max_llm_calls=10 play_traj=true use_gui=true

Run with the random sampler:

python main.py --config-name=random_sample env=easy_box_small_basket