cordial-sync

This codebase can be used to reproduce the results from our paper, A Cordial Sync: Going Beyond Marginal Policies for Multi-Agent Embodied Tasks, accepted to the 2020 European Conference on Computer Vision (ECCV'20). For any concerns or questions, please either create a GitHub issue (preferred) or reach out to one of the joint first authors of the above paper.

We also include useful code for our previous work on collaborative embodied agents, Two Body Problem (CVPR'19).

Table of contents

  1. Installation
  2. Running a FurnMove experiment
  3. Reproducing plots and tables
  4. Code structure
  5. Additional information

Installation

The following describes how to install all Python dependencies and how to download our custom AI2-THOR build and model checkpoints. While downloading model checkpoints is optional, downloading our AI2-THOR build is required to enable the FurnMove and FurnLift tasks.

Begin by cloning this repository to your local machine and moving into the top-level directory.

This library has been tested only with Python 3.6; the following assumes you have a working version of Python 3.6 installed locally. To install the requirements, we recommend using pipenv, but we also include instructions if you would prefer to install everything directly using pip.

Installing python dependencies

Installing requirements with pipenv (recommended)

If you have already installed pipenv, you may run the following to install all requirements.

pipenv install --skip-lock

This should automatically resolve all dependencies and give an output like:

$ pipenv install --skip-lock
Removing virtualenv (/Users/USERNAME/.local/share/virtualenvs/cordial-sync-TKiR-u2H)…
Creating a virtualenv for this project…
Pipfile: /PATH/TO/DIR/cordial-sync/Pipfile
Using /usr/bin/python3 (3.6.10) to create virtualenv…
⠹ Creating virtual environment...Using base prefix '/usr'
New python executable in /SOME/PATH/virtualenvs/cordial-sync-TKiR-u2H/bin/python3
Also creating executable in /SOME/PATH/virtualenvs/cordial-sync-TKiR-u2H/bin/python
Installing setuptools, pip, wheel...
done.
Running virtualenv with interpreter /usr/bin/python3

✔ Successfully created virtual environment!
Virtualenv location: /SOME/PATH/virtualenvs/cordial-sync-TKiR-u2H
Installing dependencies from Pipfile…
An error occurred while installing torch~=1.1.0! Will try again.
An error occurred while installing tensorboardx! Will try again.
An error occurred while installing tables! Will try again.
  🐍   ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 29/29 — 00:02:56
Installing initially failed dependencies…
  ☤  ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 3/3 — 00:01:47
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.

You may need to run this command twice to ensure everything is installed properly.
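As an optional sanity check (our suggestion, not an official step), you can confirm that the three packages which initially failed in the example log above now import cleanly. Run this inside the environment, e.g. via pipenv run python:

# Optional sanity check: import the packages that initially failed (and were
# retried) in the example pipenv log above.
import torch
import tensorboardX
import tables

print(torch.__version__)  # the Pipfile pins torch~=1.1.0, so expect 1.1.x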

Installing requirements with pip

Note: do not run the following if you have already installed the requirements with pipenv as above. If you prefer using pip, you may install all requirements as follows:

pip install -r requirements.txt

Depending on your machine configuration, you may need to use pip3 instead of pip in the above.

Downloading data

To reproduce our results and run new experiments, you will need to download our custom AI2-THOR build, model checkpoints, and evaluation data. We have included a helper script that will download these files and extract them into the appropriate directories. To run this script, simply run the following two lines from within the top-level project directory:

export PYTHONPATH=$PYTHONPATH:$PWD
python rl_multi_agent/scripts/download_evaluation_data.py

You may then need to make the downloaded AI2-THOR builds executable:

chmod -R +x ai2thor_builds
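As a quick, optional check (this snippet is our suggestion and not part of the codebase), you can confirm that the script populated the three directories described in the sections below:

# Optional check: the download script should have populated these directories.
import os

for directory in ["ai2thor_builds", "trained_models", "data"]:
    print(directory, "exists" if os.path.isdir(directory) else "MISSING")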

If you would prefer to download all files manually, or are interested in what is being downloaded, please see the following three sections.

Downloading our AI2-THOR build

You should not need to do the following if you have successfully run the download_evaluation_data.py script.

To run our FurnLift and FurnMove experiments, you must download our custom AI2-THOR build. You can find this build (for MacOSX and Linux) at this link. Please unzip it into the top-level directory of this project (the path cordial-sync/ai2thor_builds should then contain 2 files and a directory).

After downloading, you may then need to make the downloaded AI2-THOR builds executable:

chmod -R +x ai2thor_builds
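If you would like to confirm that the executable bits were set, a purely illustrative snippet follows; the exact file names inside ai2thor_builds depend on your platform:

# Illustrative only: report whether each top-level entry in ai2thor_builds
# has the user-executable bit set.
import os
import stat

for entry in sorted(os.listdir("ai2thor_builds")):
    mode = os.stat(os.path.join("ai2thor_builds", entry)).st_mode
    print(entry, "executable" if mode & stat.S_IXUSR else "NOT executable")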

Downloading model checkpoints

You should not need to do the following if you have successfully run the download_evaluation_data.py script.

Our model checkpoints can be downloaded at this link. Please unzip this file and move the contents of the unzipped folder (this should be the two directories final_furnlift_ckpts and final_furnmove_ckpts) into the trained_models directory of this project. These two directories contain all final model checkpoints for our FurnLift and FurnMove experiments.
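If you would like to peek inside one of these checkpoints, a minimal sketch using PyTorch follows; the file name below is hypothetical, so substitute any checkpoint file from the two directories above:

# Minimal sketch (hypothetical file name): load a released checkpoint on CPU
# just to inspect its contents.
import torch

ckpt = torch.load(
    "trained_models/final_furnmove_ckpts/SOME_CHECKPOINT.dat",
    map_location="cpu",  # no GPU required just to inspect the file
)
print(type(ckpt))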

Downloading evaluation data

You should not need to do the following if you have successfully run the download_evaluation_data.py script.

While not necessary to begin using our codebase, if you would like to reproduce the plots from our paper (or evaluate your model on the same test set that we use), you will need to download the data at this link. Extract the contents (a data directory) into the top-level directory of this project; this will overwrite the empty data directory that already exists when cloning this repository.

Running a FurnMove experiment

All FurnMove (and FurnLift) experiment config files can be found in the rl_multi_agent/experiments directory. As an example, let's say we wish to train two agents to complete the FurnMove task and, furthermore, we wish to use agents with a CORDIAL loss and SYNC policies. The experiment configuration for this can be found in the file rl_multi_agent/experiments/furnmove_vision_mixture_cl_rot_config.py. To begin training for a total of 1e6 episodes with the settings encoded in the flags below (GPUs 0 and 1, the AMSGrad optimizer variant, 2 workers, 50 steps per model update, and a checkpoint saved every 5000 steps), we should run the command

python rl_multi_agent/main.py --task furnmove_vision_mixture_cl_rot --experiment_dir rl_multi_agent/experiments --gpu_ids 0 1 --amsgrad t --workers 2 --num_steps 50 --save_freq 5000 --max_ep 1000000

To train a similar model in our gridworld one can simply replace furnmove_vision_mixture_cl_rot with furnmove_grid_mixture_cl_rot in the above. To train the same (visual) model as above but in the FurnLift task, one can replace furnmove_vision_mixture_cl_rot with furnlift_vision_mixture_cl.
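If you want to queue up all three variants mentioned above, a small wrapper script may be convenient. The sketch below is purely hypothetical (it is not part of the codebase) and simply replays the example command's flags for each task name, running the trainings one after another:

# Hypothetical convenience wrapper (not part of the codebase): launch the
# three experiment variants discussed above, one after another, reusing the
# flags from the example command.
import subprocess

TASKS = [
    "furnmove_vision_mixture_cl_rot",
    "furnmove_grid_mixture_cl_rot",
    "furnlift_vision_mixture_cl",
]

for task in TASKS:
    subprocess.run(
        ["python", "rl_multi_agent/main.py",
         "--task", task,
         "--experiment_dir", "rl_multi_agent/experiments",
         "--gpu_ids", "0", "1",
         "--amsgrad", "t",
         "--workers", "2",
         "--num_steps", "50",
         "--save_freq", "5000",
         "--max_ep", "1000000"],
        check=True,
    )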

Reproducing plots and tables

Evaluating using the released data

Downloading our evaluation data is the easiest way to quickly reproduce our tables and plots. With the data/ directory in place, the following scripts will be useful:

Evaluating from scratch

If you would like to rerun all evaluations from scratch (but using our model checkpoints), please see the documentation for the scripts:

If you wish to go even one step further and retrain our models, you will need to follow the directions in the Running a FurnMove experiment section above for all of the relevant experiments. Suppose now you have trained a new model checkpoint (e.g. my_great_checkpoint.dat) using the rl_multi_agent/experiments/furnlift_vision_central_cl_config.py experiment configuration. To evaluate this checkpoint you will now need to edit the corresponding evaluation config file (rl_multi_agent/furnlift_eval_experiments/furnlift_vision_central_cl_config.py) so that its saved_model_path method returns the path to your checkpoint.
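For concreteness, the edit might look something like the sketch below; the enclosing class is elided and the checkpoint path shown is a placeholder, not a real location:

# Sketch only: inside the evaluation config in
# rl_multi_agent/furnlift_eval_experiments/furnlift_vision_central_cl_config.py,
# have saved_model_path return the path to your new checkpoint.
def saved_model_path(self):
    return "/PATH/TO/my_great_checkpoint.dat"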

Code structure

Here we briefly summarize the contents of each directory:

Additional information

Contributions

If you would like to contribute an improvement, we recommend first submitting an issue describing it. Doing so ensures we can validate your suggestion before you spend a great deal of time on it. Small (or validated) improvements and bug fixes should be made via a pull request from your fork of this repository.

References

This work builds upon the open-sourced pytorch-a3c library of Ilya Kostrikov.

Citation

If you use this codebase, please consider citing Cordial Sync and Two Body Problem.

@InProceedings{JainWeihs2020CordialSync,
  author = {Jain, Unnat and Weihs, Luca and Kolve, Eric and Farhadi, Ali and Lazebnik, Svetlana and Kembhavi, Aniruddha and Schwing, Alexander G.},
  title = {A Cordial Sync: Going Beyond Marginal Policies For Multi-Agent Embodied Tasks},
  booktitle = {ECCV},
  year = {2020},
  note = {first two authors contributed equally},
}

@InProceedings{JainWeihs2019TwoBody,
  author = {Jain, Unnat and Weihs, Luca and Kolve, Eric and Rastegari, Mohammad and Lazebnik, Svetlana and Farhadi, Ali and Schwing, Alexander G. and Kembhavi, Aniruddha},
  title = {Two Body Problem: Collaborative Visual Task Completion},
  booktitle = {CVPR},
  year = {2019},
  note = {first two authors contributed equally},
}