A journal version of this work, in conjunction with our prior work on Learning to Look Around: Intelligently Exploring Unseen Environments for Unknown Tasks, has been published in Science Robotics 2019:

Emergence of exploratory look-around behaviors through active observation completion
Santhosh K. Ramakrishnan, Dinesh Jayaraman, Kristen Grauman
Science Robotics 2019

A cleaned version of this codebase, along with new transfer tasks, is available at https://github.com/srama2512/visual-exploration.

Sidekick Policy Learning

This repository contains code and data for the paper

Sidekick Policy Learning for Active Visual Exploration
Santhosh K. Ramakrishnan, Kristen Grauman
ECCV 2018

Setup

# Create and activate a Python 2.7 conda environment
conda create -n spl python=2.7
source activate spl
# Clone the repository and install dependencies
git clone https://github.com/srama2512/sidekicks.git
cd sidekicks
pip install -r requirements.txt
# Download and extract the preprocessed datasets
wget http://vision.cs.utexas.edu/projects/sidekicks/data.zip
unzip data.zip

Evaluating pre-trained models

All pre-trained models are provided at http://vision.cs.utexas.edu/projects/sidekicks/models.zip. To evaluate them, download and extract the archive into the models directory. To reproduce the results from the paper:

# Download and extract all pre-trained models, then run the full evaluation script
wget http://vision.cs.utexas.edu/projects/sidekicks/models.zip
unzip models.zip
sh evaluation_script_final.sh

Evaluation examples

# Evaluate the one-view model on SUN360 (dataset 0)
python eval.py --h5_path data/sun360/sun360_processed.h5 --dataset 0 \
               --model_path models/sun360/one-view.net --T 1 --M 8 --N 4 \
               --start_view 2 --save_path dummy/
# Evaluate the ltla model on SUN360 with a budget of T=4 views
python eval.py --h5_path data/sun360/sun360_processed.h5 --dataset 0 \
               --model_path models/sun360/ltla.net --T 4 --M 8 --N 4 \
               --start_view 2 --save_path dummy/
# Evaluate the random-actions baseline on SUN360
python eval.py --h5_path data/sun360/sun360_processed.h5 --dataset 0 \
               --model_path models/sun360/rnd-actions.net --T 4 --M 8 --N 4 \
               --start_view 2 --actorType random --save_path dummy/
# Evaluate the one-view model on ModelNet Hard (seen: modelnet30, unseen: modelnet10)
python eval.py --h5_path data/modelnet_hard/modelnet30_processed.h5 \
               --h5_path_unseen data/modelnet_hard/modelnet10_processed.h5 --dataset 1 \
               --model_path models/modelnet_hard/one-view.net --T 1 --M 9 --N 5 \
               --start_view 2 --save_path dummy/

Training models

Ensure that the pre-trained models and pre-computed scores are downloaded and extracted.
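For reference, the commands below assume roughly the following layout inside the repository. The paths are taken directly from the commands themselves; the exact contents of the downloaded archives may differ slightly.

data/
  sun360/sun360_processed.h5
  modelnet_hard/modelnet30_processed.h5, modelnet10_processed.h5
models/
  sun360/one-view.net, ltla.net, rnd-actions.net
  modelnet_hard/one-view.net
scores/
  sun360/ours-rew-scores.h5, ours-demo-scores.h5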

# Train the one-view baseline on SUN360 (T=1)
python main.py --T 1 --training_setting 0 --epochs 100 \
               --save_path saved_models/sun360/one-view
# Train the ltla agent (T=4), initialized from the one-view model
python main.py --T 4 --training_setting 1 --epochs 1000 \
               --save_path saved_models/sun360/ltla/ \
               --load_model models/sun360/one-view.net
# Train the reward-based sidekick agent (ours-rew) using pre-computed sidekick scores
python main.py --T 4 --training_setting 1 --epochs 1000 \
               --save_path saved_models/sun360/ours-rew/ \
               --load_model models/sun360/one-view.net --expert_rewards True \
               --rewards_h5_path scores/sun360/ours-rew-scores.h5
# Train the demonstration-based sidekick agent (ours-demo) using pre-computed sidekick scores
python main.py --T 4 --training_setting 1 --epochs 1000 \
               --save_path saved_models/sun360/ours-demo/ \
               --load_model models/sun360/one-view.net --expert_trajectories True \
               --utility_h5_path scores/sun360/ours-demo-scores.h5
# Train the ltla agent on ModelNet Hard
python main.py --h5_path data/modelnet_hard/modelnet30_processed.h5 \
               --training_setting 1 --dataset 1 --T 4 --M 9 --N 5 \
               --load_model models/modelnet_hard/one-view.net \
               --save_path saved_models/modelnet_hard/ltla/

The other ModelNet Hard models can be trained analogously to the SUN360 models. To train actor-critic models, set --baselineType critic. To give the critic full observability (for asymm-ac), additionally set --critic_full_obs True, as sketched below.
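For instance, an asymm-ac model on SUN360 could be trained with something like the following. This is a sketch only: the save path and the initialization from the one-view model are assumptions carried over from the commands above.

# Sketch: actor-critic training with a fully observable critic (asymm-ac) on SUN360.
# The save path and one-view initialization are assumed, mirroring the commands above.
python main.py --T 4 --training_setting 1 --epochs 1000 \
               --save_path saved_models/sun360/asymm-ac/ \
               --load_model models/sun360/one-view.net \
               --baselineType critic --critic_full_obs True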

Visualization

From the repository directory, start jupyter notebook and open visualize_policy_paper.ipynb. Complete the TODOs mentioned in the comments (setting the correct paths) and run the entire notebook. It will generate tensorboard files containing visualized heatmaps for several examples.
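For example, the workflow looks roughly like the following; the log directory is a placeholder, so point it at wherever the notebook was configured to write its tensorboard files.

# Launch the notebook, complete the TODOs, and run all cells
jupyter notebook visualize_policy_paper.ipynb
# Then browse the generated heatmaps in TensorBoard
tensorboard --logdir <path-to-generated-tensorboard-files>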