agame-vos
PyTorch implementation of the paper A Generative Appearance Model for End-to-End Video Object Segmentation, including complete training code and trained models.
Dependencies:
python (>= 3.5)
numpy
pytorch (probably >= 0.5)
torchvision
pillow
tqdm
Datasets utilized:
DAVIS
YouTubeVOS
How to setup:
- Install dependencies
- Clone this repo:
git clone https://github.com/joakimjohnander/agame-vos.git
- Download datasets
- Edit local_config.py so it points to the appropriate directories for reading and saving data
- Move the ytvos_trainval_split/ImageSets directory into your YouTubeVOS data directory. The directory structure should then look like:
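For reference, a minimal local_config.py might look like the sketch below. Only config['workspace_path'] is named in this README; the dataset-path keys and all paths are illustrative assumptions, not the repository's actual key names:

```python
# Hypothetical local_config.py sketch. Only 'workspace_path' appears in
# this README; the other keys and all paths are illustrative assumptions.
config = {
    # Where weights/checkpoints are saved and read
    'workspace_path': '/path/to/workspace',
    # Root directories of the downloaded datasets (assumed key names)
    'davis_path': '/path/to/DAVIS',
    'ytvos_path': '/path/to/youtube_vos',
}
```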
/...some_path.../youtube_vos
-- train
---- Annotations
---- JPEGImages
-- valid
---- Annotations
---- JPEGImages
-- ImageSets
---- train.txt
---- train_joakim.txt
---- val_joakim.txt
How to run method on DAVIS and YouTubeVOS with pre-trained weights:
- Download weights from https://drive.google.com/file/d/1lVv7n0qOtJEPk3aJ2-KGrOfYrOHVnBbT/view?usp=sharing
- Put the weights at the path pointed to by config['workspace_path'] in local_config.py.
- Run
python3 -u runfiles/main_runfile.py --test
How to train (and test) a new model:
- Run
python3 -u runfiles/main_runfile.py --train --test
Most settings used for training and evaluation are set in the runfiles; each runfile should correspond to a single experiment. An example runfile is supplied.
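The internals of a runfile are not specified here; as a purely hypothetical skeleton, one might expose the --train/--test flags used in the commands above via argparse (only those two flags appear in this README, everything else is an assumption):

```python
import argparse

# Hypothetical runfile skeleton. Only the --train/--test flags are taken
# from this README; all other names are illustrative assumptions.
def parse_args(argv=None):
    parser = argparse.ArgumentParser(description='agame-vos experiment')
    parser.add_argument('--train', action='store_true',
                        help='train a new model before evaluation')
    parser.add_argument('--test', action='store_true',
                        help='evaluate on DAVIS and YouTubeVOS')
    return parser.parse_args(argv)

def main(argv=None):
    args = parse_args(argv)
    if args.train:
        print('training...')  # a real runfile would launch training here
    if args.test:
        print('testing...')   # a real runfile would run evaluation here
```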