# 3DC-Seg
This repository contains the official implementation for the paper:
**Making a Case for 3D Convolutions for Object Segmentation in Videos**
Sabarinath Mahadevan\*, Ali Athar\*, Aljoša Ošep, Laura Leal-Taixé, Bastian Leibe
BMVC 2020 | Paper | Video | Project Page
## Required Packages
- Python 3.7
- PyTorch 1.4 or greater
- Nvidia-apex: https://github.com/NVIDIA/apex
- tensorboard, pycocotools and other packages listed in requirements.txt
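Before installing the repository's packages, a quick environment check can catch version mismatches early. The snippet below is a minimal sketch (the `version_tuple` helper is our own, not part of the repository) that verifies the Python 3.7 and PyTorch 1.4+ requirements listed above:

```python
import sys

def version_tuple(v):
    # "1.4.0" or "1.5.1+cu101" -> (1, 4, 0) / (1, 5, 1); drops local suffixes
    return tuple(int(p) for p in v.split("+")[0].split(".")[:3])

# The repository requires Python 3.7 and PyTorch 1.4 or greater.
assert sys.version_info >= (3, 7), "Python 3.7+ required"

try:
    import torch
    assert version_tuple(torch.__version__) >= (1, 4), "PyTorch 1.4+ required"
except ImportError:
    print("PyTorch is not installed - see requirements.txt")
```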
## Setup

- Clone the repository and append it to the `PYTHONPATH` variable:

  ```shell
  git clone https://github.com/sabarim/3DC-Seg.git
  cd 3DC-Seg
  export PYTHONPATH=$(pwd):$PYTHONPATH
  ```

- Create a folder named `saved_models`.
## Checkpoint

The trained checkpoint is available at the link given below:

| Target Dataset | Datasets Required for Training | Model Checkpoint |
|---|---|---|
| DAVIS, FBMS, ViSal | COCO, YouTubeVOS, DAVIS'17 | link |
## Usage

### Training

- Run `mkdir -p saved_models/csn/`.
- Download the pretrained backbone weights and place them in the folder created above.
- Start training:

  ```shell
  python main.py -c run_configs/<name>.yaml --num_workers <number of workers for dataloader> --task train
  ```
### Inference

Use the pre-trained checkpoint downloaded from our server together with the provided config files to reproduce the results in Tables 4 and 5 of the paper. Please note that if you wish to run an evaluation on DAVIS, you will have to use the official DAVIS evaluation package adapted for DAVIS-16, as per the issue listed here.

- DAVIS:

  ```shell
  python main.py -c run_configs/bmvc_final.yaml --task infer --wts <path>/bmvc_final.pth
  ```

- DAVIS - Dense:

  ```shell
  python main.py -c run_configs/bmvc_final_dense.yaml --task infer --wts <path>/bmvc_final.pth
  ```

- FBMS:

  ```shell
  python main.py -c run_configs/bmvc_fbms.yaml --task infer --wts <path>/bmvc_final.pth
  ```

- ViSal:

  ```shell
  python main.py -c run_configs/bmvc_visal.yaml --task infer --wts <path>/bmvc_final.pth
  ```
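Since the four inference runs differ only in the config file, they can be scripted in a single loop. The sketch below uses `echo` as a dry run (remove it to actually execute) and a placeholder checkpoint path, matching the `<path>` placeholder used above:

```shell
# Dry-run loop over the four inference configs; remove `echo` to execute.
CKPT="<path>/bmvc_final.pth"
for cfg in bmvc_final bmvc_final_dense bmvc_fbms bmvc_visal; do
    echo python main.py -c "run_configs/${cfg}.yaml" --task infer --wts "$CKPT"
done
```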
## Pre-computed results

Pre-computed segmentation masks for the different datasets can be downloaded from the links given below:

| Target Dataset | Results |
|---|---|
| DAVIS | link |
| DAVIS - Dense | link |
| FBMS | link |
| ViSal | link |