3DC-Seg

This repository contains the official implementation for the paper:

Making a Case for 3D Convolutions for Object Segmentation in Videos

Sabarinath Mahadevan*, Ali Athar*, Aljoša Ošep, Laura Leal-Taixé, Bastian Leibe

BMVC 2020 | Paper | Video | Project Page

Required Packages

Setup

  1. Clone the repository and append it to the PYTHONPATH variable:

    git clone https://github.com/sabarim/3DC-Seg.git
    cd 3DC-Seg
    export PYTHONPATH=$(pwd):$PYTHONPATH
    
  2. Create a folder named 'saved_models' (see the command below):
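     For example, run from the repository root (this simply creates the directory the later steps expect):

       mkdir -p saved_models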

Checkpoint

  1. The trained checkpoint is available at the link given below:

     | Target Dataset | Datasets Required for Training | Model Checkpoint |
     |----------------|--------------------------------|------------------|
     | DAVIS, FBMS, ViSal | COCO, YouTubeVOS, DAVIS'17 | link |

Usage

Training:

  1. Run mkdir -p saved_models/csn/
  2. Download the pretrained backbone weights and place them in the folder created above.
  3. Start training:

     python main.py -c run_configs/<name>.yaml --num_workers <number of workers for dataloader> --task train
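   For example, a hypothetical invocation with 4 dataloader workers (the config name is only an illustration here, reusing bmvc_final.yaml from the inference section; pick whichever file in run_configs/ matches the training run you want):

     python main.py -c run_configs/bmvc_final.yaml --num_workers 4 --task train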

Inference:

Use the pre-trained checkpoint downloaded from our server together with the provided config files to reproduce the results in Table 4 and Table 5 of the paper. Please note that you will have to use the official DAVIS evaluation package adapted for DAVIS-16, as described in the issue listed here, if you wish to run an evaluation on DAVIS.

  1. DAVIS:

     python main.py -c run_configs/bmvc_final.yaml --task infer --wts <path>/bmvc_final.pth

  2. DAVIS - Dense:

     python main.py -c run_configs/bmvc_final_dense.yaml --task infer --wts <path>/bmvc_final.pth

  3. FBMS:

     python main.py -c run_configs/bmvc_fbms.yaml --task infer --wts <path>/bmvc_final.pth

  4. ViSal:

     python main.py -c run_configs/bmvc_visal.yaml --task infer --wts <path>/bmvc_final.pth
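To reproduce all four result sets in one go, a small shell loop over the provided configs works as well (a convenience sketch, not part of the repository; replace the checkpoint path placeholder with the location of the downloaded weights):

    CKPT=/path/to/bmvc_final.pth   # placeholder for the downloaded checkpoint
    for cfg in bmvc_final bmvc_final_dense bmvc_fbms bmvc_visal; do
        python main.py -c run_configs/${cfg}.yaml --task infer --wts "$CKPT"
    done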

Pre-computed results

Pre-computed segmentation masks for the different datasets can be downloaded from the links given below:

| Target Dataset | Results |
|----------------|---------|
| DAVIS          | link    |
| DAVIS - Dense  | link    |
| FBMS           | link    |
| ViSal          | link    |