# Towards Segmenting Anything That Moves

<img src="http://www.achaldave.com/projects/anything-that-moves/videos/ZXN6A-tracked-with-objectness-trimmed.gif" width="32%" /><img src="http://www.achaldave.com/projects/anything-that-moves/videos/c95cd17749.gif" width="32%" /><img src="http://www.achaldave.com/projects/anything-that-moves/videos/e0bdb5dfae.gif" width="32%" />

[Pre-print] [Website]

Achal Dave, Pavel Tokmakov, Deva Ramanan

## Setup

  1. Download the models and extract them to `release/models`.
  2. Install PyTorch 0.4.0.
  3. Run `git submodule update --init`.
  4. Set up detectron-pytorch.
  5. Set up flownet2. If you only want to use the appearance stream, you can skip this step.
  6. Install the requirements with `pip install -r requirements.txt`<sup>1</sup>.
  7. Copy `./release/example_config.yaml` to `./release/config.yaml`, and edit the fields marked `***EDIT THIS***`.
  8. Add the root directory to `PYTHONPATH`: `source ./env.sh activate`.
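
For reference, here is a condensed sketch of the command-line half of the setup (steps 3 and 6-8 above). It assumes you are at the repository root and have already completed steps 1, 2, 4, and 5:

    # Fetch the submodules (detectron-pytorch, flownet2).
    git submodule update --init
    # Install Python dependencies; see footnote 1 if a module is missing.
    pip install -r requirements.txt
    # Create your config, then fill in the fields marked ***EDIT THIS***.
    cp ./release/example_config.yaml ./release/config.yaml
    # Add the repository root to PYTHONPATH.
    source ./env.sh activate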

## Running models

All scripts needed to run our models on standard datasets, as well as on new videos, are provided in the `./release` directory. Outside of `./release`, this repository contains a number of scripts that were not used for the final results. They can safely be ignored, but are provided in case anyone finds them useful.

### Run on your own video

  1. Extract frames: To run the model on your own video, first dump its frames to disk. For a single video, you can use

    ffmpeg -i video.mp4 %04d.jpg

    Alternatively, you can use this script to extract frames from multiple videos in parallel (a rough, self-contained stand-in is also sketched after this list).

  2. Run joint model: With the frames extracted, run the joint model as follows:

    # Inputs
    FRAMES_DIR=/path/to/frames/dir
    # Outputs
    OUTPUT_DIR=/path/to/output/dir
    
    python release/custom/run.py \
        --model joint \
        --frames-dir ${FRAMES_DIR} \
        --output-dir ${OUTPUT_DIR}
    
  3. Run appearance-only model: To run only the appearance model, you don't need to compute optical flow or set up flownet2:

    python release/custom/run.py \
        --model appearance \
        --frames-dir ${FRAMES_DIR} \
        --output-dir ${OUTPUT_DIR}
    
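If you cannot use the multi-video extraction script referenced in step 1, the following is a minimal sketch of one way to extract frames in parallel. It is not the repository's script; it assumes `ffmpeg` is installed and that the hypothetical `VIDEOS_DIR` contains your `.mp4` files:

    # Hypothetical stand-in for the parallel frame-extraction script:
    # dumps frames for every .mp4 in VIDEOS_DIR into its own subdirectory.
    VIDEOS_DIR=/path/to/videos   # assumed input directory
    FRAMES_ROOT=/path/to/frames  # assumed output root, one subdir per video
    for video in "${VIDEOS_DIR}"/*.mp4; do
        name=$(basename "${video}" .mp4)
        mkdir -p "${FRAMES_ROOT}/${name}"
        ffmpeg -i "${video}" "${FRAMES_ROOT}/${name}/%04d.jpg" &  # run in background
    done
    wait  # block until all ffmpeg jobs finish

Note that this launches one ffmpeg process per video simultaneously; a tool like GNU parallel gives bounded concurrency if you have many videos.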

### FBMS, DAVIS 2016/2017, YTVOS

The instructions for the FBMS, DAVIS 2016/2017, and YTVOS datasets are roughly the same. Once you have downloaded the dataset and edited the paths in `./release/config.yaml`, run the following scripts:

    # or davis16, davis17, ytvos
    dataset=fbms
    python release/${dataset}/compute_flow.py
    python release/${dataset}/infer.py
    python release/${dataset}/track.py
    # For evaluation:
    python release/${dataset}/evaluate.py

Note that by default, we use our final model, trained on COCO, FlyingThings3D, DAVIS, and YTVOS. For YTVOS, we provide the option to run a model that was trained without YTVOS, to evaluate generalization. To activate this, pass `--without-ytvos-train` to `release/ytvos/infer.py` and `release/ytvos/track.py`.
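
For example, to evaluate YTVOS generalization with the model trained without YTVOS (using the flag described above; any other arguments your setup requires are unchanged):

    # Inference and tracking on YTVOS with the model trained without YTVOS.
    python release/ytvos/infer.py --without-ytvos-train
    python release/ytvos/track.py --without-ytvos-train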


<a name="footnote1">1</a>: This file should contain all of the requirements, but it was assembled manually, so some pip modules may be missing. If you run into an import error, try pip installing the missing module, and/or file an issue.