Video Noise Contrastive Estimation (VINCE)

This repository contains the code used to implement the models in the paper Watching the World Go By: Representation Learning from Unlabeled Videos (https://arxiv.org/abs/2003.07990).

<img src="https://danielgordon10.github.io/images/projects/vince.jpg" height="500"/>

Environment Setup

We recommend using Anaconda to manage your environment setup and run our code. The following commands will create an environment similar to ours with minimal requirements.

Conda

conda create -n video-env python=3.6.8
conda deactivate
conda env update -n video-env -f env.yml
conda activate video-env
pip install git+https://github.com/danielgordon10/dg_util.git -U

Virtualenv

If you instead prefer virtualenv or similar, we have also provided a requirements.txt.

virtualenv --python=python3.6 video-env
source video-env/bin/activate
pip install -r requirements.txt

Download Random Related Video Views (R2V2)

Due to budgetary constraints, I can no longer host the dataset directly; however, I have provided a script to recreate it. Note that many of the original videos have since been deleted from YouTube, so their data cannot be recreated. If you are interested in hosting the dataset for me, please contact me.

Recreate the dataset

  1. Ensure you have set up the conda environment and installed dg_util as described in the Conda section.
  2. Follow the instructions in Create cookies.txt below.
  3. Run python download_scripts/recreate_r2v2_dataset.py

Notes

Original Dataset:

| Split | Size (GB) | Number of Files | Number of Images | Number of Folders | Number of Source Videos |
|-------|-----------|-----------------|------------------|-------------------|-------------------------|
| Train | 110       | 2,788,424       | 2,784,328        | 4096              | 696,082                 |
| Val   | 8.8       | 226,620         | 222,524          | 4096              | 55,631                  |

Downloading your own set of YouTube videos

If you would like to download a different set of YouTube videos, you may still find our code helpful. Here is a basic workflow for downloading many YouTube videos.

  1. Follow the instructions in Create cookies.txt below.
  2. Create a list of many YouTube URLs to download.
    1. One option is to use youtube_scrape/search_youtube_for_urls.py
    2. Another is the YouTube-8M URLs (https://github.com/danielgordon10/youtube8m-data)
  3. Run python run_cache_video_dataset.py --title cache --description caching --num-workers 100 after appropriately formatting the files.
    • Note - You can often use more workers than your CPU has threads because YouTube downloading tends to be the bottleneck.
  4. youtube_scrape/download_kinetics.py is a convenient file for downloading Kinetics videos.
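Step 2 above (building the URL list) can be sketched in a few lines of Python. Everything here is an assumption for illustration: the output file name, the one-URL-per-line format, and the helper names are hypothetical, so check the argument parsing in run_cache_video_dataset.py for the format it actually expects.

```python
# Hypothetical sketch: turn bare YouTube video IDs into watch URLs and write
# them one per line. The file format is an assumption, not the format
# guaranteed by run_cache_video_dataset.py.

def make_watch_urls(video_ids):
    """Expand bare video IDs into full YouTube watch URLs."""
    return ["https://www.youtube.com/watch?v=%s" % vid for vid in video_ids]

def write_url_list(video_ids, path="youtube_urls.txt"):
    """Write one watch URL per line and return the list that was written."""
    urls = make_watch_urls(video_ids)
    with open(path, "w") as f:
        f.write("\n".join(urls) + "\n")
    return urls
```

The same expansion works for IDs gathered from youtube_scrape/search_youtube_for_urls.py or from the YouTube-8M ID lists.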

Create cookies.txt

  1. Follow instructions at https://apple.stackexchange.com/a/349759
  2. Go to any YouTube video, e.g. https://www.youtube.com/watch?v=AKQE9RyOIMY
  3. Click the extension icon and save the data into youtube_scrape/cookies.txt.

Training

Train VINCE

  1. Download R2V2 training data or create your own dataset to train on.
  2. Read over the arguments list in arg_parser.py.
  3. Train the model. We have provided an example train script as well as a debug script to check everything is working. Edit the paths in the file to point to your data/output locations.

Train baselines

  1. The official MoCo baseline is available at https://github.com/facebookresearch/moco, but for our work, we wrote our own version.
  2. We have provided an example train script to train this model.
  3. We additionally include MoCoV2 baseline scripts for ResNet50 at vince/train_moco_v2.sh.
  4. We additionally include the Jigsaw method from PIRL and an accompanying script vince/train_vince_jigsaw.sh. Pretrained weights and results are currently not provided.

Train End Task

  1. We include various end tasks and an interface for easily adding more. Training scripts for each task are available at:
    1. end_tasks/train_imagenet.sh
    2. end_tasks/train_sun_scene.sh
    3. end_tasks/train_kinetics_400.sh
    4. end_tasks/train_tracking.sh
  2. New end tasks can be added by creating a new solver which inherits from EndTaskBaseSolver and an accompanying dataset which inherits from BaseDataset.
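The subclassing pattern in step 2 can be sketched as follows. The two base classes below are minimal hypothetical stand-ins for the repo's real EndTaskBaseSolver and BaseDataset; their actual constructor arguments and method names will differ, so treat this only as an illustration of the extension pattern, not the real interface.

```python
# Hypothetical sketch of adding a new end task. The base classes are
# stand-ins for the repo's EndTaskBaseSolver and BaseDataset.

class BaseDataset:
    def __getitem__(self, index):
        raise NotImplementedError

    def __len__(self):
        raise NotImplementedError

class EndTaskBaseSolver:
    def __init__(self, dataset):
        self.dataset = dataset

    def loss(self, batch):
        raise NotImplementedError

class MyTaskDataset(BaseDataset):
    """Toy dataset: (prediction, label) pairs held in memory."""
    def __init__(self, examples):
        self.examples = examples

    def __getitem__(self, index):
        return self.examples[index]

    def __len__(self):
        return len(self.examples)

class MyTaskSolver(EndTaskBaseSolver):
    """Toy solver: squared error between prediction and label."""
    def loss(self, batch):
        prediction, label = batch
        return (prediction - label) ** 2
```

In the real repo, the existing end-task solvers and datasets (e.g. those driven by end_tasks/train_imagenet.sh) are the best reference for the required methods.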

Evaluation

  1. While training each end task, evaluation is done after every epoch on a val set.
  2. If more evaluation is needed, it can be added by implementing run_eval for that solver. For an example, see solvers/end_task_tracking_solver.py and end_tasks/eval_tracking.sh.
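As a sketch of that hook, assuming only that run_eval receives some validation data and returns a score (the real signature in solvers/end_task_tracking_solver.py will differ, and the metric below is a made-up placeholder):

```python
# Hypothetical run_eval sketch: average a per-example metric over a
# validation iterable. The class and metric are placeholders, not the
# repo's real solver interface.

class MySolver:
    def metric(self, example):
        """Toy accuracy: 1.0 if the rounded prediction matches the label."""
        prediction, label = example
        return 1.0 if round(prediction) == label else 0.0

    def run_eval(self, val_set):
        scores = [self.metric(example) for example in val_set]
        return sum(scores) / len(scores)
```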

Download Pretrained Weights

Pretrained weights are available for VINCE as well as all baselines mentioned in the paper. We provide the pretrained weights for the backbone only, not for any end task.

ResNet18

To download the weights, run sh download_scripts/download_pretrained_weights_resnet18.sh from the root directory. Alternatively, download them directly from https://drive.google.com/uc?id=1L2SZvsvpxe-A1gCN9Nxg9LwB_d604aQf

ResNet50

These models were trained using the hyperparameters in https://arxiv.org/abs/2003.04297, except for the batch size, which was 896 (the starting learning rate was scaled proportionally, to 0.105). To download the weights, run sh download_scripts/download_pretrained_weights_resnet50.sh from the root directory. Alternatively, download them directly from https://drive.google.com/uc?id=11TfKfZLLx2FYCATjkll5nUIOxSgSBWGi
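The proportional scaling mentioned above matches the standard linear scaling rule for the learning rate; assuming MoCo-v2's default of 0.03 at batch size 256 (an assumption taken from the cited paper, not stated here), the value works out as:

```python
# Linear scaling rule: learning rate scales with batch size.
# The 0.03 @ 256 base values are assumed MoCo-v2 defaults.
base_lr = 0.03
base_batch = 256
batch = 896
scaled_lr = base_lr * batch / base_batch  # ~0.105
```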

Benchmark Results

The results you achieve should roughly match the table below, though different learning schedules and other factors may slightly change performance.

| Method Name (In Paper) | Dir Name | Backbone | ImageNet | Sun Scenes | Kinetics 400 | OTB 2015 Precision | OTB 2015 Success |
|---|---|---|---|---|---|---|---|
| Sup-IN | N/A | ResNet18 | 0.696 | 0.491 | 0.207 | 0.557 | 0.396 |
| MoCo-IN | moco-in | ResNet18 | 0.447 | 0.487 | 0.336 | 0.583 | 0.429 |
| MoCo-G | moco-g | ResNet18 | 0.393 | 0.444 | 0.313 | 0.511 | 0.413 |
| MoCo-R2V2 | moco-r2v2 | ResNet18 | 0.358 | 0.450 | 0.318 | 0.555 | 0.403 |
| VINCE | vince-r2v2-multi-frame-multi-pair | ResNet18 | 0.400 | 0.495 | 0.362 | 0.629 | 0.465 |
| Sup-IN | N/A | ResNet50 | 0.762 | 0.593 | 0.305 | 0.458 | 0.320 |
| MoCo-V2-IN | moco-v2-in | ResNet50 | 0.652 | 0.608 | 0.459 | 0.300 | 0.260 |
| MoCo-R2V2 | moco-v2-r2v2 | ResNet50 | 0.536 | 0.581 | 0.456 | 0.386 | 0.299 |
| VINCE | vince-r2v2-multi-frame-multi-pair | ResNet50 | 0.544 | 0.611 | 0.491 | 0.402 | 0.300 |

Citation

@misc{gordon2020watching,
    title={Watching the World Go By: Representation Learning from Unlabeled Videos},
    author={Gordon, Daniel and Ehsani, Kiana and Fox, Dieter and Farhadi, Ali},
    year={2020},
    eprint={2003.07990},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}