# ACM

This repository contains the code for the paper:

**Online Continual Learning Without the Storage Constraint**
Ameya Prabhu, Zhipeng Cai, Puneet Dokania, Philip Torr, Vladlen Koltun, Ozan Sener
[arXiv] [PDF] [BibTeX]

<p align="center"> <img src="https://github.com/drimpossible/ACM/blob/main/Model.png" width="600" alt="Figure which describes our ACM model"> </p>

## Installation and Dependencies

Our code was run on a laptop with a 16GB RTX 3080 Ti GPU, 64GB RAM, and PyTorch >= 1.13; a larger GPU and more RAM will allow for faster experimentation.

```bash
# First, activate a new virtual environment
pip3 install -r requirements.txt
```
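
Optionally, a quick sanity check (a minimal sketch, not part of the repository) confirms that the installed PyTorch meets the version requirement and can see the GPU:

```python
# sanity_check.py -- hypothetical helper, not part of this repository.
# Verifies the PyTorch version and GPU visibility before running experiments.
import torch

print(f"PyTorch version: {torch.__version__}")          # should be >= 1.13
print(f"CUDA available:  {torch.cuda.is_available()}")  # expects True on a GPU machine
if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```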

## Fast Dataset Setup

## Recreating the Datasets

### Continual Google Landmarks V2 (CGLM)

#### Download Images

```bash
wget -c https://raw.githubusercontent.com/cvdfoundation/google-landmark/master/download-dataset.sh
mkdir train && cd train
bash ../download-dataset.sh train 499
```
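
The download script fetches and unpacks 500 tar shards (`train_0.tar` through `train_499.tar`). As a quick sanity check (a sketch, assuming the shards were extracted under `train/`), you can count the images:

```python
# count_images.py -- hypothetical sanity check, not part of this repository.
# Counts extracted JPEGs under train/ (assumes download-dataset.sh unpacked the tar shards).
from pathlib import Path

n_images = sum(1 for _ in Path("train").rglob("*.jpg"))
print(f"Found {n_images} images")  # Google Landmarks v2 train split has ~4.1M images
```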

#### Recreating Metadata

```bash
wget -c https://s3.amazonaws.com/google-landmark/metadata/train_attribution.csv
python cglm_scrape.py
```
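
`cglm_scrape.py` turns the attribution file into the CGLM metadata. For orientation, here is a minimal sketch of inspecting `train_attribution.csv` (the column layout is an assumption; check the file's actual header before relying on it):

```python
# inspect_attribution.py -- illustrative sketch only; the real parsing lives in cglm_scrape.py.
# Assumes rows pair an image id with its Flickr attribution URL (unverified assumption).
import csv

with open("train_attribution.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f)
    for i, row in enumerate(reader):
        print(row)  # inspect the actual columns before relying on any of them
        if i == 2:
            break
```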

### Continual YFCC100M (CLOC)

#### Extremely Fast Image Downloader

```bash
pip install img2dataset
img2dataset --url_list cyfcc.txt --input_format "txt" --output_format webdataset --output_folder images --processes_count 16 --thread_count 256 --resize_mode no --skip_reencode True
```
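
With `--output_format webdataset`, img2dataset writes tar shards that can be streamed in order, which suits the online setting. Below is a minimal sketch of iterating over the result with the `webdataset` library (shard naming and per-sample keys follow img2dataset defaults, assumed here; adjust the brace range to the shards actually produced):

```python
# stream_shards.py -- illustrative sketch, assuming img2dataset's default shard layout.
import webdataset as wds

# img2dataset names shards 00000.tar, 00001.tar, ... inside the output folder.
dataset = wds.WebDataset("images/{00000..00010}.tar").decode("pil")

for sample in dataset:
    img = sample["jpg"]            # decoded PIL image
    meta = sample.get("json", {})  # per-image metadata written by img2dataset
    print(sample["__key__"], img.size)
    break
```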

## Running the Code

### Replication

### Additional Experiments

To reproduce the kNN scaling experiments and plots:

```bash
cd scripts/
python knn_scaling.py
python plot_knn_results.py
```

To run the blind-classifier experiment:

```bash
cd scripts/
python run_blind.py
```
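
For orientation, the scaling scripts above evaluate a kNN classifier over precomputed features. Below is a minimal sketch of a streaming predict-then-store kNN loop (pure NumPy, illustrative only; the repository's actual implementation and feature pipeline differ):

```python
# online_knn_sketch.py -- illustrative only; not the repository's ACM implementation.
# Predict each incoming sample with kNN over everything seen so far, then store it.
import numpy as np

def online_knn_accuracy(features, labels, k=5):
    """features: (N, D) float array in stream order; labels: (N,) int array."""
    seen_x, seen_y, correct = [], [], 0
    for x, y in zip(features, labels):
        if seen_x:
            bank = np.stack(seen_x)
            dists = np.linalg.norm(bank - x, axis=1)
            nearest = np.asarray(seen_y)[np.argsort(dists)[:k]]
            pred = np.bincount(nearest).argmax()  # majority vote among neighbours
            correct += int(pred == y)
        seen_x.append(x)  # no storage constraint: keep every sample seen so far
        seen_y.append(y)
    return correct / max(len(labels) - 1, 1)  # first sample has no prediction

# Tiny smoke test on random data (illustrative).
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16)).astype(np.float32)
labs = rng.integers(0, 3, size=100)
print(f"online kNN accuracy: {online_knn_accuracy(feats, labs):.3f}")
```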
If you discover any bugs in the code, please contact me; I will cross-check them with my nightmares.

## Updates

## Citation

We hope ACM serves as a strong method for comparison, and that this idea/codebase is useful for your cool CL idea! To cite our work:

```bibtex
@article{prabhu2023online,
  title={Online Continual Learning Without the Storage Constraint},
  author={Prabhu, Ameya and Cai, Zhipeng and Dokania, Puneet and Torr, Philip and Koltun, Vladlen and Sener, Ozan},
  journal={arXiv preprint arXiv:2305.09253},
  year={2023}
}
```