# OLED: One-Class Learned Encoder-Decoder Network with Adversarial Context Masking for Novelty Detection

## Introduction
This repository contains code to run the experiments outlined in *OLED: One-Class Learned Encoder-Decoder Network with Adversarial Context Masking for Novelty Detection*.
<p align="center"> <img width="979" alt="Screen Shot 2021-10-18 at 8 41 04 PM" src="https://user-images.githubusercontent.com/34798787/137825275-d34cf654-97e4-490c-b308-5f63bf89a75a.png"> </p>

## Running Experiments
This document contains detailed instructions for running the experiments described in the paper. There are three sections, one for each experiment:
- MNIST Experiment
- CIFAR Experiment
- UCSD Experiment
## Directory Structure
Each experiment has a corresponding subfolder in the root of the supplementary materials folder generated by unzipping the submitted file. The directory structure is the same across these subfolders and includes the following directories:
- `data`: The data files needed for the experiment (must be added from Google Drive; instructions below)
- `scripts`: The Python and shell scripts for training and testing
- `pretrained_models`: The models used to produce the results reported in the paper (must be added from Google Drive; instructions below)
- `models`: A folder to store models generated by running the provided training scripts
## Pretrained Models
Due to the size constraints of the submission, a link to an anonymous Google Drive containing the pretrained models for each experiment is provided. Simply download the corresponding pretrained_models.zip file from the linked directory, unzip it as pretrained_models, and place the directory in the root of the experiment you wish to run.
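For convenience, the download-unzip-place step can also be scripted. The sketch below is illustrative only, assuming the zip file has already been downloaded locally; the paths in the usage comment are hypothetical:

```python
import zipfile
from pathlib import Path

def install_archive(zip_path, experiment_root, folder_name):
    """Extract zip_path into experiment_root/folder_name."""
    target = Path(experiment_root) / folder_name
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(target)
    return target

# Hypothetical usage, after downloading the archive from the provided link:
# install_archive("pretrained_models.zip", "sup_mat/mnist_experiment", "pretrained_models")
```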
## Data
As with the pretrained models, a link to an anonymous Google Drive containing the data for each experiment is provided. Simply download the corresponding data.zip file from the linked directory, unzip it as data, and place the directory in the root of the experiment you wish to run.
Note that the data in the data folder is provided in NumPy format so that the experiments can be run without downloading the relevant datasets and converting them. If you prefer to obtain the data directly, download the datasets yourself, convert them to NumPy format, and replace the existing files using the same naming convention.
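If you do convert the data yourself, a typical one-class setup trains on a single "normal" class and tests against all classes. The following is a minimal NumPy sketch of that split; the function name, array shapes, and the commented file name are assumptions, not the repository's actual convention:

```python
import numpy as np

def make_one_class_split(images, labels, normal_class):
    """Train on the normal class only; flag every other class as anomalous at test time."""
    images = np.asarray(images, dtype=np.float32)
    labels = np.asarray(labels)
    train = images[labels == normal_class]          # normal samples only
    test_flags = (labels != normal_class).astype(np.uint8)  # 1 = anomaly
    return train, images, test_flags

# Hypothetical naming; match the files shipped in the data folder instead:
# np.save("data/train_0.npy", train)
```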
## Python Version and Dependencies
Each shell script installs the required dependencies. A more thorough document outlining the Python version and environment is located at the root of the supplementary materials folder.
## MNIST Experiment
### Data
Download data.zip from this link, unzip it as data, and place it in the mnist_experiment folder.
### Testing from Pretrained Models
#### Testing
- Unzip the supplementary file sup_mat.zip into a folder called sup_mat
- Download the pretrained_models.zip file from the link
- Unzip the file to create a folder called pretrained_models
- Within sup_mat, place pretrained_models in the root of the mnist_experiment folder
- Change the working directory to sup_mat/mnist_experiment/scripts and run test.sh
- The results of the experiment are written to the corresponding log file
### Training and Testing from Scratch
#### Training
- Unzip the supplementary file sup_mat.zip into a folder called sup_mat
- Change the working directory to sup_mat/mnist_experiment/scripts and run train.sh
  - This shell script trains ten OLED models, one for each of the ten anomaly detection datasets in the MNIST experiment described in the paper. Accordingly, it runs the same Python script once per class
  - Training all ten models can take upwards of 12 hours. If your GPU resources risk preemption after a certain amount of time, split the per-class runs across multiple shell scripts
  - Models are saved in the models folder
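One way to split the per-class runs is to generate several smaller shell scripts programmatically. The sketch below is a rough illustration, not part of the repository; in particular, the `python train.py --class N` invocation is an assumed command line, so adapt it to the actual arguments used in train.sh:

```python
from pathlib import Path

def write_per_class_scripts(out_dir, classes=range(10), runs_per_file=2):
    """Write shell scripts, each covering runs_per_file classes."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    classes = list(classes)
    paths = []
    for i in range(0, len(classes), runs_per_file):
        chunk = classes[i:i + runs_per_file]
        # Assumed invocation; mirror the actual command in train.sh.
        lines = ["#!/bin/bash"] + [f"python train.py --class {c}" for c in chunk]
        p = out / f"train_part{i // runs_per_file}.sh"
        p.write_text("\n".join(lines) + "\n")
        paths.append(p)
    return paths
```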
#### Testing
- Change the working directory to sup_mat/mnist_experiment/scripts and run test_trained.sh
- The results of the experiment are present in the corresponding log file
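Results for one-class novelty detection of this kind are typically reported as AUROC over per-sample anomaly scores. As a self-contained reference (this is not the repository's test script), AUROC can be computed from raw scores with the rank-sum formulation:

```python
import numpy as np

def auroc(scores, is_anomaly):
    """AUROC via the Mann-Whitney U statistic; higher scores = more anomalous."""
    scores = np.asarray(scores, dtype=np.float64)
    y = np.asarray(is_anomaly, dtype=bool)
    order = scores.argsort()
    ranks = np.empty_like(scores)
    ranks[order] = np.arange(1, len(scores) + 1)
    # Average ranks across ties so tied scores are treated symmetrically.
    for v in np.unique(scores):
        mask = scores == v
        ranks[mask] = ranks[mask].mean()
    n_pos = y.sum()
    n_neg = (~y).sum()
    u = ranks[y].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)
```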
## CIFAR Experiment
### Data
Download data.zip from this link, unzip it as data, and place it in the cifar_experiment folder.
### Testing from Pretrained Models
#### Testing
- Unzip the supplementary file sup_mat.zip into a folder called sup_mat
- Download the pretrained_models.zip file from the link
- Unzip the file to create a folder called pretrained_models
- Within sup_mat, place pretrained_models in the root of the cifar_experiment folder
- Change the working directory to sup_mat/cifar_experiment/scripts and run test.sh
- The results of the experiment are written to the corresponding log file
### Training and Testing from Scratch
#### Training
- Unzip the supplementary file sup_mat.zip into a folder called sup_mat
- Change the working directory to sup_mat/cifar_experiment/scripts and run train.sh
  - This shell script trains ten OLED models, one for each of the ten anomaly detection datasets in the CIFAR experiment described in the paper. Accordingly, it runs the same Python script once per class
  - Training all ten models can take upwards of 12 hours. If your GPU resources risk preemption after a certain amount of time, split the per-class runs across multiple shell scripts
  - Models are saved in the sup_mat/cifar_experiment/models folder
#### Testing
- Change the working directory to sup_mat/cifar_experiment/scripts and run test_trained.sh
- The results of the experiment are present in the corresponding log file
## UCSD Experiment
### Data
Download data.zip from this link, unzip it as data, and place it in the ucsd_experiment folder.
### Training and Testing from Scratch
#### Training
- Unzip the supplementary file sup_mat.zip into a folder called sup_mat
- Change the working directory to sup_mat/ucsd_experiment/scripts and run train.sh
  - Models are saved in the sup_mat/ucsd_experiment/models folder
#### Testing
- Change the working directory to sup_mat/ucsd_experiment/scripts and run test_trained.sh
- The results of the experiment are present in the corresponding log file
## Citation
```bibtex
@InProceedings{Jewell_2022_WACV,
    author    = {Jewell, John Taylor and Khazaie, Vahid Reza and Mohsenzadeh, Yalda},
    title     = {One-Class Learned Encoder-Decoder Network With Adversarial Context Masking for Novelty Detection},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {3591-3601}
}
```