# OccamNets v1 (ECCV 2022 Oral Paper)
This is the repository for our paper OccamNets. OccamNets apply Occam's razor to neural networks so that they use only the network depth and the image regions required for each example, which improves robustness to dataset bias.
<img src="occamnets.jpg" width="800"/>

## Installation
Install the dependencies with `./requirements.sh`.
## Configuration
- Specify the root directory (where the datasets/logs will be stored) in the `paths.root` entry inside `conf/base_config.yaml` (see the sketch below).
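For reference, the entry looks roughly like this; only the `paths.root` key is taken from the instructions above, and the value is a placeholder to replace with your own directory:

```yaml
# conf/base_config.yaml (sketch; only the paths.root key is referenced above)
paths:
  root: /data/occamnets  # datasets and logs will be stored under this directory
```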
## Instructions for each dataset
### BiasedMNISTv2 (released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license)
- Download BiasedMNISTv2 from: https://drive.google.com/file/d/1_77AKsY5MoYpDnXgNkjWi9n2_mfQBW-F/view?usp=sharing
- Provide the full path to Biased MNIST in the `data_dir` entry inside `conf/dataset/biased_mnist.yaml` (a sketch follows this list).
- You can also generate Biased MNIST yourself by using/modifying `./scripts/biased_mnist/generate.sh`.
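A minimal sketch of the corresponding entry, assuming `data_dir` sits at the top level of `conf/dataset/biased_mnist.yaml` (the surrounding layout may differ; the path is a placeholder):

```yaml
# conf/dataset/biased_mnist.yaml (sketch; only data_dir is referenced above)
data_dir: /data/occamnets/biased_mnist_v2  # full path to the downloaded or generated dataset
```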
### COCO-on-Places
- Download the dataset from: https://github.com/Faruk-Ahmed/predictive_group_invariance
- Specify the location of the dataset in the `data_dir` entry of `conf/dataset/coco_on_places.yaml` (see the sketch after this list).
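The entry mirrors the Biased MNIST one; again, only the `data_dir` key comes from the instructions above and the path is a placeholder:

```yaml
# conf/dataset/coco_on_places.yaml (sketch; only data_dir is referenced above)
data_dir: /data/occamnets/coco_on_places  # location of the downloaded dataset
```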
## Training Scripts
- We provide bash scripts to train OccamResNet and ResNet, including baselines and SoTA debiasing methods on both architectures.
- Train baseline and SoTA methods on OccamResNet/ResNet using: `./scripts/{dataset}/{dataset_shortform}_{method}.sh` (see the example below).
- E.g., `./scripts/biased_mnist/bmnist_occam.sh` trains OccamNet on Biased MNIST.
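For reference, the example above fills in the placeholder pattern as follows; other dataset/method combinations follow the same naming scheme, so check the `scripts/` directory for the exact file names:

```bash
# {dataset}=biased_mnist, {dataset_shortform}=bmnist, {method}=occam
./scripts/biased_mnist/bmnist_occam.sh
```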
## Relevant files for OccamNets
- Model definitions: OccamNets are defined in `models/occam_resnet.py`, `occam_efficient_net.py` and `occam_mobile_net.py`.
- Training script: `trainers/occam_trainer.py`
- Training configuration: `conf/trainer/occam_trainer.yaml` (all of these parameters can be overridden from the command line; see the sketch below).
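Since the configuration lives under `conf/` and the note above says these parameters can be overridden from the command line, overrides can presumably be passed in Hydra-style `key=value` form. The snippet below is a sketch under that assumption; the `dataset.data_dir` key is an illustrative guess built from the config group and the `data_dir` entry mentioned earlier, and it assumes the bash scripts forward extra arguments to the Python entrypoint:

```bash
# Sketch (assumes Hydra-style overrides and that the wrapper script forwards
# extra arguments); the key name is illustrative, not a documented parameter.
./scripts/biased_mnist/bmnist_occam.sh dataset.data_dir=/data/occamnets/biased_mnist_v2
```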
## Citation
    @inproceedings{shrestha2022occamnets,
      title={OccamNets: Mitigating Dataset Bias by Favoring Simpler Hypotheses},
      author={Shrestha, Robik and Kafle, Kushal and Kanan, Christopher},
      booktitle={European Conference on Computer Vision (ECCV)},
      year={2022}
    }
## Acknowledgements
This work was supported in part by NSF awards #1909696 and #2047556.