Calibration of Neural Networks
Introduction
This repository contains all scripts needed to train neural networks (ResNet, DenseNet, DAN, etc.) and to calibrate their output probabilities. The networks are trained on 4 different datasets; the model weights and output logits are available for use in this repository.
Structure
Structure of the repository:
- Logits - pickled files with logits for the trained models. Additionally, the logits can be downloaded from HERE.
- Models - model weights of the trained models.
- Reliability diagrams - reliability diagrams generated for the models.
- Scripts - Python code and notebooks used to train the models, evaluate the outcomes, and calibrate the probabilities of the models (Python 3.6.4, Keras 2.1.4, Tensorflow 1.4.1).
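As a minimal sketch of working with the pickled logit files, the snippet below assumes each file stores a `(logits, labels)` tuple; the actual layout in this repository may differ. A small dummy file is created here so the loading pattern is self-contained.

```python
import pickle
import numpy as np

# Hypothetical example: we assume a logit file holds a (logits, labels) tuple.
# Dummy data stands in for a real pickled file from the logits folder.
logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 1.5, 0.3]])
labels = np.array([0, 1])

with open("dummy_logits.p", "wb") as f:
    pickle.dump((logits, labels), f)

with open("dummy_logits.p", "rb") as f:
    loaded_logits, loaded_labels = pickle.load(f)

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Convert raw logits to (uncalibrated) class probabilities.
probs = softmax(loaded_logits)
accuracy = (probs.argmax(axis=1) == loaded_labels).mean()
```

The softmax here gives the uncalibrated probabilities; the calibration methods below post-process either these probabilities or the logits directly.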
Datasets
The following datasets were used:
- CIFAR-10/100 - more information on https://www.cs.toronto.edu/~kriz/cifar.html
- ImageNet - more information on http://www.image-net.org/challenges/LSVRC/2012/
- SVHN - more information on http://ufldl.stanford.edu/housenumbers/
- Caltech-UCSD Birds - more information on http://www.vision.caltech.edu/visipedia/CUB-200.html
Models
The following models were trained:
- ResNet - based on paper "Deep Residual Learning for Image Recognition"
- ResNet (SD) - based on paper "Deep Networks with Stochastic Depth"
- Wide ResNet - based on paper "Wide Residual Networks"
- DenseNet - based on paper "Densely Connected Convolutional Networks"
- LeNet - based on paper "Gradient-based learning applied to document recognition"
- DAN - based on paper "Deep Unordered Composition Rivals Syntactic Methods for Text Classification"
The hyperparameters and data preparation schemes suggested by the papers' authors were used to train the models, except for LeNet and DAN.
Calibration
The following calibration methods were used:
- Histogram binning - based on paper "Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers"
- Isotonic regression - based on paper "Transforming classifier scores into accurate multiclass probability estimates"
- Temperature Scaling - based on paper "On Calibration of Modern Neural Networks"
- Beta Calibration - based on paper "Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers"
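To illustrate one of the methods above, here is a minimal sketch of temperature scaling (not the repository's exact implementation): a single temperature `T` is fitted on held-out validation logits by minimizing the negative log-likelihood, and all logits are then divided by `T` before the softmax. The validation logits and labels below are dummy placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def nll(T, logits, labels):
    # Negative log-likelihood of the labels under temperature-scaled logits.
    probs = softmax(logits / T)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# Dummy validation logits and labels stand in for real model outputs.
val_logits = np.array([[4.0, 1.0, 0.0],
                       [0.5, 3.5, 0.2],
                       [0.1, 0.2, 2.5]])
val_labels = np.array([0, 1, 2])

# Fit T by bounded 1-D minimization of the NLL on the validation set.
res = minimize_scalar(nll, bounds=(0.05, 10.0),
                      args=(val_logits, val_labels), method="bounded")
T = res.x
calibrated = softmax(val_logits / T)
```

Because `T` only rescales the logits, temperature scaling changes confidence but never the predicted class, which is one reason it is a popular baseline.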
Citation
If you find the work relevant to your research, please cite:
@article{kull2019beyond,
  title={Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration},
  author={Kull, Meelis and Perello-Nieto, Miquel and K{\"a}ngsepp, Markus and Song, Hao and Flach, Peter and others},
  journal={arXiv preprint arXiv:1910.12656},
  year={2019}
}
Author
Markus Kängsepp, University of Tartu