Home

Results on CIFAR100

The tables below provide the results of various models on CIFAR-100. Training setup: SGD optimizer with momentum = 0.9 and weight decay = 5e-4, CrossEntropyLoss, batch size 512, and an initial learning rate of 0.1 that is divided by 10 every 70 epochs, for 300 epochs in total (a code sketch of this setup is given after the tables).

| Model | Parameters | Flops | CIFAR-100 |
| ----- | ---------- | ----- | --------- |
| PreActResNet18 | - | - | 74.91% |
| PreActResNet50 | - | - | 77.39% |
| PreActResNet101 | - | - | 77.74% |
| SEResNet18 | - | - | 75.19% |
| SEResNet50 | - | - | 77.91% |
| SEResNet101 | - | - | 78.03% |
| PSEResNet18 | - | - | 74.97% |
| PSEResNet50 | - | - | 77.45% |
| PSEResNet101 | - | - | 77.88% |
| CPSEResNet18 | - | - | 75.25% |
| CPSEResNet50 | - | - | 77.43% |
| CPSEResNet101 | - | - | 77.61% |
| SPPSEResNet18 | - | - | 75.41% |
| SPPSEResNet50 | - | - | 78.21% |
| SPPSEResNet101 | - | - | 78.11% |
| PSPPSEResNet18 | - | - | 75.01% |
| PSPPSEResNet50 | - | - | 78.11% |
| PSPPSEResNet101 | - | - | 78.35% |
| CPSPPSEResNet18 | - | - | 75.56% |
| CPSPPSEResNet50 | - | - | 77.95% |
| CPSPPSEResNet101 | - | - | 79.17% |

For easier comparison, we rearrange the table as follows:

| Model | 18-Layer | 50-Layer | 101-Layer |
| ----- | -------- | -------- | --------- |
| PreActResNet | 74.91% | 77.39% | 77.74% |
| SEResNet | 75.19% | 77.91% | 78.03% |
| PSEResNet | 74.97% | 77.45% | 77.88% |
| CPSEResNet | 75.25% | 77.43% | 77.61% |
| SPPSEResNet | 75.41% | 78.21% | 78.11% |
| PSPPSEResNet | 75.01% | 78.11% | 78.35% |
| CPSPPSEResNet | 75.56% | 77.95% | 79.17% |
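
Below is a minimal PyTorch sketch of the training setup described above (SGD, lr = 0.1 divided by 10 every 70 epochs, momentum = 0.9, weight decay = 5e-4, CrossEntropyLoss, batch size 512, 300 epochs). The data augmentation and the stand-in model are assumptions, since this page does not list them; replace the model with one of the (P/C/SPP)SE-ResNet variants from this repo.

```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# Assumed standard CIFAR-100 augmentation (not specified on this page).
transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
train_set = torchvision.datasets.CIFAR100(root='./data', train=True,
                                           download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=512,
                                           shuffle=True, num_workers=4)

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Stand-in model: torchvision's ResNet-18; swap in a model from this repo.
model = torchvision.models.resnet18(num_classes=100).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1,
                      momentum=0.9, weight_decay=5e-4)
# Divide the learning rate by 10 every 70 epochs.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=70, gamma=0.1)

model.train()
for epoch in range(300):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```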