# LIP: Local Importance-based Pooling
PyTorch implementations of LIP (ICCV 2019).
This codebase is now complete and contains:
- the implementation of LIP based on PyTorch primitives,
- LIP-ResNet,
- LIP-DenseNet,
- ImageNet training and testing code,
- CUDA implementation of LIP.
## News
- [2021] SoftPool, a case of LIP where G(I) = I (i.e., the logit module is the identity), has been accepted to ICCV 2021. Check out SoftPool.
## A Simple Step to Customize LIP
LIP is a learnable, generic pooling operator; its core is just a few lines of PyTorch:
```python
import torch.nn.functional as F

def lip2d(x, logit, kernel=3, stride=2, padding=1):
    weight = logit.exp()  # logits -> positive importance weights
    return F.avg_pool2d(x * weight, kernel, stride, padding) / F.avg_pool2d(weight, kernel, stride, padding)
```
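
As a quick sanity check, here is a minimal usage sketch (the random input and the identity-style logit below are just placeholders, not part of the repository):

```python
import torch

x = torch.randn(1, 64, 56, 56)
y = lip2d(x, x)   # use the input itself as the logit
print(y.shape)    # torch.Size([1, 64, 28, 28]) with kernel=3, stride=2, padding=1
```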
To produce the logits, you need a fully convolutional sub-network (FCN) as the logit module, whose output has the same shape as its input. You can customize the logit module, for example:
```python
import torch.nn as nn

logit_module_a = nn.Identity()
lip2d(x, logit_module_a(x))  # gives SoftPool

logit_module_b = lambda x: x.mul(20)
lip2d(x, logit_module_b(x))  # approximates max pooling

logit_module_c = lambda x: x.mul(0)
lip2d(x, logit_module_c(x))  # is average pooling

logit_module_d = nn.Conv2d(in_channels, in_channels, 1)  # the simple projection-form logit module
lip2d(x, logit_module_d(x))

logit_module_e = MyLogitModule()  # your customized logit module (an FCN) goes here
lip2d(x, logit_module_e(x))
```
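
To use LIP as a drop-in pooling layer, one option is to wrap a logit module and `lip2d` together in a single `nn.Module`. The sketch below is only illustrative: the class name `SimpleLIP2d` and the 1x1-conv logit are assumptions for demonstration, not the repository's exact implementation (see the LIP-ResNet/LIP-DenseNet code for the actual logit modules).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleLIP2d(nn.Module):
    """Illustrative LIP layer: a 1x1-conv logit module followed by lip2d."""
    def __init__(self, channels, kernel=3, stride=2, padding=1):
        super().__init__()
        self.logit = nn.Conv2d(channels, channels, kernel_size=1)
        self.kernel, self.stride, self.padding = kernel, stride, padding

    def forward(self, x):
        weight = self.logit(x).exp()
        return (F.avg_pool2d(x * weight, self.kernel, self.stride, self.padding)
                / F.avg_pool2d(weight, self.kernel, self.stride, self.padding))

# example: downsample a 56x56 feature map to 28x28
pool = SimpleLIP2d(64)
out = pool(torch.randn(1, 64, 56, 56))  # -> torch.Size([1, 64, 28, 28])
```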
## Dependencies
- Python 3.6
- PyTorch 1.0
- tensorboard and tensorboardX
## Pretrained Models
You can download ImageNet pretrained models here.
## ImageNet
Please refer to imagenet/README.md.
## CUDA LIP
Please refer to cuda-lip/README.md.
## Misc
If you find our research helpful, please consider citing our paper:
```
@InProceedings{LIP_2019_ICCV,
    author    = {Gao, Ziteng and Wang, Limin and Wu, Gangshan},
    title     = {LIP: Local Importance-Based Pooling},
    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2019}
}
```