LIP: Local Importance-based Pooling

PyTorch implementations of LIP (ICCV 2019).

[paper link]

This codebase is now complete. It contains the LIP operator, the ImageNet code, pretrained models, and a CUDA implementation of LIP (see the sections below).

News

[2021] SoftPool, a special case of LIP where G(I) = I, has been accepted to ICCV 2021. Check out SoftPool.

A Simple Step to Customize LIP

LIP is a learnable, generic pooling operator. Its core operation is only a few lines of PyTorch:

import torch.nn.functional as F

def lip2d(x, logit, kernel=3, stride=2, padding=1):
    weight = logit.exp()  # local importance weights from the logits
    return F.avg_pool2d(x * weight, kernel, stride, padding) / F.avg_pool2d(weight, kernel, stride, padding)
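
As a quick sanity check (a minimal usage sketch, not part of the repo): with all-zero logits every weight equals 1, so lip2d reduces to average pooling that ignores the padded border:

import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 56, 56)
y = lip2d(x, torch.zeros_like(x))  # all-zero logits -> uniform importance weights
print(y.shape)  # torch.Size([1, 64, 28, 28]), spatial size halved by stride=2
# matches average pooling that excludes the zero padding from the divisor
print(torch.allclose(y, F.avg_pool2d(x, 3, 2, 1, count_include_pad=False)))  # True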

The logits are produced by a fully convolutional sub-network (the logit module), whose output has the same shape as its input. You can customize the logit module, for example:

# identity logit module, G(I) = I: this gives SoftPool
logit_module_a = nn.Identity()
lip2d(x, logit_module_a(x))

# large logits sharpen the weighting: this approximates max pooling
logit_module_b = lambda x: x.mul(20)
lip2d(x, logit_module_b(x))

# all-zero logits give uniform weights: this is average pooling
logit_module_c = lambda x: x.mul(0)
lip2d(x, logit_module_c(x))

# the simple projection form of the logit module
logit_module_d = nn.Conv2d(in_channels, in_channels, 1)
lip2d(x, logit_module_d(x))

# your customized logit module (a FCN) begins here
logit_module_e = MyLogitModule()
lip2d(x, logit_module_e(x))
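
For reference, here is a minimal sketch of what such a MyLogitModule could look like. The layer choices (3x3 conv + BN + ReLU followed by a 1x1 projection) and the channels argument are illustrative assumptions, not the logit modules shipped with this repo:

import torch.nn as nn

class MyLogitModule(nn.Module):
    """Hypothetical FCN logit module: output has the same shape as the input."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),  # keeps spatial size
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1),  # one logit per pixel and channel
        )

    def forward(self, x):
        return self.net(x)

In practice the logit module is constructed once (e.g. inside a network's __init__) and trained end-to-end with the rest of the model, so the pooling learns which locations matter.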

Dependencies

  1. Python 3.6
  2. PyTorch 1.0
  3. tensorboard and tensorboardX

Pretrained Models

You can download ImageNet pretrained models here.

ImageNet

Please refer to imagenet/README.md.

CUDA LIP

Please refer to cuda-lip/README.md.

Misc

If you find our research helpful, please consider citing our paper.

@InProceedings{LIP_2019_ICCV,
  author    = {Gao, Ziteng and Wang, Limin and Wu, Gangshan},
  title     = {LIP: Local Importance-Based Pooling},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2019}
}