Focal Modulation Networks

This is the official PyTorch implementation of FocalNets:

"Focal Modulation Networks" by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan and Jianfeng Gao.


News

Introduction

<p align="center"> <img src="figures/SA_FM_Comparison.png" width=95% height=95% class="center"> </p>

We propose FocalNets: Focal Modulation Networks, an attention-free architecture that achieves superior performance to SoTA self-attention (SA) methods across various vision benchmarks. As shown above, SA is a first-interaction, last-aggregation (FILA) process. Our focal modulation inverts this into a first-aggregation, last-interaction (FALI) process, and this inversion brings several merits.
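
As a schematic contrast, loosely following the paper's notation (a sketch of the description above, not an exact transcription): self-attention computes query-key interactions first and then aggregates the values, whereas focal modulation first aggregates multi-level context features z with spatial gates g, and only then lets the result interact with the query via element-wise multiplication:

```math
\text{Self-attention:}\quad y_i = \sum_{j} \mathrm{Attn}(x_i, x_j)\, v(x_j)
```

```math
\text{Focal modulation:}\quad y_i = q(x_i)\odot h\Big(\textstyle\sum_{l=1}^{L+1} g_i^{l}\cdot z_i^{l}\Big)
```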

<p align="center"> <img src="figures/focalnet-model.png" width=80% height=80% class="center"> </p>

Before getting started, see how our FocalNets learn to perceive images and where they modulate!

<p align="center"> <img src="figures/teaser_fig.png" width=90% class="center"> </p>

Finally, FocalNets are built purely from convolutional and linear layers, yet go beyond them with a new modulation mechanism that is simple, generic, effective, and efficient. We hereby recommend:

Focal-Modulation May be What We Need for Visual Modeling!

Getting Started

Benchmarking

Image Classification on ImageNet-1K

| Model | Depth | Dim | Kernels | #Params. (M) | FLOPs (G) | Throughput (imgs/s) | Top-1 | Download |
|---|---|---|---|---|---|---|---|---|
| FocalNet-T | [2,2,6,2] | 96 | [3,5] | 28.4 | 4.4 | 743 | 82.1 | ckpt/config/log |
| FocalNet-T | [2,2,6,2] | 96 | [3,5,7] | 28.6 | 4.5 | 696 | 82.3 | ckpt/config/log |
| FocalNet-S | [2,2,18,2] | 96 | [3,5] | 49.9 | 8.6 | 434 | 83.4 | ckpt/config/log |
| FocalNet-S | [2,2,18,2] | 96 | [3,5,7] | 50.3 | 8.7 | 406 | 83.5 | ckpt/config/log |
| FocalNet-B | [2,2,18,2] | 128 | [3,5] | 88.1 | 15.3 | 280 | 83.7 | ckpt/config/log |
| FocalNet-B | [2,2,18,2] | 128 | [3,5,7] | 88.7 | 15.4 | 269 | 83.9 | ckpt/config/log |

The following models use an isotropic layout with a single depth and embedding dimension:

| Model | Depth | Dim | Kernels | #Params. (M) | FLOPs (G) | Throughput (imgs/s) | Top-1 | Download |
|---|---|---|---|---|---|---|---|---|
| FocalNet-T | 12 | 192 | [3,5,7] | 5.9 | 1.1 | 2334 | 74.1 | ckpt/config/log |
| FocalNet-S | 12 | 384 | [3,5,7] | 22.4 | 4.3 | 920 | 80.9 | ckpt/config/log |
| FocalNet-B | 12 | 768 | [3,5,7] | 87.2 | 16.9 | 300 | 82.4 | ckpt/config/log |
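
For a quick way to try one of the classification models above, here is a minimal inference sketch. It assumes a recent `timm` release that ships FocalNet variants; the model name `focalnet_tiny_srf` is an assumption and should be adapted to the checkpoint you actually want (the repo's own configs and checkpoints can of course be used instead).

```python
import torch
import timm  # assumption: a recent timm version that registers FocalNet models

# "focalnet_tiny_srf" is assumed to correspond to FocalNet-T with the smaller
# [3,5] kernel configuration from the table above; adjust as needed.
model = timm.create_model("focalnet_tiny_srf", pretrained=True)
model.eval()

# Dummy 224x224 batch; replace with properly resized/normalized images.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)            # (1, 1000) ImageNet-1K logits
print(logits.argmax(dim=-1))
```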

ImageNet-22K Pretraining

| Model | Depth | Dim | Kernels | #Params. (M) | Download |
|---|---|---|---|---|---|
| FocalNet-L | [2,2,18,2] | 192 | [5,7,9] | 207 | ckpt/config |
| FocalNet-L | [2,2,18,2] | 192 | [3,5,7,9] | 207 | ckpt/config |
| FocalNet-XL | [2,2,18,2] | 256 | [5,7,9] | 366 | ckpt/config |
| FocalNet-XL | [2,2,18,2] | 256 | [3,5,7,9] | 366 | ckpt/config |
| FocalNet-H | [2,2,18,2] | 352 | [3,5,7] | 687 | ckpt/config |
| FocalNet-H | [2,2,18,2] | 352 | [3,5,7,9] | 689 | ckpt/config |

NOTE: We reorder the class names in ImageNet-22K so that the first 1K logits can be used directly for evaluation on ImageNet-1K. Be aware that the 851st class (label 850) of ImageNet-1K is missing from ImageNet-22K. Please refer to this labelmap. More discussion can be found in this issue.
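
In practice, this reordering means that evaluating a 22K-pretrained classifier on ImageNet-1K reduces to slicing the first 1000 logits, as in the sketch below (it assumes the released checkpoint's head already follows the reordered labelmap linked above).

```python
import torch

# Placeholder for a batch of 22K-class logits from a 22K-pretrained head
# (ImageNet-22K has 21,841 classes in the usual release).
logits_22k = torch.randn(4, 21841)

# Because the 22K classes are reordered to put the ImageNet-1K classes first,
# ImageNet-1K evaluation only needs the first 1000 logits.
logits_1k = logits_22k[:, :1000]
preds = logits_1k.argmax(dim=-1)

# Caveat from the note above: ImageNet-1K label 850 has no ImageNet-22K
# counterpart, so that single logit receives no 22K supervision.
```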

Object Detection on COCO

The results below use Mask R-CNN with FocalNet backbones:

| Backbone | Kernels | Lr Schd | #Params. (M) | FLOPs (G) | box mAP | mask mAP | Download |
|---|---|---|---|---|---|---|---|
| FocalNet-T | [9,11] | 1x | 48.6 | 267 | 45.9 | 41.3 | ckpt/config/log |
| FocalNet-T | [9,11] | 3x | 48.6 | 267 | 47.6 | 42.6 | ckpt/config/log |
| FocalNet-T | [9,11,13] | 1x | 48.8 | 268 | 46.1 | 41.5 | ckpt/config/log |
| FocalNet-T | [9,11,13] | 3x | 48.8 | 268 | 48.0 | 42.9 | ckpt/config/log |
| FocalNet-S | [9,11] | 1x | 70.8 | 356 | 48.0 | 42.7 | ckpt/config/log |
| FocalNet-S | [9,11] | 3x | 70.8 | 356 | 48.9 | 43.6 | ckpt/config/log |
| FocalNet-S | [9,11,13] | 1x | 72.3 | 365 | 48.3 | 43.1 | ckpt/config/log |
| FocalNet-S | [9,11,13] | 3x | 72.3 | 365 | 49.3 | 43.8 | ckpt/config/log |
| FocalNet-B | [9,11] | 1x | 109.4 | 496 | 48.8 | 43.3 | ckpt/config/log |
| FocalNet-B | [9,11] | 3x | 109.4 | 496 | 49.6 | 44.1 | ckpt/config/log |
| FocalNet-B | [9,11,13] | 1x | 111.4 | 507 | 49.0 | 43.5 | ckpt/config/log |
| FocalNet-B | [9,11,13] | 3x | 111.4 | 507 | 49.8 | 44.1 | ckpt/config/log |

| Backbone | Kernels | Method | Lr Schd | #Params. (M) | FLOPs (G) | box mAP | Download |
|---|---|---|---|---|---|---|---|
| FocalNet-T | [11,9,9,7] | Cascade Mask R-CNN | 3x | 87.1 | 751 | 51.5 | ckpt/config/log |
| FocalNet-T | [11,9,9,7] | ATSS | 3x | 37.2 | 220 | 49.6 | ckpt/config/log |
| FocalNet-T | [11,9,9,7] | Sparse R-CNN | 3x | 111.2 | 178 | 49.9 | ckpt/config/log |

Semantic Segmentation on ADE20K

| Backbone | Kernels | Method | #Params. (M) | FLOPs (G) | mIoU | mIoU (MS) | Download |
|---|---|---|---|---|---|---|---|
| FocalNet-T | [9,11] | UPerNet | 61 | 944 | 46.5 | 47.2 | ckpt/config/log |
| FocalNet-T | [9,11,13] | UPerNet | 61 | 949 | 46.8 | 47.8 | ckpt/config/log |
| FocalNet-S | [9,11] | UPerNet | 83 | 1035 | 49.3 | 50.1 | ckpt/config/log |
| FocalNet-S | [9,11,13] | UPerNet | 84 | 1044 | 49.1 | 50.1 | ckpt/config/log |
| FocalNet-B | [9,11] | UPerNet | 124 | 1180 | 50.2 | 51.1 | ckpt/config/log |
| FocalNet-B | [9,11,13] | UPerNet | 126 | 1192 | 50.5 | 51.4 | ckpt/config/log |

Visualizations

There are three steps in our FocalNets (a simplified code sketch follows the list):

  1. Contextualization with depth-wise convolutions;
  2. Multi-scale aggregation with a gating mechanism;
  3. Modulator derived from context aggregation and projection.
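
For readers who prefer code, below is a simplified, self-contained sketch of these three steps. It is illustrative only and not the repo's exact `FocalModulation` class; names, defaults, and minor details are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalModulationSketch(nn.Module):
    """Illustrative sketch of the three steps above (not the repo's exact code)."""

    def __init__(self, dim: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.num_levels = len(kernel_sizes)
        # One projection produces the query, the initial context, and one gate
        # per focal level plus one extra gate for the global context.
        self.f = nn.Linear(dim, 2 * dim + self.num_levels + 1)
        # Step 1: hierarchical contextualization with depth-wise convolutions.
        self.focal_layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim, bias=False),
                nn.GELU(),
            )
            for k in kernel_sizes
        )
        self.h = nn.Conv2d(dim, dim, kernel_size=1)  # step 3: modulator projection
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C), channels-last as in many Swin-style implementations.
        C = x.shape[-1]
        q, ctx, gates = torch.split(self.f(x), (C, C, self.num_levels + 1), dim=-1)
        ctx = ctx.permute(0, 3, 1, 2)      # (B, C, H, W)
        gates = gates.permute(0, 3, 1, 2)  # (B, L+1, H, W)

        # Steps 1+2: aggregate multi-scale context, weighted by the learned gates.
        ctx_all = 0
        for level, layer in enumerate(self.focal_layers):
            ctx = layer(ctx)
            ctx_all = ctx_all + ctx * gates[:, level:level + 1]
        ctx_global = F.gelu(ctx.mean(dim=(2, 3), keepdim=True))
        ctx_all = ctx_all + ctx_global * gates[:, self.num_levels:]

        # Step 3: derive the modulator and modulate the query element-wise.
        modulator = self.h(ctx_all).permute(0, 2, 3, 1)  # back to (B, H, W, C)
        return self.proj(q * modulator)
```

For example, `FocalModulationSketch(96)(torch.randn(1, 56, 56, 96))` returns a tensor of the same shape, playing the role that a self-attention block would in a Transformer stage.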

We visualize them one by one.

<p align="center"> <img src="figures/dw-kernels.png" width=70% height=70% class="center"> </p>

Yellow indicates higher values. The FocalNets evidently learn to gather more local context at earlier stages and more global context at later stages.

<p align="center"> <img src="figures/pic1.png" width=70% height=70% class="center"> <img src="figures/pic2.png" width=70% height=70% class="center"> <img src="figures/pic3.png" width=70% height=70% class="center"> <img src="figures/pic4.png" width=70% height=70% class="center"> </p>

From left to right: the input image, the gating maps for focal levels 1, 2, and 3, and the gating map for the global context. Clearly, our model has learned where to gather context depending on the visual content at different locations.

<p align="center"> <img src="figures/vis-modulator.png" width=70% height=70% class="center"> </p>

The modulator derived from our model automatically learns to focus on the foreground regions.

To produce these visualizations on your own, please refer to the visualization notebook.
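
If you just want a quick look at the learned depth-wise kernels (as in the first figure above) without the notebook, one possible approach is to scan the model for depth-wise convolutions and plot their channel-averaged weights. This generic sketch assumes nothing about FocalNet internals beyond standard PyTorch modules.

```python
import torch.nn as nn
import matplotlib.pyplot as plt

def plot_depthwise_kernels(model: nn.Module, max_kernels: int = 4):
    """Plot channel-averaged depth-wise conv kernels found in `model`."""
    shown = 0
    for name, m in model.named_modules():
        # Depth-wise convs are Conv2d layers whose groups equal their channels.
        if isinstance(m, nn.Conv2d) and m.groups == m.in_channels and m.kernel_size[0] > 1:
            kernel = m.weight.detach().cpu().mean(dim=0).squeeze().numpy()  # (kH, kW)
            plt.subplot(1, max_kernels, shown + 1)
            plt.imshow(kernel, cmap="viridis")
            plt.title(name, fontsize=6)
            plt.axis("off")
            shown += 1
            if shown == max_kernels:
                break
    plt.show()
```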

Citation

If you find this repo useful for your project, please consider citing it with the following BibTeX entry:

@inproceedings{yang2022focal,
      title={Focal Modulation Networks},
      author={Jianwei Yang and Chunyuan Li and Xiyang Dai and Lu Yuan and Jianfeng Gao},
      booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
      year={2022}
}

Acknowledgement

Our codebase is built on Swin Transformer and Focal Transformer. To achieve SoTA object detection performance, we rely heavily on the advanced DINO method and on advice from its authors. We thank the authors for their nicely organized code!

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.