
<a href="https://www.shi-labs.com/natten/"><img src="https://img.shields.io/badge/pip%20install%20natten-read%20more-%23C209C1" /></a> | <a href="docs/"><img src="https://img.shields.io/badge/Documentation-B31942" /></a> | <a href="https://arxiv.org/abs/2403.04690"><img src="https://img.shields.io/badge/arXiv-2403.04690-orange" /></a>

Neighborhood Attention Extension

Bringing attention to a neighborhood near you!

<div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="docs/assets/neighborhood_attn_2d_vis_dark.png"> <img alt="Visualization of neighborhood attention in 2D." src="docs/assets/neighborhood_attn_2d_vis_light.png" width="384" /> </picture> <picture> <source media="(prefers-color-scheme: dark)" srcset="docs/assets/dilated_neighborhood_attn_2d_vis_dark.png"> <img alt="Visualization of dilated neighborhood attention in 2D." src="docs/assets/dilated_neighborhood_attn_2d_vis_light.png" width="384" /> </picture> </div>

NATTEN is an open-source project dedicated to providing fast implementations for Neighborhood Attention, a sliding window self-attention mechanism.
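If you just want to drop a layer into a model, the module-style interface looks roughly like this. This is a minimal sketch: the `NeighborhoodAttention1D`/`NeighborhoodAttention2D` modules and the channels-last input layout shown here are based on recent releases, so double-check against the documentation for your version.

```python
import torch

from natten import NeighborhoodAttention1D, NeighborhoodAttention2D

# Hypothetical sizes for illustration; dim must be divisible by num_heads.
na1d = NeighborhoodAttention1D(dim=128, num_heads=4, kernel_size=7, dilation=2)
na2d = NeighborhoodAttention2D(dim=128, num_heads=4, kernel_size=7, dilation=2)

x1d = torch.randn(2, 64, 128)      # (batch, length, dim)
x2d = torch.randn(2, 32, 32, 128)  # (batch, height, width, dim)

out1d = na1d(x1d)  # output shape matches input
out2d = na2d(x2d)
```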

If you're not familiar with neighborhood attention, please refer to our papers, or watch our YouTube video from CVPR 2023.

To read more about our GEMM-based and fused neighborhood attention kernels, please refer to our new preprint, Faster Neighborhood Attention.

New: Fused Neighborhood Attention now supports backpropagation!

We've released the Fused Neighborhood Attention (FNA) backward kernel and interface, which means you can now train models based on neighborhood attention faster and more efficiently.

FNA can be seen as a generalization of methods such as Flash Attention and FMHA from back-to-back matrix multiplication to back-to-back tensor-tensor contraction, with neighborhood attention masking built in. This accelerates neighborhood attention, a multi-dimensional sliding window attention pattern, by never storing the attention tensor in global memory, which both shrinks the global memory footprint and eases the memory bandwidth bottleneck.

<div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="assets/fna-chart-dark.png"> <img alt="Op-level average speedup." src="assets/fna-chart-light.png" /> </picture> </div>
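As a rough sketch of what training through FNA can look like, the snippet below runs a forward and backward pass through the functional interface. The names and layout are assumptions based on recent releases (`natten.functional.na2d`, the `natten.use_fused_na` toggle, and a (batch, *spatial, heads, head_dim) layout for fused ops); consult the guides linked below before relying on them.

```python
import torch
import natten
from natten.functional import na2d

natten.use_fused_na(True)  # opt into fused kernels where the build supports them

# Assumed fused-op layout: (batch, height, width, heads, head_dim).
# FNA is a CUDA feature, so this example requires a GPU.
q = torch.randn(2, 32, 32, 4, 32, device="cuda", requires_grad=True)
k = torch.randn(2, 32, 32, 4, 32, device="cuda", requires_grad=True)
v = torch.randn(2, 32, 32, 4, 32, device="cuda", requires_grad=True)

out = na2d(q, k, v, kernel_size=7)

# The FNA backward kernel makes this differentiable end to end.
out.sum().backward()
assert q.grad is not None and q.grad.shape == q.shape
```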

We highly recommend referring to FNA quick start or the Fused vs unfused NA guide before starting to use FNA, since the interface, memory layout, and feature set can differ from all unfused ops in NATTEN.
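One example of what those guides cover is KV parallelism in the FNA backward pass, which trades memory for throughput and affects determinism. The toggle names below are assumptions from recent releases; treat this as a sketch rather than a stable API reference.

```python
import natten

natten.use_fused_na(True)

# Backward-pass KV parallelism: faster, but uses more memory and is
# non-deterministic (assumed API; see the FNA quick start for your version).
natten.use_kv_parallelism_in_fused_na(True)
natten.set_memory_usage_preference("unrestricted")
```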

Getting started

NATTEN supports PyTorch version 2.0 and later, and Python versions 3.8 and above. Python 3.12 is only supported with torch >= 2.2.0.

Older NATTEN releases supported Python >= 3.7 and torch >= 1.8.
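If you want your code to fail fast on unsupported environments, a simple guard along these lines reflects the constraints above:

```python
import sys

import torch

# Constraints stated above: Python >= 3.8, torch >= 2.0,
# and torch >= 2.2 when running Python 3.12.
torch_version = tuple(int(v) for v in torch.__version__.split(".")[:2])

assert sys.version_info >= (3, 8), "NATTEN requires Python 3.8+"
assert torch_version >= (2, 0), "NATTEN requires torch 2.0+"
if sys.version_info >= (3, 12):
    assert torch_version >= (2, 2), "Python 3.12 requires torch >= 2.2.0"
```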

Please refer to the install instructions to find out whether your operating system and hardware accelerator are compatible with NATTEN.

Feature availability

| Problem space | CPU backend | CUDA backend     |
|:-------------:|:-----------:|:----------------:|
| 1D            | naive       | naive, gemm, fna |
| 2D            | naive       | naive, gemm, fna |
| 3D            | naive       | naive, fna       |

CPU

| Problem space | CPU Backend | Causal masking     | Varying parameters | Relative positional bias | Autograd support         |
|:-------------:|:-----------:|:------------------:|:------------------:|:------------------------:|:------------------------:|
| 1D            | naive       | :white_check_mark: | :white_check_mark: | :white_check_mark:       | Forward and reverse mode |
| 2D            | naive       | :white_check_mark: | :white_check_mark: | :white_check_mark:       | Forward and reverse mode |
| 3D            | naive       | :white_check_mark: | :white_check_mark: | :white_check_mark:       | Forward and reverse mode |

Notes:

CUDA

| Problem space | CUDA Backend | Causal masking     | Varying parameters | Relative positional bias | Autograd support         | Min. Arch |
|:-------------:|:------------:|:------------------:|:------------------:|:------------------------:|:------------------------:|:---------:|
| 1D            | naive        | :white_check_mark: | :white_check_mark: | :white_check_mark:       | Forward and reverse mode | SM35      |
| 2D            | naive        | :white_check_mark: | :white_check_mark: | :white_check_mark:       | Forward and reverse mode | SM35      |
| 3D            | naive        | :white_check_mark: | :white_check_mark: | :white_check_mark:       | Forward and reverse mode | SM35      |
| 1D            | gemm         | -                  | -                  | :white_check_mark:       | Forward and reverse mode | SM70      |
| 2D            | gemm         | -                  | -                  | :white_check_mark:       | Forward and reverse mode | SM70      |
| 1D            | fna          | :white_check_mark: | :white_check_mark: | :white_check_mark:       | Reverse mode             | SM50      |
| 2D            | fna          | :white_check_mark: | :white_check_mark: | :white_check_mark:       | Reverse mode             | SM50      |
| 3D            | fna          | :white_check_mark: | :white_check_mark: | :white_check_mark:       | Reverse mode             | SM50      |
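To illustrate causal masking and varying (per-axis) parameters from the table above, here is a hedged sketch of the functional interface; the tuple-valued `kernel_size`/`dilation`/`is_causal` arguments are assumed from recent releases, so verify against the documentation for your version.

```python
import torch
from natten.functional import na2d

# Assumed fused-op layout: (batch, height, width, heads, head_dim).
q = torch.randn(1, 16, 16, 4, 32, device="cuda")
k = torch.randn(1, 16, 16, 4, 32, device="cuda")
v = torch.randn(1, 16, 16, 4, 32, device="cuda")

# Per-axis (varying) parameters, with causal masking along height only.
out = na2d(
    q, k, v,
    kernel_size=(7, 5),
    dilation=(1, 2),
    is_causal=(True, False),
)
```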

Notes:

Features that will likely no longer be worked on or improved:

License

NATTEN is released under the MIT License.

Citation

```bibtex
@inproceedings{hassani2024faster,
  title        = {Faster Neighborhood Attention: Reducing the O(n^2) Cost of Self Attention at the Threadblock Level},
  author       = {Ali Hassani and Wen-Mei Hwu and Humphrey Shi},
  year         = 2024,
  booktitle    = {Advances in Neural Information Processing Systems}
}
@inproceedings{hassani2023neighborhood,
  title        = {Neighborhood Attention Transformer},
  author       = {Ali Hassani and Steven Walton and Jiachen Li and Shen Li and Humphrey Shi},
  year         = 2023,
  booktitle    = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}
}
@misc{hassani2022dilated,
  title        = {Dilated Neighborhood Attention Transformer},
  author       = {Ali Hassani and Humphrey Shi},
  year         = 2022,
  url          = {https://arxiv.org/abs/2209.15001},
  eprint       = {2209.15001},
  archiveprefix = {arXiv},
  primaryclass = {cs.CV}
}
```

Acknowledgements

We thank NVIDIA and the CUTLASS project and team for their efforts in creating and open-sourcing CUTLASS. We would also like to thank Haicheng Wu for his valuable feedback and comments, which led to the creation of GEMM-based NA. We also thank Meta and the xFormers team for their FMHA kernel, on which our Fused Neighborhood Attention kernel is based. Finally, we thank the PyTorch project and team.