RF-Next: Efficient Receptive Field Search for Convolutional Neural Networks

A general receptive field search method for CNNs. If your network contains convolutions with kernel size larger than 1, RF-Next can further improve your model. This is the official implementation of:

TPAMI2022 paper: 'RF-Next: Efficient Receptive Field Search for Convolutional Neural Networks'

CVPR2021 paper: 'Global2Local: Efficient Structure Search for Video Action Segmentation'

Introduction

Temporal/spatial receptive fields of models play an important role in sequential/spatial tasks. Large receptive fields facilitate long-term relations, while small receptive fields help capture local details. Existing methods construct models with hand-designed receptive fields in their layers. Can we effectively search for receptive field combinations to replace hand-designed patterns? To answer this question, we propose finding better receptive field combinations through a global-to-local search scheme. Our search scheme exploits both global search, to find coarse combinations, and local search, to further refine the receptive field combinations. The global search finds possible coarse combinations beyond human-designed patterns. On top of the global search, we propose an expectation-guided iterative local search scheme to refine combinations effectively. Our RF-Next models, which plug receptive field search into various architectures, boost performance on many tasks, e.g., temporal action segmentation, object detection, instance segmentation, and speech synthesis.
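
The searchable unit behind this scheme can be pictured as a convolution evaluated at several candidate dilation rates in parallel, with learnable weights deciding how much each rate contributes. Below is a minimal PyTorch sketch of that idea, assuming a shared kernel across branches; it is not the official implementation, and the names (SearchableConv2d, refine_candidates) and candidate rates are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SearchableConv2d(nn.Module):
    """A 3x3 convolution whose dilation rate (receptive field) is searched.

    Each candidate dilation is a parallel branch sharing the same kernel;
    softmax-normalized architecture weights combine the branches.
    """

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.dilations = list(dilations)
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, bias=False)
        # One learnable architecture weight per candidate dilation rate.
        self.arch_weights = nn.Parameter(torch.zeros(len(self.dilations)))

    def forward(self, x):
        probs = F.softmax(self.arch_weights, dim=0)
        out = 0
        for p, d in zip(probs, self.dilations):
            # Same kernel, different dilation: only the receptive field changes.
            out = out + p * F.conv2d(x, self.conv.weight, padding=d, dilation=d)
        return out

    def expected_dilation(self):
        # Expectation of the dilation rate under the architecture weights.
        probs = F.softmax(self.arch_weights, dim=0)
        return float(sum(p * d for p, d in zip(probs, self.dilations)))

    def refine_candidates(self, step=1):
        # One local-search iteration: re-center the candidate rates around
        # the current expected dilation and reset the architecture weights.
        center = max(1, round(self.expected_dilation()))
        self.dilations = sorted({max(1, center - step), center, center + step})
        self.arch_weights = nn.Parameter(
            torch.zeros(len(self.dilations), device=self.conv.weight.device))

In this picture, the global search corresponds to starting from widely spaced candidate rates, while repeated calls to refine_candidates narrow the candidates around the expected rate, mirroring the expectation-guided local search described above.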

Applications and Code

RF-Next supports many applications; it can be plugged into any model that contains convolutions with kernel size larger than 1.
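
As a hedged usage example (again with illustrative names, not the official API), the SearchableConv2d sketch from the Introduction could replace a fixed-dilation convolution in an existing model, alternating task training with local refinement:

import torch
import torch.nn as nn

# Plug the SearchableConv2d sketch into a toy model.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    SearchableConv2d(64, dilations=(1, 2, 4, 8)),  # coarse (global) candidates
    nn.Conv2d(64, 1, 1),
)

for search_round in range(3):  # a few global-to-local search rounds
    # Rebuild the optimizer each round: refine_candidates() creates new
    # architecture parameters.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(100):  # train model and architecture weights jointly
        x = torch.randn(2, 3, 64, 64)  # placeholder input for illustration
        loss = model(x).mean()         # placeholder loss; use your task loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    model[1].refine_candidates()       # expectation-guided local refinement

After the search converges, one would keep the dilation rate with the highest architecture weight in each layer and retrain the resulting fixed-structure model.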

Citation

If you find this work or code is helpful in your research, please cite:

@article{gao2022rfnext,
  title={RF-Next: Efficient Receptive Field Search for Convolutional Neural Networks},
  author={Gao, Shanghua and Li, Zhong-Yu and Han, Qi and Cheng, Ming-Ming and Wang, Liang},
  journal=TPAMI,
  year={2022}
}

@inproceedings{gao2021global2local,
  title={Global2Local: Efficient Structure Search for Video Action Segmentation},
  author={Gao, Shang-Hua and Han, Qi and Li, Zhong-Yu and Peng, Pai and Wang, Liang and Cheng, Ming-Ming},
  booktitle=CVPR,
  year={2021}
}

License

The source code is free for research and education use only. Any commercial use requires formal permission in advance.

Contact

If you have any questions, feel free to email Shang-Hua Gao (shgao(at)live.com).