Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

Project Page | Paper

Open Demo in Colab

Matthew Tancik*<sup>1</sup>, Pratul P. Srinivasan*<sup>1,2</sup>, Ben Mildenhall*<sup>1</sup>, Sara Fridovich-Keil<sup>1</sup>, Nithin Raghavan<sup>1</sup>, Utkarsh Singhal<sup>1</sup>, Ravi Ramamoorthi<sup>3</sup>, Jonathan T. Barron<sup>2</sup>, Ren Ng<sup>1</sup><br>

<sup>1</sup>UC Berkeley, <sup>2</sup>Google Research, <sup>3</sup>UC San Diego <br> <sup>*</sup>denotes equal contribution

Abstract

Teaser Image

We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains. These results shed light on recent advances in computer vision and graphics that achieve state-of-the-art results by using MLPs to represent complex 3D objects and scenes. Using tools from the neural tangent kernel (NTK) literature, we show that a standard MLP fails to learn high frequencies both in theory and in practice. To overcome this spectral bias, we use a Fourier feature mapping to transform the effective NTK into a stationary kernel with a tunable bandwidth. We suggest an approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs for low-dimensional regression tasks relevant to the computer vision and graphics communities.
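As a minimal sketch of the mapping described above: a Gaussian Fourier feature mapping projects each low-dimensional input through a random frequency matrix `B` and returns the sines and cosines of the projections, `γ(v) = [cos(2πBv), sin(2πBv)]`. The function and variable names below are illustrative, not the repository's API; the scale applied to `B` plays the role of the tunable kernel bandwidth.

```python
import numpy as np

def fourier_features(v, B):
    """Gaussian Fourier feature mapping gamma(v) = [cos(2*pi*Bv), sin(2*pi*Bv)].

    v: (N, d) array of low-dimensional input coordinates, e.g. pixel positions.
    B: (m, d) matrix of random frequencies; its scale sets the bandwidth
       of the resulting stationary kernel.
    Returns an (N, 2m) array of features fed to the MLP in place of v.
    """
    proj = 2.0 * np.pi * v @ B.T  # (N, m) projections onto random frequencies
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# Example: map 2D coordinates in [0, 1]^2 to 512 Fourier features.
rng = np.random.default_rng(0)
B = 10.0 * rng.standard_normal((256, 2))  # scale 10 is an illustrative bandwidth
coords = rng.random((4, 2))
feats = fourier_features(coords, B)
print(feats.shape)  # (4, 512)
```

In practice the bandwidth scale is a problem-specific hyperparameter: too small and the network still underfits high frequencies, too large and it overfits noise.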

Code

We provide a demo IPython notebook as a simple reference for the core idea. The scripts used to generate the paper plots and tables are located in the Experiments directory.