A PyTorch wrapper for CUDA FFTs

A package that provides a PyTorch C extension for performing batches of 2D cuFFT transformations, by Eric Wong.

Update: FFT functionality is now officially included in PyTorch 0.4; see the official PyTorch documentation. This repository is only useful for older versions of PyTorch, and will no longer be updated.
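For reference, the same kind of batched 2D transform in PyTorch 0.4 looks roughly like the sketch below. It uses the torch.fft / torch.ifft functions that shipped with 0.4 (later releases replaced them with the torch.fft module), so treat it as an illustration rather than part of this package.

# Rough PyTorch 0.4 equivalent (sketch): the last dimension holds (real, imag) pairs.
import torch

A = torch.randn(3, 4, 5, 2).cuda()     # batch of three 4x5 complex signals
B = torch.fft(A, signal_ndim=2)        # batched 2D FFT
A_back = torch.ifft(B, signal_ndim=2)  # round trip, approximately equal to A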

Installation

This package is on PyPI. Install it with pip install pytorch-fft.
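A quick smoke test after installing (a minimal sketch, assuming a CUDA-capable GPU, since the package operates on CUDA tensors):

import torch
import pytorch_fft.fft as fft

x_real = torch.randn(1, 8, 8).cuda()        # batch of one 8x8 signal
x_imag = torch.zeros(1, 8, 8).cuda()        # zero imaginary part
y_real, y_imag = fft.fft2(x_real, x_imag)   # batched 2D FFT
print(y_real.size(), y_imag.size())         # both torch.Size([1, 8, 8])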

Usage

# Example that does a batch of three 2D transformations of size 4 by 5. 
import torch
import pytorch_fft.fft as fft

A_real, A_imag = torch.randn(3,4,5).cuda(), torch.zeros(3,4,5).cuda()
B_real, B_imag = fft.fft2(A_real, A_imag)
fft.ifft2(B_real, B_imag) # recovers (A_real, A_imag)

B_real, B_imag = fft.rfft2(A_real) # is a truncated version which omits
                                   # the redundant (conjugate-symmetric) entries

fft.reverse(torch.arange(0,6)) # outputs [5,4,3,2,1,0]
fft.reverse(torch.arange(0,6), 2) # outputs [4,5,2,3,0,1]

fft.expand(B_real) # is equivalent to fft.fft2(A_real, A_imag)[0]
fft.expand(B_imag, imag=True) # is equivalent to fft.fft2(A_real, A_imag)[1]
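
# Quick numerical check (sketch, reusing the tensors above): the expanded
# one-sided transform should match the full complex transform up to
# floating-point rounding, as the comments above claim.
full_real, full_imag = fft.fft2(A_real, A_imag)
print((fft.expand(B_real) - full_real).abs().max())            # ~0
print((fft.expand(B_imag, imag=True) - full_imag).abs().max()) # ~0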

# Example that uses the autograd for 2D fft:
import torch
from torch.autograd import Variable
import pytorch_fft.fft.autograd as fft

f = fft.Fft2d()
invf = fft.Ifft2d()

fx, fy = (Variable(torch.arange(0,100).view(1,1,10,10).cuda(), requires_grad=True),
          Variable(torch.zeros(1,1,10,10).cuda(), requires_grad=True))
k1, k2 = f(fx, fy)       # real and imaginary parts of the 2D FFT
z = k1.sum() + k2.sum()  # scalar loss built from the transform
z.backward()             # gradients flow back through the FFT
print(fx.grad, fy.grad)
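
Because Fft2d and Ifft2d participate in autograd, they can sit inside a larger differentiable computation. The sketch below is not from the original examples; the mask and the loss are purely illustrative. It zeroes out most frequency components and backpropagates through both transforms, reusing the f and invf modules defined above.

# Sketch: differentiate through a frequency-domain masking step.
x_re = Variable(torch.randn(1, 1, 10, 10).cuda(), requires_grad=True)
x_im = Variable(torch.zeros(1, 1, 10, 10).cuda(), requires_grad=True)

X_re, X_im = f(x_re, x_im)                    # forward 2D FFT

mask = torch.zeros(1, 1, 10, 10).cuda()
mask[:, :, :3, :3] = 1                        # keep only a few low frequencies (illustrative)
mask = Variable(mask)

y_re, y_im = invf(X_re * mask, X_im * mask)   # inverse FFT of the masked spectrum
loss = (y_re ** 2).sum()
loss.backward()                               # gradients reach x_re and x_im through both FFTs
print(x_re.grad.size())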

Notes

Repository contents

Issues and Contributions

If you have any issues or feature requests, file an issue or send in a PR.