KNN_CUDA

Modifications

Performance

| Loops | sklearn | CUDA    | Memory   |
|-------|---------|---------|----------|
| 100   | 2.34 ms | 0.06 ms | 652/1024 |
| 1000  | 2.30 ms | 1.40 ms | 652/1024 |

Install

From source:

git clone https://github.com/unlimblue/KNN_CUDA.git
cd KNN_CUDA
make && make install

Or install the prebuilt wheel:

pip install --upgrade https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl

Then make sure ninja is installed, either:

  1. see https://pytorch.org/tutorials/advanced/cpp_extension.html
  2. or just download the prebuilt binary:

wget -P /usr/bin https://github.com/unlimblue/KNN_CUDA/raw/master/ninja

On Windows, you should use the windows branch:

git clone --branch windows https://github.com/unlimblue/KNN_CUDA.git
cd C:\\PATH_TO_KNN_CUDA
make
make install

Usage

import torch

# Make sure your CUDA is available.
assert torch.cuda.is_available()

from knn_cuda import KNN
"""
if transpose_mode is True, 
    ref   is Tensor [bs x nr x dim]
    query is Tensor [bs x nq x dim]
    
    return 
        dist is Tensor [bs x nq x k]
        indx is Tensor [bs x nq x k]
else
    ref   is Tensor [bs x dim x nr]
    query is Tensor [bs x dim x nq]
    
    return 
        dist is Tensor [bs x k x nq]
        indx is Tensor [bs x k x nq]
"""

knn = KNN(k=10, transpose_mode=True)

ref = torch.rand(32, 1000, 5).cuda()
query = torch.rand(32, 50, 5).cuda()

dist, indx = knn(ref, query)  # both dist and indx: 32 x 50 x 10
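For intuition about the shape conventions above, the transpose_mode=True case can be reproduced on the CPU with a brute-force reference. This is a hedged NumPy sketch rather than the actual CUDA kernel, and the Euclidean distance metric is an assumption based on the usual kNN convention:

```python
import numpy as np

def knn_reference(ref, query, k):
    """Brute-force kNN in the transpose_mode=True layout.

    ref:   [bs x nr x dim], query: [bs x nq x dim]
    Returns (dist, indx), each [bs x nq x k], sorted by ascending distance.
    NOTE: a plain NumPy sketch for shape intuition, not the CUDA kernel;
    Euclidean distance is assumed.
    """
    # Pairwise distances via broadcasting: [bs x nq x nr]
    diff = query[:, :, None, :] - ref[:, None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    # Indices of the k smallest distances per query point
    idx = np.argsort(d, axis=-1)[:, :, :k]
    dist = np.take_along_axis(d, idx, axis=-1)
    return dist, idx

bs, nr, nq, dim, k = 32, 1000, 50, 5, 10
ref = np.random.rand(bs, nr, dim)
query = np.random.rand(bs, nq, dim)
dist, indx = knn_reference(ref, query, k)
print(dist.shape, indx.shape)  # (32, 50, 10) (32, 50, 10)
```

The output shapes match the `knn(ref, query)` call above; the CUDA kernel exists because this broadcasted approach materializes a [bs x nq x nr] distance matrix, which is slow and memory-hungry at scale.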