<div align="center"> <img width="500px" src="https://raw.githubusercontent.com/KernelTuner/kernel_tuner/master/doc/images/KernelTuner-logo.png"/> </div>

Create optimized GPU applications in any mainstream GPU programming language (CUDA, HIP, OpenCL, OpenACC).
What Kernel Tuner does:
- Works as an external tool to benchmark and optimize GPU kernels in isolation
- Can be used directly on existing kernel code without extensive changes
- Can be used with applications in any host programming language
- Blazing fast search space construction
- More than 20 optimization algorithms to speed up tuning
- Energy measurements and optimizations (power capping, clock frequency tuning)
- ... and much more! For example: caching, output verification, tuning host and device code, and user-defined metrics; a short sketch of a few of these options follows this list, and the full documentation covers them all.
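Several of these features are available as optional keyword arguments of `tune_kernel`. The snippet below is a minimal sketch, not a complete application, combining output verification (`answer`), a user-defined metric (`metrics`), and caching of benchmark results (`cache`) for the same `vector_add` kernel used in the example further down; see the full documentation for the exact semantics of each option.

```python
import numpy as np
from kernel_tuner import tune_kernel

# Same vector_add kernel as in the example below.
kernel_string = """
__global__ void vector_add(float *c, float *a, float *b, int n) {
    int i = blockIdx.x * block_size_x + threadIdx.x;
    if (i<n) {
        c[i] = a[i] + b[i];
    }
}
"""

n = np.int32(10000000)
a = np.random.randn(n).astype(np.float32)
b = np.random.randn(n).astype(np.float32)
c = np.zeros_like(a)
args = [c, a, b, n]

tune_params = {"block_size_x": [32, 64, 128, 256, 512]}

tune_kernel(
    "vector_add", kernel_string, n, args, tune_params,
    # Output verification: compare the first kernel argument against a reference
    # result computed on the host; None means "do not check this argument".
    answer=[a + b, None, None, None],
    # User-defined metric, computed per configuration from the measured time (ms).
    metrics={"GB/s": lambda p: (3 * n * 4 / 1e9) / (p["time"] / 1e3)},
    # Cache benchmarked configurations, so an interrupted run can be resumed.
    cache="vector_add_cache.json",
)
```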
Installation
- First, make sure you have your CUDA, OpenCL, or HIP compiler installed
- Then type: `pip install kernel_tuner[cuda]`, `pip install kernel_tuner[opencl]`, or `pip install kernel_tuner[hip]`
- Or why not all of them: `pip install kernel_tuner[cuda,opencl,hip]`
More information on installation, also for other languages, can be found in the installation guide. A quick way to check the install from Python is sketched below.
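As a quick sanity check (this only verifies that the package can be imported, not that a working CUDA, OpenCL, or HIP backend is available):

```python
# Minimal check that Kernel Tuner is installed in the current Python environment.
from importlib.metadata import version

import kernel_tuner  # raises ImportError if the installation failed

print("kernel_tuner version:", version("kernel_tuner"))
```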
Example
```python
import numpy as np
from kernel_tuner import tune_kernel

kernel_string = """
__global__ void vector_add(float *c, float *a, float *b, int n) {
    int i = blockIdx.x * block_size_x + threadIdx.x;
    if (i<n) {
        c[i] = a[i] + b[i];
    }
}
"""

n = np.int32(10000000)

a = np.random.randn(n).astype(np.float32)
b = np.random.randn(n).astype(np.float32)
c = np.zeros_like(a)
args = [c, a, b, n]

tune_params = {"block_size_x": [32, 64, 128, 256, 512]}

tune_kernel("vector_add", kernel_string, n, args, tune_params)
```
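`tune_kernel` benchmarks every configuration in `tune_params` and returns its results. As a minimal sketch, assuming the `kernel_string`, `n`, `args`, and `tune_params` defined above, you can capture the return value to inspect the best-performing configuration:

```python
# tune_kernel returns a list of benchmarked configurations and a dict describing
# the benchmarking environment; each entry in `results` contains the tunable
# parameter values plus the measured runtime in 'time' (milliseconds by default).
results, env = tune_kernel("vector_add", kernel_string, n, args, tune_params)

# Pick the configuration with the lowest measured runtime.
best = min(results, key=lambda conf: conf["time"])
print("best configuration:", best)
```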
More examples can be found here.
Resources
- Full documentation
- Guides
- Features & Use cases
- Kernel Tuner Tutorial slides [PDF] and hands-on exercises
- Energy Efficient GPU Computing tutorial slides [PDF] and hands-on exercises
Kernel Tuner ecosystem
<img width="250px" src="https://raw.githubusercontent.com/KernelTuner/kernel_tuner/master/doc/images/kernel_launcher.png"/><br />C++ magic to integrate auto-tuned kernels into C++ applications
<img width="250px" src="https://raw.githubusercontent.com/KernelTuner/kernel_tuner/master/doc/images/kernel_float.png"/><br />C++ data types for mixed-precision CUDA kernel programming
<img width="275px" src="https://raw.githubusercontent.com/KernelTuner/kernel_tuner/master/doc/images/kernel_dashboard.png"/><br />Monitor, analyze, and visualize auto-tuning runs
Communication & Contribution
- GitHub Issues: Bug reports, install issues, feature requests, work in progress
- GitHub Discussion group: General questions, Q&A, thoughts
Contributions are welcome! For feature requests, bug reports, or usage problems, please feel free to create an issue. For more extensive contributions, check the contribution guide.
Citation
If you use Kernel Tuner in research or research software, please cite the most relevant among the publications on Kernel Tuner. To refer to the project as a whole, please cite:
```bibtex
@article{kerneltuner,
  author  = {Ben van Werkhoven},
  title   = {Kernel Tuner: A search-optimizing GPU code auto-tuner},
  journal = {Future Generation Computer Systems},
  year    = {2019},
  volume  = {90},
  pages   = {347-358},
  url     = {https://www.sciencedirect.com/science/article/pii/S0167739X18313359},
  doi     = {https://doi.org/10.1016/j.future.2018.08.004}
}
```