Multi GPU Programming Models

This project implements the well-known multi-GPU Jacobi solver with different multi-GPU programming models:

Each variant is a stand-alone Makefile project, and most variants have been discussed in various GTC talks, e.g.:

Some examples in this repository are the basis for an interactive tutorial: FZJ-JSC/tutorial-multi-gpu.

Requirements

Building

Each variant comes with a Makefile and can be built by simply issuing make, e.g.:

multi-gpu-programming-models$ cd multi_threaded_copy
multi_threaded_copy$ make
nvcc -DHAVE_CUB -Xcompiler -fopenmp -lineinfo -DUSE_NVTX -lnvToolsExt -gencode arch=compute_70,code=sm_70 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_90,code=sm_90 -gencode arch=compute_90,code=compute_90 -std=c++14 jacobi.cu -o jacobi
multi_threaded_copy$ ls jacobi
jacobi

Run instructions

All variants support the following command-line options:

The nvshmem variant additionally provides:

The multi_node_p2p variant additionally provides:

The nccl variants additionally provide:

The provided bench.sh script contains examples that execute all of the benchmarks presented in the GTC talks referenced above.

Developer guide

The code follows the style defined in the .clang-format file. Use clang-format version 7 or later to format the code before submitting it, e.g. with

multi-gpu-programming-models$ cd multi_threaded_copy
multi_threaded_copy$ clang-format -style=file -i jacobi.cu

Footnotes

  1. A check for CUDA-aware support is done at compile time and at run time (see the Open MPI FAQ for details). If your CUDA-aware MPI implementation does not support this check (which requires MPIX_CUDA_AWARE_SUPPORT and MPIX_Query_cuda_support() to be defined in mpi-ext.h), it can be skipped by setting SKIP_CUDA_AWARENESS_CHECK=1.
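     For reference, a guarded sketch of what the compile-time half of such a check can look like. The MPIX_CUDA_AWARE_SUPPORT macro and MPIX_Query_cuda_support() come from mpi-ext.h as described above; the __has_include guards are illustrative so the snippet compiles even on a machine without MPI installed:

     ```cpp
     #include <cstdio>

     // Illustrative: mpi-ext.h defines MPIX_CUDA_AWARE_SUPPORT and
     // MPIX_Query_cuda_support() on MPI implementations that support the check.
     #if defined(__has_include)
     #if __has_include(<mpi.h>) && __has_include(<mpi-ext.h>)
     #include <mpi.h>
     #include <mpi-ext.h>
     #endif
     #endif

     bool cuda_aware_mpi_at_compile_time() {
     #if defined(SKIP_CUDA_AWARENESS_CHECK)
         return true;   // user explicitly opted out of the check
     #elif defined(MPIX_CUDA_AWARE_SUPPORT) && MPIX_CUDA_AWARE_SUPPORT
         // Compile-time support is present; a runtime call to
         // MPIX_Query_cuda_support() after MPI_Init would confirm
         // that it is actually enabled in this run.
         return true;
     #else
         return false;  // cannot verify CUDA-awareness at compile time
     #endif
     }

     int main() {
         std::printf("CUDA-aware MPI detected at compile time: %s\n",
                     cuda_aware_mpi_at_compile_time() ? "yes" : "no");
         return 0;
     }
     ```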