TensorFlow Optimized Wheels
Custom builds of TensorFlow with platform optimizations, including SSE4.1, SSE4.2, AVX, AVX2 and FMA. If you are seeing messages like the following with the stock pip install tensorflow, you've come to the right place.
The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
or:
Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
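Not sure whether your CPU actually supports these instruction sets? One way to check (standard Linux tooling, nothing specific to these wheels) is to look at the flags reported in /proc/cpuinfo:
# Show which of the relevant instruction sets your CPU advertises.
grep -o -w -E 'sse4_1|sse4_2|avx|avx2|fma' /proc/cpuinfo | sort -u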
These wheels are built for use on TinyMind, the cloud machine learning platform. If you want to install them on your own Linux box (Ubuntu 16.04 LTS), you can do so with:
# RELEASE is the git tag like tf1.1-cpu. WHEEL is the full wheel name.
pip --no-cache-dir install https://github.com/mind/wheels/releases/download/{RELEASE}/{WHEEL}
The list of all wheels can be found on the releases page.
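Once a wheel is installed, a quick sanity check (generic TF 1.x usage, not specific to these wheels) is to import TensorFlow and run a trivial op; the instruction-set warnings above should no longer show up:
# Print the TensorFlow version and run a trivial op to surface any remaining warnings.
python -c "import tensorflow as tf; print(tf.__version__); tf.Session().run(tf.constant('hello'))"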
Versions
Click on the links below to jump to specific release versions. Again, they are built for Ubuntu 16.04 LTS unless otherwise noted.
TF | Builds |
---|---|
1.1 | CPU, GPU |
1.2 | CPU, GPU (Python 3.6 only) |
1.2.1 | CPU, GPU |
1.3 | CPU, GPU with MPI |
1.3.1 | CPU, CPU Debug, GPU, GPU with MPI |
1.4 | CPU, CPU Debug, CPU macOS, GPU (CUDA 8, CUDA 9 for Compute 3.7, CUDA 9 for Compute 3.7/6.0/7.0, CUDA 9 generic, CUDA 9 without MKL) |
1.4.1 | CPU, GPU (CUDA 8, CUDA 9, CUDA 9.1) |
1.5 | CPU, GPU (CUDA 9, CUDA 9 without MKL, CUDA 9.1, CUDA 9.1 without MKL) |
1.6 | CPU, GPU (CUDA 9.1, CUDA 9.1 without MKL) |
1.7 | CPU, GPU (CUDA 9, CUDA 9.1, cuDNN 7.1) |
Please note that your machine needs a relatively new Intel CPU (one that supports SSE4.1, SSE4.2, AVX, AVX2 and FMA), and an Nvidia GPU if you use the GPU version, to be compatible with these wheels. If the hardware is not up to date, the wheels will not work.
Wheels for TensorFlow 1.4.1 and above contain support for GCP, S3 and Hadoop. Compilation flags include:
--config=opt --config=cuda --cxxopt=-D_GLIBCXX_USE_CXX11_ABI=0 --copt=-mavx --copt=-msse4.1 --copt=-msse4.2 --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both
The wheels you will most likely need are linked in the Versions section above. Need something, or does a wheel not work for you? File an issue. (Unfortunately, we won't be able to accommodate requests for Windows wheels, as we don't have Windows machines ourselves.)
Help!
This section contains tips for debugging your setup. Seriously though, try TinyMind out and you will never need to waste time debugging again. We also have Docker images that you can use on your own machines. If this section doesn't solve your problem, be sure to file an issue.
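If you go the Docker route, the workflow is the usual pull-and-run; {IMAGE} below is a placeholder for whichever of our images you pick:
# {IMAGE} is a placeholder for the actual Docker image name.
docker pull {IMAGE}
docker run -it {IMAGE} python -c "import tensorflow as tf; print(tf.__version__)"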
CUDA
Different TensorFlow versions support/require different CUDA versions:
TF | CUDA | cuDNN | Compute Capability |
---|---|---|---|
1.1, 1.2 | 8.0 | 5.1 | 3.7 (K80) |
1.2.1-1.3.1 | 8.0 | 6.0 | 3.7 |
1.4 | 8.0/9.0 | 6.0/7.0 | 3.7, 6.0 (P100), 7.0 (V100) |
1.4.1 | 8.0/9.0/9.1 | 6.0/7.0 | 3.7, 6.0, 7.0 |
1.5 | 9.0/9.1 | 7.0 | 3.7, 6.0, 7.0 |
1.6 | 9.1 | 7.0 | 3.7, 6.0, 7.0 |
1.7 | 9.0/9.1 | 7.0/7.1 | 3.7, 6.0, 7.0 |
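To see which GPU you have (and therefore which compute capability row applies), you can list your devices with nvidia-smi, assuming the Nvidia driver is already installed:
# List every GPU the driver can see, e.g. "GPU 0: Tesla K80 (UUID: ...)".
nvidia-smi -L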
TensorFlow < 1.4 doesn't work with CUDA 9, the current version. Instead of sudo apt-get install cuda, you need to run sudo apt-get install cuda-8-0. CUDA 8 variants of TensorFlow 1.4 go with cuDNN 6.0, and CUDA 9.x variants go with cuDNN 7.x.
# Install CUDA 8
curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo apt-get update
sudo apt-get install cuda-8-0
# Install CUDA 9
curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_9.0.176-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_9.0.176-1_amd64.deb
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda
Make sure that CUDA-related environment variables are set properly:
echo 'export CUDA_HOME=/usr/local/cuda' >> ~/.bashrc
echo 'export PATH=$PATH:$CUDA_HOME/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64' >> ~/.bashrc
. ~/.bashrc
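To sanity-check the setup, confirm that the toolkit and driver are both visible (standard CUDA commands, nothing specific to these wheels):
# nvcc should report the CUDA version you installed; nvidia-smi should show the driver and GPU.
nvcc --version
nvidia-smi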
Download the correct cuDNN and install it as follows:
# The cuDNN tar file.
tar xzvf cudnn-9.0-linux-x64-v7.0.tgz
sudo cp cuda/lib64/* /usr/local/cuda/lib64/
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
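To double-check which cuDNN version ended up under /usr/local/cuda, you can read the version macros out of the copied header:
# Print the cuDNN major/minor/patch version macros.
grep -A 2 'define CUDNN_MAJOR' /usr/local/cuda/include/cudnn.h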
Missing the libcupti library? Install it and add it to your LD_LIBRARY_PATH:
sudo apt-get install libcupti-dev
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
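Afterwards, one way to confirm the library is installed where the dynamic linker can find it:
# Confirm libcupti is visible to the dynamic linker.
ldconfig -p | grep libcupti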
TensorRT
Certain wheels support TensorRT. To install TensorRT, first download it from Nvidia's website, and then run:
sudo dpkg -i nv-tensorrt-repo-ubuntu1604-ga-cuda9.0-trt3.0.4-20180208_1-1_amd64.deb
sudo apt-get update
sudo apt-get install tensorrt
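A quick way to confirm the TensorRT packages landed (a generic dpkg check, not specific to these wheels):
# List the installed TensorRT-related packages.
dpkg -l | grep -i tensorrt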
MKL
MKL is Intel's Math Kernel Library, and MKL-DNN builds deep learning primitives on top of it that make training neural nets on CPU much faster. If you don't have them, install them as follows:
# If you don't have cmake
sudo apt install cmake
git clone https://github.com/01org/mkl-dnn.git
cd mkl-dnn/scripts && ./prepare_mkl.sh && cd ..
mkdir -p build && cd build && cmake .. && make
sudo make install
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib' >> ~/.bashrc
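You can then reload your environment and check that the libraries actually landed in /usr/local/lib (exact file names vary with the mkl-dnn version):
# Reload the environment and list the installed MKL-related libraries.
. ~/.bashrc
ls /usr/local/lib | grep -i mkl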
Glibc 2.23
Please note that Ubuntu 16.04 LTS is the intended environment. If you have an old OS, you may run into issues with old glibc versions. You may want to check out discussions here to see if they would help.
MPI
Using a wheel with MPI support? Be sure to run sudo apt-get install mpich.
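To confirm that MPI is set up, you can check the runtime version:
# MPICH's process manager should report its version.
mpiexec --version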