
<a href="https://tensorlayer.readthedocs.io/"> <div align="center"> <img src="img/tl_transparent_logo.png" width="50%" height="30%"/> </div> </a>


Please check out the successor project, TensorLayerX 🔥🔥🔥

TensorLayer is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides an extensive collection of customizable neural layers for building advanced AI models quickly; on top of these, the community has open-sourced a large number of tutorials and applications. TensorLayer was awarded the 2017 Best Open Source Software by the ACM Multimedia Society. This project can also be found at OpenI and Gitee.


Design Features

TensorLayer is a new deep learning library designed with simplicity, flexibility, and high performance in mind.

TensorLayer stands at a unique spot among TensorFlow wrappers. Other wrappers such as Keras and TFLearn hide many of TensorFlow's powerful features and provide little support for writing custom AI models. Inspired by PyTorch, the TensorLayer APIs are simple, flexible, and Pythonic, making the library easy to learn yet flexible enough to cope with complex AI tasks. TensorLayer has a fast-growing community: it has been used by researchers and engineers all over the world, including those from Peking University, Imperial College London, UC Berkeley, Carnegie Mellon University, and Stanford University, and from companies such as Google, Microsoft, Alibaba, Tencent, Xiaomi, and Bloomberg.
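
To give a feel for the API, below is a minimal sketch of defining and running a small multilayer perceptron with the TensorLayer 2.0 functional layer API (the layer names and sizes are illustrative choices, not a prescribed architecture):

```python
import tensorflow as tf
import tensorlayer as tl
from tensorlayer.layers import Input, Dropout, Dense
from tensorlayer.models import Model

# Build a small MLP by chaining layers functionally.
ni = Input([None, 784], name="input")
nn = Dropout(keep=0.8, name="drop1")(ni)
nn = Dense(n_units=800, act=tf.nn.relu, name="dense1")(nn)
nn = Dense(n_units=10, act=None, name="output")(nn)

# Wrap the layer graph into a model object.
mlp = Model(inputs=ni, outputs=nn, name="mlp")
mlp.eval()  # evaluation mode: disables dropout

# Forward a dummy batch to check shapes.
logits = mlp(tf.zeros([32, 784]))
print(logits.shape)  # (32, 10)
```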

Multilingual Documents

TensorLayer has extensive documentation for both beginners and professionals. The documentation is available in both English and Chinese.

English Documentation Chinese Documentation Chinese Book

If you want to try the experimental features on the master branch, you can find the latest documentation here.

Extensive Examples

You can find a large collection of examples that use TensorLayer here and in the following project:

<a href="https://github.com/tensorlayer/awesome-tensorlayer/blob/master/readme.md" target="\_blank"> <div align="center"> <img src="img/awesome-mentioned.png" width="40%"/> </div> </a>

Getting Started

TensorLayer 2.0 relies on TensorFlow, NumPy, and a few other packages. To use GPUs, CUDA and cuDNN are required.

Install TensorFlow:

pip3 install tensorflow-gpu==2.0.0-rc1 # TensorFlow GPU (version 2.0 RC1)
pip3 install tensorflow # CPU version

Install the stable release of TensorLayer:

pip3 install tensorlayer

Install the unstable development version of TensorLayer:

pip3 install git+https://github.com/tensorlayer/tensorlayer.git

If you want to install the additional dependencies, you can also run

pip3 install --upgrade tensorlayer[all]              # all additional dependencies
pip3 install --upgrade tensorlayer[extra]            # only the `extra` dependencies
pip3 install --upgrade tensorlayer[contrib_loggers]  # only the `contrib_loggers` dependencies

If you are a TensorFlow 1.X user, you can use TensorLayer 1.11.0:

# For last stable version of TensorLayer 1.X
pip3 install --upgrade tensorlayer==1.11.0
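
After installing, a quick sanity check (a minimal sketch; the printed versions depend on your environment) confirms that both packages import correctly and shows whether TensorFlow can see a GPU:

```python
import tensorflow as tf
import tensorlayer as tl

print("TensorFlow version :", tf.__version__)
print("TensorLayer version:", tl.__version__)

# An empty list here means TensorFlow is running CPU-only.
print("Visible GPUs:", tf.config.experimental.list_physical_devices("GPU"))
```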

Performance Benchmark

The following table shows the training speeds of VGG16 using TensorLayer and native TensorFlow on a TITAN Xp.

| Mode      | Lib             | Data Format  | Max GPU Memory Usage (MB) | Max CPU Memory Usage (MB) | Avg CPU Memory Usage (MB) | Runtime (sec) |
|-----------|-----------------|--------------|---------------------------|---------------------------|---------------------------|---------------|
| AutoGraph | TensorFlow 2.0  | channel last | 11833                     | 2161                      | 2136                      | 74            |
| AutoGraph | TensorLayer 2.0 | channel last | 11833                     | 2187                      | 2169                      | 76            |
| Graph     | Keras           | channel last | 8677                      | 2580                      | 2576                      | 101           |
| Eager     | TensorFlow 2.0  | channel last | 8723                      | 2052                      | 2024                      | 97            |
| Eager     | TensorLayer 2.0 | channel last | 8723                      | 2010                      | 2007                      | 95            |

Getting Involved

Please read the Contributor Guideline before submitting your PRs.

We suggest users report bugs using GitHub issues. Users can also discuss how to use TensorLayer in the following Slack channel.

<br/> <a href="https://join.slack.com/t/tensorlayer/shared_invite/enQtODk1NTQ5NTY1OTM5LTQyMGZhN2UzZDBhM2I3YjYzZDBkNGExYzcyZDNmOGQzNmYzNjc3ZjE3MzhiMjlkMmNiMmM3Nzc4ZDY2YmNkMTY" target="\_blank"> <div align="center"> <img src="img/join_slack.png" width="40%"/> </div> </a> <br/>

Citing TensorLayer

If you find TensorLayer useful for your project, please cite the following papers:

@article{tensorlayer2017,
    author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
    journal = {ACM Multimedia},
    title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
    url     = {http://tensorlayer.org},
    year    = {2017}
}

@inproceedings{tensorlayer2021,
  title={Tensorlayer 3.0: A Deep Learning Library Compatible With Multiple Backends},
  author={Lai, Cheng and Han, Jiarong and Dong, Hao},
  booktitle={2021 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
  pages={1--3},
  year={2021},
  organization={IEEE}
}