Awesome
Awesome TensorLayer - A curated list of dedicated resources
<a href="https://tensorlayer.readthedocs.io/en/stable/"> <div align="center"> <img src="https://raw.githubusercontent.com/tensorlayer/tensorlayer/master/img/tl_transparent_logo.png" width="50%" height="30%"/> </div> </a>You have just found TensorLayer! A high-performance deep learning (DL) and reinforcement learning (RL) library for industry and academia.
Contribute
Contributions welcome! Read the contribution guidelines first.
<!--- ## Contents - [1. Basic Examples](#1-basic-examples) - [2. General Computer Vision](#2-general-computer-vision) - [3. Quantization Networks](#3-quantization-networks) - [4. GAN](#4-gan) - [5. Natural Language Processing](#5-natural-language-processing) - [6. Reinforcement Learning](#6-reinforcement-learning) - [7. (Variational) Autoencoders](#7-variational-autoencoders) - [8. Pretrained Models](#8-pretrained-models) - [9. Data and Model Management Tools](#9-data-and-model-management-tools) -->1. Basic Examples
1.1 MNIST and CIFAR10
TensorLayer can define models in two ways. A static model lets you build the network in a declarative, fluent way, while a dynamic model gives you full control over the forward pass. Please read the DOCS before you start. (A minimal sketch of both styles follows the list below.)
- MNIST Simplest Example
- MNIST Static Example
- MNIST Static Example for Reused Model
- MNIST Dynamic Example
- MNIST Dynamic Example for Separated Models
- MNIST Static Siamese Model Example
- CIFAR10 Static Example with Data Augmentation
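As a quick orientation before diving into the tutorials above, here is a minimal sketch of the two styles. It assumes the TensorLayer 2.x API (`tl.layers`, `tl.models.Model`); the linked examples are the authoritative versions.

```python
import tensorflow as tf
import tensorlayer as tl

# Static model: compose layers declaratively, then wrap them in a Model.
def build_static_mlp(inputs_shape=(None, 784)):
    ni = tl.layers.Input(inputs_shape)
    nn = tl.layers.Dense(n_units=800, act=tf.nn.relu)(ni)
    nn = tl.layers.Dense(n_units=10)(nn)
    return tl.models.Model(inputs=ni, outputs=nn, name="mlp_static")

# Dynamic model: subclass Model and control the forward pass yourself.
class DynamicMLP(tl.models.Model):
    def __init__(self):
        super(DynamicMLP, self).__init__()
        self.dense1 = tl.layers.Dense(n_units=800, act=tf.nn.relu, in_channels=784)
        self.dense2 = tl.layers.Dense(n_units=10, in_channels=800)

    def forward(self, x):
        return self.dense2(self.dense1(x))
```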
1.2 Dataset API and TFRecord Examples
- Downloading and Preprocessing PASCAL VOC with the TensorLayer VOC data loader. Zhihu article
- Read and Save data in TFRecord Format (a minimal read/write sketch follows this list).
- Read and Save time-series data in TFRecord Format.
- Convert CIFAR10 in TFRecord Format for performance optimization.
- More dataset loaders can be found in tl.files.load_xxx.
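The TFRecord examples above come down to serialising `tf.train.Example` protos and parsing them back through `tf.data`; here is a minimal sketch in plain TensorFlow (the file name, image shape and feature keys are placeholders):

```python
import numpy as np
import tensorflow as tf

# Write one image/label pair into a TFRecord file.
with tf.io.TFRecordWriter("train.tfrecord") as writer:
    img = np.random.rand(32, 32, 3).astype(np.float32)  # placeholder image
    example = tf.train.Example(features=tf.train.Features(feature={
        "img_raw": tf.train.Feature(bytes_list=tf.train.BytesList(value=[img.tobytes()])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[7])),
    }))
    writer.write(example.SerializeToString())

# Read it back as a tf.data pipeline.
def parse(record):
    feats = tf.io.parse_single_example(record, {
        "img_raw": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    img = tf.reshape(tf.io.decode_raw(feats["img_raw"], tf.float32), [32, 32, 3])
    return img, feats["label"]

dataset = tf.data.TFRecordDataset("train.tfrecord").map(parse).batch(1)
```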
2. General Computer Vision
- Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization
- OpenPose: Real-time multi-person keypoint detection
- InsightFace - Additive Angular Margin Loss for Deep Face Recognition
- Spatial-Transformer-Nets (STN) trained on the MNIST dataset, based on the paper by [M. Jaderberg et al, 2015].
- U-Net Brain Tumor Segmentation trained on the BRATS 2017 dataset, based on the U-Net paper by [O. Ronneberger et al, 2015] with some modifications.
- Image2Text: im2txt based on the paper by [O. Vinyals et al, 2016].
- More computer vision applications can be found in the GAN section below.
3. Quantization Networks
- Binary Networks, with examples on MNIST and CIFAR10.
- Ternary Networks, with examples on MNIST and CIFAR10.
- DoReFa-Net, with examples on MNIST and CIFAR10.
- Quantization for Efficient Integer-Arithmetic-Only Inference, with examples on MNIST and CIFAR10. (The quantize-forward, straight-through-backward idea these examples share is sketched below.)
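These four examples share one idea: quantize weights (and/or activations) in the forward pass while letting gradients flow through as if the quantizer were the identity (a straight-through estimator). A minimal sketch of the binary case in plain TensorFlow, as an illustration of the idea rather than TensorLayer's own layers:

```python
import tensorflow as tf

@tf.custom_gradient
def binarize(w):
    # Forward: sign(w) in {-1, +1} (sign(0) is 0, which is fine for a sketch).
    # Backward: straight-through estimator, passing the gradient where |w| <= 1.
    def grad(dy):
        return dy * tf.cast(tf.abs(w) <= 1.0, dy.dtype)
    return tf.sign(w), grad

# Usage: binarize full-precision weights before the matmul of a dense layer.
w = tf.Variable(tf.random.normal([784, 10]))
x = tf.random.normal([2, 784])
logits = tf.matmul(x, binarize(w))
```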
4. GAN
- DCGAN trained on the CelebA dataset based on the paper by [A. Radford et al, 2015].
- CycleGAN improved with resize-convolution based on the paper by [J. Zhu et al, 2017].
- SRGAN - A Super Resolution GAN based on the paper by [C. Ledig et al, 2016].
- DAGAN: Fast Compressed Sensing MRI Reconstruction based on the paper by [G. Yang et al, 2017].
- GAN-CLS for Text to Image Synthesis based on the paper by [S. Reed et al, 2016].
- Unsupervised Image-to-Image Translation with Generative Adversarial Networks, code
- BEGAN: Boundary Equilibrium Generative Adversarial Networks based on the paper by [D. Berthelot et al, 2017].
- BiGAN: Adversarial Feature Learning
- Attention CycleGAN: Unsupervised Attention-guided Image-to-Image Translation
- MoCoGAN: Decomposing Motion and Content for Video Generation
- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016
- Lifelong GAN: Continual Learning for Conditional Image Generation, ICCV 2019. (The vanilla adversarial losses that most of these examples build on are sketched below.)
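Most of the repositories above train some variant of the original adversarial objective; as a reference point, here is a minimal sketch of the vanilla non-saturating GAN losses in TensorFlow (the generator and discriminator networks themselves are omitted):

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # The discriminator pushes real samples towards label 1 and fakes towards 0.
    return bce(tf.ones_like(real_logits), real_logits) + \
           bce(tf.zeros_like(fake_logits), fake_logits)

def generator_loss(fake_logits):
    # Non-saturating loss: the generator tries to make the discriminator
    # output 1 on generated samples.
    return bce(tf.ones_like(fake_logits), fake_logits)
```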
5. Natural Language Processing
5.1 Chatbot
- Seq2Seq Chatbot in 200 lines of code.
5.2 Text Generation
- Text Generation with LSTMs - Generating Trump Speech.
- Modelling Penn Treebank: code1 and code2; see the blog post.
5.3 Text Classification
- FastText Classifier running on the IMDB dataset based on the paper by [A. Joulin et al, 2016].
5.4 Word Embedding
- Minimalistic Implementation of Word2Vec based on the paper by [T. Mikolov et al, 2013].
5.5 Spam Detection
6. Reinforcement Learning
7. (Variational) Autoencoders
- Variational Autoencoder trained on the CelebA dataset.
- Variational Autoencoder trained on the MNIST dataset. (The reparameterization trick both examples rely on is sketched below.)
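Both examples rely on the reparameterization trick plus a closed-form KL term for a diagonal Gaussian posterior; a minimal sketch (the encoder and decoder networks are omitted):

```python
import tensorflow as tf

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I), so gradients can flow
    # through the sampling step into the encoder.
    eps = tf.random.normal(tf.shape(mu))
    return mu + tf.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian, summed over latent dims.
    return -0.5 * tf.reduce_sum(1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=-1)
```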
8. Pretrained Models
- The guidelines for using pretrained models are here.
9. Data and Model Management Tools
- Why Database?
- Put Tasks into Database and Execute on Other Agents, see code.
- TensorDB applied to the Pong game on OpenAI Gym: Trainer File and Generator File, based on this blog post.
- TensorDB applied to a classification task on the MNIST dataset: Master File and Worker File.
How to Cite TensorLayer in Research Papers?
If you find this project useful, we would be grateful if you cite the TensorLayer paper:
```bibtex
@article{tensorlayer2017,
  author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
  journal = {ACM Multimedia},
  title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
  url     = {http://tensorlayer.org},
  year    = {2017}
}
```