How to use TensorLayer
While research in Deep Learning continues to improve the world, we use a bunch of tricks to implement algorithms with TensorLayer day to day.
Here is a summary of the tricks for using TensorLayer. If you find a trick that is particularly useful in practice, please open a Pull Request to add it to this document. If we find it reasonable and verified, we will merge it in.
- 🇨🇳 The Chinese book 《深度学习:一起玩转TensorLayer》 (Deep Learning: Play with TensorLayer) is now available.
1. Installation
- To keep your TL version and edit the source code easily, you can download the whole repository by executing git clone https://github.com/zsdonghao/tensorlayer.git in your terminal, then copy the tensorlayer folder into your project.
- As TL is growing very fast, if you want to use pip install, we suggest you install the master version.
- For NLP applications, you will need to install NLTK and NLTK data.
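A minimal sketch of downloading NLTK data from Python (the punkt package is only an example; download whichever packages your application needs):
import nltk
nltk.download('punkt')  # example data package; pick the packages your application needs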
2. Interaction between TF and TL
- TF to TL: use InputLayer
- TL to TF: use network.outputs (see the sketch below)
- Other methods: issues 7; multiple inputs: issues 31
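A minimal sketch of this round trip (the layer sizes are illustrative):
import tensorflow as tf
import tensorlayer as tl

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
net = tl.layers.InputLayer(x, name='input')                                 # TF tensor -> TL layer
net = tl.layers.DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense')
y = net.outputs                                                             # TL layer -> TF tensor
y_op = tf.nn.softmax(y)                                                     # keep building with plain TF ops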
3. Training/Testing switching
- Use network.all_drop to control the training/testing phase (for DropoutLayer only); see this example and Understand Basic Layer
- Alternatively, set is_fix to True in DropoutLayer, and build different graphs for training/testing by reusing the parameters. You can also set a different batch_size and noise probability for each graph. This method is best when you use GaussianNoiseLayer, BatchNormLayer, etc. Here is an example:
def mlp(x, is_train=True, reuse=False):
    with tf.variable_scope("MLP", reuse=reuse):
        net = InputLayer(x, name='in')
        net = DropoutLayer(net, 0.8, True, is_train, name='drop1')
        net = DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense1')
        net = DropoutLayer(net, 0.8, True, is_train, name='drop2')
        net = DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense2')
        net = DropoutLayer(net, 0.8, True, is_train, name='drop3')
        net = DenseLayer(net, n_units=10, act=tf.identity, name='out')
        logits = net.outputs
        net.outputs = tf.nn.sigmoid(net.outputs)
        return net, logits
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')
net_train, logits = mlp(x, is_train=True, reuse=False)
net_test, _ = mlp(x, is_train=False, reuse=True)
cost = tl.cost.cross_entropy(logits, y_, name='cost')
More examples can be found here.
4. Get variables and outputs
- Use tl.layers.get_variables_with_name instead of using net.all_params
train_vars = tl.layers.get_variables_with_name('MLP', True, True)
train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost, var_list=train_vars)
- This method can also be used to freeze some layers during training: simply exclude the variables you do not want to update
- Other methods: issues 17, issues 26, FAQ
- Use tl.layers.get_layers_with_name to get list of activation outputs from a network.
layers = tl.layers.get_layers_with_name(network, "MLP", True)
- This method is usually used for activation regularization, as sketched below.
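A minimal sketch of such a regularizer; it continues the snippet above (reusing layers) and the cost of the earlier MLP example, and the 0.001 coefficient and squared penalty are illustrative:
for a in layers:
    cost = cost + 0.001 * tf.reduce_mean(tf.square(a))  # illustrative L2 activation penalty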
5. Data augmentation for large dataset
If your dataset is large, data loading and data augmentation will become the bottleneck and slow down training. To speed up the data processing you can:
- Use TFRecord or the TF Dataset API, see the cifar10 examples; a sketch follows.
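A rough sketch of a TFRecord input pipeline with the tf.data API; the file name, feature keys, image shape and batch size are illustrative:
import tensorflow as tf
import tensorlayer as tl

def parse_example(serialized):
    # decode one serialized example; feature keys and image shape are illustrative
    features = tf.parse_single_example(serialized, features={
        'image': tf.FixedLenFeature([], tf.string),
        'label': tf.FixedLenFeature([], tf.int64),
    })
    img = tf.decode_raw(features['image'], tf.uint8)
    img = tf.reshape(img, [32, 32, 3])
    img = tf.image.random_flip_left_right(img)  # augmentation runs inside the input pipeline
    return tf.cast(img, tf.float32), features['label']

dataset = tf.data.TFRecordDataset(['train.tfrecord'])
dataset = dataset.map(parse_example, num_parallel_calls=4)
dataset = dataset.shuffle(10000).batch(128).prefetch(2)
images, labels = dataset.make_one_shot_iterator().get_next()
net = tl.layers.InputLayer(images, name='input')  # feed the pipeline tensors straight into TL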
6. Data augmentation for small dataset
If your dataset is small enough to fit into the memory of your machine and the data augmentation is simple, you can do the following for easy debugging:
- Use tl.iterate.minibatches to shuffle and return examples and labels for the given batch size.
- Use tl.prepro.threading_data to read and augment a batch of data at the beginning of every step; this is slower but works well for small datasets (see the sketch after this list).
- For time-series data, use tl.iterate.seq_minibatches, tl.iterate.seq_minibatches2, tl.iterate.ptb_iterator, etc.
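A minimal sketch of such a feed-dict loop; X_train, y_train, x, y_, sess, train_op and the flip augmentation are illustrative placeholders:
import tensorlayer as tl

def distort(img):
    # illustrative augmentation: random horizontal flip
    return tl.prepro.flip_axis(img, axis=1, is_random=True)

for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size=128, shuffle=True):
    X_batch = tl.prepro.threading_data(X_batch, fn=distort)  # augment the whole batch in threads
    sess.run(train_op, feed_dict={x: X_batch, y_: y_batch})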
7. Pre-trained CNN and ResNet
- Pre-trained CNN
- Many applications need a pre-trained CNN model
- TL provides pre-trained VGG16, VGG19, MobileNet, SqueezeNet, etc.: tl.models
- tl.layers.SlimNetsLayer allows you to use all TF-Slim pre-trained models, see tensorlayer/pretrained-models
- ResNet
- Implemented with a "for" loop, see issues 85 and the sketch below
- Other methods by @ritchieng
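A rough sketch of the for-loop idea; the block structure, filter sizes and number of blocks are illustrative, not the exact implementation from the issue:
import tensorflow as tf
import tensorlayer as tl

def residual_block(net, n_filter, name):
    # two 3x3 convolutions plus an identity shortcut
    shortcut = net
    net = tl.layers.Conv2d(net, n_filter, (3, 3), (1, 1), act=tf.nn.relu, name=name + '_conv1')
    net = tl.layers.Conv2d(net, n_filter, (3, 3), (1, 1), act=tf.identity, name=name + '_conv2')
    net = tl.layers.ElementwiseLayer([net, shortcut], combine_fn=tf.add, name=name + '_add')
    net.outputs = tf.nn.relu(net.outputs)
    return net

x = tf.placeholder(tf.float32, [None, 32, 32, 64])
net = tl.layers.InputLayer(x, name='in')
for i in range(8):  # stack 8 identical residual blocks with a "for" loop
    net = residual_block(net, 64, name='block%d' % i)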
8. Using tl.models
- Use pretrained VGG16 for ImageNet classification
x = tf.placeholder(tf.float32, [None, 224, 224, 3])
# get the whole model
vgg = tl.models.VGG16(x)
# restore pre-trained VGG parameters
sess = tf.InteractiveSession()
vgg.restore_params(sess)
# use for inference
probs = tf.nn.softmax(vgg.outputs)
- Extract features with VGG16 and retrain a classifier with 100 classes
x = tf.placeholder(tf.float32, [None, 224, 224, 3])
# get VGG without the last layer
vgg = tl.models.VGG16(x, end_with='fc2_relu')
# add one more layer
net = tl.layers.DenseLayer(vgg, 100, name='out')
# initialize all parameters
sess = tf.InteractiveSession()
tl.layers.initialize_global_variables(sess)
# restore pre-trained VGG parameters
vgg.restore_params(sess)
# train your own classifier (only update the last layer)
train_params = tl.layers.get_variables_with_name('out')
- Reuse model
x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
# get VGG without the last layer
vgg1 = tl.models.VGG16(x1, end_with='fc2_relu')
# reuse the parameters of vgg1 with different input
vgg2 = tl.models.VGG16(x2, end_with='fc2_relu', reuse=True)
# restore pre-trained VGG parameters (as they share parameters, we don’t need to restore vgg2)
sess = tf.InteractiveSession()
vgg1.restore_params(sess)
9. Customized layer
- Use LambdaLayer, which can also accept functions with new variables. With this layer you can connect any third-party TF library and your customized functions to TL. Here is an example of using Keras and TL together.
import tensorflow as tf
import tensorlayer as tl
from keras.layers import *
from tensorlayer.layers import *
def my_fn(x):
    x = Dropout(0.8)(x)
    x = Dense(800, activation='relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(800, activation='relu')(x)
    x = Dropout(0.5)(x)
    logits = Dense(10, activation='linear')(x)
    return logits

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
network = InputLayer(x, name='input')
network = LambdaLayer(network, my_fn, name='keras')
...
10. Sentence tokenization
- Use tl.nlp.process_sentence to tokenize sentences; NLTK and NLTK data are required
>>> captions = ["one two , three", "four five five"] # two sentences
>>> processed_capts = []
>>> for c in captions:
>>>     c = tl.nlp.process_sentence(c, start_word="<S>", end_word="</S>")
>>>     processed_capts.append(c)
>>> print(processed_capts)
... [['<S>', 'one', 'two', ',', 'three', '</S>'],
... ['<S>', 'four', 'five', 'five', '</S>']]
- Then use tl.nlp.create_vocab to create a vocabulary and save it as a txt file (it returns a tl.nlp.SimpleVocabulary object for word-to-id mapping only)
>>> tl.nlp.create_vocab(processed_capts, word_counts_output_file='vocab.txt', min_word_count=1)
... [TL] Creating vocabulary.
... Total words: 8
... Words in vocabulary: 8
... Wrote vocabulary file: vocab.txt
- Finally use tl.nlp.Vocabulary to create a vocabulary object from the txt vocabulary file created by tl.nlp.create_vocab
>>> vocab = tl.nlp.Vocabulary('vocab.txt', start_word="<S>", end_word="</S>", unk_word="<UNK>")
... INFO:tensorflow:Initializing vocabulary from file: vocab.txt
... [TL] Vocabulary from vocab.txt : <S> </S> <UNK>
... vocabulary with 10 words (includes start_word, end_word, unk_word)
... start_id: 2
... end_id: 3
... unk_id: 9
... pad_id: 0
Then you can map a word to an ID or vice versa as follows:
>>> vocab.id_to_word(2)
... 'one'
>>> vocab.word_to_id('one')
... 2
>>> vocab.id_to_word(100)
... '<UNK>'
>>> vocab.word_to_id('hahahaha')
... 9
11. Dynamic RNN and sequence length
- Apply zero padding to a batch of tokenized sentences as follows:
>>> sequences = [[1,1,1,1,1],[2,2,2],[3,3]]
>>> sequences = tl.prepro.pad_sequences(sequences, maxlen=None,
... dtype='int32', padding='post', truncating='pre', value=0.)
... [[1 1 1 1 1]
... [2 2 2 0 0]
... [3 3 0 0 0]]
- Use tl.layers.retrieve_seq_length_op2 to automatically compute the sequence length from the placeholder, and feed it to the sequence_length argument of DynamicRNNLayer (see the sketch after this list)
>>> data = [[1,2,0,0,0], [1,2,3,0,0], [1,2,6,1,0]]
>>> o = tl.layers.retrieve_seq_length_op2(data)
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> print(o.eval())
... [2 3 4]
- Other methods: issues 18
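A minimal sketch of feeding the computed lengths into DynamicRNNLayer; the vocabulary size, embedding size and hidden size are illustrative:
import tensorflow as tf
import tensorlayer as tl

x = tf.placeholder(tf.int32, [None, None], name='x')  # batch of zero-padded token ids
net = tl.layers.EmbeddingInputlayer(x, vocabulary_size=1000, embedding_size=128, name='embed')
net = tl.layers.DynamicRNNLayer(net,
                                cell_fn=tf.contrib.rnn.BasicLSTMCell,
                                n_hidden=256,
                                sequence_length=tl.layers.retrieve_seq_length_op2(x),
                                return_last=True,  # return only the output at the last valid step
                                name='dynamic_rnn')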
12. Save models
- tl.files.save_npz saves all model parameters (weights) into a list of arrays; restore them with tl.files.load_and_assign_npz
- tl.files.save_npz_dict saves all model parameters (weights) into a dictionary of arrays, keyed by parameter name; restore them with tl.files.load_and_assign_npz_dict
- tl.files.save_ckpt saves all model parameters (weights) into a TensorFlow ckpt file; restore them with tl.files.load_ckpt
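A minimal sketch of the npz helpers, assuming the sess and network from the earlier MLP example:
tl.files.save_npz(network.all_params, name='model.npz', sess=sess)          # save weights to model.npz
tl.files.load_and_assign_npz(sess=sess, name='model.npz', network=network)  # load and assign them back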
13. Compatibility with other TF wrappers
TL can interact with other TF wrappers, which means that if you find some code or model implemented with another wrapper, you can just use it!
- Other TensorFlow layer implementations can be connected to TensorLayer via LambdaLayer, see the example here
- TF-Slim to TL: SlimNetsLayer (you can use all of Google's pre-trained convolutional models with this layer!), see the sketch below
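A rough sketch of wrapping a TF-Slim network with SlimNetsLayer; note that the exact import path of the slim nets differs across TF 1.x versions, and the input size and num_classes below follow the usual InceptionV3 convention:
import tensorflow as tf
import tensorlayer as tl
from tensorflow.contrib.slim.python.slim.nets.inception_v3 import inception_v3, inception_v3_arg_scope

slim = tf.contrib.slim
x = tf.placeholder(tf.float32, [None, 299, 299, 3])
net_in = tl.layers.InputLayer(x, name='input')
with slim.arg_scope(inception_v3_arg_scope()):
    network = tl.layers.SlimNetsLayer(net_in, slim_layer=inception_v3,
                                      slim_args={'num_classes': 1001, 'is_training': False},
                                      name='InceptionV3')
probs = tf.nn.softmax(network.outputs)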
14. Others
- BatchNormLayer's decay default is 0.9; set it to 0.999 for a large dataset (see the sketch below).
- A Matplotlib issue can arise when importing TensorLayer, see issues and the FAQ.
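For example, a minimal sketch assuming a preceding TL layer net:
net = tl.layers.BatchNormLayer(net, decay=0.999, act=tf.nn.relu, is_train=True, name='bn')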
Useful links
- Awesome-TensorLayer for all examples
- TL official sites: Docs, 中文文档, Github
- Learning Deep Learning with TF and TL
- Follow zsdonghao for further examples
Author
- Zhang Rui
- Hao Dong